Welcome to the September 2024 issue of TV Tech.
Even if broadcast delivery becomes all MVPDs and internet—and we never use those high-power, tall towers again—the transmitters still hold high value for society. For broadcasters, they are either an expensive burden or a big opportunity.
For a century, we have enjoyed a social contract: we entertain and inform, especially in a crisis, anyone and everyone with access to a receiver, in return for spectrum and the authorization to monetize it. How this works, and the technology underpinning it, has transformed in a thousand ways. Once, transmitters and a channel number were foundational. Today, few would notice if OTA vanished. This is not an existential crisis. However, it is disruptive. Done well, it will be very good in many ways.
If there were no other reason for a next generation of broadcasting, Emergency Mass Communications would be more than enough. ATSC 3.0’s Advanced Emergency Alerting and Informing (AEI) would cement OTA broadcasting’s relevance, and there is a good argument that AEI could reinvigorate NextGen development. Further, as we have become exceedingly dependent on GPS, the world has been searching for a backup. The Broadcast Positioning System, which could supplement GPS in case of failure, is doubtless the best of the alternatives.
Everything we might do with our transmitters depends on the number of receivers. Expanding our reach into phones, home routers/gateways, cars and the like is far more imperative than reaching TV sets. Metcalfe’s law states that “the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system.” Today, broadcasters should be more motivated to “put 3.0 chips” in devices than to attract eyeballs.
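Purely as an illustration of what that square relationship implies (the numbers are mine, not the column’s):

```python
# Illustrative only: Metcalfe's law says network value scales with the square
# of the number of connected users, so a 10x larger receiver base implies
# roughly 100x the network value, not merely 10x.

def metcalfe_value(users, k=1.0):
    return k * users ** 2          # value proportional to users squared

print(metcalfe_value(100_000_000) / metcalfe_value(10_000_000))   # -> 100.0
```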
If we want to keep our transmitters and spectrum—and there is no technical reason we can’t survive with only MVPDs and streaming—we should be steering to a favorable recasting of the social contract and infrastructure.
The idea that the wireless cell network is reliable enough for Emergency Mass Communications is absurd. The fragile wireless network collapses in any modest emergency. Even on a very bad day, a scraped-earth incident, there has always—each and every time—been a broadcast signal continuously relaying critical information. WEA and the wireless network are, in effect, the fire extinguisher that doesn’t work in the presence of smoke.
No one is happy with EAS or WEA. AEI/BPS is the opportunity to bring Emergency Mass Communications in line with current technology and expectations, all the while increasing our relevance and reach. There isn’t a broadcast newsroom that doesn’t crave being the universal gateway to critical information.
Both BPS and basic AEI signaling are easy lifts that consume trivial spectrum. To us, this is nothing. To the world, it is huge. If we do our part and the FCC and FEMA pursue AEI/BPS, we become more relevant.
With little drama, we should open the door to putting ATSC 3.0 in more devices. It is that rare win-win all around.
Fred Baumgartner is a retired broadcast engineer. He once took time off to develop and promote EAS.
ESPN has announced a new long-term deal with Genius Sports Limited that the network said will help transform ESPN’s live data-driven storytelling for NCAA sports and enhance broadcasts for the NBA and WNBA.
As part of the agreement, ESPN will expand its college sports coverage with real-time team and player stats from Genius Sports, the exclusive distributor of official NCAA data, with access to data across 65,000 NCAA games a year, inclusive of basketball, football, ice hockey, volleyball and soccer.
Additionally, ESPN is extending its deal with Genius Sports around NBA player-tracking data insights while adding the WNBA to its portfolio of sports properties leveraging its data. With the deal, ESPN has access to Genius Sports’ cutting-edge Insight tool used by every NBA and WNBA team. This tool will enable ESPN to easily
research, identify and produce video clips that break down team and player performance across both leagues.
“Genius Sports is a proven entity in the space that we know very well from our previous work together,” said Jeff Bennett, vice president, stats & information group, ESPN. “The new long-term agreement creates a runway for both sides to ideate into the future using augmented reality live execution concepts in ways that unlock the next generation of fan experiences.”
ESPN and Genius Sports have previously worked together in a variety of capacities from powering immersive “altcasts” such as the NBA ‘Marvel Arena of Heroes’ presentation in 2021 to collaborating on live broadcast augmentations for immersive experiences around the 2022 NCAA Division I Women’s Basketball Championship.
In a development that has important implications for the ongoing rollout and success of ATSC 3.0, the FCC has closed its reconsideration of a 2021 ruling on distributed television systems.
Microsoft had requested that the FCC reconsider the DTS Report and Order that modified the Commission’s technical rules. The modifications were designed to promote expanded use of distributed transmission systems (DTS) by broadcast television stations. The NAB and broadcasters had requested the changes to help broadcasters speed up the deployment of NextGen TV services and the development of data services.
Microsoft filed a petition in March 2021 asking the FCC to reconsider that decision because the order would hurt its Airband Initiative to use TV white spaces to provide broadband services in rural areas. It “needlessly worsens an already-difficult environment” for TV white spaces deployments, the filing noted.
The NAB responded with a May 25, 2021 blog post by NAB Deputy General Counsel Patrick McFadden blasting Microsoft’s efforts to get the FCC to revise the January 2021 ruling, calling Microsoft’s Airband Initiative “hot air” and a heavily hyped solution to the rural broadband gap that has not lived up to its promises.
In an August 3rd filing with the FCC, APTS and the NAB continued their opposition to Microsoft, arguing that “Microsoft’s petition presents no legitimate case for reconsideration and the Commission should promptly deny it.”
On August 5, the FCC reported that “Microsoft filed a request for withdrawal of the Petition stating that it is `no longer pursuing or advocating for the matters raised in the Petition’ and that the Petition may be dismissed.”
Streaming providers worldwide are increasing their infrastructure buildout to reduce latency and hiccups for consumers and the results are beginning to show, according to NPAW, a provider of streaming video intelligence services.
The first six months of 2024 saw a large global increase in quality of experience, with a 54% decrease in buffer ratio for Video-on-Demand (VOD) services, compared to the same period last year, according to the company’s 2024 1st half Video Streaming Industry Report. This KPI illustrates a worldwide shift in the quality of streaming video services. It’s also an indication of the global commitment of OTTs, Telcos and Broadcasters to providing better streaming quality, NPAW said.
Linear TV buffer ratio also improved in H1 2024, with a global decrease of 34% vs H1 2023 and 24% vs H2 2023. The Asia region alone experienced a 35% decrease in buffer ratio.
“Telcos, broadcasters and OTTs are investing heavily, thus improving the overall quality of the streaming video landscape. With new players entering the market regularly, it’s becoming much more competitive. We are seeing the world rapidly shift away from traditional TV and towards streaming providers,” said Ferran G. Vilaró, NPAW CEO and co-founder.
The report explores the state of the video streaming industry globally and regionally, comparing engagement and quality data from the first half of 2024 with the same period in 2023. The analyzed data were collected from the NPAW Suite for January to June 2024 and contrasted with data from January to June 2023. Advertising data were also gathered from January to June 2024.
The decision of Brazil’s SBTVD Forum in July to recommend the ATSC 3.0 physical layer was trumpeted by ATSC as a major achievement—an understandable large feather in the cap for the organization that began work on the NextGen TV standard more than a decade ago.
But what does the accomplishment mean to U.S. broadcasters grappling with the myriad of challenges the voluntary transition to 3.0 has created? In other words, why does the SBTVD Forum’s decision matter to us?
I put that question to ATSC President Madeleine Noland and Skip Pizzi, chair of ATSC’s Brazil Implementation Team, recently. Noland enumerated three reasons. Pizzi agreed and offered a few more details.
First, the recommendation comes after the SBTVD Forum conducted “rigorous testing,” in Noland’s words. The Forum’s decision to recommend ATSC 3.0 over all other OTA broadcast physical layers—including the newer Advanced ISDB-T—should instill confidence in U.S. broadcasters. “If you are wondering whether you are using the very best standard in the world, you are,” she said.
Second, strength in numbers: There are 71.5 million TV households in Brazil, where OTA TV is big—as many as 85% have a TV that receives broadcasts off air. “The majority watch over-the-air TV as their primary source of television—from 65% to 75% of people,” said Noland.
Those sorts of numbers give chip vendors, set-top box makers and TV manufacturers a big reason to support ATSC 3.0. The greater the market size and product runs, the greater the economies of scale—meaning U.S. consumers will have lower-cost NextGen TV products, and U.S. broadcasters bigger NextGen TV audiences, she said.
Third, Brazilian broadcasters have a head start on their U.S. counterparts when it comes to interactivity, mobile apps and advanced advertising, thanks to their experience with the country’s TV 2.5 standard. However, they are not well-versed in datacasting. That sets up a future in which the two can help each other as Brazil deploys TV 3.0 and the U.S. grows its ATSC 3.0 presence.
“There’s going to be a very rich two-way street of information and business development between Brazilian broadcasters and U.S. broadcasters,” said Noland.
Pizzi, who spearheaded the ATSC efforts in Brazil, pointed out that unlike the U.S., Brazil currently only authorizes its educational broadcasters to transmit more than one service per 6 MHz channel assignment.
“We think the regulators there may change these rules for TV 3.0, allowing all Brazilian TV broadcasters to provide both multicasting and datacasting for the first time,” he said.
After the interview, I came up with a fourth—but I will be the first to admit it may be wishful thinking.
My reason: Regulatory embarrassment. How embarrassing would it be here if regulators in Brazil take steps, such as modifying the nation’s multicasting and datacasting rule, to make TV 3.0 successful, while U.S. broadcasters continue to wait—more than a
year at this point—for the public-private effort to bear fruit in removing obstacles to an ATSC 1.0 shutoff.
Add to that the Broadcast Positioning System (BPS). There’s “great interest” among Brazilian broadcasters and regulators in deploying BPS as a possible ancillary TV 3.0 application, said Pizzi.
How embarrassing will it be if Brazil beats the U.S. to the punch with BPS as a crucial complement/back-up for the Global Positioning System (GPS), especially given the national security and economic implications if there were a loss of accurate time and location data?
Now you can build affordable live production and broadcast systems with SMPTE-2110 video! Blackmagic Design has a wide range of 2110 IP products, including converters, video monitors, audio monitors and even cameras! You get the perfect solution for integrating SDI and IP based systems. Plus all models conform to the SMPTE ST-2110 standard, including PTP clocks and even NMOS support for routing.
The Blackmagic 2110 IP Converters have been designed to integrate SDI equipment into 2110 IP broadcast systems. The rack mount models can be installed in equipment racks right next to the equipment you’re converting. Simply add a Blackmagic 2110 IP Converter to live production switchers, disk recorders, streaming processors, cameras, TVs and more.
Blackmagic 2110 IP products conform to the SMPTE ST-2110 standard for IP video, which specifies the transport, synchronization and description of 10 bit video, audio and ancillary data over managed IP networks for broadcast. Blackmagic 2110 IP products support SMPTE-2110-20 video, SMPTE-2110-21 traffic shaping/ timing, SMPTE-2110-30 audio and SMPTE-2110-40 for ancillary data.
Blackmagic 2110 IP Converters are available in models with RJ-45 connectors for simple Cat6 copper cables or SFP sockets for optical fiber modules and cables. Using simple Cat6 copper cables means you can build SMPTE-2110 systems at a dramatically lower cost. Plus copper cables can remote power devices such as converters and cameras. There are also models for optical fiber Ethernet.
One of the biggest problems with SMPTE-2110 is needing an IT tech on standby to keep video systems running. Blackmagic 2110 IP Converters solve this problem because they can connect point to point, so you don’t need to use a complex Ethernet switch if you don’t want to. That means you get the advantage of SMPTE-2110 IP video with simple Ethernet cables, remote power and bidirectional video.
By Gary Arlen
If there were any doubts that streaming TV has finally hit its stride in 2024, NBC was ready to knock them down last month.
Although the network didn’t break out the share of 2024 Olympics viewing split between its three platforms—Peacock streaming, broadcast TV (NBC, Telemundo) and its cable channels (such as USA Network, E! and NBCSports)—the company’s enthusiasm about Peacock’s performance from Paris underscored the growing perceived value of streaming video in the media mix. NBC crowed that the 23.5 billion minutes of Paris Olympics coverage streamed via Peacock during the games, July 19–Aug. 11, was up “40% from all prior Summer and Winter Olympics combined.”
NBCUniversal Media Group Chairman Mark Lazarus said the streaming usage “marked a groundbreaking moment for Peacock, which delivered… cutting-edge innovation while shattering all-time Olympics streaming records.”
The Olympics streaming victory lap surfaced amidst a marathon of other developments that illustrate the hurdles and leaps that face the industry. Days after NBC’s declaration of streaming success, a Federal court in New York issued a preliminary injunction to stop Venu Sports, the joint venture streaming service from Disney, Fox and Warner Bros. Discovery, which had planned to launch in time for the NFL season.
‘WAIT AND SEE’
These developments emerged just after Nielsen’s latest “The Gauge” report, which calculated that 40.3% of TV viewing is now watched on streaming platforms, followed by cable (27.2%) and over-the-air broadcast (25.5%). The streaming share was up from 37.7% a year earlier in Nielsen’s analysis of how Americans watch TV across platforms.
Collectively, this summer’s avalanche of streaming exuberance (and stumbles) mirrors the ways that the buzzy business of video streaming is taking off in countless directions. At the same time, dozens of challenges are becoming apparent in this latest competitor
to (or collaborator with?) broadcast TV. They encompass technology, economics, legal/ regulatory issues and consumer preferences for ad-supported programming vs. paid content.
The emergence of Venu, the proposed $42.99-per-month bundle of streaming content, has been a major source of enthusiasm. Its program package is intended to include ABC, Fox, ESPN, TNT and TBS programs and, by extension, a slew of major football, baseball, basketball and hockey league games. Analysts are waiting to see how it will fare against alternatives such as the Xfinity StreamSaver bundle that Comcast is assembling by bringing Peacock, Netflix and Apple TV+ into one $15-per-month package.
Rick Ducey, managing director of BIA Advisory Services, who analyzes the migration of media platforms, characterizes the current situation as “a very complicated environment for everyone to navigate.” He cites “nearly 2,000 streaming services available to consumers configured, bundled and sold across various platforms, publishers, and content aggregators.”
Ducey observed that industry providers and consumers are evaluating the very different business models that are available.
Other analysts offer similar perceptions of the cloudy near-term outlook. “This environment makes it challenging for
consumers to keep track of what they are spending, which causes a great deal of frustration,” says Adriana Waterston, executive vice president and insights and strategy lead at the Horowitz Research Division of M/A/R/C Research. “This is why churn has been such a big issue.” She points to data showing that “consumers navigate these costs by timing which services they pay for when.”
Waterston sees the decision-making process about streaming tied to viewers’ “increased expectation for content that reflects [their] identities and views on the world,” and said she expects an even “bigger impact” if and when Venu debuts as “sports fans get a sense of the breadth of content this service could offer.”
In her firm’s recent research, 42% of sports fans said they would subscribe [to Venu], and among those who were likely to sign up, 38% said they would likely make a change to the other services they get because of it.
NBC, in its post-Olympics victory lap, pointed to streaming video’s ability to give viewers what they want to see. Peacock’s “Gold Zone,” a compendium of whip-around coverage of each day’s Olympic highlights, consistently ranked among Peacock’s top five most-watched Olympics segments and nearly quadrupled its viewership during the two weeks in Paris,
according to NBC’s analysis.
One in five Olympics viewers tuned into “Gold Zone” and more than a quarter of Olympics viewers on Peacock watched via “Multiview,” with half of their time spent on featured live events, and half watching the “quad box” view of multiple events.
Yet in the deluge of viewer research, an unsettling picture emerges. In Xperi’s latest TiVo Video Trends Report, 20% of consumers said they believed “they have too many services.” The study found that at the end of 2023, the average home used 11.1 services (down slightly from 11.5 a year earlier)—but that the number of free services increased while paid services declined year over year.
Parks Associates also identified a 30% decline in spending on streaming services since 2021. Current spending is about $64 per month compared to $90 monthly three years ago, according to Sarah Lee, a Parks research analyst. “Consumers are spending less, but rather than go without, many are using ad-based alternatives to save on costs,” Lee said. “A service needs to provide unique and ongoing
value if it is to charge a premium.”
Separate research by LG Ads indicates that 80% of viewers watch Free Ad-supported Streaming TV (FAST) channels and 63% prefer this format to other on-demand formats. And that leads to questions about what viewers want to see on streaming channels.
MoffettNathanson media analyst Michael Nathanson, in a mid-summer evaluation, examined a shift away from original content, which had established Netflix in its early years. Now, Nathanson said that Netflix, along with Paramount+ and Warner Bros. Discovery’s (WBD) Max, are showing streaming gains even “as they released less content.”
“The market has shifted to allow the company to drive an increasingly large share of its viewership with its competitors’ content,” Nathanson said. “This is reflected in acquired titles’ (and especially nonexclusive acquired titles) rapidly increasing share of the list of top streamed titles.” He pointed out that only two of the top 20 most-streamed shows on Netflix in Spring were originals.
Seth Skolnik, chief operating officer of Vivid Labs, draws on his experiences at Paramount, Technicolor and new media startups to conclude that the bubble has burst. “We already see a significant decline in new show production, smaller deals, and only with established showrunners and stars,” he said. “Buckle your pants, we are all going on a diet.”
BIA’s Ducey is also trying to interpret how streaming customers’ viewing preferences will affect future production and distribution. “Content investment strategy has shifted towards more focused content offerings such as TV shows and films in genres like action, medical or police dramas, international series and films, live sports [and]… science fiction,” he said. “The total investment in content and number of titles produced may have reached a limit for now as streaming businesses rationalize” growth and profitability metrics.
“Cross-platform (linear TV plus streaming) campaign planning, activation, optimization and improved measurement and ROI using relevant Key Performance Indicators will provide a lot of lift to streaming’s role in the local media ecosystem,” he added.
As the content and marketing landscape takes shape, streaming is already facing increased legal scrutiny. Congressional forces are urging the Justice Department and the Federal Communications Commission to probe the Venu alliance with an emphasis on a potential antitrust violation in pooling sports league contracts of the several networks.
In the current legal challenge to Venu, plaintiff streamer Fubo claims it is being forced to carry dozens of channels in order to get licensing rights to the sports events.
“The FCC hasn’t regulated streaming services to date, other than in very discrete areas such as closed captioning,” explains Ari
Meltzer, a communications attorney at the Wiley Rein law firm in Washington. He points out that there are already disputes about whether the FCC has authority over streaming video—an issue that is being bruited around quietly on Capitol Hill. For now, oversight comes under other legal umbrellas, such as antitrust, contracts, copyright, and unfair and deceptive trade practices, Meltzer adds.
“There are tradeoffs: while streaming services don’t have to comply with the same regulations as broadcast and cable/satellite, they also aren’t entitled to certain benefits, such as statutory copyrights,” he added. “Most streaming services seem content trading the need to obtain copyrights for less regulation.”
Last month’s ruling on Venu from the U.S. District Court, Southern District of New York, has changed the momentum. There is no indication about how long the court’s temporary restraining order will stay in effect. Fubo, a nine-year old streaming service that concentrates on live sports (including NFL, MLB, NBA, NHL, MLS and international football), filed the lawsuit in February, claiming
that Venu would control up to 80% of live broadcast sports content.
Venu’s owners said they plan to appeal the Court ruling.
Fubo cofounder/CEO David Gandler welcomed the ruling, saying “We seek equal treatment from these media giants, and a level playing field in our industry.” He cited the networks and leagues that “monopolize the market, stifle competition and cheat consumers from deserved choice.”
Determining the Venu legal status “introduces a bit of a wild card” to the landscape, Ducey added. “If Venu does move forward and survives these threats, it certainly could [prove] how to bring some collaborative innovation to the market by trying to offer a ‘best-in-class’ sports experience to streaming viewers.” But he acknowledged that the high-value sports licensing rights could “challenge the viability” of Venu.
“Something has to give. Consolidation may help share costs but then partners stand to lose some competitive differentiation with their other direct-to-consumer and distribution platform strategies,” Ducey added. “It’s not clear how this nets out at this point.”
Central to many streaming providers’ programming and pricing decisions is the flavor of streaming video that appeals to viewers. Among the options:
• AVOD (Ad-Supported Video on Demand)
• FAST (Free Ad-Supported Streaming TV)
• SVOD with ad-supported discount tiers
• TVOD (Transactional VOD, i.e., one-time rentals of a movie or show from Prime Video)
• PVOD (Premium VOD, an additional fee for access to exclusive content such as major-event or early-access viewing)
Add to that acronym jumble the emerging options for commercial operations, such as:
• CSAI (client-side ad insertion): ads requested and played by the viewer’s device
• SSAI (server-side ad insertion): ads stitched into the video stream before it is delivered
Advertisers are evaluating the comparative values of CSAI, which enables more individual personalization for viewers, vs. SSAI, which is less prone to disruptions or latency issues, as sketched below.
In its latest survey of viewer acceptance of ad-supported streaming, Hub Entertainment Research found that “an increasing number of TV viewers are accepting advertising in streaming video and they are readily able to discern the differences in how various services deliver the ad experience.” Hub said that “two-thirds of TV viewers would prefer watching ads if it saves on subscription costs” and that the level of total ad intolerance has dropped from 17% in 2021 to 12% in June 2024.
Along with the ad-structure decisions comes a confrontation with a question that many media veterans fear: “Are we reinventing cable TV?” For example, the SVOD vs. FAST deliberation revives memories of the 1970s and ’80s introductions of ad-supported cable networks along with HBO, Showtime and other extra-fee “paid” channels.
Adriana Waterston of M/A/R/C calls content consolidation (such as Venu) into à la carte packages “the beginning of the new cable TV/multichannel bundle,” adding, “I believe that at least from a value standpoint, this is what consumers really need, even if it’s not what they think they want.”
In his 2024 memoir “Hits, Flops, and Other Illusions,” TV and film producer/director/writer Ed Zwick mused about Hollywood’s shift toward the economics of streaming. The celebrated creator (“thirtysomething,” “Glory,” “The Last Samurai”) laments that “storytelling in this new age of streaming platforms seems deliberately crafted to create a new kind of anxiety designed to induce gorging rather than fulfillment, conversation rather than catharsis, consumption instead of closure.”
“The thoughtful has given way to marketable, and the complex idea replaced by the 15-second TikTok,” Zwick contends as he dissects Hollywood’s current “pressure to hold to …commercial viability” and the preference for “pre-sold IP [intellectual property] [that] can be marketed in a single sentence.”
After expressing his frustrations, Zwick kvetches that the modern Hollywood approach is “to aim low and hit the target.”
❚ Gary Arlen
By Phil Rhodes
Video over IP, in the sense that’s used in film and television, sometimes feels like a newer idea than it really is. That might be because the high-end, uncompressed ST 2110 standard took a while to finalize, or because of its rather forward-looking demands on IT infrastructure, both of which made a quick uptake less feasible than many might have preferred. The compressed alternatives might be a reaction to that—but alternatives inevitably mean choices with profound implications.
“I think the original promise was that if you go IP, you save money,” says Axel Kern, senior director, Cloud and Infrastructure Solutions at Lawo. “Pretty quickly the industry confirmed
the opposite. Investing in switches is definitely not saving you money [and] you have to deal with the complexity and abstraction of IP.”
With that in mind, Kern contends, considering ST 2110 to be a direct replacement for SDI is missing the point. “That wasn’t the original promise—it [offers] higher agility and significantly higher flexibility of doing things. Now, with IBC 2024 coming, there are many options where you can really see flexibility that’s not possible on an SDI backbone.”
Kern suggests that the circumstances of specific installations
tend to suggest an approach. “If you’re a truck owner, there is a decision between baseband and IP. If you’re a facility, you have this option to go IP and if it’s a ‘brownfield’ approach, where you take part of your facility… brownfield would allow you to go IP and test it out. Greenfield approaches would go IP full stop. The other topic you have to consider is how much of the gear you’re putting in your truck can be found with purely SDI capabilities.”
Somewhat-compressed alternatives allow facilities to sidestep the need for extremely expensive networks, as Kern says. “We see this when people use NDI or even SRT [Secure Reliable Transport]
over the wild internet to connect facilities and do remote production,” he said. “This is the logical consequence of things like NDI, which are still high quality but for a fraction of the bandwidth.”
Involving the public internet makes cloud resources more available, though there’s recently been caution over the sheer bandwidth of video in circumstances where cloud providers charge per unit data leaving the network.
“The egress costs on cloud are sometimes outweighing the inventory cost you would have going your own way,” Kern adds. “Doing a like-for-like in the cloud is not the way forward. What we look for is whether we can make use of the technology the cloud brings us, microservices and software-as-a-service applications which you can spin up on commercial, off-the-shelf IT gear.”
As long-time observers might be aware, JVC was an early adopter of IP for broadcast, and long before either ST 2110 or NDI became the forces they are today.
“We introduced video over IP a long, long, long time before it became a standard,” says Edgar Shane, general manager, Engineering and Product Development for JVC. “We integrated [it] into video
cameras well before the infrastructure was ready. Most of the stuff I see on a daily basis today is stuff we’ve been doing for years.”
JVC’s Connected Cam architecture handles video at modest bandwidths, making wireless and public networks a plausible alternative to costlier modes of transport. The software-definability of the camera’s encoder also offered a degree of future-proofing.
“It used to be that you had to have your camera and an IP encoder separately,” Shane says. “So, we integrated IP encoders into the cameras… enabling NDI in those cameras is easy, because
we already have the encoder built in. On our cameras today you can select NDI or SRT, which is still needed when you send video over the internet.”
The explosive success of NDI might suggest an industry still cautious about the bandwidth demands of ST 2110, though Shane is clear about the different intent. “[2110] is pretty demanding of IP infrastructure,” he said. “You need to have minimum latency and precise switching… but it’s for a different audience. If you’re routing the whole house for broadcast it absolutely makes sense.”
At the higher end, though, even NDI can begin to demand more than an average office network. “NDI exists in two formats,” Shane says. “There’s full NDI, which is 180 Mbps, then there’s NDI HX2 and HX3, which are only 20–30 Mbps. This is important because customers are using standard gigabit Ethernet and people are saying they can have several cameras. [But] for full NDI, people are putting in 10-gigabit networks.”
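The arithmetic behind those choices is simple link-budget math. A rough sketch follows: the NDI rates are the ones Shane quotes, the uncompressed ST 2110-20 figure is an assumed ballpark for 1080p59.94 10-bit 4:2:2 video, and real designs leave headroom for audio, ancillary data and other traffic.

```python
# Back-of-envelope stream counts per Ethernet link. The NDI figures come from
# the quote above; the ST 2110-20 figure is an assumed ~2.5 Gbps for
# uncompressed 1080p60 10-bit 4:2:2 video. A count of 0 means it does not fit.

LINKS_MBPS = {"1 GbE": 1_000, "10 GbE": 10_000}
STREAMS_MBPS = {
    "ST 2110-20 1080p60 (assumed)": 2_500,
    "Full NDI": 180,
    "NDI HX": 30,
}

for link, capacity in LINKS_MBPS.items():
    for name, rate in STREAMS_MBPS.items():
        print(f"{link}: ~{capacity // rate} x {name}")
```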
Economies of scale in the IT market seem likely to make ST 2110 steadily easier. But as Shane relates, the brute force approach of more bandwidth is just one solution in a world with ever more CPU horsepower.
“Cameras evolve,” Shane says. “The front end with the lens and sensor evolves less because it’s traditional. Newer sensors have more sensitivity and 4K and beyond, but we’re exploring better codecs. HEVC allows us to send video with lower bitrate and higher quality. We’re also exploring AI, because our latest PTZ cameras feature object tracking. They can recognize human beings and you can select which person out of three, four, five people.”
Sony, too, describes a combined approach. “In broad strokes, IP and 2110 have become pretty common within broadcast, live production,” says Deon LeCointe, director of Networked Solutions at Sony Electronics’ Imaging Products & Solutions. “And that’s not just 2110—we find a lot of our customers are using 2110, NDI... some people have started using SRT.”
LeCointe concurs that costs remain a bugbear at the high end. “From where I sit, 2110 is being sold as a premium over SDI. SDI has been around for decades. The cost to build the chips is lower, the cost to build a product that can support 2110 is higher.” Even so, he predicts, the finances seem likely to ease naturally. “The cost of COTS equipment is coming down… we’re in a space that moves forward much faster than the broadcast industry does.”
“What’s new,” LeCointe says, “is people
focusing on how to leverage IP. We can add 5G live production, private 5G or even in the future public IP… there’s a potential for 5G to be leveraged by smaller networks and vendors as well to get from camera to a control room to a broadcast head end.”
Sony’s approach combines purpose-built hardware with software-based solutions— something video over IP was always intended to facilitate. “From a hardware perspective,” LeCointe concludes, “cameras, switchers and monitors all have 2110. We have been really pushing the envelope on 5G. We introduced our new RPU-7 with the NXL-ME80, which is a little box that can do eight channels of video. Nevion is a software solution that has visibility across your entire media environment, to ensure that media flows get from sources to their destinations.”
Blackmagic Design is a comparatively late entry to ST 2110, although the company’s meteoric rise as a camera manufacturer over the past decade sets something of a precedent.
Craig Heffernan, UK technical sales director, describes a considered approach. “We looked
at how everything had settled in the IP market… and decided that 2110 was the route we wanted to go, in comparison to NDI or an alternative. It lets us do what we do best, which is to put some customization into it. We can have Blackmagic connected technologies but also be open enough to bring in third parties.”
The company’s key innovation is a codec that handles high-resolution, high-framerate material on 10-gigabit networks. “We’re compliant in terms of how 2110 works, but within that is the Blackmagic IP10 codec, which for us replaces TICO… we needed to get 2160p at 50 and 60 frames into a 10-gigabit space, with headroom. We wanted it to be entirely lossless and we wanted to keep the Ethernet network costs down.”
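Some quick, generic pixel arithmetic (not Blackmagic’s published figures, and assuming 10-bit 4:2:2 sampling) shows why a light codec is needed to leave that headroom on a 10-gigabit link:

```python
# Generic arithmetic, not vendor data: raw 2160p60 10-bit 4:2:2 video alone
# nearly fills a 10 GbE link, which is why a light, visually lossless codec is
# needed to leave headroom for audio, ancillary data and network overhead.

width, height, fps = 3840, 2160, 60
bits_per_pixel = 20                     # 10-bit 4:2:2 = 10 (Y) + 10 (Cb/Cr avg)

raw_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"Uncompressed 2160p60 10-bit 4:2:2: ~{raw_gbps:.1f} Gbps")  # ~10.0 Gbps
print(f"Share of a 10 GbE link: ~{raw_gbps / 10 * 100:.0f}%")
```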
Blackmagic’s approach summarizes the ambitions of the industry: Economies of scale are making video over IP more affordable. Task-specific hardware seems on the way out. Flexibility is a watchword.
Attractive as that is, the advantage of IP has often been manufacturer agnosticism. Heffernan assures us that the company intends to make IP10 a standard. “We’ve released the whitepaper. The nature of the world is we’re happy to be part of the portfolio but we’re not everything to everyone. We want to create a network of OEM and developer support so the interaction between Blackmagic and non-Blackmagic is there for the customer. It’s being discussed with SMPTE.”
“Our cameras’ ability to stream back to control via SRT means we get a high-quality, reliable connection into the bonded 5G at very low latency and very high quality,” Heffernan concludes. “The cost of satcom, private uplinks and all of those environments could be done away with.” Video over IP, in all its forms, is probably closer than ever to fulfilling its promise. “It’s quicker to deploy, faster to set up, easier to manage.”
But can ML go ‘too far?’
Throughout the 20th century, knowledge has continually expanded, stemming from the evolution of eras such as the industrial revolution, the space program, the atomic bomb and nuclear energy and, of course, computers. In some cases, it may appear to the masses that artificial intelligence is about as common as a latte or a peanut-butter-and-jelly sandwich. Yet the initial developments of AI date at least as far back as the 1950s, steadily gaining ground and acceptance through the 1970s.
It wasn’t until the late 1970s and early 1980s that computer science began to emerge from a data-driven industry using large mainframe computational systems into platforms for everyday use at a personal level. While the Mac and early PCs (beginning in the 1980s) were game changers, they were certainly limited in compute power and not designed to “learn” or render complex tasks with modeling or predictive capabilities. Computers of that time relied on programming based essentially on an “if/then” language structure, with simplified core languages aimed at solving repetitive problems driven by human interaction and coordination.
As sufficient human resources and computer solutions began to develop and create “expert systems” (circa 1990s–2000s), the computer world rapidly moved into a new era, one built around “knowledge” and driven by language models with a basic ability to train themselves using repetitive and predictive models, not unlike those infants and toddlers use to hear, absorb, repeat, speak and “tune” their minds and bodies to communicate, to learn physical principles (such as walking), coordination and more. If you have ever observed a two-year-old learn to maneuver around a slick pool edge, make their way to the steps and gradually ease into the water, using mental observation (aka “memory senses”), trial and error, and repetitive steps (slow walking or crawling), while a parent watches to mitigate accidents or errors, you will quickly see, figuratively, what occurs in a set of computers or servers built to use software to train itself over time: by applying its programming repetitively, the system eventually finds a way to model the original programming into what becomes, in part, a formidable large language model (LLM) that reaches the desired goal.
Machine learning is just that kind of process and is the basis of AI, whereby computers can learn without being explicitly programmed. This generalization of ML has classifications that are utilized to differing degrees, as diagrammed in the figure on machine-learning tasks (Fig. 1). Fundamentally, machine learning involves feeding data into coding algorithms that can then be trained (through repetition with large sums of data) to identify patterns and make predictions, which then form new data sets from which to better predict the appropriate outcome or solution.
A variety of applications such as image and speech recognition, natural language processing and recommendation platforms make up a new library of systems. The value relationship gained in this training process can be developed by (T) defining a task; (P) establishing a performance figure or criteria (i.e., how well did it do, or how far off were the results); and (E) finding a resulting data set, the experience, given the data it was provided for such an analysis.
Machine learning is a continual process whereby trials create results that get closer and closer to the “right solution” through reinforcement. Computers/servers learn which data set is classified as “more right” or “less right” (i.e., more or less correct), then store those results, which modify the learning algorithm(s) until the practice gets as close to “fully right” as possible without “overshooting” the answer and risking a data overload or a false (hallucinatory) outcome.
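Purely as an illustration (this code is not from the article), the (T) task / (P) performance / (E) experience loop above can be written in a few lines of Python, using a toy task of learning y = 2x from example pairs:

```python
# Minimal sketch of the (T)ask / (P)erformance / (E)xperience loop described
# above: learn y = 2x from data by repeated adjustment. Illustrative only.

data = [(x, 2.0 * x) for x in range(1, 6)]   # (E) experience: example pairs

def performance(w):
    """(P) performance figure: mean squared error of the current model y = w*x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0                      # initial guess for the model parameter
for step in range(200):      # repeated trials, as in the toddler analogy
    # nudge w in the direction that reduces the error (gradient descent)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad

# (T) the task: predict y from x; after training, w should be close to 2.0
print(f"learned w = {w:.3f}, error = {performance(w):.6f}")
```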
One downfall in ML is that the system may go “too far” (i.e., run too many iterations), which then generates an exaggerated or wrong output and produces a “false positive” that gets further from the proper or needed solution. One then has to ask just how far the generative process should go before it is stopped. When the system reaches the “regression” point, a case where the outputs become continuous rather than individual or discrete, a categorizing or sensing algorithm would necessitate supervisory termination, or a re-steering of the operation to a more dimensioned (contained) level that restricts continued processing.
So, in addition to the learning algorithm, there are sets of management algorithms that must be applied throughout the learning process to mitigate these so-called “hallucination” possibilities. Remember the toddler in the pool: the manager here is the parent, the individual who stops the child from being hurt or from risking a task (T) that could be catastrophic in nature.
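One common form such a management step takes in practice is early stopping: hold back validation data and halt training once performance stops improving. A minimal, generic sketch follows, with the training and validation steps left as assumed callables:

```python
# Hedged sketch (not from the article): early stopping halts training when
# performance on held-out data stops improving, rather than letting the
# system iterate "too far".

def train_with_early_stopping(train_step, validate, max_steps=1000, patience=5):
    """Run train_step() repeatedly; stop once validate() (lower is better)
    has not improved for `patience` consecutive checks."""
    best, stale = float("inf"), 0
    for step in range(max_steps):
        train_step()
        score = validate()          # e.g., error on held-out validation data
        if score < best:
            best, stale = score, 0  # still improving: keep going
        else:
            stale += 1              # no improvement this round
            if stale >= patience:   # the "supervisory termination" point
                return step, best
    return max_steps, best
```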
ML can be (and is) used in many everyday solutions, including email filtering, telephone spam filtering, anomaly detection in financial institutions, social media facial recognition, customer data analysis (purchase history, demographics) and trends, price adjustments (such as with non-incognito searching), and emerging advanced solutions including self-driving cars and medical diagnostics. The “balancing” apparatus must weigh multiple solutions, alternatives and decision points, which in turn keep a runaway situation, and the unnatural or impossible solution it would produce, from occurring.
Fig. 1: Diagrammatic representation of machine-learning tasks, with outlined examples of the three machine-learning types (supervised, unsupervised, reinforced) and their tasks.
Besides the rapidly developing capabilities, there are as many challenges in this evolving AI industry as there are opportunities. Data bias and fairness (e.g., in social media): a model is highly dependent on the data it has available for training, and bias can obviously lean toward, and potentially lead to, discriminatory solutions. Privacy lapses as well as security breaches can head users into areas that result in illegal or illegitimate practices. Given the ease of spinning up huge data server systems in the cloud, the possibility of running tens of thousands of iterations on passwords or account numbers means the risk to the customer (as in credit card fraud) can grow exponentially with the number of credit card holders. Banks and credit services use very complex AI models to protect their customers. This in turn opens the door to another level of AI—that is, risk and fraud protection analysis and monitoring. It’s a huge cost to the credit card companies, but one that must be spent in order to protect their integrity.
Another concern is in automation and the potential for job displacement. It is inevitable that some people will be displaced by automated AI solutions. In turn, these new dimensions are opening up new job opportunities across new sectors of the workforce, such as data analysis, machine learning management, data visualization, cloud tool development, laboratory assistants or managers who test and hone these algorithms.
There continue to be many misconceptions related to these new words and their actions. The buzz words such as machine learning, deep learning and artificial intelligence have many people thinking these are all the same thing whenever they hear the phrase “AI.” Regulations are being developed internationally and within our own legislatures that directly relate the “AI word” to machine learning or vice versa. Most of these early “rules” are done in a pacifistic way, likely
because the legislative authors have little actual knowledge or background in this area. Obviously, we risk going in the negative direction by reacting improperly or too rapidly—but things will surely happen that will need to be corrected further downstream. Doing nothing can be as risky as doing too much.
As for the media and entertainment industry, efforts are well underway to put dimension on the topics of AI, ML and such. As with any of the previous standards developed, user inputs and user requirements become the foundation for the path towards a standardization process. We start with definitions that are crafted to applications, then refine the definitions that reinforce repeatable and useful applications. Through generous feedback and group participation, committee efforts put brackets around the fragments of the structures to the point that the systems can be managed easily, effectively and consistently.
For example, industry might use AI or ML to assemble, test or refine structures, which will in turn allow others to learn the proper and appropriate uses of this relatively new technology. Guidance is important; as such, SMPTE, through the development of Engineering Report (ER) 1010:2023, “Artificial Intelligence and Media,” is moving forward on this work and has produced a usable structure from which both newcomers and experienced personnel can gain value. The 41-plus-page document can be found at SMPTE.org, and a search will turn up books on these topics as well. Even the news media has an interest in its AI future. In their book “Artificial Intelligence in News Media: Current Perceptions and Future Outlook,” Mathias-Felipe de-Lima-Santos and Wilson Ceron state that “news media has been greatly disrupted by the potential of technologically driven approaches in the creation, production and distribution of news products and services.”
Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. You can reach him at karl@ivideoserver.tv.
evolution of audio over the past five
There has been a series of significant milestones in the evolution of Olympic broadcast sound, culminating in the greatest audio production of an Olympics ever. Clearly, NBC has led and dominated the soundwaves for more than four decades, and 2024 is the pinnacle of that persistence.
Basically, live television sound was mono until the 1980s, with the first Olympics stereo broadcast in 1988. Under the direction of sound designer Bob Dixon, NBC placed a shotgun microphone alongside the host broadcaster’s single shotgun microphone to capture XY stereo. This was no easy feat, politically or technically, since the Host Broadcaster, Korean Broadcasting System (KBS), was only obligated to deliver mono sound to the rightsholders, including NBC.
The Host Broadcaster was traditionally the national broadcaster of the host country, but after 2008, Olympic Broadcast Services (OBS) became the permanent Host Broadcaster, fully under the direction of the International Olympic Committee.
The Host Broadcaster for the 1992 Games in Barcelona was TVE, Spain’s national broadcaster, which produced the Opening and Closing Ceremonies in stereo, but sports were captured and produced in mono. The 1996 Games were the first at which the Host Broadcaster, Atlanta Olympic Broadcaster (AOB), captured and produced all events in stereo sound.
After 1996, Dixon encouraged Mike Edwards and Ken Reichel of Audio Technica to manufacture a stereo shotgun. In fact, Edwards initiated and supervised the development of three stereo microphones and two mono shotgun microphones, which were still heard on all Olympic sports and ceremonies in Paris.
In addition to the extensive development of suitable microphones, there was considerable work to be done with signal management and distribution. Broadcasters had to solve the problem of multichannel audio over a stereo
infrastructure. Various schemes were developed to get more than two channels of sound to the home viewer/listener, but it was not until the sound was digitized that a credible surround sound was possible for distribution and transmission.
In 2006, after consulting with his audio director, Olympic Broadcast Services chief Manolo Romero determined that producing surround sound with unprocessed, discrete audio channels was the best way to satisfy the needs of all the rights holders at the Olympics—including NBC.
With 2008 came the implementation of surround sound at the Summer Olympics with the host broadcaster delivering six discrete channels of sound for the 5.1 sound format.
The sound of the 2008 Games was a significant challenge for NBC because so much of the viewing/listening audience was still listening in stereo, since soundbars had only recently entered the marketplace in the early 2000s.
NBC continued to develop surround sound with various Dolby analog schemes, but it was NBC’s adoption of ATSC 1.0 with Dolby AC-3, along with a market full of affordable soundbars, that allowed multichannel sound to take off.
NBC polished its surround sound coverage but was persistent with the goal of true immersion. For the 2014 Games in Sochi, Russia, NBC Sports Sound Designer and Olympic Supervisor Karl Malone vigorously pursued immersive sound, beginning in Rio where NBC mixed the Opening and Closing Ceremonies in immersive sound.
Pyeongchang and Tokyo followed, and immersive sound was heard in both Opening and Closing Ceremonies as well as big venues. For Beijing, all primetime coverage was produced in immersive sound. Finally in Paris, all primetime, USA Network and all Olympic sports received the immersive sonic enhancements.
By the Tokyo 2020 Summer Games, immersive sound was also captured and
delivered by the permanent Host Broadcaster OBS. Nuno Duarte, the OBS sound designer and supervisor, was dealt a surprise—no audience! And the same for Beijing. The magic of sports and particularly the Olympics is seeing and hearing the arenas and stadiums full of spectators.
The games were delayed a year and everyone had time to prepare a design without spectators. The actual sound signal capture and production of immersive sound is not difficult particularly if you have a sonic layer of ambiance and atmosphere.
Malone had to make some difficult decisions about the sound design.
“Tokyo gave us the ability to focus on the details of the sports and the athletes,” he said. “The sound of the hands on the gymnastics apparatus and the creak of the wood on the parallel bars; the actual physical exertion of the athlete, the breath, the sighs, the joy all there to be captured without the masking by the crowd. We knew we were missing the passion of the crowd as much as the athletes were, but adding any fake crowd
was unthinkable, even for a company like NBC, which is largely an entertainment one.”
The 2024 Summer Games in Paris were also a milestone as they were the first summer games where immersive sound was produced with the return of the audiences, the third dimension that was missing in Tokyo.
NBC seamlessly creates a production mix that includes “stems” of the host broadcaster’s 5.1.4 mix, plus any microphone splits, any additional crowd capture, any camera microphones, plus commentators, replays and music. Along with NBC’s branding and personal touch, it makes it look like NBC did the entire production.
“The absence of the crowd in an event of the magnitude of the greatest sporting contest in the world is almost unthinkable, and Paris is a return to what an event of this scope requires for the greatest athletes, the biggest crowds and the loudest cheers,” Malone said.
I anxiously tweaked my Yamaha Dolby Atmos-equipped soundbar, paid my subscription to Peacock over my Roku streamer and noticed a significant improvement in the sonic quality of the broadcasts.
The return of the spectators and the abundance of “open-air” stadiums have created a rich gumbo of sound succulence! I applaud Nuno Duarte and Karl Malone and thoroughly enjoyed listening.
There is a way to add more capacity without leaving viewers behind
Stations wanting to convert to ATSC 3.0 have had trouble finding an ATSC 1.0 home for their high-definition (HD) and standard-definition (SD) streams. Using MPEG-4 (AVC) for ATSC 1.0 has been tried but created problems for viewers with older TV sets or old “coupon” converter boxes. I’ll suggest some options that might work, if the FCC allows it.
SPACE FOR HOMELESS ATSC 1.0 PROGRAM STREAMS
There is much support for ATSC 3.0, including recent interest from the U.S. government, which is studying the standard’s Broadcast Positioning System utility as a backup for GPS. New features are being added, including standards-based (DRM) radio. Work has started on developing ways for 3.0 to work with 3GPP wireless technology.
While LG is no longer selling ATSC 3.0 TV sets, other manufacturers such as Sony, Samsung, Hisense and now TCL are offering them, and consumers now have a variety of inexpensive set-top box receivers to choose from. Features like virtual channels (delivered by internet to the TV rather than over-the-air) and broadcaster applications that provide additional content and the ability to restart programs (using the internet) will make 3.0 even more attractive to viewers.
However, spectrum and data bandwidth for ATSC 3.0 are limited. Converting existing stations to 3.0 requires finding a home for their ATSC 1.0 programs on the stations remaining on 1.0. The popularity of ATSC 1.0 “diginets” has made it difficult to find a space for a new 3.0 host’s streams. One obvious solution is to improve the efficiency of the existing ATSC 1.0 capacity by using MPEG-4 video, which has roughly twice the efficiency of MPEG-2.
BY THE NUMBERS
Table 1 shows an example of average bandwidth allocation for a station carrying
two HD streams and four SD streams in its 19.392 Mbps ATSC 1.0 stream. The HD streams are assumed to have a primary 5.1 audio and a secondary stereo audio stream for audio description or a second language. Extra bandwidth is allowed for null packets and PSIP data. Quality will vary depending on program content, and aggressive statistical multiplexing is required to give an HD stream more bandwidth when needed and to let the SD streams have more when the HD streams don’t need it.
Early tests used MPEG-4 (AVC) encoding on one or more SD streams. Table 2 shows the result of converting all the SD streams in Table 1 to MPEG-4, keeping the same audio bit rates and reducing the video bit rate to 650 kbps.
That allows equal or greater video quality than a 1,000 kbps MPEG-2 stream. This configuration frees an extra 1,400 kbps of bandwidth, which could be used for one more MPEG-4 SD stream. Reducing the MPEG-4 SD video bit rate to 50% of the 650 kbps rate would allow two additional MPEG-4 SD streams at the lower rate.
Unfortunately, attempts to use MPEG-4 have resulted in complaints from viewers with older TVs or NTIA coupon set-top boxes that the program had audio but no video. If there were a way to hide the MPEG-4 content from those older TVs, it would likely eliminate the complaints, but I haven’t found one. As a result, use of MPEG-4 by full-power stations has been limited.
There is a way to add significantly more capacity to lighthouse stations without leaving any viewers behind: transmit the HD content in MPEG-4 with an SD simulcast in MPEG-2, and leave all existing SD diginets in MPEG-2. Viewers with older TVs would not lose any programs, but the main program would be in SD on those sets. This would not matter for anyone using the NTIA coupon set-top boxes, as their output is SD only.
Table 3 shows the bandwidth allocation for MPEG-4-only HD and MPEG-2 SD, assuming a 50% reduction in bandwidth for the HD streams compared to MPEG-2. Audio is 5.1 for the MPEG-4 HD streams and stereo for the SD MPEG-2 streams, with room for a stereo secondary audio on the SD simulcasts. This scenario provides over 3,600 kbps of extra bandwidth, enough for three more MPEG-2 SD streams with stereo audio.
If you’re interested in this approach, take some time to build a spreadsheet (or a short script like the sketch below) and test other scenarios. Consider dropping one of the MPEG-2 SD diginets; that would free enough bandwidth for another MPEG-4 HD and its companion MPEG-2 SD. Another option would be to use stereo audio on the MPEG-4 HDs and a 96-kbps mono secondary audio track. With a small reduction in video bandwidth, that would also allow an additional HD and companion SD.
Encoder experts may notice a potential problem with the simulcast approach. Statistical multiplexing allocates bandwidth based on stream content and priority. If two streams are airing complex content that requires extra bandwidth at the same time, it may limit the bandwidth available to other streams. However, if the HD and SD companion streams can share the same audio (5.1 main and stereo secondary), the bandwidth gained could offset that effect and perhaps work better than delaying the video of one of the streams.
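A few lines of code can stand in for that spreadsheet. The sketch below (Python) treats everything as average rates in kbps; the 19.392 Mbps payload, the 1,000 kbps MPEG-2 and 650 kbps MPEG-4 SD video rates, and the 50% MPEG-4 HD reduction come from the discussion above, while the MPEG-2 HD video rate, the audio rates and the null/PSIP overhead are illustrative assumptions you should replace with your own encoder numbers.

```python
# Back-of-the-envelope ATSC 1.0 multiplex budget (average kbps).
# 19.392 Mbps payload, 1,000/650 kbps SD video and the 50% MPEG-4 HD
# reduction are from the column; the HD video rate, audio rates and
# null/PSIP overhead are placeholder assumptions.
TOTAL = 19_392            # ATSC 1.0 payload, kbps
OVERHEAD = 1_000          # null packets + PSIP (assumed)
AUD_51, AUD_ST = 384, 192 # AC-3 5.1 and stereo audio rates (assumed)
HD2_VID = 6_236           # avg MPEG-2 HD video under stat mux (assumed)
SD2_VID, SD4_VID = 1_000, 650   # MPEG-2 / MPEG-4 SD video (from the column)

def used(streams):
    """Total mux usage for a list of (video, audio) per-stream averages."""
    return OVERHEAD + sum(v + a for v, a in streams)

# Table 1-style baseline: 2 MPEG-2 HD (5.1 + stereo SAP) + 4 MPEG-2 SD (stereo)
baseline = [(HD2_VID, AUD_51 + AUD_ST)] * 2 + [(SD2_VID, AUD_ST)] * 4
# Table 2-style: same, but all four SDs move to MPEG-4 at 650 kbps
all_sd_avc = [(HD2_VID, AUD_51 + AUD_ST)] * 2 + [(SD4_VID, AUD_ST)] * 4
# Table 3-style: HDs in MPEG-4 at 50%, each with an MPEG-2 SD simulcast
# (stereo + stereo secondary); the four SD diginets stay in MPEG-2
simulcast = ([(HD2_VID // 2, AUD_51)] * 2
             + [(SD2_VID, AUD_ST + AUD_ST)] * 2
             + [(SD2_VID, AUD_ST)] * 4)

for name, plan in [("Table 1 baseline (all MPEG-2)", baseline),
                   ("Table 2: SDs to MPEG-4", all_sd_avc),
                   ("Table 3: MPEG-4 HD + SD simulcast", simulcast)]:
    free = TOTAL - used(plan)
    print(f"{name:34s} free {free:5d} kbps "
          f"(~{free // (SD2_VID + AUD_ST)} more MPEG-2 SDs)")
```

With those assumptions the script reproduces the roughly 1,400 kbps and 3,600-plus kbps of headroom described above; actual results will depend on program content and how hard the statistical multiplexer is pushed.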
The FCC has made it clear it wants no viewers left behind. Allowing conversion of HD streams to MPEG-4 while providing an SD stream (with secondary audio and audio description) to viewers with older sets is one way to accomplish that goal. It will require approval from the FCC, and likely negotiations with cable companies if the SD MPEG-2 stream alone remains the “primary stream.”
Ideally, if the two carry identical content, the FCC would consider both the MPEG-2 and MPEG-4 streams as “primary” for regulatory purposes. l
As always, your comments and questions are welcome. Email me at dlung@transmitter.com.
Media Composer 2024.6 offers a new Transcript Tool for PhraseFind, enhanced compatibility with Avid’s Pro Tools audio production software and expanded availability of Avid Huddle, the company’s cloud service for real-time collaborative editing sessions. The latest version enhances Avid PhraseFind with the Avid Ada AI. The new Transcript Tool allows editors to work more efficiently by letting them jump to specific moments in a clip based on spoken phrases, editing directly from selections in the transcript into the timeline. v2024.6 also improves integration with Pro Tools, adding support for sub-frame automation for volume and pan. Pro Tools can now export Media Composer-compatible session files that preserve essential metadata; the release also makes Avid Huddle available for all versions of Media Composer and Avid | Edit On Demand. z www.avid.com
The VCLX LI 1600, the latest evolution in Anton/Bauer’s VCLX range of block battery systems, delivers more power than ever from a lightweight and IP65-rated weatherproof unit.
The VCLX LI is a 1600 Wh battery, offering more than double the capacity of the VCLX NM2 launched last year while being lighter, at just 25.5 lbs. It provides multi-voltage output (14.4V, 28V, and 48V) through two XLR4 outputs and one XLR3, offering seamless power to cameras, monitors, and lighting equipment.
This ensures optimal performance for high-current demand devices. For example, an ARRI ALEXA 35 rig could require about 300W at 28V DC—135W for the camera and another 165W for the monitor, wireless transmitter, wireless focus lens motors and receiver, focus assist, and lens light—the VCLX LI will power that package for 5.3 hours.
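That runtime follows from simple division of capacity by load; a quick sanity check in Python, using only the wattage figures quoted above:

```python
# Rough runtime estimate: battery energy (Wh) divided by average draw (W).
# The 1600 Wh capacity and ~300 W ALEXA 35 package are the figures above.
capacity_wh = 1600
load_w = 135 + 165          # camera + monitor/wireless/accessory load
print(f"runtime ≈ {capacity_wh / load_w:.1f} h")   # ≈ 5.3 h
```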
z www.antonbauer.com/en/products/vclx
The OPT-RMOC-12G is a new module option for Wohler’s iSeries Audio and Video monitoring products, making it possible to migrate a 3G-SDI capable monitor to a 12G-SDI (4K) capable monitor.
The module includes a single BNC as well as an SFP cage for a 12G-SDI or ST 2110 (at 1080p) input and works with most 1U and 2U iSeries in-rack monitors. Installed in the option card slot, the OPT-RMOC-12G can monitor 16 channels of audio (iAM and iVAM Series), plus a single video input on the iVAM Series. Both audio and video are selectable from sources connected directly to the card or from inputs connected to the in-rack monitor.
Notably, the new module also adds remote monitoring capability for use with Wohler’s new MAVRIC cloud-based monitoring solution. z www.wohler.com.
Canon’s new EOS R1 and EOS R5 Mark II are two full-frame mirrorless cameras for professional still photography and video production. The EOS R1 is designed to meet the needs of professionals who shoot sports and news as well as those involved in high-end video production. The EOS R5 Mark II offers improved video-focused features for advanced creators and a real-time multi-recognition tracking system for those who focus on still photography. The EOS R1 is a high-performance, reliable and weather-resistant camera with a back-illuminated stacked 24.2-megapixel full-frame sensor and a new processing system, offering video capture up to 4K with 6K RAW available as an option.
z https://www.usa.canon.com
Vegas Pro 22, featuring video/audio editing, color grading, effects, and cloud-integrated content management and acquisition tools, introduces several carefully chosen AI features designed to handle labor-intensive tasks and free users to fully express their creativity, giving “AI” a new meaning: “Assistance with Intention.”
In the new version, Vegas Creative Software has taken a measured and intentional approach to integrating AI capabilities into its workflow. This ensures that all AI features are designed to assist artists and content creators and to ease tedium in their workflows while providing them complete control and creative freedom in their work. z https://www.vegascreativesoftware.com/us/vegas-pro
Bitcentral has integrated Pixalate’s ad fraud protection, privacy and compliance analytics solution Post-Bid Analytics and Reporting into its ViewNexa streaming platform. The integration enables users to eliminate wasted ad spend and rectify problems in complex and fragmented supply paths.
According to Pixalate, almost 20% of Supply Chain Objects (SCO) have failed verification, and those are likely to have a 32% higher invalid traffic (IVT) rate. Pixalate’s Post-Bid Analytics detects, tracks and identifies the sources and causes of IVT, unauthorized sellers, redundant supply paths and app/domain spoofing at scale across every supply path. The tool enables ViewNexa users to perform audits based on chain length and various reason codes, including IVT and viewability percentage metrics, which are broken down by CTV (connected TV), mobile and web.
z https://viewnexa.com
Wooden Camera’s new Elite Accessory System for the Canon EOS C400 features custom camera accessories designed to provide an intuitive and flexible foundation for various setups. The system is designed to integrate with a wide range of Wooden Camera, Canon and third-party accessories.
The Wooden Camera Core Accessory System for the Canon EOS C400 includes: the Battery Slide Pro for Gold Mount or V-Mount batteries, which provides battery voltage to the camera through a 4-pin XLR connector; the Base Plate System, whose ARCA Riser Plate and ARCA Base Plate allow users to streamline or build out their rigs; the Top Plate System, a lightweight, adaptable, modular system with a three-piece Top Plate (center plus two symmetrical left/right plates) and a front-facing Dual Rod Clamp; the Top Handle Cheese Plate, which attaches to the camera handle with 1/4”-20 screws and provides additional space for accessories; a NATO Side Rail; and a Bolt-on Canon EVF Adaptor. z https://woodencamera.com
Bitmovin’s new Multiview feature enables audiences to watch multiple streams simultaneously and is designed for use cases such as sports, esports, and live events. Bitmovin’s Multiview is built on Bitmovin’s Playback, which is proven to deliver high-quality video streams at scale on all devices. It’s accompanied by in-built features such as advertising support, low latency streaming, and Digital Rights Management (DRM). Multiview is also supported by Bitmovin’s Encoding.
Multiview is an end-to-end solution that includes Bitmovin’s awardwinning encoding technology and pre-integrated analytics. Bitmovin’s Encoding ensures audiences enjoy high-quality streams regardless of bandwidth with its Per-Title encoding, multi-codec support with 8K, and multi-HDR support. The pre-integration of Bitmovin Analytics ensures that audience adoption of multiview can be measured, and that any playback issues can be pinpointed and resolved in real time before they impact the viewer.
z https://bitmovin.com
AI+ 2.0 is an AI-powered suite for media professionals to revitalize their existing content libraries with AI-generated metadata. Perifery AI+ 2.0 unlocks new monetization opportunities for the content M&E companies already own and automates distribution workflows with
updated transcription, metadata generation, facial and object recognition and automatic translation.
The additions streamline asset management processes, simplify technology administration and automate content organization.
By Michael Sill Video Engineer University of Notre Dame
SOUTH BEND, Ind.–Founded in November 1842, the University of Notre Dame is America’s leading Catholic undergraduate research institution. It’s home to approximately 8,900 undergraduate students and 4,200 graduate/professional students, and it hosts several of the United States’ most renowned athletics teams. As the sporting prowess of its teams, collectively known as the “Fighting Irish,” has developed, so has the desire to produce better live game coverage.
Built in 2017, our Notre Dame Studios production facility was designed to be fully IP from the start; however, it’s not part of the athletics department. As part of the Office of Information Technology (OIT), we’re a
centralized production facility that serves all of the Notre Dame campus. We produce a variety of events year-round, ranging from performing arts and academic events to weekly coverage of the Catholic Mass from the Basilica of the Sacred Heart.
Our three studios cover a range of content, from simple studio shoots to linear broadcasts, including approximately 125 annual athletic contests for ESPN’s ACC Network. More recently we have provided the backbone for all of Notre Dame’s home ice hockey games to NBC’s Peacock streaming service.
Our commitment to NBC made us rethink our audio infrastructure. The sheer number of microphones we have in place around the rink, combined with providing content between periods, means we produce a whole professional show for NBC. We knew we needed to upgrade to
provide the level of audio quality that NBC required.
After seeking advice from our own A1, Garry Elghammer, as well as several visiting NBC A1s with whom we share feeds, we installed a Calrec Artemis audio console, Type R surface and IP ImPulse cores into our Rex and Alice A. Martin Media Center. The Artemis is installed in Audio Control Room 2 (ACR2), the Type R is in ACR1, and we operate three smaller control rooms that use Axia Fusion consoles.
The Artemis is the heart of the entire audio system, and it’s when covering hockey that we really stretch its legs! It’s a 48-fader console with an empty bay that will enable us to extend to 56 faders in the future, which is NBC’s minimum requirement for football coverage. It has redundant ImPulse cores, which not only allow us to maintain our full IP workflows but also mean that the whole Notre Dame facility can interoperate on the same SMPTE
2110 network, whether that’s an audio product like Calrec or Axia, or a video product like Evertz.
We have a modular I/O unit in the equipment room for interfacing, a 32x32 EDAC I/O unit in ACR2, a 12x4 fixed-format I/O box located in one of our studios, and three portable 24x8 fixed-format units that we use around campus. It enables us to work like a REMI; with fiber operating between all our venues, we can facilitate various types of productions. It makes bringing in all the different audio feeds really easy, and it means that the Calrec network benefits the whole university and not just athletics. With I/O boxes strategically placed across campus, we have enough flexibility to cover a variety of events.
ACR1’s Type R is a more compact 36-fader surface but has all the features of a larger production console, enabling us to mix shows for ESPN, and it’s well understood by visiting A1s, which increases the pool of contractors/freelancers we can use.
Our Calrec IP infrastructure enables us to flex our network when required to service the whole university. We’re also committed to training the next generation in all areas of broadcast; even non-broadcast students are learning operational skills that give them valuable, real-world paid experience in professional audio, and many are seriously considering a career in broadcast. l
Michael Sill is a video engineer at the University of Notre Dame in Indiana. He can be reached at msill@nd.edu.
For more information on Calrec visit https://calrec.com.
By Randy Bobo Founder and Sound Designer Independent Studios
MILWAUKEE–The documentary “The Loop” looks back at Chicago AM and FM radio stations with that moniker during their heyday from the late 1970s to the 1990s. The Loop was known for its on-air personalities who were notorious for crossing the line: Jonathon Brandmeier, Steve Dahl and Garry Meier, Kevin Matthews, and Danny Bonaduce. They inspired and were copied by stations and DJs across the country.
I was brought onboard by production company Duncan Entertainment to produce 5.1 and stereo mixes, plus stems. I used DaVinci Resolve Studio’s Fairlight audio post production tools, a Fairlight Desktop Console, Mac mini, Focusrite interface and Lipinski and JBL monitors as part of my workflow.
Given the project is a documentary, interviews were key, and most were done in decent locations, so noise wasn’t too much of a problem. However, DaVinci Resolve Studio’s AI-based voice isolation did come in handy on several occasions, and it’s so nice to have it on every channel.
As the name implies, it allowed me to isolate dialogue from background sounds, easily removing any unwanted noises. And because it’s AI-powered, I only had to click a button, and the DaVinci Neural Engine did all the heavy lifting.
I’ve found setting the processing to around 30% cleans
up the dialogue yet still sounds very natural, and I’ve rarely had to use 100%. I used to reach for another solution for these real-time cleanup jobs, but now I can just press the voice isolate button in DaVinci Resolve Studio, which is incredibly easy and saves me time.
While most of my work was carried out in the Fairlight page, I also used DaVinci Resolve Studio’s edit page to lock the reference video or delete multiple video tracks I didn’t need, and the delivery page came in handy when sending the client a rough mix with reference picture. “The Loop” documentary was a joy to work on. It’s very entertaining and irreverent. Most documentaries are serious and even somber, but not this one.
Since my work on “The Loop,” I’ve had the chance to dabble in the beta of DaVinci Resolve Studio 19, and some new features have found their way into my
workflow right from the start, including dialogue separator, music remixer and ducker.
The music remixer FX is my current favorite. With it, I can lower the vocal of a song when it’s under dialogue, keeping the rhythm and percussion strong. Ducker does what it’s supposed to: one track can auto-adjust the level of another track without needing side chain compression or automation curves. For example, you can automatically set background noise or music to lower when dialogue is present. Nothing flashy here, but very useful and so handy.
Another AI-based tool, dialogue separator, helps bring dialogue upfront while lowering random noises that I swear are from outer space. Sometimes it makes you wonder what is happening on set, but thankfully in post, I have a tool that makes it easy to rebalance the dialogue against background sound.
Finally, audio panning to video is another AI tool that I have my
eye on to use in my next project as it works amazingly well. Using DaVinci Resolve Studio’s IntelliTrack AI point tracker, which tracks people and objects, I can quickly pan multiple actors, for example, and control their voice positions as they move across 2D and 3D spaces.
I’ve had features like these in third-party plugins for quite some time but having them in DaVinci Resolve Studio on every track, only one click away, is a luxury. l
Randy Bobo is founder and sound designer at Independent Studios, which delivers crafted audio and visual post production to filmmakers, advertising agencies, corporations and digital media producers. For more information about Independent Studios, call 414-347-1100 or visit www.independentstudios.tv
For more information, contact Blackmagic Design at 408-9540500 or visit www.blackmagicdesign.com.
By Mathieu Lantin Audio Engineer WNM
LIÈGE, Belgium— WNM is a Belgian company specializing in audio and multicamera distribution as well as ENG activities and intercom provisioning. It also provides technical services for global sporting events and consultancy for broadcast infrastructure projects.
For top-tier events like the recent UEFA European Football Championship, where nothing must go wrong, focusing on the split-second delivery of the highest-quality content is obviously important, although the final result also depends on the tools you have at your disposal. Audio is a core ingredient of any TV production and getting this part right essentially hinges on the flexibility afforded by the mixing system.
The ability of Lawo’s mc² consoles to function in an open-standards IP infrastructure makes it easy to distribute the required signals to the right destinations. This networking aspect is critical in large venues with a distributed pool of audio stageboxes. IP allows me to easily send audio essences to other sites where they can be reshuffled and embedded as necessary.
This is the kind of audio and video flexibility I needed for the European soccer championship in Germany, for instance. I love
the ability to reliably patch audio streams directly on my mc² console, because speed is of the essence when the world is watching.
The A__UHD Core provides a stellar density of fully featured DSP channels in a compact footprint. While 1024 DSP channels may seem overblown, I quite like having one channel per signal with no need to compromise on the number of inputs and outputs. End-to-end redundancy is another big plus. At the Euros in Germany, for instance, the ability to quickly select all relevant audio channels made my mixing job a lot easier.
I’m also able to conveniently handle 5.1(.4) productions and other delivery formats in parallel. Then there is the sound quality: the dynamic range of Lawo’s preamps is just perfect for sporting events where levels tend to span the gamut. Basically, you can throw any signal at the A__stage I/O boxes and still get a natural and crisp sound from a mix of on-camera, referee, field-of-play and ambience microphones.
The mc²’s physical and graphic user interfaces have a clear structure, with consistent color-coding for easy identification, and a layout that plays to my muscle memory. I can access the functions fast and intuitively, so that I can tweak settings without even looking at the controls in question.
I often use the Remote Desktop functionality to tweak parameters in a software application running on a remote computer right from the mc²’s screens. Whether management software, playout apps or external effects—remaining in the sweet spot without the need for a KVM system allows me to stay in the flow.
Did I already mention that I just love the fact that I can assign any parameter to a fader? This is a major timesaver for my typical workflow. During the championship match in Germany on July 14th, for instance, I used this functionality to manage the stereo width and digital gain of my ambience bus as well as to fine-tune the delay settings. These General-Purpose Controllers (GPC) can also be linked to the Audio-follows-Video function in order to tweak a resource’s settings by means of a video event.
So, there you have it: I am extremely fond of Lawo’s mc² consoles, the A__UHD Core and the A__stage I/O boxes. They allow me to focus on the mix and deliver world-class audio content. l
Mathieu Lantin is an audio engineer at the Belgium-based service provider WNM. After his studies to become an audio engineer, he briefly worked in a recording studio and then as a sound supervisor for TV productions, specializing in sports coverage. He was first asked to work on a European soccer championship in 2008 and has been working on subsequent installments ever since. He is also a regular at other world championships (rugby, swimming), global athletics jamborees, cycling, and much more. Whenever he has a moment, Mathieu works as a systems integrator and consultant for several TV stations. He can be reached at mathieu@wnm.be
More information is available at https://lawo.com.
Solid State Logic will debut its new S400 console and a variety of enhancements for its System T broadcast audio production platform, delivered in the V4.1 software update, at IBC2024. Featuring flagship control for live-to-air broadcast applications in a compact and cost-effective format, the new S400 is well-suited for OB, event-space and music applications, or anywhere premium control features are required but space is a consideration.
The S400 console offers the same high-quality fader experience as the S500, featuring premium 100mm touch-sense faders and a dedicated OLED display for every fader. Level metering and status LEDs covering dynamics, auto mix and external control are also present by every fader to further enhance visual feedback. The V4.1 software update is available across all System T consoles and DSP engines (including virtual).
https://solidstatelogic.com
The Axient Digital ADX3 Plug-On Transmitter, the latest addition to Shure’s Axient Digital Wireless System, is ideal for audio professionals seeking an industry-standard transmitter that enables real-time remote control of key parameters.
ADX3 transforms any XLR-terminated microphone into an advanced, portable Axient Digital ADX Series wireless microphone. It delivers the same transparent audio quality, RF performance and wide tuning of the AD3 Axient Digital Plug-On Wireless Transmitter, with the addition of Shure’s proprietary ShowLink technology. ShowLink enables comprehensive, real-time control of all transmitter parameters, including interference avoidance, over a robust 2.4GHz diversity wireless connection. ADX3 includes the transmitter, two SB900 batteries, USB-A to USB-C cable, belt clip/pouch, and zippered bag.
www.shure.com
Pliant’s CRP-C12 is a newly designed compact Radio Pack created specifically for use with any CrewCom or CrewCom CB2 system. It’s one of the industry’s smallest, fully featured, wireless professional intercom belt packs, measuring approximately 3.5 in. x 3.5 in (9 cm x 9 cm), while weighing just under 9 oz./ 255 g. The compact, fully featured CRP-C12 includes support for up to two
assignable conferences with a simultaneous dual listen option. It also includes two assignable function buttons that can be configured via a user profile for functions like stage announce, call, or trigger of contact closures.
The CRP-C12 works seamlessly with CrewCom, Pliant’s flagship expandable, network-based wireless intercom system, as well as the CrewCom CB2, a full-duplex, installation-friendly, and cost-effective wireless intercom system.
https://plianttechnologies.com
Wohler’s new iAM1-12G is a well-featured, easy-to-use, competitively priced 16-channel IP-ready audio monitor that comes standard with 12G-SDI and analog inputs, loudness monitoring and phase status. Developed from customer feedback, it shares many of the same features as the company’s iAM-AUDIO-1 PLUS. Other signal formats and features can be licensed when needed, either initially or after purchase, including Dolby, AoIP, Toslink and 8-channel analog.
iAM1-12G is designed to be easy to operate, providing fast access to meters, menus and presets. Options are available for additional signals, including AES3, MADI64, Dante, Ravenna, SMPTE 2110 and SMPTE 2022-7. All iAM Series monitors contain an onboard web server, so multiple units on the same network can be updated, monitored and controlled via a browser. All iSeries and eSeries monitors are also compatible with Wohler’s new MAVRIC Remote Monitoring Solution. www.wohler.com
Wheatstone’s Virtual Strata provides direct connectivity to major production automation systems for a fully integrated user experience between automation and console functions. It has all the mix-minus, automixing, control and routing features needed for managing audio productions independently.
AES67/SMPTE 2110 compatible and WheatNet IP audio networked, this virtual mixer has all the navigation and functions of a professional mixing desk, including familiar buttons, knobs and multitouch navigation and menuing for adjusting EQ curves, filtering and other custom settings. Virtual Strata can be used independently or as a companion to the Strata fixed console surface with mirrored faders and controls. www.wheatstone.com
NUGEN’s Loudness Toolkit—a NLE/DAW solution for loudness-compliant delivery—provides everything needed to produce reliable, loudness-normalized audio that seamlessly integrates into production workflows from stereo up to 7.1.4 surround. Loudness Toolkit comprises three NUGEN plug-ins, which work together to deliver the measurement and correction, real-time metering and True Peak limiting that meets all major loudness specifications.
The toolkit includes NUGEN’s VisLM loudness metering solution; ISL True Peak limiter; and LM-Correct automatic quick-fix compliance tool. For primary loudness parameters, VisLM combines an instant overview with detailed historical information, as well as loudness logging and timecode functions. ISL uses standardized True Peak algorithms and related standards, while LM-Correct provides immediate, hassle-free correction. https://nugenaudio.com
Dante Connect is cloud-based software that helps broadcasters centralize audio production in the cloud. By leveraging the extensive network of Dante-enabled devices installed in stadiums, entertainment venues, and studios worldwide, Dante Connect enables broadcasters to overcome barriers and seamlessly centralize audio production in the cloud.
Dante is the de facto networking standard for professional AV, and with Dante Connect, broadcasters can put more devices to work to maximize productions, on or off-site. With Dante Connect, broadcasters can rethink their production approach by leveraging Dante audio devices anywhere in the world, creating cloud-based networks of these devices, and managing from wherever their production staff is based.
www.audinate.com
BATON 9 Series automatically detects and corrects loudness variations in audio and video, ensuring compliance with CALM, EBU and other major broadcast standards.
BATON achieves this by analyzing loudness along with video/audio quality, performing comprehensive checks on media files upon ingest that cover loudness compliance and video/audio defects, and offering audio channel mapping. Before sending assets to playout servers, BATON ensures their integrity and that timecodes are in sync. If a file’s loudness falls outside of the specified standards, BATON Content Corrector (BCC) seamlessly rectifies this. BCC also lets users play the corrected file in its high-quality BATON media player for precise evaluation. www.interrasystems.com
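For context, the same check-and-correct idea can be sketched with open-source tools. The snippet below uses FFmpeg’s EBU R128 loudnorm filter (not Interra’s technology) with a -24 LUFS / -2 dBTP target typical of CALM Act delivery; the file names are placeholders.

```python
import subprocess

# Generic loudness-correction sketch (not BATON's API): normalize a file's
# audio to -24 LUFS integrated / -2 dBTP with FFmpeg's loudnorm filter,
# leaving the video untouched. "in.mov"/"out.mov" are placeholder names.
subprocess.run([
    "ffmpeg", "-i", "in.mov",
    "-af", "loudnorm=I=-24:LRA=7:TP=-2",
    "-c:v", "copy",
    "out.mov",
], check=True)
```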
From its durable construction and ease of use to its leading acoustical properties, the DPA 2017 Shotgun Microphone can capture the energy of any broadcast, AV or event application. The 2017 is a unique, versatile, all-purpose, easy-to-set-up mic ideal for repeated use in harsh indoor or outdoor environments. Measuring just 7.24 inches in length, it is more compact than many popular solutions, yet still offers impressive technological features, including extreme durability, high directivity and clarity.
The 2017 offers flexibility and the ability to withstand difficult environments or extreme weather in an array of applications. The microphone performs in humid conditions and direct rain showers, as well as dry, arid environments, and has been tested for use in settings with temperatures from -40°F to 113°F. www.dpamicrophones.com
Part of the Evolution Wireless Digital family—which includes EW-D and EW-DP—EW-DX wireless microphone systems simplify professional workflows by utilizing refined technologies and decades of research to deliver a digital UHF system that can be scaled with ease. With its advanced feature set and hardware options, live productions benefit from a network-ready system that is designed to integrate seamlessly.
The system quickly assesses a user’s environment and uses equidistant channel spacing to deploy frequencies, while tools like Sennheiser’s Control Cockpit, Wireless Systems Manager, and Smart Assist App allow users to monitor their systems remotely. Other features include in-device charging, 12-hour battery operation, switching bandwidth up to 88 MHz, and more. www.sennheiser.com/en-de
By Alex Blanding VP, Engineering & Technology SNY
NEW YORK—SNY is a multiplatform regional sports network and the television home of the New York Mets. Our productions include pre- and post-game shows, game highlights, sports talk shows, radio simulcasts, plus additional Mets entertainment programming airing throughout the year on our broadcast channel, streaming and our social media channels.
Audio is such a key part of making sports video exciting and since we converted to digital wireless microphone technology—specifically the Sony DWX series—our productions have become easier to set up and manage.
That flexibility is a main contributor to helping us produce such a diverse range of content, and it’s also a huge shift from using wired microphones and even the early days of analog wireless. Since we moved to Sony technology—first analog and then digital wireless—we haven’t encountered any issues. Since 2016, we’ve been all Sony digital wireless. It’s the way of the future for us, especially considering our all-digital workflow, from our newer consoles to Dante interfaces with our overall communications system.
SONY DWX WIRELESS
SNY has a full Sony audio solution from the mic head all the way through, including a DWR-R03D dual-channel receiver. We also have 16 Sony DWX wireless beltpacks (DWT-B03R), and all are in use seven days a week. The
number of packs used on a typical show is four, but we’ve had up to eight going at the same time and if both our studios are in use, it could be all 16 at once. In addition, we use Sony’s ECM-90 and ECM-77 lavalier mics.
We like the sound quality and the noise rejection, plus the elements are really perceptive to everything we do. I’ve had talent raise concerns about issues like scratching or hair falling onto the mics, which is always scary for live audio. But these Sony mics avoid those problems by being really good at rejecting noise, according to SNY’s lead audio engineer Adam Young.
The packs are the smallest I’ve seen compared to other top microphone systems. The size alone has made our work easier, and the talent certainly appreciates the added mobility and flexibility. Often, talent might wear clothing that makes it harder to attach a bigger microphone pack and keep it on if they are moving around the
studio quite a bit. The small Sony micro packs make it easy for our A-2s to clip on and not have to worry about a big burden of weight on the talent’s back.
Show set-up and management is easier and faster, especially with features like Cross Remote and the Wireless Studio software.
Most other wireless microphone systems are unidirectional—the beltpack transmits audio, battery level and that’s about it.
With Sony you can actually send commands to the beltpack that the talent is wearing and change the microphone gain. You can go from low or medium to high power and do a clear channel scan without going to the beltpack manually—it’s all done remotely.
If we ever notice we are having frequency issues—which is extremely rare—the Sony transmitters make it easy to do a clear scan and pick up a clear frequency immediately. The system is
user-friendly in the sense that if such a problem were to arise, it’s an easy fix—you literally just click a button, hold it and everything pairs really fast.
With this system, we can take comfort knowing we are getting a high-quality signal and good battery life just by looking at the Wireless Studio software. It gives us “an extra hand” to deal with the usual challenges that always pop up.
We’re doing live television, and nothing goes wrong until you’re absolutely live. But this system gives us peace of mind in every sense. l
Alex Blanding has worked at SportsNet NY for more than 15 years, responsible for all aspects of the network’s sports programming, from broadcast television design and integration to technical operation and management. He can be reached at ablanding@sny.tv
More information is available at https://pro.sony.
By David Kearnes Senior Engineer Winnercomm
TULSA, Okla.–Winnercomm, a well-respected production company, and its sister company Major League Fishing, the world’s premier tournament-fishing organization, recently collaborated to unveil a state-of-the-art in-house studio and control room.
Since 1981, Winnercomm has been at the forefront of providing content across all media platforms. It has a distinguished track record of delivering top-tier content to a diverse range of clients, earning 16 Emmy Awards as a result. Winnercomm produces Major League Fishing’s TV shows, which air on a variety of platforms including the Outdoor Channel, World Fishing Network, and Discovery Channel.
As a senior engineer at Winnercomm—where we previously leased a studio and relied on a remote truck as a control room—I knew that our increased production needs required us to build our own production and editing spaces.
We now have a dedicated studio and control room at Winnercomm’s facilities in Tulsa, Oklahoma. This transition marks a significant milestone, bringing Major League Fishing’s production operations back inhouse.
Our new facility includes a
selection of audio and video equipment with extensive Avid editing capabilities. This includes two complete Pro Tools suites for audio sweetening, VOs, etc. Additionally, we have several Dante-compatible solutions from Studio Technologies that deliver pristine digital audio, including Model 372A beltpacks, Model 203 announcer boxes, and various intercom stations such as the Model 348, Model 5304, and two Model 5312 units. We also have two Model 5422A Dante Intercom Audio Engines as the core of our comms system.
I have had a lot of great experiences with Studio Technologies gear in the past so when it came time to equip our studio and control room, they were at the top of my list. The company’s digital audio solutions, particularly those
with Dante capabilities, offered solid reliability and were easy to integrate.
These features made the Studio Technologies solutions a must-have on my equipment list. The products we’ve deployed are delivering pristine digital audio, with little interference or distortion. This level of quality is essential for our live broadcasts, which now number over 80 events per year, with audiences across multiple networks and streaming platforms.
The Studio Technologies Model 372A Intercom Beltpack is a highly compact user-worn device that combines a single channel of talk audio and two channels of comms or program listen. The Model 203 Announcer’s Console, which supports Dante audio-over-Ethernet (AoE), offers simple deployment and the audio resources needed to directly support a complete professional on-air position.
The Model 348 Intercom Station is a tabletop unit that provides eight independent talk and listen channels designed to serve as an audio control center for production and support personnel.
The Model 5304 Intercom Station is also a desktop unit; it provides four independent talk and listen channels and is compatible with Dante audio-over-Ethernet networks. The Model 5312 Intercom Station is a rackmounted unit designed to serve as an audio control center for production and support personnel, and the Model 5422A Dante Intercom Audio Engine is a high-performance, cost-effective solution for creating party-line (PL) intercom circuits when used with Dante-compatible products. l
David Kearnes is a senior engineer at Winnercomm. He has 43 years of experience in broadcast, studio, postproduction and mobile trucks. At Winnercomm, he manages the studio during production and coordinates integrating the remote cameras into the studio. He also archives content for postproduction. He held the position of senior editor at Winnercomm for 17 years and has been in the senior engineer role for the past six years. He can be reached at david.kearnes@winnercomm.com.
More information can be found at https://studio-tech.com/.
Gen-IC Virtual Intercom is a Clear-Com cloud-hosted subscription service enabling 24-channel partyline deployments worldwide over IP without dedicated intercom hardware or network requirements. The Agent-IC and Station-IC Virtual Clients provide mobile device or PC users with multiple intercom channels, each with dedicated talk, listen, and level control as well as visual call signals and device notifications. Gen-IC is scalable with low latency and operational robustness.
Gen-IC can optionally be integrated into on-prem intercom and audio systems or peripherals using LQ interfaces. It can be used as a virtual intercom, hybrid intercom or bridging intercom (LQ interfacing to hardware intercom systems at multiple geographically dispersed locations supported by Gen-IC allows the systems to be bridged without dedicated networking). www.clearcom.com
RTS NEO Intercom Management Suite is designed for faster and more flexible configuration of OMNEO-based RTS intercom systems. It supports ADAM, ADAM-M, and ODIN matrices, as well as OMS from the RTS Digital Partyline family, and is an all-encompassing interface that puts every aspect of intercom system orchestration at the user’s fingertips.
It works with the existing AZedit software platform for RTS matrix systems, enhancing the AZedit feature set. A range of tools allows users to optimize their communications workflows for next-level efficiency and effectiveness—no matter how complex the system.
https://rtsintercoms.com
Telos Infinity Virtual Commentator Panel (VCP) is a unique extension to the Infinity VIP product family that enables remote contributors to access a commentary or announcer station from anywhere in the world through the VIP intercom platform, using an HTML5 browser on PCs, tablets, or smartphones, or via the VIP Phone app for iPhone and Android smartphones.
Separate on-air and intercom mic channels are provided, with automatic on-air muting when the comms channels are in use. Programmable inputs with rotary gain controls for IFB, program, mix-minus, and aux sources allow users to create a customized monitor mix.
https://telosalliance.com
TP9116 and TP9416, AEQ’s new 9000 Series rackmount and desktop intercom user panels, are based on full-color displays and 4-way levers with two functions per lever, as well as individual level adjustment on the key itself.
The TP9000 user panels are based on “Talk” and “Listen” functions and individual volume control for each communication point, via a 4-way lever-type key. They offer 16 crosspoint keys and four pages, manage up to 64 destinations or groups with the EP9116 expansion rackmount panel, and are designed for Conexia and CrossNET systems. IP connectivity handles audio in Dante or AES67 format as well as low-bit-rate voice.
www.aeq.eu
By Michael McDermott VP of Live Events 4Wall
LAS VEGAS—As vice president of Live Events at 4Wall Entertainment, I’ve seen firsthand the evolution of intercom systems in the broadcasting and streaming industry. At 4Wall, we specialize in providing cutting-edge solutions for live events, broadcast productions, and streaming services.
Our expertise lies in creating seamless, high-quality audiovisual experiences for a wide range of clients, from major TV networks to online content creators. Integral to our operations are the intercom systems that ensure clear and reliable communication among our team members.
One intercom product we rely on heavily is the Bolero wireless intercom by Riedel Communications. We chose Bolero based on its exceptional audio quality, flexibility, and robust performance in demanding environments. Although we considered several options, Bolero stood out due to its superior technology and practical integration capabilities.
We utilize Bolero for a variety of purposes, including live event coordination, onset communication during broadcasts, and managing streaming productions. The ability to communicate seamlessly across different departments ensures that our productions run smoothly, without the hiccups that can occur with inferior systems.
One of the primary challenges Bolero addresses is the need for a reliable, high-quality wireless communication system that can perform well even in crowded RF environments. In live events and broadcast settings, interference can be a significant issue, but Bolero’s advanced RF robustness and flexibility in integration mitigate this problem effectively. This reliability allows our team to focus on delivering top-notch content without worrying about communication breakdowns.
Deploying Bolero is straightforward and efficient. Its decentralized architecture supports flexible deployments, accommodating various production sizes and setups, from small studio broadcasts to large live events. The use of 4-wire ties via NSA-02s is particularly useful for connecting Bolero with existing wired intercom systems, providing a unified and streamlined communication network. This was crucial as technicians had to monitor multiple broadcast program streams during rehearsals and the
live event.
One operator, Carlos Ostero, had never used Bolero prior to a recent event. Despite his lack of experience with the system, he found it incredibly easy to program and set up. The intuitive user interface and straightforward configuration enabled Carlos to get the system running without any issues—a testament to Bolero’s user-friendly design.
Our favorite Bolero features include its intuitive user interface and the ease of integrating it with other systems. The beltpacks are lightweight and user-friendly, and have plenty of battery life, which is critical for long productions. Additionally, the ability to configure the system via a web browser provides us with the flexibility to make adjustments on the fly, ensuring optimal performance.
Unique features of Bolero that have impressed us include its patented Advanced DECT receiver technology—designed to reduce sensitivity to multipath
reflections—and its ability to operate in standalone mode. This allows for a highly flexible and scalable deployment, making it an ideal solution for various production environments.
One feature that pleasantly surprised us is Bolero’s NFC beltpack registration. This allows for quick and easy registration of beltpacks, which is especially useful when managing a large number of units. This feature not only saves time but also ensures we get up and running quickly, even in fast-paced environments.
Having Bolero as a tool in our arsenal is significant because it elevates the quality of our productions. Clear and reliable communication is the backbone of any successful broadcast or live event, and Bolero’s performance and reliability, combined with its seamless integration capabilities, give us the confidence to push the boundaries of what we can achieve, knowing that our communication system will support us every step of the way.
Bolero has proven to be an invaluable asset to 4Wall Entertainment—its advanced features, reliability, and ease of use make it the ideal choice for our diverse range of projects. As we continue to innovate and expand our services, we know that Bolero will play a crucial role in our success, enabling us to deliver exceptional audio-visual experiences to our clients and audiences. l
Michael McDermott, VP of Live Events, 4Wall, has been with 4Wall for more than three years and in the M&E industry for almost 20 years. He can be reached at mmcdermott@4wall.com and 702-263-3858. For more information visit https://www.riedel.net/en/.