Welcome to the December 2024 issue of TV Tech.
Make quick, precise audio adjustments from anywhere, any time. Complement your news automation system with this virtual mixer.
Adjust the occasional audio level with the Virtual Strata mixer as an extension of your production automation system. Mix feeds and manage the entire audio production with all the mix-minus, automixing, control and routing features you need from your touchscreen monitor or tablet. Fits in any broadcast environment as an AES67/SMPTE 2110-compatible, WheatNet-IP audio networked mixer console surface.
Connect with your Wheatstone Sales Engineer Call +1 252-638-7000 or email sales@wheatstone.com
For years, broadcasters have spent many sleepless nights worrying about how streaming companies are threatening the industry’s last bastion of exclusivity and attention: live sports. While last month’s debacle with Netflix’s glitch-filled “broadcast” of the Jake Paul-Mike Tyson fight won’t allay those fears, the incident does remind us of the perils that go with the hubris that characterizes so many streaming services today.
Netflix spent an estimated $15 million over an 8-month-long campaign to market the fight, which was promoted as a generational clash by many and as the “fight of the century” by others. Netflix wanted viewers and expected it could handle the traffic because, after all, it’s Netflix. But, as we all learned afterward, the streamer wasn’t ready, resulting in a glitch-filled broadcast that blew up social media and overshadowed what most described as a mediocre fight at best. The event attracted an estimated 65 million viewers, far eclipsing the largest audience ever for a live-streamed event. The final number was more than three times the prior largest live stream hosted by Netflix, a Chris Rock special in early 2023.
Netflix acknowledged the problem but nevertheless thought it went well considering the circumstances.
“This unprecedented scale created many technical challenges, which the launch team tackled brilliantly by prioritizing stability of the stream for the majority of viewers,” a Netflix exec told company employees in a memo after the fight. “I’m sure many of you have seen the chatter in the press and on social media about the quality issues. We don’t want to dismiss the poor experience of some members, and know we have room for improvement, but still consider this event a huge success.”
NAB was more than happy to remind us that when it comes to televising live events, broadcasting is still the best option. “[The event] was a good reminder that when it comes to live sports, no other medium can match broadcast television’s high-quality, reliable viewing experience,” the association noted in a blog. “No costly subscriptions. No worrying about your internet speed. Just the excitement of the game, delivered in high-definition to your TV screen.”
For good or bad, viewers are not flocking to broadcast TV in droves, and professional sports, which dominates television ratings, is going where the viewers are—online. Streaming companies such as Amazon and Apple TV+ have demonstrated that they can handle live sports, albeit not at the level of the audience that tuned in to the Netflix fight.
Netflix’s next test comes later this month when it broadcasts the highly anticipated NFL matchup between the Houston Texans and Baltimore Ravens on Christmas Day—exclusively. Add in a Beyoncé halftime show and you have the ingredients of an even bigger disaster if the streamer can’t resolve its issues in time.
Any student of broadcast TV history will tell you about the important role live boxing played in putting new TV sets into American homes back in the 1950s. So it’s the height of irony that the sport—which has declined in popularity over the decades—would be the one thing that would come back to remind the streaming industry of the importance of being prepared for anything when televising live events.
Tom Butts Content Director tom.butts@futurenet.com
Vol. 42 No. 12 | December 2024
FOLLOW US www.tvtech.com twitter.com/tvtechnology
CONTENT
Content Director
Tom Butts, tom.butts@futurenet.com
Content Managers
Michael Demenchuk, michael.demenchuk@futurenet.com
Terry Scutt, terry.scutt@futurenet.com
Senior Content Producer
George Winslow, george.winslow@futurenet.com
Contributors: Gary Arlen, James Careless, David Cohen, Fred Dawson, Kevin Hilton, Craig Johnston, and Mark R. Smith
Production Managers: Heather Tatrow, Nicole Schilling
Art Directors: Cliff Newman, Steven Mumby
ADVERTISING SALES
Managing Vice President of Sales, B2B Tech Adam Goldstein, adam.goldstein@futurenet.com
Publisher, TV Tech/TVBEurope Joe Palombo, joseph.palombo@futurenet.com
SUBSCRIBER CUSTOMER SERVICE
To subscribe, change your address, or check on your current account status, go to www.tvtechnology.com and click on About Us, email futureplc@computerfulfillment.com, call 888-266-5828, or write P.O. Box 8692, Lowell, MA 01853.
LICENSING/REPRINTS/PERMISSIONS
TV Technology is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing Rachel Shaw licensing@futurenet.com
MANAGEMENT
SVP, MD, B2B Amanda Darman-Allen
VP, Global Head of Content, B2B Carmel King
MD, Content, Broadcast Tech Paul McLane
VP, Head of US Sales, B2B Tom Sikes
VP, Global Head of Strategy & Ops, B2B Allison Markert
VP, Product & Marketing, B2B Andrew Buchholz
Head of Production US & UK Mark Constance
Head of Design, B2B Nicole Cobban
FUTURE US, INC. 130 West 42nd Street, 7th Floor, New York, NY 10036
No part of this magazine may be used or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein.
If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend or adapt all submissions.
Please Recycle. We are committed to only using magazine paper which is derived from responsibly managed, certified forestry and chlorine-free manufacture. The paper in this magazine was sourced and produced from sustainable managed forests, conforming to strict environmental and socioeconomic standards.
TV Technology (ISSN: 0887-1701) is published monthly by Future US, Inc., 130 West 42nd Street, 7th Floor, New York, NY 10036-8002. Phone: 978-667-0352. Periodicals postage paid at New York, NY and additional mailing offices. POSTMASTER: Send address changes to TV Technology, P.O. Box 848, Lowell, MA 01853.
Rise, the advocacy group for gender diversity in the media technology sector, has named Megan Mauck, senior vice president of media operations at NBCUniversal, its 2024 Woman of the Year. Mauck received her award, sponsored by Zixi, during the Rise Awards 2024 ceremony last month at Troxy, London.
Asked why Mauck was named Woman of the Year for 2024, Mark Harrison, Rise advisory board member and founder and chief creative officer of the Digital Production Partnership (DPP) said, “During a turbulent time, when so many senior technology and operational leaders have found it difficult to convey clear, let alone inspirational, messages about innovation and strategic business initiatives, she has stood out as a leader for our times.”
Mauck began her career as an engineering co-op at Toyota before joining General Electric’s Operational Management Leadership Program in 2006, where she launched her career at NBCUniversal, then owned by GE. Over the years, she has progressed through a series of increasingly senior operational and project management roles, ultimately advancing to her current position as a senior executive.
Mauck has a degree in mechanical engineering from Purdue University and is based in Los Angeles.
In her current position, Mauck leads Comcast-owned NBCUniversal’s 24/7 media operations across Los Angeles, Denver and New York, overseeing the ingestion, preparation, packaging and distribution of its domestic and international content.
Visitors to CBS News and Stations-owned KPIX-KPYX in San Francisco will find a bustling local news operation that produces 60 hours of local news each week and has recently abandoned a longtime centerpiece of traditional local news production—the TV news set.
Those sets disappeared in September, about a year after the stations deployed an innovative augmented reality/virtual reality platform for its news operations. The new AR/VR system had been so successful, first with weather and then across all of its coverage, that “One day, we asked ourselves why we needed hard sets?” CBS Bay Area
President and General Manager Scott Warren recalled. “So, we just walked in and started busting them out. Now it is all AR/VR. We have no hard sets. There’s no desk, there’s no monitors. And every single show was like that.”
“I believe we are the first station to do that,” he added.
The technology has been applied to a wide range of subjects, from immersing viewers in San Francisco’s famous fog, to sports, politics, entertainment and election coverage.
The decision to go all in on AR/VR technologies was part of a wider effort by the station group to reinvent the way local news stories are presented and distributed to audiences, one that has led to the creation of a new hyperlocal news operation in Detroit, a Local News Innovation Lab in Texas, a dramatic expansion of the local news offered by CBS-owned stations and other initiatives.
In a notable expansion of the way viewers can watch local PBS stations, Amazon is launching more than 150 public TV stations and the PBS Kids channel ad-free over the coming months as a Prime Video FAST offering.
This marks the first time this programming will be available for free on a major streaming service to customers across the country.
Additionally, PBS Distribution (PBSd), a leading distributor of public media content around the world, will offer two new FAST channels, PBS Drama and PBS Documentaries. These channels were made available exclusively for a limited time on Prime Video starting Nov. 26. PBSd will also offer a “pop-up” FAST channel featuring a rotating selection of classic PBS shows, starting with “Reading Rainbow.”
This launch marks a significant milestone for PBS in delivering a curated lineup of programming from member stations to a broader FAST audience, particularly at a time when two in three U.S. viewers use ad-supported streaming platforms. FAST Channels on Amazon are accessible through Prime Video and Fire TV. Non-Prime users will have free access to PBS programming under the “Watch for Free” section within Prime Video, PBS and Amazon said.
”We’re delighted to bring PBS’s trusted, high-quality programming to our FAST channels,” Prime Video Marketplace Head Ryan Pirozzi said. “We have put together one of the most exciting FAST offerings today for Prime Video customers, driven by a highly personalized experience, and a leading selection of channels featuring fan-favorite series, movies, news, sports and more.
We know our customers will be excited to discover that their beloved PBS member stations and two new PBS FAST Channels are now part of our growing offering.”
PBS, PBS Kids and local station content is also available on PBS.org and pbskids.org, as well as the PBS app and the PBS KIDS Video app available on iOS, Android, Roku, Apple TV, Amazon Fire TV, Chromecast and smart TVs from Samsung, Xumo and Vizio. Members of local PBS stations may also view an extended library of programming via PBS Passport.
Time to dust off the turban, cue the pungi and climb out on a limb with my top 10 TV broadcast predictions for 2025.
I certainly don’t claim to have any psychic skill. Rather, these predictions come from my reporting over the past year and observations made along the way.
One caveat: If my No. 1 prediction turns out poorly for broadcasters, all bets are off on at least six others. Time will tell. Enjoy.
1. It’s make-or-break time for ATSC 3.0. A court decision in the Constellation Designs-LG Electronics patent case is expected in 2025. A favorable decision for LG likely means the company’s re-entrance into the NextGen TV market. A decision for Constellation Designs could mean all NextGen TV makers pull the plug, and 3.0 becomes a footnote in the history of U.S. television.

2. Cord-cutting continues its march. MVPDs have lost considerable audience since the advent of streaming alternatives. Comparitech puts the annual subscriber decline at 6 million between 2019 and 2022. Since February 2023, Nielsen says, more than one-third of U.S. TV usage has been based on streaming. This trend will continue in 2025 and beyond. A study by Digital TV Research projects fewer than 60 million pay-TV households in the U.S. and Canada by 2029.

3. The number of OTA households will inch higher as the benefits of hybrid over-the-air and over-the-top delivery come into focus. In February 2024, Nielsen pegged the number of U.S. TV households with at least one over-the-air set in use at 18%. With 70% of U.S. TV homes owning one or more smart TVs—14 million of which include ATSC 3.0 tuners, and more on the way—an increasing number of viewers are experiencing 3.0-based hybrid OTA-OTT features. One, NBCU’s Start Over, gives viewers a TV experience more akin to streaming, with the ability to start a show from the beginning. Another is Broadcast-Enabled Streaming TV (BEST), aka Broadcast IP. Some commercial TV viewers and public broadcast fans, too, are already enjoying access to a fuller menu of 3.0 channels in their markets thanks to the hybrid broadcast-streaming technology.

4. Testing of ATSC 3.0 and 5G Broadcast concurrently transmitted from the same transmitter. At least one transmitter vendor has told broadcasters it can be done. The proof, of course, will be in the tasting of the pudding, and that vendor will step into the kitchen to conduct a test.

5. The ATSC 1.0 logjam will break. The November election will change the makeup of the commission. The FCC will move forward with a workable plan to expedite the sunset of 1.0 and stomp down on the accelerator for 3.0. Broadcasters may find the solution bittersweet if it involves another spectrum auction.

6. ATSC 3.0-based Broadcast Positioning System gets a chance to prove itself. ATSC 3.0-based BPS offers the nation an affordable, resilient complement/backup to GPS for critical timing and positioning applications. The change of national leadership opens a path to BPS trials.

7. Broadcasters earn revenue from 3.0 datacasting. The efforts of Sinclair and OTA Wireless (the joint venture of Nexstar Media Group and E.W. Scripps) to create a new revenue stream based on datacasting begin to pay dividends. Other broadcasters take note and look for ways to cash in.

8. Public, private and on-prem clouds continue to transform broadcast and production infrastructure and workflows. From live production to playout, momentum will continue to grow for cloud-based workflows as broadcasters look for greater efficiency and savings.

9. Politicians consider shaking up their media strategies for 2026. Some politicians and consultants mulling over the results of the 2024 campaign will look to long-form podcasts to replicate the success of President-elect Donald Trump. Constrained by the equal-time rule, broadcasters may decide they have to take a pass, or they may spin up new ad-hoc digital subchannels to make swallowing the equal-time pill a bit easier.

10. Internationally, ATSC 3.0 records more wins. The Brazilian government will adopt the recommendation of its SBTVD Forum for that nation’s TV 3.0 standard, choosing to incorporate major portions of ATSC 3.0, including the physical layer, into its broadcast future. The ATSC 3.0 standard will also see progress in India, Canada and Mexico.

Despite all the drama leading up to Election Day, Nielsen says viewing across 18 networks on election night was notably lower than in 2020, with more than 42 million viewers tuned in between 7 p.m. and 11 p.m. ET on Nov. 5.

That was 25% less than the 56.9 million viewers aged 2 and older who watched in primetime during the 2020 election, Nielsen said.

Overall, about 28.45 million homes tuned in during that time slot, with more than half of all viewers (24.35 million) aged 55 and older. There were 4.39 million viewers in the 18 to 34 demo and 11.4 million in the 35 to 54 demo.

Reported networks include ABC, CBS, Fox, NBC, The CW, Merit Street Media, Scripps News, Telemundo, Univision, CNBC, CNN, CNNe, Fox News Channel, Fox Business Network, MSNBC, Newsmax, NewsNation and PBS. Among the major network and news channels, Fox News was the most popular with 9.8 million viewers in the 8–11 p.m. (ET) time slot, followed by MSNBC with 6 million, ABC with 5.9 million, NBC at 5.5 million, CNN with 5.1 million, CBS at 3.6 million viewers and Fox Broadcasting at 2.0 million.
From rapidly evolving viewer tastes to a transformative election, change is now a constant for an already disrupted industry.
By Daniel Frankel
In the parlance of the best-selling sci-fi novel turned Netflix series “Three Body Problem,” you have to think long and hard to recall the last time the television business enjoyed a “stable era.”
Certainly, 2024 saw what seems like a never-ending chain of disruption continue: Hollywood conglomerates and streaming companies had to manage major content pipeline disruption from the previous year’s talent-guild strikes; streaming continued to eat away at viewership share for linear platforms, while big streaming companies gobbled up more of its lifeblood, live sports, from the content ecosystem; and now, with Republicans sweeping into federal power, comes a whole new mind-blowing array of M&A possibilities.
‘DNA LEVEL’ CHANGE
In fact, the transformation of the video business is now occurring on a DNA level, with many denizens beginning to ponder what “television” is in the first place. In June, for example, research company Maverix Insights & Strategy published data suggesting that “Gen Alpha” (roughly defined as those
born between 2013 and 2024) devotes 78% of its screen time to watching video distributed on social media.
The youngest generations of viewers never had a “cord” to cut. And rather than “Netflix and chill,” they’re choosing an entirely new form of video entertainment as their staple choice of consumption.
The movement toward social video is more broadly apparent in the monthly U.S. viewership share reports published by Nielsen. In September, the research company’s “The Gauge” showed that YouTube
consumed 10.6% of all living room viewing of “television” in American homes. That’s up from 9% in September 2023 and 8% in September 2022.
Overall, Nielsen suggests that streaming accounted for 41% of American TV consumption in September 2024, up from 37.5% in 2023 and 37% in 2022.
Across the board, Hollywood conglomerates and Silicon Valley technology companies reported major increases in streaming revenue. For example, the world’s biggest streaming company, Netflix, reported a 15%
increase in third-quarter revenue to $9.825 billion, while growing its global subscriber base by 14.4% year over year to 282.72 million. Netflix also reported an all-time high in free cash flow in Q3 of $2.194 billion.
Traditional media companies felt the lift, too. Paramount Global, for example, saw a 10% sales increase in Q3 from direct-to-consumer streaming platforms including Paramount+, with the subscription streaming platform’s user base increasing by 3.5 million to 72 million paying customers. Paramount said it’ll still lose money on streaming in 2024
but will start turning a profit next year.
Over at Warner Bros. Discovery, meanwhile, the DTC segment—which includes subscription video-on-demand platform Max—saw sales rise 8% in the third quarter to $2.634 billion. Max now has 110 million streaming subscribers worldwide.
And Disney saw its streaming operation deliver a $321 million profit from July–September vs. a $387 million loss for the same period of 2023.
At least for the traditional Hollywood entertainment giants, however, the green shoots of streaming were not enough in 2024 to offset the declines of linear revenue channels.
For one, the crippling Hollywood guild strikes of summer 2023 continued to take their toll on revenue generation well into 2024.
Despite its streaming success with not just Paramount+, but also free ad-supported streamer (FAST) Pluto TV, Paramount Global saw its third-quarter revenue drop 6% to $6.73 billion, a result it directly attributed
to the content pipeline slowdown caused by the strikes. Paramount management said the “hangover” of the strikes was felt across Paramount’s CBS Television Network, its cable channels, and its theatrical movie business.
Streaming just isn’t enough of a business yet to offset these linear declines, particularly in the moribund “basic cable networks” business. WBD reported roughly flat revenue of $7.662 billion from its DTC efforts through the first nine months of 2024. The company’s linear networks generated around twice as much revenue over that same period, $15.407 billion, but saw a decline of 5% YoY.
The future for these media networks isn’t necessarily looking brighter ahead. WBD’s TNT, for example, will lose its NBA rights starting in 2025, with Amazon muscling its way into the league’s national TV partnership group with a $1.8 billion-a-season deal… and muscling WBD out in the process.
The entry of Amazon, Apple and Netflix into the live sports business threatens to further erode media company linear networks in the
years to come.
For operators and bundlers of linear TV networks, and everyone connected to them, there was some good news in the latter part of 2024 suggesting that cord-cutting might actually be slowing down a little. The three biggest publicly traded pay TV operators (Charter Communications, Comcast and Dish Network) each reported declines in Q3 in the number of subscribers quitting their services.
And the pay TV industry is finally responding in a meaningful way to changes to video subscriber consumption habits that have been rapidly occurring for a decade.
Charter, for example, signed a series of pay TV carriage deals in 2024 with WBD, NBCUniversal, Paramount and other media companies that allow it to bundle these media conglomerates’ SVOD services into pay TV packages at no additional cost. In 2025, once all these services get integrated into Charter’s delivery system, the cable company said it will be delivering $80 a month of SVOD services to subscribers of its most popular video tier, the $125-a-month Spectrum TV Select+, at no additional cost.

Charter has received numerous shoutouts across the video business lauding it for creating a value proposition that could actually cause customers to reconnect the cord.
But traditional pay TV distribution remains a challenging business. In November, for instance, Alaska’s biggest cable operator, GCI, announced plans to shut down its pay TV operation in mid-2025. GCI will now offer customers the Xumo streaming platform jointly developed by Comcast and Charter.
Meanwhile, also faced with extinction, the two big satellite TV companies, DirecTV and Dish Network, announced a renewed effort in 2024 to join forces through M&A. Previous attempts to merge had been shut down by regulators, but this time, it was an ill-fated negotiation over how to handle
Dish’s significant debt load that ended the engagement. It’s widely believed, however, that if the internal talks hadn’t collapsed, regulators wouldn’t have stood in the way.
In fact, moving into 2025, regulators might not stand in the way of much of anything.
With former president Donald Trump and the Republican Party sweeping back into power following a convincing Nov. 5 election win, there’s a belief among media titans that the federal government will rapidly shift to a laissez-faire stance on media company dealmaking.
Among Trump’s early appointments was veteran federal regulator Brendan Carr to head up the FCC. Carr has publicly agreed with the incoming administration’s pledge to slash regulations as well as go after Big Tech.
Hoping the new regime modernizes old FCC rules on station ownership, Sinclair Broadcast Group CEO Chris Ripley told investors two days after the election, “We’re very excited about the upcoming regulatory environment. It does feel like a cloud over the industry is lifting here.”
Speaking to investors the same day (Nov. 7) during his company’s Q3 earnings call, WBD CEO David Zaslav remarked, “It’s too early to tell, but it may offer a pace of change and an opportunity for consolidation that may be quite different, that would provide a real positive and accelerated impact on this industry that’s needed.”
Of course, we are indeed “early” following the tumultuous aftermath of the 2024 national election. And Zaslav perhaps simply didn’t remember that former AT&T CEO Randall Stephenson had to fight off a U.S. Justice Department lawsuit in 2017 to clear the purchase of the company Zaslav now runs, then called “Time Warner Inc.,” because Trump, in his first presidential term, wanted a certain Time Warner Inc.-owned cable network he doesn’t particularly favor, CNN, cast out of the deal. ●
Whether SDI or IP, touch panels or physical controls, today’s switchers are meeting live production demands
By Kevin Hilton
Production switchers strike a fine balance between the technical—handling multiple incoming video sources—and the creative, mixing between those sources to create seamless, engaging and comprehensive coverage of a live event or studio production. High-end broadcasters have always put high demands on this key piece of equipment, but now other users have equally exacting requirements.
Keith Vidger, principal technical consultant for media and entertainment at Panasonic Connect, sums this situation up in four words: “More, fewer, higher, lower. These relate to more content being created by people who use fewer traditional outlets to distribute at higher production values with a lower cost.”
As Vidger observes, switchers are generally thought of as the province of those involved in live broadcasting, such as
broadcasters and call-letter stations.
“Those that derive their revenue from selling advertising [and] create cool content, including news and sports, are still there, but in addition there are corporate customers who used to produce fairly straightforward content to a group of people within their own domain, in other words employees, which are now producing shows with enormously high production values at a lower cost,” he says.
Vidger explains that new technologies—
“cutting-edge software and reliable hardware”—have been applied to switchers and reduced costs while achieving “if not the same results as we had before, maybe even better ones.”
This, he adds, has allowed users to achieve their high production goals for less money.
Another sector likely to benefit from the evolution of production switchers is the second-tier sports market. Satoshi Kanemura, president of FOR-A Americas, says another factor is the budget and staff cuts being made by some of the large U.S. broadcasters.
“Broadcasters are facing up to how they can do productions more cost-effectively [but without] gigantic 4 M/E [mix/effect] 100 input switchers, which were very useful for live events but are difficult to afford now,” he says.
This, Kanemura explains, has led to companies like FOR-A producing smaller-scale switchers featuring only 2 M/E and 40 inputs. In addition to this, he adds, the
younger generation of operators and technical directors now coming into the business are familiar with touch panels, iPads and iPhones, but unfamiliar with big traditional switcher panels.
“The trend is that the production-switcher surface stays the same but has more of a web GUI [graphical user interface] setup and control,” he says. “In the near future, with the number of younger people in broadcasting increasing, maybe web GUIs or touch panels will be the main interface. This may be seen especially in coverage of minor-league sports that prefer to go to cloud operations. It could be the next trend and although it’s not coming soon to the broadcast market, we are in a transition period.”
A key function of switchers is to help set the visual style for a TV station, observes Greg Huttie, vice president of production switchers at Grass Valley. “In general, broadcasters are interested in developing the look and feel [of their output],” he says. “People might tune in to a football match or a baseball game or auto racing from a particular network that has only eight cameras, or it could have 32 or, as with the Super Bowl, 96 to 100 cameras,” Huttie says. “But the broadcasters don’t want the viewer to realize how many cameras there are. They want something that looks the same [regardless of the number of cameras] and has an impact.
“The ‘wow factor’ is an important element in how manufacturers and their R&D departments develop switchers because there needs to be a consistency across the board, whether you’re on site or doing something as a remote or what kind of processing engine you’re using,” he adds.
While IP is now moving steadily into the broadcast market, Scott McQuaid, product manager for switchers at Sony Electronics Professional Solutions Americas, comments that SDI “still has a significant role to play” but points out that large broadcasters and networks are moving towards IP.

“Most anyone building a brand-new facility would probably tend towards IP, but in the smaller [station] market, they’re still going to go SDI because IP is very expensive to implement,” he says. “But whenever we speak to customers about a new switcher, whether they’re still SDI or not, they’re looking for an upgrade path to IP.”

McQuaid says more customers are looking for smaller switchers with hybrid processing and a mix of hardware and software.

“The frames are more compact, but we’re still able to handle multiple inputs,” he explains. “One IP connection at 100G can handle up to 32 ins and 32 outs, so your smaller panel/switcher hardware can deal with 64 ins and 64 outs in IP.

“The infrastructure will include extensive 100, 200, 400G switches and be connected to a broadcast controller, which is basically the old SDI router, handling the 2110 video, audio and metadata all at once,” McQuaid adds. “You can choose where you send those individual signals or send all three to the switcher or the audio board.”

IP and the idea of a small, software-based control surface has led to discussion of “virtual switchers,” although, as McQuaid observes, everyone has their own name for the concept.

“I call them hybrid cloud software switchers, which is a virtual switcher that lives up in, say, an AWS or Google cloud,” he says. “The software lives in the cloud and you access it from anywhere you want. You can also go with that software running on a COTS server that’s on-prem, it just depends on how you want to move signals and where you want to go with them.”
Nigel Spratling, vice president of switchers and servers at Ross Video, comments that to support remote production workflows, switchers now offer various operating modes to enable remote control and operation.
“Control panels, which were once tethered to local processing frames, can now be located remotely and connected via VPN [virtual private network] technology,” he says. “Additionally, comprehensive software-based control panels allow operators to manage productions from virtually anywhere.”
In these scenarios, FOR-A’s Kanemura says, the processing engine will be virtual but people can continue to use a hardware panel connected to a software-based switcher. “Eventually all operations will be on a touch-panel basis, but I don’t know how long that’s going to take,” he says.
Grass Valley demonstrated switching control in conjunction with the Apple Vision Pro mixed-reality headset at IBC 2024. Huttie confirms that some end-user testing is now taking place “in the realm of virtual monitoring” combined with a physical panel, such as the company’s Maverik MAV GUI modules.
“Broadcasters are open to anything that achieves their goal,” he says. “But for major events, it’s all about the content and what it looks like. Whether they can achieve that with a software-based switcher or a traditional engine, they flip the coin. People are trying a lot more things today if it’s what their production needs and it’s reliable.” ●
By David Cohen
From more than 200 feet off the ground outside of LA’s iconic Dodger Stadium, a camera gives viewers of Game 5 of the World Series a charming mood shot of the stadium’s exterior to accompany a break in the action. Sports fans have seen similar shots a million times over the years, right? Not like this.
On this night, the shot begins to track into the venue as the camera, attached to a remotely piloted drone, soars between the flagpoles atop the stadium’s main entrance and then rises into the sky, revealing a breathtaking view of the Los Angeles skyline amid the stunning glow of a lavender sunset.
Visuals like this, which bring a cinematic atmosphere to the production of major live sporting events, help broadcasters bring viewers at home closer to the action. And, increasingly, they’re doing it with shots that only drones can capture.
“The coverage that we get with a blimp and the helicopters is great, but typically speaking, they’re quite high in the air … 3,000 feet or higher,” said Michael Davies, senior vice president, Technical and Field
Operations for Fox Sports, which has covered some of the world’s most watched sporting events including the World Series, NFL football and NASCAR. “What that means is you get that very typical shot we’ve seen for 40 years, just circling around. What the drone does is add some dynamics to those kinds of shots. They can be especially useful flying in areas and in conditions that blimps, planes or helicopters can’t fly.”
Michael Davies, senior vice president of Technical and Field Operations for Fox Sports
Fox isn’t alone in their use of drones and other new toys, of course. The past decade has seen a meteoric rise in the use of small, lightweight cameras attached to
just about anything, delivering incredible access to sports viewers. Cameras are now routinely worn by on-field officials and players, implanted in playing surfaces (think golf’s “bunker cams”), attached to speeding cars, baseballs and the list goes on and on. But drones, with their unique ability to access tight spaces in a hurry—and get out of the way just as quickly—provide opportunities that dramatically change the nature of the coverage.
“We’ve been working with drones since 2015 and, at the time, we weren’t entirely sure what drones would bring to the equation,” Davies continued. “Over time, we began to say, quite honestly, ‘The more we can make our live sports coverage look like a video game, the better, right?’”
This unorthodox take on injecting creativity into the coverage of sports not only speaks to the storytelling power of new technologies but also the changing appetites of an audience that is inundated with information and seemingly always wants more.
Using drones to cover live sports beyond exteriors and wide cover shots, though, is very much tied to physics and reality, unlike video games. High-end cameras and lenses typically
used in big-time sports are extremely heavy. Getting such equipment up in the air on a drone isn’t easy and isn’t something you necessarily can buy off the shelf.
As a result, even the most experienced sports production teams rely on outside expertise when perfecting their use of these new technologies. In the case of Fox Sports,
they’ve been working with Beverly Hills Aerials, an Emmy Award-winning drone production company out of California, for more than 10 years.
“We spend quite a bit of time customizing our drones—it’s very much a ‘right tool for the
right job’ situation for us,” explained Michael Izquierdo, founder and chief pilot at Beverly Hills Aerials (BHA). “Once we understand what is available to us in terms of access, flight restrictions and other safety and security concerns, we can go to the production crew and say ‘Here’s what we can do.’”
From there, the BHA team gets to work outfitting the appropriate machine with the appropriate equipment, including, in some cases, using the exact same camera and coloring equipment used throughout the rest of the production. There are many considerations and calculations including overall weight of the rig, speed, number and type of cameras, where it will be flying and how long it needs to be in the air. Izquierdo estimates his company has accumulated north of 1,000 drones over the years because of the customizations required.
“We work with an outside firm like Beverly Hills Aerials for two main reasons,” Davies explained. “First, they’re incredibly creative and they’re the experts in the use of these tools and what can be accomplished. Sometimes, our crew will just ask them to ‘give us something’ and it’s up to the pilot and camera operator to serve up some things. Second, they’re just extremely good with the paperwork and they’ve got tremendous
relationships with some of the governing bodies for the airspace. These types of companies have become synonymous with following the rules and avoiding surprises and those are the two big things that are table stakes for drones.”
“We probably spend more time working out logistics, approvals and clearances prior to a production than we do at the actual event,” Izquierdo confirmed.
The use of drones to cover live game action, though, can be very tricky. For starters, government regulations around the use of drones can be very restrictive, especially in crowded venues. And for good reason. Just weeks ago, hundreds of Boston Celtics fans were gathered in Boston’s City Hall Plaza celebrating the opening of the 2024–25 NBA season at a watch party when a drone shooting aerial visuals of the event crashed into the crowd. Fortunately, none of the injuries sustained were life-threatening, but the incident highlights the need to proceed with caution.
When asked if this incident had an impact on the professional drone community, Izquierdo replied, “I hope it did. Too many people think drones equipped with cameras are just toys that anyone can put up in the air and start using.”
“I don’t know the details of what happened in Boston but I know preventing things like
highly skilled and some are champion flyers from the Drone Racing League.” Izquierdo also offered that his company has never had an incident like what occurred in Boston.
the technology into their coverage to lend a more cinematic look to their coverage. Dazzling shots of the coastline at Pebble Beach and a visual tour of one of the world’s most beautiful cities during the Olympic Games in Paris add to the splendor of the games being played and give the sports storytellers the chance to further engage their audience.
“When we first started, drone coverage was really considered a low-quality, lowcost thing,” Izquierdo said. “With our partners, we’ve been able to continue improving and add a level of creativity that was never possible before.”
So, what’s next for drones in sports? Fox has pushed the envelope with their coverage of the UFL, a football league partially owned by the network. During UFL games last summer, for example, drones hovered within the field of play, very close to the field, and were able to move alongside players to capture angles that simply aren’t available using any other technology. Having nearly unfettered access to the game certainly helps Fox in flexing their creative muscles in the UFL. What about for other leagues like the NFL?
“I think that, as you go up the line, usually the popularity of the sport and the speed at which new technology is implemented are inversely proportional. There’s a lot at stake,” Davies concluded. “Obviously, we don’t want to do crazy stuff that is going to impede the action or somehow get in the way. But now I think that
Programming languages, prompt engineering and data skills are an important part of the AI future
Media technologists need to consider how to adapt to a future where artificial intelligence (AI) plays a larger role. What skills will the average media engineer require in the future? This is an important question for technology management, as they must determine how to upskill their existing workforce while recruiting new team members aligned with that future.
There are various media-specific technology topics to consider for future development, such as high dynamic range (HDR), new codecs and new methods of signal distribution. Additionally, there are fundamental skills to develop in both infrastructure and software development.
Most importantly, the media engineer of the present has become a specific type of information technology (IT) professional. This has been evolving for more than 20 years. At this point it’s fair to say that a media engineer cannot be successful in their work without a professional level set of IT skills. While there are some cultural differences between traditional IT and media IT, media professionals are now expected to understand the basics of networking, storage and other core infrastructure.
In fact, it is critical that media engineers’ IT infrastructure skills be at an advanced level. Media engineers should possess advanced IP networking skills and significant experience designing and building complex networks, including SMPTE ST 2110 environments. Furthermore, it is important to have advanced expertise in on-prem and cloud storage and compute systems.
Knowing what it takes to at least build a small data center in your facility that can process high data rate live video should be the goal for anyone with more than a few years’ experience. Additionally, every media technology professional should possess at least one basic architect-level certification from a major cloud provider. Knowing how to relatively quickly spin up workflows in a cloud environment will be a skill in high demand going forward that is even more critical in AI workflows.
How does AI add to all the above basic
skills? To discuss this, it is important to make some reasonable predictions on the near- to mid-term future of AI in media. I think it is valuable to think of AI as essentially a new generation of software-based automation. It is my opinion that AI—as a concept—will fade into the background in media and will be embedded in tools that perform media functions (editing, visual effects, etc.) that we care about. In fact, this has been the case for more than a decade in the industry.
In fact, I expect the evolution of AI to follow a path already tread by software systems in general in our industry. At first, there were large monolithic systems that didn’t interchange very well with other companies’ products except in finished and flat content—you would classically need to buy into a given vendor’s entire suite of products for it all to work together. This is the stage AI is at now. Today, most AI image generation systems output only a final image in a way analogous to the “print to tape” workflows of the 1990s.
In the future I would expect standards for interchange and control to lead to the next evolutionary step, which is AI systems with more specialized capabilities that can pass components or data or instructions to another AI to handle. This is similar to the technology we had in the 2000s when we really began networking products from different vendors into single workflows.
Finally, we then will go through a set of phases like the “microservices” phase of the 2010s in which these AI models get very specialized and smart and reliable about doing a small scope of work and get orchestrated into agile workflows, perhaps even by AI-based orchestration engines.
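To make that last phase concrete, here is a toy sketch of what orchestrating small, specialized AI steps might look like. Everything in it (the Shot record, the transcribe and summarize stand-ins and the orchestrate loop) is a hypothetical illustration of the pattern, not any vendor's product or API.

```python
# Hypothetical sketch of specialized AI steps orchestrated into a workflow.
# The step functions are stand-ins for real models or services.
from dataclasses import dataclass

@dataclass
class Shot:
    clip_id: str
    transcript: str = ""
    summary: str = ""

def transcribe(shot: Shot) -> Shot:
    # Stand-in for a specialized speech-to-text model
    shot.transcript = f"[transcript of {shot.clip_id}]"
    return shot

def summarize(shot: Shot) -> Shot:
    # Stand-in for a specialized text-summarization model
    shot.summary = f"[summary of {shot.transcript}]"
    return shot

def orchestrate(shot: Shot, steps) -> Shot:
    # A trivial orchestration engine: each specialized step hands its output to the next
    for step in steps:
        shot = step(shot)
    return shot

print(orchestrate(Shot("clip_001.mxf"), [transcribe, summarize]).summary)
```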
If this future is accurate, then the skill sets that will be needed to be most effective in understanding, building, troubleshooting and managing these technologies in the future are the software-oriented skill sets seen in developers today.
It is important to develop a strong skillset in software development in at least one programming language. Doing so develops the logic and systems-flow skills that will be a part of systems designs that are heavily software-based. Python is very popular now and contains all the elements found in many languages, and so would be a good choice for beginners.
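As a small taste of that, here is a beginner-level Python sketch built from the loops, conditions and functions found in most languages. The clip names, durations and the 120-second limit are invented for illustration; a real script would pull durations from a MAM database or a tool such as ffprobe.

```python
# Flag promo clips that run longer than a house limit.
# Clip list and durations are made-up sample data.
MAX_SECONDS = 120  # assumed house limit for promo clips

clips = {
    "promo_a.mov": 95,
    "promo_b.mov": 141,
    "promo_c.mov": 118,
}

def flag_long_clips(durations: dict, limit: int) -> list:
    # Keep only the clips whose duration exceeds the limit
    return [name for name, seconds in durations.items() if seconds > limit]

print(flag_long_clips(clips, MAX_SECONDS))  # ['promo_b.mov']
```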
The fact that AI Large Language Models (LLMs) are generating code based on prompts
does not take away from the need to learn a programming language. There are potentially significant limitations on LLM systems, which mean that for some time it will be necessary to understand code to debug and deploy quality systems.
Data-related skills are another area that will be critical in an AI-enabled future. A media engineer will need to be a data engineer/scientist too, which means gaining a fundamental understanding of databases and data structures as well as the methods to extract data such as Structured Query Language (SQL) and many others. Particularly relevant to AI are technologies such as vector databases and it may also be wise to refresh
your knowledge of linear algebra as it is core to many generative AI technologies. Understanding the basics of statistics will also be important, including probabilities and other concepts.
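As a self-contained illustration of those database and SQL fundamentals, the sketch below uses Python's built-in sqlite3 module; the assets table and its columns are invented for the example.

```python
# Create a tiny asset-metadata table and query it with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE assets (id INTEGER PRIMARY KEY, title TEXT, codec TEXT, duration_s REAL)"
)
conn.executemany(
    "INSERT INTO assets (title, codec, duration_s) VALUES (?, ?, ?)",
    [
        ("Evening News Open", "ProRes", 12.5),
        ("Spring Promo", "H.264", 30.0),
        ("Doc Episode 3", "H.264", 3240.0),
    ],
)

# Structured Query Language: filter and aggregate the metadata
for codec, count, total in conn.execute(
    "SELECT codec, COUNT(*), SUM(duration_s) FROM assets GROUP BY codec"
):
    print(codec, count, total)
```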
It has been argued by some that prompt engineering has reached a level of complexity such that it is akin to a new kind of programming language. Regardless of the veracity of that statement, learning how to write good prompts for different systems is a skill that everyone, including technologists, should really get under their belt.
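As a rough illustration, a disciplined prompt spells out a role, the task, constraints and an output contract, as in the sketch below. The wording is generic good practice under assumed requirements, not the syntax of any particular AI system.

```python
# A structured prompt template; {shot_log} is filled in at run time.
prompt = """
You are an assistant for a broadcast media-asset librarian.

Task: Summarize the shot log below for a producer.

Constraints:
- Maximum 3 bullet points.
- Quote timecodes exactly as written; do not invent timecodes.
- If information is missing, say "not in log" rather than guessing.

Output format: plain-text bullets, one per line.

Shot log:
{shot_log}
""".strip()

print(prompt.format(shot_log="01:02:10:00 - wide of stadium exterior at dusk"))
```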
As has always been the case, the media engineer is a multiskillset engineer. Good media engineers have always known at least a basic amount about a lot of technical fields. That will remain true. But, more important than any of the technical skills listed above will be the continuing growth of softer skills—project management, change management, understanding business requirements and how to communicate. These are the skills that have been growing in importance in technical fields for decades and those skills will remain useful for your career— wherever it takes you. ●
John Footen is a managing director who leads Deloitte Consulting LLP’s media technology and operations practice.
Cloud computing administrators and their superiors probably have been asked at least once in their lives, “What keeps you up at night?” An AI overview (courtesy of Google AI) came back with this answer: “Essentially, the responsibility of ensuring critical data and applications remain accessible and secure within a dynamic cloud environment.” But this “overview” didn’t really answer the question, so this article may help open your eyes to what’s coming.
Fig. 1 summarizes the major points discovered in the research for this article and, not surprisingly, most of the findings from several sources round out to about the same bullet points from 2024 and before. We’ll take some of these points and practices apart in the following segments and see if you agree on the importance of their implementation.

The elephant in the room is most likely security. Like many other branches of technology, security is a pressing concern in cloud-based computing—no surprise here, and certainly not confined to just the cloud. In general, security has become the biggest issue for all organizations—irrespective of size, revenues or locations (ground, cloud or home). Refining the security topic a bit brings into play “sensitive” data, which is another major concern. Sensitive data is defined as information that, if disclosed, misused or accessed without authorization, could result in harm, discrimination or adverse consequences for the individual to whom the data pertains (per a simple Google search).

Fig. 2 briefly summarizes the categories of sensitive data for any organization. However, if your organization associates at a government (e.g., intelligence community) level, your people may require a top-secret (TS) clearance, the most restrictive level. TS data is information that can cause grave damage to national security if disclosed without authorization. In this case, performance of your systems (ground or cloud) must meet TS-level criteria, and your administrative people must be reinvestigated for continued eligibility every five years.
Security risks for cloud replicate those found in everyday activities with mobile devices, home computing, work compute environments and daily life. Included in the risk-list and prevention methodologies are avoiding phishing emails, forged messages (e.g., fictitious representations) and ensuring strict user access control policies are in place.
Securing data (including its accessibility) is becoming increasingly complex and challenging. Those organizations with poor data management practices, weak network security, little, poor or no encryption methods, and/or a lack of endpoint protection may face significant challenges in 2025 and beyond.
Data is one of the most valuable assets to the organization or individual. Learning basic, primary methods for protecting sensitive or confidential data is critical to the organization to avoid potential data breaches or data loss. Data loss can be devastating, often resulting in identity theft, loss of business or exposure of classified or confidential information.
Data classification is a good first step in managing your organization’s information—whether in the cloud or on the ground. Data classification is the set of processes whereby an organization sorts its data into
multiple categories within a system to make it easier to access and secure.
Ranking the data by sensitivity to reduce storage and backup costs is just one step. Such policies can greatly reduce inefficiencies and create better safeguards, whether for personal or company data. They also help assess how sensitive data is used and which permissions or access levels should be granted, all of which increases data privacy and security, including for third and fourth parties.
Any time data storage or data processing is involved, it’s important to assess and identify each potential risk before it occurs. Data protection impact assessments (DPIAs) are active (live) tools designed to help organizations secure their data when processing involves significant risk of exposing personal information.
For the United States, this makes cloud data management more complex if your organization deals with any international (e.g., EU) people or companies. A DPIA defines data processing roles within the company, data flow between systems and individuals, and the security policy in the event of a cyberattack.
Organizations operating with highly sensitive data (in the cloud or on the ground) should consider encryption to prevent unauthorized parties from accessing it. Using complex algorithms and ciphers, data can be protected from being stolen or exposed during a cyber event. Blockchain is a methodology used to verify the authenticity of data and detect alteration. It is often used to protect sensitive media (motion pictures, especially those in production) and in the exchange or execution of contracts.
U.S. military and government entities have used data encryption to transmit and receive any classified communications for decades. For businesses, some cloud service providers may also provide those capabilities for additional fees.
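To show the concept rather than the policy, here is a minimal sketch of symmetric, authenticated encryption of a sensitive record using the widely used third-party Python "cryptography" package. It is an illustration only; production systems would add key management (for example, a cloud KMS), key rotation and access controls.

```python
# Encrypt and decrypt a sensitive record with Fernet (AES-based, authenticated).
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, store and fetch this from a KMS or vault
cipher = Fernet(key)

record = b"subscriber: Jane Doe, card ending 4242"
token = cipher.encrypt(record)   # ciphertext is also integrity-protected
print(token)

assert cipher.decrypt(token) == record
```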
Data masking, similar in practice to data encryption, replaces the original data with fictional data to protect its security. Masking processes are generally for internal use to prevent developers, testers or researchers from accessing sensitive data—thus mitigating potential leaks or breaches by disgruntled employees or human errors (the biggest cause of security violations).
Data masking is sometimes used in the testing process to evaluate patching services/systems or various security protocols, or when building new features without using real user data.
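A bare-bones sketch of the masking idea, using only Python's standard library, follows; the patterns and replacement values are illustrative and far from exhaustive.

```python
# Replace identifying fields with fictional placeholders before handing data to testers.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask(text: str) -> str:
    text = EMAIL.sub("user@example.com", text)
    text = SSN.sub("000-00-0000", text)
    return text

print(mask("Contact jane.doe@station.tv, SSN 123-45-6789, re: payroll"))
# -> Contact user@example.com, SSN 000-00-0000, re: payroll
```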
Most of us are, by now, aware of two-factor authentication (2FA) or multifactor authentication (MFA), which is used by banks, credit card companies and such to protect user accounts and mitigate enterprise breaches.
2FA and MFA are among the easiest—and most protective—of the data security practices, and while a bit more complicated in a cloud environment, they are not beyond implementation, requiring just a couple of extra steps to access the cloud or the network.
According to ZDNET, Microsoft revealed 99.9% of compromised accounts did not use MFA, and that only 11% of enterprise accounts had MFA in place.
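The time-based one-time codes behind most MFA apps are not magic; they follow RFC 6238. The sketch below generates such a code with only Python's standard library, purely to illustrate the mechanism; real deployments should rely on an identity provider or a vetted MFA service, not hand-rolled code.

```python
# Generate an RFC 6238 time-based one-time password (TOTP) from a shared secret.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)   # shared secret, base32-encoded
    counter = int(time.time()) // period                # current 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```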
This is not a single-solution-only practice. Protecting the network often employs many different security solutions to best protect data from being compromised (i.e., stolen or simply accessed). IT managers must create and manage a secure environment on-prem before leveraging the advantages of the cloud for additional services.
Since the cloud essentially “sits at the edge” of the network, administrators must
feel confident that the cloud side of the enterprise meets a minimum set of criteria to facilitate the needs of the organization. Solutions include antivirus and antimalware software applications, data loss prevention (DLP) practices, intrusion detection systems (IDS) and intrusion prevention systems (IPS), proper firewalls, VPNs and endpoint detection and response (EDR). Additional suggestions include network segmentation and secure data removal tools.
Briefly, in addition to a well-defined and properly managed security solution set, the other suggestions to consider when looking into the cloud for enterprise operations include:
• Internet dependency: Given that cloud computing generally relies on internet connectivity, should there be an outage, businesses might not be able to access their data or applications. Providing a hardened alternative (dedicated lines) to/from the access points can be valuable, but costly, too.
• Vendor lock-in: Businesses using cloud computing are ill-advised to become dependent on a specific single vendor, making it difficult to switch providers should something unforeseen occur.
• Complexity: For businesses that are new to the technology, cloud computing can be complex. Be prepared for a lengthy startup period and engage a solutions architect who is experienced in what you do and with various cloud provider solutions.
• Cost management: Spending can be a significant challenge. Be certain your financial officers are prepared for the sometimes rapidly changing cost structures and “bring them along” in the process.
• Skills: The skills gap is one of the biggest challenges for cloud computing technologies. Qualified cloud specialists are hard to find, sometimes hard to engage and can cost a pretty penny to employ.
This is by no means a “complete” list of “2025 and beyond” cloud-computing concerns, but it hits the high points. If you’re even slightly concerned about how to address these issues, hire a professional to get you through the planning and implementation. You’ll be glad you did. ●
Karl Paulsen recently retired as a CTO and is a long-time contributor to TV Tech. He can be reached at karl@ivideoserver.tv.
Although it’s an important question, few clients ever ask how bright their new lighting will be. In the hectic job of revamping a studio, such technicalities are usually left up to whoever’s doing the lighting. But because the answer will impact several systems down the line, choosing the right intensity needs to be based on your studio’s particular needs and not randomly picked. Here’s why.
Lighting will affect everything seen in the studio. This broad-ranging impact is why I suggest going through a process to discover exactly how much light is appropriate for the project. Rather than pulling a number out of a hat, I try to find the “Goldilocks” level that’s “Not too bright. Not too dark. Just right!”
Back in the early 1970s, cameras needed a whopping 300 fc (foot-candles) to produce decent pictures. That was bright enough to make you squint, rather than smile, for the camera. Over the years, cameras have improved, and lighting levels have dropped a lot. Today, studio levels are a comfortable 45–65 fc. But there’s more to the task than simply picking a number in that range.
Getting beautiful images requires more than just a correct exposure. Skilled photographers, cinematographers and videographers use a variety of camera techniques in crafting their images. Unless you happen to be a cinematographer as well as a lighting designer, you might not appreciate how much lighting affects the camera image. One particular camera adjustment that’s directly linked with light levels is the aperture (or iris). As we’ll see, this affects more than just the exposure. In television, we want the viewer’s attention on the stories and the people telling them. By controlling which part of the shot is in sharp focus we direct the viewer’s eye to what’s important on the screen. By design we can control that sharp focus through manipulation of the “depth of field.”
Our sense of depth is possible because we have two eyes with overlapping fields of view. Cameras lack this type of binocular vision, called stereopsis, because they only “see” through a single lens. In the absence of stereopsis, a camera can still indicate depth by providing other visual cues for dimension. Depth of field is one such cue; by contrasting a sharply focused zone against softer-focused areas, our brain subconsciously builds a three-dimensional understanding of the pictured space.
So, how do we create this sense of depth by selecting the right light intensity? The lens adjustment that affects depth of field, as well as regulating how much light reaches the camera’s sensor, is the iris. Bear with me while I cover what may be familiar ground to you. The iris (or aperture) regulates the light in increments called f-stops. The size of this aperture affects the depth of field through its impact on focus. Without delving into how the “circle of confusion” impacts focus, let’s just note that the smaller the iris opening (higher number f-stop), the greater the depth of field. The lower the f-stop number, the shallower the depth of field.
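For readers who do want a little of the math, a common close-focus approximation (added here as context, and valid when the subject is much closer than the hyperfocal distance) makes the relationship explicit:

$$\mathrm{DoF_{total}} \approx \frac{2\,N\,c\,u^{2}}{f^{2}}$$

where N is the f-number, c the circle of confusion, u the subject distance and f the focal length. Doubling the f-number roughly doubles the depth of field, which is exactly the “smaller opening, deeper focus” behavior described above.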
The photographer Ansel Adams famously used f/64 to get everything in sharp focus for his iconic images of the American West. While that infinite depth of field was appropriate for his scenic images, we don’t want that for our studio shows.
In newscasts or interviews, we want the visual emphasis on the talent. Towards that end, we focus attention on them with a bust-shot that’s in sharper focus than the background. That selective focus is achieved by choosing an f-stop that simultaneously provides the depth of field we want and the right amount of light for a good exposure. But which f-stop is that?
My personal choice for the “Goldilocks” f-stop sits between f/2.8 and f/4. That aperture provides an adequate depth of field to keep the anchors in focus, and just the right amount of background softening to suggest some separating distance. That “f/2.8–f/4 split” is the sweet spot where I start the process of determining how much light I need to use.
To find out how much light you need for your cameras, frame a shot of the chip chart (with teleprompter to account for the light loss from the mirror) from roughly 8 feet away to approximate the “normal” anchor-to-camera distance.
White-balance with the clear camera filter at whatever color-temperature lighting you’re using in your studio. Set the iris to the mark between f/2.8 and f/4. Then, adjust the chip-chart light intensity until the chips fall into their proper levels on a waveform monitor.
Once that’s done, take a light meter reading of the chart intensity from in front of the chart (sphere diffuser, pointing back to the camera lens). Whatever intensity reading you get is the value to use for your lighting positions. For most studio cameras today, it will probably be somewhere between 45 fc and 65 fc.
If the number you get is lower than 45 fc, you may need to boost it. This is especially true if your set incorporates video displays, which perform poorly at very low brightness levels. The solution is to rerun the process with the cameras set at –3 dB of gain, which will nudge the required light level higher.
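As a rough sanity check on why negative gain raises the light requirement (added arithmetic, assuming the conventional 6 dB-per-stop relationship for video gain):

$$-3\,\mathrm{dB} \approx \tfrac{1}{2}\ \text{stop less sensitivity} \;\Rightarrow\; \text{required light} \times 2^{1/2} \approx 1.41\times, \qquad 45\ \mathrm{fc} \times 1.41 \approx 64\ \mathrm{fc}$$

So a 45 fc starting point lands in the mid-60s, a level at which on-set video displays tend to hold up better.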
That’s my process for deciding what the “right” light level is. Neither the intensity nor the f-stop should be arbitrary or accidental in crafting an image. That’s why this process is driven by the choice of f-stop, which is, in turn, based on the depth of field we want.
The “f/2.8–f/4 split” is a subjective choice that I think looks best for most news sets. Your criteria may be different, based on how much (or little) depth of field you want. The process will run the same regardless of which f-stop you choose. The point is to choose by design, rather than haphazardly. ●
Bruce Aleksander invites comments from others interested in lighting. He can be reached at TVLightingguy@hotmail.com.
Comcast Technology Solutions’ Cloud TV platform is a centralized ingest, transcoding and video-processing solution that supports live and on-demand video for broadcasters and video service providers, with complete multiplatform support delivered from the cloud via a managed 24/7 service.
Cloud TV’s scalable platform provides Tier-1 broadcasters, content owners and distributors with a flexible framework supporting customizations, metadata management, content protection and rights enforcement, advanced VideoAI applications, content recommendations, server-side and contextual advertising, commerce and subscription management, content delivery and OTT distribution, and advanced analytics and insights. www.comcasttechnologysolutions.com
SDK 2.4.0 for Sony’s Spatial Reality Displays (SRD) adds multidisplay support, viewer log functionality, and extended development platform compatibility, allowing content creators to produce even more immersive and high-impact 3D content.
SDK version 2.4.0 introduces the ability to combine multiple Spatial Reality Displays for larger and more engaging visual setups. This new functionality offers Vertical Array, which allows users to stack up to four displays for life-sized, full-body figures or tall content that brings a lifelike presence to immersive installations; Horizontal Array Align, which allows up to three displays for wide-screen, panoramic 3D content, ideal for large-scale presentations and events; and Grid Array, which allows users to create a 2x2 grid of four displays, producing the equivalent of a 55-inch screen, perfect for displaying large objects or creating impactful viewing experiences. www.pro.sony/ue_US/solutions
DaVinci Resolve 19.1/ Fusion Studio 19.1
DaVinci Resolve 19.1 includes features and improvements that provide more control and significantly speed up multicam, audio and visual-effects workflows.
DaVinci Resolve 19.1 is available now as a free download from Blackmagic’s website.
Fusion Studio 19.1 adds new Fusion effects, including generators to quickly create backgrounds such as a star field and gradient or radial lines. New title templates make it easy to add more interesting titles to the user’s work. In addition, Fusion media inputs will now honor upper- or lower-field dominance for interlaced footage. This means the effects users apply will display and render correctly based on the original scan pattern.
www.blackmagicdesign.com
CinCraft Scenario 2.1
Version 2.1 of the Zeiss CinCraft Scenario camera tracking system now allows users to manually calibrate lenses when no prefilled template is available, opening the possibility to work with any spherical lens. The CinCraft Scenario calibration process is guided and designed to be easy to conduct. In addition, users can also manually enter measurements for the CamBar to the main camera offset instead of relying on the automated Offset Assistant.
The Zeiss CinCraft Scenario system also features an enhanced Export 2.0 update. The new Undistortion ST Maps can be used to undistort camera plates and the Export Point Cloud provides VFX artists with data detailing camera position during production.
www.zeiss.ly
Marketing Insights is a new tool in Brightcove’s video insights and analytics platform that measures the business impact of video content to give marketers greater insight into their strategies.
With the new tool, marketers can track data from inbound traffic to their videos, capture viewer origins through Urchin Tracking Module (UTM) parameters and analyze which campaigns are effective at driving views and conversion. Linking performance and marketing efforts provides insights marketers can use to improve strategy refinement and make better resource allocation decisions, the company said. Marketing Insights is available as part of Brightcove’s Marketing Studio premium package. www.brightcove.com/en
Consumer data platform provider Hightouch has announced a new solution it says will streamline and accelerate connected TV (CTV) advertising. This unified offering allows advertisers to target, match and measure first-party audiences across the currently fragmented CTV landscape of demand-side platforms, original equipment manufacturer platforms and various measurement tools.
Hightouch said its “composable CDP,” a marketing solution that integrates customer data from multiple sources into a centralized data warehouse, solves these problems by empowering advertisers with a fast, affordable and automated solution.
https://hightouch.com
Sony has updated firmware for its Camera Remote Toolkit used with its professional handheld, digital cinema and alpha interchangeable-lens cameras. Sony’s Camera Remote Toolkit provides complete control over the company’s range of cameras and lenses in any environment with the Camera Remote SDK and Camera Remote Command. Users can develop software allowing them to change camera settings remotely, such as shutter release, framing and focus, and perform live-view monitoring.
The Camera Remote SDK v1.13 update, planned for release by the end of the year, offers compatibility with additional cameras; absolute position settings for optical zoom; remote emulation; exposure notification timing adjustment; enhanced video recording workflows and more. Sony also is making a remote firmware update available to users of the ILX-LR1 industrial camera, which can be updated within the user’s camera application. www.pro.sony
Waymark, a developer of AI video-creation technology, has launched “Variations,” a new technology that generates video ads in multiple lengths and aspect ratios with a single click for all devices and platforms, maintaining a cohesive look and feel, the company said.
Waymark said its AI-powered video platform enables creators to generate high-quality commercials for local businesses in five minutes or less. With Variations, creators will be able to take their Waymark videos and convert them into additional lengths such as 30-, 15- and 5-second clips, as well as complementary formats such as 16x9 and 4x5, with a single click, creating an “omnichannel” video campaign, the company said. www.waymark.com
Magewell’s compact USB Capture devices enable computers including laptops to capture high-quality AV signals through a USB interface, with no additional power source required. The new USB Capture HDMI 4K Pro offers what the existing USB Capture HDMI 4K Plus model does, while leveraging 20 Gbps USB transfer performance on compatible host systems to enable the capture of 4K video at higher frame rates and color precision.
When used with a host computer that has a 20 Gbps USB 3.2 Gen 2x2 interface, the USB Capture HDMI 4K Pro can capture HDMI inputs up to 4096x2160 (including 3840x2160 Ultra HD) at 60fps with 4:4:4 chroma fidelity. The device is also compatible with USB 3.2, USB 3.1 and USB 3.0 interfaces for capturing lower-bandwidth signals, with capture capabilities dependent on the host USB connectivity and operating system. www.magewell.com
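A quick back-of-the-envelope calculation shows why the faster bus matters; this is added illustration that ignores blanking and protocol overhead, and the bit depths are assumptions since the brief does not state one:

$$4096 \times 2160 \times 60\ \mathrm{fps} \times 24\ \mathrm{bits} \approx 12.7\ \mathrm{Gbps}$$

That already exceeds the roughly 10 Gbps of a single-lane USB 3.2 Gen 2 link, and a 10-bit 4:4:4 signal pushes the figure to about 15.9 Gbps, so the 20 Gbps Gen 2x2 connection is what makes full-rate 4K60 4:4:4 capture plausible.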
The CV630-BI and CV630-WI IP-enabled PTZ cameras feature 25x Ultra HD HEVC capabilities. Available with a black (CV630-BI) or white (CV630-WI) housing, the cameras use an 8-MP, 1.25-inch imaging sensor to capture up to Ultra-HD 3840-by-2160p video for use in live broadcast, newscasts, reality TV, concerts, corporate and government applications, courtrooms, houses of worship and schools. Other supported resolutions include 1920 by 1080p, 1280 by 720p and 1920 by 1080i.
Both cameras are equipped with synchronous pan, tilt and zoom motors for smooth and silent movements during operation. A 25x optical zoom block provides flexibility from 4.6mm to 120.5mm, with a nearly 68-degree angle-of-view at its widest. Multiple simultaneous video streams are available over HDMI, 3G-SDI and triple-stream IP (H.265/H.264) with stereo audio input embeddable on all available outputs. PoE+ (power-over-Ethernet) provides an economical and easy solution for integrators during installation. www.marshall-usa.com
Streaming Metrics is designed to provide strategy, finance and competitive analysis teams at streaming services with access to historical and forecasted economic performance metrics—subscribers, revenue, ARPU, churn—combined with catalog insights (including exclusivity and windowing), on a market-by-market basis globally. Parrot Analytics says the tool also provides region- and country-level metric breakdowns, representing “the industry’s most comprehensive and granular insights.”
This data will help customers drive investment decisions, benchmark their performance against competitive streaming services in each market and globally, and inform positioning, pricing and growth strategies by understanding both the financial and content metrics behind streaming success. www.parrotanalytics.com
Fujifilm is developing the Fujifilm GFX Eterna, its first digital filmmaking camera, with a planned release next year. The camera will use the large-format GFX 102MP CMOS II HS sensor, about 1.7x larger than a 35mm full-frame sensor, in combination with the X-Processor 5 high-speed image processing engine. It features a native G mount with a removable PL adapter.
The sensor-processor pairing will enable filmmakers to recreate rich, true-to-life visuals with enhanced postproduction flexibility. Fujifilm introduced the GFX System of mirrorless digital cameras in 2017 and has added improvements in video capabilities. With GFX Eterna, the company is combining its filmmaking experience and its experience as a premier source of lenses under the Fujinon brand with the advanced technology of its GFX System cameras. Fujifilm is also developing a GF power zoom lens with a planned focal length of 32-90mm.
www.fujifilm.com/us/en
David J. Fernandes Director / Producer/Creator “Creepy Bits”
ONTARIO—Back in 2021, suffering from severe pandemic-related cabin fever, I decided to do something I’d never done before: make and release some ultra short horror movies entirely online.
And so, with a tiny self-funded budget, a lot of goodwill and plenty of favors, we managed to shoot six short horror movies in six days with a crew of only 12 people.
“Creepy Bits” was born as a web series and released into the world on YouTube—one at a time—in the lead up to Halloween 2021.
ALL TOGETHER NOW
Without the usual budget to pay people to do it all for us, we had to figure out a good workflow to manage post production. In the end, DP Gregory Bennett and I chose to use DaVinci Resolve Studio because we both already owned the studio license, and Resolve had recently added editing to its mix of capabilities. Since we were going to use DaVinci Resolve for color grading anyway, we thought it would be easier for us to do all post-production in the application.
“Creepy Bits” season one ended up having a very successful festival run playing at 17 international festivals, garnering 24 award nominations, five wins and two distribution deals. It did well enough for us to get the interest of LaRue Entertainment in making another season, but this time with 15 times the budget and five times the runtime.
Unlike season one, this time we’d have a lot more data and five editors to work with, so after thoroughly researching workflows, we ended up choosing DaVinci Resolve Studio again, largely because of the new cloud features.
“Creepy Bits” season two was shot in May 2024 with two Alexa LF cameras in ProRes 4444XQ and in 4.5K open gate. We created two identical 32TB RAIDs and filled up about 22 TB of them with the footage from the show. Our DMT Mollie Milchberg not only ensured all the camera cards were copied properly, but she also created new Blackmagic Cloud projects for each episode, ingested, audio synced, relabeled all the footage and then created an assembly edit timeline from the script supervisor’s logs.
By the time we had finished filming, we had two full backups and a Resolve project ready to be edited. Each of the five editors went home with HD proxies they could work from for editing, and all they had to do was log into their Blackmagic Cloud account, open their episode, and relink their proxies.
Gregory and I kept the two RAIDs with the masters (the editors worked off proxies), and all the projects were stored in the cloud. This enabled the entire post team of about 15 different people to work off the same cloud library (with working files on local drives) and saved us piles of time and money from not having to constantly courier hard drives around. And at any point, I was able to pop into any episode and see where things were at in realtime with zero confusion about which version to look at.
When it came to color correction, Gregory was able to do a first pass from his copies of the masters at his studio, and when we were ready for the online, we did it together at mine.
The combination of Resolve’s world-class color grading, competent NLE, integrated media transcoding and the new cloud-based workflow proved to be an excellent choice and enabled us to finish VFX-heavy post on 105 minutes of new content in just under four months, with a comparatively small team and tight budget.
David J. Fernandes is a Canadian director/producer, creator, showrunner and post supervisor for “Creepy Bits” season two, as well as the co-creator of upcoming new music documentary series “The New Vibes” for Bell FIBE TV1. He can be contacted at david@davidfernandes.ca or visit https://davidfernandes.ca. More information is available at www.blackmagicdesign.com.
EditShare’s FLOW media asset management platform is designed to simplify and automate every stage of the creative workflow, from capture through delivery. With a suite of robust video and production management tools, FLOW empowers teams to efficiently ingest, edit, organize, and deliver media—whether on-premises or remotely. Built for seamless collaboration, FLOW connects distributed teams in real time and integrates effortlessly with industry-leading creative tools like Adobe Premiere, DaVinci Resolve and Avid Media Composer.
By automating routine tasks, FLOW boosts productivity and efficiency across all departments. It also provides powerful tools for archiving, backup, and secure media sharing, while its AI-driven features enhance searchability and metadata tagging. www.editshare.com
SMART Central is a powerful web-based Business Process Management (BPM) solution for broadcast facilities. At its core, SMART Central offers an array of automated management and reporting tools to give operators and decisionmakers immediate access to critical information. The building block solution can be scaled, and apps can be added to help facilities optimize operations and reduce costs. SMART Central is designed for multi-departmental use including sales, traffic and master control to help create efficiency.
More specifically, SMART Central provides quick access to reports and logs; allows users to view low-res proxy files and approve content; monitors multiple on-air channels and equipment; scales workflows and adds apps as the needs of a station grow; controls access to apps and reports based on user permission levels; automatically sends email notifications based on user-defined rules; controls any channel in the station using the Remote AirBossX app; and more. www.florical.com
Cubix Yunify is an end-to-end, multitenanted asset management, automation and orchestration platform designed to be modular, scalable and agile. It includes close integration with best-of-breed on-premises infrastructure as well as cloud services for storage, transcoding, content discovery and more. Combined with task-focused portals for user access and its modular architecture, Cubix Yunify is a powerful solution for any media environment.
Cubix Yunify can be deployed fully on site, fully in public cloud or a hybrid mixture of both environments, allowing clients to leverage both local and cloud-based resources taking best advantage of both cost and speed. A SaaS-based version, Cubix Halo, provides all the benefits of the full platform. www.ortana.tv
ENCO’s ClipFire brings automatic ingest, media asset management, graphics and playout automation together into a unified platform for live and automated media playout in broadcast and AV environments. ENCO recently added powerful new features, including the ability to ingest and play out multiple channels of video simultaneously with support for baseband SDI and NDI inputs and outputs. On-the-fly transcoding enables ClipFire to play a variety of mixed file formats and resolutions with transitions, while a new native Clip Editor allows users to adjust in/out points and merge clips within the ClipFire application.
A resizable L-bar automates live video squeeze backs to accommodate wraparound graphics for a sophisticated look in live playout environments. www.enco.com
Tellyo Pro is a cloud-based platform for live clipping, editing, and publishing to all social and digital channels simultaneously. It allows users to create clips, highlights, and montages for live and VOD content on the cloud with a fast, intuitive, short-form video creation platform. Users can seamlessly edit content with frame accuracy, add branding elements, overlays, and graphics, and apply multiple aspect ratios before publishing across 100+ destinations in less than 30 seconds. They can also use live event metadata to automate and speed up video production workflows and restream content to OTT and CDN streaming destinations simultaneously.
Tellyo Pro provides a closed-captioning toolset with support for multiple languages and subtitle tracks, supporting the CEA/EIA-608, 708 and WebVTT standards. The Tellyo Talent app expands social media reach and boosts engagement by securely sharing approved video content with athletes, sponsors, brand ambassadors and influencers.
www.amagi.com
Aveco’s GEMINI media and workflow management platform collects, keeps and enriches content and guarantees that it is searchable and available when and where needed. Its advanced logistics and processing features ensure content is automatically adjusted to match source and destination constraints. GEMINI also controls processing devices such as AI-based enrichment, automated quality checking and transcoding systems, and its flexible workflow management features ensure processes are systematically respected, automated when relevant, and executed efficiently.
GEMINI’s innovative, cloud native, clustered and scalable architecture can be deployed in the public cloud, on prem, or hybrid. The user interface relies on HTML5 standards; the underlying platform is a robust Unix-based OS (Linux). Tools allow users to efficiently document, prepare and package content for archiving, broadcasting or distribution to digital platforms. www.gemini.aveco.com
Jim Jacobs, Studio Manager
Ben Peace, Dubbing Mixer Wounded Buffalo Sound Studios
BRISTOL, U.K—Wounded Buffalo Sound Studios is an audio post-production facility that has worked on countless award-winning TV productions, including “Planet Earth III,” “Earthsounds,” “Our Great National Parks” and “David Attenborough: A Life on Our Planet”.
From its facility in Bristol, in the west of England, Wounded Buffalo Sound Studios has carved out a unique niche in the world of audio post-production. It is known for its exceptional work in natural history documentaries.
Working on natural history documentaries presents many unique challenges. Since most subjects are shot on very long lenses, you can film something from 400 yards away but you can’t easily capture the sound from that distance.
This often means that when we receive footage, the accompanying audio is sometimes minimal or of poor quality. In the field, audio is rarely a priority, or even available, with shoots often capturing “first-ever” seen footage in extreme environments.
But we do get some atmospheric sound if we’re lucky. Sometimes the production team will leave a remote camera with a built-in mic somewhere, and even though the sound can be pretty terrible, it’s really useful; it lets us know what we’re supposed to be aiming for.
Natural history content requires incredible attention to detail. We regularly consult with scientists and episode producers familiar with the footage or landscapes they depict to ensure the authenticity of the sound. Each layer is pretty fastidiously researched. The wrong bird at the wrong time of day can ruin the authenticity.
The pressure to maintain accuracy is immense. We often work with footage that needs the sound to be reconstructed entirely. This means creating a soundscape that is believable yet captivating.
One of the more surprising aspects of natural history sound work is dealing with underwater audio. Given the important role of clear underwater audio in ocean scenes, we advise recording teams on how to capture it for best effect—for example, stopping the boat’s engine so that the noise doesn’t interfere with natural sounds.
Underwater stuff is actually quite tricky. Swimming generally doesn’t make much noise, but when you see it, it kind of needs to make a sound to look right. So, if you’re underwater and a dolphin swims past the camera and above you, you can follow it with sound.
To address those difficult issues at Wounded Buffalo, our facilities now include three dubbing theaters featuring Dolby Atmos technology and six edit suites:
• Dubbing Theatre A (Dolby Atmos): Avid Pro Tools HDX system with S6 M40 console, and wall-to-wall speakers.
• Dubbing Theatre B: Avid S6 control surface and Pro Tools.
• Dubbing Theatre C: Two Avid S1 modules and Avid Dock.
• Sound Editing Suites: Pro Tools with various plugins including iZotope RX, a powerful noise reduction software package.
A turning point for us came with the integration of Avid’s S6 control surface, marking a significant upgrade. When the S6 came along, we jumped ship and went to Avid. It’s very versatile.
The integration has been particularly crucial for our work on complex projects. Everything is in one place. The integration is seamless, and it brings forward everything we need for a session.
Jim Jacobs’ career saw him upgrading and expanding audio departments, designing studios, writing music for TV, and even restoring pianos before the pandemic brought him back to audio post-production in Bristol. He started as a runner in post-production, worked his way up to being a post manager in London, then specialized in audio. Email him at Jim.Jacobs@woundedbuffalo.co.uk.
Ben Peace joined Wounded Buffalo in 1997 and has seen the studio evolve from using primitive audio files to the sophisticated digital tools of today. He started messing around with DJing raves back in the ’90s. Email him at ben.peace@woundedbuffalo.co.uk.
More information is available at www.avid.com.
Ross Video’s Media I/O provides ingest and playout and transcodes signals live for seamless integration into postproduction, media asset management and delivery workflows. The solution allows every channel to record, play or transcode almost any video format, codec and transport. Its scalability allows users to start as small as a single channel and grow as large as needed, and it is easy to use, with flex channels allowing easy switching between countless arrangements and combinations.
The battle-tested software is under active development, backed by world-class customer support, and is adaptable, allowing users to repurpose or group channels for ingest, playout or transcode as necessary. Finally, Media I/O is agnostic, offering the capability of running on-prem, virtualized or in the cloud, and managed with a modern web-based UI in any connected browser. www.rossvideo.com
StreamMaster products include StreamMaster CREATE for building graphics templates for Gallium and StreamMaster solutions; StreamMaster PRODUCE for graphics creation and playout for the production environment; StreamMaster PRIME, a cost-effective appliance for graphics and simple channel playout; StreamMaster BRAND for graphics playout and channel branding; StreamMaster DELIVER for integrated playout of thematic or reactive linear TV channels.
StreamMaster PRODUCE, the multichannel solution for live graphics creation and playout, supports live, multichannel 3D graphics and DVEs for eye-catching live production, and supports all popular production formats including 3G and 12G SDI, NDI and ST 2110 up to 2160p60 resolutions. Equally at home hosted on COTS hardware for OBs and studios, or cloud-based for remote productions, StreamMaster’s real-time 3D DVEs and graphics are immediately familiar to designers and operators seeking a reliable, scalable and future-proofed solution for live production environments. www.pixelpower.com
Versio Graphics is a complete solution for channel branding, encompassing all the elements required for graphic creation, preparation, insertion and control within Imagine’s Versio Integrated Playout ecosystem. It simplifies the graphics process, enabling the creation or modification of templates using an intuitive HTML interface with full-motion preview and video overlay.
Supporting dual DVE and dynamic rendering from external data sources, Versio Graphics provides web-based tools to embed complex instructions into a graphics file, allowing media companies to easily maintain a network’s identity through dynamic branding with no need for specialist staff and technology. Operators can quickly compose layouts and automate graphics playout with macros or create more complex animations using direct import from Adobe After Effects, and add dynamic, data-driven content and real-time tags, including content triggering. www.imaginecommunications.com
Included with Cablecast VIO video servers, SAM (Smart Asset Manager) is a virtual assistant for video automation workflows. Built to reduce user effort and potential user error for predictable parts of file management, SAM copies content between the video server and other storage locations—such as a SAN, NAS or the cloud. Directed by rules-based automation, SAM automatically moves content to archive storage after a specified time period, retrieves assets from archives when needed again for the program schedule, or backs up media files to a designated file store for safekeeping.
Available in Cablecast v7.5.0 and beyond, SAM moves or copies content from your video server to alternative storage locations such as a SAN (storage area network), NAS (network attached storage) or Cablecast Cloud Storage. www.cablecast.tv
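The rules-based pattern SAM automates is worth seeing in miniature. The sketch below is a generic illustration of that pattern, not Cablecast’s implementation; the paths, file extension and retention window are invented for the example.

```python
import shutil
import time
from pathlib import Path

PLAYOUT_DIR = Path("/media/playout")     # hypothetical primary video-server storage
ARCHIVE_DIR = Path("/mnt/nas/archive")   # hypothetical NAS archive target
RETENTION_DAYS = 30                      # rule: archive anything untouched for 30 days

def archive_stale_media() -> None:
    """Move clips that have aged past the retention window to archive storage."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    for clip in PLAYOUT_DIR.glob("*.mxf"):
        if clip.stat().st_mtime < cutoff:
            shutil.move(str(clip), ARCHIVE_DIR / clip.name)
            print(f"archived {clip.name}")

if __name__ == "__main__":
    archive_stale_media()
```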
Media Manager adds power to any production operation by keeping track of all available media across the network and making sure it reaches the intended destination. The software automatically identifies new content across the entire network and allows users to categorize and organize it the way they wish. They can browse for the content they need and use it anywhere; the system automatically transfers the content to where it’s needed.
Key benefits include: intuitive operation (from one interface users can easily find, catalog, preview, transfer and manage all media available on the network); web-based access (thanks to a web-based interface, users can manage all available media from anywhere); user management (give access to the whole team, creating different types of users with different permissions); tags (create as many tags as needed for each piece of content to easily find what’s required at any time) and more. www.wtvision.com
Dalet Flex is designed to make it easier and less costly to produce, manage, package and distribute content by providing a unified and collaborative media environment that is easy to scale. Using Flex, users can organize and curate assets in a central library, search for and update metadata, and trigger workflows. They can also upload, tag and organize assets as well as browse, filter, sort and search and build collections to curate and prepare content for delivery.
In addition, Flex provides for improved collaboration with tools that allocate and manage user tasks, as well as the review and approval process with time-coded comments. These features also make it easier to re-use media assets with a secure and metadata-rich archive. The platform allows users to tailor Flex exactly to their needs. It’s available as a service, fully hosted and managed by Dalet. www.dalet.com
Mike Palmer AVP of Media Management Sinclair Broadcast Group
COCKEYSVILLE, Md.—At Sinclair, Sony’s Ci Media Cloud serves as a central component in our operational workflow, supporting various tasks across multiple departments, including master control, traffic, programming, news, sports and engineering. Currently, more than 5,000 users across Sinclair rely on Ci, making it part of daily operations for more than 80% of our staff.
Sony Ci has been deployed so widely at Sinclair because of its intuitive user interface, which requires little training, and its solid set of fundamental features, which can be applied to many different workflows. From a user’s perspective, this reduced complexity results in less time spent in training and faster user adoption. We’ve frequently found our users can apply these tools to create solutions specific to their individual needs, without the need to engage technical staff.
For instance, Ci provides features to receive and send files, send file requests and file links, view proxies, add or change metadata, and search. These basic functions can be linked and arranged to support very different workflows. These may include enabling stringer contribution for high school football, contribution from our own news crews in the field, or reception of commercial spots and syndicated programming from various production houses.
These features are made even more powerful because most are also available via API. We’ve made extensive use of this capability to automate search, modification and movement of files through integration with other systems, including various workflow and orchestration engines. Systems as diverse as production asset management, channel playout, traffic and title management interact with Sony Ci thousands of times a day at Sinclair. Ci’s API event reporting is so granular that we’ve built dashboards in business reporting tools that track and assign cost to specific events and user actions. This contributes to Sinclair’s core philosophy of being a data-driven business.
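The kind of API-driven automation described above typically boils down to authenticated REST calls. The sketch below is a generic example using Python’s requests library; the endpoint paths, field names and token handling are hypothetical placeholders, not Sony Ci’s actual API.

```python
import requests

BASE_URL = "https://api.example-mam.com/v1"   # placeholder, not a real Ci endpoint
API_TOKEN = "YOUR_TOKEN"                      # placeholder credential

def search_assets(query: str, kind: str = "video") -> list:
    """Search a cloud media library for assets matching a query string."""
    resp = requests.get(
        f"{BASE_URL}/assets/search",
        params={"q": query, "kind": kind},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

def update_metadata(asset_id: str, metadata: dict) -> dict:
    """Attach or update metadata fields on a single asset."""
    resp = requests.patch(
        f"{BASE_URL}/assets/{asset_id}",
        json={"metadata": metadata},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: find stringer footage and flag it for the late newscast.
for asset in search_assets("high school football"):
    update_metadata(asset["id"], {"show": "late-news", "status": "ready-for-edit"})
```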
Ci also subscribes to one of our fundamental design tenets: media and metadata should be independent of applications, such as media asset management. In the past, MAM systems would ingest media, after which it was often more or less locked inside the MAM tool.
We subscribe to the MovieLabs 2030 vision in which services come to the media, rather than the other way around. We believe media and metadata (in the form of “sidecars”) should be available directly in shared storage. We design architectures in which multiple systems simultaneously access media from shared storage.
Sony Ci integrates nicely within this architecture and sits on top of storage that we control, populated with media by other systems, which is destined for use by even more systems. In essence, Ci provides a window into this shared storage, giving users and automation the ability to search and view media, and adjust metadata. These actions in turn control or trigger additional workflows by yet other systems downstream.
Our media supply chain, which we call the “Cloud Media Pipeline” (CMP), processes on average 30,000 files per week. Sony Ci is an integral component of the CMP.
The flexible and intuitive nature of Ci proved invaluable during an event that required us to activate our own disaster recovery plans. Users across the organization were able to apply Ci in ways we had not considered as they worked through recovery. In our opinion this was due to both the intuitive Ci user interface and Ci’s cloud-native architecture. In our experience, Ci, as a product, is clearly focused on providing a tool that can solve many problems, as opposed to being narrowly and inflexibly designed around a single use case.
Overall, Sony Ci functions effectively in a wide range of operational contexts, providing a versatile solution for media management across Sinclair’s distributed infrastructure.
Mike Palmer is AVP of Media Management, Sinclair Broadcast Group. Mike joined Sinclair’s Advanced Technology group in 2021. One of his key roles is selecting and integrating new media management technologies for Sinclair’s diverse business units. He can be reached at mpalmer@sbgtv.com. More information is available at www.cimediacloud.com.
Robin Pickston Manager MAM & Front End Support for MultiChoice
Building audience loyalty is the ultimate goal for storytellers around the world. In an age with never-ending content, how can you guarantee audiences will consistently turn to you?
As Africa’s leading entertainment platform, we understand that challenge well. It goes beyond owning the rights to the right content; it’s also about knowing how to distribute, archive and leverage it well.
MultiChoice South Africa offers entertainment in all its forms: from SuperSport, Africa’s premier sports broadcaster, to DStv, a video entertainment company, and GOtv, a digital terrestrial television platform known for its affordability.
Delivering engaging stories across all platforms is no small feat. Knowing our content is enjoyed by over 23.5 million households in 50 markets across sub-Saharan Africa, we understand our duty to our audience.
For our extensive media portfolio, receiving live and file-delivered content requires a system capable of prioritizing and distributing that content, at scale, in a 24/7 operation. This is why we count on Viz One, Vizrt’s enterprise media asset management (MAM) system, to strengthen both the live content distribution and storage management.
MultiChoice uses Viz One to produce, acquire and package media for sports and entertainment for over 200 channels broadcast across the African continent. In essence, Viz One catalogs MultiChoice’s vast media portfolio and makes it available to our users. They can search across asset and timeline-level metadata captured from every item in the system (well over x hours at time of writing), to preview material and create shot selections, quickly finding the right piece of media in the expansive archive in a matter of minutes.
The MAM system makes content available for editing and dubbing workflows, regardless of whether it is currently importing or sits within our ever-growing archive, which currently stands at over 1.5 million hours of HD content. This means that the assets captured for top-tier sports—including football, cricket, rugby, motor racing—as well as any other program and commercial material, are properly stored and readily available. Then, our MAM team creates the highlights, program versions and stage media for playout every day of the week.
In 2010, the Media Asset Management Engineering and Support team was created at MultiChoice, and by 2015, we had more than doubled in personnel. I’m responsible for managing the full MAM system and engineering support team, which means coordinating between multiple teams, including the MultiChoice Operations and other engineering teams, for road mapping the workflows and any process enhancements required.
For the last 14 years, we have grown the MAM system from a small IBM Tivoli Storage Management digital tape storage with 800 terabytes of data—managing and transferring commercial content only and the Ardome MAM system—to an IBM Tivoli Storage Management digital tape storage with more than 72 petabytes of data. This is replicated at our DR site and manages processing of all content broadcast on MultiChoice and SuperSport linear and digital channels.
It’s paramount for the efficiency of all MultiChoice platforms and programs that we use our digital tape storage as efficiently as possible.
The potential of the creative industry across Africa is undeniable. Being responsible for researching and helping implement new technologies that streamline our platforms and processes, I know well how technology can help ease the creative process.
Because we have the trust of millions of households across the continent, we have taken this rich revenue of the attention economy to amplify African stories, investing back in the talent of our audiences. From supporting M-Net’s channels, which creates original productions of authentic stories, to enterprise development such as MultiChoice’s Innovation Fund and Africa Accelerator, we’re committed to elevating African talent.
By leveraging our unique platform, we enrich a broad ecosystem of consumer services for creatives across Africa— all underpinned by scalable technology. As we grow, so do the stories we share and keep.
Robin Pickston is manager of MAM & Front End Support for MultiChoice. He can be reached at Robin.Pickston@multichoice.co.za. More information is available at www.vizrt.com.
Oasis Media Asset Manager (MAM) media workflow solution is designed to optimize news production for teams, whether in the newsroom or working remotely. Its high-performance search engine provides centralized access to raw footage, works-in-progress, completed segments and archived content—all within a single platform. Oasis integrates effortlessly with top NRCS and NLE vendors and enables federated searches and content transfers across multiple station locations.
With steadfast reliability, Oasis offers versatile storage options, including on-premises, cloud-based and hybrid storage with Bitcentral’s Fusion Hybrid Storage. It forms a key part of the Core News suite, which also includes Precis for ingest and playout, and Create for editing. www.bitcentral.com
NVerzion’s NGage automated newsroom file-based content playout system provides native digital file exchanges with news delivery services and editors to streamline newsgathering and airing. NGage is MOS-compliant, ensuring compatibility with the ENPS newsroom system. The NGage Operating System, along with the NFinity Server, can fully integrate with the Associated Press ENPS News system as a news media storage and playout server.
All news-related content resides within the NFinity Server, while NGage functions as a news playout system, creating complete schedules prior to the news program, managing rapidly changing events by prioritizing playlist items, and consolidating the NFinity Video Server, ENPS and MOS protocol. www.nverzion.com
Framelight X is designed to be a next-generation asset management solution native to GV’s AMPP platform that improves the efficiency of content creation. By allowing collaboration between globally distributed teams, and building systems that adapt dynamically to demand, Framelight X allows media organizations to realize significant savings while producing more content. In addition, it enables globally distributed, fast-turnaround production workflows targeted to news and sports; allows users to work remotely in a browser within seconds of a live record without the need for operators on-premises or on location; and increases yield per asset by federating content into a single, global asset management system, enabling content sharing and reducing duplication. www.grassvalley.com
TASCAM US is looking for:
A BROADCAST AUDIO SPECIALIST who is an expert in all aspects of audio for linear television, OB, radio, and IPoE. Qualified candidate will need to be very familiar with current audio formats and broadcast mixing protocol as well as an expert in upcoming technologies such as ST 2110. Knowledge of equipment suppliers and sales distribution for audio broadcast equipment is also required. Position is a retained consultant (not an employee) and can be remote.
If qualified and interested send a resume to: pyoungblood@tascam.com.
For possible inclusion, send information to tvtechnology@futurenet.com with People News in the subject line.
Vizrt
Vizrt has named Rohit Nagarajan as CEO, replacing Michael Hallén, who led the company for eight years. Nagarajan comes aboard with more than two decades of experience in sales and financial software technology, with roles at SAP, SoftwareONE and Salesforce. He’s tasked with focusing on international growth and enhancing offerings for customers, partners and users, Vizrt said. Hallén will remain on as chairman of NDI.
Elena Ritchie has been promoted to senior VP of video at Spectrum, the cable and connectivity brand operated by Charter Communications. In the new role, she’ll lead teams working on the operator’s video strategy, including video experience, hardware and software architecture and engineering, product and digital marketplace. She will report to Danny Bowman, Charter executive VP of product, and will continue to be based in Greenwood Village, Colo.
IHSE USA, a manufacturer of keyboard, video and mouse (KVM) and display management systems, has named Michael Spatny as its new managing director and CEO, succeeding Dr. Eno Littman. Spatny, who had been chief sales officer, brings to the role a special focus on building and leading international sales organizations and channel partner networks and a background in communication engineering, IHSE said. Littman will remain as an adviser and industry ambassador.
E.W. Scripps has promoted Matt Simon to VP of Scripps News, a new post responsible for the vision, overall leadership, editorial identity and programming for the station group’s national, centralized news organization. He had been deputy managing editor and senior executive producer at Scripps News, managing the production of distributed and syndicated content across Scripps News and Scripps Local Media, including the programs “Scripps News Reports” and “Good to Know.”
Dhanusha Sivajee was named senior VP and chief experience officer at Tegna, tasked with overseeing research, communications, brand, performance and life-cycle marketing, as well as developing consumer digital products. She’ll report to CEO Mike Steib. Sivajee was most recently chief marketing officer for Angi, where she was responsible for the company’s major rebrand into a one-stop shop for homeowners to find, schedule and book local service professionals for home projects.
Sinclair unit One Media Technologies has tapped Matthew Goldman as VP of strategic and technical initiatives. He’s tasked with leading the implementation, expansion and support of advanced technologies such as high dynamic range (HDR) video, managing conformance testing and developing operational handoff processes for ATSC 3.0. He has been with Sinclair since 2021 and most recently was senior director of media engineering and architecture.
SEAN PERKINS NAB
The National Association of Broadcasters has named Sean Perkins chief marketing officer and senior VP of Global Connections and Events (GCE), tasked with heading up marketing efforts for major industry events NAB Show and NAB Show New York. He reports to Karen Chupka, managing director and executive VP, GCE, at NAB. Perkins comes from the Consumer Technology Association, where he was VP of marketing, directing the comprehensive marketing and advertising efforts for CES.
ShowSeeker, the ad-tech firm behind the Pilot campaign and order management platform, has promoted Nick Anaclerio to senior VP of product strategy. Originally the provider of the ShowSeeker Plus content discovery system, the tech vendor has grown to offer a single order management platform for both pay TV providers and broadcasters. The company also said it plans to launch a Predictive Programming platform later this year, followed by predictive ratings algorithms in early 2025.