EMBRACES VIRTUAL PRODUCTION
A few weeks ago on a wet Wednesday morning just before 7am UK time, I watched Artemis 1 take off from Kennedy Space Center: the first step in the programme to put humans back on the Moon. As I watched the TV pictures of the rocket take to the sky, I got goosebumps and it reminded me of watching the first Space Shuttle launch. I was a young girl of six and my whole school year crowded into Mrs Clark’s classroom to watch the launch on one of those old TVs on a wheelie stand (you remember the ones).
We were all transfixed. The cousin of one of our teachers had worked on the shuttle and we had been following the plans for take off for months. Now, I don’t think anyone from Ermine Infant School ended up working at NASA but those memories mixed with the Artemis launch got me thinking about inspiration and TV’s role in inspiring anyone of any age to do something new. Not everyone is going to help return mankind to the Moon, but watching TV can inspire us to bake for the first time, take up dancing, or start playing a new sport.
It’s been quite a year for sport. From the Beijing Winter Olympics back in February to countless World Cups (ICC, Rugby League, FIFA), sport has been a mainstay of both linear and streaming TV during 2022, and while the amazing sportsmen and women can inspire young girls and boys to get off the sofa, would they have the same impact without the hard work of all the TV crews working behind the scenes to get those tournaments on our screens? I would argue not.
Inspiration is so important, not just in sports but also in our working lives. One of my highlights of 2022 was visiting the Rise Up Academy Summer School and meeting some of the students who were taking part. They told me how it opened their eyes to opportunities available within the TV industry. Now if that’s not inspiring, I don’t know what is. In our own way, TVBEurope is aiming to inspire with our Meet The… series. When we launched it back in April, I hoped it would help show those interested in the industry all the amazing roles there are available. But I’ve also had lots of feedback from people already part of the media tech industry who say they had no idea such roles existed!
Being inspired and inspiring others is important to us all. I can’t wait to see what inspirations 2023 has in store.
IN THIS ISSUE DECEMBER 2022
The road to a virtual Weatherfield
Earlier this year, virtual production made the leap from high-end TV productions to traditional linear TV as Coronation Street became the first continuing drama to use the technology. Jenny Priestley finds out how they did it
What does the future hold for MAM?
Asset management, in whatever form it takes, is an important part of any broadcaster’s or streaming service’s capabilities. TVBEurope asked experts in the field for their thoughts on MAM’s move to the cloud, the biggest driver of innovation, and challenges for the sector
The year in broadcast and media
Paolo Pescatore takes a look back at a year that started with hope and ends with uncertainty
HD ready
UKTV’s OTT platform UKTV Play has relaunched with brand new shows available in HD, while back-catalogue content is recoded in phases. UKTV’s head of digital product Declan Clarke takes TVBEurope through the process
A musical revolution
SphereTrax, a new music licensing platform, aims to revolutionise the way music is placed and licensed in the entertainment industry. Founder and CEO Sefi Carmel explains more
Making history
International production company and host broadcaster Quality Media delivered an all-remote multi-sport production for the XII South American Games in October. Chief production officer Pablo Reyes takes TVBEurope through the workflow
SMPTE plans for the future
Inspiring the next generation of engineers is key for the media tech industry to keep moving forward. Newly elected SMPTE officers Richard Welsh and Chris Johns lay out their plans to Russell Trafford-Jones
What are you talking about?
Kevin Emmott investigates how artificial intelligence can make the unintelligible, intelligible
How live production in the cloud is changing the game
By Mathieu Yerle, senior vice president of strategy, Chyron
The world of live sports production has traditionally been divided into two categories: the haves and the have-nots. The former are the conventional moneymakers – the top professional leagues, the high-profile international competitions, and so on – with well-established brands and large viewing audiences. The latter are lower-level regional and university leagues, and niche sports events.
While there are without doubt fans with a passion for these less-visible sports, it has been tricky for content producers to build a brand and an audience for competitions without the benefit of an OB van equipped with up-to-date production gear. And for those enterprising individuals out there trying to make it happen for the sports they love by creating their own MacGyver-style mobile production systems, without the six-figure budget that a broadcast-grade OB van requires, it’s a very steep challenge. Perhaps they successfully create the go-to channel for a specific sport and eventually work toward investment in a mobile production unit. But in doing so, they must overcome significant obstacles and limitations.
For someone looking to develop a global audience for a niche sport played worldwide, that means shipping precious gear all over the world, sourcing talent in different countries, and spending years just trying to build up stock and capacity. Each time that content producer chooses to make a jump up in production quality and raise the bar for their live streams, they need to make a significant new investment across all their gear. For every two steps forward, they take one step back.
Access to broadcast-quality tools in a cloud-native production environment changes all that. It removes barriers to building niche sports or smaller-market leagues into recognisable and increasingly popular and profitable entities. In the cloud, independent content producers have access to a complete tool set that they simply can’t buy outright. Today’s cloud-based production solutions give them the same tools and graphics horsepower on which the major players have built their brands: switching, CG-quality graphics, and even telestrated replay and audio mixing... the works.
Everything the big networks use to engage audiences with a sports event is now at the fingertips of a content producer motivated to bring their sport to a broader audience.
Production in the cloud isn’t free, but it’s economical in a way that allows dedicated content producers – a group of committed family and friends or enterprising player-fans of the sport – to quickly start building a brand and a following. It’s an investment in a few PTZ cameras, an encoder, and maybe $50 an hour in subscription fees. Gone are the time, budget, and functional constraints that come with being tethered to physical production gear.
And because the long, painful process of working toward [investment in] an OB van is no longer a prerequisite to creating live content that looks great, content producers can immediately begin to offer engaging sports coverage and stories that capture the attention of fans. They can build off high production value and growing viewer interest to bring sponsorship opportunities into the live production. These factors work with each other to bring the fan base together and build new audiences; a viewing community with a shared interest in the sport and its athletes.
Depending on the complexity of the show, a solo operator could produce it alone or work with other contributors (who can log in to the same production from anywhere and fulfil different roles as needed). Because functionality is built into a web browser rather than siloed by physical systems, production team members can take on new and expanded roles. The same talent doing commentary at half-time or after the match might be grabbing highlights or doing telestration during live play.
While cloud-based live production is a huge win for the ‘little guy’ because it bridges a gap in access to resources, it is in fact an opportunity for all players, small and large. Larger broadcasters can use the same cloud environment and tools to test out new channels and markets, maintaining the network look and feel without shifting resources away from the traditional moneymakers.
By reducing costs, simplifying logistics, facilitating remote collaboration, and providing web-based access to top-tier production tools, the cloud-native live production environment is empowering content producers of all sizes to develop vibrant new sports platforms, brands, viewing audiences, and sponsor communities.
Technology for the next generation
By Dr Bill Garrett, CTO at BHV
I don’t mean to write a ‘dark skies’ story about the future of broadcast equipment, but one must ask the question, ‘as an industry are we delivering the appropriate kit for the workforce of tomorrow?’. Of course, if we adapt, the risks are mitigated, but my time at IBC 2022 says that some manufacturers need to look closely at their product set.
I’ll start by presenting a scenario. A market that many ‘clear thinking’ kit manufacturers have moved into is pro AV. Corporate communications, houses of worship and many other previously ‘non-broadcast’ organisations have made steps towards becoming content providers. Sales of PTZ cameras, which had already been enjoying growth, have rocketed along with all the associated peripherals, mini vision mixers and streaming gateways.
However, the scenario in my head involves three smartphones on lighting sticks, their 8K image sensors enabling virtual pan, tilt and zoom, some clever warping to give the image that ‘mechanical movement’ feel, with their multiple image sensors providing numerous shot options, and sound via Bluetooth microphones. The producer, assuming there is one, sits at a tablet screen selecting the appropriate shots; AI is tidying the picture quality by automatically balancing colour and brightness, and is probably sorting audio too. The devices’ 5G connectivity streams all the content to the cloud. The total count of traditional broadcast manufacturers involved: zero!
All of this can be done right now. Sport producers are already using smartphones connected with 5G. What will make this scenario a reality is not cost but usability. All the functions are delivered by apps and a cloud-connected infrastructure, hence there’s no cabling. Production power is not heavy v-lock batteries but three USB power packs picked up at the service station.
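The ‘virtual pan, tilt and zoom’ in that scenario is, at heart, nothing more exotic than cropping a window out of a very high-resolution frame and rescaling it to the output size. A minimal sketch of the idea follows; it is illustrative only, not any manufacturer’s implementation, and the frame source, output size and OpenCV dependency are assumptions:

```python
import numpy as np
import cv2  # OpenCV, assumed available

def virtual_ptz(frame: np.ndarray, cx: float, cy: float, zoom: float,
                out_w: int = 1920, out_h: int = 1080) -> np.ndarray:
    """Simulate a PTZ camera by cropping a window from a high-res frame.

    cx, cy  -- normalised centre of the virtual camera (0..1) = pan/tilt
    zoom    -- 1.0 is the widest crop that fits the output aspect ratio;
               larger values zoom in
    """
    h, w = frame.shape[:2]
    # Widest crop with the output aspect ratio, shrunk by the zoom factor
    crop_w = min(w, h * out_w / out_h) / zoom
    crop_h = crop_w * out_h / out_w
    # Clamp the crop window so it stays inside the sensor
    x0 = int(np.clip(cx * w - crop_w / 2, 0, w - crop_w))
    y0 = int(np.clip(cy * h - crop_h / 2, 0, h - crop_h))
    crop = frame[y0:y0 + int(crop_h), x0:x0 + int(crop_w)]
    return cv2.resize(crop, (out_w, out_h), interpolation=cv2.INTER_LINEAR)

# Example: 'pan' to the right of an 8K frame and 'zoom' in 2x
frame_8k = np.zeros((4320, 7680, 3), dtype=np.uint8)
output = virtual_ptz(frame_8k, cx=0.7, cy=0.5, zoom=2.0)
```

Warping and stabilisation would be layered on top of a crop like this to give it the ‘mechanical movement’ feel the scenario describes.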
If you are unsure that the new workforce of tomorrow really will be different then let me explain a situation facing the automotive industry by posing a question. ‘Would a child born today learn to drive a manual gearbox car? Would a child born today learn to drive at all?’ With the phasing out of hydrocarbon fuels, the first part is easy to answer with a resounding ‘no’! The second part needs a little forethought. There will be social changes that affect the choice to own and drive, and there is also the question of autonomous vehicles and how far advanced they will be. As it takes about seven years for a car to leave the drawing board and enter the showroom, automotive manufacturers have just over two vehicle design cycles to work out what to do. Worse still, the numbers are already in: the total number of UK driving tests taken in 2016 was around 500,000; by 2019 the pre-pandemic figure was 400,000. Car makers are currently scrabbling around for ideas. Let’s not find ourselves having lacked forethought.
Deep breath, now back to broadcast. There can be a bright future when we recognise the threats. But our industry offering needs to change. I will spare the blushes of some manufacturers, but I saw new products on show at IBC that had an interface featuring a two-line text-based LCD display, with a ‘helpful’ rotary encoder! Setting up this ‘sophisticated and modern network-aware’ piece of equipment is like looking through a letterbox. Ovens and coffee machines have better. It is an interface that has been around for 40 years; it’s clunky, awkward and two generations away from the industry entrants who spent the pandemic glued to an iPad.
There are cheap and easy solutions to improve the interface between equipment and user. Technologies developed for smartwatches now enable cost-effective rich user interfaces without the need for a stack of silicon or the headache of a Linux subsystem. Development tools released in the last 18 months enable designers to deliver graphical, icon-driven UIs with gesture control.
There is no need for kit designed today to have a text-only interface. This change brings the opportunity of a tutorial-driven ‘at the device’ guidance through an interface capable of providing it. Users will get much more from your product, thus enhancing your reputation for user experience.
Designers need to think about connectivity too. A good number of engineers have quit the industry when asked to swap the BNC for a CAT5 cable, because a trip to the classroom to learn new skills later in life didn’t sound appealing. When connectivity sorts itself out, the frustration and fear subside. There are networking technologies that can resolve the connectivity challenge, with 5G an important stepping stone because devices leave the complex connectivity overhead to a telco.
Every place in the chain where there is overhead and manual activity needs to be under review, because the marketplace is already reviewing it from the inside and out. The new workforce of tomorrow will influence what gets used, and within a short time that workforce becomes the decision-maker. If the industry hasn’t adapted, those dark skies will be upon us.
How KVM is changing the broadcasting game
By Jamie Adkin, VP EMEA at Adder Technology
The world of sports has seen a rapid adoption of technology in recent years, both in and away from the playing arena. But technology has not just elevated what you see on-screen or at the stadium; real-time control over information streams, at the flick of a switch, has transformed the behind-the-screen functionality.
Broadcasting live sports requires control, reliability, efficiency, and adaptability. Missing any aspect of the game simply isn’t an option; it impacts the fan experience wherever and however they are engaged with the action. The synchronisation and co-ordination of multiple video feeds, audio, on-screen graphics and live action at exceptional speed is a huge challenge for sports broadcasters and those responsible for the big screen stadium experience. To this end, we continue to see the adoption of internet protocol (IP) technology as the bedrock of flexible connectivity in broadcast IT ecosystems.
Keyboard, video and mouse (KVM) has become a core technology as the most reliable, robust, and adaptable infrastructure for enabling access to, and control of, all the broadcast IT systems. KVM connectivity is commonplace for video assistant referees, studio management and control rooms delivering live feeds, as well as live in-stadia broadcasting on huge multiscreen video walls.
PUTTING BROADCASTERS IN CONTROL
Live sports broadcasts need to be delivered in high quality, with real-time control, for fans and spectators to have an immersive experience. Capturing footage from myriad viewpoints in high resolution, and broadcasting with consistent quality and speed, is vital. Cue IP KVM technology: the perfect solution for producers, operators and technicians working from an outside broadcast truck. Purpose-built, small form-factor IP KVM transmitters save crucial space when it’s at a premium and enable the production crew on board to access multiple video and computer feeds directly.
When it comes to matters of fairness in a sports game, KVM is again mission-critical in enabling virtual match officials to replay and switch between different cameras, giving them all the angles needed in pixel-perfect clarity to make informed and fair decisions.
ENABLING COLLABORATION AND TEAMWORK
Ensuring the quality and consistency of the live stream is maintained during game-day is essential. As broadcasting teams will need to respond to action in real time, it’s important that multiple teams are able to contribute to content presented to the viewer, such as on-screen graphics, advertising, and slow-motion replays.
The teams in the control room (on-site, or in OB trucks), on the studio floor, and those located around the stadium need to work in unison to ensure the broadcast runs as smoothly as possible. Here, IP KVM can facilitate access to vital data needed by hosts, analysts, and expert pundits. Video replays, performance metrics and analysis tools can be delivered directly to their in-studio devices, while the powerful computers driving them are installed in dedicated technical areas.
Real-life examples of such collaboration can be found within many sporting stadiums where ‘mega screens’ are becoming increasingly popular for broadcasting the game and associated content in the stadium itself. Here, IP KVM’s ability to share screens and data improves production workflows, optimising and enriching the experience.
KICKING OFF INTO THE FUTURE
Ultimately, IP KVM can be found at the core of live broadcast; the unseen ‘player of the match’. Broadcast operators can sit at any workstation on a network, log in, and deliver a pixel-perfect stream, without ever knowing that the small receiver under their desk is the lynchpin to making it all happen.
KVM makes the live sports broadcast workflow seamless and puts every person in that workflow in total control of the content we enjoy as our team battles to win.
However, as viewer demands increase, so too must the capabilities of the technology that supports it. Whilst the tools at a user’s disposal allow high fidelity and real-time control over a stream, innovations within the audiovisual sector require outputs of 4K, 5K and even higher, without compromising responsiveness.
All this reinforces the symbiotic relationship between IP KVM and the broadcast industry as a whole as innovation drives the world forwards.
TVBEurope’s website includes exclusive news, features and information about our industry. Here are some featured articles from the last few weeks…
EXCLUSIVE: ANDREW CROSS STEPS DOWN AS GRASS VALLEY CEO
Grass Valley CEO Andrew Cross has stepped down from his role, but will remain working with the company as an advisor.
Cross took the decision following a restructure of the company due to market conditions, explained Grass Valley CMO Neil Maycock, speaking exclusively to TVBEurope.
CREATING
https://bit.ly/3i7CwL6
A year after numerous UK broadcasters and streaming services signed the Content Climate Pledge at Cop26, TVBEurope catches up with BBC Studios’ Sally Mills to discuss the company’s commitment to net zero and how suppliers can help it reach its target.
https://bit.ly/3EZ1LrV
BT SPORT COMPLETES PROOF-OF-CONCEPT CLOUD-BASED LIVE BROADCAST
BT Sport has completed a proof-of-concept trial of producing a UEFA Youth League match fully in the cloud. The match between Tottenham Hotspur and Sporting CP at the Hotspur Way Training Ground in Enfield was the first UEFA club match produced in the cloud,
showcasing the future roadmap for live football production options, as well as the possibility of significantly reducing sustainability footprints in comparison to a traditional outside broadcast workflow.
https://bit.ly/3Vn6jO5
“When we worked on strategy with Andrew as a management team, and when we looked at the requirements for the immediate term, Andrew didn’t really feel he was the right person to lead the company through that phase of transformation,” said Maycock.
https://bit.ly/3VGxWCf
MAIDSTONE STUDIOS CELEBRATES 40TH BIRTHDAY
Maidstone Studios is celebrating 40 years since its doors first opened in 1982.
The site was first earmarked as a possible production base in 1979 and hosted its first live transmission in autumn 1982.
Over the years it’s been home to shows from Art Attack to Take Me Out and Later…With Jools Holland, and last year it hosted ITV’s coverage of Euro 2020.
https://bit.ly/3U64qEH
CASE STUDY: BASELIGHT HELPS GEORGE MILLER ACHIEVE HIS VISION FOR THREE THOUSAND YEARS OF LONGING
Eric Whipp started his career as a colourist in the mid 1990s, specialising in films. In 2007 he worked with George Miller on the Academy Award-winner, Happy Feet, and the pair reunited in 2014 for Mad Max: Fury Road.
Today, Whipp is co-founder and senior colourist at Alter Ego in Toronto, Canada’s leading colour and VFX facility. His latest project was yet another
collaboration with George Miller: Three Thousand Years of Longing.
This film is a contemporary retelling of the story of a djinn (genie). It follows a lonely scholar (Tilda Swinton) who, on a trip to Istanbul, discovers a djinn (Idris Elba) who offers her three wishes in exchange for his freedom.
https://bit.ly/3XxEBjy
CLIMATE-CONSCIOUS POST PRODUCTION FACILITY BUMBLEBEE TAKES FLIGHT
Earlier this autumn, Zinc Media Group announced the launch of Bumblebee, a climate-conscious, work-anywhere post production solution that enables editors and producers to work from anywhere around the UK. TVBEurope caught up with Zinc’s chief technology officer Olly Strous to find out more about the service and the technology behind it.
https://bit.ly/3XwZaN0
HOW DO YOU CREATE THE SOUND OF MARS?
Sound designer Mark Mangini is no stranger to working on major sci-fi movies having won Oscars for his work on Mad Max: Fury Road and Dune. He’s also credited with recording and editing a new sound for Leo the Lion, MGM’s mascot.
Mangini’s latest project takes him literally out of this world, creating the sound for Good
Night Oppy, a documentary that charts the story of Opportunity, a NASA exploration rover that was sent to Mars for a 90-day mission but ended up surviving for 15 years.
https://bit.ly/3EX6TwS
THE ROAD TO A VIRTUAL WEATHERFIELD
Earlier this year, virtual production made the leap from high-end TV productions to traditional linear TV as Coronation Street became the first continuing drama to use the technology. The production team worked with Manchester’s Recode XR Studios on the project. Jenny Priestley finds out how they did it
Being the first is always scary, but being the first when you have an audience of millions brings even more pressure. That’s what the production team on ITV’s Coronation Street faced when it became the first ever serial drama to employ virtual production. For a show that’s been on air for over 60 years, it was a huge move forward.
The producers decided to use the technology for the culmination of character Kelly Neelan’s storyline, which included scenes of Neelan standing on a rooftop overlooking Manchester. In order to get the shots they wanted, the Coronation Street production team worked with Manchester’s Recode XR Studios.
Recode’s journey to the Weatherfield cobbles began two years ago when, during lockdown, co-founder and virtual production supervisor
Paul McHugh read an article stating that many AV companies were not as active during the pandemic. McHugh, who spent ten years in the games industry and has a background using Unreal Engine for AR and VR projects, took to Google to find companies in the Manchester area who might be interested in working together to explore virtual production opportunities.
He reached out to Gareth Turner and the two decided to collaborate initially on a trial of a six-by-three metre backdrop. “We had an HTC Vive from some VR work that I’d done and you can use that as a tracking system,” explains McHugh. “We got a small Blackmagic Pocket 4K camera, stuck the Vive on top of that, and then got it all working in 3D. We did a job with that actually, but we had the wall built up to 12 metres by 4 metres with 6mm panels, which are the equivalent of
outdoor panels... we’d never really film on them. But when we did this job and when people saw what we’d done they couldn’t believe how good it was.”
A FULL-SERVICE COMPANY
That led to the founding of Recode XR Studios, which has gone from strength to strength over the last couple of years. The pair have since upgraded their panels and also acquired an Ncam tracking system and other technology for a full-on virtual production studio.
However, it’s not been easy, as McHugh explains: “Even though we know Unreal Engine extremely well, and we know the cameras extremely well, when you bring them and the wall and the tracker all together, it’s incredibly complicated and can have big pitfalls. There are
lots of things you can do wrong, but over the last two years we’ve made sure we’ve learnt what all the pitfalls are and how to work around them.”
McHugh adds that Recode’s unique selling point is the fact that it offers the whole pipeline of virtual production, from asset creation to filming and post production. “When we’re working with other directors and DoPs who are not experienced at all in the process of virtual production we’re able to guide them through it because it is a bit of a minefield if you don’t know what you’re doing,” he adds. “We also have the skills to render content that’s captured in-camera. We can do all the visual effects work as well. Whenever someone sees anything that we’ve worked on they ask, ‘was that filmed at the studio?’, and that always leads them on to believe that we know what we’re talking about.”
McHugh and Turner tapped into virtual production at just the right
time. Disney Plus launched in the UK right at the start of the first lockdown. The Mandalorian gave many viewers, and people working in the industry, their first look at how virtual production could be used for the small screen. Since then the industry has seen numerous companies opening virtual production stages.
McHugh admits he’s not surprised that virtual production has been welcomed with open arms, not least because of the current focus on sustainability. “We can create an environment and put it on the LED screens in about a minute,” he states. “If that was a real set, then obviously that’s not possible.”
The Recode team can also customise a set in real time, move furniture around, change the lighting, or even change the colour of the walls, whatever the client wants. “When we work with advertising companies they can really see the benefit of the technology because we can create so many versions. We did a test where we kept the main foreground the same, and we put it in six different environments, and we did that in two hours,” adds McHugh. “From the point of view of broadcast production companies for drama and the like, other benefits are the fact that you can go into locations that you cannot get to, or [where] it would be astronomically expensive to actually film.”
Testing is important to the team at Recode. As well as holding a number of test days themselves, they will often work with a client on a test before the camera officially starts rolling. “At the end of the
day, you’re using a really complicated piece of software with the game engine. From a client’s point of view, I think it’s important that they are able to deal with a company that’s able to provide that full service. You can’t underestimate how important that is,” says McHugh.
CORONATION STREET
The Coronation Street opportunity came about via word of mouth. “Somebody told somebody who told somebody,” laughs McHugh. “Out of the blue they called up and we had some of the senior post production team come over for a visit because they’re based only about ten minutes away from us. They were amazed at the work that we showed them and that gave them the confidence to trust us.
“The job wasn’t just handed to us, we had to jump through a few hoops because technically they have certain requirements. We had to make sure that we could meet that. We did a test and then we came up with a couple of alternatives in terms of what the pipeline would be. They liked that, so then we started working with the director on his vision, and then the art director.”
The storyline involved Neelan standing on the edge of a wall with Manchester (or Weatherfield) in the background. The Coronation Street team had identified a mill that they liked the look of, but of course they couldn’t shoot at the real-life location because of health and safety. It was also 23 miles away from the city. The team from Recode travelled to the mill, took photos for reference, and used photogrammetry to turn those photos into models. “Those models were no good for virtual production because they’re way too detailed, so we made low-poly versions and then projected the high-poly versions onto the low-poly models to get the details through displacement maps,” explains McHugh.
The Recode team also adapted the look of the mill for the director’s needs, taking away some of the building and adding two towers and extending one of the floors to give it a bigger drop to the area below. “We also did all the other full CG shots,” adds McHugh. “There’s a driving shot and David Beauchamp, the director, wanted a shot where the camera goes straight between the lights, so that was full CG. There were three other full CG shots including a drone shot of the rooftop.”
McHugh continues: “We worked really closely with the art department at Coronation Street and with David Beauchamp as to what his vision was. He came into the studio and we spent a full day with him going through all the shots, positioning the camera in a virtual world. We’ve got a model of our studio, and we can position the camera where it would be and show what that would look like in the actual camera. So, we were able to get all these things blocked off and David could use that as his bible because he knew the shots. It was the first time that he’d done anything like that, so it was an eye-opener for him and it gave him the confidence that the shot was going to work.”
A BLACKMAGIC RAW PIPELINE
Throughout the shoot, McHugh and his team used technology from Blackmagic Design, including the URSA 12K and a Blackmagic URSA 4.6K. “I think the Blackmagic codec is brilliant for studio work,” he says, “We also use Resolve. If there was a Desert Island Discs for software, I’d take Resolve. Our whole pipeline is based on it. I think Blackmagic
is one of those companies that is game-changing. It’s democratising the whole production chain.”
Working on virtual production for a much-loved show, and it being the first time the production team had used the technology, meant there was added pressure for McHugh and his team. When the Coronation Street team visited the studio for the first test day some 50 people turned up to see what was going on. “There’s normally five or six of us on set,” says McHugh. “I always describe it as being the Bang and Olufsen of production. The only people there are the ones who are really necessary.
“But with this, there were so many people; some were there just to see it, they weren’t part of the shoot. I was working on the camera and telling people what the Wi-Fi password was and where the toilet was. It couldn’t have been more stressful. Then we had another test shoot because there were some issues around the camera placement between where the Corrie team wanted it and where it would actually work. The actual final shoot lasted five days in total.”
However, the team worked through all the challenges, providing around ten minutes of footage for the scenes involving Neelan. McHugh says by the end of the shoot the Coronation Street team had complete confidence in what Recode were doing. “At the end of the shoot, the head of post production said that we’d done an outstanding job. They’d never had a company deliver in the way that we did.
“They invited us down to see the episode before it aired, which was a bit nerve-wracking, but I found myself watching it with a totally different perspective. David Beauchamp said it had been an amazing experience, and if he could work with virtual production for the rest of his career, he would.”
When the scenes aired in September, McHugh ventured onto Twitter to see what the audience’s reaction would be. “The comments weren’t about the work we’d done, they were all about the action and the story,” he says.
“One comment I did notice was someone said, ‘I thought this was going to be a big visual effects thing, where are the visual effects?’ That means we’ve done a good job if the viewer can’t see them. That’s exactly what you want to do.”
SPORTS BROADCASTERS EMPLOY AI TO EXPAND FAN ENGAGEMENT
From highlights to cameras, artificial intelligence is becoming ever more present in sports broadcasting. Peggy Dau investigates
Sporting events have long been a driving force for technology innovation. Whether it is global quadrennial events like the FIFA World Cup or local youth sports, they all have built-in fan bases that crave access, engagement and immersion with their favourite teams. The latest technology breakthrough is the use of artificial intelligence.
With objectives for streamlining and expanding sports broadcasting while simultaneously improving fan engagement, broadcasters and sports leagues are adopting AI-powered solutions. Leveraging algorithms to perform tasks that normally require human intelligence – such as visual perception, speech recognition and decision-making – these solutions invite fans to enjoy their favourite teams and players in new ways.
This is where IBM Watson plays a key role for both the Wimbledon and US Open tennis tournaments, providing the intelligence behind the IBM Power Rankings, which correlate data observed in thousands of news sources with facts about a player’s performance. The data collected and analysed includes more than 22 factors such as win ratio, quality of win, injury status,
tournament progression and win margin, resulting in a series of predictive insights highlighting the players to watch, upset alerts, likelihood to win, and more.
The Power Rankings are combined with IBM Match Insights which assesses on-court action. Using natural language processing, IBM Watson searches millions of news articles, blog posts and other online media to collect factoids about players. The resulting statistics are helpful to both fans and broadcasters when previewing upcoming matches. They reflect the strengths of each player and how they compare to competitors with specific characteristics and styles of play. The results of these combined AI-generated analyses are published to tournament websites along with personalised recommendations for highlight reels based on a fan’s interest in certain players.
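IBM’s models are far more sophisticated and are not public, but the basic idea of fusing structured performance factors with a mined ‘media buzz’ signal can be illustrated with a toy weighted score. The factor names, weights and numbers below are invented purely for illustration:

```python
# Toy illustration of fusing performance factors with a media-buzz signal
# into a single 'power' score. Factor names and weights are invented for
# illustration; they are not IBM's actual model.
WEIGHTS = {
    "win_ratio": 0.35,       # share of recent matches won
    "quality_of_win": 0.25,  # strength of opponents beaten (0..1)
    "fitness": 0.15,         # 1.0 = fully fit, lower if carrying an injury
    "media_buzz": 0.25,      # positive-mention score mined from news and blogs
}

def power_score(player: dict) -> float:
    """Weighted sum of normalised factors, each expected in the range 0..1."""
    return sum(WEIGHTS[k] * player.get(k, 0.0) for k in WEIGHTS)

players = [
    {"name": "Player A", "win_ratio": 0.82, "quality_of_win": 0.70,
     "fitness": 1.00, "media_buzz": 0.55},
    {"name": "Player B", "win_ratio": 0.74, "quality_of_win": 0.85,
     "fitness": 0.80, "media_buzz": 0.90},
]

rankings = sorted(players, key=power_score, reverse=True)
for rank, p in enumerate(rankings, start=1):
    print(f"{rank}. {p['name']}: {power_score(p):.2f}")
```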
Using AI to augment sports broadcasting takes many forms. WSC Sports is aiming to enrich the fan experience using its cloud-based AI platform to automate and accelerate the creation of sports highlights. This is done using action detection and event recognition algorithms. Its solution enables sports media rights owners to create personalised and customised videos, in real time, resulting in new experiences for fans everywhere.
WSC’s proprietary algorithms start by indexing the broadcast feed. The logic behind the algorithms is informed by experts teaching the system the rules of each sport. Machine learning is used to analyse audio, video, data, graphics, objects, and scene changes. It also assesses commentary for volume and pace. Data is ingested from multiple sources across the sports industry to provide additional context about what is taking place during the game.
WSC automatically creates play-by-play clips and associated metadata, for organisations such as the NBA, DAZN, Bleacher Report and PGA Tour as well as other sports leagues and media organisations around the world. With workflows to overlay graphics, the clips can be published to any destination (e.g., TikTok, YouTube) using API integrations. The efficiency of the platform eliminates bottlenecks in content creation, providing fans with snackable content available on websites and social media channels. “We know that fans are consuming sports content on their own terms, whether on their phones, OTT, on social media, or live,” explains Itai Epstein, director of business development, head of EMEA and China, at WSC Sports. “With our real-time highlights that can be delivered anywhere, anytime, in any shape and form, we are helping our partners get the content directly into fans’ hands, building engagement and fandom along the way.”
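WSC’s algorithms are proprietary, but one of the signals mentioned above, the volume and pace of the commentary, is easy to illustrate: flag stretches where the commentary track is much louder than the programme average as candidate highlights. The window length and threshold below are assumptions, and a real system would combine many more signals:

```python
import numpy as np

def candidate_highlights(audio: np.ndarray, sample_rate: int,
                         window_s: float = 2.0, threshold: float = 2.0):
    """Flag windows whose RMS loudness is well above the programme average.

    audio     -- mono commentary track as float samples
    window_s  -- analysis window length in seconds
    threshold -- how many times the mean RMS a window must exceed
    Returns a list of (start_seconds, end_seconds) candidate clips.
    """
    win = int(window_s * sample_rate)
    n_windows = len(audio) // win
    rms = np.array([
        np.sqrt(np.mean(audio[i * win:(i + 1) * win] ** 2))
        for i in range(n_windows)
    ])
    mean_rms = float(rms.mean()) or 1e-9
    return [
        (i * window_s, (i + 1) * window_s)
        for i, level in enumerate(rms)
        if level > threshold * mean_rms
    ]

# Example with synthetic audio: quiet noise with one 'roar' at 60-64 seconds
sr = 16_000
audio = np.random.randn(sr * 120) * 0.01
audio[sr * 60:sr * 64] += np.random.randn(sr * 4) * 0.5
print(candidate_highlights(audio, sr))  # -> windows around the 60-second mark
```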
The use of artificial intelligence is not limited to professional leagues and sports broadcasts. Pixellot believes its automated sports production platform, powered by AI, machine learning and camera arrays, will democratise sports at all levels. The company says it has automated the entire process of capturing, producing and streaming live sports in a way that is accessible to the smallest of sports organisations. Its system, which integrates mobile cameras, GoPro cameras or its own Pixellot Prime cameras with its back-end platform, has been installed in over 15,000 courts and fields around the world. Captured content is stored locally and in the cloud for editing and streaming.
Pixellot Prime includes multiple panoramic cameras capturing the entire field of play. Algorithms based on game rules for more than 19 sports provide the artificial intelligence that helps the cameras follow gameplay, zoom in on key action and create highlight clips. The Pixellot cameras simulate the movements of a human camera operator to capture what’s happening, including switching between cameras. For example, on a cricket or baseball field, one camera is focused on the batter. Once the batter hits the ball, a second camera takes over, automatically tracking the flight of the ball and the actions of players to field the ball. The Pixellot system provides production tools to edit and tag game highlights, add graphics, share clips and broadcast or stream live sports. This allows coverage of games that might not normally be broadcast or streamed as they would require the presence of a human armed with a camera.
Yossi Tarablus, associate VP of marketing at Pixellot, explains: “Digital transformation, allowing fans to engage with content remotely, is becoming a necessity. We enable sports teams, organisations and clubs to broadcast, analyse and monetise games that were previously unavailable to coaches, players or fans.”
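The batter example above boils down to a simple hand-off rule: stay on the close camera until a ‘hit’ is detected, then cut to the tracking camera until the play ends. A toy sketch of that rule follows; the event names are assumptions and this is not Pixellot’s actual logic:

```python
# Toy camera hand-off for the batter example described above.
# Event names ('hit', 'play_over', ...) are illustrative, not Pixellot's API.

class AutoSwitcher:
    def __init__(self):
        self.active = "batter_cam"   # close-up camera on the batter

    def on_event(self, event: str) -> str:
        """Return the camera that should be live after this event."""
        if event == "hit" and self.active == "batter_cam":
            self.active = "tracking_cam"   # follow the ball and the fielders
        elif event == "play_over":
            self.active = "batter_cam"     # reset for the next delivery
        return self.active

switcher = AutoSwitcher()
for event in ["pitch", "hit", "catch_attempt", "play_over", "pitch"]:
    print(event, "->", switcher.on_event(event))
```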
Sport provides a unique challenge for AI as our brains use existing knowledge to identify actions taking place, characteristics of players or game strategies being used. We understand that when a foul occurs in football, a free kick or penalty is awarded. Play will stop and restart. Camera angles will shift between players. Highlight reels will need to be created. Humans are training artificial intelligence platforms to recognise patterns related to the sport itself.
The adoption of AI by sports broadcasters creates greater coverage of more sports in more places. The opportunity is to use AI-enabled platforms to create more content to feed fans’ appetites for more data and more video, wherever they are.
WHAT DOES THE FUTURE HOLD FOR MAM?
Asset management, in whatever form it takes, is an important part of any broadcaster’s or streaming service’s capabilities. TVBEurope asked experts in the field for their thoughts on MAM’s move to the cloud, the biggest driver of innovation, and challenges for the sector
HOW FAR AWAY IS MAM FROM MOVING COMPLETELY TO THE CLOUD? WILL THAT EVER HAPPEN?
Jeremy Bancroft (JB), director, Media Asset Capital: According to recent research from Omdia, in excess of 50 per cent of broadcasters surveyed stated that they have already deployed MAM in the cloud. We are not aware of any vendor that is currently focused on on-premise MAM solutions. Every new vendor that we have researched is offering cloud-based solutions, and legacy vendors have either completed or are in the process of migrating to hosted, cloud-based MAM solutions.
Kerem Can (KC), VP solutions, asset management, EVS: That’s a very tough question, for which there is no simple answer. The shift to the cloud is already happening. While many providers now propose full cloud MAM solutions, not all of them have begun that journey. What I can say for sure is that the market has changed a lot in recent years, and a MAM should be much more open and flexible than before. Today, more and more stakeholders need to access the MAM content directly, such as OTT platforms, partners or even sponsors. Another driver is the replacement or upgrade period associated with massive equipment purchases that happened six to ten years ago. All those systems need to be upgraded to respond to the evolving needs of our industry.
Of course, the cloud is an important part of this evolution, but it comes with many limitations and concerns, such as security, connectivity, storage and cost. Another very important point is the migration of existing content to a new system. Therefore, I think it’s a legitimate question and we need to provide a proper answer to the emerging needs, but with an iterative and balanced approach. At EVS we have a ‘balanced computing’ approach that consists of the best of both worlds. In short, a hybrid approach where we use the existing systems to offer more possibilities thanks to cloud-based services.
Stephen Tallamy (ST), CTO, EditShare: I think there are two important, but linked, points here. First, I think the old idea of an asset management system as a means of tracking metadata and content, is completely outdated. Asset management is an enabling technology in the way you define and operate the workflows in your business.
The second point, then, is that if you are talking about your workflow orchestration layer being in the cloud, then it depends on whether your workflows sit naturally in the
cloud. That is a business and operational decision. I think increasingly production and post companies are finding advantages in managing at least some of their assets in the cloud, and that will probably increase. But it is not the technology that is driving this, it is the need for the workflows.
Julian Fernandez-Campon (JFC), CTO, Tedial: The industry is moving to cloud-based supply chains but the speed of transition will vary from company to company depending on business requirements. In some instances, companies require a very specific content supply chain where everything can be done in the cloud. The main advantage of a full cloud solution is that the customer can focus on the business and not worry about the infrastructure, the upgrades, etc; but there’s a cost.
Other companies, when transitioning to the cloud, are asking for immediacy of services, so that they can start spinning up services in the cloud without having to wait for hardware and start working in a few minutes. But some customers still prefer to work on-prem because of the costs. The ideal solution is the hybrid approach, where users can have the operation on-prem because it’s cheaper and under their control. If they need to do something in the cloud, they can do it cost-efficiently because content is not moved up and down and can be in every location depending on the workflows.
Simon Bergmark (SB), chief product officer, Codemill: Media asset management is quite a broad term. It covers both the storage of assets, as well as the management of those assets. There are benefits to be had from separating out the two processes. Our own MAM has been hybrid for some time, and we are moving towards making it a more cloud-native system.
With a hybrid approach, users can have the best of both worlds. It is possible to get all the benefits of the cloud by utilising it for asset management and orchestration, but at the same time storing assets where they are best suited. That might be on-premises, or in nearline cloud storage, or if the content needs to be kept for posterity but will not be accessed regularly, it can be moved to deep archive storage such as Amazon Glacier. It’s important that media businesses are able to be flexible and have the right type of solution in place for their needs. For some, it might make sense to completely move MAM to the cloud, but that does not apply across the board.
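That hybrid model ultimately comes down to a placement policy: deciding, per asset, which tier it should live on. A minimal sketch of such a policy follows; the tier names, thresholds and inputs are assumptions rather than any vendor’s rules:

```python
from datetime import date, timedelta

def choose_tier(last_accessed: date, in_active_production: bool,
                retention_only: bool) -> str:
    """Toy placement policy for a hybrid MAM.

    Tiers (illustrative): on-prem for work in progress, nearline cloud
    for recent material, deep archive (e.g. Amazon S3 Glacier) for
    content kept only for posterity.
    """
    age = date.today() - last_accessed
    if in_active_production:
        return "on-prem"              # low latency for editors
    if retention_only or age > timedelta(days=365):
        return "deep-archive"         # cheapest storage, slow retrieval
    if age > timedelta(days=30):
        return "nearline-cloud"       # cheap, minutes to restore
    return "cloud-standard"           # instantly streamable

print(choose_tier(date.today() - timedelta(days=3), True, False))     # on-prem
print(choose_tier(date.today() - timedelta(days=400), False, False))  # deep-archive
```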
WHAT DO YOU FORESEE AS THE BIGGEST DRIVER OF INNOVATION IN MAM IN THE SHORT TERM?
JB: SaaS business models have become very popular with institutional investors. Businesses with reliable, predictable, monthly revenues often find it easier to attract
investment from the financial community. Most newly-minted MAM vendors have launched with SaaS models, and many legacy MAM vendors are transitioning to SaaS; although this can take time, it will have a profound effect on those companies.
From a technology point of view, ‘marketplaces’ (such as the AWS Marketplace) allow system vendors to incorporate micro-services from other vendors. Examples are auto QC, frame rate conversion, metadata enhancement through voice-to-text services, etc. The availability of these micro-services will both accelerate the development of new MAM capabilities and provide greater choice to customers.
KC: Asset management is key, and more specifically asset location management. It is important to have different storage levels with different accessibility and cost, and to offer a best-in-class user experience by hiding the complexity of this undertaking.
ST: Finding smart ways to empower remote working, which maximises productivity and uses resources (including interconnectivity bandwidth) to the best effect. That includes the ability to provide intelligent links between storage silos, including the cloud, and using remote working – even up to editing in the cloud using desktop emulation software like Teradici – to let the creative talent work the way that is best for them.
JFC: MAM technology has come a long way in a very short time and we’re now coming through the next stage of development as an industry. Traditionally, MAM systems have been extremely complex but that has changed with the introduction of technology from the IT industry. Now, business teams can use existing digital technology to develop new modular products and services without starting from scratch every time. This is great news for IT departments because it gives developers back an element of freedom and creativity that they previously haven’t had time to explore. By using modular principles, based on components or modules, systems contain several components with well-defined functionality that fit together seamlessly. This allows M&E companies to easily adapt to changing business scenarios and to be innovative in these approaches.
SB: Customers want to use the best high-end solutions for specific tasks, and they want these tools to all tie in to a central MAM system. It’s definitely a pick-and-mix approach when selecting tools and infrastructure, so seamless navigation and consistent use of metadata are crucial to stop teams working in silos. Media organisations need the best editing solution, the best subtitle solution,
the best tools for QC and validation, and so on. There is a real need for better integration so that customers can choose the best tools for the job, rather than settling on an all-in-one solution that excels at none of the areas in a content chain. This approach requires the ability to ‘glue’ together all of the separate components. But this is not as straightforward as pressing a button and connecting those ideal products together.
Sanjay Duda (SD), chief operating officer, Planetcast International: Several things are driving change. Probably the most important is the increased need to integrate at least basic editing functionality into the MAM system to enable re-purposing of content. In parallel, broadcasters are looking to AI, or more accurately machine learning, to enable highly efficient re-purposing through techniques such as machine-assisted editing, localisation through captioning, as well as AI-driven object identification, enabling local customs and sensitivities to be adhered to, such as no tattoos in Korea, or even localising product placement if that’s allowed by the rights holder. All of these are functionalities that we are currently developing in our MAM.C cloud-based solution. Another area of change is the need for prosumer-level MAM for the increased number of YouTubers and vloggers that are profiting from the production of large amounts of video. They may not need the MAM capabilities of a Disney or a Viacom, but they increasingly need to efficiently manage their asset management, handle metadata correctly and feed into the MAM solutions of large media organisations.
WHAT NEW QUESTIONS/CHALLENGES ARE YOUR CUSTOMERS PRESENTING TO YOU IN TERMS OF THE CAPABILITIES THEY NOW LOOK FOR IN ASSET MANAGEMENT SYSTEMS?
JB: There is a huge amount of hype about artificial intelligence and machine learning. At the moment there is more smoke than fire, but this will change over time. The key area that AI and ML can help with is the automatic generation of metadata from examining the content. Who is featured in the news clip? What are they saying? Where was it shot? How many similar pieces of content exist in our library? This is all valuable information, as increased metadata = increased content value.
KC: The user experience! It is increasingly important to provide a unique way to access management and product content. We are frequently asked to merge PAM [production asset management] and MAM functions into a unique solution. Currently, multiple PAM/MAM solutions reside within broadcasters’ systems, which complicates communication and integration of the different departments (sports, news, entertainment, archive, current affairs, etc). Common PAM/MAM tools are becoming a must.
ST: It is now widely recognised that content storage and metadata need not be a monolith in a fixed location. You can put storage silos where you need them, and interconnect between them for operational flexibility and content security. You can store some or all of your content in the cloud, either for operational access or just for resilience. The questions we hear now are around how best to do this, given the requirements for productivity and for interconnectivity.
For example, I had a conversation at IBC with a company producing a reality television show at a location some hundreds of kilometres from its home base. It needed fast turnaround editing, but sending a team of editors to the location was not only expensive, it was unpopular. We talked through ideas about working with proxies to manage the content and make the edits, limiting the transfer of full-resolution material to manageable bandwidths.
JFC: M&E companies want a system with the following benefits: an intuitive UI; simplified operations and easy upgrades; no vendor lock-in; flexibility and scalability; self-sufficiency; agility; low risk; sophisticated business intelligence that provides a comprehensive view of areas of the operation; reduced TCO and improved ROI; and future-proofing. Tedial’s smartWork Media Integration Platform provides a NoCode approach that enables the design of scalable, future-proof solutions and media services for the M&E industry, accessible to everybody everywhere, that can be seamlessly aligned with business needs to add value.
SB: We get a lot of requests around integrating specific workflows within the MAM, such as the ability to quickly fix subtitles so content isn’t delayed. For common errors such as audio and subtitle syncing, pacing and spelling, this makes a huge difference to media processing times. There is a need to empower media operators to take control and make their own quick changes, rather than creating a situation where there is a lot of time-consuming back and forth. We know from customers that significant resources are used when generating proxies and handling multiple versions. Being able to get rid of those unnecessary steps and limit the number of duplicate assets would be a significant benefit for a lot of media companies. Just-in-time transcoding removes this duplication by transcoding in real time rather than creating multiple proxy versions.
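Just-in-time transcoding, as Bergmark describes it, swaps a library of pre-generated proxies for renditions created on first request and then cached. A rough sketch using ffmpeg follows; the cache location, proxy settings and file naming are assumptions rather than any product’s behaviour:

```python
import subprocess
from pathlib import Path

CACHE = Path("proxy_cache")  # assumed cache location

def get_proxy(master: Path, height: int = 360) -> Path:
    """Return a low-res proxy for `master`, transcoding only on first request."""
    CACHE.mkdir(parents=True, exist_ok=True)
    proxy = CACHE / f"{master.stem}_{height}p.mp4"
    if not proxy.exists():
        subprocess.run([
            "ffmpeg", "-y", "-i", str(master),
            "-vf", f"scale=-2:{height}",   # keep aspect ratio, even width
            "-c:v", "libx264", "-preset", "veryfast", "-crf", "28",
            "-c:a", "aac", "-b:a", "96k",
            str(proxy),
        ], check=True)
    return proxy

# First call transcodes; later calls simply hit the cache.
# proxy_path = get_proxy(Path("/mnt/storage/master_clip.mxf"))
```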
SD: New challenges from customers include enabling their skilled editors to spend all their time making artistic decisions rather than non-creative versioning and compliance tasks such as checking existing content and, for example, cutting out nudity for a country with a requirement to remove it. We are building systems that take an AI-plus-human approach; for non-artistic operations that have traditionally been handled manually, we strive to increase the workload for AI and decrease it for the human operator.
WHAT WOULD YOU SAY ARE THE MAIN CONSIDERATIONS FOR BROADCASTERS LOOKING TO UPGRADE THEIR MAM SYSTEMS?
JB: For the majority, cost is still a key factor in the final choice. However, for cloud-based systems, especially where the content is stored in the cloud, the cost of ownership can be difficult to determine in advance. The cost is driven by a combination of software licence fees (which typically scales with the number of users), cloud storage fees (scaled by the volume of content in gigabytes) and ‘egress charges’ which you pay every time a piece of content is retrieved from the cloud.
One difficulty is that the more successful a cloud-based solution is at meeting the needs of the users, the more it is likely to be used and therefore the greater the running costs. It is really important to understand how the solution being considered can minimise those running costs without limiting functionality. In this respect, not all vendors are equal.
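Those three cost drivers can be put into a back-of-envelope model to see how usage, rather than just library size, moves the bill. Every figure below is an assumption for illustration, not any vendor’s or cloud provider’s actual pricing:

```python
def monthly_cloud_mam_cost(users: int, stored_tb: float, egress_tb: float,
                           licence_per_user: float = 150.0,   # assumed, per month
                           storage_per_gb: float = 0.023,     # assumed, per GB-month
                           egress_per_gb: float = 0.09) -> float:  # assumed, per GB
    """Toy monthly cost: licences scale with users, storage with volume,
    egress with how often content is pulled back out of the cloud."""
    licences = users * licence_per_user
    storage = stored_tb * 1000 * storage_per_gb
    egress = egress_tb * 1000 * egress_per_gb
    return licences + storage + egress

# The same library costs more the more it is actually used:
print(monthly_cloud_mam_cost(users=25, stored_tb=200, egress_tb=5))   # ~ 8,800
print(monthly_cloud_mam_cost(users=25, stored_tb=200, egress_tb=50))  # ~ 12,850
```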
Very few broadcasters buy ‘off the shelf’ systems; most require some degree of customisation. Historically, this was performed exclusively by the vendor, but now we are seeing broadcasters deploying their own software staff to perform integrations and extensions to commercial systems. One of the key challenges is finding, training and retaining such staff, as there is a significant skills shortage in this area.
In practice, most broadcasters already operate multiple asset management systems: production asset management (PAM) within production and post; media asset management (MAM) for linear ingest, distribution and archive; and content management systems (CMS) for OTT operations. Many are searching for a system that will replace all of these, but this is currently not a reality.
KC: I think it is essential to have a solution that is future-proof. Remote access, flexibility, and scalability will soon become commodities. One aspect that should not be neglected is the migration path to the new MAM, with a highly flexible data model and workflow engine to perfectly map the current system and ensure its evolution in the future.
ST: Broadcasters will be looking to leverage as much automation as possible and so if they invest in an asset management solution, they will want their workflows to be portable between on-prem, cloud and hybrid deployments. Equally as important, they will want to retain freedom around the tools they use; they will need the flexibility to build workflows that work equally well for the main NLEs, colour grading systems and other tools that their teams may embrace over the years. In other words, an open asset management solution, with great integrations with existing tools and a rich suite of APIs to build upon will allow for a long-term investment in a platform that can simplify workflows whilst retaining flexibility.
JFC: Users want seamless, effortless integrations and developers want to spend less time on repetitive software programming tasks and integration maintenance. The key to success for media companies moving forward is to replace the monolithic systems, or multiple integrated applications, with solutions that integrate all the components and complexity of the media supply chain, using all the APIs and additional methods required, to read older systems.
What’s important now is to consider the distributed networking nature of media supply chains and to include partners such as content producers, media companies and delivery platforms. Done successfully, this provides a distributed multisite environment that moves content in an interconnected efficient and agile way. By using a cloud media integration platform (NoCode iPaaS for media) M&E companies can create a technology infrastructure that takes full advantage of reusable modules and services. This becomes the foundation of the infrastructure.
SB: Cost is the main consideration, because to upgrade an entire system from on-premises to the cloud is a significant investment. So, many companies are looking to spread that investment and pay on a rolling basis using an opex model. Media companies are also looking at ways to balance the cost implications with the benefits that moving to the cloud brings, such as spin-up and spin-down infrastructure. Where content is stored also has a significant impact on the overall cost of the system, so restrictive MAMs that cannot consolidate assets regardless of where they are located can be problematic.
SD: There are a whole range of reasons why broadcasters are looking at new MAM solutions. It could be down to a merger. When two organisations merge, it’s quite likely that moving to a new, modern cloud-based MAM system makes more sense than standardising on one of the two existing solutions or trying to get the two systems to work together. Some broadcasters are looking to go
cloud-based because they know that it will provide the flexibility they need moving forward. Others are looking for the potential long-term cost savings of cloud over on-premise. It’s now certain that almost all greenfield operations will choose a cloud-based system with a minimal or zero on-premise element.
WHAT ARE THE MAIN ISSUES YOU SEE WITH CUSTOMERS STILL USING OLDER SYSTEMS?
JB: Typically, in large, on-premise MAM installations, the vendor has provided a range of customer integrations and functionality which means that each installation is unique. As time passes, it becomes increasingly difficult to support the multiple versions of the software. As the vendor develops new product functionality and new product updates, these are likely to be incompatible with earlier versions, so the customer ends up with an ‘orphaned’ system. Further updates become difficult, and the customer is left with the choice of either upgrading to the vendor’s latest product, or changing MAM vendors. Either way, this results in significant cost and upheaval.
KC: The issues faced by customers using older systems are that they lack the new features we talked about, such as remote accessibility and automatic enrichment, and that they run separate PAM and MAM solutions with limited exchange between them. We also see a lack of flexibility, inconsistency of data models between departments, workflows that are not flexible (they need vendor support to change), and a lack of APIs or openness in the system.
ST: What modern workflow management brings is a high level of automation. You can set workflows and business rules for many content flows, then manage by exception. That happens at least in part even in creative environments. You set rules by production: for example, as content comes in, the camera LUT is imposed and edit proxies are generated and automatically sorted into bins by scene. It is this sort of slick, automated, efficient workflow that users are looking towards.
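To make the idea concrete, here is a minimal, hypothetical sketch of that sort of rules-driven, manage-by-exception ingest automation. The rule names, helper functions and production are invented for illustration and do not represent any particular vendor’s workflow engine.

```python
# Hedged sketch of rules-driven ingest automation: per-production rules fire
# as each clip arrives, and humans only handle the exceptions. All names,
# helpers and productions below are hypothetical.
PRODUCTION_RULES = {
    "Drama S2": {"lut": "camera_a.cube", "proxy": "1080p", "bin_by": "scene"},
}

def on_clip_arrival(clip, rules=PRODUCTION_RULES):
    rule = rules.get(clip["production"])
    if rule is None:
        return flag_for_review(clip)          # manage by exception
    apply_lut(clip, rule["lut"])              # impose the camera LUT
    proxy = make_proxy(clip, rule["proxy"])   # generate an edit proxy
    file_in_bin(proxy, bin_name=clip[rule["bin_by"]])

# Hypothetical helpers standing in for the MAM/PAM integration layer:
def apply_lut(clip, lut): print(f"applying {lut} to {clip['name']}")
def make_proxy(clip, res): print(f"{res} proxy for {clip['name']}"); return clip
def file_in_bin(clip, bin_name): print(f"filing {clip['name']} in bin {bin_name}")
def flag_for_review(clip): print(f"no rules for {clip['name']} -- flagged")

on_clip_arrival({"name": "A001C003", "production": "Drama S2", "scene": "Sc12"})
```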
JFC: Over the years, legacy MAM systems have become restricted to a set of third-party systems integrated point-to-point, embedding the workflows within the MAM, making the evolution of the business processes and the integration of new systems really challenging (transcoders, AI tools, etc).
In recent years, while M&E companies were busy digitising their content, these systems worked well but technology has moved on and we now have content that needs to be quickly shipped around to multiple platforms in multiple formats. This requires far more functionality. There are many media applications and IT solutions to
process complex workflows and create new media services in the cloud and on-prem. It is the responsibility of the MAM vendor to keep the third-party integrations in its orchestrator up to date. This is impossible with legacy systems, as their orchestrators are not technically capable of it, and this can hold back the digital transformation of M&E companies. The same can be said for legacy CMS and DAM systems.
Instead of having orchestration, automation and workflow design services as part of the MAM system, vendors must move them to the media integration platform, which will provide a digital ecosystem that provides a more agile and open architecture. The MAM, which was once the beating heart of the facility, now becomes another application in the system.
This new generation of MAM systems is based on microservices, which are integrated with the media integration platform, and focus on asset management ensuring that the media supply chain is fully operable and compatible with all the other systems in the M&E facility. This will allow companies to keep the platform up to date with painless upgrades.
And finally, to ensure a seamless transition to the next generation of MAM systems, the key is to use a media integration platform. This enables the current MAM system to stay in operation while new business processes are completed by the media integration platform and the content is migrated.
SB: Customers continuing to use older systems are not going to be in a position to save money on capital expenditure. Company real estate based in expensive locations, or physical hardware and servers, could tie them into commercial agreements and upgrades for years to come. They will also not benefit from the agility of the cloud for asset management and orchestration. In these areas the cloud offers better tools, more efficient working, and on-demand resources that can easily be scaled up and down as business needs require.
SD: Thinking about our new MAM customers, we’ve solved a whole host of issues by moving them to our cloud solution. We have a new requirement from a customer that moved to the cloud because its legacy MAM had a one petabyte file limit that was causing issues. Another customer had a warehouse full of media assets on tape. We used RFID tags to make the assets easier to find physically and over the last year we have been digitising and ingesting these assets.
For most of our new MAM customers, the flexibility of the cloud solves issues that legacy on-premise MAM just can’t cope with. n
MOVING MEDIA MOUNTAINS
swXtch.io steps up to problem-solve the convergence of on-premises and cloud networks for broadcasters and video producers
Broadcast is an industry in near-constant transition, even if these transitions are gradually paced. Now two decades in, the transition from baseband to IP workflows remains a hybrid landscape, though technology has advanced to the point where most broadcasters view IP as a primary networking and delivery mechanism. With IP adoption accelerating in recent years, broadcasters and content producers are looking to the cloud as the next frontier of innovation that will change the way they operate.
On a foundational level, broadcasters quickly recognised the business benefits of moving their production and media workflows to the cloud. The ability to scale cloud systems with ease and develop reliable, redundant architectures to produce, store, play out and monitor content were obvious game-changers.
As with IP, the idea of a sudden, wholesale shift to the cloud was detached from reality. In addition to standard business considerations around costs and resources, there are plenty of technical limitations that gave even the most aggressive technology adopters pause. One significant barrier is how to mine the opportunities of the cloud while fully leveraging the power of the broadcaster’s on-premises network.
A new company emerged in the weeks leading up to IBC that intends to solve this foundational problem, while also creating new business opportunities through technologies and applications that were previously not available in the cloud. That company, swXtch.io, even found its way onto the booths of Microsoft and Evertz, where its problem-solving approach to merging on-premises and cloud networks was on full display.
MAKING THE SWITCH
Though introduced to the broadcast industry at IBC, swXtch.io is not brand new. The company was founded in 2020 to help IEX Group (IEX), which builds ‘technology to level the playing field’ with its registered US stock exchange, IEX Exchange, migrate portions of its equities exchange infrastructure from physical data centres to the cloud.
“The stock exchange of today is a data centre that houses hundreds of software workloads, and that software processes network packets,” says Geeter Kyrazis, head of operations, swXtch.io. “There are hundreds of different servers running applications that talk to each other, and they have to communicate at a much higher speed, given the rate at which stock exchanges move.”
Kyrazis notes that many of the typical benefits of moving to the cloud were also attractive for IEX Exchange, from alleviating the traditional maintenance and management responsibilities of on-premises networks to elastically increasing capacity. “The main impetus, however, was to place their hundreds of processing systems into virtual machines, and have them behave identically to the systems running in their physical data centres,” he adds.
Early research into making such a transition proved that existing networking protocols were not expansive enough to support IEX Exchange’s desired throughput. Enter Brent Yates, an experienced IT network engineer hired to create a high-performance, scalable solution for IEX Exchange’s transition.
Yates and Kyrazis had previously worked together to develop networking solutions, and Yates brought in his longtime peer to identify issues and develop a prototype solution.
“The root of the throughput issue was that key networking protocols including multicast and PTP (precision time protocol) were missing from the major cloud networks,” explains Yates, now CTO of swXtch.io. “We needed to fill the gap in cloud networks with an extremely high-performance, scalable solution that would not only change IEX Exchange’s journey to the cloud, but also change the way that people can develop on cloud networks.”
This led to the development of cloudSwXtch, a technology that enables network feature parity with bare metal networks in the cloud. The multicast, PTP, and other network features of cloudSwXtch allowed IEX Exchange to move previously blocked workloads to the cloud.
Yates and Kyrazis soon realised that they could use the same technology to help broadcasters and media companies solve a similar problem: unlock missing network features needed to freely move video workflows from on-premise networks to the cloud.
FORGING NEW TERRITORY
Yates and Kyrazis first crossed paths at a company called HellaStorm, a software company that uses FPGAs to increase the performance of data applications. “Brent was the technical genius and I was the business mind, and we later started a company to develop an operating system for FPGAs,” says Kyrazis. “His experience with programming FPGAs dates back to his work at NASA in the 1980s. FPGAs are programmable chips, and it requires extensive electrical engineering expertise to programme them right.”
By merging Yates’s programming and network engineering experience with Kyrazis’s broadcast experience – Kyrazis once managed digital asset management products and later the entire government business unit for Harris Broadcast, today known as Imagine Communications – they were confident that cloudSwXtch brought something much desired and needed to the broadcast and media industry.
“We have covered a lot of ground, together and separately, that broadcasters care about; from low-level processing applications to video distribution and broader media workflows,” says Kyrazis. “We developed a perfect storm at swXtch.io that identified a global cloud transition problem, and have now constructed a solution that both works and scales to solve it.”
The ability for cloudSwXtch to support multicast distribution in the cloud was seen as the initial ‘groundbreaking’ salvo. Multicast is a packet distribution method that requires devices to subscribe in order to receive video content, for example, ultimately helping broadcasters optimise network configuration and reduce cloud distribution costs.
“Our ability to support multicast is what helps broadcasters move their private broadcast network infrastructure into the cloud, and bridge that infrastructure with distribution networks at the data layer,” adds Yates. “The multicast capability ensures that cloudSwXtch provides and replicates only the packets requested by the subscribing devices for efficient bandwidth management and network traffic.”
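The subscribe-to-receive model Yates describes is the same one used by standard IP multicast on physical networks. As a rough illustration of that underlying mechanism (not swXtch.io’s API), a receiver joins a multicast group and the network then delivers only the packets for that group; the group address and port below are hypothetical.

```python
# Minimal sketch of a standard IP multicast subscriber. The group address and
# port are hypothetical; this illustrates the general subscribe-to-receive
# model discussed above, not swXtch.io's own interface.
import socket
import struct

GROUP = "239.1.1.1"   # hypothetical multicast group
PORT = 5004           # hypothetical port carrying the video stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group: the network only delivers packets to subscribed endpoints.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, sender = sock.recvfrom(2048)
    # Hand the packet to the video pipeline here.
    print(f"received {len(packet)} bytes from {sender}")
```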
Multicast distribution is just one of many innovations in the
cloudSwXtch architecture. At its core, cloudSwXtch is a virtual overlay network architecture featuring a virtual switch and a virtual network interface card (NIC). These are analogous to physical components but allow users to deploy, manage, and monitor their cloudSwXtch networks and easily bridge them onto other networks. For example, cloudSwXtches may be joined in mesh configurations to create global networks. Such expanse, they argue, makes the benefits of multicast distribution in the cloud abundantly clear.
But Kyrazis is quick to point out that cloudSwXtch is not all about solving the multicast problem. “We quickly discovered that multicast was only one component of the overall challenge,” he states. “The reality is that cloud engineering teams are tasked with many different and difficult problems to do with scale, performance, reliability and security. As they develop solutions to solve general-purpose problems, features like multicast that are important to media companies get left behind. We realised that a virtual switching and NIC architecture that mimics a physical infrastructure would not only allow us to develop multicast support, but that we could really offer a development tool to introduce more features for broadcasters, as well as for cloud networks in general.”
That’s where cloudSwXtch’s bare metal topology comes to the fore, allowing the company to bridge the gap between on-premises networks and the cloud.
“Another unique feature we are very proud of is the ability to support the latest SMPTE 2022-7 technical standard in the cloud for IP network redundancy,” continues Yates. “We call that ‘hitless merge functionality’, and it strengthens reliability by instantly repairing media streams if packets on any of the network data paths are lost during transport. We also support uncompressed workflows with SMPTE 2110 in the cloud, and synchronising different SMPTE 2110 IP streams through our PTP support. That brings the power of frame-level synchronisation to remote production in the cloud, for example.”
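Conceptually, a 2022-7-style hitless merge receives the same stream over two independent network paths and forwards each packet exactly once, so a loss on one path is masked by the other. The sketch below illustrates that idea with RTP-style sequence numbers; it is a simplification for illustration, not swXtch.io’s implementation.

```python
# Illustrative sketch of 2022-7-style seamless protection: two copies of the
# same stream arrive over independent paths, and each sequence number is
# forwarded only once, so a loss on one path is hidden by the other.
# (Concept only -- not swXtch.io's implementation.)
from collections import OrderedDict

class HitlessMerger:
    def __init__(self, window=2048):
        self.seen = OrderedDict()   # recently forwarded sequence numbers
        self.window = window

    def on_packet(self, seq: int, payload: bytes, emit) -> None:
        if seq in self.seen:
            return                  # duplicate from the other path -- drop it
        self.seen[seq] = True
        if len(self.seen) > self.window:
            self.seen.popitem(last=False)
        emit(seq, payload)          # forward the first copy to arrive

merger = HitlessMerger()
# Both receive threads (path A and path B) call merger.on_packet(...) with
# each packet's sequence number and payload.
```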
Kyrazis also points out that cloudSwXtch’s protocol translation and fanout capability is particularly important, as it enables seamless transition between multicast, unicast, SRT and other protocols. “This allows endpoints that require different protocols to interact without reconfiguration or management, eliminating the need for software changes in the system.”
Kyrazis concludes that all of these features (and more), which can be added to the bare metal topology of cloudSwXtch, are what brings parity between the physical and the virtual.
“We provide cloud-to-ground and ground-to-cloud bridging, helping broadcasters and content producers move video and media between on-premises and cloud infrastructure,” says Kyrazis. “What we are really doing at the core, however, is helping our customers forge new ground. Sometimes cloud networks put restrictions and throttling on their core capabilities. cloudSwXtch helps broadcasters and media companies overcome these limitations, and migrate their most demanding data workloads to the cloud using the same systems and processes available in dedicated broadcast networks.” n
A CELEBRATION TO LAST THROUGHOUT THE YEARS
In April 2023, the National Student Television Awards (or NaSTAs) will celebrate its 50th anniversary. Every year the event showcases the work of students who spend their time creating TV content at university, whether that be broadcast via a local Ofcom licence or streamed on YouTube.
Not only will the 2023 NaSTAs mark a milestone anniversary, but it returns to London for the first time since 2001, and Ravensbourne University London is hosting the ceremony for the very first time.
Awards entry is open to any university across the UK that has its own TV station, with categories including live broadcast, drama, comedy, sport, director, and production talent. In 2022, each category presented Gold, Silver and Bronze awards in order to celebrate more of the nominees, and that will continue in 2023.
Each of the categories will be judged by industry professionals, including Sky Sports’ Formula One reporter Ted Kravitz, and Sky News director Polly Stevenson, who studied at Ravensbourne. “We want to have a mix of backgrounds judging the awards,” explains 2023 host officer Jack Phillips, “so we have judges who studied philosophy and economics at university and are now technical supervisors at Sky. The NaSTA system has worked brilliantly for people like that, where they’ve not necessarily studied television but have a passion for it.”
Submissions for the awards are open until February, with judging expected to take around three weeks. Phillips adds that the Ravensbourne team will also be spending time in the run-up to the awards night on 15th April working hard in the edit suite. “We’ve got to edit the nomination VTs for the night and there are so many of them!”
he laughs. “We have about 21 to edit. Fortunately, I just plug in cables, editing is not my thing!”
The organising committee has already brought the likes of IMG and MOOV on board as sponsors, but is looking for more and believes it’s a fantastic opportunity for companies to make contact with the next generation of talent. “One of the things we’ve realised is that broadcast companies right now are desperate for new people, so this is a great way for them to reach out to students,” explains Phillips.
“A lot of companies don’t know about the NaSTAs. Anyone who’s been nominated as a student does, but we’re trying to make more companies in the industry more aware of it. We want to bring the industry and students together.”
“We’re offering them a big talent pool of students who are interested in and passionate about TV,” adds fellow student and NaSTA commercial director, Lia Saunders. “The good thing about the NaSTAs and all the student TV societies as a whole is that no one is forced to be there. Everyone is there because they’re passionate about TV, which is one of the main things you need to work in the industry. The sponsors will have access to 200 people that want to work in the industry, which is a really good selling point.”
Another major aim for the 2023 committee is to include elements that haven’t been done before, such as more industry involvement with a series of stalls at Ravensbourne’s campus where companies can showcase their kit. “We’ll also be holding an exhibition to celebrate the previous 50 years of the NaSTAs and personalising it to each winning station,” adds Saunders. “It’s a very different type of environment at Ravensbourne
compared to some of the other universities that compete in the NaSTAs. They’re often in very historic buildings whereas Ravensbourne is very modern, so even the look of the awards is going to be very different. It’ll be a taste of the new, modern NaSTAs.”
Celebrating the “new” is a key theme of 2023’s event, with Phillips stating the organisers want to highlight new ways in which people watch TV. “The NaSTAs have been quite traditional over the past 50 years, so we want to showcase the evolution of student broadcasting,” he continues. “At the moment, Ofcom licences are very expensive so many student TV stations aren’t broadcasting in their local area. Instead, a lot of them are now based on YouTube.”
“Streaming is bigger than ever,” agrees Saunders. “A lot of viewers nowadays stream more than they watch standard TV, so it’s important to bring the two together with the NaSTAs and celebrate the progression of broadcasting.”
Phillips and Saunders are in their third year at Ravensbourne, studying broadcast engineering and TV production respectively. They both stress the importance of university TV stations to help students, whatever they’re studying, understand that there are opportunities for them in the industry. “We’ve got some judges who studied something completely different to TV at university and so the NaSTAs help us get that exposure and show students that this is a viable career path,” states Saunders. “We’re told so often that creative courses aren’t a good career and there’s no funding, etc. But these awards promote the fact that there really is because you’ve got some amazing people that have gone into the industry on the back of being involved in a student TV station and winning at the NaSTAs.”
Phillips adds that being part of a student TV station also helps build contacts within the industry. “There are students who want to get
involved but because they’re studying a non-media course they don’t have the right contacts to say this is the route into the industry. Having companies from the industry involved in the NaSTAs means they can tell a first year, for example, what they need to do to get to a certain point.”
“It’s important to have that support system around you,” adds Saunders. “We’ve got people at Ravensbourne who have come from a fashion course and they said to us, ‘I’m interested in getting involved but would I be relevant?’ And we said, ‘of course you would, every skill is transferable.’”
It’s important for the industry to support the next generation. One of the easiest ways to do that is to give them opportunities to see what it’s like to work on a production. That wasn’t possible during Covid, but as normality has returned, more opportunities are becoming available.
“Jack and I have both been quite fortunate that we’ve had experience in the industry and that’s obviously massively helped with our knowledge,” says Saunders. “There’s such a need within the industry for people like us, who can bring in new ideas and help move it from broadcast to streaming and online. Companies want that if you show enough initiative.”
She adds that students just need one chance to show what they can do and make a good impression. “I would always say it’s that first one that is the hardest but once you have it, I do feel like the industry is really supportive. I’ve been to jobs where I haven’t used certain technology before and someone has sat me down and showed me what to do. I feel like they’d rather you be honest about what you know than pretend and then it all goes very wrong. Companies appreciate that honesty, it’s a big part of their trust and support.” n
For more information about the 2023 National Student Television Awards, visit www.nasta.tv
THE YEAR IN BROADCAST AND MEDIA
This year started with trepidation and hope for what could lie ahead as restrictions slowly eased, welcoming the return of in-person events and travel.
However, this initial excitement was short-lived when Netflix reported its first quarterly subscriber loss in ten years. Worryingly, the company gave a bleak outlook as it expected to lose another two million subscribers in the second quarter; but that turned out to be just under a million. For Netflix, this was a reality check which coincided with a price rise aimed at striking a fine balance between retaining subscribers and increasing revenue. Rivals are still in stealth mode in terms of attracting subscribers, encountering higher costs and losses. This clearly shows they are all at a different phase of growth, with Netflix still the benchmark.
Attention quickly turned to driving revenue as the streaming pandemic party came to an end. This set the tone for the year with an increasing focus on business models, the emergence of advertising, and FAST.
All of this comes at a crucial time, given the cost-of-living crisis which is having a profound impact on all companies. It will cause a domino effect, impacting users’ willingness to sign up or continue paying for a subscription that is no longer needed. Prices are all heading in the wrong direction, with consumers feeling the pinch and tightening their belts. Adding an advertising tier helps streamers to diversify their business models further, offering something for everyone which should resonate with a broad range of users. Ultimately, this latest move will negatively impact linear TV networks and free-to-air broadcasters who are heavily reliant on advertising as their main revenue source.
Unsurprisingly, there was a distinct lack of innovation at the industry’s key trade shows, NAB and IBC, but the huge excitement for people to reconnect again could not be ignored. Interestingly, both events attracted a smaller crowd with a razor-sharp focus on conducting business. Empty stands, with some companies choosing not to attend, in part due to cost as well as travel concerns, highlighted the ongoing challenges. During these unprecedented times, where uncertainty is the new certainty, scale will help improve margins. Hence the focus on consolidation and corporate activity as a strategic option will return to the fore in 2023.
Sport continues to generate interest from all providers. We are now finally seeing some form of a reality check. While competition seems to be healthy, the battle for rights and subscribers is leading to a hugely fragmented landscape. This, coupled with a lack of a diversified business model, is placing further pressure on providers, leading to a new joint venture in the UK between BT Sport and Warner Bros Discovery, DAZN acquiring Eleven Sports, Viaplay’s acquisition of Premier Sports, and more to come.
The Apple and Major League Soccer ten-year deal seems like a pivotal moment and ushers in a new era of sports rights and opportunities for streaming. I’ve long argued that the Premier League is seeking a path towards this streaming future. Apple’s growing installed base of iOS devices is attractive to all sports associations and leagues, who will be looking at this deal very closely. Sport clearly represents the next battleground for ownership of the living room among the big tech companies. It drives users to watch live TV and allows Apple to tap into a lucrative ad space.
While the industry is in flux and facing economic disruption, recent events underline the value of linear TV. The State Funeral of HM Queen Elizabeth II was one of the most-watched broadcasts across all corners of the world. According to the BBC, 32.5 million people in the UK tuned into its coverage.
Full credit should be given to all of the TV production crews for dealing with the scale of the task and delivering a flawless and seamless broadcast. It was a day like no other, and a testament to the combination of technologies that became prevalent during the pandemic, such as remote and virtual production.
Significantly, 5G is now starting to play an increasing role in contribution, production, delivery and consumption. Expect this to grow next year as part of a converged offering from providers. Cloud services are being marketed extensively as adoption continues to grow, with more features and functionality driving efficient workflows. An open, collaborative approach remains paramount as a means of addressing costs and security issues.
Sustainability was rightly a heavily debated topic throughout the year as companies continue to make an effort to reduce the environmental impacts of production.
Significantly, a global recession looms, with some countries already feeling the pinch. The year ahead promises to bring more bumps and further losses for all providers, especially for streamers, as there’s no silver bullet to profitability. With the big tech companies implementing cost-cutting measures, this will extend to other verticals including the media and broadcast industries.
As we move into a recessionary period, it will be increasingly hard to predict consumer behaviour and spending. Users and households alike will all rethink their content requirements. Ultimately, users will retain the most crucial services that are akin to utilities. Therefore, businesses will place greater focus on cutting costs and driving greater efficiencies to improve margins for the year ahead. n
ONES TO WATCH IN 2023
With advances in technology, a changing broadcast landscape, increased energy costs and a much-needed drive to become more sustainable, 2023 seems set to be a year of change for the media and entertainment industry.
Sustainability is at the front of industry minds as viewers are taking more interest in whether content is being produced in a sustainable way from beginning to end and in all layers of production. Sony has always taken an interest in this topic and has a commitment to reach net zero by 2040, but all businesses are now having to consider how sustainable their practices are.
One area where sustainability is a particularly hot topic is in live production as it requires large amounts of equipment and people to be transported to locations. One increasingly prevalent way to make live production more sustainable is the use of modular solutions. These provide a more tailored set-up for each broadcast, meaning less equipment has to be transported from place to place.
Cloud and virtual production are set to dominate the media solutions industry in 2023, in part because they require fewer physical resources than more conventional methods of production, making them a sustainable option. However, rapidly improving cloud and virtual production technology also gives filmmakers more creative freedom. By being able to create and store high-quality content virtually, content creators will be able to work more creatively in a less expensive and less time-consuming way.
IP-based workflows for remote and distributed production are also set to establish themselves as a staple in 2023. Whilst remote production has become more commonplace since the pandemic, it is likely to become a routine part of the industry due to the improving technology
facilitating it, with greater expectations from companies to be able to create content with full freedom of place, people and processing power. An increased focus on sustainability means many companies want to minimise their carbon footprint and costs.
Quality storytelling is being demanded by content-hungry viewers, so it’s likely we will see 4K broadcasts becoming more commonplace, as well as a greater presence for 8K UHD as more broadcasters employ 8K-ready technology. Broadcasters are also moving towards using more cinematic images, which bring a more dramatic look with a shallower depth of field compared to conventional broadcasting. A cinematic feel is becoming a popular way to add depth to live productions, particularly with the launch of new cameras creating new cinematic possibilities in live production.
Security will be an especially significant topic for 2023 as more content is stored in the cloud, leaving it vulnerable to sophisticated cyber-attacks. ‘Zero trust’ systems, where strong identity verification is required to access files, are set to be used more and more. These systems require verification at every stage of a digital interaction, meaning one password does not provide access to all files, something which could be serious if storing sensitive content.
The metaverse offers up possibilities for content creators on numerous fronts where they can look to monetise and create content suitable for the new interactive universe. Quality volumetric content and photogrammetry will be key in providing users with a believable and valued experience. Sony’s software development kit (SDK) to operate cameras remotely, which is available for an expanding part of its camera lineup, is already opening up opportunities for third parties to develop content with the metaverse in mind. n
MAKING FAST PROGRESS
By Steve Reynolds, president at Imagine Communications
As everyone who works in the broadcast and media industry knows, we love an acronym. At IBC, a lot of the buzz was around FAST: free, ad-supported streaming television. It is probably a good idea to look at why FAST has become such a hot topic.
For an innovation to really take off, it needs two things: the enabler (the technology platform that underpins it); and the demand from the target market. These came together in FAST.
The enabler is high-quality video streaming over ubiquitous broadband. Those with video to share could deliver it direct to consumers, and all the big producers, syndicators and aggregators were developing strategies around it.
Demand was triggered by the Covid pandemic. Consumers could not spend money on going to the cinema, eating out, or travelling. They were forced to stay at home, and watch television. Getting your content in front of consumers was suddenly a big opportunity, but there was a need to move quickly. A long list of back-burner or field-trial projects were quickly moved to the front of the line.
If you have a substantial catalogue of content, what is the fastest, cheapest way of making it available to all these people who are now desperate for entertainment? The answer is FAST.
FAST channels are as simple as a broadcast operation can be. There are no live events or interventions; they are pre-planned playlists with some simple branding graphics. They are ideal for cloud implementation, which means you could stand up multiple channels very quickly. If you already own the content, then you need very little capital investment: it is all opex, which you can tie to your income.
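Because a FAST channel is essentially a looping, pre-planned playlist, working out what should be on air at any given moment is a simple calculation. The sketch below illustrates that idea; the titles, durations and start date are hypothetical, and real services layer ad breaks, branding graphics and an EPG on top.

```python
# Minimal sketch of a FAST-style channel as a looping, pre-planned playlist.
# Titles, durations and the channel start date are hypothetical.
from datetime import datetime, timezone

PLAYLIST = [                      # (title, duration in seconds)
    ("Episode 101", 1320),
    ("Episode 102", 1290),
    ("Episode 103", 1350),
]

def now_playing(playlist, epoch, now=None):
    """Return the item on air and the offset into it for a looping schedule."""
    now = now or datetime.now(timezone.utc)
    total = sum(duration for _, duration in playlist)
    pos = (now - epoch).total_seconds() % total   # position within one loop
    for title, duration in playlist:
        if pos < duration:
            return title, pos
        pos -= duration

channel_epoch = datetime(2022, 12, 1, tzinfo=timezone.utc)
print(now_playing(PLAYLIST, channel_epoch))
```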
The nature of the channels is often very niche. On my Roku box at home in Colorado there is a channel that is 24/7 reruns of The Carol Burnett Show. Another is wall-to-wall Rockford Files. It may well be that there are hours in the day when the audience for these channels is zero, but they cost so little to run that provided there are some viewers for the commercials at other times of the day, it is all worth it.
The key take-home message of FAST is that it proves there is still an incredible demand for content that is free to view. Whether we are talking legacy linear channels, streaming services or even on-demand, people like ‘free’.
Free television gives the consumer the opportunity to explore, to try new programming. If you buy an SVoD service, you are limited to what
that service offers. Increasingly, consumer behaviour sees subscriptions taken out to see the show everyone is talking about – Stranger Things on Netflix, for example – with the subscription cancelled when the series comes to an end.
And as you will not have failed to notice, the world is facing some pretty tough economic times. During the pandemic, when you were not eating out or putting fuel in your cars, you had money to spend on subscriptions. Now money is going to be in short supply, so those subscriptions are harder to justify.
FAST is advertising-supported, but by being direct to the consumer, you cut out some of the middlemen in the advertising chain. Start-ups can implement ad tech that relies heavily on automation to sell and place spots and integrates dynamic ad insertion in the common workflow. That will allow inventory to be sold in different ways. Big brands will want affinity with the programmes – they want to be in NFL broadcasts or Chicago Med – and will want to buy spots. Others will simply be shopping for impressions. This is going to lead our industry to the next major evolution of FAST, where ad buying and selling will begin to support rules around brand protection and placement to deliver the broadcast premium experience across FAST and AVoD services. The next generation of FAST and AVoD platforms is going to allow publishers to deliver the same quality of advertising and experience that brands and consumers are accustomed to on linear channels, while still providing digital-style targeted advertising.
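At its simplest, the brand-protection and placement rules described above boil down to checking each spot’s constraints against the programme before filling a break. The following is a deliberately simplified, hypothetical illustration of that logic, not any vendor’s ad server.

```python
# Simplified illustration of rules-based spot selection for a dynamic ad
# break: affinity buys must match the programme, other spots just need
# impressions. Field names, campaigns and CPMs are hypothetical.
def eligible(spot, programme):
    if spot.get("blocked_genres") and programme["genre"] in spot["blocked_genres"]:
        return False                     # brand-protection rule
    if spot.get("required_programme"):
        return spot["required_programme"] == programme["title"]   # affinity buy
    return True                          # impression buy: any suitable break

def fill_break(spots, programme, slots=3):
    ranked = sorted((s for s in spots if eligible(s, programme)),
                    key=lambda s: s["cpm"], reverse=True)
    return ranked[:slots]

spots = [
    {"brand": "A", "cpm": 42.0, "required_programme": "Chicago Med"},
    {"brand": "B", "cpm": 18.5, "blocked_genres": ["news"]},
    {"brand": "C", "cpm": 12.0},
]
print(fill_break(spots, {"title": "Chicago Med", "genre": "drama"}))
```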
Which brings me to one more set of initials: CTV, as in ‘converged television’. FAST is CTV and offers the best of both worlds: whether you want to micro-manage your campaign or hand it all over to automation, you want to be able to buy and sell through the same, automated portal. The next big thing in CTV – which we’re calling CTV 2.0 – is going to be the fusion of the quick-to-market content we saw with the initial launches of FAST and AVoD and the more profitable and sustainable advertising quality that is the hallmark of linear TV.
What FAST means for the industry is a new business model: direct to consumer, direct to advertiser, with the minimum of intervention and the maximum of automation. It requires multi-platform selling tools and ad servers that can support DAI with broadcast-quality placement rules. But these, too, are now available as cloud services, wrapping it all up into a package that is... well, fast to implement. n
CAPTURE TO THE MAX
Envy is an award-winning full-service post production house that sees a wide range of projects, including documentaries, entertainment, factual entertainment, scripted and shortform content, pass through its doors.
The London-based facility has built a reputation for its ability to offer clients what it describes as ‘streamlined and bespoke workflows to support their requirements’.
Envy’s newest division, Capture, is a specialist remote workflow division, delivering record and media management services on location, tailored to overcome the challenges of fixed-rig productions. The custom-built set-up creates high-resolution and proxy versions of the incoming video feeds to provide edit-ready media, removing the need for back-up and ingest workflows out in the field.
The creation of Envy Capture involved a lot of custom-designed hardware that needed to operate as efficiently as possible with as few people on set as possible. “Although Cinegy works out of the box, we needed to push it much further in order to meet our technical goals,” says Toby Weller, solutions architect at Envy Capture.
With the volume of content handled by the post house – an average show can rack up in excess of 2,500 hours of live feeds – it was critical that the workflow could quickly and reliably support the ingest of multiple feeds across different formats. The solution was Cinegy Capture, which now powers the fixed-rig record offering.
As Weller explains, “One of the things we wanted to do, which other fixed-rig providers don’t do, is provide quality on scale. We wanted to offer a much higher quality native XAVC Intra codec as standard and combine that with live-creating proxy versions. This saves a huge amount of time on the back-end instead of creating them on Avid or other transcode engines.”
Cinegy Capture was designed to make ingest as flexible and reliable as possible, supporting a wide range of video formats and creating multiple formats and proxy copies at the same time, enabling one system to replace the separate ingest and transcode machines typically required to achieve the same results. Each Envy Capture system supports up to 48 HD channels, up to 768 uncompressed audio tracks and up to 12 UHD channels.
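Cinegy’s internals are not published here, but the general idea of turning one incoming feed into a broadcast-quality master and an edit proxy in a single pass can be illustrated with the open source FFmpeg tool, used purely as a stand-in; the input address, codecs, bit rates and file names below are hypothetical.

```python
import subprocess

# Illustrative only: one incoming feed, two simultaneous encodes (a high bit
# rate master plus a small edit proxy) in a single pass, here via FFmpeg
# rather than Envy's Cinegy-based system. All values are hypothetical.
subprocess.run([
    "ffmpeg", "-i", "udp://239.1.1.1:5004",                     # incoming feed
    "-map", "0:v", "-map", "0:a",                               # master output
    "-c:v", "libx264", "-b:v", "100M", "-c:a", "aac", "master.mov",
    "-map", "0:v", "-map", "0:a",                               # proxy output
    "-c:v", "libx264", "-b:v", "2M", "-s", "960x540", "-c:a", "aac", "proxy.mp4",
], check=True)
```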
Weller adds, “Each of our Capture nodes will record 16 video channels, so in all, each of our Capture nodes is creating up to 48 different live encodes. We then pack in up to six nodes per rig and duplicate records to create a fully redundant, incredibly dense system.”
In a typical OB environment, Envy will use Cinegy Multiviewer side-by-side with live production feeds to monitor the stream status and incoming audio, as well as provide the production teams with a confirmation of record.
When you work on the scale that Envy does, it can create storage bottlenecks, which is something no project can afford. As Weller explains, “The great thing about Cinegy Capture is that it runs on x86 hardware. This offers an awful lot of buffering potential; you can choose where those buffers are and how much protection you have to circumvent the bumps in the road. This enables us to record straight to ultra-fast network storage, allowing immediate creation of LTOs as well as sending data back to base for editing.”
Using Cinegy Capture, Envy has been able to create a compact, highly mobile piece of kit that is far smaller than would otherwise be possible and is very tolerant of different temperatures and solar radiation. The workflow has already been used on projects such as Channel 4’s Love Trap, Amazon’s Lovestruck High, and BBC Three’s Love in the Flesh. n
USING COLOUR AND LIGHT FOR THE AUDITION
Tense, harrowing thriller The Audition is a film from director Bizhan Tong, where an actor’s audition for a new role turns into a deadly game, putting his friends at risk. Starring Kevin Leslie, Anita Chui and Xander Berkeley, with cinematography from Elliott Banks and edited by Mitchell Tolliday, the film was graded in DaVinci Resolve by Michelle Cort.
The primary setting for the film is the flat belonging to the main character, Larry, the look of which changes significantly throughout the feature. At the start of the film, Larry’s flat is brightly lit with the curtains open, but as the film progresses and mayhem ensues, the flat becomes increasingly darker to match the sinister tone.
“One of the main challenges of the grade was to maintain the same shade of blue in Larry’s flat throughout the film,” says Cort. “We’d chosen a rich, glossy look for the opening scenes and a few shots were filmed later than they were supposed to be in the timeline of the film, so we had to use various qualifiers and tools to bring those shots back up to daytime exposure levels. I primarily used the colour warper tool to keep tones consistent, but also had to keep an eye on Larry’s skin tones to ensure they didn’t go too saturated, as well as checking the colours of the lights in the room.”
Larry’s flat is seen on camera and within a laptop screen, so the laptop image needed to look realistic and consistent with the look of the main location. The laptop images were created via green screen, and Cort used Resolve to manage the spill from the green screen to the actors and their surroundings. For example, throughout the movie, Larry wears a particularly shiny watch, and Cort had to use qualifiers and curves to remove the green reflections from the image.
The film also features other locations, including long corridors and industrial sites, which posed a particular challenge for the grade. “There was a risk that the industrial sites and corridors would look dull and featureless,” says Cort. “But we turned this into an opportunity to bring the focus back to the characters. For example, one of the characters wears a teal dressing gown, and we really pulled out the colours on that to make it pop.”
Resolve was also used for the reviewing process, and a streamlined workflow was essential, with Cort based in the UK and director Tong in Hong Kong. “At the beginning of the process, we needed to spend a lot of time together, so the time zones didn’t work in our favour,” Cort continues. “Later on, once we were in a good flow, it worked well,
and I could get feedback from the team at the end of their day, but the start of mine. The integration between Frame.io and Resolve accelerated this even more; rather than getting notes with timecodes on, I could see markers and get to the notes straight away, then cross them off as I was done.”
“The work was incredibly satisfying because colour and light are so important to horror-thrillers,” concludes Cort. “You can create much stronger looks and pull on people’s emotions with the colour palette, so there’s much more creative freedom. A lot of this mood is achieved in-camera through the set lighting, but you can push this even further in the grade, making the shadowy corners really dark and threatening, keeping the tension high and the viewer hooked until the last second.”
HD READY
WHY HAS UKTV DECIDED NOW IS THE TIME TO LAUNCH HD CONTENT ON UKTV PLAY?
UKTV Play is at the beginning of a major digital step change, and the team is focused on creating a truly end user-focused AVoD proposition that supports brand awareness but also a premium user experience through content discovery and consumption. The majority of UKTV Play content is consumed on connected TVs and it is here that HD content will have the most impactful user experience benefits.
Following on from our recent UKTV Play rebrand, support for HD content is the next strategic step in positioning UKTV Play as a go-to AVoD destination that users like and want to use. It positions UKTV Play on par with premium SVoD services and as a clear differentiator in the UK AVoD market.
HOW DID YOU RELEASE NEW CONTENT BEFORE? WAS IT FILMED IN HD FOR BROADCAST AND THEN DOWN-SCALED FOR OTT?
UKTV broadcasts a wide range of content, from the latest entertainment and documentary productions to UK classic dramas and comedies. This breadth of content means we have a wide range of content source formats. Where we have an HD source asset, this is currently downgraded to our current SD streaming resolution. Once HD content is supported, we will support 720p and 1080p adaptive bit rate streaming where we have a source HD asset available.
HAVE YOU HAD TO MAKE ANY CHANGES TO THE PLATFORM TO ACCOMMODATE THE MOVE TO HD?
We were fortunate in that our content pipeline was already configured to support HD content with minimal changes. Modifications are limited to:
• Edits to our internal tool for asset management (built on AWS) to ensure that where a show has an HD source asset available (approximately 69 per cent of our catalogue), it’s delivered to our Brightcove account for encoding for subsequent consumption by our viewers
• Edits to our Brightcove ingest profile, which provides the rules that determine how we encode our assets for delivery, to support 720p and 1080p HD renditions as part of its output
Internal resource has also been dedicated to quality assurance, verifying that HD assets play back as expected, and in a performant manner, across the wide array of hardware in our viewers’ homes and the multiple different direct-to-consumer platforms that we support.
WHAT TECHNOLOGY ARE YOU USING TO UPGRADE OLDER CONTENT TO HD?
We’re working closely with our partner Brightcove on video encoding and delivery. We’re adding 720p and 1080p renditions, in addition to the 5 SD renditions that we’re already providing our users.
Our focus is on delivering HD content to platforms where content playback is facilitated by DASH or HLS streaming; along with PlayReady, FairPlay or Widevine DRM, which constitutes the majority of our estate. As is standard, content is delivered via adaptive bit rate streaming, so that the specific rendition that a user is playing back at any given moment is optimised for their bandwidth conditions.
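With adaptive bit rate streaming, the player continually measures throughput and requests the highest rendition it can sustain. The sketch below illustrates that selection logic; the ladder values are hypothetical and are not UKTV Play’s or Brightcove’s actual encoding profile.

```python
# Hedged sketch of the adaptive bit rate idea: the player keeps measuring
# throughput and requests the highest rendition it can sustain. This ladder
# is hypothetical, not UKTV Play's or Brightcove's actual profile.
LADDER = [                     # (label, video bit rate in kbps)
    ("288p", 400),
    ("432p", 900),
    ("576p", 1600),
    ("720p", 3200),
    ("1080p", 5800),
]

def pick_rendition(measured_kbps, ladder=LADDER, headroom=0.8):
    """Choose the best rendition that fits within a safety margin of the
    measured bandwidth, falling back to the lowest one otherwise."""
    budget = measured_kbps * headroom
    best = ladder[0]
    for rendition in ladder:
        if rendition[1] <= budget:
            best = rendition
    return best

print(pick_rendition(4000))    # -> ('720p', 3200) with a 0.8 headroom factor
```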
Moreover, as part of the introduction of HD content onto UKTV Play, we’re taking advantage of Brightcove’s Context-Aware Encoding (CAE) algorithm, in lieu of a conventional bit rate ladder.
CAE means that our encoded assets have a variable bit rate, which is intelligently determined in response to the content on screen in each frame. This enables us to deliver better playback performance to our users, while conserving our storage and bandwidth costs, without a visible reduction in quality in the content delivered to our viewers. It also supports our sustainability goals as an organisation.
HOW LONG DOES THE PROCESS TAKE?
The process has taken approximately three months with re-encoding of our back-catalogue to continue through 2023.
HOW MANY PEOPLE ARE WORKING ON THE PROJECT?
The initiative to support HD content on UKTV Play has been driven by a small team of two or three people but has involved most teams across the organisation from operations, production and development, design, marketing, editorial, legal, commercial and product.
ARE YOU PRIORITISING CERTAIN SHOWS? HOW MANY DO YOU EXPECT TO INCLUDE IN EACH RELEASE?
To maximise our user’s experience, we’ve opted to iteratively roll out
re-encoding by audience demand.
WILL ALL NEW UKTV CONTENT NOW BE AVAILABLE TO STREAM IN HD?
All new and re-encoded content will be available in HD where a source HD asset for that content is available. Some of our older content (e.g. classic comedies) will continue to be streamed in SD.
ANY PLANS TO RELEASE CONTENT IN UHD OR HDR?
Supporting HD content positions UKTV Play as a premium AVoD provider in the UK and on par with our SVoD peers. There are no current plans to support UHD or HDR in the medium term. n
HERE COMES THE
Following its acquisition by Ross Video, a new era beckons for Spidercam. Jenny Priestley takes a closer look
If you’re a sports fan, it’s more than likely that you’ve seen Spidercam’s technology whizzing around a stadium or arena. The Austrian company’s cable cameras made their TV debut in 2005 and haven’t looked back since. These days they can be found everywhere from a Formula One race track to a football stadium to Metallica’s latest tour. The Spidercam system is suspended on four cables with winches that navigate a camera dolly, utilising a stabilised head to provide HD or 4K video. System rentals include an accompanying technician, pilot,
and camera operator. The company also recently launched a fixed mini solution for smaller internal production environments.
In September, Ross Video announced it had acquired Spidercam with a commitment to offer more capabilities to customers. As well as both further
developing and integrating Spidercam’s technology into its own tech stack, Ross also intends to help it break further into the North American market. “Spidercam had got to a particular size, and as you can imagine cable camera systems are actually quite expensive to manufacture,” explains Karen Walker, VP, camera motion systems at Ross Video. “So for them to develop and grow they needed to find someone that could help them achieve that.”
“Spidercam fits really nicely into our camera motion systems portfolio,” agrees Ross’s chief marketing officer, Jeff Moore. “We are already very strong in camera robotics and studio robotics, so it’s complementary to that and expands our footprint.”
Other areas of synergy between the two companies include both production and creative services. Rocket Surgery, Ross Video’s creative services team, has already worked with Spidercam on numerous augmented and virtual reality projects. “We’ve worked together on a number of esports productions,” adds Moore. “So we have that integration, where we can get tracking data for Spidercam and then render augmented reality graphics, which for esports really helps to enhance the experience for viewers.”
The acquisition also offers the opportunity for technical innovation, particularly around control platforms. Walker explains that Ross Video wants to help Spidercam further develop the control of its cable cameras. “Ross has a lot of experience with user interfaces,” she adds. “A lot of the products that we make are predominantly software orientated so we have a lot of experience in terms of what customers need and want.”
At the same time, there are certain features and functions that Spidercam is keen for Ross to help it develop. “OverDrive, our automated production control, can control some of the Spidercam functions,” continues Walker. “That would be great because then you would have a single controller point of view, which is particularly important when we start looking at whether we want to develop these products to be used in
studios. Now that a lot of studios are integrating XR and bringing their floors into play, I can see many applications where the ability to see something from higher up will work really well. News, elections, anything like that; broadcasters are using AR and graphics to get that story across.”
“There are a few things that Spidercam has done as well that we are looking at,” she adds. “They’re very safety orientated. That’s a big thing that we can learn from a Ross point of view, not just software features, but also the hardware. Graphics is an obvious one, we can develop that further together in order to create a more integrated solution.”
Asked if remote production is part of the roadmap for Spidercam, Walker admits that while many broadcasters are looking at the workflow, line of sight is incredibly important for these kinds of cameras. “You need someone there to see what’s going on. A lot of the control is very close to the field where you can actually see the equipment moving around. Even in studios or indoor-type events, you still want to have someone available, particularly from a safety point of view, because if you’re flying over audiences that’s fine until maybe somebody decides they want to stand up. But with environments where there’s no audience, or it’s very structured, I think you can use cable cameras more remotely. One of the big things with cable camera systems is that you need both an operator and pilot. Can we reduce that maybe to one operator and make it a bit more user-friendly? Yes, I think we can and it’s something we would like to do.”
With all of these plans for innovation, the Ross Video team is keen to stress that Spidercam will keep serving its customers in the same way it always has. “We’re continuing to provide the same service and the same team,” says Moore. “The goal is to make it better and not radically change.” n
A MUSICAL REVOLUTION
SphereTrax, a new music licensing platform, aims to revolutionise the way music is placed and licensed in the entertainment industry. Founder and CEO Sefi Carmel explains more
WHY HAVE YOU DEVELOPED SPHERETRAX?
As a composer myself, with over 35 years’ experience in the industry, I have repeatedly encountered three key issues with the way that music is discovered, synced and licensed, both as a creator and as a client. I created SphereTrax to solve these issues and drive positive change in the sync licensing industry, focusing on a composer-first, ethical business model. The issues with production music libraries at the moment are primarily: the antiquated delivery formats provided; the poor search experience offered; and the ever-complicated, time-consuming licensing process. We believe that we have solved these issues through our combination of cutting-edge tech, world-class creators and simplified user experience.
DID YOU BUILD THE PLATFORM IN-HOUSE OR WORK WITH A TECHNOLOGY COMPANY?
Our platform is currently being built by both members of our in-house team and with the help of (our CTO) Robin Kohze’s company, Vaionex.
HOW DOES IT WORK?
SphereTrax is expected to launch in beta mode in January
The SphereTrax platform contains the world’s first search engine for music licensing, designed from the ground up to accommodate the needs of the modern creative. Users can select from a large variety of more than 150 faders, which they can use to tweak, adjust and radically change musical parameters. In real time, this will then search through our large gallery of tracks to find the closest match. These range from moods (cinematic, tension, happiness), to genres (electronic, hip-hop, rock), to lyrical content (romance, motivation, meditation), even to colours and musical terms. Once the right track has been found, users can then drag and drop both the SphereTrax audio and their own videos into our timeline view, allowing for instant preview of their project alongside the audio they wish to license. We have simplified purchasing a licence, whilst maintaining one of the most customisable licensing models on
the market. In a few clicks, users can license their tracks, with the added ability to save presets, license entire projects at once, and more.
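One way to picture the fader-driven search: each track carries scores against the same set of parameters, the fader positions form a target, and tracks are ranked by how closely they match. The sketch below is purely illustrative; the parameters, tracks and scoring are hypothetical, not SphereTrax’s actual engine.

```python
# Illustrative sketch of fader-driven search: each track carries scores for a
# set of parameters, the user's fader positions form a target vector, and we
# rank tracks by distance. All names and numbers are hypothetical -- this is
# not SphereTrax's actual algorithm.
import math

TRACKS = {
    "Track A": {"tension": 0.9, "cinematic": 0.8, "electronic": 0.2},
    "Track B": {"tension": 0.2, "cinematic": 0.4, "electronic": 0.9},
    "Track C": {"tension": 0.6, "cinematic": 0.9, "electronic": 0.5},
}

def rank_tracks(faders, tracks=TRACKS):
    """Return track names ordered by how closely they match the fader settings."""
    def distance(scores):
        return math.sqrt(sum((scores.get(p, 0.0) - v) ** 2
                             for p, v in faders.items()))
    return sorted(tracks, key=lambda name: distance(tracks[name]))

# e.g. a user pushes the 'tension' and 'cinematic' faders up:
print(rank_tracks({"tension": 0.8, "cinematic": 0.9, "electronic": 0.1}))
```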
TELL US ABOUT THE TECHNOLOGY BEHIND SPHERETRAX
Our platform is currently being developed using many cutting-edge technologies, such as natural language processing (machine learning), audio deep learning, blockchain (specifically for our licensing capabilities), and browser-based Dolby Atmos playback.
SPHERETRAX HAS BEEN DEVELOPED FOR THE METAVERSE.
WHY DID YOU DECIDE TO DO THAT?
At SphereTrax, we firmly believe that the metaverse will bring rapid change to the entire media industry and that this technology will rely on truly immersive audio content to provide convincing, captivating experiences to the masses. This is the primary reason why all of our tracks are mixed in Dolby Atmos and other state-of-the-art audio formats.
WHO IS IT AIMED AT?
Our target markets include film/TV production companies, advertising and creative agencies, gaming studios, trailer houses, independent online creators, corporate videos and much more. Any individual or organisation that needs audio for their latest production could utilise the SphereTrax platform in one way or another.
COULD IT BE USED FOR LIVE PRODUCTION?
SphereTrax’s platform does not create the tracks live. Instead, we will allow our clients to use our AI-powered search tools to discover high-quality, curated audio content.
As our licensing system is powered by blockchain, when you find that perfect track and wish to keep it only for your project, the track can be purchased as an NFT, effectively removing it from the platform and granting you the perpetual sync rights.
IS THE AUDIO ONLY AVAILABLE IN DOLBY ATMOS?
All of our content will be available in stereo (wav, mp3), surround (5.1, 7.1), Dolby Atmos, and potentially other immersive formats soon.
WHY DID YOU DECIDE TO INCLUDE DOLBY ATMOS?
Dolby Atmos is the next step in audio formats. It allows for greater audience engagement in music, films, TV shows and video games. As our competitors do not yet provide for clients who are looking to build great immersive content, we built SphereTrax to solve this problem.
WHERE DOES THE PLATFORM STAND AT THE MOMENT? WHEN DO YOU EXPECT TO LAUNCH?
Our product is currently in early-stage testing and our public beta will launch worldwide in January 2023. We are excited to share further details about the launch soon. n
GOING WITH THE FLOW OF THE CLOUD
Why live sports production is moving to the cloud, by Gabriel Baños, CEO of Flowics
The proliferation of new OTT platforms and channels has created many more opportunities for sports fans to watch their favourite sports, sometimes with the same content viewable in more than one place. Consequently, sports broadcasters and producers face increasing competition for viewership and advertising revenue. Now that fans are not limited to tuning in to linear broadcasts at a specific time, viewers and advertisers can spread the love around. Combatting that fragmentation requires constant innovation to attract viewers and keep them engaged. Productions continue to push the envelope with enhanced graphics, on-screen data, interactive elements, new and better cameras, 4K, 8K... the list goes on.
At the same time, broadcasters are under pressure to increase profit margins and find more cost-effective ways to reach their increasingly fragmented audiences. Traditional live sports production is inefficient, expensive, and demanding, and traditional broadcast production economics are being challenged.
Producing a traditional live sports broadcast involves bringing everything – cameras, microphones, production equipment, operators, producers, talent – to wherever the event is taking place. It is expensive, time-consuming, and risky to send people and gear from place to place several times a week. Working on location – from an OB van, for example – can make the workflow more complicated and less flexible. Take graphics production, for example. Many sports broadcasters and third-party production companies transport their graphics machines from one place to another. When operators need to share the graphics package with someone else, they must upload a massive 1- or 2-gigabyte file. And it’s difficult to collaborate with graphics producers back at home-base or to change graphics on the fly. It involves an inefficient process of making changes, uploading, downloading again... and a lot of waiting in between.
On top of all that, the sports season is gruelling for production teams. Depending on the sport, there could be ten or 15 games per week during a season, often concentrated on weekends and taking place in different cities or even different countries. The production crew must travel from one city to another to cover the games, sacrificing time with their families.
Gabriel Baños, CEO of Flowics
Therein lies the conundrum: the traditional broadcast workflow makes it difficult to innovate and compete at a time when it’s needed most. With the logistical, economic, and human challenges of producing live sport, companies and broadcasters are turning to cloud-based and hybrid solutions. That includes contribution from the venue to the cloud, all aspects of production, and distribution to viewers or other platforms.
Cloud-based tools enable remote production, so moving production to the cloud makes things far more efficient, far more flexible, and far less costly than traditional workflows. Broadcasters see benefits up and down the live sports production value chain, and here are some examples.
LOWER EQUIPMENT COSTS AND NO MORE SHIPPING
Because everything happens in the cloud, there’s no need to ship equipment around or to rent it, saving time, money, and potential loss or damage. In many cases, cloud tools also eliminate the need for OB trucks.
THE FLEXIBILITY OF SOFTWARE
Cloud-based production relies primarily on software instead of hardware, so it’s more flexible in the sense that you can change one component for
another without dealing with physical appliances. And you can do it all remotely in the cloud from any location.
EASY COLLABORATION
Because everything happens in the cloud, everyone involved in the production can easily work with each other to get things done. Expanding on the graphics example, with cloud-based tools, a graphics artist can sit anywhere in the world and work on the design. Then, when it’s time to go live, the graphics operator can organise and pull those graphics into the broadcast while sitting somewhere else. If they need to make a change in the middle of the show, it’s as simple as calling the graphics artist, who might be on the other side of the world. The artist can fix the graphics and update the version in the cloud, and then the operator can update the graphics package to the new version. No-one has to be at the same location, and they don’t need to exchange any files between them. Also, because the graphics package is hosted in the cloud, sharing it with a third party is as simple as sharing a URL.
EASY VERSIONING
Likewise, when you’re working in the cloud, it’s easier to create multiple versions of the same broadcast or even personalised feeds of the same broadcast. For example, one European broadcasting group produces shows for certain sporting events and then delivers the signal to all the different broadcasters in the group. With the broadcasters being spread across Europe, many of them need additional graphics,
graphics in different languages, or some other level of personalisation. As part of a larger proof of concept for cloud-based production, this host broadcaster used a cloud-based broadcast graphics engine that made it easy to personalise graphics in the various feeds before sending them on.
LOWER TRAVEL EXPENSES, LARGER TALENT POOL
Without the need to ship equipment to location, you no longer have to pay as many people to travel from place to place. And, because people can work remotely, you are no longer restricted to working only with collaborators who live in the area where the event is taking place, or to paying people to travel to that location. Instead, you can access a larger pool of talent anywhere that makes sense.
BETTER WORK-LIFE BALANCE
Since production crews and freelancers can do their jobs without being on-site, they can travel less and spend more time at home. Better work-life balance is a valuable benefit for many people.
SaaS- and cloud-based technologies enable new and decentralised remote production workflows that are changing the way content is contributed, produced, and distributed. Through cloud-based tools, broadcasters have discovered that live sports production can be a lot simpler and doesn’t have to take place in a studio or an OB van. The cost, speed, and human advantages help broadcasters win the competition for fans and advertisers. n
MAKING HISTORY
International production company and host broadcaster Quality Media delivered a “world-first all-remote multisport production” for the XII South American Games in October. Chief production officer Pablo Reyes takes TVBEurope through the workflow
We at Quality were the first production company to offer the Spanish market an established model of remote IP production. We have been proving the worth of our all-remote production workflow for single sport competitions, all over the world, ever since. Over the last five years, we have produced live content for international federations such as FIFA, FIBA, FINA, IWF, Rugby Europe and various world sports committees, plus many private sports promoters such as Grises de Humacao and Peter Auto.
Because our all-remote model has been so successful, matching the quality and production value of a traditional live sports set-up, it made
sense for us, as the joint host broadcaster of the XII South American Games, to use the same model in Paraguay. This time, our workflow was scaled-up and super-charged; it was the first all-remote multisport production in television history and involved a staggering 4,526 athletes and 53 sports.
WHAT ARE THE BENEFITS OF GOING ALL-REMOTE?
For the XII South American Games alone, the implementation of an all-remote production workflow amounted to massive NPV (net present value) savings of approximately $2.5 million.
If you think about our all-remote workflow compared to a traditional multi-OB truck set-up for a large-scale sports production
like this, we curtailed the movement of around six tonnes of extra material and 40 people; not to mention the installation of hundreds of kilometres of cable that would usually take around two weeks.
It would also be impossible to cover several sports in one day with one or multiple OB vehicles because of cost implications and physical distances. The flexible workflow meant that cameras were not tethered to OB vans and could easily be moved from one venue to the next. The time and cost saved on logistics and resources were monumental.
WHAT TECHNOLOGY DID YOU USE, AND WHY?
We had 90 cameras using over 100 EnGo mobile transmitters at the event locations. The EnGos were used for primary and back-up transmission, blending a combination of fibre and 4G cellular connectivity from a variety of local network providers to send signals to 23 Dejero WayPoint receivers at the international broadcast centre (IBC) in Asunción. The director communicated with the camerapeople via the EnGo IFB.
Ten additional WayPoint receivers, located at Quality’s Buenos Aires hub, were used to reconstruct and decode the video feeds which then entered into a Blackmagic 72x72 matrix and were shared with an ATEM 2M M/E production switcher and replay systems. The packaged content was then uplinked via satellite for distribution to the rights holding broadcasters (RHBs) and inserted into an ingest system that we developed ourselves for logging and clipping. The RHBs included national local broadcasters, TIGO and Paraguay TV, as well as TyC Sports in Argentina, TVN Chile, CDO Chile, COB Brazil and Panam Sports.
In cases like this, you have to be very sure your technology will work on the day and can deliver the highest quality video to the viewers. Dejero allowed us to go in with the solid belief that transmission and connectivity would perform how it should anywhere.
HOW MANY CREW MEMBERS WERE ON-SITE AND HOW MANY BACK AT BASE?
Only the camera operators and minimal supporting production crew were at the sports event venues. A microphone at the hub in Buenos Aires, connected via XLR to a WayPoint receiver, enabled the director to communicate with camera operators using the EnGo IFB voice communication feature.
At the IBC there were 18 technicians, two NOC operators, three switcher master operators, ten editors, four directors, four mixers, and four replay operators. As we achieved uninterrupted connectivity from any location, the majority of the production function and staff (producers, directors, graphics operators and technicians) remained in Buenos Aires to package content for distribution from the hub.
HOW MANY CAMERAS DID YOU EMPLOY FOR THE PROJECT?
Because there were multiple venues and different types of sports, we deployed more than 90 cameras across Asunción, which included 30 Sony 4300s, 40 Panasonic 600s and 20 Panasonic 250s. In terms of
camera configuration examples, we used nine cameras for water sports, we set up eight 5x5 cameras for football and basketball, and for athletics there were 18 cameras. We also used a number of roaming cameras to cover events over longer distances such as rowing, canoeing, cycling, and the star discipline, the marathon. Drones were also used for the opening and closing ceremonies.
WHERE WAS THE GALLERY BASED AND HOW FAR AWAY WAS IT FROM THE SPORTS AS THEY WERE TAKING PLACE?
The galleries were located at the IBC at the Olympic committee facilities. At least 20 sports were taking place there, but the other 30 sports were located across Asunción and Paraguay; some of them around 20km from the IBC, and others thousands of kilometres away.
DID YOU HAVE RF-BASED BACK-UP IN CASE OF ANY PROBLEMS?
No, we relied solely on Dejero EnGo transmitters as main and back-up. Each camera had one EnGo as the main and another as the back-up. If you carry out a proper internet connectivity survey of each venue, you can guarantee the system will work correctly.
WHAT WAS THE BIGGEST CHALLENGE YOU FACED?
We completed more than 1,000 hours of live production remotely in a country that doesn't have the best connectivity infrastructure, working with organisers that didn't have any real background in, or knowledge of, remote production. Despite these fundamental challenges, the event was a huge success. We're incredibly proud of this feat.
WHAT DID YOU LEARN FROM THE PROJECT?
We learned that no matter how complex a project, no matter how many venues or simultaneous feeds we have to deal with, an all-remote workflow can be scaled up completely successfully. To the wider sports broadcast community, this production, which was watched by tens of millions of people and broadcast ten hours per day over a 15-day period, should be seen as a progressive example of how much time, effort, cost and logistics can be saved using an entirely remote workflow.
HOW LIKELY ARE YOU TO GO BACK TO A TRADITIONAL WORKFLOW?
I can't think of an event that cannot be produced entirely remotely. With Dejero and Smart Blending Technology, connectivity can be achieved from any location; whether we're at sea, in the mountains or deep underground, we can combine any type of network (4G/5G cellular, satellite, fibre or any other IP network) to achieve the reliable connectivity needed to bring signals back to any of our hubs. For us it doesn't make sense to go back to a traditional workflow, no matter the scale or location of the production. n
Dejero connectivity solutions were a key contributor to Quality's all-remote workflow, which is said to have saved $2.5 million in production costs
Directors at Quality's remote hub communicated with camera operators at each venue using the EnGo IFB voice communication feature
AT THE CUTTING EDGE OF ROBOTICS
To celebrate its 25th anniversary Al Jazeera wanted to continue in its objective to “push the boundaries in storytelling” by building a new studio infrastructure that “integrates technology at the cutting edge of broadcast”. To do so, Al Jazeera chose MRMC’s StudioBot XL robotic arm to replace its manual dolly/crane-based workflow.
The technical team at Al Jazeera, based out of Doha, Qatar, was seeking a solution that could address the issues it was having with a crane’s limited range of motion, and manual operability, which inevitably led to inconsistent shots. Situated within the U-shaped Set-3 of the flagship Studio-5 of Al Jazeera Channel (Arabic), the StudioBot XL now enables the team to create instantaneous, pre-programmed shots via software tools.
Commenting on the benefits of the MRMC technology, Ahmed Marzouq AlFahad, executive director of technology and network operations at Al Jazeera, says, “The technology team conducted thorough alternative analysis comparing industry leaders’ solutions. We are pleased to have taken the step to deploy MRMC products within our new studios, which provides our audience an innovative and unique experience.”
MEETING AMBITIOUS REQUIREMENTS
Having gone through the operation and technical requirements of the project, the following factors were taken into consideration:
• Range of motion
• Programmability/repeatability
• Integration with automation system
• Integration with existing VR System
• Ease of use and integration with the studio robotic system
• Safety feature (collision avoidance)
MRMC's StudioBot XL was the chosen solution, and is the first of its kind to be deployed at Al Jazeera. With a large operating envelope, the robotic arm is described as 'breathtaking while being intuitive and useful enough for the operators to adapt seamlessly to the system'.
MRMC says that one of the key advantages of its robotic arm solutions is that they can be operated by a manual user or be controlled by a studio automation system, which in the case of Al Jazeera is Mosart. The company explains that this provides “the best of both worlds for Al Jazeera as they have the ability to create stunning moves for their marquee shows but also have impressive camera movement at other times when manual operators aren’t available by using Mosart to recall the presets.”
MRMC conducted live internet demonstrations with the Al Jazeera team, from the early phases of the project installation to going live.
INTEGRATION WITH AR/VR SETS
The Al Jazeera team is also leveraging StudioBot's tracking data capability, which is used to produce AR/VR shows and segments. The tracking data passed to the GFX system is accurate enough to produce "hassle-free tales", helping the crew to use MRMC as an integral part of their everyday operations. The studio space is equipped with several video walls and is adjacent to the newsroom. Set-3 allows Al Jazeera to create more "participatory story-telling productions" to bring the audience closer to the genuine experience.
Asef Hamidi, director of news, explains: “The technology team’s market research confirmed the superior quality of MRMC’s services to Al Jazeera, which demonstrated how pioneering they are in their line of work. We are looking forward to seeing more products from MRMC that will enrich and unearth new possibilities in news production.” n
SMPTE PLANS FOR THE FUTURE
SMPTE, the Society of Motion Picture and Television Engineers, recently announced the results of its 2022 elections for officers and governors. From the UK, Richard Welsh, SVP of innovation at Deluxe, was elected to the role of executive vice president. Welsh was involved in the founding of the organisation’s UK section in 2012 and has been a part of the SMPTE standards community since 2005. His new role on the board is one of strategy. He explains, “I’m looking at where the industry is going and how SMPTE can support that move.” In this two-year role, Welsh has a major focus on diversity and inclusion. “As a society, we need to look at what we want to be and how we can get there. My role is to develop this vision and build a mission around it.”
When asked how he’ll know he’s succeeding, Welsh says he sees conferences as the industry’s shop window. “They represent who we are. Today, I don’t think they look good to younger people coming into the industry. When that starts to change, I’ll know the work is starting to pay off.”
The term ‘diversity’ is often used to refer to gender diversity, but Welsh sees diversity as also being about age, background, ethnicity, sexuality and way of thinking. “I feel strongly about supporting neurodiverse people within the industry,” he adds, referring to everything organisations can do to embrace and maximise the talents of people who think differently. For context, the UK’s Chartered Institute of Personnel and Development suggests over ten per cent of the UK population can be considered neurodiverse whether through autism, dyslexia, dyspraxia, ADHD or another diagnosis. “It’s about making the workplace an accepting and accommodating place for those who don’t fit into the ‘normal’ social models,” Welsh explains. “The fact is – and I say this from experience – that those people may have an exceptional talent
for an engineering role by literally ‘thinking different’, particularly in software development.”
Looking to the future, Welsh sees the benefit of nurturing the next generation. "The Rise Up Academy is the single most important initiative in the industry," he asserts. The outreach programme delivers broadcast engineering and technical workshops to primary and secondary school children aged 9 to 18. It intends to inspire, educate and inform children and young adults about the pathways and career opportunities available to them within the broadcast technology industry.
In 2023, the Academy will become a charity allowing it to access more funding and start working internationally. Welsh adds, “I hope SMPTE can become more and more a part of this work and similar initiatives. It’s a long-term project with a return on investment a decade or more in the future, but as an industry, we need this. It is one of the biggest things I think SMPTE will be engaged with.”
SMPTE also announced the election of Chris Johns, chief engineer, group architecture Sky UK, to the position of United Kingdom region governor. Like Welsh, Johns was instrumental in the setting up of SMPTE in the UK. “Since being the first chair of the UK section a decade ago SMPTE in the UK has gone from strength to strength,” he says. “We’ve succeeded because we’ve acknowledged the different need of members between the UK and US.”
One innovation has been trialling academic institutional memberships as a way of bringing students into SMPTE and promoting inclusion from the get-go. The initial trial with one university, which gave students access to the journal, subsidised travel to UK meetings and a hoodie, was so successful that ties with three more universities have been established. "We've been careful to balance what we get from the university with what we give back to the
students. Once students are at the events, we are proactive in including them in conversations with our wider membership.”
Johns sees his governor role on the board as a liaison. “I’m there to give visibility to the talent and experience we have here. A lot of research in telecommunications stems from the UK so we have plenty to share. Currently, I’m talking about object-based media and virtual production environments. There’s a lot happening here, but often by smaller entities compared to Hollywood.”
It’s a two-way conversation on the board alongside regional governors for other geographies. “During my term, I’d like to establish strong links with sections in Europe and explore what we can do together,” adds Johns. “It’s not easy for everyone to get to the US, so I’d like to explore Europe-based technical events to complement the SMPTE Media Technology Summit.”
The annual SMPTE Media Technology Summit is a multiday event highlighting the year’s work developing standards and deploying technology. Welsh lays down a challenge: “Imagine you
took everything based on a SMPTE standard out of the signal path. SMPTE’s standards are so intrinsic, yet often go unnoticed.”
Just a quick run-through of key SMPTE standards comes up with timecode, SDI, IMF, DCP and the IP essences described by ST 2110 and ST 2022. Without these, most TV and film productions and transmissions would disappear, which is the basis of Welsh’s other focus: “We should find a way to celebrate the intrinsic nature of SMPTE,” he says. “Standards get a bad rap at the moment as they’re seen as slow and even irrelevant in the software age. But there aren’t enough baseline standards in software to build a stack on top of. If we concede this ground, the entire layer that represents the media environment will become a wild west of proprietary systems.”
Delivering on Welsh’s vision seems to require the diversity improvements for which he is also fighting. “SMPTE does have to modernise, becoming more nimble and quick but we must maintain high standards,” he states. “We need to think about how we set up for success and create a pipeline of new talent.” n
“The Rise Up Academy is the single most important initiative in the industry” Richard Welsh
Pixotope Fly has already been trialled on live broadcast coverage of American stock car racing
AUGMENTED REALITY TAKES TO THE SKIES
Pixotope’s new camera tracking technology uses drone cameras and the natural world to create augmented reality graphics. Jenny Priestley finds out more
Broadcasters have been using augmented reality graphics to highlight key information for viewers for a number of years. From a player’s stats on Match of the Day to the latest polling in an election, AR has become ever more pervasive on screen. Pixotope has decided to take that technology one step further with the launch of Pixotope Fly, a camera tracking software solution that combines augmented reality elements with drones and cable cameras.
The solution uses computer vision technology to leverage video from the onboard camera to identify anchor points in the natural environment such as rocks, trees, and buildings, which it locks onto and tracks against 3D graphics.
There are two broad approaches to camera tracking: sensors that point into the area where the camera is operating and track its movement, or sensors mounted on the camera itself. In the early days of tracking, motion sensors were placed on a jig, which would then track the camera as it moved around. About a decade ago, that moved on to optical systems that rely on markers being placed in the scene or where the sensor can see them.
The technology for Pixotope Fly was initially developed by TrackMen, a real-time 3D camera and talent-tracking company based in Germany, acquired by Pixotope in April of this year.
“TrackMen has been leading the way on optical systems for a while and this is a fully optical system that operates either with markers that can be placed where the sensor can see them, or with natural markers, i.e. the natural environment,” explains Pixotope’s VP of global marketing, Ben Davenport.
This means that if the camera is operating where it can see hard edges or contrasts, such as in a stadium or arena, there is no need to place markers. “We mostly still use a sensor camera to do that rather than using just the image that comes through the lens of the video camera; I guess that’s the best way to describe it, for a number of reasons,” he adds. “Mainly because it gets very complicated with zoom. If you defocus for creative effect it becomes very difficult to latch on to anything whether they’re natural or otherwise.”
One of the obvious applications for the technology is drones, says Davenport. “We didn’t start by thinking how drone footage is becoming popular,” he adds, “it was more that we had the possibility of doing this and tracking is a really good use case that could enable our customers to do something different with augmented reality.”
To put it simply, the drone launches into the air and captures the scene; Pixotope's solution then creates a keyframe from which the software starts to generate a point cloud. As the operator moves the drone a second keyframe is
created, and as the point cloud continues to build it becomes a 3D map of the scene. “Once you’ve configured it and you’ve got your first keyframes and point clouds, you can set your points of origin and lock your AR scene to the real-world scene,” explains Davenport. “The drone can theoretically move anywhere from that as long as it continues to see points of reference and continues to build the map.”
All of this is done in mere minutes, with the point cloud continuing to build in real time from launch. “It’s quite a cool thing to see live,” says Davenport. “You send the drone up, click a number of keyframes which takes 90 seconds/two minutes, and then you’re laying your mesh so you can lock the real world to the virtual world.”
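For readers who want a feel for the sequence Davenport describes, the skeleton below sketches the configure-then-track flow in TypeScript. Everything in it is a hypothetical placeholder rather than Pixotope's actual software: real natural-feature tracking involves feature detection, matching and optimisation that are omitted here.

```typescript
// An illustrative, heavily simplified skeleton of the keyframe/point-cloud workflow:
// capture keyframes, grow a point cloud into a 3D map, lock the AR origin to it,
// then track the camera against the map every frame.
type Point3D = { x: number; y: number; z: number };
interface Keyframe { frame: ImageData; }
interface SceneMap { points: Point3D[]; origin: Point3D | null; }

// Placeholder: triangulate natural anchor points (rocks, trees, building edges)
// visible in the new keyframe and add them to the map.
function addKeyframe(map: SceneMap, kf: Keyframe): SceneMap {
  return { ...map, points: [...map.points /* + newly triangulated points */] };
}

// Placeholder: match the live frame against the point cloud to recover the
// drone camera's position for this frame, so AR graphics stay locked to the scene.
function estimatePose(map: SceneMap, frame: ImageData): { position: Point3D } {
  return { position: { x: 0, y: 0, z: 0 } };
}

// Configuration phase: a handful of keyframes captured as the drone moves
// (roughly 90 seconds to two minutes, per the article) before the AR scene
// is locked to the real world by setting a point of origin.
function configure(keyframes: Keyframe[]): SceneMap {
  let map: SceneMap = { points: [], origin: null };
  for (const kf of keyframes) map = addKeyframe(map, kf);
  map.origin = { x: 0, y: 0, z: 0 };   // the operator sets the point of origin here
  return map;
}
```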
Pixotope Fly has already been trialled on live broadcast coverage of American stock car racing by Silver Spoon Animation. But Davenport believes there are plenty of areas where the technology can be used. “It is very much live events and the majority of those are sport, but there are use cases around large concerts, outdoor awards ceremonies. It is primarily live so I don’t think you’d see it used on documentaries that tend to use a lot of drone footage, although maybe you could for pre-visualisation. It has been used for a certain American football event. Motor racing is an obvious one, whether that’s circuit tracks or stadium tracks. Anything in a stadium is also pretty obvious.”
Of course, there are always challenges around creating any new software, and Pixotope Fly was no different. The first was finding talent that understood what Pixotope wanted to achieve. The other was creating technology that is very complex but making it simple for the end user. "That's something that Pixotope really strives for in everything we do," says Davenport. "One of the reasons that we acquired TrackMen was that we knew they had incredible technology and we could make it easier to use and accessible to more people. Broadcast requires utmost reliability, we always require deterministic performance, and yet we're always pushing the boundaries of innovation. Sometimes those two don't sit so well together."
Like any good piece of technology, the team at Pixotope are already working on the next iteration, which Davenport says will include some ‘logical developments’. “One will be around constantly pushing on usability and the other element we’ll see is some of the benefits of the technical integrations between Pixotope and TrackMen, working on [and simplifying] the user experience and hooking it together so the graphics work seamlessly together in one interface,” he concludes. “That will be the next development and then we’ll see some clever stuff which I’ll save for later.” n
“You send the drone up, click a number of keyframes, and then you’re laying your mesh so you can lock the real world to the virtual world ” Ben Davenport
WHAT ARE YOU TALKING ABOUT?
Over the last decade, intelligibility on TV has been quite the hot topic. A few years ago, it was even given its own word when “mumblegate” became part of the vocabulary to describe television dramas that were difficult to hear. These programmes caused outrage across the spectrum, but there is one group of people for whom it is a more serious issue.
According to the World Health Organisation, more than a quarter of people older than 60 are affected by hearing loss, and research from Statista’s 2021 report states that 19 per cent of Europe’s population is over 60 years of age, more than any other geographic region. That’s a lot of people.
Our consumption of content has also exploded. With media on a wider range of devices in an even wider range of locations, we can enjoy programming while we are at work, at the gym or on the bus. It's only a matter of time before the issue raises its muffled head again.
But there’s an element of next generation audio (NGA) that may provide some clarity, and MPEG-H might be the key to unlocking it.
MAKING IT PERSONAL
NGA is a term that covers several audio enhancements, but unlike its headline-grabbing immersive sibling, personalisation isn’t about making things bigger; it’s about making things more accessible and giving
viewers more control over what they hear. The ability for viewers to independently adjust things like commentary, language and crowd noise can all be on the table.
At this year's IBC Show in Amsterdam, Salsa Sound and Fraunhofer IIS provided a world-premiere look into how this might work, demonstrating a complete end-to-end automated workflow for personalising live sports audio. Five years in the making, they did it by enabling Salsa's MIXaiR production tool to author Fraunhofer's popular MPEG-H codec.
WHAT IS MIXAIR?
“MIXaiR is a production tool that uses AI to completely automate the complex audio workflows of live sports broadcast,” explains co-founder and director of Salsa Sound, Ben Shirley.
"Mixing live sports audio for broadcast has always been resource-intensive, with the mixer constantly manipulating console faders to ensure the mic nearest the ball is always in the mix, whilst managing the balance of this mix against the crowd and commentary, and remembering that the output may be in multiple output formats like stereo, 5.1 or even immersive," he adds.
“MIXaiR manages all the available inputs and microphone feeds and can create multiple audio mixes in real time. For example, multiple commentators, crowd mics, studio presentation mixes and pitch-side
mics can be grouped to create different outputs for multiple audiences, such as home, away or alternate language mixes, as well as to platform-specific formats and loudness requirements."
It does this by assigning incoming feeds as audio objects and applying the desired relative loudness of each element. In this way, the AI engine can output different groups and different loudspeaker configurations.
Salsa’s development with Fraunhofer means that they can also be authored as MPEG-H outputs.
WHAT IS MPEG-H?
MPEG-H was standardised in 2015 and is increasingly popular; in fact, it’s key to a number of international broadcast standards. It is included in the ATSC, DVB, TTA (Korean TV) and SBTVD (Brazilian TV) TV standards and Fraunhofer works with CE manufacturers like LG, Samsung, Sennheiser and Sony to develop consumer products with MPEG-H support.
“There are many elements which make up the MPEG-H audio standard, but MPEG-H Part 3 is all about 3D audio,” says Yannik Grewe, senior engineer of audio production technologies at Fraunhofer IIS.
“It enables immersive and spatial audio as channels, objects or ambisonic signals, as well as universal delivery over any loudspeaker configuration. In short, MPEG-H is an audio standard with features designed to provide viewers with the flexibility to adapt the content to their own preferences, regardless of the playback device.
"This means that in addition to MPEG-H audio being adopted by various CE manufacturers, it has also gained interest from production companies, streaming service providers and broadcasters, and Fraunhofer is working with multiple industry partners like Blackmagic, Linear Acoustic and Jünger Audio."
Salsa Sound is also on that list and the desire to automate personalisation saw collaboration between the two companies ramp up at the start of 2022. “The aim was to make it easy for broadcasters to produce live content that makes use of the personalisation of MPEG-H,” explains Shirley.
“The integration involved creating an additional output group type which provides an authored MPEG-H output and control of all metadata.
In a standard SDI-based infrastructure the MPEG-H output consists of a total of 16 channels; one channel carrying MPEG-H metadata and 15 channels of audio which can consist of whatever channels, presets or groups are required to allow viewers to switch between different audio options. These can be sound beds, objects or groups such as commentaries, different languages or crowd mixes.”
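As an illustration of that channel plan, the sketch below shows one way a 16-channel SDI output might be organised into groups and viewer presets. The structure, channel numbers and names are hypothetical examples, not the actual MIXaiR or MPEG-H authoring interface; the preset names simply echo those used in the IBC demo described below.

```typescript
// Illustrative only: a possible layout for a 16-channel SDI MPEG-H output.
interface AudioGroup {
  name: string;
  channels: number[];           // SDI audio channels carrying this bed, object or group
  defaultGainDb: number;        // relative loudness applied by the authoring tool
  userAdjustable: boolean;      // whether the viewer may raise or lower it at home
}

interface MpegHSceneSdi {
  metadataChannel: number;      // one channel reserved for the MPEG-H metadata
  groups: AudioGroup[];         // the remaining channels of audio (up to 15)
  presets: { name: string; groups: string[] }[];   // the options offered to the viewer
}

const scene: MpegHSceneSdi = {
  metadataChannel: 16,
  groups: [
    { name: "crowd bed",  channels: [1, 2, 3, 4, 5, 6], defaultGainDb: 0,  userAdjustable: false },
    { name: "commentary", channels: [7, 8],             defaultGainDb: -3, userAdjustable: true  },
    { name: "pitch-side", channels: [9, 10],            defaultGainDb: -6, userAdjustable: false },
  ],
  presets: [
    { name: "Standard TV mix",   groups: ["crowd bed", "commentary", "pitch-side"] },
    { name: "Enhanced dialogue", groups: ["crowd bed", "commentary"] },
    { name: "No commentary",     groups: ["crowd bed", "pitch-side"] },
  ],
};
```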
WHAT WAS ON SHOW, AT THE SHOW?
The IBC demo showcased both ends of the production process. At the production end, a DAW played out raw microphone feeds from a football match into MIXaiR which mixed pitch-side mics, managed commentary levels and crowd mixes, and output them as either channel beds or audio objects with MIXaiR-generated MPEG-H metadata. The 16 MPEG-H audio and metadata channels were encoded in a broadcast encoder and sent to an MPEG-H-compliant set-top box on the other side of the stand.
"The signals were defined as MPEG-H audio components and user interaction was enabled," says Grewe. "This resulted in three presets for user interaction: a standard TV mix, an enhanced dialogue mix, and a no-commentary mix. We also provided the ability to manually adjust the prominence of commentary in relation to the stadium atmosphere.
“The automatically generated digital audio channels were embedded with the corresponding video and fed into the MPEG-H encoder. In the ‘home’ environment, the set-top box received the encoded signal over IP and viewers were able to select their preferred audio option using a standard remote control.”
This level of MPEG-H integration means that MPEG-H metadata could be authored to create a complete scene ready for encoding and transmission, with all the end-user personalisation features that the format enables.
THE GAME IS AFOOT
“Some service providers already offer a second audio stream to select between an audio mix with and without commentary, but this implementation is very limited,” adds Grewe. “With MPEG-H such user interaction can be achieved without the necessity to switch streams.”
Which brings us full circle and back to mumblegate. Personalisation can provide the ability to give people with different requirements access to the same content, and AI could help to do this without piling additional pressure onto production staff.
“I’ve always considered personalisation to be the greatest benefit of NGA. Not everyone wants immersive audio set-ups or soundbars in their home, but everyone watches media in a different environment, on different equipment and with different individual preferences and needs,” says Shirley. “Object-based audio facilitates access; the mix may have enhanced dialogue for those with a hearing impairment, and an audio object could even be an audio description for visually impaired viewers.
“The possibilities are huge for increasing both engagement and accessibility,” he concludes. n
THE CURRENT STATE OF WEBRTC
By Jonas Birmé, VP of research and development, Eyevinn Technology
WebRTC is the technology for real-time streaming online with broad support across browsers. But it is still not widely used for real-time streaming of professionally produced content, as there are still a few gaps in the ecosystem that need to be bridged. Bridging these gaps with standard solutions will maximise interoperability and streamline development and adoption.
Initiatives to bridge these gaps exist, and Millicast introduced WebRTC HTTP Ingest Protocol (WHIP) in 2020, while DASH-IF formed a public taskforce that published a report in March 2022 with examples of use cases where a combination of MPEG-DASH and WebRTC would be required. In June 2022, independent consultants Eyevinn Technology proposed an Egress protocol called WebRTC HTTP Playback Protocol (WHPP) which was later replaced by WebRTC HTTP Egress Protocol (WHEP), introduced by the same IETF working group that drafted WHIP. In September 2022, Cloudflare announced Cloudflare Stream in open beta to support WebRTC-based streaming using WHIP and WHEP.
WebRTC is a well-established set of standards for real-time communication, supported across all browsers. Designed for ultra-low latency communication, it powers video conference services such as Google Meet and Microsoft Teams. A conference service provides the participants with the application, and the signaling protocol between application and back-end is proprietary to the conference service provider. In broadcast streaming, you instead have an ecosystem of producers, distributors and players, with open standard protocols between the producer and distributor (ingest) and between distributors and player (egress). As the main use case for WebRTC has been video conferencing, this type of interoperability has not been necessary.
To adopt WebRTC into the broadcast streaming world this interoperability gap is the fundamental part that needs to be solved first. A proposal to solve this issue on the ingest-side was introduced by Millicast (now Dolby.io) in 2020, an HTTP-based protocol for the exchange of SDP messages between the sender and the receiver: WebRTC HTTP Ingestion Protocol (WHIP). As all modern browsers support WebRTC this protocol would also enable people to go live
directly from a web browser without having to install a native application.
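To make the WHIP exchange concrete, the browser-side sketch below POSTs an SDP offer as application/sdp and applies the SDP answer returned by the server, keeping the Location header so the session can later be closed with an HTTP DELETE. The endpoint URL is assumed, and a production client would also deal with ICE trickling and authentication.

```typescript
// A minimal browser-side WHIP ingest sketch; the endpoint URL is an assumption.
async function startWhipIngest(endpoint: string): Promise<() => Promise<void>> {
  // Capture camera and microphone and attach them to a new peer connection.
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection();
  media.getTracks().forEach(track => pc.addTrack(track, media));

  // Create the SDP offer and send it to the WHIP endpoint as application/sdp.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp,
  });
  if (!response.ok) throw new Error(`WHIP endpoint rejected the offer: ${response.status}`);

  // The response body is the SDP answer; Location identifies the session resource.
  const resource = response.headers.get("Location");
  await pc.setRemoteDescription({ type: "answer", sdp: await response.text() });

  // Return a teardown function that deletes the session and closes the connection.
  return async () => {
    if (resource) await fetch(new URL(resource, endpoint).toString(), { method: "DELETE" });
    pc.close();
  };
}
```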
While WHIP closes the ingest gap, the gap on the egress and playback side remains.
In an initiative by Eyevinn Technology to develop a prototype for how standardised WebRTC-based streaming could be achieved, a similar HTTP-based protocol was proposed called WebRTC HTTP Playback Protocol (WHPP). This protocol specified a standard for the exchange of SDP messages between the WebRTC media server and the receiver.
With this protocol in place, the gaps on both the ingest and egress sides could be filled, and any player implementing this protocol could consume a stream from any WHPP-compatible WebRTC distribution platform. A couple of months later, the IETF working group behind WHIP presented a draft for WebRTC HTTP Egress Protocol (WHEP), which seems to be the standard that will be adopted for playback.
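Playback follows the same pattern: under WHEP the player is also the one that sends the SDP offer and applies the returned answer. Again, this is a minimal sketch against an assumed endpoint rather than a complete player.

```typescript
// A minimal browser-side WHEP playback sketch; the endpoint URL is an assumption.
async function startWhepPlayback(endpoint: string, video: HTMLVideoElement): Promise<void> {
  const pc = new RTCPeerConnection();

  // Receive-only transceivers: the player only consumes audio and video.
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.ontrack = (event) => { video.srcObject = event.streams[0]; };

  // As with WHIP, POST the SDP offer and apply the SDP answer from the response body.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp,
  });
  if (!response.ok) throw new Error(`WHEP endpoint rejected the offer: ${response.status}`);
  await pc.setRemoteDescription({ type: "answer", sdp: await response.text() });
}
```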
The first company to provide standardised end-to-end WebRTC-based broadcast distribution was Cloudflare, which in September 2022 announced support for WHIP and WHEP on Cloudflare Stream. And the Swedish CDN vendor GlobalConnect announced in October that it is adopting WHIP and WHEP for its WebRTC-based distribution platform.
DASH-IF identified use cases where integration and interoperability points between MPEG-DASH and WebRTC are necessary. For example, a complex real-time live event with multiple synchronised streams using WebRTC representations with multiple adaptation sets for different camera angles, or a real-time live event using WebRTC while ad periods are delivered with MPEG-DASH.
With the fundamentals of standardised signaling in place, further work is needed to support digital rights management (DRM), captions and subtitles and advanced audio and video codecs. Overcoming the gaps in these areas will require the browser vendors to also be engaged in these initiatives.
Live events with interactivity, betting, and in-stadium experiences require lower latency than what can be achieved with HTTP streaming. The efforts mentioned in this article will enable the use of WebRTC-based distribution for these use cases in a standardised way. n
A WIN AT THE WORLD GAMES
Every four years, athletes from around the world gather to compete against one another to prove they are the best at what they do. But this isn’t the Olympics. Instead, it is the World Games, which traditionally takes place the year after its more famous cousin. More than 3,600 athletes take part in more than 30 unique, multi-disciplinary sports from parkour to boules to tug of war.
The World Games celebrated its 40th anniversary in July with the 2022 event taking place in Birmingham, Alabama, the first time it had been in the United States since the inaugural event in 1981.
In order to reach as wide an audience as possible, the International World Games Association brought in International Sports Broadcasting (ISB) as the host broadcaster and to sell the rights to the event around the world. “It was difficult to get as many broadcasters on board as we’d hoped this year because it was affected a lot by other sports moving their dates due to the pandemic. That impacted broadcasters’ budgets and schedules,” explains Jorge Pickering, head of engineering at ISB.
However, 20 broadcasters across more than 50 countries did buy the rights, and
ISB wanted to deliver as much content as possible. “Our commitment was originally to produce The World Games Channel, which was a 24/7 channel that included some live events and some that were delayed, because it’s impossible to broadcast everything live, especially with the amount of sports and simultaneous activities that the Games have,” continues Pickering. ISB also produced two daily highlights programmes, lasting 30 minutes each, and delivered coverage of the Games around the clock to all rights holders.
“We had eight OB vans in order to do eight simultaneous streams and we had flypacks to cover the other sports,” Pickering adds. “In total considering all the sports and the amount of hours [of coverage] that we produced, we had more than 200 cameras, including hard cameras, handhelds, PoVs, and super slows.”
Originally, ISB planned to distribute the world feed to the international broadcasters via satellite. But, as discussions continued Pickering and his team noticed an increase in demand for additional content. So, in order to deliver the eight live video feeds to rights holders, ISB employed Haivision’s cloud-based video networking technology as well as a fleet of Haivision Makito X real-time video encoders, and the SRT video transport protocol.
Makito's low latency encoding capabilities make it perfect for a live sports production, especially where every millisecond counts. "You want to keep the latency as low as possible at the onset, because you're then sending it off to someone located in a different part of the world," states Mark Horchler, marketing director for products and solutions at Haivision.
“The international broadcasters have other equipment they are using for producing the content before it goes on air, so the lower latency the feeds are that you’re providing them, the better quality productions they’ll be able to create.”
“The picture quality of course is very important as was reliability,” he adds. “For an event like the World Games that is on air 24/7 for two weeks, it’s important to have an encoder that can be run reliably for long periods of time.”
Haivision and ISB have worked together before but when ISB was awarded the host broadcaster contract for the Games, discussions intensified around a solution that would be able to replace fibre and satellite. “They wanted their rights holders to be able to access streams over the internet, and that’s where SRT comes in,” explains Horchler. “Having access to those live video feeds from the OB vans at low latency is really critical because that allows the rights holders to use those feeds for live production.”
“We started talking to Haivision last year,” adds Pickering, “because everybody’s talking about remote production which has been expedited because of Covid. Nevertheless, I believe that many of the stories that we hear are not complete stories. Everyone says that remote production is the final solution because it’s simple, cheap, and it’s fast. But when you are really involved in this business, you know that not everything is shiny and perfect. Some major events have used remote production, but not completely on its own; you still need to have satellite and fibre or a mixture of all three. That’s why we looked at SRT and after reviewing a lot of brands and services we decided to go with Haivision.”
All of the broadcasters wanted as much content as ISB could deliver so that viewers could follow the success of athletes from their home countries. “That’s when we decided to go with the eight feeds as well as the additional content that all those broadcasters wanted to take,”
Pickering continues. “We decided the only way to do that in a feasible manner, with very good quality that didn’t go below our standards, and would fulfil the needs of the broadcasters, was to use the Makito encoders with SRT for 1080i video.”
However, that decision wasn’t straightforward. “The OTT platforms prefer progressive, but we still have a lot of traditional broadcasters who wanted the interlace. So we went with the interlace standard and that way we were able to deliver to both markets; the traditional and the digital,” he adds.
Of course, deploying new technology on a major sporting event is never easy, and certainly can’t be done without some rigorous testing. ISB and Haivision held numerous discussions to ensure that the technology would live up to the host broadcaster’s needs. “We were a little concerned in the sense that when you work with different broadcasters all around the world, each broadcaster can have a different brand of decoder, or they have different software or audio cards,” Pickering explains.
“In many cases, you need to have close communication with your rights holders, you need to provide as much information as possible and prepare documentation and drawings. I prefer to use a lot of drawings because that’s a universal language. When you deal with broadcasters from around the world, not everybody understands English, so images are the perfect language.”
Those tests revealed that even though vendors will often state that their technology will work with any other brand, that’s not always the case. For example, the ISB team discovered that some of the broadcasters were using decoders whose passwords wouldn’t allow special characters. “Luckily that was very simple to solve,” says Pickering, “but there were things that not even Haivision were aware of. So we learnt a lot, it was a good experience for everybody participating in the project.
“The most satisfying thing is that the story has a happy ending. Everybody received images with good quality, very low latency. We didn’t have any issue with any encoders for ten days and they were transmitting the channel 24/7, and the other eight feeds were an average of ten to 12 hours a day. So it really was a very successful operation,” he concludes. n
THE GREAT EQUALISER
Jay Tyler, director of worldwide sales, Wheatstone, explains why he believes broadcast is the original globalist, and what the studio of tomorrow has in store
How did you get started in the media tech industry?
I started at Wheatstone in the mid-90s as a sales person for our Audioarts division and am now director of worldwide sales. My time at Wheatstone is split between our manufacturing HQ in New Bern, North Carolina, and a good amount of time out in the field helping our customers design studios, visiting our distributors around the globe, training them on our new technology, and visiting their clients.
How has the industry changed since you started your career?
When I started at Wheatstone, everything was analogue and the console was the centre of everything and at that time, we put in centralised audio routers and sometimes an intercom system on top of that. This created a massive infrastructure of cabling and blocks with specialised engineers to take care of each piece. Now, mixing, routing and intercom are all integrated into one network, managed and maintained by a fraction of the people, sometimes even from other parts of the world.
What makes you passionate about working in the industry?
It's the people. I know everyone says that, but it's true. Some of the most innovative, creative and iconic individuals got their start in this industry and are a part of it still.
If you could change one thing about the media tech industry, what would it be?
Technology adoption has always been cautious, and for good reason. I certainly wouldn't change that about this industry because it has served us well in the past. But if I had to change anything, it would probably be the rate of tech adoption by broadcasters. There is every reason to believe that the faster we can adopt new enterprise tech, the sooner we derive the benefit from it.
How do we encourage young people that media technology is the career for them?
By giving them the tools of their generation. If you give a millennial or Gen Z an old analogue console that is wired into the wall, chances are they’re going to ask you where the touch screen is. You have to give them the right tools to do the job, and that means touch screens and tablets and iPhones and connectivity. This is an industry unlike any other that requires heavy human interaction with technology. We spend a lot of R&D on this very human element of technology because without that, the rest doesn’t happen.
Where do you think the industry will go next?
Broadcasting is about to become a lot more ubiquitous. As I mentioned earlier, broadcast functions now live in software and part of that is because we’re able to build upon enterprise technology such as IP and Linux. The great part of this is that Linux and IP are also the basis for cloud-based systems. We can consolidate operations and functions not only in the geographical sense, but also with regard to cloud-based systems whether said system is a giant server farm run by a large broadcast group in just one or two locations, or true cloud-based systems running well off-site and operated by third-party entities such as AWS.
What’s the biggest topic of discussion in your area of the industry?
Companies like Wheatstone are investing a great deal in the studio of tomorrow, specifically how to safely and effectively migrate the studios of today into more cloud- or server-based operations. The biggest question, and the answer isn't the same for everyone, is how to make that migration. Security and cost-effectiveness are part of that equation, because broadcast remains a trusted media that people rely on. We want to preserve that reputation into the next new opportunity for broadcasters.
How inclusive do you think the industry is, and how can we make it more inclusive?
If you're talking about inclusion in the real sense – of people who not only look different and live different lifestyles, but also think differently – you'll find a lot of that in this industry. Broadcasting, after all, was the great equaliser everywhere around the world long before we had social media and communication as we know it now. We are the original globalists with deep, deep roots in different geographies, cultures and ethnicities.
What should the industry be talking about that it isn’t at the moment?
We need to continue to talk about and develop new ways to increase the reach of broadcasting. That not only involves technology but also new ways of reaching the public through technology. It’s an exciting time to be in this industry. n