
Open source software continues to drive the industry

Russell Trafford-Jones investigates how open source is a catalyst for technological advancement

Open source software is an invisible foundation of the modern world. Coined in 1998, the term refers to any software for which the source code is made available to anyone to use under licence. Many operating systems are open source and in the media space, open source programmes like FFmpeg, GStreamer and VLC are widely used by end users as well as under the hood of commercial products. These examples have wide applicability to many industries, but we are fortunate to have a wealth of broadcast-focused open source projects that, although very niche, are used by broadcasters, streaming providers and vendors alike. In a sea of paid-for software, why do companies release their code for free?

Eyevinn Technology is a Swedish independent video streaming consultancy that makes many of its test tools available on its open source repository on GitHub. “We believe that the greater benefit for the industry is to push new technology and make it easier to adopt,” explains Jonas Birmé, VP of research and development. “When we are solving a problem for a client, it’s often easier to have a dedicated test tool or software component. If we develop that, why should others have to reinvent the wheel?”

The company’s most popular open source software is the VOD2Live engine, which creates a linear channel from pre-encoded VoD assets. “From a proof of concept in 2018, we’ve seen very wide adoption in production,” adds Birmé, who points out another reason for using open source. “We’ve been approached to develop features for VOD2Live. Everyone’s welcome to contribute to the projects, but part of our business model is to accept paid development to extend a project. Afterwards, we contribute the new code back to the repository for everyone to use.” Eyevinn Technology’s software can be found at https://github.com/Eyevinn and includes a multiviewer, a proxy that introduces errors into HTTP streams to test players, a test ad server, and a transport stream generator, among many others.
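Conceptually, a VOD2Live-style engine maps wall-clock time onto a looping playlist of pre-encoded assets, so that every viewer joining the channel sees the same point in the schedule. A minimal sketch of that mapping — the asset names and durations here are invented for illustration, not how Eyevinn’s engine is actually implemented:

```python
# Minimal sketch of the scheduling idea behind a VOD2Live-style engine:
# map wall-clock time onto a looping playlist of pre-encoded VoD assets.
# Asset names and durations are invented for illustration.

def now_playing(playlist, channel_start, now):
    """Return (asset_name, offset_seconds) for the given wall-clock time.

    playlist      -- list of (asset_name, duration_seconds)
    channel_start -- epoch seconds when the linear channel began
    now           -- current epoch seconds
    """
    total = sum(duration for _, duration in playlist)
    elapsed = (now - channel_start) % total  # the playlist loops forever
    for name, duration in playlist:
        if elapsed < duration:
            return name, elapsed
        elapsed -= duration
    raise AssertionError("unreachable: elapsed always falls inside an asset")

playlist = [("ep1.mp4", 1200), ("ep2.mp4", 1800), ("ident.mp4", 30)]
asset, offset = now_playing(playlist, channel_start=0, now=1250)
# 1250 s in: ep1 (0-1200 s) has finished, so we are 50 s into ep2
```

A real engine would additionally stitch the corresponding HLS/DASH segments into a live manifest, but the time-to-asset mapping above is the core of the idea.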

MediaInfo is a well-known tool for reading detailed metadata from most media files and has been around for over 20 years. Widely used to get technical details about A/V files and built into asset management workflows, MediaInfo is an open source project from MediaArea. “We want customers to choose us because they trust us and value our skills,” states Jérôme Martinez, digital media analysis specialist at MediaArea. The company also maintains DVRescue, which repairs errors in transfers from DV tapes, the conformance engine MediaConch, and the BWF MetaEdit metadata editor, all of which were developed for companies who sponsored the development. Martinez continues: “With open source, we don’t hold the keys to the software so customers can use our skills or develop on their own. For a lot of open source developers, open source software is a goal, but for us it is a means.”
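MediaInfo can emit its report as machine-readable JSON (for example via `mediainfo --Output=JSON input.mp4`), which is what makes it easy to build into automated workflows. A sketch of pulling a few common fields out of such a report — the sample report below is a hand-written stand-in, and the field names follow MediaInfo’s JSON layout as commonly documented, so verify them against your installed version’s actual output:

```python
import json

# Hand-written stand-in for `mediainfo --Output=JSON input.mp4` output;
# the field names follow MediaInfo's JSON layout but should be verified
# against the output of your installed version.
SAMPLE_REPORT = """
{"media": {"track": [
  {"@type": "General", "Format": "MPEG-4", "Duration": "60.000"},
  {"@type": "Video", "Format": "AVC", "Width": "1920", "Height": "1080"},
  {"@type": "Audio", "Format": "AAC", "Channels": "2"}
]}}
"""

def summarise(report_json):
    """Index MediaInfo tracks by type and pull out a few common fields."""
    tracks = {t["@type"]: t for t in json.loads(report_json)["media"]["track"]}
    video = tracks.get("Video", {})
    return {
        "container": tracks.get("General", {}).get("Format"),
        "video_codec": video.get("Format"),
        "resolution": (video.get("Width"), video.get("Height")),
    }

info = summarise(SAMPLE_REPORT)
```

In an asset management workflow, a summary like this is what typically gets stored as searchable technical metadata alongside the file.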

The IP Showcase saw its debut at IBC 2016. Even while the SMPTE ST 2110 suite of standards was still being drafted, the showcase demonstrated both how uncompressed IP can work and that vendors could interoperate. The Video Services Forum (VSF) first undertook collaborative work describing how to deliver SDI as IP in specifications which later fed into the SMPTE standardisation process. ST 2110 is still being extended and has been the result of work from the global broadcast community. The European Broadcasting Union (EBU) was one such contributor and was taking a strong interest in the work even before 2016. Willem Vermost is the product owner for the EBU Live IP Software Toolkit, also known as LIST, available on the EBU’s open source GitHub repository. “Before ST 2110 was a document, we felt it only made sense that the move to IP should be hand-in-hand with a move to software,” he explains. “But we knew handling the burstiness of IP packets would be much harder without hardware.”

Since its humble beginnings as an Excel spreadsheet, LIST has become a live test and measurement tool for ST 2110 timing compliance and even has an online version which analyses uploaded packet captures. But Vermost explains the tool caused a problem. “It was important for the EBU to stay neutral in its efforts to contribute to ST 2110 and also in our role to bring people together.” Off the back of the first IP Showcase, a regular formal test event, called JT-NM, was devised to test vendor interoperability. “We used LIST as a tool to get the test and measurement companies together, and being open source was a critical part of the strategy. Once this was open source, we were no longer competing with any products and, importantly, we were then able to add definitions to the project,” he adds. As new software came on the market, each product labelled and calculated things differently. “The LIST project became a place to document the formulae and descriptions for common measurements. This collective knowledge will soon be included in the 2110 suite as SMPTE RP 2110-25.”
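The core of such timing measurement is simple in principle: from a packet capture, compute the inter-packet gaps per flow and compare them against the expected pacing. A toy sketch of that idea — the timestamps and burst threshold are invented, and real ST 2110-21 analysis uses formal leaky-bucket receiver models rather than a simple threshold:

```python
# Toy sketch of the idea behind packet-timing analysis: compute
# inter-packet gaps from capture timestamps and flag bursts.
# Timestamps and the burst threshold are invented for illustration;
# real ST 2110-21 analysis uses formal leaky-bucket models.

def gap_stats(timestamps_us, burst_threshold_us):
    """Return (min_gap, max_gap, burst_count) for sorted timestamps in µs."""
    gaps = [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
    bursts = sum(1 for g in gaps if g < burst_threshold_us)
    return min(gaps), max(gaps), bursts

# Evenly paced at ~10 µs, except one nearly back-to-back pair (a burst).
ts = [0, 10, 20, 21, 40, 50]
stats = gap_stats(ts, burst_threshold_us=5)
```

Documenting exactly how numbers like these are labelled and calculated is the kind of shared definition work Vermost describes the LIST project taking on.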

Part of the benefit of ST 2110 is the separate carriage of the essences needed for uncompressed live production. To control all these media flows, a group of specifications called Networked Media Open Specifications (NMOS) has been developed by the Advanced Media Workflow Association (AMWA), which describes NMOS as “open, industry developed, free of charge and available to everyone”. It’s no surprise, then, that the automated NMOS API Testing Tool is available on AMWA’s GitHub repository, but Peter Brightwell, lead R&D engineer at BBC Research & Development, explains that the reasoning is deeper.

“We made some of the BBC’s early NMOS implementations open source to seed community interest,” Brightwell starts. “For a control-plane API such as NMOS, interoperability testing is essential so we started holding regular interoperability workshops.” As more vendors came on board, the interoperability workshops grew. In fact, the latest interoperability workshop saw 17,000 tests on over 70 devices, which is only practical with automation.

Brightwell continues, “Developing the NMOS specifications came to require a large amount of regular manual testing. To remove this and to speed up our interoperability workshops, the automated API Testing Tool was born.” Easy to deploy and controlled through a locally generated web interface, the tool can quickly test implementations of the ever-growing suite of NMOS specifications such as IS-04 and IS-05. Brightwell concludes, “By open-sourcing the tool, vendors can see exactly how the tests work and, what’s more, seeing the code for an NMOS specification can make understanding and implementing NMOS in your own product much quicker.”
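At its simplest, this kind of automated API testing fetches a resource from a device and checks it against what the specification requires. A hand-rolled sketch in that spirit, checking a canned IS-04 Node `self` resource for a few required fields — the resource content is invented, and the real AMWA tool validates against the published JSON Schemas rather than an ad-hoc field list:

```python
# Sketch of the kind of check an automated NMOS test performs: fetch a
# resource (here a canned dict standing in for a GET of
# /x-nmos/node/v1.3/self) and verify that required fields are present.
# The real AMWA tool validates against the published JSON Schemas;
# this hand-rolled check is illustrative only.

REQUIRED_FIELDS = {"id", "version", "label", "href"}

def check_node_self(resource):
    """Return a list of problems found; an empty list means the check passed."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - resource.keys())]
    if "id" in resource and len(resource["id"]) != 36:
        problems.append("id does not look like a UUID")
    return problems

node_self = {  # invented example resource
    "id": "3b8c1a2e-0000-4000-8000-000000000000",
    "version": "1441700172:318426300",
    "label": "example-node",
    "href": "http://192.0.2.10:1080/",
}
problems = check_node_self(node_self)
```

Run against 70-plus devices, even simple checks like this multiply into the thousands of automated tests Brightwell describes.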

A common theme in media and broadcast has been collaboration within the industry using the principle that a rising tide floats all boats. From the standardisation of film that created SMPTE, to the resounding success of SDI and more recently low-latency streaming with CMAF, collaboration has been vital.

The projects discussed here show not only that open source can be part of a business model, but also that it’s a powerful tool to bring today’s best and brightest to the conversation, to build the future and to help broadcasters and vendors alike swiftly adopt new technology.


“For a lot of open source developers, open source software is a goal, but for us it is a means” Jérôme Martinez

HIGH-VOLUME CONTENT LOCALISATION

Geoff Stedman, CMO, SDVI, takes a look at how a cloud-based media supply chain enables agile resource management to deliver compelling, culturally relevant content

In the modern era of media consumption, content has appeal for audiences if it is relevant to them. No matter where it was created, or by whom, content that is culturally relevant can be consumed and appreciated, and can even generate an appetite for more. This is the power of content localisation when done well, and the resulting business opportunity is immense.

Media companies today have countless distribution outlets through which to deliver localised, culturally appropriate content to viewers around the globe (some large media companies have upwards of 600 delivery endpoints). The challenge lies in meeting this massive consumer demand with engaging content, and doing so quickly and cost-effectively.

OPPORTUNITY FOR EFFICIENCY

Content localisation work can include adding secondary language audio tracks, adding secondary language captions or subtitles, adding or modifying graphics or branding to suit the local market, and modifying video to meet local regulations regarding language, nudity, violence, and drug/alcohol use, among others. While some of these tasks can be tedious or routine, others demand artistry. Some can be automated, requiring human intervention only to bring accuracy and quality up to par after the majority of the work has already been completed. Others, such as dubbing, are intensely creative processes that can make all the difference in how well content is received.

Selecting the right talent for a particular dubbing project and managing those resources to deliver a compelling show or movie for the target audience and culture is a vital element of successful localisation. It’s also an example of the human creative processes that traditionally have been difficult to integrate with automated elements into a seamless media supply chain. This fragmentation has made localisation a costly and time-consuming process, even a barrier to global media distribution.

Cloud-enabled content localisation is changing all that, making it possible to apply automation and artificial intelligence (AI)- and machine learning (ML)-enabled tools to the challenge, and then to integrate machine activity with various human contributions to the process as content is moved through the media supply chain.

This approach facilitates smart, agile resource management across both processing infrastructure and the various personnel – whether technical staff or actors in a voice studio – that drive completion of a high-quality product. In addition to enabling more effective supply chain management, with much improved visibility and communications across systems and vendors, this model introduces dramatic gains in both flexibility and efficiency.

Media organisations thus can scale localisation processing capacity according to demand, free up more time for critical creative tasks that improve the quality of the end product (and the viewer experience), and accelerate delivery of culturally appropriate movies and TV shows to new and expanding markets worldwide.

CLOUD-ENABLED CONTENT LOCALISATION

Built on cloud infrastructure from a vendor such as AWS, Google, or Microsoft and managed with a cloud-native media supply chain management platform, a modern high-capacity content localisation supply chain leverages various third-party AI tools to perform content analysis that informs and accelerates manual content modification.

The localisation process begins with content receipt, either from an external content producer or from the archive. As media is submitted into the supply chain, the platform inspects the content and extracts technical metadata about the asset, storing it as searchable metadata in the system.

Through integration with order management, traffic/scheduling, or a business contracts system, the media supply chain management platform receives data and orchestrates placeholder creation, asset normalisation, asset validation, and proxy creation. Once programme titles and their requisite metadata, assets, and imagery are centrally organised, content is ready for localisation. (In fact, once assets are in the cloud and managed by the supply chain management platform, a wide range of media analysis engines can be triggered.)

Localisation job requests from the order management, traffic/ scheduling, or business contracts system typically specify factors such as languages needed, the outputs required (subtitles, new language audio tracks, graphics, etc), and the SLA (completion time) desired. Any other requirements specific to the target region and culture (and legal requirements too) are also part of the overall localisation project.
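A localisation job request of this kind is naturally expressed as structured data exchanged between systems. A hypothetical sketch of that shape — the field names are invented for illustration, not any real platform’s schema:

```python
from dataclasses import dataclass

# Hypothetical shape of a localisation job request; the field names are
# invented for illustration, not any real platform's schema.
@dataclass
class LocalisationJob:
    title_id: str
    languages: list            # e.g. ["fr-FR", "de-DE"]
    outputs: list              # e.g. ["subtitles", "audio", "graphics"]
    sla_hours: int             # completion time agreed for the job
    regional_notes: str = ""   # region-, culture- or legal-specific requirements

job = LocalisationJob(
    title_id="SHOW-0042",
    languages=["fr-FR", "de-DE"],
    outputs=["subtitles", "audio"],
    sla_hours=72,
)
```

Capturing the SLA and regional requirements explicitly in the request is what lets the platform schedule and route the work automatically.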

The management platform slots each request into the supply chain queue based on the SLA and provides job details and links to the appropriate content files for download (secured by user account). The authorised service provider, partner, or independent contractor accepts work orders as they come in, selecting the highest priority item first, and provides updates at key milestones or in the event of any roadblocks. For new audio tracks, the provider does the work of finding and recording the talent, providing status updates periodically, and then delivering the finished product. At that point, the management platform integrates the new localised version and continues to the next step defined for the supply chain.
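Slotting each request into the queue by SLA is essentially priority scheduling by deadline: the most urgent work order is always offered first. A minimal sketch using Python’s `heapq` — the job identifiers and deadlines are invented for illustration:

```python
import heapq

# Minimal sketch of SLA-driven queueing: the work order with the
# earliest deadline is always taken first. Job identifiers and
# deadlines are invented for illustration.
queue = []

def submit(deadline_hours, job_id):
    """Add a work order; smaller deadline means higher priority."""
    heapq.heappush(queue, (deadline_hours, job_id))

def take_next():
    """Pop the most urgent outstanding work order."""
    deadline, job_id = heapq.heappop(queue)
    return job_id

submit(72, "SHOW-0042/fr-FR")
submit(24, "FILM-0007/de-DE")   # tighter SLA, should be taken first
submit(48, "SHOW-0042/es-ES")
first = take_next()
```

A production platform layers milestones and status updates on top, but deadline-ordered selection is the scheduling core.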

To facilitate efficient manual content modification throughout the localisation process, the management platform leverages cloud-based automated video analysis and detection tools – Amazon Rekognition, Google Video Intelligence, Microsoft Azure Video Analyzer, and VideoAI from Comcast Technology Solutions – to gather information about the nature of content. Providing AI- and ML-powered analysis capabilities, these tools automatically perform tasks such as object, scene, and activity identification; text detection; and face detection and analysis. Metadata associated with any desired result (nudity, inappropriate language, alcohol, etc) is harmonised by the supply chain management platform, which then initiates a work order for manual review by a QC operator or compliance editor.
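Harmonising the results means mapping each tool’s labels and timestamps onto one common schema before handing them to a reviewer. A hypothetical sketch of that mapping — the vendor names, label strings and common categories are all invented, since each real analysis tool has its own taxonomy and output format:

```python
# Hypothetical sketch of harmonising detections from different analysis
# tools into one common schema. The vendor names, label strings and the
# mapping are invented; each real tool has its own taxonomy and format.

LABEL_MAP = {
    ("vendor_a", "Alcohol"): "alcohol",
    ("vendor_a", "ExplicitNudity"): "nudity",
    ("vendor_b", "drinking"): "alcohol",
}

def harmonise(vendor, detections):
    """Convert one vendor's detections into {category, start_s, end_s} records."""
    out = []
    for d in detections:
        category = LABEL_MAP.get((vendor, d["label"]))
        if category:  # drop labels we have no mapping for
            out.append({"category": category,
                        "start_s": d["start_s"], "end_s": d["end_s"]})
    return out

merged = (harmonise("vendor_a", [{"label": "Alcohol", "start_s": 12.0, "end_s": 15.5}])
          + harmonise("vendor_b", [{"label": "drinking", "start_s": 300.0, "end_s": 310.0}]))
merged.sort(key=lambda r: r["start_s"])  # time-ordered list for the review panel
```

The resulting time-ordered, vendor-neutral list is what can then drive a single review work order regardless of which tools produced the detections.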

Because it is time-based, this metadata guides the editors to scenes that might need moderation or modification. The supply chain management platform presents all detected items in a custom panel within Adobe Premiere Pro or Accurate.Video Edit, allowing editors to move quickly from scene to scene as needed to make necessary edits. (This guided process can cut the time it takes to create approved versions of localised content by 80 per cent.) Once again, when the work order is marked complete, the platform registers it as finished, updates the status of the asset, and moves the content on to the next step in the supply chain.

With this scalable model for content localisation, media organisations can be more effective and efficient in leveraging all of their resources toward creation of engaging, culturally appropriate content. Taking advantage of cloud-based media supply chains, they can employ the latest and greatest AI-driven processing tools and improve their efficiency in managing the human and creative elements of localisation. Moving away from the highly fragmented manual models of the past, media organisations can implement more agile workflows, with the flexibility to produce as many versions of content as different audiences and markets require.

THE CLOUD LOCALISATION BLUEPRINT

A great example of putting this all together is the Cloud Localisation Blueprint (CLB), unveiled at the IBC 2022 show as part of the IBC Accelerator Programme. It documents an end-to-end cloud localisation workflow and codifies implementation so that it can be deployed quickly and in a repeatable manner. The blueprint is intended to give media organisations more formal instruction and documentation as they make the shift to this new model. “The economic prize of technological transformation is potentially enormous,” it states, and “this blueprint intends to serve as a guide on how to get there.”

Developed by a core group of companies primarily to outline best practices and key considerations for building a cloud-based media supply chain for content localisation, the blueprint is presented as a model implementation. It provides a well-documented architecture and established integrations that can be adopted with confidence to build an ideal supply chain. But in keeping with the notion that the cloud brings agility and choice, the blueprint also gives media organisations the flexibility to work with the systems that best suit their preferences and requirements. Organisations can take advantage of automation when they need it, access service providers that connect to human talent, and manage all that within a system with end-to-end visibility they’ve never had before.

The CLB is a remarkable achievement and a valuable resource for any organisation seriously considering smart, scalable ways to move forward in meeting today’s massive demand for localised content. The document includes a hypothetical scenario with Warner Bros Discovery licensing content from Pokémon for international presentation in six countries on the HBO Max platform, and it’s a comprehensive exploration of all the processes involved in content localisation, from rights management through compliance. For organisations ready to take a deep dive into the subject, it’s worth reviewing. It’s certainly a strong example of the multivendor collaboration and platform integration that needs to happen to support media organisations in meeting the requirements of a high-volume content localisation workflow.

CREATING THE CONSUMER EXPERIENCE

While content is typically perfected for its target audience when it’s first created, a nicely localised piece of content can provide a compelling consumer experience for a whole new audience. Once people find that content, the quality of localisation contributes to retention: how willing viewers are to keep watching or go looking for more.

A cloud-enabled media supply chain gives media organisations the ability to manage all essential contributors to effective localisation – machine activity and human technicians and creatives – in an efficient manner, with visibility into each step of the workflow. Using the cloud to scale capacity and media supply chain management to ensure efficiency, organisations can respond quickly to worldwide demand for engaging entertainment experiences.
