
Buyers' Guide to Context-Aware Encoding 2019


One benefit of using cloud-based infrastructures for context-aware encoding is the ability to speed up the creation of rendition parameters by throwing more computational horsepower at a single video asset. Coupled with a microservice approach, cloud-based parallelization allows multiple computing instances to analyze a video for specific parameters.

A solution that uses parallel analysis will most likely aggregate those parameters into a unified rendition recipe for a given piece of content on a particular network and a particular device. Not all context-aware encoding offerings use this approach, however, so confirm it before signing up for any service.
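The fan-out-and-aggregate pattern described above can be sketched in a few lines. Everything here is illustrative: `analyze_scene`, the complexity heuristic, and the bitrate mapping are hypothetical stand-ins for whatever probe an actual service runs, and real aggregation logic would be far richer than taking a peak value.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_scene(scene):
    # Hypothetical per-scene probe: a real service would run an encoder
    # pass here and return measured complexity metrics for the scene.
    complexity = scene["motion"] * scene["detail"]
    # Map complexity to a suggested bitrate (kbps) for one rendition.
    return {"scene": scene["id"], "bitrate_kbps": int(1000 + 4000 * complexity)}

def build_rendition(scenes, workers=4):
    # Fan the scenes out to parallel workers, then aggregate the
    # per-scene results into a single rendition parameter: here, the
    # peak suggested bitrate, so every scene in the title is covered.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(analyze_scene, scenes))
    return max(r["bitrate_kbps"] for r in results)
```

The same structure extends naturally to the cross-episode case: feed scenes from several episodes into the same pool and aggregate with whatever weighting the service favors.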

The use of cloud-based infrastructures also offers a chance to spot-check these rendition recipes across multiple episodes, or even to gather parameter details from several episodes at once to use in weighted analysis of the best encoding parameters.

A More Optimized Approach

With parallelization emerging as an affordable and practical tool in the overall workflow needed to generate context-aware rendition recipes on a per-title basis, is there a step beyond this?

The answer is yes, and it relies on using massive parallelization to dig down into a video asset itself, analyze scenes in that asset, and then apply those parameters to similar scenes within the video asset as well as across multiple episodes of a given episodic season.

As we mentioned in the 2018 Sourcebook, innovators have tried using “multiple codecs within a single title, with the best codec chosen for each shot or series of shots, known as a scene” for a more optimized approach based on the best codec on a per-scene basis.

That’s one way to approach content. Another way to approach it was presented by Roger Pantos, an Apple engineer who is credited with inventing the Apple HTTP Live Streaming (HLS) IETF specification, in his presentation at Streaming Media West 2018. Pantos noted that it is possible to use two different codecs in a single HLS manifest file.
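A dual-codec HLS master playlist of the kind Pantos described might look roughly like the sketch below. The variant playlist names, bandwidth figures, and CODECS strings are illustrative examples, not values taken from his presentation.

```
#EXTM3U
#EXT-X-VERSION:6

# AVC (H.264) rendition
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080,CODECS="avc1.640028"
avc_1080p.m3u8

# HEVC (H.265) rendition of the same content at a lower bandwidth
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1920x1080,CODECS="hvc1.2.4.L123.B0"
hevc_1080p.m3u8
```

A player that supports both codecs can then pick the HEVC variant and save roughly the bandwidth difference between the two entries.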

Dror Gill, the CTO of Beamr, reiterated this point in a recent episode of the Video Insiders podcast when he mentioned that many set-top boxes are capable of playing AVC and HEVC.

To further reduce bandwidth for particular types of content encoded in both AVC and HEVC, the content provider could send a flag to specific set-top boxes that, when a box is presented with a manifest file containing both AVC and HEVC renditions, prompts it to play back the HEVC content and lower the bandwidth.

The beauty is that this is not a technology problem per se, but an implementation issue. And since it's an implementation issue, there is no reason an OTT provider could not offer both HEVC and AVC within the same manifest file and then allow the content provider or consumer device to choose particular AVC or HEVC renditions in a context-aware setting.

Measuring Down

Beyond the art of manifest manipulation is a more detailed awareness of quality as it relates to objective measurement tools.

We’ve spent a great deal of time in previous Sourcebook and Streaming Media magazine articles talking about quality, a topic that’s much too detailed to get into in this short article. Having said that, I’d like to leave potential buyers with a tip on quality measurements, specifically focused on how they relate to both the content itself and the context in which this content will be delivered: Look beyond peak signal-to-noise ratio (PSNR).

While the PSNR measurement approach prevailed for more than a decade, it is fatally flawed when compared with real-world test scores from the best-known quality control tool: the human visual system (HVS).

By late 2015, a number of additional tools had emerged that more closely mimicked HVS when measuring quality. Some of these build on PSNR, such as structural similarity (SSIM), while others leverage HVS with mean opinion scores (MOS).
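To make the contrast between these metrics concrete, here is a minimal sketch of PSNR and a global, single-window simplification of SSIM over flattened pixel arrays. Production SSIM tools apply the formula over local windows and average the results, so treat this only as a demonstration of the two formulas, not a drop-in quality tool.

```python
import math

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio between two equal-length pixel arrays.
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    # Global (single-window) simplification of SSIM; the constants c1
    # and c2 stabilize the division, per the standard formulation.
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    n = len(ref)
    mu_x, mu_y = sum(ref) / n, sum(test) / n
    var_x = sum((r - mu_x) ** 2 for r in ref) / n
    var_y = sum((t - mu_y) ** 2 for t in test) / n
    cov = sum((r - mu_x) * (t - mu_y) for r, t in zip(ref, test)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

PSNR looks only at raw pixel error, while SSIM weighs luminance, contrast, and structure, which is why SSIM tends to track human judgments more closely.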

A recent entrant with promise is Video Multimethod Assessment Fusion (VMAF), which Netflix championed along with researchers at several universities and then open-sourced for the wider video-quality-measurement community to assess and integrate into products and services.

The net result of these advances in video quality measurement is this: Experimentation with various bitrates, resolutions, sample sizes, and color depths can be automated to the degree that assessments from the HVS and the objective measurement align. This alignment allows the industry as a whole to experiment with lower data rates, effectively measuring down to the optimal data rate that maintains quality but offers a lower overall delivery cost for any given delivery context.

[This article appears in the March 2019 issue of Streaming Media Magazine as "Context-Aware Encoding."]
