Four More Ways Streaming Quality Must Improve to Compete With TV
Viewers aren't seeing 50 shades of gray because streaming video color calibration is outdated. Here's what else needs to improve.
In last month’s Streams of Thought column, I wrote about three quality issues the industry needs to solve as quickly as possible: artifacting, AV synchronization, and buffering.
I ended with a cliffhanger, not unlike what happens when an online video suddenly stops midstream and retrieves the next few seconds of video. In many instances, this is registered in player logs as dropped frames, a topic we’ll cover below.
In other instances, the video simply stutters because the playback device can’t handle the complexity of the encoded content. No media player that I know of measures those instances, but we need a way to do so, as they’re the online video equivalent of “rain fade” on MPEG-2 video delivery from satellite TV. When a failure like that happens too often, it becomes a running joke among consumers, one with enough reach to discourage potential customers from trusting satellite’s ability to deliver quality service consistently.
The next pressing quality issue of 2015, if we’re going to get to TV-quality delivery, is color calibration. As part of the compression process, many codecs still use the Motion JPEG (M-JPEG) model for color space, which itself is based on JPEG imaging technologies from more than 20 years ago.
In JPEG and M-JPEG, nuanced colors and gradients are often replaced by a single color, derived by averaging the colors around a pixel. So not only do you not get 50 shades of gray, you don’t get the palette of blues or greens or reds needed to represent the “normal” colors we see in reality, or even the less-than-real television color space. Proper color calibration for online video would go a long way toward maintaining the perception of quality, once the previously mentioned issues are addressed.
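To see why averaging destroys gradients, consider a toy sketch of the effect. This is not any codec's actual algorithm, just an illustration of what happens when blocks of subtly different values collapse into one average value each:

```python
# Toy illustration: block-averaging a single color channel turns a smooth
# 16-step gray gradient into 4 flat bands (visible "banding" on screen).

def block_average(channel, block=4):
    """Replace each block of samples with that block's average value."""
    out = []
    for i in range(0, len(channel), block):
        chunk = channel[i:i + block]
        avg = round(sum(chunk) / len(chunk))
        out.extend([avg] * len(chunk))
    return out

gradient = list(range(100, 116))           # 16 subtly different gray levels
banded = block_average(gradient, block=4)

print(len(set(gradient)))  # 16 distinct shades going in
print(len(set(banded)))    # only 4 distinct shades coming out
```

The 12 intermediate shades are simply gone; no amount of downstream calibration can bring them back, which is why the color decisions made at compression time matter so much.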
Dropped frames might never be completely eradicated, but we’ve made progress in the past two years with adaptive bitrate (ABR) technologies toward eliminating the majority of them. In recent testing I performed via Transitions for a major online media company, it was difficult to tell when dropped frames occurred without using a logging tool.
This is probably because we’re seeing fewer instances of multiple frames dropping all at once and more instances of a single frame dropped here or there—although we still have buffering and stuttering, which logging systems don’t count as dropped frames if all the frames are eventually delivered. Oftentimes a single frame drop is masked by a consistent and fluid audio track, but the industry needs to work toward a day when no frame is left behind.
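The logging tools mentioned above generally infer drops from gaps in presentation timestamps. A minimal sketch of that idea, using hypothetical timestamps rather than any particular player's log format:

```python
# Sketch: infer dropped frames from gaps in presentation timestamps.
# A gap wider than ~1.5 frame intervals implies one or more missing frames.

def count_dropped(timestamps, fps=30.0):
    """Estimate dropped frames from a sorted list of timestamps (seconds)."""
    interval = 1.0 / fps
    dropped = 0
    for prev, cur in zip(timestamps, timestamps[1:]):
        gap = cur - prev
        if gap > 1.5 * interval:
            # A 2-interval gap means 1 frame missing, 3 intervals means 2, etc.
            dropped += round(gap / interval) - 1
    return dropped

# A 30 fps stream missing one frame near t=0.1 and two frames after t=0.2:
pts = [0.000, 0.033, 0.067, 0.133, 0.167, 0.200, 0.300]
print(count_dropped(pts))  # 3
```

Note what this approach cannot see: a stall where playback pauses and every frame is eventually delivered produces no timestamp gap in the decoded output, which is exactly why buffering and stuttering slip past drop counters.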
If you’ve ever had reason, or just the urge, to watch a live stream on a laptop or mobile device within earshot of a television, you’ve undoubtedly faced the frustration of latency. My oldest daughter recently told me she watched a live stream on YouTube of a presidential address to the nation, and was frustrated that the live stream lagged by almost 40 seconds. “What’s the point of watching breaking news on a computer?” she asked me.
Fair question, because 40 seconds could be the difference between life and death if you were watching a weather report about tornadoes in your vicinity. We celebrate the fact that we can rise to TV-scale audiences with ABR streaming, but we’re accepting a significant delay in exchange for reaching that wider audience. It’s not an acceptable trade-off and needs to be fixed. Now.
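That 40-second lag isn't mysterious; in segmented ABR delivery, most of it is the player buffering whole segments before it starts playing. A back-of-the-envelope estimate, with purely illustrative numbers for the encode and network stages:

```python
# Back-of-the-envelope glass-to-glass latency for segmented ABR delivery.
# encode_sec and network_sec are illustrative placeholders; real pipelines vary.

def abr_latency(segment_sec, buffered_segments, encode_sec=2.0, network_sec=2.0):
    """The player buffer dominates: it holds whole segments before playback."""
    return encode_sec + network_sec + segment_sec * buffered_segments

# Ten-second segments with a three-segment buffer, common in early HLS deployments:
print(abr_latency(10.0, 3))   # 34.0 seconds behind live
# Two-second segments with the same buffer depth:
print(abr_latency(2.0, 3))    # 10.0 seconds
```

The arithmetic points at the fix: shorter segments (or partial-segment delivery) shrink the buffer's contribution without giving up the reach that ABR provides.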
The final quality issue that needs to be addressed is resolution. It’s not enough to say that content is delivered in 720p or 1080p if we’re using compression to extrapolate every second or third pixel; the label alone tells you nothing about what actually reaches the screen.
I ran into this issue recently while testing an online video platform: I uploaded content into the system via a “slicer” tool, which segmented the content on the local desktop and then uploaded it to the cloud for viewing. When I played the content back, testing it on multiple networks and devices, it would never play higher than 720p at a certain bitrate, and it was noticeably softer and less crisp than the original content.
Upon querying the company’s representative, I was told the system didn’t change the content at all before uploading it, but when pressed, the representative mentioned that 1080p content is never played back at 1080p unless requested by the content owner. So much for the simplified pricing that the company touts on its website; if you choose to play the content at its intended resolution, there’s a different pricing scheme that’s not publicly available. Not an impressive introduction to this company’s seemingly user-friendly content management and video delivery system.
The seven quality topics we’ve covered over the last two Streams of Thought columns are key to bringing streaming media quality in line with television quality. The industry should pledge to take on each of these quality issues—en masse, if possible, or piecemeal, if necessary—by the middle of 2015. Only then can we really begin to blur the lines between over-the-top and over-the-air television viewing.
This article appears in the November/December 2014 issue of Streaming Media magazine as "The Spiderman Rule for Online Video, Continued."