Streaming Gets Smarter: Evaluating the Adaptive Streaming Technologies


At a high level, all adaptive bitrate streaming technologies work the same way. That is, you encode the source video file (or live event) to multiple resolutions and data rates and then send the player the first few seconds of video. As it plays, the video player monitors playback-related indicators such as file download time, buffer levels, and CPU utilization to determine if there are any connectivity or playback issues.

For example, if the video buffer isn’t filling at an adequate rate, or the video data takes too long to download, the player may run out of buffered data if the video continues at the current quality level, stalling playback. So the player requests a lower-bitrate stream for the next few seconds of video and continues to monitor playback status.
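The switching logic described above can be sketched as a simple heuristic. This is a minimal illustration, not any vendor's actual algorithm; the four-rung bitrate ladder and the buffer and throughput thresholds are assumptions chosen for the example.

```python
# Illustrative adaptive-bitrate selection heuristic.
# Ladder rungs and thresholds are hypothetical, not from any real player.

BITRATE_LADDER = [400, 800, 1500, 3000]  # available encodes, in Kbps

def choose_bitrate(current_kbps, throughput_kbps, buffer_seconds):
    """Pick the bitrate for the next chunk from measured conditions."""
    if buffer_seconds < 5 or throughput_kbps < current_kbps:
        # Buffer is draining or downloads are too slow: step down.
        lower = [b for b in BITRATE_LADDER if b < current_kbps]
        return lower[-1] if lower else BITRATE_LADDER[0]
    if buffer_seconds > 15 and throughput_kbps > current_kbps * 1.5:
        # Healthy buffer and bandwidth headroom: step up one rung.
        higher = [b for b in BITRATE_LADDER if b > current_kbps]
        return higher[0] if higher else BITRATE_LADDER[-1]
    return current_kbps  # conditions are adequate: hold steady
```

Real players weigh more signals (CPU load, dropped frames, throughput history), but the shape is the same: measure, compare against the current rung, and move down aggressively and up cautiously.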

Given the similarity in operation, which technology is best? As always, there is no one-size-fits-all answer, but to identify the technology best suited for you, you have to ask yourself several high-level questions.

Which Protocol Does It Use?
Move, Microsoft, and Apple all use HTTP (Hypertext Transfer Protocol) to transport their streams. This protocol has three theoretical advantages over the Real Time Messaging Protocol (RTMP) used by Adobe.

Figure 2
Figure 2. Adobe announced dynamic streaming in May 2008, but the only recent non-demo deployment was for the 2009 Cannes Film Festival.

The first relates to access to viewers behind corporate firewalls. Firewalls ordinarily allow HTTP traffic but may block packets transmitted via RTMP. This may matter for those trying to reach viewers at defense contractors and other security-conscious sites, but given the number of RTMP-driven Flash streams distributed over the internet—including videos from sites such as NBC, CBS, and FOX—RTMP doesn’t appear to be a barrier for general consumer-oriented streaming. To be clear, these sites currently don’t deploy dynamic streaming, but they do use the Flash Media Server (or the equivalent) to distribute streams via RTMP, which they obviously wouldn’t do if significant numbers of their target viewers experienced connection problems.

The second theoretical advantage of HTTP over RTMP relates to cache servers. Specifically, video streamed via HTTP can be stored on cache servers located within the networks of ISPs, corporations, and other organizations, while video streamed via RTMP cannot. These cache servers collect chunks of live or on-demand data that were requested multiple times and distribute them to multiple viewers, providing potentially higher quality of service (QOS) to local viewers and reducing the overall bandwidth required to serve those viewers.

For example, imagine the Obama inauguration being watched by 20 viewers on a single network behind a cache server. Ten viewers are watching a stream from a news site distributing video via HTTP, and 10 are watching a stream from a site using RTMP. Since the HTTP data is cacheable, the website distributing via HTTP would send a single stream to the first requester. Once delivered, the cache server would cache and then serve the other nine viewers the cached stream. In contrast, the website distributing via RTMP would have to send 10 separate streams, increasing the overall transfer bandwidth and potentially delivering a lower QOS.
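The arithmetic behind this scenario is straightforward. Assuming a hypothetical 1,000Kbps stream (the bitrate is illustrative), the origin-side bandwidth for the two groups of viewers works out as follows:

```python
stream_kbps = 1000  # assumed bitrate of the live stream (illustrative)
viewers = 10        # viewers per protocol behind the cache server

# HTTP: the cache server requests one copy from the origin,
# then serves the other viewers from its local copy.
http_origin_kbps = stream_kbps * 1

# RTMP: not cacheable, so the origin sends each viewer a separate stream.
rtmp_origin_kbps = stream_kbps * viewers

print(http_origin_kbps)  # 1000 Kbps leaving the origin
print(rtmp_origin_kbps)  # 10000 Kbps leaving the origin
```

The tenfold difference here scales with audience size, which is why cacheability matters most for very large live events.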

To understand how this theoretical benefit factors into the practical and economic reality of large-volume content distribution, I spoke with representatives from two major content delivery networks. Since these organizations support all streaming technologies and have to stay technically agnostic, I won’t name either the individuals or the companies, but I appreciate their taking the time to educate me on their view of the technology discussion.

Regarding cache servers, let’s examine the practicality of the benefit. First, understand that cache servers only store the most popular content, so if you’re a relatively small site distributing hundreds or even thousands of streams a day, you’ll probably see minimal benefit. If you’re a three-letter network, you may not be all that excited about local caching because cached data is inherently insecure. The CDN representatives that I spoke with said that premium video files are often set to "expire quickly," removing them from cache servers for security reasons but limiting the efficiency the servers can provide.

In addition, given how adaptive bitrate streaming is implemented, caching may not prove as beneficial as it is with other HTTP data or single-stream video files. For example, suppose the Los Angeles Lakers win the NBA championship and ESPN posts the first interview with Kobe Bryant. All through Southern California, Lakers fans tune in to watch, many using ISPs that deploy a cache server. In a single-video-file environment, the 15-minute interview would likely be cached in its entirety, and the bandwidth efficiency would be enormous. In an adaptive bitrate streaming environment, the file might be encoded into four separate streams at varying bandwidths, with each stream divided into 2-second chunks. That’s 1,800 chunks of data rather than a single file. How many of these will meet the popularity threshold for caching? Impossible to tell, but it’s likely to be far less than 100%. Clearly, from a caching perspective, adaptive bitrate streaming will be much less efficient in many environments than single-file streaming.
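The chunk count above follows directly from the example's assumptions (a 15-minute file, 2-second chunks, four bitrate rungs):

```python
interview_seconds = 15 * 60  # 15-minute interview
chunk_seconds = 2            # duration of each chunk
streams = 4                  # separate bitrate encodes

chunks_per_stream = interview_seconds // chunk_seconds  # 450 chunks
total_chunks = chunks_per_stream * streams              # 1,800 objects

print(total_chunks)  # 1800 cacheable objects instead of one file
```

Each of those 1,800 objects must independently cross the cache's popularity threshold, whereas the single file only had to cross it once.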
