Online Video Jumps on the Big Data Bandwagon

This cross-connecting between “hyper-giants” instead of across the traditional Tier 1 interconnects is in itself a Big Data problem. Each of the CDNs and Top 10 sites uses its own database structures, including SQL-based relational databases and NoSQL-based document or graph databases, and none of the systems talk to one another. While there have been moves to integrate some data sharing between sites, it’s been primarily limited to single sign-on options where, for instance, a Facebook or Yahoo! Mail account can be used to sign in to smaller sites.

The one area of promise in all this is federated CDN models, which address both data sharing and content delivery across varied networks. These have been discussed at length, and a number of trials are underway; you can read more in the coverage of the Content Delivery Summit discussion.

INDEXING AND METADATA

While the issue of storing and delivering on-demand video files is fairly straightforward, the issue of metadata has grown more complex.

Consider, for instance, the issue with manifest files, especially the M3U8 version for HLS, which requires a manifest request for every set of on-the-fly segments retrieved from the server.

“With HLS you have to request a manifest at the same frequency as you request fragmented segments,” says Will Law, secretary of the MPEG-DASH Industry Forum (DASH-IF), during a recent teleconference, referring to the HLS manifest file, also known as an M3U8. “This requirement to very frequently request the M3U8 may cause a resource issue when we move to large-scale HTTP-based live streaming.”
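To put that overhead in concrete terms, here is a minimal sketch, in Python, of the request pattern Law describes: a live client re-fetches the M3U8 at roughly the target duration and then pulls any new segments it lists. The playlist URL and timing values are hypothetical.

```python
import time
from urllib.parse import urljoin
from urllib.request import urlopen

# Hypothetical live-stream playlist; a real player would get this from a master playlist.
PLAYLIST_URL = "https://example.com/live/stream.m3u8"
TARGET_DURATION = 6  # seconds; read from #EXT-X-TARGETDURATION in a real client

seen = set()
while True:
    # One playlist request per segment interval, on top of the segment requests themselves.
    playlist = urlopen(PLAYLIST_URL).read().decode("utf-8")
    segment_uris = [line for line in playlist.splitlines()
                    if line and not line.startswith("#")]

    for uri in segment_uris:
        if uri not in seen:
            seen.add(uri)
            urlopen(urljoin(PLAYLIST_URL, uri)).read()  # fetch the newly listed segment

    time.sleep(TARGET_DURATION)  # then poll the manifest again
```

For a single viewer the extra request is noise; multiplied across a large live audience, the manifest traffic alone becomes a meaningful share of server load.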

In addition, there’s the bigger Big Data question of content extraction. If we look at metadata as Big Data, it’s easy to see that indexing video on a frame-accurate basis could yield a sizable number of small data assets that need to be further indexed to generate a map-reduce query result.
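As a rough illustration of what that query step looks like, here is a minimal map-reduce-style sketch in Python; the frame-level record layout and the tags are invented for the example.

```python
from collections import defaultdict

# Hypothetical frame-level index entries: one small record per indexed frame.
frame_records = [
    {"video": "newscast_0612", "t": 12.4, "tags": ["anchor", "lower-third:economy"]},
    {"video": "newscast_0612", "t": 97.1, "tags": ["interview", "lower-third:economy"]},
    {"video": "newscast_0613", "t": 45.0, "tags": ["interview", "lower-third:weather"]},
]

# Map: emit (tag, (video, timestamp)) pairs from each record.
mapped = [(tag, (r["video"], r["t"])) for r in frame_records for tag in r["tags"]]

# Reduce: group occurrences by tag, yielding a map from tag to the frames that carry it.
reduced = defaultdict(list)
for tag, location in mapped:
    reduced[tag].append(location)

# A query such as "where does interview footage about the economy appear?" becomes an
# intersection over the reduced index rather than a scan of the video itself.
economy_interviews = set(reduced["interview"]) & set(reduced["lower-third:economy"])
print(sorted(economy_interviews))
```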

The efficient use of metadata to home in on key video assets represents the greatest Big Data challenge for streaming. If it can be overcome, however, these targeted sections of an overall video hold more value than the video as a whole.

This gestalt approach can probably best be explained in terms of the sound bites used in a newscast. A reporter and camera operator may interview the subject of a nightly news story at length, but the final newscast often includes only very small segments of the interview, interspersed with commentary. From NBC Nightly News to The Daily Show and other comedy shows, the sound bite is used to drive forward a particular telling of the story surrounding the interview.

Sometimes the subject of the story objects to the newscast’s telling of his or her story. If so, the classic public relations move is to release an unedited version of the interview, as representatives of corporations and of political figures as varied as Iranian president Mahmoud Ahmadinejad and U.S. vice presidential candidate Sarah Palin have done over the past few years.

The problem with this fire hose approach, though, is that it’s antithetical to public relations’ (PR) strong suit: crafting a story. Those of us who have practiced crisis management PR know the burden lies on us both to get the truth out and to make it easy for the public to find the message. Relying on the semi-concerned viewer to wade through an hour’s worth of raw interview to find the points of disconnect between the actual interview and the newscast’s 20 seconds of sound bites doesn’t cut it.

Applying metadata to the problem, though, would allow an enterprise to not just share the story of how the newscast was pieced together -- through elimination or reordering of key phrases -- but to also generate an augmented counter-story showing how a particular newscaster has used comparable tactics across a number of similar interviews in a particular industry or market vertical.

In other words, the Big Data metadata opportunity isn’t just one particular video stream but an analysis of hundreds of hours of content across thousands of video streams.

Big Data in Action

One company that’s addressing this indexing issue is Sonic Foundry. The company is best known for its Mediasite line of rich media recorders, but underneath it all lies a Big Data story.

Speaking with Sonic Foundry vice president Sean Brown at the 2013 InfoComm show, I found it apparent that the company is thinking beyond the streaming and content management aspects, focusing on retrieving added value for its customers from what might otherwise be stagnant assets.

The key to this, according to Brown, is combining the critical mass of education and enterprise content with the power of indexing. “Even just a few years ago, we didn’t have the critical mass of content within the enterprise to warrant deep-dive indexing,” says Brown. “Last year, we quietly added a feature that has been increasing the accuracy and depth of results: optical character recognition.”

There are two intriguing points about the addition of optical character recognition (OCR) to the Mediasite line of rich media recorders and content servers. First, this is a return to the past for the Mediasite product line. Long before Sonic Foundry bought Mediasite, back when Sonic Foundry was focused on audio and video production tools, the Mediasite team was working magic with all sorts of content indexing. I remember being fascinated by a 1999 demonstration by a co-panelist -- one of Mediasite’s founders -- where numerous bits of information about each video frame were extracted.

It doesn’t really come as a surprise, as Mediasite was a spinoff of Carnegie Mellon University with its world-class incubator and technology transfer facilities, but what’s interesting is just how long it’s taken to reach a critical mass of content where these indexing tools really begin to add value.

The second intriguing point about adding OCR to Mediasite centers on the implications for processing power requirements. With AVC/H.264 committed to silicon in everything from general-purpose processors (GPPs) to graphics processing units (GPUs), the need to devote the majority of a GPP’s or GPU’s processing cycles to encoding is waning. That extra processing horsepower can now be thrown at Big Data problems, such as extracting metadata from video-based assets.
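Sonic Foundry hasn’t published how Mediasite’s OCR pass works, but the general technique is easy to sketch: decode frames at a coarse interval, run OCR on each, and store any recognized text against its timestamp. The sketch below assumes OpenCV and the pytesseract bindings for Tesseract are installed; it is illustrative, not a description of Mediasite’s implementation.

```python
import cv2                 # OpenCV for frame decoding
import pytesseract         # Tesseract OCR bindings

def ocr_index(video_path, interval_s=5.0):
    """Return a list of (timestamp_seconds, recognized_text) entries for a video file."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = int(fps * interval_s)       # sample one frame every interval_s seconds
    index, frame_no = [], 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray).strip()
            if text:                   # keep only frames where slide or caption text was found
                index.append((frame_no / fps, text))
        frame_no += 1

    cap.release()
    return index

# Example: build a time-coded text index for a recorded lecture (hypothetical file name).
# print(ocr_index("lecture_2013_06_12.mp4"))
```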

“We see a need for four types of servers in the near term,” says Brown. “One for encoding, one for indexing, one for transcoding of legacy content, and one for the content management database.”

“These don’t need to all be physically separate servers,” he adds, “depending on the use case. But we see these four functions as critical for mission-critical use of video content.”

The Future of Big Data

The three areas mentioned in this article -- content management, delivery, and indexing for metadata extraction -- are only a few of the key Big Data challenges facing the streaming world. Yet without addressing these areas, Big Data may be more hype than substance when it comes to streaming.

We don’t yet have a trend toward NoSQL or document-based databases in the streaming world, although we’ve certainly got our share of “dirty data” opportunities. The biggest area of opportunity, given all the capabilities that exist for frame-accurate indexing, is auto-tagging sections of a video so that graph databases can be used to find correlations between types of content.
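As a toy illustration of that kind of correlation query, the sketch below links videos by the auto-generated tags they share; the tags are invented, and at scale this relationship would live in a graph database rather than in Python dictionaries.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical auto-generated tags per video.
video_tags = {
    "earnings_call_q2": {"CEO", "guidance", "revenue"},
    "analyst_brief_q2": {"guidance", "revenue", "chart"},
    "product_launch":   {"CEO", "demo", "chart"},
}

# Build an edge list: two videos are related if their tag sets overlap,
# weighted by how many tags they share.
edges = defaultdict(int)
for (a, tags_a), (b, tags_b) in combinations(video_tags.items(), 2):
    shared = tags_a & tags_b
    if shared:
        edges[(a, b)] = len(shared)

for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} <-> {b}: {weight} shared tags")
```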

Another challenge is the ability to retrieve content at a granular level. Brown alludes to it: the ability to extract so much information needs a complementary way to deliver just the portion of content that’s needed. With advances in on-the-fly segmentation, the next logical step on the Big Data path for content retrieval would be automating the segmentation of content not by 2-second bits but by bite-sized regions of pertinent content within a long-form video, as sketched below.
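A minimal sketch of that idea, assuming a frame-accurate tag index like the one above already exists, is a routine that merges tagged timestamps into contiguous clip ranges rather than fixed 2-second segments; the gap threshold and timestamps are arbitrary.

```python
def tagged_regions(timestamps, max_gap=3.0):
    """Merge frame timestamps (in seconds) carrying a tag into contiguous clip ranges."""
    regions = []
    for t in sorted(timestamps):
        if regions and t - regions[-1][1] <= max_gap:
            regions[-1][1] = t            # extend the current region
        else:
            regions.append([t, t])        # start a new region
    return [(start, end) for start, end in regions]

# Hypothetical: frames where OCR or speech-to-text matched "quarterly results".
hits = [12.0, 13.5, 14.0, 95.2, 96.0, 240.5]
print(tagged_regions(hits))   # -> [(12.0, 14.0), (95.2, 96.0), (240.5, 240.5)]
```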

This article appears in the August/September 2013 issue of Streaming Media magazine as "The Big Data Bandwagon."
