
The Arrival of Live 3.0

The question is a common one: “Why is live video so complex, and why should it be any different from the video-on-demand (VOD) content readily available everywhere—including the many thousands of new VOD clips made available on YouTube every day?”

Simply put: The implications of on-time delivery of live events are significant. Publishing VOD content is not time-sensitive; if we are minutes or even days late publishing it, there is usually no business effect or viewer impact. With live events, however, timing is crucial: there are specific business implications, and viewers are waiting to tune in to an event for a time-sensitive result, like the winner of the Super Bowl or the NBA Finals. If those live events do not start on time, as advertised to millions of fans, content providers will lose a large majority of viewers, if not the entire audience, compromising the content holder’s or distributor’s ability to monetize the event via pay-per-view, sponsorship, or advertising. Availability of the content at a specific time is the key differentiator and the key risk in creating live content and making it available online. This critical element has huge implications for the process, or workflow, required to bring live content online to a connected audience.

VOD vs. Live Video Workflow

In its simplified form, creating VOD content for online distribution involves transforming some form of mezzanine asset, or high-quality source file, into one or more web-ready formats, a process known as transcoding. A number of different software tools, or transcoders, can perform this process. Once transcoded into one or more web-ready formats, VOD content is placed on the origin, or content origination point, and streamed to the viewer via a content delivery network (CDN). If for any reason the transcoding process fails, one can simply restart it; although it will take longer to make the content available online, no part of the content is lost. The quality of VOD content depends on the quality of the original source and on using the right transcoding profiles (e.g., the selection of bitrates, resolutions, and other transcode settings applied to achieve the best experience on targeted devices).
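
To make this concrete, here is a minimal sketch of a VOD transcoding step in Python, assuming ffmpeg is installed on the path; the ladder values and filenames are purely illustrative, not a recommendation:

```python
import subprocess

# One entry per target device class: (label, resolution, video bitrate).
PROFILES = [
    ("high", "1920x1080", "5000k"),
    ("mid",  "1280x720",  "2500k"),
    ("low",  "640x360",   "800k"),
]

def transcode(mezzanine: str) -> None:
    """Transcode a high-quality source file into web-ready H.264/AAC renditions."""
    for label, resolution, bitrate in PROFILES:
        subprocess.run(
            ["ffmpeg", "-y", "-i", mezzanine,
             "-c:v", "libx264", "-b:v", bitrate, "-s", resolution,
             "-c:a", "aac", "-b:a", "128k",
             f"vod_{label}.mp4"],
            check=True,  # if this fails, we can simply restart it later
        )

transcode("mezzanine_source.mov")
```

Because the source file persists, a failed run can be restarted at leisure; that forgiving property is exactly what live workflows lack.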

Creating live event content for online distribution, also in its simplified form, starts with content acquisition (typically through satellite downlink and decode, video fiber decode, or IP multicast) and encoding this live source into one or more web-ready formats. The encoded content is then published to one or more publishing locations in real time so it can be streamed live to IP-connected devices. On the surface this process appears straightforward, but when you factor in the availability of live content at a specified start time, it quickly changes from a simple linear process to a complex, parallel, time-sensitive process that leaves no room for error of any kind. Complexities quickly arise from a number of factors, each of which a live workflow must detect before and during the event (see the sketch after the list):

  1. Inability to acquire live source, such as inaccessibility due to location
  2. Changes in live source, such as receiving a live source at 480p when it was expected to be 1080i
  3. Transmission issues, such as loss of signal or degradation in signal quality
  4. Various encoding issues, such as dropped frames or out-of-sync audio
  5. Publishing issues, such as inability to publish or interruptions in publishing
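
As a rough illustration, a pre-flight check covering these five failure classes might look like the following Python sketch; the status fields are hypothetical stand-ins for real probes into the acquisition, encoding, and publishing tiers:

```python
from dataclasses import dataclass

@dataclass
class SourceStatus:
    reachable: bool      # 1. can we acquire the live source at all?
    resolution: str      # 2. does the signal match what was scheduled?
    signal_ok: bool      # 3. transmission: signal loss or degradation
    dropped_frames: int  # 4. encoder health
    publish_ok: bool     # 5. are all publishing points accepting data?

def preflight(status: SourceStatus, expected_resolution: str) -> list[str]:
    """Return the list of problems that would break a live start time."""
    problems = []
    if not status.reachable:
        problems.append("cannot acquire live source")
    if status.resolution != expected_resolution:
        problems.append(f"source is {status.resolution}, expected {expected_resolution}")
    if not status.signal_ok:
        problems.append("signal loss or degradation")
    if status.dropped_frames > 0:
        problems.append(f"encoder dropping frames ({status.dropped_frames})")
    if not status.publish_ok:
        problems.append("publishing interrupted")
    return problems
```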

When creating VOD content, we can simply adjust the process and restart it without affecting the video experience, but in a live event scenario any error or interruption manifests itself immediately in the form of a poor user experience. That can be anything from a live event not being available at the specified time to it not being available at all; it can also mean frequent buffering, freezing, out-of-sync audio, or macroblocking. A failure in any of these parallel steps in the live video workflow—by itself or in combination with others—can affect the live event video at any time.

How do we improve this process to ensure a consistently great live video viewing experience? What are the improvements we’re going to see over the next few years in the live video workflow? Before we look forward, let’s take a look at what we’ve already seen.

Live 1.0

In the early days of live video streaming, dominated by RealNetworks RealVideo, Microsoft Windows Media, and Adobe Flash, the digital media industry and its audience became fairly comfortable with the software and services used to create live event video, as well as with the quality of the video being played back, mostly on PCs and Macs. Improving video quality meant increasing video resolution and bitrate, and such increases closely followed the adoption of broadband and rising broadband connection speeds. Most of the time, the audience was presented with a bitrate selection—usually three bitrates, sometimes identified as low, mid, and high—and in general, audience expectations were low. Apart from the sporadic streaming of some sports, media, and entertainment events, streaming live mainly meant streaming corporate events and niche content. The video experience was often degraded by buffering, dropped frames, and out-of-sync audio; while the live online experience was new and exciting, the viewing quality fell significantly short of the traditional broadcast quality viewers were used to.

Live 2.0

Live 2.0 was ushered in with the advent of adaptive bitrate (ABR) streaming, much of it driven by Move Networks. ABR streaming uses client-side intelligence to constantly measure the quality of the client’s internet connection and its available CPU, automatically adjusting video quality to match. All of a sudden, the possibilities seemed limitless for audiences and content providers. Users were no longer presented with confusing bitrate selections. Buffering, which had been a frequent issue on low-bandwidth connections, was significantly reduced, resulting in the delivery of high-quality video across a broad set of IP-connected devices. With these improvements in technology and bandwidth speeds, the world saw the first true 720p HD video on the web. Major media brands including Disney, ABC, Fox, and ESPN quickly took notice of this technology and made some of their top premium content available online for the first time.
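
The core client-side idea can be sketched in a few lines of Python; the ladder values and the 0.8 safety factor are illustrative, not drawn from any particular player:

```python
# The bitrate ladder the player can switch between, highest first (bits/s).
LADDER = [(5_000_000, "1080p"), (2_500_000, "720p"), (800_000, "360p")]

def pick_rendition(measured_bps: float, headroom: float = 0.8) -> str:
    """Choose the highest rendition that fits the measured throughput.

    headroom leaves a safety margin so a momentary dip does not stall the
    buffer; real players also weigh buffer level and CPU availability.
    """
    budget = measured_bps * headroom
    for bitrate, name in LADDER:
        if bitrate <= budget:
            return name
    return LADDER[-1][1]  # below the lowest rung, take the lowest anyway

print(pick_rendition(3_200_000))  # prints "720p"
```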

Adaptive streaming in its various flavors, including Apple’s HTTP Live Streaming (also known as HLS), Microsoft’s IIS Smooth Streaming, and Adobe’s HTTP Dynamic Streaming (also known as HDS), furthered the ability to seamlessly stream premium content not only to the web but to mobile devices as well. Instead of viewing on just PCs or Macs, audiences now streamed live video on tablets, game consoles, and various OTT devices such as Roku, Boxee, Apple TV, and Google TV.
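
In HLS, for example, the available renditions are advertised to the player through a master playlist. Here is a minimal Python sketch of generating one; the rendition URIs are hypothetical, while the #EXT-X-STREAM-INF tag is standard HLS syntax:

```python
def master_playlist(renditions: list[tuple[int, str, str]]) -> str:
    """Build an HLS master playlist advertising each rendition to the player."""
    lines = ["#EXTM3U"]
    for bandwidth, resolution, uri in renditions:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

print(master_playlist([
    (5_000_000, "1920x1080", "1080p/index.m3u8"),
    (2_500_000, "1280x720", "720p/index.m3u8"),
    (800_000, "640x360", "360p/index.m3u8"),
]))
```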

However, these technology evolutions came at a cost. Adaptive streaming introduced additional complexity in how content is encoded, published, and protected, as well as in how the overall workflow is managed and monitored. This in turn made monitoring and scaling much more difficult, and the costs associated with the process went up significantly, creating barriers to entry and making ROI difficult to achieve.

The Arrival of Live 3.0

All of this brings us to the phase we are currently entering: the third phase in live video evolution, or Live 3.0. The leap from Live 1.0 to 2.0 was brought about by adaptive streaming, with a focus on improving the end viewing experience. The leap from Live 2.0 to 3.0 has been sparked by advances in the live video workflow, moving from an expensive hardware- and SDI (serial digital interface)-based infrastructure to a process that is IP-enabled from end to end. The focus is now on improving the process in four key areas:

  1. Solving for the complexity of multiple screens, formats, and platforms
  2. Substantially lowering the cost of bringing massive amounts of live video to connected audiences, and changing how acquired video (the video taken from the live event as it is being created) is routed to the encoders or video workflow pipeline where it will be processed for web-enabled distribution
  3. Enabling elastic scalability: the flexibility to scale up to digitize massive amounts of live video, and to scale down when those resources are not needed
  4. Enabling real-time monitoring of the entire video workflow pipeline, with automated, intelligent decisions made in real time, instead of operators separately accessing the disparate parts of the pipeline, extrapolating and analyzing data, and then deciding what to do (a minimal sketch of this idea follows the list)
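
As a rough illustration of the fourth point, the sketch below polls every stage of a pipeline and reacts on its own; the probe function and restart policy are hypothetical placeholders for real pipeline APIs:

```python
import time

def probe(stage: str) -> bool:
    """Placeholder health check; a real system would query each tier's API."""
    return True

RESTARTS = {"acquire": 0, "encode": 0, "publish": 0}

def watch(interval_s: float = 1.0, max_restarts: int = 3) -> None:
    """Poll every stage and act on failures automatically, rather than
    waiting for an operator to inspect each part of the pipeline."""
    while True:
        for stage in RESTARTS:
            if not probe(stage):
                if RESTARTS[stage] < max_restarts:
                    RESTARTS[stage] += 1
                    print(f"restarting {stage} (attempt {RESTARTS[stage]})")
                else:
                    print(f"{stage} failing repeatedly: failing over, paging operator")
        time.sleep(interval_s)
```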

How will this be achieved? Advances in decoders and satellite receivers have led to the ability to output video in IP format, opening the door to IP acquisition and routing. Essentially, the heavy SDI cabling and routing infrastructure that was needed to connect receivers to routers to encoders is no longer necessary. Instead, an IP infrastructure allows the video to be acquired by the IP-based video workflow pipeline directly from an IP source, removing the need for the dedicated cabling and equipment that made up the true cost of video acquisition and media processing.
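
At its simplest, IP acquisition can mean joining a multicast group and reading transport-stream packets off the network, as in this Python sketch; the group address and port are hypothetical, and a production system would use a contribution protocol with error recovery rather than bare UDP:

```python
import socket
import struct

# Hypothetical multicast group and port; a receiver or decoder would be
# configured to emit an MPEG-TS stream over UDP to this address.
GROUP, PORT = "239.1.1.1", 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Join the multicast group on all interfaces.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, _ = sock.recvfrom(1500)  # one MTU-sized UDP datagram
    # hand the transport-stream bytes to the encoding pipeline here
```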

The Live 3.0 workflow will provide scheduling capabilities so that both broadcast-quality and externally encoded content can be used as an input source, encoded in the desired number of bitrates, and muxed (packaged) into a predefined number of formats. The content can then be protected and published to as many publishing locations as needed, and monitored in real time with automatic error correction. Operators will simply create workflows in a matter of minutes, use them for the duration of the event, and shut them down when no longer needed, releasing any and all resources used. Not only will the quality match and perhaps even exceed broadcast TV quality, but the reliability will too. Today, 99.99% live streaming video workflow uptime (roughly 52 minutes of downtime per year) could be considered unrealistic, but in years to come, even 99.9999% (about 31 seconds per year) will be feasible.
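
One way to picture that spin-up and tear-down pattern is as a scoped resource, as in this Python sketch; every name in it is hypothetical, standing in for calls into an elastic workflow service:

```python
from contextlib import contextmanager

@contextmanager
def live_workflow(event: str, bitrates: list[str], formats: list[str]):
    """Provision an ephemeral live workflow for one event, then release
    every resource when the event ends, whatever happens in between."""
    print(f"provisioning {event}: encode at {bitrates}, package as {formats}")
    try:
        yield
    finally:
        print(f"tearing down {event} and releasing all resources")

# Create the workflow minutes before air, stream for the event's duration,
# and shut it down afterward.
with live_workflow("championship-final", ["5000k", "2500k", "800k"], ["HLS", "HDS"]):
    pass  # acquisition, encoding, protection, publishing, monitoring run here
```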

The Road Ahead

In addition to a sophisticated workflow that increases scalability while decreasing costs, the constantly changing world of live and on-demand streaming will need to evolve once again, as major innovations are on the horizon, including true glasses-free 3D and holographic video. We believe that Live 3.x will evolve to address these innovations, effectively becoming a “living platform” that enables third parties to seamlessly and securely connect via APIs from any device and any platform to build and run their own applications. In the process, it will tear down the walls that currently prevent many from distributing their live events and live linear content online, lighting a spark of innovation in ways hitherto unseen—forever changing the world of digital media.
