
The State of QoS 2017

2016 proved to be the year in which over-the-top (OTT) vendors began acknowledging what OTT consumers have been saying all along: Quality is key to gaining and retaining customers, from subscribers to content owners.

A case in point was the very public feedback after the launch of DirecTV Now.

AT&T’s play at offering a bundle of cable and broadcast TV channels as an OTT service sounded great on paper, especially when AT&T offered discounted introductory pricing and threw in a new Apple TV so consumers could start watching any of the dozens of live-linear OTT channels to which they subscribed. But the experience, judging by hundreds of complaints posted on AT&T, DirecTV, and other user forums, was anything but great.

According to Enrique Rodriguez, AT&T Entertainment’s CTO, the acquisition of Quickplay earlier in 2016 was strategic both for owning the OTT delivery technology that rides on AT&T’s infrastructure and for allowing the rollout to meet expectations.

“Absolutely there were problems,” said Rodriguez, in an interview with FierceCable at the 2017 CES show in January. “The problems were not as big as I expected. I’m so proud of the quality we delivered.”

Subscribers seem unimpressed, even a month on from CES, and competitors such as Sling TV are using this discontent to lure disaffected DirecTV Now subscribers toward Sling’s more mature live-linear OTT bundle.

Why QoS?

There’s no argument in the streaming industry that a properly architected QoS approach should yield a quality experience. In fact, it’s one of the industry’s primary mantras.

There are many ways to improve a delivery network’s QoS, which, in turn, benefits an end user’s quality of experience (QoE).

Most of these approaches are outside the control of the content owner, and many are also outside the reach of the content publishing solution (the last mile and multiple devices on a home network, for example). A few, though, can be directly influenced by the choice of protocols, codecs, and even segment or chunking sizes.
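To see why segment size matters so much, consider a rough back-of-the-envelope model: a player generally cannot begin playback until the encoder has produced a full segment and the player’s startup buffer is filled, so total delay scales with segment duration. The sketch below, in TypeScript with illustrative numbers that are assumptions rather than measurements, shows how shrinking segments shrinks glass-to-glass latency.

```typescript
// Back-of-the-envelope latency model for segmented HTTP streaming
// (HLS/DASH). All numbers below are illustrative assumptions.

interface LatencyInputs {
  segmentSeconds: number;       // duration of each media segment
  bufferedSegments: number;     // segments the player buffers before starting
  encodePackageSeconds: number; // encoder and packager overhead
  networkSeconds: number;       // CDN propagation and request overhead
}

// A player typically cannot start until a full segment exists and its
// startup buffer is filled, so segment duration dominates the total.
function estimateGlassToGlass(i: LatencyInputs): number {
  return i.encodePackageSeconds
    + i.segmentSeconds * i.bufferedSegments
    + i.networkSeconds;
}

// Classic 10-second segments with a 3-segment startup buffer:
console.log(estimateGlassToGlass({
  segmentSeconds: 10, bufferedSegments: 3,
  encodePackageSeconds: 4, networkSeconds: 2,
})); // 36 seconds

// The same pipeline with 2-second segments:
console.log(estimateGlassToGlass({
  segmentSeconds: 2, bufferedSegments: 3,
  encodePackageSeconds: 4, networkSeconds: 2,
})); // 12 seconds
```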

We covered some of those choices in a recent Streaming Media article called “Latency Sucks!” which looked at both smaller segment sizes and newer interactive video protocols such as WebRTC. The article offers a great primer on how latency affects overall QoE for the end user.

Beginning at the End

OK, now that we’ve determined just how broad a reach QoS can have, with complex networking and technology challenges impacting an end user’s QoE, what happened in 2016 that changed the QoS conversation?

One significant growth area in QoS implementations revolves around understanding consumer consumption through the use of measurement tools.

In both kindergarten and Alice in Wonderland, you were told to start at the beginning and persevere until you reach the end. But in the world of QoE, it turns out you need to start at the end to properly track down problems that might occur once content is served up from your media server or online video platform (OVP) of choice.

In other words, as Jiddu Krishnamurti observed in The Only Revolution, “The ending is the beginning, and the beginning is the first step, and the first step is the only step.”

Let’s start at the furthest end-user point of consumption: the device on which content that you’ve streamed out to viewers is ultimately played.

Measurement and Analysis With Practical QoS Impact

Whether you’re a content owner, a content publisher, or a network engineer responsible for the infrastructure used to deliver streaming content to end users, the advent of real-time analytics, based on data gathered from real users, is a step in the right direction that gained traction in 2016.

To understand the world of analytics—especially real-time analytics, which can be used to turn a nominally good end-user experience into a quality user experience—we need to first understand the way that the industry approaches end-user measurements.

The primary approach to measuring end-user QoE is known as a real-user measurement, or RUM for short.
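As a rough illustration of what RUM means in practice, the sketch below instruments a browser video element to capture startup time and rebuffering, then reports a summary when the viewer leaves. The event names are standard HTMLMediaElement events; the beacon endpoint and payload fields are hypothetical, and commercial RUM tools capture far more than this.

```typescript
// Minimal RUM sketch for a browser HTML5 <video> element. The beacon
// endpoint and payload shape are assumptions for illustration only.

function instrument(video: HTMLVideoElement, endpoint: string): void {
  const t0 = performance.now();
  let startupMs = 0;   // time to first frame
  let stallCount = 0;  // number of rebuffering events
  let stallStart = 0;
  let stallMs = 0;     // total time spent rebuffering

  video.addEventListener("playing", () => {
    if (startupMs === 0) startupMs = performance.now() - t0;
    if (stallStart > 0) {
      stallMs += performance.now() - stallStart; // rebuffer ended
      stallStart = 0;
    }
  });

  video.addEventListener("waiting", () => { // playback stalled: rebuffering
    stallCount += 1;
    stallStart = performance.now();
  });

  // Flush a session summary on exit; sendBeacon survives page unload.
  window.addEventListener("pagehide", () => {
    navigator.sendBeacon(endpoint, JSON.stringify({
      startupMs, stallCount, stallMs,
      playedSeconds: video.currentTime,
    }));
  });
}

// instrument(document.querySelector("video")!, "https://rum.example.com/beacon");
```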

A number of companies in the industry provide RUM tools. Streaming Media’s Dan Rayburn provided an overview of several of them in a September 2016 blog post, which included a QoE checklist.

Another area to focus on is the dashboard itself. After all, it’s hard to take action with just data. NeuLion, for example, has rolled out an OTT dashboard, which combines QoS monitoring—for both live and VOD content—with a heat map of geographic clusters of viewing and device types. NeuLion says its dashboard updates “every 30 seconds with views broken down by device, bitrates, location” as a way to ensure that content is delivered properly to areas identified in the heat map.
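For a sense of the kind of 30-second rollup such a dashboard implies, here is a minimal sketch that buckets active-view beacons by location and device over the latest window. The field names are assumptions for illustration, not NeuLion’s actual schema.

```typescript
// Toy dashboard rollup: count active views per location/device bucket.
// Field names are hypothetical, not any vendor's schema.

interface ViewBeacon { device: string; location: string; bitrateKbps: number; }

function rollup(beacons: ViewBeacon[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const b of beacons) {
    const key = `${b.location}/${b.device}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

// Re-run over the latest window to refresh a heat map every 30 seconds:
// setInterval(() => render(rollup(latestWindow())), 30_000);
```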

The team at Nice People at Work (NPAW) uses its Youbora big data analytics platform to optimize a multi-CDN approach for broadcasters. NPAW claims its solutions provide “real-time information on the delivered video experience, with granular data specific to individual end users,” meaning it measures at both the CDN and end-customer points of the delivery pipeline.

NPAW and Ooyala, an OVP with its own analytics package, integrated their solutions as a way to use analytics and metrics for both OVP/CDN load balancing as well as Ooyala’s audience engagement tools. This trend will probably continue into 2017, with OVPs either creating their own real-time analytics and load balancing or partnering with one of the QoS companies to help adjust traffic-shaping requirements in real time.
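As a toy illustration of analytics-driven CDN selection, the sketch below scores candidate CDNs on recent RUM aggregates and picks the best one for new sessions. The metrics, weights, and CDN names are all assumptions; production balancers, including those discussed here, also weigh commercial commitments and geography.

```typescript
// Toy multi-CDN switch driven by recent RUM aggregates. Weights and
// names are illustrative assumptions, not any vendor's algorithm.

interface CdnStats {
  name: string;
  rebufferRatio: number;     // fraction of watch time spent stalled (0..1)
  avgThroughputMbps: number;
  errorRate: number;         // failed segment requests (0..1)
}

function pickCdn(candidates: CdnStats[]): string {
  // Penalize rebuffering and errors heavily; reward throughput mildly.
  const score = (c: CdnStats): number =>
    c.avgThroughputMbps - 1000 * c.rebufferRatio - 2000 * c.errorRate;
  return candidates.reduce((best, c) => (score(c) > score(best) ? c : best)).name;
}

console.log(pickCdn([
  { name: "cdn-a", rebufferRatio: 0.02,  avgThroughputMbps: 25, errorRate: 0.001 },
  { name: "cdn-b", rebufferRatio: 0.005, avgThroughputMbps: 18, errorRate: 0 },
])); // "cdn-b": slightly slower, but far fewer stalls and errors
```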

In early 2016, a lawsuit filed by Conviva against NPAW accused the latter of infringing on multiple U.S. patents held by Conviva. “Conviva has spent a decade investing in and developing award-winning video monitoring and optimization solutions, and holds several foundational patents protecting its technology,” said Conviva CEO Hui Zhang in March 2016. “We believe innovation and diversity provides for a stronger ecosystem. However, we cannot stand by and watch Conviva’s IP be intentionally and flagrantly exploited, and we will defend our intellectual property rights through every means available to us.”

For reasons still unknown, the lawsuit was dropped in September 2016, two days after the appointed judge heard oral arguments in the case, which centered on three patents: 8,874,725; 9,100,288; and 9,246,965.

Are Real-Time Analytics Really Real Time?

We’ve all acknowledged the benefits analytics can bring to building out infrastructure in organically grown video on demand (VOD) markets. But what about the real-time impact during a live-streaming event, beyond the post-game analysis that’s beneficial for beefing up overall infrastructure?

Some companies, such as Cedexis, center solutions on multi-CDN strategies for delivering video in real time. The aptly named Cedexis Buffer Killer can be used, according to the company, with “a combination of CDNs with cloud origins, as a strategy of choice to improve end user experience while controlling costs.”

For Conviva, with its Precision performance mapping, QoS also implies meeting not just end-user QoE expectations but also the various traffic and dollar commitments a content publisher may have to multiple CDNs.

One of the key differentiators between these analytics offerings is the number of data points, but pay attention to the type of data points each company specializes in. Some may focus on generic HTTP delivery, while others offer granularity for specific streaming protocols.

In the same way, just having the highest number of data measurement points doesn’t mean that actionable QoS can be implemented. Ask companies to demonstrate how their services handle a variety of network environments using the example of a similarly sized company in your specific industry.

So how quickly can decisions be made based on RUM data?

Cedexis says the sweet spot for the decision-making process is 7–15 seconds. This timeframe, well within the bounds of a typical live stream (which is often delayed 15–30 seconds in reaching the end user’s device), includes the time to gather RUM data from hundreds of thousands, or even millions, of end-user devices.

This is key, since the business logic needs to be robust enough to handle not just the streaming video issues that an enterprise might face but also industry-specific rules to safeguard an organization within regulatory frameworks.

“Data goes from being sent from the user to being fully ingested and guiding decisions in 7 to 15 seconds,” said Simon Jones, streaming evangelist at Cedexis, whose Openmix product is a global server load balancer (GSLB) that provides decisioning capabilities.

Jones says that the Cedexis Live service can heat-map even “very small outages,” which are then reflected in the Openmix decisioning engine in near-real time. Decisioning also has a predictive element, according to Jones, so problems are recognized and become actionable very quickly.
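To make the predictive idea concrete, here is a deliberately naive sketch: smooth per-window error rates with an exponentially weighted moving average (EWMA) and drain traffic when the latest window spikes well above the trend. Cedexis does not disclose its internals; this only illustrates the general idea of acting before a hard outage, with made-up numbers.

```typescript
// Naive "predictive" traffic management sketch: EWMA smoothing over
// per-window error rates plus a simple spike check. Real decisioning
// engines are far more involved; all values here are invented.

function ewma(samples: number[], alpha = 0.3): number {
  return samples.reduce((acc, x) => alpha * x + (1 - alpha) * acc, samples[0]);
}

// Error rate per 10-second window for one CDN in one region:
const windows = [0.001, 0.002, 0.004, 0.009, 0.02];

const smoothed = ewma(windows);
const rising = windows[windows.length - 1] > 2 * smoothed;

if (rising) {
  // Shift a share of new sessions away before users see hard failures.
  console.log(`error trend rising (EWMA ${smoothed.toFixed(4)}); draining traffic`);
}
```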

What’s the trend going forward? “Predictive” is a term to watch in 2017.

“If there is a goal in terms of accelerating impact, I think it would be fair to say that we are working toward increasingly predictive traffic management,” said Jones.

QoS on Wi-Fi Networks

With a significant amount of OTT consumption occurring on portable devices, what is being done to address QoS for Wi-Fi devices in the home, at retail establishments, or at restaurants?
