Streaming Media


Video: Why Use A/B Testing in HTML5 Video Player Development?
Streamroot's Erica Beavers explains what A/B testing is and how it can benefit organizations building custom video streaming players.

In this excerpt from her presentation at Streaming Media West 2016, Streamroot's Erica Beavers outlines the basic principles of A/B testing, and how organizations developing HTML5 video players can use it to generate a data-driven analysis of their players' effectiveness.

Read the complete transcript of this video:

Erica Beavers: A/B testing, as I think most people probably know, is a way to test two versions, an A version and a B version, of your product or your feature. It's essentially a way to be data-driven in your analysis. Instead of guessing, you can make very informed decisions using real numbers from production. The great thing about A/B testing is that most often you're using a resource that is already at your disposal: your own user data.
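To make the A/B split concrete, here is a minimal sketch of how a player might assign viewers to the two versions. All names here are illustrative, not from any specific player SDK; the key idea is that hashing a stable viewer ID makes the assignment deterministic, so the same viewer sees the same variant on every session.

```typescript
type Variant = "A" | "B";

// FNV-1a 32-bit hash: cheap, deterministic, good enough for bucketing.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// Split viewers 50/50 between the two player builds.
function assignVariant(viewerId: string): Variant {
  return fnv1a(viewerId) % 2 === 0 ? "A" : "B";
}
```

Because the bucket is derived from the ID rather than drawn at random per session, variant membership stays consistent, which keeps session-level metrics from mixing the two experiences.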

We're applying this today to video players, to HTML5 video players, and to quality-of-service metrics, but this can obviously be used for any number of scenarios. A/B testing doesn't always make sense, but it does in a few situations. First, when you can't test locally, when you're dealing with highly distributed or unmanaged networks that really can't be reproduced locally; we're going to use the example of A/B testing with our peer-to-peer solution. Second, when you're deploying a completely new feature that's going to break with your current logic, so you can see where any changes are coming from. Third, any time you'd otherwise rely on a subjective hypothesis or magic numbers. I'm sure you've had to deal with this a lot; you've all done it, setting a buffer to 10 seconds instead of 12 seconds without really knowing why. A/B testing is a way to find out what actually works in production.

Before starting your test, you want to keep a couple of basic principles in mind. The first is, when you're defining your metrics, you're obviously going to be looking at something relatively complicated, especially if you're measuring something in terms of user experience. Make sure that your success metrics are actually what you want to test. Are you trying to measure video play time, or total session time? Which is more relevant for your business? For example, do you want to look at the number of rebuffering events, the total time spent rebuffering, or the actual number of milliseconds spent rebuffering during that session? Make sure what you're testing is going to give you something relevant. That sounds very, very obvious, but it is a mistake that a lot of people actually make: they end up testing for something that's not really relevant to their business case.
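The point about candidate metrics can be sketched in code. The types below are assumptions for illustration, not a real analytics API; they show how the same session log yields three different "success" metrics, and why you should pick the one that matches your business question before the test starts.

```typescript
interface RebufferEvent {
  startMs: number; // when playback stalled
  endMs: number;   // when playback resumed
}

interface Session {
  playTimeMs: number;       // time spent actually playing
  rebuffers: RebufferEvent[];
}

// Candidate metric 1: how many times playback stalled.
function rebufferCount(s: Session): number {
  return s.rebuffers.length;
}

// Candidate metric 2: total milliseconds spent stalled.
function rebufferTimeMs(s: Session): number {
  return s.rebuffers.reduce((sum, e) => sum + (e.endMs - e.startMs), 0);
}

// Candidate metric 3: share of total watch time spent stalled.
function rebufferRatio(s: Session): number {
  return rebufferTimeMs(s) / (s.playTimeMs + rebufferTimeMs(s));
}
```

A session with two short stalls and one with a single long stall can rank differently under metric 1 and metric 2, which is exactly the ambiguity the talk warns about.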

Second, going along with that, really define and understand your variables, and make sure your test isn't conditioned by a variable other than the one you're actually trying to test. Of course, also make sure that an improvement to one metric is not causing any sort of decrease in the other crucial metrics that you might have.
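That last check, that a win on the primary metric doesn't hide a loss elsewhere, amounts to a guard-metric pass over the results. The sketch below is illustrative only; the metric names and tolerance are assumptions, not anything from the talk.

```typescript
interface MetricDelta {
  name: string;
  relativeChange: number; // e.g. -0.05 means 5% worse than the control
}

// Return the names of any guard metrics that regressed past the tolerance.
// A variant should only "win" when this list comes back empty.
function failedGuardrails(guards: MetricDelta[], tolerance = -0.02): string[] {
  return guards
    .filter((g) => g.relativeChange < tolerance)
    .map((g) => g.name);
}
```

For example, a B variant that cuts rebuffering but regresses startup time by 5% would be flagged here rather than shipped on the strength of the primary metric alone.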

Third, you obviously want to determine an appropriate sample size and testing time frame, depending on your use case and your resources. Finally, you want to use identical populations as much as possible. When you begin testing, make sure you're doing it intelligently. If you're testing for quality of service on a video, you want to use viewers who are on the same stream, in the same region, using the same ISP, et cetera. You don't want to introduce any sort of discrepancy into the testing that could bias your results.
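For the sample-size point, a standard back-of-the-envelope calculation is the two-proportion normal approximation. The sketch below assumes a proportion metric (say, the rate of rebuffer-free sessions), 95% confidence, and 80% power; it is a rough planning tool under those assumptions, not a substitute for a proper power analysis.

```typescript
// Approximate sample size per variant to detect an absolute change of
// `minDetectable` in a baseline proportion `baselineRate`.
function sampleSizePerVariant(
  baselineRate: number,  // e.g. 0.80 rebuffer-free sessions today
  minDetectable: number, // smallest absolute change worth detecting, e.g. 0.02
  zAlpha = 1.96,         // two-sided 95% confidence
  zBeta = 0.84           // 80% power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate + minDetectable;
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(numerator ** 2 / minDetectable ** 2);
}
```

Detecting a 2-point change from an 80% baseline needs roughly six thousand sessions per variant, which is why the talk pairs sample size with a realistic testing time frame: low-traffic streams may need to run the test much longer.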

Related Articles
Nice People at Work's Diane Strutner, Studio71's Mike Flynn, and WillowTree's Jeremy Stern debate the merits of in-house vs. third-party analytics tools for video brands.
Streamroot's Nikolay Rodionov walks viewers through the key elements of an effective A/B testing workflow for testing video players during development.
Disney's Mark Arana and Wowza's Chris Knowlton discuss the challenges content owners face in migrating their video from Flash to HTML5, and the importance of knowing where their video will land before planning their migration strategy.
Akamai's Will Law discusses the broad adoption of HEVC and the advantages for content owners of absorbing the additional cost of encoding to HEVC and VP9 for the ROI of expanded reach to supporting platforms.
Streamroot's Erica Beavers gives content owners migrating from Flash to HTML5 a whirlwind tour of HTML5 player components and UIs, likening an HTML5 player's inner workings to the components of a loaded hamburger.
Streaming Learning Center's Jan Ozer compares four approaches to testing video compression quality in this clip from Streaming Media East.
Brightcove's Matt Smith explains how to meet the challenges of live event streaming from technical to QA to process issues to securing bandwidth in this presentation from Live Streaming Summit.