
Sweet Streams: Optimizing Video Delivery

I work with a company called StackPath. We launched our first product back in September. Our big news, as of earlier this year, was our acquisition of Highwinds. We have grown by leaps and bounds over a very short period of time.

I’m one of the engineers at StackPath, so I work on a lot of the back-end stuff. This article is going to be a bit technical due to the nature of the subject. My assumption is you’re all professionals. You’ve already created your video. You’ve already encoded it. You’ve transcoded it. It is the way that you want it to be, and you’re ready for the next step: video distribution. How do you get the content that you’ve created out to the end viewers who are eager to see it? That’s the problem we have to solve.

Optimizing One-to-One Distribution

One of the problems with delivering video on the web is that it requires an optimized distribution system. Delivering video was a lot easier in the days of strictly over-the-air broadcasting. You’d have a station, it would broadcast, everybody could tune in, and that was how you did it. But in today’s one-to-one world, you don’t have this sort of mass broadcast phenomenon. You have a very different approach to the problem.

With traditional over-the-air broadcasting, you were less worried about the individual user experience because you knew it: you could control what the primary experience was going to be.

That’s no longer the case. In a one-to-one world, you have one end user contacting a server and negotiating their own connection, which can be wildly independent of everybody else’s. The experience of one person can be completely different from the next, even over the same networks and using the same equipment.

To deliver on this promise, we require a couple of things. There are some general guidelines to follow. We require low latencies. We require high bandwidth. We require high reliability.

Most importantly, we require scalability. This is one of the big differences. If you have a server, it has some finite amount of bandwidth assigned to it. You take that, divide it by 100 people, great. Divide it by 1,000 people, and everyone has one-thousandth of the bandwidth available as opposed to one-hundredth of the total bandwidth. The more people you get, the smaller the slice of that bandwidth each one can claim. It’s just basic math.
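
To make that arithmetic concrete, here is a quick Python sketch. The 10 Gbps server capacity is an assumed figure for illustration, not a spec for any real server.

```python
# Per-viewer bandwidth on a single server, as described above.
SERVER_BANDWIDTH_MBPS = 10_000  # assumption: one server with a 10 Gbps port

for viewers in (100, 1_000, 10_000):
    per_viewer_mbps = SERVER_BANDWIDTH_MBPS / viewers
    print(f"{viewers:>6} viewers -> {per_viewer_mbps:8.2f} Mbps each")
```

At 10,000 viewers, each one is down to 1 Mbps, which is nowhere near enough for a quality HD stream.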

Here’s an example. Brainstorming on this topic at StackPath, we looked around the office at who had what device. As it turns out, half the office has an iPhone. If I care about iPhone users, how do I get the video to them? I use a protocol called HLS.

If I’m using HLS, how does the HLS object get to the iPhone users? If you’re on an iPhone, you’re most likely on a 4G network. What’s wrong with 4G networks? Their connections can be spotty. Latencies vary wildly depending upon where you are. If you’ve ever walked through a tunnel with your cell phone and lost the signal, you know what the problem with 4G is: it varies constantly.
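
For context on how HLS copes with that variability: it describes a stream as a master playlist of variant renditions at different bitrates, and the player steps between them as conditions change. Below is a minimal Python sketch of that selection idea; the playlist contents and the pick_variant helper are hypothetical illustrations, not code from any real player.

```python
# A hypothetical HLS master playlist listing three bitrate variants.
MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def pick_variant(playlist: str, measured_bps: int) -> str:
    """Pick the highest-bandwidth variant the measured connection can sustain."""
    lines = playlist.strip().splitlines()
    variants = []  # (declared bandwidth in bits/sec, variant URI)
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            attrs = line.split(":", 1)[1].split(",")
            bandwidth = int(next(a.split("=")[1] for a in attrs
                                 if a.startswith("BANDWIDTH")))
            variants.append((bandwidth, lines[i + 1]))
    fits = [v for v in variants if v[0] <= measured_bps]
    # Fall back to the lowest variant if even it exceeds the measured rate.
    return max(fits)[1] if fits else min(variants)[1]

print(pick_variant(MASTER_PLAYLIST, measured_bps=3_000_000))  # mid/index.m3u8
```

A real player re-measures throughput continuously and can switch variants segment by segment; the sketch only shows the core idea of matching declared bandwidth to measured bandwidth.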

So, let’s say I want to deliver a video to tens of thousands of iPhones simultaneously using HLS over a 4G connection. That goes back to the earlier bandwidth problem. I can’t do this with just one server.

And where are the end viewers? This is the last problem that a lot of providers don’t think about, because of the old broadcast model. With over-the-air, you’d have just one central broadcast tower that served everybody in the local area. That’s no longer the case. The whole wide world can talk to your server, and if it’s very far away from them, they’re going to have a very bad time with latency. If they’re very close, it’s going to be all right.

In this case, strictly for a thought experiment, let’s place half of our users in Asia and half in the U.S.
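
To put rough numbers on why geography matters, here is a back-of-the-envelope Python sketch of the physical floor on round-trip time. The distances and the fiber propagation speed (roughly two-thirds the speed of light) are coarse assumptions for this thought experiment.

```python
# Lower bound on round-trip time imposed by physics alone.
SPEED_IN_FIBER_KM_PER_S = 200_000  # assumption: ~2/3 c in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over a straight fiber path."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000

# One server in the U.S., two audiences from the thought experiment:
print(f"Nearby U.S. viewer, ~2,000 km away: {min_rtt_ms(2_000):5.1f} ms minimum")
print(f"Viewer in Asia, ~11,000 km away:    {min_rtt_ms(11_000):5.1f} ms minimum")
```

Real routes are longer than straight lines and add queuing and routing delay on top, so observed latencies sit well above this floor. The point is that no amount of server tuning beats geography; you have to move the content closer.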

Bandwidth and latency are our two key metrics. Bandwidth is the size of the pipes, the amount of data we can send in a given second. Obviously, the more data you can send, the higher the bandwidth. But we also have this concept of latency, which is the time it takes for information to get from one device to another. Time-to-first-byte is a nice representation of that. The problem is that bandwidth and latency work together in non-obvious ways to deliver what the end user is going to see.
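
As a rough illustration of that metric, here is a small Python sketch that measures time-to-first-byte against a placeholder URL. It lumps DNS, TCP, TLS, and server think-time into one number, so treat it as an approximation rather than a precise probe.

```python
# Rough time-to-first-byte (TTFB) measurement. The URL is a placeholder.
import time
import urllib.request

def time_to_first_byte(url: str) -> float:
    """Seconds from request start until the first response-body byte arrives."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        response.read(1)  # block until the first byte of the body is read
    return time.monotonic() - start

ttfb = time_to_first_byte("https://example.com/")
print(f"TTFB: {ttfb * 1000:.1f} ms")
```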
