How CDNs and Content Providers Flatten Network Peaks
Read the complete transcript of this clip:
Peter Chave: With this unprecedented growth that we were all expecting suddenly to happen, ISPs and even government and regulatory authorities were concerned that we could see huge congestion on these last-mile networks, which would lead to very poor performance. Ethernet works really well up to a certain point, like 80, 90%. You can't get that last 10% just because everyone's backing off and contending, and all of the other algorithms kick in. So there was a risk that we felt we were going to run out of internet. Now we were fortunate in the fact that, because we were planning ahead for a very big 2020 anyway, a lot of the CDNs and ISPs had built up capacity to deal with the Olympics and with the big sporting events, and we've got elections here in the U.S. coming. We've got all of these other things going on which would drive up demand. As well, we had Disney+, HBO Max, all these other big services launching. We put in a lot of capacity, which, thankfully, along with some other techniques, has proved to be enough so far. Either on their own initiative, or in response to requests from their ISP partners or from regulatory authorities--particularly in Europe--saying, "Hey, we're concerned about this," service providers went through a series of processes that basically looked at video streaming content--which probably accounts for about 70% of all traffic being delivered--and said, "What can we do to try and lower the top bitrates? What can we do to bring down that total envelope and ensure there's at least a good chance of fair share and that everyone's going to get a good experience out there?"
So there are really three places you can do that. You can go back to your content and say, "Let's re-encode it a little bit." People are doing that because they want to look at these new per-title and context-aware encoding techniques. There are new video codecs like HEVC, which most target devices--TVs or boxes or sticks--have pretty good support for now. So now is the time that a lot of people are going back and starting to look at that. But that may not be quick enough to help you meet a real-time request like "How can we lower bitrate?"
So the real simple way a lot of people did it was, they literally went into their manifests and they took out the highest bitrate. So they had 6.5 Mbps at the top, then 4 Mbps, 3 Mbps, et cetera, and they just simply went in and deleted the 6.5 Mbps out of the manifest. That meant that when the player started, it simply wouldn't get the 6.5. Now there may be a 1080p rendition at 4.5. That's okay. It's not as good, but it certainly means everyone can access the content, rather than just a few people getting it before things start to fall apart.
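That "delete the top line" approach can be sketched in a few lines of code. This is a minimal illustration, not a real CDN tool: the playlist contents are invented, and only the `BANDWIDTH` attribute of an HLS master playlist is inspected.

```python
import re

# Illustrative HLS master playlist matching the ladder described above:
# 6.5 Mbps at the top, then 4.5 and 3 Mbps. Contents are hypothetical.
MASTER = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=6500000,RESOLUTION=1920x1080
1080p_high/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=4500000,RESOLUTION=1920x1080
1080p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
720p/index.m3u8
"""

def cap_playlist(master: str, max_bps: int) -> str:
    """Return the playlist with every variant above max_bps removed."""
    out, skip_next = [], False
    for line in master.splitlines():
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            bw = int(m.group(1)) if m else 0
            if bw > max_bps:
                skip_next = True   # drop this tag and its URI line
                continue
        elif skip_next:
            skip_next = False      # this is the URI of the dropped variant
            continue
        out.append(line)
    return "\n".join(out) + "\n"

# Cap at 5 Mbps: the 6.5 Mbps variant disappears, 4.5 and 3 Mbps remain.
capped = cap_playlist(MASTER, 5_000_000)
```

A player fetching `capped` instead of `MASTER` never sees the 6.5 Mbps rendition, so its ABR logic tops out at 4.5 Mbps.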
So that kind of manifest manipulation is the core of a lot of the other things you can do. You can go to the player and actually put behavior in there that says, "Under certain conditions, or triggered through an API, I'd like you to only go up to a certain bitrate. I don't want you to just try and consume as much as possible. I want you to go up to, say, 5 Mbps and stop there." That's what people like Netflix did in Europe. They said, "We're going to basically limit everyone to a 480p experience." So that's like an SD experience, rather than letting them go all the way up to HD. Then there are things you can do from the CDN. That's actually a feature we have when we deliver the manifest--we can use our manifest manipulation capabilities to dynamically adapt that manifest as we deliver it. So rather than you having to go in and edit all your files and manage that process on your origin, we can--based on time of day, based on a particular network, whether it's going to, say, a DSL node versus a CMTS network, or to a particular city or a particular geo where a regional ISP is struggling--set a rule that says, "Based on those criteria, I'm only going to serve a manifest with bitrates up to 4.5 Mbps."
So if there are bitrate renditions at 6 and 7 Mbps in there, we'll just remove those automatically, but only for those viewers in that geo. So you don't have to go back and mess with your whole catalog, and maybe you only do it in the evening. So between, say, four and six in the evening, I'll serve these more limited manifests.
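A rule table like the one described--cap only certain geos, only at certain hours--might look like the following sketch. The field names, the rule format, and the geo label are all assumptions for illustration, not a real CDN configuration API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical edge rule: cap manifest bitrates for a given geo during
# an evening window. All names and values here are invented examples.
@dataclass
class CapRule:
    geo: str          # region the rule applies to
    start_hour: int   # local hour (inclusive) the cap switches on
    end_hour: int     # local hour (exclusive) the cap switches off
    max_bps: int      # highest bitrate left in the served manifest

RULES = [
    CapRule(geo="eu-regional-isp", start_hour=16, end_hour=22,
            max_bps=4_500_000),
]

def cap_for_request(geo: str, local_hour: int) -> Optional[int]:
    """Return the bitrate cap for this viewer, or None to serve the
    manifest untouched."""
    for rule in RULES:
        if rule.geo == geo and rule.start_hour <= local_hour < rule.end_hour:
            return rule.max_bps
    return None
```

The returned cap would then feed whatever manifest-filtering step the edge performs, so the origin catalog never changes: an evening request from the affected geo gets a 4.5 Mbps ceiling, everyone else gets the full ladder.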
But that's a way of trying to flatten the peaks. Everyone's been flattening the curve recently. This is talking about flattening the network demand curve as we go in there.
So the final thing you can do--and this is something we actually deployed in Europe--is pure bandwidth throttling of connections. If you're sitting at home and you start watching a movie, it may have a 6 Mbps rendition at the top of the ABR ladder. Your cable modem connection probably easily gives you 100-150 megabits if you've got a good high-speed package at home. So that traffic may get served to you at 150 megabits per second. Now, the top bitrate was only, say, 6. You get each segment in like 3% of the time it takes to play out, which is great, but there's a lot of idle time on the TCP connection. And it's a lot of bursty traffic, with all these very big peaks coming down the pipe.
Something we can actually turn on is a throttling mode. We can say, "We will deliver it at 3-5 times the maximum bitrate that you need to serve," which means the players will still adapt up to the highest bitrate, but they will be spreading their traffic out over a much longer time, so the ABR algorithms will have a much better time trying to figure out what the actual bandwidth is. And at least that makes the streaming connections on that pipe much more even and much more predictable--both for the ISP, to be able to monitor and then make recommendations, and for the player, to be able to stabilize at a comfortable bitrate rather than thrashing as it sees lots of lumpy traffic and lots of other congestion on that last-mile link.
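The arithmetic behind that pacing is worth seeing once. Using the rough numbers from the discussion--a 6 Mbps top rendition on a 150 Mbps line--and assuming a 6-second segment and a 4x pacing multiplier (both illustrative values, picked from inside the ranges mentioned):

```python
# Rough numbers from the discussion: 6 Mbps top rendition, 150 Mbps line.
# Segment duration and the exact multiplier are assumptions for
# illustration, chosen from the typical/quoted ranges.
TOP_BITRATE_BPS = 6_000_000
LINE_RATE_BPS = 150_000_000
SEGMENT_SECONDS = 6        # a common HLS/DASH segment duration
PACE_MULTIPLIER = 4        # middle of the 3-5x range described

segment_bits = TOP_BITRATE_BPS * SEGMENT_SECONDS  # 36 Mbit per segment

# Unthrottled: the segment bursts down at full line rate,
# then the connection sits idle until the next segment request.
burst_time = segment_bits / LINE_RATE_BPS          # 0.24 s per segment
busy_fraction_burst = burst_time / SEGMENT_SECONDS # ~4% of the interval

# Paced: the CDN serves at a few multiples of the top bitrate instead.
paced_rate = TOP_BITRATE_BPS * PACE_MULTIPLIER     # 24 Mbps
paced_time = segment_bits / paced_rate             # 1.5 s per segment
busy_fraction_paced = paced_time / SEGMENT_SECONDS # 25% of the interval
```

The transfer still finishes with plenty of headroom before the next segment is due, but the connection is active for a quarter of each interval instead of a few percent, which is what smooths out the peaks and gives the ABR algorithm a steadier bandwidth signal.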
Akamai's Peter Chave explains how changes resulting from shelter-at-home restrictions changed streaming traffic patterns, flattening or shifting peaks, and explains how CDNs interpreted and adjusted for these shifts in this clip from Content Delivery Summit 2020.
17 Jul 2020