
Encoding Best Practices to Reduce Latency

Watch the complete panel from Streaming Media East Connect, Latency Still Sucks (and What You Can Do About It) on the Streaming Media YouTube channel.

Learn more about low-latency streaming at Streaming Media West 2020.

Read the complete transcript of this clip:

Jason Thibeault: If we assume that latency is from glass to glass, we know that there are lots of components on the way from when content is acquired to when it's consumed by the viewer. There's encoding. There's a virtualized container, something that is turning the video from one bitrate to another, one format or one package to another. There's DRM, there's closed captioning, there's all sorts of things in the middle.

Let's talk about encoding a little bit. I know that there's been a big push amongst a lot of the encoding vendors out there to do context-aware encoding--not applying one profile to every frame of the video, but applying multiple profiles to the frames, depending upon what's happening in the frame. A talking head is different from somebody running down a field kicking a soccer ball. So what are some choices that can be made during encoding to improve latency? What if that encoder is responsible for some of the delay in getting the video from glass to glass? What are some best practices around, say, chunk size? What are some best practices around profile variables?

Casey Charvet: Let's say your camera adds a frame or two of latency, and your video switcher adds two frames of latency. On our encoders, we use some rate-control techniques that look ahead four frames, so there's a four-frame buffer in the encoder itself. The acquisition card taking in the SDI signal has its own buffer.

So before we've even begun processing video frames, we're already six frames in, and that's about a fifth of a second at 30 frames a second. So it's very easy to find yourself at a second of latency or half a second of latency before bytes have even left your origination site.
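To put that arithmetic in one place, here is a minimal sketch; the stage names and frame counts are illustrative placeholders chosen to add up to the six-frame, fifth-of-a-second figure Charvet cites, not measurements from his pipeline:

# Illustrative pre-encode latency budget, counted in frames of delay per stage.
FPS = 30

stages = {
    "SDI acquisition buffer": 2,            # placeholder: capture-card buffering
    "encoder rate-control look-ahead": 4,   # the four-frame look-ahead buffer
}

total_frames = sum(stages.values())
print(f"{total_frames} frames ~= {total_frames / FPS * 1000:.0f} ms at {FPS} fps")
# 6 frames ~= 200 ms, before the camera and switcher frames are even counted.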

One of the things we do to combat that is decide we don't care that much about rate control, so we're just going to throw that out the window. We're going to turn that four-frame look-ahead buffer off. If we want low latency, we don't care that much about quality-per-byte efficiency, and we're willing to spend a few more bytes to have lower latency. So we might have shorter keyframe intervals, which means we're sending out more I-frames. If you want the lowest latency possible, you just put something into a near-realtime encoder, output MPEG-TS, I-frame-only, and throw it somewhere. You'll have 35 milliseconds of latency and it's great, but that doesn't work when we're trying to deliver over constrained bandwidth or deliver at scale.
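As a concrete illustration of the kind of tuning Charvet describes--a hedged sketch, not his actual configuration--the following assumes FFmpeg with libx264 is installed; the file names and the one-second keyframe interval are placeholders:

import subprocess

# Low-latency x264 encode sketch: no rate-control look-ahead, no B-frames,
# short keyframe interval, MPEG-TS output. Spends more bits per frame in
# exchange for lower encoder-side latency.
cmd = [
    "ffmpeg",
    "-i", "input.mp4",          # placeholder source
    "-c:v", "libx264",
    "-preset", "veryfast",
    "-tune", "zerolatency",     # drops look-ahead and B-frame buffering
    # Made explicit below: no look-ahead, 1-second GOP at 30 fps.
    # Set keyint=1 for the I-frame-only extreme mentioned above.
    "-x264-params", "rc-lookahead=0:keyint=30:min-keyint=30:scenecut=0",
    "-bf", "0",                 # no B-frames
    "-f", "mpegts",
    "low_latency.ts",
]
subprocess.run(cmd, check=True)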

So I think there are definitely ways to tune your encode profile on the origination side to get lower latency. If you start there and work all the way through your production pipeline, your net latency at the end is low. If you start with a lot of latency at the beginning, you're going to be fighting that the whole time.

Marc Cymontowski: It's important to differentiate the two sides of the story--the ingest side and the delivery side. On the ingest side, you often have low-latency protocols because, as Casey mentioned, if you have an MPEG-TS encoder and it's a hardware appliance like our encoders, for example, you can achieve very low latency. You can wrap that in a transport stream or send it over a local network and you won't have any issues. But as soon as you go over the public internet to reach the cloud--whether for transcoding and repackaging and then delivering the video, or for direct distribution end-to-end--that ingest is a complicated part, because often you don't have the quality of network that you would wish for, the way you do on-prem. So getting the content into the cloud is the first challenge.
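Cymontowski goes on to contrast RTMP with UDP-based protocols like SRT. As a hedged sketch of that ingest hop (assuming an FFmpeg build with libsrt; the host, port, and 120 ms latency window are placeholder values, not figures from the panel), pushing the MPEG-TS feed into the cloud might look like this:

import subprocess

# Contribution-feed sketch: re-wrap the already-encoded TS and send it over SRT
# for the hop across the public internet.
cmd = [
    "ffmpeg",
    "-re", "-i", "low_latency.ts",
    "-c", "copy",                  # already encoded; just re-wrap and send
    "-f", "mpegts",
    # SRT caller mode; latency is specified in microseconds (120000 = 120 ms).
    "srt://ingest.example.com:9000?mode=caller&latency=120000",
]
subprocess.run(cmd, check=True)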

In many cases in the past, RTMP has been used to do this, a TCP-based protocol that comes with a lot of limitations because of congestion control and head-of-line blocking issues that keep you from getting the full throughput. UDP-based protocols like SRT utilize the available bandwidth better, but you have to recover from the packet loss that happens along the way. Then, when you hit the cloud and you want to deliver to many, many people, a stream like MPEG-TS is not very scalable, because you need to deliver all of the data, live and linear, to every single person. That's where the boom of HLS started: suddenly you repackage the data, and with the TS it's pretty easy--you just chunk the TS up and make an HLS stream out of it. The moment you have chunked it up and put the data into segments, you can scale better, because you can use a caching mechanism to distribute files across servers and then deliver. And that's where, on the delivery side, the latency is introduced.
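A minimal sketch of that repackaging step (again assuming FFmpeg; the listener URL and the six-second segment duration are placeholders): chunking the incoming TS into HLS segments is what lets a CDN cache and fan the stream out, and each segment's length is roughly the delivery-side latency it adds, which is the tradeoff Cymontowski describes.

import subprocess

# Cloud-side repackaging sketch: receive the TS over SRT and chunk it into HLS
# segments without transcoding.
cmd = [
    "ffmpeg",
    "-i", "srt://0.0.0.0:9000?mode=listener",
    "-c", "copy",                       # repackage only, no transcode
    "-f", "hls",
    "-hls_time", "6",                   # 6-second segments; shorter segments lower latency
    "-hls_list_size", "6",
    "-hls_flags", "delete_segments",    # keep a rolling live playlist
    "live/stream.m3u8",
]
subprocess.run(cmd, check=True)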
