Know Your Tech for Low-Latency Streaming

videoRx's Robert Reinhardt guides viewers through the key enabling technologies of low-latency streaming, including server ingest and client delivery protocols like WebRTC, NDI, RTMP, and HLS, in this presentation from Streaming Media West 2022.

Learn more about low-latency streaming at Streaming Media East 2023.

Robert Reinhardt: The technologies that we have when it comes to low latency include ingest protocols, or muxers--they're not necessarily synonymous. WebRTC is more of a transport layer; it's not a specific way to mux audio and video, although, like HLS, it's a wrapper that muxes audio and video together. But when it comes to server ingest, we've got low-latency protocols. NDI, of course, had a huge boost during COVID because a lot of production workflows went to full NDI--people using NDI on their own private LANs, in the cloud, and, of course, on location, or coming out of Teams or Zoom. NDI is a very popular--license-free, for the most part--ultra-low-latency way to get video around that can be uncompressed or compressed, depending on what flavor of NDI you're using.

RTSP/UDP is used more with security cameras and the traffic cams out there. Again, we get all sorts of audiences here--I'm not presuming you all come from media and entertainment. When it comes to municipality live streaming, I do work for various municipalities in British Columbia, and across the United States I've worked with the City of Colorado Springs and a few municipalities in California, working specifically with their traffic cams. And they want low latency on that too. Most of those cameras are IP cameras from Axis or a similar vendor, and those are all RTSP pulls from the cameras into an infrastructure that hopefully won't add much more latency on top of it. And of course, Flash has been around for a long time and is gone, but RTMP is its legacy--one that even Facebook and YouTube still use for ingest today.
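
To make the RTSP pull concrete, here's a minimal sketch using OpenCV in Python (my choice of library, not something Reinhardt specifies); the camera URL is hypothetical, following the path scheme Axis cameras typically expose.

```python
import cv2  # pip install opencv-python

# Hypothetical Axis-style camera endpoint; real URLs vary by vendor and model.
RTSP_URL = "rtsp://user:password@192.168.1.50/axis-media/media.amp"

cap = cv2.VideoCapture(RTSP_URL)  # OpenCV negotiates RTSP and pulls the stream
if not cap.isOpened():
    raise RuntimeError("Could not open RTSP stream")

while True:
    ok, frame = cap.read()  # blocks until the next decoded frame arrives
    if not ok:
        break  # stream ended or dropped
    # hand `frame` off to a monitoring or restreaming pipeline here

cap.release()
```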

So if you're doing a live stream on those platforms, you probably already know the latency is pretty high. We can't do RTMP out anymore, but we can still do RTMP in. Gradually, that's gonna be phased out, but because there's such an infrastructure investment on top of RTMP, I don't think it's gonna go away next year or the year after that. RTMP is probably gonna be a legacy protocol that sticks around, and it can actually be pretty low latency--even ultra-low latency. I'll come back in just a second to that slide from Wowza that I skipped, which basically refers to tuning all of these different protocols and muxers that might be out there.
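
As an illustration of RTMP still being the lingua franca for ingest, here's a sketch that pushes a local file to an RTMP ingest point by shelling out to FFmpeg from Python (FFmpeg is my assumption, not something named in the talk, and the ingest URL and stream key are placeholders):

```python
import subprocess

# Placeholder ingest point and stream key; YouTube and Facebook hand you
# these in their live dashboards.
INGEST = "rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY"

subprocess.run([
    "ffmpeg",
    "-re", "-i", "input.mp4",   # read the source in real time
    "-c:v", "libx264",          # RTMP ingest generally expects H.264...
    "-tune", "zerolatency",     # ...and this tuning keeps encoder delay low
    "-g", "60",                 # 2-second keyframe interval at 30 fps
    "-c:a", "aac",
    "-f", "flv", INGEST,        # RTMP carries an FLV-muxed stream
], check=True)
```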

For server ingest, I put the popular ones on the left-hand side. It's missing SRT, which should be there. SRT, of course, is pushed by Haivision. It's open source, and SRT is quickly becoming a popular replacement for RTMP, particularly if you're using a codec that's not H.264. If you wanna start using modern codecs like HEVC or AV1, you're not gonna be able to use RTMP very easily to do that. Yury Udovichenko of Softvelum has adapted an RTMP variant that will work with other codecs, but that's very specific to the infrastructure his company's been working on.
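
For comparison, here's the same kind of push over SRT, which doesn't carry RTMP's codec restrictions. This sketch assumes an FFmpeg build with libsrt and an HEVC encoder; the destination is a placeholder.

```python
import subprocess

# Placeholder SRT listener; "mode=caller" means we initiate the connection.
# Note: FFmpeg's srt "latency" option is in microseconds (200000 = 200 ms).
DEST = "srt://media.example.com:9000?mode=caller&latency=200000"

subprocess.run([
    "ffmpeg",
    "-re", "-i", "input.mp4",
    "-c:v", "libx265",     # HEVC: fine over SRT, awkward over stock RTMP
    "-c:a", "aac",
    "-f", "mpegts", DEST,  # SRT typically carries MPEG-TS
], check=True)
```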

Client delivery, of course, is how we're consuming these streams--not just how a server might be talking to a point of origin or a remote location. We've got our standard HTTP delivery with HLS and DASH and the CMAF variants of those. WebRTC, of course, is there for client delivery as well.

We still have web socket services out there. nanocosmos, whose tagline was "around the world in about a second," used web sockets for its playback mechanism. So web sockets weren't necessarily an end-to-end delivery for them; it was a client delivery that was easier to scale than WebRTC. So you still see some web socket implementations. Web sockets have been around in browsers for a long, long time, and a web socket is just a generic socket. You could send whatever you want over it: data, audio, video. It's not necessarily an easy protocol to work with because, again, it was not designed for sending audio and video around the web like WebRTC was.
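
Because a web socket is just a generic bidirectional pipe, pushing media over one amounts to sending binary chunks. Here's a minimal server sketch using the Python websockets package (my choice of library; nanocosmos's actual implementation is proprietary, and the file name is a placeholder):

```python
import asyncio
import websockets  # pip install websockets (v11+ handler signature)

async def handler(ws):
    # Send raw MPEG-TS in fixed-size batches; the client-side player is
    # responsible for demuxing and decoding (e.g. via Media Source Extensions).
    with open("segment.ts", "rb") as f:
        while chunk := f.read(188 * 512):  # MPEG-TS packets are 188 bytes
            await ws.send(chunk)           # binary frame over the socket

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```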

Apple HLS has around 30 seconds of latency. Typically they put it down in the 18+ seconds of latency column, but I would say your average HLS latency is 30 seconds, mainly because people are using ten-second chunk sizes and a three-chunk playlist. So you multiply 3 chunks times 10 seconds--not including any kind of delays between those chunks being delivered across CDNs--and you're looking at 30 seconds. It's not too hard to reduce that latency down to six seconds. You'll see under "HLS Tuned," Wowza puts it at just over five seconds, and you can get there with a two-second chunk size and keyframe interval, times three chunks. Again, if you start minimizing your chunk size, just multiply it by the number of chunks that are listed in the manifest, and that gives you your average latency. You'll need to add some time to that too, just because of transports between edges and your origin, potentially. But I would say if you have a two-second segment size on a playlist that repeats it three times, then you're probably looking at anywhere from 6-10 seconds of latency in a tuned playlist like that.
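
The arithmetic Reinhardt describes is easy to sanity-check. This little helper just multiplies segment duration by playlist length and adds a per-segment delivery fudge factor (the overhead value is a placeholder, not a measured number):

```python
def hls_latency(segment_secs: float, segments_in_playlist: int,
                per_segment_overhead_secs: float = 0.0) -> float:
    """Rule of thumb from the talk: latency ~= chunk size x chunk count,
    plus whatever the origin/CDN transport adds on top."""
    return (segment_secs + per_segment_overhead_secs) * segments_in_playlist

print(hls_latency(10, 3))      # stock HLS: 10 s chunks x 3 -> ~30 s
print(hls_latency(2, 3))       # tuned HLS: 2 s chunks x 3  -> ~6 s
print(hls_latency(2, 3, 1.0))  # ~1 s overhead per chunk    -> ~9 s, the 6-10 s range
```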

And that's not hard to do. Anyone who's got a media server can tune their packaging to that without having to jump through too many hoops. As we get closer to sub-one-second latency, though, we start to get into different technologies like WebRTC, which you can see is first and foremost in this near-real-time category. We're closer to 250 milliseconds. Generally speaking, I think most people are looking to achieve under 500 milliseconds, if not under 300 milliseconds, of latency when they're using WebRTC. And of course, there's a cost associated with that. WebRTC doesn't scale as easily as any of these HTTP methods of delivery, so you're gonna have to budget accordingly, whether you're building out your own WebRTC infrastructure or using someone else's.
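
To give a feel for WebRTC's near-real-time floor, here's a loopback sketch using the Python aiortc library (my choice; it isn't mentioned in the talk) that times a data-channel round trip between two in-process peers. Real deployments obviously add network and media-pipeline delay on top of this.

```python
import asyncio
import time
from aiortc import RTCPeerConnection  # pip install aiortc

async def main():
    pc1, pc2 = RTCPeerConnection(), RTCPeerConnection()
    channel = pc1.createDataChannel("latency")
    done = asyncio.Event()

    @channel.on("open")
    def on_open():
        channel.send(str(time.monotonic()))  # timestamp the outgoing ping

    @channel.on("message")
    def on_pong(message):
        rtt_ms = (time.monotonic() - float(message)) * 1000
        print(f"data-channel round trip: {rtt_ms:.1f} ms")
        done.set()

    @pc2.on("datachannel")
    def on_datachannel(ch):
        @ch.on("message")
        def on_ping(message):
            ch.send(message)  # echo straight back

    # In-process "signaling": hand the offer/answer across directly.
    await pc1.setLocalDescription(await pc1.createOffer())
    await pc2.setRemoteDescription(pc1.localDescription)
    await pc2.setLocalDescription(await pc2.createAnswer())
    await pc1.setRemoteDescription(pc2.localDescription)

    await done.wait()
    await pc1.close()
    await pc2.close()

asyncio.run(main())
```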

We now have Low-Latency HLS (LL-HLS), which has been out for a number of years. I remember when Roger Pantos came to Streaming Media West 2019; he was talking about Low-Latency HLS for the very first time at the conference. And so we've had some time over COVID to see how that's gonna evolve. The original Low-Latency HLS spec had an HTTP/2-specific PUSH requirement in it, which has since been removed so that CDNs don't have to support it. Instead, there's a preload hint that you can put into manifests that are specifically Low-Latency HLS. And of course you could tune any of the others, like RTMP.
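
The preload hint is just a playlist tag. Here's an illustrative fragment built in Python (the tag names come from Apple's published LL-HLS spec, but the URIs, sequence numbers, and durations are invented):

```python
# Illustrative LL-HLS media playlist fragment.
playlist = "\n".join([
    "#EXTM3U",
    "#EXT-X-TARGETDURATION:4",
    "#EXT-X-PART-INF:PART-TARGET=0.334",   # advertised partial-segment duration
    "#EXT-X-MEDIA-SEQUENCE:266",
    "#EXTINF:4.00008,",                    # last completed full segment
    "fileSequence266.mp4",
    '#EXT-X-PART:DURATION=0.33334,URI="filePart267.0.mp4"',  # parts of the
    '#EXT-X-PART:DURATION=0.33334,URI="filePart267.1.mp4"',  # in-progress segment
    # The preload hint tells clients (and CDNs) which part to request next,
    # replacing the HTTP/2 PUSH mechanism from the original draft.
    '#EXT-X-PRELOAD-HINT:TYPE=PART,URI="filePart267.2.mp4"',
])
print(playlist)
```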

Back in the days when HQ Trivia was a massively popular trivia game, I had a couple of clients that were trying to get on that same bandwagon, and we were using RTMP libraries in native smartphone apps for playback. So if you're building your own customized playback technology, you still have a lot more options if it's not gonna be strictly within the domain of the browser. So RTMP could be an option for playback if you're building a custom environment for it these days.

I wouldn't put too much weight into RTMP playback, though, just because we've got options that are a lot more mature now, like WebRTC. Even just a few years ago, WebRTC didn't have the kind of cross-platform and cross-browser acceptance or standardization that we have today. Now you don't have to worry so much about H.264 vs. VP8. Those are still codecs that are in play and might need some transcoding back and forth, depending on your workflows. But it's come a long way, and again, COVID accelerated that.

Related Articles
CNN+ Live Operations Manager Ben Ratner discusses how even "ultra-low latency" complicates hybrid (cloud and on-prem) workflows in this clip from Streaming Media Connect 2022.
RealEyes' David Hassoun discusses what low latency is and what it isn't, and sets reasonable goals for what you can expect when doing live streams in the current climate.