Streaming Media


SME 2018: Red5 Pro's Chris Allen Talks Ultra-Low Latency Large-Scale Streaming
Streaming Media contributing editor Tim Siglin interviews Red5 Pro CEO & Co-Founder Chris Allen at Streaming Media East 2018.


Read the complete transcript of this interview:

Tim Siglin: We’re back at Streaming Media East 2018. I'm Tim Siglin, a contributing editor with Streaming Media and the Founding Executive Director of the not-for-profit Help Me! Stream. I'm here today with Chris Allen from Red5 Pro. Chris, first of all tell us what is Red5 Pro?

Chris Allen: We're focused primarily on ultra-low latency, real-time streaming at huge scale. We started the project back in 2005. We were the first to reverse-engineer RTMP, and we created an open source project to compete with Flash Media Server, or become an alternative to it. We saw the demise of Flash as an opportunity, and saw a real need in the market for real-time solutions that scale. It's turning out we were right, which is a good thing. We decided to go with more of a licensing model, switching our business from services around the open source project to a proprietary, licensed product.

Tim Siglin: Interesting. My background, as I think I mentioned to you earlier, was video conferencing. For us, latencies were 70 milliseconds or lower, with round trips of 250 milliseconds. When you're talking about ultra-low latency for streaming, what sort of latencies are you describing?

Chris Allen: We leverage WebRTC as the edge peer endpoint, so for delivery we can get anywhere from 200 milliseconds up to about 500, depending on the distance from the edge and other factors. But, yeah, we're always under the half-second range. The use cases for this are everything from these new live trivia apps, which are coming out and getting really popular. Obviously, the data needs to be as close to the live stream as possible. And we have other use cases, more like military-type IoT devices, which need to synchronize properly. Live auctions are a big one. And there's live broadcast, whenever you have any kind of transaction attached to it.

Tim Siglin: So, for you, it's not just about getting the latency low for watching breaking news. It's getting the latency low and having the interactivity synchronized as well.

Chris Allen: That's right.

Tim Siglin: What kinds of issues have you seen as Flash has gone away and given way to HTML5? We're slowly trying to get things to parity with the interactivity we had in Flash. Are there specific areas where you've found we still have pain points, where we're not quite back to where Flash was from an interactivity standpoint?

Chris Allen: One of the trickiest things, and one of the most annoying things about the WebRTC spec, is that the data channel is not guaranteed to be synchronized with the video. That causes all kinds of issues: if you need precision accuracy, you're not gonna get it. So you have to do tricks with the edge server, sending timestamps back over and then basically delaying that data channel.
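The timestamp trick Allen describes can be sketched as a small buffer that holds data-channel messages until the video playback clock catches up to each message's server-stamped time. This is an illustrative sketch only, not Red5 Pro's implementation; the class and field names are hypothetical.

```javascript
// Hypothetical sketch: delay RTCDataChannel messages until the video
// playback clock reaches each message's server-side timestamp, so data
// events line up with the frames they refer to.
class DataSyncBuffer {
  constructor() {
    this.queue = []; // messages waiting for the video clock to catch up
  }

  // Each message carries the stream timestamp (in ms) that the edge
  // server stamped on it when it was sent alongside the video.
  push(message) {
    this.queue.push(message);
    this.queue.sort((a, b) => a.timestamp - b.timestamp);
  }

  // Called periodically (e.g. on each timeupdate) with the current
  // playback position; returns every message whose time has arrived.
  drain(videoTimeMs) {
    const ready = [];
    while (this.queue.length && this.queue[0].timestamp <= videoTimeMs) {
      ready.push(this.queue.shift());
    }
    return ready;
  }
}

// In a browser you would wire it up roughly like this (sketch):
//   dataChannel.onmessage = (e) => buf.push(JSON.parse(e.data));
//   video.ontimeupdate = () =>
//     buf.drain(video.currentTime * 1000).forEach(handleMessage);
```

The key design point is that the data channel is never trusted to arrive in sync; everything is re-ordered and released against the video clock.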

Tim Siglin: It's like the old days of having to have your audio and video synced if you were within a venue, because the video would hit the screen before people would hear the audio.

Chris Allen: Right, exactly. I think Flash needed to go away, but there were a lot of really good things about it. In particular, it was reliable: you had a synchronized video and data stream. And it was simple TCP stuff, which is nice. The other thing with WebRTC is that we leverage UDP, which has its advantages, but you also deal with lost packets, retries, and all kinds of other stuff.

Tim Siglin: And you also deal with a lot of network architectures that aren't really friendly to UDP, precisely for the reasons you describe. We had Lisa Larson-Kelley earlier, who had been heavily involved in the Flash developer community and is now doing work with Facebook, and we were talking about a similar thing: as an industry, we had some really nice things. They were proprietary. We all want to go open source, but to get there we're having to go through these pain points again. Synchronization, as you say, and other things.

What do you all bring to the table that other people who are leveraging WebRTC don't bring to the table?

Chris Allen: We have a clustering model, which basically allows you to get really huge scale, and you can deploy it on Google Cloud or AWS. Most of our customers use AWS. It auto-scales for you, too, spinning instances up and down as load changes, so you can basically install it and not worry about it. For a lot of these event-driven things, like the trivia apps, that's a really good fit, because they can shut the whole rig down and then pull it back up right before they broadcast. We also have a version of this that works on hardware.
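The spin-up/spin-down behavior Allen mentions amounts to sizing the cluster from current load. A minimal sketch of one plausible scaling rule follows; the function name, per-node capacity, and utilization target are illustrative assumptions, not Red5 Pro's actual defaults.

```javascript
// Hypothetical autoscaling rule: size the edge cluster so that average
// per-node load sits at a target utilization, keeping at least one node
// warm so an event can spin back up quickly. Thresholds are illustrative.
function desiredNodeCount(totalConnections, perNodeCapacity = 1000, targetUtilization = 0.7) {
  // Effective connections each node should carry at the target utilization.
  const effectivePerNode = perNodeCapacity * targetUtilization;
  const needed = totalConnections / effectivePerNode;
  // Never scale to zero; round up so no node is pushed past the target.
  return Math.max(1, Math.ceil(needed));
}
```

An event-driven deployment like a trivia app would call this against live connection counts, scaling to the floor of one node between broadcasts and bursting up just before showtime.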

Tim Siglin: Okay, on-premise.

Chris Allen: Exactly. We're partnering with Limelight and we're deploying our WebRTC solution within their network.

Tim Siglin: In the video conferencing world we had point-to-point, which was the standard: Polycoms. And we had soft clients, which were sort of introduced into it. But we had the idea of an MCU, the multipoint control unit. Does WebRTC have something similar as you're starting to scale many-to-many, as opposed to one-to-many, to allow not just interactivity within the data channel, but even large-scale video conferencing, so to speak?

Chris Allen: We do have some of that functionality, being able to stitch videos together and things like that, with a transcoder node. It's a new thing we call Cauldron, and then you can build brews that work in it; that's kind of our fun thing with it. They're basically C++ programs that run as native code. We've got some experiments using OpenCV and face detection with it too, which one of our customers is using for masking out everything but the face. So there are a lot of interesting things with that. We also use it for splitting the stream into multiple bitrates. Then you can do ABR.
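Once a transcoder node splits the stream into multiple bitrates, the player's ABR logic picks a rendition from the resulting ladder. A minimal, hypothetical sketch of that selection step; the ladder values and headroom factor are illustrative, not anything from Red5 Pro.

```javascript
// Illustrative ABR rendition selection over a transcoded bitrate ladder.
// Ladder rungs are hypothetical examples, ordered highest to lowest.
const ladder = [
  { name: '1080p', bitrateKbps: 4500 },
  { name: '720p',  bitrateKbps: 2500 },
  { name: '360p',  bitrateKbps: 800 },
];

// Pick the highest rendition that fits within measured bandwidth,
// reserving headroom for jitter; fall back to the lowest rung.
function pickRendition(measuredKbps, headroom = 0.8) {
  const budget = measuredKbps * headroom;
  const fit = ladder.find(r => r.bitrateKbps <= budget);
  return fit || ladder[ladder.length - 1];
}
```

Real players add smoothing and buffer-based heuristics on top of this, but the core step is the same budget-versus-ladder comparison.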

Tim Siglin: ABR as well. All right. Chris, appreciate your time. We'll be right back with our next guest.
