Optimizing Streaming Tech to Meet Latency Requirements
Learn more about streaming latency at Streaming Media East 2022.
Read the complete transcript of this clip:
Jason Thibeault: Talking about the different types of video and experiences that you deliver to your end users, from VOD to auctions to fantasy auctions--do those impact the way that you guys select technology as you're looking at your workflows and trying to optimize, let's say, to make sure that your auctions are real time? What is the thought process you guys go through in terms of optimizing and selecting technology for those individual use cases to meet your latency requirements?
Darcy Lorincz: Well, it's actually pretty simple. We make everything go to the lowest latency, and then we start scaling back because, obviously, less latency equals more cost. So then we look at the cost and say, does that workflow need to have that millisecond? Or can it tolerate something else? We've actually started to look at machine learning for some of those workflows to understand what cameras and what networks and what resources are in play for certain types of, whatever it is. We now have AR and VR experiences that are being incorporated. So those are bringing a whole new challenge. Web AR, especially--it's overlaying things that don't work in this space from somewhere out in the web. So we just say, let's just see if everything can fit into the lowest-latency scenario and then start taking it out from there.
Some of it's purposeful because it doesn't need it, and doesn't need the higher cost or that performance. And some of it just doesn't ever meet the requirements, and it has to drop off the table. Sometimes we just drop it. We move around, so there are locations that we go to, some that are awesome with their connectivity, and some that aren't. That's the first and last mile again. So that's the outbound.
On-site, we're self-contained. We test everything. But when you get on-site, it's usually all bets are off. So you've got to look at those things, and that's where the machine learning comes in when we're setting up. It says, "This didn't meet your rule for that latency, so what are you going to do about it?" Sometimes you can do something about it, and sometimes you just have to drop it to another level of latency. It's getting better, because machines are there and they can help you. Then you don't need a whole bunch of people running around trying to plug things in for different scenarios. It's really common infrastructure. Find the lowest common denominator, and that's where we kind of set our baseline.
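Lorincz's process--start everything at the lowest latency, then scale workflows back to cheaper tiers based on what each one can tolerate, and drop what can't be met--can be sketched as a simple tier-selection rule. This is an illustrative sketch only, not the speakers' actual system; the tier names, latency figures, and costs below are all hypothetical.

```python
# Hypothetical delivery tiers, ordered lowest latency (most expensive) first.
# Latencies are in milliseconds; costs are arbitrary units.
TIERS = [
    {"name": "webrtc", "latency_ms": 500, "cost": 10.0},
    {"name": "ll-hls", "latency_ms": 3000, "cost": 4.0},
    {"name": "hls", "latency_ms": 15000, "cost": 1.0},
]

def assign_tier(max_latency_ms, budget):
    """Pick a tier for one workflow: start at the lowest latency and
    scale back toward cheaper tiers until both the workflow's latency
    requirement and its cost budget are satisfied."""
    for tier in TIERS:
        if tier["latency_ms"] <= max_latency_ms and tier["cost"] <= budget:
            return tier["name"]
    # No tier meets the requirement within budget: the workflow
    # "drops off the table", as described above.
    return None

# A real-time auction feed with a generous budget gets the lowest tier;
# a casual replay feed with a tight budget scales back.
print(assign_tier(1000, 12.0))   # low-latency tier fits requirement and budget
print(assign_tier(5000, 5.0))    # scaled back to a cheaper mid tier
print(assign_tier(1000, 5.0))    # nothing affordable meets the requirement
```

The key design point is the ordering: because the list is sorted by latency, the first tier that fits both constraints is automatically the lowest-latency option the budget allows.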