System73's Daniel Perea Strom Talks Improving Content Delivery

Tim Siglin, Founding Executive Director, Help Me Stream Research Foundation, and Contributing Editor, Streaming Media, sits down with Daniel Perea Strom, Chief Technology Officer, System73, to discuss improving content delivery in this exclusive interview from Streaming Media East 2023.

“You were on a number of panels yesterday talking about some topics that were of interest to me,” Siglin says to Strom. “One of them specifically was this whole conversation around being able to deliver parts of chunks before the whole chunk is done encoding.”

Siglin asks Strom to discuss that topic further. “Obviously, as we've moved forward in the industry, we're now saying how do we lower that latency so that we can actually still deliver it with the HTTP way of delivery but keep it lower. So describe to me a little bit about what you were talking about yesterday and how you can bring those latencies down but still keep it stable by not dropping below certain levels that people have tried in the past, which seem to make it unstable.”

“When we are building streaming workflows, very seldom do we go to a single provider that does everything,” Strom says. “So usually, this is built in a piecemeal fashion integrating different providers. But whenever you want to reach low latency, any step of the delivery chain is really important because any extra delay introduced in the delivery chain is going to [inhibit] your goals of low latency. But then of course comes the conversation of, ‘Who is the provider that is going to be able to [meet] this need?’”

Strom says that going with a single provider for an end-to-end workflow often involves proprietary or state-of-the-art tech that enables a lower latency bound. In those cases, however, relying on one provider also risks sacrificing a bare minimum of resiliency. He emphasizes the importance of a multi-CDN approach to ensure robustness in case of service interruptions.

“Right,” Siglin says. “And I think the industry has risen to the point where we say we need multiple CDNs, whether it's for resiliency or load balancing. So therefore, no single-provider solution.”

Strom says that closely exploring ways to achieve low latency in your delivery chain can become a driving factor in discovering bottlenecks even across different providers. “So for example, let's say that we are talking about common scenarios in HTTP-based streaming, either HLS or DASH, where you have video segments of six seconds, right? And originally everybody was working with ten, but nowadays the standardization is six.”

“But you still need at least three of those typically, so you've got that delay,” Siglin says.
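
For context, those numbers alone imply a substantial latency floor before encode, origin, or CDN overhead is even counted. A quick back-of-the-envelope sketch, assuming the three-segment player buffer Siglin mentions:

```typescript
// Back-of-the-envelope latency floor using the numbers from the conversation:
// 6-second segments and a player that holds roughly three of them before
// starting playback, before any encode, origin, or CDN overhead is added.
const segmentDurationSec = 6;
const segmentsBuffered = 3;
const latencyFloorSec = segmentDurationSec * segmentsBuffered;
console.log(`~${latencyFloorSec}s behind live before other delays`); // ~18s
```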

Strom notes that was just a recommendation based on original standardization efforts. “But technologies have evolved to overcome that frontier,” he says. “And whenever you have all the pieces of your delivery chain able to break that frontier inside the segment, then many things become possible. So whenever you have a video segment of six seconds, but you are able to announce to your audience through the manifest that that video segment is going to be available, because you're on a livestream and one video segment is going to be available after the other…at least you can say, ‘Hey guys, there is a new segment coming, please start asking for it even before it has started to be encoded.’” He mentions that while System73 is not yet a provider that can achieve this, they have done proofs of concept around it. “And whenever you have that,” he says, “you have the encoder announcing to the origin, and then to the CDN, and then to the players, ‘There is a new segment coming, please ask for it.’ The video players know that there is a new segment coming and say, ‘Hey, CDN, please send it to me, even before it is available.’”
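
A rough sketch of what that manifest-level announcement can look like is shown below, modeled loosely on Low-Latency HLS preload hints. The tag usage follows that spec, but the segment names and durations are illustrative, and this approximates the approach Strom describes rather than System73's own implementation.

```typescript
// A sketch (illustrative names and durations) of a live media playlist that
// announces the next part before it has finished encoding, in the style of
// Low-Latency HLS preload hints.
function buildLivePlaylist(mediaSequence: number, completedSegments: string[]): string {
  const lines: string[] = [
    "#EXTM3U",
    "#EXT-X-VERSION:9",
    "#EXT-X-TARGETDURATION:6",
    "#EXT-X-PART-INF:PART-TARGET=1.0",
    `#EXT-X-MEDIA-SEQUENCE:${mediaSequence}`,
  ];
  for (const segment of completedSegments) {
    lines.push("#EXTINF:6.0,", segment); // segments already fully encoded
  }
  // Advertise the upcoming, still-being-encoded part so players can request it
  // right away; the origin holds the request and streams bytes as they arrive.
  lines.push('#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment1001.part0.mp4"');
  return lines.join("\n") + "\n";
}

console.log(buildLivePlaylist(998, ["segment0998.mp4", "segment0999.mp4", "segment1000.mp4"]));
```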

Siglin asks Strom to clarify if he means that the players are asking for the segment before it even exists.

“Right,” Strom says. “And what happens is that the CDN, if it's configured correctly, will perceive that announcement at the same time for a lot of people if you're going to a large event…but even in a small event, you have concurrency of that announcement and concurrency of that request.”

“Which is where the coalescing comes in,” Siglin says.

“Exactly,” Strom says. “So you have the CDN in place saying, ‘I'm not going to send all of those requests upstream as cache misses. I'm going to say cache hit, and please wait, and I'm going to send a request back to the origin.’ The origin is also configured with request collapsing on its cache, and the most important part is that the HTTP server that is in front of the encoder should also be able to do request collapsing and deliver that segment with traditional protocols in chunked transfer encoding mode, which essentially delivers the chunk as it is being encoded by the encoder, with an open connection towards the origin shield. Then all of those connections open to the CDN POPs, and all of those POPs open connections to the players, all of them assuming that it's an HTTP request that is ongoing, and the pace of delivery is just driven by the encoding pace. And this is not necessarily related to low latency, but the same principle that we apply to achieve large scalability for the audience that we want to reach is applying multicast on the application layer while staying compatible with traditional HTTP unicast. So whenever you are able to pull that off, very interestingly enough, you get not only large scalability for broadcasting events live…”
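
The collapsing-and-fan-out behavior Strom outlines can be sketched as a toy Node.js origin shield. The ENCODER_ORIGIN host, port, and data structures below are assumptions for illustration, not how any particular CDN or System73 implements it: the first request for a segment opens a single upstream connection, and every concurrent request for the same URL receives the same bytes as they arrive, so delivery is paced by the encoder rather than by per-client cache misses.

```typescript
import * as http from "http";

const ENCODER_ORIGIN = "http://encoder.internal:8080"; // assumed host/port, for illustration only

interface InFlight {
  chunks: Buffer[];                   // bytes received from the encoder so far
  done: boolean;                      // upstream response finished?
  subscribers: http.ServerResponse[]; // clients currently waiting on this segment
}
const inflight = new Map<string, InFlight>();

// Start exactly one upstream fetch per segment and fan its bytes out to every
// subscriber as they arrive (chunked transfer, paced by the encoder).
function fetchUpstream(path: string, entry: InFlight): void {
  http.get(ENCODER_ORIGIN + path, (upstream) => {
    upstream.on("data", (chunk: Buffer) => {
      entry.chunks.push(chunk);
      for (const res of entry.subscribers) res.write(chunk);
    });
    upstream.on("end", () => {
      entry.done = true;
      for (const res of entry.subscribers) res.end();
      inflight.delete(path);
    });
  });
}

http.createServer((req, res) => {
  const path = req.url ?? "/";
  let entry = inflight.get(path);
  if (!entry) {
    // Cache miss: collapse all concurrent requests into this one upstream fetch.
    entry = { chunks: [], done: false, subscribers: [] };
    inflight.set(path, entry);
    fetchUpstream(path, entry);
  }
  // Replay whatever has already arrived, then join the live fan-out.
  res.writeHead(200, { "Content-Type": "video/mp4" });
  for (const chunk of entry.chunks) res.write(chunk);
  if (entry.done) res.end();
  else entry.subscribers.push(res);
}).listen(8081);
```

In production this logic lives inside CDN and origin caching layers as request collapsing or cache locking, but the sketch shows why many concurrent viewers reduce to a single encoder-paced stream.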

“Your latency drops,” Siglin says, “because there are a lot fewer things going on in the system…”

“Exactly!” Strom says.

Siglin asks, “So when you do the request coalescing and it's aggregating all those requests and saying, ‘Just a second, I'll deliver it to you,’ from the encoder standpoint, are we getting to almost a pseudo-multicast model, because it's a whole bunch of unicast requests that get pushed down into one?”

“Exactly,” Strom says. “That's the holy grail from my perspective…for example, the discussion that we had yesterday about ‘multicast is dead, long live multicast,’ it was very funny for me because multicast, from a first-principles perspective, is just a way of relaying content on specific parts of [the network].”

“Replicating it,” Siglin says.

“Exactly, that's the basic idea,” Strom says. “Now, traditionally multicast means, mentally, ‘Oh, I'm going to do this on the IP layer,’ but it shouldn't be the case. Actually, what we do at System73 is replicate that multicast on the application layer.”

Siglin says, “So the coalescing essentially takes a lot of the computation out of, ‘I've got to schedule all these requests out as they go.’ So here's the other question on that. Because I've been in the industry for 25 years now, there were little things that people tried to do where they would tweak segment sizes down below a second, and of course you'd run into a number of issues with that. And then there was the whole TCP windowing issue about getting it too low down there. The other panelist, from CDN77, and I were talking about [why] they stick with traditional HLS and DASH segment lengths rather than necessarily using, say, Apple's low-latency HLS: they can't necessarily reach those legacy devices out there. They either have to build everything to low-latency HLS, which may leave out some of the legacy devices, or build everything to standard HTTP segment sizes, and then you have the problem of that longer latency…But with what you're suggesting, essentially because those requests are coming in and being coalesced, ‘I can send a portion of the segment out.’ And then the question is, when you do that on an intermittent network, especially a mobile device, what happens if you miss those deltas? Because, you know…with low-latency HLS, [you’re] taking a segment and splitting it into parts, and then what are the deltas in between them? That's great if you're on a stable network, but if you're on an intermittent network, you may miss a couple of those. And then the question is, do you have to wait until the entire length of the segment to pick back up?”

“That's a very good question,” Strom says. “And actually, in my humble opinion, it depends a lot on the kind of content and the kind of expectation [of the] audience in terms of their experience. For example, if we're talking about traditional broadcasting for a large audience, that would be a suitable use case for using HTTP-based content requests, and usually those customers would not like to miss a piece of the content because of an unstable network. In traditional streaming, you may have a rebuffering event, and of course that means that you are not receiving content for a short while. Now the decision that content providers need to make is, what do I want the experience to be? Do I accept that, maybe, if that rebuffering event happened in the middle of a scoring event at a sports venue, I'm willing to not show that goal to my end users? Or do I prefer to accept that there's going to be a rebuffering event and then continue the playback from that point? And there are measures to, first of all, of course, minimize that kind of event, but also, if it happens, [to decide] what you want to do afterwards…in my humble opinion, with HTTP-based streaming, you are able to decide what you want to do.”

“So you can add your business logic as you choose,” Siglin says.

“Exactly,” Strom says. “You can configure the player to decide, ‘Hey, I want to wait until the content comes by and continue the play’…of course with an increased glass-to-glass delay, but you can speed up slightly and then catch up again. Or just accept that the connection is unstable and increase the glass-to-glass delay. And you don't have to make it for all audiences at the same time…you can accept that for only the people that are being impacted…and you can even choose if you want to enable the catch-up by default for all your audience, or just disable it by default, or just leave the option to the end user to choose what their experience will be. And on the other hand, [maybe you don't] want to keep the content, because you want to have everybody synchronized all the time. You can configure the player to say, ‘Okay, if I'm behind more than, say, one second, let's [drop] content and jump again.’”
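
That player-side choice can be sketched roughly as follows, assuming a plain HTML video element; the one-second and jump thresholds are illustrative assumptions, not System73 defaults.

```typescript
// Rough sketch of the catch-up policy described above, for a plain HTML
// video element. Thresholds are illustrative assumptions.
function applyCatchUpPolicy(video: HTMLVideoElement): void {
  if (video.buffered.length === 0) return;
  const liveEdge = video.buffered.end(video.buffered.length - 1);
  const behind = liveEdge - video.currentTime; // drift from the freshest buffered media, in seconds

  const JUMP_THRESHOLD = 4.0;    // beyond this, drop content to stay synchronized
  const CATCHUP_THRESHOLD = 1.0; // beyond this, play slightly faster to catch up

  if (behind > JUMP_THRESHOLD) {
    video.currentTime = liveEdge - 0.5; // jump forward, accepting the skipped content
    video.playbackRate = 1.0;
  } else if (behind > CATCHUP_THRESHOLD) {
    video.playbackRate = 1.05;          // keep the content, shrink the delay gradually
  } else {
    video.playbackRate = 1.0;
  }
}

// Re-evaluate once per second, for example:
// setInterval(() => applyCatchUpPolicy(document.querySelector("video")!), 1000);
```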

Learn more about improving content delivery at Streaming Media Connect 2023.
