How to Make Multi-Edge Deployments Work
What are some of the tech challenges for making multi-edge deployments work? Mark de Jong, Chairman, CDN Alliance, asks Sean McCarthy, Principal Product Manager, Technical, Paramount, “What is holding us back? What needs to be done to make sure that we have multi-edge in a working mode?”
“The Terraform deployment, which is I think pretty standard across the board,” McCarthy says. “Multi-language support…it really also depends on what your application is. So in the case of our multi-CDN strategy, we're able to build logic into either the client, the manifest itself, or DNS to basically load balance the CDNs.” He emphasizes that several unknowns may need to be resolved before multi-edge deployments can even be orchestrated. “So if we have a service that's called before stream starts or even midstream, how do you know which edge to go to? How do you know if one edge is performing better, more cost-effectively, or is down altogether? That's a problem I think we're going to need to solve before we do these multi-edge deployments.”
Pankaj Chaudhari, Architect - Video Delivery, Disney Streaming, cites several factors. “Monitoring and troubleshooting and taking the logs,” he says. “I think there has to be commonality around that.” He notes some of the issues that arise when depending on infrastructure you don't control. “Because we don't operate that infrastructure, [as a client] you are using someone else's infrastructure. So you would expect that infrastructure to be available to you. And if some node or some region is unavailable for capacity reasons…then that should be temporarily unavailable to you. [From an] ‘optics’ point of view [regarding] load and capacity, the more oblivious we are to it, probably, the better. But I'm sure there will always be edge cases. Like in our case, we do want to understand network capacity, so we can be a little bit more dynamic with respect to how we interact with the CDN because our players are a little bit more intelligent.”
Chaudhari highlights that there is still a long way to go before seamless multi-edge deployments are widely achievable. “I'd imagine that some point down the road, I'd probably make a statement that we ought to know how much of compute capacity you have as well too, or how much of storage capacity you have, like edge databases,” he says. “We are certainly not there yet. But maybe [there] might be one case where some more visibility into how that edge is operating…might become relevant as more and more of the edge starts to get used for these use cases.”
De Jong says, “Let’s go back to CDNs in general. They're always referred [to as] black boxes, right? We want to have a little bit more transparency on the black boxes on edge as well, just to have a little bit more understanding…especially because there are more variables in edge than just with CDN, [which are] typically capacity or storage and that's it. But there [are] much more variables with edge in that regard.”
Chaudhari says, “Invariably sometime in the future the customer is going to ask a lot of questions and you have to open up things so that…a mutual understanding is shared between the provider and the client, so better use of infrastructure can happen.”
McCarthy agrees and says, “Managing the edge infrastructure, it's kind of a blessing and a curse in that regard. The visibility, and just the fact that today we don't even have eyes on it, it’s good and bad, because it's not like a lot of apps that we're deploying in our own Kubernetes infrastructure. We have full DevOps eyes on everything on the infrastructure. The blessing is we don't need to worry about that…we know that our edge compute provider's just going to scale with load and it works and it hasn't broken yet and that's great, but the day that it does break…or these compute instances are overloaded, we're going to want visibility.”
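The visibility McCarthy wants when an edge does break could start with something as simple as synthetic probes. The sketch below is a hypothetical illustration of that idea, not anything either company described: the thresholds, sample data, and classification labels are all assumptions.

```python
# Hypothetical edge-visibility sketch: classify an edge instance from
# recent synthetic-probe results. Thresholds and labels are assumptions.
def classify_edge(samples_ms: list[float], errors: int,
                  latency_slo_ms: float = 250.0,
                  max_error_rate: float = 0.05) -> str:
    """Classify one edge instance as "down", "overloaded", or "healthy".

    samples_ms: round-trip times of successful probes (milliseconds).
    errors:     count of probes that failed outright.
    """
    total = len(samples_ms) + errors
    if total == 0 or not samples_ms:
        return "down"                       # nothing succeeded
    if errors / total > max_error_rate:
        return "down"                       # failing too often
    # Approximate p95 of the successful samples.
    p95 = sorted(samples_ms)[max(0, int(0.95 * len(samples_ms)) - 1)]
    if p95 > latency_slo_ms:
        return "overloaded"                 # responding, but too slowly
    return "healthy"
```

A monitor built on this could demote an "overloaded" or "down" edge in the same client/manifest/DNS steering logic described earlier, which is where the blessing stops requiring blind trust.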
Ultimately, McCarthy says, “We have to find that [happy] medium for how much visibility and control we really want over that infrastructure. [With] the few deployments we have today, I think it's just been a blessing that we don't have to worry about that stuff. It's one less thing for our DevOps to worry about. That could change in the future.”
Learn more about multi-edge deployments at Streaming Media East 2023.