Video: Where is the Edge in Content Delivery?
Read the complete transcript of this clip:
Marcus Bergstrom: Let's tackle the million-dollar question (or probably the billion-dollar question): Where is the edge? I define the edge by latency. Just because you are in a central office doesn't mean you're at the edge. It depends completely on how you're connected, where you're processing, and how the device is connected into the network.
If it's connected to a fixed network, and you're in a central office somewhere and need to backhaul all the way down into the wireless network, are you at the edge? No, I would say you are not.
For me, I define the edge by latency, and if you think about what we talked about earlier, the latency requirements of some of these applications, I think it's just a fact that if you're going to support a real-time IoT application, or any type of application that's extremely latency-sensitive or time-sensitive, you need to be deployed at the aggregation point of the access network, whether it's a fixed or a mobile operator.
If you look at some of these numbers, of course, it differs if you are in metropolitan Frankfurt versus rural India. But as a baseline, to deploy something at the edge, you need to be within 10-20 milliseconds of round-trip latency.
And if you think about this in just a downstream scenario, or an interactive scenario that will require processing at some point, then the latency requirements are amplified, because every time you have an interaction between your server and the end device, you incur a round trip, right?
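The round-trip amplification described here can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration; the function name and all numbers are assumptions for the example, not figures from the talk:

```python
# Illustrative sketch: each server/device interaction costs one round trip,
# so total latency grows linearly with the number of interactions.
# All values below are hypothetical examples, not measurements.

def total_latency_ms(rtt_ms: float, interactions: int, processing_ms: float = 0.0) -> float:
    """Total latency when every interaction incurs one full round trip."""
    return interactions * rtt_ms + processing_ms

# Same application, same number of interactions, different deployment points:
edge = total_latency_ms(rtt_ms=15, interactions=4, processing_ms=10)  # edge baseline
core = total_latency_ms(rtt_ms=80, interactions=4, processing_ms=10)  # distant data center

print(f"edge-deployed: {edge} ms, centrally deployed: {core} ms")
```

The point of the sketch is that the gap between the two deployment points is multiplied by the interaction count, which is why request-heavy, latency-sensitive applications push processing toward the access-network aggregation point.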
Think of a self-driving car scenario, right, or an AR scenario. I think Pokémon Go was the first application that really showed us the vulnerability of an access network, or a mobile network, when it comes to request-heavy applications that require low latency.