Streaming Media West '15: Elemental, AWS, and the State of Cloud Transcoding
[This sponsored interview was conducted at Streaming Media West 2015.]
Tim Siglin: Welcome back to Almost Live here at Streaming Media West 2015. I have Aslam Khader from Elemental Technologies. Thank you for joining us today. Everybody knows that Elemental was acquired by Amazon, Amazon Web Services. Amazon had a Cloud play, Elemental had an on-prem play. There are hybrid plays. How are the combined efforts of both Elemental and AWS helping the industry work through what types of solutions are best for them?
Aslam Khader: That's a great question. As you probably know, Elemental introduced the Elemental Cloud solution about three and a half years ago, and it's been very successful. We're software that runs on appliances, in virtual machines, or in the Cloud, and we brought it to the Cloud three and a half years ago.
It's been growing rapidly. Just as we were agnostic about which infrastructure you ran our solutions on, whether appliances, virtual machines, or the Cloud, we wanted to be relatively agnostic within the Cloud as well. Our whole intent was that wherever infrastructure was available, we'd be able to run on any infrastructure that made sense.
What we did then was to create an abstraction layer, which meant we didn't utilize all of the extremely rich services that AWS provides. What we are doing now, more and more, is ensuring that we can leverage the richness of those AWS services to make the Elemental Cloud-based implementation a lot more efficient and a lot more reliable. We've already helped our customers move to the Cloud. We've got many customers now who are running completely in the Cloud, and a pretty large number that are running on-prem and bursting to the Cloud. We introduced a live-streaming-in-the-Cloud solution as well.
We have SDI encoders. You can take an SDI feed and push out HLS, RTMP, or any other format that we then take into the Cloud, where we do all the processing and distribute through CloudFront. You have a complete end-to-end solution. What we're trying to do more of is make the ingest of content into the Cloud a lot easier, make the processing of content in the Cloud much more reliable, performant, and robust, and make the delivery of that content much more flexible in the ways it can be consumed.
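The workflow Khader describes can be sketched as a simple ordered pipeline: an on-prem contribution encode feeding cloud transcode, packaging, and CDN delivery. This is an illustrative model only; the stage names are hypothetical and not Elemental's actual API.

```python
# Hypothetical sketch of the live workflow described above:
# on-prem SDI encoder -> contribution feed (RTMP/HLS) -> cloud transcode
# -> ABR packaging -> CDN (CloudFront). Stage names are illustrative only.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    location: str  # "on-prem" or "cloud"

def build_live_pipeline(contribution_protocol: str = "RTMP") -> list[Stage]:
    """Return the ordered stages of this hypothetical live OTT pipeline."""
    return [
        Stage(f"SDI ingest + contribution encode ({contribution_protocol})", "on-prem"),
        Stage("cloud ingest", "cloud"),
        Stage("transcode to ABR ladder", "cloud"),
        Stage("package (HLS/DASH)", "cloud"),
        Stage("distribute via CloudFront", "cloud"),
    ]

if __name__ == "__main__":
    for stage in build_live_pipeline():
        print(f"{stage.location:8s} {stage.name}")
```

The point of the model is the split: only the first stage lives with the venue; everything after the contribution feed runs in the Cloud.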
Tim Siglin: Let's take that to a practical example, something like a live sporting event. How would you help sort of eliminate the roadblocks for something like a live sporting event?
Aslam Khader: One of the biggest costs for these live sporting events is production. You take your big trucks out to an event and essentially do your production in the truck. From there, you could think about one stream going up to the satellite to be delivered to your primary-screen distribution facilities, and another stream being pushed into the Cloud, where we would take that stream, transcode it, create all the different bitrates, do all the different packaging, and then push it out through CloudFront into a whole OTT solution. You could have a complete OTT implementation of a live sporting event without having to do anything more than push a feed into the Cloud.
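The "create all the different bitrates" step is an adaptive-bitrate (ABR) ladder. A minimal sketch of turning such a ladder into an HLS master playlist follows; the rung values and playlist paths are illustrative assumptions, not Elemental presets.

```python
# Illustrative ABR ladder: (height, fps, video_kbps) per rung.
# These example values are assumptions, not Elemental presets.
LADDER = [
    (1080, 60, 6000),
    (720, 60, 3500),
    (720, 30, 2200),
    (480, 30, 1200),
    (360, 30, 700),
]

def master_playlist(ladder=LADDER) -> str:
    """Build a minimal HLS master playlist with one variant per rung."""
    lines = ["#EXTM3U"]
    for height, fps, kbps in ladder:
        width = height * 16 // 9  # assume 16:9 source
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={kbps * 1000},"
            f"RESOLUTION={width}x{height},FRAME-RATE={fps}"
        )
        lines.append(f"{height}p{fps}_{kbps}k/index.m3u8")
    return "\n".join(lines)

if __name__ == "__main__":
    print(master_playlist())
```

Each client then picks the rung its bandwidth supports, which is what makes one contribution feed serve every screen size.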
Tim Siglin: Let's talk about the difference in terms of investment versus risk for hardware encoding versus software encoding. Tell me about that.
Aslam Khader: Hardware encoders are very good at implementing technology that's gone through a maturation phase, because you take a mature algorithm and burn it into silicon, and that silicon can then perform the algorithm in the most efficient, most power-efficient, most performant way. That worked completely fine when you had 10-year innovation cycles driven by operator commercial models: the operators drove the monetization and the innovation, and they would monetize that silicon over 10 years.
What hardware is not very good at is changing rapidly. When the iPad was introduced six or seven years ago, there was a shift in power and in where innovation was taking place. From a video perspective, innovation shifted away from the operators. That completely changes the economics of a hardware-based solution: you no longer have a decade to mature, optimize, deploy, and get your return. A software-based solution doesn't suffer from that. It's much more flexible and can change rapidly. When the current DASH implementation changes, you change with it. You fine-tune your HEVC, and you continue to fine-tune it. You don't have to wait 36 months before that happens.
Tim Siglin: Part of this hardware-software discussion is hardware appliance, ASIC-based versus general purpose computing with software-based. Is that what I'm understanding?
Aslam Khader: Right. With Moore's Law, general-purpose computing has reached a point where it's powerful enough that you can string together off-the-shelf silicon and create powerful systems that do things in real time that were not possible before. As an example, we introduced a UHD solution, and we did it in software. For the initial solution we bonded together multiple CPUs and GPUs. Within six months we had improved it, and now with two CPUs and two GPUs you actually get the same solution.
Tim Siglin: Let's talk about two other parts of that. One would be the futuristic virtual reality; the other would be the closer-to-home 1080p60 and HDR. I hear a lot around the show floor and in conversations that people are saying, okay, it's interesting to talk about 4K, but in reality we think we're going to get lift from HDR and 1080p60. That's the near-term solution, and then, like I said, there's the far-term virtual reality. How is Elemental addressing each of those?
Aslam Khader: We believe 1080p60 is a fine solution. We actually demonstrated 1080p60 and 1080p120 using our software encoders, just to show that you can continue to go to higher frame rates and what the benefits of higher frame rates are. As for HDR, the industry is going through a period of change here; we're not settled on a particular HDR technology, and there are four or five relatively viable options out there. We've been working with Dolby Vision for the last three years, and with Technicolor for the last 12 months or so, as well as with some of the BBC and Philips implementations. We've demonstrated all of those along with HDR10.
We're at the forward edge of the innovation, so people can do their proofs-of-concept and understand what works and what doesn't, but I think the industry needs to reach a point where we can all get behind one standard, hopefully one that's backward-compatible, so that you don't have to double your cost of processing and distribution. HDR is an extremely good technology. We believe next-generation video experiences are about more pixels, better pixels, faster pixels, and cheaper pixels, plus pixels that sound really good. 4K is good; 8K maybe at some point. But by itself, resolution doesn't make the whole experience. The whole experience is 4K with better pixels, which is HDR, which is 4:2:2, which is 10-bit and 12-bit, et cetera. Faster pixels--
Tim Siglin: And a high frame rate.
Aslam Khader: Faster pixels are high frame rates, and then cheaper pixels come from HEVC, essentially, because the amount of information is increasing and you've got to squish it down more: you need HEVC. Then you add object-based audio to it, and that, to us, is the next-generation video experience, which we are working on and leading the charge on. Then if you look at the other kinds of experiences beginning to happen, you mentioned the virtual reality work that's going on. We've been working with a number of virtual-reality companies that I can't talk about right now, because they're rolling out some really innovative new technologies. By the way, we could do this with our software-based encoding right off the bat, because we don't have a hardware chip that says you can only do this many pixels by that many pixels. The video resolutions are all unusual, but our software solution can handle them completely. We've been working with these companies to demonstrate it, and at IBC we demonstrated a couple of different virtual reality video technologies.
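Some back-of-the-envelope arithmetic shows why "cheaper pixels" become necessary once the other three multiply together. The figures below are illustrative assumptions (uncompressed video, 3 samples per pixel), not measured results.

```python
# Rough, illustrative arithmetic: raw bandwidth of a next-generation
# picture versus plain HD. All figures are assumptions for illustration.

def raw_gbps(width, height, fps, bits_per_sample, samples_per_pixel=3):
    """Uncompressed video bandwidth in Gbit/s."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

hd = raw_gbps(1920, 1080, 30, 8)             # 1080p30, 8-bit
uhd_hfr_hdr = raw_gbps(3840, 2160, 60, 10)   # 4K, 60 fps, 10-bit

print(f"raw 1080p30 8-bit : {hd:.1f} Gbit/s")   # ~1.5 Gbit/s
print(f"raw 2160p60 10-bit: {uhd_hfr_hdr:.1f} Gbit/s")  # ~14.9 Gbit/s

# More pixels x faster pixels x better pixels is roughly 10x the raw data
# of HD, which is why a more efficient codec (HEVC) is needed to squeeze
# it back down to distributable bitrates.
```

The ratio, not the absolute numbers, is the argument: each axis of improvement multiplies the data rate, so codec efficiency has to improve alongside.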
We think that's a really good combination with all these other professionally produced experiences. Think about it this way: if you take a seat at the front of an awards show, or in a box at a sporting event, and plunk down one of these cameras, you can have millions of people around the world experience that event as if they were sitting in that seat. Those are the kinds of experiences that are going to drive adoption, because the technology and the horsepower are now becoming available to enable them. From our perspective, our software-defined video story plays in perfectly, because it gives us the flexibility to do all of this in the context of our current software solution, whether it's running on the ground or in the Cloud.
Tim Siglin: Very good. We’ve been speaking with Aslam Khader from Elemental.
This article is Sponsored Content