Video: What is the Best Way to Move Streams Across Unmanaged Networks?
Learn more at Streaming Media East.
Read the complete transcript of this clip:
Mahmoud Al-Daccak: Environment dictates technology, not the other way around, and the use case dictates technology, not the other way around. So in all our products we use the usual DASH, HLS, and all these delivery mechanisms. At the same time, the transport stream remains for streaming what SDI is for broadcasters' distribution. We decided that we needed to move it reliably across unmanaged networks. We had SRT (Secure Reliable Transport), which we use to protect the streams from packet loss and poor network conditions, and also to secure them end to end.
That's the use case where you go from the originating point to the destination over SRT, potentially as a replacement for an unreliable transport stream over UDP, or for RTMP, which suffers from possibly not using the pipe efficiently, some added latency, and the lack of support for multitrack audio, for example.
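To make the contribution-over-SRT use case concrete, here is a minimal sketch of pushing an MPEG transport stream over SRT with ffmpeg. This is not a specific product configuration: the host, port, passphrase, and latency value are illustrative placeholders, and it assumes an ffmpeg build compiled with libsrt.

```shell
# Hypothetical SRT contribution sketch; all names and values are placeholders.
SRT_HOST="receiver.example.com"   # destination endpoint (placeholder)
SRT_PORT=9000
# passphrase turns on SRT's AES encryption end to end;
# latency (ms) sizes the receive buffer used to recover lost packets
SRT_URL="srt://${SRT_HOST}:${SRT_PORT}?mode=caller&passphrase=changeme&latency=200"
# Print the command rather than executing it, since it needs a live
# transport-stream input and a libsrt-enabled ffmpeg build:
echo ffmpeg -re -i input.ts -c copy -f mpegts "${SRT_URL}"
```

The `-c copy` keeps the transport stream's audio and video intact, so multitrack audio survives the hop, which is one of the gaps with RTMP noted above.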
Again, I go back to the point that it's really the use case that dictates what you're going to wrap your streams in. On the same theme, I'd like to circle back, if I may, a little bit to CPUs, GPUs, and all that. Again, I think environment dictates technology. I think we're fortunate enough to be playing with devices, whether portable devices that need to fit in constrained spaces or servers fitting in data centers, and for that, really, we look at technology in terms of what it can enable for us. So we use ASICs where we have to use ASICs. Whatever the technology is, it doesn't matter.
However, at the same time, we use FPGAs where we need to use FPGAs, whether in a portable device or in data centers. We're fairly advanced in using FPGAs in those instances, and they're giving us great advantages. I think that where density matters, they may well trump CPUs. However, in certain boxes we do leverage the CPU and the GPU as appropriate. In one box, actually, we leverage both, an ASIC/CPU base and a GPU, just to get 4Kp60 for H.264 and HEVC.
I fully value the Vanguard codec, for sure. When we started, we tested five different codecs on CPUs, and we wanted a codec that could serve everything from 300 kilobits per second for 720p, believe it or not, up to 50 megabits per second for HD or 4K. For that, we went with x265, and we worked very, very closely with its developers to exploit the codec to the extent possible for our purposes.
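The two endpoints of that bitrate range can be sketched as ffmpeg/libx265 invocations. This is a hedged illustration only: the filenames, resolutions, and rate-control flags are assumptions for the example, not the vendor's actual configuration.

```shell
# Hypothetical endpoints of the 300 kbit/s to 50 Mbit/s range; placeholders throughout.
LOW_BITRATE=300k    # low end: 720p contribution at 300 kbit/s
HIGH_BITRATE=50M    # high end: HD/4K at 50 Mbit/s
# Print the commands rather than executing them, since they need a real source file:
echo ffmpeg -i src.mov -c:v libx265 -b:v "$LOW_BITRATE" -s 1280x720 low_720p.mp4
echo ffmpeg -i src.mov -c:v libx265 -b:v "$HIGH_BITRATE" -s 3840x2160 high_4k.mp4
```

The point of the example is the roughly 167x spread between the two targets, which is what makes serving both with a single codec implementation notable.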
Again, I go back to the point that it's not really about the technology; it's about how I can exploit whatever is available to me, whether a codec from one vendor or hardware from another, to serve my use cases, which are sometimes unique.
NASA's Lee Erickson explains the value of ingesting a clean signal into an encoder to reduce latency after output.
Streaming Video Alliance's Jason Thibeault and Limelight's Charley Thomas address the question of whether WebRTC provides a viable solution for network latency issues in this panel from Live Streaming Summit.
Amazon's Keith Wymbs and Jim De Lorenzo discuss how they've met the challenges of improving latency and time-to-first-byte to serve the millions of viewers who are tuning in to Amazon's Thursday night NFL broadcasts in this keynote from Streaming Media West 2017.
Wowza Senior Product Manager Jamie Sherry discusses key latency considerations and ways to address them at every stage in the content creation and delivery workflow.
Wowza's Mike Talvensaari confronts the myth that low latency for large-scale streaming is always worth the expense, and discusses various applications and use cases along a continuum of latency requirements for effective delivery.