Using SRT to Enhance Remote Production Quality
Learn more about SRT and remote production at Streaming Media East Connect 2021.
Read the complete transcript of this clip:
George Herbert: What if I said there was a better way to get rid of the last barrier that videoconferencing gives us--limited control over the quality of the video and audio that we have? I think there is a better way, and it's exactly what we've been doing here at Epiphan for our own internal content creation for just about a year now. I'd like to introduce and propose using SRT for this type of higher-quality transport.
For those not familiar, SRT stands for Secure Reliable Transport. It's a protocol created by Haivision and designed to be low-latency, network- and firewall-friendly, and, most of all, secure, because it offers encryption. It opens up the possibility of using the everyday internet and lower-cost products--with a much easier cost of entry--to replace the older, traditional satellite truck.
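In practice, options like encryption and latency are usually passed as query parameters on an `srt://` URL. The helper below is a hypothetical sketch of assembling such a URL, using parameter names from the common SRT URL conventions; note that units vary by tool (srt-live-transmit reads `latency` in milliseconds, while ffmpeg's SRT option expects microseconds), so check your tool's documentation:

```python
# Hypothetical helper: build an srt:// URL with common query parameters.
# Parameter names follow widespread SRT URL conventions; not tied to any
# specific vendor's product.
from urllib.parse import urlencode

def srt_url(host, port, passphrase=None, latency_ms=None, mode="caller"):
    """Assemble an srt:// URL with optional encryption and latency settings."""
    params = {"mode": mode}
    if passphrase:
        params["passphrase"] = passphrase  # enables AES encryption
        params["pbkeylen"] = 16            # key length 16/24/32 -> AES-128/192/256
    if latency_ms is not None:
        # Milliseconds here, matching srt-live-transmit; some tools
        # (e.g. ffmpeg) interpret this option in microseconds instead.
        params["latency"] = latency_ms
    return f"srt://{host}:{port}?{urlencode(params)}"

print(srt_url("hub.example.com", 9000, passphrase="s3cret", latency_ms=200))
```

Hostname and passphrase above are placeholders, not real endpoints.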
So, instead of spending millions on satellite trucks, what if we could do it with things we might already have and achieve, in many cases, better results? We can take that same philosophy and apply it to videoconferencing as well.
One of the things that makes SRT really nice is that, unlike some of the more traditional streaming protocols like RTMP--where you just kind of throw it at the wall and hope it sticks--SRT is much more advanced in the features it offers. It does have the traditional video and audio stream going from point A to point B, but it also has a secondary data channel--a back channel carrying diagnostic and packet-recovery information--that allows us to gather a lot of stats about what's happening. We can look at the stats and see our round-trip time.
We can look at our packet loss percentage, and we can make decisions based on those numbers to gauge the quality of our connection and adjust it as necessary. This gives us a lot of power, and it means we can bring that latency way down compared to other transport methods. For traditional broadcast television, we're used to 5-8 seconds of latency; other stream types are in that same range, and some are much longer. On a strong connection, it's possible to have SRT down below 100 milliseconds; even on weaker connections, it's very possible to have it under 1 second. These are things we can easily achieve with the right technologies.
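As a rough illustration of that decision-making: a widely cited rule of thumb in SRT deployment guidance is to set the receiver latency to a multiple of the measured round-trip time (often around 4x), with extra headroom on lossy links. The sketch below is a minimal illustration of that heuristic, not part of the protocol itself; the multiplier and floor values are assumptions:

```python
def recommended_latency_ms(rtt_ms, loss_pct, multiplier=4, floor_ms=120):
    """Suggest an SRT latency from measured round-trip time and packet loss.

    multiplier: how many RTTs of headroom to allow for retransmissions
    (4x is a commonly cited starting point, not a fixed standard).
    floor_ms: never go below this, even on very fast local links.
    """
    # Give lossier links more retransmission headroom.
    if loss_pct > 5:
        multiplier += 2
    return max(floor_ms, int(rtt_ms * multiplier))

# e.g. a 25 ms RTT local link vs. a 75 ms intercontinental link
print(recommended_latency_ms(25, 0.1))   # floor applies on the fast link
print(recommended_latency_ms(75, 1.0))   # 4 x 75 ms on the long-haul link
```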
As for our total latency from creation to consumption, of course there are always going to be some additions there, as we add the CDN and maybe a bit more transcoding. But as we shorten each part of that chain, we ultimately shorten the whole chain. And if we can start with something low-latency like SRT, it really helps.
I think the other thing that's really exciting about SRT is that it's backed by the SRT Alliance, which is made up of 450-plus members at this point. These are companies, many of whom are represented here today, working as a collective to see this technology really shine. Here at Epiphan Video, we're part of the Alliance, as are many of our friends at places like Wowza, BirdDog, and so on. Every now and then, these companies get together and play in a sandbox to figure out interoperability, and because they're all using the same technology, interop becomes even easier between different products--whether those are hardware encoders like we make, CDNs, or camera manufacturers. Being able to bring these all together over a single protocol is pretty exciting.
So, how do we bring in a remote guest over SRT? That's ultimately the question, and what I'm proposing. Here's how we could do it: we send out a remote hardware encoder--in this case, I'm representing it with our Pearl Nano, our most recent small hardware encoder. We have our camera and audio--hopefully high-quality ones--plugged into that encoder, and it sends SRT to our production hub, totally remotely.
In our case, I'm representing our production hub with a Pearl-2. The Pearl-2 can ingest that SRT stream, maybe add some other elements of production, and send it back out to the final destination--maybe in another stream type, maybe also in SRT, to multiple destinations at the same time, whatever you need.
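That fan-out step--one SRT ingest re-streamed to several destinations, possibly in different protocols--can be sketched as a simple list of output targets. The hostnames, paths, and stream key below are placeholders for illustration only, not real endpoints or any vendor's API:

```python
# Illustrative only: assemble output targets for a production hub that
# re-streams one SRT ingest to several destinations in different protocols.
# All hostnames and the stream key are hypothetical placeholders.

def build_outputs(stream_key):
    """Return (protocol, url) pairs for each simultaneous destination."""
    return [
        ("srt",  "srt://backup.example.net:9000?mode=caller"),   # SRT relay
        ("rtmp", f"rtmp://live.example.com/app/{stream_key}"),   # primary CDN
        ("rtmp", f"rtmp://cdn.example.org/live/{stream_key}"),   # secondary CDN
    ]

for proto, url in build_outputs("demo123"):
    print(proto, url)
```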
If we need to scale this up to multiple guests, we would have multiple endpoints with multiple remote-contribution encoders. In this case, I'm showing several Pearl Nanos, all feeding back into our centralized production hub, which can then do all the mixing and switching between these different streams and then record and stream on to the final destination.
People say, "Okay, that looks great on paper, but who's actually doing this?" We use this, and I'm going to show you exactly how we do it. A year ago, when everything kind of went crazy, we started out like most people did and pivoted immediately to Zoom. We were already a Zoom customer, having used it for over a year for a variety of things. That made our transition to work-from-home instantaneous, and we were very lucky to have that. We started doing our content creation that way, but very quickly we were unsatisfied with the quality of the image and the audio we were getting from Zoom, and we wanted to find a better way. It just so happened that we were already in the process of rolling out SRT into our hardware encoders.
For those of us who do content creation, we distributed those encoders to our homes--many of us had them anyway--and we started using SRT as our workflow. This allowed us to make the most of our individual internet connections, bringing up the overall quality of the video and audio we were contributing to the production, and this made a huge difference. I cannot tell you the comments I get about how good our productions look compared to what they were. We did a webinar yesterday for two hours, and we got nonstop comments about how high quality it was. That's all thanks to the way we're using SRT to deliver it.
We still do use Zoom, but only as a backchannel of audio communication so that our multiple remote participants can communicate in real time. Those remote participants are also sending an SRT stream, with their high-quality camera and high-quality audio, into our main studio production hub, using a Pearl-2, which then mixes and switches between all of that content and sends it back out to the final destination. This makes it very easy for most people to set up; we can even pre-configure these remote contribution encoders. I've participated in a bunch of webinars and content creation this way. When we do it fairly locally with our staff in the Ottawa, Canada region, it's very easy for me to run about 100 milliseconds of latency. When I need to bring in someone from our California office, we need to bring that up to about 200 milliseconds. I've joined webinars with some of our distribution partners in Europe, sending an SRT stream from my home to Belgium; for that, I've had to go to 300 milliseconds to play it safe, but that's still pretty fast. The worst case I've been involved in is bringing in some of our friends at BirdDog in Australia to join us on a webinar. Their connection wasn't great, and it's the other side of the world, so the worst we had to do was 500 milliseconds. This made a huge difference in how we were able to deliver content.
For someone who stares into camera lenses a lot during the day, this has been a fantastic experience, and the results can be very, very good.