
Video: 360 Immersive Live Streaming Workflow: Part 1, Capture and Stitching


Read the complete transcript of this clip:

Shawn Przybilla: In general, live streaming a 360-degree immersive experience is not much different from a distribution perspective, but there are some key considerations to keep in mind for a successful 360 live stream. There are a couple of stages: you're capturing and contributing a source feed to your processing infrastructure, and then delivering it over a traditional CDN. So that's pretty straightforward. I think a lot of us are probably familiar with this: capture, stream, process to ABR, and deliver over a traditional CDN or origin to our end users.

So generally the capturing's going to be happening at a local event, not on AWS. So one of the key considerations is how we get from the capture location to the cloud, and I'll touch on that in a second.
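Since the talk only gestures at that hop here, the snippet below is a minimal sketch of one common way to make it: pushing a pre-stitched feed from the on-site encoder to a cloud ingest endpoint over RTMP. The ingest URL, stream key, source file, and bitrate are hypothetical placeholders, and it assumes ffmpeg is installed on the encoder machine.

```python
# Minimal contribution sketch: hand a stitched 360 source to ffmpeg and push
# it to a (hypothetical) cloud RTMP ingest for downstream ABR processing.
import subprocess

INGEST_URL = "rtmp://ingest.example.com/live/360-stream-key"  # placeholder

subprocess.run([
    "ffmpeg",
    "-re",                       # read the source at its native frame rate
    "-i", "stitched_feed.mp4",   # pre-stitched equirectangular source
    "-c:v", "libx264",           # H.264 contribution encode
    "-preset", "veryfast",
    "-b:v", "20M",               # generous contribution bitrate; the ABR
                                 # ladder is produced in the cloud, not here
    "-c:a", "aac",
    "-f", "flv",                 # RTMP carries FLV
    INGEST_URL,
], check=True)
```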

But first, cameras: I just wanted to call out that there's a wide range of potential devices and camera arrays you could use to capture this content, all with varying features. Some things that drastically affect the architecture from a distribution perspective: whether it's monoscopic or stereoscopic, which carries some really large considerations; whether the device has onboard stitching capabilities; and whether it has onboard encoding capabilities, which is a big one. How fast the device drains its battery, and whether it has onboard storage if you're doing video-on-demand recording, are also key considerations. You can spend as little as $300, and I forgot to bring it up, but I have a little camera that attaches to my cell phone and does 360-degree live streaming, all the way up to $50,000-plus for custom camera arrays you build yourself with GoPros or other devices.

One of the other considerations is your stitching and projection mapping. The most common approach is equirectangular. You can think of equirectangular projection mapping as kind of like a traditional map of the earth. It's not a one-to-one comparison, but there's distortion at the poles of the map, and when we go to project that inside a sphere, there's a lot of waste. By the way, the alien on the slide is supposed to be your head. So imagine your head's inside a sphere and we're projecting the video on the inside of it, sort of like a planetarium. There are some problems with equirectangular: you'll notice there's a lot of deformation as you go further and further from the middle, and there's a lot of wasted data at the poles, so you're not utilizing the sphere very well.
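The pole waste he describes falls out of the geometry: every pixel row in an equirectangular frame spans the full 360 degrees of longitude, but the latitude circle that row lands on shrinks by cos(latitude). A quick numerical sketch (the function name is illustrative, not from the talk):

```python
# Quantifying the equirectangular waste described above: rows near the poles
# cover a much smaller circle on the sphere, so they are heavily oversampled.
import math

def oversampling_factor(latitude_deg: float) -> float:
    """How many times denser than needed the horizontal sampling is
    at a given latitude."""
    return 1.0 / math.cos(math.radians(latitude_deg))

for lat in (0, 30, 60, 80, 89):
    print(f"latitude {lat:2d} deg: {oversampling_factor(lat):6.1f}x oversampled")

# Averaged over the sphere, only 2/pi (~64%) of an equirectangular frame's
# pixels carry unique coverage, so roughly a third of the data is redundant
# before the encoder even sees it.
print(f"useful fraction of the frame: {2 / math.pi:.2f}")
```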

So there are a couple of other approaches that have been pioneered by folks at Facebook and YouTube, specifically cube mapping, pyramid mapping, and barrel mapping. All of these mapping techniques have various tradeoffs, and we're really trying to figure out the best approach when you take the whole distribution chain into consideration. For instance, Facebook has a great blog post on pyramid mapping, discussing how you could do view-adaptive bitrate, where the angles the user isn't looking at are delivered at a lower bitrate than the viewport they're actually viewing the content in, and that changes in real time as the user turns their head. That's not necessarily supported natively in some of the adaptive bitrate protocols, as you might imagine, but I'm hopeful that some of these key learnings will be folded into those standards over time.
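As a rough illustration of the cube-map and view-adaptive ideas (a sketch, not Facebook's actual implementation), the snippet below picks the cube face under the viewer's gaze and gives only that face the high-bitrate rendition. The face-selection math is the standard dominant-axis test; the two-tier bitrate ladder and all names are made up for the example.

```python
# Sketch of view-adaptive delivery on a cube map: the face the viewer is
# looking at gets the high bitrate, the other five faces get a low one.
def dominant_cube_face(x: float, y: float, z: float) -> str:
    """Map a gaze direction vector to the cube face it points at."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

def bitrate_plan(gaze: tuple) -> dict:
    """Assign a hypothetical two-tier bitrate ladder around the gaze."""
    focus = dominant_cube_face(*gaze)
    faces = ["+x", "-x", "+y", "-y", "+z", "-z"]
    return {face: ("8M" if face == focus else "1.5M") for face in faces}

# Viewer looking roughly down +x: that face streams at 8M, the rest at 1.5M.
print(bitrate_plan((0.9, 0.1, -0.2)))
```

In a real player this plan would be re-evaluated as the viewer turns their head, which is exactly the real-time switching the talk notes current adaptive bitrate protocols don't natively support.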

There's another consideration: a lot of those cameras say they're 4K, but the reality is that it's a 4K capture of a 360-degree experience. What the user actually sees is not 4K; they're seeing only the part of the experience they're looking at. So keep that in mind as you think about contribution capture: 4K in a two-dimensional, fixed-plane experience is much different from sending video for a full 360 immersive experience.
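The arithmetic behind that point is worth working through once. The frame width is the standard 4K figure; the ~90-degree field of view is an assumption about a typical headset, not a number from the talk:

```python
# Back-of-the-envelope math for the "4K isn't really 4K" point: a 3840-pixel
# equirectangular frame spreads its width across 360 degrees of longitude, so
# a viewport with a ~90-degree field of view only ever sees a quarter of it.
FRAME_WIDTH_PX = 3840        # "4K" equirectangular capture
SPHERE_DEGREES = 360
VIEWPORT_FOV_DEGREES = 90    # typical headset FOV (assumption)

px_per_degree = FRAME_WIDTH_PX / SPHERE_DEGREES
viewport_px = px_per_degree * VIEWPORT_FOV_DEGREES
print(f"{px_per_degree:.1f} px/degree -> ~{viewport_px:.0f} px across the viewport")
# ~960 px wide: what the viewer actually sees is closer to SD than to 4K.
```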
