
Streaming Live Concerts: Backstage with TourGigs

This article explores what it takes to pull off a live multi-camera concert film, backhaul a compressed stream from a crazy location, and deliver it at scale to users on multiple screens. It also looks at different monetization models, rights management, policy enforcement, royalties, and why some shows go perfectly well and others go perfectly wrong.

On-site Multicamera Workflow

We use a mixture of camera bodies in our multicamera workflow. We’ve used cameras with Micro Four-Thirds lenses, and we’ve also used some ENG-type cameras. Sony EX1s were our workhorses for a long time. We got a great deal of use out of Panasonic Lumix cameras. Lately we’ve come to love Super 35mm cameras such as the Canon C300. The camera bodies, lenses, and shooters we employ probably have a greater impact on the nature of the product than any other stage in the pipeline.

Of course, if any part falls down, the live event is not going to work, but it really begins with a good live production and having good cameras, good glass, good shooters, and a good director. If you don’t have good equipment and a strong onsite production crew, then everything else afterwards is just a technical exercise.

Managing Long Cable Runs

We rely heavily on tactical fiber optic cable. SDI over copper often doesn’t give us enough length because we have to set up at the back. We always position a camera at the rear of the auditorium, so we have some very long runs.

If we’re at a festival and we want to run to front of house, this can easily exceed the typical length that you can get out of HD-SDI over copper. Running that over fiber is great; we use Blackmagic converters for that extensively. More often than not, we’ll set up on card tables or in a closet rather than in a traditional production truck on site. Being flexible enough to work with the space we have available goes along with our motto of being fast and light and able to get in and out. Figure 3 (below) shows some examples of the typically cramped conditions we work under.

Figure 3. Typical TourGigs production environments

The Road-Ready TourGigs Rack

All of the switchers, converters, audio processors, and encoders that we use on a live concert stream fit into one rack case that we can fly or ship across the country as we travel from venue to venue. Our encoders are purpose-built and the client is custom software. We’ve tried a lot of solutions, and there are a lot of good ones on the market.

But for us, each time we used a commercial solution, we had issues with it. Eventually, we began to ask ourselves how we could improve on the existing solutions to suit our needs better, and whether it was worth writing software to do so. We concluded that it was, and we’ve assembled a software team that put together a client that we’ll launch publicly as Gig Casters in 2016.

The field is our laboratory. It’s a great source of data that drives our innovation. We take careful notes and pay attention to when things work and when things don't work and try to build on that.

Backhaul

Backhaul is one of our major pain points. We spend a lot of time in pre-production on getting the signal back into our platform. Commercial-grade connections, such as symmetric fiber or Ethernet over copper, are great, but in most of the venues where we work these connections are pretty rare, and getting one dropped is expensive. The traditional telco world doesn’t like to work on an event-by-event basis; they want two-year contracts to drop fiber into a venue. Unless we know that we’re going to be doing 30 shows a year in a particular venue, it’s really not worth it.

We've used cable modems that are already installed in venues a number of times. This works pretty well. We can call ahead of time and get the package bumped up if the venue is subscribed to a lower tier, and they can drop it back down after the show. But if the cable modem is already there, it's usually shared among the whole back office and production staff, and you never know when you're going to find a computer in a production office or in the manager's office that's using up all of the upstream. The unpredictability of a cable modem shared within the venue can really kill us sometimes.

Cellular has been very hit or miss for us. A lot of people are looking to cellular bonding as a great technology opportunity, but in our experience, it has not been a technology that we want to rely on when we're doing a big show. Another problem with relying on cellular bonding is that sometimes we’re filming in a big, old brick building, and when we actually get backstage and set up, we find we have no signal back there.

Fixed microwave is great when available. We've used it at some festivals, and it’s always proved a really good option. As I mentioned earlier, we don't do much with production trucks, but the price of this technology has come down a lot lately, and it’s on our radar.

The other thing to note is that when we're using public transit (cable modem or pretty much anything else besides the ViaSat satellite), we're going to cross the public internet at some point, which means we’re subject to outages, congested peering points, and packet loss.

As for network operations and engineering, live event workloads and the public cloud pair like they were made for each other. We can bring up as many as 5 or 10 servers to do the transcoding to support an event, then tear them down after the event ends. It allows us to scale out elastically very easily.
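As a rough illustration of that elastic pattern, here is a minimal sketch of bringing up a short-lived transcoding fleet for one event and tearing it down afterwards, assuming an AWS EC2 setup via boto3. The AMI ID, instance type, region, and tag names are illustrative assumptions, not a description of TourGigs' actual stack.

```python
# Sketch: launch a per-event transcoding fleet, then terminate it after the show.
# Assumes AWS EC2 via boto3; AMI ID, instance type, and tags are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_transcoders(count, event_id):
    """Start `count` transcoding instances tagged to a single event."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI with the encoder stack baked in
        InstanceType="c5.2xlarge",        # illustrative compute-optimized size
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "event", "Value": event_id}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def teardown_transcoders(instance_ids):
    """Terminate the fleet once the show is over."""
    if instance_ids:
        ec2.terminate_instances(InstanceIds=instance_ids)

# Usage: 5 to 10 transcoders for the show, gone as soon as it ends.
fleet = launch_transcoders(count=8, event_id="tour-stop-42")
# ... run the event ...
teardown_transcoders(fleet)
```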

Software and CDNs

We combine a lot of off-the-shelf software with our custom software, so our CMS, Flight Deck, ties into Wowza, and then Wowza feeds Control Plane, our policy and service engine. Control Plane is where we implement a lot of the pay-per-view controls and policies. We've got a hybrid solution for the host site and for ecommerce, meaning we put persistent pieces in the data center and elastic scale-out pieces in the cloud, where they work well.
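To give a concrete sense of the kind of pay-per-view control a policy layer like Control Plane might enforce, here is a minimal sketch of signed, expiring playback tokens using only the Python standard library. The field layout, TTL, and secret handling are assumptions for illustration, not TourGigs' actual scheme.

```python
# Sketch: signed, expiring playback tokens that a pay-per-view policy layer
# could check before handing a viewer the stream URL. Layout and TTL are illustrative.
import base64
import hashlib
import hmac
import time

SECRET = b"replace-with-a-real-shared-secret"

def issue_token(user_id: str, event_id: str, ttl_seconds: int = 4 * 3600) -> str:
    """Return 'user|event|expiry|signature' for a paid-up viewer."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{user_id}|{event_id}|{expiry}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (payload + b"|" + base64.urlsafe_b64encode(sig)).decode()

def check_token(token: str, event_id: str) -> bool:
    """Validate the signature and expiry, and match the token to this event."""
    try:
        user_id, token_event, expiry, sig_b64 = token.split("|")
    except ValueError:
        return False
    payload = f"{user_id}|{token_event}|{expiry}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return False
    return token_event == event_id and int(expiry) > time.time()

# Usage: issue at purchase time, check on every playback request.
t = issue_token("viewer-42", "show-2015-10-31")
assert check_token(t, "show-2015-10-31")
```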

CDNs are great for edge caching and delivery. We wouldn't be able to do what we do if the CDNs weren't where they are today, and at their current price points it's really great. We do network and show monitoring for the duration of the event. I'll park an engineer in front of the screen and say, "Your job tonight is to watch this show for 3 hours, and listen to it carefully on headphones. If you see anything off, contact the site ops so that they can get a message to the director or the audio engineer and we can make corrections."
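Alongside that human monitoring, a simple automated check can flag a stalled stream before anyone notices on screen. The sketch below polls an HLS media playlist and warns if the media sequence number stops advancing; the playlist URL and poll interval are illustrative assumptions, not part of TourGigs' tooling.

```python
# Sketch: warn if an HLS media playlist stops advancing during the show.
# The playlist URL and polling interval are placeholders for illustration.
import re
import time
import urllib.request

PLAYLIST_URL = "https://cdn.example.com/live/show/index.m3u8"  # hypothetical edge URL

def media_sequence(url: str) -> int:
    """Fetch the playlist and return its #EXT-X-MEDIA-SEQUENCE value (-1 if absent)."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    match = re.search(r"#EXT-X-MEDIA-SEQUENCE:(\d+)", body)
    return int(match.group(1)) if match else -1

def watch(poll_seconds: int = 15) -> None:
    """Poll the playlist and alert when it fails to fetch or stops moving."""
    last = -1
    while True:
        try:
            current = media_sequence(PLAYLIST_URL)
        except OSError as err:
            print(f"ALERT: playlist fetch failed: {err}")
        else:
            if current <= last:
                print(f"ALERT: stream may be stalled (sequence stuck at {current})")
            last = current
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```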

The event lasts only three hours, so if we don't act fast the whole thing will be spoiled. We've carefully selected all of our vendors, and we control as many variables as possible. We have our own player. We have our own encoders. We run our own cloud instances from a cloud provider. All of this is about control and knowing the whole pipeline, and if anything is amiss we can take action to correct it. That old adage about not putting all your eggs in one basket applies here. Everyone has outages. Amazon has outages. Akamai has outages. Comcast has outages. It's good to have multiple CDNs as a backup if outages happen.
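As a rough sketch of what that multi-CDN fallback can look like on the service side, the example below probes an ordered list of edge URLs that all serve the same stream and returns the first healthy one. The hostnames and health-check path are illustrative assumptions, not our actual CDN configuration.

```python
# Sketch: pick the first healthy CDN edge from an ordered preference list.
# The edge hostnames and stream path below are illustrative placeholders.
import urllib.request
from typing import Optional

CDN_EDGES = [
    "https://edge-primary.example.net",
    "https://edge-backup.example.net",
]
STREAM_PATH = "/live/show/index.m3u8"

def edge_is_healthy(base_url: str) -> bool:
    """HEAD the master playlist and treat any 2xx response as healthy."""
    req = urllib.request.Request(base_url + STREAM_PATH, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=3) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

def pick_edge() -> Optional[str]:
    """Return the first CDN edge that answers, or None if all are down."""
    for edge in CDN_EDGES:
        if edge_is_healthy(edge):
            return edge
    return None

# Usage: hand the chosen edge to the player; re-run if it dies mid-show.
print(pick_edge())
```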