
Hybrid Processing: When To Use the Cloud, and When Not To


My favorite home improvement warehouse store recently aired an advertisement that reinforced a simple idea: to do a job properly, I need the right tool. Without it, a home improvement project will cost you too much time or too much effort (and often both) on the way to your goal. To use an HGTV Brother Versus Brother analogy, it is the equivalent of using a screwdriver instead of a power drill to assemble a child’s swing set. You can use the screwdriver, but it will cost you a great deal of time, effort, and energy.

Today, content creators and broadcasters require a complex media supply chain to reach their viewers on any screen, at any time. This supply chain starts with encoding and ends with secure playback on multiple devices, including mobile and desktop.

Publishers must apply a great deal of intellectual capital, alongside their monetary capital, to select the right tools and define the optimal workflow to process and deliver their content. A big driver here is the move from specialized hardware to software-defined solutions that can run on any piece of hardware, in any place; this is a big deal that I’ll cover a little later. There is no shortage of factors decision makers must weigh to arrive at the smartest, most efficient path to success and viewership, including cloud vs. on-prem, baseband vs. multicast, pre-formatted vs. "just in time," and more.

The Cloud Is Not a Silver Bullet

We’ve all heard "that guy" talk about the cloud. It is the solution to all your problems; it will transform your business and workflow and make it instantly cost-efficient. It will generate rainbows and unicorns, as well as fifteen renditions of your streams for any screen, he says (cue angelic singing and a backlit puffy cloud). Don’t get me wrong; I’m a big fan of the cloud. He and I go way back. But there are some very distinct situations when it makes sense to use the cloud to create digital media, and others where it definitely does not. This is where hybrid processing becomes a key part of an intelligent workflow.

The fine folks at Merriam-Webster define hybrid as "a process or thing that has two different types of components performing essentially the same function." That definition fits this topic perfectly. As I mentioned earlier, the two key components of a hybrid encoding approach are (a) on-prem, off-the-shelf hardware running specialized software and (b) a cloud facility with similar functionality that sits apart from the origination facility, whether that's down the block or across the country. Each component has a key role, so let’s explore some of the strengths and weaknesses of each.

On-Prem: Very Friendly, but Largely Misunderstood

One common challenge with on-premises processing has been cost. Until fairly recently, on-premises encoding meant purchasing specialized, expensive, proprietary hardware, which creates two problems. First, you’re potentially locking yourself into a 36-month (or shorter) lifecycle with this gear and will have to replace it to take advantage of the typical upsell: increased density, new features, or other hardware improvements from a single vendor. Second, depending on the size of the organization, there is a concern over who supports the hardware. Can the internal IT staff handle it? Engineering? Even if a warm body is capable, that person would have to sit right in front of this specialized gear to manage it, taking them away from other roles and responsibilities.

As I outlined earlier, some voices are a bit too quick on the draw to point to "the cloud" as the solution to all that ails a broadcaster or content creator. "Just point your file to the cloud," they say, "we’ll do the rest." Easy to say, and that’s before we even get around to discussing live, linear streams versus on-demand content. One reality of the high-definition world we all clamored for (and now have) is the weight of the infrastructure required to support those fine-looking pieces of video content. HD is heavy, and not for lack of a fitness routine. In today’s broadcast facilities, contribution files are large, with bitrates ranging anywhere from 50Mbps to 150Mbps. These are lightly compressed, pristine representations of the live and on-demand content we all watch on our televisions, tablets, and phones, and they are proud to be big and meaty. With data rates around 100Mbps, wrap your mind around how to move these files, 50 of them at the same time, not only into the cloud but also back out of it.
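
To put that in perspective, here is a quick back-of-envelope sketch in Python. The 50-stream count and 100Mbps bitrate come from the paragraph above; the 10Gbps uplink is a hypothetical assumption, not a recommendation.

```python
# Back-of-envelope math for pushing contribution feeds to the cloud.
# Stream count and bitrate come from the article; the uplink size is a hypothetical assumption.

streams = 50          # simultaneous contribution feeds
bitrate_mbps = 100    # per-feed contribution bitrate (lightly compressed HD)
uplink_gbps = 10      # assumed dedicated uplink to the cloud provider

aggregate_gbps = streams * bitrate_mbps / 1000
utilization = aggregate_gbps / uplink_gbps

print(f"Aggregate contribution traffic: {aggregate_gbps:.1f} Gbps")
print(f"Share of a {uplink_gbps}Gbps uplink consumed: {utilization:.0%}")
```

And that is one-way traffic; pulling the processed renditions back out of the cloud adds its own load and, with most providers, egress fees on top of it.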

Maintaining a stable connection to the cloud to move just one contribution stream can be a challenge and, depending on the size of the organization, may be very taxing to the facility’s overall bandwidth. To reach so many screens, you must create a good number of stream renditions of varying sizes and rates. Many times, it makes more financial sense to process these files at predictable utilization rates on-prem than to incur the cost of doing so in the cloud. As with most broad assertions, best practices are a good starting point, but your actual mileage may vary.
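
As a rough illustration of that "predictable utilization" point, the sketch below compares a flat monthly on-prem cost against a metered cloud rate and finds the break-even utilization. Every figure in it is a made-up placeholder, not pricing from any real vendor.

```python
# Illustrative break-even calculation: flat on-prem cost vs. metered cloud transcoding.
# All dollar figures are hypothetical placeholders, not real vendor pricing.

onprem_monthly_cost = 4000.0        # assumed amortized hardware, software, power, and support
cloud_rate_per_channel_hour = 0.75  # assumed metered cloud rate, in $ per channel-hour
channels = 10                       # concurrent channels/renditions being processed
hours_in_month = 720

cloud_monthly_cost = cloud_rate_per_channel_hour * channels * hours_in_month
break_even_hours = onprem_monthly_cost / (cloud_rate_per_channel_hour * channels)

print(f"Cloud cost at 24/7 utilization: ${cloud_monthly_cost:,.0f}/month")
print(f"On-prem is cheaper once the channels run more than "
      f"{break_even_hours:.0f} of the {hours_in_month} hours in a month")
```

With these made-up numbers, around-the-clock linear channels favor the fixed on-prem cost, while lightly used or intermittent workloads favor the cloud, which is exactly the distinction the paragraph above is drawing.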

Hey, Look at that Puffy, Powerful Cloud

All of this is not to say that the cloud offers no benefits; far from it. There are plenty. As today’s TV Everywhere deployments demonstrate, hundreds upon hundreds of channels are available for your consumption, often with authentication from your service provider. Now, imagine trying to manage the workflow, troubleshooting, resource allocation, and issue resolution for just a subset of those channels. Imagine you are an MVPD (multichannel video programming distributor) that needs to manage all those channels’ TV Everywhere output and monitor them for quality and uptime. You desperately need to do so from one point of presence, or at least very few. You cannot afford to remote into all of those machines; instead, you need to see them all in one organized place, where the operator can monitor health, see audio and video components, schedule events, and perform a number of other tasks. This is truly TV Everywhere, enabled by the cloud: access anytime, anywhere.

Burstable Capacity

The cloud is also ideal for burstable capacity and processing. "Burstable" is not a meteorological term, but rather a welcome addition to digital media workflows. In the not-too-distant past, when a studio, network, broadcaster, or the like needed to convert a library of content to a new format or update to the latest version of a streaming protocol, it was limited by the encoding capacity it owned within its facility. If the library was 10,000 files with four renditions each, but the facility could process only 1,000 files per day, a calendar and delivery problem quickly developed.
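
The calendar math is easy to sketch. The library size, rendition count, and 1,000-file-per-day on-prem figure come from the paragraph above; the cloud-side numbers are assumptions chosen purely to show how burst capacity changes the timeline.

```python
# The calendar problem: fixed on-prem throughput vs. burstable cloud capacity.
# Library size, renditions, and on-prem throughput come from the article;
# the cloud worker count and per-worker throughput are hypothetical assumptions.

library_files = 10_000
renditions = 4
onprem_files_per_day = 1_000       # source files fully processed per day on-prem

cloud_workers = 500                # assumed encoders spun up just for the burst
cloud_files_per_worker_day = 40    # assumed per-encoder throughput

onprem_days = library_files / onprem_files_per_day
cloud_days = library_files / (cloud_workers * cloud_files_per_worker_day)

print(f"Total renditions to produce: {library_files * renditions:,}")
print(f"On-prem farm:  {onprem_days:.0f} days")
print(f"Cloud burst:   {cloud_days:.1f} days")
```

Once the burst is done, the cloud capacity is released and the bill stops; the on-prem farm, by contrast, is sized (and paid for) whether or not a library conversion is underway.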
