Much Ado About the Cloud
The cloud and its use to power digital media workflows headline many recent media-related announcements, and it is becoming increasingly difficult to parse the myriad cloud messages and clearly identify what is real and what is simply noise.
What is clear is that the cloud is rapidly evolving, in both features and capacity, in an attempt to address the technical and business issues that are preventing targeted verticals from adopting it. In this article I’m going to discuss those issues and point out key areas that should be considered when evaluating the various cloud offerings.
Emergence of the Cloud
I still recall the first time I signed up for cloud resources on Amazon Web Services in 2009, and how easy and simple the process was. All I had to do was complete the form and provide a credit card number, and for about $200, in a matter of minutes, I had multiple servers in a load-balanced environment with the desired scalability, ready to transact. I was truly amazed knowing how much hardware and software had been put to work so quickly with just a few clicks! The cost in both time and money would have been significant if I had had to buy, install, configure, host, and manage all that gear myself. I couldn’t stop thinking about the profound impact this would have on the world. Suddenly, starting a new business with massive resources available from day one, at a fraction of the cost and time it would normally take, became a reality. What once seemed to be a huge barrier to entry, or to scaling your business to the next level, now appeared to be nothing more than a utility anyone could order, use at will, and pay for based only on actual usage, much like how we use electricity today. Hardly anyone uses generators to power their house or office, except in emergencies; rather, we all depend on energy companies to power our lives at the touch of a switch.
Also in 2009, iStreamPlanet was in the middle of preparations for the 2010 Vancouver Olympics, architecting and deploying origins to host every second of every sport of the Games. As we were preparing for this extremely complex, high-profile event, I couldn’t stop asking myself: when would we be able to leverage the cloud to provision and host the massive workflow we had put in place for the Olympics? And once the Olympics were over, could we simply scale those same cloud resources up or down and/or provision them for another project? This thinking was the genesis of a two-and-a-half-year journey to understand, and ultimately answer, how to leverage the cloud to deliver scalability and elasticity based on customer demand and actual usage rather than vague assumptions and guesswork.
Today’s Cloud & Video Workflow Needs
What the cloud clearly offers is vast amounts of computing power (thousands of servers strategically located around the world), storage, and bandwidth to move bits in and out of the cloud on a scale that, in most cases, exceeds the largest individual enterprise deployments many times over, all while eliminating technology refresh expenses.
Typically, today’s cloud offerings come in three flavors: 1) Infrastructure-as-a-Service (IaaS), 2) Platform-as-a-Service (PaaS), and 3) Software-as-a-Service (SaaS). IaaS is great if all that is required is raw compute and storage resources and tools to create, use, and manage virtual machines (VMs). PaaS provides higher-level operations such as automated management of VMs, software maintenance and optimization, and enhanced failover and scalability options. SaaS offers ready-to-use applications that leverage the cloud and eliminate the need for a resource-intensive implementation or integration process.
With these cloud service distinctions in mind, you might be asking, what about Encoding.com and other folks who have been providing transcoding services on Amazon for years? Tremendous credit goes out to the folks at Encoding.com for recognizing an opportunity and building part of the overall video workflow (i.e., file-based transcoding from mezzanine assets to one or more web-ready formats) inside the public cloud. However, the focus of this discussion is supporting the entire video workflow inside the cloud including ingest, digitization, editing, protection, packaging, publishing, origination, and delivery of both live and VOD (file-based) content.
To further underscore the need to contemplate the complete video workflow, let’s take a look at two specific scenarios:
1. Ingesting, encoding, encrypting, and originating 50 linear channels to be distributed over two different CDNs, and growing that number to 500 channels in no more than three months; and
2. Securely ingesting 10,000 hours of premium content, transcoding it into multiple formats, encrypting transcoded assets with one or more applicable rights management technologies, and delivering it securely to one or more distribution outlets within a 72-hour window—then processing another 1,000 hours of content monthly with specific SLA-guaranteed turnaround times.
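To put scenario 2 in perspective, here is a back-of-the-envelope capacity sketch; the transcode speed, format count, and other figures below are illustrative assumptions, not benchmarks from the scenarios above:

```python
import math

# Scenario 2: 10,000 hours of content must be transcoded within a 72-hour window.
CONTENT_HOURS = 10_000   # hours of source content to process (from the scenario)
WINDOW_HOURS = 72        # contractual turnaround window (from the scenario)
FORMATS = 4              # assumed number of output formats per asset
SPEED = 0.5              # assumed transcode speed: 0.5x real time per format
                         # (1 hour of video takes 2 hours to transcode)

# Total machine-hours of work: each hour of content, per format, at 0.5x speed
machine_hours = CONTENT_HOURS * FORMATS / SPEED

# Minimum number of transcode instances that must run in parallel for the window
instances = math.ceil(machine_hours / WINDOW_HOURS)
print(instances)  # → 1112
```

Even with these modest assumptions, the job calls for over a thousand concurrent machines for three days, and essentially zero the day after, which is exactly the elasticity profile the cloud is supposed to serve.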
Now, the true needs here, for both scenarios, are as follows:
- Secure ingest of content
- Intelligent routing of live content to designated cloud access points
- Elasticity (ability to scale up/down as needs change)
- Live video workflow in the cloud
- Multi-CDN/cloud origin integrations for redundancy and scalability reasons
- VOD video workflow in the cloud including secure and accelerated ingest and push to designated distribution outlets
- An SLA to cover service up-time and turnaround times for VOD assets
The Rule of Six
In fact, we can use these requirements to identify six key areas that should be considered when evaluating different cloud services and cloud-based offerings:
- Content Security
- Access Points into the Cloud
- Media Workload-Friendly PaaS
- Private-Public Workflow
- Media Processing Power
- Total Cost of Ownership
Content Security
Aside from the quality of the experience, nothing concerns content partners more than the security of their content. When moving assets in and out of the cloud, are those sessions encrypted? While stored in the cloud, are the mezzanine assets encrypted, and at what point are they decrypted for processing (e.g., transcoding)?
Access Points into the Cloud
Where content is ingested (i.e., at which physical cloud location) can have a drastic impact in terms of both performance and cost; this is especially important in fragmented markets, where the transport costs of crossing multiple countries can add up quickly. Additionally, reliability and ingest latency are absolutely paramount to the live workflow (e.g., 50 milliseconds and fewer than 10 network hops vs. 500 milliseconds and 30 network hops can mean the difference between a consistently high-quality video experience and constant buffering).
Media Workload-Friendly PaaS
PaaS that is designed to mitigate security concerns, automate provisioning and scaling of resources, and that offers a modular approach to all video workflow components—with the ability to build-in and build-on while giving control to all components through APIs—can accelerate the time it takes to build and deploy media workflow applications in the cloud and, ultimately, lower start-up costs.
Private-Public Workflow
Cloud providers are quickly innovating around hybrid cloud architectures to bridge the gap between private and public cloud resources, but has anything been done specifically to mitigate media workload issues? Over the next several years, as the public cloud continues to gain traction, the ability to seamlessly and transparently leverage both private and public clouds will play an important role in how quickly the public cloud is adopted.
Media Processing Power
An interesting thing about processing media (e.g., encoding or transcoding) is that regardless of how much compute power we use, we always seem to need more. Faster machines will process more in a shorter amount of time and, if optimized for media processing (e.g., CPU and GPU optimizations for various codecs, including multi-threaded and parallel processing), they will yield better results, both in the time it takes to process media and in the overall quality of the video itself.
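The limits of simply adding cores can be sketched with Amdahl’s law, which bounds the speedup of a job by its serial portion (e.g., demuxing and stitching around parallel per-segment transcodes); the 90% parallel fraction below is an illustrative assumption:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Theoretical speedup for a job whose `parallel_fraction`
    can be split evenly across `workers`, per Amdahl's law."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Assuming 90% of a transcode parallelizes cleanly:
print(round(amdahl_speedup(0.9, 8), 2))   # → 4.71 with 8 cores
print(round(amdahl_speedup(0.9, 64), 2))  # → 8.77 with 64 cores: diminishing returns
```

This is why codec-level optimization matters as much as raw machine count: shrinking the serial fraction raises the ceiling that no amount of extra hardware can lift.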
Total Cost of Ownership (TCO)
Although the cloud promises elasticity and lower costs by leveraging economies of scale, it is important to do some modeling to fully understand all the costs and avoid surprises. How you are charged for SaaS can differ substantially from how you are charged for the PaaS- or IaaS-based resources that run your services. One size hardly fits all, and depending on whether you are looking for an end-to-end solution, a fast way to deploy your own solution, or just cloud scale to run your existing “cloud-ready” solution, your costs, and ultimately your ROI, can change quickly.
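A minimal capex-vs.-opex model illustrates the kind of break-even math worth doing before committing either way; every figure below is a placeholder assumption for illustration only:

```python
# Owned hardware (capex) vs. on-demand cloud (opex) cumulative cost.
CAPEX = 250_000            # assumed up-front hardware + software cost
OWNED_MONTHLY_OPS = 4_000  # assumed power, hosting, and admin per month
CLOUD_HOURLY = 1.50        # assumed blended per-instance-hour rate
INSTANCES = 20             # assumed average concurrent instances
UTILIZATION = 0.40         # assumed fraction of the month instances actually run
HOURS_PER_MONTH = 730

cloud_monthly = CLOUD_HOURLY * INSTANCES * UTILIZATION * HOURS_PER_MONTH

def cost_owned(months: int) -> float:
    return CAPEX + OWNED_MONTHLY_OPS * months

def cost_cloud(months: int) -> float:
    return cloud_monthly * months

# Months until owning the gear becomes cheaper than renting it
month = 1
while cost_cloud(month) < cost_owned(month):
    month += 1
print(month)  # → 53
```

The break-even point is acutely sensitive to the utilization assumption: at low utilization the pay-for-what-you-use model wins for years, while a fully loaded, steady-state workload tips the table back toward owned hardware much sooner.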
It is hard to argue against the notion that, in the not-too-distant future, the cloud will turn the complex and resource-intensive task of processing media into a utility. In the coming years, the cloud’s ability to leverage economies of scale will continue to put pressure on pricing, and the battle between capital expenditures (i.e., buying hardware and software and running it yourself) and operating expenditures (i.e., leveraging the cloud to process media on demand and paying only for what is actually used) will ultimately be won at the finance table.
Advances in the live video workflow have opened the doors to IP acquisition and routing, moving us towards 99.9999% reliable live streaming
The explosion in over-the-top (OTT) video viewing for both live and on-demand content calls for new innovation and collaboration across the entire online video ecosystem and workflow pipeline