
SDN and NFV: The Future of Virtualization and Cloud Computing

If you are deploying streaming media infrastructure or managing online video creation and distribution workflows, you’re familiar with the challenges of organizing potentially vast arrays of computers and applications. This article will look at how infrastructure is evolving from something that has traditionally been built from single-purpose appliances and dedicated networks into a “cloud” of resources that can be orchestrated as required to run almost any application. If you want to lower your overhead, scale to support audiences of any size, or ensure that you can replace failing infrastructure with backup options in seconds, then you need to understand what the telecom giants are doing, not only because they are defining these technologies at vast scale and with extreme uptime requirements, but because that work is enabling them to enter the workflow and streaming infrastructure market too.

Pretty soon you will be seeing software-defined networking (SDN) and network function virtualization (NFV) emerge at all layers of the distributed computing sector, streaming workflows included. Before we dive into SDN and NFV, though, some background is in order.

Once upon a time, computers were little more than calculators dedicated to a particular calculation: They had specific capabilities and specific instruction methods, and while they could generally perform that calculation with different variables extremely efficiently, if you wanted to change the function you typically had to modify the instructions, and potentially the hardware, and then reload the instructions with the new variables and execute the new “instruction.”

With time, computers evolved to differ from calculators in that various complex instruction sets could be performed interchangeably, and sometimes the instructions could be stored in memory and swapped in and out with some speed as they were needed for various tasks. Operators would call the instructions one at a time, perhaps changing a few variables to process different results.

With yet more time, the operator’s role was replaced by a program that would call various instructions, sometimes passing variables and results between them, and we started to see the emergence of today’s computing paradigms. Critically, the evolution of programming languages and operating systems, which helped standardize how programs interfaced with the underlying hardware, accelerated the field, creating a market for programmers and software developers that co-existed with the hardware market.

While computer software was often optimized for specific hardware, there was a sense that programs could be run on a number of different hardware platforms. However, getting to the point where an application could run on “any” hardware platform took some years, and truly widespread use of “cross-platform” software was (and in some ways still is) fairly limited, with pools of applications being available for different operating systems or, increasingly today, browsers.

Around a decade ago, the idea of “imaging” an entire hard drive emerged, principally as a way to back up and restore a complete machine. Once that had taken hold, it became possible to deliver an image of a “perfect” computer setup for a given task, and to clone it to many installations.

As deployment of these “perfect” setups became a common model for managing large datacenters, the process of deploying these images to specific computers became increasingly automated. To accelerate this, some of the intelligence was pushed out to the computers themselves: By running a host system on each physical computer, it became possible to launch several copies of the cloned images on a single computer, with the host helping each clone access the underlying physical resources of CPU, memory, network interfaces, and so on.

This became known as virtualization.

Once it became possible to boot up multiple computers within the “parent” at the same time, each of these “virtual machines” could deliver services to remote machines in such a way that the remote machines could not tell whether the services were being delivered from a “real” computer or a “virtual” one.
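
To ground that in something concrete, here is a minimal sketch of what “the host helping each clone” looks like today, using the libvirt Python bindings against a local QEMU/KVM hypervisor. This is an assumption-laden illustration: the domain XML is stripped to the basics, and the “golden” disk image path and domain name are placeholders.

```python
# A hedged sketch: boot one "clone" of a golden disk image as a virtual
# machine via libvirt. Assumes libvirt-python and a local QEMU/KVM host;
# the image path and domain name are placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>clone01</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/golden.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the host's hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the clone with the host
dom.create()                           # boot it; the host mediates CPU/RAM/NIC access
print(dom.name(), "running:", dom.isActive() == 1)
conn.close()
```

Launching several such clones on one physical machine is simply a matter of repeating the define-and-create step with a different name and disk image.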

Virtualization brought with it a wide range of benefits, along with a few challenges.

First and foremost, the benefits included the ability to share hardware. Now instead of user A using his computer for 10% of the time, and user B using her computer for 3% of the time, both could share a single hardware system, and while it seemed as if they each had their own private infrastructure, actually they had a shared infrastructure, meaning that hardware costs could be dramatically reduced.
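
The arithmetic behind that saving is trivial, but worth making explicit. The toy calculation below assumes worst-case, non-overlapping usage and an arbitrary 80% utilization ceiling to leave headroom; both figures are purely illustrative.

```python
# Toy consolidation arithmetic (illustrative figures only).
user_loads = [0.10, 0.03]   # user A busy 10% of the time, user B 3%
ceiling = 0.80              # keep 20% headroom on the shared host

combined = sum(user_loads)
print(f"Worst-case combined load: {combined:.0%}")                 # 13%
print(f"Both fit on one machine: {combined <= ceiling}")           # True
print(f"10%-style users one host could absorb: {int(ceiling / 0.10)}")  # 8
```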

Taking advantage of this, a number of operators decided to allow third parties to run virtual machines on the operators’ own infrastructure, charging them for utilization with pricing measures such as CPU cycles or data network utilization. In the case of Amazon Web Services, this became a runaway success.

When tied to a usage-based economic model, and typically referring to a multi-tenant environment, this type of virtual machine hosting has become what is commonly called an Infrastructure as a Service (IaaS) cloud.
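
As a hedged illustration of hiring IaaS capacity on demand, the sketch below uses the AWS boto3 SDK to launch a virtual machine and then release it so the meter stops. The AMI ID, region, and instance type are placeholders, and it assumes AWS credentials are already configured.

```python
# A minimal IaaS sketch: rent a VM, then give it back. Placeholders:
# the AMI ID is fictional, and the region/instance type are examples.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # a machine image: the "clone" to boot
    InstanceType="t3.micro",          # capacity billed only while it runs
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# ...run the workload, then terminate so the usage-based charging stops.
ec2.terminate_instances(InstanceIds=[instance_id])
```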

This is not new, and indeed in the streaming media sector we should take some pride that we have been using scaled-up virtualised platforms for nearly 20 years: Content delivery networks have been exemplary (if not pioneering) “cloud” services.

When I stream through a CDN, I typically pay for how much of the network resource I use. When I am not using it I don’t get charged, and the operator can scale up or down, or reallocate resources relatively dynamically.

Typically the big cloud infrastructures that are dedicated to a specific task or application are positioned as Software as a Service (SaaS) or Platform as a Service (PaaS) clouds. (There’s plenty of nitpicking about the semantics and specifics of the terms IaaS, SaaS, and PaaS, but that’s another discussion, and for our purposes, these rough descriptions will suffice.)

So far so good: We began with hardware-specific applications, moved through an era in which an application could be moved between machines of the same type, arrived at a world where the machine itself is an abstraction and the entire software side of things can be moved between physical machines, and now, finally, we can hire this type of infrastructure, or the applications running on it, on an as-needed basis.

But the story doesn’t stop there. In fact, in some ways the journey thus far was just a few baby steps.

You will recall that virtualization also brings with it a few challenges.

Back in 2008 engineers were looking at the virtualization environment and trying to work out how to give the virtual machines “direct” access to the underlying hardware.

The problem was that when a virtual machine needed to send a packet of network data to a network card, it would have to “negotiate” with the underlying “host” operating system.

The host, in turn, was interfaced both to the actual network card in the machine and to the virtual machine. It would intermediate and relay the packet, often invisibly to any external observer, but with a measurable performance loss.

If you wanted to do something like route a lot of packets of data through that virtual machine, or, possibly closer to home for this audience, encode a lot of video, then the extra CPU hit (in performance terms) of the virtual host intermediating between the virtual machine and the physical hardware would often be quite noticeable. For occasional use this may not represent a problem, but as sustained use heads toward 24/7, the inefficiency becomes more apparent, and in some ways it has given virtualization a still-lingering reputation for being inefficient when you are delivering high performance all the time.
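
One rough way to see that tax for yourself is to run the same sustained, CPU-bound workload on bare metal and again inside a guest on the same hardware, then compare the wall-clock times. The sketch below is exactly that kind of crude probe (hashing stands in for encoding work); it is not a rigorous benchmark and it does not isolate the network path.

```python
# A crude probe of sustained CPU throughput. Run it on bare metal and
# inside a virtual machine on the same hardware, then compare the times.
import hashlib
import time

def busy_work(iterations: int = 2_000_000) -> str:
    """Hash a buffer over and over to keep one core saturated."""
    data = b"x" * 4096
    for _ in range(iterations):
        data = hashlib.sha256(data).digest()
    return data.hex()

start = time.perf_counter()
busy_work()
elapsed = time.perf_counter() - start
print(f"Sustained CPU workload took {elapsed:.2f}s on this machine")
```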

In 2008, Linux Containers (LXC) was first released in the open source community. LXC allows multiple Linux systems (so-called “containers”) to run on a single control host. Each container sees its own isolated view of the system, and so can provide a virtualized environment. However, since the containers in effect share the same kernel as the host, they can run at very nearly the hardware platform’s full performance, or certainly much closer to it than image-based virtualization can.

Note that in the container model you are limited to building on a common OS kernel, whereas in the “image clone” virtualization model you could have Windows, Linux, and others co-existing on a single host. However, there is a further significant advantage to the Linux Container model: Because the containers only contain the “delta” from the base host system, they are very small, and can be both launched quickly and moved from storage to a target host system very quickly.
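
For illustration, here is a sketch that drives the standard LXC command-line tools from Python. It assumes the LXC userspace utilities are installed and that you have the privileges to create containers; the container name and template options are just examples. The telling line is the `uname -r` run inside the container, which reports the host’s kernel version, because no separate guest kernel is ever booted.

```python
# A hedged sketch of the container lifecycle using the LXC CLI tools.
# Assumes the lxc utilities are installed; "demo" is an arbitrary name.
import subprocess

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

# Create a container from a downloaded root filesystem template.
sh("lxc-create", "-n", "demo", "-t", "download", "--",
   "-d", "ubuntu", "-r", "focal", "-a", "amd64")

sh("lxc-start", "-n", "demo")

# Containers share the host kernel, so this prints the *host* kernel version.
sh("lxc-attach", "-n", "demo", "--", "uname", "-r")

sh("lxc-stop", "-n", "demo")
sh("lxc-destroy", "-n", "demo")
```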

This “deployability” has caught the eye of not only application developers, but the operators of the networks on which those applications are being deployed.

So much so that in the past three years an entire European Telecommunications Standards Institute (ETSI) standards track has formed, focused on network models that deploy and orchestrate vast software-controlled infrastructure environments (such as telecommunications networks), and on building the tools deployed within those networks (including things like CDN edge caching and in-network transcoding) with interfaces that are structured and interoperable.

(That terminology is more generic than the ETSI standards track’s own, but the group’s activity is certainly a key force driving a new technology model into network architect-speak.)

Welcome to the world of software-defined networking and network function virtualization.

SDN describes the operation and orchestration of vast virtualized compute functions over large physical (including wireless/radio) network infrastructures.

NFV describes the migration of in-network computation from dedicated hardware to a model where the function is deployed on an as-needed basis (by the SDN) on whatever resource the SDN determines is suitable.
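
To make the placement idea concrete, here is a purely illustrative sketch, not any real orchestrator’s API; every class and function name in it is hypothetical. It simply shows a controller deciding which resource should host a network function such as an edge cache or a transcoder, based on spare capacity.

```python
# Hypothetical NFV-style placement: functions are instantiated on whatever
# node the controller decides has capacity, not on dedicated appliances.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_free: int                       # spare CPU cores on this resource
    functions: list = field(default_factory=list)

@dataclass
class NetworkFunction:
    name: str
    cpu_needed: int

def place(fn: NetworkFunction, nodes: list) -> Node:
    """Pick the node with the most spare capacity that can host the function."""
    candidates = [n for n in nodes if n.cpu_free >= fn.cpu_needed]
    if not candidates:
        raise RuntimeError(f"no capacity left for {fn.name}")
    best = max(candidates, key=lambda n: n.cpu_free)
    best.cpu_free -= fn.cpu_needed
    best.functions.append(fn.name)
    return best

nodes = [Node("edge-pop-1", cpu_free=8), Node("core-dc-1", cpu_free=32)]
for fn in (NetworkFunction("edge-cache", 4), NetworkFunction("transcode", 16)):
    print(f"{fn.name} -> {place(fn, nodes).name}")
```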

While we are all aware of how fast things in the IT industry can change, it is clear to me that SDN and NFV are on course for the widespread adoption so many anticipate. Judging by the attendance of the world’s leading telecoms’ strategic architects at Network Virtualization & SDN World 2015 (http://www.sdnworldevent.com/), where I was honored to provide my services as chair, a number of giants are slowly awakening. SaaS operators such as encoding service providers and CDNs have won market share by achieving agility through a combination of dumb networks and public/private clouds. But the very networks those SaaS operators deliver their services over will soon have the agility to knock off the same SaaS propositions to the market, and at a scale and distribution that today’s generation of network SaaS providers can neither compete with nor (probably) even imagine.

Seriously hot stuff.

Now go forth and Google ;)
