The Return of Multicast: Why it Succeeds in a Live Linear World
Multicast isn't new, but CDNs, operators, and content publishers have finally caught up to the possibilities it offers for increased scale and decreased costs.
Several times over the years at Streaming Media I’ve been given free rein to indulge myself writing about my favorite of all topics in the streaming space: IP multicast.
IP multicast offers significant advantages, in some scenarios, over traditional unicast and broadcast.
In a unicast live stream, every connected user requires his or her own dedicated bandwidth from the server, so the server needs an internet connection large enough to serve every request for the stream—even though it is sending the same data to many different people. While essential for personalized services, unicast doesn’t scale well for large live online events.
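To make the scaling problem concrete, here is a back-of-envelope comparison of origin bandwidth for the two delivery models. The bitrate and audience figures are purely illustrative, not taken from any real deployment:

```python
# Rough comparison of the origin bandwidth needed to serve one live
# stream by unicast versus multicast. All figures are hypothetical.

BITRATE_MBPS = 5      # illustrative bitrate for one HD live stream
VIEWERS = 100_000     # illustrative concurrent audience size

# Unicast: the origin (or CDN edge tier) sends one copy per viewer.
unicast_mbps = BITRATE_MBPS * VIEWERS

# Multicast: the origin emits a single copy; routers replicate it
# downstream only where there are interested receivers.
multicast_mbps = BITRATE_MBPS

print(f"Unicast origin load:   {unicast_mbps / 1000:.0f} Gbps")
print(f"Multicast origin load: {multicast_mbps} Mbps")
```

The unicast figure grows linearly with the audience; the multicast figure does not, which is the whole point of the technology.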
In a broadcast there is no direct connection; instead, part of the network is allocated to the broadcast, and in effect the content is streamed to all users whether or not they watch it. Internet routers, however, essentially do not forward broadcast packets, so broadcasts are not possible between routed IP networks.
In a multicast, end users tune in by registering their interest in the stream with their own router, which in turn registers with each router on the path back to the origin. The routers then work out that they need to send only one copy of each packet to the next router, rather than one copy per user. The final router sends each packet onto the access network just once, and every receiver that has tuned in picks it up at the same moment. So whether there is one recipient or thousands (as there might be on an enterprise network), the load on the origin still looks like a single end user.
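The "registering interest" step above is visible even at the application level. A minimal receiver sketch in Python (the group address and port are hypothetical; the `IP_ADD_MEMBERSHIP` socket option is what triggers the IGMP join on the local network):

```python
# Minimal sketch of a multicast receiver. Setting IP_ADD_MEMBERSHIP
# makes the OS send an IGMP membership report to the local router --
# the "registering interest" step described in the text.
import socket
import struct

GROUP = "239.1.1.1"   # hypothetical administratively scoped group
PORT = 5004           # hypothetical port (5004 is common for RTP)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Group address + interface address (0.0.0.0 = default interface),
# packed into the struct ip_mreq layout the kernel expects.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # data, sender = sock.recvfrom(65535)  # would block for a packet
except OSError:
    pass  # join can fail on hosts without a multicast-capable route
finally:
    sock.close()
```

Note that the receiver never contacts the origin server at all; the network itself delivers the stream to everyone who has joined the group.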
This means that many different broadcasts can be delivered to many different people using the internet all at the same time, and audience sizes are no longer restricted by the size of a central server (or in the case of the traditional pureplay CDN, a distributed network of servers that could share that load).
A Brief History of Multicast
My own activities with multicast began in the late ’90s. I had been streaming audio, mainly using a little hack in Live365.com’s service to get free CDN services, then more professionally using Akamai, Real Broadcast Networks, and other U.S. CDNs.
I moved to video and deployed a lot of Windows Media projects, and as I was learning the various interfaces, I kept reading about the Windows Media Multicast capability and the scale of live audiences it could enable. I dug deeper and it felt like I had found a forgotten piece of history, one that just needed a kick to start up and change how anyone could broadcast a live data stream to all those who wanted to hear it, more efficiently than via any other method. I realized that scale could be reached with existing equipment—it just needed a coherent deployment configuration. And with several standards and no clear business, multicast also needed some help getting buy-in from relevant players.
Having thoroughly enjoyed the dot-com era, and having previously worked as a club promoter, I was both aware of how MP3 had changed the dynamic of the entire content industry in an instant, and how IP multicast would similarly (if it ever escaped into the wild) change the dynamic of the broadcast industry.
Despite the challenges, I decided to take an early position myself; in 2001 I joined with one of the authors of the early IP multicast protocols to create the Global-Multicast Internet eXchange (Global-MIX). The idea behind Global-MIX was to offer structured source-specific multicast over ISPs, with us managing an aggregated origin of live streaming TV and radio encoders, as well as multicast servers. We used MBGP (in whose creation my partner was also involved) to deliver the multicast video to ISPs who wanted to forward it to their users.
Unicast vs. broadcast vs. multicast
Since we reverted to unicast whenever multicast was not possible (which was the case most of the time), we could offer a comprehensive CDN live service to TV and radio broadcasters that were simulcasting. Where we managed to multicast, we achieved incredible margins, allowing us to carve a niche in a space dominated by larger U.S. CDNs.
It all went well until Windows Media fell from grace, and Adobe didn’t really get its Fusion product together well enough to replace the simplicity with which Windows Media could be deployed.
And so the moment was gone. The legacy of Global-MIX lives on as the MBGP peering at the London Internet Exchange, and it still serves some functions, but it is a complex beast. In the traditionally appliance-led, capital expenditure (capex)-intensive world of network investment, most network services need a minimum level of traffic before they can garner political and financial support internally. Since IP multicast reduced costs for everyone, any successful long-term strategy was vulnerable to the counterargument that it “didn’t increase our revenue.” Worse, the efficiencies could reduce revenues or reduce transit-buying power, which could affect the business more widely. Sometimes inefficiency can be valuable.
This has remained the status quo for years. But things are not static. All the while, mindful of the growing CDN market that operates in their own backyards, operators have been working out their own on-net distribution strategies. Over the past 10 years we have seen the pureplay CDNs provide a competency that has typically been outside of the operators’ own domain: a knowledge of how to run media services and manage the application layer software on top of the IP routes.
Now, that’s all changing. Operators often see that value as something they want to take in-house; they want to be able to offer CDN-like services among the suites of propositions that they offer to their wide range of clients. VPNs, proxies, voice telephony, and an abundance of data services all present architecturally similar problems to a network operator. They all require workflow, network and processing, and ultimately billing.
As I wrote recently about the emergence of software-defined networking and network function virtualization, operators are now defining a new generation of architecture. They are adopting a forward-looking and more dynamic approach to their network functions deployment, allowing them to work with faster cycles in the market.
For an operator to deploy an on-net CDN a couple of years ago was a significant capex consideration. Now the plan is to deploy a generic compute infrastructure once, and to ensure that the applications—including CDN—can be run on that compute infrastructure. From there, they can change what they are doing through software at virtually no incremental cost, rather than by “rolling trucks” and burning tons of capital.
Over the next year or two, as the operators finish that fundamental architecture and replace routers with compute power and routing software, it becomes easy for them to roll out both very small and very large service models dynamically. At that point, they will take over the problems of network optimization for particular client traffic entirely with their own CDN architectures, and they will begin to detach from the managed services of the pureplay CDNs. Certainly, operators might initially license software from today’s CDNs to get them started—and we can see Akamai, Limelight, and others already in the process of providing licensed software models—but over time many will develop new CDN capabilities themselves, tuned to their own network resources and skills, and offer them as a key differentiator. But that is still to come.
However, there is something else in the pot, alongside this new computerized and software-driven networking model, that is going to add some significant flavor to the soup: the return of IP multicast.
This time around, audience adoption of live linear streaming is reaching a critical mass; the value of the audience outweighs the revenue lost by replacing inefficient unicast with highly efficient IP multicast. Suddenly the telcos are in favor of growing the audience, rather than charging the content providers for trying to grow audiences. Now the telcos are involved in the value chain from the subscriber end, rather than simply chasing revenue from publishers. This means that multicast’s optimization increases profits, rather than simply reducing the value of the service telcos can sell to others.