Opinion: Is A Codec War in Our Future?
Today, the industry is abuzz with articles, claims, and counterclaims about codecs. As someone who has been immersed in MPEG since the days of its inception, I want to explain the history of codecs in order to give you a better understanding of where the market is headed.
Until now, video standards have been necessary to create an interoperable world for the distribution of media: initially for broadcast, given the constraints of hardware decoders, and later for unicast as the industry evolved toward software-based decoding. The first bump on the MPEG road came in the early 2000s, when Microsoft developed its own codec, VC-9, later standardized by SMPTE as VC-1.
Does anyone wonder what happened to VC-1? That codec offered the same performance as MPEG-4 AVC/H.264 under roughly the same licensing terms, so the industry's response was "Thanks, but no thanks." The second bump in the road came in 2010, when Google bought On2 to gain access to its video codec technology. Google launched VP8, which remained largely limited to YouTube, and is now transitioning to VP9. Thus far, VP8/VP9 has had very limited success outside YouTube and is not widely supported on mobile devices. Companies like Google, Microsoft, Cisco, and Mozilla do not like the MPEG licensing model for HEVC, whose terms remain unclear, so they created the Alliance for Open Media (AOM) to offer an open source, royalty-free codec that claims to be twice as efficient as HEVC. It's compelling, right?
Now let's do some math. The pay-TV market is worth approximately $350 billion; the OTT market is valued at about $35 billion. Even if a new codec were able to win the OTT market, which has yet to be proven, the real prize is a broadcast market 10 times larger. In fact AV1, AOM's new codec, targets only OTT delivery and will not solve the codec problem for the broadcast market.
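To make the proportions concrete, here is a quick back-of-the-envelope sketch. The dollar figures are the rough market estimates cited above, not audited numbers:

```python
# Rough market-size arithmetic using the article's estimates.
pay_tv_market = 350e9   # pay-TV market, ~$350 billion (estimate)
ott_market = 35e9       # OTT market, ~$35 billion (estimate)

ratio = pay_tv_market / ott_market
print(f"Pay-TV is roughly {ratio:.0f}x the size of OTT")

# An OTT-only codec leaves the pay-TV share of the combined market unaddressed.
unaddressed = pay_tv_market / (pay_tv_market + ott_market)
print(f"Share of combined market out of reach for an OTT-only codec: {unaddressed:.0%}")
```

In other words, roughly nine out of every ten dollars in the combined market sit on the broadcast side that AV1 does not target.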
Let's take a look at the big challenges on the broadcast side. If service providers want to increase the resolution of their channels from SD to HD in emerging countries and from HD to UHD in developed regions, especially for delivery over bandwidth-limited networks such as DTT, mobile, or OTT (DSL), then a new codec will be required after 2020.
This brings us to the dynamics of an open source codec. The nature of an open source codec is that it will be updated on a regular basis. As a reference, a new version of the Reference Design Kit (RDK), the open source cable middleware, is released every month. There will be multiple versions of AV1 deployed by different OTT services, all of which will have to interoperate with billions of AOM-capable devices. Who is going to be the traffic cop? It's likely that AOM will define interoperability profiles, but based on what we have seen with DASH, it may be several years until there is a good level of interoperability for live services.
OTT operators pushing for AOM (e.g., Netflix, Amazon) will have to impose strict device rules and duplicate streams on their servers, incurring additional encoding and storage costs. This scenario could work for VOD, where library sizes are bounded, but what about live applications? Live would require streaming multiple versions over unicast to address each class of client. And if the operator runs a managed network where multicast is used for live, there is not enough network capacity to simulcast.
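The duplication cost described above is straightforward to sketch. All of the numbers below (library size, ladder depth, file sizes) are invented for illustration; the point is simply that simulcasting every title in a second codec roughly doubles the encode and storage bill:

```python
# Hypothetical illustration of the simulcast cost of supporting two codecs.
# All figures below are assumptions made up for this sketch.
titles = 10_000              # assumed VOD library size
renditions_per_title = 6     # assumed ABR ladder (bitrate/resolution variants)
gb_per_rendition = 2.0       # assumed average storage per rendition, in GB

def storage_gb(num_codecs: int) -> float:
    """Total storage if every title is encoded in `num_codecs` codecs."""
    return titles * renditions_per_title * gb_per_rendition * num_codecs

single = storage_gb(1)   # one codec family only (e.g., H.264/HEVC)
dual = storage_gb(2)     # add an AV1 simulcast of everything
print(f"single-codec: {single / 1e3:.0f} TB, dual-codec: {dual / 1e3:.0f} TB")
```

For a bounded VOD library the extra terabytes may be tolerable; for live, the same doubling applies to real-time encoder capacity and unicast bandwidth, which is where the math stops working.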
This raises the question: Is AOM a real solution for live OTT? Well, if you liked the fragmentation of Android, you are going to love AOM. An open source codec can work in a software client environment, such as a web browser on a PC, but it does not work on TVs, IP STBs, HDMI dongles, smartphones, or game consoles. Essentially, it is incompatible with the lion's share of OTT clients, which rely on a dedicated hardware decoder to achieve very high performance (e.g., games and Ultra HD video), low cost (IP STBs), or power efficiency (mobile devices).
MPEG-4 AVC/H.264 has been an international standard since 2002. And while MPEG-4 encoders have been deployed worldwide for at least 10 years, some TVs today still have a hard time decoding a standard stream that was ratified 14 years ago. Since most of those TVs are not connected, an update can only be delivered over the air, which costs money and is not possible in every country. Now imagine a codec that changed every month. Do you think that would work?
Now let's circle back to AOM's AV1 codec. Before it can be successful, its performance claims have to be substantiated. Meanwhile, MPEG engineers are working to deliver a factor-of-two compression improvement at the codec level as an international standard by 2020, meaning its capabilities will be demonstrated in 2019.
On the royalty-free aspect of AV1: MPEG members hold a large number of essential patents on block-based compression. Unless AOM comes up with brand-new techniques, it will have to pay MPEG patent holders, who are not making much money on HEVC these days and will be eager to monetize AV1.
Last but not least, it's important to note that AV1 addresses only the codec level, while MPEG focuses on the system level. Harmonic believes (and recently made a contribution to MPEG arguing) that by working at the system level we could reach a factor of four over HEVC, using new techniques such as elastic encoding, OTT content-based rate control, tiling for VR, scalable coding for 8K, pre-/post-processing pairing, machine learning, and more. Of course, the chances of reaching a factor of four are higher when all of these factors are considered than in the simple case of broadcast HD. These techniques will set the system approach apart from the pure codec rat race. In addition, they address system problems such as picture quality and latency that could derail applications delivered over the internet if performance targets are not met.
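The "factor two" and "factor four" claims above translate directly into bitrate. Taking an assumed 16 Mbps HEVC Ultra HD stream as an illustrative baseline (the baseline figure is an assumption, not from the article):

```python
# Bitrate implications of the compression factors discussed above.
hevc_uhd_mbps = 16.0             # assumed HEVC Ultra HD bitrate (illustrative)

factor_two = hevc_uhd_mbps / 2   # codec-level gain targeted by MPEG for 2020
factor_four = hevc_uhd_mbps / 4  # system-level gain Harmonic argues for

print(f"factor two  -> {factor_two:.0f} Mbps for the same Ultra HD service")
print(f"factor four -> {factor_four:.0f} Mbps for the same Ultra HD service")
```

A factor of four would bring an Ultra HD channel down to bitrates that today carry only HD, which is what makes the system-level approach relevant to bandwidth-limited broadcast and DSL networks.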
In short, MPEG has the potential to leapfrog AOM with a system approach arriving only two years after AV1's expected deployment in web browsers for wired applications. What's more, it is a platform that also addresses the broadcast market.
You can argue that AOM has rallied chip companies such as Intel, ARM, AMD, and NVIDIA, all of which develop silicon for PCs and mobile devices. But on mobile devices, A/V decoding is always done in hardware for cost and power reasons. The logic used for AV1 decoding will have to be reprogrammable, probably using techniques similar to FPGAs. (See Intel's recent acquisition of Altera.) This has not been deployed yet, and of course what works today for HD might not work for Ultra HD, especially once all the goodies like WCG, HDR, HFR, and NGA are added in 2017!
So what does AOM really bring to the market? It offers a clear alternative for a web world that does not want to pay an "MPEG tax"; a potential plan B if HEVC licensing does not get resolved; and possibly new algorithmic innovation that is less tightly controlled than MPEG's.
Will content providers and consumers win at the end of this codec war? MPEG isn't finished yet. It will either innovate or become irrelevant, especially as its licensing terms become less and less accepted. Will MPEG learn a lesson from HEVC? Trust me, there is a lot of frustration in the MPEG community, and it won't repeat past mistakes.
Thierry Fautier is vice president of video strategy at Harmonic. This is a vendor-contributed article. StreamingMedia.com accepts contributions from vendors based solely on the information and insight they offer our readers.