Buyers' Guide to Encoder Appliances
In addition to the latency introduced by the frames themselves, content needs to be encoded and then packetized for delivery. In some instances, where content is already coming from an IP camera, it must be repackaged; pushing that IP stream to the encoder or transcoder introduces an additional one- to two-frame delay.
CAST, which develops and sells semiconductors, summed up the latency issue as both a human-perception and a machine-interaction dilemma.
“When humans interact with video in a live video conference or when playing a game, latency lower than 100ms is considered to be low, because most humans don’t perceive a delay that small,” the company wrote in a 2013 blog post.
“But in an application where a machine interacts with video—as is common in many automotive, industrial, and medical systems—then latency requirements can be much lower: 30ms, 10ms, or even under a millisecond, depending on the requirements of the system,” the company added.
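To put those latency thresholds in context, it helps to compare them against the duration of a single frame. The short sketch below (the frame rates chosen are illustrative, not from the article) shows why the machine-interaction targets CAST cites are so demanding:

```python
# Frame duration at common frame rates, compared with the latency
# targets quoted above (100 ms for humans; 30 ms, 10 ms, or less
# for machine interaction).
for fps in (24, 30, 60):
    frame_ms = 1000 / fps
    print(f"{fps} fps -> one frame lasts {frame_ms:.2f} ms")

# At 30 fps a single frame already occupies ~33 ms, so a 30 ms or
# 10 ms end-to-end target leaves room for little or no buffering.
```

In other words, a one- or two-frame packaging delay that is invisible in a video conference can by itself exhaust a machine-vision latency budget.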
While it’s unlikely that streaming video will ever require the nanosecond accuracy of HDMI distribution, there’s still a significant amount of content to process. A typical 1080p video signal generates approximately 62 million pixels per second for 30 fps content, and double that for HFR content at 60 fps. A field-programmable gate array (FPGA) has enough processing power to deal with compression in a highly deterministic manner, since it doesn’t face issues that a general-purpose processor (GPP) is prone to, such as task switching.
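The pixel-throughput arithmetic behind those figures is straightforward; a minimal sketch (the list of formats is illustrative):

```python
# Raw pixel throughput an encoder must process, per resolution
# and frame rate. 1080p30 works out to the ~62 Mpixels/s cited
# above; 4K quadruples the pixel count of 1080p.

def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Uncompressed pixel rate for a given format."""
    return width * height * fps

formats = {
    "720p30":  (1280, 720, 30),
    "1080p30": (1920, 1080, 30),
    "1080p60": (1920, 1080, 60),
    "4K60":    (3840, 2160, 60),
}

for name, (w, h, fps) in formats.items():
    print(f"{name}: {pixels_per_second(w, h, fps) / 1e6:.1f} Mpixels/s")
```

Every one of those pixels must be fetched, transformed, and entropy-coded on a hard real-time schedule, which is why deterministic FPGA pipelines hold the advantage over task-switching GPPs here.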
This lower latency of 100–300 ms stands in marked contrast to the 2-second-or-more delay of HLS near-real-time streams.
While there isn’t room in this Buyers’ Guide to go into additional detail around use cases for varying latencies, a white paper I assisted with in 2017 covers several of the key trade-off elements, comparing low-latency live streaming with near-real-time streaming in terms of reach and scalability.
When Hardware Is Key
To wrap up this Buyers’ Guide, though, we’ll focus on three key areas where hardware encoder appliances excel:
1. Ultra-High Resolutions: Software-based encoding on the latest GPPs and CPUs, as well as on mid- to high-end graphics processing units (GPUs), does a rather decent job of encoding full-motion 720p HD video content. For 1080p content, especially at high frame rates or in high dynamic range (HFR and HDR, respectively), the need still exists for high-density encoding via hardware.
Nowhere is the need for hardware acceleration and dedicated appliances more apparent than in the realm of 4K or Ultra HD video content. For this resolution, which is four times that of 1080p (which itself is four times that of 720p), the current need for hardware-assisted encoding is indisputable, especially if frame rates rise to 50 or 60 fps for sports and high-motion action content.
The advent of 8K panels, as demonstrated at the Consumer Electronics Show in Las Vegas in January 2018, will also require hardware for both the encoding and decoding of content, though adoption of 8K isn’t slated until after 2020.
2. Baseband Video Inputs, Both Analog and Digital: In the post-analog era, where many consumer devices have met the “analog sunset” requirements by removing either analog output connectors or the software functionality of those connectors, it may seem ludicrous to suggest that baseband analog inputs are still relevant for encoding appliances in 2018. Yet the number of analog cameras in the field not only persists but, in some emerging markets, is on the rise. The extra-long-tail nature of analog video equipment necessitates at least the option of analog inputs on today’s encoding appliances.
3. Real-Time Operating Systems: Real-time streaming needs a real-time operating system (RTOS). There, I’ve said it. While we will continue to debate, probably for the next 20 years, whether embedded operating systems based on mainstream OS choices (e.g., Linux variants, stripped-down Microsoft Windows OS versions, or even mobile device OSes such as Apple’s iOS and Google’s Android OS) are proper platforms on which to base a highly time-sensitive appliance such as a low-latency video encoder, one fact remains: single-purpose operating systems for single-purpose encoding devices eliminate many of the underlying effects that plague GPPs and mainstream OSes.
On the flip side, an encoding appliance with an RTOS is often limited in terms of its upgrade path, requiring the replacement of a customized application-specific integrated circuit (ASIC) or the reprogramming of an advanced RISC machine (ARM), digital signal processor (DSP), or even FPGA firmware to update standards compliance as MPEG, the ITU, or other standards bodies publish specification updates.
Hardware Still Matters
The constant need for hardware encoding for newer formats, balanced against software-only solutions that take advantage of the latest CPUs and GPUs, may just be the streaming industry’s analog to a CPU manufacturer’s tick-tock approach to features and processor size. Until resolutions, frame rates, and color bit depths level off and solidify into consistent standards, there will always be a need for hardware encoding appliances, especially when it comes to lower-latency encoding and delivery.
[This article appears in the 2018 Streaming Media Industry Sourcebook as "Buyers' Guide to Encoder Appliances."]