Back to Basics: Hardware Acceleration
A primer for newbies and a refresher for veterans, Back to Basics is a regular feature examining some of the basic concepts and technologies involved with delivering online video.
As more and more high-definition (HD) content is acquired, edited, and pushed out to the web, mobile devices, and set-top boxes, the demand for a higher-quality user experience is rapidly followed by a need for hardware acceleration.
But what exactly is hardware acceleration? In our part of the business there are four general processes that use "idle" or latent processor power for hardware acceleration: ingest/live streaming, content creation, output, and playback.
Since ingest is the realm of specialized appliances such as Envivio's C4, Inlet's Spinnaker, or Elemental's Live appliance, I'm not going to spend a lot of time on it, other than to say I'll be digging into this topic in more depth with the Transitions consulting team, along with a few key colleagues, over the next few weeks for an in-depth comparison of select hardware-accelerated live ingest and transcoding boxes.
Content Creation and Output
For content creation (editing) and output (compression), the promise of hardware acceleration is reduced time for complex sequences or compression schemes, better quality within the same period of time, or a bit of both.
On the content creation side, a good example is the Mercury Playback Engine, an underpinning of Adobe's new Premiere Pro editing application, available in the upcoming Creative Suite 5 Production Premium bundle.
Demonstrated at the recent National Association of Broadcasters' show in Las Vegas, the Mercury Playback Engine accelerates content creation by using latent graphics processor unit (GPU) cycles to play back multiple streams of very high-resolution content in real time.
At NAB, Adobe demonstrated the ability to play back several 4K RED digital cinema streams, using Mercury, with the net effect being that the old days of waiting for an HD or Digital Cinema timeline to render may be in the past.
"Blockbuster films are not made overnight," stated NVIDIA, who has partnered with Adobe on the Mercury Playback Engine acceleration, "but with Adobe Premiere Pro CS5 and NVIDIA Quadro graphics solutions, editing time can be drastically reduced."
The technology behind this acceleration is CUDA, NVIDIA's parallel computing platform and programming model, used specifically to program NVIDIA graphics cards for a variety of parallel processing tasks.
"Built using the NVIDIA CUDA parallel processing architecture," NVIDIA states, "the Mercury Playback Engine coupled with Quadro GPUs delivers real-time previewing and editing of native, high-resolution footage, including multiple layers of RED 4K video."
At the outset, Adobe is also limiting the number of GPUs it supports with its Mercury Playback Engine, and currently supports no mobile GPUs. A bit more on that topic can be found in the playback section of this article.
Hardware acceleration is also used to transcode files from one format to another. Some of the better-known transcoding solutions from RipCode, Media Excel, and the other companies mentioned above use hardware acceleration from CPUs, GPUs, or digital signal processors (DSPs).
For instance, Elemental's Server appliance uses CUDA to access its four NVIDIA GPUs, allowing eight 1080p streams to be transcoded simultaneously.
Even on the desktop, GPUs speed up transcoding: both the CS5 version of Adobe Media Encoder and Elemental's Accelerator significantly shorten the export of an Adobe Premiere Pro timeline to a standalone H.264 file.