Amazon Elastic Transcoder: Review
For output, Amazon supports the H.264 and VP8 video codecs and the AAC, Vorbis, and MP3 audio codecs, in Ogg, WebM, and MP4 containers and MPEG-2 transport streams for HLS. If you need Smooth Streaming, Flash HTTP Dynamic Streaming (HDS), or DASH, you’re out of luck.
When producing H.264, Amazon currently supports single-pass encoding only, with constant and variable bitrate control. In both the UI and the API, you can select the number of reference frames, but not the B-frame interval, and you can’t select which entropy coding technique (CABAC or CAVLC) the Transcoder applies. Though Amazon uses the x264 codec, neither the UI nor the API exposes presets or tuning options, the quickest way to balance encoding speed against quality. There’s also no support for closed captions or digital rights management.
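To make the gap concrete, here is a sketch of the H.264 video settings an Elastic Transcoder preset accepts, using the field names from the service’s CreatePreset API (as exposed through boto3). The specific values are illustrative only, and the commented-out client call assumes live AWS credentials:

```python
# Sketch of H.264 video settings for an Elastic Transcoder preset.
# Field names follow the CreatePreset request structure; all values
# in this API are strings, and the ones below are illustrative.
import json

h264_video_settings = {
    "Codec": "H.264",
    "CodecOptions": {
        "Profile": "main",          # baseline | main | high
        "Level": "3.1",
        "MaxReferenceFrames": "3",  # reference frames ARE exposed...
        # ...but there is no key for B-frame interval, CABAC vs. CAVLC,
        # or x264 presets/tuning -- those knobs simply don't exist here.
    },
    "KeyframesMaxDist": "90",
    "FixedGOP": "true",             # fixed GOPs, as needed for HLS segmenting
    "BitRate": "600",               # target bitrate in Kbps
    "FrameRate": "30",
    "MaxWidth": "640",
    "MaxHeight": "360",
    "SizingPolicy": "ShrinkToFit",
    "PaddingPolicy": "NoPad",
    "DisplayAspectRatio": "auto",
}

# With credentials configured, the preset would be registered like this:
# import boto3
# et = boto3.client("elastictranscoder", region_name="us-east-1")
# et.create_preset(Name="360p-600k", Container="ts",
#                  Video=h264_video_settings, Audio={...}, Thumbnails={...})

print(json.dumps(h264_video_settings, indent=2))
```

The absence of B-frame, entropy-coding, and x264 tuning keys in CodecOptions is the point: the ceiling on quality/speed tradeoffs is baked into the API itself, not just the console.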
Creating Encoding Jobs
Once you’ve created your presets, it’s time to create an encoding job (Figure 5). This is straightforward once you figure out that Input Key means the source file, Output Key Prefix means the output folder within a bucket, and Output Key means the output file name. Unfortunately, there is no auto-naming option for the output file, like inputname_preset or inputname_date, so you have to type in each output name manually.
Figure 5. Creating a new transcoding job.
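The same step through the API makes the naming scheme easier to see. This is a sketch of a CreateJob request (boto3 shape), mapping the console’s labels onto request fields; the pipeline ID, preset ID, and file names are placeholders:

```python
# Sketch of an Elastic Transcoder CreateJob request, mapping the console's
# labels onto API fields: "Input Key" -> Input.Key (source file),
# "Output Key Prefix" -> OutputKeyPrefix (folder within the bucket),
# "Output Key" -> Outputs[n].Key (output file name).
# The IDs and file names below are placeholders.
job_request = {
    "PipelineId": "1111111111111-abcde1",            # placeholder pipeline ID
    "Input": {"Key": "sources/test-clip.mp4"},       # "Input Key" = source file
    "OutputKeyPrefix": "encoded/",                   # "Output Key Prefix" = folder
    "Outputs": [
        {
            "Key": "test-clip_640x360_600k.mp4",     # "Output Key" = file name,
            "PresetId": "1351620000001-000040",      # typed by hand every time
        }
    ],
}

# With credentials configured:
# import boto3
# et = boto3.client("elastictranscoder", region_name="us-east-1")
# response = et.create_job(**job_request)
```

Note that nothing derives Outputs[n].Key from Input.Key; whether in the console or via the API, the output name is supplied manually for every output.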
To encode a single file to multiple outputs, you click Add Another Output at the bottom of Figure 5, choose the preset, and enter the required information. You can’t add multiple presets at once, or set universal parameters such as segment size once for all HLS iterations; it takes 24 separate groups of steps to add 24 presets to the job. Once you’ve added all of your presets, you have to set the order for your HLS playlist (Figure 6), a step all other encoders perform automatically.
Figure 6. Identifying all the outputs in the master HLS playlist.
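In API terms, the repetition looks like this: each rendition is its own Outputs entry with its own segment duration, and the master-playlist order is then fixed by hand via the Playlists field. A sketch, with made-up preset names and IDs:

```python
# Sketch of the multi-rendition portion of an HLS CreateJob request.
# Each rendition needs its own Outputs entry (with its own segment
# duration -- there is no job-wide setting), and the master playlist's
# variant order is set manually via Playlists[n].OutputKeys.
# Preset names and IDs are placeholders.
renditions = [
    ("hls-400k", "1351620000001-200050"),
    ("hls-600k", "1351620000001-200040"),
    ("hls-1200k", "1351620000001-200030"),
    # ...one entry per preset, added one at a time in the UI
]

outputs = [
    {"Key": name, "PresetId": preset_id, "SegmentDuration": "9"}
    for name, preset_id in renditions
]

playlists = [{
    "Name": "master",
    "Format": "HLSv3",
    # Ordering the variants in the master playlist is manual here, too.
    "OutputKeys": [o["Key"] for o in outputs],
}]
```

Scripting the loop is the obvious workaround for the 24-preset grind, but it only papers over the lack of job templates in the service itself.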
Once you create a job, there’s no jobs bin or similar scheme to save it for later reuse, which would have been nice. However, within the log file, you can copy jobs, edit them by adding new input files, and rerun them. While clumsy, this is a saving grace for those running repetitive jobs within the user interface.
Quality and Performance
So the user interface was spotty; how was the output? Elastic Transcoder was generally accurate at meeting the target and maximum data rates, which is always appreciated. You can see this in Figure 7, which shows a file encoded using a target of 600Kbps and a maximum of 750Kbps.
Figure 7. Amazon was very good at meeting target and maximum data rates (here 600Kbps target, 750Kbps maximum).
Regarding quality, I compared Amazon’s output to Zencoder’s using the most aggressive preset in the adaptive group, 640x360@600Kbps. Given that Amazon’s output was single-pass and Zencoder’s dual-pass, I anticipated substantial qualitative differences, but that wasn’t the case. As you can see in Figure 8, Zencoder was slightly sharper, but not by a margin any viewer would notice without a side-by-side comparison.
Figure 8. Not much difference here.
I then produced files at a ridiculously aggressive 1280x720@600Kbps with Zencoder and Amazon, and again, saw very little difference. At least for my test file, Amazon’s encoding engine worked very well.
Unfortunately, performance results were much less impressive. Here I encoded the ten files shown in Table 1 to the previously discussed list of 24 presets. Zencoder finished in 1:48:21 (hours:minutes:seconds); Amazon took 4:37:12, almost three times longer.
Table 1. Zencoder proved much faster in this long test.
It’s possible that I could have achieved faster results with Amazon by spreading the jobs over multiple pipelines, though as you can see in Figure 3, I should have been able to process 20 jobs simultaneously in U.S. East. I did check the output files and saw that Amazon started all jobs almost immediately, so there were no queuing delays. I dutifully checked the FAQs, and saw:
Q: How many jobs are processed at once?
Transcoding jobs are queued and are processed in parallel according to system capacity. You control the parallel processing of jobs through transcoding pipelines.
Not really that helpful, and at $270 per series of jobs, I thought it prudent not to experiment. I would have asked a technical resource at Amazon, but the company declined to make one available. Besides, from a competitive standpoint, all other SaaS cloud encoding services shield you from this type of juggling: you run your jobs, and they figure out how to process them as quickly as possible within the relevant service level agreement (SLA).
Speaking of SLAs, the Amazon Elastic Transcoder does not offer one at this time. There are no service-related assurances, making the Elastic Transcoder a bad choice for producers who need guaranteed performance.
As a final test, I used Apple’s MediaStream Analyzer to measure the overhead in the HLS streams encoded with the 640x360@1200Kbps preset. Here, the Elastic Transcoder showed an overhead rate of 11.47%, compared to 3.15% for Zencoder. Basically, this means that the Elastic Transcoder requires a roughly 8% higher data rate to deliver HLS output than Zencoder does, which should translate directly to higher delivery charges for these streams.
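The roughly 8% figure follows directly from the two measured overhead rates, assuming overhead is expressed relative to the video payload:

```python
# Back-of-envelope check of the container-overhead comparison. If a
# stream's delivered size is payload * (1 + overhead), then for the
# same payload, the extra bits Amazon's HLS output needs relative to
# Zencoder's work out to:
amazon_overhead = 0.1147    # measured: 11.47%
zencoder_overhead = 0.0315  # measured: 3.15%

extra = (1 + amazon_overhead) / (1 + zencoder_overhead) - 1
print(f"{extra:.1%}")  # -> 8.1%
```

That extra 8% applies to every HLS byte delivered, which is why it flows straight through to bandwidth charges.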
Where does that leave us? For broadcasters and high-volume producers, Amazon has critical deficits in input formats (no ProRes, ARRI, or RED), features (no captions, no DRM), and workflow, and the lack of an SLA alone makes it unusable for that market. Below this class at the tippy top of the pyramid is a group of producers who rank output quality as their first, second, and third concerns; here Amazon proved competitive, but certainly not the very best. Though the user interface could use some help, it’s workable, and few producers really need the absolute fastest encoding time.
When considering Amazon, however, remember the limitations regarding source and delivery location. When comparing prices, factor in that Amazon offers no price break for transmuxed content, so it might not be the lowest-cost provider. In addition, if you’re paying for content delivery, remember that you’ll need a slightly higher data rate to deliver equivalent quality with the Amazon encoder, and that its HLS output is less efficient, so delivery charges will be comparatively higher.
We've upgraded our testing methodology to accurately reflect what broadcasters and other high-volume video publishers ask of their encoding systems. Our first subjects: Elemental Server and Telestream Vantage Lightspeed Server.