Review: ViCueSoft CodecWar
CodecWar is an analysis service created by ViCueSoft, the developer of the codec analysis tools VQ Analyzer, VQ DVK, and VQ Probe. The site has two primary functions. First, it offers a free library of codec comparisons covering all relevant compression standards, proprietary and open source. Using this function, for example, you can quickly see how VVC stacks up against the latest version of AV2, with outputs including both RD-Curve and BD-Rate comparisons. This feature makes the site useful for anyone who needs to stay current on codec comparisons.
Second, the site allows you to configure an encoder, encode one or more test files, and produce RD-Curves for the output. In both use cases, you can use the test files provided on the site or upload your own, and you can encode with the codecs hosted on the site or with codecs you upload. However, if you upload your own clips or codecs for the comparison function, you’ll be charged for the associated processing time.
The site is a work in progress. As currently configured, the ideal users are researchers who are looking for a convenient way to compare codecs using relevant datasets and codec developers who are looking for a structured way to benchmark their codecs against others. In contrast, it’s not a particularly convenient way for streaming producers to run experiments to optimise their encoding parameters, although it could grow into this.
Compare Video Codecs
This function is free to end users and allows them to choose and configure different views of the data that ViCueSoft has already crunched. It’s a useful way to understand how different codecs compare using a range of clips and comparison methods. All of this data is free; you just need to configure the data you want to see and download the report.
To start, as seen in Figure 1, you can choose from a template or your own selection. At a high level, each comparison comes down to one or more codecs compared using one or more test clips.
Figure 1. When comparing codecs, you can choose a template or your own comparison points.
Codec support is growing; for example, when I started my review, the only VP9 codec available was SVT-VP9, which historically has been a poor performer. By the last day, ViCueSoft had added libvpx. The HEVC options were the MPEG reference encoder, which is great for researchers but irrelevant for producers, and SVT-HEVC, another poor performer. Here, x265 is the obvious choice for the HEVC codec (and I’m sure ViCueSoft will add it soon). You can upload any codec you like, but doing so means additional encoding charges and inconvenience.
In contrast, the service made better choices for AV1, including both SVT-AV1 and libaom, and VVC, including both the MPEG reference encoder and Fraunhofer’s open source implementation. Similarly, H.264 included both the MPEG reference encoder and x264, the logical choice. The service did an excellent job staying current on what I was curious about, which was how the latest version of AV2 compared to VVC.
You can see this in Figure 2. I’ve already selected the VVC reference encoder as the reference codec, which I’m comparing to the AVM/Research 4 codec using the random access (RA) configuration for both.
Figure 2. Comparing VVC with the latest version of AV2
Note the “How it works” button on the top left of Figure 2. In this case, it displays a YouTube video, but on other screens, it displays more detailed instructions. ViCueSoft does a nice job of sprinkling these help screens throughout the UI to assist your operations.
Next, you choose the clips to compare, as shown in Figure 3. You can sort by any of the parameters at the top; you can see the resolution options in the drop-down list. The site includes an excellent mix of test clips, including all genres and some familiar clips like CrowdRun, WITCHER3, and Tango, which you can view and download at codecwar.com/gallery.
Figure 3. Choosing which clips to compare
Note that not all test clips are available for all encoding comparisons; availability depends on the specific codec and configuration. For example, after ViCueSoft added libvpx, I attempted to compare it to x264 using the VBR configuration for both. Only one test clip was available in the free comparison, a 480x270@60 fps clip that held little interest.
To be clear, the most common comparisons will have multiple options; the VVC versus AV2 comparison I ran had more than 40 comparison clips. While the number of clips will increase over time, the more fringe your codec/configuration comparison, the fewer clips will likely be available for the free comparison.
Figure 4 shows the top section of the results. Overall, AV2 proved 26.2% more efficient than the reference VVC codec, as computed with the AOM Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) interpolation method rather than Polyfit. If you have no idea which interpolation method to choose, you’re in good company; fortunately, ViCueSoft has a blog post indicating that PCHIP is more accurate.
Figure 4. According to ViCueSoft’s calculations, AV2 is about 26% more efficient than the VVC reference codec, which is impressive.
Of course, no quality comparison is complete without understanding the command strings used for each encoder. You can access these for all transcodes through the Configurations tab in your Profiles page in the CodecWar interface. This tab also contains the configuration files you might need for performing your own custom transcodes in the second function I’ll describe.
At the bottom of Figure 4, you can toggle between BD-Rates (Bjontegaard Delta-Rates) and RD-Curves (Rate-Distortion Curves). As you probably know, a rate-distortion graph plots the quality levels of the codecs at different bitrates, while BD-Rate quantifies the difference into a single number.
For example, on the left of Figure 5, you can see the RD-Curve comparing VVC and AV2 at different bitrates for the MeridianTalk clip using the VMAF metric. AV2 is the bluish line on top, with the gray VVC line beneath it. As you might have guessed, the higher curve delivers the better quality.
Figure 5. Here’s the RD-Curve presentation from the CodecWar report for VMAF.
The BD-Rate calculation shown on the right quantifies these differences into a single number, in this case, 34.85%. For this analysis, VVC is the so-called anchor codec to which we’re comparing AV2. Since the number is in green, it means that, on average, AV2 delivers the same quality as VVC at a 34.85% lower bitrate. If the number were red, it would mean that AV2 was less efficient.
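The BD-Rate mechanics described above can be sketched in a few lines. Below is a minimal, unofficial Python illustration of the calculation using PCHIP interpolation (the AOM-style method mentioned earlier). The function name and the sample bitrate/quality points are my own illustrative choices, not ViCueSoft's actual implementation or data.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def bd_rate(anchor_rates, anchor_scores, test_rates, test_scores):
    """Average bitrate difference (%) of the test codec vs. the anchor
    at equal quality, using PCHIP interpolation."""
    # Fit log-bitrate as a function of quality for each curve;
    # PCHIP requires the quality scores to be strictly increasing.
    anchor = PchipInterpolator(anchor_scores, np.log10(anchor_rates))
    test = PchipInterpolator(test_scores, np.log10(test_rates))
    # Integrate both curves over the overlapping quality range only.
    lo = max(min(anchor_scores), min(test_scores))
    hi = min(max(anchor_scores), max(test_scores))
    avg_log_diff = (test.integrate(lo, hi) - anchor.integrate(lo, hi)) / (hi - lo)
    # Negative result: the test codec needs less bitrate than the anchor.
    return float((10 ** avg_log_diff - 1) * 100)

# Hypothetical VMAF/bitrate points: the test codec hits the same
# scores at exactly half the bitrate, so the result is about -50%.
print(bd_rate([1000, 2000, 4000, 8000], [70, 80, 88, 94],
              [500, 1000, 2000, 4000], [70, 80, 88, 94]))
```

As in the CodecWar report, a negative (green) value means the test codec matches the anchor's quality at a lower average bitrate.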
Again, referring to the bottom of Figure 4, the BD-Rate report shows the BD-Rate value for all clips; you can download a sample BD-Rate report for VMAF. The RD-Curve report includes the data shown in Figure 5 for all test clips; you can download a sample RD-Curve report for VMAF. Note that you can view and save either report for any of the metrics shown at the bottom of Figure 4.
Each time you choose a report, you can click “Read more about this study” shown at the top of Figure 4 to view the data in Figure 6. This provides an overview, links to the GitLab location where you can download the codec, and details about the metric and interpolation method.
Figure 6. Details about the report shown in Figure 4
Configure Video Encoder
The second CodecWar use case is to customise an encoding profile, encode a file, and produce the RD-Curves for any of the metrics shown in Figure 4. This option is more useful but also much more complicated and a bit disjointed. In addition, you must pay to play.
You work through this process with a five-step wizard, first choosing your codec. In this operation, you can select only a single codec. As shown in Figure 7, I’ve chosen AVC using the x264 codec. Again, you can choose any codec available in the system or, by contacting ViCueSoft, upload your own.
Figure 7. Creating my custom encoding pipeline for x264
At Step 2, you can select a preset, either one that comes standard with the service or one that you previously created, or you can create a new custom configuration. I chose a custom configuration, which brought me to Figure 8.
Figure 8. Configuring the custom encode
You start on the upper left by choosing a template, and the template parameters then display in the editable Encoder text field in the middle. You can change any parameter within the text field by simply editing the text. However, if you change the command string, you’re in charge of making sure the command string works. If it’s incorrect, you’ll fail the validation phase discussed next.
Similarly, for those codecs operated via a configuration file, you can upload a custom configuration file. As mentioned, to see what a configuration file comprises, you can download one created by ViCueSoft from the Configurations option in your customer profile screen. Unfortunately, the “How it works” video doesn’t address what a configuration file is or where to access one; it just states that you can upload one, which complicated the operation for me.
Once you finalise your changes or add a different configuration file, you click the Validation button on the lower right. The encoder attempts to transcode a short file to validate the new configuration. If it succeeds, you can proceed. If there’s an error, you must debug it and make the new configuration work before you can continue.
There’s no encoding GUI, and given the breadth of codecs available, there really couldn’t be. So, you’re in charge of ensuring the configuration options are correct. This shouldn’t be an issue for researchers who know the reference encoders or for codec developers evaluating their own codecs, but it might be for casual users seeking to run some encoding experiments. One complication for me was that the x264 option used the standalone x264 encoder, not x264 within FFmpeg, so typical FFmpeg commands didn’t work. That wasn’t a huge deal, but it slowed me down. If technically feasible, the developers should consider accepting FFmpeg-style commands for codecs like x264, x265 (if and when added), and VP9 instead of, or in addition to, the native encoder commands.
Returning to the interface tour, in the bottom middle of Figure 8, you can see the QP values used for the multiple encodes, which you can also edit. If you prefer, you can also choose the encoding points via bitrates, although you’ll have to choose or create a preset that encodes using VBR.
Then, you press Validate on the bottom right to verify that your changes are valid. In the upper left of the Validation screen, you can see the charges associated with the encoding job that you’ve programmed. The charges are denominated in CodecCash, which costs 0.9 euros per unit. You can read about pricing at codecwar.com/pricing.
The middle of the screen shows that we passed all of the validation tests and are free to proceed. Once you’ve validated a configuration (Figure 9), it’s saved with the default configurations, and it appears as an option each time you deploy that codec.
Figure 9. The new configuration has passed; we can proceed.
Next, you choose the clips to encode with the selected configuration. Pricing is based on the codec, resolution, and frame rate. In Figure 10, you can see that based on the formula applied by the site, CrowdRun (1080p@50 fps) counts as 39 streams, for a total price of eight CodecCash units.
Figure 10. Choosing the streams, setting the price, and starting the encode
Encoding time depends on the clip, codec, and configuration. Most of the preset configurations are set to maximum quality, which obviously extends the encoding time. After you start the encode, there’s a status screen you can check to monitor your progress. Once complete, you can access the results in the Workspaces tab. As shown in Figure 11, once you access the job, you can view three categories of results.
The Configuration tab details test parameters like command strings, while the Detailed log displays encoding times and other performance data. The Metrics tab contains the RD-Curves for the clips that you encoded, with all of the same metrics shown in the middle of Figure 11.
Figure 11. Here are the RD-Curves for the selected clips that I encoded.
You can export all results in JSON format for future processing, but you can’t run comparisons of the results files produced. So, if you ran CrowdRun with x264 with a single B-frame and then with 16 B-frames, you couldn’t load and compare the two in this interface.
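Because the site can't compare exported results, that sort of A/B analysis has to happen offline. Here's a minimal, stdlib-only Python sketch of what that post-processing might look like. Note that the JSON shape shown (a "clip" name plus bitrate/VMAF "points") is purely hypothetical, since the actual export schema isn't documented here.

```python
import json

# Hypothetical shape for two exported result files (e.g., one B-frame
# vs. 16 B-frames); the real CodecWar schema may differ.
run_a = json.loads('{"clip": "CrowdRun", "points": [[2000, 82.1], [4000, 89.5]]}')
run_b = json.loads('{"clip": "CrowdRun", "points": [[2000, 84.0], [4000, 91.2]]}')

def delta_at_matching_bitrates(a, b):
    """Pair points from the two runs by bitrate and report the
    quality-score difference (run b minus run a) at each rate."""
    scores_a = dict(a["points"])  # bitrate -> score lookup
    return [(rate, round(score - scores_a[rate], 2))
            for rate, score in b["points"] if rate in scores_a]

print(delta_at_matching_bitrates(run_a, run_b))  # [(2000, 1.9), (4000, 1.7)]
```

Even a small script like this restores the cross-run comparison the interface currently lacks.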
At a high level, the ideal customer for the second, encoding-related function of CodecWar is a compressionist who doesn’t have the programming skills (or time) to automate their testing and reporting activities. This certainly includes me.
The challenge is that each compressionist has their own unique analysis and reporting schema. Mine relies heavily on RD-Curve and BD-Rate data, but also on visualisations of the metric scores over the duration of the file to spot quality drops, as well as data like the lowest-quality frame in the file and standard deviation to assess the likelihood of transient quality issues and quality variability. I’d love the ability to upload some files, choose some parameters, and then download the results, but only if it provides the data I rely on to make what I feel are informed decisions. Building an application that can support many of these idiosyncratic schemas will be tough, but ViCueSoft is off to an impressive start.