
How to Produce VVC With FFmpeg


Anytime you start working with a new codec, there are some fundamental tests you should run to find the best balance of performance and quality. In this article, I'll take you through those tests while encoding VVC using a version of FFmpeg that includes the Fraunhofer VVC codec.

You can learn how to compile a version of FFmpeg that includes the Fraunhofer VVC codec; it does not appear that Fraunhofer's codec will be incorporated into the baseline versions of FFmpeg, for reasons discussed at go2sm.com/baseline.

Step 1: Master the Basics

Before you start experimenting with encoding options that impact quality and performance, you should begin with a script that creates a file suitable for adaptive bitrate (ABR) distribution. This typically means variable bitrate (VBR) encoding, a 2-second GOP size, and a closed GOP.

Usually, I experiment with two 10-second files: one a music video called Freedom, the other an American football test clip from Harmonic. Freedom is a bit easier to compress, with lower motion and less detail, while Football is a challenge, with high motion and lots of complex details.

I typically target a VMAF score of around 80–90 in my tests, which should stress the codec and the parameters I'm testing. If you shoot for 95-plus, you won't see significant differences from the parameters you're testing. If you target the 70s, you may produce results that aren't relevant for videos that you typically ship at 93–95 VMAF. I tested on Windows, but the tests and command strings should work with minimal modification on Linux and macOS.

This first command string creates a basic file that’s suitable for ABR distribution:

ffmpeg -y -i input.mp4 -an -vcodec vvc -b:v 1.4M -period 2 -subjopt 0 -vvenc-params "decodingrefreshtype=idr" output.mp4

Here’s an explanation of the key switches in this string.

  • ffmpeg: calls the program.
  • -i: specifies the input file.
  • -vcodec vvc: selects the VVC codec.
  • -b:v 1.4M: sets the target bitrate.
  • -period 2: sets the 2-second GOP.
  • -subjopt 0: disables subjective optimization because we're testing with metrics.
  • -vvenc-params: passes VVC-specific options.
  • decodingrefreshtype=idr: produces a closed GOP (open by default).

Most of these options are self-explanatory, although a couple warrant some discussion. First, regarding bitrate, my initial command string would normally limit variability to 200% constrained VBR. I tried adding maxrate and bufsize to the command string to accomplish this. While there was no error message, there was also no impact on file size or quality; the output was the exact same file with or without them, so I left them out. I asked Fraunhofer about this, and the company responded that "these parameters are currently not supported in VVenC but our RC should not overshoot 200%."
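For reference, this is roughly what that attempt looked like. It's a sketch assuming 200% constrained VBR on the 1.4Mbps target, using FFmpeg's standard maxrate and bufsize switches, which, again, VVenC currently ignores:

ffmpeg -y -i input.mp4 -an -vcodec vvc -b:v 1.4M -maxrate 2.8M -bufsize 2.8M -period 2 -subjopt 0 -vvenc-params "decodingrefreshtype=idr" output.mp4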

Also note that I disabled subjective optimization with -subjopt 0 because I was measuring quality throughout with VMAF. If you're producing for distribution, you should include the subjective optimizations; if you're producing files to compare against HEVC, AV1, or other codecs using metrics, you should keep -subjopt 0 and use a similar switch, like -tune psnr, with the other codecs.
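For example, a metric-friendly HEVC comparison string might look like the following. This is only a sketch using standard libx265 options; it assumes a 30 fps source, so keyint=60 produces a 2-second closed GOP:

ffmpeg -y -i input.mp4 -an -c:v libx265 -b:v 1.4M -tune psnr -x265-params "keyint=60:min-keyint=60:open-gop=0" output.mp4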

Encoding using this string provided the baseline quality level; my first experiment is always whether to use single-pass or two-pass encoding.

Single-Pass or Two-Pass Encoding?

One of the first lessons every budding compressionist learns is that two-pass encoding is superior to single-pass for VBR encoding. As the logic goes, the first pass identifies the easy-to-encode and hard-to-encode regions, enabling the encoder to allocate bitrate for the best overall quality. While that proved true in this case, it's not true universally, particularly with newer codecs whose rate control mechanisms haven't been optimized. So, you should always encode using both single-pass and two-pass modes, measure the encoding time and quality differences, and then decide.

For the record, I tested on a Dell Precision 7820 Tower with two Intel Xeon Gold 6226R CPUs running at 2.9 GHz and 64GB of RAM, running Windows 10 Pro for Workstations. Each CPU has 16 cores and 32 threads, for a total of 32 cores and 64 threads. Here's the command string that I used for two-pass encoding, with the new switches in green:

ffmpeg -y -i input.mp4 -an -vcodec vvc -b:v 1.4M -period 2 -subjopt 0 -vvenc-params "passes=2:pass=1:rcstatsfile=stats_free_VVC_2pass.json:decodingrefreshtype=idr" output.mp4

ffmpeg -y -i input.mp4 -an -vcodec vvc -b:v 1.4M -period 2 -subjopt 0 -vvenc-params "passes=2:pass=2:rcstatsfile=stats_free_VVC_2pass.json:decodingrefreshtype=idr" output.mp4

Again, the new switches are straightforward: the first calls for two-pass encoding, the second identifies the pass, and the third names a JSON file to store the first-pass statistics. Note that rather than sending the first-pass output to NUL, as you typically do with two-pass encoding, you have to name the output file in both passes (see the script sketch after this list).

  • passes=2: two-pass encoding
  • pass=1 / pass=2: identifies the first or second pass
  • rcstatsfile=stats_free_VVC_2pass.json: the stats file
  • You need to name the output file in both passes
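If you're scripting these tests, you can run both passes from a simple Bash loop. This is a minimal sketch, assuming the same input, bitrate, and stats file name used above:

# run pass 1, then pass 2, reusing the same stats file and output name
for PASS in 1 2; do
  ffmpeg -y -i input.mp4 -an -vcodec vvc -b:v 1.4M -period 2 -subjopt 0 -vvenc-params "passes=2:pass=${PASS}:rcstatsfile=stats_free_VVC_2pass.json:decodingrefreshtype=idr" output.mp4
done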

Table 1 (below) shows the basic analysis, again with the two 10-second files. As is usually the case, the encoding time difference was minimal; the first pass is almost always very fast and, in this case, added less than 3% to encoding time on average. In the table, a green background designates the best performance and a yellow background the worst. A quick glance reveals that two-pass delivers better overall quality, better low-frame quality, and a lower standard deviation.

Table 1. Single-pass vs. two-pass VVC encoding results

Average VMAF is computed using the harmonic mean, which incorporates quality variability into the overall score by weighting low-scoring frames more heavily than a simple average would. Low frame is the lowest score for any single frame in the video, which is a predictor of transient quality issues. Standard deviation measures the variability of the quality scores. Whereas higher scores are better for the first two metrics, lower is better for standard deviation.

I computed all quality scores with the Moscow State University Video Quality Measurement Tool (VQMT). In the "a picture is worth a thousand words" category, it also produces the graph shown in Figure 1 (below), which plots the per-frame VMAF score for single-pass in red and two-pass in green. This is the Football clip, and you can see multiple regions where single-pass encoding quality drops significantly below that of two-pass.
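If you don't have VQMT, FFmpeg's libvmaf filter can generate comparable per-frame scores, assuming your build includes libvmaf. In this sketch, the encoded file is the first (distorted) input and the source is the second (reference) input, and the per-frame scores land in the JSON log:

ffmpeg -i output.mp4 -i input.mp4 -lavfi libvmaf=log_fmt=json:log_path=vmaf.json -f null -

Both inputs need matching resolutions and frame rates, so scale the encoded file back to the source dimensions first if necessary.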

Figure 1. Per-frame VMAF scores for single-pass (red) and two-pass (green) encoding of the Football clip

Note that this Results plot includes a Show Frame button that lets you view any frame in the video and toggle between the source file and all encoded versions. I did that here, and the lower-quality regions in the single-pass file manifested as blurry numbers and loss of detail. Two-pass encoding it is, and on to the next tests.

Thread Count and Wavefront Synchro

Blessed with a 32-core/64-thread machine, I was eager to learn how many of those threads I could fruitfully apply to VVC encoding. One of the tips that Fraunhofer provided was that if you have more than 8 cores available, you get more speed by defining more than 8 threads and adding the following:

-vvenc-params "wavefrontsynchro=1:tiles=2x2"

It's generally a bad idea to test two parameters (threads and wavefront synchro) at the same time, and that proved to be the case here. Here's the string I used, with the new switches in green:

ffmpeg -y -i input.mp4 -an -vcodec vvc -period 2 -threads 16 -preset fast -subjopt 0 -vvenc-params "passes=2:pass=1:rcstatsfile=stats_F1_4M_fast.json:decodingrefreshtype=idr:wavefrontsynchro=1:tiles=2x2" -b:v 1.4M output.mp4

ffmpeg -y -i input.mp4 -an -vcodec vvc -period 2 -threads 16 -preset fast -subjopt 0 -vvenc-params "passes=2:pass=2:rcstatsfile=stats_F1_4M_fast.json:decodingrefreshtype=idr:wavefrontsynchro=1:tiles=2x2" -b:v 1.4M output.mp4
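If you'd rather not edit the string for each test, you can sweep several thread counts in a nested Bash loop. This is just a sketch; the thread values, stats names, and output names are examples:

# sweep thread counts; each count gets its own stats file and output file
for T in 8 16 32 64; do
  for PASS in 1 2; do
    ffmpeg -y -i input.mp4 -an -vcodec vvc -period 2 -threads $T -preset fast -subjopt 0 -vvenc-params "passes=2:pass=${PASS}:rcstatsfile=stats_${T}threads.json:decodingrefreshtype=idr:wavefrontsynchro=1:tiles=2x2" -b:v 1.4M output_${T}threads.mp4
  done
done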

-threads sets the number of threads, while wavefrontsynchro applies the recommended additional switch. Seeking to flex all of the cores on the Dell workstation, I first tested up to 64 threads with wavefront synchro enabled, but I also tested eight threads with and without it. Figure 2 (below) shows my initial findings, with encoding time in green, overall quality in red, and low-frame quality in blue. All scores show each setting's results as a percentage: for encoding time, 100% is the longest encoding time; for the quality scores, 100% is the highest quality score.

Figure 2. Encoding time and quality at different thread counts, with and without wavefront synchro

So, testing with eight threads and no wavefront synchro produced the longest encoding time, but delivered 100% of overall and low-frame quality. Adding wavefront synchro to the eight-thread encode reduced encoding time by about 8.5%, but dropped overall quality by 1.3% and low-frame quality by 3.2%. So, it appears that wavefront synchro negatively impacted quality.

The other obvious conclusion was that going beyond 16 threads had no impact on performance or quality. I checked this with Fraunhofer, which confirmed that the advantages for 1080p video top out at 16 threads but that higher resolutions will benefit from higher thread counts. So, for my 1080p test videos, I tested 16 threads with wavefront synchro disabled, along with one and four threads (no wavefront), to produce Figure 3 (below). This clarifies the impact of wavefront synchro and more accurately delineates the configuration options.

Figure 3. The impact of threads and wavefront synchro on encoding time and quality

Here are a few observations drawn from these results:

  1. A single thread produced the best overall quality, by a hair.
  2. Going from one to four threads produced almost a 4x increase in encoding speed; the gains slowed dramatically thereafter.
  3. Omitting the -threads switch entirely produced a better result than specifying eight threads, delivering the same quality while reducing encoding time by 8.5%. According to Fraunhofer, this is because leaving out the -threads switch triggers auto-detection, which uses more threads if the encoder detects that your CPU has multiple cores.
  4. Wavefront synchro reduced encoding time but also dropped quality in both cases.
  5. 16 threads without wavefront synchro reduced encoding time by 25.5%, with minimal quality loss.

This breaks down into two issues: how to optimize quality and production efficiency. We’ll look at quality first.

Let’s assume that you opted for 16 threads with wavefront synchro, which drops encoding time by 33% but also overall quality by 1.5% and low-frame quality by 3.2%. If you have a specific quality target for your videos, like 93 VMAF, you’ll have to boost the bitrate of your videos slightly to achieve this quality target.

In selecting wavefront synchro, you’re not making a “quality” decision; you’re choosing to reduce encoding costs but increase bandwidth costs to recover the lost quality. The best option relates directly to how many times your typical video will be viewed. If your video will be watched millions of times, it makes the most sense to produce the highest possible quality regardless of encoding cost. If your typical view counts are in the dozens or hundreds, it makes better economic sense to reduce encoding costs and boost the bandwidth to compensate. Now let’s look at production efficiency.

Choosing 16 threads without wavefront synchro delivers 100% of the quality with an 83.6% encoding time reduction compared to a single thread, which translates to about a 6.1x performance increase. Does this make economic sense? It doesn't, as you can see in Table 2 (below).

Table 2. VVC cost per hour using different thread counts

Table 2 shows the computed cost per hour of producing VVC using different thread counts. The fundamental problem is that Amazon Web Services pricing is ruthlessly linear: four threads cost 4x what a single thread costs, and so on. Unless your encoding speed scales just as linearly, moving to higher thread counts decreases your encoding time but increases your cost per hour of output. That's why a single thread produces the lowest cost per hour.

Four threads comes closest to linear scaling, but encoding with four threads improves encoding speed by only 3.5x over a single thread while costing 4x more, so overall, it's more expensive. Beyond four threads, threading efficiency really drops, which is why cost per hour balloons. Note that this analysis isn't unique to VVC; it's similar for AV1 and the other codecs that I've tested. From a pure throughput perspective, it seems very difficult to efficiently leverage the extra threads once you go beyond four or eight.
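To put numbers on that: if pricing scales linearly with thread count, cost per hour of output is roughly the price multiple divided by the speed multiple. Four threads at 4x the price and 3.5x the speed cost about 4/3.5, or roughly 1.14x, as much per output hour as a single thread; 16 threads at 16x the price and 6.1x the speed cost about 16/6.1, or roughly 2.6x, as much.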

So, the best economic decision is to produce with a single thread, although four threads is close and likely drops carbon emissions significantly. Go beyond four threads, and your production costs start to grow significantly.

Of course, the configuration that you use for production doesn't have to be the configuration that you use for testing. Given that encoding with multiple threads has a negligible (0.02%) impact on overall quality, I used 16 threads without wavefront synchro to test all of the presets.

Choosing a Preset for VVC Encoding

As you probably know, presets control the trade-off between performance and quality. You choose a preset with VVC just as you do with most other FFmpeg-based codecs, as seen in green in the following:

ffmpeg -y -i input.mp4 -an -vcodec vvc -period 2 -preset slow -subjopt 0 -vvenc-params "passes=2:pass=1:rcstatsfile=stats_F1_4M_slow.json:decodingrefreshtype=idr" -b:v 1.4M output.mp4

ffmpeg -y -i input.mp4 -an -vcodec vvc -period 2 -preset slow -subjopt 0 -vvenc-params "passes=2:pass=2:rcstatsfile=stats_F1_4M_slow.json:decodingrefreshtype=idr" -b:v 1.4M output.mp4
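To generate the data for all five presets, you can loop over them in the same way. This sketch assumes the standard VVenC preset names (faster through slower) and adds the -threads 16 setting described above; the file names are just examples:

# run the same two-pass encode once per preset
for P in faster fast medium slow slower; do
  for PASS in 1 2; do
    ffmpeg -y -i input.mp4 -an -vcodec vvc -period 2 -threads 16 -preset $P -subjopt 0 -vvenc-params "passes=2:pass=${PASS}:rcstatsfile=stats_F1_4M_${P}.json:decodingrefreshtype=idr" -b:v 1.4M output_${P}.mp4
  done
done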

Figure 4 (below) shows the five presets available for the VVC codec and the trade-offs in terms of overall quality (red), low-frame quality (green), and encoding time (blue). As before, these numbers are on a scale from zero to 100%, so the slow preset delivers 99.27% of the overall quality and 99.3% of the low-frame quality of the slower preset in 27.38% of the encoding time.

Figure 4. Quality and encoding time trade-offs of the five VVC presets

As with the threads decision, this isn’t really a quality decision; it’s a bandwidth cost versus encoding cost decision. Encoding with the Slow preset shaves roughly 73% from encoding time, so if you’re running your own encoder, it cuts the encoding cost by 73%. However, to achieve the same quality as with the slower preset, you’ll have to boost the bitrate by 3.5%. As before, your expected view count will dictate your preset selection. If your view count scales to seven figures and beyond, prioritize bandwidth costs over encoding charges; if your expected audience is in the 3–5 figures range, minimize encoding charges.

Testing new codecs or encoders with these techniques provides a structured way to understand how different options impact quality and throughput and how to choose the best options for your productions.
