Video: How to Measure Picture Quality
Learn more about video quality measurement at Streaming Media's next event.
Watch the complete video of this panel from Streaming Media West, DT201A: Non-Reference Picture Quality Assessment, in the Streaming Media Conference Video Portal.
Read the complete transcript of this clip:
Andrew Scott: Picture quality. We can consider a couple of different methods.
Subjective picture quality, as implied by the name, is basically our human perception of how good the video is. And typically, subjective scoring is done by actually getting a group of people--a sufficiently large group, maybe a couple of dozen people, in a controlled environment to view some test material and give their opinion on what the score is.
Typically, we use something called the Mean Opinion Score, or MOS, which is a five-point scale as shown here, and basically the average, the mean of the scores from those 20 or 30 people viewing the content, gives us our Mean Opinion Score.
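The arithmetic behind MOS is exactly what the name says. A minimal sketch, using hypothetical panel scores on the standard five-point scale:

```python
# Hypothetical subjective scores from a viewing panel, on the 5-point MOS scale
# (1 = Bad, 2 = Poor, 3 = Fair, 4 = Good, 5 = Excellent).
scores = [4, 5, 3, 4, 4, 5, 3, 4]

# The Mean Opinion Score is simply the arithmetic mean of the panel's ratings.
mos = sum(scores) / len(scores)
print(round(mos, 2))  # 4.0
```

In practice the panel is larger (a couple of dozen viewers, as noted above) and the viewing conditions are controlled, but the score itself is just this average.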
It's as simple as that. An objective measurement, though, is basically an algorithm, a machine-generated way that tries as best as it can to approximate that subjective value.
A really good objective measurement would be one that has a high degree of correlation with the subjective scores, and that's really the target here.
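That correlation target can be made concrete. A sketch of validating an objective metric against subjective scores, computing the Pearson correlation coefficient between per-clip MOS values and per-clip metric scores (all numbers are hypothetical):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

mos_scores = [4.2, 3.1, 2.0, 4.8, 3.6]          # panel MOS per test clip
metric_scores = [38.0, 31.0, 24.0, 42.0, 34.0]  # objective score per clip (e.g., PSNR in dB)

r = pearson(mos_scores, metric_scores)
print(round(r, 3))
```

A metric whose r is close to 1.0 across a broad set of test content is a good stand-in for a subjective panel; a low correlation means the algorithm disagrees with human viewers.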
With picture quality, we've got a couple of different ways of making a measurement. First of all, we'll consider what I call a full-reference picture quality measurement, and there's various algorithms in place now that do this.
DMOS stands for Differential Mean Opinion Score. PSNR is peak signal-to-noise ratio. SSIM is structural similarity. VMAF is the Netflix-developed measurement. All of these are examples of a full-reference measurement, and the key thing about a full-reference measurement is you actually have two copies of the material: the original source and the distorted or modified version.
What we typically do here is make a comparison, and so each of those algorithms is essentially a comparison metric. It uses pixel values from both images, and it makes a measure of essentially the difference between them. So really it's more of a fidelity measurement than a quality measurement, all right? We're not sort of objectively comparing, objectively looking at the quality by itself, but it's more of the difference between them.
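PSNR is the simplest of these comparison metrics and shows the pixel-difference idea directly. A minimal sketch over flat lists of pixel values (real implementations work on full 2-D frames, and the sample values here are made up):

```python
import math

def psnr(original, distorted, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-sized sets of pixel values."""
    mse = sum((o - d) ** 2 for o, d in zip(original, distorted)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: no difference to measure
    return 10 * math.log10(max_val ** 2 / mse)

ref = [52, 60, 61, 200, 120, 40]  # pixels from the original source
enc = [50, 62, 60, 198, 121, 41]  # same pixels after encoding

print(round(psnr(ref, enc), 2))  # ≈ 44.15 dB
```

Note that the score depends only on the numerical difference between the two copies, which is exactly why it measures fidelity rather than quality in the absolute sense.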
In the example here, what if my distorted image is actually better than the original? It's not really distorted, but it's better. That might actually have a low objective score because it's, again, different.
It's different in a good way, so that's why I say it's more of a measure of fidelity than quality.
The key thing about a full-reference measurement is you need access to both sources, right? They need to be present together. So it's impractical for live material, where at the monitoring point where you're making the observation, you don't have access to that original material.
I would say a full-reference measurement is best suited for an in-the-lab-type measurement, and in fact that's what's done a lot of times if we're doing a codec evaluation.
How good is my codec in terms of reducing the distortion compared to the original? I've got my original. I've got my post-encode version. I can compare them together in the lab.
These techniques are often used in that kind of environment.