Tutorial: Improving Video Color Quality
[Editor's Note: This article appears in the December/January issue of Streaming Media magazine.]
Over the past 10-plus years, many customers and colleagues have asked me how to improve the quality of their web videos. The first thing I tell them is to get out of the it’s-only-for-the-web mentality. Web viewership is growing faster than anyone thought possible, and its adoption has outpaced TV’s over a comparable span. So think quality first, and remember that a shoot for the web is the same as a shoot for HDTV.
Once you’re in that mind-set, there are three more ways to improve quality. First, know your equipment and how to adjust it for optimal output. Second, adjust your white and black balance during acquisition. Finally, raise your contrast and complexity during encoding. By doing these things, you can make your video look great and differentiate yourself from everyone else.
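To make the contrast step concrete, here is a minimal sketch of a linear contrast stretch you might apply to a frame's luma values before encoding. The function name and the 8-bit value range are illustrative assumptions, not part of any particular encoding tool.

```python
# Minimal sketch: a linear contrast stretch applied before encoding.
# Assumes 8-bit luma values (0-255); names are illustrative only.

def stretch_contrast(pixels, black_point, white_point):
    """Remap [black_point, white_point] to the full 0-255 range,
    clipping values that fall outside the chosen points."""
    scale = 255.0 / (white_point - black_point)
    out = []
    for p in pixels:
        v = round((p - black_point) * scale)
        out.append(max(0, min(255, v)))  # clip to the legal 8-bit range
    return out

# A low-contrast frame whose luma spans only 40-200:
frame = [40, 80, 120, 160, 200]
print(stretch_contrast(frame, 40, 200))  # now spans the full 0-255 range
```

Picking the black and white points from the darkest and brightest meaningful areas of the frame, rather than from stray noise pixels, keeps the stretch from crushing shadows or blowing out highlights.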
A major element of great-looking webcasts or streaming media is content acquisition. Without a decent camera and knowledge of its characteristics, the video will look unimpressive. The first step is getting to know your camera. There isn’t one brand that is best—that’s totally subjective and up to personal preference—but simply knowing your camera’s brand will tell you quite a bit about its characteristics. It doesn’t matter if you are doing a single-camera or a multicamera shoot. Just knowing the brand and how each one differs allows you to set up the camera properly. These differences affect the white balance and color saturation.
You might notice how one camera’s reds look brighter and appear to "pop," while another brand’s reds are dull but its blues appear to pop. This can be true whether the camera is professional or prosumer. One brand’s camera might use complementary metal oxide semiconductor (CMOS) sensors while another uses charge-coupled device (CCD) sensors.
Making the CMOS/CCD Decision
The difference between CMOS and CCD isn’t something most people need to know. However, understanding how each one works and the differences between them can tell you which filters you should be using and how light is affected. One good question to ask is, "What is CMOS, and why is it good for a camera and my production?" CMOS sensors use multiple transistors to amplify and move the charge provided by incoming photons of light, enabling the pixels to be read individually.
One of the advantages of CMOS is its low power consumption. CMOS sensors consume up to 100 times less power than CCDs. Because CCDs are essentially capacitive devices, they need external control signals and large clock swings to achieve acceptable charge transfer efficiencies.
Another advantage of CMOS is its adaptability to high frame rates and resolutions. Most CMOS sensors natively capture at HD resolutions, but because the sensor can read out just the pixels in a region of interest, reducing the readout resolution frees bandwidth for higher frame rates. This windowed readout is a real advantage of CMOS: set your camera up right, and you can use it in high-frame-rate applications.
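The tradeoff behind windowed readout can be sketched with simple arithmetic: a sensor's total pixel bandwidth is roughly fixed, so reading fewer pixels per frame allows proportionally more frames per second. The bandwidth figure below is hypothetical, chosen only to make the ratio easy to see.

```python
# Illustrative sketch: why region-of-interest readout enables higher
# frame rates. The bandwidth figure is hypothetical, not a real sensor spec.

READOUT_BANDWIDTH = 1920 * 1080 * 60  # pixels per second the sensor can move

def max_fps(width, height):
    """Maximum frame rate at a given readout resolution,
    assuming a fixed total pixel bandwidth."""
    return READOUT_BANDWIDTH / (width * height)

print(max_fps(1920, 1080))  # full-sensor readout: 60 fps
print(max_fps(1280, 720))   # smaller readout window: 135 fps
```

Halving the pixel count per frame doubles the achievable frame rate under this simplified model; real sensors also have per-row overheads, so the scaling is not perfectly linear.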
A final advantage of CMOS is that the sensors don’t suffer from smear or blooming artifacts; they deliver a clean, high-quality image.
However, like everything else in life, CMOS sensors have their drawbacks. First, they aren’t as sensitive to light as your typical CCD sensors. So if you’re in a low-light setting, you might run into problems.
Another issue is that these sensors often don’t have infrared (IR) filters installed on them, which is more common in industrial applications. Without an IR filter, your colors will be skewed: the IR contamination shifts the spectrum so that greens render as brown. But this is an easy fix; simply add a filter. Many CMOS sensors use a Bayer filter, which passes red, green, or blue light to selected pixels. Lastly, CMOS is considered noisier than CCD, but in most cases the difference is only noticeable on test equipment.
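The Bayer arrangement mentioned above can be sketched as follows: each photosite samples exactly one channel, in a repeating 2x2 pattern with two green sites per block (mimicking the eye's greater sensitivity to green), and a demosaicing step later interpolates the two missing channels at every pixel. The RGGB layout and function name here are one common convention, used purely for illustration.

```python
# Sketch of an RGGB Bayer mosaic: each photosite samples one channel.
# The pattern repeats every 2x2 block, with green appearing twice per
# block. Layout and names are illustrative assumptions.

def bayer_channel(row, col):
    """Channel sampled at (row, col) for an RGGB Bayer layout."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the top-left 4x4 corner of the mosaic:
for r in range(4):
    print(" ".join(bayer_channel(r, c) for c in range(4)))
```

Running this prints alternating R G / G B rows, which is why raw sensor data must be demosaiced before it looks like a normal color image.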