So what is "Component Video" anyway?
February 26, 2001
If you are just getting into home theater you will no doubt be confused by a lot of the jargon. And since the term component video is sure to befuddle just about everyone, here's a little primer on the subject. It might sound a little technical at first, but if you've got a DVD player, read on for some important information.
Starting at the beginning: RGB
The human eye has receptors on the retina known as rods and cones. Rods are sensitive to luminance, and the cones detect color. Each cone is uniquely sensitive to light in one of three parts of the visible spectrum--either red/orange, green/yellow, or blue/violet. Therefore, since your eyes are particularly efficient at seeing red, green, and blue, a video system only needs to capture and reproduce red, green, and blue, or RGB as it's called. The camera must capture RGB on the front end. That information must be delivered accurately to your television or projector, which must display RGB. By varying the intensity of red, green, and blue, every color of the spectrum can be reproduced. So what you end up with is perfectly natural color on your screen.
A Problem: Bandwidth
So how do you transport an image from the camera to your TV or projector? You could transmit it in the RGB format in which the camera first captured it. However, RGB is a bandwidth hog and bandwidth is expensive. So the first thing that happens is RGB is converted into a more compact format. This format is component video.
Component video consists of three signals. The first is the luminance signal, which carries the brightness, or black & white, information contained in the original RGB signal. It is referred to as the "Y" component. The second and third signals are called "color difference" signals, which indicate how much blue and red there is relative to luminance. The blue component is "B-Y" and the red component is "R-Y". The color difference signals are derived mathematically from the RGB signal.
Green doesn't need to be transmitted as a separate signal since it can be inferred from the "Y, B-Y, R-Y" combination. The display device knows how bright the image is from the Y component, and since it knows how much is blue and red, it figures the rest must be green so it fills it in.
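The conversion described above can be sketched in a few lines of code. This is a simplified illustration, not broadcast-accurate math: the luminance weights below (0.299, 0.587, 0.114) are the standard NTSC-era coefficients, and real systems also scale the color-difference signals, which is omitted here.

```python
def rgb_to_component(r, g, b):
    """Convert RGB values (0..1) to Y, B-Y, R-Y component video signals."""
    # Luminance is a weighted sum of R, G, B (standard NTSC-era weights).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    # The color difference signals: how much blue and red relative to luminance.
    return y, b - y, r - y


def component_to_rgb(y, b_minus_y, r_minus_y):
    """Recover RGB from Y, B-Y, R-Y. Green is never transmitted directly."""
    b = b_minus_y + y
    r = r_minus_y + y
    # Whatever brightness isn't accounted for by red and blue must be green.
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b
```

Note that the round trip is exact: green is fully determined by the other three values, which is why leaving it out costs nothing.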
Once we've got our video information packaged up in component video format, we've reduced bandwidth requirements by a ratio of 3 to 2. But more compression was required for broadcast purposes. So back in 1953, when color television was born, a technique was developed to compress all of the component video information into one signal for broadcast. That one signal, defined by the National Television System Committee (NTSC), is known as composite video.
Composite video shows up everywhere these days. It is (except for HDTV) what comes over the air to your TV's antenna, or through the coaxial cable from your cable TV provider. The yellow "video" jacks on the back of your VCR, laserdisc player or DVD player all output composite video.
The good news is that it only takes one wire to carry a composite video signal. The bad news is that the display system, whether it's a television or projector, needs to un-compress the composite signal, restore it to its original three-signal component video format, and then derive from that the RGB information for final display.
The problem is that picture information is lost when component video is compressed into composite format. Furthermore, once you pack luminance (Y) and chrominance (C) information into one signal, it can never be separated cleanly again. So when the television or projector tries to convert the composite signal back to component video, it can't recover the entire original signal. The result is that the final video image on the screen is diminished: the picture is not as crisp and clean, and the colors aren't as accurate and rich as they would have been had the composite video compression been avoided.