2.1.4. Representation of Monochrome and Color Images

For a black-and-white image, a light with c(λ) can be represented by one number I, given by

I = k ∫ c(λ) sBW(λ) dλ                                        (2.8)

where sBW(λ) is the spectral characteristic of the sensor used and k is a scaling constant. Since brightness perception is the primary concern with a black-and-white image, sBW(λ) is typically chosen to resemble the relative luminous efficiency function. The value I is often referred to as the luminance, intensity, or gray level of a black-and-white image. Since I in (2.8) represents power per unit area, it is always nonnegative and finite; that is,

0 ≤ I ≤ IMAX                                                  (2.9)

where IMAX is the maximum possible I. In image processing, I is typically scaled to lie in some convenient arbitrary range, for example 0 ≤ I ≤ 1 or 0 ≤ I ≤ 255. In these cases, 0 corresponds to the darkest possible level and 1 or 255 to the brightest possible level. Because of this scaling, the specific radiometric or photometric units associated with I become unimportant. A black-and-white image has, in a sense, only one color; thus, it is sometimes called a monochrome image.
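The scaling described above can be sketched numerically. The raw intensity values below are hypothetical sensor outputs; the point is only that, once rescaled, their physical units no longer matter:

```python
import numpy as np

# Hypothetical raw intensities I (power per unit area) from a sensor.
raw = np.array([0.0, 1.3, 2.6, 5.2])

# Scale so that the maximum level maps to 255 and 0 stays 0;
# the original radiometric units drop out of the result.
i_max = raw.max()
gray = np.round(255.0 * raw / i_max).astype(np.uint8)

print(gray)  # darkest level -> 0, brightest level -> 255
```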

A color image can be viewed as three monochrome images. For a color image, a light with c(λ) is represented by three numbers which are called tristimulus values. One three-number set that is frequently used in practice is R, G, and B, representing the intensity of the red, green, and blue components. The tristimulus values R, G, and B are obtained by

R = k ∫ c(λ) sR(λ) dλ
G = k ∫ c(λ) sG(λ) dλ                                         (2.10)
B = k ∫ c(λ) sB(λ) dλ

where sR(λ), sG(λ), and sB(λ) are the spectral characteristics of the red, green, and blue sensors (filters), respectively. Like the gray level I in a monochrome image, R, G, and B are nonnegative and finite. One possible set of sR(λ), sG(λ), and sB(λ) is shown in Figure 2.7. Examples of fR(x, y), fG(x, y), and fB(x, y), which represent the red, green, and blue components of a color image, are shown in Figures 2.8(a), (b), and (c), respectively (see color insert). The color image that results when the three components are combined by a color television monitor is shown in Figure 2.8(d).
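The integrals in (2.10) can be approximated by sampling the spectra and summing. The bell-shaped sensor curves below are illustrative assumptions, not the actual curves of Figure 2.7, and the light spectrum c(λ) is taken to be flat as an example:

```python
import numpy as np

lam = np.linspace(400.0, 700.0, 301)   # wavelength samples (nm), 1 nm apart
dlam = lam[1] - lam[0]

def bell(center, width):
    # Illustrative bell-shaped sensor response; NOT the real curves of Fig. 2.7.
    return np.exp(-((lam - center) / width) ** 2)

s_r, s_g, s_b = bell(600.0, 40.0), bell(550.0, 40.0), bell(450.0, 40.0)
c = np.ones_like(lam)                  # a flat ("white") light spectrum, as an example

k = 0.01                               # arbitrary scaling constant
# Discrete approximation of R = k * integral of c(lambda) sR(lambda) dlambda
R = k * np.sum(c * s_r) * dlam
G = k * np.sum(c * s_g) * dlam
B = k * np.sum(c * s_b) * dlam
```

Like the gray level of a monochrome image, each value computed this way is nonnegative, since c(λ) and the sensor characteristics are nonnegative.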

One approach to processing a color image is to process the three monochrome images R, G, and B separately and then combine the results. This approach is simple and is often used in practice. Since brightness, hue, and saturation each depend on all three monochrome images, however, independent processing of R, G, and B may affect hue and saturation even when the processing objective is only to modify the brightness.
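A small numerical sketch of this side effect, using hypothetical pixel values: a nonlinear brightness adjustment applied independently to each component changes the ratios between components, which the eye perceives as a shift in hue or saturation.

```python
# One pixel whose R component is twice its G component (a particular hue).
r, g = 0.8, 0.4

# A nonlinear "brightness" adjustment (gamma correction, as an example)
# applied independently to each component.
gamma = 0.5
r2, g2 = r ** gamma, g ** gamma

print(r / g)    # original ratio between the components
print(r2 / g2)  # ratio has changed, so hue/saturation are altered
```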

The three tristimulus values R, G, and B can be transformed into a number of other sets of tristimulus values. One particular set, known as luminance-chrominance, is quite useful in practice. When R, G, and B are the values used on a TV monitor (in the NTSC color system), the corresponding luminance-chrominance values Y, I, and Q are related to R, G, and B by

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R − 0.274 G − 0.322 B                               (2.11a)
Q = 0.211 R − 0.523 G + 0.312 B

and

R = Y + 0.956 I + 0.621 Q
G = Y − 0.272 I − 0.647 Q                                     (2.11b)
B = Y − 1.106 I + 1.703 Q

The Y component is called the luminance component, since it roughly reflects the luminance in (2.3). It is primarily responsible for the perception of the brightness of a color image, and it can be used as a black-and-white image. The I and Q components are called chrominance components, and they are primarily responsible for the perception of the hue and saturation of a color image. The fY(x, y), fI(x, y), and fQ(x, y) components corresponding to the color image in Figure 2.8 are shown as three monochrome images in Figures 2.9(a), (b), and (c), respectively. Since fI(x, y) and fQ(x, y) can be negative, a bias has been added to them for display. The mid-gray intensity in Figures 2.9(b) and (c) represents zero amplitude of fI(x, y) and fQ(x, y).

One advantage of the YIQ tristimulus set relative to the RGB set is that we can process the Y component only; the processed image will then tend to differ from the unprocessed image only in its apparent brightness. Another advantage is that most of the high-frequency components of a color image are concentrated in the Y component, so significant spatial lowpass filtering of the I and Q components does not significantly affect the color image. This feature can be exploited in the coding of a digital color image or in the analog transmission of a color television signal.
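The luminance-chrominance transformation and its inverse are straightforward matrix multiplications. The sketch below uses the standard NTSC coefficients of (2.11a), converts an example RGB triple to YIQ, modifies brightness via the Y component only, and converts back:

```python
import numpy as np

# RGB -> YIQ matrix with the NTSC coefficients of (2.11a).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])
# Its inverse is the YIQ -> RGB transformation of (2.11b).
YIQ_TO_RGB = np.linalg.inv(RGB_TO_YIQ)

rgb = np.array([0.6, 0.4, 0.2])   # an example color
yiq = RGB_TO_YIQ @ rgb

yiq[0] *= 1.2                     # scale brightness via the Y component only
rgb_out = YIQ_TO_RGB @ yiq
print(rgb_out)
```

Because only Y is changed, the chrominance components I and Q, and hence the perceived hue and saturation, are left untouched.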

When the objective of image processing goes beyond accurately reproducing the “original” scene as seen by human viewers, we are not limited to the range of wavelengths visible to humans. Detecting objects that generate heat, for example, is much easier with an image obtained using a sensor that responds to infrared light than with a regular color image. Infrared images can be obtained in a manner similar to (2.7) by simply changing the spectral characteristics of the sensor used.

The eye is most sensitive to the Y component, less sensitive to I, and least sensitive to Q. For this reason, the NTSC system allocates approximately 4 MHz of bandwidth to Y, 1.5 MHz to I, and 0.6 MHz to Q.