
With respect to noise reduction, luma and chroma have different characteristics due to how they are captured and how we perceive them. First, a little background info.

Sensor and mosaic: In a typical sensor, white light is filtered into RGB components and captured by individual sensor cells. Each cell captures only one of the three primary colors. A typical grid pattern (the Bayer mosaic) contains 50% green, 25% red and 25% blue cells, with green occupying two diagonal spots in each 2x2 group.

Color space: We all know about the RGB channels, and you are probably familiar with the hue, saturation, value (HSV) color space. In fact, a color space can be transformed into any number of orientations given three relatively orthogonal axes. One interesting color space is YCbCr, used in TV transmission. Similar to Lab color, it is specified as a luminance component plus blue-difference and red-difference components. The interesting thing is that human vision has roughly twice the acuity for luminance as for color. TV transmission takes advantage of this characteristic and packs twice the luma resolution of chroma.

Why are there more green than red and blue cells in a sensor? Because green is most representative (closest in orientation) of luma, it serves as a stand-in for the luma channel. In fact, luma is a mix of roughly 60% green, 30% red and 10% blue.
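
To make this concrete, here is a minimal MATLAB sketch (assuming the Image Processing Toolbox is available; the exact BT.601 weights are 0.299 red, 0.587 green and 0.114 blue):

% Compare a hand-weighted luma against MATLAB's rgb2ycbcr
RGB = im2double(imread('peppers.png'));  % built-in demo image
Y_manual = 0.299*RGB(:,:,1) + 0.587*RGB(:,:,2) + 0.114*RGB(:,:,3);
YCC = rgb2ycbcr(RGB);  % note: rgb2ycbcr also scales/offsets Y per BT.601
figure, imshowpair(Y_manual, YCC(:,:,1), 'montage');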

How do we make use of this information?

We know luminance carries twice the spatial resolution; it is the main contributor to sharpness. We know chrominance is lower resolution, so it can tolerate more low-pass filtering (read: blurring). We also know the blue cells are less dense than green, and the blue signal must be boosted because it has the lowest input level. Therefore the blue channel tends to be the noisiest.

One way to tackle noise is to isolate the noise source and filter only the affected channel by the appropriate amount. Examine the individual RGB channels to see whether the blue channel contributes most of the noise. Convert to Lab color and see whether the luma channel is noisy; perhaps filter more aggressively in the chroma channels instead of luma. That is the underlying approach, but in practice, Lightroom's color noise slider does a decent job.
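
As a rough illustration of that workflow, here is a minimal MATLAB sketch (assumes the Image Processing Toolbox; 'noisy.jpg' and the filter sigmas are placeholders to tune per image):

% Inspect the channels, then smooth chroma harder than luma
I = im2double(imread('noisy.jpg'));               % placeholder file name
figure, montage({I(:,:,1), I(:,:,2), I(:,:,3)});  % eyeball which channel is noisiest
Lab = rgb2lab(I);
Lab(:,:,2) = imgaussfilt(Lab(:,:,2), 2);    % blur the a* chroma channel aggressively
Lab(:,:,3) = imgaussfilt(Lab(:,:,3), 2);    % blur the b* chroma channel aggressively
Lab(:,:,1) = imgaussfilt(Lab(:,:,1), 0.5);  % touch the L* luma channel only lightly
J = lab2rgb(Lab);
figure, imshowpair(I, J, 'montage');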

Chrominance (chroma or C for short) is the signal used in video systems to convey
the color information of the picture, separately from the accompanying luma signal (or Y for
short). Chrominance is usually represented as two color-difference components: U = B′ − Y′
(blue − luma) and V = R′ − Y′ (red − luma). Each of these difference components may have
scale factors and offsets applied to it, as specified by the applicable video standard.
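For illustration, the BT.601/PAL scale factors (one standard's choice, shown here only as an example) applied in MATLAB to gamma-corrected R′G′B′ values in [0, 1]:

% Color-difference components with PAL/BT.601 scale factors
RGBp = im2double(imread('peppers.png'));  % treated as gamma-corrected R'G'B'
Rp = RGBp(:,:,1); Gp = RGBp(:,:,2); Bp = RGBp(:,:,3);
Yp = 0.299*Rp + 0.587*Gp + 0.114*Bp;  % luma
U  = 0.492*(Bp - Yp);                 % scaled blue-difference, about +/-0.436
V  = 0.877*(Rp - Yp);                 % scaled red-difference, about +/-0.615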
In composite video signals, the U and V signals modulate a color subcarrier signal, and the
result is referred to as the chrominance signal; the phase and amplitude of this modulated
chrominance signal correspond approximately to the hue and saturation of the color. In
digital-video and still-image color spaces such as Y′CbCr, the luma and chrominance
components are digital sample values.
Separating RGB color signals into luma and chrominance allows the bandwidth of each to
be determined separately. Typically, the chrominance bandwidth is reduced in analog
composite video by reducing the bandwidth of a modulated color subcarrier, and in digital
systems by chroma subsampling.

History
The idea of transmitting a color television signal with distinct luma and chrominance
components originated with Georges Valensi, who patented the idea in 1938.[1] Valensi's
patent application described:
The use of two channels, one transmitting the predominating color (signal T), and the other
the mean brilliance (signal t) output from a single television transmitter to be received not
only by color television receivers provided with the necessary more expensive equipment,
but also by the ordinary type of television receiver which is more numerous and less
expensive and which reproduces the pictures in black and white only.

Previous schemes for color television systems, which were incompatible with existing
monochrome receivers, transmitted RGB signals in various ways.

Television standards
In analog television, chrominance is encoded into a video signal using
a subcarrier frequency. Depending on the video standard, the chrominance subcarrier may
be either quadrature-amplitude-modulated (NTSC and PAL) or frequency-
modulated (SECAM).
In the PAL system, the color subcarrier is 4.43 MHz above the video carrier, while in the
NTSC system it is 3.58 MHz above the video carrier. The NTSC and PAL standards are the
most commonly used, although there are other video standards that employ different
subcarrier frequencies. For example, PAL-M (Brazil) uses a 3.58 MHz subcarrier,
and SECAM uses two different frequencies, 4.250 MHz and 4.40625 MHz above the video
carrier.
The presence of chrominance in a video signal is indicated by a color burst signal
transmitted on the back porch, just after horizontal synchronization and before each line of
video starts. If the color burst signal were visible on a television screen, it would appear as a
vertical strip of a very dark olive color. In NTSC and PAL, hue is represented by a phase shift of the chrominance signal relative to the color burst, while saturation is determined by the amplitude of the subcarrier. In SECAM, the (R′ − Y′) and (B′ − Y′) signals are transmitted alternately on successive lines, and phase does not matter.
Chrominance is represented by the U-V color plane in PAL and SECAM video signals, and
by the I-Q color plane in NTSC.
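The I and Q axes are essentially the U and V axes rotated by about 33°, i.e. I = −U sin 33° + V cos 33° and Q = U cos 33° + V sin 33°; this places I along the orange-cyan axis, where the eye is most sensitive to color detail.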

Digital systems
Digital video and digital still photography systems sometimes use a luma/chroma
decomposition for improved compression. For example, when an ordinary RGB digital
image is compressed via the JPEG standard, the RGB colorspace is first converted (by a
rotation matrix) to a YCbCr colorspace, because the three components in that space have
less correlation redundancy and because the chrominance components can then be
subsampled by a factor of 2 or 4 to further compress the image. On decompression, the
Y′CbCr space is rotated back to RGB.
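A minimal MATLAB sketch of this decomposition (assuming the Image Processing Toolbox; 4:2:0-style subsampling is shown as simple decimation, whereas real codecs filter before decimating):

% Y'CbCr decomposition with 2x chroma subsampling and reconstruction
RGB = imread('peppers.png');
YCC = rgb2ycbcr(RGB);
Y   = YCC(:,:,1);
Cb  = YCC(1:2:end, 1:2:end, 2);  % chroma kept at half resolution
Cr  = YCC(1:2:end, 1:2:end, 3);
% On decompression, upsample the chroma and invert the transform:
RGB2 = ycbcr2rgb(cat(3, Y, imresize(Cb, size(Y)), imresize(Cr, size(Y))));
figure, imshowpair(RGB, RGB2, 'montage');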
YCbCr

One of two primary color spaces used to represent digital component video (the other is
RGB). The difference between YCbCr and RGB is that YCbCr represents color as
brightness and two color difference signals, while RGB represents color as red, green and
blue. In YCbCr, the Y is the brightness (luma), Cb is blue minus luma (B-Y) and Cr is red
minus luma (R-Y). See component video.

YCbCr Is Digital
MPEG compression, which is used in DVDs, digital TV and Video CDs, is coded in YCbCr,
and digital camcorders (MiniDV, DV, Digital Betacam, etc.) output YCbCr over a digital link
such as FireWire or SDI. The ITU-R BT.601 international standard for digital video defines
both YCbCr and RGB color spaces. See chroma subsampling.

YPbPr Is Analog
YPbPr is the analog counterpart of YCbCr. It uses three cables for connection, whereas
YCbCr uses only a single cable (see YPbPr). See YUV, YUV/RGB conversion
formulas and ITU-R BT.601.

1 Introduction
Color images, video images, medical images obtained by
multiple scanners, and multispectral satellite images consist
of multiple image frames or channels (Fig. 1). These image
channels depict the same scene or object observed either by
different sensors or at different times, and thus have
substantial commonality among them. We use the
term multichannel image to refer to any collection of image
channels which are not identical, but exhibit strong between-
channel correlations.



FIGURE 1. Example of a multichannel image. A color image consists of three
color components (channels) which are highly correlated with one another.
Similarly, a video image sequence consists of a collection of closely related
images.
In this chapter we focus on the problem of image recovery as
it applies specifically to multichannel images. Image
recovery refers to the computation of an image from observed
data that alone do not uniquely define the desired image.
Important examples are image denoising, image deblurring,
decoding of compressed images, and medical image
reconstruction.
In image recovery, ambiguities in inferring the desired image
from the observations usually arise from uncertainty produced
by noise. These ambiguities can only be reduced if, in
addition to information provided by the observed data, one
also has prior knowledge about the desired image. In many
applications the most powerful piece of prior information is
that the desired image is smooth (spatially correlated),
whereas the noise is not. Multichannel images offer the
possibility of exploiting correlations between the channels in
addition to those within each channel. By utilizing this extra
information, multichannel image recovery can yield
tremendous benefits over separate recovery of
the component channels.
In a broad category of image recovery techniques, the image
is computed by optimizing an objective function that quantifies
correspondence of the image to the observed data as well as
prior knowledge about the true image. Two frameworks have
been developed to describe the use of prior information as an
aid in image recovery: the deterministic formulation
with regularization [8] and the statistical formulation [4].
Conceptually, these frameworks are quite different, but in
many applications they lead to identical algorithms.
In the deterministic formulation, the problem is posed in terms
of inversion of the observation model, with regularization used
to improve the stability of the solution (for more details
see Chapter 3.11). In the statistical formulation, the statistics
of the noise and the desired image are incorporated explicitly,
and are used to describe the desired characteristics of the
image. In principle, these formulations apply equally well to
multichannel images and single-channel images but in
practice two significant problems must be addressed.
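For concreteness, a standard instance of the deterministic formulation (a generic example, not this chapter's specific model) is Tikhonov-regularized least squares,

\[ \hat{f} = \arg\min_{f} \; \| g - Hf \|^2 + \lambda \| Cf \|^2 , \]

where g is the observed data, H models the observation process, C penalizes rough solutions, and λ balances data fidelity against the smoothness prior.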
First, an appropriate model must be developed to express the
relationships among the image channels. Regularization and
statistical methods reflecting within-channel relationships are
much better developed than those describing between-
channel relationships, and are often easier to work with. For
example, while the power spectrum (the Fourier transform of
the autocorrelation) is a useful statistical descriptor for one
channel, the cross power spectrum (the Fourier transform of
the cross-correlation) describing multiple channels is less
tractable because it is complex. Second, a multichannel
image has many more pixels than each of its channels, so
approaches that minimize the computations are typically
sought. Several such approaches are described in the
following sections.
The goal of this chapter is to provide a concise summary of
the theory of multichannel image recovery. We classify
multichannel recovery methods into two broad approaches,
each of which is illustrated through a practical application. In
the first, which we term the explicit approach, all the channels
of the multichannel image are processed collectively,
and regularization operators or prior distributions are used to
express the between- and within-channel relationships. In the
second, which we term the implicit approach, the same effect
is obtained indirectly by: 1) applying a Karhunen-Loève (KL)
transform that decorrelates the channels, 2) recovering the
channels separately in a single-channel fashion, and 3)
inverting the KL transform. This approach has a substantial
computational advantage over the explicit approach, as we
explain later, but it can only be applied in certain situations.
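As a sketch of the implicit approach, the following MATLAB fragment uses simple Gaussian smoothing as a stand-in for the single-channel recovery step (the KL transform itself is generic; everything else here is an illustrative assumption, not the chapter's algorithm):

% 1) decorrelate the channels with the KL transform, 2) recover each
% channel separately, 3) invert the transform
X = im2double(imread('peppers.png'));  % three correlated channels
[r, c, M] = size(X);
P  = reshape(X, [], M);                % pixels-by-channels matrix
mu = mean(P, 1);
[E, ~] = eig(cov(P));                  % KL basis from the channel covariance
Z  = reshape((P - mu) * E, r, c, M);   % decorrelated channels
for k = 1:M
    Z(:,:,k) = imgaussfilt(Z(:,:,k), 1);  % single-channel recovery (stand-in)
end
Xrec = reshape(reshape(Z, [], M) * E' + mu, r, c, M);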
The rest of this chapter is organized as follows. In Section 2 we present the multichannel observation model, and basic image recovery approaches are reviewed in Section 3. In Section 4 we describe the explicit approach and illustrate it using the example of restoration of video image sequences. In Section 5, we explain the implicit approach and illustrate it using the example of reconstruction of time-varying medical images. We conclude with a summary in Section 6.

Color Histogram Equalization - MATLAB CODE

Histogram equalization can be considered a redistribution of the intensities of an image.

Color histogram equalization can be achieved by converting a color image to HSV/HSI and equalizing the intensity component while preserving the hue and saturation components.

However, performing histogram equalization on the R, G and B components independently will not enhance the image; it tends to distort the colors. At the end of this post, check the histograms before and after histogram equalization of an image obtained by equalizing the components (R, G and B) independently.

Steps to be performed:

1. Convert the RGB image into an HSI image.
https://www.imageeprocessing.com/2013/05/converting-rgb-image-to-hsi.html

2. Obtain the Intensity matrix from the HSI image matrix.

3. Perform histogram equalization on the Intensity matrix.
https://www.imageeprocessing.com/2011/04/matlab-code-histogram-equalization.html

4. Replace the Intensity matrix in the HSI image matrix with the histogram-equalized Intensity matrix.

5. Convert the HSI image back to an RGB image.
https://www.imageeprocessing.com/2013/06/convert-hsi-image-to-rgb-image.html

MATLAB CODE:

%COLOR HISTOGRAM EQUALIZATION

%READ THE INPUT IMAGE
I = imread('football.jpg');

%CONVERT THE RGB IMAGE INTO HSV IMAGE FORMAT
%https://www.imageeprocessing.com/2013/05/converting-rgb-image-to-hsi.html
HSV = rgb2hsv(I);

%PERFORM HISTOGRAM EQUALIZATION ON THE INTENSITY (VALUE) COMPONENT
%https://www.imageeprocessing.com/2011/04/matlab-code-histogram-equalization.html
Heq = histeq(HSV(:,:,3));

%REPLACE THE VALUE COMPONENT AND CONVERT BACK TO RGB
%https://www.imageeprocessing.com/2013/06/convert-hsi-image-to-rgb-image.html
HSV_mod = HSV;
HSV_mod(:,:,3) = Heq;
RGB = hsv2rgb(HSV_mod);

figure,subplot(1,2,1),imshow(I);title('Before Histogram Equalization');
subplot(1,2,2),imshow(RGB);title('After Histogram Equalization');

EXPLANATION:
The RGB image matrix is converted to HSV format (here playing the role of HSI: Hue, Saturation and Intensity), and histogram equalization is applied only on the intensity (value) matrix. The hue and saturation matrices remain the same. The updated image matrix is then converted back to an RGB image matrix.

%DISPLAY THE HISTOGRAM OF THE ORIGINAL AND THE EQUALIZED IMAGE
%http://angeljohnsy.blogspot.com/2011/06/histogram-of-image.html
HIST_IN  = zeros([256 3]);
HIST_OUT = zeros([256 3]);

%HISTOGRAM OF THE RED, GREEN AND BLUE COMPONENTS
HIST_IN(:,1) = imhist(I(:,:,1),256); %RED
HIST_IN(:,2) = imhist(I(:,:,2),256); %GREEN
HIST_IN(:,3) = imhist(I(:,:,3),256); %BLUE

HIST_OUT(:,1) = imhist(RGB(:,:,1),256); %RED
HIST_OUT(:,2) = imhist(RGB(:,:,2),256); %GREEN
HIST_OUT(:,3) = imhist(RGB(:,:,3),256); %BLUE

%PLOT THE HISTOGRAMS SIDE BY SIDE
mymap = [1 0 0; 0.2 1 0; 0 0.2 1];
figure,subplot(1,2,1),bar(HIST_IN);colormap(mymap);
legend('RED CHANNEL','GREEN CHANNEL','BLUE CHANNEL');
title('Before Applying Histogram Equalization');
subplot(1,2,2),bar(HIST_OUT);colormap(mymap);
legend('RED CHANNEL','GREEN CHANNEL','BLUE CHANNEL');
title('After Applying Histogram Equalization');
EXPLANATION:

Obtain the histogram of each component (Red, Green and Blue) independently.

Define the colormap ‘mymap’ with three colors, namely Red, Green and Blue.

Display the histograms of the components before and after histogram equalization.

NOTE:

Equalizing the R, G and B components independently remaps each channel's histogram differently, which shifts the color balance and gives a bad result.

(Figure: histogram equalization applied on the individual components.)
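
For reference, the per-channel version that produces this result is just (a minimal sketch, reusing I from the code above):

%HISTOGRAM EQUALIZATION ON EACH RGB COMPONENT INDEPENDENTLY (NOT RECOMMENDED)
Rq = histeq(I(:,:,1));
Gq = histeq(I(:,:,2));
Bq = histeq(I(:,:,3));
BAD = cat(3, Rq, Gq, Bq);
figure, imshow(BAD); title('Per-channel equalization shifts the colors');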


Like "IMAGE PROCESSING" page

Email ThisBlogThis!Share to TwitterShare to FacebookShare to Pinterest

Labels: color image processing, Histogram equalization

Your Reactions:

2 comments:

Unknown said...

very good explanation of color image histogram


April 11, 2017 at 4:12 PM

Unknown said...

how can I do a histogram equalization on each channel and get the same results as shown
above, but without using the inbuilt matlab functions
September 8, 2017 at 2:45 AM

Enjoyed Reading? Share Your Views

FOLLOW US
Search This Blog

Search

GRAB YOUR FREE GIFT TODAY


Featured Post
Image Processing with Python

Python is a high level programming language which has easy to code syntax and offers packages for wide range of applications including nu...

LIKE "IMAGE PROCESSING"


Support this blog by leaving your valuable comments and a like on Facebook Fan Page.

THANKS FOR READING

Subscribe Now:

Subscribe in a reader

TAGS
Removing Image noise GUI Components in MATLAB Image Conversion Edge detectionPhotoshop effects in
MATLAB MATLAB BUILT_IN FUNCTIONSMorphological Image ProcessingVideo Processing Array functions in MATLAB Files Histogram
equalization Image CompressionObject Identification Optical illusion Shapes Templates Image Geometry Image Arithmetic

Followers
Today's Popular Posts


Face Detection - MATLAB CODE

Lets see how to detect face, nose, mouth and eyes using the MATLAB built-in class
and function. Based on...


Sobel edge detection

The gradient of the image is calculated for each pixel position in the image. The procedure and ...


Matlab code: Histogram equalization without using histeq function

It is the re-distribution of gray level values uniformly. Let’s consider a 2 dimensional image
which has values rangin...


Converting RGB Image to HSI

Converting RGB Image to HSI H stands for Hue, S for Saturation and I for Intensity. MATLAB CODE: Read
a RGB Image ...


Gaussian Filter without using the MATLAB built_in function

Gaussian Filter Gaussian Filter is used to blur the image. It is used to reduce the noise and the image
details. ...

Potrebbero piacerti anche