
Unit 7

COLOR IMAGE PROCESSING


7.1 INTRODUCTION:
The use of colour in image processing is motivated by two principal factors. First, colour is a powerful descriptor that often simplifies object identification and extraction from a scene. Second, humans can discern thousands of colour shades and intensities, compared with only about two dozen shades of gray. Colour image processing is divided into two major areas: full-colour and pseudo-colour processing. Full-colour images are acquired with a full-colour sensor, such as a colour TV camera or a colour scanner. In pseudo-colour processing, the problem is one of assigning a colour to a particular monochrome intensity or range of intensities. Until recently, most digital colour image processing was done at the pseudo-colour level. As colour sensors and the hardware for processing colour images have become cheaper, full-colour techniques are now used in a wide range of applications.

7.2 COLOR FUNDAMENTALS:


When a beam of sunlight passes through a glass prism, the emerging beam of light is not white but consists instead of a continuous spectrum of colours ranging from violet to red. The colour spectrum may be divided into six broad regions: violet, blue, green, yellow, orange and red (the familiar VIBGYOR mnemonic also lists indigo between violet and blue). When viewed in full colour, no colour in the spectrum ends abruptly; rather, each colour blends smoothly into the next.

Fig 4.1 Color spectrum seen by passing white light through a prism

The colours that humans and some other animals perceive in an object are determined by the nature of the light reflected from the object. Visible light is composed of a relatively narrow band of frequencies in the electromagnetic spectrum. A body that reflects light balanced in all visible wavelengths appears white to the observer, while a body that favours reflectance in a limited range of the visible spectrum exhibits some shade of colour. Three basic quantities are used to describe the quality of a chromatic light source: radiance, luminance and brightness. Radiance is the total amount of energy that flows from the light source, measured in watts (W). Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from the light source. Brightness is a subjective descriptor that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing colour sensation. Red, green and blue are called primary colours, but the use of the word "primary" has been widely misinterpreted to mean that the three standard primaries, when mixed in various intensity proportions, can produce all visible colours; in fact no set of three fixed primaries can do so.


Fig 4.2 Absorption of light by the red, green and blue cones in the human eye as a function of wavelength

The primary colours of light can be added to produce the secondary colours: magenta (red + blue), cyan (green + blue) and yellow (red + green). Mixing the three primaries, or a secondary with its opposite primary colour, in the right intensities produces white light.

Fig 4.3 Primary and secondary colors of light and pigments




7.3 COLOR MODELS:


The purpose of a colour model (also called a colour space or colour system) is to facilitate the specification of colours in some standard, generally accepted way. In essence, a colour model is a specification of a co-ordinate system and a subspace within that system where each colour is represented by a single point. In terms of digital image processing, the hardware-oriented models most commonly used in practice are RGB (red, green, blue), CMY (cyan, magenta, yellow), CMYK (cyan, magenta, yellow, black) and HSI (hue, saturation, intensity).

7.4 RGB COLOR MODEL: In this model, each colour appears in its primary spectral components of red, green and blue. The model is based on a Cartesian co-ordinate system. The colour subspace of interest is the cube in which the primary RGB values are at three corners; cyan, magenta and yellow are at three other corners; black is at the origin; and white is at the corner farthest from the origin. In this model the gray scale (points of equal RGB values) extends from black to white along the line joining these two vertices. The different colours in the model are points on or inside the cube, defined by vectors extending from the origin. For convenience, we assume that all colour values have been normalised so that the cube is the unit cube, i.e., all values of R, G and B are in the range [0, 1].

Images represented in the RGB colour model consist of three component images, one for each primary colour. When fed into an RGB monitor, these three images combine on the phosphor screen to produce a composite colour image. The number of bits used to represent each pixel in RGB space is called the pixel depth. Consider an RGB image in which each of the red, green and blue component images is an 8-bit image. Under these conditions each RGB colour pixel [i.e., a triplet of values (R, G, B)] is said to have a depth of 24 bits (3 image planes times 8 bits per plane). The term full-colour image is often used to denote a 24-bit RGB colour image. The total number of colours in a 24-bit RGB image is (2^8)^3 = 16,777,216.

A convenient way to view these colours is to generate colour planes (faces or cross sections of the cube). This is accomplished simply by fixing one of the three colours and allowing the other two to vary. For instance, a cross-sectional plane through the centre of the cube and parallel to the GB plane is the set of points (127, G, B), for G, B = 0, 1, 2, ..., 255. Here we use the actual pixel values rather than the mathematically convenient normalized values in the range [0, 1], because the former are the values actually used by a computer to generate colours. An image of such a cross-sectional plane is viewed simply by feeding the three individual component images into a colour monitor; in the component images, 0 represents black and 255 represents white (note that these are gray-scale images). The three hidden surface planes of the cube can be displayed in the same way.

Acquiring a colour image is basically this display process in reverse. A colour image can be acquired by using three filters, sensitive to red, green and blue, respectively. When we view a colour scene with a monochrome camera equipped with one of these filters, the result is a monochrome image whose intensity is proportional to the response of that filter. Repeating this process with each filter produces the three monochrome RGB component images; colour sensors usually integrate this process into a single device. Displaying the three component images together then yields an RGB colour rendition of the original colour scene.
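The cross-sectional planes of the RGB cube described above are easy to generate directly. A minimal sketch in plain Python (nested lists stand in for image arrays; the function name is illustrative):

```python
def rgb_cross_section(r_fixed=127, size=256):
    """Cross section of the RGB cube parallel to the GB plane:
    R is held fixed while G and B each range over 0..size-1."""
    return [[(r_fixed, g, b) for b in range(size)] for g in range(size)]

plane = rgb_cross_section()
# Every pixel on the plane shares the same red value:
assert plane[0][0] == (127, 0, 0)
assert plane[255][255] == (127, 255, 255)
# 8 bits per plane gives (2**8)**3 = 16,777,216 representable colours.
assert (2 ** 8) ** 3 == 16_777_216
```

Feeding the three component planes of such an array to a colour display reproduces the cube face, as the text describes.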


Fig 4.4 Schematic of the RGB color cube

Differentiating between the primary colours of light and the primary colours of pigments (colorants) is important. A primary colour of pigments is defined as one that subtracts or absorbs a primary colour of light and reflects or transmits the other two. Therefore the primary colours of pigments are magenta, cyan and yellow, and the secondary colours of pigments are red, green and blue. A proper combination of the three pigment primaries, or a secondary with its opposite primary, produces black.

The characteristics generally used to distinguish one colour from another are brightness, hue and saturation. Brightness embodies the achromatic notion of intensity. Hue is an attribute associated with the dominant wavelength in a mixture of light waves; it represents the dominant colour as perceived by an observer. Saturation refers to the relative purity, or the amount of white light mixed with a hue. Hue and saturation taken together are called chromaticity, and therefore a colour may be characterized by its brightness and chromaticity. A chromaticity diagram is useful for colour mixing, because a straight-line segment joining any two points in the diagram defines all the different colour variations that can be obtained by combining those two colours additively.

7.5 THE CMY AND CMYK COLOR MODELS:


Cyan, magenta and yellow are the secondary colours of light or, alternatively, the primary colours of pigments. For example, when a surface coated with cyan pigment is illuminated with white light, no red light is reflected from the surface; i.e., cyan subtracts red light from reflected white light, which itself is composed of equal amounts of red, green and blue light. Most devices that deposit colour pigments on paper, such as colour printers and copiers, require CMY data input or perform an RGB-to-CMY conversion internally.


This conversion is performed using the simple operations

    C = 1 - R
    M = 1 - G
    Y = 1 - B

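The conversion above is symmetric, so the same subtraction recovers RGB from CMY. A minimal sketch (plain Python; function names are illustrative, values assumed normalized to [0, 1]):

```python
def rgb_to_cmy(r, g, b):
    """CMY from normalized RGB: C = 1-R, M = 1-G, Y = 1-B."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c, m, y):
    """Inverse conversion: subtract the CMY values from 1."""
    return (1.0 - c, 1.0 - m, 1.0 - y)

# Pure cyan (R=0, G=1, B=1) contains no red, so C = 1:
assert rgb_to_cmy(0.0, 1.0, 1.0) == (1.0, 0.0, 0.0)
```

The round trip is exact for exactly representable values, e.g. `cmy_to_rgb(*rgb_to_cmy(0.25, 0.5, 0.75))` returns `(0.25, 0.5, 0.75)`.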
where, again, the assumption is that all colour values have been normalized to the range [0, 1]. The equations demonstrate that light reflected from a surface coated with pure cyan does not contain red (i.e., C = 1 - R). Similarly, pure magenta does not reflect green, and pure yellow does not reflect blue. RGB values can be obtained from a set of CMY values by subtracting the individual CMY values from 1. In image processing, this colour model is used in connection with generating hard-copy output. According to this subtractive mixing, equal amounts of the pigment primaries cyan, magenta and yellow should produce black. In practice, combining these colours for printing produces a muddy-looking black, so in order to produce true black a fourth colour, black, is added, giving rise to the CMYK colour model. When publishers speak of four-colour printing, they are referring to the three colours of the CMY model plus black.

7.5.1 HSI COLOR MODEL: The RGB, CMY and other similar colour models are not well suited for describing colours in terms that are practical for human interpretation. When humans view a colour object, we describe it by its hue, saturation and brightness. Hue is the predominant spectral colour of the received light; the spectral purity of the colour is its saturation; brightness is a subjective descriptor that is practically impossible to measure, embodying the achromatic notion of intensity and one of the key factors in describing colour sensation. The HSI (hue, saturation, intensity) colour model decouples the intensity component from the colour-carrying information (hue and saturation) in a colour image. The HSI model is an ideal tool for developing image processing algorithms based on colour descriptions that are natural and intuitive to humans. An RGB colour image can be viewed as three monochrome intensity images (representing red, green and blue), so we should be able to extract intensity from an RGB image.
If we take the colour cube and stand it on the black (0, 0, 0) vertex with the white (1, 1, 1) vertex directly above it, the intensity (gray) scale lies along the line joining these two vertices. This line (the intensity axis) joining the black and white vertices is vertical. The intensity component of any colour point can be found by passing a plane perpendicular to the intensity axis through that colour point; the intersection of the plane with the intensity axis gives the intensity value, in the range [0, 1]. The saturation (purity) of a colour increases as a function of distance from the intensity axis. The saturation of points on the intensity axis is zero, as evidenced by the fact that all points along this axis are gray.

Hue can also be determined from a given RGB point. Consider a plane defined by three points: black, white and, say, cyan. The fact that the black and white points are contained in the plane tells us that the intensity axis is also contained in the plane. Furthermore, all points contained in the plane segment defined by the intensity axis and the boundaries of the cube have the same hue, because the white and black components cannot change the hue (of course, the intensity and saturation of points in this triangle do differ). By rotating this plane about the vertical intensity axis, we obtain different hues. From these concepts we arrive at the conclusion that the hue, saturation and intensity values required to form the HSI space can be obtained from the RGB colour cube. As a plane moves up and down the intensity axis, the boundaries defined by the intersection of the plane with the faces of the cube have either a triangular or a hexagonal shape. In the hexagonal cross section, the primary colours are separated by 120°, and the secondaries are 60° from the primaries, which means that the angle between the secondaries is also 120°. For an arbitrary colour point (shown by a dot in such a diagram), the hue of the point is determined by an angle from some reference point.


An angle of 0° from the red axis designates zero hue, and the hue increases counterclockwise from there. The saturation (distance from the vertical axis) is the length of the vector from the origin to the colour point, where the origin is defined by the intersection of the colour plane with the vertical intensity axis. The important components of the HSI colour space are therefore the vertical intensity axis, the length of the vector to the colour point, and the angle this vector makes with the red axis. It is thus not unusual to see the HSI planes defined in terms of the hexagon just discussed, a triangle, or even a circle; the shape chosen does not matter, since any one of these shapes can be warped into either of the other two by a geometric transformation.
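The hue-as-angle, saturation-as-distance and intensity-as-height picture above corresponds to the standard RGB-to-HSI conversion formulas. A minimal sketch under that assumption (plain Python; inputs normalized to [0, 1], hue returned in degrees):

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion (hue in degrees, S and I in [0, 1])."""
    i = (r + g + b) / 3.0                       # intensity: height on the axis
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # saturation: distance from axis
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # clamp guards against tiny floating-point excursions outside [-1, 1]
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    h = theta if b <= g else 360.0 - theta      # hue: angle from the red axis
    return h, s, i

# Pure red lies on the red axis (hue 0); pure green is 120 degrees away:
assert rgb_to_hsi(1.0, 0.0, 0.0)[0] == 0.0
assert abs(rgb_to_hsi(0.0, 1.0, 0.0)[0] - 120.0) < 1e-6
```

Note how fully saturated primaries give S = 1, while any gray (R = G = B) gives S = 0, matching the geometric description.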

7.6 PSEUDO COLOR IMAGE PROCESSING:


Pseudo-colour (also called false-colour) image processing consists of assigning colours to gray values based on a specified criterion. The term pseudo- or false colour is used to differentiate the process of assigning colours to monochrome images from the processes associated with true-colour images. The principal use of pseudo colour is for human visualization and interpretation of gray-scale events in an image or sequence of images. One of the principal motivations for using colour is the fact that humans can discern thousands of colour shades and intensities, compared with only two dozen or so shades of gray.

7.6.1 INTENSITY SLICING: This technique, sometimes also called density slicing, together with colour coding, is one of the simplest examples of pseudo-colour image processing. If an image is interpreted as a 3-D function (intensity versus spatial co-ordinates), the method can be viewed as one of placing planes parallel to the co-ordinate plane of the image; each plane then slices the function in the area of intersection. For example, a plane at f(x, y) = l_i slices the image function into two levels. If a different colour is assigned to each side of the plane, any pixel whose gray level is above the plane will be coded with one colour, and any pixel below the plane will be coded with the other.


The result is a two-colour image whose relative appearance can be controlled by moving the slicing plane up and down the gray-level axis. In general, the technique may be summarized as follows. Let [0, L-1] represent the gray scale, let level l_0 represent black [f(x, y) = 0] and let level l_{L-1} represent white [f(x, y) = L-1]. Suppose that P planes perpendicular to the intensity axis are defined at levels l_1, l_2, ..., l_P. Then, assuming that 0 < P < L-1, the P planes partition the gray scale into P+1 intervals V_1, V_2, ..., V_{P+1}. Gray-level-to-colour assignments are made according to the relation

    f(x, y) = c_k    if f(x, y) is in V_k

where c_k is the colour associated with the k-th intensity interval V_k, defined by the partitioning planes at l = k-1 and l = k.

The idea of planes is useful primarily for a geometric interpretation of the intensity-slicing technique. An alternative representation defines the same mapping as a function of gray level: any input gray level is assigned one of two colours, depending on whether it is above or below a threshold value. When more levels are used, the mapping function takes a staircase form.
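The staircase mapping just described can be sketched directly. A minimal example (plain Python; the partitioning levels, colours and image values are illustrative):

```python
def intensity_slice(image, levels, colors):
    """Pseudo-colour by intensity slicing: `levels` holds the P partitioning
    planes (ascending), `colors` holds P+1 colours, one per interval V_k."""
    def color_of(v):
        for k, l in enumerate(levels):
            if v <= l:
                return colors[k]
        return colors[-1]          # value above the last plane
    return [[color_of(v) for v in row] for row in image]

gray = [[0, 100], [200, 255]]
out = intensity_slice(gray, levels=[127], colors=[(0, 0, 255), (255, 0, 0)])
assert out[0][0] == (0, 0, 255)    # below the slicing plane -> blue
assert out[1][1] == (255, 0, 0)    # above the slicing plane -> red
```

With a single level this reproduces the two-colour case; adding more levels gives the staircase mapping.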


7.6.2 GRAY LEVEL TO COLOR TRANSFORMATION: Other types of transformations are more general, and they are capable of achieving a wider range of pseudo-colour enhancement results than the simple slicing technique discussed in the preceding section. A particularly attractive approach is to perform three independent transformations on the gray level of any input pixel. The three results are then fed separately into the red, green and blue channels of a colour television monitor. This method produces a composite image whose colour content is modulated by the nature of the transformation functions. Note that these are transformations on the gray-level values of an image and are not functions of position. In intensity slicing, piecewise linear functions of the gray levels are used to generate colours; in this method, the transformations can be smooth, nonlinear functions.
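The three-independent-transformations idea can be sketched as follows (plain Python; the sinusoidal transformations with shifted phases are an illustrative choice of smooth, nonlinear functions, not prescribed by the text):

```python
import math

def pseudocolor(gray, t_r, t_g, t_b):
    """Feed each gray level through three independent transformations to
    obtain the red, green and blue channel values of the composite."""
    return [[(t_r(v), t_g(v), t_b(v)) for v in row] for row in gray]

def sine_map(phase):
    # Smooth, nonlinear transformation of an 8-bit gray level into [0, 255].
    return lambda v: int(127.5 * (1 + math.sin(v / 255 * math.pi + phase)))

rgb = pseudocolor([[0, 64, 128, 255]], sine_map(0.0), sine_map(1.0), sine_map(2.0))
assert all(0 <= c <= 255 for px in rgb[0] for c in px)
```

Changing the phase (or shape) of each transformation changes how gray levels are modulated into colour, which is exactly the degree of freedom the method exploits.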

In the functional block diagram of pseudo-colour image processing, the outputs of the three transformations are fed into the corresponding red, green and blue inputs of an RGB colour monitor.

Several monochrome images can also be combined into a single colour composite. A frequent use of this approach is in multispectral image processing, where different sensors produce individual monochrome images, each in a different spectral band. The additional processing applied can include techniques such as colour balancing, combining images, and selecting the three images for display based on knowledge about the response characteristics of the sensors used to generate them. In this way it is possible to combine the sensed images into a meaningful pseudo-colour map; one way to combine the sensed image data is by how they show either differences in surface chemical composition or changes in the way the surface reflects sunlight.


7.6.3 BASICS OF FULL-COLOR IMAGE PROCESSING: These are processing techniques applicable to full-colour images, and they fall into two major categories. In the first category, we process each component image individually and then form a composite processed colour image from the individually processed components. In the second category, we work with colour pixels directly: in the RGB system, each colour point can be interpreted as a vector extending from the origin to that point in the RGB co-ordinate system. Let c represent an arbitrary vector in RGB colour space:

    c = [c_R, c_G, c_B]^T = [R, G, B]^T .............................(a)

This equation indicates that the components of c are simply the RGB components of a colour image at a point. The colour components are made functions of the co-ordinates (x, y) by writing

    c(x, y) = [c_R(x, y), c_G(x, y), c_B(x, y)]^T = [R(x, y), G(x, y), B(x, y)]^T .................................(b)

For an image of size M×N there are MN such vectors c(x, y), for x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1. Equation (b) depicts a vector whose components are spatial functions of x and y. It allows us to process a colour image by processing each of its component images separately, using standard gray-scale image processing methods. However, the results of individual colour-component processing are not always equivalent to direct processing in colour vector space, in which case we must formulate new approaches. In order for per-colour-component and vector-based processing to be equivalent, two conditions have to be satisfied: first, the process has to be applicable to both vectors and scalars; second, the operation on each component of a vector must be independent of the other components.

As an illustration, consider neighbourhood spatial processing of gray-scale and full-colour images, and suppose that the process is neighbourhood averaging. In the gray-scale case, averaging is accomplished by summing the gray levels of the pixels in the neighbourhood and dividing by the total number of pixels in the neighbourhood. In the full-colour case, averaging is done by summing all the vectors in the neighbourhood and dividing each component of the result by the total number of vectors in the neighbourhood. But each component of the average vector is the sum of the pixels in the image corresponding to that component, which is the same as the result that would be obtained if the averaging were done on a per-colour-component basis and the vector formed afterwards.
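The equivalence of per-component and vector averaging can be checked in a few lines (plain Python; the neighbourhood values are illustrative):

```python
def per_channel_average(pixels):
    """Average a neighbourhood of RGB vectors channel by channel.
    Because averaging is linear, this equals averaging the vectors directly."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

neighbourhood = [(10, 20, 30), (30, 40, 50), (50, 60, 70), (70, 80, 90)]
assert per_channel_average(neighbourhood) == (40.0, 50.0, 60.0)
```

The same would not hold for a nonlinear operation (e.g. a median), which is exactly when vector-based formulations diverge from per-component ones.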



7.7 COLOR TRANSFORMATIONS:


This section deals with processing the components of a colour image within the context of a single colour model. Formulation: as with the gray-level transformation techniques, we model colour transformations using the expression g(x, y) = T[f(x, y)], where f(x, y) is the colour input image, g(x, y) is the transformed or processed colour output image, and T is an operator on f over a spatial neighbourhood of (x, y). The pixel values here are triplets or quartets (i.e., groups of three or four values), depending on the colour space chosen to represent the images. Analogous to the approach used to introduce the basic gray-level transformations, the colour transformations take the form

    s_i = T_i(r_1, r_2, ..., r_n),    i = 1, 2, ..., n

where r_i and s_i are variables denoting the colour components of f(x, y) and g(x, y), respectively, and {T_1, T_2, ..., T_n} is a set of transformation or colour-mapping functions that operate on r_i to produce s_i. The n transformations combine to implement the single transformation function T. The colour space chosen to describe the pixels of f and g determines the value of n. If the RGB colour space is selected, for example, n = 3 and r_1, r_2, r_3 denote the red, green and blue components of the input image; if the CMYK or HSI colour spaces are chosen, n = 4 or n = 3, respectively.

7.8 COLOR COMPLEMENTS:


The hues directly opposite one another on the colour circle are called complements. Colour complements are analogous to the gray-scale negative: they are useful for enhancing detail that is embedded in dark regions of a colour image, particularly when those regions are dominant in size.

7.8.1 TONE AND COLOR CORRECTION: Colour transformations can be performed on most desktop computers. In conjunction with digital cameras, flatbed scanners and inkjet printers, they turn a personal computer into a digital darkroom, allowing tonal adjustments and colour corrections, the mainstays of high-end colour reproduction systems, to be performed without the need for traditional wet-processing (i.e., darkroom) facilities; tone and colour corrections are also useful in other areas of imaging, such as photo enhancement and colour reproduction. The effectiveness of these transformations is judged on monitors; the transformations are therefore developed, refined and evaluated on monitors, and it is necessary to maintain a high degree of colour consistency between the monitors used and the eventual output devices, so that the monitor represents accurately any digitally scanned source image. This is achieved with a device-independent colour model: colour profiles are used to map each device to the model, and the model itself links the devices. The model of choice for many colour management systems (CMS) is the CIE L*a*b* model, whose colour components are given by the following equations:

    L* = 116 (Y/Yw)^(1/3) - 16
    a* = 500 [(X/Xw)^(1/3) - (Y/Yw)^(1/3)]
    b* = 200 [(Y/Yw)^(1/3) - (Z/Zw)^(1/3)]

where Xw, Yw and Zw are the reference-white tristimulus values. Like the HSI system, the L*a*b* system is an excellent decoupler of intensity (represented by lightness L*) and colour (represented by a* for red minus green and b* for yellow minus blue), making it useful in both image manipulation (tone and contrast editing) and image compression applications. The principal benefit of calibrated imaging systems is that they allow tonal and colour imbalances to be corrected interactively and independently, i.e., in two sequential operations: before colour irregularities, such as over- and under-saturated colours, are resolved, problems involving the image's tonal range are corrected. The tonal range of an image, also called its key type, refers to its general distribution of colour intensities. Most of the information in high-key images is concentrated at high (light) intensities; the colours of low-key images are located predominantly at low intensities; middle-key images lie in between. As in the monochrome case, it is often desirable to distribute the intensities of a colour image equally between the highlights and the shadows.
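The L*a*b* equations above can be sketched directly. A minimal example (plain Python; uses only the cube-root branch of the CIE definition, so it is accurate for components above the dark-region threshold, and the D65 reference-white tristimulus values given here are an assumption, not from the text):

```python
def xyz_to_lab(x, y, z, xw=95.047, yw=100.0, zw=108.883):
    """CIE XYZ -> L*a*b* relative to a reference white (D65 assumed).
    Simplified: cube-root branch only (valid away from very dark values)."""
    fx = (x / xw) ** (1.0 / 3.0)
    fy = (y / yw) ** (1.0 / 3.0)
    fz = (z / zw) ** (1.0 / 3.0)
    L = 116.0 * fy - 16.0       # lightness
    a = 500.0 * (fx - fy)       # red minus green axis
    b = 200.0 * (fy - fz)       # yellow minus blue axis
    return L, a, b

# The reference white itself maps to L* = 100 with a* = b* = 0:
L, a, b = xyz_to_lab(95.047, 100.0, 108.883)
assert abs(L - 100.0) < 1e-9 and abs(a) < 1e-9 and abs(b) < 1e-9
```

The check illustrates the decoupling: a neutral colour carries all its information in L*, with zero chromatic components.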

7.9 HISTOGRAM PROCESSING:


Colour images are composed of multiple components; consideration must therefore be given to adapting the gray-scale histogram technique to more than one component. Histogram-equalizing the components of a colour image independently results in erroneous colours. A more logical approach is to spread the colour intensities uniformly while leaving the colours themselves (e.g., the hues) unchanged. Although the intensity-equalization process does not alter the hue and saturation values of the image, it does affect the overall colour perception, so it is common to increase the image's saturation component after histogram equalization. This type of adjustment is common when working with the intensity component in HSI space, because changes in intensity usually affect the relative appearance of colours in an image.
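The "equalize intensity only" idea can be sketched as follows (plain Python; operates on the intensity plane alone, with the understanding that hue and saturation would be carried through unchanged in a full HSI pipeline; the tiny 4-level image is illustrative):

```python
def equalize_intensity(intensities, levels=256):
    """Histogram-equalize the intensity plane of an image: build the
    histogram, form the cumulative distribution, and map through it."""
    flat = [v for row in intensities for v in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    lut = [round((levels - 1) * c / n) for c in cdf]   # lookup table
    return [[lut[v] for v in row] for row in intensities]

out = equalize_intensity([[0, 0], [1, 3]], levels=4)
assert out == [[2, 2], [2, 3]]
```

Applying this to the I plane of an HSI image and recombining with the original H and S spreads the intensities without shifting the hues.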

7.10 SMOOTHING AND SHARPENING:


The next step beyond transforming each pixel of a colour image without regard to its neighbours is to modify its value based on the characteristics of the surrounding pixels. This type of neighbourhood processing is illustrated here within the context of colour image smoothing and sharpening.

7.10.1 COLOUR IMAGE SMOOTHING: The principal difference from the gray-scale case is that instead of scalar gray-level values we must deal with component vectors. Let S_xy denote the set of co-ordinates defining a neighbourhood centred at (x, y) in an RGB colour image. The average of the RGB component vectors in this neighbourhood is

    c̄(x, y) = (1/K) Σ_{(s,t) in S_xy} c(s, t)

which, written out per component, is

    c̄(x, y) = [ (1/K) Σ R(s, t),  (1/K) Σ G(s, t),  (1/K) Σ B(s, t) ]^T

where K is the number of pixels in the neighbourhood.



We recognize the components of this vector as the scalar images that would be obtained by independently smoothing each plane of the original RGB image using conventional gray-scale neighbourhood processing. Thus, we conclude that smoothing by neighbourhood averaging can be carried out on a per-colour-plane basis; the result is the same as when the averaging is performed using the RGB colour vectors. The degree of smoothing increases as the size of the smoothing mask increases.

7.11 COLOR IMAGE SHARPENING:


The Laplacian of a vector is defined as a vector whose components are equal to the Laplacians of the individual scalar components of the input vector. In the RGB colour system, the Laplacian of the vector c of Eq. (b) is

    ∇²[c(x, y)] = [∇²R(x, y), ∇²G(x, y), ∇²B(x, y)]^T

which, as in the previous section, tells us that we can compute the Laplacian of a full-colour image by computing the Laplacian of each component image separately: compute the Laplacian of each RGB component image, then combine the results to produce the sharpened full-colour image. Similarly, a sharpened image based on the HSI model can be generated by sharpening only the intensity component and combining it with the unchanged hue and saturation components.
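Per-plane Laplacian sharpening can be sketched as follows (plain Python; the 4-neighbour Laplacian and the weight `c = -1`, the usual choice for a mask with a negative centre, are illustrative; border pixels are left unchanged here for simplicity):

```python
def laplacian(channel):
    """4-neighbour Laplacian of one scalar plane (borders left at 0)."""
    h, w = len(channel), len(channel[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = (channel[i - 1][j] + channel[i + 1][j] +
                         channel[i][j - 1] + channel[i][j + 1] -
                         4 * channel[i][j])
    return out

def sharpen_rgb(r, g, b, c=-1):
    """Sharpen a full-colour image by adding c * Laplacian to each plane."""
    def add(ch):
        lap = laplacian(ch)
        return [[ch[i][j] + c * lap[i][j] for j in range(len(ch[0]))]
                for i in range(len(ch))]
    return add(r), add(g), add(b)

# A flat region has zero Laplacian, so sharpening leaves it unchanged:
flat = [[5] * 4 for _ in range(4)]
r2, g2, b2 = sharpen_rgb(flat, flat, flat)
assert r2 == flat and g2 == flat and b2 == flat
```

For HSI-based sharpening, the same `laplacian` would be applied to the I plane only.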

7.12 COLOR SEGMENTATION:


Segmentation is a process that partitions an image into regions.

7.12.1 SEGMENTATION IN HSI COLOUR SPACE: If we wish to segment an image based on colour and, in addition, to carry out the process on individual planes, it is natural to think first of the HSI space, because colour is conveniently represented in the hue image. Typically, saturation is used as a masking image in order to isolate further regions of interest in the hue image. The intensity image is used less frequently for segmentation of colour images because it carries no colour information. As an example, suppose the task is to segment a reddish region of interest. Comparing the region of interest with the HSI component images, the region of interest has high hue values, indicating that its colours are on the blue-magenta side of red. The procedure is as follows: generate a binary mask by thresholding the saturation image with a threshold equal to 10% of the maximum value in the saturation image, setting any pixel greater than the threshold to 1 (white) and all others to 0 (black); form the product of the mask with the hue image; and compute the histogram of the product image. The values of interest are grouped at the very high end of the gray scale, so a final threshold there isolates the region.

7.12.2 SEGMENTATION IN RGB VECTOR SPACE: Suppose that the objective is to segment objects of a specified colour range in an RGB image. Given a set of sample colour points representative of the colours of interest, we obtain an estimate of the average colour that we wish to segment. Let this average colour be denoted by the RGB vector a. The objective of segmentation is to classify each RGB pixel in a given image as having a colour in the specified range or not. In order to perform this comparison, it is necessary to have a measure of similarity; one of the simplest measures is the Euclidean distance. Let z denote an arbitrary point in RGB space. We say that z is similar to a if the distance between them is less than a specified threshold D0.
The Euclidean distance between z and a is given by

    D(z, a) = ||z - a|| = [(z - a)^T (z - a)]^(1/2)
            = [(z_R - a_R)^2 + (z_G - a_G)^2 + (z_B - a_B)^2]^(1/2) ....................(1)

where the subscripts R, G and B denote the RGB components of the vectors a and z. The locus of points such that D(z, a) ≤ D0 is a solid sphere of radius D0.

Points contained within or on the sphere satisfy the specified colour criterion; points outside the sphere do not. Coding these two sets of points in the image with, say, black and white produces a binary segmented image. A useful generalization of Eq. (1) is a distance measure of the form

    D(z, a) = [(z - a)^T C^{-1} (z - a)]^(1/2) ....................(2)

where C is the covariance matrix of the samples of the colour we wish to segment. The locus of points such that D(z, a) ≤ D0 describes a solid 3-D elliptical body with the important property that its principal axes are oriented in the directions of maximum data spread. When C = I, the 3×3 identity matrix, Eq. (2) reduces to Eq. (1).
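Segmentation with the Euclidean distance of Eq. (1) can be sketched in a few lines (plain Python; the average colour `a` and threshold are illustrative):

```python
import math

def segment_by_color(image, a, d0):
    """Binary segmentation: a pixel is foreground (1) when its Euclidean
    distance to the average colour `a` is within the threshold d0."""
    def dist(z):
        return math.sqrt(sum((zc - ac) ** 2 for zc, ac in zip(z, a)))
    return [[1 if dist(p) <= d0 else 0 for p in row] for row in image]

img = [[(100, 0, 0), (0, 100, 0)]]           # one reddish, one greenish pixel
mask = segment_by_color(img, a=(110, 10, 10), d0=30)
assert mask == [[1, 0]]                       # only the reddish pixel is kept
```

Replacing the Euclidean distance with the Mahalanobis form of Eq. (2) would only change the `dist` function, trading the spherical decision region for an elliptical one.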

7.13 COLOR EDGE DETECTION:


Edge detection is an important tool for image segmentation. Here we are interested in the issue of computing edges on an individual-image basis versus computing edges directly in colour vector space. The gradient is defined for a scalar function, not for a vector function: recall that for a scalar function f(x, y), the gradient is a vector pointing in the direction of maximum rate of change of f at co-ordinates (x, y). Let r, g and b be unit vectors along the R, G and B axes of RGB colour space, and define the vectors

    u = (∂R/∂x) r + (∂G/∂x) g + (∂B/∂x) b
    v = (∂R/∂y) r + (∂G/∂y) g + (∂B/∂y) b

Let the quantities g_xx, g_yy and g_xy be defined in terms of the dot products of these vectors, as follows:

    g_xx = u · u = |∂R/∂x|^2 + |∂G/∂x|^2 + |∂B/∂x|^2
    g_yy = v · v = |∂R/∂y|^2 + |∂G/∂y|^2 + |∂B/∂y|^2
    g_xy = u · v = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y)

R, G and B, and consequently the g's, are functions of x and y. Using this notation, it can be shown that the direction of maximum rate of change of c(x, y) is given by the angle

    θ(x, y) = (1/2) arctan[ 2 g_xy / (g_xx - g_yy) ]

and that the value of the rate of change at (x, y), in the direction of θ(x, y), is given by


    F(θ) = { (1/2) [ (g_xx + g_yy) + (g_xx - g_yy) cos 2θ + 2 g_xy sin 2θ ] }^(1/2)

Because tan(α) = tan(α ± π), if θ0 is a solution of the arctangent equation, so is θ0 ± π/2. Furthermore, F(θ) = F(θ + π), so F needs to be computed only for values of θ in the half-open interval [0, π). The fact that the arctangent equation provides two values 90° apart means that it associates with each point (x, y) a pair of orthogonal directions; along one of those directions F is maximum, and it is minimum along the other. The derivation of these results is rather lengthy, and little would be gained, in terms of the fundamental objective of our current discussion, by detailing it here. As an illustration, the gradient of an image obtained using this vector method can be compared with an image obtained by computing the gradient of each RGB component image and forming a composite gradient image at each co-ordinate (x, y): the edge detail of the vector gradient image is more complete than the detail in the per-plane composite gradient image.
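The vector-gradient computation above can be sketched for a single pixel, given the per-channel partial derivatives (plain Python; `atan2` is used instead of a bare arctangent so the g_xx = g_yy case is handled, and the function name is illustrative):

```python
import math

def color_gradient_at(dRx, dGx, dBx, dRy, dGy, dBy):
    """Vector (colour) gradient at one pixel from the per-channel partial
    derivatives in x and y; returns (theta, F(theta)) for the maximising angle."""
    gxx = dRx ** 2 + dGx ** 2 + dBx ** 2
    gyy = dRy ** 2 + dGy ** 2 + dBy ** 2
    gxy = dRx * dRy + dGx * dGy + dBx * dBy
    theta = 0.5 * math.atan2(2 * gxy, gxx - gyy)
    f = math.sqrt(max(0.0, 0.5 * ((gxx + gyy) +
                                  (gxx - gyy) * math.cos(2 * theta) +
                                  2 * gxy * math.sin(2 * theta))))
    return theta, f

# A purely horizontal change in all three channels: the gradient points
# along x (theta = 0) with magnitude sqrt(3).
theta, f = color_gradient_at(1, 1, 1, 0, 0, 0)
assert abs(theta) < 1e-12 and abs(f - math.sqrt(3)) < 1e-12
```

In a full implementation, the six partial derivatives would come from Sobel (or similar) operators applied to each RGB plane.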

7.14 NOISE IN COLOR IMAGES: Usually the noise content of a colour image has the same characteristics in each colour channel, but it is possible for the colour channels to be affected differently by noise. One possibility is for the electronics of a particular channel to malfunction; however, different noise levels are more likely to be caused by differences in the relative strength of illumination available to each of the colour channels. For example, use of a red (reject) filter in a CCD camera will reduce the strength of illumination available to the red sensor. CCD sensors are noisier at lower levels of illumination, so the resulting red component of an RGB image would tend to be noisier than the other two component images.

7.15 COLOR IMAGE COMPRESSION:


The number of bits required to represent colour is typically three or four times greater than the number employed in the representation of gray levels, so data compression plays a central role in the storage and transmission of colour images. With respect to the RGB, CMY(K) and HSI images of the previous sections, the data that are the object of any compression are the components of each colour pixel (e.g., the red, green and blue components of the pixels in an RGB image); they are the means by which colour information is conveyed. Compression is the process of reducing or eliminating redundant and/or irrelevant data. A compressed image must be decompressed before being input to a colour monitor. As an example, a compressed image might contain only 1 bit of data (and thus 1 bit of storage) for every 230 bits of data in the original image; the transmission time for the compressed image is correspondingly less than for the original, and decompression is done at the receiver. The JPEG-2000 compression algorithm, a relatively recently introduced standard, can be used to generate such results. The reconstructed image can be slightly blurred; this can be reduced by altering the level of compression.

7.16 COLOR SLICING:


Highlighting a specific range of colours in an image is useful for separating objects from their surroundings. The basic idea is either (1) to display the colours of interest so that they stand out from the background, or (2) to use the region defined by the colours as a mask for further processing. The practical colour-slicing approaches all require each pixel's transformed colour components to be a function of all of the original pixel's colour components; a common formulation maps colours outside a cube (or sphere) of specified width, centred on a prototype colour of interest, to a nonprominent neutral colour.
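A cube-based colour slicer can be sketched as follows (plain Python; the prototype colour, cube width and the mid-gray neutral value are illustrative choices, and values are assumed normalized to [0, 1]):

```python
def color_slice(image, center, width, neutral=0.5):
    """Keep colours inside a cube of side `width` centred at `center`;
    map every other colour to a neutral gray so the range stands out."""
    half = width / 2.0
    def inside(p):
        return all(abs(pc - cc) <= half for pc, cc in zip(p, center))
    return [[p if inside(p) else (neutral,) * 3 for p in row] for row in image]

img = [[(0.9, 0.1, 0.1), (0.1, 0.9, 0.1)]]     # one reddish, one greenish pixel
out = color_slice(img, center=(1.0, 0.0, 0.0), width=0.5)
assert out[0][0] == (0.9, 0.1, 0.1)            # the reddish pixel is kept
assert out[0][1] == (0.5, 0.5, 0.5)            # the greenish pixel is neutralized
```

Note that the decision for each pixel depends on all three of its components at once, as the text requires; a per-component threshold alone could not express this.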

