
MC0086 DIGITAL IMAGE PROCESSING

Que. 1 Explain the significance of digital image processing in Gamma-ray Imaging and Imaging in the Visible and Infrared Bands?

Ans: A digital image is a numeric representation (normally binary) of a two-dimensional image. Depending on whether the image resolution is fixed, it may be of vector or raster type. Without qualification, the term "digital image" usually refers to raster images, also called bitmap images. Raster images have a finite set of digital values, called picture elements or pixels. A digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given color at any specific point. Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers (see the sketch after the list below). These values are often transmitted or stored in a compressed form.

Raster images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more. They can also be synthesized from arbitrary non-image data, such as mathematical functions or three-dimensional geometric models; the latter is a major sub-area of computer graphics. The field of digital image processing is the study of algorithms for the transformation of such images. There are many fields of digital image processing, such as:
1) Gamma-Ray Imaging
2) X-Ray Imaging
3) Imaging in the Ultraviolet Band
4) Imaging in the Visible and Infrared Bands
5) Imaging in the Microwave Band
6) Imaging in the Radio Band
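As a concrete illustration of the pixel-array idea above, here is a minimal sketch (assuming Python with NumPy; the array name and dimensions are illustrative, not from the text) of a raster image as a fixed two-dimensional array of quantized brightness values:

import numpy as np

rows, cols = 4, 6
image = np.zeros((rows, cols), dtype=np.uint8)  # 8-bit grayscale, values 0-255

image[1, 2] = 255          # set one pixel to full brightness
image[3, :] = 128          # set an entire row to mid-gray

print(image.shape)         # (4, 6) -- fixed number of rows and columns
print(image[1, 2])         # 255 -- the quantized value stored at that pixel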

Gamma-Ray Imaging: Gamma-ray imaging is used in many applications, such as nuclear medicine and astronomy. Positron emission tomography (PET imaging) is commonly used in medical diagnostic imaging. A radioactive isotope that emits positrons is administered to the patient. Each positron meets an electron and the two annihilate, giving out gamma rays, which are detected by sensors. Prominent bright spots indicate white masses corresponding to tumors in the lung and brain.

Imaging in the Visible and Infrared Bands: The infrared band is often used in conjunction with visual imaging, so the visible and infrared bands are grouped here for the purpose of illustration. Examples include images obtained with a light microscope, ranging from pharmaceuticals and microinspection to materials characterization. Even in microscopy alone, the application areas are too numerous to detail here. It is not difficult to conceptualize the types of processes one might apply to these images, ranging from enhancement to measurements.

Que. 2 Explain the properties and uses of electromagnetic spectrum?

Ans: The electromagnetic spectrum can be expressed in terms of wavelength, frequency, or energy. Wavelength (λ) and frequency (ν) are related by the expression

λ = c / ν

where c is the speed of light (2.998 × 10⁸ m/s). The energy of the various components of the electromagnetic spectrum is given by the expression

E = hν

where h is Planck's constant. The units of wavelength are meters, with the terms microns (denoted μm and equal to 10⁻⁶ m) and nanometers (10⁻⁹ m) being used frequently. Frequency is measured in Hertz (Hz), with one Hertz being equal to one cycle of a sinusoidal wave per second.

Electromagnetic waves can be visualized as propagating sinusoidal waves of wavelength λ, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. Energy is proportional to frequency, so higher-frequency (shorter-wavelength) electromagnetic phenomena carry more energy per photon. Thus, radio waves have photons with low energies; microwaves have more energy than radio waves, infrared still more, then visible, ultraviolet, X-rays, and finally gamma rays, the most energetic of all. This is the reason that gamma rays are so dangerous to living organisms.

At the short-wavelength end of the electromagnetic spectrum, we have gamma rays and hard X-rays. Gamma radiation is important for medical and astronomical imaging, and for imaging radiation in nuclear environments. Hard (high-energy) X-rays are used in industrial applications. Moving higher in wavelength, we encounter the infrared band, which radiates heat, a fact that makes it useful in imaging applications that rely on heat signatures. The part of the infrared band close to the visible spectrum is called the near-infrared region; the opposite end of this band is called the far-infrared region, which blends with the microwave band. The microwave band is well known as the source of energy in microwave ovens, but it has many other uses, including communication and radar. Finally, the radio wave band encompasses television as well as AM and FM radio. At the higher energies of this band, radio signals emanating from certain stellar bodies are useful in astronomical observations.
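To make the two relations concrete, the following small Python example (standard library only; the chosen wavelengths are illustrative assumptions, not values from the text) computes frequency and photon energy from wavelength, confirming that shorter wavelengths carry more energy per photon:

# Worked example of lambda = c / nu and E = h * nu.
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck's constant, J*s

def photon_energy(wavelength_m):
    """Return frequency (Hz) and photon energy (J) for a given wavelength."""
    frequency = C / wavelength_m
    return frequency, H * frequency

for name, wavelength in [("radio (1 m)", 1.0),
                         ("visible green (550 nm)", 550e-9),
                         ("gamma ray (1 pm)", 1e-12)]:
    nu, e = photon_energy(wavelength)
    print(f"{name}: nu = {nu:.3e} Hz, E = {e:.3e} J")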

Que. 3 Differentiate between Monochromatic photography and Color photography?

Ans: Monochromatic Photography: The most common material for photographic image recording is silver halide emulsion, in which silver halide grains are suspended in a transparent layer of gelatin deposited on a glass, acetate, or paper backing. If the backing is transparent, a transparency can be produced; if the backing is white paper, a reflection print can be obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of the grain is converted to metallic silver. A development center is then said to exist in the grain. In the development process, a chemical developing agent causes grains with partial silver content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing unexposed grains. The photographic process described above is called a nonreversal process. It produces a negative image in the sense that the silver density is inversely proportional to the exposing light. A positive reflection print of an image can be obtained in a two-stage process with nonreversal materials: first, a negative transparency is produced, and then the negative transparency is illuminated to expose negative reflection print paper. The resulting silver density on the developed paper is then proportional to the light intensity that exposed the negative transparency.

Color Photography: Modern color photography systems utilize an integral tripack film to produce positive or negative transparencies. In a cross section of this film, the first layer is a silver halide emulsion sensitive to blue light. A yellow filter following the blue emulsion prevents blue light from passing through to the green and red silver halide emulsions that follow in consecutive layers and are naturally sensitive to blue light. A transparent base supports the emulsion layers. Upon development, the blue emulsion layer is converted into a yellow dye transparency whose dye concentration is proportional to the blue exposure for a negative transparency and inversely proportional for a positive transparency. Similarly, the green and red emulsion layers become magenta and cyan dye layers, respectively. Color prints can be obtained by a variety of processes; the most common technique is to produce a positive print from a color negative transparency onto nonreversal color paper. In establishing a mathematical model of the color photographic process, each emulsion layer can be considered to react to light as an emulsion layer of a monochrome photographic material does.

Que. 4 Define and explain Dilation and Erosion concept?

Ans: Dilation: With dilation, an object grows uniformly in spatial extent. Generalized dilation is expressed symbolically as

G(j, k) = F(j, k) ⊕ H(j, k)


where F(j, k), for 1 ≤ j, k ≤ N, is a binary-valued image and H(j, k), for 1 ≤ j, k ≤ L, where L is an odd integer, is a binary-valued array called a structuring element. For notational simplicity, F(j, k) and H(j, k) are assumed to be square arrays. Generalized dilation can be defined mathematically and implemented in several ways. The Minkowski addition definition is

G(j, k) = ∪_{(r, c): H(r, c) = 1} F_{r, c}(j, k)

where F_{r, c}(j, k) denotes F(j, k) translated by r rows and c columns.

It states that G(j, k) is formed by the union of all translates of F(j, k) with respect to itself, in which the translation distance is the row and column index of the pixels of H(j, k) that are in the logical one state.

Erosion: With erosion, an object shrinks uniformly. Generalized erosion is expressed symbolically as

G(j, k) = F(j, k) ⊖ H(j, k)


where H(j, k) is an odd-size L × L structuring element. Generalized erosion is defined to be

G(j, k) = ∩_{(r, c): H(r, c) = 1} F_{r, c}(j, k)

The meaning of this relation is that the erosion of F(j, k) by H(j, k) is the intersection of all translates of F(j, k) in which the translation distance is the row and column index of the pixels of H(j, k) that are in the logical one state.
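The following minimal sketch (assuming Python with NumPy; the function names, test image, and 3 × 3 structuring element are illustrative) implements dilation as a union of translates and erosion as an intersection of translates, directly following the definitions above:

import numpy as np

def translate(F, r, c):
    """Shift binary image F by (r, c), filling vacated pixels with 0."""
    G = np.zeros_like(F)
    rows, cols = F.shape
    src = F[max(0, -r):rows - max(0, r), max(0, -c):cols - max(0, c)]
    G[max(0, r):rows + min(0, r), max(0, c):cols + min(0, c)] = src
    return G

def dilate(F, H):
    offsets = np.argwhere(H == 1) - np.array(H.shape) // 2  # centered offsets
    G = np.zeros_like(F)
    for r, c in offsets:
        G |= translate(F, r, c)      # union of translates: object grows
    return G

def erode(F, H):
    offsets = np.argwhere(H == 1) - np.array(H.shape) // 2
    G = np.ones_like(F)
    for r, c in offsets:
        G &= translate(F, r, c)      # intersection of translates: object shrinks
    return G

F = np.zeros((7, 7), dtype=np.uint8)
F[2:5, 2:5] = 1                      # a 3x3 binary object
H = np.ones((3, 3), dtype=np.uint8)  # odd-size L x L structuring element
print(dilate(F, H))                  # 5x5 object: grows uniformly
print(erode(F, H))                   # single center pixel: shrinks uniformly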

Que. 5 What is meant by Image Feature Evaluation? Which are the two quantitative approaches used for the evaluation of image features?

Ans: Image Feature Evaluation: The original ASM method iteratively refines the pose and shape parameters of the point distribution model driving the ASM by a least-squares fit of the shape to updated target points at the estimated object boundary position, as determined by a suitable object boundary criterion. We propose an improved search procedure that is more robust against outlier configurations in the boundary target points by requiring subsequent shape changes to be smooth, which is imposed by a smoothness constraint on the displacement of neighboring target points at each iteration and implemented by a minimal cost path approach. We compare the original ASM search method and our improved search algorithm with a third method that does not rely on iteratively refined target point positions, but instead optimizes a global Bayesian objective function derived from statistical a priori contour shape and image models.

The two quantitative approaches used for the evaluation of image features are:
1) Prototype performance
2) Figure of merit

Prototype Performance: In the prototype performance approach for image classification, a prototype image with regions that have been independently categorized is classified by classification procedures using the various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, the one that results in the least classification error. The prototype performance approach for image segmentation is similar in nature: a prototype image with independently identified regions is segmented by segmentation procedures using a test set of features. Then the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with the prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication depends not only on the quality of the features but also on the ability of the classifier or segmenter.

Figure of Merit: The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features, such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance figure of merit for texture feature evaluation, and the method should be extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of the features of the two classes; a sketch of its computation is given below.
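As a hedged illustration of such a figure of merit, the sketch below (Python with NumPy) uses the closed-form Bhattacharyya distance for Gaussian feature densities, which is one common assumption; the means and covariances of the two hypothetical feature classes are made up for illustration:

import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """B-distance between two Gaussian feature densities (closed form)."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = diff @ np.linalg.inv(cov) @ diff / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Two hypothetical texture-feature classes: a larger B-distance suggests
# the feature set separates the classes better (lower classification error).
mu_a, cov_a = np.array([0.0, 0.0]), np.eye(2)
mu_b, cov_b = np.array([3.0, 1.0]), np.eye(2) * 1.5
print(bhattacharyya_gaussian(mu_a, cov_a, mu_b, cov_b))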

Que. 6 Explain about the Region Splitting and merging with example

Ans: Region Splitting: The basic idea of region splitting is to break the image into a set of disjoint regions, each of which is coherent within itself:
1. Initially, take the image as a whole to be the area of interest.
2. Look at the area of interest and decide whether all pixels contained in the region satisfy some similarity constraint.
3. If TRUE, then the area of interest corresponds to a region in the image.
4. If FALSE, split the area of interest (usually into four equal sub-areas) and consider each of the sub-areas as the area of interest in turn.
This process continues until no further splitting occurs. In the worst case this happens when the areas are just one pixel in size. This is a divide-and-conquer, or top-down, method. If only a splitting schedule is used, the final segmentation would probably contain many neighboring regions that have identical or similar properties. The splitting of the image can be described using a tree structure, a modified quadtree: each non-terminal node in the tree has at most four descendants, although it may have fewer due to merging.

[Figure: Example of a region splitting and merging tree, with leaf nodes I1, I2, I3, I41, I42, and I43.]

Merging: A merging process is used after each split; it compares adjacent regions and merges them if necessary. Algorithms of this nature are called split and merge algorithms. To illustrate the basic principle of these methods, consider an imaginary image, as shown in the following example:

[Figure: Example of region splitting and merging — (a) whole image I; (b) first split into I1, I2, I3, I4; (c) second split of I4 into I41, I42, I43, I44; (d) merge, leaving I1, I2, I3, I41, I42, I43.]

1. Not all the pixels in I are similar, so the region is split as in example (b).
2. Assume that all pixels within regions I1, I2, and I3 respectively are similar, but those in I4 are not.
3. Therefore I4 is split next, as in example (c).
4. Now assume that all pixels within each region are similar with respect to that region, and that after comparing the split regions, regions I43 and I44 are found to be identical.
5. These are thus merged together, as in example (d). A simplified code sketch of this split-and-merge procedure follows.
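The following simplified sketch (assuming Python with NumPy; the threshold, homogeneity test, and test image are illustrative) shows the splitting half of the procedure as a recursive quadtree decomposition; a full implementation would follow it with the merge pass described above:

import numpy as np

THRESHOLD = 10  # a region is "similar" if max - min intensity <= THRESHOLD

def is_homogeneous(region):
    return int(region.max()) - int(region.min()) <= THRESHOLD

def split(image, top=0, left=0, size=None):
    """Recursively split; return list of (top, left, size) leaf regions."""
    if size is None:
        size = image.shape[0]           # assume a square, power-of-two image
    region = image[top:top + size, left:left + size]
    if is_homogeneous(region) or size == 1:
        return [(top, left, size)]      # similarity constraint satisfied
    half = size // 2
    leaves = []                          # split into four equal sub-areas
    for dt, dl in [(0, 0), (0, half), (half, 0), (half, half)]:
        leaves += split(image, top + dt, left + dl, half)
    return leaves

image = np.zeros((8, 8), dtype=np.uint8)
image[4:8, 4:6] = 200                   # a bright block in one quadrant
for leaf in split(image):
    print(leaf)                         # homogeneous leaves of the quadtree
# A merge pass would now compare adjacent leaves and fuse those whose
# union still satisfies the similarity constraint (e.g. I43 with I44).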
