
Question 1 - Explain the process of image formation in the human eye.

Image formation in the Human Eye: The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened. Conversely, these muscles allow the lens to become thicker in order to focus on objects near the eye. The distance between the center of the lens and the retina (the focal length) varies from approximately 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power; when the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object.

Figure 1: Graphical representation of the eye looking at a palm tree; point C is the optical center of the lens.

In Fig. 1, for example, the observer is looking at a tree 15 m high at a distance of 100 m. If h is the height of the retinal image in mm, the geometry of Fig. 1 yields 15/100 = h/17, or h = 2.55 mm. The retinal image is focused primarily on the region of the fovea. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.
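The similar-triangles calculation generalizes directly; a minimal check in plain Python using only the numbers quoted above:

```python
# Retinal image size by similar triangles:
# object_height / object_distance = h / focal_length.
# Values are the ones used in the text: a 15 m tree viewed from 100 m,
# with the lens-to-retina distance taken as 17 mm (fully relaxed lens).
object_height_m = 15.0
object_distance_m = 100.0
focal_length_mm = 17.0

retinal_height_mm = (object_height_m / object_distance_m) * focal_length_mm
print(f"Retinal image height: {retinal_height_mm:.2f} mm")  # 2.55 mm
```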

Question 2 - Explain different linear methods for noise cleaning.

Noise added to an image generally has a higher-spatial-frequency spectrum than the normal image components because of its spatial decorrelatedness. Hence, simple low-pass filtering can be effective for noise cleaning.

Convolution method of noise cleaning: A spatially filtered output image G(j, k) can be formed by discrete convolution of an input image F(m, n) with an L x L impulse response array H(j, k) according to the relation

G(j, k) = \sum_m \sum_n F(m, n) H(m - j + C, n - k + C), where C = (L + 1)/2

For noise cleaning, H should be of low-pass form, with all positive elements. Several common pixel impulse response arrays of low-pass form are used; two such masks are

              | 1 1 1 |                  | 1 2 1 |
    H = 1/9 * | 1 1 1 |       H = 1/16 * | 2 4 2 |
              | 1 1 1 |                  | 1 2 1 |
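Using masks of this form, here is a minimal sketch of the defining windowed summation in NumPy; the direct double loop mirrors the relation above rather than an optimized FFT implementation, and the image size and noise level are illustrative:

```python
import numpy as np

def noise_clean(F, H):
    """Convolve image F with an L x L noise-cleaning mask H.

    For the symmetric masks shown above, correlation and convolution
    coincide, so a direct windowed sum mirrors the defining relation.
    Border pixels the window cannot cover are left at zero.
    """
    L = H.shape[0]
    c = L // 2                                  # half-width of the mask
    G = np.zeros_like(F, dtype=float)
    for j in range(c, F.shape[0] - c):
        for k in range(c, F.shape[1] - c):
            G[j, k] = np.sum(F[j - c:j + c + 1, k - c:k + c + 1] * H)
    return G

# 3 x 3 box mask, normalized to unit weighting (no amplitude bias).
H_box = np.ones((3, 3)) / 9.0

rng = np.random.default_rng(0)
F = 100.0 + 10.0 * rng.standard_normal((64, 64))   # flat field plus noise
G = noise_clean(F, H_box)
print(F.std(), G.std())   # averaging 9 pixels cuts the deviation roughly 3x
```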

These arrays, called noise-cleaning masks, are normalized to unit weighting so that the noise-cleaning process does not introduce an amplitude bias in the processed image.

Homomorphic filtering: This is a useful technique for image enhancement when an image is subject to multiplicative noise or interference. Fig. 2 describes the process.
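Ahead of the formal model below, here is a minimal sketch of the log-filter-exponentiate pipeline in NumPy; the Gaussian-shaped high-pass transfer function and its cutoff are illustrative choices for suppressing the slowly varying illumination term, not prescribed by the text:

```python
import numpy as np

def homomorphic_filter(F, cutoff=0.05):
    """Reduce multiplicative illumination interference in image F.

    Since log F = log I + log S, attenuating low spatial frequencies of
    log F suppresses the slowly varying log-illumination term; taking
    exp() after filtering returns to the image domain.
    """
    logF = np.log(F + 1e-6)                   # guard against log(0)
    spectrum = np.fft.fft2(logF)

    # Radial spatial frequency, in cycles per pixel.
    fy = np.fft.fftfreq(F.shape[0])[:, None]
    fx = np.fft.fftfreq(F.shape[1])[None, :]
    r = np.hypot(fx, fy)

    # Gaussian-shaped high-pass: ~0 at DC, ~1 well above the cutoff.
    H = 1.0 - np.exp(-(r / cutoff) ** 2)

    filtered = np.real(np.fft.ifft2(spectrum * H))
    return np.exp(filtered)                   # exponentiation completes the process
```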

Figure 2: Homomorphic filtering.

The input image F(j, k) is assumed to be modeled as the product of a noise-free image S(j, k) and an illumination interference array I(j, k). Thus,

F(j, k) = S(j, k) I(j, k)

Taking the logarithm yields the additive linear result

log{F(j, k)} = log{I(j, k)} + log{S(j, k)}

Conventional linear filtering techniques can now be applied to reduce the log-interference component. Exponentiation after filtering completes the enhancement process.

Question 3 - Describe various texture features of historical and practical importance.

The following texture features are of historical and practical importance:

i) Fourier Spectra Methods: Several studies have considered textural analysis based on the Fourier spectrum of an image region. Because the degree of texture coarseness is proportional to its spatial period, a region of coarse texture should have its Fourier spectral energy concentrated at low spatial frequencies. Conversely, regions of fine texture should exhibit a concentration of spectral energy at high spatial frequencies. Although this correspondence exists to some degree, difficulties often arise because of spatial changes in the period and phase of texture pattern repetitions. Experiments have shown that there is considerable spectral overlap between regions of distinctly different natural texture, such as urban, rural and woodland regions extracted from aerial photographs. On the other hand, Fourier spectral analysis has proved successful in the detection and classification of coal miners' black lung disease, which appears as diffuse textural deviations from the norm.

ii) Edge Detection Methods: Rosenfeld and Troy have proposed a measure of the number of edges in a neighborhood as a textural measure. As a first step in their process, an edge map array E(j, k) is produced by some edge detector such that E(j, k) = 1 for a detected edge and E(j, k) = 0 otherwise. Usually, the detection threshold is set lower than the normal setting for the isolation of boundary points. The texture measure is then defined as

T(j, k) = (1 / W^2) \sum_{m=-w}^{w} \sum_{n=-w}^{w} E(j + m, k + n)

where W = 2w + 1 is the dimension of the observation window. A variation of this approach is to substitute the edge gradient G(j, k) for the edge map array in the above equation.

iii) Autocorrelation Methods: The autocorrelation function has been suggested as the basis of a texture measure. Although it has been demonstrated that it is possible to generate visually different stochastic fields with the same autocorrelation function, this does not necessarily rule out the utility of an autocorrelation feature set for natural images. The normalized autocorrelation function may be defined as

A_F(m, n) = \sum_j \sum_k F(j, k) F(j + m, k + n) / \sum_j \sum_k [F(j, k)]^2

for computation over a W x W window with -T <= m, n <= T pixel lags. Presumably, a region of coarse texture will exhibit a higher correlation for a fixed shift than will a region of fine texture; texture coarseness should therefore be proportional to the spread of the autocorrelation function. Faugeras and Pratt have proposed a set of autocorrelation spread measures of the form

S(u, v) = \sum_{m=-T}^{T} \sum_{n=-T}^{T} (m - \eta_m)^u (n - \eta_n)^v A_F(m, n)

where the lag means are

\eta_m = \sum_m \sum_n m A_F(m, n) / \sum_m \sum_n A_F(m, n),  \eta_n = \sum_m \sum_n n A_F(m, n) / \sum_m \sum_n A_F(m, n)
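As a concrete sketch of items ii) and iii), the snippet below computes an edge-density measure and the autocorrelation spread for a single region; the gradient-threshold edge detector, the lag range T, and the S(2,0) + S(0,2) summary are illustrative choices, not prescriptions from the text:

```python
import numpy as np

def edge_density(region, threshold=10.0):
    """Rosenfeld-Troy style measure: (1/W^2) * sum of the edge map E(j, k).
    A plain gradient-magnitude threshold stands in for the edge detector."""
    gy, gx = np.gradient(region.astype(float))
    E = np.hypot(gx, gy) > threshold
    return E.mean()

def autocorrelation_spread(region, T=4):
    """Normalized autocorrelation over lags -T..T, then the second-moment
    spreads S(2, 0) + S(0, 2) about the lag means, as defined above.
    np.roll makes the shifts circular, a simplification at the window edge."""
    F = region.astype(float)
    lags = np.arange(-T, T + 1)
    A = np.empty((len(lags), len(lags)))
    for i, m in enumerate(lags):
        for j, n in enumerate(lags):
            A[i, j] = np.sum(F * np.roll(np.roll(F, m, axis=0), n, axis=1))
    A /= np.sum(F * F)                        # so that A(0, 0) = 1
    m_grid, n_grid = np.meshgrid(lags, lags, indexing="ij")
    eta_m = np.sum(m_grid * A) / np.sum(A)
    eta_n = np.sum(n_grid * A) / np.sum(A)
    return (np.sum((m_grid - eta_m) ** 2 * A) +
            np.sum((n_grid - eta_n) ** 2 * A))

rng = np.random.default_rng(1)
fine = rng.integers(0, 255, (32, 32)).astype(float)               # fine texture
coarse = np.kron(rng.integers(0, 255, (4, 4)), np.ones((8, 8)))   # coarse texture
print(edge_density(fine), edge_density(coarse))                   # high vs. low
print(autocorrelation_spread(fine), autocorrelation_spread(coarse))  # low vs. high
```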

Question 4 - Which are the two quantitative approaches used for the evaluation of image features? Explain.

There are two quantitative approaches to the evaluation of image features:

i) Prototype performance: In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using the various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, the one that results in the least classification error. The prototype performance approach for image segmentation is similar in nature: a prototype image with independently identified regions is segmented by a segmentation procedure using a test set of features, the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication depends not only on the quality of the features but also on the classification or segmentation ability of the classifier or segmenter.

ii) Figure of merit: The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features, such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance figure of merit for texture feature evaluation; the method should be extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes, defined as

B(S_1, S_2) = -\ln \int [ p(x | S_1) p(x | S_2) ]^{1/2} dx

where x denotes a vector containing individual image feature measurements, with conditional density p(x | S_i) for class S_i.
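When the class-conditional densities are modeled as Gaussian, the B-distance integral has a well-known closed form; a minimal sketch under that Gaussian assumption (the two example classes below are invented for illustration):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """B-distance between two Gaussian class-conditional densities.

    Closed form: (1/8)(mu1-mu2)^T [(C1+C2)/2]^{-1} (mu1-mu2)
                 + (1/2) ln( det((C1+C2)/2) / sqrt(det C1 * det C2) )
    """
    mu_diff = np.asarray(mu1, float) - np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov_avg = (cov1 + cov2) / 2.0
    term1 = 0.125 * mu_diff @ np.linalg.solve(cov_avg, mu_diff)
    term2 = 0.5 * np.log(np.linalg.det(cov_avg)
                         / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Hypothetical 2-feature texture classes: well-separated means give a large B,
# which in turn predicts a low classification error.
B = bhattacharyya_gaussian([0.0, 0.0], np.eye(2), [3.0, 1.0], 2.0 * np.eye(2))
print(B)
```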

Question 5 - Explain with a diagram the digital image restoration model.

In order to design an effective digital image restoration system, it is necessary to characterize quantitatively the image degradation effects of the physical imaging system, the image digitizer and the image display. Basically, the procedure is to model the image degradation effects and then perform operations that undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digitizer and display to determine their response to an arbitrary image field. In the latter case, the model of the image degradations is developed from measurements of the particular image to be restored.

Figure 3: Digital image restoration model.

Fig. 3 shows a general model of a digital imaging system and restoration process. In the model, a continuous image light distribution C(x, y, t, λ), dependent on spatial coordinates (x, y), time t and spectral wavelength λ, is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging system produces a set of output image fields F_O^(i)(x, y, t_j) at time instant t_j described by the general relation

F_O^(i)(x, y, t_j) = O_P{ C(x, y, t, λ) }

where O_P{ . } represents a general operator that is dependent on the space coordinates (x, y), the time history t, the wavelength λ and the amplitude of the light distribution C. For a monochrome imaging system there will be only a single output field, while for a natural color imaging system F_O^(i)(x, y, t_j) may denote the red, green and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery will also involve several output bands of data. In the general model of Fig. 3, each observed image field F_O^(i)(x, y, t_j) is digitized to produce an array of image samples F_S^(i)(m_1, m_2, t_j) at each time instant t_j. The output samples of the digitizer are related to the input observed field by

F_S^(i)(m_1, m_2, t_j) = O_G{ F_O^(i)(x, y, t_j) }

where O_G{ . } denotes a general operator modeling the image digitizer.
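As a toy instance of the model-then-undo idea, the sketch below degrades an image with a known point-spread function and restores it by regularized inverse filtering; the Gaussian PSF, the noise level and the regularization constant are all illustrative assumptions, not part of the general model above:

```python
import numpy as np

def gaussian_psf(shape, sigma=2.0):
    """Centered Gaussian point-spread function, normalized to unit sum."""
    y = np.arange(shape[0]) - shape[0] // 2
    x = np.arange(shape[1]) - shape[1] // 2
    psf = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def restore(observed, psf, eps=1e-2):
    """Pseudo-inverse filter: divide by the blur transfer function where it
    is well conditioned; eps suppresses noise amplification near its zeros."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(observed)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F_hat))

# Degrade a toy image with the modeled blur plus sensor noise, then undo it.
rng = np.random.default_rng(2)
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
psf = gaussian_psf(truth.shape)
H = np.fft.fft2(np.fft.ifftshift(psf))
observed = np.real(np.fft.ifft2(np.fft.fft2(truth) * H))
observed += 0.01 * rng.standard_normal(truth.shape)
restored = restore(observed, psf)
print(np.abs(truth - observed).mean(), np.abs(truth - restored).mean())
```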

Question 6 - Explain Region Splitting and Merging with an example.

Region Splitting and Merging: Subdivide an image into a set of disjoint regions and then merge and/or split the regions in an attempt to satisfy the conditions of segmentation. Let R represent the entire image region and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any region R_i, P(R_i) = TRUE. We start with the entire region. If P(R) = FALSE, the image is divided into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into sub-quadrants, and so on. This particular splitting technique has a convenient representation in the form of a so-called quadtree. Fig. 4 shows that the root of the tree corresponds to the entire image and each node corresponds to a subdivision; in this case, only R4 was subdivided further. If only splitting were used, the final partition would likely contain adjacent regions with identical properties. This drawback may be remedied by allowing merging as well as splitting: two adjacent regions R_j and R_k are merged only if P(R_j U R_k) = TRUE. The procedure, illustrated by the sketch after Fig. 4, is:

i) Split into four disjoint quadrants any region R_i for which P(R_i) = FALSE.
ii) Merge any adjacent regions R_j and R_k for which P(R_j U R_k) = TRUE.
iii) Stop when no further merging or splitting is possible.

Figure 4: (a) Partitioned image; (b) corresponding quadtree.
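A minimal sketch of this split-and-merge procedure, assuming the predicate P is "gray-level standard deviation below a threshold" (an illustrative choice); the pixel-set bookkeeping is kept deliberately simple and slow for clarity:

```python
import numpy as np

MAX_STD = 10.0

def P(image, pixels):
    """Homogeneity predicate: TRUE when the gray-level spread is small."""
    return np.array([image[p] for p in pixels]).std() <= MAX_STD

def split(image, y=0, x=0, size=None, min_size=2):
    """Quadtree split: subdivide any quadrant for which P is FALSE."""
    if size is None:
        size = image.shape[0]          # assumes a square, power-of-two image
    pixels = {(y + dy, x + dx) for dy in range(size) for dx in range(size)}
    if P(image, pixels) or size <= min_size:
        return [pixels]
    h = size // 2
    return (split(image, y,     x,     h) + split(image, y,     x + h, h) +
            split(image, y + h, x,     h) + split(image, y + h, x + h, h))

def adjacent(r1, r2):
    """TRUE when some pixel of r1 is a 4-neighbor of a pixel of r2."""
    return any((py + dy, px + dx) in r2
               for py, px in r1
               for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)))

def merge(image, regions):
    """Greedy merge: fuse any adjacent pair whose union still satisfies P;
    repeat until no further merge is possible."""
    merged = True
    while merged:
        merged = False
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                if (adjacent(regions[i], regions[j])
                        and P(image, regions[i] | regions[j])):
                    regions[i] |= regions.pop(j)
                    merged = True
                    break
            if merged:
                break
    return regions

img = np.zeros((16, 16)); img[:8, :8] = 200.0      # two flat areas
leaves = split(img)
print(len(leaves), "->", len(merge(img, leaves)))  # 4 quadrants merge to 2 regions
```

The split of the root yields four homogeneous quadrants; the merge pass then fuses the three dark quadrants back into a single region, matching the behavior described in steps i)-iii) above.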
