UNIT 1
SPATIAL DOMAIN FILTERING
1. Define Image.
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity
or gray level of the image at that point. When x, y, and the amplitude values of f are all finite,
discrete quantities, we call the image a digital image.
2. Define Image Sampling.
Digitization of spatial coordinates (x, y) is called Image Sampling. To be suitable for
computer processing, an image function f(x,y) must be digitized both spatially and in magnitude.
3. Define Quantization.
Digitizing the amplitude values is called quantization. The quality of a digital image is
determined to a large degree by the number of samples and discrete gray levels used in sampling
and quantization.
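The quantization step above can be sketched as follows; a minimal sketch assuming normalized samples in [0, 1] and an illustrative level count of 4 (the function name quantize is mine, not from the text):

```python
def quantize(sample, levels):
    """Map a sample in [0, 1] to one of `levels` discrete gray levels."""
    # Scale into [0, levels) and truncate; the clamp guards the sample == 1.0 edge case.
    return min(int(sample * levels), levels - 1)

samples = [0.05, 0.30, 0.55, 0.95]
print([quantize(s, 4) for s in samples])  # each sample becomes a level index 0..3
```

Using more levels (e.g. 256 for an 8-bit image) reduces quantization error at the cost of storage.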
4. What is Dynamic Range?
The range of values spanned by the gray scale is called the dynamic range of an image.
An image will have high contrast if the dynamic range is high, and a dull, washed-out gray look
if the dynamic range is low.
5. Define Mach band effect.
The spatial interaction of luminance from an object and its surround creates a
phenomenon called the Mach band effect.
6. Define Brightness.
The brightness of an object is its perceived luminance, which depends on the luminance
of its surround. Two objects with different surroundings can have identical luminance but
different brightness.
7. What is meant by Tapered Quantization?
If gray levels in a certain range occur frequently while others occur rarely, the
quantization levels are finely spaced in this range and coarsely spaced outside of it. This method
is sometimes called tapered quantization.
8. What is meant by gray level?
Gray level refers to a scalar measure of intensity that ranges from black through grays to
white.
UNIT 2
Reverse transforms
19. Define the histogram of an image.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete
function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image
having gray level rk.
20. What is histogram equalization?
It is a technique used to obtain an approximately uniform histogram. It is also known as
histogram linearization. The condition for a uniform histogram is ps(s) = 1 for normalized gray
levels 0 ≤ s ≤ 1.
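Histogram equalization can be sketched directly from the definition h(rk) = nk: each gray level is remapped through the scaled cumulative histogram. A minimal sketch for a flat list of pixels with an illustrative L = 8 levels (the function name equalize is mine):

```python
def equalize(pixels, L):
    """Remap gray levels through the scaled cumulative histogram."""
    n = len(pixels)
    hist = [0] * L
    for p in pixels:
        hist[p] += 1                      # h(r_k) = n_k
    cdf, mapping = 0.0, []
    for count in hist:
        cdf += count / n                  # cumulative probability P(r_k)
        mapping.append(round((L - 1) * cdf))
    return [mapping[p] for p in pixels]

pixels = [0, 0, 0, 1, 1, 2, 7, 7]
print(equalize(pixels, 8))
```

Frequently occurring levels get spread apart, pushing the histogram toward uniform.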
21. What is contrast stretching?
Contrast stretching produces an image of higher contrast than the original, by darkening
the levels below some value m and brightening the levels above m in the image.
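A common form of contrast stretching is a piecewise-linear mapping through two control points. This is a sketch under the assumption of an 8-bit gray scale and illustrative control points; the function name contrast_stretch is mine:

```python
def contrast_stretch(r, r1, s1, r2, s2, r_max=255):
    """Piecewise-linear stretch through control points (r1, s1) and (r2, s2).

    Assumes 0 < r1 < r2 < r_max. Levels below r1 are darkened,
    levels above r2 are brightened.
    """
    if r < r1:
        return s1 * r / r1
    if r <= r2:
        return s1 + (s2 - s1) * (r - r1) / (r2 - r1)
    return s2 + (r_max - s2) * (r - r2) / (r_max - r2)

print(contrast_stretch(50, 100, 50, 150, 200))   # below r1: darkened
print(contrast_stretch(128, 100, 50, 150, 200))  # middle range: stretched
```

Choosing r1 = r2 collapses the mapping toward a thresholding function.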
22. What is spatial filtering?
Spatial filtering is the process of moving the filter mask from point to point in an image.
For linear spatial filter, the response is given by a sum of products of the filter coefficients, and
the corresponding image pixels in the area spanned by the filter mask.
23. Define averaging filters.
The output of a smoothing, linear spatial filter is the average of the pixels contained in
the neighborhood of the filter mask. These filters are called averaging filters.
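The averaging filter described above can be sketched for a 3x3 mask; borders are left unchanged for brevity, and the image values are illustrative:

```python
def average_3x3(img):
    """3x3 averaging (smoothing) filter applied to interior pixels."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]            # border pixels copied unchanged
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            s = sum(img[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = s // 9               # integer average of the 9 neighbors
    return out

img = [[9, 9, 9],
       [9, 0, 9],
       [9, 9, 9]]
print(average_3x3(img)[1][1])  # (8 * 9 + 0) // 9 = 8
```

The lone dark pixel is pulled toward its neighbors, which is exactly the smoothing (and blurring) effect of averaging.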
24. What is a Median filter?
The median filter replaces the value of a pixel by the median of the gray levels in the
neighborhood of that pixel.
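A sketch of the median filter response at a single interior pixel, using a 3x3 neighborhood and illustrative values (the helper name median_3x3_at is mine):

```python
def median_3x3_at(img, i, j):
    """Median of the 3x3 neighborhood centered at interior pixel (i, j)."""
    neigh = sorted(img[i + di][j + dj]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return neigh[4]  # middle of the 9 sorted values

img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_3x3_at(img, 1, 1))  # the 255 impulse is replaced by 10
```

Unlike averaging, the median rejects the impulse outright instead of smearing it into the neighborhood, which is why median filters excel at salt-and-pepper noise.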
25. What is maximum filter and minimum filter?
The 100th percentile filter is the maximum filter, used for finding the brightest points in
an image. The 0th percentile filter is the minimum filter, used for finding the darkest points in an
image.
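Maximum, minimum, and median filters are all order-statistic filters, differing only in which percentile of the sorted neighborhood they return. A sketch for a 3x3 neighborhood (the helper name percentile_3x3_at is mine):

```python
def percentile_3x3_at(img, i, j, pct):
    """Order-statistic response at (i, j): pct=100 -> max, pct=50 -> median, pct=0 -> min."""
    neigh = sorted(img[i + di][j + dj]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return neigh[round(pct / 100 * (len(neigh) - 1))]

img = [[3, 7, 1],
       [9, 5, 2],
       [8, 4, 6]]
print(percentile_3x3_at(img, 1, 1, 100),  # brightest neighbor
      percentile_3x3_at(img, 1, 1, 0))    # darkest neighbor
```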
26. Define high boost filter.
A high-boost-filtered image is defined as
HBF = A × (original image) − LPF
    = (A − 1) × (original image) + (original image − LPF)
HBF = (A − 1) × (original image) + HPF
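The high-boost identity above can be checked numerically: subtracting a low-pass (blurred) version from A times the original is the same as adding a high-pass residual to (A − 1) times the original. A sketch with illustrative pixel values:

```python
def high_boost(original, blurred, A):
    """HBF = A * original - LPF, equivalently (A-1) * original + (original - blurred)."""
    return [[A * o - b for o, b in zip(row_o, row_b)]
            for row_o, row_b in zip(original, blurred)]

orig = [[10, 50],
        [50, 10]]
lpf  = [[30, 30],   # illustrative low-pass (blurred) version of orig
        [30, 30]]
print(high_boost(orig, lpf, 2))  # A = 2: edges boosted relative to the blur
print(high_boost(orig, lpf, 1))  # A = 1 reduces to a plain high-pass result
```

With A > 1 the original image content is partially restored on top of the sharpening, which is the point of high boost over plain high-pass filtering.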
27. State the condition of transformation function s=T(r).
T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1.
0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
UNIT 3
SEGMENTATION AND EDGE DETECTION
1. What is segmentation?
The first step in image analysis is to segment the image. Segmentation subdivides an
image into its constituent parts or objects.
2. Write the applications of segmentation.
The applications of segmentation are,
Detection of isolated points.
Detection of lines and edges in an image.
3. What are the three types of discontinuity in digital image?
Three types of discontinuity in digital image are points, lines and edges.
4. How the discontinuity is detected in an image using segmentation?
The steps used to detect the discontinuity in an image using segmentation are
Compute the sum of the products of the mask coefficients with the gray levels contained
in the region encompassed by the mask.
The response of the mask at any point in the image is R = w1z1 + w2z2 + w3z3 +
… + w9z9,
where zi is the gray level of the pixel associated with mask coefficient wi.
The response of the mask is defined with respect to its center location.
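The steps above amount to a dot product between the mask coefficients and the neighborhood gray levels. A sketch using a standard point-detection mask (all −1 with 8 at the center), whose response is zero on a flat region and large at an isolated point:

```python
def mask_response(weights, grays):
    """R = w1*z1 + w2*z2 + ... + w9*z9 for a 3x3 mask flattened to 9 values."""
    return sum(w * z for w, z in zip(weights, grays))

point_mask = [-1, -1, -1,
              -1,  8, -1,
              -1, -1, -1]
uniform  = [5] * 9                            # flat region
isolated = [5, 5, 5, 5, 200, 5, 5, 5, 5]      # isolated bright point at center
print(mask_response(point_mask, uniform))     # 0 on a flat region
print(mask_response(point_mask, isolated))    # large response at the point
```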
5. Why edge detection is most common approach for detecting discontinuities?
The isolated points and thin lines are not frequent occurrences in most practical
applications, so edge detection is mostly preferred in detection of discontinuities.
6. How the derivatives are obtained in edge detection during formulation?
The first derivative at any point in an image is obtained by using the magnitude of the
gradient at that point. Similarly the second derivatives are obtained by using the Laplacian.
7. Write about linking edge points.
The approach for linking edge points is to analyze the characteristics of pixels in a small
neighborhood (3x3 or 5x5) about every point (x, y) in an image that has undergone edge detection.
All points that are similar are linked, forming a boundary of pixels that share some common
properties.
8. What is the advantage of Sobel operators?
Sobel operators have the advantage of providing both a differencing and a smoothing
effect. Because derivatives enhance noise, the smoothing effect is a particularly attractive feature
of the Sobel operators.
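The Sobel operators can be sketched as two 3x3 difference masks whose responses are combined into an approximate gradient magnitude. The |Gx| + |Gy| combination below is the common absolute-value approximation; image values are illustrative:

```python
def sobel_at(img, i, j):
    """|Gx| + |Gy| approximation to the gradient magnitude at interior pixel (i, j)."""
    gx = (img[i+1][j-1] + 2*img[i+1][j] + img[i+1][j+1]
          - img[i-1][j-1] - 2*img[i-1][j] - img[i-1][j+1])   # row difference
    gy = (img[i-1][j+1] + 2*img[i][j+1] + img[i+1][j+1]
          - img[i-1][j-1] - 2*img[i][j-1] - img[i+1][j-1])   # column difference
    return abs(gx) + abs(gy)

# A vertical step edge between columns 1 and 2.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
print(sobel_at(img, 1, 1), sobel_at(img, 1, 2))  # strong response on both sides of the edge
```

The center weight of 2 in each mask is what provides the smoothing along the edge direction.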
9. What is pattern?
Pattern is a quantitative or structural description of an object or some other entity of
interest in an image. It is formed by one or more descriptors.
10. What is pattern class?
It is a family of patterns that share some common properties. Pattern classes are denoted
w1, w2, …, wM, where M is the number of classes.
11. What is pattern recognition?
It involves the techniques for assigning patterns to their respective classes
automatically, with as little human intervention as possible.
12. What are the three principal pattern arrangements?
The three principal pattern arrangements are vectors, strings, and trees. Pattern vectors are
represented by bold lowercase letters such as x, y, z, in the form x = [x1,
x2, …, xn]. Each component xi represents the ith descriptor, and n is the number of such
descriptors.
13. What is edge?
An edge is a set of connected pixels that lie on the boundary between two regions. Edges
are more closely modeled as having a ramp like profile. The slope of the ramp is inversely
proportional to the degree of blurring in the edge.
14. What is meant by object point and background point?
To extract the objects from the background, select a threshold T that separates these
modes. Then any point (x, y) for which f(x, y) > T is called an object point; otherwise the point is
called a background point.
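The object/background rule f(x, y) > T can be sketched directly; the function name and image values are illustrative:

```python
def threshold(img, T):
    """Label object points (f(x, y) > T) as 1 and background points as 0."""
    return [[1 if p > T else 0 for p in row] for row in img]

img = [[12, 200],
       [180, 30]]
print(threshold(img, 100))  # bright pixels become object points
```

Choosing T in the valley between the two histogram modes is what makes the separation clean.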
15. Define region growing.
Region growing is a procedure that groups pixels or subregions into larger regions based
on predefined criteria. The basic approach is to start with a set of seed points and grow
regions from them by appending to each seed those neighboring pixels that have properties similar
to the seed.
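The region-growing procedure above can be sketched as a breadth-first expansion from a seed, using 4-connectivity and a gray-level similarity threshold as the (illustrative) predefined criterion:

```python
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a region from `seed`, adding 4-connected pixels within `thresh` of the seed value."""
    rows, cols = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        i, j = frontier.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            ni, nj = i + di, j + dj
            if (0 <= ni < rows and 0 <= nj < cols
                    and (ni, nj) not in region
                    and abs(img[ni][nj] - seed_val) <= thresh):
                region.add((ni, nj))
                frontier.append((ni, nj))
    return region

img = [[10, 11, 90],
       [12, 10, 91],
       [90, 92, 93]]
print(len(region_grow(img, (0, 0), 5)))  # the four similar pixels in the top-left
```

Comparing against the seed value keeps the criterion simple; comparing against the running region mean is a common variant.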
16. Define chain codes.
Chain codes are used to represent a boundary by a connected sequence of straight-line
segments of specified length and direction. Typically this representation is based on 4- or
8-connectivity of the segments. The direction of each segment is coded using a numbering scheme.
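Encoding a boundary as an 8-directional chain code can be sketched as follows, using the common numbering scheme where direction 0 points right and directions increase counterclockwise (the boundary points are illustrative):

```python
# Direction k corresponds to the (row, col) offset DIRS[k]:
# 0=right, 1=up-right, 2=up, 3=up-left, 4=left, 5=down-left, 6=down, 7=down-right.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Chain code of a boundary given as a sequence of adjacent pixel coordinates."""
    return [DIRS.index((i2 - i1, j2 - j1))
            for (i1, j1), (i2, j2) in zip(points, points[1:])]

# Unit square traversed clockwise, back to the start.
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(chain_code(square))
```

The code is compact but, as the next question notes, sensitive to noise along the boundary.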
17. What are the demerits of chain code?
The demerits of chain code are,
The resulting chain code tends to be quite long.
Any small disturbance along the boundary due to noise causes changes in the code
that may not be related to the shape of the boundary.
18. What is polygonal approximation method?
Polygonal approximation is an image representation approach in which a digital
boundary can be approximated with arbitrary accuracy by a polygon. For a closed curve the
approximation is exact when the number of segments in polygon is equal to the number of points
in the boundary so that each pair of adjacent points defines a segment in the polygon.
19. Specify the various polygonal approximation methods.
The various polygonal approximation methods are
Minimum perimeter polygons.
Merging techniques.
Splitting techniques.
20. Name few boundary descriptors.
Simple descriptors.
Shape descriptors.
Fourier descriptors.
21. Define length of a boundary.
The length of a boundary is the number of pixels along the boundary. For example, for a
chain-coded curve with unit spacing in both directions, the number of vertical and horizontal
components plus √2 times the number of diagonal components gives its exact length.
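The length formula above translates directly into code for an 8-directional chain code, where even code values are horizontal/vertical moves and odd values are diagonal moves:

```python
import math

def boundary_length(code):
    """Exact length of an 8-directional chain-coded curve with unit spacing."""
    diagonals = sum(1 for c in code if c % 2 == 1)   # odd codes are diagonal moves
    return (len(code) - diagonals) + math.sqrt(2) * diagonals

print(boundary_length([0, 1, 2]))  # two unit moves plus one diagonal
```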
22. Define shape numbers.
The shape number of a boundary is defined as the first difference of its chain code of
smallest magnitude. The order n of a shape number is the number of digits in its representation.
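A sketch of computing a shape number from an 8-directional chain code, under the assumption of a closed boundary (circular first difference) with the smallest magnitude found by trying every rotation:

```python
def shape_number(code):
    """First difference of the chain code, normalized to its smallest-magnitude rotation."""
    n = len(code)
    diff = [(code[i] - code[i - 1]) % 8 for i in range(n)]   # circular first difference
    return min(diff[i:] + diff[:i] for i in range(n))        # smallest-magnitude rotation

print(shape_number([0, 6, 4, 2]))  # square: the same turn at every corner
```

Because the first difference counts direction changes, the shape number is invariant to the starting point and, with the rotation normalization, to rotation by multiples of 45 degrees.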
23. Name few measures used as simple descriptors in region descriptors.
Area.
Perimeter.
Mean and median gray levels.
Minimum and maximum of gray levels.
UNIT 4
UNIT 5
COLOR IMAGES AND IMAGE COMPRESSION
22. Define B2 code.
Each code word is made up of a continuation bit c and information bits, which are binary
numbers. This is called a B code; it is called a B2 code when two information bits are used per
continuation bit.
23. Define arithmetic coding.
In arithmetic coding, a one-to-one correspondence between source symbols and code
words does not exist; instead, a single arithmetic code word is assigned to an entire sequence of
source symbols. The code word defines an interval of real numbers between 0 and 1.
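The interval-narrowing step of arithmetic coding can be sketched as follows: each symbol shrinks the current interval to the subinterval reserved for that symbol, in proportion to its probability. The message and probabilities are illustrative:

```python
def arithmetic_interval(message, probs):
    """Return the final [low, high) interval for `message` under symbol probabilities `probs`."""
    # Assign each symbol a cumulative starting point in [0, 1), in dict order.
    starts, s = {}, 0.0
    for sym, p in probs.items():
        starts[sym] = s
        s += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        high = low + span * (starts[sym] + probs[sym])   # shrink to the symbol's subinterval
        low = low + span * starts[sym]
    return low, high

lo, hi = arithmetic_interval("aab", {"a": 0.6, "b": 0.4})
print(lo, hi)  # any number in this interval decodes back to "aab"
```

Longer or less probable messages yield narrower intervals, so more bits are needed to specify a number inside them, which is how the code approaches the source entropy.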
24. What is bit plane decomposition?
An effective technique for reducing an image's inter-pixel redundancies is to process the
image's bit planes individually. This technique is based on the concept of decomposing multilevel
images into a series of binary images and compressing each binary image via one of several
well-known binary compression methods.
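Bit-plane decomposition can be sketched by extracting bit k of every pixel into its own binary image; pixel values and the 8-bit assumption are illustrative:

```python
def bit_planes(pixels, bits=8):
    """Decompose pixel values into `bits` binary planes; plane k holds bit k (0 = LSB)."""
    return [[(p >> k) & 1 for p in pixels] for k in range(bits)]

planes = bit_planes([0, 1, 128, 255])
print(planes[7])  # most significant bit plane
print(planes[0])  # least significant bit plane
```

The high-order planes tend to be visually structured and compress well, while the low-order planes look noise-like, which is why each plane is compressed separately.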