
EX. NO:

DISPLAY OF GRAYSCALE IMAGES

DATE:

AIM:
To write a MATLAB program
a) To read and display an image.
b) To perform data type conversion to 8 bit, 16 bit, 32 bit and double data
types.
c) To perform file format conversion in jpg, jpeg, png, bmp and tiff formats.
d) To perform arithmetic and logical operations on different images.
SOFTWARE USED:

MATLAB Version 2013.

THEORY:
Types and classes of images:

Name      Description
Double    Double-precision floating point, values in the approximate range ±10^308 (8 bytes per element)
Single    Single-precision floating point, values in the approximate range ±10^38 (4 bytes per element)
Uint8     Unsigned 8-bit integer, [0, 255] (1 byte per element)
Uint16    Unsigned 16-bit integer, [0, 65535] (2 bytes per element)
Uint32    Unsigned 32-bit integer, [0, 4294967295] (4 bytes per element)
Char      Characters (2 bytes per element)
Logical   Values are 0 or 1 (1 byte per element)

Image types:
The toolbox supports 4 types of images:
Grayscale
Binary
Indexed
RGB

Grayscale image, also called gray level image, is a data matrix whose values represent
shades of grey.

Binary image: Each pixel assumes one of only two discrete values, 1 or 0. A binary image
is stored as a logical array of 0s and 1s; a numeric array can be converted to binary
using the function logical. The variable name BW conventionally refers to binary images.

Indexed image: It consists of an array and a colourmap matrix. The pixel values in the
array are direct indices into a colourmap. This uses the variable X to refer to the array
and map refers to the colourmap.
The colourmap matrix is an m-by-3 array of class double containing floating point
values in the range [0,1]. Each row of a map specifies red, green and blue components
of a single colour.

RGB images use three numbers for each pixel, allowing possible use of millions of
colours within the image at the cost of requiring three times as much space as grayscale
images do.

Image files and formats:
They are the standardized means of organizing and storing digital images.

a) JPEG/JFIF:
Joint Photographic Experts Group is a compression method.
JPEG compressed images are actually stored in JFIF (JPEG File
Interchange Format).
JPEG is in most cases a lossy compression method.
The JPEG/JFIF filename extension is JPG or JPEG.
b) TIFF (Tagged Image File Formats):
It is a flexible format that may save 8 bits or 16 bits per colour
channel (RGB), for 24 bits or 48 bits in total.
Filename extension: TIFF or TIF
E.g.: Digital Camera Images
c) GIF (Graphics Interchange Format):
It is limited to an 8-bit palette, i.e. 256 colours. This makes the GIF format
suitable for simple diagrams, shapes, etc.
Supports animation
Uses lossless compression, which is most effective for images with large
areas of uniform colour
d) PNG(Portable Network Graphics):
Supports true colour (16 million colours)
The lossless PNG format is best suited for editing pictures, while lossy
formats such as JPEG are better suited for final distribution.
e) BMP (Bitmap File Format):
Handles graphics files within MS Windows OS
BMP files are uncompressed, hence they are large

SYNTAXES USED:
IMREAD
IMREAD reads image from graphics file.
A = imread('filename.fmt') reads a grayscale or colour image from the file specified by
the string filename. The string fmt specifies the format of the file by its standard file
extension. The return value A is an array containing the image data. If the file contains a
grayscale image, A is an M-by-N array. If the file contains a truecolor image, A is an
M-by-N-by-3 array. For TIFF files containing colour images that use the CMYK colour
space, A is an M-by-N-by-4 array.
IMSHOW
IMSHOW displays an image in a Handle Graphics figure.
imshow(I,[low high]) displays the grayscale image I, specifying the display range for I
as [low high]. The value low (and any value less than low) displays as black; the value
high (and any value greater than high) displays as white. Values in between display as
intermediate shades of gray, using the default number of gray levels. If you use an empty
matrix ([]) for [low high], imshow uses [min(I(:)) max(I(:))]; that is, the minimum value
in I is displayed as black, and the maximum value is displayed as white. imshow(RGB)
displays the truecolor image RGB. imshow(BW) displays the binary image BW, showing
pixels with value 0 as black and pixels with value 1 as white.
IMRESIZE
It resizes image.
B = imresize(A,scale) returns an image that is scale times the size of A, where A is a
grayscale, RGB, or binary image. B = imresize(A,[numrows numcols]) resizes the image
so that it has the specified number of rows and columns. Either numrows or numcols
may be NaN, in which case imresize computes the other dimension automatically in
order to preserve the image aspect ratio.
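As an illustrative sketch (in Python, outside the manual's MATLAB listings), the NaN case above amounts to solving for the missing dimension from the input aspect ratio; resize_dims is a hypothetical helper, not a toolbox function:

```python
def resize_dims(in_rows, in_cols, num_rows=None, num_cols=None):
    """Given one target dimension (None plays the role of NaN), compute
    the other so the input image's aspect ratio is preserved."""
    if num_rows is None:
        num_rows = round(num_cols * in_rows / in_cols)
    elif num_cols is None:
        num_cols = round(num_rows * in_cols / in_rows)
    return num_rows, num_cols

# A 256x512 image resized to 128 rows keeps the 1:2 aspect ratio:
print(resize_dims(256, 512, num_rows=128))  # (128, 256)
```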
IMFINFO
IMFINFO displays information about graphics file.
a=imfinfo(filename.fmt) returns a structure whose fields contain information about an
image in a graphics file. It displays the information about file size, height, width,
resolution, image type, etc.
RGB2GRAY
Converts RGB image or colormap to grayscale.
rgb2gray converts RGB images to grayscale by eliminating the hue and saturation
information while retaining the luminance.
IMADD
Adds one image from another or adds constant to an image.

Z=imadd(X,Y) adds each element in array X with the corresponding element in array Y
and returns the sum in the corresponding element of the output array Z. X and Y are
real, non-sparse numeric arrays with the same size and class, or Y is a scalar double. Z

has the same size and class as X, unless X is logical, in which case Z is double. If X and Y
are integer arrays, elements in the output that exceed the range of the integer type are
truncated, and fractional values are rounded.
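The truncation behaviour described above is saturating arithmetic: for uint8 inputs, sums above 255 clip to 255 (and, for imsubtract, differences below 0 clip to 0). A minimal Python sketch of the idea, outside the manual's MATLAB code (imadd_u8 is a hypothetical name):

```python
def imadd_u8(x, y):
    # Saturating uint8 addition: sums that exceed 255 are truncated
    # to 255, mirroring imadd's behaviour for integer arrays.
    return [min(a + b, 255) for a, b in zip(x, y)]

print(imadd_u8([250, 100, 5], [10, 100, 5]))  # [255, 200, 10]
```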

IMSUBTRACT
Subtract one image from another or subtract constant from image

Z = imsubtract(X,Y) subtracts each element in array Y from the corresponding element in
array X and returns the difference in the corresponding element of the output array Z. X
and Y are real, non-sparse numeric arrays of the same size and class, or Y is a double
scalar. The array returned, Z, has the same size and class as X unless X is logical, in which
case Z is double. If X is an integer array, elements of the output that exceed the range of
the integer type are truncated, and fractional values are rounded.

IMMULTIPLY
Multiplication of two images or multiplication of an image by a constant

Z = immultiply(X,Y) multiplies each element in array X by the corresponding element in
array Y and returns the product in the corresponding element of the output array Z. If X
and Y are real numeric arrays with the same size and class, then Z has the same size and
class as X. If X is a numeric array and Y is a scalar double, then Z has the same size and
class as X. If X is logical and Y is numeric, then Z has the same size and class as Y. If X is
numeric and Y is logical, then Z has the same size and class as X. immultiply computes
each element of Z individually in double-precision floating point. If X is an integer array,
then elements of Z exceeding the range of the integer type are truncated, and fractional
values are rounded.
If X and Y are numeric arrays of the same size and class, you can use the expression X.*Y
instead of immultiply.

IMDIVIDE
Division of the image by a constant value

Z = imdivide(X,Y) divides each element in the array X by the corresponding element in
array Y and returns the result in the corresponding element of the output array Z. X and
Y are real, non-sparse numeric arrays with the same size and class, or Y can be a scalar
double. Z has the same size and class as X and Y, unless X is logical, in which case Z is
double. If X is an integer array, elements in the output that exceed the range of integer
type are truncated, and fractional values are rounded. If X and Y are numeric arrays of
the same size and class, you can use the expression X./Y instead of imdivide.

LOGICAL OPERATIONS
True or false (Boolean) conditions

The logical data type represents true or false states using the numbers 1 and 0,
respectively. Certain MATLAB functions and operators return logical values to indicate
fulfillment of a condition. You can use those logical values to index into an array or
execute conditional code.
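The idea of using logical values to index into an array can be sketched in Python (illustrative only; MATLAB would write img(img >= 128) directly):

```python
# Build a logical mask from a condition, then use it to select pixels.
img = [12, 200, 55, 130, 255]
mask = [v >= 128 for v in img]           # one True/False per pixel
bright = [v for v, m in zip(img, mask) if m]
print(bright)  # [200, 130, 255]
```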

FUNCTIONS

bitand
Bit-wise AND
bitcmp
Bit-wise complement (NOT)
bitor
Bit-wise OR
bitxor
Bit-wise exclusive-OR

TYPE AND FILE CONVERSION


clear;
clc;
img1=imread('rice.png');
subplot(221);
imshow(img1);
title('uint8');
whos('img1');
uint16=im2uint16(img1);
subplot(222);
imshow(uint16);
title('uint16');
whos('uint16');
double=im2double(img1);
subplot(223);
imshow(double);
title('double');
whos('double');
imwrite(img1,'convert.tif');
tif=imread('convert.tif');
subplot(224);
imshow(tif);
title('Tif');
whos('tif');
inf=imfinfo('rice.png')
info=imfinfo('convert.tif')

ARITHMETIC OPERATION:
img2=imread('cameraman.tif');
figure;
subplot(331);
imshow(img2);
title('Camera Man');
whos('img2');
subplot(332);
imshow(img1);
title('Rice');
add=imadd(img1,img2);
subplot(333);
imshow(add);
title('Added');

whos('add');
sub=imsubtract(img1,img2);
subplot(334);
imshow(sub);
title('Subtracted');
whos('sub');
div1=imdivide(img1,2);
subplot(335);
imshow(div1);
title('Divided by 2');
whos('div1');
div2=imdivide(img1,0.2);
subplot(336);
imshow(div2);
title('Divided by 0.2');
whos('div2');
mul1=immultiply(img1,2);
subplot(337);
imshow(mul1);
title('Multiply by 2');
whos('mul1');
mul2=immultiply(img1,0.2);
subplot(338);
imshow(mul2);
title('Multiply by 0.2');
whos('mul2');

LOGICAL OPERATION:
im1=imread('rice.png');
im2=imread('coins.png');
im3=imresize(im2,[256 256]);
or=bitor(im1,im3);
subplot(233);
imshow(or);
title('OR');
whos('or*');
and=bitand(im1,im3);
subplot(234);
imshow(and);
title('AND');
whos('and*');

xor=bitxor(im1,im3);
subplot(235);
imshow(xor);
title('XOR');
whos('xor*');
subplot(231);
imshow(im1);
title('Image1');
subplot(232);
imshow(im3);
title('Image2');

OBSERVATION:
Name Size Bytes Class Attributes

img1 256x256 65536 uint8

Name Size Bytes Class Attributes

uint16 256x256 131072 uint16

Name Size Bytes Class Attributes

tif 256x256 65536 uint8


inf =

Filename: [1x62 char]
FileModDate: '26-Jan-2003 01:03:06'
FileSize: 44607
Format: 'png'
FormatVersion: []
Width: 256
Height: 256
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: [137 80 78 71 13 10 26 10]
Colormap: []
Histogram: []
InterlaceType: 'none'
Transparency: 'none'
SimpleTransparencyData: []
BackgroundColor: []
RenderingIntent: []
Chromaticities: []
Gamma: []
XResolution: []
YResolution: []
ResolutionUnit: []
XOffset: []
YOffset: []
OffsetUnit: []

SignificantBits: []
ImageModTime: '27 Dec 2002 19:57:12 +0000'
Title: []
Author: []
Description: 'Rice grains'
Copyright: 'Copyright The MathWorks, Inc.'
CreationTime: []
Software: []
Disclaimer: []
Warning: []
Source: []
Comment: []
OtherText: []


info = Filename: '\\psf\Home\Documents\MATLAB\convert.tif'
FileModDate: '17-Sep-2015 10:29:31'
FileSize: 66250
Format: 'tif'
FormatVersion: []
Width: 256
Height: 256
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: [73 73 42 0]
ByteOrder: 'little-endian'
NewSubFileType: 0
BitsPerSample: 8
Compression: 'PackBits'
PhotometricInterpretation: 'BlackIsZero'
StripOffsets: [8 8265 16474 24733 32987 41198 49461 57732]
SamplesPerPixel: 1
RowsPerStrip: 32
StripByteCounts: [8257 8209 8259 8254 8211 8263 8271 8263]
XResolution: 72
YResolution: 72
ResolutionUnit: 'Inch'
Colormap: []
PlanarConfiguration: 'Chunky'
TileWidth: []
TileLength: []

TileOffsets: []
TileByteCounts: []
Orientation: 1
FillOrder: 1
GrayResponseUnit: 0.0100
MaxSampleValue: 255
MinSampleValue: 0
Thresholding: 1
Offset: 65996



Name Size Bytes Class Attributes

img2 256x256 65536 uint8

Name Size Bytes Class Attributes

add 256x256 65536 uint8

Name Size Bytes Class Attributes

sub 256x256 65536 uint8

Name Size Bytes Class Attributes

div1 256x256 65536 uint8

Name Size Bytes Class Attributes

div2 256x256 65536 uint8

Name Size Bytes Class Attributes

mul1 256x256 65536 uint8

Name Size Bytes Class Attributes

mul2 256x256 65536 uint8

Name Size Bytes Class Attributes


or 256x256 65536 uint8

Name Size Bytes Class Attributes

and 256x256 65536 uint8






OUTPUT:
FILE CONVERSION

ARITHMETIC OPERATION




LOGICAL OPERATION:

















EX.NO:

MASKING AND THRESHOLDING

DATE:

AIM:
To write a MATLAB program
a) To read and display an image.
b) To create a grayscale image and perform masking operation
c) To perform thresholding operation.

SOFTWARE USED:

MATLAB Version 2013.
THEORY
THRESHOLDING
Thresholding is a method of image segmentation based on the intensity histogram of
the image. Suppose an image f(x,y) consists of a light object on a dark background, in
such a way that object and background pixels have intensity levels grouped into two
dominant modes. The object is then easy to extract by selecting a threshold T that
separates these modes: any point (x,y) for which f(x,y) > T is called an object point;
otherwise the point is called a background point. The thresholded image g(x,y) is
defined as

g(x,y) = 1 if f(x,y) > T
g(x,y) = 0 if f(x,y) ≤ T


[Figure: gray-level histograms illustrating a single threshold and multiple thresholds]


Thresholding may be viewed as an operation involving a function T of the form

T = T(x, y, p(x,y), f(x,y))

where
a. f(x,y) = gray level of the point (x,y)
b. p(x,y) = some local property of the point (x,y)
c. (x,y) = the spatial coordinates of the point


When T depends only on f(x,y), i.e. only on the gray level values, the threshold is called a
Global Threshold.
When T depends on both f(x,y) and p(x,y), i.e. the gray level values and the local
properties, the threshold is called a Local Threshold.
When T depends on f(x,y), p(x,y) and (x,y), i.e. the gray level values, the local properties
as well as the spatial coordinates of the point, the threshold is called a Dynamic or
Adaptive Threshold.
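The global-threshold definition of g(x,y) above is simple enough to sketch directly (illustrative Python, not the manual's MATLAB; in MATLAB the same thing is the expression a > T):

```python
def threshold(img, t):
    # g(x,y) = 1 where f(x,y) > T (object point), else 0 (background point)
    return [[1 if v > t else 0 for v in row] for row in img]

img = [[10, 200], [180, 40]]
print(threshold(img, 99))  # [[0, 1], [1, 0]]
```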

SYNTAXES USED:
IMREAD
IMSHOW
IMPIXELREGION
impixelregion creates a Pixel Region tool associated with the image displayed in the
current figure, called the target image. The Pixel Region tool opens a separate figure
window containing an extreme close-up view of a small region of pixels in the target
image. The Pixel Region rectangle defines the area of the target image that is displayed
in the Pixel Region tool. You can move this rectangle over the target image using the
mouse to view different regions. To get a closer view of the pixels displayed in the tool,
use the zoom buttons on the Pixel Region tool toolbar or change the size of the Pixel
Region rectangle using the mouse.
MAT2GRAY
I = mat2gray(A, [amin amax]) converts the matrix A to the intensity image I. The
returned matrix I contains values in the range 0.0 (black) to 1.0 (full intensity or
white). amin and amax are the values in A that correspond to 0.0 and 1.0 in I. Values less
than amin become 0.0, and values greater than amax become 1.0.
I = mat2gray(A) sets the values of amin and amax to the minimum and maximum values
in A.
gpuarrayI = mat2gray(gpuarrayA,___) performs the operation on a GPU.
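The normalization mat2gray performs is a linear rescale of [amin, amax] onto [0, 1] with clipping. A self-contained Python sketch of that mapping (illustrative; mat2gray here is a re-implementation, not the toolbox function):

```python
def mat2gray(a, amin=None, amax=None):
    # Linearly map [amin, amax] to [0.0, 1.0]; values outside the
    # range clip to 0.0 or 1.0, as in MATLAB's mat2gray.
    flat = [v for row in a for v in row]
    if amin is None: amin = min(flat)
    if amax is None: amax = max(flat)
    def clip01(v):
        return min(max((v - amin) / (amax - amin), 0.0), 1.0)
    return [[clip01(v) for v in row] for row in a]

out = mat2gray([[0, 128], [255, 64]])
print([[round(v, 3) for v in row] for row in out])  # [[0.0, 0.502], [1.0, 0.251]]
```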










MASKING OF IMAGE
co=imread('rice.png');
co1=im2double(co);
subplot(221);
imshow(co1);
title('Rice');
whos('co1*');
ma1=zeros(256,256);          % zeros already returns class double
ma1(107:140,16:28)=1;        % rectangular window of ones
subplot(222);
imshow(ma1);
title('Mask');
fi=immultiply(co1,ma1);      % keep only the masked region
subplot(223);
imshow(fi);
title('Masked Image');
THRESHOLDING
a=imread('coins.png');
subplot(231);
imshow(a);
title('Coins');
subplot(232);
imhist(a);
title('Histogram');
b1=a>99;
subplot(233);
imshow(b1);
title('Using Threshold Value');
BW=im2bw(a,0.4667);
subplot(234);
imshow(BW);
title('Using Histogram');
level=graythresh(a);
BW1=im2bw(a,level);
subplot(235);
imshow(BW1);
title('Using Greythresh');
display(level);

OBSERVATION:
MASKING


THRESHOLDING

EX.NO:

ADDITION OF NOISES AND FILTERING
DATE:

AIM:
To write a MATLAB program
a) To add the following noises to an image:
Salt & pepper
Speckle
Poisson
Gaussian
b) To filter an image using the following types of filter:
Median
Weiner
Gaussian
Laplacian

SOFTWARE REQUIRED:
MATLAB Version 2013.

SYNTAX USED:
IMNOISE
Add noise to image.
J = IMNOISE(I,TYPE,...) adds noise of the given TYPE to the intensity image I.
'gaussian'        Gaussian white noise with constant mean and variance
'poisson'         Poisson noise
'salt & pepper'   On and off pixels
'speckle'         Multiplicative noise

Types:
Gaussian: Gaussian white noise with constant mean and variance.
J = IMNOISE(I,'gaussian',M,V) adds Gaussian white noise of mean M and variance
V to the image I. When unspecified, M and V default to 0 and 0.01, respectively.
Salt And pepper: "On and Off" pixels.

J = IMNOISE(I,'salt & pepper',D) adds "salt and pepper" noise to the image I,
where D is the noise density. This affects approximately D*numel(I) pixels. The
default for D is 0.05.
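A sketch of how a noise density D produces roughly D*numel(I) corrupted pixels (illustrative Python, not the toolbox's implementation; salt_pepper is a hypothetical name):

```python
import random

def salt_pepper(img, d, seed=0):
    # Each pixel is corrupted with probability d; corrupted pixels
    # become 0 (pepper) or 255 (salt) with equal chance, so about
    # d * len(img) pixels are affected on average.
    rng = random.Random(seed)
    out = []
    for v in img:
        if rng.random() < d:
            out.append(255 if rng.random() < 0.5 else 0)
        else:
            out.append(v)
    return out

noisy = salt_pepper([128] * 10000, 0.05)   # roughly 5% of pixels hit
```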
Speckle: Multiplicative noise.
J = IMNOISE(I,'speckle',V) adds multiplicative noise to the image I,using the
equation J = I + n*I, where n is uniformly distributed random noise with mean 0
and variance V. The default for V is 0.04.
Poisson: Poisson noise.
J = IMNOISE(I,'poisson') generates Poisson noise from the data instead of adding
artificial noise to the data. If I is double precision, then input pixel values are
interpreted as means of Poisson distributions scaled up by 1e12. For example, if
an input pixel has the value 5.5e-12, then the corresponding output pixel will be
generated from a Poisson distribution with mean of 5.5 and then scaled back
down by 1e12. If I is single precision, the scale factor used is 1e6. If I is uint8 or
uint16, then input pixel values are used directly without scaling. For example, if
a pixel in a uint8 input has the value 10, then the corresponding output pixel will
be generated from a Poisson distribution with mean 10.

IMFILTER:
IMFILTER N-D filtering of multidimensional images.
B = IMFILTER(A,H) filters the multidimensional array A with the multidimensional filter
H. A can be logical or it can be a nonsparse numeric array of any class and dimension.
The result, B has the same size and class as A.
In this syntax, H may be any correlation kernel, e.g. one returned by
fspecial('gaussian',...) or fspecial('laplacian',...).

Types of filters
Median: MEDFILT2 2-D median filtering.
B = MEDFILT2(A,[M N]) performs median filtering of the matrix A in two dimensions.
Each output pixel contains the median value in the M-by-N neighborhood around the
corresponding pixel in the input image. MEDFILT2 pads the image with zeros on the
edges, so the median values for the points within [M N]/2 of the edges may appear
distorted.
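The median-filter description above can be sketched directly: each output pixel is the median of its M-by-N neighbourhood, with zero padding at the edges. An illustrative Python re-implementation (not the toolbox code):

```python
import statistics

def medfilt2(img, m=3, n=3):
    rows, cols = len(img), len(img[0])
    pr, pc = m // 2, n // 2
    def px(r, c):
        # zero padding at the edges, as MEDFILT2 does
        return img[r][c] if 0 <= r < rows and 0 <= c < cols else 0
    return [[statistics.median(
                px(r + dr, c + dc)
                for dr in range(-pr, pr + 1)
                for dc in range(-pc, pc + 1))
             for c in range(cols)] for r in range(rows)]

# A single impulse ("salt") in a flat region is removed entirely:
img = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(medfilt2(img)[1][1])  # 10
```

This is why the median filter suits salt & pepper noise: an isolated outlier never becomes the neighbourhood median.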
Wiener: WIENER2 2-D adaptive noise-removal filtering.

WIENER2 lowpass filters an intensity image that has been degraded by constant power
additive noise. WIENER2 uses a pixel-wise adaptive Wiener method based on statistics
estimated from a local neighborhood of each pixel. J = WIENER2(I,[M N],NOISE) filters
the image I using pixel-wise adaptive Wiener filtering, using neighborhoods of size M-
by-N to estimate the local image mean and standard deviation. If you omit the [M N]
argument, M and N default to 3. The additive noise (Gaussian white noise) power is
assumed to be NOISE.

FSPECIAL:
FSPECIAL Create predefined 2-D filters.
H = FSPECIAL(TYPE) creates a two-dimensional filter H of the specified type and
returns H as correlation kernel.
Value        Description
'average'    Averaging filter
'gaussian'   Gaussian lowpass filter
'laplacian'  Approximates the two-dimensional Laplacian operator
'prewitt'    Prewitt horizontal edge-emphasizing filter
'sobel'      Sobel horizontal edge-emphasizing filter


Types:
Gaussian: Gaussian lowpass filter.
H = FSPECIAL('gaussian',HSIZE,SIGMA) returns a rotationally symmetric Gaussian
lowpass filter of size HSIZE with standard deviation SIGMA (positive). HSIZE can be a
vector specifying the number of rows and columns in H or a scalar, in which case H is a
square matrix. The default HSIZE is [3 3], the default SIGMA is 0.5.
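How such a kernel is built can be sketched as follows (illustrative Python; gaussian_kernel is a hypothetical name, and fspecial applies the same idea with its own defaults):

```python
import math

def gaussian_kernel(hsize=3, sigma=0.5):
    # Rotationally symmetric Gaussian sampled on an hsize-by-hsize
    # grid, then normalized so the weights sum to 1.
    h = hsize // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-h, h + 1)] for y in range(-h, h + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

k = gaussian_kernel()            # defaults match fspecial: 3x3, sigma 0.5
print(round(sum(map(sum, k)), 6))  # 1.0
```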
Laplacian: filter approximating the 2-D Laplacian operator.
H = FSPECIAL('laplacian',ALPHA) returns a 3-by-3 filter approximating the shape of the
two-dimensional Laplacian operator. The parameter ALPHA controls the shape of the
Laplacian and must be in the range 0.0 to 1.0. The default ALPHA is 0.2.
Mean(A):
M = mean(A) returns the mean value along the first array dimension of A whose size
does not equal 1.

If A is a vector, mean(A) returns the mean of the elements.
If A is a nonempty, nonvector matrix, mean(A) treats the columns of A as vectors and
returns a row vector whose elements are the mean of each column.
If A is an empty 0-by-0 matrix, mean(A) returns NaN.
If A is a multidimensional array, mean(A) treats the values along the first array
dimension whose size does not equal 1 as vectors and returns an array of row vectors.


ADDITION OF NOISE
img=imread('rice.png');
noi=imnoise(img,'salt & pepper',0.05);
subplot(331);
imshow(img);
title('Rice');
subplot(332);
imshow(noi);
title('Salt and Pepper');
img2=imread('coins.png');
noi2=imnoise(img2,'gaussian',0.07,0.07);
subplot(333);
imshow(img2);
title('Coins');
subplot(334);
imshow(noi2);
title('Gaussian');
noi3=imnoise(img,'poisson');
subplot(335);
imshow(noi3);
title('Poisson');
noi4=imnoise(img,'speckle',0.07);
subplot(336);
imshow(noi4);
title('Speckle');

FILTERING
%speckle noise
a=imread('cameraman.tif');
subplot(331);
imshow(a);
title('original image');
b=imnoise(a,'speckle',0.01);
subplot(332);
imshow(b);
title('speckle noise');
K = filter2(fspecial('average',3),b)/255;
subplot(333);
imshow(K);
title('Average');
L = medfilt2(b,[3 3]);

subplot(334);
imshow(L);
title('Median');
m = wiener2(b,[5 5]);
subplot(335);
imshow(m);
title('Weiner');
n=filter2(fspecial('gaussian',3,0.5),b)/255;
subplot(336);
imshow(n);
title('Gaussian');
o=filter2(fspecial('log',[5 5],0.5),b)/255;
subplot(337);
imshow(o);
title('Log');

%poisson noise
a=imread('cameraman.tif');
subplot(331);
imshow(a);
title('original image');
c=imnoise(a,'poisson');
subplot(332);
imshow(c);
title('poisson noise');
K = filter2(fspecial('average',3),c)/255;
subplot(333);
imshow(K);
title('Average');
L = medfilt2(c,[3 3]);
subplot(334);
imshow(L);
title('Median');
m = wiener2(c,[5 5]);
subplot(335);
imshow(m);
title('Weiner');
n=filter2(fspecial('gaussian',3,0.5),c)/255;
subplot(336);
imshow(n);
title('Gaussian');

o=filter2(fspecial('log',[5 5],0.5),c)/255;
subplot(337);
imshow(o);
title('Log');
%salt&pepper noise
a=imread('cameraman.tif');
subplot(331);
imshow(a);
title('original image');
d=imnoise(a,'salt & pepper',0.02);
subplot(332);
imshow(d);
title('salt&pepper noise');
K = filter2(fspecial('average',3),d)/255;
subplot(333);
imshow(K);
title('Average');
L = medfilt2(d,[3 3]);
subplot(334);
imshow(L);
title('Median');
m = wiener2(d,[5 5]);
subplot(335);
imshow(m);
title('Weiner');
n=filter2(fspecial('gaussian',3,0.5),d)/255;
subplot(336);
imshow(n);
title('Gaussian');
o=filter2(fspecial('log',[5 5],0.5),d)/255;
subplot(337);
imshow(o);
title('Log');

%gaussian noise
a=imread('cameraman.tif');
subplot(331);
imshow(a);
title('original image');
e=imnoise(a,'gaussian',0.001,0.01);
subplot(332);

imshow(e);
title('gaussian noise')
K = filter2(fspecial('average',3),e)/255;
subplot(333);
imshow(K);
title('Average');
L = medfilt2(e,[3 3]);
subplot(334);
imshow(L);
title('Median');
m = wiener2(e,[5 5]);
subplot(335);
imshow(m);
title('Weiner');
n=filter2(fspecial('gaussian',3,0.5),e)/255;
subplot(336);
imshow(n);
title('Gaussian');
o=filter2(fspecial('log',[5 5],0.5),e)/255;
subplot(337);
imshow(o);
title('Log');


OBSERVATION

NOISE


FILTERING OF SPECKLE NOISE

FILTERING OF SALT&PEPPER NOISE



FILTERING OF POISSON NOISE

FILTERING OF GAUSSIAN NOISE

EX.NO:
MORPHOLOGICAL OPERATION
DATE:

AIM:
To write a MATLAB program to perform the following morphological operations on the
given image:
a) Dilation
b) Erosion
c) Opening & closing
SOFTWARE USED:

MATLAB Version 2013.

SYNTAXES USED:
IMREAD
IMSHOW
IMDIVIDE
IMRESIZE
IMFINFO
IMFINFO displays information about graphics file.
a=imfinfo(filename.fmt) returns a structure whose fields contain information about an
image in a graphics file. It displays the info about file size, height, width, resolution,
image type, etc.
RGB2GRAY
Converts RGB image or colormap to grayscale.
rgb2gray converts RGB images to grayscale by eliminating the hue and saturation
information while retaining the luminance
STREL
Creates a structuring element of the desired shape
SE = strel(shape, parameters) creates a structuring element, SE, of the type specified
by shape. This table lists all the supported shapes. Depending on shape, strel can take
additional parameters. See the syntax descriptions that follow for details about creating
each type of structuring element.
SE = strel('ball', R, H, N)
SE = strel('diamond', R)
SE = strel('disk', R, N)
SE = strel('line', LEN, DEG)
SE = strel('octagon', R)
SE = strel('rectangle', MN)
SE = strel('square', W)

IMERODE
IM2 = imerode(IM,SE) erodes the grayscale, binary, or packed binary image IM,
returning the eroded image IM2. The argument SE is a structuring element object or an
array of structuring element objects.
IM2 = imerode(IM,NHOOD) erodes the image IM, where NHOOD is an array of 0's
and 1's that specifies the structuring element neighbourhood.


IMDILATE
IM2 = imdilate(IM,SE) dilates the grayscale, binary, or packed binary image IM, returning
the dilated image, IM2. The argument SE is a structuring element object, or array of
structuring element objects, returned by the strel function. If IM is logical and the
structuring element is flat, imdilate performs binary dilation; otherwise, it performs
grayscale dilation. If SE is an array of structuring element objects, imdilate performs
multiple dilations of the input image, using each structuring element in SE in succession.
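Binary dilation itself is easy to sketch: an output pixel is foreground whenever the structuring element, centred there, overlaps any foreground pixel. An illustrative Python version (not the toolbox implementation; dilate is a hypothetical name):

```python
def dilate(img, se):
    # Binary dilation of a 0/1 image by a 0/1 structuring element.
    rows, cols = len(img), len(img[0])
    sr, sc = len(se) // 2, len(se[0]) // 2
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # 1 if SE, centred at (r, c), overlaps any foreground pixel
            out[r][c] = int(any(
                se[i][j]
                and 0 <= r + i - sr < rows and 0 <= c + j - sc < cols
                and img[r + i - sr][c + j - sc]
                for i in range(len(se)) for j in range(len(se[0]))))
    return out

# A single pixel dilated by a 3x3 square grows into a 3x3 block:
img = [[0] * 5 for _ in range(5)]; img[2][2] = 1
se = [[1] * 3 for _ in range(3)]
print(sum(map(sum, dilate(img, se))))  # 9
```

Erosion is the dual operation (the element must fit entirely inside the foreground), which is why opening (erode then dilate) removes small specks and closing (dilate then erode) fills small holes.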

IMOPEN
IM2 = imopen(IM,SE) performs morphological opening on the grayscale or binary
image IM with the structuring element SE. The argument SE must be a single structuring
element object, as opposed to an array of objects. The morphological open operation is
erosion followed by a dilation, using the same structuring element for both operations.
IM2 = imopen(IM,NHOOD) performs opening with the structuring
element strel(NHOOD), where NHOOD is an array of 0's and 1's that specifies the
structuring element neighbourhood.

IMCLOSE
IM2 = imclose(IM,SE) performs morphological closing on the gray scale or binary
image IM, returning the closed image, IM2. The structuring element, SE, must be a
single structuring element object, as opposed to an array of objects. The morphological
close operation is a dilation followed by an erosion, using the same structuring element
for both operations.

BWMORPH
BW2 = bwmorph(BW,operation) applies a specific morphological operation to the binary
image BW. BW2 = bwmorph(BW,operation,n) applies the operation n times. n can be Inf,
in which case the operation is repeated until the image no longer changes.




MORPHOLOGICAL OPERATION
DILATION
a = imread('cameraman.tif');
s1 = strel('line',10,0);disp(s1);
b = imdilate(a,s1);
subplot(421);
imshow(a);
title('Original image')
subplot(422)
imshow(b);
title('Dilated image-line 1');
s2 = strel('line',10,90);disp(s2);
c = imdilate(a,s2);
subplot(423)
imshow(c);
title('Dilated image-line 2');
s3 = strel('line',10,45);disp(s3);
d = imdilate(a,s3);
subplot(424)
imshow(d);
title('Dilated image-line 3');
s4 = strel('ball',5,8);disp(s4);
e= imdilate(a,s4);
subplot(425)
imshow(e);
title('Dilated image-ball');
s5 = strel('disk',6,6);disp(s5);
f= imdilate(a,s5);
subplot(426)
imshow(f);
title('Dilated image-disc');
s6 = strel('diamond',6);disp(s6);
g= imdilate(a,s6);
subplot(427)
imshow(g);
title('Dilated image-diamond');

EROSION
a = imread('coins.png');
s1 = strel('line',10,0);disp(s1);
b = imerode(a,s1);

subplot(421);
imshow(a);
title('Original image')
subplot(422)
imshow(b);
title('Eroded image-line 1');
s2 = strel('line',10,90);disp(s2);
c = imerode(a,s2);
subplot(423)
imshow(c);
title('Eroded image-line 2');
s3 = strel('line',10,45);disp(s3);
d = imerode(a,s3);
subplot(424)
imshow(d);
title('Eroded image-line 3');
s4 = strel('ball',5,8);disp(s4);
e= imerode(a,s4);
subplot(425)
imshow(e);
title('Eroded image-ball');
s5 = strel('disk',6,6);disp(s5);
f= imerode(a,s5);
subplot(426)
imshow(f);
title('Eroded image-disc');
s6 = strel('diamond',4);disp(s6);
g= imerode(a,s6);
subplot(427)
imshow(g);
title('Eroded image-diamond');

CLOSED
a = imread('cameraman.tif');
s1 = strel('line',10,0);disp(s1);
b = imclose(a,s1);
subplot(421);
imshow(a);
title('Original image')
subplot(422)
imshow(b);

title('Closed image-line 1');


s2 = strel('line',10,90);disp(s2);
c = imclose(a,s2);
subplot(423)
imshow(c);
title('Closed image-line 2');
s3 = strel('line',10,45);disp(s3);
d = imclose(a,s3);
subplot(424)
imshow(d);
title('Closed image-line 3');
s4 = strel('ball',5,8);disp(s4);
e= imclose(a,s4);
subplot(425)
imshow(e);
title('Closed image-ball');
s5 = strel('disk',6,6);disp(s5);
f= imclose(a,s5);
subplot(426)
imshow(f);
title('Closed image-disc');
s6 = strel('diamond',4);disp(s6);
g= imclose(a,s6);
subplot(427)
imshow(g);
title('Closed image-diamond');

OPENED
a = imread('cameraman.tif');
s1 = strel('line',10,0);disp(s1);
b = imopen(a,s1);
subplot(421);
imshow(a);
title('Original image')
subplot(422)
imshow(b);
title('Opened image-line 1');
s2 = strel('line',10,90);disp(s2);
c = imopen(a,s2);
subplot(423)
imshow(c);

title('Opened image-line 2');


s3 = strel('line',10,45);disp(s3);
d = imopen(a,s3);
subplot(424)
imshow(d);
title('Opened image-line 3');
s4 = strel('ball',5,8);disp(s4);
e= imopen(a,s4);
subplot(425)
imshow(e);
title('Opened image-ball');
s5 = strel('disk',6,6);disp(s5);
f= imopen(a,s5);
subplot(426)
imshow(f);
title('Opened image-disc');
s6 = strel('diamond',4);disp(s6);
g= imopen(a,s6);
subplot(427)
imshow(g);
title('Opened image-diamond');


OBSERVATION
DILATED

ERODED

CLOSED

OPENED

EX.NO:


GRAY LEVEL ENHANCEMENT
DATE:

AIM:
To write a MATLAB program to perform the following enhancements on an image:
a) Brightness and Contrast Adjustment
b) Gamma Transformation/ Power Law Transformation
c) Gray Level Slicing
d) Bit Plane Slicing
e) Logarithmic transformation
f) Contrast stretching

SOFTWARE USED:

MATLAB Version 2013.

THEORY:
Thresholding:
A special case of contrast stretching in which the slopes α = γ = 0 is called clipping.
Thresholding is a special case of clipping where a = b = T and the output becomes binary.

Contrast stretching:
Low contrast images often occur due to poor or non-uniform lighting conditions or
due to non-linearity or the small dynamic range of the imaging sensor. A typical
contrast stretching transformation can be expressed as:

v = α·u,             0 ≤ u < a
v = β·(u − a) + v_a, a ≤ u < b
v = γ·(u − b) + v_b, b ≤ u < L

The slope of the transformation is chosen greater than unity in the region to be
stretched. The parameters a and b can be obtained by examining the histogram:
For a dark-region stretch, α > 1 with a ≈ L/3
For a mid-region stretch, β > 1 with b ≈ 2L/3
For a bright-region stretch, γ > 1

Gray level slicing:


Highlighting a specific range of gray levels in an image. There are two basic themes for
gray level slicing.
One approach is to display a high value for all the gray levels in the range of interest
and a low value for all other gray levels (Binary image, Fig(a))
The second approach is to brighten the desired range of gray levels but preserve the
background and gray-level tonalities in the image.(Fig(b))

Bit plane slicing:


Highlighting the contribution made to the total image appearance by specific bits.
When each pixel, represented by 8 bits, is separated into bit planes ranging from
0 to 7, plane 0 holds the least significant bit and plane 7 the most
significant bit.
0-plane : lower order bits in bytes
7-plane: higher order bits
Separating bit planes is useful for analyzing relative importance played by each
bit. The higher order bits contain majority of visually significant data.
Mapping all pixels between 0 to 127 to one level & 128-255 to another gray
level.
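Extracting a bit plane is a shift-and-mask operation (in MATLAB, bitget does this). An illustrative Python sketch:

```python
def bitplane(img, k):
    # Plane k = 0 is the least significant bit, k = 7 the most significant.
    return [(v >> k) & 1 for v in img]

pixels = [0b10110010, 0b01001101]   # 178, 77
print(bitplane(pixels, 7))  # [1, 0]  -- the MSB plane: 178 >= 128, 77 < 128
print(bitplane(pixels, 0))  # [0, 1]  -- the LSB plane: even, odd
```

Note that the bit-7 plane is exactly the 0-127 vs. 128-255 mapping described above.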



Log transformations:
The general form of the log transformation is
s = c log(1+r)
c: constant
The shape of the log curve shows that this transformation maps a narrow range
of low gray level values in the input image into a wider range of output levels.
The opposite is true for higher levels of input levels. We would use a
transformation of this type to expand the values of dark pixels in an image while
compressing the higher level values.
The opposite is true for inverse log transformation.
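The expansion of dark values can be checked numerically. In this Python sketch (illustrative, not part of the MATLAB exercise) the scaling c = 255/ln(256) is an assumed choice that maps an 8-bit input range back onto [0, 255]:

```python
import math

# Log transform s = c * log(1 + r), with c chosen (an assumption) so that
# the 8-bit input range [0, 255] maps back onto [0, 255].
c = 255 / math.log(256)

def log_transform(r):
    return c * math.log(1 + r)

print(round(log_transform(0)))      # 0: black stays black
print(round(log_transform(255)))    # 255: top of the range is preserved
print(round(log_transform(20)))     # a dark input is pushed well up the range
```

A dark input of 20 lands above the middle of the output range, while the bright end is compressed, exactly as the log curve suggests.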



Power law transformations:
The basic form of power law transformation is

s = c·r^γ

c, γ: positive constants
Power law curves with fractional values of γ map a narrow range of dark input
values into a wider range of output values, with the opposite being true for higher
values of input levels.
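The effect of γ above and below 1 can be seen on a single normalized pixel value. This Python sketch is illustrative (the γ values are arbitrary choices), mirroring what the gamma-transform program below does element-wise:

```python
# Power-law (gamma) transform s = c * r**gamma on a normalized input r in [0, 1].
def gamma_transform(r, c=1.0, gamma=0.4):
    return c * r ** gamma

dark = 0.1
print(gamma_transform(dark, gamma=0.4))   # gamma < 1 brightens dark pixels
print(gamma_transform(dark, gamma=2.5))   # gamma > 1 darkens them further
```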

SYNTAXES USED:

IMREAD
IMSHOW
IMRESIZE
IMFINFO
RGB2GRAY
IMADD
IMADJUST
IMADJUST Adjust image intensity values or colormap.
J = IMADJUST(I) maps the values in intensity image I to new values in J such that 1%
of data is saturated at low and high intensities of I. This increases the contrast of the
output image J. RGB2 = imadjust(RGB1,[.2 .3 0; .6 .7 1],[]).

BITGET
BITGET Get bit.
C = BITGET(A,BIT) returns the value of the bit at position BIT in A. A must be an
unsigned integer or an array of unsigned integers, and BIT must be a number between 1
and the number of bits in the unsigned integer class of A e.g., 32 for UINT32s.
STRETCHLIM
STRETCHLIM Find limits to contrast stretch an image.
LOW_HIGH = STRETCHLIM(I,TOL) returns a pair of gray values that can be used by
IMADJUST to increase the contrast of an image.

GRAY LEVEL ENHANCEMENT


LOG TRANSFORM
clc;
clear;
img=imread('Cameraman.tif');
subplot(221);
imshow(img);
title('Original Image');
con=input('Enter Constant below 1=');
[I,J]=size(img);
for i=1:I
for j=1:J
f=double(img(i,j));
z(i,j)=con.*log10(1+f);
end
end
subplot(222);
imshow(z,[]);
title('Log Transformed below 1');
con1=input('Enter Constant above 1=');
[I,J]=size(img);
for i=1:I
for j=1:J
f=double(img(i,j));
z1(i,j)=con1.*log10(1+f);
end
end
subplot(223);
imshow(z1,[]);
title('Log Transformed above 1');

GAMMA TRANSFORM
clc;
clear;
img=imread('pout.tif');
subplot(221);
imshow(img);
title('Original');
con=input('Enter the Constant ');
dou=im2double(img);
g=input('Enter Gamma Value above 1 ');

[x,y]=size(img);
for i=1:x
for j=1:y
e(i,j)=con*(dou(i,j))^g;
end
end
subplot(222);
imshow(e);
title('Above 1');
g1=input('Enter Gamma Value below 1 ');
for i=1:x
for j=1:y
f(i,j)=con*(dou(i,j))^g1;
end
end
subplot(223);
imshow(f);
title('Below 1');

GRAY SLICING
clear;
clc;
img=imread('Rice.png');
subplot(221);
imshow(img);
title('Original');
[I,J]=size(img);
for i=1:I
for j=1:J
if img(i,j)>=150 && img(i,j)<=210
img (i,j)=250;
else
img (i,j)=20;
end
end
end
subplot(222);
imshow(img);
title('Gray Sliced');

BIT SLICING
clear;
clc;
im=imread('cameraman.tif');
[I,J]=size(im);
for i=1:8
b=bitget(im,i);
subplot(3,3,i);
imshow(b,[]);
end

CONTRAST ENHANCEMENT
img=imread('Rice.png');
adj=imadjust(img,stretchlim(img),[]);
subplot(321);
imshow(img);
title('Original Image');
subplot(323);
imshow(adj);
title('Enhanced Image');
img2=imread('Rice.png');
subplot(325);
imshow(img2);
title('Contrast Image');
imcontrast(gca);
subplot(322);
imhist(img);
subplot(324);
imhist(adj);







OBSERVATION:
LOG TRANSFORM


GAMMA TRANSFORM

BIT SLICING


GRAY SLICING

CONTRAST ENHANCEMENT

EX.NO:
EDGE DETECTION
DATE:

AIM:
To write a MATLAB program to perform the detection of edges by following algorithms:
a. Sobel
b. Canny
c. Roberts
d. Prewitt
SOFTWARE USED:
MATLAB Version 2013
THEORY

Edge detection refers to the process of identifying and locating sharp discontinuities in
an image. The discontinuities are abrupt changes in pixel intensity which characterize
boundaries of objects in a scene. Classical methods of edge detection involve convolving
the image with an operator (a 2-D filter), which is constructed to be sensitive to large
gradients in the image while returning values of zero in uniform regions. The geometry
of the operator determines a characteristic direction in which it is most sensitive to
edges. Operators can be optimized to look for horizontal, vertical, or diagonal edges.
Edge detection is difficult in noisy images, since both the noise and the edges contain
high frequency content. Attempts to reduce the noise result in blurred and distorted
edges.

The various operators used may be as follows:

SOBEL

The operator consists of a pair of 3×3 convolution kernels as shown in Figure 1. One
kernel is simply the other rotated by 90°.

These kernels are designed to respond maximally to edges running vertically and
horizontally relative to the pixel grid, one kernel for each of the two perpendicular
orientations. The kernels can be applied separately to the input image, to produce
separate measurements of the gradient component in each orientation (call these Gx
and Gy). These can then be combined together to find the absolute magnitude of the
gradient at each point and the orientation of that gradient.
The gradient magnitude is given by:

|G| = sqrt(Gx² + Gy²)

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

which is much faster to compute.

The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial
gradient is given by:

θ = arctan(Gy / Gx)
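The two kernels and the magnitude/orientation computation can be checked at a single neighbourhood. This Python sketch (illustrative, separate from the MATLAB programs) applies the standard Sobel kernels to an assumed 3×3 patch containing a vertical step edge:

```python
import math

# The two standard Sobel kernels and the gradient at one 3x3 neighbourhood.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def sobel_at(patch):
    gx = sum(KX[i][j] * patch[i][j] for i in range(3) for j in range(3))
    gy = sum(KY[i][j] * patch[i][j] for i in range(3) for j in range(3))
    mag = math.hypot(gx, gy)          # |G| = sqrt(Gx^2 + Gy^2)
    approx = abs(gx) + abs(gy)        # the faster |Gx| + |Gy| approximation
    theta = math.degrees(math.atan2(gy, gx))
    return gx, gy, mag, approx, theta

# A vertical step edge: dark (0) on the left, bright (255) on the right.
patch = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]
gx, gy, mag, approx, theta = sobel_at(patch)
print(gx, gy, mag, theta)   # strong horizontal gradient, orientation 0 degrees
```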

CANNY

In order to implement the canny edge detector algorithm, it is first necessary to filter
out any noise in the original image before trying to locate and detect any edges. The
Gaussian smoothing can be performed using standard convolution methods. After
smoothing the image and eliminating the noise, the next step is to find the edge
strength by taking the Gradient of the image. The Sobel operator performs a 2-D spatial
gradient measurement on an image. Then, the approximate absolute gradient
magnitude (edge strength) at each point can be found. The magnitude, or edge strength,
of the gradient is then approximated using the formula |G| = |Gx| + |Gy|.
The direction of the edge is computed using the gradient in the x and y directions.
However, an error will be generated when the gradient in the x direction is equal to
zero. Whenever the gradient in the x direction is equal to zero, the edge direction has to
be equal to 90 degrees or 0 degrees, depending on the value of the gradient in the
y direction. If Gy has a value of zero, the edge direction will equal 0 degrees; otherwise
the edge direction will equal 90 degrees. The formula for finding the edge direction is just:

Theta = invtan (Gy / Gx)



After the edge directions are known, non-maximum suppression now has to be applied.
Non-maximum suppression is used to trace along the edge in the edge direction and
suppress any pixel value (sets it equal to 0) that is not considered to be an edge. This will
give a thin line in the output image.
Finally, hysteresis is used as a means of eliminating streaking. Streaking is the breaking
up of an edge contour caused by the operator output fluctuating above and below the
threshold. If a single threshold T1 is applied to an image, and an edge has an average
strength equal to T1, then due to noise there will be instances where the edge dips
below the threshold. Equally, it will also extend above the threshold, making the edge look
like a dashed line. To avoid this, hysteresis uses two thresholds, a high (T1) and a low (T2).
Any pixel in the image that has a value greater than T1 is presumed to be an edge pixel, and is
marked as such immediately. Then, any pixels that are connected to this edge pixel and
that have a value greater than T2 are also selected as edge pixels. If you think of
following an edge, you need a gradient above T1 to start, but you don't stop till you hit a
gradient below T2.
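Hysteresis thresholding can be sketched on a 1-D gradient profile. This Python sketch is illustrative (the thresholds and profile values are assumptions), showing how a contour that dips below the high threshold but stays above the low one is kept unbroken:

```python
# Hysteresis thresholding on a 1-D gradient profile (illustrative values).
# T1 is the high threshold (starts an edge), T2 the low one (keeps following it).
def hysteresis(gradient, t1=100, t2=40):
    edge = [False] * len(gradient)
    for i, g in enumerate(gradient):
        if g > t1:
            edge[i] = True                  # strong pixel: marked immediately
    changed = True
    while changed:                          # grow edges through weak neighbours
        changed = False
        for i, g in enumerate(gradient):
            if not edge[i] and g > t2:
                if (i > 0 and edge[i - 1]) or (i + 1 < len(edge) and edge[i + 1]):
                    edge[i] = True
                    changed = True
    return edge

# A contour that dips below T1 (values 60, 55) but stays above T2 is unbroken:
profile = [10, 120, 60, 55, 110, 10]
print(hysteresis(profile))
```

A single threshold at 100 would have broken this contour into two segments; the two-threshold scheme avoids the streaking described above.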

ROBERTS

The Roberts Cross operator performs a simple, quick to compute, 2-D spatial gradient
measurement on an image. Pixel values at each point in the output represent the
estimated absolute magnitude of the spatial gradient of the input image at that point.
The operator consists of a pair of 2×2 convolution kernels. One kernel is simply the
other rotated by 90°. This is very similar to the Sobel operator.

PREWITT

Prewitt operator is similar to the Sobel operator and is used for detecting vertical and
horizontal edges in images.




SYNTAXES USED:

IMREAD
IMSHOW
SOBEL METHOD
BW = edge(I,'sobel') specifies the Sobel method. The Sobel method finds edges using the
Sobel approximation to the derivative. It returns edges at those points where the
gradient of I is maximum.
PREWITT METHOD
BW = edge(I,'prewitt') specifies the Prewitt method. The Prewitt method finds edges
using the Prewitt approximation to the derivative. It returns edges at those points
where the gradient of I is maximum.

ROBERTS METHOD
BW = edge(I,'roberts') specifies the Roberts method. The Roberts method finds edges
using the Roberts approximation to the derivative. It returns edges at those points
where the gradient of I is maximum.
CANNY METHOD

BW = edge(I,'canny') specifies the Canny method. The Canny method finds edges by
looking for local maxima of the gradient of I. The gradient is calculated using the
derivative of a Gaussian filter. The method uses two thresholds, to detect strong and
weak edges, and includes the weak edges in the output only if they are connected to
strong edges. This method is therefore less likely than the others to be fooled by noise,
and more likely to detect true weak edges.

EDGE DETECTION
img1=imread('cameraman.tif');
img2=edge(img1,'canny');
img3=edge(img1,'log');
img4=edge(img1,'prewitt');
img5=edge(img1,'roberts');
img6=edge(img1,'sobel');
img7=edge(img1,'zerocross');
subplot(421);
imshow(img1);
title('original');
subplot(422);
imshow(img2);
title('canny');
subplot(423);
imshow(img3);
title('log');
subplot(424);
imshow(img4);
title('prewitt');
subplot(425);
imshow(img5);
title('roberts');
subplot(426);
imshow(img6);
title('sobel');
subplot(427);
imshow(img7);
title('zerocross');











OUTPUT:
EDGE DETECTION

EX.NO:


DISCRETE FOURIER TRANSFORMATION
DATE:

AIM:
To write a MATLAB program to
a) To generate the DFT basis image for the given matrix dimension
b) Find the Discrete Fourier Transform of an image.
c) To prove the rotational property & convolution of DFT

SOFTWARE USED:
MATLAB version 2013

THEORY
Fourier Transform
The Fourier transform is an important image processing tool which is used to
decompose an image into its sine and cosine components. The output of the
transformation represents the image in Fourier as Frequency domain while the input
image is the spatial domain equivalent.
Discrete Fourier transform
The DFT is the sampled Fourier Transform and therefore does not contain all
frequencies forming an image, but only a set of samples which is large enough to fully
describe the spatial domain image. The number of frequencies corresponds to the
number of pixels in the spatial domain image, i.e. the image in the spatial and Fourier
domain is of the same size.
For a square image of size N×N, the two-dimensional DFT is given by:

F(k,l) = Σ (a=0 to N−1) Σ (b=0 to N−1) f(a,b) · e^(−i2π(ka/N + lb/N))

where f(a,b) is the image in the spatial domain and the exponential term is the basis
function corresponding to each point F(k,l) in the Fourier space. The equation can be
interpreted as: the value of each point F(k,l) is obtained by multiplying the spatial image
with the corresponding base function and summing the result.

The basis functions are sine and cosine waves with increasing
frequencies, i.e. F(0,0) represents the DC-component of the image which corresponds to
the average brightness and F(N-1,N-1) represents the highest frequency.
The inverse Fourier transform is given by:

f(a,b) = (1/N²) Σ (k=0 to N−1) Σ (l=0 to N−1) F(k,l) · e^(i2π(ka/N + lb/N))

Note the 1/N² normalization term in the inverse transformation. This normalization is
sometimes applied to the forward transform instead of the inverse transform, but it
should not be used for both.
To obtain the result for the above equations, a double sum has to be calculated for each
image point. However, because the Fourier transform is separable, it can be written as

F(k,l) = Σ (b=0 to N−1) P(k,b) · e^(−i2πlb/N)

where

P(k,b) = Σ (a=0 to N−1) f(a,b) · e^(−i2πka/N)

Using these two formulas, the spatial domain image is first transformed into an
intermediate image using N one-dimensional Fourier transforms. This intermediate
image is then transformed into the final image, again using N one-dimensional Fourier
transforms. Expressing the two-dimensional Fourier transform in terms of a series
of 2N one-dimensional transforms decreases the number of required computations.
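The separability property can be verified on a tiny matrix. This Python sketch (illustrative, separate from the MATLAB programs) computes the 2-D DFT as 1-D DFTs over the rows followed by 1-D DFTs over the columns:

```python
import cmath

# Naive 1-D DFT of a list of samples.
def dft1(v):
    N = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# 2-D DFT as N row-wise 1-D DFTs followed by N column-wise 1-D DFTs.
def dft2_separable(img):
    rows = [dft1(r) for r in img]                   # intermediate image P(k,b)
    out_cols = [dft1(list(c)) for c in zip(*rows)]  # transform the columns
    return [list(r) for r in zip(*out_cols)]

img = [[1, 2], [3, 4]]
F = dft2_separable(img)
print(F[0][0])   # F(0,0), the DC component, equals the sum of all pixels
```

For this 2×2 input the DC component F(0,0) is 1 + 2 + 3 + 4 = 10, matching the interpretation of F(0,0) as (unnormalized) average brightness.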
The Fourier Transform produces a complex number valued output image which can be
displayed with two images, either with the real and imaginary part or
with magnitude and phase. In image processing, often only the magnitude of the Fourier
Transform is displayed, as it contains most of the information of the geometric structure
of the spatial domain image. However, if we want to re-transform the Fourier image
into the correct spatial domain after some processing in the frequency domain, we must
make sure to preserve both magnitude and phase of the Fourier image.

SYNTAXES USED:
IMREAD
IMSHOW

ZEROS () creates an array of all zeros.
B = zeros (m, n) or B = zeros([m n]) returns an m-by-n matrix of zeros.
FFT2(X) returns the two-dimensional discrete Fourier transform (DFT) of X.
Y = fft2(X) returns the two-dimensional discrete Fourier transform (DFT) of X. The DFT is
computed with a fast Fourier transform (FFT) algorithm. The result, Y, is the same size as
X. If the dimensionality of X is greater than 2, the fft2 function returns the 2-D DFT for
each higher dimensional slice of X. For example, if size(X) = [100 100 3], then fft2
computes the DFT of X(:,:,1), X(:,:,2) and X(:,:,3).Y = fft2(X, m ,n) truncates X, or pads X
with zeros to create an m-by-n array before doing the transform. The result is m-by-n.
FFTSHIFT() Shift zero-frequency component to center of spectrum.
Y = fftshift(X) rearranges the outputs of fft, fft2, and fftn by moving the zero-frequency
component to the center of the array. It is useful for visualizing a Fourier transform with
the zero-frequency component in the middle of the spectrum.

ALGORITHM
Step 1 Define an input vector i
Step 2 Form the Dft matrix function
Step 3 Multiply the Dft matrix, its transpose and input vector
Convolution Property
Step 1 Generate a binary image using 0s and 1s
Step 2 Two images are generated
Step 3 Convolve both the generated images
Step 4 Take the FFT of the two images separately, multiply and take the inverse Fourier
transform
Step 5 Plot the convoluted and multiplied output
Step 6 Verify if the images are equal
Rotation property
Step 1 Generate a binary image using 0s & 1s
Step 2 Rotate the generated image using function imrotate()
Step 3 Take the magnitude of the FFT of the original and rotated image
Step 4 Display the image, rotated image & their corresponding FFTs

DISCRETE FOURIER TRANSFORM


DFT
clear;
clc;
M=input('Enter M=');
N=M;
i=sqrt(-1);
for k=0:N-1
for p=0:N-1
for n=0:N-1
for q=0:N-1
g{k+1,p+1}(n+1,q+1)=(exp((-2*pi*i*k*n)/N))*(exp((-2*pi*i*p*q)/M));
end
end
end
end
subplot(221);
mag=g;
t=1;
for i=1:N
for j=1:M
subplot (M,N,t);
imshow(mag{i,j});
t=t+1;
end
end

DFT AND IDFT TO IMAGE:


a=imread('Coins.png');
subplot(231);
imshow(a);
title('Original');
b=im2double(a);
c=fft2(b);
subplot(232);
imshow(c);
title('FFT');
d=ifft2(c);
subplot(233);
imshow(d);
title('IFFT');
mag=abs(c);
subplot(234);

imshow(mag);
title('Magnitude Plot');
ang=angle(c);
subplot(235);
imshow(ang);
title('Phase Plot');

TO PROVE CONVOLUTION PROPERTY:
img1=zeros(256);
for i=90:170
for j=90:170
img1(i,j)=1;
end
end
img2=ones(256);
for i=40:210
for j=40:210
img2(i,j)=0;
end
end
img3=conv2(img1,img2,'same');
ff1=fft2(img1);
ff2=fft2(img2);
ff3=ff1.*ff2;
img4=fftshift(ifft2(ff3));
subplot(231),imshow(img1),title('Image 1');
subplot(232),imshow(img2),title('Image 2');
subplot(233),imshow(ff1),title('FFT of Image 1');
subplot(234),imshow(ff2),title('FFT of Image 2');
subplot(235),imshow(img3),title('Convolution in Time Domain');
subplot(236),imshow(img4),title('Multiplication in Frequency Domain');

TO PROVE ROTATION PROPERTY:


clc;
clear;
a=imread('coins.png');
subplot(231);
imshow(a);
title('Image');
a1=im2double(a);
b=fft2(a1);

subplot(232);
imshow(b);
title('FFT of Image');
c=imrotate(a1,90);
subplot(233);
imshow(c);
title('Rotated Image');
d=fft2(c);
subplot(234);
imshow(d);
title('FFT of Rotated Image');





























DISCRETE FOURIER TRANSFORM GENERATED IMAGES


DFT & IDFT IN IMAGE:

TO PROVE CONVOLUTION PROPERTY:


TO PROVE ROTATION PROPERTY:

EX.NO:
DISCRETE COSINE TRANSFORMATION
DATE:

AIM:
To write a MATLAB program
a) To generate a Basis images with the given matrix dimension using DCT
transforms
b) To perform the discrete cosine transform on the given image

SOFTWARE USED:
MATLAB Version 2013

THEORY

The N×N cosine transform matrix C = {c(k,n)}, also called the Discrete Cosine Transform
(DCT), is defined as

c(k,n) = sqrt(1/N),                        k = 0, 0 ≤ n ≤ N−1
c(k,n) = sqrt(2/N) · cos((2n+1)kπ / 2N),   1 ≤ k ≤ N−1, 0 ≤ n ≤ N−1

The 2-D DCT of a sequence and the inverse discrete cosine transform are defined as:

F(u,v) = α(u)α(v) Σ (x=0 to N−1) Σ (y=0 to N−1) f(x,y) · cos((2x+1)uπ / 2N) · cos((2y+1)vπ / 2N)

f(x,y) = Σ (u=0 to N−1) Σ (v=0 to N−1) α(u)α(v) F(u,v) · cos((2x+1)uπ / 2N) · cos((2y+1)vπ / 2N)

where α(0) = sqrt(1/N) and α(u) = sqrt(2/N) for u ≠ 0.
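The DCT matrix defined above is orthonormal, which is what makes the inverse transform simply its transpose. This Python sketch (illustrative, separate from the MATLAB programs) builds the matrix for N = 4 and checks one row norm and one cross product:

```python
import math

# The N x N DCT matrix c(k, n) as defined above; its rows form an
# orthonormal basis, so C * C^T is the identity matrix.
def dct_matrix(N):
    C = []
    for k in range(N):
        a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        C.append([a * math.cos((2 * n + 1) * k * math.pi / (2 * N))
                  for n in range(N)])
    return C

C = dct_matrix(4)
row_norm = sum(x * x for x in C[1])                  # should be 1
cross = sum(C[1][n] * C[2][n] for n in range(4))     # should be ~0
print(round(row_norm, 10), round(cross, 10))
```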

SYNTAXES USED:

IMREAD
IMSHOW
SUBPLOT: subplot Create axes in tiled positions.
DCT: Discrete cosine transform.
Y = dct(X) returns the discrete cosine transform of X. The vector Y is the same size as X
and contains the discrete cosine transform coefficients.
Y = dct(X, N) pads or truncates the vector X to length N before transforming.
ABS: Absolute value.
abs(X) is the absolute value of the elements of X. When X is complex, abs(X) is the
complex modulus (magnitude) of the elements of X.
LOG: Natural logarithm.
log(X) is the natural logarithm of the elements of X.Complex results are produced if X is
not positive.
IDCT: Inverse discrete cosine transform.
X = idct(Y) inverts the DCT transform, returning the original vector if Y was obtained
using Y = DCT(X). X = idct(Y,N) pads or truncates the vector Y to length N before
transforming. If Y is a matrix, the idct operation is applied to each column.

ALGORITHM :

Basis function generation

Step 1 Open a new M file
Step 2 Get the input matrix dimension ,N
Step 3 Assign the values for (k) (l)
Step 4 Use the suitable Matlab expression for cosine transforms inside the loop
Step 5 Repeat the loop till N-1
Step 6 Plot the basis vectors as NxN

DCT of an image

Step 1 Read the input image in a variable a
Step 2 Convert it to grayscale
Step 3 Resize the image to NxN size
Step 4 Generate the CT transform matrix of size N
Step 5 Apply the rule U=A.V.AT and store the size N
Step 6 Convert the output image in absolute and logarithmic form
Step 7 Plot the DCT of an image

DISCRETE COSINE TRANSFORM


clc; close all; clear;
N=input('Enter Order=');
M=N;
a1=ones(1,N)*sqrt(2/N);
a1(1)=sqrt(1/N);
a2=ones(1,N)*sqrt(2/M);
a2(1)=sqrt(1/M);
for u=0:N-1
for v=0:M-1
for x=0:N-1
for y=0:M-1
h{u+1,v+1}(x+1,y+1)=a1(u+1)*a2(v+1)*cos((((2.*x)+1).*u.*pi)/(2.*N)).*cos((((2.*y)+1).*v
.*pi)/(2.*N));
end
end
end
end
mag=h;
k=1;
for i=1:N
for j=1:M
subplot(N,M,k);
imshow(mag{i,j});
k=k+1;
end
end
DCT & IDCT IN IMAGE:
clear; clc;
img=imread('Rice.png');
img2=im2double(img);
img3=dct2(img2);
img4=idct2(img3);
subplot(231),imshow(img),title('Original');
subplot(232),imshow(img3),title('DCT');
subplot(233),imshow(img4),title('IDCT');
mag=abs(img3);
subplot(234),imshow(mag),title('Magnitude Plot');
pha=angle(img3);
subplot(235),imshow(pha),title('Phase Plot');

OUTPUT:
DISCRETE COSINE TRANSFORM GENERATED IMAGES


DCT & IDCT IN IMAGE:

EX NO.



HISTOGRAM EQUALISATION
DATE

AIM

To write a MATLAB code to enhance contrast the given images using histogram
equalization

SOFTWARE USED

MATLAB 2013a

THEORY

Histogram

Histogram of a digital image with intensity levels in the range [0,L-1] is a discrete
function

h(rk) = nk

Where rk = kth intensity value in the interval [0, G]
nk = number of pixels in the image with intensity rk

It is a graphical representation giving a visual impression of the distribution of data. It is
an estimate of the probability distribution of a continuous variable. The normalized
histogram is obtained by dividing all elements of h(rk) by the total number of pixels in the
image

P(rk) = h(rk)/ n = nk/ n

Histogram equalization

It is a grey level mapping process that seeks to produce monochrome images with
uniform intensity histograms. The net result is an intensity range with increased
dynamic range, which tends to give high contrast. The transformation is the cumulative
distribution function:

sk = T(rk) = (L − 1) Σ (j=0 to k) p(rj)

SYNTAXES USED

IMREAD
IMSHOW
IMHIST(I)

calculates the histogram for the intensity image I and displays a plot of the histogram.
The number of bins in the histogram is determined by the image type.

If I is a grayscale image, imhist uses a default value of 256 bins.
If I is a binary image, imhist uses two bins.

imhist(i,n) displays a histogram for the intensity image I, where n specifies the number
of bins used in the histogram. n also specifies the length of the colorbar displayed at the
bottom of the histogram plot. If I is a binary image, n can only have the value 2.

HISTEQ

Histeq enhances the contrast of images by transforming the values in an intensity
image, or the values in the colormap of an indexed image, so that the histogram of the
output image approximately matches a specified histogram.

J = histeq(I,hgram) transforms the intensity image I so that the histogram of the output
intensity image J with length(hgram) bins approximately matches hgram. The vector
hgram should contain integer counts for equally spaced bins with intensity values in the
appropriate range: [0, 1] for images of class double, [0, 255] for images of class uint8,
and [0, 65535] for images of class uint16. histeq automatically scales hgram so that
sum(hgram) = prod(size(I)). The histogram of J will better match hgram when
length(hgram) is much smaller than the number of discrete levels in I.

J = histeq(I,n) transforms the intensity image I, returning in J an intensity image with n
discrete gray levels. A roughly equal number of pixels is mapped to each of the n levels
in J, so that the histogram of J is approximately flat. (The histogram of J is flatter when n
is much smaller than the number of discrete levels in I.) The default value for n is 64.

ALGORITHM

The following steps are performed to obtain histogram equalisation. Consider a 3×3
matrix A with 9 pixels and let the number of bins = 5.

Step 1 Find the frequency of each pixel value: pixel value 1 occurs 3 times, pixel
value 2 occurs 2 times, and so on.
Step 2 Find the probability of each value: probability of pixel value 1 =
frequency(1) / number of pixels = 3/9.
Step 3 Find the cumulative histogram of each pixel value: cumulative histogram
of 1 = 3; cumulative histogram of 2 = cumulative histogram of 1 + frequency
of 2 = 3 + 2 = 5.
Step 4 Find the cumulative probability distribution (CDF) of each pixel value:
CDF of 1 = cumulative histogram of 1 / number of pixels = 3/9.
Step 5 Calculate the final value of each pixel by multiplying its CDF by the
number of bins: final value of 1 = 3/9 × 5 = 1.6667, rounded off ≈ 2.
Step 6 Replace the original values with the final values.
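The steps above can be reproduced in a short Python sketch (illustrative, separate from the MATLAB programs); the 3×3 matrix A below is an assumed example whose frequencies match the worked numbers:

```python
from collections import Counter

# Worked histogram equalisation on an assumed 3x3 matrix A
# (5 intensity levels, number of bins = 5), following the steps above.
A = [1, 1, 1, 2, 2, 3, 3, 4, 5]
bins = 5
n = len(A)

freq = Counter(A)                        # Step 1: frequency of each value
cum, mapping = 0, {}
for v in sorted(freq):
    cum += freq[v]                       # Step 3: cumulative histogram
    cdf = cum / n                        # Step 4: cumulative distribution
    mapping[v] = round(cdf * bins)       # Step 5: scale by number of bins
equalised = [mapping[v] for v in A]      # Step 6: replace the values

print(mapping[1])     # 3/9 * 5 = 1.67, rounded to 2
print(equalised)
```

The per-value probabilities of Step 2 appear implicitly as cum/n inside the loop.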

HISTOGRAM EQUALIZATION WITHOUT USING INBUILT FUNCTION


clear;
clc;
I=imread('cameraman.tif');
I=double(I);
maximum_value=max((max(I)));
[row col]=size(I);
c=row*col;
h=zeros(1,300);
z=zeros(1,300);
for n=1:row
for m=1:col
if I(n,m) == 0
I(n,m)=1;
end
end
end
for n=1:row
for m=1:col
t = I(n,m);
h(t) = h(t) + 1;
end
end
pdf = h/c;
cdf(1) = pdf(1);
for x=2:maximum_value
cdf(x) = pdf(x) + cdf(x-1);
end
new = round(cdf * maximum_value);
new= new + 1;
for p=1:row
for q=1:col
temp=I(p,q);
b(p,q)=new(temp);
t=b(p,q);
z(t)=z(t)+1;
end
end
b=b-1;
subplot(2,2,1);
imshow(uint8(I)) , title(' Image1');

subplot(2,2,2), bar(h) , title('Histogram of the Original Image');


subplot(2,2,3), imshow(uint8(b)) , title('Image2');
subplot(2,2,4), bar(z) , title('Histogram Equalisation of image2');

HISTOGRAM EQUALIZATION USING INBUILT FUNCTION
img1=imread('cameraman.tif');
equalised=histeq(img1);
subplot(221);imshow(img1);title('Original Image');
subplot(222);imhist(img1);title('Original Image Histogram');
subplot(223);imshow(equalised);title('Equalised Image');
subplot(224);imhist(equalised);title('Equalised Image Histogram');





























OUTPUT
HISTOGRAM EQUALIZATION WITHOUT INBUILT FUNCTION


HISTOGRAM EQUALIZATION WITH INBUILT FUNCTION

EX NO.:
IMAGE FILTERING-FREQUENCY DOMAIN
DATE:

AIM:

To write a MATLAB program to apply the following filters to the given image:
a) 2-D Butterworth low pass filter
b) 2-D Butterworth high pass filter
c) Gaussian low pass filter
d) Gaussian high pass filter
e) Ideal low pass filter
f) Ideal high pass filter

SOFTWARES USED :

MATLAB Version 2013a

THEORY:

The foundation for linear filtering in the frequency domain is the convolution theorem,

f(x,y) * h(x,y) ⇔ H(u,v) F(u,v)

where H(u,v) is the filter transfer function.

The idea in frequency domain filtering is to select a filter transfer function that
modifies F(u,v) in a specific manner.

LOWPASS FREQUENCY DOMAIN FILTERS:



The transfer function of an ideal lowpass filter is

H(u,v) = 1, if D(u,v) ≤ D0
H(u,v) = 0, if D(u,v) > D0

where D(u,v) is the distance from point (u,v) to the center of the frequency rectangle:

D(u,v) = [(u − M/2)² + (v − N/2)²]^(1/2)

and D0 is a positive number.
The locus of points for which D(u,v) = D0 is a circle. Because the filter H(u,v) multiplies
the Fourier transform of an image, we see that an ideal filter cuts off all components of
F(u,v) outside the circle and keeps all the components on or inside the circle
unchanged.

A Butterworth lowpass filter of order n, with cutoff frequency at distance D0 from
the center of the filter, has the transfer function

H(u,v) = 1 / (1 + [D(u,v) / D0]^(2n))

A Gaussian lowpass filter has the transfer function

H(u,v) = e^(−D²(u,v) / 2D0²)

where D0 is the standard deviation σ.

HIGH PASS FREQUENCY DOMAIN FILTERS:

The high pass filtering sharpens the image by attenuating low frequencies and leaving
the high frequencies of an image unaltered.
If Hlp(u,v) is the transfer function of a lowpass filter, then the transfer function of the
corresponding high pass filter Hhp(u,v) is given by,

Hhp(u,v)=1- Hlp(u,v)
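The complementary relationship between the lowpass and highpass transfer functions can be checked numerically. This Python sketch is illustrative: the cutoff D0 = 26 matches the radius used in the scripts below, while the order n = 2 is an assumed value (the scripts prompt the user for it):

```python
# Butterworth lowpass/highpass transfer-function values at a given distance
# D(u,v) from the centre, with assumed cutoff D0 = 26 and order n = 2.
def butter_lp(d, d0=26.0, n=2):
    return 1.0 / (1.0 + (d / d0) ** (2 * n))

def butter_hp(d, d0=26.0, n=2):
    return 1.0 - butter_lp(d, d0, n)

print(butter_lp(0.0))    # 1.0: the DC component passes untouched
print(butter_lp(26.0))   # 0.5: half response exactly at the cutoff
print(butter_lp(52.0) + butter_hp(52.0))   # 1.0: LP and HP are complementary
```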

ALGORITHM:
Step 1 Read the image to be filtered.
Step 2 Take the Fourier Transform of the image using function fft2().
Step 3 Define the transfer function of the filter and form the transfer matrix.
Step 4 Multiply the formed transfer matrix with transformed image.
Step 5 Take inverse Fourier transform to get the filtered image. Display the images and
transfer function in 3-D view.

Ideal low pass and high pass filter:


Step 1 Read the image.
Step 2 Create the mask for low pass or high pass filtering.
Step 3 Take the Fourier transform of the mask and of the image using function fft2().
Step 4 Multiply the transformed mask and image.
Step 5 Take the inverse Fourier transform using the function ifft2() to get filtered
image.
Step 6 Display the filtered image, mask and fftshift of the mask
































FREQUENCY DOMAIN FILTERS


BUTTERWORTH LOWPASS FILTER
clear;
clc;
img=imread('Coins.png');
[X,Y]=size(img);
N=input('Order of Filter=');
x=ceil(X/2);
y=ceil(Y/2);
rad=26;
for i=1:X
for j=1:Y
d(i,j)=sqrt((i-x).^2+(j-y).^2);
h(i,j)=1/(1+((d(i,j))/rad).^(2*N));
end
end
fft1=fftshift(fft2(img));
fil=h.*fft1;
fin=ifft2(fil);
fin1=uint8(fin);
subplot(221);
imshow(img);
title('Original');
subplot(222);
imshow(fin1);
title('After LPF');
subplot(223);
surf(h);
title('LPF in 3D');
subplot(224);
imshow(h);
title('LPF as Image');

BUTTERWORTH HIGH PASS FILTER
clear;
clc;
img=imread('Coins.png');
[X,Y]=size(img);
N=input('Order of Filter=');
x=ceil(X/2);
y=ceil(Y/2);

rad=26;
for i=1:X
for j=1:Y
d(i,j)=sqrt((i-x).^2+(j-y).^2);
h(i,j)=1-(1/(1+((d(i,j))/rad).^(2*N)));
end
end
fft1=fftshift(fft2(img));
fil=h.*fft1;
fin=ifft2(fil);
fin1=uint8(fin);
subplot(221);
imshow(img);
title('Original');
subplot(222);
imshow(fin1);
title('After HPF');
subplot(223);
surf(h);
title('HPF in 3D');
subplot(224);
imshow(h);
title('HPF as Image');

IDEAL LOW PASS FILTER
clear;
clc;
img=imread('Coins.png');
[X,Y]=size(img);
N=input('Order of Filter=');
x=ceil(X/2);
y=ceil(Y/2);
rad=26;
for i=1:X
for j=1:Y
d(i,j)=sqrt((i-x).^2+(j-y).^2);
if d(i,j)<=rad
h(i,j)=1;
else
h(i,j)=0;
end

end
end
fft1=fftshift(fft2(img));
fil=h.*fft1;
fin=ifft2(fil);
fin1=uint8(fin);
subplot(221);
imshow(img);
title('Original');
subplot(222);
imshow(fin1);
title('After Ideal LPF');
subplot(223);
surf(h);
title('Ideal LPF in 3D');
subplot(224);
imshow(h);
title('Ideal LPF as Image');

IDEAL HIGH PASS FILTER
clear;
clc;
img=imread('Coins.png');
[X,Y]=size(img);
N=input('Order of Filter=');
x=ceil(X/2);
y=ceil(Y/2);
rad=26;
for i=1:X
for j=1:Y
d(i,j)=sqrt((i-x).^2+(j-y).^2);
if d(i,j)<=rad
h(i,j)=0;
else
h(i,j)=1;
end
end
end
fft1=fftshift(fft2(img));
fil=h.*fft1;
fin=ifft2(fil);

fin1=uint8(fin);
subplot(221);
imshow(img);
title('Original');
subplot(222);
imshow(fin1);
title('After Ideal HPF');
subplot(223);
surf(h);
title('Ideal HPF in 3D');
subplot(224);
imshow(h);
title('Ideal HPF as Image');

GAUSSIAN LOW PASS FILTER:
clear;
clc;
img=imread('Coins.png');
[X,Y]=size(img);
N=input('Order of Filter=');
x=ceil(X/2);
y=ceil(Y/2);
rad=26;
for i=1:X
for j=1:Y
d(i,j)=sqrt((i-x).^2+(j-y).^2);
h(i,j)=exp(-(d(i,j).^2)/(2*((rad).^2)));
end
end
fft1=fftshift(fft2(img));
fil=h.*fft1;
fin=ifft2(fil);
fin1=uint8(fin);
subplot(221);
imshow(img);
title('Original');
subplot(222);
imshow(fin1);
title('After Gaussian LPF');
subplot(223);
surf(h);

title('Gaussian LPF in 3D');


subplot(224);
imshow(h);
title('Gaussian LPF as Image');

GAUSSIAN HIGH PASS FILTER:
clear;
clc;
img=imread('Coins.png');
[X,Y]=size(img);
N=input('Order of Filter=');
x=ceil(X/2);
y=ceil(Y/2);
rad=26;
for i=1:X
for j=1:Y
d(i,j)=sqrt((i-x).^2+(j-y).^2);
h(i,j)=1-exp(-(d(i,j).^2)/(2*((rad).^2)));
end
end
fft1=fftshift(fft2(img));
fil=h.*fft1;
fin=ifft2(fil);
fin1=uint8(fin);
subplot(221);
imshow(img);
title('Original');
subplot(222);
imshow(fin1);
title('After Gaussian HPF');
subplot(223);
surf(h);
title('Gaussian HPF in 3D');
subplot(224);
imshow(h);
title('Gaussian HPF as Image');




OUTPUT:
BUTTERWORTH LOWPASS FILTER


BUTTERWORTH HIGHPASS FILTER

IDEAL LOWPASS FILTER


IDEAL HIGHPASS FILTER

GAUSSIAN LOWPASS FILTER


GAUSSIAN HIGHPASS FILTER


EX NO.:
COLOUR IMAGE PROCESSING-I
DATE:

AIM:

To write a MATLAB program to:

a) Display a colour image and its R, G, B and H, S, V components
b) Convert the given RGB image to different forms
c) Perform colour manipulation using brighten, cmpermute, cmunique, rgbplot
commands

THEORY:

Colour image processing deals with colour descriptions of images that are natural and
intuitive to humans.
Human eyes:
Primary colors for standardization
blue : 435.8 nm
green : 546.1 nm
red : 700 nm
Cones in human eyes are divided into three sensing categories
65% are sensitive to red light
33% sensitive to green light
2% sensitive to blue (but most sensitive)
The R, G, and B colors perceived by
human eyes cover a range of spectrum
RGB Colour models:
The model is based on a Cartesian co-ordinate system. RGB values are at 3 corners. Cyan
, magenta and yellow are at the other three corners. Black is at the origin and white is at
the corner furthest from the origin.


NTSC Colour space:
One of the main advantages of this format is that grayscale information is separated
from color data, so the same signal can be used for both color and black and white
television sets.
In the NTSC color space, image data consists of three components: luminance (Y), hue
(I), and saturation (Q). The first component, luminance, represents grayscale
information, while the last two components make up chrominance (color information).



YCbCr
The YCbCr color space is widely used for digital video. In this format, luminance
information is stored as a single component (Y), and chrominance information is stored
as two color-difference components (Cb and Cr). Cb represents the difference between
the blue component and a reference value. Cr represents the difference between the
red component and a reference value.

Y  =  16 + ( 65.481·R + 128.553·G +  24.966·B)
Cb = 128 + (−37.797·R −  74.203·G + 112.000·B)
Cr = 128 + (112.000·R −  93.786·G −  18.214·B)

HSV colour space:

Hue, saturation, value (HSV) is formulated by looking at the RGB colour cube along its
gray-scale axis (the axis joining the black and white vertices), which results in a
hexagonally shaped colour space. As we move along the vertical (gray) axis, the size of
the hexagonal plane perpendicular to the axis changes.
Hue: expressed as an angle around the colour hexagon, typically using the red axis
as the reference (0) axis.
Value: measured along the axis of the cone.
V = 0 end of the axis is black
V = 1 end of the axis is white
Saturation: (purity of colour) measured as the distance from the V axis.
The HSV colour system is based on cylindrical co-ordinates. Conversion from RGB to
HSV entails developing the equations that map RGB values (which are in Cartesian co-
ordinates) to cylindrical co-ordinates.


CMY & CMYK colour spaces
Cyan, magenta and yellow are the secondary colours of RGB or, alternatively, the
primary colours of pigments.
This model is used in colour printers and copiers that deposit colour pigments on paper.
It requires CMY data input, or an RGB-to-CMY conversion performed
internally, as:

C = 1 - R
M = 1 - G
Y = 1 - B

(assuming all colour values have been normalized to the range [0, 1]).


In theory, equal amounts of cyan, magenta and yellow should produce black. In
practice, however, a muddy black is produced. Hence a fourth colour, black, is
added, giving rise to the CMYK model.
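The CMY relation and the extra black channel can be sketched in a few lines (Python here, for illustration only; the K computation below uses simple undercolor removal, which is one common convention rather than the only one):

```python
import numpy as np

def rgb2cmy(rgb):
    # C = 1 - R, M = 1 - G, Y = 1 - B for values in [0, 1]
    return 1.0 - np.asarray(rgb, dtype=float)

def cmy2cmyk(cmy):
    # simple undercolor removal: pull the common gray component into K
    k = cmy.min()
    if k == 1.0:                      # pure black
        return np.array([0.0, 0.0, 0.0, 1.0])
    cmy_adj = (cmy - k) / (1.0 - k)
    return np.append(cmy_adj, k)

print(cmy2cmyk(rgb2cmy([1.0, 0.0, 0.0])))  # red -> C=0, M=1, Y=1, K=0
```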
HSI colour space: Hue, saturation and brightness.

Hue: the colour attribute that describes a pure colour, whereas saturation is the
measure of the degree to which a pure colour is diluted by white light.
Brightness: a subjective descriptor that is practically impossible to measure.

In the HSI colour solid, a dot marks an arbitrary colour point.
The angle from the red axis gives the hue, and the length of the vector is the
saturation.
The intensity of all colours in any of these planes is given by the position of the
plane on the vertical intensity axis.
Converting colours from RGB to HSI
Given a colour as R, G and B, its H, S and I values are calculated as follows:

H = theta          if B <= G
H = 360 - theta    if B > G

where

theta = cos^-1 { (1/2)[(R - G) + (R - B)] / [ (R - G)^2 + (R - B)(G - B) ]^(1/2) }

S = 1 - 3 min(R, G, B) / (R + G + B)

I = (1/3)(R + G + B)
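A direct transcription of these formulas (in Python, as an illustrative sketch rather than toolbox code) makes the geometry concrete:

```python
import math

def rgb2hsi(r, g, b):
    # direct transcription of the H, S, I formulas above; r, g, b in [0, 1]
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                       # gray: hue is undefined, use 0 by convention
        theta = 0.0
    else:
        ratio = max(-1.0, min(1.0, num / den))   # clamp against rounding error
        theta = math.degrees(math.acos(ratio))
    h = theta if b <= g else 360.0 - theta
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b) if (r + g + b) else 0.0
    i = (r + g + b) / 3.0
    return h, s, i

print(rgb2hsi(1.0, 0.0, 0.0))
```

Pure red gives H = 0, S = 1 and I = 1/3, matching the colour solid: red sits on the reference axis, is fully saturated, and contributes one third of full intensity.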
Cmunique: Eliminate duplicate colors in colormap and convert grayscale or truecolor image
to indexed image
Cmpermute: Rearrange colors in colormap. Randomly reorders the colors in map to
produce a new colormap

Imapprox: Approximate indexed image by reducing number of colors
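What cmunique does — collapse duplicate colormap rows and remap the index image accordingly — can be sketched with NumPy on a toy colormap (the index image X and the map below are made-up illustrative data, not toolbox output):

```python
import numpy as np

X = np.array([[0, 1],
              [2, 3]])               # index image
cmap = np.array([[0., 0., 0.],
                 [1., 0., 0.],
                 [0., 0., 0.],       # duplicate of row 0
                 [1., 0., 0.]])      # duplicate of row 1

# unique rows of the colormap, plus the mapping old-row -> new-row
newmap, inverse = np.unique(cmap, axis=0, return_inverse=True)
Y = inverse[X]                       # remapped index image

# newmap has only 2 rows, and X/cmap and Y/newmap address identical colors
print(newmap.shape[0])
```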


ALGORITHM:

To display colour images and individual colour elements:

Step 1 Read the input RGB image
Step 2 Display the image in RGB, HSV, NTSC, YCbCr and Lab forms. Separate the individual
colour elements such as the R-plane, G-plane, B-plane, HSV planes, YIQ planes and Lab planes.
Step 3 Observe the output images.

Colour image manipulation:

Step 1 Read the input image.
Step 2 Perform the colour map manipulation using brighten, cmpermute, cmunique
and rgbplot commands.
Step 3 Observe the output image














COLOR IMAGE PROCESSING


img1=imread('peppers.png');
subplot(551);
imshow(img1);
title('Original image');
img2=rgb2hsv(img1);
subplot(552);
imshow(img2);
title('HSV');
img3=rgb2ycbcr(img1);
subplot(553);
imshow(img3);
title('YCBCR');
img4=rgb2ntsc(img1);
subplot(554);
imshow(img4);
title('NTSC');
img5=img1(: , : , 1);
subplot(555);
imshow(img5);
title('R-PLANE');
img6=img1(: , : , 2);
subplot(556);
imshow(img6);
title('G-PLANE');
img7=img1(: , : , 3);
subplot(557);
imshow(img7);
title('B-PLANE');
img8=img2(: , : ,1);
subplot(558);
imshow(img8);
title('HUE');
img9=img2(: , :, 2);
subplot(559);
imshow(img9);
title('SATURATION');
img10=img2(: , : , 3);
subplot(5,5,10);
imshow(img10);
title('VALUE');
Red=img1;Blue=img1;Green=img1;
Red(:,:,2:3)=0;
subplot(5,5,11);

imshow(Red);
title('RED COMPONENT');
Green(:,:,[1 3])=0;
subplot(5,5,12);
imshow(Green);
title('GREEN COMPONENT');
Blue(:,:,1:2)=0;
subplot(5,5,13);
imshow(Blue);
title('BLUE COMPONENT');
j=img3(: ,: ,1);
subplot(5,5,14);
imshow(j);
title('Y COMPONENT');
k=img3(: ,: ,2);
subplot(5,5,15);
imshow(k);
title('CB COMPONENT');
l=img3(: ,: ,3);
subplot(5,5,16);
imshow(l);
title('CR COMPONENT');


















OUTPUT
CONVERSION BETWEEN COLOR SPACE

EX NO.
COLOUR IMAGE PROCESSING- II
DATE:

AIM:
To write a MATLAB program to perform colour conversion of colour images into various
formats.

SOFTWARES USED
MATLAB Version 2013a

THEORY:
Makecform:
Create color transformation structure
C = makecform(type) creates the color transformation structure C that defines the color
space conversion specified by type. To perform the transformation, pass the color
transformation structure as an argument to the applycform function.
The type argument specifies one of the conversions listed in the following
table. Makecform supports conversions between members of the family of device-
independent color spaces defined by the CIE, (International Commission on
Illumination). In addition, makecform also supports conversions to and from
the sRGB and CMYK color spaces
Applycform:
Apply a device-independent color space transformation. B = applycform(A,C) converts the
color values in A to the color space specified in the color transformation structure C. The
color transformation structure specifies various parameters of the transformation.
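As a concrete example of such a device-independent conversion, the sRGB-to-XYZ step that makecform('srgb2xyz') encapsulates can be sketched by hand: undo the sRGB gamma, then apply the standard D65 sRGB-to-XYZ matrix (a Python sketch for illustration, not the toolbox implementation):

```python
import numpy as np

# standard D65 linear-sRGB -> XYZ matrix
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    rgb = np.asarray(rgb, dtype=float)
    # sRGB gamma expansion (piecewise: linear segment near black, power law elsewhere)
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M @ linear

print(srgb_to_xyz([1.0, 1.0, 1.0]))  # D65 white point, approx [0.9505, 1.0, 1.089]
```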
ALGORITHM:
Step 1 Read the input image.
Step 2 Create the required transformation structures using makecform and apply them using applycform.
Step 3 Perform conversions such as srgb2lab, lab2srgb, xyz2uvl, xyz2lab and xyz2xyl.
Step 4 Observe the output images.








IMAGE TYPE CONVERSION


img1=imread('onion.png');
img2=im2double(img1);
img3=makecform('srgb2lab');
img4=applycform(img2,img3);
subplot(331);
imshow(img4);
title('SRGB2LAB');
img5temp=makecform('lab2srgb');
img5=applycform(img2,img5temp);
subplot(332);
imshow(img5);
title('LAB2SRGB');
img6temp=makecform('srgb2xyz');
img6=applycform(img2,img6temp);
subplot(333);
imshow(img6);
title('SRGB2XYZ');
img7temp=makecform('uvl2xyz');
img7=applycform(img2,img7temp);
subplot(334);
imshow(img7);
title('UVL2XYZ');
img8temp=makecform('lch2lab');
img8=applycform(img2,img8temp);
subplot(335);
imshow(img8);
title('LCH2LAB');
img9temp=makecform('lab2lch');
img9=applycform(img2,img9temp);
subplot(336);
imshow(img9);
title('LAB2LCH');
img10temp=makecform('xyz2uvl');
img10=applycform(img2,img10temp);
subplot(337);
imshow(img10);
title('XYZ2UVL');
img11temp=makecform('xyl2xyz');
img11=applycform(img2,img11temp);
subplot(338);

imshow(img11);
title('XYL2XYZ');
img12temp=makecform('xyz2lab');
img12=applycform(img2,img12temp);
subplot(339);
imshow(img12);
title('XYZ2LAB');

CMPERMUTE
load trees
hsvmap=rgb2hsv(map);
[dum,index]=sort(hsvmap(:,3));
[Y,newmap]=cmpermute(X,map,index);
figure;
subplot(221);
image(X);
colormap(map);
title('Original image');
subplot(222);
rgbplot(map);
title('RGB plot of map');
subplot(223);
image(Y);
colormap(newmap);
title('Image after cmpermute');
subplot(224);
rgbplot(newmap);
title('RGB plot of map after cmpermute');

CMUNIQUE
load trees
subplot(121);
image(X);
colormap(map);
title('Original image');
axis off;
axis image;
[Y,newmap]=cmunique(X,map);

subplot(122);
image(Y);
colormap(newmap);
title('Image after cmunique');
axis off;
axis image;

IMAPPROX
load trees
subplot(121);
image(X);
colormap(map);
title('Original image');
axis off;
axis image;
[Y,newmap]=imapprox(X,map,0.5);
subplot(122);
image(Y);
colormap(newmap);
title('Image after imapprox');
axis off;
axis image;


















IMAGE TYPE CONVERSION

CMPERMUTE

CMUNIQUE







IMAPPROX

EX NO:
WAVELET BASED IMAGE COMPRESSION
DATE:

AIM:

To write a MATLAB program to perform second level decomposition on image using
a) Haar wavelets,
b) Daubechies wavelet and
c) Bi-orthogonal wavelets

SOFTWARES USED:

MATLAB Version 2013a

THEORY:

The term DWT refers to a class of transformations that differ not only in the
transformation kernels employed, but also in the fundamental nature of those functions
and in the way in which they are applied. The various transforms are related by the fact
that their expansion functions are small waves of varying frequency and limited
duration.















Each wavelet possesses the following general properties:
Separability, scalability and translatability. The kernels can be represented as three
separable 2-D wavelets

psiH(x, y) = psi(x) . phi(y)
psiV(x, y) = phi(x) . psi(y)
psiD(x, y) = psi(x) . psi(y)

where psiH, psiV and psiD are called the horizontal, vertical and diagonal wavelets
respectively, and one separable 2-D scaling function

phi(x, y) = phi(x) . phi(y)

Each of these 2-D functions is the product of two 1-D real, square-integrable scaling and
wavelet functions:

phi(j,k)(x) = 2^(j/2) phi(2^j x - k)
psi(j,k)(x) = 2^(j/2) psi(2^j x - k)
Multiresolution compatibility. The 1-D scaling function just introduced satisfies the
following requirements of multiresolution analysis:
a. phi(j,k) is orthogonal to its integer translates.
b. The set of functions that can be represented as a series expansion of phi(j,k) at low
scales or resolutions is contained within those that can be represented at higher
scales.
c. The only function that can be represented at every scale is f(x) = 0.
d. Any function can be represented with arbitrary precision as j approaches infinity.
The wavelet psi(j,k), together with its integer translates and binary scalings, can represent the
difference between any two sets of phi(j,k)-representable functions at adjacent scales.
Orthogonality. The expansion functions form an orthonormal or biorthogonal basis for
the set of 1-D measurable, square-integrable functions. To be called a basis, there must
be a unique set of expansion coefficients for every representable function. As with the
Fourier kernels, g(u,v) = h(u,v) for real, orthonormal kernels. For the biorthogonal case,

<h_r, g_s> = delta(r,s) = 1 if r = s, 0 otherwise

and g is called the dual of h. For a biorthogonal wavelet transform with scaling and
wavelet functions phi(j,k)(x) and psi(j,k)(x), the duals are denoted phi~(j,k)(x) and psi~(j,k)(x)
respectively.
Fast wavelet transform:
An important consequence of the above properties is that both phi(x) and psi(x) can be
expressed as linear combinations of double-resolution copies of themselves, via the
series expansions

phi(x) = sum_n h_phi(n) . sqrt(2) phi(2x - n)
psi(x) = sum_n h_psi(n) . sqrt(2) phi(2x - n)

where h_phi and h_psi are called the scaling and wavelet vectors, respectively.
h_phi(-n): lowpass decomposition filter
h_psi(-n): highpass decomposition filter
A block containing a 2 and a down-arrow represents downsampling: extracting every other
point from a sequence of points.















Mathematically, the series of filtering and downsampling operations used to compute
WpsiH(j, m, n) in the figure is, for example,

WpsiH(j, m, n) = h_psi(-m) * [ h_phi(-n) * Wphi(j+1, m, n) |n=2k, k>=0 ] |m=2k, k>=0

The input to the filter bank is decomposed into four lower-resolution components. The
coefficients created via the two lowpass filters are called approximation coefficients,
while { Wpsi_i for i = H, V, D } are the horizontal, vertical and diagonal detail
coefficients respectively:

Wphi(j0, m, n) = (1/sqrt(MN)) sum_x sum_y f(x, y) phi(j0,m,n)(x, y)
Wpsi_i(j, m, n) = (1/sqrt(MN)) sum_x sum_y f(x, y) psi_i(j,m,n)(x, y),  i = H, V, D

The 2-D forward wavelet transform can be computed in a fast way using an analysis
filter bank.
We can apply wavelet-domain filtering methods after taking the wavelet transform of an
image:
1. Compute the 2-D wavelet transform of the image.
2. Modify the transform coefficients.
3. Compute the inverse transform.
DWT scaling vectors behave as lowpass filters and DWT wavelet vectors behave as
highpass filters.
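One level of this filter bank is easy to write out explicitly for the Haar case, where the lowpass and highpass filters are (1/sqrt(2))[1, 1] and (1/sqrt(2))[1, -1]. The sketch below (Python/NumPy, for illustration; the variable names are ours, and the H/V naming convention for the detail bands varies between references) filters and downsamples along the columns, then along the rows:

```python
import numpy as np

def haar_dwt2(x):
    # one level of the separable 2-D Haar DWT; assumes even dimensions
    x = np.asarray(x, dtype=float)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)    # lowpass + downsample, columns
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)    # highpass + downsample, columns
    cA = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)  # approximation band
    cH = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)  # detail band (lowpass rows)
    cV = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)  # detail band (highpass rows)
    cD = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)  # diagonal detail band
    return cA, cH, cV, cD

img = np.ones((4, 4))
cA, cH, cV, cD = haar_dwt2(img)
```

A constant image has all its energy in the approximation band: cA is constant and the three detail bands are zero.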
Salient features of wavelets:
Time and frequency resolution using wavelets

A wavelet basis function can be translated anywhere in the spatial domain. It can also
have multiple scaled versions with the adjacent scales related by a factor of 2. The
translated and scaled versions of these basis functions are used to detect and localise
different frequency components in an image.
A basis function at a higher scale can detect a high frequency in the image content.
Localization of this high frequency is made possible by noting the translation of the basis
function.


Multiresolution property of wavelet transform
The concept of scale is close to the concept of frequency when interpreted in terms of
capturing the level of detail. Basis functions at higher scales take note of rapid
variations in the signal and can thus capture a high level of detail, while basis functions
at lower scales capture a lower level of detail.
The wavelet's ability to capture multiple levels of detail makes a multiresolution
decomposition of the image possible.
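The scaled-and-translated basis psi(j,k)(x) = 2^(j/2) psi(2^j x - k) and its orthonormality can be checked numerically for the Haar mother wavelet (a Python sketch; the sampling grid is an arbitrary choice made for the numerical check):

```python
import numpy as np

def haar_psi(x):
    # mother Haar wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere
    return (np.where((x >= 0) & (x < 0.5), 1.0, 0.0)
            - np.where((x >= 0.5) & (x < 1), 1.0, 0.0))

def psi_jk(x, j, k):
    # psi_{j,k}(x) = 2^{j/2} psi(2^j x - k): binary scaling, integer translation
    return 2 ** (j / 2) * haar_psi(2 ** j * x - k)

x = np.linspace(0, 1, 1024, endpoint=False)
dx = x[1] - x[0]
# numerical check: <psi_{1,0}, psi_{1,1}> = 0 and ||psi_{1,0}||^2 = 1
inner = np.sum(psi_jk(x, 1, 0) * psi_jk(x, 1, 1)) * dx
norm = np.sum(psi_jk(x, 1, 0) ** 2) * dx
```

The two translates at scale j = 1 have disjoint supports, so their inner product vanishes, and the 2^(j/2) factor keeps each basis function at unit norm.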
ALGORITHM
Step 1 Read the input image from the MATLAB directory.
Step 2 Generate the wavelet transform coefficients using the dwt2 command.
Step 3 Generate a matrix from the obtained coefficients.
Step 4 Perform the operation consecutively for progressive levels of decomposition.
Step 5 Display the transformed image and the input image.
Step 6 For the LPF operation, perform the wavedec2 command and obtain the coefficients.
Step 7 Separate out the first few coefficients and reconstruct the image using the
waverec2 command.
Step 8 For the HPF, reconstruct from the last few coefficients and display the
reconstructed output.
Step 9 For wavelet-based denoising, add Gaussian noise to the input.
Step 10 Set a threshold, filter the noise and reconstruct the image.
Step 11 Find the approximation coefficients using appcoef2.

DISCRETE WAVELET TRANSFORM


USING HAAR
clc;
clear;
img1=imread('pout.tif');
[p,q,r,s]=dwt2(img1,'haar');
wav1=[p,q;r,s];
[p1,q1,r1,s1]=dwt2(p,'haar');
wav2=[p1,q1;r1,s1];
wav3=idwt2(p1,q1,r1,s1,'haar');
subplot(221),imshow(img1),title('Original');
subplot(222),imshow(wav1,[]),title('DWT Decimation');
subplot(223),imshow(wav2,[]),title('DWT Decimation');
subplot(224),imshow(wav3,[]),title('Reconstructed');

USING DAUBECHIES
clc;
clear;
img1=imread('pout.tif');
[p,q,r,s]=dwt2(img1,'db3');
wav1=[p,q;r,s];
[p1,q1,r1,s1]=dwt2(p,'db3');
wav2=[p1,q1;r1,s1];
wav3=idwt2(p1,q1,r1,s1,'db3');
subplot(221),imshow(img1),title('Original');
subplot(222),imshow(wav1,[]),title('Daubechies DWT Decimation');
subplot(223),imshow(wav2,[]),title('Daubechies DWT Decimation');
subplot(224),imshow(wav3,[]),title('Daubechies Reconstructed');
USING BIORTHOGONAL
clc;
clear;
img1=imread('pout.tif');
[p,q,r,s]=dwt2(img1,'bior3.5');
wav1=[p,q;r,s];
[p1,q1,r1,s1]=dwt2(p,'bior3.5');
wav2=[p1,q1;r1,s1];
wav3=idwt2(p1,q1,r1,s1,'bior3.5');
subplot(221),imshow(img1),title('Original');
subplot(222),imshow(wav1,[]),title('Biorthogonal DWT Decimation');
subplot(223),imshow(wav2,[]),title('Biorthogonal DWT Decimation');
subplot(224),imshow(wav3,[]),title('Biorthogonal Reconstructed');

OUTPUT
USING HAAR


USING DAUBECHIES

USING BIORTHOGONAL

EX NO IMAGE SEGMENTATION USING WATERSHED TRANSFORMATION


DATE

AIM
To write a MATLAB code to perform
a) Detection of lines and points
b) Thresholding operations on image using watershed transformation

SOFTWARE USED
MATLAB Version 2013a

THEORY
Image segmentation is the process of subdividing an image into its constituent parts or
regions. Image segmentation algorithms are based on two basic properties of intensity
values:
i. Discontinuity
ii. Similarity


a) Step edge b)Ramp edge c) Roof edge

a) STEP EDGE:
Digital step edges are used frequently as edge models in algorithm
development. A step edge is considered ideal.
b) RAMP EDGE:
Used when edges are blurred due to defocusing or noisy due to electronic
components. The ramp is a practical edge model.
c) ROOF EDGE:
Digitization of line drawings with thin features results in roof edges.
Edge linking (region boundary decision)
An edge detection procedure identifies the set of edge points, but the edge pixels by
themselves cannot segment an image. Three linking methods exist, based on whether they
gather their evidence at the local, regional or global level:
1. Local processing
2. Global processing
3. Regional processing using the Hough transform
ALGORITHM
Step 1 Read the input image from the MATLAB directory.
Step 2 Convolve the required mask with the input image for detecting horizontal,
vertical and diagonal lines and points using the imfilter command.
Step 3 To perform optimal thresholding, find the gray-level threshold using the
graythresh command.
Step 4 Threshold the input image using the im2bw command and display the
output.
Step 5 To perform local thresholding, divide the image into blocks using the blkproc
command.
Step 6 Find the localized threshold and threshold the individual blocks.
Step 7 To find the global threshold, set a threshold of your choice depending on the
pixel values and perform thresholding.
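The optimal threshold that graythresh returns is Otsu's threshold: the histogram split that maximizes the between-class variance. A from-scratch sketch of the same search (Python, for illustration; graythresh itself returns a normalized value in [0, 1] rather than a bin index):

```python
import numpy as np

def otsu_threshold(img):
    # Otsu's method: pick the gray level that maximizes between-class variance
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()        # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0  # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# two well-separated populations: the threshold falls between the spikes
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = otsu_threshold(img)
print(t)
```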

WATERSHED TRANSFORM
clc;
clear;
img1=checkerboard(40);
img2=imnoise(img1,'salt & pepper',0.1);
wa1=watershed(img1,8);
wa2=watershed(img2,8);
subplot(221),imshow(img1),title('Image 1');
subplot(222),imshow(img2),title('Image 2');
subplot(223),imshow(label2rgb(wa1)),title('Watershed of Image 1');
subplot(224),imshow(label2rgb(wa2)),title('Watershed of Image 2');

OUTPUT
