CHAPTER 1
INTRODUCTION
Sclera recognition has been found to be an effective technique for complementing other biometric modalities because the sclera is a well-protected part of the eye, which makes its features very difficult to steal or forge. Identification of a human being by the vessel pattern of the sclera is possible because this pattern differs greatly between people and is unique to every individual; even twins have different vessel patterns, which makes the sclera suitable for human identification. Secondly, the vessel pattern of a person is stable throughout their lifetime, and even the vessel patterns of the left and right eyes of the same person differ from each other. Therefore, this system is a reliable approach to human identification. This report is organized as follows: the next chapter covers the background of sclera recognition, after which the segmentation process is specified in detail, and finally we conclude.
CHAPTER 2
LITERATURE SURVEY
This information can be used to grade disease severity or as part of the automated diagnosis of diseases (biomedical systems). The algorithm is based on two critical procedures. The first is the accurate detection of the optic nerve and the macula: from their location on the retina, we extract information about the size of the retina and its general position in the image matrix. The results of this procedure are later used as a reference for the placement procedure, which forms the first comparison stage of the algorithm. The other critical stage is the detection of the vascular network. Accuracy at that stage determines the overall accuracy of the algorithm, because the whole identification procedure is based on the detection of the vascular network's branching points: high accuracy in vascular network detection yields high accuracy in branching-point detection.
All grayscale algorithms follow the same basic process: get the red, green, and blue values of a pixel; use a formula to turn those numbers into a single gray value; and replace the original red, green, and blue values with that gray value.
Because humans do not perceive all colors equally, the simple averaging method of grayscale conversion is inaccurate. Instead of treating red, green, and blue light equally, a good grayscale conversion weights each color based on how the human eye perceives it. A common formula in image processors is: Gray = (Red * 0.3 + Green * 0.59 + Blue * 0.11). Desaturating an image works by converting an RGB triplet to an HSL triplet and then forcing the saturation to zero; essentially, this takes a color and converts it to its least-saturated variant. A pixel can be desaturated by finding the midpoint between the maximum of (R, G, B) and the minimum of (R, G, B): Gray = (max(R, G, B) + min(R, G, B)) / 2.
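Both conversions can be sketched in a few lines. The report's code base is MATLAB, but as an illustration here is an equivalent Python sketch (function names are ours):

```python
def luminosity_gray(r, g, b):
    # Weighted conversion: Gray = R*0.3 + G*0.59 + B*0.11
    return 0.3 * r + 0.59 * g + 0.11 * b

def desaturate_gray(r, g, b):
    # Desaturation: midpoint of the brightest and darkest channels
    return (max(r, g, b) + min(r, g, b)) / 2
```

For a saturated color such as (100, 50, 200) the two formulas give noticeably different grays, which is why the weighted form is usually preferred for perceptual accuracy.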
The iris image is considered as the input for applying the edge detection techniques. First, the iris image is filtered using Gaussian filtering so that noise is removed, and then edge detection is used to extract the edge points in the iris image. Four edge detection techniques have been analyzed and compared for detecting edges in the iris image: Sobel, Prewitt, Laplacian of Gaussian (LoG), and a hybrid detector combining a minimum constructor with the Laplacian. The hybrid detector provides better results than the other edge detection techniques for iris images. The improved SSIM measure can be used for edge detection (parameterization), for modulating edge-tracing outputs, and for comparing edge tracing between real and fake images.
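To make the Sobel step concrete, here is a minimal gradient-magnitude sketch in plain Python (the tiny vertical-edge test values and function names are ours, not from the report, whose experiments use MATLAB's `edge` function):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def convolve3(img, kernel):
    """Apply a 3x3 kernel to the interior pixels of a 2-D list image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def sobel_magnitude(img):
    """Edge strength at each pixel: sqrt(Gx^2 + Gy^2)."""
    gx, gy = convolve3(img, SOBEL_X), convolve3(img, SOBEL_Y)
    return [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5
             for x in range(len(img[0]))] for y in range(len(img))]
```

On a 4x4 image with a vertical step from 0 to 255, the interior pixels on the step all get the maximal response 4 * 255 = 1020, while flat regions respond with zero.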
2.5 SEGMENTATION
Segmentation is one of the most important steps in image processing. It partitions an entire image into several parts that are more meaningful and easier to process further; rejoined, these parts cover the entire image. Segmentation may also depend on various features contained in the image, such as color or texture.
a) Region Based:
In this technique, pixels that are related to an object are grouped for segmentation. The thresholding technique is closely bound up with region-based segmentation, and the area detected for segmentation should be closed. Region-based segmentation is also termed similarity-based segmentation. There are no gaps due to missing edge pixels in region-based segmentation, since the boundaries are identified for segmentation. In each step, at least one pixel related to the region is taken into consideration. After identifying changes in color and texture, the edge flow is converted into a vector, from which the edges are detected for further segmentation.
b) Edge Based:
Segmentation can also be done using edge detection techniques. In this approach, the boundary is identified in order to segment, and edges are detected to identify discontinuities in the image. Edges in a region are traced by identifying each pixel value and comparing it with its neighboring pixels. For this classification, both fixed and adaptive features of a Support Vector Machine (SVM) are used. In edge-based segmentation, there is no need for the detected edges to be closed. There are various edge detectors that can be used to segment the image. Thresholding is the easiest way of segmentation: it is done through threshold values obtained from the histogram of the edges of the original image. Since the threshold values are obtained from the edge-detected image, if the edge detection is accurate, then the thresholds are too. Segmentation through thresholding requires fewer computations than other techniques.
Segmentation can also be based on the histon: for a particular segment, there may be a set of pixels termed a histon. A roughness measure is followed by a thresholding method for image segmentation, and segmentation is done through adaptive thresholding: the gray-level points where the gradient is high are added to the thresholding surface for segmentation. The drawback of this technique is that it is not suitable for complex images. Segmentation can also be done through clustering. One approach follows a different procedure: where most methods apply the technique directly to the image, here the image is first converted into a histogram and clustering is then done on it. Pixels of a color image can be clustered for segmentation using the unsupervised Fuzzy C-Means technique. This is applied to ordinary images; if the image is noisy, it results in fragmentation. A basic clustering algorithm, K-Means, is used for segmentation of textured images: it clusters related pixels to segment the image. Segmentation can also be done through feature clustering, where the clusters change according to the color components. Segmentation also depends purely on the characteristics of the image: features such as differences in intensity and color values are used for segmentation. For segmentation of color images, a fuzzy clustering technique can be used, which iteratively generates color clusters using a fuzzy membership function in color space with respect to image space; this technique is successful in identifying color regions. In real-time clustering-based segmentation, a virtual attention region is captured accurately for segmentation: the image is segmented coarsely by multithresholding and then refined by Fuzzy C-Means clustering, with the advantage that it can be applied to any multispectral image. A segmentation approach for region growing is K-Means clustering. A clustering technique for image segmentation can also use cylindrical decision elements of the color space: the surface is obtained through the histogram and detected as a cluster by thresholding. Seeded Region Growing (SRG) is also used for segmentation; it has the drawback of pixel sorting for labeling, so to overcome this, a boundary-oriented parallel pixel labeling technique is applied to SRG.
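As an illustration of the K-Means idea mentioned above (cluster related pixels, then segment), here is a toy one-dimensional version over gray intensities in plain Python (the function name and sample values are ours):

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar intensities into k groups; returns (centroids, labels)."""
    lo, hi = min(values), max(values)
    # initialize centroids evenly between the darkest and brightest values
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: each pixel joins its nearest centroid
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # update step: each centroid moves to the mean of its members
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels
```

For a bimodal set of intensities this separates dark pixels from bright ones, which is exactly the two-region segmentation case.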
c) Model Based:
Markov Random Field (MRF) based segmentation is known as model-based segmentation. An inbuilt region smoothness constraint is present in the MRF, which is used for color segmentation. The components of the color pixel tuples are considered as independent random variables for further processing. The MRF is combined with edge detection to identify the edges accurately; the MRF has a spatial region smoothness constraint, and there are correlations among the color components. The Expectation-Maximization (EM) algorithm estimates the parameters in an unsupervised manner. A multiresolution-based segmentation technique is known as Narrow Band; it is faster than the traditional approach. The initial segmentation is performed at coarse resolution and then refined at finer resolutions, and the process continues in an iterative fashion. Resolution-based segmentation is applied only to part of the image, so it is fast.
Segmentation may also be done using a Gaussian Markov Random Field (GMRF), where the spatial dependencies between pixels are considered in the process. Gaussian Mixture Model (GMM) based segmentation is used for region growing; an extension of the GMM detects region as well as edge cues within the GMM framework, and the feature space is also explored using this technique. Thus segmentation is done to estimate the surfaces. Segmentation can be applied to any type of image; compared with other methods, thresholding is the simplest and computationally fastest.
2.6 MATCHING
Many matching schemes have been proposed and used for previous biometric and pattern
recognition applications. Some historical examples of matching schemes are presented, along
with justification for their use or disuse.
a) Hamming Distance
Hamming distance is a distance measure for binary strings that quantifies the similarity between two strings by counting the number of bits that must be changed to make the two strings equivalent. It is a common distance metric in biometrics, for instance in Daugman's iris recognition algorithms. However, it is not used in this work, because the feature vectors used here are not binary.
b) Euclidean Distance
Euclidean distance is the distance between two vectors and is commonly used as a simple metric of how similar two vectors are. In this work, it is used as the primary measure of how similar two features are.
c) Spectral Angle Measure
Spectral angle measure is a commonly used similarity measure in hyperspectral and multispectral imaging that measures the similarity between hyperspectral signatures.
d) Information Distance
Information distance, or mutual information, is a measure of the dependence between two
random variables, and can also be used as a distance metric for feature vectors.
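The measures above are each a one-liner or close to it; a plain-Python sketch follows (function names are ours; mutual information is omitted because it needs an estimate of the joint distribution):

```python
import math

def hamming_distance(a, b):
    # number of positions at which two equal-length binary strings differ
    return sum(x != y for x, y in zip(a, b))

def euclidean_distance(u, v):
    # straight-line distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def spectral_angle(u, v):
    # angle (radians) between two signatures; 0 means identical direction
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))
```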
CHAPTER 3
SYSTEM DESCRIPTION
The field of Digital Image Processing refers to processing digital images by means of a digital computer. In a broader sense, it can be considered as the processing of any two-dimensional data, where an image (optical information) is represented as an array of real or complex numbers stored with a definite number of bits. An image is represented as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates and the amplitude of f at any pair of coordinates (x, y) represents the intensity or gray level of the image at that point. A digital image is one for which both the coordinates and the amplitude values of f are all finite, discrete quantities. Hence, a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called pixels. A digital image is discrete in both spatial coordinates and brightness, and it can be considered as a matrix whose row and column indices identify a point on the image and whose corresponding matrix element value identifies the gray level at that point. One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. The introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours.
In this section, we describe the proposed sclera segmentation procedure, vein pattern enhancement method, feature extraction method, and finally the feature matching and matching decision. A typical sclera biometric technique is explained below.
There is a difference between color and grayscale images: the sclera region in a color image is estimated using the better of two color-based techniques, whereas the sclera region in a grayscale image is extracted by Otsu's threshold method.
Pre-processing: In image processing, the filters are generally applied to a grayscale image. If the input image to the system is a color image, it is first converted into a grayscale image.
Any color image has three planes: red, green, and blue. In MATLAB, a binary or grayscale image is represented by one 2-dimensional array, while a color image is represented by a 3-dimensional array (one 2-dimensional array for each of the color planes, or color channels: red, green, and blue). The origin of the image is in the upper left, and the size of the image is defined by the parameters width (number of columns of the array) and height (number of rows of the array). Note that the x- and y-coordinates are chosen such that the z-axis points to the front. If any image from the dataset has a green or brown colored iris, it is necessary to obtain those color intensities; hence the color image is converted into the three planes using the following formulae, where the indices 1, 2, 3 correspond to the red, green, and blue planes, respectively.
R = testing(:, :, 1);
G = testing(:, :, 2);
B = testing(:, :, 3);
Glare may appear in different parts of the image (e.g., inside the iris or pupil). For example, in Fig. 3.4(a) and (b), there are glares on the surface of the cornea which create challenges for glare detection. A Sobel filter is first applied to highlight the desired glare areas (Fig. 3.4(c)). For glares in the sclera or skin areas, the local background is often brighter than the pupil or iris, so after the Sobel filter they do not stand out as much as glare in the desired area. Note that the glare detection method is applied to grayscale images; if the original image is a color image, a grayscale transformation is applied first.
Fig. 3.4 Glare detection approach. (a) Color. (b) Grayscale. (c) Convoluted
images.
Fig. 3.5. Iris boundary detection. (a) Finding the start point. (b) Searching along radial
direction
Here, the pupil and iris regions are segmented using a greedy angular search, which is performed on the edge-detected image and can accurately detect the pupil boundaries regardless of gaze direction and eyelid/eyelash occlusion. The algorithm searches along the radial direction at a predefined set of angles to estimate the pupil boundaries and then iteratively maps the highest edge value along the angular direction for π/2 radians for each of these starting angles.
Sclera detection then consists of a region-of-interest (ROI) selection step, a thresholding step, and a sclera area detection step. The left and right ROIs are selected based on the iris center and boundaries: the height of the ROI is the diameter of the iris, and the length of the ROI is the distance between the limbic boundary and the margin of the image. Otsu's method is applied to the ROIs to obtain potential sclera areas. The correct left sclera area should be located on the right and center sides, and the correct right sclera area on the left and center; in this way, non-sclera areas are eliminated. Fig. 6 shows the process for detecting the left sclera area; the same approach is applied to detect the right sclera area.
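Otsu's method, used above to obtain the potential sclera areas, chooses the gray threshold that maximizes the between-class variance of the histogram. A compact plain-Python sketch over a flat list of 8-bit values follows (the function name is ours; MATLAB's `graythresh` provides equivalent functionality):

```python
def otsu_threshold(gray):
    """Return t maximizing between-class variance; pixels <= t are class 0."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w0 = 0        # pixels in the dark class so far
    sum0 = 0.0    # intensity mass of the dark class so far
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum0 / w0                  # dark-class mean
        m1 = (total_sum - sum0) / w1    # bright-class mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal intensity distribution, the returned threshold separates the two modes exactly.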
Using the top and bottom portions of the estimated sclera region as guidelines, the upper eyelid, lower eyelid, and iris boundaries are then refined using the Fourier active contour method. Fig. 7 shows an example of two segmented sclera images; note that some areas are not perfectly segmented. In reality, perfect segmentation of all images is impossible. Therefore, the feature extraction and matching steps of the system need to be tolerant of segmentation error.
Fig 3.9 Vessel patterns before and after Gabor enhancement. (a) Segmented sclera region. (b) After Gabor enhancement (vessel-boosted image). (c) After thresholding (binary vessel image). (d) After morphological operations.
An adaptive threshold, based on the distribution of filtered pixel values, is used to binarize the Gabor-filtered image; in practice, the zero elements of the filtered image are excluded from this distribution. In addition, some very thin vascular patterns may not be visible at all times. In this work, binary morphological operations are used to thin the detected vessel structure down to a single-pixel-wide skeleton and to remove the branch points. This leaves a set of single-pixel-wide lines that represents the vessel structure. Fig. 9(d) shows the vessel skeleton after binary morphology.
These lines are then recursively parsed into smaller segments. The process is repeated until the line segments are nearly linear, subject to a maximum line size. A least-squares line is then fit to each segment, and these line segments are used to create a template for the vessel structure. Each segment is described by three quantities: the segment's angle to some reference angle at the iris center, the segment's distance to the iris center, and the dominant angular orientation of the line segment. The template for the sclera vessel structure is the set of all individual segment descriptors. This implies that, while each segment descriptor is of a fixed length, the overall template size for a sclera vessel structure varies with the number of individual segments.
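The three-quantity descriptor can be sketched as follows; this is an illustrative Python version (names are ours; the report's implementation is in MATLAB), with the least-squares orientation computed from the segment's scatter:

```python
import math

def segment_descriptor(points, iris_center):
    """Describe a vessel segment by (angle about the iris center, distance
    to the iris center, dominant orientation of the fitted line)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n   # segment center
    cy = sum(p[1] for p in points) / n
    ix, iy = iris_center
    angle_to_center = math.atan2(cy - iy, cx - ix)
    distance = math.hypot(cx - ix, cy - iy)
    # orientation of the (total) least-squares line through the points
    sxx = sum((p[0] - cx) ** 2 for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    orientation = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return angle_to_center, distance, orientation
```

Because each descriptor is three fixed numbers, a template is simply a list of such triples, so its size grows with the number of segments, as noted above.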
As discussed earlier, the sclera vascular patterns deform nonlinearly with the movement of the eye and eyelids and with the contraction/dilation of the pupil. As a result, the segments of the vascular patterns can move individually, and this must be accounted for in the registration scheme.
We developed a new method based on a random sample consensus (RANSAC)-type algorithm to estimate the best-fit parameters for registration between two sclera vascular patterns. RANSAC is an iterative model-fitting method that can robustly fit a model even in the presence of noise. To limit potential false accepts due to overfitting, the patterns are registered as a set of points: the centers of the line segments that make up the template. The optimal registration is the one that minimizes the minimum distance between the templates. This reduces artificially introduced false accepts because the patterns are not registered using the same parameters used for matching; therefore, the optimal registration and optimal matching can, and probably will, be different for templates that should not match. The registration algorithm randomly chooses two points: one from the test template and one from the target template. It also randomly chooses a scaling factor and a rotation value, based on a priori knowledge of the database. Using these values, it calculates a fitness value for the registration under these parameters. The algorithm performs a number of iterations, recording the parameter values that are minimal in D(Sx, Sy). In this way, we ensure that the registration process is globally scale, orientation, and deformation invariant.
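A translation-only toy version of this RANSAC-style registration can be sketched as follows (the full method also samples scaling and rotation; the function name, point sets, and iteration count here are ours):

```python
import math
import random

def ransac_register(test_pts, target_pts, iters=200, seed=0):
    """Randomly pair one test point with one target point, hypothesize the
    translation aligning them, and keep the hypothesis that minimizes the
    summed nearest-neighbour distance between the registered templates."""
    rng = random.Random(seed)
    best_shift, best_cost = (0, 0), float("inf")
    for _ in range(iters):
        px, py = rng.choice(test_pts)      # random point from test template
        qx, qy = rng.choice(target_pts)    # random point from target template
        dx, dy = qx - px, qy - py          # translation hypothesis
        # fitness: sum over test points of distance to the nearest target point
        cost = sum(min(math.hypot(x + dx - tx, y + dy - ty)
                       for tx, ty in target_pts)
                   for x, y in test_pts)
        if cost < best_cost:
            best_cost, best_shift = cost, (dx, dy)
    return best_shift, best_cost
```

With enough iterations a correct pairing is almost surely sampled, and the registration cost drops to zero when one template is an exact shifted copy of the other.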
If there is a nonzero matching score, the segments are removed from future comparisons (one from the test and one from the target template) and the matching result is recorded. The total matching score is the sum of the individual matching scores divided by the maximum matching score for the minimal set between the test and target templates; i.e., one of the test or target templates has fewer points, and thus the sum of its descriptor weights sets the maximum score that can be attained. The proposed matching scheme allows for a multitude of potential changes in the vascular pattern and allows multiple independent vessel patterns to be matched. Additionally, it allows overlapping vessel patterns to be matched even as they change independently, whereas matching schemes that retain and use the crossing points of the patterns could be problematic in this type of situation. In this way, we ensure that the matching step is locally scale, orientation, and deformation invariant.
MATLAB provides functions for integrating MATLAB-based algorithms with external applications and languages, such as C, C++, FORTRAN, Java, COM, and Microsoft Excel. MATLAB is used in a wide range of areas, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions) extend the MATLAB environment to solve particular classes of problems in these application areas.
CHAPTER 4
FLOW CHART
A typical sclera vein recognition system includes feature enhancement, feature extraction, and feature matching. We have proposed a simplified method for sclera segmentation, a new method for sclera pattern enhancement based on histogram equalization, and line-descriptor-based feature extraction and pattern matching using the matching score between two segment descriptors.
CHAPTER 5
PROJECT PLAN
PHASE 1
Tasks: topic selection, literature survey, theory formulation, flow chart and algorithm, simulation.
Confirmed the main project topic, "Sclera Recognition: A Novel Biometric Method", in the first week of August.
A rough idea of the project was obtained.
The block diagram and flowchart of the system were formulated.
Started collecting the reference papers during the second week of August.
The literature survey was completed.
The database was collected.
The images were preprocessed and converted into the gray plane.
40% of the segmentation was done.
PHASE 2
Tasks: compilation, algorithm and simulation, documentation.
Stages: literature review, theoretical design and flowchart, algorithm, simulation, compilation and documentation.
REFERENCES
1] International Journal on Recent and Innovation Trends in Computing and Communication
An Efficient and Optimal IRIS Recognition System using MATLAB GUI
2] ELECO 2011 7th International Conference on Electrical and Electronics Engineering, 1-4
December, Bursa, TURKEY Color to Grayscale Conversion Based On Neighborhood Pixels
Effect Approach for Digital Image