
International Journal of IT, Engineering and Applied Sciences Research (IJIEASR)

Volume 2, No. 1, January 2013

ISSN: 2319-4413

Signature Recognition & Verification System Using Back Propagation Neural Network
Nilesh Y. Choudhary, GFS GCOE, Jalgaon, India
Mrs. Rupal Patil, GFS GCOE, Jalgaon, India
Dr. Umesh Bhadade, GFS GCOE, Jalgaon, India
Prof. Bhupendra M Chaudhari, Govt. Polytechnics Nadurbar, India

ABSTRACT
The fact that the signature is widely used as a means of personal identification for humans creates the need for an automatic verification system. Verification can be performed either offline or online, depending on the application. Human signatures can be handled as images and recognized using computer vision and neural network techniques. With modern computers, there is a need to develop fast algorithms for signature recognition, and there are various approaches to the problem with a lot of scope for research. In this paper, off-line signature recognition and verification using a back propagation neural network is proposed, where the signature is captured and presented to the user in an image format. Signatures are verified based on features extracted using invariant central moments and modified Zernike moments, because signatures are hampered by large variations in size, translation, rotation and shearing parameters. Before extracting the features, preprocessing of the scanned image is necessary to isolate the signature part and to remove any spurious noise present. The system is initially trained using a database of signatures obtained from the 56 individuals whose signatures have to be authenticated by the system. For each subject a mean signature is obtained by integrating the above features derived from a set of his/her genuine sample signatures. This signature recognition and verification system is designed using MATLAB. The work has been tested and found suitable for its purpose.

INTRODUCTION
The handwritten signature is one of the most widely accepted personal attributes for identity verification. The written signature is regarded as the primary means of identifying the signer of a written document, based on the implicit assumption that a person's normal signature changes slowly and is very difficult to erase, alter or forge without detection. The handwritten signature is one of the ways to authorize transactions and authenticate human identity, compared with other electronic identification methods
such as fingerprint scanning and retinal vascular pattern screening. It is easier for people to migrate from using the popular pen-and-paper signature to one where the handwritten signature is captured and verified electronically.

i-Xplore International Research Journal Consortium, www.irjcjournals.org
There are two main streams in the signature recognition task. The first approach uses information acquired during the time interval in which the signature is made, and thus models the signing person; the other approach takes a signature as a static two-dimensional image which does not contain any time-related information [1]. In short, signature recognition can be divided into two groups: online and offline.
In online signature recognition, signatures are acquired during the writing process with a special instrument, such as a pen tablet. Dynamic information is therefore always available in online signature recognition, such as velocity, acceleration and pen pressure. So far, many widely employed methods have been developed for online signature recognition, for example Artificial Neural Networks (ANN) [2, 3], dynamic time warping (DTW) [4, 5] and hidden Markov models (HMM) [6, 7].
Off-line recognition deals with signature images acquired by a scanner or a digital camera. In general, offline signature recognition and verification is a challenging problem. Unlike the on-line signature, where dynamic aspects of the signing action are captured directly as the handwriting trajectory, the dynamic information contained in an off-line signature is highly degraded. Handwriting features, such as the handwriting order, writing-speed variation, and skillfulness, need to be recovered from the grey-level pixels.
In the last few decades, many approaches have been developed in the pattern recognition area to address the offline signature verification problem. Justino [8] proposed an off-line signature verification system using Hidden Markov Models. Zhang, Fu and Yan [9] proposed a handwritten signature verification system based on Neural Gas based Vector

Quantization. Vélez, Sánchez and Moreno [10] proposed a robust off-line signature verification system using compression networks and positional cuttings [11, 12, 13].
The signature recognition and verification system shown in Fig 1 is broadly divided into three subparts: 1) preprocessing, 2) feature extraction, and 3) recognition and verification.

The input signature is captured from a scanner or a high-resolution digital camera, which provides the output image as a BMP colour image. The preprocessing algorithm provides the data required for the final processing. In the feature extraction phase, the invariant central moments and Zernike moments are used to extract features for classification. For classification, a back propagation neural network is used, providing high accuracy and low computational complexity in the training and testing phases of the system.

1. SIGNATURE DATABASE
For training and testing of the signature recognition and verification system, 672 signatures are used. The signatures were taken from 56 persons. Templates of the signatures are shown in Fig 2.
For training the system, the signatures of 56 persons are used. Each of these persons signed 8 original signatures and 4 forgery signatures, so the training set contains a total of 672 (12 x 56) signatures. In order to make the system robust, signers were asked to use as much variation in their signature size and shape as possible, and the signatures were collected at different times without the signers seeing the signatures they had signed before.
For testing the system, another 112 genuine signatures and 112 forgery signatures were taken from the same 56 persons in the training set.


2.1 Converting Color Image to Gray Scale Image
In today's technology, almost all image capturing and scanning devices give their output in color format. A color image consists of a coordinate matrix and three color matrices. The coordinate matrix contains the X, Y coordinate values of the image. The color matrices are labeled red (R), green (G), and blue (B). The technique presented in this study is based on grey scale images; therefore, scanned or captured color images are initially converted to grey scale using equation (1):

Gray = 0.299*Red + 0.587*Green + 0.114*Blue    (1)
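The paper implements the system in MATLAB; as an illustration only, equation (1) can be sketched per pixel in Python (a hypothetical re-implementation, not the authors' code):

```python
def rgb_to_gray(r, g, b):
    """Luminance-weighted grayscale conversion of one pixel (Eq. 1)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# The weights sum to 1, so pure white stays at full intensity
# and pure black stays at zero.
print(round(rgb_to_gray(255, 255, 255)))  # 255
print(rgb_to_gray(0, 0, 0))               # 0.0
```

Applying this function to every pixel of the three color matrices yields the grey scale image used in the following steps.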

Fig 3.Scanned Image

Fig 4.Colour to Gray Scale Image

Fig 2. Signature Templates

2. PREPROCESSING
The preprocessing algorithm is essentially a data conditioning algorithm that provides data for the feature extraction process. It establishes the link between real-world data and the recognition and verification system. Preprocessing of the input signature pattern directly facilitates pattern description and affects the quality of that description. Any image-processing application suffers from noise such as touching line segments, isolated pixels and smeared images. This noise may cause severe distortions in the digital image and hence result in ambiguous features and a correspondingly poor recognition and verification rate. The preprocessing step is applied in both the training and testing phases. Background elimination, noise reduction, width normalization and skeletonization are the sub-steps.

2.2 Noise Reduction
Noise reduction (also called smoothing or noise filtering) is one of the most important processes in image processing. Images are often corrupted by positive and negative impulses stemming from decoding errors or noisy channels. An image may also be degraded by the undesirable effects of illumination and other objects in the environment. The median filter is widely used for smoothing and restoring images corrupted by noise. It is a non-linear process especially useful for reducing impulsive, or salt-and-pepper, noise. In a median filter, a window slides over the image, and for each position of the window, the median intensity of the pixels inside it determines the intensity of the pixel located in the middle of the window. Unlike linear filters such as the mean filter, the median filter has attractive properties for suppressing impulse noise while preserving edges, and it is used in this study because of this edge-preserving feature [14, 15, 16, 17].
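The sliding-window median described above can be sketched in Python (a hypothetical re-implementation; replicate padding at the borders is an added assumption, not stated in the paper):

```python
def median_filter(img, k=3):
    """Slide a k x k window over the image; each output pixel is the
    median of the window (borders use replicate padding)."""
    h, w = len(img), len(img[0])
    half = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp to image edge
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A lone salt pixel (255) in a dark field is removed outright,
# whereas a mean filter would smear it over the neighbourhood.
noisy = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 0]]
print(median_filter(noisy)[1][1])  # 0
```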

Fig 5.Noise Removal

2.3 Background Elimination and Border Clearing
Many image processing algorithms require the separation of objects from the image background. Thresholding is the simplest and most widely applicable method for this purpose, and it is widely used in image segmentation [18, 19].
Thresholding consists of choosing a threshold value H and assigning 0 to the pixels with values smaller than or equal to H and 1 to those with values greater than H. We used a thresholding technique for separating the signature pixels from the background pixels. In this application we are interested in dark objects on a light background; therefore, a threshold value H, called the brightness threshold, is appropriately chosen and applied to the image pixels f(x, y) as in equation (2):

If f(x, y) > H then f(x, y) = Background,
else f(x, y) = Object.    (2)

The signature image, located by separating it from the complex background, is converted into a binary image with the white background taking the pixel value of 1. Vertical and horizontal histogram projections are used for border clearing: for each direction, the zero (object) pixels in every row or column are counted and the resulting histogram is plotted sideways.
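Equation (2) and the row-projection count can be sketched in Python (a hypothetical illustration; function names are our own):

```python
def threshold_image(gray, H):
    """Eq. 2: pixels brighter than threshold H become background (1);
    darker pixels are kept as object/signature (0)."""
    return [[1 if p > H else 0 for p in row] for row in gray]

def row_projection(binary):
    """Count the object (0) pixels in every row, as used for the
    horizontal projection histogram in border clearing."""
    return [row.count(0) for row in binary]

# Two rows of gray values: dark strokes fall below H = 128.
strokes = [[200, 30, 210],
           [40, 25, 205]]
binary = threshold_image(strokes, 128)
print(binary)                 # [[1, 0, 1], [0, 0, 1]]
print(row_projection(binary)) # [1, 2]
```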

2.4 Signature Normalization
Signature dimensions may vary due to irregularities in the image scanning and capturing process. Furthermore, the height and width of signatures vary from person to person and, sometimes, even the same person may use signatures of different sizes. First, we need to eliminate the size differences and obtain a standard signature size for all signatures. After this normalization process, all signatures have the same dimensions. In this study, we used a normalized size of 50 x 50 pixels for all signatures to be processed further. During the normalization process, the aspect ratio between the width and height of a signature is kept intact. The coordinates are mapped using equations (3) and (4) below, in which x' and y' are the pixel coordinates of the normalized signature, x and y are the pixel coordinates of the original signature, and M is one of the dimensions (width or height) of the normalized signature.

Fig 6. Normalized Image

3. FEATURE EXTRACTION
Feature extraction, as defined by Devijver and Kittler [20], is extracting from the raw data the information which is most relevant for the classification stage. Such data minimizes the within-class pattern variation and increases the inter-class variation. Therefore, achieving high recognition performance in a signature recognition system is highly influenced by the selection of efficient feature extraction methods, taking into consideration the domain of the application and the type of classifier used [21]. An efficient feature extraction algorithm should have two characteristics: invariance and reconstructability [21]. Features that are invariant to certain transformations of the signature make it possible to recognize many variations of these signatures. Such transformations include translation, scaling, rotation, stretching, skewing and mirroring.

On the other hand, the ability to reconstruct signatures from their extracted features ensures that complete information about the signature shape is present in those features. In this feature extraction step, two well-known feature sets from pattern recognition are used: one based on the invariant central moments designed by Hu [22], used for scale and translation normalization, and the other on modified Zernike moments [23], used for rotation normalization.

The normalization process makes use of equations (3) and (4):

x' = (x - x_min) * M / (x_max - x_min)    (3)
y' = (y - y_min) * M / (y_max - y_min)    (4)

3.1 Invariant Central Moment
The moments of order (u + v) of an image composed of binary pixels B(x, y) are defined in [24], [25] as shown in equation (5):

m_uv = Σ_x Σ_y x^u y^v B(x, y)    (5)

The body's area A = m_00 and the image's center of mass (x̄, ȳ) are found from equation (6):

x̄ = m_10 / m_00,   ȳ = m_01 / m_00    (6)
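The raw moments and the centroid of equations (5) and (6) can be sketched in Python (a hypothetical illustration on a tiny binary image):

```python
def raw_moment(img, u, v):
    """m_uv of a binary image B(x, y) (Eq. 5)."""
    return sum((x ** u) * (y ** v) * img[y][x]
               for y in range(len(img))
               for x in range(len(img[0])))

def area_and_centroid(img):
    """Eq. 6: area A = m_00 and center of mass (x_bar, y_bar)."""
    m00 = raw_moment(img, 0, 0)
    return m00, raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# A 2 x 2 block of object pixels: area 4, centroid at (1.5, 1.5).
blob = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(area_and_centroid(blob))  # (4, 1.5, 1.5)
```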


The central moments, which are translation invariant, are given by equation (7):

μ_uv = Σ_x Σ_y (x - x̄)^u (y - ȳ)^v B(x, y)    (7)

Finally, the normalized central moments, which are translation and scale invariant, are derived from the central moments as shown in equation (8):

η_uv = μ_uv / (μ_00)^k,  where k = 1 + (u + v)/2 for u + v ≥ 2    (8)
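The translation invariance of equations (7) and (8) can be demonstrated with a small Python sketch (hypothetical, self-contained; a shape and its translated copy yield identical normalized central moments):

```python
def moment(img, u, v, xb=0.0, yb=0.0):
    """m_uv (Eq. 5) when xb = yb = 0; central moment mu_uv (Eq. 7)
    when (xb, yb) is the centroid."""
    return sum(((x - xb) ** u) * ((y - yb) ** v) * img[y][x]
               for y in range(len(img))
               for x in range(len(img[0])))

def eta(img, u, v):
    """Normalized central moment eta_uv (Eq. 8):
    mu_uv / mu_00^k with k = 1 + (u + v) / 2."""
    m00 = moment(img, 0, 0)
    xb, yb = moment(img, 1, 0) / m00, moment(img, 0, 1) / m00
    mu = moment(img, u, v, xb, yb)
    return mu / moment(img, 0, 0, xb, yb) ** (1 + (u + v) / 2)

# eta_20 is identical for a shape and its translated copy.
blob    = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
shifted = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(eta(blob, 2, 0) == eta(shifted, 2, 0))  # True
```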

3.2 Zernike Moments
Zernike polynomials are a set of complex polynomials which form a complete orthogonal set over the interior of the unit circle [26]. The form of the polynomial is shown in equation (10):

V_nm(ρ, θ) = R_nm(ρ) exp(jmθ)    (10)

where ρ is the length of the vector from the origin to the point (x, y), θ is the angle between this vector and the x axis in the counterclockwise direction, and R_nm(ρ) is the radial polynomial.

The orthogonality property of Zernike moments allows easy image reconstruction from the moments, by simply adding the information content of each individual order moment. Moreover, Zernike moments have simple rotational transformation properties: the Zernike moments of a rotated image have magnitudes identical to those of the original image, merely acquiring a phase shift upon rotation. Therefore, the magnitudes of the Zernike moments are rotation-invariant features of the underlying image. Translation and scale invariance, on the other hand, are obtained by shifting and scaling the image into the unit circle.

Fig 7. Rotation Normalization
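The rotation invariance of the Zernike magnitudes can be checked numerically with a small Python sketch (hypothetical; the moment A_nm is evaluated over a handful of explicit unit-circle samples rather than a full image, using the standard radial polynomial R_nm):

```python
import math
import cmath

def radial_poly(n, m, rho):
    """Standard Zernike radial polynomial R_nm(rho)."""
    m = abs(m)
    return sum((-1) ** s * math.factorial(n - s)
               / (math.factorial(s)
                  * math.factorial((n + m) // 2 - s)
                  * math.factorial((n - m) // 2 - s))
               * rho ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))

def zernike_moment(points, n, m):
    """Discrete A_nm over samples (x, y, f) with x^2 + y^2 <= 1;
    V*_nm contributes the conjugate phase exp(-j m theta)."""
    total = 0j
    for x, y, f in points:
        rho = math.hypot(x, y)
        theta = math.atan2(y, x)
        total += f * radial_poly(n, m, rho) * cmath.exp(-1j * m * theta)
    return (n + 1) / math.pi * total

# Rotating the sample set by 90 degrees leaves |A_nm| unchanged,
# since every term only picks up the same phase factor.
pts = [(0.5, 0.0, 1.0), (0.0, 0.3, 1.0)]
rot = [(-y, x, f) for x, y, f in pts]
print(abs(abs(zernike_moment(pts, 2, 2))
          - abs(zernike_moment(rot, 2, 2))) < 1e-9)  # True
```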

Zernike moments are the projections of the image function onto these orthogonal basis functions. The Zernike moment of order n with repetition m for a digital image f(x, y) is given by equation (11):

A_nm = ((n + 1) / π) Σ_x Σ_y f(x, y) V*_nm(ρ, θ),   x² + y² ≤ 1    (11)

where * is the complex conjugate operator. To calculate the Zernike moments for a given image, its pixels are mapped to the unit circle x² + y² ≤ 1. This is done by taking the geometrical center of the image as the origin and then scaling its bounding rectangle into the unit circle, as shown in Fig 7. Due to the orthogonality of the Zernike basis, the part of the original image inside the unit circle can be approximated using its Zernike moments A_nm up to a given order n_max as in equation (12):

f̂(x, y) ≈ Σ_{n=0}^{n_max} Σ_m A_nm V_nm(ρ, θ)    (12)

4. BACK PROPAGATION ARTIFICIAL NEURAL NETWORK


There are several algorithms that can be used to train an artificial neural network, but back propagation [27] was chosen because it is probably the easiest to implement while preserving the efficiency of the network. A back propagation artificial neural network (ANN) consists of several layers (usually three), each of which is one of the following:
Input Layer: holds the input to the network.
Output Layer: holds the output data, usually an identifier for the input.
Hidden Layer: comes between the input layer and the output layer, serving as a propagation point for sending data from the previous layer to the next layer.
A typical back propagation ANN is depicted in Fig 8; the black nodes (on the extreme left) are the initial inputs. Training such a network involves two phases. In the first phase, the inputs are propagated forward to compute the output of each output node. Each of these outputs is then subtracted from its desired output, giving an error for each output node.


In the second phase, each of these output errors is passed backward and the weights are adjusted. These two phases are repeated until the sum of squared output errors reaches an acceptable value.
Each neuron is composed of two units: the first computes the weighted sum of the input signals, and the second applies a nonlinear function to that sum, called the neuron activation function. The signal e is the adder output, and y = f(e) is the output of the nonlinear element, which is also the output signal of the neuron.
To teach the neural network, we need a training data set, which consists of input signals (x1, x2) assigned with corresponding targets (desired outputs) z. Network training is an iterative process: in each iteration, the weight coefficients of the nodes are modified using new data from the training set. Each teaching step starts by presenting the input signals from the training set; after this stage, the output signal values can be determined for each neuron in each network layer. Symbols w_mn represent the weights of the connections between the output of neuron m and the input of neuron n in the next layer.
In the next step of the algorithm, the output signal of the network y is compared with the desired output value (the target) z found in the training data set. The difference is called the error signal δ of the output layer neuron. It is impossible to compute error signals for internal neurons directly, because the output values of these neurons are unknown. For many years an effective method for training multilayer networks was unknown; only in the mid-eighties was the back propagation algorithm worked out. The idea is to propagate the error signal δ (computed in a single teaching step) back to all neurons whose output signals served as inputs for the neuron in question.
The weight coefficients w_mn used to propagate the errors backward are equal to those used when computing the output value; only the direction of data flow is changed (signals are propagated from the outputs to the inputs, one layer after another). This technique is used for all network layers. If the propagated errors come from several neurons, they are added (see Fig 8).


Fig 8. A 3-layer neural network using back propagation
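The two training phases can be sketched with a tiny Python network (a hypothetical 2-2-1 sigmoid example trained on logical AND; this is an illustration of the algorithm, not the 50 x 50 pixel-input network of the paper):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=5000, lr=0.5, seed=1):
    """2-2-1 sigmoid network trained by the two phases described
    above: forward pass, then the output error propagated backward
    to adjust the weights. Returns (predict, sse_history)."""
    random.seed(seed)
    # w1[j] = [w_x0, w_x1, bias] for hidden neuron j; w2 = [w_h0, w_h1, bias].
    w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w2 = [random.uniform(-1, 1) for _ in range(3)]

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
        return h, sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])

    sse_history = []
    for _ in range(epochs):
        sse = 0.0
        for x, t in data:
            h, y = forward(x)                      # phase 1: forward pass
            sse += (t - y) ** 2
            dy = (y - t) * y * (1.0 - y)           # phase 2: backward pass
            dh = [dy * w2[j] * h[j] * (1.0 - h[j]) for j in range(2)]
            for j in range(2):
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh[j] * x[0]
                w1[j][1] -= lr * dh[j] * x[1]
                w1[j][2] -= lr * dh[j]
            w2[2] -= lr * dy
        sse_history.append(sse)
    return (lambda x: forward(x)[1]), sse_history

# The sum of squared errors shrinks as the two phases repeat.
AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
predict, errs = train(AND)
print(errs[-1] < errs[0])  # True
```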

When the application launches, it waits for the user to decide whether to train or verify a set of signatures. At the training stage, based on the back propagation neural network algorithm, the user gives 8 (of the 12) different images as input, of which the real inputs to the network are the individual pixels of the images. When the input is confirmed and accepted, it passes through the back propagation neural network algorithm to generate an output which contains the network data of the trained images. The back propagation artificial neural network simply calculates the gradient of the network's error with respect to the network's modifiable weights. In this paper we use a multilayer neural network designed by O.C. Abikoye et al. [28].

5. TRAINING AND TESTING
The recognition phase consists of two parts, training and testing, both accomplished by the back propagation neural network.
As explained in Section 1, 672 images in our database belonging to 56 people are used for both training and testing. Since 8 (out of 12) input vectors for each person were used for training purposes, there are 224 (56 x 4) input vectors (data sets) left for the test set. Under normal (correct) operation of the back propagation neural network, only one output is expected to take a value of 1, indicating the recognition of the signature represented by that particular output; the other output values must remain zero. The output layer uses a logic decoder which maps neuron outputs between 0.5 and 1 to a binary value of 1. If the real value of an output is less than 0.5, it is represented by a 0 value. The back propagation neural network program recognized all of the 56 signatures correctly, which translates into a 100% recognition rate. We also tested the system with 15 random signatures which are not contained in the original database.
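The logic decoder described above can be sketched as follows (hypothetical Python; the function names are our own):

```python
def decode(outputs, threshold=0.5):
    """Map raw network outputs to binary values: outputs in
    [0.5, 1] read as 1, outputs below 0.5 read as 0."""
    return [1 if o >= threshold else 0 for o in outputs]

def recognized_index(outputs):
    """A recognition is valid only when exactly one output fires;
    otherwise no signature is recognized (None)."""
    bits = decode(outputs)
    return bits.index(1) if sum(bits) == 1 else None

print(recognized_index([0.1, 0.92, 0.3]))  # 1 (second signature)
print(recognized_index([0.6, 0.7, 0.1]))   # None (ambiguous)
```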


Only two of these signatures, which are very similar to at least one of the 56 stored images, resulted in false positives (output > 0.5), while the remaining 13 were recognized correctly as not belonging to the original set (output <= 0.5). Since the recognition step is always followed by the verification step, these kinds of false positives can easily be caught by our verification system. In other words, the verification step serves as a safeguard against false positives as well as false negatives.

6. RESULT AND CONCLUSION
In this study, we presented an off-line signature recognition and verification system using a back propagation neural network, based on image processing, invariant central moments, Zernike moments and some global properties.
The system uses a three-step process: in the first step, the signature is separated from its image background; the second step performs normalization and digitization of the original signature; and the invariant central moments, Zernike moments and global properties used as input features for the back propagation neural network are obtained in the third step.

Fig. 9 Online Result for Proposed Method


As shown in Fig 9, the proposed system is evaluated on two performance criteria: the feature extraction stage and the overall recognition rate. Achieving high recognition performance in a signature recognition system is highly influenced by the selection of an efficient feature vector. In this paper, we compare the previously implemented Hu moments [28] with the Zernike moments [23]. The feature vectors produced by the different moment invariant techniques are applied to signature feature extraction from the binary images of the signatures, and the absolute percentage error with respect to the dimension of the feature vector is calculated.
The recognition system gives a 98% success rate, recognizing correctly all the signature patterns used in training. It gives poor performance for signatures not seen in the training phase. Generally, failure to recognize or verify a signature was due to poor image quality and high similarity between two signatures. The recognition and verification ability of the system can be increased by using additional features in the input data set. This study aims to reduce to a minimum the cases of forgery in business transactions.

REFERENCES
[1] A. Pacut, A. Czajka, "Recognition of Human Signatures", pp. 1560-1564, 2001.
[2] Ronny Martens, Luc Claesen, "On-Line Signature Verification by Dynamic Time-Warping", IEEE Proceedings of ICPR'96, 1996.
[3] Quen-Zong Wu, I-Chang Jou, and Suh-Yin Lee, "On-Line Signature Verification Using LPC Cepstrum and Neural Networks", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 27(1): 148-153, 1997.
[4] Pavel Mautner, Ondrej Rohlik, Vaclav Matousek, Juergen Kempf, "Signature Verification Using ART-2 Neural Network", Proceedings of the 9th International Conference on Neural Information Processing (ICONIP'02), 2: 636-639, 2002.
[5] A. Jain, F. Griess, S. Connell, "On-line Signature Verification", Pattern Recognition, Vol. 35, No. 12, 2002.
[6] W. Nelson, W. Turin, T. Hastie, "Statistical Methods for On-line Signature Verification", International Journal of Pattern Recognition and Artificial Intelligence, 8, 1994.
[7] R. Kashi, J. Hu, W.L. Nelson, W. Turin, "A Hidden Markov Model Approach to Online Handwritten Signature Verification", International Journal on Document Analysis and Recognition, Vol. 1, No. 1, 1998.
[8] E. J. R. Justino, F. Bortolozzi and R. Sabourin (2001), "Offline Signature Verification Using HMM for Random, Simple and Skilled Forgeries", ICDAR 2001, International Conference on Document Analysis and Recognition, vol. 1, pp. 105-110.
[9] B. Zhang, M. Fu and H. Yan (1998), "Handwritten Signature Verification Based on Neural Gas Based Vector Quantization", IEEE International Joint Conference on Neural Networks, pp. 1862-186.
[10] J. F. Vélez, Á. Sánchez, and A. B. Moreno (2003), "Robust Off-Line Signature Verification Using Compression Networks and Positional Cuttings", Proc. 2003 IEEE Workshop on Neural Networks for Signal Processing, vol. 1, pp. 627-636.
[11] Q. Yingyong, B. R. Hunt, "Signature Verification Using Global and Grid Features", Pattern Recognition, vol. 22, no. 12, Great Britain (1994), 1621-1629.
[12] Drouhard, J.P., R. Sabourin, and M. Godbout, "A Neural Network Approach to Off-line Signature Verification Using Directional PDF", Pattern Recognition, vol. 29, no. 3 (1996), 415-424.
[13] G. Rigoll, A. Kosmala, "A Systematic Comparison Between On-Line and Off-Line Methods for Signature Verification with Hidden Markov Models", 14th International Conference on Pattern Recognition, vol. II, Australia (1998), 1755.
[14] Lim, J.S., Two-Dimensional Signal and Image Processing, Prentice-Hall, 1990.
[15] Yang, X., and Toh, P.S., "Adaptive Fuzzy Multilevel Median Filter", IEEE Transactions on Image Processing, Vol. 4, No. 5, pp. 680-682, May 1995.
[16] Hwang, H., and Haddad, R.A., "Adaptive Median Filters: New Algorithms and Results", IEEE Transactions on Image Processing, Vol. 4, No. 4, pp. 499-502, April 1995.
[17] Rosenfeld, A., Digital Picture Processing, Academic Press Inc., 1982.
[18] Erdem, U.M., "2D Object Recognition in Manufacturing Environment Using Implicit Polynomials and Algebraic Invariants", Master Thesis, Bogazici University, 1997.
[19] Fu, K.S., Mui, J.K., "A Survey on Image Segmentation", Pattern Recognition, Vol. 13, pp. 3-16, Pergamon Press, 1981.
[20] Devijver, P.A. and J. Kittler, 1982, Pattern Recognition: A Statistical Approach, Prentice-Hall, London, ISBN-10: 0136542360.
[21] Trier, O.D., A.K. Jain and T. Taxt, "Feature Extraction Methods for Character Recognition - A Survey", Pattern Recognition, 29: 641-662.
[22] Hu, M., "Visual Pattern Recognition by Moment Invariants", IRE Trans. Inf. Theory, IT-8, pp. 179-187.
[23] Khotanzad, A. and Y.H. Hong, "Invariant Image Recognition by Zernike Moments", IEEE Trans. Patt. Anal. Mach. Intell., 12: 489-497. DOI: 10.1109/34.55109.
[24] Theodoridis, S. and K. Koutroumbas, 2006, Pattern Recognition, 3rd Edn., Academic Press, ISBN-10: 0123695317, pp: 856.
[25] Reiss, T.H., "The Revised Fundamental Theorem of Moment Invariants", IEEE Trans. Patt. Anal. Mach. Intell., 13: 830-834. DOI: 10.1109/34.85675, 1991.
[26] Khotanzad, A., Y.H. Hong, "Invariant Image Recognition by Zernike Moments", IEEE Trans. Patt. Anal. Mach. Intell., pp. 489-497, March 1990.
[27] Golda, A., 2005, Principles of Training Multi-Layer Neural Network Using Back Propagation.
[28] O.C. Abikoye, M.A. Mabayoje, R. Ajibade, "Offline Signature Recognition & Verification Using Neural Network", International Journal of Computer Applications (0975-8887), Volume 35, No. 2, December 2011.
