1. Introduction

Today's e-security systems are in critical need of accurate, secure and cost-effective alternatives to passwords and personal identification numbers (PINs), as financial losses from computer-based fraud such as hacking and identity theft increase dramatically year over year [15]. Biometric solutions address these fundamental problems because an individual's biometric data is unique and cannot be transferred.

Biometrics comprises automated methods of identifying a person, or verifying a person's identity, based on a physiological or behavioral characteristic. Examples of physiological characteristics include hand and finger images, facial characteristics, and iris patterns. Behavioral characteristics are traits that are learned or acquired; dynamic signature verification, speaker verification, and keystroke dynamics are examples [2,3].

A biometric system uses hardware to capture the biometric information and software to maintain and manage the system. In general, the system translates these measurements into a mathematical, computer-readable format. When a user first creates a biometric profile, known as a template, that template is stored in a database. The system then compares this template against the new image created every time the user accesses the system.

For an enterprise, biometrics provides value in two ways. First, a biometric device automates entry into secure locations, relieving or at least reducing the need for full-time monitoring by personnel. Second, when rolled into an authentication scheme, biometrics adds a strong layer of verification for user names and passwords: it adds a unique identifier to network authentication, one that is extremely difficult to duplicate. Smart cards and tokens also provide a unique identifier, but biometrics has an advantage over these devices: a user cannot lose or forget his or her fingerprint, retina, or voice.
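The enroll-and-compare flow described above can be sketched as follows. This is an illustrative sketch only: the template format, the in-memory database, the `extract_template` helper and the matching threshold are all invented for the example, not the system described later in this paper.

```python
# Illustrative sketch of the generic biometric enroll/verify flow.
# All names and the threshold here are assumptions for illustration.
from typing import Dict, List

def extract_template(sample: List[int]) -> List[int]:
    """Stand-in for feature extraction: binarize a raw measurement."""
    return [1 if v > 0 else 0 for v in sample]

class BiometricDB:
    def __init__(self) -> None:
        self.templates: Dict[str, List[int]] = {}

    def enroll(self, user: str, sample: List[int]) -> None:
        # First presentation: store the template for later comparisons.
        self.templates[user] = extract_template(sample)

    def verify(self, user: str, sample: List[int], threshold: float = 0.9) -> bool:
        # Every later access: compare a fresh template against the stored one.
        stored = self.templates.get(user)
        if stored is None:
            return False
        trial = extract_template(sample)
        agreement = sum(a == b for a, b in zip(stored, trial)) / len(stored)
        return agreement >= threshold

db = BiometricDB()
db.enroll("alice", [3, -1, 2, -5, 4, 1, -2, 6])
print(db.verify("alice", [2, -2, 1, -4, 5, 2, -1, 7]))  # same sign pattern -> True
```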
The practical applications of biometrics are diverse and expanding, ranging from healthcare to government, financial services, transportation, and public safety and justice [2,3]. Such applications include on-line identification for e-commerce, access control for buildings or restricted areas, off-line personal identification, financial transactions at automated teller machines (ATMs), on-line ticket purchase and
Table 1: Comparison of different biometric technologies

2.3 The iris features and process

The iris has many features that can be used to distinguish one iris from another [1,5,6,12,13]. One of its primary visible characteristics is the trabecular meshwork, a tissue that gives the appearance of dividing the iris radially and is permanently formed by the eighth month of gestation. The iris develops with no genetic influence, through a process known as chaotic morphogenesis that occurs during the seventh month of gestation, which means that even identical twins have differing irises. The iris has in excess of 266 degrees of freedom, i.e. the number of variations in the iris that allow one iris to be distinguished from another. Because the iris is protected behind the eyelid, cornea and aqueous humour, the likelihood of damage and/or abrasion is minimal, unlike other biometrics such as fingerprints. The iris is also not subject to the effects of aging, which means it remains stable from about the age of one until death. The use of glasses or contact lenses (colored or clear) has little effect on the representation of the iris and hence does not interfere with the recognition technology. Figure (1) shows examples of iris patterns and demonstrates the variation found among irises.
Figure 1: Examples of human iris patterns

In general, the process of an iris recognition system includes the following four steps:
1. Capturing the image
2. Defining the location of the iris
3. Optimizing the image
4. Storing and comparing the image
The image of the iris can be captured with a standard camera using both visible and infrared light, in either a manual or an automated procedure. The camera can be positioned between three and a half inches and one meter from the eye to capture the image. In the manual procedure, the user needs to adjust the camera to get the iris in focus and needs to be within
[Figure: block diagram of the biometric process, with stages labelled biometric devices, biometric process, trial template, matching, score, and decision]
2.4 Related work

Several methods have been proposed for iris recognition. Daugman [7,8,9] presented a system based on phase codes computed with Gabor filters and reported excellent performance on a diverse database of many images. Wildes [14] described a system for personal verification based on automatic iris recognition; it relies on image registration and image matching, which is computationally very demanding. Boles et al. [18] proposed an algorithm for iris feature extraction using a zero-crossing representation of the 1-D wavelet transform. All these algorithms are based on grey images, and color information was not used, because a grey iris image can provide enough information to identify different
The human eye should be 9 cm away from the camera, as shown above. A halogen lamp is placed in a fixed position so that the illumination effect is the same over all images, which makes it easier to exclude the illuminated part of the iris when computing the iris code. To acquire clearer images through the CCD camera and to minimize the effect of reflections caused by the surrounding illumination, we arrange two halogen lamps as the surrounding lights, both placed in front of the eye. Figure (3) shows the device configuration for acquiring human eye images.
[Figure 3 annotations: 8 cm, 12 cm, 50 W halogen lamp]
B. Image localization and isolating the iris boundary

The proposed algorithm is based on the fact that there is an obvious difference in intensity around each boundary. Since the pixel values inside the pupil are not always zero, we need an edge-detection step that forces all pupil values to zero, which makes it easy to determine the pupil centre and then obtain the pupil boundary. We start the algorithm by applying an edge-detection method based on a discrete approximation [17] to a differential operator, the Laplacian of Gaussian (LoG), denoted G(x, y), to the image I(x, y) at each position (x, y). Here G(x, y) is a smoothing function whose scale σ selects the spatial scale of the edges under consideration. The LoG function is defined as:

G(x, y) = (1 / 2πσ²) · e^(−(x² + y²) / 2σ²) · ((x² + y²) / σ⁴ − 2 / σ²)    (1)
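As a sketch of this step, a discrete Laplacian-of-Gaussian kernel can be sampled directly from the expression above and convolved with the image. The kernel size and scale σ below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def log_kernel(size: int, sigma: float) -> np.ndarray:
    """Sample a discrete Laplacian-of-Gaussian kernel on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    g = (1.0 / (2.0 * np.pi * sigma**2)) \
        * np.exp(-r2 / (2.0 * sigma**2)) \
        * (r2 / sigma**4 - 2.0 / sigma**2)
    # Subtract the mean so flat image regions produce zero response.
    return g - g.mean()

k = log_kernel(9, 2.0)          # assumed size and sigma
print(k.shape)                  # (9, 9)
print(abs(k.sum()) < 1e-9)      # True: kernel sums to ~0
```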
The edge-detection result should be enhanced with a nonlinear method such as a median filter, which removes the garbage around the pupil and yields a clean pupil region from which a reliable centre can be determined. The centre of the pupil is found by counting the number of black pixels (zero values) in each column and row, and taking the row and the column that contain the maximum number of black pixels. The centre is then fixed by a simple calculation in image coordinates, and from it the radius of the pupil follows; thus we can find the pupillary (inner) boundary. A similar procedure, using a coarser scale, locates the outer boundary (limbus), which can be made apparent using the mid-point circle and ellipse algorithms, merging the existing edge segments into boundaries by linking these
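A minimal sketch of the centre-finding step just described, assuming the pupil pixels have already been forced to zero. The synthetic image, its size, and the disc position are invented for the example.

```python
import numpy as np

def pupil_centre(binary: np.ndarray):
    """Locate the pupil by the row/column holding the most zero (black) pixels."""
    zeros_per_row = (binary == 0).sum(axis=1)
    zeros_per_col = (binary == 0).sum(axis=0)
    cy = int(np.argmax(zeros_per_row))       # row with most black pixels
    cx = int(np.argmax(zeros_per_col))       # column with most black pixels
    radius = int(zeros_per_row[cy]) // 2     # widest chord = diameter
    return cx, cy, radius

# Synthetic eye image: white background with a zero-valued "pupil" disc.
img = np.ones((64, 64), dtype=np.uint8)
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 30) ** 2 + (yy - 25) ** 2 <= 8 ** 2] = 0

print(pupil_centre(img))  # (30, 25, 8)
```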
AMO - Advanced Modeling and Optimization, Volume 5, Number 2, 2003

I(x(p, θ), y(p, θ)) → I(p, θ)    (2)

x(p, θ) = (1 − p) · x_p(θ) + p · x_i(θ)    (3)
y(p, θ) = (1 − p) · y_p(θ) + p · y_i(θ)    (4)

x_p(θ) = x_p0(θ) + r_p · cos(θ)    (5)
y_p(θ) = y_p0(θ) + r_p · sin(θ)    (6)
x_i(θ) = x_i0(θ) + r_i · cos(θ)    (7)
y_i(θ) = y_i0(θ) + r_i · sin(θ)    (8)

where r_p and r_i are respectively the radii of the pupil and the iris, while (x_p(θ), y_p(θ)) and (x_i(θ), y_i(θ)) are the coordinates of the pupillary and limbic boundaries in the direction θ. Figure (4) depicts how the iris image is converted to polar coordinates.
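The polar unwrapping defined by these equations can be sketched as follows. Nearest-neighbour sampling, concentric circular boundaries, the 450x60 output size (taken from the feature-extraction section later in the paper), and the synthetic input image are all assumptions for illustration.

```python
import numpy as np

def unwrap_iris(image, cx, cy, r_pupil, r_iris, n_radial=60, n_angular=450):
    """Map the iris annulus to polar coordinates: each (p, theta) sample is a
    linear blend between the pupillary and limbic boundary points."""
    out = np.zeros((n_angular, n_radial), dtype=image.dtype)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    ps = np.linspace(0.0, 1.0, n_radial)
    for ti, theta in enumerate(thetas):
        xp = cx + r_pupil * np.cos(theta)   # pupillary boundary point
        yp = cy + r_pupil * np.sin(theta)
        xi = cx + r_iris * np.cos(theta)    # limbic boundary point
        yi = cy + r_iris * np.sin(theta)
        for pi_, p in enumerate(ps):
            x = (1.0 - p) * xp + p * xi     # blend of the two boundaries
            y = (1.0 - p) * yp + p * yi
            out[ti, pi_] = image[int(round(y)) % image.shape[0],
                                 int(round(x)) % image.shape[1]]
    return out

eye = np.random.default_rng(0).integers(0, 256, (120, 120), dtype=np.uint8)
strip = unwrap_iris(eye, 60, 60, 12, 40)
print(strip.shape)  # (450, 60)
```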
Figure 5: Haar transform

The following MATLAB code illustrates the Haar decomposition process:

Step 1 - Initialization:

function [s, d] = dwthaar(Signal)
% One level of the Haar transform: s holds the averages (approximation)
% and d the differences (detail).
N = length(Signal);
s = zeros(1, N/2);
d = s;

Step 2 - The actual transform:

for n = 1:N/2
    s(n) = 1/2 * (Signal(2*n-1) + Signal(2*n));
    d(n) = Signal(2*n-1) - s(n);
end

Step 3 - Wavelet decomposition using the Haar transform:

function T = wavelet_decomp(Signal)
% Full Haar decomposition: row j+1 of T holds the transform of the
% low-pass part of row j, with the remaining details carried over.
N = size(Signal, 2);
J = log2(N);
if rem(J, 1)
    error('Signal must be of length 2^N.');
end
T = zeros(J+1, N);
T(1, :) = Signal;
for j = 1:J
    Length = 2^(J+1-j);
    [s, d] = dwthaar(T(j, 1:Length));
    T(j+1, 1:Length) = [s, d];
    T(j+1, Length+1:N) = T(j, Length+1:N);
end

For the 450x60 iris image in polar coordinates, we apply the wavelet transform four times in order to get the 28x3 sub-images (i.e. 84 features). By combining these 84 features of the HH sub-image of the high-pass filter of the fourth transform (HH4) with the average value of each of the three remaining high-pass filter areas (HH1, HH2, HH3), the dimension of the resulting feature vector is 87. Each of the 87 components has a real value between -1.0 and 1.0. Each real value is quantized into binary form by converting positive values to 1 and negative values to 0. Therefore, we can represent an iris image with only 87 bits.
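As a cross-check, the same Haar averaging/differencing step and the sign quantization described above can be written in Python. This is an illustrative re-implementation on plain lists, not the paper's MATLAB code; only a single transform level is shown.

```python
import numpy as np

def dwt_haar(signal):
    """One level of the Haar transform: averages (s) and differences (d)."""
    signal = np.asarray(signal, dtype=float)
    s = 0.5 * (signal[0::2] + signal[1::2])   # approximation (low-pass)
    d = signal[0::2] - s                      # detail (high-pass)
    return s, d

def quantize(features):
    """Sign quantization: positive -> 1, non-positive -> 0."""
    return [1 if f > 0 else 0 for f in features]

s, d = dwt_haar([4, 2, 5, 7])
print(list(s))        # [3.0, 6.0]
print(list(d))        # [1.0, -1.0]
print(quantize(d))    # [1, 0]
```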
E. Matching process: Hamming distance calculation

Comparison of Iris Code records includes calculation of a Hamming distance (HD) [4] as a measure of the variation between the Iris Code computed from the presented iris and each Iris Code recorded in the database. Let A_j and B_j be two iris codes to be compared; the Hamming distance is calculated as:
HD = (1/87) Σ_{j=1}^{87} A_j ⊕ B_j    (9)
with ⊕ denoting the exclusive-OR operator. (The exclusive-OR is a Boolean operator that equals one if and only if the two bits A_j and B_j differ.)
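A minimal sketch of the Hamming-distance comparison; the 8-bit codes below are made-up illustrations, not real 87-bit iris codes.

```python
def hamming_distance(a, b):
    """Fraction of positions at which the two bit lists differ (XOR average)."""
    assert len(a) == len(b)
    return sum(x ^ y for x, y in zip(a, b)) / len(a)

code_a = [1, 0, 1, 1, 0, 0, 1, 0]
code_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(hamming_distance(code_a, code_b))  # 0.25 (2 of 8 bits differ)
print(hamming_distance(code_a, code_a))  # 0.0 for identical codes
```

A low HD indicates the two codes likely come from the same iris; identical codes give HD = 0.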
The matching algorithm: Let A_j and B_j be two iris codes to be compared, in order to test whether A_j is in the database or not. The following steps describe the process:
- For j = 1 to 87 do
  - Compare code A_j bit-by-bit with the first code B_j in the database. If the result of the XOR is 0, the two bits are the same, so count it among the zeros;
  - else do not count it and continue to the next bit.
- Repeat until reaching the final code in the database.
- Calculate the similarity (matching) ratio by the following formula:
MR = (N_z / T_n) · 100    (10)
where N_z and T_n are the number of zeros and the total number of bits in each code, respectively, and MR is the matching ratio.

4. Result and Discussion
To show the effectiveness of the proposed algorithms, in our implementation we divide the 240 iris samples into 20 equal-size folds, such that a single fold is used for testing the model developed from the remaining nineteen sets. The images were acquired under different conditions, and the original images were collected as grey-level images. Figure (6) shows examples of failure and success cases of iris patterns during the acquisition process. Figure (7) illustrates that the iris has been isolated by localizing the inner and outer boundaries; it is obvious that the process can discriminate the regions that have different texture characteristics. For the wavelet transform, the best results were obtained for the high-pass (H) sub-bands at scales 3 and 4 in the vertical direction. The maximal statistical measure value was 2.54, which is a quite good and accurate result. The False Acceptance Rate (FAR) and False Rejection Rate (FRR) are the two critical measurements of system effectiveness; this system scored a FAR of 0.001% and an FRR of 0.55%. Table (2) shows the classification rate compared with the two well-known methods of Wildes and Daugman. In the two test modes, their performance is a little better than ours. In fact, the dimensionality of the feature vector in both methods is much higher than ours: the feature vector consists of 2048 components in Daugman's method, but only 87 in our method. In addition, they extract features in much smaller local regions. These factors make their methods a little better than ours. We are now working on representing the variation of iris texture in local regions more precisely and on reducing the dimensionality of the feature vector, and we expect this to further improve the performance of the current method.
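The FAR and FRR figures quoted above can be computed from matcher score lists at a fixed decision threshold. The scores and threshold below are invented for illustration; they are not the paper's data.

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR: impostor attempts wrongly accepted (score >= threshold).
    FRR: genuine attempts wrongly rejected (score < threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

impostors = [0.40, 0.55, 0.62, 0.48, 0.71]   # made-up matching ratios
genuines = [0.90, 0.84, 0.96, 0.73, 0.88]
far, frr = far_frr(impostors, genuines, threshold=0.80)
print(far, frr)  # 0.0 0.2
```

Raising the threshold trades a lower FAR for a higher FRR, which is why both rates must be reported together.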
Figure 7: Localization of the iris pattern result

Table 2: Classification rates

Method       Rate (%)
Wildes       99.2
Daugman      100
Our system   97.3
5. Conclusion

In this paper we describe efficient techniques for an iris recognition system with high performance from a practical point of view. These techniques are:
- A method of evaluating the quality of an image in the image-acquisition step and excluding it from subsequent processing if it is not appropriate.
- A computer-graphics algorithm for detecting the centre of the pupil and localizing the iris area in an eye image.
- Transformation of the localized iris area into a simple coordinate system.
- A compact and efficient feature-extraction method based on the 2-D multiresolution wavelet transform.
- A matching process based on the Hamming distance between the input code and the registered iris codes.
The system achieves a recognition rate of about 97.3%.
References

1. A. A. Onsy and S. Maha, "A New Algorithm for Locating the Boundaries of the Human Iris", 1st IEEE International Symposium on Signal Processing and Information Technology, December 28-30, Hilton Ramses, Cairo, Egypt, 2001.
2. A. Julian, Biometrics: Advanced Identity Verification - The Complete Guide, Springer-Verlag, 2000.
3. B. J. Erik, "Overview of the Biometric Identification Technology Industry", a presentation to the IBIA Conference: Defending Cyberspace '99, http://www.ibia.org
4. F. Kagan Gurkaynak, Y. Leblebici and D. Mlynek, "A Compact High-Speed Hamming Distance Comparator for Pattern Matching Applications", http://turquoise.wpi.edu, 1998.
5. G. Kee, Y. Byun, K. Lee and Y. Lee, "Improved Techniques for an Iris Recognition System with High Performance", Lecture Notes in Artificial Intelligence, LNAI 2256, pp. 177-181, 2001.
6. G. O. Williams, "Iris Recognition Technology", IEEE Aerospace and Electronic Systems Magazine, vol. 12, no. 4, pp. 23-29, 1997.
7. J. G. Daugman, "Biometric Personal Identification System Based on Iris Analysis", U.S. Patent 5,291,560, March 1, 1994.
8. J. G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148-1161, 1993.
9. J. G. Daugman, "Recognizing Persons by their Iris Patterns", in Biometrics: Personal Identification in Networked Society, Kluwer, pp. 103-121, 1998.
10. J. L. Wayman, "Technical Testing and Evaluation of Biometric Identification Devices", in Biometrics: Personal Identification in Networked Society (A. Jain, R. Bolle, S. Pankanti, eds.), Kluwer, Dordrecht, pp. 345-368, 1999.
11. Li Ma, Y. Wang and T. Tan, "Iris Recognition Based on Multichannel Gabor Filtering", ACCV2002: The 5th Asian Conference on Computer Vision, 23-25 January, Melbourne, Australia, 2002.
12. P. Jablonski, R. Szewczyk, Z. Kulesza, A. Napieralski, J. Cabestany and M. Moreno, "People Identification on the Basis of Iris Pattern - Image Processing and Preliminary Analysis", International Conference MIEL'2002.
13. P. W. Hallinan, "Recognizing Human Eyes", SPIE Proc. Geometric Methods in Computer Vision, vol. 1570, pp. 214-226, 1991.
14. R. Wildes, "Iris Recognition: An Emerging Biometric Technology", Proceedings of the IEEE, vol. 85, no. 9, September 1997.
15. R. Kevin, "E-Security for E-Government", a Kyberpass technical white paper, April 2001, www.kyberpass.com
16. S. Lim, K. Lee, O. Byeon and T. Kim, "Efficient Iris Recognition through Improvement of Feature Vector and Classifier", ETRI Journal, vol. 23, no. 2, June 2001.
17. S. E. Umbaugh, Computer Vision and Image Processing: A Practical Approach Using CVIPtools, Prentice-Hall, NJ, 1998.
18. W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform", IEEE Trans. on Signal Processing, vol. 46, pp. 1185-1188, April 1998.