
AMO - Advanced Modeling and Optimization, Volume 5, Number 2, 2003

An Iris Recognition System to Enhance E-security Environment Based on Wavelet Theory


Jafar M. H. Ali and Aboul Ella Hassanien
Kuwait University, Faculty of Business Administration,
Quantitative Methods and Information Systems Department
P.O. Box 5969 Safat, Code 13060, Kuwait
Email: jafar@cba.edu.kw & Abo@cba.edu.kw
Web site: http://www.cba.edu.kw/abo

Abstract: In this paper, efficient biometric security techniques for an iris recognition system with high performance and high confidence are described. The system is based on an empirical analysis of the iris image and is split into several steps using local image properties. The steps are: capturing the iris pattern; determining the location of the iris boundaries; converting the iris boundary to the stretched polar coordinate system; extracting the iris code based on texture analysis using wavelet transforms; and classifying the iris code. The proposed system uses wavelet transforms for texture analysis and depends heavily on knowledge of the general structure of a human iris. The system was implemented and tested using a dataset of 240 samples of iris data with different contrast quality. The classification rate is compared with well-known methods.

Keywords: User authentication, e-security, biometrics, iris recognition, segmentation, wavelet, classification, e-business.

1. Introduction

Today's e-security systems are in critical need of accurate, secure and cost-effective alternatives to passwords and personal identification numbers (PINs), as financial losses from computer-based fraud such as hacking and identity theft increase dramatically year over year [15]. Biometric solutions address these fundamental problems because an individual's biometric data is unique and cannot be transferred.

Biometrics is the automated identification of a person, or verification of a person's identity, based on a physiological or behavioral characteristic. Examples of physiological characteristics include hand and finger images, facial characteristics, and iris patterns. Behavioral characteristics are traits that are learned or acquired; dynamic signature verification, speaker verification, and keystroke dynamics are examples [2,3]. A biometric system uses hardware to capture the biometric information and software to maintain and manage the system. In general, the system translates these measurements into a mathematical, computer-readable format. When a user first creates a biometric profile, known as a template, that template is stored in a database. The system then compares this template to the new image created every time the user accesses the system.

For an enterprise, biometrics provides value in two ways. First, a biometric device automates entry into secure locations, relieving or at least reducing the need for full-time monitoring by personnel. Second, when rolled into an authentication scheme, biometrics adds a strong layer of verification for user names and passwords. Biometrics adds a unique identifier to network authentication, one that is extremely difficult to duplicate. Smart cards and tokens also provide a unique identifier, but biometrics has an advantage over these devices: a user cannot lose or forget his or her fingerprint, retina, or voice. The practical applications for biometrics are diverse and expanding, ranging from healthcare to government, financial services, transportation, and public safety and justice [2,3]. Such applications include on-line identification for e-commerce, access control for buildings or restricted areas, off-line personal identification, financial automated teller machines (ATMs), on-line ticket purchase, internet kiosks, and military area access control.



Using iris recognition at an ATM [12,13,16], a customer simply walks up to the ATM and looks into a sensor camera to access their accounts. The camera instantly photographs the iris of the customer; if the customer's iris data matches the record stored in a database, access is granted. At the ATM, a positive authentication can be read through glasses, contact lenses, and most sunglasses. Iris recognition has proved a highly accurate, easy-to-use, and virtually fraud-proof means of verifying the identity of the customer.

In this paper we present an iris recognition system based on wavelet theory. The proposed system uses wavelet transforms for texture analysis and depends heavily on knowledge of the general structure of a human iris. The rest of this paper is organized as follows. Section 2 introduces the basic concepts of biometric technology and discusses related work. Section 3 discusses the proposed system in detail. Results are discussed in Section 4. Conclusions are presented in Section 5.

2. Background and Related Work

2.1 Identification vs. Verification

It is important to distinguish whether a biometric system is used to verify or to identify a person. These are separate goals, and some biometric systems are more appropriate for one than the other, though no biometric system is limited to one or the other; the needs of the environment dictate which system is chosen. The most common use of biometrics is verification. As the name suggests, the biometric system verifies the user based on information provided by the user. For example, when X enters her/his user name and password, the biometric system fetches the template for X; if there is a match, the system verifies that the user is in fact X. Identification seeks to determine who the subject is without information from the subject. For instance, face recognition systems are commonly used for identification: a device captures an image of the subject's face and looks for a match in its database. Identification is complicated and resource-intensive because the system must perform a one-to-many comparison of images, rather than the one-to-one comparison performed by a verification system.

2.2 Biometric Error Analysis

All biometric systems suffer from two forms of error: Form-1 is a false acceptance and Form-2 is a false rejection. Form-1 happens when the biometric system authenticates an impostor; Form-2 means that the system has rejected a valid user. A biometric system's accuracy is determined by combining the rates of false acceptance and rejection. Each error presents a unique administrative challenge. For instance, if you are protecting sensitive data with a biometric system, you may want to tune the system to reduce the number of false acceptances. However, a system that is highly calibrated to reduce false acceptances may also increase false rejections, resulting in more help desk calls and administrator intervention. Therefore, administrators must clearly understand the value of the information or systems to be protected, and then find a balance between acceptance and rejection rates appropriate to that value. A poorly created enrollment template can compound false acceptance and rejection; for example, if a user enrolls in the system with dirt on his finger, it may create an inaccurate template that doesn't match a clean print. Natural changes in a user's physical traits may also lead to errors. When the false acceptance and false rejection rates are plotted against the decision threshold, the two curves intersect.
The point of intersection is called the crossover accuracy of the system. In general, as the crossover accuracy becomes higher, the inherent accuracy of the biometric increases.
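As an illustration of this trade-off, the following MATLAB sketch (ours, not part of the original system; it uses synthetic genuine and impostor score distributions rather than data from the paper) sweeps the decision threshold, computes both error rates, and locates the crossover numerically:

% Hedged illustration: sweep a decision threshold over synthetic
% genuine/impostor matching scores and report FAR and FRR.
rng(0);                                  % reproducible example
genuine  = 0.80 + 0.05*randn(1, 1000);   % scores of valid users
impostor = 0.60 + 0.05*randn(1, 1000);   % scores of impostors

thresholds = 0.50:0.01:0.90;
FAR = zeros(size(thresholds));           % Form-1 error rate
FRR = zeros(size(thresholds));           % Form-2 error rate
for k = 1:numel(thresholds)
    FAR(k) = mean(impostor >= thresholds(k));   % impostors accepted
    FRR(k) = mean(genuine  <  thresholds(k));   % valid users rejected
end

[~, idx] = min(abs(FAR - FRR));          % where the two curves meet
fprintf('crossover near t = %.2f (FAR = %.3f, FRR = %.3f)\n', ...
        thresholds(idx), FAR(idx), FRR(idx));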



Table (1) shows the crossover accuracy of different biometric technologies.

Biometrics            Crossover Accuracy
Retinal Scan          1:10,000,000+
Iris Scan             1:131,000
Fingerprints          1:500
Hand Geometry         1:500
Signature Dynamics    1:50
Voice Dynamics        1:50

Table 1: Comparison of different biometric technologies

2.3 The Iris Features and Process

The iris has many features that can be used to distinguish one iris from another [1,5,6,12,13]. One of the primary visible characteristics is the trabecular meshwork, a tissue which gives the appearance of dividing the iris in a radial fashion and is permanently formed by the eighth month of gestation. The detailed structure of the iris is not genetically determined; it arises through chaotic morphogenesis during the seventh month of gestation, which means that even identical twins have differing irises. The iris has in excess of 266 degrees of freedom, i.e. the number of variations in the iris that allow one iris to be distinguished from another. The fact that the iris is protected behind the eyelid, cornea, and aqueous humour means that, unlike other biometrics such as fingerprints, the likelihood of damage and/or abrasion is minimal. The iris is also not subject to the effects of aging, which means it remains stable in form from about the age of one until death. The use of glasses or contact lenses (colored or clear) has little effect on the representation of the iris and hence does not interfere with the recognition technology. Figure (1) shows examples of iris patterns and demonstrates the variations found in irises.

Figure 1: Examples of human iris patterns

In general, the process of an iris recognition system includes the following four steps:
1. Capturing the image
2. Defining the location of the iris
3. Optimizing the image
4. Storing and comparing the image

The image of the iris can be captured using a standard camera under both visible and infrared light, in either a manual or an automated procedure. The camera can be positioned between three and a half inches and one meter from the eye to capture the image. In the manual procedure, the user needs to adjust the camera to get the iris in focus and needs to be within six to twelve inches of the camera.



This manual process is much more labour-intensive and requires proper user training to be successful. The automated procedure uses a set of cameras that locate the face and iris automatically, making the process much more user-friendly. Once the camera has located the eye, the iris recognition system identifies the image that has the best focus and clarity of the iris. The image is then analyzed to identify the outer boundary of the iris where it meets the white sclera of the eye, the pupillary boundary, and the centre of the pupil. This results in the precise location of the circular iris. The system then identifies the areas of the iris image that are suitable for feature extraction and analysis, removing areas that are covered by the eyelids or lie in deep shadow. Once the image has been captured, an algorithm is used to filter and map segments of the iris into hundreds of vectors. The algorithm must also take into account the changes that can occur in an iris; for example, the pupil's expansion and contraction in response to light will stretch and skew the iris. This information is used to produce a vector record called the IrisCode, a 512-byte record, which is then stored in a database for future comparison.

Figure (2) describes the process involved in using a biometric system for security (a minimal code sketch follows the figure). It contains nine steps:
1. Capture the chosen biometric.
2. Process the biometric and extract and enroll the biometric template.
3. Store the template in a local repository, a central repository, or a portable token such as a smart card.
4. Live-scan the chosen biometric.
5. Process the biometric and extract the biometric template.
6. Store the reference template.
7. Match the scanned biometric against stored templates.
8. Provide a matching score to business applications, with a threshold value.
9. Record a secure audit trail with respect to the system.

[Figure 2 schematic: (1) biometric device -> (2) biometric process -> (3) trial template, which feeds (7) matching against the (6) reference template produced by a second path ((4) biometric device -> (5) biometric process); matching applies a threshold and yields (8) a score and (9) a decision.]

Figure 2: How a biometric system works
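To make the verification path of Figure 2 concrete, here is a minimal MATLAB sketch (our illustration, not the paper's implementation; the two 12-bit templates are hard-coded placeholders standing in for steps 1-6):

% Minimal sketch of the verification path in Figure 2.
reference = logical([1 0 1 1 0 1 0 0 1 1 0 1]);   % stored reference template
trial     = logical([1 0 1 1 0 0 0 0 1 1 0 1]);   % live-scan trial template

score = mean(reference == trial);   % steps 7-8: fraction of agreeing bits
threshold = 0.75;                   % application-specific operating point
if score >= threshold               % step 9: decision
    disp('match: user verified');
else
    disp('no match: user rejected');
end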

2.4 Related Work

Several methods have been proposed for iris recognition. Daugman [7,8,9] presented a system based on phase codes obtained with Gabor filters and reported excellent performance on a diverse database of many images. Wildes [14] described a system for personal verification based on automatic iris recognition; it relies on image registration and image matching, which is computationally very demanding. Boles et al. [18] proposed an algorithm for iris feature extraction using a zero-crossing representation of the 1-D wavelet transform. All these algorithms are based on grey images; colour information is not used, because a grey iris image can provide enough information to identify different individuals.



In addition, Daugman's and Wildes' systems employ carefully designed devices for image acquisition to ensure that the iris is located at the same position within the image and that the images have the same resolution and are glare-free under fixed illumination. However, these requirements are not always easy to satisfy in practical applications. In our method, the irises are localized and unwrapped to form a block of texture, and features are extracted using multi-scale global texture analysis. This makes our method translation and rotation invariant as well as tolerant of illumination variations. Compared with zero-crossing representations of 1-D wavelet transforms [11], which employ only the information along the circle, the proposed system uses 2-D texture analysis, because iris patterns also exist along the radius.

3. The Proposed System

In this section, we discuss the proposed system in detail. The system contains five main steps: image acquisition, iris localization, establishment of the coordinate system, recognition/identification, and matching and classification evaluation. Each step is described as follows:

Step-1 Acquisition of eye image: The eye image is acquired from a digital camera.

Step-2 Iris localization (pupillary and limbic boundaries): Using the eye image, the boundary between the pupil and the iris is detected after the position of the eye in the given image is localized. After the center and the radius of the pupil are extracted, the right and left radii of the iris are searched based on these data.

Step-3 Establishment of coordinate system: Using the center and the radius calculated in the previous step, we set up the polar coordinate system in which the iris features are extracted.

Step-4 Recognition/identification of iris: The extracted iris pattern is partitioned into tracks in the form of bands. Each local region of these tracks is transformed into a complex number with the 2-D Gabor filter. The sign of the real or imaginary part of the transformed number is encoded as 1 for a positive sign and 0 for a negative sign. The assigned bits are compared with the bits of all personal codes in the database or the registered memory.

Step-5 Matching score: Finally, the system makes a decision to recognize/identify the given iris by the matching score.

A. Image Acquisition

One of the major challenges of an automated iris recognition system is to capture a high-quality image of the iris while remaining noninvasive to the human operator [10]. Given that the iris is relatively small (about 1 cm in diameter) and dark, and that human operators are very sensitive about their eyes, this matter requires careful engineering. The following points are of concern:
- It is desirable to acquire images of the iris with sufficient resolution and sharpness to support recognition.
- It is important to have good contrast in the interior iris pattern without resorting to a level of illumination that annoys the operator.
- The images should be well framed (i.e. centered).
- Noise in the acquired images should be eliminated as much as possible.


The human eye should be about 9 cm away from the camera. The halogen lamps are placed in fixed positions to obtain the same illumination over all the images, which makes it easier to exclude the illuminated part of the iris when extracting the iris code. To acquire clearer images through the CCD camera and to minimize the effect of reflected light caused by the surrounding illumination, we arrange two halogen lamps as the surrounding light sources, both placed in front of the eye. Figure (3) shows the device configuration for acquiring human eye images.

[Figure 3 schematic: two 50 W halogen lamps placed 8 cm and 12 cm to either side in front of the eye, a CCD camera 9 cm from the eye, a frame grabber, and a monitor.]

Figure 3: Configuration of the proposed image acquisition device

B. Iris Localization and Boundary Isolation

The proposed algorithm is based on the fact that there is an obvious difference in intensity around each boundary. Since the pixel values in the pupil are not always zero, we need an edge detection algorithm that sets all pupil values to zero, making it easy to determine the pupil center and then obtain the pupil boundary. We start the algorithm by applying an edge detection method based on a discrete approximation [17] to differential operators such as the Laplacian of Gaussian (LoG), denoted by G(x, y), applied to the image I(x, y) at position (x, y), where G(x, y) is a smoothing function whose scale \sigma selects the spatial scale of the edges under consideration. The LoG function is defined as:

G(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-(x^2+y^2)/2\sigma^2} \left( \frac{x^2 + y^2}{\sigma^4} - \frac{2}{\sigma^2} \right)    (1)

The edge detection result should be enhanced using a nonlinear method such as the median filter to remove noise around the pupil, yielding a clean pupil region from which a reliable centre can be determined. The centre of the pupil is obtained by counting the number of black pixels (zero value) in each column and row, finding the row and column with the maximum number of black pixels, and then determining the centre by a simple calculation in image coordinates; consequently, we can determine the radius of the pupil. Thus we can find the pupillary (inner) boundary. A similar procedure is extended, using a coarse scale, to locate the outer boundary (limbus), which can be made apparent by using the mid-point algorithms for circles and ellipses. By merging the existing edge segments into boundaries through edge linking, we can precisely isolate the iris boundary from the eye.
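As a concrete illustration of this pipeline, the following MATLAB sketch (our own, assuming the Image Processing Toolbox; 'eye.png' is a placeholder file name and the 0.15 darkness threshold is a heuristic) applies LoG edge detection with zero-crossings, median filtering, and the black-pixel row/column counts for the pupil centre and radius:

% Sketch of the localization steps: LoG edges, median filtering,
% and pupil-centre estimation by counting dark pixels per row/column.
I = im2double(imread('eye.png'));            % placeholder eye image
if size(I, 3) == 3, I = rgb2gray(I); end

E = edge(I, 'log', [], 2.0);                 % LoG + zero-crossings, sigma = 2
E = medfilt2(E, [5 5]);                      % clean speckle near the pupil
                                             % E would feed edge linking (Step-2)
P = medfilt2(I < 0.15, [5 5]);               % rough dark-pixel pupil mask
[~, y0] = max(sum(P, 2));                    % row with most pupil pixels
[~, x0] = max(sum(P, 1));                    % column with most pupil pixels

cols = find(P(y0, :));                       % first/last pupil pixel on row y0
rp = (cols(end) - cols(1)) / 2;              % pupil radius estimate
fprintf('pupil centre ~ (%d, %d), radius ~ %.1f px\n', x0, y0, rp);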



The proposed iris boundary isolation algorithm is described in the following steps:

Step-1 Edge detection: Localize the pupillary boundary using a fine scale, then apply zero-crossing detection, comparing neighbouring pixels so that all pupil values become zero, making the boundary easy to determine.

Step-2 Edge linking: Use a coarse-to-fine scale to obtain boundaries by merging the existing edge segments; this is done by edge linking.

Step-3 Enhancement: The results of Step-1 and Step-2 should be enhanced using a median filter.

Step-4 Pupil/limbus center: Determine the center of the pupil (x_0, y_0) by counting the number of black pixels (zero value) [17] as follows:
- Count the black pixels in each row.
- Find the row with the maximum number of such pixels.
- Get the positions (x_1, y_1) and (x_2, y_2) of the first and last black pixels of this row. Then the center of this row is x_0 = (x_1 + x_2)/2.


Similarly, apply the previous steps to the column with the maximum number of black pixels to obtain y_0 = (y_1 + y_2)/2. In practice we cannot obtain a single point, so we select the most frequently crossed point as the center. Consequently, the radius of the virtual circle of the pupil can be determined.

Step-5 Isolate the iris boundary: Segment the iris from the eye by applying a boundary detection technique to localize the pupillary boundary. This technique is based on merging the existing edge segments (those with the maximum number of edge points) into boundaries by edge linking [17] as follows:
- Define the size of the neighbourhood as 5 x 5.
- Link similar points that have close values; the entire image undergoes this process, while keeping a list of linked points.
- When the process is complete, the boundary is determined by the linked list, which can be made apparent using the mid-point algorithms for circles and ellipses.
- Similar steps can be extended, using a coarse scale, to locate the outer boundary (limbus).

C. Polar Transformation Algorithm

The localized iris part of the image should be transformed into the polar coordinate system. Locating the iris in the image delineates the circular iris zone of analysis by its inner and outer boundaries. The Cartesian-to-polar transform suggested by Daugman [8] permits an equivalent rectangular representation of the zone of interest, as shown in Figure (4). In this way we compensate for the stretching of the iris texture as the pupil changes in size, and we unfold the frequency information contained in the circular texture in order to facilitate subsequent feature extraction. Moreover, this representation accommodates the non-concentricity of the iris and the pupil. The polar coordinate system is described by the angle \theta (\theta \in [0, 2\pi]) and the dimensionless parameter \rho (\rho \in [0, 1]). The transform is implemented by the following equations:

I(x(\rho, \theta), y(\rho, \theta)) \rightarrow I(\rho, \theta)    (2)

where

x(\rho, \theta) = (1 - \rho) \, x_p(\theta) + \rho \, x_i(\theta)    (3)
y(\rho, \theta) = (1 - \rho) \, y_p(\theta) + \rho \, y_i(\theta)    (4)
x_p(\theta) = x_{p0} + r_p \cos\theta    (5)
y_p(\theta) = y_{p0} + r_p \sin\theta    (6)
x_i(\theta) = x_{i0} + r_i \cos\theta    (7)
y_i(\theta) = y_{i0} + r_i \sin\theta    (8)

Here r_p and r_i are respectively the radii of the pupil and the iris, (x_{p0}, y_{p0}) and (x_{i0}, y_{i0}) are the corresponding centres, and (x_p(\theta), y_p(\theta)) and (x_i(\theta), y_i(\theta)) are the coordinates of the pupillary and limbic boundaries in the direction \theta. Figure (4) depicts how the iris image is converted to polar coordinates.

Figure 4: Polar transformation
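A minimal MATLAB sketch of this rubber-sheet mapping, Eqs. (2)-(8), follows; the image, boundary centres, and radii are placeholders standing in for the values found during localization:

% Unwrap the iris annulus into a 60x450 polar rectangle (Eqs. (2)-(8)).
I = rand(240, 320);                      % stand-in for a localized eye image
xp0 = 160; yp0 = 120; rp = 30;           % pupil centre and radius (placeholder)
xi0 = 160; yi0 = 120; ri = 90;           % limbic centre and radius (placeholder)

nTheta = 450; nRho = 60;                 % matches the 450x60 image in Sec. D
theta = linspace(0, 2*pi, nTheta);
rho   = linspace(0, 1, nRho);
polarIris = zeros(nRho, nTheta);

for t = 1:nTheta
    xp = xp0 + rp*cos(theta(t));  yp = yp0 + rp*sin(theta(t));   % Eqs. (5)-(6)
    xi = xi0 + ri*cos(theta(t));  yi = yi0 + ri*sin(theta(t));   % Eqs. (7)-(8)
    for r = 1:nRho
        x = (1 - rho(r))*xp + rho(r)*xi;                         % Eq. (3)
        y = (1 - rho(r))*yp + rho(r)*yi;                         % Eq. (4)
        col = min(max(round(x), 1), size(I, 2));                 % clamp to image
        row = min(max(round(y), 1), size(I, 1));
        polarIris(r, t) = I(row, col);   % Eq. (2): nearest-neighbour sampling
    end
end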


D. Feature Extraction: Iris Code

This section illustrates how to obtain the feature vector (iris code) used to compare the similarity of human eyes and to identify a person. The Gabor transform and the wavelet transform are typically used for analyzing human iris patterns and extracting feature points from them [11]. In this paper, a wavelet transform is used to extract features from iris images; among the mother wavelets, we use the Haar wavelet. The wavelet transform breaks an image down into four sub-sampled, or decimated, images: one that has been high-pass filtered in both the horizontal and vertical directions, one that has been low-pass filtered in the vertical and high-pass filtered in the horizontal direction, one that has been high-pass filtered in the vertical and low-pass filtered in the horizontal direction, and one that has been low-pass filtered in both directions. The transform is typically implemented in the spatial domain using the 1-D convolution filters h (low-pass) and g (high-pass). Figure (5) shows the result of the Haar transform, where H and L denote the high-pass and low-pass filters, respectively, and HH means that the high-pass filter is applied to the signal in both directions. The Haar transform results in four types of coefficients: (a) coefficients that result from a convolution with g in both directions (HH) represent diagonal features of the image; (b) coefficients that result from a convolution with g on the columns after a convolution with h on the rows (HL) correspond to horizontal structures; (c) coefficients from high-pass filtering on the rows, followed by low-pass filtering of the columns (LH) reflect vertical information; (d) the coefficients from low-pass filtering in both directions (LL) are further processed in the next step.
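To make the four sub-bands concrete, here is one level of the 2-D Haar step in MATLAB (our sketch; the input is a random stand-in for the unwrapped iris, and the filters follow the averaging/differencing convention of the 1-D dwthaar code given below):

% One level of the 2-D Haar decomposition: filter and downsample the
% rows, then the columns, producing the LL, LH, HL and HH sub-images.
A = rand(60, 450);        % stand-in for the unwrapped iris image
h = [1  1] / 2;           % Haar low-pass (averaging) filter
g = [1 -1] / 2;           % Haar high-pass (differencing) filter

Lr = conv2(A, h, 'valid');  Lr = Lr(:, 1:2:end);    % low-pass the rows
Hr = conv2(A, g, 'valid');  Hr = Hr(:, 1:2:end);    % high-pass the rows

LL = conv2(Lr, h', 'valid'); LL = LL(1:2:end, :);   % approximation
LH = conv2(Lr, g', 'valid'); LH = LH(1:2:end, :);   % vertical detail
HL = conv2(Hr, h', 'valid'); HL = HL(1:2:end, :);   % horizontal detail
HH = conv2(Hr, g', 'valid'); HH = HH(1:2:end, :);   % diagonal detail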

100

AMO - Advanced Modeling and Optimization, Volume 5, Number 2, 2003

Figure 5: Haar transform

The following MATLAB code illustrates the Haar decomposition process.

Step-1 and Step-2 (initialization and the actual single-level transform):

function [s, d] = dwthaar(Signal)
% One level of the 1-D Haar transform: s holds the pairwise
% averages (low-pass), d the differences (high-pass).
N = length(Signal);
s = zeros(1, N/2);
d = s;
for n = 1:N/2
    s(n) = 1/2*(Signal(2*n-1) + Signal(2*n));
    d(n) = Signal(2*n-1) - s(n);
end
end

Step-3 Wavelet decomposition using the Haar transform:

function T = wavelet_decomp(Signal)
% Full Haar decomposition; row j+1 of T holds the level-j result.
N = size(Signal, 2);
J = log2(N);
if rem(J, 1)
    error('Signal must be of length 2^N.');
end
T = zeros(J+1, N);
T(1,:) = Signal;
for j = 1:J
    Length = 2^(J+1-j);
    [s, d] = dwthaar(T(j, 1:Length));
    T(j+1, 1:Length) = [s, d];              % transformed part
    T(j+1, Length+1:N) = T(j, Length+1:N);  % carry earlier detail coefficients
end
end

For the 450x60 iris image in polar coordinates, we apply the wavelet transform 4 times in order to get the 28x3 sub-image (i.e. 84 features). By combining these 84 features of the HH sub-image of the fourth transform (HH4) with the average value of each of the three remaining high-pass areas (HH1, HH2, HH3), the dimension of the resulting feature vector is 87. Each of the 87 dimensions has a real value between -1.0 and 1.0. Each real value is quantized into binary form by converting positive values to 1 and negative values to 0. Therefore, we can represent an iris image with only 87 bits.
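Putting the pieces together, the following sketch (ours, not the authors' exact code; haar_step repeats the one-level 2-D Haar from the earlier sketch) builds the 87-bit code from the 84 HH4 coefficients, the mean of each of HH1-HH3, and sign quantization:

function irisCode = iris_code_sketch(polarIris)
% polarIris: 60x450 unwrapped iris (Section C). Returns 87 bits.
features = zeros(1, 87);
LL = polarIris;
for level = 1:4
    [LL, HH] = haar_step(LL);                % keep LL, record HH per level
    if level < 4
        features(84 + level) = mean(HH(:));  % averages of HH1, HH2, HH3
    else
        features(1:84) = HH(:)';             % the 3x28 = 84 HH4 coefficients
    end
end
irisCode = features > 0;                     % sign quantization: 1 / 0
end

function [LL, HH] = haar_step(A)
% One 2-D Haar level (rows, then columns); LH and HL are not needed here.
h = [1 1]/2;  g = [1 -1]/2;
Lr = conv2(A, h, 'valid');   Lr = Lr(:, 1:2:end);
Hr = conv2(A, g, 'valid');   Hr = Hr(:, 1:2:end);
LL = conv2(Lr, h', 'valid'); LL = LL(1:2:end, :);
HH = conv2(Hr, g', 'valid'); HH = HH(1:2:end, :);
end

For a 60x450 input (e.g. code = iris_code_sketch(rand(60, 450))), the HH sub-band sizes come out as 30x225, 15x112, 7x56 and 3x28, so HH4 contributes exactly 84 values.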
E. Matching Process: Hamming Distance Calculation

Comparison of IrisCode records involves calculating a Hamming distance (HD) [4] as a measure of variation between the IrisCode computed from the presented iris and each IrisCode stored in the database. Let A_j and B_j be two iris codes to be compared; the Hamming distance is calculated as:



HD = \frac{1}{87} \sum_{j=1}^{87} A_j \oplus B_j    (9)

with \oplus denoting the exclusive-OR operator. (The exclusive-OR is a Boolean operator that equals one if and only if the two bits A_j and B_j are different.)
The matching algorithm: Let A_j and B_j be two iris codes to be compared, to test whether A_j is in the database or not. The following steps describe the process:
- For j = 1 to 87: compare code A_j bit-by-bit with code B_j in the database. If the result of the XOR is 0, the two bits are the same, so count it among the zeros; otherwise, do not count it and continue to the next bit.
- Repeat until reaching the final code in the database.
- Calculate the similarity (matching) ratio by the following formula:

MR = \frac{N_z}{T_n} \times 100    (10)

where N_z and T_n are the number of zeros and the total number of bits in each code, respectively, and MR is the matching ratio.
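A small MATLAB sketch of Eqs. (9)-(10) follows (ours; the codes are random placeholders and the flipped-bit positions are arbitrary):

% Hamming distance and matching ratio between two 87-bit iris codes.
rng(1);
A = rand(1, 87) > 0.5;                  % enrolled iris code (placeholder)
B = A;  B([3 40 71]) = ~B([3 40 71]);   % probe code differing in 3 bits

HD = mean(xor(A, B));                   % Eq. (9): fraction of differing bits
Nz = sum(~xor(A, B));                   % zeros of the XOR (agreeing bits)
Tn = numel(A);                          % total number of bits, 87
MR = Nz / Tn * 100;                     % Eq. (10): matching ratio (percent)

fprintf('HD = %.3f, MR = %.1f%%\n', HD, MR);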

4. Result and Discussion

To show the effectiveness of the proposed algorithms, in our implementation we divided the 240 samples of iris data into 20 equal-size folders, such that a single folder is used for testing the model developed from the remaining nineteen sets. The images were acquired under different conditions and were collected as grey-level images. Figure (6) shows examples of failure and success cases of iris patterns during the acquisition process. Figure (7) illustrates the iris isolated by localizing the inner and outer boundaries; it is obvious that the process can discriminate regions that have different texture characteristics. For the wavelet transform, the best results were obtained for the high-pass (H) sub-bands at scales 3 and 4 in the vertical direction; the maximal statistical measure value was 2.54, which is a good and accurate result. The False Acceptance Rate (FAR) and the False Rejection Rate (FRR) are the two critical measurements of system effectiveness; this system scored 0.001% FAR and 0.55% FRR. Table (2) shows the classification rate compared with two well-known methods, Wildes' and Daugman's. In the two test modes, Daugman's method is a little better than ours. In fact, the dimensionality of the feature vector in both methods is much higher than ours: the feature vector consists of 2048 components in Daugman's method, while ours has only 87, and they extract features in much smaller local regions. These factors make their methods a little better than ours. We are now working on representing the variation of iris texture in local regions more precisely while reducing the dimensionality of the feature vector; thus, we expect to further improve the performance of the current method.


Figure 6: Examples of (a) failure and (b) success cases of iris patterns

Figure 7: Localization of the iris pattern result

Method         Classification Rate (%)
Wildes'        99.2
Daugman's      100
Our system     97.3

Table 2: Classification rate

5. Conclusion

In this paper we have described efficient techniques for an iris recognition system with high performance from a practical point of view. These techniques are:
- A method of evaluating the quality of an image in the image acquisition step and excluding it from subsequent processing if it is not appropriate.
- A computer graphics algorithm for detecting the centre of the pupil and localizing the iris area from an eye image.
- Transformation of the localized iris area into a simple coordinate system.
- A compact and efficient feature extraction method based on the 2-D multiresolution wavelet transform.
- A matching process based on the Hamming distance between the input code and the registered iris codes.
The overall recognition rate of the system is about 97.3%.


References
1. A. A. Onsy and S. Maha, "A New Algorithm for Locating the Boundaries of the Human Iris", 1st IEEE International Symposium on Signal Processing and Information Technology, December 28-30, Hilton Ramses, Cairo, Egypt, 2001.
2. A. Julian, Biometrics: Advanced Identity Verification - The Complete Guide, Springer-Verlag, 2000.
3. B. J. Erik, "Overview of the Biometric Identification Technology Industry", a presentation to the IBIA Conference: Defending Cyberspace '99, http://www.ibia.org
4. F. Kagan Gürkaynak, Y. Leblebici and D. Mlynek, "A Compact High-Speed Hamming Distance Comparator for Pattern Matching Applications", http://turquoise.wpi.edu, 1998.
5. G. Kee, Y. Byun, K. Lee and Y. Lee, "Improved Techniques for an Iris Recognition System with High Performance", Lecture Notes in Artificial Intelligence, LNAI 2256, pp. 177-181, 2001.
6. G. O. Williams, "Iris Recognition Technology", IEEE Aerospace and Electronics Systems Magazine, vol. 12, no. 4, pp. 23-29, 1997.
7. J. G. Daugman, "Biometric Personal Identification System Based on Iris Analysis", U.S. Patent 5,291,560, March 1, 1994.
8. J. G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148-1161, 1993.
9. J. G. Daugman, "Recognizing Persons by their Iris Patterns", in Biometrics: Personal Identification in Networked Society, Kluwer, pp. 103-121, 1998.
10. J. L. Wayman, "Technical Testing and Evaluation of Biometric Identification Devices", in Biometrics: Personal Identification in Networked Society (A. Jain, R. Bolle and S. Pankanti, eds.), Kluwer, Dordrecht, pp. 345-368, 1999.
11. L. Ma, Y. Wang and T. Tan, "Iris Recognition Based on Multichannel Gabor Filtering", ACCV2002: The 5th Asian Conference on Computer Vision, 23-25 January, Melbourne, Australia, 2002.
12. P. Jablonski, R. Szewczyk, Z. Kulesza, A. Napieralski, J. Cabestany and M. Moreno, "People Identification on the Basis of Iris Pattern Image Processing and Preliminary Analysis", International Conference MIEL 2002.
13. P. W. Hallinan, "Recognizing Human Eyes", SPIE Proc. Geometric Methods in Computer Vision, vol. 1570, pp. 214-226, 1991.
14. R. Wildes, "Iris Recognition: An Emerging Biometric Technology", Proceedings of the IEEE, vol. 85, no. 9, September 1997.
15. R. Kevin, "E-Security for E-Government", a Kyberpass Technical White Paper, April 2001, www.kyberpass.com
16. S. Lim, K. Lee, O. Byeon and T. Kim, "Efficient Iris Recognition through Improvement of Feature Vector and Classifier", ETRI Journal, vol. 23, no. 2, June 2001.
17. S. E. Umbaugh, Computer Vision and Image Processing: A Practical Approach Using CVIPtools, Prentice-Hall, NJ, 1998.
18. W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform", IEEE Trans. on Signal Processing, vol. 46, pp. 1185-1188, April 1998.
