
Automatic Attendance System using Fingerprint Verification Technique

1. INTRODUCTION

In many institutions and organizations attendance is an important factor, and maintaining it is one of the criteria that students and employees must follow. The previous approach, in which attendance was taken and maintained manually, was very inconvenient. With these issues in mind, we developed an automatic attendance system that automates the whole process of taking and maintaining attendance. Several biometric techniques are commonly used for identification and verification: iris recognition, voice identification, facial recognition, fingerprint identification, DNA recognition, hand geometry recognition, signature recognition and gait recognition. Biometric techniques are widely used in areas such as building security, forensic science, ATMs, criminal identification and passport control. Our proposed automatic attendance system uses the fingerprint recognition technique to record attendance. Fingerprint recognition is a widely popular technique that is also used for many other purposes. Fingerprint verification is a convenient and reliable way to verify a person's identity. It is believed that no two people in this world have identical fingerprints, so fingerprint verification and identification is the most popular way to establish the authenticity or identity of a person wherever security is in question. The popularity of the fingerprint technique rests on four personal characteristics. Uniqueness: each and every fingerprint is unique, different from every other. Universality: every person carries the individual characteristics of a fingerprint. Permanence: fingerprints are permanent, impossible to change or forget, and can never be stolen. Collectability: a fingerprint can be measured quantitatively.

At present, the uses of fingerprint verification are widespread, from authentication for machine logon to, still predominantly, law enforcement applications. The use of fingerprint recognition is expected to increase further, driven by factors such as small fingerprint capturing devices, fast computing hardware and awareness of easy-to-use methods for security [3]. This paper covers fingerprint verification, the algorithm and our proposed system; the details of pre-processing the fingerprint image, including enhancement, binarization and segmentation; extracting minutiae from the image; post-processing and matching; and the experiment and its results.

Fingerprint Recognition

A fingerprint is the feature pattern of one finger: an impression of the friction ridges found on the inner surface of the finger, as shown in figure 1(a). Everyone in this world has their own permanently unique fingerprint. A fingerprint is made up of ridges and furrows, which show good similarities such as parallelism and average width. However, research on fingerprint verification and identification shows that fingerprints can be distinguished with the help of minutiae, which are abnormal points on the ridges. There are two types of minutiae: a point where a ridge ends abruptly is called an ending or termination, and a point where a ridge splits into two or more branches is known as a bifurcation.

2. LITERATURE REVIEW

This section provides an account of published work by academic scholars and accredited commentators on the subject of modern biometrics. The intent is to give the reader the pros and cons of the knowledge and ideas established on the topic. The material selected for research should reflect the overall goals outlined in the aims and objectives for this project, which are primarily to evaluate the current state of biometric technologies and their potential. With this in mind, and taking into account the Ashbourne definition of a modern biometric system, it is possible to discern the more relevant material for appraisal. Much of this material is recent, reflecting the most significant period for the development of biometrics as a usable modern technology (c. 1990s to the present), with the necessary overall historical context provided in the main introduction. Maltoni et al. [A][1], Bolle et al. [A][3] and Wikipedia [B][4] are in agreement as to the list of general characteristics a biometric must meet in order to provide high-level performance. These include: Universality, a characteristic of everyone. Distinctiveness, any two persons should be sufficiently different. Permanence, i.e. invariant with respect to matching over time. Collectability, the biometric can be measured quantitatively. Performance, achievable recognition accuracy, speed and robustness. Acceptability, the extent to which people are willing to accept the system. Circumvention, how easy it is to fool the system.

As a gauge of performance, these characteristics provide the context for understanding common biometric identifiers in relation to the key aspects mentioned above. In summary, it is hoped that the review will reflect the overall research objective, provide insight into the current level and limits of knowledge, and identify controversies and areas of further research.

Moore, G (2005) stated that picture writing of a hand with ridge patterns was discovered in Nova Scotia. In ancient Babylon, fingerprints were used on clay tablets for business transactions, and in ancient China thumb prints were found on clay seals. In 14th century Persia, various official government papers bore fingerprints, and one government official, a doctor, observed that no two fingerprints were exactly alike.

1686 - Malpighi: In 1686, Marcello Malpighi, a professor of anatomy at the University of Bologna, noted ridges, spirals and loops in fingerprints in his treatise. He made no mention of their value as a tool for individual identification. A layer of skin, approximately 1.8mm thick, was named the "Malpighi" layer after him.

1823 - Purkinji: In 1823, John Evangelist Purkinji, a professor of anatomy at the University of Breslau, published his thesis discussing fingerprint patterns, but he too made no mention of the value of fingerprints for personal identification.

1858 - Herschel: The English first began using fingerprints in July of 1858, when Sir William Herschel, Chief Magistrate of the Hooghly district in Jungipoor, India, first used fingerprints on native contracts. Sir Herschel's private conviction that all fingerprints were unique to the individual, as well as permanent throughout that individual's life, inspired him to expand their use.

1880 - Faulds: During the 1870's, Dr. Henry Faulds, the British Surgeon-Superintendent of Tsukiji Hospital in Tokyo, Japan, took up the study of "skin-furrows" after noticing finger marks on specimens of "prehistoric" pottery. In 1880, Dr. Faulds published an article in the scientific journal Nature, discussing fingerprints as a means of personal identification and the use of printers ink as a method for obtaining such fingerprints.

1882 - Thompson: In 1882, Gilbert Thompson of the U.S. Geological Survey in New Mexico used his own fingerprints on a document to prevent forgery. This is the first known use of fingerprints in the United States.

1888 - Galton: Sir Francis Galton, a British anthropologist and a cousin of Charles Darwin, began his observations of fingerprints as a means of identification in the 1880's.

1891 - Vucetich: Juan Vucetich, an Argentine police official, began the first fingerprint files, based on Galton pattern types. At first, Vucetich included the Bertillon System with the files.

1892 - Vucetich & Galton: Juan Vucetich made the first criminal fingerprint identification in 1892. Sir Francis Galton published his book, "Fingerprints", establishing the individuality and permanence of fingerprints. The book included the first classification system for fingerprints. While he soon discovered that fingerprints offered no firm clues to an individual's intelligence or genetic history, he was able to scientifically prove what Herschel and Faulds already suspected: that fingerprints do not change over the course of an individual's lifetime, and that no two fingerprints are exactly the same. According to his calculations, the odds of two individual fingerprints being the same were 1 in 64 billion. Galton identified the characteristics by which fingerprints can be identified. These same characteristics (minutiae) are basically still in use today, and are often referred to as Galton's Details.

1897 - Haque & Bose: On 12th June 1897, the Council of the Governor General of India approved a committee report that fingerprints should be used for the classification of criminal records. Later that year, the Calcutta (now Kolkata) Anthropometric Bureau became the world's first Fingerprint Bureau. Working in the Calcutta Anthropometric Bureau (before it became the Fingerprint Bureau) were Azizul Haque and Hem Chandra Bose, the two Indian fingerprint experts credited with the primary development of the Henry System of fingerprint classification (named for their supervisor, Edward Richard Henry). The Henry classification system is still used in all English-speaking countries.

1901 - Henry: Introduction of fingerprints for criminal identification in England and Wales, using Galton's observations and revised by Sir Edward Richard Henry.

1902: First systematic use of fingerprints in the U.S., by the New York Civil Service Commission for testing. Dr. Henry P. DeForrest pioneers U.S. fingerprinting.

1903: The New York State Prison system began the first systematic use of fingerprints in the U.S. for criminals.

1904: The use of fingerprints began in Leavenworth Federal Penitentiary in Kansas and at the St. Louis Police Department. They were assisted by a sergeant from Scotland Yard who had been on duty at the St. Louis World's Fair Exposition guarding the British Display. Sometime after the St. Louis World's Fair, the International Association of Chiefs of Police (IACP) created America's first national fingerprint repository, called the National Bureau of Criminal Identification.

1905: The U.S. Army begins using fingerprints. The U.S. Department of Justice forms the Bureau of Criminal Identification in Washington, DC to provide a centralized reference collection of fingerprint cards. Two years later the U.S. Navy started, and was joined the next year by the Marine Corps. During the next 25 years, more and more law enforcement agencies joined in the use of fingerprints as a means of personal identification. Many of these agencies began sending copies of their fingerprint cards to the National Bureau of Criminal Identification, which was established by the International Association of Chiefs of Police.

1907: The U.S. Navy begins using fingerprints. The U.S. Department of Justice's Bureau of Criminal Identification moves to Leavenworth Federal Penitentiary, where it is staffed at least partially by inmates.

1908: The U.S. Marine Corps begins using fingerprints.

1918: Edmond Locard wrote that if 12 points (Galton's Details) were the same between two fingerprints, it would suffice as a positive identification. Locard's 12 points seem to have been based on an unscientific "improvement" over the eleven anthropometric measurements (arm length, height, etc.) used to "identify" criminals before the adoption of fingerprints.

1924: An act of Congress established the Identification Division of the FBI. The IACP's National Bureau of Criminal Identification and the US Justice Department's Bureau of Criminal Identification consolidated to form the nucleus of the FBI fingerprint files.

1946: By 1946, the FBI had processed 100 million fingerprint cards in manually maintained files, and by 1971, 200 million cards. With the introduction of AFIS technology, the files were split into computerized criminal files and manually maintained civil files.

2005: The FBI's Integrated AFIS (IAFIS) in Clarksburg, WV has more than 49 million individual computerized fingerprint records for known criminals. Old paper fingerprint cards for the civil files are still manually maintained in a warehouse facility (rented shopping center space) in Fairmont, WV, though most enlisted military service member fingerprint cards received after 1990, and all military-related fingerprint cards received after 19 May 2000, have now been computerized and can be searched internally by the FBI. In some future build of IAFIS, the FBI may make such civil file AFIS searches available to other federal crime laboratories. All US states and larger cities have their own AFIS databases, each with a subset of fingerprint records that is not stored in any other database. Thus, law enforcement fingerprint interface standards are very important to enable the sharing of records and mutual searches for identifying criminals.

3. PROJECT OVERVIEW

The main aim of this paper is to develop an accurate, fast and efficient automatic attendance system using the fingerprint verification technique. We propose a system in which fingerprint verification is done using the minutiae extraction technique, and which automates the whole process of taking attendance. Taking attendance manually is laborious and troublesome work that wastes a lot of time, and managing and maintaining the records over a period of time is also a burdensome task. The experimental results show that our proposed system is highly efficient in verifying user fingerprints. Figure 2 shows our proposed automatic attendance system using the fingerprint verification technique. A fingerprint is captured by the user interface, which is likely to be an optical, solid state or ultrasound sensor. Generally, two approaches are used for fingerprint verification. The first is the minutiae-based technique, in which minutiae are represented by endings (terminations) and bifurcations. The other is the image-based or pattern-matching method, which takes account of the global features of a fingerprint image; it is useful because it solves some problems that are intractable for the first approach, but this paper is concerned with the minutiae-based representation of fingerprints. Fingerprint verification can be defined as a system that confirms the authenticity of a person by comparing their captured fingerprint template against the templates stored in a database. A one-to-one comparison is conducted to establish the person's authenticity: if the person's authenticity is verified, the system signals true, otherwise false.
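The one-to-one verification decision described above can be sketched as follows. This is an illustrative sketch only: the template format (minutiae as (x, y) points), the distance tolerance and the acceptance threshold are assumptions made for illustration, not the values used in our system.

```python
# Illustrative sketch of one-to-one fingerprint verification: compare a
# captured minutiae template against the stored template and signal
# true/false. Tolerance and threshold values are illustrative assumptions.

def match_score(captured, stored, tolerance=2):
    """Fraction of stored minutiae that have a captured minutia nearby."""
    if not stored:
        return 0.0
    hits = 0
    for (sx, sy) in stored:
        if any(abs(sx - cx) <= tolerance and abs(sy - cy) <= tolerance
               for (cx, cy) in captured):
            hits += 1
    return hits / len(stored)

def verify(captured, stored, threshold=0.8):
    """Signal True if the captured template matches the stored one."""
    return match_score(captured, stored) >= threshold

# Example: a captured template slightly perturbed from the stored one.
stored = [(10, 12), (25, 40), (33, 7), (48, 51)]
captured = [(11, 12), (25, 41), (33, 7), (47, 52)]
print(verify(captured, stored))  # True for this illustrative data
```

In the full system this decision would be preceded by the preprocessing, minutiae extraction and alignment stages described in section 4.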

Figure 2. Automatic attendance system architecture, representing the stages of pre-processing, minutiae extraction and minutiae matching.

4. FINGERPRINT RECOGNITION PROCESS

PREPROCESSING OF FINGERPRINT IMAGE

The pre-processing of a fingerprint image comprises the following procedures. First, the image is enhanced by histogram equalization and Fourier transform. Then binarization is performed on the enhanced fingerprint image using a locally adaptive method. Finally, the binarized fingerprint image is segmented using threshold or region-of-interest techniques.

A. Enhancement of Image

Although fingerprint images are acquired from high quality sensors, their quality is not guaranteed to be perfect. Enhancement is therefore applied to improve image quality, without needing to know the source of degradation; it increases the contrast between ridges and furrows and connects broken ridge points. Enhancement is first done by histogram equalization, which is performed on the input image based on its calculated probability density function; this prevents noise from being amplified while enhancing the visual contrast. After this, the Fourier transform is applied to small processing blocks [8, 16] (32 by 32 pixels) according to

F(u, v) = Σ_{x=0}^{31} Σ_{y=0}^{31} f(x, y) · exp(−j2π(ux/32 + vy/32))

where u = 0, 1, 2, ..., 31 and v = 0, 1, 2, ..., 31. To enhance a block by its dominant frequency, the FFT of the block is multiplied by its magnitude a set of times, where the magnitude of the original FFT is abs(F(u, v)) = |F(u, v)|. An enhanced block g(x, y) is obtained using the formula

g(x, y) = F^{−1}{ F(u, v) · |F(u, v)|^k }

where k is the number of times the magnitude is applied, and F^{−1}(F(u, v)) is calculated as

f(x, y) = (1/1024) Σ_{u=0}^{31} Σ_{v=0}^{31} F(u, v) · exp(j2π(ux/32 + vy/32))

for x = 0, 1, 2, ..., 31 and y = 0, 1, 2, ..., 31.
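As an illustration of the histogram equalization step, the gray-level remapping can be sketched in pure Python on a tiny example. The image values and size here are illustrative only; a real fingerprint image would be far larger.

```python
# Illustrative sketch of histogram equalization: remap each gray level so
# that the cumulative distribution of intensities becomes roughly uniform,
# stretching a low-contrast image across the full 0..255 range.

def histogram_equalize(image, levels=256):
    flat = [p for row in image for p in row]
    n = len(flat)
    # Histogram of the input image.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard remapping: scale the CDF to the full gray range.
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A low-contrast 2x4 image whose values are bunched in levels 100-103.
img = [[100, 100, 101, 102],
       [101, 102, 103, 103]]
print(histogram_equalize(img))  # [[0, 0, 85, 170], [85, 170, 255, 255]]
```

After remapping, the bunched levels 100-103 are spread across 0-255, which is exactly the contrast stretch between ridges and furrows that the enhancement stage relies on.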

B. Binarization of Image

Binarization converts a fingerprint image of up to 256 gray levels into a black-and-white image. A locally adaptive binarization method is used: a mean intensity value is chosen as the threshold, and all pixels at or above the threshold are classified as white and all other pixels as black.

C. Segmentation of Image

Separating the fingerprint area from the background is always useful to avoid extracting minutiae from noisy areas of the fingerprint [17]. Segmentation distinguishes the image object from the background. Only the region of interest is useful for recognition, so image areas without effective ridges and furrows are discarded, since they hold no important information; the remaining effective area is sketched out, because minutiae in the bounding region are easily confused with genuine minutiae when the image is processed. To extract the region of interest, two methodologies are used: first, block direction estimation and direction variety check [9, 18]; second, morphological operations. Two morphological operations are chosen: OPEN, which expands the image while removing noise, and CLOSE, which shrinks the image while eliminating small cavities. The fingerprint image area of interest is found by subtracting the closed area from the opened area.

RECOGNITION OF MINUTIAE

The recognition of minutiae is based on minutiae extraction, in which the binary image obtained by the binarization process is submitted to a fingerprint ridge thinning stage and then to minutiae marking.

A. Thinning

Ridge thinning, or thinning, is the process of reducing the width of the ridges in a fingerprint image to one pixel [10, 11, 19]. In other words, it eliminates the redundant pixels of the ridges until each ridge is one pixel wide, thinned to its central pixel. Minutiae points with one neighbouring ridge pixel are endings, and those with more than two are bifurcations. Thinning algorithms are classified into iterative and non-iterative algorithms, the latter being the faster approach [10]. The purpose of thinning is to preserve the shape of the fingerprint minutiae while eliminating extra information; it is performed by means of morphological filtering of the segmented image, removal of unwanted branches, and smoothing of the resulting central path. The algorithm follows three simple rules: remove unwanted edge points, add new edge points, and shift edge points to new locations. The rules [12, 20] depend on the number of neighbours an edge point has, and with their help erroneous pixels are removed. The algorithm is:

STEP 1: If an edge point has zero neighbours, remove the edge point.

STEP 2: If an edge point has one neighbour, search for the neighbour with the maximum edge response to continue the edge and fill the gaps. (An edge can be filled by a maximum of three pixels.)

STEP 3: If an edge point has two neighbours, there are three cases. If the point is sticking out of an otherwise straight line, compare its edge response to the corresponding point. If the point is adjoining a diagonal edge, remove it. Otherwise, the point is a valid edge point.

STEP 4: If an edge point has more than two neighbours, and the point has no link between multiple edges, thin the edge in a logically consistent way.

Figure 3 shows the algorithm's cases according to the number of an edge point's neighbours.

B. Enhanced Thinning

Ridge thinning is meant to eliminate the redundant pixels of a ridge until the ridge is one pixel wide, but this does not always happen. There remain locations where the skeleton is two or more pixels wide, that is, extra or erroneous pixels. An extra or erroneous pixel is one with more than four connected neighbours; it can destroy the integrity of bridges and spurs, and cause minutiae points to be missed or their type to be exchanged. So before the extraction of minutiae we need to eliminate these extra or erroneous pixels [4], and for this purpose we use the Enhanced Thinning Algorithm.

C. Marking of Minutiae

After thinning the fingerprint image, the next important step is marking all minutiae points. The more minutiae detected, the higher the probability of accurate results. The crossing number (CN) concept is widely used for this purpose. Together with the marking, all thinned ridges in the fingerprint image are labeled with a unique ID for the further matching process; this labeling is done using the morphological operation BWLABEL [4].

POST PROCESSING OF MINUTIAE

After the pre-processing stage, almost all minutiae are extracted from the binary and skeleton fingerprint images using various methods, including Rutovitz crossing numbers [14]. Owing to various noises in the fingerprint image that cannot be healed completely, such as false ridge breaks and ridge cross-connections, these extraction algorithms produce a large number of spurious minutiae [11] such as break, spur, merge, triangle, multiple break, ladder, lake, island, wrinkle and dot, as shown in figure 4. So for accurate fingerprint verification the post-processing stage is very necessary, as it helps to differentiate spurious minutiae from genuine ones. The more unwanted or spurious minutiae we can eliminate, the better the matching performance and the shorter the matching time. Of the types of false minutiae shown in figure 4, dot, spur, lake and island are removed by the pre-processing algorithms, but bridge, triangle, ladder and wrinkle, also known as H-points, are not. If we can remove the H-points of the image, we can eliminate most of the spurious minutiae points.
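The crossing number computation used for minutiae marking can be sketched as follows. For a ridge pixel P, CN is half the sum of absolute differences between consecutive pixel values in its eight-neighbourhood, visited in circular order: CN = 1 marks a ridge ending and CN = 3 a bifurcation. The tiny skeleton below is a hand-made illustrative example, not real fingerprint data.

```python
# Illustrative sketch of minutiae marking with the crossing number (CN).
# For a ridge pixel, CN = 0.5 * sum |P_i - P_(i+1)| over the eight
# neighbours in circular order. CN == 1 -> ending, CN == 3 -> bifurcation.

# Circular order of the 8 neighbours around (r, c).
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]

def crossing_number(skel, r, c):
    vals = [skel[r + dr][c + dc] for dr, dc in NEIGHBOURS]
    return sum(abs(vals[i] - vals[(i + 1) % 8]) for i in range(8)) // 2

def mark_minutiae(skel):
    """Return (endings, bifurcations) found on a thinned binary image."""
    endings, bifurcations = [], []
    for r in range(1, len(skel) - 1):
        for c in range(1, len(skel[0]) - 1):
            if skel[r][c] != 1:
                continue
            cn = crossing_number(skel, r, c)
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations

# A tiny thinned skeleton: a ridge ends on the left at (1, 1) and splits
# into two branches at (1, 3); border pixels are skipped.
skel = [
    [0, 0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(mark_minutiae(skel))  # ([(1, 1), (2, 4)], [(1, 3)])
```

Pixels along the middle of a ridge have CN = 2 and are not marked, which is why only genuine endings and bifurcations survive this pass.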
The process of eliminating false minutiae consists of the following steps: first extract the minutiae set, then remove short breaks, then remove spurs if any, then remove H-points [13], and finally remove close minutiae and border minutiae, leaving the true minutiae set. The elimination of false minutiae already starts with the thinning algorithms, as shown in Figure 1 (minutiae extraction steps): by applying the threshold concept and the various thinning algorithms, the short breaks and spurs are already removed. Post-processing proper starts from the next step, the removal of H-points, in which H-points are detected and eliminated. To recognize an H-point we follow some rules: the point of intersection should lie between two ending or two bifurcation points, and the distance between the bridge midpoint and the break midpoint should be within a threshold limit. We then remove minutiae that are very close to each other, and minutiae points that are within a certain distance threshold of the image border [14]. After post-processing, a large percentage of the extra or spurious minutiae are deleted, and the remaining minutiae points can be treated as genuine and used for fingerprint matching.

MATCHING OF MINUTIAE

Minutiae matching takes two sets of fingerprint minutiae and, using an algorithm, determines whether the two sets come from the same finger or not. There are several matching techniques: correlation-based matching, in which two fingerprint images are superimposed and the correlation between corresponding pixels is computed; ridge-feature-based matching, in which extracted features are compared against the extracted ridge pattern; and the minutiae-based matching technique [3], in which minutiae are extracted from two fingerprints and stored as sets of points in the two-dimensional plane. We describe this last technique here.

a) The alignment stage: In this stage one minutia is chosen from each image, and the similarity of the two ridges associated with the two reference minutiae points is calculated [9]. If the similarity is larger than a threshold value, each set of minutiae is transformed to a new coordinate system whose origin is at the reference point and whose x-axis is coincident with the direction of the reference point.
b) The matching stage: After deriving the transformed sets of minutiae points, an algorithm is used to match the pairs, assuming that matched minutiae have nearly identical direction and position.
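The alignment and matching stages can be sketched as follows. This is an illustrative sketch only: minutiae are taken as (x, y, theta) triples, the ridge-similarity test is omitted, and the position and angle tolerances are assumptions, not the values used in our experiments.

```python
# Illustrative sketch of the alignment stage: transform a minutiae set
# (x, y, theta) into the coordinate system of a chosen reference minutia,
# then count pairs that land on nearly identical position and direction.
import math

def align(minutiae, ref):
    """Translate/rotate so ref sits at the origin with direction 0."""
    rx, ry, rtheta = ref
    cos_t, sin_t = math.cos(-rtheta), math.sin(-rtheta)
    out = []
    for x, y, theta in minutiae:
        dx, dy = x - rx, y - ry
        out.append((dx * cos_t - dy * sin_t,
                    dx * sin_t + dy * cos_t,
                    (theta - rtheta) % (2 * math.pi)))
    return out

def count_matches(set_a, set_b, pos_tol=3.0, ang_tol=0.3):
    """Greedy pairing of aligned minutiae with near-identical position
    and direction (the matching stage's assumption)."""
    used, matches = set(), 0
    for ax, ay, at in set_a:
        for i, (bx, by, bt) in enumerate(set_b):
            if i in used:
                continue
            ang_diff = abs((at - bt + math.pi) % (2 * math.pi) - math.pi)
            if math.hypot(ax - bx, ay - by) <= pos_tol and ang_diff <= ang_tol:
                used.add(i)
                matches += 1
                break
    return matches

# Template b is template a rotated by 30 degrees about its first minutia;
# after alignment on the reference minutiae, all three points pair up.
a = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.5), (5.0, 8.0, 1.0)]
rot = math.radians(30)
b = [(x * math.cos(rot) - y * math.sin(rot),
      x * math.sin(rot) + y * math.cos(rot),
      t + rot) for x, y, t in a]
aligned_a = align(a, a[0])
aligned_b = align(b, b[0])
print(count_matches(aligned_a, aligned_b))  # 3
```

Because both sets are expressed relative to their own reference minutia, the rotation and translation between the two impressions cancel out, which is the whole point of the alignment stage.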

5. TOOLS USED

.NET Framework 3.5
C#
SQL Server 2005

.NET Framework 3.5

The main features of .NET Framework 3.5 are as below, which prompted us to choose this platform to develop an application of this complex nature.

Interoperability: Because computer systems commonly require interaction between newer and older applications, the .NET Framework provides means to access functionality implemented in newer and older programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is achieved using the P/Invoke feature.

Common Language Runtime engine: The Common Language Runtime (CLR) serves as the execution engine of the .NET Framework. All .NET programs execute under the supervision of the CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.

Language independence: The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible data types and programming constructs supported by the CLR and how they may or may not interact with each other, conforming to the Common Language Infrastructure (CLI) specification. Because of this feature, the .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language.

Base Class Library: The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework.

The BCL provides classes that encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction, XML document manipulation, and so on. It consists of classes and interfaces of reusable types that integrate with the CLR.

Simplified deployment: The .NET Framework includes design features and tools that help manage the installation of computer software, ensuring it does not interfere with previously installed software and that it conforms to security requirements.

Security: The design addresses some of the vulnerabilities, such as buffer overflows, that have been exploited by malicious software. Additionally, .NET provides a common security model for all applications.

Portability: While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be platform-agnostic [3], and cross-platform implementations are available for other operating systems (see Silverlight and the Alternative implementations section below). Microsoft submitted the specifications for the Common Language Infrastructure (which includes the core class libraries, the Common Type System, and the Common Intermediate Language), the C# language, and the C++/CLI language to both ECMA and the ISO, making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.

C#

By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework. However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, generate Common Intermediate Language (CIL), or generate any other specific format. Theoretically, a C# compiler could generate machine code like traditional compilers of C++ or Fortran. Some notable features of C# that distinguish it from C and C++ (and Java, where noted) are:

C# supports strongly typed implicit variable declarations with the keyword var, and implicitly typed arrays with the keyword new[] followed by a collection initializer.

Metaprogramming via C# attributes is part of the language. Many of these attributes duplicate the functionality of GCC's and Visual C++'s platform-dependent preprocessor directives.

Like C++, and unlike Java, C# programmers must use the keyword virtual to allow methods to be overridden by subclasses.

Extension methods in C# allow programmers to use static methods as if they were methods from a class's method table, allowing programmers to add methods to an object that they feel should exist on that object and its derivatives.

The type dynamic allows for run-time method binding, allowing for JavaScript-like method calls and run-time object composition.

C# has strongly typed and verbose function pointer support via the keyword delegate.

Like the Qt framework's pseudo-C++ signal and slot, C# has semantics specifically surrounding publish-subscribe style events, though C# uses delegates to do so.

C# offers Java-like synchronized method calls, via the attribute [MethodImpl(MethodImplOptions.Synchronized)], and has support for mutually-exclusive locks via the keyword lock.

The C# language does not allow for global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.

Local variables cannot shadow variables of the enclosing block, unlike C and C++.

A C# namespace provides the same level of code isolation as a Java package or a C++ namespace, with very similar rules and features to a package.

C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and if, require an expression of a type that implements the true operator, such as the Boolean type. While C++ also has a Boolean type, it can be freely converted to and from integers, and expressions such as if(a) require only that a is convertible to bool, allowing a to be an int, or a pointer. C# disallows this "integer meaning true or false" approach, on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain types of common programming mistakes in C or C++ such as if (a = b) (use of assignment = instead of equality ==).

In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one that has been garbage collected), or to a random block of memory. An unsafe pointer can point to an instance of a value-type, array, string, or a block of memory allocated on a stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them.

Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of responsibility for releasing memory that is no longer needed.

In addition to the try...catch construct to handle exceptions, C# has a try...finally construct to guarantee execution of the code in the finally block, whether an exception occurs or not.

Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication and simplify architectural requirements throughout the CLI. When implementing multiple interfaces that contain a method with the same signature, C# allows the programmer to implement each method depending on which interface that method is being called through, or, like Java, allows the programmer to implement the method once and have that be the single invocation on a call through any of the class's interfaces.
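A sketch of the first option, explicit interface implementation: the two interfaces below share the signature Read(), and the class provides a separate body for each, selected by the interface the call goes through (all names are hypothetical):

```csharp
using System;

interface IScanner { string Read(); }
interface ILogger  { string Read(); }

// Explicit interface implementation: each body is reachable only
// through a reference of the corresponding interface type.
class Device : IScanner, ILogger
{
    string IScanner.Read() => "scan";   // called via IScanner
    string ILogger.Read()  => "log";    // called via ILogger
}

class Program
{
    static void Main()
    {
        var d = new Device();
        Console.WriteLine(((IScanner)d).Read());
        Console.WriteLine(((ILogger)d).Read());
    }
}
```

Dropping the interface qualifiers and defining a single public Read() would instead give the Java-like behavior, where one body serves every interface.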

C#, unlike Java, supports operator overloading. Only the most commonly overloaded operators in C++ may be overloaded in C#.
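As a minimal sketch, the value type below (an illustrative Vec struct, not from the source) overloads + and the ==/!= pair; C# requires == and != to be overloaded together:

```csharp
using System;

struct Vec
{
    public readonly int X, Y;
    public Vec(int x, int y) { X = x; Y = y; }

    public static Vec operator +(Vec a, Vec b) => new Vec(a.X + b.X, a.Y + b.Y);
    public static bool operator ==(Vec a, Vec b) => a.X == b.X && a.Y == b.Y;
    public static bool operator !=(Vec a, Vec b) => !(a == b); // must be paired with ==

    // Overriding Equals/GetHashCode keeps == consistent with standard equality.
    public override bool Equals(object o) => o is Vec v && this == v;
    public override int GetHashCode() => X ^ Y;
}

class Program
{
    static void Main()
    {
        Vec sum = new Vec(1, 2) + new Vec(3, 4);
        Console.WriteLine(sum == new Vec(4, 6));
    }
}
```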

C# is more type safe than C++. The only implicit conversions by default are those that are considered safe, such as widening of integers. This is enforced at compile-time, during JIT, and, in some cases, at runtime. No implicit conversions occur between booleans and integers, nor between enumeration members and integers (except for literal 0, which can be implicitly converted to any enumerated type). Any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors and conversion operators, which are both implicit by default.
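To illustrate the explicit/implicit marking, a sketch with a hypothetical Celsius wrapper type: the lossless direction is declared implicit, while the other direction is declared explicit and therefore demands a cast at the call site:

```csharp
using System;

struct Celsius
{
    public readonly double Degrees;
    public Celsius(double d) { Degrees = d; }

    // Considered safe, so it may be marked implicit: no cast needed.
    public static implicit operator double(Celsius c) => c.Degrees;

    // Marked explicit: callers must write the cast, making the conversion visible.
    public static explicit operator Celsius(double d) => new Celsius(d);
}

class Program
{
    static void Main()
    {
        Celsius c = (Celsius)36.6;   // explicit cast required by the declaration
        double d = c;                // implicit conversion, no cast
        // Celsius bad = 36.6;       // error CS0266 without the cast
        Console.WriteLine(d);
    }
}
```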

C# has explicit support for covariance and contravariance, unlike Java, which has neither, and unlike C++, which has some degree of support for contravariance simply through the semantics of return types on virtual methods.
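A sketch of both directions using the built-in variance annotations (IEnumerable&lt;out T&gt; is covariant, Action&lt;in T&gt; is contravariant; generic variance annotations were added in C# 4.0). The Animal/Cat hierarchy is purely illustrative:

```csharp
using System;
using System.Collections.Generic;

class Animal { public virtual string Name => "animal"; }
class Cat : Animal { public override string Name => "cat"; }

class VarianceDemo
{
    public static string First(IEnumerable<Animal> xs)
    {
        foreach (var a in xs) return a.Name;
        return "";
    }

    static void Main()
    {
        // Covariance: a sequence of Cat is usable where a sequence of Animal is expected.
        IEnumerable<Cat> cats = new List<Cat> { new Cat() };
        Console.WriteLine(First(cats));

        // Contravariance: a handler of any Animal can stand in for a handler of Cat.
        Action<Animal> describe = a => Console.WriteLine(a.Name);
        Action<Cat> catHandler = describe;
        catHandler(new Cat());
    }
}
```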

Enumeration members are placed in their own scope. C# provides properties as syntactic sugar for a common pattern in which a pair of methods, accessor (getter) and mutator (setter), encapsulate operations on a single attribute of a class. No redundant method signatures for the getter/setter implementations need be written, and the property may be accessed using attribute syntax rather than more verbose method calls.
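A sketch of the property pattern, using a hypothetical Student class in the spirit of the attendance system: the setter hides validation behind what reads like plain field access:

```csharp
using System;

class Student
{
    private int attendance;           // backing field

    public int Attendance             // property wrapping getter and setter
    {
        get => attendance;
        set => attendance = Math.Max(0, value); // validation lives in the mutator
    }
}

class Program
{
    static void Main()
    {
        var s = new Student();
        s.Attendance = 5;             // attribute syntax, but the setter runs
        s.Attendance = -1;            // clamped to 0 by the setter
        Console.WriteLine(s.Attendance);
    }
}
```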

Checked exceptions are not present in C# (in contrast to Java). This has been a conscious decision based on the issues of scalability and versionability.

Though primarily an imperative language, C# has supported functional programming techniques since C# 3.0 through first-class function objects and lambda expressions.
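A minimal sketch of both features together: a function value of type Func&lt;int, bool&gt; is passed as an argument, and a lambda expression supplies it at the call site (names are illustrative):

```csharp
using System;
using System.Linq;

class LambdaDemo
{
    // First-class functions: the predicate is an ordinary value of delegate type.
    public static int CountMatching(int[] xs, Func<int, bool> pred) =>
        xs.Count(pred);   // LINQ itself is built on such delegates

    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5 };
        Console.WriteLine(CountMatching(data, n => n % 2 == 0)); // lambda expression
    }
}
```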

SQL Server 2005

SQL Server 2005 (formerly codenamed "Yukon") was released in October 2005. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML is converted to an internal binary data type before being stored in the database. Specialized indexing methods were made available for XML data. XML data is queried using XQuery; SQL Server 2005 added some extensions to the T-SQL language to allow embedding XQuery queries in T-SQL. In addition, it also defines a new extension to XQuery, called XML DML, that allows query-based modifications to XML data. SQL Server 2005 also allows a database server to be exposed over web services using Tabular Data Stream (TDS) packets encapsulated within SOAP requests. When the data is accessed over web services, results are returned as XML.

Common Language Runtime (CLR) integration was introduced with this version, enabling one to write SQL code as managed code executed by the CLR. For relational data, T-SQL has been augmented with error handling features (try/catch) and support for recursive queries with CTEs (Common Table Expressions). SQL Server 2005 has also been enhanced with new indexing algorithms, syntax and better error recovery systems. Data pages are checksummed for better error resiliency, and optimistic concurrency support has been added for better performance. Permissions and access control have been made more granular, and the query processor handles concurrent execution of queries in a more efficient way. Partitions on tables and indexes are supported natively, so scaling out a database onto a cluster is easier. SQL CLR was introduced with SQL Server 2005 to let it integrate with the .NET Framework.
SQL Server 2005 introduced "MARS" (Multiple Active Results Sets), a method of allowing usage of database connections for multiple purposes. SQL Server 2005 introduced DMVs (Dynamic Management Views), which are specialized views and functions that return server state information that can be

used to monitor the health of a server instance, diagnose problems, and tune performance. Service Pack 1 (SP1) of SQL Server 2005 introduced Database Mirroring, a high availability option that provides redundancy and failover capabilities at the database level. Failover can be performed manually or can be configured for automatic failover. Automatic failover requires a witness partner and an operating mode of synchronous (also known as high-safety or full safety).

6. APPLICATIONS The applications of fingerprint identification systems include: Finance, insurance and securities: financial safe management, authorization of staff in important systems and departments, fingerprint-verified cash withdrawal, fingerprint identification for credit cards, identification in securities exchange, and insurance beneficiary identification. Information industry: upgrading identification in computer application systems (fingerprints instead of passwords), identification in internet electronic trading systems, smart card password exchange (fingerprints instead of passwords), and administrator identification for communication and network equipment (switches, internet, etc.). Security industry: fingerprint access control systems, fingerprint door locks, fingerprint car locks, building fingerprint door locks, fingerprint access control for treasuries and gun warehouses, and so on.

7. LIMITATIONS One of the open issues in fingerprint verification is the lack of robustness against image quality degradation [80, 2]. The performance of a fingerprint recognition system is heavily affected by fingerprint image quality. Several factors determine the quality of a fingerprint image: skin conditions (e.g., dryness, wetness, dirtiness, temporary or permanent cuts and bruises), sensor conditions (e.g., dirtiness, noise, size), user cooperation, etc. Some of these factors cannot be avoided and some of them vary over time. Poor quality images result in spurious and missed features, thus degrading the performance of the overall system. Therefore, it is very important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. We can either reject the degraded images or adjust some of the steps of the recognition system based on the estimated quality. Several algorithms for automatic fingerprint image quality assessment have been proposed in the literature [2]. Also, the benefits of incorporating automatic quality measures in fingerprint verification have been shown in recent studies [28, 6, 32, 5].

A successful approach to enhance the performance of a fingerprint verification system is to combine the results of different recognition algorithms. A number of simple fusion rules and complex trained fusion rules have been proposed in the literature [11, 49, 81]. Examples of combining minutia- and texture-based approaches are to be found in [75, 61, 28]. Also, a comprehensive study of the combination of different fingerprint recognition systems is done in [30]. However, it has been found that simple fusion approaches are not always outperformed by more complex fusion approaches, calling for further studies of the subject. Another recent issue in fingerprint recognition is the use of multiple sensors, either for sensor fusion [60] or for sensor interoperability [74, 7].
Fusion of sensors offers some important potentialities [60]: a) the overall performance can be improved substantially, b) population coverage can be improved by reducing enrollment and verification failures, and c) it may naturally resist spoofing attempts against biometric systems. Regarding sensor interoperability, most biometric systems are designed under the assumption that the data to be compared are obtained with the same sensor, and are thus restricted in their ability to match or compare biometric data originating from different sensors in practice. As a result, changing the sensor may affect the performance of the system. Recent progress has been made in the development of common data exchange formats to facilitate the exchange of feature sets between vendors [19]. However, little effort has been invested in the development of algorithms to alleviate the problem of sensor interoperability. Some approaches to handle this problem are given in [74], one example of which is the normalization of raw data and extracted features. As a future remark, interoperability scenarios should also be included in vendor and algorithm competitions, as done in [37].

Due to the low cost and reduced size of new fingerprint sensors, several devices in daily use already include embedded fingerprint sensors (e.g., mobile telephones, PC peripherals, PDAs, etc.). However, using small-area sensors implies having less information available from a fingerprint and little overlap between different acquisitions of the same finger, which has a great impact on the performance of the recognition system [59]. Some fingerprint sensors are equipped with mechanical guides in order to constrain the finger position. Another alternative is to perform several acquisitions of a finger, gathering (partially) overlapping information during enrollment, and reconstruct a full fingerprint image.

In spite of the numerous advantages of biometric systems, they are also vulnerable to attacks [82]. Recent studies have shown the vulnerability of fingerprint systems to fake fingerprints [35, 72, 71, 63]. Surprisingly, fake biometric input to the sensor is shown to be quite successful. Liveness detection could be a solution and is receiving great attention [26, 78, 8]. It has also been shown that the matching score is a valuable piece of information for an attacker [82, 73, 62]. Using the feedback provided by this score, signals in the channels of the verification system can be modified iteratively, and the system is compromised in a number of iterations. A solution could be to conceal the matching score and release only an acceptance/rejection decision, but this may not be suitable in certain biometric systems [82]. With advances in fingerprint sensing technology, new high-resolution sensors are able to acquire ridge pores and even perspiration activity of the pores [40, 21]. These features can provide additional discriminative information to existing fingerprint recognition systems. In addition, acquiring perspiration activity of the pores can be used to detect spoofing attacks.

8. CONCLUSION This paper introduces an efficient automatic attendance system based on a minutiae-based fingerprint technique. We use methods that are simple, effective and accurate for fast execution of the enhancement and thinning algorithms on the fingerprint image. In addition, we examine the experimentally determined constant K used during image enhancement with the Fourier Transform, which allows us to differentiate the enhanced image quality that leads to the best verification of the extracted minutiae. The performance of the proposed system was evaluated on the FVC 2000 database (500 images) [21]; the time taken for verification was very low, the verification rate was high, and the accuracy was approximately 92%.

REFERENCES
[1] Fakhreddine Karray, Jamil Abou Saleh, Mo Nours Arab and Milad Alemzadeh, "Multi Modal Biometric Systems: A State of the Art Survey," Pattern Analysis and Machine Intelligence Laboratory, University of Waterloo, Waterloo, Canada.
[2] Anil K. Jain, Arun Ross and Salil Prabhakar, "An Introduction to Biometric Recognition," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, Issue 1, Jan. 2004, pp. 4-20.
[3] D. Maltoni, D. Maio, A. K. Jain and S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2003.
[4] Manvjeet Kaur, Mukhwinder Singh, Akshay Girdhar and Parvinder S. Sandhu, "Fingerprint Verification System using Minutiae Extraction Technique," World Academy of Science, Engineering and Technology, Vol. 46, 2008.
[5] H. C. Lee and R. E. Gaensslen, Advances in Fingerprint Technology, Elsevier Science, New York, ISBN 0-444-01579-5.
[6] "Guide to Fingerprint Recognition," DigitalPersona, Inc., 720 Bay Road, Redwood City, CA 94063, USA, http://www.digitalpersona.com
[7] L. O'Gorman, "Overview of Fingerprint Verification Technologies," Elsevier Information Security Technical Report, Vol. 3, No. 1, 1998.
[8] B. G. Sherlock, D. M. Monro and K. Millard, "Fingerprint Enhancement by Directional Fourier Filtering," IEE Proc.-Vis. Image Signal Processing, Vol. 141, No. 2, April 1994.
[9] Lin Hong, "Automatic Personal Identification Using Fingerprints," Ph.D. Thesis, ISBN 0-599-07596-1, 1998.
[10] E. Hastings, "A Survey of Thinning Methodologies," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 4, Issue 9, pp. 869-885, 1992.
