
LIBRARY MANAGEMENT SYSTEM

1. INTRODUCTION

1.1 Introduction to Library Management System:


The project titled Library Management System is library management software for monitoring and controlling the transactions in a library. The purpose of the LMS is to manage the daily operations of a library effectively and efficiently; the system is organized into the modules listed below, which handle the library's day-to-day activities.
Library management is a sub-discipline of management that focuses on specific issues faced by libraries and library management professionals. It encompasses normal managerial tasks as well as intellectual freedom and fundraising responsibilities.
Transactions such as login, register, add, search, delete and issue are provided. The Library Management System stores details such as the name, address, ID number and date of birth of members working in the library and of the users who visit it. Details of books, such as book name, book number, price, author, edition and year of publication, are also stored. The application mainly focuses on the basic operations of a library: adding new members and new books, updating information, searching for books and members, and providing the facility to borrow and return books.

1.1.2 Modules:

- User Management
- Book Inventory Module
- Book Borrowing Module
- Search Facility Management
- Assign Book Module
- Pending/Overdue Books Module

1.1.3 Users:
- Admin

1.1.4 Keywords:
Generic Technology Keywords: Database, User Interface, Programming
Specific Technology Keywords: ASP.NET 3.5, C#.NET, MS SQL Server 2005

Project Keywords: Presentation, Business Object, Data Access Layer

SDLC Keywords: Analysis, Design, Implementation, Testing

1.2 Overview:
System Requirements:
Hardware:

Hardware consists of the physical components of the computer: the input, storage, processing, control and output devices. The software that manages the resources of the computer is known as the operating system. A computer always includes external storage to hold data and programs; popular storage media include floppy disks, hard disks and magnetic tapes. The hardware used in this project is:

Specification:
Processor: Intel Pentium or higher

RAM: 512 MB

Hard Disk: 40 GB

Server Hardware:
Hard disk: minimum 80 GB
RAM: 1 GB
Windows Server OS

Client Hardware:
Local Area Network

Software:
Software is a set of programs written to perform a particular task and is an essential part of any computer system. The software used in this project is:

Specification:
Operating System (Server): Windows XP or later

Database Server: Microsoft SQL Server 2005

Web Server: IIS (Internet Information Services) 6.0 or above

Web Technologies: HTML, CSS, JavaScript, ASP.NET with C#

Client: Microsoft Internet Explorer 6.0 or above

IDE & Tools: Microsoft Visual Studio .NET 2008, AJAX

Server Software:
Windows XP with Service Pack 2
Microsoft Visual Studio 2008
ASP.NET with C#
IIS (Internet Information Services)
Microsoft SQL Server 2005
Provision for connectivity

Client Software:
Internet browser, version 6.0 or higher

2. FEASIBILITY ANALYSIS

Whatever we conceive need not be feasible, so it is wise to study the feasibility of any problem we undertake. Feasibility is the study of the impact that the development of a system has on the organization. The impact can be either positive or negative; when the positives dominate the negatives, the system is considered feasible. Here the feasibility study is performed in two ways: technical feasibility and economic feasibility.

2.1 Technical Feasibility:


We can say that the system is technically feasible, since there is no great difficulty in obtaining the resources required for developing and maintaining it. All the resources needed for the development of the software, as well as for its maintenance, are available in the organization; we are simply utilizing resources that are already available.

2.2 Economic Feasibility:

Development of this application is highly feasible economically. The organization need not spend much money on development, since the required resources are already available; the only thing to be done is to create an environment for development with effective supervision. If we do so, we can attain the maximum usability of the corresponding resources. Even after development, the organization will not have to invest further. Therefore, the system is economically feasible.

3. INTRODUCTION TO DOT NET FRAMEWORK

The Microsoft .NET Framework is a software component that is a part of Microsoft Windows operating systems. It has a large library of pre-coded solutions to common program requirements, and manages the execution of programs written specifically for the framework. The .NET Framework is a key Microsoft offering, and is intended to be used by most new applications created for the Windows platform.

The pre-coded solutions that form the framework's Base Class Library cover a large
range of programming needs in areas including: user interface, data access, database
connectivity, cryptography, web application development, numeric algorithms, and network
communications. The class library is used by programmers who combine it with their own
code to produce applications.

Programs written for the .NET Framework execute in a software environment that
manages the program's runtime requirements. This runtime environment, which is also a part
of the .NET Framework, is known as the Common Language Runtime (CLR).

The CLR provides the appearance of an application virtual machine, so that programmers need not consider the capabilities of the specific CPU that will execute the program. The CLR also provides other important services such as security mechanisms, memory management, and exception handling. The class library and the CLR together compose the .NET Framework.

The .NET Framework ships with Windows Server 2003 and later versions of Windows, and can be installed on older versions of Windows.

4. PRINCIPAL DESIGN FEATURES

4.1 Interoperability:
Because interaction between new and older applications is commonly required, the .NET Framework provides means to access functionality that is implemented in programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework, and access to other native functionality is provided using the P/Invoke feature.

4.2 Common Runtime Engine:

Programming languages on the .NET Framework compile into an intermediate language known as the Common Intermediate Language, or CIL (formerly known as Microsoft Intermediate Language, or MSIL). In Microsoft's implementation, this intermediate language is not interpreted, but rather compiled into native code in a manner known as just-in-time compilation (JIT). The combination of these concepts is called the Common Language Infrastructure (CLI), a specification; Microsoft's implementation of the CLI is known as the Common Language Runtime (CLR).

4.3 Language Independence:

The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible data types and programming constructs supported by the CLR, and how they may or may not interact with each other. Because of this feature, the .NET Framework supports the exchange of instances of types between programs written in any of the .NET languages.
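As a small sketch of what the CTS means in practice, the C# keyword int is merely an alias for the CTS type System.Int32, the same type that, for example, a VB.NET Integer compiles to, so values flow between languages unchanged:

using System;

class CtsDemo
{
    static void Main()
    {
        int a = 42;              // C# alias
        System.Int32 b = a;      // the same CTS type, spelled out
        Console.WriteLine(a.GetType().FullName);  // prints System.Int32
        Console.WriteLine(a == b);                // True: identical type and value
    }
}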

4.4 Base Class Library:


The Base Class Library (BCL), part of the Framework Class Library (FCL), is a
library of functionality available to all languages using the .NET Framework. The BCL
provides classes which encapsulate a number of common functions, including file reading
and writing, graphic rendering, database interaction and XML document manipulation.
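A minimal sketch of typical BCL usage, combining file I/O from System.IO with XML manipulation from System.Xml (the file name and XML content are chosen for illustration):

using System;
using System.IO;
using System.Xml;

class BclDemo
{
    static void Main()
    {
        File.WriteAllText("note.txt", "Library Management System");  // file writing
        string text = File.ReadAllText("note.txt");                  // file reading

        XmlDocument doc = new XmlDocument();                         // XML manipulation
        doc.LoadXml("<book><title>" + text + "</title></book>");
        Console.WriteLine(doc.SelectSingleNode("/book/title").InnerText);
    }
}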

4.5 Simplified Deployment:


Installation of computer software must be carefully managed to ensure that it does not
interfere with previously installed software, and that it conforms to increasingly stringent
security requirements. The .NET framework includes design features and tools that help
address these requirements.

4.6 Security:
The design is meant to address some of the vulnerabilities, such as buffer overflows,
that have been exploited by malicious software. Additionally, .NET provides a common
security model for all applications.

4.7 Portability:
The design of the .NET Framework allows it to be platform-agnostic, and thus cross-platform compatible. That is, a program written to use the framework should run without change on any type of system for which the framework is implemented.
In addition, Microsoft submits the specifications for the Common Language Infrastructure (which includes the core class libraries, the Common Type System, and the Common Intermediate Language), the C# language, and the C++/CLI language to both ECMA and ISO, making them available as open standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms.

5. ARCHITECTURE
5.1 CLI:
The core aspects of the .NET Framework lie within the Common Language Infrastructure, or CLI. The purpose of the CLI is to provide a language-agnostic platform for application development and execution, including functions for exception handling, garbage collection, security, and interoperability. Microsoft's implementation of the CLI is called the Common Language Runtime, or CLR.

The CLR is composed of four primary parts:

- Common Type System (CTS)
- Common Language Specification (CLS)
- Metadata
- Virtual Execution System (VES)

5.2 Assemblies:
The intermediate CIL code is housed in .NET assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) format, common on the Windows platform for all DLL and EXE files. An assembly consists of one or more files, one of which must contain the manifest, which holds the metadata for the assembly.

The complete name of an assembly (not to be confused with the file name on disk) contains its simple text name, version number, culture and public key token. The public key token is a hash of the public key with which the assembly is signed; two assemblies with the same public key token are therefore guaranteed to come from the same publisher.

A private key, known only to the creator of the assembly, can also be specified; it is used for strong naming and guarantees that a new version of the assembly comes from the same author (strong naming is required for adding an assembly to the Global Assembly Cache).
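The complete name is visible at runtime through reflection. The following sketch prints the full name of the assembly containing System.String, which on .NET 2.0/3.5 is the strongly named mscorlib:

using System;
using System.Reflection;

class AssemblyNameDemo
{
    static void Main()
    {
        Assembly asm = typeof(string).Assembly;  // mscorlib, a strongly named assembly
        // e.g. "mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
        Console.WriteLine(asm.FullName);
    }
}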

5.3 Metadata:
All CIL is self-describing through .NET metadata. The CLR checks the metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata also contains information about the assembly, and is used to implement the reflective programming capabilities of the .NET Framework.
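The sketch below shows both halves of that statement: a custom attribute created by the developer, and reflection reading the metadata back at runtime (the attribute and class names are illustrative):

using System;

[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name;
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Library Team")]   // custom metadata attached to the type
class Catalogue { }

class MetadataDemo
{
    static void Main()
    {
        // Reflection reads the custom metadata back at runtime.
        object[] attrs = typeof(Catalogue).GetCustomAttributes(typeof(AuthorAttribute), false);
        Console.WriteLine(((AuthorAttribute)attrs[0]).Name);  // prints "Library Team"
    }
}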

5.4 Class Library:


The Base Class Library, sometimes incorrectly referred to as the Framework Class
Library (FCL) (which is a superset including the Microsoft.* namespaces), is a library of
classes available to all languages using the .NET Framework.

The BCL provides classes which encapsulate a number of common functions such as
file reading and writing, graphic rendering, database interaction, XML document
manipulation, and so forth. The BCL is much larger than other libraries, but has much more
functionality in one package.

5.5 Security:
The .NET Framework has its own security mechanism, with two general features:

- Code Access Security (CAS)
- Validation and verification

Code Access Security is based on evidence that is associated with a specific assembly.
Typically the evidence is the source of the assembly (whether it is installed on the local
machine, or has been downloaded from the intranet or Internet). Code Access Security uses
evidence to determine the permissions granted to the code. Other code can demand that
calling code is granted a specified permission.

The demand causes the CLR to perform a call stack walk: the assembly of every method in the call stack is checked for the required permission, and if any assembly has not been granted the permission, a security exception is thrown.

When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. During validation the CLR checks that the assembly contains
valid metadata and CIL, and it checks that the internal tables are correct. Verification is not
so exact. The verification mechanism checks to see if the code does anything
that is 'unsafe'. The algorithm used is quite conservative and hence sometimes code that is
'safe' is not verified. Unsafe code will only be executed if the assembly has the 'skip
verification' permission, which generally means code that is installed on the local machine.
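A minimal sketch of a demand in this legacy CAS model (the file path is an assumption for illustration): the Demand call triggers the stack walk described above, and a SecurityException is thrown if any caller lacks the permission.

using System;
using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void Main()
    {
        try
        {
            // Demand read access before touching the file; this walks the call stack.
            FileIOPermission perm = new FileIOPermission(
                FileIOPermissionAccess.Read, @"C:\library\books.xml");
            perm.Demand();
            Console.WriteLine("All callers hold the permission.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("A caller in the stack lacks the permission.");
        }
    }
}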

The .NET Framework uses application domains as a mechanism for isolating code running in a process. Application domains can be created, and code can be loaded into or unloaded from them, independently of other application domains. This helps increase the fault tolerance of the application, as faults or crashes in one application domain do not affect the rest of the application.

Application domains can also be configured independently with different security privileges. This can help increase the security of the application by separating potentially unsafe code. However, the developer has to split the application into domains; the CLR does not do it automatically.
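A sketch of the mechanism: a second application domain is created, code is executed inside it, and it is unloaded again without disturbing the default domain.

using System;

class AppDomainDemo
{
    static void PrintDomain()
    {
        Console.WriteLine("Running in: " + AppDomain.CurrentDomain.FriendlyName);
    }

    static void Main()
    {
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");        // isolated domain
        sandbox.DoCallBack(new CrossAppDomainDelegate(PrintDomain));  // run code inside it
        AppDomain.Unload(sandbox);  // a fault here would not bring down Main's domain
    }
}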
5.6 Memory Management:
The .NET Framework CLR frees the developer from the burden of managing memory (allocating and freeing it when done); it does the memory management itself. To this end, memory for instantiations of .NET types (objects) is allocated contiguously from the managed heap, a pool of memory managed by the CLR.

As long as a reference to an object exists, either directly or via a graph of objects, the object is considered to be in use by the CLR. When no reference to an object remains, so that it can no longer be reached or used, it becomes garbage; however, it still holds on to the memory allocated to it. The .NET Framework includes a garbage collector which runs periodically, on a thread separate from the application's thread, enumerating all unusable objects and reclaiming the memory allocated to them.

The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a certain amount of memory has been used or there is enough memory pressure on the system. Since it is not guaranteed when the conditions to reclaim memory are reached, GC runs are non-deterministic.

Each .NET application has a set of roots, which are a set of pointers maintained by the
CLR that point to objects on the managed heap (managed objects). These include references
to static objects and objects defined as local variables or method parameters currently in
scope, as well as objects referred to by CPU registers.

When the GC runs, it pauses the application and then, for each object referred to by a root, recursively enumerates all the objects reachable from it and marks them as reachable. It uses .NET metadata and reflection to discover the objects encapsulated by an object, and then recursively walks them.

It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection; all the objects not marked as reachable are garbage. This is the mark phase. Since the memory held by garbage is of no consequence, it is considered free space. However, this leaves chunks of free space between objects which were initially contiguous.

The objects are then compacted together, using memcpy to copy them over to the free space and make them contiguous again. Any reference invalidated by moving an object is updated by the GC to reflect the new location. The application is resumed after the garbage collection is over.

The GC used by the .NET Framework is actually generational. Objects are assigned a generation; newly created objects belong to Generation 0. Objects that survive a garbage collection are tagged as Generation 1, and Generation 1 objects that survive another collection are promoted to Generation 2.
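The generational behaviour can be observed directly. In the sketch below (forcing collections with GC.Collect is for demonstration only, not production use), an object is promoted through the generations as it survives collections:

using System;

class GcDemo
{
    static void Main()
    {
        object book = new object();
        Console.WriteLine(GC.GetGeneration(book));  // 0: freshly allocated

        GC.Collect();                               // object survives, is promoted
        Console.WriteLine(GC.GetGeneration(book));  // 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(book));  // 2: the oldest generation
    }
}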

6. ABOUT THE DATABASE USED

Microsoft SQL Server is a Relational Database Management System (RDBMS) produced by Microsoft. Its primary query language is Transact-SQL, an implementation of the ANSI/ISO standard Structured Query Language (SQL) used by both Microsoft and Sybase.

6.1 Architecture:
The architecture of Microsoft SQL Server is broadly divided into three components:

- SQLOS
- The Relational Engine
- The Protocol Layer

SQLOS implements the basic services required by SQL Server, including thread scheduling, memory management and I/O management.

The Relational Engine implements the relational database components, including support for databases, tables, queries and stored procedures, as well as implementing the type system.

The Protocol Layer exposes the SQL Server functionality.

6.2 SQLOS:
SQLOS is the base component in the SQL Server architecture. It implements functions normally associated with the operating system: thread scheduling, memory management, I/O management, buffer pool management, resource management, synchronization primitives and locking, and deadlock detection.

Because the requirements of SQL Server are highly specialized, SQL Server implements its own memory and thread management system, rather than using the generic ones provided by the operating system. It divides all the operations it performs into a series of tasks - both background maintenance jobs and the processing of requests from clients. Internally, a pool of worker threads is maintained, onto which the tasks are scheduled.

A task is associated with a thread until it is completed; only after its completion is the thread freed and returned to the pool. If there are no free threads to assign a task to, the task is temporarily blocked. Each worker thread is mapped onto either an operating system thread or a fiber.

Fibers are user-mode threads that implement co-operative multitasking. Using fibers means SQLOS does all the bookkeeping of thread management itself, and can therefore optimize threads for its particular use. SQLOS also includes synchronization primitives for locking, as well as monitoring of the worker threads to detect and recover from deadlocks.

SQLOS handles the memory requirements of SQL Server as well. Reducing disc I/O is one of the primary goals of specialized memory management in SQL Server. It maintains a buffer pool, which is used to cache data pages from the disc and to satisfy the memory requirements of the query processor and of other internal data structures.

SQLOS monitors all the memory allocated from the buffer pool, ensuring that components return unused memory to the pool, and shuffles data out of the cache to make room for newer data. For changes made to the data in the buffer, SQLOS writes the data back to the disc lazily - that is, when the disc subsystem is free, or when a significant number of changes have been made to the cache - while still serving requests from the cache. For this it implements the Lazy Writer, which handles the task of writing the data back to persistent storage.

6.3 Relational Engine:

The Relational Engine implements the relational data store using the capabilities provided by SQLOS, which are exposed to this layer via the private SQLOS API. It implements the type system, defining the types of data that can be stored in the tables, as well as the different kinds of data items (such as tables, indexes and logs) that can be stored. It includes the Storage Engine, which handles the way data is stored on persistent storage devices and provides methods for fast access to the data.

The Storage Engine implements log-based transactions to ensure that any changes to the data are ACID-compliant. The Relational Engine also includes the query processor, which is the component that retrieves data. SQL queries specify what data to retrieve, and the query processor optimizes and translates each query into the sequence of operations needed to retrieve the data. The operations are then performed by worker threads, which are scheduled for execution by SQLOS.

6.4 Protocol Layer:

The Protocol Layer implements the external interface to SQL Server. All operations that can be invoked on SQL Server are communicated to it via a Microsoft-defined format called Tabular Data Stream (TDS). TDS packets can be encased in other, physical-transport-dependent protocols, including TCP/IP, named pipes, and shared memory; consequently, access to SQL Server is available over these protocols. In addition, the SQL Server API is also exposed over web services.

6.5 Data Storage:
The main unit of data storage is a database, which is a collection of tables with typed
columns. SQL Server supports different data types, including primary types such as Integer,
Float, Decimal, Char (including character strings), Varchar (variable length character strings),
binary (for unstructured blobs of data), Text (for textual data) among others. It also allows
user-defined composite types (UDTs) to be defined and used.
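As a sketch of how these typed columns look from client code (the connection string is an assumption; the table mirrors the BookRecord table used in the sample code later in this report):

using System.Data.SqlClient;

class DataTypesDemo
{
    static void Main()
    {
        string connStr = @"Data Source=.\SQLEXPRESS;Initial Catalog=LIBRARY;Integrated Security=True";
        using (SqlConnection con = new SqlConnection(connStr))
        {
            con.Open();
            string ddl = "CREATE TABLE BookRecord (" +
                         "bookid VARCHAR(10) PRIMARY KEY, " +  // variable-length string
                         "bookname VARCHAR(100), " +
                         "bookquantity INT, " +                // Integer
                         "price DECIMAL(8,2), " +              // Decimal
                         "cover VARBINARY(MAX))";              // unstructured blob
            new SqlCommand(ddl, con).ExecuteNonQuery();
        }
    }
}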

SQL Server also makes server statistics available as virtual tables and views called
Dynamic Management Views or DMVs. A database can also contain other objects
including views, stored procedures, indexes and constraints, in addition to tables, along with
a transaction log.

An SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files with a maximum file size of 2^20 TB. The data in the database is stored in primary data files with an extension .mdf. Secondary data files, identified with an .ndf extension, are used to store optional metadata. Log files are identified with the .ldf extension.

Storage space allocated to a database is divided into sequentially numbered pages, each 8 KB in size. A page is the basic unit of I/O for SQL Server operations. Each page is marked with a 96-byte header which stores metadata about the page, including the page number, the page type, the free space on the page and the ID of the object that owns it. The page type defines the data contained in the page: data stored in the database, an index, an allocation map (which holds information about how pages are allocated to tables and indexes), a change map (which holds information about the changes made to other pages since the last backup or logging), or large data types such as image or text. While the page is the basic unit of an I/O operation, space is actually managed in terms of an extent, which consists of 8 pages.

A database object can either span all 8 pages in an extent (a "uniform extent") or share an extent with up to 7 other objects (a "mixed extent"). A row in a database table cannot span more than one page, and so is limited to 8 KB in size. However, if the data exceeds 8 KB and the row contains Varchar or Varbinary data, the data in those columns is moved to a new page (or possibly a sequence of pages, called an allocation unit) and replaced with a pointer to the data.
6.6 Buffer Management:
SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be buffered in memory, and the set of all pages currently buffered is called the buffer cache. The amount of memory available to SQL Server determines how many pages will be cached in memory.

The buffer cache is managed by the Buffer Manager. Either reading from or writing to any page copies it to the buffer cache; subsequent reads or writes are redirected to the in-memory copy rather than the on-disc version. The page is updated on the disc by the Buffer Manager only if the in-memory cache has not been referenced for some time.

While writing pages back to disc, asynchronous I/O is used, whereby the I/O operation is done in a background thread so that other operations do not have to wait for it to complete. Each page is written along with its checksum. When reading the page back, its checksum is computed again and matched with the stored version to ensure the page has not been damaged or tampered with in the meantime.

6.7 Logging and Transactions:

SQL Server ensures that any change to the data is ACID-compliant: it uses transactions to ensure that any operation either completes totally or is undone if it fails, never leaving the database in an intermediate state. Using transactions, a sequence of actions can be grouped together with the guarantee that either all of the actions will succeed or none will. SQL Server implements transactions using a write-ahead log.
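From client code, the all-or-nothing guarantee looks like the sketch below (the connection string is an assumption; the tables are those used in the sample code later in this report): both statements commit together, or neither takes effect.

using System.Data.SqlClient;

class TransactionDemo
{
    static void Main()
    {
        using (SqlConnection con = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;Initial Catalog=LIBRARY;Integrated Security=True"))
        {
            con.Open();
            SqlTransaction tx = con.BeginTransaction();
            try
            {
                new SqlCommand("UPDATE BookRecord SET bookquantity = bookquantity - 1 " +
                               "WHERE bookid = 'b1'", con, tx).ExecuteNonQuery();
                new SqlCommand("INSERT INTO Assign (studentid, bookid) VALUES ('s1', 'b1')",
                               con, tx).ExecuteNonQuery();
                tx.Commit();    // both changes become durable together
            }
            catch
            {
                tx.Rollback();  // neither change is applied
            }
        }
    }
}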

Any change made to a page updates the in-memory cache of the page; simultaneously, all the operations performed are written to the log, along with the ID of the transaction the operation was a part of. Each log entry is identified by an increasing Log Sequence Number (LSN), which ensures that no event overwrites another. SQL Server ensures that the log is written to the disc before the actual page is written back. This enables SQL Server to ensure the integrity of the data even if the system fails.

If both the log and the page were written before the failure, the entire data is on persistent storage and integrity is ensured. If only the log was written (the page was either not written or not written completely), then the actions can be read from the log and repeated to restore integrity. If the log wasn't written, integrity is also maintained, although the database is left in a state as if the transaction never occurred.

If the log was only partially written, the actions associated with the unfinished transaction are discarded; since the log was only partially written, the page is guaranteed not to have been written, again ensuring data integrity. Removing the unfinished log entries effectively undoes the transaction. SQL Server ensures consistency between the log and the data every time an instance is restarted.

6.8 Concurrency and Locking:

SQL Server allows multiple clients to use the same database concurrently. As such, it needs to control concurrent access to shared data to ensure data integrity, for example when multiple clients update the same data, or when clients attempt to read data that is in the process of being changed by another client.

SQL Server provides two modes of concurrency control: pessimistic concurrency and optimistic concurrency. When pessimistic concurrency control is being used, SQL Server controls concurrent access by using locks. Locks can be either shared or exclusive. An exclusive lock grants the user exclusive access to the data: no other user can access the data as long as the lock is held.

Shared locks are used when data is being read: multiple users can read from data locked with a shared lock, but cannot acquire an exclusive lock; the latter would have to wait for all shared locks to be released. Locks can be applied at different levels of granularity: on entire tables, on pages, or even on a per-row basis. For indexes, a lock can cover either the entire index or individual index leaves. The level of granularity to be used is defined on a per-database basis by the database administrator. While a fine-grained locking system allows more users to use a table or index simultaneously, it requires more resources, so it does not automatically yield a higher-performing solution.

SQL Server also includes two lighter-weight mutual exclusion solutions, latches and spinlocks, which are less robust than locks but less resource-intensive. SQL Server uses them for DMVs and other resources that are usually not busy. SQL Server also monitors all worker threads that acquire locks to ensure that they do not end up in deadlocks; if they do, SQL Server takes remedial measures, which in many cases means killing one of the threads entangled in the deadlock and rolling back the transaction it started.

To implement locking, SQL Server contains the Lock Manager. The Lock Manager
maintains an in-memory table that manages the database objects and locks, if any, on them
along with other metadata about the lock. Access to any shared object is mediated by the lock
manager, which either grants access to the resource or blocks it.

SQL Server also provides an optimistic concurrency control mechanism, similar to the multi-version concurrency control used in other databases. The mechanism creates a new version of a row whenever the row is updated, instead of overwriting it; that is, a row is additionally identified by the ID of the transaction that created that version of the row.

Both the old and the new versions of the row are stored and maintained, though the old versions are moved out of the database into the system database identified as tempdb. When a row is in the process of being updated, other requests are not blocked (unlike locking) but are executed against the older version of the row. If the other request is an update statement, it results in two different versions of the row, both of which are stored by the database, identified by their respective transaction IDs.
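A sketch of opting into this optimistic mode from client code (the connection string is an assumption; ALLOW_SNAPSHOT_ISOLATION needs to be enabled once per database):

using System.Data;
using System.Data.SqlClient;

class SnapshotDemo
{
    static void Main()
    {
        using (SqlConnection con = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;Initial Catalog=LIBRARY;Integrated Security=True"))
        {
            con.Open();
            new SqlCommand("ALTER DATABASE LIBRARY SET ALLOW_SNAPSHOT_ISOLATION ON", con)
                .ExecuteNonQuery();

            // Readers in a snapshot transaction see the row version that existed
            // when the transaction began, instead of blocking on writers' locks.
            SqlTransaction tx = con.BeginTransaction(IsolationLevel.Snapshot);
            SqlCommand read = new SqlCommand(
                "SELECT bookname FROM BookRecord WHERE bookid = 'b1'", con, tx);
            System.Console.WriteLine(read.ExecuteScalar());
            tx.Commit();
        }
    }
}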

6.9 Data Retrieval:

The main mode of retrieving data from an SQL Server database is querying for it. A query is expressed using a variant of SQL called T-SQL, a dialect Microsoft SQL Server shares with Sybase SQL Server due to their shared legacy. The query declaratively specifies what is to be retrieved, and is processed by the query processor, which figures out the sequence of steps necessary to retrieve the requested data.

The sequence of actions necessary to execute a query is called a query plan. There might be multiple ways to process the same query: for example, for a query that contains a join and a select, executing the join on both tables and then the select on the results gives the same result as selecting from each table and then executing the join, but yields a different execution plan. In such cases, SQL Server chooses the plan that is expected to produce the results in the shortest possible time. This is called query optimization, and it is performed by the query processor itself.

SQL Server includes a cost-based query optimizer which tries to optimize the cost, in terms of the resources it will take to execute the query. Given a query, the query optimizer looks at the database schema, the database statistics and the system load at that time. It then decides the sequence in which to access the tables referred to in the query, the sequence in which to execute the operations, and the access method to be used to access the tables.

While concurrent execution is more costly in terms of total processor time, the fact that the execution is split across different processors might mean it executes faster. Once a query plan is generated for a query, it is temporarily cached, and for further invocations of the same query the cached plan is used. Unused plans are discarded after some time.

SQL Server also allows stored procedures to be defined. Stored procedures are parameterized T-SQL queries that are stored in the server itself (and not issued by the client application, as is the case with general queries). Stored procedures can accept values sent by the client as input parameters, and send back results as output parameters.

They can also call other stored procedures, and access to them can be selectively granted. Unlike other queries, stored procedures have an associated name, which is used at runtime to resolve them into the actual queries. Also, because the code need not be sent from the client every time (it can be invoked by name), network traffic is reduced and performance somewhat improved. Execution plans for stored procedures are also cached as necessary.
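Calling a stored procedure by name, with an input and an output parameter, looks like the sketch below (the procedure usp_GetBookCount and the connection string are hypothetical examples):

using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcDemo
{
    static void Main()
    {
        using (SqlConnection con = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;Initial Catalog=LIBRARY;Integrated Security=True"))
        {
            SqlCommand cmd = new SqlCommand("usp_GetBookCount", con);
            cmd.CommandType = CommandType.StoredProcedure;   // resolved by name on the server
            cmd.Parameters.AddWithValue("@branch", "CSE");   // input parameter
            SqlParameter count = cmd.Parameters.Add("@count", SqlDbType.Int);
            count.Direction = ParameterDirection.Output;     // output parameter

            con.Open();
            cmd.ExecuteNonQuery();
            Console.WriteLine("Books: " + count.Value);
        }
    }
}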

6.10 Advantages:
- Supports name management and avoids duplication
- Stores organizational knowledge, linking analysis, design and implementation

7. LIBRARY MANAGEMENT SYSTEM ER-DIAGRAM


8. LIBRARY MANAGEMENT SYSTEM USE-CASE

(Use-case diagram: the ADMIN actor is linked to use cases including EDIT BOOK DETAILS, DELETE BOOK DETAILS, ASSIGN and OVERDUE.)

9. SAMPLE CODE

ASP.NET code for Login:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Login : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            // Expire any previous login cookie. A cookie is deleted by sending
            // it back to the client (via the Response) with a date in the past.
            HttpCookie user = new HttpCookie("user");
            user.Expires = DateTime.Now.AddMinutes(-30);
            Response.Cookies.Add(user);
        }
        catch (Exception)
        {
        }
    }

    protected void btnLogin_Click(object sender, EventArgs e)
    {
        // Hard-coded admin credentials; on success, mark the login via a cookie.
        if (inputEmail.Value == "admin" && inputPassword.Value == "admin@123")
        {
            Response.Cookies["user"]["login"] = "true";
            Response.Redirect("Home.aspx");
        }
    }
}
ASP.NET code for Book Assign:
using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class BookAssign : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(
        ConfigurationManager.ConnectionStrings["bs"].ConnectionString);
    SqlCommand cmd = new SqlCommand();
    SqlDataAdapter da;
    DataSet ds;
    string sql_query;

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void btn_Assign_Click(object sender, EventArgs e)
    {
        try
        {
            // The due date is 15 days from today; a fine applies after that.
            string returndate = DateTime.Today.AddDays(15).ToShortDateString();
            // Note: concatenating user input into SQL is vulnerable to injection;
            // parameterized queries (as on the Users page) are preferable.
            sql_query = "Insert into Assign (studentid, bookid, assigneddate, returndate, penality, statusid) values ('"
                + txt_assign_studentid.Text.Trim() + "','" + txt_assign_bookid.Text.Trim()
                + "','" + txt_assign_bookdate.Text.Trim() + "','" + returndate + "','0','s1')";
            cmd = new SqlCommand(sql_query, con);
            con.Open();
            cmd.ExecuteNonQuery();
            con.Close();
            lblresult_bookassign.Text = "<b>Book ID :</b> " + txt_assign_bookid.Text
                + " is assigned to <b>Student ID :</b> " + txt_assign_studentid.Text
                + " on the <b>Date of :</b> " + txt_assign_bookdate.Text
                + " and you have to <b>return on Date :</b> " + returndate
                + " otherwise <b>Penality per day : </b> 5 Rupees.";
        }
        catch
        {
            con.Close();
        }
    }

    protected void txt_assign_bookid_TextChanged(object sender, EventArgs e)
    {
        try
        {
            // Look up the book and fill in its details.
            sql_query = "Select * from BookRecord Where bookid='" + txt_assign_bookid.Text.Trim() + "'";
            da = new SqlDataAdapter(sql_query, con);
            ds = new DataSet();
            da.Fill(ds);
            if (ds.Tables[0].Rows.Count > 0)
            {
                txt_assign_bookid.Text = ds.Tables[0].Rows[0]["bookid"].ToString();
                txt_assign_bookname.Text = ds.Tables[0].Rows[0]["bookname"].ToString();
                txt_assign_bookqty.Text = ds.Tables[0].Rows[0]["bookquantity"].ToString();
                txt_assign_bookdate.Text = DateTime.Today.ToShortDateString();
            }
        }
        catch
        {
            con.Close();
        }
    }

    protected void txt_assign_studentid_TextChanged(object sender, EventArgs e)
    {
        try
        {
            // Look up the student and fill in their details.
            sql_query = "Select * from Student Where studentid='" + txt_assign_studentid.Text.Trim() + "'";
            da = new SqlDataAdapter(sql_query, con);
            ds = new DataSet();
            da.Fill(ds);
            if (ds.Tables[0].Rows.Count > 0)
            {
                txt_assign_studentid.Text = ds.Tables[0].Rows[0]["studentid"].ToString();
                txt_assign_studentname.Text = ds.Tables[0].Rows[0]["studentname"].ToString();
                txt_assign_studentbranch.Text = ds.Tables[0].Rows[0]["studentbranch"].ToString();
                txt_assign_studentyear.Text = ds.Tables[0].Rows[0]["studentyear"].ToString();
            }
        }
        catch
        {
            con.Close();
        }
    }
}
ASP.NET code for User:
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Users : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void btnAdd_Click(object sender, EventArgs e)
    {
        try
        {
            SqlConnection conn = new SqlConnection(
                System.Configuration.ConfigurationManager
                    .ConnectionStrings["LIBRARYConnectionString"].ConnectionString);
            // Parameterized insert: values are passed as parameters, not concatenated.
            SqlCommand cmd = new SqlCommand(
                "insert into Student (studentid, studentname, studentbranch, studentyear) " +
                "values (@id, @name, @branch, @year)", conn);
            cmd.Parameters.AddWithValue("@id", txtStudentId.Text);
            cmd.Parameters.AddWithValue("@name", txtStudentName.Text);
            cmd.Parameters.AddWithValue("@branch", ddlBranch.SelectedValue);
            cmd.Parameters.AddWithValue("@year", ddlyear.SelectedValue);
            try
            {
                conn.Open();
                cmd.ExecuteNonQuery();
                GridView1.DataBind();   // refresh the grid with the new student
                txtStudentId.Text = "";
                txtStudentName.Text = "";
            }
            catch (Exception ex)
            {
            }
            finally
            {
                conn.Close();
            }
        }
        catch (Exception ex)
        {
        }
    }
}
10. SCREEN SHOTS

11. ENTITY RELATIONSHIP DIAGRAM

11.1 Entity-Relationship Model:

An entity-relationship model is an abstract conceptual representation of structured data. Entity-relationship modeling is a database modeling method used in software engineering to produce a type of conceptual or semantic data model of a system (often a relational database) and its requirements, in a top-down fashion. Diagrams created using this process are called entity-relationship diagrams, or ER diagrams for short. Originally proposed in 1976 by Dr. Peter Pin-Shan Chen, many variants of the process have subsequently been devised.

The first stage of information system design uses these models during the requirements
analysis to describe information needs or the type of information that is to be stored in a
database. The data modeling technique can be used to describe any ontology (i.e. an overview
and classifications of used terms and their relationships) for a certain universe of discourse
(i.e. area of interest). In the case of the design of an information system that is based on a
database, the conceptual data model is, at a later stage (usually called logical design), mapped
to a logical data model, such as the relational model; this in turn is mapped to a physical
model during physical design.

Note: sometimes, both of these phases are referred to as a "physical design".

An entity represents a discrete object. Entities can be thought of as nouns; examples are a computer, an employee, a song, and a mathematical theorem. Entities are represented as rectangles.

A relationship captures how two or more entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns.

12. SOFTWARE TESTING

Is the menu bar displayed in the appropriate context? Are system-related features included either in menus or in tools? Do pull-down menu operations and tool-bars work properly? Are all menu functions and pull-down sub-functions properly listed? Is it possible to invoke each menu function? Testing cannot rely on the logical assumption that if all parts of the system are correct, the goal will be successfully achieved; inadequate testing or non-testing leads to errors that may appear months later.
This creates two problems:

1. The time delay between the cause and the appearance of the problem.

2. The effect of system errors on files and records within the system.

The purpose of system testing is to consider all the likely variations to which the system will be subjected, and to push the system to its limits.

The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals, conducting tests to uncover errors and to ensure that defined input will produce actual results that agree with the required results. Program-level testing and module-level testing are integrated and carried out.

There are two major types of testing:

1) White Box Testing. 2) Black Box Testing.

White Box Testing:

White box testing, sometimes called "glass box testing", is a test-case design method that uses the control structure of the procedural design to derive test cases.

Using white box testing methods, the following tests were made on the system:

a) All independent paths within a module were exercised at least once. In our system, all case structures were checked by ensuring that each case was selected and executed. The bugs that were prevailing in some parts of the code were fixed.

b) All logical decisions were checked for the truth and falsity of their values.

Black Box Testing:

Black box testing focuses on the functional requirements of the software. It enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box testing; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods, such as:

1) Interface errors

2) Errors in data structures

3) Performance errors

4) Initialization and termination errors

13. CONCLUSION

Our project is only a humble venture to satisfy the needs of a library, and several user-friendly coding practices have been adopted in it. This package shall prove to be a powerful tool in satisfying the requirements of the organization.
The objective of software planning is to provide a framework that enables the manager to make reasonable estimates within a limited time frame at the beginning of the software project; these estimates should be updated regularly as the project progresses. Last but not least, it is not our work alone that paved the way to success, but the ALMIGHTY.

