

1. Abstract

2. Organization profile

3. Project Introduction

3.1 Project Overview

3.2 Purpose

3.3 Scope

3.4 Definitions, Acronyms

4. Analysis and Design

4.1 Existing System

4.2 Proposed System

4.3 Functional Requirements

4.4 Non Functional Requirements

4.5 Feasibility Study

4.6 Economic Feasibility

4.7 Operational Feasibility

4.8 Technical Feasibility

4.9 Behavioral Feasibility

4.10 Requirement Analysis

5. Specific Requirements

5.1 Hardware Interface

5.2 Software Interface

6. Software Development Approach

6.1 OOP Concepts

6.2 Visual Studio 2010

6.3 SQL Server 2008

6.5 JavaScript

6.6 UML

6.7 Use case Diagrams

6.8 Class Diagrams

6.9 Interaction Diagrams

6.10 Collaboration Diagrams

7. Implementation

8. Testing

9. Screens

10. Conclusion

11. Bibliography


1. Abstract

The project has been developed to keep track of details regarding equipment. The current product is window-based. It provides the basic services related to the supply of equipment and maintains PRE-SO (Supply Order) and POST-SO details, taking care of all the supply orders. The PRE-SO is maintained from the start of the financial year; it records each Supply Order received from a firm supplying equipment. Each item of equipment is then assigned a unique ISG Number by BRO, and the equipment is supplied to the different project departments of BRO.

After the completion of the PRE-SO, BRO maintains the POST-SO worksheet, in which the supply and liability for the current year are prepared. First, the details of the supply orders for the current year are prepared at the end of that year, followed by the liability worksheet that is carried forward.

Finally, a Guide Sheet is produced that includes the budget allotted, the liabilities of the last three years cleared in the current year, and the liabilities to be carried forward to the next year.

2. Organization Profile

At Bhaskara Info Services we offer application and product development, re-engineering, technology consulting, data warehousing, testing and maintenance for companies in the travel, tour & hospitality, cargo & logistics, and oil & gas domains.

Bhaskara Info Services Pvt. Ltd is a pioneering IT service organization offering a wide range of services to businesses of all kinds. We are committed to providing quality enterprise-wide solutions using a wide variety of technologies and platforms, with a special focus on knowledge management solutions. We offer software development, web development and specialization in the areas mentioned above. We build robust, scalable and complete solutions based on your unique organizational needs, satisfying real business needs in a real business environment. We are committed to providing powerful software application development and services using internet technologies to improve and transform key enterprise processes. Offering a comprehensive range of quality knowledge services at minimal cost, the group has been a hub for the success of aspiring IT segments; the organization delivers high-end service lines across the breadth of IT services while managing the IT assets of our satisfied clients.

With a vision of reaching great heights in IT industry serving both domestic and
international sectors, Bhaskara Info Services Pvt. Ltd brings a fresh and innovative approach for
IT services. We help customers to achieve their business objectives by providing innovative &
best-in-class IT solutions and services. In pursuit of our goal, we are driven by a set of closely
held values and business principles.

Quality Objectives

To enhance the company's ability to consistently meet our customers' needs by improving organizational and team effectiveness.
To realize our quality policy, daily improvements will be coupled with individual and team innovations in the following areas:

 Customer Driven Quality: Quality is judged by the customer. The quality process must lead to services that contribute value and lead to customer delight.

 Timely support to customers
 We meet customer expectations by:
   o Understanding the needs of customers
   o Anticipating and working towards the future needs of customers
   o Constantly improving and preventing errors from occurring
   o Attracting, training and retaining qualified staff

Our Vision

To be a pioneer in developing the most progressive technology with the most secure systems, relentless in the pursuit of client and employee excellence.

Our Mission

Our mission is to provide high-quality, excellent-value solutions through strong relationships with our customers.

Our Philosophy

Always open to our client's needs and always willing to change our ways to suit their style.

How BIS works

BIS brings together simple and coherent programs targeted at specific career paths. Choose a suitable career path for yourself and follow the specialization courses to get there.


 The BIS value pack gives you the benefit of the latest and most advanced educational programs
 BIS is a tailor-made, customized program to help students get the right career start
 The value pack is empowered with the right balance of theory and hands-on sessions
 Available on leading IT tracks, namely e-Business Administration, Software Testing, Information Management, Performance Management, Managing Technology & Service Oriented Architecture

3. Project Introduction

3.1 Project Overview

Inventory management is the active control program which allows the management of sales,
purchases and payments. Inventory management software helps create invoices, purchase
orders, receiving lists, payment receipts and can print bar coded labels. An inventory
management software system configured to your warehouse, retail or product line will help to
create revenue for your company. Inventory management will control operating costs and provide a better understanding of your inventory. We are your source for inventory management information, inventory management software and tools.

A complete Inventory Management Control system contains the following components:

 Inventory Management Definition

 Inventory Management Terms

 Inventory Management Purposes

 Definition and Objectives for Inventory Management

 Organizational Hierarchy of Inventory Management

 Inventory Management Planning

 Inventory Management Controls for Inventory

 Determining Inventory Management Stock Levels

Inventory Management and Inventory Control must be designed to meet the dictates of the marketplace and support the company's strategic plan. The many changes in market demand, new opportunities due to worldwide marketing, global sourcing of materials, and new manufacturing technology mean that many companies need to change their Inventory Management approach and change the process for Inventory Control.

Despite the many changes that companies go through, the basic principles of Inventory
Management and Inventory Control remain the same. Some of the new approaches and
techniques are wrapped in new terminology, but the underlying principles for accomplishing
good Inventory Management and Inventory activities have not changed.

The Inventory Management system and the Inventory Control process provide information to efficiently manage the flow of materials, effectively utilize people and equipment, coordinate internal activities, and communicate with customers. Inventory Management and the activities of Inventory Control do not make decisions or manage operations; they provide the information to managers who make more accurate and timely decisions to manage their operations.

3.2 Purpose

Inventory proportionality is the goal of demand-driven inventory management. The primary optimal outcome is to have the same number of days' worth of inventory on hand across all products, so that all products would run out at the same time.

The secondary goal of inventory proportionality is inventory minimization. By integrating

accurate demand forecasting with inventory management, replenishment inventories can be
scheduled to arrive just in time to replenish the product destined to run out first, while at the
same time balancing out the inventory supply of all products to make their inventories more
proportional, and thereby closer to achieving the primary goal. Accurate demand forecasting also
allows the desired inventory proportions to be dynamic by determining expected sales out into
the future; this allows for inventory to be in proportion to expected short-term sales or
consumption rather than to past averages, a much more accurate and optimal outcome.

Integrating demand forecasting into inventory management in this way also allows for the
prediction of the "can fit" point when inventory storage is limited on a per-product basis.
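As a rough sketch of the run-out arithmetic described above (the product names and demand figures below are hypothetical; a real system would derive daily demand from its demand forecasts):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RunOut {
    // Days of cover = units on hand / average daily demand.
    // When every product has the same days of cover, inventory is proportional.
    static double daysOfCover(double onHand, double dailyDemand) {
        return onHand / dailyDemand;
    }

    public static void main(String[] args) {
        // Hypothetical stock levels: {units on hand, daily demand}.
        Map<String, double[]> products = new LinkedHashMap<>();
        products.put("Spare A", new double[]{120, 10});
        products.put("Spare B", new double[]{45, 9});
        products.put("Spare C", new double[]{200, 8});

        // Replenish first the product that runs out first (lowest days of
        // cover), pulling all products toward the same days of cover.
        String first = null;
        double min = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : products.entrySet()) {
            double cover = daysOfCover(e.getValue()[0], e.getValue()[1]);
            System.out.printf("%s: %.1f days of cover%n", e.getKey(), cover);
            if (cover < min) { min = cover; first = e.getKey(); }
        }
        System.out.println("Replenish first: " + first);
    }
}
```

With the figures above, Spare B has the fewest days of cover and would be scheduled for replenishment first.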

3.3 Scope

The scope of this system is to provide the user with an efficient working environment through which more output can be generated. The system provides a user-friendly interface that makes every usability feature of the system easy to discover. It helps in tracking records, so that past records can be verified and decisions can be made based on them. The system completes work in very little time, resulting in low time consumption and a high level of efficiency.
4. Analysis and Design

4.1 Existing System

As we know, manual processing is quite tedious, time consuming and less accurate in comparison to computerized processing. Obviously the present system is no exception, and it encounters all of the problems listed below.

1. Time consuming.
2. It is very tedious.
3. All information is not placed separately.
4. Lot of paper work.
5. Slow data processing.
6. Not a user-friendly environment.
7. It is difficult to find records due to the file management system.

Limitation of the Existing System

 Existing system was manual.

 Time consuming, as data entry, which includes calculations, took a lot of time.
 Searching was very complex, as there could be hundreds of entries every year.
 The proposed system is expected to be faster than the existing system.

4.2 Proposed System

In the new computerized system I have tried to provide the following facilities.

1. The manual system is changed into a computerized system.

2. Friendly user interface.
3. Time saving.
4. Saves paper work.
5. Connection to a database, so different types of queries and data reports can be used.
6. Provides different types of inquiry facilities.
7. Formatted data.
8. Data are easily approachable.

Goal Of The Proposed System

The goal of the proposed system is to prepare the PRE-SO on the basis of the orders made.

This module contains two stages:

 SO Sheet
 Pre-SO Worksheet
The preliminary stage of this project is to create the SO (Supply Order) sheet for each equipment order. In this sheet each equipment spare part is assigned a unique identifier, i.e. the ISG Number.

The SO sheet contains the rate of each spare part. When the worksheet is prepared, this rate acts as the LPP. A reference to the Last Purchase Price (LPP) of the equipment corresponding to the ISG (Initial Stocking Guide) is maintained to form the transaction sheet of the particular financial year. Each firm specifies its Tender Price (TP) for the respective spare parts. Each equipment can have multiple spare parts, uniquely identified by their Part No.

The Pre-SO worksheet is prepared at approximately the end of each financial year on the basis of the SO transaction sheet and the Tender Prices quoted. The basic operation in this worksheet is to get the last purchase price and its reference.
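A minimal sketch of how the SO sheet and the LPP lookup could be modeled (the class and field names here are hypothetical illustrations, not the project's actual schema):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.OptionalDouble;

public class SoSheet {
    // One line of the SO (Supply Order) sheet: an equipment spare part
    // identified by its ISG number, with the rate quoted on that order.
    record Entry(String isgNum, String partNo, double rate, int year) {}

    private final List<Entry> entries = new ArrayList<>();

    void add(String isgNum, String partNo, double rate, int year) {
        entries.add(new Entry(isgNum, partNo, rate, year));
    }

    // The Last Purchase Price (LPP) for an ISG number is taken here to be
    // the rate on the most recent supply order that references it.
    OptionalDouble lastPurchasePrice(String isgNum) {
        return entries.stream()
                .filter(e -> e.isgNum().equals(isgNum))
                .max(Comparator.comparingInt(Entry::year))
                .stream().mapToDouble(Entry::rate).findFirst();
    }
}
```

When the Pre-SO worksheet is built, a lookup like `lastPurchasePrice("ISG-001")` would supply the LPP column for that ISG number.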

4.3 Functional Requirements

Functional Requirements

Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform. This section lays out important concepts and discusses capturing functional requirements in such a way that they can drive architectural decisions and be used to validate the architecture.

Functional requirements specify:

 What inputs the system should accept, and under what conditions

 What outputs the system should produce, and under what conditions
 What data the system should store that other systems might use
 What computations the system should perform
 The timing and synchronization of the above

4.4 Non Functional Requirements

1. Portability
2. Reliability
3. Performance

Types of Non Functional Requirements

 Interface Requirements
 Performance Requirements
o Time/Space Bounds
o Reliability
o Security
o Survivability
 Operating Requirements

Non-functional requirements are shaped by:

 The size of the system

 The need to interface to other systems
 The target audience
 The contractual arrangements for development
 The stage in requirements gathering
 The level of experience with the domain and the technology
 The cost incurred if the requirements are faulty

4.5 Feasibility Study

The feasibility study is an important step in any software development process, because it analyzes different aspects such as the cost required for developing and executing the system and the time required for each phase of the system. If these important factors are not analyzed, they will certainly have an impact on the organization and the development, and the system would be a total failure. So, for running the project and the organization successfully, this step is very important in the software development life cycle. In the software development life cycle, after analyzing the system requirements the next step is to analyze the software requirements. In other words, the feasibility study is also called software requirement analysis. In this phase the development team has to communicate with customers, analyze their requirements and analyze the system. By making the analysis this way it is possible to produce a report of the identified problem areas. From a detailed analysis of these areas, a document or report is prepared in this phase containing details such as the project plan or schedule, the cost estimated for developing and executing the system, target dates for each phase of delivery of the developed system, and so on. This phase is the base of the software development process, since further steps in the software development life cycle are based on the analysis made in this phase, so careful analysis has to be made here. Though the feasibility study cannot be focused on a single area, some of the areas of analysis made in a feasibility study are given below. Not all of the steps given below are followed for every system developed; the feasibility study varies based on the system that would be developed.

 A feasibility study is made on the system being developed to analyze whether the system development process requires training of personnel. This helps in designing training sessions as required at a later stage.

 Whether the developed system has scope for expansion, or scope for switching to new technology later with ease if needed. In other words, a study is made to find the portability of the system in the future.

 Whether the cost of developing the system is high or meets the budgeted costs; that is, a cost-benefit analysis is made. In other words, an analysis is made of the cost feasibility of the project. This helps in identifying whether the organization can meet the budgeted costs, and also helps the organization in making earlier and more effective plans for meeting any extra costs arising from the system development.

 An analysis is made of what software to use for developing the system. This study and analysis help to choose the best implementation for the system and the organization. This part of the feasibility study includes factors like scalability, how to install, how to develop and so on; in short, it covers the analysis of technical areas. This analysis helps improve the efficiency of the developed system, because choosing the correct technology based on an analysis of the system's needs improves the system's efficiency.

 The above feasibilities are analyses which help in the development of the system, but the scope of a feasibility study does not end there. The feasibility study also includes analysis of the maintenance stage; in other words, it analyzes how the system will be maintained during the maintenance stage. This helps in planning for this stage and also helps in risk analysis. The analysis also helps in deciding what training must be given, and how, and what documents must be prepared to help users and developers through the maintenance phase.


There are many advantages of making feasibility study some of which are summarized below:

 This study, being made as the initial step of the software development life cycle, contains all the analysis, which helps in analyzing the system requirements completely.

 Helps in identifying the risk factors involved in developing and deploying the system.

 The feasibility study helps in planning for risk analysis.

 Feasibility study helps in making cost/benefit analysis which helps the organization and
system to run efficiently.

 Feasibility study helps in making plans for training developers for implementing the system.

 So a feasibility study is a report which can be used by the senior or top people in the organization. This is because, based on the report, the organization decides about cost estimation, funding and other important matters which are essential for the organization to run profitably and for the system to run stably.

4.6 Economic Feasibility

For any system if the expected benefits equal or exceed the expected costs, the system can be
judged to be economically feasible. In economic feasibility, cost benefit analysis is done in
which expected costs and benefits are evaluated. Economic analysis is used for evaluating the
effectiveness of the proposed system.

In economic feasibility, the most important element is cost-benefit analysis. As the name suggests, it is an analysis of the costs to be incurred in the system and the benefits derivable from the system.

A simple economic analysis which gives the actual comparison of costs and benefits is much more meaningful in this case. In addition, it proves to be a useful point of reference against which to compare actual costs as the project progresses. There can be various types of intangible benefits on account of automation. These include increased customer satisfaction, improvement in product quality, better decision making, timeliness of information, expedited activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale.

Cost-based study: It is important to identify cost and benefit factors, which can be categorized
as follows: 1. Development costs; and 2. Operating costs. This is an analysis of the costs to be
incurred in the system and the benefits derivable out of the system.

Time-based study: This is an analysis of the time required to achieve a return on investments.
The future value of a project is also a factor.
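The cost-based and time-based studies can be combined into a simple payback calculation; the figures in the sketch below are hypothetical:

```java
public class CostBenefit {
    // Time-based study: years needed to recover the one-time development
    // cost out of the yearly net benefit (benefit minus operating cost).
    static double paybackYears(double developmentCost,
                               double annualBenefit,
                               double annualOperatingCost) {
        return developmentCost / (annualBenefit - annualOperatingCost);
    }

    public static void main(String[] args) {
        // Hypothetical figures for illustration only.
        double developmentCost = 200_000;   // one-time development cost
        double annualOperatingCost = 30_000;
        double annualBenefit = 110_000;     // quantified savings and gains

        System.out.printf("Payback period: %.1f years%n",
                paybackYears(developmentCost, annualBenefit, annualOperatingCost));
    }
}
```

If the expected payback period falls within the organization's planning horizon, the system can be judged economically feasible under this simple model.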

4.7 Operational Feasibility

Operational feasibility is mainly concerned with issues like whether the system will be used if it is developed and implemented, and whether there will be resistance from users that will affect the possible application benefits. The essential questions that help in assessing the operational feasibility of a system are the following.

 Does management support the project?

 Are the users not happy with current business practices? Will it reduce the time (operation)
considerably? If yes, then they will welcome the change and the new system.

 Have the users been involved in the planning and development of the project? Early
involvement reduces the probability of resistance towards the new system.

 Will the proposed system really benefit the organization? Does the overall response increase?
Will accessibility of information be lost? Will the system affect the customers in
considerable way?

A proposed project is beneficial only if it can be turned into an information system that will meet the organization's operating requirements. Simply stated, this test of feasibility asks whether the system will work when it is developed and installed, and whether there are major barriers to implementation. Here are questions that will help test the operational feasibility of a project:

Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that people will not be able to see reasons for change, there may be resistance.

4.8 Technical Feasibility

In technical feasibility the following issues are taken into consideration.

 Whether the required technology is available or not

 Whether the required resources are available

Once the technical feasibility is established, it is important to consider the monetary factors also, since it might happen that developing a particular system is technically possible but requires huge investment while yielding fewer benefits. For evaluating this, the economic feasibility of the proposed system is carried out.

Evaluating the technical feasibility is the trickiest part of a feasibility study. This is because, at this point in time, not much detailed design of the system has been done, making it difficult to assess issues like performance, costs (on account of the kind of technology to be deployed), etc. A number of issues have to be considered while doing a technical analysis.

Understand the different technologies involved in the proposed system: before commencing the project we have to be very clear about which technologies are required for the development of the new system. Find out whether the organization currently possesses the required technologies: is the required technology available to the organization?

4.9 Behavioral Feasibility

People are inherently resistant to change, and computers have been known to facilitate change. An estimate should be made of how strong a reaction the user staff is likely to have toward the development of a computerized system. Therefore it is understandable that the introduction of a candidate system requires special efforts to educate and train the staff. The software that is being developed is user friendly and easy to learn. In this way, the developed software is truly efficient and can work in any circumstances, traditions or locales.

The behavioral study strives to ensure that the equilibrium of the organization and the status quo in the organization are not disturbed, and that changes are readily accepted by the users.

4.10 Requirement Analysis

Requirements analysis, in systems engineering and software engineering, encompasses those

tasks that go into determining the needs or conditions to meet for a new or altered product, taking
account of the possibly conflicting requirements of the various stakeholders, such as
beneficiaries or users.

Requirements analysis is critical to the success of a development project. Requirements

must be actionable, measurable, testable, related to identified business needs or opportunities,
and defined to a level of detail sufficient for system design.

Conceptually, requirements analysis includes three types of activity:

 Eliciting requirements: the task of communicating with customers and users to determine
what their requirements are. This is sometimes also called requirements gathering.
 Analyzing requirements: determining whether the stated requirements are unclear,
incomplete, ambiguous, or contradictory, and then resolving these issues.
 Recording requirements: requirements may be documented in various forms, such as natural-
language documents, use cases, user stories, or process specifications.

Requirements analysis can be a long and arduous process during which many delicate
psychological skills are involved. New systems change the environment and relationships
between people, so it is important to identify all the stakeholders, take into account all their
needs and ensure they understand the implications of the new systems. Analysts can employ
several techniques to elicit the requirements from the customer. Historically, this has included
such things as holding interviews, or holding focus groups (more aptly named in this context as
requirements workshops) and creating requirements lists. More modern techniques include
prototyping, and use cases. Where necessary, the analyst will employ a combination of these
methods to establish the exact requirements of the stakeholders, so that a system that meets the
business needs is produced. Systematic requirements analysis is also known as requirements engineering. It is sometimes referred to loosely by names such as requirements gathering, requirements capture, or requirements specification. The term requirements analysis can also be applied specifically to the analysis proper, as opposed to the elicitation or documentation of the requirements. Requirements engineering is a sub-discipline of systems engineering and software engineering that is concerned with determining the goals, functions, and constraints of hardware and software systems. In some life cycle models, the requirements engineering process begins with a feasibility study activity, which leads to a feasibility report. If the feasibility study suggests that the product should be developed, then requirements analysis can begin. Even where requirements analysis precedes the feasibility study, which may foster out-of-the-box thinking, feasibility should be determined before the requirements are finalized.

5. Specific Requirements

Software Requirements:

 Visual Studio .Net 2003

 SQL Server 2000
 Windows 2000 Server edition

Development Environment:

Web Architecture : ASP.NET 3.5

Technology or Framework : .NET 3.5

Programming Language : C# 3.0

Web Server : IIS 6.0 (Internet Information Services)

Web Browser : IE or Mozilla

Database Server : Microsoft SQL Server 2005

IDE (Integrated Development Environment) : Microsoft Visual Studio 2008

6. Software Development Approach

Object Oriented Programming is a method of implementation in which programs are organized as a cooperative collection of objects, each of which represents an instance of a class, and whose classes are all members of a hierarchy of classes united via inheritance relationships.

OOP Concepts

Four principles of Object Oriented Programming are

 Abstraction
 Encapsulation
 Inheritance
 Polymorphism


Abstraction denotes the essential characteristics of an object that distinguish it from all
other kinds of objects and thus provide crisply defined conceptual boundaries, relative to the
perspective of the viewer.


Encapsulation is the process of compartmentalizing the elements of an abstraction that constitute its structure and behavior; encapsulation serves to separate the contractual interface of an abstraction from its implementation.


 Hides the implementation details of a class.

 Forces the user to use an interface to access data

 Makes the code more maintainable.


Inheritance is the process by which one object acquires the properties of another object.


Polymorphism is the existence of classes or methods in different forms, or a single name denoting different implementations.
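The four principles can be illustrated with a short sketch. The project itself is written in C#, but the idea is identical in any object-oriented language; the class names here are hypothetical and merely echo this project's domain:

```java
// Abstraction: an abstract class captures the essential characteristics
// of an inventory item without committing to a concrete kind of item.
abstract class InventoryItem {
    // Encapsulation: state is private and reached only through methods.
    private final String isgNum;
    private int quantity;

    InventoryItem(String isgNum, int quantity) {
        this.isgNum = isgNum;
        this.quantity = quantity;
    }

    String getIsgNum() { return isgNum; }
    int getQuantity() { return quantity; }
    void issue(int n) { quantity -= n; }

    // Each kind of item describes itself in its own way.
    abstract String describe();
}

// Inheritance: SparePart acquires the properties of InventoryItem.
class SparePart extends InventoryItem {
    private final String partNo;

    SparePart(String isgNum, String partNo, int quantity) {
        super(isgNum, quantity);
        this.partNo = partNo;
    }

    // Polymorphism: the same method name, a different implementation.
    @Override
    String describe() {
        return "Spare part " + partNo + " (" + getIsgNum() + ")";
    }
}
```

Calling `describe()` through an `InventoryItem` reference dispatches to the `SparePart` implementation at run time, which is the polymorphism described above.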



[Figure: ASP.NET request processing. An HTTP request reaches the web server process (inet_info.exe), is passed to aspnet_isapi.dll, and is handed to the ASP.NET worker process (aspnet_wp.exe), where HTTP handlers process the request inside an AppDomain in the runtime environment.]

 Inet_info.exe identifies the request and submits it to aspnet_isapi.dll.
 Aspnet_isapi.dll is a script engine which processes the .aspx page.

 The script engine then submits the request to the ASP.NET runtime environment.
 After verifying all the security settings of both machine.config and web.config, an AppDomain is defined for the request, and after processing the request the response is given to the client as an HTTP response.
 Machine.config is used to maintain the complete configuration details of all the web applications registered on the web server.
 Web.config is used to maintain the configuration details of a single web application.
 The configuration details include security, database connectivity, state management, trace details of the web application, and authentication and authorization of the application.
 AppDomain: all Windows applications run inside a process, and these processes own resources such as memory and kernel objects, while threads execute code loaded into the process. Processes are protected from each other by the OS. Running every application in high isolation mode is safe, but the disadvantage is that memory resources are blocked. Running all the applications in a single process avoids this, to an extent, but the drawback is that if one crashes all the others are affected. In .NET, the code verification feature takes care that the code is safe to run,
 so each application runs in its own application domain and is therefore protected from other applications on the same machine; this makes the process isolation specified in IIS unnecessary.
 ASP.NET builds upon an extensible architecture known as the HTTP runtime, which is responsible for handling requests and sending responses. It is up to an individual handler, such as an ASPX page or a web service, to implement the work done on a request. IIS supports a low-level API known as ISAPI; ASP.NET implements a similar concept with HTTP handlers. A request is assigned to ASP.NET from IIS, which then examines entries in the <httphandlers> section, based on the extension of the request, to determine which handler the request should be sent to.
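For reference, the <httphandlers> mapping described above appears in a classic ASP.NET web.config in roughly this form (a minimal sketch):

```xml
<configuration>
  <system.web>
    <httpHandlers>
      <!-- Requests ending in .aspx are routed to the page handler factory -->
      <add verb="*" path="*.aspx"
           type="System.Web.UI.PageHandlerFactory" />
    </httpHandlers>
  </system.web>
</configuration>
```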

Features of ASP.NET

 Upgradation of ASP to ASPX is not required; ASP.NET supports side-by-side execution, and hence a request can be given from ASP to ASPX and vice versa.
 Simplified programming model
 ASP.NET is a technology which can be implemented using any .NET language, such as VB.NET, C#, etc., and hence no separate HTML, JavaScript or VBScript is required to implement ASP.NET.

 Simplified deployment
 ASP.NET supports setup and deployment, and hence the web application can be defined with a web setup project which can be easily deployed onto the web server, whereas for ASP a tool such as CuteFTP is used and the files have to be uploaded manually.
 Better performance
 As ASPX pages are compiler based, the performance of the web application will be faster than that of ASP pages (as they are interpreter based).
 Caching
 Caching is the process of maintaining the result or output of a web page temporarily for some period of time. ASP supports client-side caching, whereas ASP.NET supports both client-side and server-side caching.
 Security
 In ASP, security is provided by IIS or by writing code manually, whereas ASP.NET comes with built-in security features such as:
   o Windows authentication
   o Forms authentication
   o Passport authentication
   o Custom authentication
 More powerful data access
 ASP.NET supports ADO and ADO.NET as its database connectivity models, which are implemented using powerful object-oriented languages like VB.NET and C#, and hence database access from ASPX pages is very powerful.
 Web services
 A web service is code published on the web which can be used by applications written in any language, for any platform or device.
 Better session management
 Session management in ASP.NET can be maintained using a database, and cookieless sessions are also supported. ASP.NET also supports enabling and disabling of session information within a web application.
 Simplified form validations
 ASP.NET provides validation controls with which any type of client-side validation can be performed without writing any code.

 A web page has 2 parts

1) Designing part (HTML content, Flash, Dreamweaver etc.)
2) Logic part (sub programs, event procedures and the database access code)
 ASP.Net supports 2 techniques for creating a web page
1) In Page Technique
When the design part code and the logic part code are placed within a single file
(the ASPX file), it is called the In Page technique.
2) Code Behind Technique
When the design part code is kept in the ASPX file and the logic part code is
kept in a DLL file, it is called the Code Behind technique.
 ASP supports only the In Page technique.
 A DLL file is not human-readable, so the logic is secured.
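A minimal sketch of the two techniques (the file and class names are illustrative; the exact code-behind attribute varies between ASP.NET versions — Codebehind in Visual Studio 2002/2003 projects, CodeFile in later web sites):

```aspx
<%-- In Page technique: design and logic live in the same ASPX file --%>
<%@ Page Language="vb" %>
<script runat="server">
    Sub Page_Load(sender As Object, e As EventArgs)
        ' logic part, alongside the markup
    End Sub
</script>

<%-- Code Behind technique: the ASPX file only points at the compiled logic --%>
<%@ Page Language="vb" Codebehind="Login.aspx.vb" Inherits="Login" %>
```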

Difference Between VB 6.0 & VB.NET

1) VB 6.0 is an object-based programming language; VB.NET is an object-oriented programming language.
2) In VB 6.0, variable or member declarations are not mandatory; in VB.NET they are mandatory.
3) VB 6.0 uses only unstructured methods for handling exceptions; VB.NET supports both unstructured and structured exception handling.
4) VB 6.0 uses the DAO, RDO and ADO object models for database connectivity; VB.NET supports the ADO and ADO.NET models.
5) VB 6.0 uses Data Projects as its default reporting tool; VB.NET uses Crystal Reports.
6) VB 6.0 uses COM for language interoperability; VB.NET uses .NET assemblies.
7) VB 6.0 does not support multithreading; VB.NET does support multithreading.
8) VB 6.0 uses DCOM to support distributed technology; VB.NET uses .NET Remoting.
9) VB 6.0 supports web technology (client-side or server-side applications can be designed using VB); VB.NET itself cannot be used to design client-side/server-side pages, but it can be used as an implementing language for ASP.NET.

Differences between C# & VB.Net

Data types: C# has unsigned data types and is a strongly typed language; VB.Net has no unsigned data types and is not strongly typed.
OOPs concepts: C# has more concepts (interfaces, abstraction, indexers); VB.Net has fewer (no indexers, and it has limitations with respect to interfaces).
Memory management: C# has a garbage collector, and automatic releasing of resources is available, which boosts performance; VB.Net has a garbage collector, destructors and Dispose, but automatic releasing of resources is not available — you have to explicitly use the Dispose method.
Operator overloading: available in C#; not available in VB.Net.
Pointers: available in C#; not available in VB.Net.
Auto XML documentation: available in C#; not available in VB.Net.


Page Life Cycle Events

 Page_Init
 This is fired when the page is initialized
 Page_Load
 This is fired when the page is loaded
 The difference between Page_Init and Page_Load is that the controls are guaranteed to be
fully loaded in Page_Load. The controls are accessible in the Page_Init event, but the
ViewState is not loaded, so controls will have their default values rather than any values
set during the postback.
 Control_Event
 This is fired if a control triggered the page to be reloaded (such as a button)

 Page_Unload
 This is fired when the page is unloaded from memory

 Types of Controls in ASP.Net

Standard Controls: Label, TextBox, Button, LinkButton, Image, Calendar, CheckBox
List Controls: RadioButtonList, CheckBoxList, DropDownList, ListBox
Validation Controls: RequiredFieldValidator, RangeValidator, RegularExpressionValidator
Data-bound Controls: DataGrid, DataList, Repeater
Misc Controls: Crystal Report Viewer control

Common Syntax for any web server control

 <asp:ControlType id=“name of the control” runat=“server”
//additional properties
/>
 The self-closing syntax is “ /> ”.

 In order to set or get the value of any standard control, the Text property should be used.
 Eg:
 <asp:label id=“lb1” runat=“server” text=“user name”></asp:label>
 <asp:button id=“btn1” runat=“server” text=“Login” />

 Calendar Control

 Usage: It is used to place the calendar on the webform

o Note: Place the calendar control and right click on it and select autoformat to
provide a better look and feel for the control
 Calendar control can be considered as a collection of table cells
 Every table cell maintains the information about a day as a calendar day, in
the format of a link button control
 Whenever the calendar days have to be customized based on the requirement of the user,
the DAYRENDER event should be used.
 Every event handler in the dot net technology accepts two arguments, the 1st being Object
and the 2nd being EventArguments
o i.e. DayRender(Object, EventArguments)
 Event arguments of the DayRender event provide
o e.cell -> to refer to the table cell
o -> to refer to the calendar day
 In order to add a string value as a control to any other control, the “Literal Control” should be
used.

ADO.Net supports two models:

Connection Oriented Model
Disconnected Oriented Model

Connection Oriented Model

 Whenever an application uses the connection oriented model to interact with the db, the
connectivity between the application and the database has to be maintained always.
 Whenever a user executes any statement other than a select, the command object can be
bound directly to the application
 If the user executes a select statement, a DataReader is used to bind the result to the
application
Disconnected Oriented Model

When the user interacting with the db using this model then while performing the manipulations
or navigations on the data connectivity between the application and the database is not required
Note: When ever the data is been updated on to the database then the connectivity is required in
the disconnected model.


Application Data View DataSet Database

This is available in
client system
Data Adapter Connection

Data Providers

Disconnected Model
 Connection  it is used to establish the physical
connection between the application and the database
 DataAdapter it is a collection of commands which acts
like a bridge between the datastore and the dataset.
 Commands in DataAdapter 
o SelectCommand - executed through Fill(DatasetName[, DataMember])
o InsertCommand
o UpdateCommand - executed through Update(DatasetName[, DataMember])
o DeleteCommand
o Table Mappings
The collection of all these commands is the DataAdapter.
Data Adapter

 DataAdapter can always be bound to a single table at a time.
 Whenever the DataAdapter is used, implicit opening and closing of the connection
object will take place.
 If the DataAdapter is defined using a tool or a control, then all the commands for the adapter
will be defined implicitly, provided the base table is defined with a primary key.
 If the base table is not defined with a primary key, then the Update and Delete
commands will not be defined.
Fill Method

 It is used to fill the data retrieved by the select command of the DataAdapter into the dataset.
 Update Method
 It is used to update the database with the data present in the DataMember of the
dataset.
 DataSet
 It is an in memory representation of the data in the format of XML at the client system.
 Points to remember about DataSet:
o It can contain any number of datatables, which may belong to the same or different
databases.
o If any manipulations are performed on the dataset, they will not be reflected onto
the database.
o A dataset is also considered a collection of datatables, where a datatable can be
considered a DataMember.
o The dataset will not be aware of where the data is coming from or where the
data will be passed to.
o The dataset supports establishing relationships between the datatables present in
it, where the datatables might belong to different databases.
 DataSet is of 2 types 
o Typed DataSet  whenever the dataset is defined with the support of XML
schema definitions, it is said to be a Typed DataSet.
o UnTyped DataSet  if the dataset is defined without an XML Schema Definition,
it is said to be an UnTyped DataSet.


DataView

 It is a logical representation of the data present in the DataMember of a dataset.

 Usage  It is used to sort the data or filter the data, or if the data has to be projected
page-wise, then the DataView should be used.
 Command

It is used to provide the source for executing the statement, i.e. it is used to specify the
command to be executed.
 Data Reader

It is a forward-only and read-only record set which maintains the data retrieved by the select
statement.


Used if the statement is CONNECTION

CONNECTION select statement




Used if the data has to be filtered,
sorted or if the data has to be projected
in page-wise



SQL Connection Oracle Connection OleDB Connection ODBC Connection

SQL Command Oracle Command OleDB Command ODBC Command

SQL DataReader Oracle DataReader OleDB DataReader ODBC DataReader

SQL DataAdapter Oracle DataAdapter OleDB DataAdapter ODBC DataAdapter

 Syntax to define the Connection Object

 Dim objectName as new xxxConnection(“ProviderInfo”) where xxx  can be either
SQL, Oracle, OleDb or ODBC
 Provider Info
 To connect to MS-Access 2000 and above versions 
 Provider=microsoft.jet.oledb.4.0;datasource=databaseName.mdb
 To connect to a SQL-Server db 
 Provider=sqloledb.1;userid=sa;password=;database=database name;datasource=server name
 Note  if SQLConnection is used then provider=provider name is not required.
 To connect to ORACLE 
 userid=scott;pwd=tiger;datasource=server name
 OR
 Provider=msdaora.1;…….
 Note  if OracleConnection is used then provider=provider name is not required.
 To define a Command Object 

 Dim objectName as new xxxCommand([SQL Statement, Connection Object / Connection String])
 To define a DataReader 
 Dim objectName as xxxDataReader
 To define a DataAdapter 
 Dim objectName as new xxxDataAdapter(Select Statement, <Connection Object / Connection String>)
 Whenever the DataAdapter is defined using the above syntax, only the command
relevant for the SelectCommand will be defined, and in order to use the other commands
they have to be built explicitly.
 To define a DataSet 
 Dim objectName as new DataSet()
 To define a DataView 
 Dim objectName as new DataView(datasetName.DataMemberName)
 Security in ASP.NET

 ASP.NET provides various authentication methods to achieve security.

 They are: 
 Forms Authentication
 Windows Authentication
 Passport Authentication
 Custom Authentication
 FORMS Authentication

 It is used to authenticate the user credentials for Internet and Intranet applications.
 The web.config file is used to specify the authentication mode to be used by the ASP.Net
web application, to specify the login page information, and to specify the format of the
password (for additional security); it also acts like a database maintaining the user
credentials information.
Syntax to set the authentication
<authentication mode=“Forms”>
  <forms loginUrl=“login.aspx”>
    <credentials passwordFormat=“SHA1/MD5/Clear”>
      <user name=“_____” password=“____” />
      _____________ any no of user entries
    </credentials>
  </forms>
</authentication>

 Authorization is used to allow or deny the users from accessing the webforms present in the web
application.
 <authorization>
 <allow users=“__,__,__ / * “ />
 <deny users=“__,__,__ / * ”/>
 </authorization>
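Putting the two elements together, a minimal web.config sketch for Forms authentication might look like this (the user name and password are placeholders; `<deny users="?" />` denies anonymous users):

```xml
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms loginUrl="login.aspx">
        <credentials passwordFormat="Clear">
          <user name="user1" password="pwd1" />
        </credentials>
      </forms>
    </authentication>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```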
 Note: the tags and the attributes present in web.config are case sensitive.
 In order to support Forms Authentication in ASP.Net, the Dot Net Framework provides a
base class library called “FormsAuthentication”

Methods to support Forms Authentication

 Authenticate :It is used to authenticate if the provided information belongs to a valid user
credentials or not.It returns True if user info is valid else returns false.
 Syntax  authenticate(username,password)
 RedirectFromLoginPage  It is used to redirect to the requested webform from the login
page if the provided user credentials belong to a valid user.
 Syntax :- RedirectFromLoginPage(username, booleanValue)
 If TRUE is specified, the user info will be maintained as a permanent HTTP cookie at the
client system; if FALSE is specified, the user info will be maintained temporarily till the
browser is closed.

 HashPasswordForStoringInConfigFileit is used to encrypt the data using either SHA1 or
md5 hash algorithms.
 Syntax  HashPasswordForStoringInConfigFile
(original Text,”md5/sha1”)
 SignOut  It is used to clear the session of the user which has been set the application
  returns the name of the user who has currently logged in.

Windows Authentication

 It is used to authenticate the user information based on the users registered on the
Windows operating system.
 Note  it is used to validate the users in an intranet environment.
 In web.config file 
o <authentication mode=“windows” />
o <authorization>
 <allow users/role =“DomainName/UserName,---” / roleName />
 <deny users/role = “DomainName/UserName,---” / roleName />
o </authorization>
 Whenever the user who is currently logged in is present in the allow users list, then all
the webforms present in the web application can be accessed directly. Otherwise, the
webserver will implicitly present a dialog box to provide the user credentials and will allow
the user to access the webforms provided the information belongs to valid user credentials.

Types of Windows Authentication

 Basic Authentication  if used as the authentication type, the user credentials will be
passed across the network in clear text format.
 Digest Authentication  it is a special authentication type used to authenticate the
domain server users.

 Note  if the OS is not a domain server, then the Digest authentication type will be disabled
in that system
 NTLM (NT LAN Manager) Authentication  it is the default authentication type used by
Windows authentication; it is also referred to as Integrated Windows Authentication

Steps to set the authentication Type

 Start > RUN > inetmgr

 Right click on default web site and select properties
 Click on Directory Security tab
 Click on the Edit button present in the anonymous access and authentication control
 Check on the different authentication types to be used
 To know the domain name of the system
 [ In command prompt ]
C:\> hostname
 This gives the domain name

Passport Authentication

 If the same user credentials has to be maintained across multiple websites then passport
authentication can be used.
 To achieve this 
o Install Microsoft Passport SDK
o In web.config file
 <authentication mode =“passport”>
 <passport redirectUrl=“internal URL” />
o </authentication>

Custom Authentication

 It is used to Validate the user credentials as per the requirement of the application.


State Management

 It is used to maintain the state of the user across multiple pages.

{ OR } The web server maintaining client information without any connectivity is called
state management. This can be implemented in various ways:
1. View State [ Hidden field ]
2. Page Submission
3. Cookies
4. Session
5. Query String
6. Application
7. Cache

View State

 It is the concept of persisting control properties between requests under postback
 The view state is implemented based on a hidden field.
 The main advantages of view state are 2 things:
 There is no programming required from the developer, so less burden on the developer.
 Memory is allocated neither in the client system nor in the webserver system. It
will be maintained as part of the web page itself.
 The problem with view state is that there will be a larger amount of data transferred between
the client and the web server.
 View state can be controlled at 3 levels 
1} Control Level 
<input …. EnableViewState=“true/false”>

Note : when it comes to sensitive data it is not recommended to implement view state
the sensitive data can be password,credit card no, etc.
 When you go with password type textbox the view state will not be applicable implicitly.
 2} Page Level
<%@ Pagedirective …..enable viewstate=“true/false” >
 3 }Application Level 
It requires web config
It will be applicable to all the web pages
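A sketch of the application-level setting in web.config (the `<pages>` element applies the setting to every webform in the application):

```xml
<configuration>
  <system.web>
    <!-- disables view state for all pages in the web application -->
    <pages enableViewState="false" />
  </system.web>
</configuration>
```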


Cookies

• It is used to maintain server-side information at the client system. { OR } A cookie
can be defined as a small amount of memory used by the web server on the client system.
Usage: The main purpose of cookies is storing personal information of the
client; it can be username, pwd, no of visits, session id.
• Cookies can be of 2 types:-
• Client Side Cookies If the cookie information is set using Javascript / VbScript within
an HTML page then it is said to be a client Side Cookies.
• Server Side CookiesIf the cookie information is set using server side technology then
it is said to be server side cookies.They are of 2 types:
1] Persistent Cookies ( Permanent Cookies )
2] Non-Persistent Cookies ( Temporary Cookies )
• 1] Persistent Cookies ( Permanent Cookies )
o When the cookie is stored in the hard disk memory, it is called a
persistent cookie.
o When you provide Expires, the cookie will be considered persistent.
• 2] Non-Persistent Cookies ( Temporary Cookies )
o When the cookie is stored within the process memory of the browser, it is
called a temporary cookie.
• Syntax

• To set the cookies information
Response.cookies(“cookie name”).value = value
• To get or read the value from a cookie
variable =
request.cookies(“cookie name”).value

Points to remember about cookies

• Cookie information will always be stored at the client system only.

• Cookie information is browser dependent, i.e. the values of the cookies set by one browser
cannot be read by another browser.
• If the cookie information is set by IE, that info will be maintained in the memory of the
browser itself.
• If the cookie information is set by Netscape Navigator, the information will be
maintained in the “Cookies.txt” file.
• There is no security for cookie information.
• Browsers have the capability to disable the usage of cookies.
• Note  if the browser disables cookies and a web form which uses cookies is
requested, then the request will not function properly.
• The user can change the cookie content (or) the user can delete the text file.
• A browser will support 20 cookies towards a single website. If we add a 21st cookie,
the first cookie will automatically be deleted.
• A cookie can represent a maximum of 4KB of data.
• To bind the cookie information to a specific domain 
response.cookies(“cookie name”).Domain = DomainName
• To allow the different paths of the same domain to access the cookie information 
response.cookies(“cookie name”).path = “/path….”
• note the default expiration time for the cookies is 30 min.
• To set the expiration time for the cookie info 
response.cookies(“cookie name”).expires = dateTime

• To secure the cookie information 
response.cookies(“cookie Name”).secure = booleanValue


Session

When a client makes a first request to the application, the runtime will create a block of
memory for the client in the web server machine. This block of memory is called session
memory. This memory will be unique to the client, with a timeout of 20 min by default. Here the
timeout is counted from the last client request, not from the creation of the session. A cookie can
represent only plain text, not an object, but session memory can hold objects.

Differences between Session & Cookies

1) A session is maintained in the web server system, so it is called server-side state management; cookies are maintained in the client system, so they are called client-side state management.
2) A session can represent objects; a cookie can represent plain text only.
3) Sessions give more security for data; cookies give less security for data.
4) Accessing a session will be slow; accessing a cookie would be faster.

Limitations of sessions in ASP

• In ASP, client info is maintained by the server using its SessionID irrespective of session
object usage.
• Sessions in ASP are always cookie based.
• Enabling and disabling of sessions is not supported in ASP

Sessions in ASP.Net

• Sessions in ASP.Net can be
Cookies Based ( Default )
They can be stored in a database also (SQL Server)
• Syntax
 To set session info
Session(“variable”) = value
 To read / get the value
variable = Session(“variable”)
• Note:
• If the value assigned to the session variable is character data, then the info will be
maintained in the Contents collection of the session object
• If the value assigned to the session variable is an object, then that information will be
maintained in the StaticObjects collection of the session object.
• By default, session state for the application will be true, and hence the contents of the
session object can be used.
• In order to disable the session object usage in a web form, the “EnableSessionState”
attribute of the page directive should be set to false.
• I.e. go to the HTML view, and in the page directive at the start of the page set
EnableSessionState = false.
• Syntax 
<% @ page language =“vb” enablesessionstate=“false”…….%>
• Session Object

• Session Object  this object can be used to access session memory from web
The following are the methods 
1. Add(key,value) where key  String and value  object
3.Abandon()  to close the session

• Points to remember about Session

• The default session timeout is 20mins

To set the session timeout
session.timeout = minutes ( specify the min)
In web.config we have tag available for session
<sessionstate mode=“Inproc” cookieless=“false” timeout =“minutes” />
Note : the default sessionstate uses cookies for maintaining the data or info.
• To define a session as cookie-less, then in web.config:
<sessionstate mode=“Inproc” cookieless=“true” timeout=“20” />
note: once the sessionstate is set to cookieless then the sessionInfo or the sessionID
will be appended to the URL for each and every webform present in the web application.
• In order to retrieve the sessionID of the client
session.sessionID should be used.
• In order to maintain the session info on the SQL Server database, then in web.config:
<sessionState mode=“sqlserver” stateConnectionString=“tcpip=127.0.0.1:42424”
sqlConnectionString=“______(complete path)” cookieless=“false” timeout=“20” />
• In order to clear a session variable present in the session object Contents collection,
“session.contents.remove(“variable”)” should be used.
• In order to clear all the items present in the Contents collection,
“session.contents.removeall()” should be used.
• In order to kill the session of the user then “session.abandon()” method should be used.
• To disable the session information for a specific webform then enablesessionstate=“false”
should be set for the page.


Application

• It is used to maintain the state of all the users accessing the web application.

• When the first request of the first client comes to the application, the web server will allocate
a block of memory; this is called application memory.
• The application memory will not have any life time.
• The Application object can be used to access application memory from a web page
• The Application object consists of the following methods 
1} Add (key,value) {or} Application(“var”) = value
2} Remove(key)
3} lock()
4} unLock()
note  the lock and unlock are not available in session,but available in application .
• To set:
Application (“variable”) = value
• To read:
variable = application(“variable”)
• ProblemIf the application object is not maintained properly then it will result in Data
• Whenever the application variables are used in the webform then it is mandatory to Lock
the application contents.
• To do: Application.Lock()
• If application.lock() method is encountered while processing the webform then all the
other requests which uses the application contents will not be blocked till the webform
processing is completed.
• Lock is used to allow only one client at a particular time.
• Each client request to the webserver is considered a thread. The webserver will allocate
equal processor time to all the threads. In this aspect, more than one thread can manipulate
application memory data; this may lead to improper results, and to avoid this it is recommended
to synchronise the threads.
• Synchronisation is nothing but allowing one user at a particular time.
• The synchronisation of threads can be implemented using the Lock and Unlock methods.


Global.asax

• It’s a collection of events where the code written in those events will be executed implicitly
whenever the relevant event takes place.
• In order to work with the application and the session objects and to handle the events in a
proper manner, the “global.asax” file should be used.
• Application_Start  the code written in this event will be executed only once whenever the
application has been encountered with the first request
• Session_Start  the code written in this event will be executed when ever a new session for
the user starts.
• Application_BeginRequest  the code written in this event will be fired whenever any
webform present in the web application is loaded.
• Application_Authenticate  the code written in this event will be executed whenever the
authentication takes place.
• Application_Error  the code written in this event will be executed whenever any error or
exception occurs in the webforms present in the web application.
Note  in order to get the last error which has been generated on the webform
“server.getLastError()” should be used.
• Session_End  the code written in this event will be executed whenever the session of the
user ends
• Application_End  the code written in this event will be executed whenever the web
application is closed.
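The events above live in the global.asax file; a minimal sketch (the hit-counter variable is illustrative, not from the original notes):

```aspx
<%@ Application Language="VB" %>
<script runat="server">
    Sub Application_Start(sender As Object, e As EventArgs)
        ' executed only once, on the first request to the application
        Application("hits") = 0
    End Sub

    Sub Session_Start(sender As Object, e As EventArgs)
        ' executed whenever a new user session starts
        Application.Lock()
        Application("hits") = CInt(Application("hits")) + 1
        Application.UnLock()
    End Sub
</script>
```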


Caching

• It is used to maintain the result of a webform temporarily for a specific period of time.
• ASP supports only client-side caching.
• Whereas ASP.Net supports both client-side caching and server-side caching.

Client Side Caching
• If the cache page is maintained at the client side, it is said to be client side caching.
[Diagram: client systems connected through a modem/ISP to the web server, with the cache page maintained at the client side]

• To set this:
Response.CacheControl = “public”
• Advantage: only the people who are connected in the network will be getting the
page faster.

Server Side Caching

• If the cache page is maintained at the web server, then it is said to be server side caching.

• Points to remember
• Caching should be used if and only if the following properties are satisfied:
1} The contents of the webform should not be modified, at least for a specific period of time.
2} The number of requests for the webform present in the web application should be high.

Types – Server side caching

• 1) Page Output Cache
• 2) Page Fragmentation (Partial) Cache
• 3) Data Cache.

Page Output Cache
Whenever the complete result or output of the webform is maintained as a cache
page at the webserver, it is said to be a page output cache.

• To set
<%@ OutputCache Duration=“seconds”
VaryByParam=“none/controlName/variableName” %>
• VaryByParam  it is used to maintain an individual cache page for every distinct value
of the control or variable assigned to VaryByParam.
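For example, an output-cache directive of the following form keeps a separate cached copy of the page for each distinct value of an `id` query-string parameter (`id` here is an illustrative parameter name):

```aspx
<%@ OutputCache Duration="60" VaryByParam="id" %>
```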
• Page Fragmentation Cache  It is used to maintain only partial page contents as
cache contents on the web server
• To achieve page fragmentation caching 
 Define a web user control
 Set the cache for the user control
 Use the web user control on the web form.
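Sketch of the second step: the cache directive goes inside the user control file itself, so only that fragment of the page is cached (the file contents are illustrative):

```aspx
<%-- webusercontrol2.ascx --%>
<%@ Control Language="vb" %>
<%@ OutputCache Duration="120" VaryByParam="none" %>
<asp:Label id="lblTime" runat="server" />
```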

Web User Control

• Web User Control  It is used to design a web control which can be used by an webforms of
• To design  Project  Add web user control
• To use the web user control on the web form 
• 1st method 
select the name of the web user control file in the solution explorer and then drag drop that
file on to the web form.
• 2nd method 
1} register the web user control as a tag prefix in the webform:
for eg :
<% @ register tagprefix = “UC1” tagname=“webusercontrol” src=“webusercontrol2.aspx”

2} place the web user control as a normal control on the webform
<uci:webusercontrol2 id=“wuc2” runat=“server” />

Data Cache

• It is used to maintain the data present in an object as cache information, where the
object can be a dataset, dataview or datareader.
• Note: once the data has been set as a cache, if the data is modified or manipulated at
the database level there won’t be any reflection in the data present in the cache.


Tracing

• It is used to trace the flow of the application.

• It is of 2 types 
• Application level tracing  If this is used, then the trace details or information will be
provided for all the webforms present in the web application.
• Page level tracing  if used, then the trace details will be set only for the specific web form.
• Note  if both application level and page level tracing information are set, then
preference will be given to the page level tracing only.
• To set application level tracing 
in web.config  <trace enabled=“true” requestLimit=“10” pageOutput=“true” …… />
• Methods to support tracing

• Trace.Write  It is used to write the data onto the trace information.

• Trace.Warn  it is used to write the data onto the trace information using red as its fore
color, so that the information will be highlighted in the trace info section.
• To set page level trace info, in the page directive tag:
<%@ Page Language=“vb” Trace=“true” %>

Introduction to C#

The Common Language Infrastructure

The Common Language Infrastructure (CLI) is a specification that allows several different
programming languages to be used together on a given platform.

• Parts of the Common Language Infrastructure:

• Common Intermediate language (CIL) including a common type system (CTS)
• Common Language Specification (CLS) - shared by all languages
• Virtual Execution System (VES)

Metadata about types, dependent libraries, attributes, and more

MONO and .NET are both implementations of the Common Language Infrastructure

The C# language and the Common Language Infrastructure are standardized by ECMA and ISO

CLI Overview

C# Compilation and Execution
The Common Language Infrastructure supports a two-step compilation process

• Compilation
• The C# compiler: Translation of C# source to CIL
• Produces .dll and .exe files
• Just in time compilation: Translation of CIL to machine code
• Execution
• With interleaved Just in Time compilation
• On Mono: Explicit activation of the interpreter

On Windows: Transparent activation of the interpreter

.dll and .exe files are - with some limitations - portable in between different platforms

What is Common Language Runtime?

The Common Language Runtime is the engine that compiles the source code into an
intermediate language. This intermediate language is called the Microsoft Intermediate
Language (MSIL).

During the execution of the program this MSIL is converted to the native code or the machine
code. This conversion is possible through the Just-In-Time compiler. During compilation the end
result is a Portable Executable file (PE).

This portable executable file contains the MSIL and additional information called the metadata.
This metadata describes the assembly that is created. Class names, methods, signature and other
dependency information are available in the metadata. Since the CLR compiles the source code
to an intermediate language, it is possible to write the code in any language of your choice. This
is a major advantage of using the .Net framework.

The other advantage is that the programmers need not worry about managing the memory
themselves in the code. Instead, the CLR will take care of that through a process called
Garbage Collection. This frees the programmer to concentrate on the logic of the application
instead of worrying about memory handling.

Common Language Runtime Overview

Compilers and tools expose the runtime's functionality and enable you to write code that benefits
from this managed execution environment. Code that you develop with a language compiler that
targets the runtime is called managed code; it benefits from features such as cross-language
integration, cross-language exception handling, enhanced security, versioning and deployment
support, a simplified model for component interaction, and debugging and profiling services.

To enable the runtime to provide services to managed code, language compilers must emit
metadata that describes the types, members, and references in your code. Metadata is stored with
the code; every loadable common language runtime portable executable (PE) file contains
metadata. The runtime uses metadata to locate and load classes, lay out instances in memory,
resolve method invocations, generate native code, enforce security, and set run-time context
boundaries.

The runtime automatically handles object layout and manages references to objects, releasing
them when they are no longer being used. Objects whose lifetimes are managed in this way are
called managed data. Garbage collection eliminates memory leaks as well as some other
common programming errors. If your code is managed, you can use managed data, unmanaged
data, or both managed and unmanaged data in your .NET Framework application. Because
language compilers supply their own types, such as primitive types, you might not always know
(or need to know) whether your data is being managed.

The common language runtime makes it easy to design components and applications whose
objects interact across languages. Objects written in different languages can communicate with
each other, and their behaviors can be tightly integrated. For example, you can define a class and
then use a different language to derive a class from your original class or call a method on the
original class. You can also pass an instance of a class to a method of a class written in a
different language. This cross-language integration is possible because language compilers and
tools that target the runtime use a common type system defined by the runtime, and they follow
the runtime's rules for defining new types, as well as for creating, using, persisting, and binding
to types.

As part of their metadata, all managed components carry information about the components and
resources they were built against. The runtime uses this information to ensure that your
component or application has the specified versions of everything it needs, which makes your
code less likely to break because of some unmet dependency. Registration information and state
data are no longer stored in the registry where they can be difficult to establish and maintain.
Rather, information about the types you define (and their dependencies) is stored with the code
as metadata, making the tasks of component replication and removal much less complicated.

Language compilers and tools expose the runtime's functionality in ways that are intended to be
useful and intuitive to developers. This means that some features of the runtime might be more
noticeable in one environment than in another. How you experience the runtime depends on
which language compilers or tools you use. For example, if you are a Visual Basic developer,
you might notice that with the common language runtime, the Visual Basic language has more
object-oriented features than before. Following are some benefits of the runtime:

• Performance improvements.
• The ability to easily use components developed in other languages.
• Extensible types provided by a class library.
• New language features such as inheritance, interfaces, and overloading for object-oriented
programming; support for explicit free threading that allows creation of multithreaded,
scalable applications; support for structured exception handling and custom attributes.
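Since the runtime itself is language-neutral, these object-oriented features can be sketched in any managed-style language. The TypeScript sketch below (all names invented for illustration) shows interfaces, inheritance, and structured exception handling working together:

```typescript
// Illustrative sketch only: interfaces, inheritance, and structured
// exception handling, as made available by a managed runtime.
interface Equipment {
  isgNumber: string;          // unique ISG number, as in the project domain
  describe(): string;
}

class SupplyItem implements Equipment {
  constructor(public isgNumber: string, public name: string) {}
  describe(): string {
    return `${this.name} (ISG ${this.isgNumber})`;
  }
}

// Inheritance: a specialized item reuses and extends the base behaviour.
class VehicleItem extends SupplyItem {
  describe(): string {
    return `Vehicle: ${super.describe()}`;
  }
}

// Structured exception handling: errors propagate as typed objects.
function requireIsg(item: Equipment): string {
  if (!item.isgNumber) {
    throw new Error("missing ISG number");
  }
  return item.describe();
}

const truck = new VehicleItem("0042", "Tipper Truck");
console.log(requireIsg(truck)); // Vehicle: Tipper Truck (ISG 0042)
```

The runtime's contribution is that features like these behave consistently regardless of which source language produced the code.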

If you use Microsoft® Visual C++® .NET, you can write managed code using the Managed
Extensions for C++, which provide the benefits of a managed execution environment as well as
access to powerful capabilities and expressive data types that you are familiar with. Additional
runtime features include:

• Cross-language integration, especially cross-language inheritance.

• Garbage collection, which manages object lifetime so that reference counting is unnecessary.
• Self-describing objects, which make using Interface Definition Language (IDL) unnecessary.

• The ability to compile once and run on any CPU and operating system that supports the
runtime.

You can also write managed code using the C# language, which provides the following benefits:

• Complete object-oriented design.

• Very strong type safety.
• A good blend of Visual Basic simplicity and C++ power.
• Garbage collection.
• Syntax and keywords similar to C and C++.
• Use of delegates rather than function pointers for increased type safety and security. Function
pointers are available through the use of the unsafe C# keyword and the /unsafe option of
the C# compiler (Csc.exe) for unmanaged code and data.
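The delegate idea, a type-safe reference to a function with a known signature, can be sketched in TypeScript, whose function types play a similar role (names below are illustrative):

```typescript
// A "delegate" is a type-safe reference to a method, unlike a raw function
// pointer: only functions matching the declared signature can be assigned.
type Comparison = (a: number, b: number) => number;  // the delegate signature

const ascending: Comparison = (a, b) => a - b;
const descending: Comparison = (a, b) => b - a;

// Any function matching the signature may be passed; anything else is a
// compile-time error, which is the type-safety benefit described above.
function sortCopy(values: number[], cmp: Comparison): number[] {
  return [...values].sort(cmp);
}

console.log(sortCopy([3, 1, 2], ascending));   // sorted ascending: 1, 2, 3
console.log(sortCopy([3, 1, 2], descending));  // sorted descending: 3, 2, 1
```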

Managed Execution Process

.NET Framework 1.1

The managed execution process includes the following steps:

Choosing a compiler:

To obtain the benefits provided by the common language runtime, you must use one or more
language compilers that target the runtime, such as Visual Basic, C#, Visual C++, JScript, or one
of many third-party compilers such as an Eiffel, Perl, or COBOL compiler.

Because it is a multilanguage execution environment, the runtime supports a wide variety of
data types and language features. The language compiler you use determines which runtime
features are available, and you design your code using those features. Your compiler, not the
runtime, establishes the syntax your code must use. If your component must be completely
usable by components written in other languages, your component's exported types must expose
only language features that are included in the Common Language Specification (CLS).

Compiling to MSIL:

When compiling to managed code, the compiler translates your source code into Microsoft
intermediate language (MSIL), which is a CPU-independent set of instructions that can be
efficiently converted to native code. MSIL includes instructions for loading, storing, initializing,
and calling methods on objects, as well as instructions for arithmetic and logical operations,
control flow, direct memory access, exception handling, and other operations. Before code can
be run, MSIL must be converted to CPU-specific code, usually by a just-in-time (JIT) compiler.
Because the common language runtime supplies one or more JIT compilers for each computer
architecture it supports, the same set of MSIL can be JIT-compiled and run on any supported
architecture.

When a compiler produces MSIL, it also produces metadata. Metadata describes the types in
your code, including the definition of each type, the signatures of each type's members, the
members that your code references, and other data that the runtime uses at execution time. The
MSIL and metadata are contained in a portable executable (PE) file that is based on and extends
the published Microsoft PE and common object file format (COFF) used historically for
executable content. This file format, which accommodates MSIL or native code as well as
metadata, enables the operating system to recognize common language runtime images. The
presence of metadata in the file along with the MSIL enables your code to describe itself, which
means that there is no need for type libraries or Interface Definition Language (IDL). The
runtime locates and extracts the metadata from the file as needed during execution.
For detailed descriptions of MSIL instructions, see the Tool Developers Guide directory of
the .NET Framework SDK.

Compiling translates your source code into MSIL and generates the required metadata.

Compiling MSIL to Native Code

Before you can run Microsoft intermediate language (MSIL), it must be converted by a .NET
Framework just-in-time (JIT) compiler to native code, which is CPU-specific code that runs on
the same computer architecture as the JIT compiler. Because the common language runtime
supplies a JIT compiler for each supported CPU architecture, developers can write a set of MSIL
that can be JIT-compiled and run on computers with different architectures. However, your
managed code will run only on a specific operating system if it calls platform-specific native
APIs, or a platform-specific class library.

JIT compilation takes into account the fact that some code might never get called during
execution. Rather than using time and memory to convert all the MSIL in a portable executable
(PE) file to native code, it converts the MSIL as needed during execution and stores the resulting
native code so that it is accessible for subsequent calls. The loader creates and attaches a stub to
each of a type's methods when the type is loaded. On the initial call to the method, the stub
passes control to the JIT compiler, which converts the MSIL for that method into native code and
modifies the stub to direct execution to the location of the native code. Subsequent calls of the
JIT-compiled method proceed directly to the native code that was previously generated, reducing
the time it takes to JIT-compile and run the code.
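The stub-and-replace mechanism can be sketched as a memoizing wrapper. This is an analogy in TypeScript, not the runtime's actual implementation: the first call pays the "compile" cost once, and later calls go straight to the stored result of that work.

```typescript
// Sketch of the JIT stub pattern: compile on first call, then reuse.
let compileCount = 0;

function makeMethodStub(source: (n: number) => number): (n: number) => number {
  let impl: ((n: number) => number) | null = null;  // no "native code" yet
  return (n: number) => {
    if (impl === null) {
      compileCount++;        // "JIT-compile" on the first call only
      impl = source;         // stand-in for emitting native code
    }
    return impl(n);          // subsequent calls skip the compile step
  };
}

const square = makeMethodStub((n) => n * n);
square(2);
square(3);
console.log(compileCount);  // 1: compiled once, reused afterwards
```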

The runtime supplies another mode of compilation called install-time code generation. The
install-time code generation mode converts MSIL to native code just as the regular JIT compiler
does, but it converts larger units of code at a time, storing the resulting native code for use when
the assembly is subsequently loaded and run. When using install-time code generation, the entire
assembly that is being installed is converted into native code, taking into account what is known
about other assemblies that are already installed. The resulting file loads and starts more quickly
than it would have if it were being converted to native code by the standard JIT option.

As part of compiling MSIL to native code, code must pass a verification process unless an
administrator has established a security policy that allows code to bypass verification.
Verification examines MSIL and metadata to find out whether the code is type safe, which
means that it only accesses the memory locations it is authorized to access. Type safety helps
isolate objects from each other and therefore helps protect them from inadvertent or malicious
corruption. It also provides assurance that security restrictions on code can be reliably enforced.

The runtime relies on the fact that the following statements are true for code that is verifiably
type safe:

 A reference to a type is strictly compatible with the type being referenced.

 Only appropriately defined operations are invoked on an object.
 Identities are what they claim to be.

During the verification process, MSIL code is examined in an attempt to confirm that the code
can access memory locations and call methods only through properly defined types. For
example, code cannot allow an object's fields to be accessed in a manner that allows memory
locations to be overrun. Additionally, verification inspects code to determine whether the MSIL
has been correctly generated, because incorrect MSIL can lead to a violation of the type safety
rules. The verification process passes a well-defined set of type-safe code, and it passes only
code that is type safe. However, some type-safe code might not pass verification because of
limitations of the verification process, and some languages, by design, do not produce verifiably
type-safe code. If type-safe code is required by security policy and the code does not pass
verification, an exception is thrown when the code is run.

1. At execution time, a just-in-time (JIT) compiler translates the MSIL into native code. During
this compilation, code must pass a verification process that examines the MSIL and metadata
to find out whether the code can be determined to be type safe.
2. Executing your code.
a. The common language runtime provides the infrastructure that enables execution to
take place as well as a variety of services that can be used during execution.

6.3 SQL Server 2008
CLR architecture


The .NET Framework CLR is very tightly integrated with the SQL Server 2005 database engine.
In fact, the SQL Server database engine hosts the CLR. This tight level of integration gives SQL
Server 2005 several distinct advantages over the .NET integration that's provided by DB2 and
Oracle. You can see an overview of the SQL Server 2005 database engine and CLR integration
in Figure 3-1.

As you can see in Figure 3-1, the CLR is hosted within the SQL Server database engine. A SQL
Server database uses a special API or hosting layer to communicate with the CLR and interface
the CLR with the Windows operating system. Hosting the CLR within the SQL Server database
gives the SQL Server database engine the ability to control several important aspects of the CLR,
including:

 Memory management
 Threading
 Garbage collection

The DB2 and Oracle implementations both run the CLR as an external process, which means that
the CLR and the database engine both compete for system resources. SQL Server 2005's in-
process hosting of the CLR provides several important advantages over the external
implementation used by Oracle or DB2. First, in-process hosting enables SQL Server to control
the execution of the CLR, putting essential functions such as memory management, garbage
collection, and threading under the control of the SQL Server database engine. In an external
implementation the CLR will manage these things independently. The database engine has a
better view of the system requirements as a whole and can manage memory and threads better

than the CLR can do on its own. In the end, hosting the CLR in-process will provide better
performance and scalability.

Figure 3-1: The SQL Server CLR database architecture

Enabling CLR support

By default, the CLR support in the SQL Server database engine is turned off. This ensures that
update installations of SQL Server do not unintentionally introduce new functionality without the
explicit involvement of the administrator. To enable SQL Server's CLR support, you need to use
the advanced options of SQL Server's sp_configure system stored procedure, as shown in the
following listing:

sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO

sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

CLR Database object components

To create .NET database objects, you start by writing managed code in any one of the .NET
languages, such as VB, C#, or Managed C++, and compile it into a .NET DLL (dynamic link
library). The most common way to do this would be to use Visual Studio 2005 to create a new
SQL Server project and then build that project, which creates the DLL. Alternatively, you create
the .NET code using your editor of choice and then compiling the code into a .NET DLL using
the .NET Framework SDK. ADO.NET is the middleware that connects the CLR DLL to the SQL
Server database. Once the .NET DLL has been created, you need to register that DLL with SQL
Server, creating a new SQL Server database object called an assembly. The assembly essentially
encapsulates the .NET DLL. You then create a new database object such as a stored procedure or
a trigger that points to the SQL Server assembly. You can see an overview of the process to
create a CLR database object in Figure 3-2.

Figure 3-2: Creating CLR database objects

CLR assemblies in SQL Server 2005


SQL Server .NET data provider

If you're familiar with ADO.NET, you may wonder exactly how CLR database objects connect
to the database. After all, ADO.NET makes its database connection using client-based .NET data
providers such as the .NET Framework Data Provider for SQL Server, which connects using
networked libraries. While that's great for a client application, going through the system's
networking support for a database call isn't the most efficient mode for code that's running
directly on the server. To address this issue, Microsoft created the new SQL Server .NET Data
Provider. The SQL Server .NET Data Provider establishes an in-memory connection to the SQL
Server database.


After the coding for the CLR object has been completed, you can use that code to create a SQL
Server assembly. If you're using Visual Studio 2005, then you can simply select the Deploy
option, which will take care of both creating the SQL Server assembly as well as creating the
target database object.

If you're not using Visual Studio 2005 or you want to perform the deployment process manually,
then you need to copy the .NET DLL to a common storage location of your choice. Then, using
SQL Server Management Studio, you can execute a T-SQL CREATE ASSEMBLY statement
that references the location of the .NET DLL, as you can see in the following listing:

CREATE ASSEMBLY MyCLRDLL
FROM '\\<server>\<share>\MyCLRDLL.dll'

The CREATE ASSEMBLY command takes a parameter that contains the path to the DLL that
will be loaded into SQL Server. This can be a local path, but more often it will be a path to a
networked file share. When the CREATE ASSEMBLY is executed, the DLL is copied into the
master database.

If an assembly is updated or becomes deprecated, then you can remove the assembly using the
DROP ASSEMBLY command as follows:

DROP ASSEMBLY MyCLRDLL

Because assemblies are stored in the database, when the source code for that assembly is
modified and the assembly is recompiled, the assembly must first be dropped from the database
using the DROP ASSEMBLY command and then reloaded using the CREATE ASSEMBLY
command before the updates will be reflected in the SQL Server database objects.

You can use the sys.assemblies view to view the assemblies that have been added to SQL Server
2005 as shown here:

SELECT * FROM sys.assemblies

Since assemblies are created using external files, you may also want to view the files that were
used to create those assemblies. You can do that using the sys.assembly_files view as shown
here:
SELECT * FROM sys.assembly_files

Creating CLR database objects


After the SQL Server assembly is created, you can then use SQL Server Management Studio to
create the target database object, using an EXTERNAL NAME clause to point to the assembly
that you created earlier.

When the assembly is created, the DLL is copied into the target SQL Server database and the
assembly is registered. The following code illustrates creating the MyCLRProc stored procedure
that uses the MyCLRDLL assembly:

CREATE PROCEDURE MyCLRProc
AS EXTERNAL NAME MyCLRDLL.StoredProcedures.MyCLRProc

The EXTERNAL NAME clause is new to SQL Server 2005. Here the EXTERNAL NAME
clause specifies that the stored procedure MyCLRProc will be created using a SQL Server
assembly. The DLL that is encapsulated in the SQL Server assembly can contain multiple classes
and methods; the EXTERNAL NAME statement uses the following syntax to identify the correct
class and method to use from the assembly:

AssemblyName.ClassName.MethodName

In the case of the preceding example, the registered assembly is named MyCLRDLL. The class
within the assembly is StoredProcedures, and the method within that class that will be executed
is MyCLRProc.

Specific examples showing how you actually go about creating a new managed code project with
Visual Studio 2005 are presented in the next section.

Creating CLR database objects

The preceding section presented an overview of the process along with some example manual
CLR database object creation steps to help you better understand the creation and deployment
process for CLR database objects. However, while it's possible to create CLR database objects
manually, that's definitely not the most productive method. The Visual Studio 2005 Professional,
Enterprise, and Team System Editions all have tools that help create CLR database objects as
well as deploy and debug them. In the next part of this chapter you'll see how to create each of
the new CLR database objects using Visual Studio 2005.

NOTE: The creation of SQL Server projects is supported in Visual Studio 2005 Professional
Edition and higher. It is not present in Visual Studio Standard Edition or the earlier releases of
Visual Studio.

6.5 JavaScript
JavaScript is a script-based programming language that was developed by Netscape
Communications Corporation. JavaScript was originally called LiveScript and was renamed
JavaScript to indicate its relationship with Java. JavaScript supports the development of both
client and server components of Web-based applications. On the client side, it can be used to
write programs that are executed by a Web browser within the context of a Web page. On the
server side, it can be used to write Web server programs that can process information submitted
by a Web browser and then update the browser's display accordingly.

Even though JavaScript supports both client and server Web programming, we prefer
JavaScript for client-side programming, since most browsers support it. JavaScript is
almost as easy to learn as HTML, and JavaScript statements can be included in HTML
documents by enclosing the statements between a pair of scripting tags:


<SCRIPT LANGUAGE="JavaScript">
JavaScript statements
</SCRIPT>

Here are a few things we can do with JavaScript:

 Validate the contents of a form and make calculations.

 Add scrolling or changing messages to the browser's status line.
 Animate images or rotate images that change when we move the mouse over them.

JavaScript is an easy-to-use programming language that can be embedded in the header of your
web pages. It can enhance the dynamics and interactive features of your page by allowing you to
perform calculations, check forms, write interactive games, add special effects, customize
graphics selections, create security passwords and more.
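A form-validation routine of the kind described above can be sketched as follows, here in TypeScript, a typed superset of JavaScript. In a real page the values would come from DOM form fields; the field names below are invented for illustration:

```typescript
// Minimal form-validation sketch: the check is a pure function so the idea
// stands alone without a browser. An empty result means the form may submit.
function validateSupplyOrderForm(fields: { orderNo: string; quantity: string }): string[] {
  const errors: string[] = [];
  if (fields.orderNo.trim() === "") {
    errors.push("Supply order number is required.");
  }
  const qty = Number(fields.quantity);
  if (!Number.isInteger(qty) || qty <= 0) {
    errors.push("Quantity must be a positive whole number.");
  }
  return errors;
}

console.log(validateSupplyOrderForm({ orderNo: "SO-17", quantity: "5" }));   // []
console.log(validateSupplyOrderForm({ orderNo: "", quantity: "0" }).length); // 2
```

In a page, such a function would typically be wired to the form's onsubmit handler so invalid data never reaches the server.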

Benefits of Java Script

Following are the benefits of JavaScript.

 Associative arrays
 Loosely typed variables
 Regular expressions
 Objects and classes
 Highly evolved date, math, and string libraries
 W3C DOM support
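Several of the listed features can be seen in one short sketch (the keys and values below are invented for illustration):

```typescript
// An associative array (object keyed by string), a regular expression,
// and the built-in string library, all in a few lines.
const stock: { [isg: string]: number } = {};  // associative array
stock["ISG-100"] = 25;
stock["ISG-101"] = 40;

const isgPattern = /^ISG-\d+$/;               // regular expression
const valid = isgPattern.test("ISG-100");     // matches the pattern

const label = "supply order".toUpperCase();   // string library
console.log(valid, stock["ISG-100"], label);  // true 25 SUPPLY ORDER
```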

Disadvantages of JavaScript

 The developer depends on browser support for JavaScript.
 There is no way to hide the JavaScript code, which matters for commercial applications.

5.6.1 UML Diagrams

UML stands for Unified Modeling Language

"The Unified Modeling Language (UML) is a graphical language for visualizing,
specifying, constructing, and documenting the artifacts of a software-intensive system.
The UML offers a standard way to write a system's blueprints, including conceptual
things such as business processes and system functions as well as concrete things such
as programming language statements, database schemas, and reusable software
components."

UML is unique in that it has a standard data representation called the metamodel. The
metamodel is a description of UML in UML: it describes the objects, attributes, and
relationships necessary to represent the concepts of UML within a software application.

The UML notation is rich and full-bodied, and it comprises two major subdivisions. There is a
notation for modeling the static elements of a design, such as classes, attributes, and
relationships, and a notation for modeling the dynamic elements of a design, such as objects,
messages, and finite state machines. UML allows the software engineer to express an analysis
model using a modeling notation governed by a set of syntactic, semantic, and pragmatic rules.

A UML system is represented using five different views that describe the system from distinctly
different perspectives.

User Model View

This view represents the system from the user's perspective. The analysis representation
describes a usage scenario from the end-user's perspective.

Structural model view

In this model the data and functionality are viewed from inside the system; this view
models the static structures.

Behavioral Model View

This view represents the dynamic and behavioral aspects of the system, depicting the
interactions between the various structural elements described in the user model and structural
model views.

Implementation Model View

In this view the structural and behavioral aspects of the system are represented as they are to be built.

Environmental Model View

In this view the structural and behavioral aspects of the environment in which the system is to be
implemented are represented.

UML is specifically constructed through two different domains:

 UML analysis modeling, which focuses on the user model and structural model views of the
system.
 UML design modeling, which focuses on the behavioral model, implementation model, and
environmental model views.

Relationships in UML

Generalization relationship

In UML modeling, a generalization relationship is a relationship in which one model element
(the child) is based on another model element (the parent). Generalization relationships are used
in class, component, deployment, and use case diagrams.

To comply with UML semantics, the model elements in a generalization relationship must be
the same type. For example, a generalization relationship can be used between actors or between
use cases; however, it cannot be used between an actor and a use case.

The parent model element can have one or more children, and any child model element can have
one or more parents. It is more common to have a single parent model element and multiple child
model elements.

Generalization relationships do not have names. A generalization relationship indicates that a
specialized (child) model element is based on a general (parent) model element.
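A generalization can be sketched in code as a common parent with specialized children. The class names below are illustrative, echoing the roles used later in the Implementation section:

```typescript
// Generalization: two child elements based on one parent element.
class Employee {                       // parent (general) element
  constructor(public name: string) {}
  role(): string { return "employee"; }
}

class Manager extends Employee {       // child (specialized) element
  role(): string { return "manager"; }
}

class SalesPerson extends Employee {   // a second child of the same parent
  role(): string { return "sales person"; }
}

// Both children can be treated uniformly as the parent type.
const staff: Employee[] = [new Manager("Asha"), new SalesPerson("Ravi")];
console.log(staff.map((e) => e.role()));  // roles: manager, sales person
```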

Association relationship

An association relationship is a structural relationship between two model elements that shows
that objects of one classifier (actor, use case, class, interface, node, or component) connect and
can navigate to objects of another classifier. Even in bidirectional relationships, an association
connects two classifiers, the primary (supplier) and secondary (client).

In UML models, an association is a relationship between two classifiers, such as classes or use
cases, that describes the reasons for the relationship and the rules that govern it. An association
appears as a solid line between two classifiers.

Aggregation relationship

In UML models, an aggregation relationship shows a classifier as a part of or subordinate to
another classifier. An aggregation is a special type of association in which objects are assembled
or configured together to create a more complex object. Aggregation protects the integrity of an
assembly of objects by defining a single point of control, called the aggregate, in the object that
represents the assembly. Aggregation also uses the control object to decide how the assembled
objects respond to changes or instructions that might affect the collection.

An aggregation association appears as a solid line with an unfilled diamond at the association
end, which is connected to the classifier that represents the aggregate. Aggregation relationships
do not have to be unidirectional.

Composition Relationship

A composition relationship represents a whole-part relationship and is a type of aggregation. A
composition relationship specifies that the lifetime of the part classifier is dependent on the
lifetime of the whole classifier. Each instance of type Circle contains an instance of type
Point. This is a relationship known as composition.

The black diamond represents composition. It is placed on the Circle class because it is the
Circle that is composed of a Point. The arrowhead on the other end of the relationship denotes
that the relationship is navigable in only one direction; that is, Point does not know about Circle.
In UML, relationships are presumed to be bidirectional unless an arrowhead is present to restrict
navigation to one direction.
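The Circle/Point composition described above reads naturally in code. A minimal sketch, with the whole creating and owning its part:

```typescript
// Composition: the Point is created and owned by the Circle, so the part's
// lifetime ends with the whole, and the part is not shared outside it.
class Point {
  constructor(public x: number, public y: number) {}
}

class Circle {
  private center: Point;  // the part: private to the whole

  constructor(x: number, y: number, public radius: number) {
    this.center = new Point(x, y);  // the whole creates its own part
  }

  describe(): string {
    return `circle at (${this.center.x}, ${this.center.y}), radius ${this.radius}`;
  }

  area(): number {
    return Math.PI * this.radius * this.radius;
  }
}

const c = new Circle(0, 0, 2);
console.log(c.describe());
console.log(c.area().toFixed(2));  // 12.57
```

Note that Circle navigates to Point but not the other way around, matching the one-way arrowhead in the diagram.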

Inheritance Relationship

The inheritance relationship in UML is depicted by a peculiar triangular arrowhead. This
arrowhead, which looks rather like a slice of pizza, points to the base class. One or more lines
proceed from the base of the arrowhead, connecting it to the derived classes.

Dependency relationships

In UML modeling, a dependency relationship is a relationship in which changes to one model
element (the supplier) impact another model element (the client). Dependency relationships can
also be used to represent precedence, where one model element must precede another.
Dependency relationships usually do not have names.

A dependency is displayed as a dashed line with an open arrow that points from the client model
element to the supplier model element.

5.6.2 Use case diagrams

A use case is a set of scenarios describing an interaction between a user and a system. A use
case diagram displays the relationships among actors and use cases. The two main components of
a use case diagram are use cases and actors.

An actor represents a user or another system that will interact with the system you
are modeling. A use case is an external view of the system that represents some action the user
might perform in order to complete a task.

5.6.3 Class diagrams

Class diagrams are widely used to describe the types of objects in a system and their
relationships.  Class diagrams model class structure and contents using design elements such as
classes, packages and objects. Class diagrams describe three different perspectives when
designing a system: conceptual, specification, and implementation. These perspectives become
evident as the diagram is created and help solidify the design.

Classes are composed of three things: a name, attributes, and operations.  Below is an example
of a class.
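In code form, a class shows the same three compartments. The TypeScript sketch below (names invented for illustration) marks each one:

```typescript
class SupplyOrder {                  // name compartment
  orderNo: string;                   // attribute compartment
  quantity: number;

  constructor(orderNo: string, quantity: number) {
    this.orderNo = orderNo;
    this.quantity = quantity;
  }

  isBulk(): boolean {                // operation compartment
    return this.quantity > 100;
  }
}

console.log(new SupplyOrder("SO-9", 250).isBulk());  // true
```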

5.6.4 Interaction diagrams

Interaction diagrams model the behavior of  use cases by describing the way groups of
objects interact to complete the task.  The two kinds of interaction diagrams are
sequence and collaboration diagrams.

Sequence diagrams demonstrate the behavior of objects in a use case by describing the objects
and the messages they pass. The diagrams are read left to right and descending. The example
below shows an object of class 1 starting the behavior by sending a message to an object of class 2.
Messages pass between the different objects until the object of class 1 receives the final message.
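The message flow a sequence diagram captures corresponds directly to method calls between objects. A small sketch with two illustrative classes:

```typescript
// One object starts the behaviour by sending a message (calling a method)
// on another, and the reply travels back to the sender.
class Inventory {                     // the receiving object's class
  private items = 42;
  countItems(): number {
    return this.items;                // the reply message
  }
}

class StoreManager {                  // the initiating object's class
  constructor(private inventory: Inventory) {}
  dailyReport(): string {
    const n = this.inventory.countItems();  // message to the other object
    return `Items in store: ${n}`;          // final message received back
  }
}

console.log(new StoreManager(new Inventory()).dailyReport());
// Items in store: 42
```

A sequence diagram of this interaction would show dailyReport on the left lifeline, a countItems arrow to the right lifeline, and a return arrow back.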

5.6.5 Collaboration diagrams

Collaboration diagrams are also relatively easy to draw.  They show the relationship between
objects and the order of messages passed between them.  The objects are listed as icons and
arrows indicate the messages being passed between them. The numbers next to the messages are
called sequence numbers.  As the name suggests, they show the sequence of the messages as they
are passed between the objects.  There are many acceptable sequence numbering schemes in
UML.  A simple 1, 2, 3... format can be used, as the example below shows.

ER Diagram

7. Implementation


1. Admin
 Views all details of all departments as well as production.
2. Manager
 Maintains the store and its information.
 Maintains daily sales.
 Updates the admin department.
3. Sales Person
 Maintains a particular store.
 Looks after the store items.

8. Testing

Software Testing is the process used to help identify the correctness, completeness, security, and
quality of developed computer software. Testing is a process of technical investigation,
performed on behalf of stakeholders, that is intended to reveal quality-related information about
the product with respect to the context in which it is intended to operate. This includes, but is not
limited to, the process of executing a program or application with the intent of finding errors.
Quality is not an absolute; it is value to some person. With that in mind, testing can never
completely establish the correctness of arbitrary computer software; testing furnishes a criticism
or comparison that compares the state and behavior of the product against a specification. An
important point is that software testing should be distinguished from the separate discipline of
Software Quality Assurance (SQA), which encompasses all business process areas, not just
software development.
There are many approaches to software testing, but effective testing of complex products is
essentially a process of investigation, not merely a matter of creating and following routine
procedure. One definition of testing is "the process of questioning a product in order to evaluate
it", where the "questions" are operations the tester attempts to execute with the product, and the
product answers with its behavior in reaction to the probing of the tester.
Although most of the intellectual processes of testing are nearly identical to that of review or
inspection, the word testing is connoted to mean the dynamic analysis of the product by putting
the product through its paces. Some of the common quality attributes include capability,
reliability, efficiency, portability, maintainability, compatibility, and usability. A good test
is sometimes described as one which reveals an error. However, more recent thinking suggests
that a good test is one which reveals information of interest to someone who matters within the
project community.
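A minimal test in this spirit "questions" the product and compares its behavior against the specification. The function under test below is invented for illustration:

```typescript
// A small unit under test: parses a quantity field per an assumed spec
// (non-negative whole numbers only).
function parseQuantity(text: string): number {
  const n = Number(text);
  if (!Number.isInteger(n) || n < 0) {
    throw new Error(`invalid quantity: ${text}`);
  }
  return n;
}

// The "questions": each case pairs an input with the answer the spec expects.
const cases: Array<[string, number]> = [["0", 0], ["25", 25]];
for (const [input, expected] of cases) {
  if (parseQuantity(input) !== expected) {
    throw new Error(`test failed for input ${input}`);
  }
}
console.log("all tests passed");
```

A good test suite would also probe the failure paths, for example confirming that non-numeric input raises an error rather than returning a wrong value.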


In general, software engineers distinguish software faults from software failures. In the case of a
failure, the software does not do what the user expects. A fault is a programming error that may
or may not actually manifest as a failure; it can also be described as an error in the semantics of
a computer program. A fault becomes a failure when the exact computation conditions are met,
one of them being that the faulty portion of the software executes on the CPU. A fault can also
turn into a failure when the software is ported to a different hardware platform or a different
compiler, or when the software is extended. Software testing is the technical investigation of the
product under test to provide stakeholders with quality-related information.

Software testing may be viewed as a sub-field of Software Quality Assurance, but it typically
exists independently. In SQA, software process specialists and auditors take a broader view of
software and its development: they examine and change the software engineering process itself
to reduce the number of faults that end up in the code, or to deliver software faster.

Regardless of the methods used or the level of formality involved, the desired result of testing is
a level of confidence in the software, so that the organization is confident that the software has
an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the
software. An arcade video game designed to simulate flying an airplane would presumably have
a much higher tolerance for defects than software used to control an actual airliner.

A problem with software testing is that the number of defects in a software product can be
very large, and the number of configurations of the product larger still. Bugs that occur
infrequently are difficult to find in testing. A rule of thumb is that a system that is expected to
function without faults for a certain length of time must have already been tested for at least that
length of time. This has severe consequences for projects that aim to write long-lived, reliable
software.

A common practice is for software testing to be performed by an independent group of testers
after the functionality is developed but before it is shipped to the customer. This practice often
results in the testing phase being used as a project buffer to compensate for project delays.
Another practice is to start software testing at the same moment the project starts and to continue
it as an ongoing process until the project finishes.

Another common practice is for test suites to be developed during technical support escalation
procedures. Such tests are then maintained in regression testing suites to ensure that future
updates to the software do not repeat any of the known mistakes. It is commonly believed that
the earlier a defect is found, the cheaper it is to fix.

In counterpoint, some emerging software disciplines, such as extreme programming and the
agile software development movement, adhere to a "test-driven development" model. In this
process unit tests are written first, by the programmers (often with pair programming in the
extreme programming methodology). These tests are expected to fail initially; then, as code is
written, it passes incrementally larger portions of the test suites. The test suites are continuously
updated as new failure conditions and corner cases are discovered, and they are integrated with
any regression tests that are developed.
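The test-first cycle described above can be sketched in a few lines. Python's unittest module is
used here only for illustration (the project itself is VB.NET-based), and the ISG-number helper is
a hypothetical stand-in, not part of the actual system: in test-driven development, the test class
would be written first and the function body completed until every test passes.

```python
import unittest

# Hypothetical unit under test. Under test-driven development this body
# would start as a stub and be filled in until the tests below pass.
def next_isg_number(last_number):
    """Return the next sequential ISG number for a new supply order."""
    return last_number + 1

class TestNextIsgNumber(unittest.TestCase):
    # Written before the implementation; fails until the code is complete.
    def test_next_number_is_sequential(self):
        self.assertEqual(next_isg_number(41), 42)

    def test_numbers_increase_monotonically(self):
        first = next_isg_number(100)
        self.assertGreater(next_isg_number(first), first)

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNextIsgNumber)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

As the paragraph above notes, such tests would then be folded into the project's regression suite
so that every later change re-exercises them.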

Unit tests are maintained along with the rest of the software source code and are generally
integrated into the build process (with inherently interactive tests being relegated to a partially
manual build acceptance process). The software, tools, samples of data input and output, and
configurations are referred to collectively as a test harness.


The separation of debugging from testing was initially introduced by Glenford J. Myers in his
1979 book "The Art of Software Testing". Although his attention was on breakage testing, it
illustrated the desire of the software engineering community to separate fundamental
development activities, such as debugging, from that of verification. In 1988, Dave Gelperin and
William C. Hetzel classified the phases and goals of software testing as follows. Until 1956 was
the debugging-oriented period, when testing was often associated with debugging and there was
no clear difference between the two. From 1957 to 1978 came the demonstration-oriented
period, in which debugging and testing were distinguished and the aim was to show that the
software satisfied its requirements. The period 1979-1982 is known as the destruction-oriented
period, where the goal was to find errors. 1983-1987 is classified as the evaluation-oriented
period, whose intention was to provide product evaluation and quality measurement during the
software lifecycle. From 1988 onward, testing entered the prevention-oriented period, in which
tests were meant to demonstrate that software satisfies its specification, to detect faults and to
prevent faults. Gelperin chaired the IEEE 829-1988 Test Documentation Standard effort, and
Hetzel wrote the book "The Complete Guide to Software Testing". Both works were pivotal to
today's testing culture and remain a consistent source of reference. Gelperin and Jerry E. Durant
also went on to develop High Impact Inspection Technology, which builds upon traditional
inspections but utilizes a test-driven additive.

White-box and black-box testing

White-box and black-box testing are terms used to describe the point of view a test engineer
takes when designing test cases: black box is an external view of the test object, while white box
is an internal view. Software testing is partly intuitive, but largely systematic. Good testing
involves much more than just running the program a few times to see whether it works; thorough
analysis of the program under test, backed by a broad knowledge of testing techniques and tools,
is a prerequisite to systematic testing. Software testing is the process of executing software in a
controlled manner in order to answer the question "Does this software behave as specified?"
Software testing is used in association with verification and validation. Verification is the
checking or testing of items, including software, for conformance and consistency with an
associated specification. Software testing is just one kind of verification, which also uses
techniques such as reviews, inspections and walkthroughs. Validation is the process of checking
that what has been specified is what the user actually wanted.

 Validation: Are we doing the right job?

 Verification: Are we doing the job right?

In order to achieve consistency in testing style, it is imperative to have and follow a set of
testing principles. This enhances the efficiency of testing among SQA team members and thus
contributes to increased productivity.

At SDEI, three levels of software testing are performed at various SDLC phases:

 Unit Testing: each unit (basic component) of the software is tested to verify that the
detailed design for the unit has been correctly implemented.
 Integration Testing: progressively larger groups of tested software components
corresponding to elements of the architectural design are integrated and tested until
the software works as a whole.

 System Testing: the software is integrated into the overall product and tested to
show that all requirements are met.

 A further level of testing is also done, in accordance with requirements:

 Acceptance Testing: the acceptance of the complete software is based on this level;
it is often performed by the clients.
 Regression Testing: refers to the repetition of earlier successful tests to ensure that
changes made to the software have not introduced new bugs or side effects.
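The unit and regression levels in the list above can be contrasted with a small sketch. The
`carry_forward` helper is hypothetical, loosely modeled on the liability worksheet described in
this report; the point is only that a unit test verifies one component against its detailed design,
and a regression suite simply re-runs earlier successful tests after each change.

```python
# Hypothetical helper: liability carried forward to the next financial
# year after clearances (illustrative, not the actual system's formula).
def carry_forward(liability, cleared):
    """Return the portion of liability not cleared this year, never negative."""
    return max(liability - cleared, 0)

# Unit test: checks the single component against its detailed design.
def test_unit_carry_forward():
    assert carry_forward(1000, 400) == 600
    assert carry_forward(300, 500) == 0  # clearances can exceed liability

# Regression suite: earlier successful tests are kept and re-run after
# every change to confirm no new bugs or side effects were introduced.
REGRESSION_SUITE = [test_unit_carry_forward]

def run_regression():
    for test in REGRESSION_SUITE:
        test()
    return len(REGRESSION_SUITE)  # number of tests that passed

run_regression()
```

In practice each new fix contributes its own test to the suite, so the suite grows over the life of
the project.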

In recent years the term grey-box testing has come into common usage. The typical grey-box
tester is permitted to set up or manipulate the testing environment, such as by seeding a
database, and can view the state of the product after these actions, for example by performing an
SQL query on the database to be certain of the values of columns. The term is used almost
exclusively for client-server testers or others who use a database as a repository of information,
but it can also apply to a tester who has to manipulate XML files (a DTD or an actual XML file)
or configuration files directly. It can also be used for testers who know the internal workings or
algorithm of the software under test and can write tests specifically for the anticipated results.
For example, testing a data warehouse implementation involves loading the target database with
information and verifying the correctness of data population and the loading of data into the
correct tables.
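A grey-box session of the kind just described can be sketched as follows. The table, column and
function names are illustrative only, and SQLite stands in for the project's actual SQL Server
database: the tester seeds the database, exercises the product, and then inspects table state
directly with an SQL query.

```python
import sqlite3

# Hypothetical product function under test: records a supply order row.
def record_supply_order(conn, isg_number, firm):
    conn.execute(
        "INSERT INTO supply_orders (isg_number, firm) VALUES (?, ?)",
        (isg_number, firm),
    )
    conn.commit()

# Grey-box setup: the tester is allowed to create and seed the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE supply_orders (isg_number INTEGER, firm TEXT)")

# Exercise the product through its normal interface.
record_supply_order(conn, 101, "ACME Supplies")

# Grey-box verification: query the repository directly to confirm the
# expected column values were written to the correct table.
row = conn.execute(
    "SELECT isg_number, firm FROM supply_orders WHERE isg_number = 101"
).fetchone()
assert row == (101, "ACME Supplies")
```

The same pattern applies to verifying a data warehouse load: seed the source, run the load, then
query the target tables for the expected rows.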

9. Screens

10. Conclusion

The objective of this project was to build a program for maintaining the details of all Supply
Orders. The system developed is able to meet all the basic requirements. It provides the facility
for users to keep track of all the equipment being supplied. The management of the inventory
will also benefit from the proposed system, as it automates the whole supply procedure and
thereby reduces the workload. The security of the system is also one of the prime concerns.

There is always room for improvement in any software, however efficient the system may be.
The important thing is that the system should be flexible enough for future modifications. The
system has been factored into different modules to help it adapt to further changes. Every effort
has been made to cover all user requirements and to make the system user friendly.

 Goal achieved: the system is able to provide the interface through which the user can
replicate his desired data.

 User friendliness: though most of the system is supposed to act in the background,
efforts have been made to make the foreground interaction with the user as smooth as
possible. The integration of the system with the Inventory Management project has
also been kept in mind throughout the development phase.

Future Scope

The future scope of the project covers the enhancements that can be made to the system to make
it more feasible to use. Databases for different product ranges and storage can be provided.
Multilingual support can be added so that the system is understandable to speakers of any
language. More graphics can be added to make it more user-friendly and understandable, and
versions of documents could be managed and backed up online.
11. Bibliography

Books Referred

 “Microsoft Learning VB.NET”

 “Teach Yourself VB.NET in 21 Days” - Sams Pearson Education [Lowell Mauer]
 “Professional ASP.NET 2.0” - Wrox [Evjen, Hanselman, Muhammad, Sivakumar, Rader]
 “ASP.NET 2.0 Unleashed” - Sams Pearson Education [Stephen Walther]
 “Software Engineering” [Pankaj Jalote]
 “Software Engineering” [K.K. Aggarwal & Yogesh Singh]

Sites Referred