
Project title

“MUSIC ONLINE”

SESSION: 2009-2010

NAME: MONIKA AWASTHI


COURSE: MCA VIth SEM.
ENROLLMENT No. 043600211
IGNOU DELHI
CENTRE : ST. JOHNS COLLEGE
AGRA (U.P)
CERTIFICATE

This is to certify that the dissertation entitled “Music Online”, submitted by Ms. Mini Jain, is an original and independent work done by her under my supervision.

Place: Agra

Date: 2009

(GUIDE)
Name: Mr. Manu Pratap Singh
Lecturer of Computer Science
ICIS, Khandari
Agra (U.P.)
DECLARATION

I, Mini Jain, a student of BCA (VI Semester), Session 2008-2009, Batch 2006-2009, hereby declare that my work entitled “Music Online” is the outcome of genuine efforts made by me under the able guidance of my guide, Mr. Manu Pratap Singh, Institute of Computer Science, and is being submitted to Dr. B. R. Ambedkar University, Agra as a dissertation in partial fulfillment of the requirements for the award of the degree of Bachelor of Computer Applications (BCA).

Place: Agra
Date: 2009

Name: MINI JAIN


Course: BCA (VI
Semester)
University Roll No.: 3246
University Enrollment No.: 06109803

TABLE OF CONTENTS:

1. PROJECT PROPOSAL
2. PERFORMA FOR APPROVAL
3. GUIDE RESUME
4. TITLE OF PROJECT
5. INTRODUCTION
6. TOOLS/PLATFORM, HARDWARE AND SOFTWARE REQUIREMENT SPECIFICATION
7. TECHNOLOGY OVERVIEW OF .NET 2005 WITH C# 2.0, SQL SERVER 2000, SDLC
8. ANALYSIS (DFDs, ER DIAGRAMS, CLASS DIAGRAMS ETC., AS PER THE PROJECT REQUIREMENTS)
9. A COMPLETE STRUCTURE OF THE PROGRAM
   • Number of modules and their description
   • Data structures for all modules
   • Process logic of each module
   • Report generation
10. TESTING AND RESULT
11. FUTURE ENHANCEMENT
12. GLOSSARY
Music OnLine
(USING ASP.NET THROUGH C# 2.0)

Introduction of Music OnLine

Music is probably more familiar than we might at first imagine. Indeed, nowadays it is all around us, whether in restaurants, supermarkets, and lifts, in advertising, or as theme and incidental music on television. Hearing live music is one of the most pleasurable experiences available to human beings. The music sounds great, it feels great, and you get to watch the musicians as they create it. No matter what kind of music you love, try listening to it live.
There are lots of things to enjoy at a concert, lots of things to pay attention to.
Your job is to be affected by the music, but you can be affected by what most appeals
to you, or by whatever grabs your interest. Here are a few choices for what to listen to.
Choose whatever you like, switch as often as you want, and feel free to add to the list.
Some things to enjoy in classical music:

• Moods and feelings
• Loudness and softness
• Different speeds
• Instrument sounds
• Melodies
• Rhythms
• Changes and transformations
• Beautiful performing
• Memories that get triggered
• Recognition of something heard earlier
• Visual images that come to mind


Objective

Music Online is a one-stop hangout for movies. At MUSIC ONLINE, we believe in and swear by the term ‘filmy’, and we have tried to live up to its true spirit. Music Online is designed for people like you: the hard-core movie buffs, the enthusiasts, and even those who claim that they don’t have a filmy bone in their body. Don’t miss our special attempts to focus on regional and Bollywood movies!

There are various reasons for you to hang out at Music Online. Besides becoming a member of India’s largest movie rental service and renting good-quality movies from Music Online, you can simply sign in and indulge: take a quiz or create one, rate movies, start a club, test your movie compatibility with your friends, or just read more about the movies and review them. Now, enjoy movies and a lot more… at your fingertips!

Music Online Hangout is your space: you decide whom to let in and whom not to, and what to put up there and what not to. You are the leading person in your hangout, so get set - wear the Director’s cap!

Music Online is shaping (or shaking) up a couple of things around here. We thought
that it was about time we indulged in movies, rather than just watching them. And so
we have just expanded our canvas and we invite you to discover, enjoy, share and
indulge in the world of movies!
We have scrubbed, shined and polished our website and filled it with as much color and content as we could to keep you watching and indulging. If you are not at Seventymm™, you are not rolling. We have mapped out the best-laid plans, and the red-carpet welcome… so have you arrived yet?
TOOLS, PLATFORM & LANGUAGE TO BE USED

TOOLS:

 FRONT-END : ASP.NET(With C# 2.0)


 BACK-END : SQL SERVER 2000

PLATFORM:

 WINDOWS SERVER 2000.

HARDWARE AND SOFTWARE ENVIRONMENT

HARDWARE ENVIRONMENT:

 PROCESSOR : P-IV(1.80 GHZ)


 RAM : 128 MB
 STORAGE CAPACITY : 40 GB
 DRIVES : 52X24X52X CD DRIVE, 1.44 MB FDD

SOFTWARE ENVIRONMENT:

 OPERATING SYSTEM : WINDOWS SERVER 2000


 RDBMS : SQL SERVER 2000
Technology Overview of my Project

.NET 2005 Framework, SQL SERVER 2000 and SDLC of Project

.NET 2005 with C# 2.0

The Microsoft .NET Framework version 2.0 extends the .NET Framework version 1.1 with new features, improvements to existing features, and enhancements to the documentation. This section provides information about some key additions and modifications.

For more information about compatibility, and for a list of the public API modifications to the class library that might affect the compatibility of your application, see the .NET Framework documentation.

64-Bit Platform Support

The new generation of 64-bit computers enables the creation of applications that can run faster and take advantage of more memory than
is available to 32-bit applications. New support for 64-bit applications
enables users to build managed code libraries or easily use unmanaged
code libraries on 64-bit computers. For more information, see 64-bit
Applications.

Access Control List Support

An access control list (ACL) is used to grant or revoke permission to access a resource on a computer. New classes have been added to the .NET
Framework that allow managed code to create and modify an ACL. New
members that use an ACL have been added to the I/O, registry, and
threading classes.
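As a minimal sketch of how these new classes fit together (the file path and account name below are placeholders, not values from this project), managed code can read a file's ACL, add a rule, and write it back:

```csharp
using System;
using System.IO;
using System.Security.AccessControl;

class AclDemo
{
    static void Main()
    {
        // Hypothetical file path used purely for illustration.
        string path = @"C:\Reports\sales.txt";

        // Read the file's current ACL as a managed object.
        FileSecurity security = File.GetAccessControl(path);

        // Grant the built-in Users group read access.
        security.AddAccessRule(new FileSystemAccessRule(
            "BUILTIN\\Users",
            FileSystemRights.Read,
            AccessControlType.Allow));

        // Write the modified ACL back to the file.
        File.SetAccessControl(path, security);
    }
}
```

The same pattern applies to the registry and threading classes, which gained analogous `GetAccessControl`/`SetAccessControl` members.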
ADO.NET

New features in ADO.NET include support for user-defined types (UDT), asynchronous database operations, XML data types, large value types,
snapshot isolation, and new attributes that allow applications to support
multiple active result sets (MARS) with SQL Server 2005. For more
information about these and other new ADO.NET features, see What's New
in ADO.NET.
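As a sketch of the asynchronous command model and the MARS connection-string keyword (the connection string, database, and table names here are placeholders, not the project's actual schema):

```csharp
using System;
using System.Data.SqlClient;

class AsyncQueryDemo
{
    static void Main()
    {
        // Hypothetical connection string. Two .NET 2.0 keywords matter here:
        // MultipleActiveResultSets enables MARS, and Asynchronous Processing
        // enables the Begin/End command methods.
        string connStr = "Data Source=.;Initial Catalog=MusicOnline;" +
                         "Integrated Security=true;" +
                         "MultipleActiveResultSets=true;" +
                         "Asynchronous Processing=true";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT Title FROM Albums", conn);

            // Start the query without blocking, then harvest the result.
            IAsyncResult ar = cmd.BeginExecuteReader();
            using (SqlDataReader reader = cmd.EndExecuteReader(ar))
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
```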

ASP.NET

The Microsoft .NET Framework 2.0 includes significant enhancements to all areas of ASP.NET. For Web page development, new controls make it easier
to add commonly used functionality to dynamic Web pages. New data
controls make it possible to display and edit data on an ASP.NET Web page
without writing code. An improved code-behind model makes developing
ASP.NET pages easier and more robust. Caching features provide several
new ways to cache pages, including the ability to build cache dependency
on tables in a SQL Server database.

You can now customize Web sites and pages in a variety of ways. Profile
properties enable ASP.NET to track property values for individual users
automatically. Using Web Parts, you can create pages that users can
customize in the browser. You can add navigation menus using simple
controls.

Improvements to Web site features allow you to create professional Web sites faster and more easily. Master pages allow you to create a consistent
layout for all the pages in a site, and themes allow you to define a
consistent look for controls and static text. To help protect your sites, you
can precompile a Web site to produce executable code from source files
(both code files and the markup in .aspx pages). You can then deploy the
resulting output, which does not include any source information, to a
production server. Enhancements to ASP.NET also include new tools and
classes to make Web site management easier for Web site developers,
server administrators, and hosters.

ASP.NET accommodates a wide variety of browsers and devices. By default, controls render output that is compatible with XHTML 1.1
standards. You can use device filtering to specify different property values
on the same control for different browsers.

For a more complete list of new features in ASP.NET, see What's New in
ASP.NET.
Authenticated Streams

Applications can use the new NegotiateStream and SslStream classes for
authentication and to help secure information transmitted between a client
and a server. These authenticated stream classes support mutual
authentication, data encryption, and data signing. The NegotiateStream
class uses the Negotiate security protocol for authentication. The
SslStream class uses the Secure Socket Layer (SSL) security protocol for
authentication.
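A minimal client-side sketch of the SslStream class follows; the host name is only an example, and a real application would also check the handshake results before sending data:

```csharp
using System;
using System.IO;
using System.Net.Security;
using System.Net.Sockets;

class SslClientDemo
{
    static void Main()
    {
        // Hypothetical host name, used only to illustrate the API shape.
        string host = "www.example.com";

        using (TcpClient client = new TcpClient(host, 443))
        using (SslStream ssl = new SslStream(client.GetStream()))
        {
            // Perform the SSL handshake, validating the server's certificate
            // against the host name we intended to reach.
            ssl.AuthenticateAsClient(host);

            Console.WriteLine("Encrypted: " + ssl.IsEncrypted);
            Console.WriteLine("Signed: " + ssl.IsSigned);

            // From here the stream is read and written like any other Stream.
            StreamWriter writer = new StreamWriter(ssl);
            writer.WriteLine("HEAD / HTTP/1.0\r\nHost: " + host + "\r\n");
            writer.Flush();
        }
    }
}
```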

COM Interop Services Enhancements

Four major enhancements have been made to the classes and tools that
support interoperability with COM:

• The operating system maintains a limited number of handles, which are used to reference critical operating system resources. The new
SafeHandle and CriticalHandle classes, and their specialized derived
classes, provide safe and reliable means of manipulating operating
system handles.
• Marshaling improvements make interoperating with native code
easier. Two enhancements to the interop marshaler satisfy the two
most common user requests: the ability to wrap native function
pointers into delegates and the ability to marshal fixed-size arrays of
structures inside structures.
• The performance of calls between applications in different application
domains has been made much faster for common call types.
• New switches on the Type Library Importer (Tlbimp.exe) and Type
Library Exporter (Tlbexp.exe) eliminate dependency on the registry
to resolve type library references. This enhancement is important for
creating robust build environments.
Console Class Additions

New members of the Console class enable applications to manipulate the dimensions of the console window and screen buffer; to move a rectangular area of the screen buffer, which is useful for performing smooth, simple animation; and to wait while reading console input until a key is pressed. Other new class members control the foreground and background colors of text, the visibility and size of the cursor, and the frequency and duration of the console beep.

Data Protection API

The new Data Protection API (DPAPI) includes four methods that allow
applications to encrypt passwords, keys, connection strings, and so on,
without calling platform invoke. You can also encrypt blocks of memory on
computers running Windows Server 2003 or later operating systems.
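A small sketch of the managed DPAPI wrapper (the ProtectedData class in System.Security.dll); the secret string is just an example value:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class DpapiDemo
{
    static void Main()
    {
        byte[] secret = Encoding.UTF8.GetBytes("my connection string");

        // Encrypt under the current user's credentials; only this user
        // on this machine can decrypt the result.
        byte[] cipher = ProtectedData.Protect(
            secret, null, DataProtectionScope.CurrentUser);

        byte[] plain = ProtectedData.Unprotect(
            cipher, null, DataProtectionScope.CurrentUser);

        // The data round-trips without the application managing any keys.
        Console.WriteLine(Encoding.UTF8.GetString(plain));
    }
}
```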

Debugger Display Attributes

You can now control how Visual Studio displays a class or member when
an application is being debugged. The debugger's Display Attributes
feature enables you to identify the most useful information to display in
the debugger.
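For instance, a class can be annotated so the debugger's variable windows show meaningful values instead of the type name; the Song class below is a made-up example:

```csharp
using System.Diagnostics;

// In the debugger, instances display as e.g. "Song "Yesterday" (2 min)"
// instead of just the type name.
[DebuggerDisplay("Song {Title} ({Minutes} min)")]
class Song
{
    public string Title;
    public int Minutes;

    // Hide a noisy internal field from the debugger's variable windows.
    [DebuggerBrowsable(DebuggerBrowsableState.Never)]
    internal int cacheSlot;

    public Song(string title, int minutes)
    {
        Title = title;
        Minutes = minutes;
    }
}
```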

Debugger Edit and Continue Support

The .NET Framework 2.0 reintroduces the Edit and Continue feature that
enables a user who is debugging an application in Visual Studio to make
changes to source code while executing in Break mode. After source code
edits are applied, the user can resume code execution and observe the
effect. Furthermore, the Edit and Continue feature is available in any
programming language supported by Visual Studio.

Detecting Changes in Network Connectivity

The NetworkChange class allows applications to receive notification when the Internet Protocol (IP) address of a network interface, also known as a
network card or adapter, changes. An interface address can change for a
variety of reasons, such as a disconnected network cable, moving out of
range of a wireless local area network, or hardware failure. The
NetworkChange class provides address change notification by raising
events when a change is detected.
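Subscribing to these notifications takes only an event handler, as this sketch shows:

```csharp
using System;
using System.Net.NetworkInformation;

class AddressWatcher
{
    static void Main()
    {
        // The handler runs whenever any adapter's IP address changes
        // (cable unplugged, wireless roaming, DHCP renewal, and so on).
        NetworkChange.NetworkAddressChanged +=
            new NetworkAddressChangedEventHandler(OnAddressChanged);

        Console.WriteLine("Watching for address changes; press Enter to quit.");
        Console.ReadLine();
    }

    static void OnAddressChanged(object sender, EventArgs e)
    {
        Console.WriteLine("A network address changed at " + DateTime.Now);
    }
}
```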

Distributed Computing

In the System.Net namespace, support has been added for FTP client
requests, caching of HTTP resources, automatic proxy discovery, and
obtaining network traffic and statistical information.

The namespace now includes a Web server class that you can use to
create a simple Web server for responding to HTTP requests. Classes that
generate network traffic have been instrumented to output trace
information for application debugging and diagnostics. Security and
performance enhancements have been added to the
System.Net.Sockets.Socket and System.Uri classes.

In the System.Web.Services namespaces, support for SOAP 1.2 and nullable elements has been added.

In the System.Runtime.Remoting.Channels namespaces, channel security features have been added. The TCP channel now supports authentication
and encryption, as well as several new features to better support load
balancing.

EventLog Enhancements

You can now use custom DLLs for EventLog messages, parameters, and
categories.

Expanded Certificate Management

The .NET Framework now supports X.509 certificate stores, chains, and
extensions. In addition, you can sign and verify XML using X.509
certificates without using platform invoke. There is also support for PKCS7
signature and encryption, and CMS (a superset of the PKCS7 standard
available on Microsoft Windows 2000 and later operating systems). PKCS7
is the underlying format used in Secure/Multipurpose Internet Mail
Extensions (S/MIME) for signing and encrypting data. For more
information, see the X509Certificate2 class topic.

FTP Support

Applications can now access File Transfer Protocol resources using the
WebRequest, WebResponse, and WebClient classes.
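A download sketch using the new FTP support (the server address, path, and credentials are placeholders):

```csharp
using System;
using System.IO;
using System.Net;

class FtpDemo
{
    static void Main()
    {
        // Hypothetical FTP URI, for illustration only.
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(
            "ftp://ftp.example.com/music/playlist.txt");
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential("anonymous", "guest@example.com");

        using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
            Console.WriteLine("Status: " + response.StatusDescription);
        }
    }
}
```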

Generics and Generic Collections

The .NET Framework 2.0 introduces generics to allow you to create flexible, reusable code. Language features collectively known as generics act as templates that allow classes, structures, interfaces, methods, and delegates to be declared and defined with unspecified, or generic, type parameters instead of specific types. Actual types are specified later, when the generic is used. Several namespaces, such as System and System.Collections.Generic, provide generic classes and methods. The new System.Collections.Generic namespace provides support for strongly typed collections. System.Nullable<T> is a standard representation of optional values. Generics are supported in three languages: Visual Basic, C#, and C++.

Reflection has been extended to allow runtime examination and manipulation of generic types and methods. New members have been
added to System.Type and System.Reflection.MethodInfo, including
IsGenericType to identify generic types (for example, class Gen<T,U>
{...}), GetGenericArguments to obtain type parameter lists, and
MakeGenericType to create specific types (for example, new Gen<int,
long>()).
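These points can be pulled together in one short sketch: a generic method, a strongly typed collection, Nullable<T>, and reflection over the generic type (the method and variable names are just examples):

```csharp
using System;
using System.Collections.Generic;

class GenericsDemo
{
    // A generic method: works for any element type T, chosen at the call site.
    public static T FirstOrFallback<T>(List<T> items, T fallback)
    {
        return items.Count > 0 ? items[0] : fallback;
    }

    static void Main()
    {
        // A strongly typed collection: no casts, no boxing of ints.
        List<int> playCounts = new List<int>();
        playCounts.Add(42);
        Console.WriteLine(FirstOrFallback(playCounts, 0));      // 42

        // Nullable<int> is the standard representation of an optional value.
        int? rating = null;
        Console.WriteLine(rating.HasValue);                     // False

        // Reflection can inspect generic types at run time.
        Type listType = playCounts.GetType();
        Console.WriteLine(listType.IsGenericType);              // True
        Console.WriteLine(listType.GetGenericArguments()[0]);   // System.Int32
    }
}
```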
Globalization

Five new globalization features provide greater support for developing applications intended for different languages and cultures.

• Support for custom cultures enables you to define and deploy culture-related information as needed. This feature is useful for
creating minor customizations of existing culture definitions, and
creating culture definitions that do not yet exist in the .NET
Framework. For more information, see the
CultureAndRegionInfoBuilder class.

• Encoding and decoding operations map a Unicode character to or from a stream of bytes that can be transferred to a physical medium
such as a disk or a communication line. If a mapping operation
cannot be completed, you can compensate by using the new
encoding and decoding fallback feature supported by several classes
in the System.Text namespace.

• Members in the UTF8Encoding class, which implements UTF-8 encoding, are now several times faster than in previous releases.
UTF-8 is the most common encoding used to transform Unicode
characters into bytes on computers.
• The .NET Framework now supports the latest normalization standard
defined by the Unicode Consortium. The normalization process
converts character representations of text to a standard form so the
representations can be compared for equivalence.
• The GetCultureInfo method overload provides a cached version of a
read-only CultureInfo object. Use the cached version when creating
a new CultureInfo object to improve system performance and reduce
memory usage.
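The cached-culture point in the last item can be demonstrated directly; GetCultureInfo hands back the same read-only instance each time, whereas the constructor allocates a fresh, writable copy:

```csharp
using System;
using System.Globalization;

class CultureCacheDemo
{
    static void Main()
    {
        // GetCultureInfo returns a cached, read-only instance.
        CultureInfo a = CultureInfo.GetCultureInfo("en-US");
        CultureInfo b = CultureInfo.GetCultureInfo("en-US");
        Console.WriteLine(object.ReferenceEquals(a, b)); // True: same cached object
        Console.WriteLine(a.IsReadOnly);                 // True

        // The constructor allocates a new, writable object each time.
        CultureInfo c = new CultureInfo("en-US");
        Console.WriteLine(object.ReferenceEquals(a, c)); // False
    }
}
```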
I/O Enhancements

Improvements have been made to the usability and functionality of various I/O classes. It is now easier for users to read and write text files
and obtain information about a drive.

You can now use the classes in the System.IO.Compression namespace to read and write data with the GZIP compression and decompression
standard, described in the IETF RFC 1951 and RFC 1952 specifications,
which are available at the IETF Request for Comments (RFC) search page.
Note: search is limited to RFC numbers.
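A round-trip sketch with GZipStream shows the read/write pattern (the sample text is arbitrary):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GzipDemo
{
    public static byte[] Compress(byte[] data)
    {
        MemoryStream output = new MemoryStream();
        // Closing the GZipStream flushes the final compressed block.
        using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }

    public static byte[] Decompress(byte[] data)
    {
        MemoryStream output = new MemoryStream();
        using (GZipStream gzip = new GZipStream(
                   new MemoryStream(data), CompressionMode.Decompress))
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
        }
        return output.ToArray();
    }

    static void Main()
    {
        byte[] original = Encoding.UTF8.GetBytes(
            "Music Online, Music Online, Music Online");
        byte[] unpacked = Decompress(Compress(original));
        Console.WriteLine(Encoding.UTF8.GetString(unpacked)); // round-trips exactly
    }
}
```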

Manifest-Based Activation

This feature provides new support for loading and activating applications
through the use of a manifest. Manifest-based activation is essential for
supporting ClickOnce applications. Traditionally, applications are activated
through a reference to an assembly that contains the application's entry
point. For example, clicking an application's .exe file from within the
Windows shell causes the shell to load the common language runtime
(CLR) and call a well-known entry point within that .exe file's assembly.

The manifest-based activation model uses an application manifest for activation rather than an assembly. A manifest fully describes the
application, its dependencies, security requirements, and so forth. The
manifest model has several advantages over the assembly-based
activation model, especially for Web applications. For example, the
manifest contains the security requirements of the application, which
enables the user to decide whether to allow the application to execute
before downloading the code. The manifest also contains information
about the application dependencies.
Manifest-based activation is provided by a set of APIs that allow managed
hosts to activate applications and add-ins described by a manifest. These
APIs contain a mixture of both new classes and extensions to existing
classes.

This activation model also invokes an entity called a Trust Manager that
performs the following tasks:

1. Determines whether an application is allowed to be activated. This decision can be made by prompting the user, querying policy, or by
any other means deemed appropriate for a given Trust Manager.

2. Sets up the security context to run an application in. Most commonly, this step involves setting up a code access security (CAS)
policy tree on the application domain in which the application will
run.

.NET Framework Remoting

.NET Framework remoting now supports IPv6 addresses and the exchange
of generic types. The classes in the
System.Runtime.Remoting.Channels.Tcp namespace support
authentication and encryption using the Security Support Provider
Interface (SSPI). Classes in the new
System.Runtime.Remoting.Channels.Ipc namespace allow applications on
the same computer to communicate quickly without using the network.
Finally, you can now configure the connection cache time-out and the
number of method retries, which can improve the performance of network
load-balanced remote clusters.
Using classes in the System.Net.NetworkInformation namespace,
applications can access IP, IPv4, IPv6, TCP, and UDP network traffic
statistics. Applications can also view address and configuration information
for the local computer’s network adapters. This information is similar to
the information returned by the Ipconfig.exe command-line tool.

Ping

The Ping class allows an application to determine whether a remote computer is accessible over the network. This class provides functionality
similar to the Ping.exe command-line tool, and supports synchronous and
asynchronous calls.
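A synchronous sketch looks like this (the host name is a placeholder, and the call will of course fail without network access):

```csharp
using System;
using System.Net.NetworkInformation;

class PingDemo
{
    static void Main()
    {
        Ping ping = new Ping();

        // Send one echo request with a 2-second timeout.
        PingReply reply = ping.Send("www.example.com", 2000);

        if (reply.Status == IPStatus.Success)
            Console.WriteLine("Reachable in " + reply.RoundtripTime + " ms");
        else
            Console.WriteLine("Not reachable: " + reply.Status);
    }
}
```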

Processing HTTP Requests from Within Applications

You can use the HttpListener class to create a simple Web server that
responds to HTTP requests. The Web server is active for the lifetime of the
HttpListener object and runs within your application, with your
application's permissions. This class is available only on computers running
the Windows XP Service Pack 2 or Windows Server 2003 operating
systems.
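A minimal sketch of such an in-process Web server follows; the prefix and port are arbitrary examples:

```csharp
using System;
using System.Net;
using System.Text;

class MiniWebServer
{
    static void Main()
    {
        // Listen on a local URL prefix; 8080 is an arbitrary example port.
        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();
        Console.WriteLine("Listening; browse to http://localhost:8080/ to test.");

        // Handle a single request, then stop.
        HttpListenerContext context = listener.GetContext();
        byte[] body = Encoding.UTF8.GetBytes("<html><body>Music Online</body></html>");
        context.Response.ContentType = "text/html";
        context.Response.ContentLength64 = body.Length;
        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.Close();
        listener.Stop();
    }
}
```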

Programmatic Control of Caching

Using the classes in the System.Net.Cache namespace, applications can control the caching of resources obtained using the WebRequest,
WebResponse, and WebClient classes. You can use the predefined cache
policies provided by the .NET Framework or specify a custom cache policy.
You can specify a cache policy for each request and define a default cache
policy for requests that do not specify a cache policy.
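Both levels, a process-wide default and a per-request override, are shown in this sketch (the URL is a placeholder):

```csharp
using System;
using System.Net;
using System.Net.Cache;

class CachePolicyDemo
{
    static void Main()
    {
        // Default for all HTTP requests in this process: revalidate cached
        // entries with the server before using them.
        HttpWebRequest.DefaultCachePolicy =
            new HttpRequestCachePolicy(HttpRequestCacheLevel.Revalidate);

        // Per-request override: use a cached copy if one is available,
        // otherwise go to the server.
        WebRequest request = WebRequest.Create("http://www.example.com/");
        request.CachePolicy =
            new HttpRequestCachePolicy(HttpRequestCacheLevel.CacheIfAvailable);

        using (WebResponse response = request.GetResponse())
        {
            Console.WriteLine("From cache: " + response.IsFromCache);
        }
    }
}
```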

The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web
services. The .NET Framework is designed to fulfill the following
objectives:

• To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally
but Internet-distributed, or executed remotely.
• To provide a code-execution environment that minimizes software
deployment and versioning conflicts.
• To provide a code-execution environment that promotes safe
execution of code, including code created by an unknown or semi-
trusted third party.
• To provide a code-execution environment that eliminates the
performance problems of scripted or interpreted environments.
• To make the developer experience consistent across widely varying
types of applications, such as Windows-based applications and Web-
based applications.
• To build all communication on industry standards to ensure that
code based on the .NET Framework can integrate with any other
code.

The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. The common language
runtime is the foundation of the .NET Framework. You can think of the
runtime as an agent that manages code at execution time, providing core
services such as memory management, thread management, and
remoting, while also enforcing strict type safety and other forms of code
accuracy that promote security and robustness. In fact, the concept of
code management is a fundamental principle of the runtime. Code that
targets the runtime is known as managed code, while code that does not
target the runtime is known as unmanaged code. The class library, the
other main component of the .NET Framework, is a comprehensive,
object-oriented collection of reusable types that you can use to develop
applications ranging from traditional command-line or graphical user
interface (GUI) applications to applications based on the latest innovations
provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the
execution of managed code, thereby creating a software environment that
can exploit both managed and unmanaged features. The .NET Framework
not only provides several runtime hosts, but also supports the
development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime
to enable ASP.NET applications and XML Web services, both of which are
discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet
Explorer to host the runtime enables you to embed managed components
or Windows Forms controls in HTML documents. Hosting the runtime in
this way makes managed mobile code (similar to Microsoft® ActiveX®
controls) possible, but with significant improvements that only managed
code can offer, such as semi-trusted execution and isolated file storage.

The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall
system. The illustration also shows how managed code operates within a
larger architecture.

.NET Framework in context


The following sections describe the main components and features of
the .NET Framework in greater detail.

Features of the Common Language Runtime

The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system
services. These features are intrinsic to the managed code that runs on
the common language runtime.

With regard to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin
(such as the Internet, enterprise network, or local computer). This means
that a managed component might or might not be able to perform file-
access operations, registry-access operations, or other sensitive functions,
even if it is being used in the same active application.
The runtime enforces code access security. For example, users can trust
that an executable embedded in a Web page can play an animation on
screen or sing a song, but cannot access their personal data, file system,
or network. The security features of the runtime thus enable legitimate
Internet-deployed software to be exceptionally feature rich.

The runtime also enforces code robustness by implementing a strict type-and-code-verification infrastructure called the common type system (CTS).
The CTS ensures that all managed code is self-describing. The various
Microsoft and third-party language compilers generate managed code that
conforms to the CTS. This means that managed code can consume other
managed types and instances, while strictly enforcing type fidelity and
type safety.

In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles
object layout and manages references to objects, releasing them when
they are no longer being used. This automatic memory management
resolves the two most common application errors, memory leaks and
invalid memory references.

The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of
choice, yet take full advantage of the runtime, the class library, and
components written in other languages by other developers. Any compiler
vendor who chooses to target the runtime can do so. Language compilers
that target the .NET Framework make the features of the .NET Framework
available to existing code written in that language, greatly easing the
migration process for existing applications.

While the runtime is designed for the software of the future, it also
supports software of today and yesterday. Interoperability between
managed and unmanaged code enables developers to continue to use
necessary COM components and DLLs.
The runtime is designed to enhance performance. Although the common
language runtime provides many standard runtime services, managed
code is never interpreted. A feature called just-in-time (JIT) compiling
enables all managed code to run in the native machine language of the
system on which it is executing. Meanwhile, the memory manager
removes the possibilities of fragmented memory and increases memory
locality-of-reference to further increase performance.

Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information
Services (IIS). This infrastructure enables you to use managed code to
write your business logic, while still enjoying the superior performance of
the industry's best enterprise servers that support runtime hosting.

.NET Framework Class Library

The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is
object oriented, providing types from which your own managed code can
derive functionality. This not only makes the .NET Framework types easy
to use, but also reduces the time associated with learning new features of
the .NET Framework. In addition, third-party components can integrate
seamlessly with classes in the .NET Framework.

For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your
collection classes will blend seamlessly with the classes in the .NET
Framework.

As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common
programming tasks, including tasks such as string management, data
collection, database connectivity, and file access. In addition to these
common tasks, the class library includes types that support a variety of
specialized development scenarios. For example, you can use the .NET
Framework to develop the following types of applications and services:

• Console applications.
• Windows GUI applications (Windows Forms).
• ASP.NET applications.
• XML Web services.
• Windows services.

For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write
an ASP.NET Web Form application, you can use the Web Forms classes.

Client Application Development

Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that
display windows or forms on the desktop, enabling a user to perform a
task. Client applications include applications such as word processors and
spreadsheets, as well as custom business applications such as data-entry
tools, reporting tools, and so on. Client applications usually employ
windows, menus, buttons, and other GUI elements, and they likely access
local resources such as the file system and peripherals such as printers.

Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the
Internet as a Web page. This application is much like other client
applications: it is executed natively, has access to local resources, and
includes graphical elements.

In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid
application development (RAD) environment such as Microsoft® Visual
Basic®. The .NET Framework incorporates aspects of these existing
products into a single, consistent development environment that
drastically simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command
windows, buttons, menus, toolbars, and other screen elements with the
flexibility necessary to accommodate shifting business needs.

For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. In some cases the underlying
operating system does not support changing these attributes directly, and
in these cases the .NET Framework automatically recreates the forms. This
is one of many ways in which the .NET Framework integrates the
developer interface, making coding simpler and more consistent.

Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code
can access some of the resources on the user's system (such as GUI
elements and limited file access) without being able to access or
compromise other resources. Because of code access security, many
applications that once needed to be installed on a user's system can now
be deployed through the Web. Your applications can implement the
features of a local application while being deployed like a Web page.

Server Application Development

Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language
runtime, which allows your custom managed code to control the behavior
of the server. This model provides you with all the features of the common
language runtime and class library while gaining the performance and
scalability of the host server.

The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and
SQL Server can perform standard operations while your application logic
executes through the managed code.

Server-side managed code

ASP.NET is the hosting environment that enables developers to use the


.NET Framework to target Web-based applications. However, ASP.NET is
more than just a runtime host; it is a complete architecture for developing
Web sites and Internet-distributed objects using managed code. Both Web
Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting
classes in the .NET Framework.

XML Web services, an important evolution in Web-based technology, are


distributed, server-side application components similar to common Web
sites. However, unlike Web-based applications, XML Web services
components have no UI and are not targeted for browsers such as Internet
Explorer and Netscape Navigator. Instead, XML Web services consist of
reusable software components designed to be consumed by other
applications, such as traditional client applications, Web-based
applications, or even other XML Web services. As a result, XML Web
services technology is rapidly moving application development and
deployment into the highly distributed environment of the Internet.

If you have used earlier versions of ASP technology, you will immediately
notice the improvements that ASP.NET and Web Forms offer. For example,
you can develop Web Forms pages in any language that supports the .NET
Framework. In addition, your code no longer needs to share the same file
with your HTML text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other
managed application, they take full advantage of the runtime. In contrast,
unmanaged ASP pages are always scripted and interpreted. ASP.NET
pages are faster, more functional, and easier to develop than unmanaged
ASP pages because they interact with the runtime like any managed
application.

The .NET Framework also provides a collection of classes and tools to aid
in development and consumption of XML Web services applications. XML
Web services are built on standards such as SOAP (a remote procedure-
call protocol), XML (an extensible data format), and WSDL (the Web
Services Description Language). The .NET Framework is built on these
standards to promote interoperability with non-Microsoft solutions.

For example, the Web Services Description Language tool included with
the .NET Framework SDK can query an XML Web service published on the
Web, parse its WSDL description, and produce C# or Visual Basic source
code that your application can use to become a client of the XML Web
service. The source code can create classes derived from classes in the
class library that handle all the underlying communication using SOAP and
XML parsing. Although you can use the class library to consume XML Web
services directly, the Web Services Description Language tool and the
other tools contained in the SDK facilitate your development efforts with
the .NET Framework.

If you develop and publish your own XML Web service, the .NET
Framework provides a set of classes that conform to all the underlying
communication standards, such as SOAP, WSDL, and XML. Using those
classes enables you to focus on the logic of your service, without
concerning yourself with the communications infrastructure required by
distributed software development.
Finally, like Web Forms pages in the managed environment, your XML Web
service will run with the speed of native machine language using the
scalable communication of IIS.
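
The shape of such a service can be sketched in a few lines; the service name, namespace, and method below are hypothetical placeholders, not part of this project:

```csharp
using System.Web.Services;

[WebService(Namespace = "http://example.org/musiconline/")]
public class CatalogService : WebService
{
    // [WebMethod] exposes the method over HTTP via IIS/ASP.NET;
    // the SOAP and WSDL plumbing is generated by the framework.
    [WebMethod]
    public int GetSongCount(int albumId)
    {
        return 0;  // placeholder — a real service would query the database
    }
}
```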

Common Language Runtime (CLR)

Compilers and tools expose the runtime's functionality and enable you to
write code that benefits from this managed execution environment. Code
that you develop with a language compiler that targets the runtime is
called managed code; it benefits from features such as cross-language
integration, cross-language exception handling, enhanced security,
versioning and deployment support, a simplified model for component
interaction, and debugging and profiling services.

To enable the runtime to provide services to managed code, language compilers must emit metadata that describes the types, members, and
references in your code. Metadata is stored with the code; every loadable
common language runtime portable executable (PE) file contains
metadata. The runtime uses metadata to locate and load classes, lay out
instances in memory, resolve method invocations, generate native code,
enforce security, and set run-time context boundaries.

The runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. Objects
whose lifetimes are managed in this way are called managed data.
Garbage collection eliminates memory leaks as well as some other
common programming errors. If your code is managed, you can use
managed data, unmanaged data, or both managed and unmanaged data
in your .NET Framework application. Because language compilers supply
their own types, such as primitive types, you might not always know (or
need to know) whether your data is being managed.

The common language runtime makes it easy to design components and applications whose objects interact across languages. Objects written in
different languages can communicate with each other, and their behaviors
can be tightly integrated. For example, you can define a class and then
use a different language to derive a class from your original class or call a
method on the original class. You can also pass an instance of a class to a
method of a class written in a different language. This cross-language
integration is possible because language compilers and tools that target
the runtime use a common type system defined by the runtime, and they
follow the runtime's rules for defining new types, as well as for creating,
using, persisting, and binding to types.

As part of their metadata, all managed components carry information about the components and resources they were built against. The runtime
uses this information to ensure that your component or application has the
specified versions of everything it needs, which makes your code less
likely to break because of some unmet dependency. Registration
information and state data are no longer stored in the registry where they
can be difficult to establish and maintain. Rather, information about the
types you define (and their dependencies) is stored with the code as
metadata, making the tasks of component replication and removal much
less complicated.

Language compilers and tools expose the runtime's functionality in ways that are intended to be useful and intuitive to developers. This means that
some features of the runtime might be more noticeable in one
environment than in another. How you experience the runtime depends on
which language compilers or tools you use. For example, if you are a
Visual Basic developer, you might notice that with the common language
runtime, the Visual Basic language has more object-oriented features than
before. Following are some benefits of the runtime:

• Performance improvements.
• The ability to easily use components developed in other languages.
• Extensible types provided by a class library.
• New language features such as inheritance, interfaces, and
overloading for object-oriented programming; support for explicit
free threading that allows creation of multithreaded, scalable
applications; support for structured exception handling and custom
attributes.

If you use Microsoft® Visual C++® .NET, you can write managed code
using the Managed Extensions for C++, which provide the benefits of a
managed execution environment as well as access to powerful capabilities
and expressive data types that you are familiar with. Additional runtime
features include:

• Cross-language integration, especially cross-language inheritance.
• Garbage collection, which manages object lifetime so that reference
counting is unnecessary.
• Self-describing objects, which make using Interface Definition
Language (IDL) unnecessary.
• The ability to compile once and run on any CPU and operating
system that supports the runtime.

You can also write managed code using the C# language, which provides
the following benefits:

• Complete object-oriented design.
• Very strong type safety.
• A good blend of Visual Basic simplicity and C++ power.
• Garbage collection.
• Syntax and keywords similar to C and C++.
• Use of delegates rather than function pointers for increased type
safety and security. Function pointers are available through the use
of the unsafe C# keyword and the /unsafe option of the C# compiler
(Csc.exe) for unmanaged code and data.
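
The delegate point can be shown in a short sketch; the type and method names below are illustrative only:

```csharp
using System;

public class DelegateDemo
{
    // A delegate is a type-safe alternative to a function pointer:
    // the target method must match this signature exactly.
    delegate int Transform(int x);

    static int Double(int x) { return x * 2; }

    static void Main()
    {
        Transform t = new Transform(Double);  // C# 2.0-style binding
        Console.WriteLine(t(21));             // invokes Double through the delegate
    }
}
```

Assigning a method with a mismatched signature to `Transform` is a compile-time error, which is the safety advantage over raw function pointers.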

Application Domain Overview

Historically, process boundaries have been used to isolate applications running on the same computer. Each application is loaded into a separate
process, which isolates the application from other applications running on
the same computer.

The applications are isolated because memory addresses are process-relative; a memory pointer passed from one process to another cannot be
used in any meaningful way in the target process. In addition, you cannot
make direct calls between two processes. Instead, you must use proxies,
which provide a level of indirection.

Managed code must be passed through a verification process before it can be run (unless the administrator has granted permission to skip the
verification). The verification process determines whether the code can
attempt to access invalid memory addresses or perform some other action
that could cause the process in which it is running to fail to operate
properly. Code that passes the verification test is said to be type-safe. The
ability to verify code as type-safe enables the common language runtime
to provide as great a level of isolation as the process boundary, at a much
lower performance cost.

Application domains provide a more secure and versatile unit of processing that the common language runtime can use to provide isolation between
applications. You can run several application domains in a single process
with the same level of isolation that would exist in separate processes, but
without incurring the additional overhead of making cross-process calls or
switching between processes. The ability to run multiple applications
within a single process dramatically increases server scalability.

Isolating applications is also important for application security. For example, you can run controls from several Web applications in a single
browser process in such a way that the controls cannot access each
other's data and resources.

The isolation provided by application domains has the following benefits:

• Faults in one application cannot affect other applications. Because type-safe code cannot cause memory faults, using application
domains ensures that code running in one domain cannot affect
other applications in the process.
• Individual applications can be stopped without stopping the entire
process. Using application domains enables you to unload the code
running in a single application.

• Code running in one application cannot directly access code or resources from another application. The common language runtime
enforces this isolation by preventing direct calls between objects in
different application domains. Objects that pass between domains
are either copied or accessed by proxy. If the object is copied, the
call to the object is local. That is, both the caller and the object
being referenced are in the same application domain. If the object is
accessed through a proxy, the call to the object is remote. In this
case, the caller and the object being referenced are in different
application domains. Cross-domain calls use the same remote call
infrastructure as calls between two processes or between two
machines. As such, the metadata for the object being referenced
must be available to both application domains to allow the method
call to be JIT-compiled properly. If the calling domain does not have
access to the metadata for the object being called, the compilation
might fail with an exception of type System.IO.FileNotFoundException. See
Remote Objects for more details. The mechanism for determining
how objects can be accessed across domains is determined by the
object. For more information, see MarshalByRefObject Class.

• The behavior of code is scoped by the application in which it runs. In other words, the application domain provides configuration settings
such as application version policies, the location of any remote
assemblies it accesses, and information about where to locate
assemblies that are loaded into the domain.
• Permissions granted to code can be controlled by the application
domain in which the code is running.
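
These ideas can be seen in miniature with the AppDomain class (the domain name below is arbitrary):

```csharp
using System;

public class DomainDemo
{
    static void Main()
    {
        // Create a second, isolated domain inside the current process
        AppDomain other = AppDomain.CreateDomain("IsolatedDomain");
        Console.WriteLine(other.FriendlyName);

        // Unload just that domain; the rest of the process keeps running
        AppDomain.Unload(other);
    }
}
```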

Application Domains and Assemblies

This section describes the relationship between application domains
and assemblies. You must load an assembly into an application domain
before you can execute the code it contains. Running a typical application
causes several assemblies to be loaded into an application domain.

If an assembly is used by multiple domains in a process, the JIT output from an assembly's code can be shared by all domains referencing the
assembly. The runtime host decides whether to load assemblies as
domain-neutral when it loads the runtime into a process. For more
information, see the LoaderOptimizationAttribute attribute and the
associated LoaderOptimization enumeration. For hosting, see the
documentation for CorBindToRuntimeEx in the common language runtime
Hosting Interfaces Specification found in the .NET Framework SDK.
There are three options for loading domain-neutral assemblies:

• Load no assemblies as domain-neutral, except Mscorlib, which is always loaded domain-neutral. This setting is called single domain
because it is commonly used when the host is running only a single
application in the process.
• Load all assemblies as domain-neutral. Use this setting when there
are multiple application domains in the process, all of which run the
same code.
• Load strong-named assemblies as domain-neutral. Use this setting
when running more than one application in the same process.

When you decide whether to load assemblies as domain-neutral, you must make a tradeoff between reducing memory use and performance. The
performance of a domain-neutral assembly is slower if that assembly
contains static data or static methods that are accessed frequently. Access
to static data is slower because of the need to isolate assemblies. Each
application domain that accesses the assembly must have a separate copy
of the static data, to prevent references to objects in static fields from
crossing domain boundaries. As a result, the runtime contains additional
logic to direct a caller to the appropriate copy of the static data or method.
This extra logic slows down the call.

An assembly is not shared between domains when it is granted a different set of permissions in each domain. This can occur if the runtime host sets
an application domain-level security policy. Assemblies should not be
loaded as domain-neutral if the set of permissions granted to the assembly
is likely to be different in each domain.
What's New in Microsoft SQL Server 2000

Microsoft® SQL Server™ 2000 extends the performance, reliability, quality, and ease-of-use
of Microsoft SQL Server version 7.0. Microsoft SQL Server 2000 includes several new
features that make it an excellent database platform for large-scale online transactional
processing (OLTP), data warehousing, and e-commerce applications.

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server
2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis
Services. Analysis Services also includes a new data mining component.

The Repository component available in SQL Server version 7.0 is now called Microsoft SQL
Server 2000 Meta Data Services. References to the component now use the term Meta Data
Services. The term repository is used only in reference to the repository engine within Meta
Data Services.

The What's New topics contain brief overviews of the new features and links to relevant
conceptual topics that provide more detailed information. These conceptual topics provide
links to topics that describe the commands or statements you use to work with these features.

Relational Database Enhancements

Microsoft® SQL Server™ 2000 introduces several server improvements and new features:

XML Support

The relational database engine can return data as Extensible Markup Language (XML)
documents. Additionally, XML can be used to insert, update, and delete values in the database.

Federated Database Servers

SQL Server 2000 supports enhancements to distributed partitioned views that allow you to
partition tables horizontally across multiple servers. This allows you to scale out one database
server to a group of database servers that cooperate to provide the same performance levels as
a cluster of database servers. This group, or federation, of database servers can support the
data storage requirements of the largest Web sites and enterprise data processing systems.

SQL Server 2000 introduces Net-Library support for Virtual Interface Architecture (VIA)
system-area networks that provide high-speed connectivity between servers, such as between
application servers and database servers.

User-Defined Functions

The programmability of Transact-SQL can be extended by creating your own Transact-SQL
functions. A user-defined function can return either a scalar value or a table.
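
For example, a scalar user-defined function might be written as follows (the table and column names here are hypothetical, chosen to fit a music catalog):

```sql
CREATE FUNCTION dbo.fn_SongCount (@AlbumId int)
RETURNS int
AS
BEGIN
    DECLARE @n int
    SELECT @n = COUNT(*) FROM dbo.Songs WHERE AlbumId = @AlbumId
    RETURN @n
END
```

It can then be called like a built-in function, for example `SELECT dbo.fn_SongCount(1)`.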

Indexed Views

Indexed views can significantly improve the performance of an application where queries
frequently perform certain joins or aggregations. An indexed view allows indexes to be
created on views, where the result set of the view is stored and indexed in the database.
Existing applications do not need to be modified to take advantage of the performance
improvements with indexed views.
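
A sketch of an indexed view, assuming a hypothetical Orders table: the view must be created WITH SCHEMABINDING, and the first index on it must be unique and clustered (an aggregated indexed view also requires COUNT_BIG(*)):

```sql
CREATE VIEW dbo.v_AlbumSales WITH SCHEMABINDING
AS
SELECT AlbumId, COUNT_BIG(*) AS OrderCount, SUM(Amount) AS Total
FROM dbo.Orders
GROUP BY AlbumId
GO

-- Materializes and indexes the view's result set
CREATE UNIQUE CLUSTERED INDEX IX_AlbumSales ON dbo.v_AlbumSales (AlbumId)
```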

New Data Types

SQL Server 2000 introduces three new data types. bigint is an 8-byte integer type.
sql_variant is a type that allows the storage of data values of different data types. table is a
type that allows applications to store results temporarily for later use. It is supported for
variables, and as the return type for user-defined functions.
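
The three types can be tried directly in Transact-SQL (the column names are invented for the example):

```sql
DECLARE @big bigint              -- 8-byte integer
SET @big = 9000000000

DECLARE @v sql_variant           -- can hold values of different base types
SET @v = 'a string now, perhaps a number later'

DECLARE @t TABLE (SongId int, Title varchar(50))  -- table variable
INSERT INTO @t VALUES (1, 'Intro')
SELECT * FROM @t
```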

INSTEAD OF and AFTER Triggers

INSTEAD OF triggers are executed instead of the triggering action (for example, INSERT,
UPDATE, DELETE). They can also be defined on views, in which case they greatly extend
the types of updates a view can support. AFTER triggers fire after the triggering action. SQL
Server 2000 introduces the ability to specify which AFTER triggers fire first and last.
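
As an illustration, an INSTEAD OF DELETE trigger could turn deletions into a "soft delete" (the table and column names are hypothetical):

```sql
CREATE TRIGGER trg_Albums_Delete ON dbo.Albums
INSTEAD OF DELETE
AS
    -- Runs in place of the DELETE: mark the rows instead of removing them
    UPDATE a
    SET a.Discontinued = 1
    FROM dbo.Albums a
    JOIN deleted d ON a.AlbumId = d.AlbumId
```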

Cascading Referential Integrity Constraints


You can control the actions SQL Server 2000 takes when you attempt to update or delete a
key to which existing foreign keys point. This is controlled by the new ON DELETE and ON
UPDATE clauses in the REFERENCES clause of the CREATE TABLE and ALTER TABLE
statements.
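
A short sketch of the new clauses, using invented parent and child tables:

```sql
CREATE TABLE dbo.Songs (
    SongId  int PRIMARY KEY,
    AlbumId int NOT NULL
        REFERENCES dbo.Albums (AlbumId)
        ON DELETE CASCADE   -- deleting an album removes its songs
        ON UPDATE CASCADE   -- key changes propagate to the child rows
)
```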

Collation Enhancements

SQL Server 2000 replaces code pages and sort orders with collations. SQL Server 2000
includes support for most collations supported in earlier versions of SQL Server, and
introduces a new set of collations based on Windows collations. You can now specify
collations at the database level or at the column level. Previously, code pages and sort orders
could be specified only at the server level and applied to all databases on a server.

Collations support code page translations. Operations with char and varchar operands having
different code pages are now supported. Code page translations are not supported for text
operands. You can use ALTER DATABASE to change the default collation of a database.
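
For example (the collation names shown are standard SQL Server 2000 collations; the table and database names are placeholders):

```sql
-- Column-level collation
CREATE TABLE dbo.Artists (
    ArtistId int PRIMARY KEY,
    Name nvarchar(100) COLLATE Latin1_General_CI_AS
)

-- Change the database default collation
ALTER DATABASE MusicOnline COLLATE SQL_Latin1_General_CP1_CI_AS
```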

Full-Text Search Enhancements

Full-text search now includes change tracking and image filtering. Change tracking maintains
a log of all changes to the full-text indexed data. You can update the full-text index with these
changes by flushing the log manually, on a schedule, or as they occur, using the background
update index option. Image filtering allows you to index and query documents stored in image
columns. The user provides the document type in a column that contains the file name
extension that the document would have had if it were stored as a file in the file system. Using
this information, full-text search is able to load the appropriate document filter to extract
textual information for indexing.

Multiple Instances of SQL Server

SQL Server 2000 supports running multiple instances of the relational database engine on the
same computer. Each computer can run one instance of the relational database engine from
SQL Server version 6.5 or 7.0, along with one or more instances of the database engine from
SQL Server 2000. Each instance has its own set of system and user databases. Applications
can connect to each instance on a computer similar to the way they connect to instances of
SQL Servers running on different computers. The SQL Server 2000 utilities and
administration tools have been enhanced to work with multiple instances.

Index Enhancements

You can now create indexes on computed columns. You can specify whether indexes are built
in ascending or descending order, and if the database engine should use parallel scanning and
sorting during index creation.

The CREATE INDEX statement can now use the tempdb database as a work area for the
sorts required to build an index. This results in improved disk read and write patterns for the
index creation step, and makes it more likely that index pages will be allocated in contiguous
strips. In addition, the complete process of creating an index is eligible for parallel operations,
not only the initial table scan.
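
In DDL the new options read as follows (names invented; SORT_IN_TEMPDB directs the index-build sort to tempdb):

```sql
-- Descending key order and tempdb-based sorting
CREATE NONCLUSTERED INDEX IX_Albums_Year
ON dbo.Albums (ReleaseYear DESC)
WITH SORT_IN_TEMPDB

-- An index on a computed column
ALTER TABLE dbo.Albums ADD TitleUpper AS UPPER(Title)
CREATE INDEX IX_Albums_TitleUpper ON dbo.Albums (TitleUpper)
```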

Failover Clustering Enhancements

The administration of failover clusters has been greatly improved to make it very easy to
install, configure, and maintain a Microsoft SQL Server 2000 failover cluster. Additional
enhancements include the ability to failover and failback to or from any node in a SQL Server
2000 cluster, the ability to add or remove a node from the cluster through SQL Server 2000
Setup, and the ability to reinstall or rebuild a cluster instance on any node in the cluster
without affecting the other cluster node instances. The SQL Server 2000 utilities and
administration tools have been enhanced to work with failover clusters.

Net-Library Enhancements

The SQL Server 2000 Net-Libraries have been rewritten to virtually eliminate the need to
administer Net-Library configurations on client computers when connecting SQL Server 2000
clients to instances of SQL Server 2000. The new Net-Libraries also support connections to
multiple instances of SQL Server on the same computer, and support Secure Sockets Layer
encryption over all Net-Libraries. SQL Server 2000 introduces Net-Library support for Virtual
Interface Architecture (VIA) system-area networks that provide high-speed connectivity
between servers, such as between application servers and database servers.

64-GB Memory Support


Microsoft SQL Server 2000 Enterprise Edition can use the Microsoft Windows 2000
Address Windowing Extensions (AWE) API to support up to 64 GB of physical memory
(RAM) on a computer.

Distributed Query Enhancements

SQL Server 2000 introduces a new OPENDATASOURCE function, which you can use to
specify ad hoc connection information in a distributed query. SQL Server 2000 also specifies
methods that OLE DB providers can use to report the level of SQL syntax supported by the
provider and statistics on the distribution of key values in the data source. The distributed
query optimizer can then use this information to reduce the amount of data that has to be sent
from the OLE DB data source. SQL Server 2000 delegates more SQL operations to OLE DB
data sources than earlier versions of SQL Server. Distributed queries also support the other
functions introduced in SQL Server 2000, such as multiple instances, mixing columns with
different collations in result sets, and the new bigint and sql_variant data types.

SQL Server 2000 distributed queries add support for the OLE DB Provider for Exchange and
the Microsoft OLE DB Provider for Microsoft Directory Services.
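
OPENDATASOURCE takes a provider name and a connection string, and the result is used as the server part of a four-part name. In this sketch the server, database, and credentials are placeholders:

```sql
SELECT a.*
FROM OPENDATASOURCE(
        'SQLOLEDB',
        'Data Source=REMOTESERVER;User ID=reader;Password=secret'
     ).MusicOnline.dbo.Albums AS a
```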

Updatable Distributed Partitioned Views

SQL Server 2000 introduces enhancements to distributed partitioned views. You can partition
tables horizontally across several servers, and define a distributed partitioned view on each
member server that makes it appear as if a full copy of the original table is stored on each
server. Groups of servers running SQL Server that cooperate in this type of partitioning are
called federations of servers. A database federation built using SQL Server 2000 databases is
capable of supporting the processing requirements of the largest Web sites or enterprise-level
databases.

Kerberos and Security Delegation

SQL Server 2000 uses Kerberos to support mutual authentication between the client and the
server, as well as the ability to pass the security credentials of a client between computers, so
that work on a remote server can proceed using the credentials of the impersonated client.
With Microsoft Windows® 2000, SQL Server 2000 uses Kerberos and delegation to support
both integrated authentication and SQL Server logins.

Backup and Restore Enhancements


SQL Server 2000 introduces a new, more easily understood model for specifying backup and
restore options. The new model makes it clearer that you are balancing increased or decreased
exposure to losing work against the performance and log space requirements of different
plans. SQL Server 2000 introduces support for recovery to specific points of work using
named log marks in the transaction log, and the ability to do partial database restores.

Users can define passwords for backup sets and media sets that prevent unauthorized users
from accessing SQL Server backups.

Scalability Enhancements for Utility Operations

SQL Server 2000 enhancements for utility operations include faster differential backups,
parallel Database Console Command (DBCC) checking, and parallel scanning. Differential
backups can now be completed in a time that is proportional to the amount of data changed
since the last full backup. DBCC can be run without taking shared table locks while scanning
tables, thereby enabling them to be run concurrently with update activity on tables.
Additionally, DBCC now takes advantage of multiple processors, thus enabling near-linear
gain in performance in relation to the number of CPUs (provided that I/O is not a bottleneck).

Text in Row Data

SQL Server 2000 supports a new text in row table option that specifies that small text, ntext,
and image values be placed directly in the data row instead of in a separate page. This reduces
the amount of space used to store small text, ntext, and image data values, and reduces the
amount of disk I/O needed to process these values.

XML Integration of Relational Data

The Microsoft® SQL Server™ 2000 relational database engine natively supports Extensible
Markup Language (XML).

You can now access SQL Server 2000 over HTTP using a Universal Resource Locator (URL).
You can define a virtual root on a Microsoft Internet Information Services (IIS) server, which
gives you HTTP access to the data and XML functionality of SQL Server 2000.
You can use HTTP, ADO, or OLE DB to work with the XML functionality of SQL Server
2000:

• You can define XML views of SQL Server 2000 databases by annotating XML-Data
Reduced (XDR) schemas to map the tables, views, and columns that are associated
with the elements and attributes of the schema. The XML views can then be referenced
in XPath queries, which retrieve results from the database and return them as XML
documents.

• The results of SELECT statements can be returned as XML documents. The SQL
Server 2000 Transact-SQL SELECT statement supports a FOR XML clause that
specifies that the statement results be returned in the form of an XML document
instead of a relational result set. Complex queries, or queries that you want to make
secure, can be stored as templates in an IIS virtual root, and executed by referencing
the template name.

• You can expose the data from an XML document as a relational rowset using the new
OPENXML rowset function. OPENXML can be used everywhere a rowset function
can be used in a Transact-SQL statement, such as in place of a table or view reference
in a FROM clause. This allows you to use the data in XML documents to insert,
update, or delete data in the tables of the database, including modifying multiple rows
in multiple tables in a single operation.
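
Both directions can be sketched briefly (the table and element names are illustrative):

```sql
-- Relational rows out as XML
SELECT AlbumId, Title FROM dbo.Albums FOR XML AUTO

-- An XML document in as a rowset
DECLARE @doc int
EXEC sp_xml_preparedocument @doc OUTPUT,
    N'<Albums><Album AlbumId="1" Title="Demo"/></Albums>'

SELECT *
FROM OPENXML(@doc, N'/Albums/Album', 1)
     WITH (AlbumId int, Title varchar(50))

EXEC sp_xml_removedocument @doc
```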

Graphical Administration Enhancements

Microsoft® SQL Server™ 2000 introduces these graphical administration improvements and
new features:
Log Shipping

Log shipping allows the transaction logs from a source database to be continually backed up
and loaded into a target database on another server. This is useful for maintaining a warm
standby server, or for offloading query processing from the source server to a read-only
destination server. For more information, see Log Shipping.

SQL Profiler Enhancements

SQL Profiler now supports size-based and time-based traces, and includes new events for Data
File Auto Grow, Data File Auto Shrink, Log File Auto Grow, Log File Auto Shrink, Show
Plan All, Show Plan Statistics, and Show Plan Text.

SQL Profiler has been enhanced to provide auditing of SQL Server activities, up to the
auditing levels required by the C2 level of security defined by the United States government.

SQL Query Analyzer Enhancements

SQL Query Analyzer now includes Object Browser, which allows you to navigate through and
get information (such as parameters and dependencies) about database objects, including user
and system tables, views, stored procedures, extended stored procedures, and functions. The
Object Browser also supports generating scripts to either execute or create objects. Other
enhancements include server tracing and client statistics that show information about the
server-side and client-side impact of a given query.

SQL Query Analyzer includes a stored procedure debugger. SQL Query Analyzer also
includes templates that can be used as the starting points for creating objects such as
databases, tables, views, and stored procedures.

Copy Database Wizard

Users can run the Copy Database Wizard to upgrade SQL Server version 7.0 databases to SQL
Server 2000 databases. It can also be used to copy complete databases between instances of
SQL Server 2000.
Replication Enhancements

Microsoft® SQL Server™ 2000 introduces the following replication improvements and new
features:

Implementing Replication

SQL Server 2000 enhances snapshot replication, transactional replication, and merge
replication by adding:

• Alternate snapshot locations, which provide easier and more flexible methods for
applying the initial snapshot to Subscribers. You can save (and compress) the snapshot
files to a network location or removable media, which can then be transferred to
Subscribers without using the network.

• Attachable subscription databases, which allow you to transfer a database with
replicated data and one or more subscriptions from one Subscriber to another SQL
Server. After the database is attached to the new Subscriber, the subscription database
at the new Subscriber will automatically receive its own pull subscriptions to the
publications at the specified Publishers.

• Schema changes on publication databases, which allow you to add or drop columns on
the publishing table and propagate those changes to Subscribers.

• On demand script execution, which allows you to post a general SQL script that will
be executed at all Subscribers.

• Pre- and post-snapshot scripts, which allow you to run scripts before or after a
snapshot is applied at the Subscriber.

• Remote agent activation, which allows you to reduce the amount of processing on the
Distributor or Subscriber by running the Distribution Agent or Merge Agent on one
computer while activating that agent from another computer. You can use remote
agent activation with push or pull subscriptions.
• Support of new SQL Server features, which includes user-defined functions, indexed
views, new data types, and multiple instances of SQL Server.

• The ActiveX Snapshot Control, which makes programmatic generation of snapshots
easier.

• More snapshot scripting options, which support transfer of indexes, extended
properties, and constraints to Subscribers.

Merge Replication

Merge replication is the process of distributing data from Publisher to Subscribers, allowing
the Publisher and Subscribers to make updates while connected or disconnected, and then
merging the changes between sites when they are connected. Enhancements to merge
replication include:

• Greater parallelism of the Merge Agent for improved server-to-server performance.

• Optimizations for determining data changes relevant to a partition at a Subscriber.

• Dynamic snapshots, which provide more efficient application of the initial snapshot
when using dynamic filters.

• Vertical filters for merge publications.

• More powerful dynamic filtering with user-defined functions.

• The ability to use alternate synchronization partners when synchronizing data. Using
alternate synchronization partners, a Subscriber to a merge publication can synchronize
with any specified server that has the same data as the original Publisher.

• Automated management of identity ranges. In merge replication topologies where a
publication contains an identity column, and where new rows can be inserted at
Subscribers, automated management of identity ranges at the Subscriber ensures that the
same identity values are not assigned to rows inserted at different subscription
databases, and that primary key constraint violations do not occur. This feature is also
available when queued updating is used with snapshot replication or transactional
replication.

• Support for timestamp columns in published tables.

• Improved management of the growth of merge tracking data.

• Several new merge replication conflict resolvers including interactive resolvers that
provide a user interface for immediate, manual conflict resolution, priority based on a
column value, minimum/maximum value wins, first/last change wins, additive/average
value, and merge by appending different text values.

• New options to validate permissions for a Subscriber to upload changes to a Publisher
(check_permissions), and security enhancements including code signing of the conflict
resolvers included with Microsoft SQL Server 2000.

• New COM interfaces that support heterogeneous data sources as Publishers within a
SQL Server replication topology.

• Validation of replicated data per subscription or on a publication-wide basis.
Validation is also available through SQL Server Enterprise Manager.

• Reinitialization to allow uploading of changes from the Subscriber before the
application of a new snapshot.

Transactional Replication

With transactional replication, an initial snapshot of data is applied at Subscribers, and then
when data modifications are made at the Publisher, the individual transactions are captured
and propagated to Subscribers. Enhancements to transactional replication include:

• Concurrent snapshot processing, so that data modifications can continue on publishing
tables while the initial snapshot is generated.

• Improved error handling and the ability to skip specified errors and continue
replication.

• Validation of replicated data at the Subscriber, including validation on vertical
partitions. Validation is also available through SQL Server Enterprise Manager.

• Publishing indexed views as tables.


• The option to store data modifications made at the Subscriber in a queue (queued
updating).

• The option to transform data as it is published to Subscribers (transforming published
data).

• The ability to restore transactional replication databases without reinitializing
subscriptions or disabling and reconfiguring publishing and distribution. You can also
set up transactional replication to work with log shipping, enabling you to fail over to a
warm standby server without reconfiguring replication.

Queued Updating

Queued updating allows snapshot replication and transactional replication Subscribers to
modify published data without requiring an active network connection to the Publisher.

When you create a publication with the queued updating option enabled and a Subscriber
performs INSERT, UPDATE, or DELETE statements on published data, the changes are
stored in a queue. The queued transactions are applied asynchronously at the Publisher when
network connectivity is restored.

Because the updates are propagated asynchronously to the Publisher, the same data may have
been updated by the Publisher or by another Subscriber and conflicts can occur when applying
the updates. Conflicts are detected automatically and several options for resolving conflicts are
offered.

Transforming Published Data

Transformable subscriptions (available with snapshot replication or transactional replication)
leverage the data movement, transformation mapping, and filtering capabilities of Data
Transformation Services (DTS).

Using transformable subscriptions in your replication topology allows you to customize and
send published data based on the requirements of individual Subscribers, including performing
data type mappings, column manipulations, string manipulations, and use of functions as data
is published.
Replication Usability

There have been several improvements in SQL Server Enterprise Manager that provide for
easier implementation, monitoring, and administration of replication. Enhancements to
replication usability include:

• A centralized Replication folder in the SQL Server Enterprise Manager tree, which
organizes all subscriptions and publications on the server being administered.

• The ability to browse for and subscribe to publications (when permission is allowed)
using Windows Active Directory.

• The ability to see multiple Distributors in a single monitoring node in SQL Server
Enterprise Manager.

• Standard and advanced replication options separated in the Create Publication, Create
Push Subscription, and Create Pull Subscription Wizards. You can choose to show
advanced options in these wizards on the Welcome page of each wizard.

• New wizards for creating jobs that create dynamic snapshots for merge publications
that use dynamic filters (Create Dynamic Snapshot Job Wizard), and for transforming
published data in snapshot replication or transactional replication (Transform
Published Data Wizard).

Data Transformation Services Enhancements

Microsoft® SQL Server™ 2000 introduces these Data Transformation Services (DTS)
enhancements and new features:

New Custom Tasks


New DTS custom tasks, available through DTS Designer or the DTS object model, allow you
to create DTS packages that perform tasks or set variables based on the properties of the
run-time environment. Use these tasks to:

• Import data from, and send data and completed packages to, Internet and File Transfer
Protocol (FTP) sites.

• Run packages asynchronously.

• Build packages that send messages to each other.

• Build packages that execute other packages.

• Join multiple package executions as part of a transaction.

Enhanced Logging Facilities

DTS package logs save information for each package execution, allowing you to maintain a
complete execution history. You can also view execution information for individual processes
within a task.

You can generate exception files for transformation tasks. When you log to exception files,
you can save source and destination error rows to a file through the DTS OLE DB text file
provider and re-process the error rows.

Saving DTS Packages to Visual Basic Files

DTS packages can now be saved to a Microsoft® Visual Basic® file. This allows a package
created by the DTS Import/Export Wizard or DTS Designer to be incorporated into Visual
Basic programs, or to be used as a prototype by Visual Basic developers who need to reference
the components of the DTS object model.

A new multiphase data pump allows advanced users to customize the behavior of the data
pump at the various stages of its operation. You can now use global variables as input and
output parameters for queries.

Using Parameterized Queries


You can now use parameterized source queries in a DTS transformation task and an Execute
SQL task. In addition, DTS includes an option for saving the results of a parameterized query
to a global variable, allowing you to perform functions such as saving disconnected Microsoft
ActiveX® Data Objects (ADO) recordsets in DTS.
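
As an illustrative sketch (the table and column names below are hypothetical, not taken from this project), a parameterized source query for a DTS Execute SQL task uses ? markers, each of which is bound to a DTS global variable at run time:

```sql
-- Hypothetical parameterized source query for a DTS Execute SQL task.
-- Each ? is mapped to a DTS global variable in the task's parameter settings.
SELECT CustomerID, CompanyName
FROM Customers
WHERE Country = ? AND City = ?
```

The result can likewise be saved to a global variable (for example, as a disconnected ADO recordset) for use by later steps in the package.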

Using Global Variables to Pass Information Between DTS Packages

You now can use the Execute Package task to dynamically assign the values of global
variables from a parent package to a child package. Use global variables to pass information
from one package to another when each package performs different work items. For example,
use one package to download data on a nightly basis, summarize the data, assign summary
data values to global variables, and pass the values to another package that further processes
the data.

Working with Named and Multiple Instances of SQL Server 2000

With Microsoft® SQL Server™ 2000, you have the option of installing multiple copies, or
instances, of SQL Server on one computer. When setting up a new installation of SQL Server
2000 or maintaining an existing installation, you can specify it as:

• A default instance of SQL Server.

This instance is identified by the network name of the computer on which it is running.
Applications using client software from earlier versions of SQL Server can connect to a
default instance. SQL Server version 6.5 or SQL Server version 7.0 servers can operate as
default instances. However, a computer can have only one version functioning as the default
instance at a time.

• A named instance of SQL Server.

This instance is identified by the network name of the computer plus an instance name, in the
format <computername>\<instancename>. Applications must use SQL Server 2000 client
components to connect to a named instance. A computer can run any number of named
instances of SQL Server concurrently. A named instance can run at the same time as an
existing installation of SQL Server version 6.5 or SQL Server version 7.0. The instance name
cannot exceed 16 characters.

A new instance name must begin with a letter, an ampersand (&), or an underscore (_),
and can contain numbers, letters, or other characters. SQL Server sysnames and
reserved names should not be used as instance names. For example, the term "default"
should not be used as an instance name because it is a reserved name used by Setup.
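
As a quick way to confirm which instance a connection is using, the following query returns the server name in the form described above (the server and instance names in the comment are made up for illustration):

```sql
-- @@SERVERNAME returns the name of the local server; for a named instance
-- it takes the form computername\instancename.
SELECT @@SERVERNAME
-- A named instance ACCT on a computer called SALES1 would return SALES1\ACCT.
```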

Single and multiple instances of SQL Server 2000 (default or named) are available using the
SQL Server 2000 Personal Edition, the SQL Server 2000 Standard Edition, or the SQL Server
2000 Enterprise Edition.

Default Instances

You cannot install a default instance of SQL Server 2000 on a computer that is also running
SQL Server 7.0. You must either upgrade the SQL Server 7.0 installation to a default instance
of SQL Server 2000, or keep the default instance of SQL Server 7.0 and install a named
instance of SQL Server 2000.

You can install a default instance of SQL Server 2000 on a computer running SQL Server 6.5,
but the SQL Server 6.5 installation and the default instance of SQL Server 2000 cannot be
running at the same time. You must switch between the two using the SQL Server 2000
vswitch command prompt utility.

Multiple Instances

Multiple instances occur when you have more than one instance of SQL Server 2000 installed
on one computer. Each instance operates independently from any other instance on the same
computer, and applications can connect to any of the instances. The number of instances that
can run on a single computer depends on resources available. The maximum number of
instances supported in SQL Server 2000 is 16.

When you install SQL Server 2000 on a computer with no existing installations of SQL
Server, Setup specifies the installation of a default instance. However, you can choose to
install SQL Server 2000 as a named instance instead by clearing the Default option in the
Instance Name dialog box.
A named instance of SQL Server 2000 can be installed at any time: before installing the
default instance of SQL Server 2000, after installing the default instance of SQL Server 2000,
or instead of installing the default instance of SQL Server 2000.

Each named instance is made up of a distinct set of services and can have completely different
settings for collations and other options. The directory structure, registry structure, and service
names all reflect the specific instance name you specify.

Working with Instances and Versions of SQL Server

Multiple instances in Microsoft® SQL Server™ 2000 offer enhanced ways to work with
earlier versions of Microsoft SQL Server already installed on your computer. You can leave
previous installations intact, and also install and run SQL Server 2000. For example, you can
run SQL Server version 7.0 and a named instance of SQL Server 2000 at the same time, or
you can run SQL Server version 6.5 in a version switch configuration with SQL Server 2000.
If you need to have three different versions of SQL Server installed on the same computer,
there are several ways to accomplish this.

In addition, users of all editions of SQL Server can have more than one instance of SQL
Server 2000 installed and running at once (multiple instances), as well as one or more earlier
versions.

Considerations for using SQL Server 2000 in combination with previous installations include:

• Using SQL Server 6.5 with the default instance or named instances of SQL Server
2000.

• Running SQL Server 7.0 with a named instance of SQL Server 2000.

• Working with three versions of SQL Server: SQL Server 6.5, SQL Server 7.0, and
SQL Server 2000.

Note The concept of the default instance is new to SQL Server 2000, due to the
introduction of multiple instances. If installed on the same computer as SQL Server 2000,
either SQL Server version 6.5 or SQL Server version 7.0 can function as the default instance
of SQL Server. (A default instance is identified by the network name of the computer on
which it is running.)
Using SQL Server Books Online for SQL Server 7.0

When you keep Microsoft SQL Server version 7.0 on your computer and install a named
instance of SQL Server 2000, SQL Server Books Online for SQL Server 7.0 remains in its
original location: C:\Mssql7\Books. In this side-by-side configuration, Books Online for SQL
Server 7.0 remains accessible from the start menu in the SQL Server 7.0 program group.

Note This is an exception to what occurs for the other shared tools (such as code samples,
scripts, and templates), when a named instance of SQL Server 2000 is installed along with
SQL Server 7.0. All other shared tools from the 7.0 installation are copied to storage
locations, with pointers to the SQL Server 2000 tools replacing previous versions of the
tools. Files for Books Online for SQL Server 7.0 are not redirected in this way; they
remain ready for use.

When SQL Server 7.0 is upgraded to the default version of SQL Server 2000, the 7.0 Books
Online files are also upgraded. That is, they are replaced with the SQL Server 2000 Books
Online.

Whether or not you have SQL Server 7.0 installed, you can access information in the SQL
Server 7.0 documentation.

Fundamentals of SQL Server 2000 Architecture

Microsoft® SQL Server™ 2000 is a family of products that meet the data storage
requirements of the largest data processing systems and commercial Web sites, yet at the same
time can provide easy-to-use data storage services to an individual or small business.

The data storage needs of a modern corporation or government organization are very complex.
Some examples are:

• Online Transaction Processing (OLTP) systems must be capable of handling thousands
of orders placed at the same time.

• Increasing numbers of corporations are implementing large Web sites as a mechanism
for their customers to enter orders, contact the service department, get information
about products, and perform many other tasks that previously required contact with
employees. These sites require data storage that is secure, yet tightly integrated with
the Web.
• Organizations are implementing off-the-shelf software packages for critical services
such as human resources planning, manufacturing resources planning, and inventory
control. These systems require databases capable of storing large amounts of data and
supporting large numbers of users.

• Organizations have many users who must continue working when they do not have
access to the network. Examples are mobile disconnected users, such as traveling sales
representatives or regional inspectors. These users must synchronize the data on a
notebook or laptop with the current data in the corporate system, disconnect from the
network, record the results of their work while in the field, and then finally reconnect
with the corporate network and merge the results of their fieldwork into the corporate
data store.

• Managers and marketing personnel need increasingly sophisticated analysis of trends
recorded in corporate data. They need robust Online Analytical Processing (OLAP)
systems that are easily built from OLTP data and that support sophisticated data analysis.

• Independent Software Vendors (ISVs) must be able to distribute data storage
capabilities with applications targeted at individuals or small workgroups. This means
the data storage mechanism must be transparent to the users who purchase the
application. It requires a data storage system that can be configured by the
application, and that then tunes itself automatically, so that users do not need
dedicated database administrators to constantly monitor and tune the application.

SQL Server and XML Support

Extensible Markup Language (XML) is a markup language used to describe the contents of a
set of data and how the data should be output to a device or displayed in a Web page. Markup
languages originated as ways for publishers to indicate to printers how the content of a
newspaper, magazine, or book should be organized. Markup languages for electronic data
perform the same function for electronic documents that can be displayed on different types
of electronic equipment.

Both XML and the Hypertext Markup Language (HTML) are derived from Standard
Generalized Markup Language (SGML). SGML is a very large, complex language that is
difficult to use fully for publishing data on the Web. HTML is a simpler, more specialized
markup language than SGML, but it has a number of limitations when working with data on
the Web. XML is smaller than SGML and more robust than HTML, so it is becoming an
increasingly important language for the exchange of electronic data through the Web or
intracompany networks.

In a relational database such as Microsoft® SQL Server™ 2000, all operations on the tables in
the database produce a result in the form of a table. The result set of a SELECT statement is in
the form of a table. Traditional client/server applications that execute a SELECT statement
process the results by fetching one row or block of rows from the tabular result set at a time
and mapping the column values into program variables. Web application programmers, on the
other hand, are more familiar with working with hierarchical representations of data in XML
or HTML documents.

SQL Server 2000 introduces support for XML. These new features include:

• The ability to access SQL Server through a URL.

• Support for XML-Data schemas and the ability to specify XPath queries against these
schemas.

• The ability to retrieve and write XML data: retrieve XML data using the SELECT
statement and the FOR XML clause, and write XML data using the OpenXML rowset
provider.

• Enhancements to the Microsoft SQL Server 2000 OLE DB provider (SQLOLEDB)
that allow XML documents to be set as command text and to return result sets as a
stream.
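
To illustrate, a minimal FOR XML query might look like the following (the Customers table here is from the Northwind sample database, used purely as an example):

```sql
-- Retrieve rows as XML instead of a tabular result set.
-- FOR XML AUTO names elements after the table and emits columns as attributes.
SELECT CustomerID, CompanyName
FROM Customers
FOR XML AUTO
```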

Database Architecture

Microsoft® SQL Server™ 2000 data is stored in databases. The data in a database is
organized into the logical components visible to users. A database is also physically
implemented as two or more files on disk.

When using a database, you work primarily with the logical components such as tables, views,
procedures, and users. The physical implementation of files is largely transparent. Typically,
only the database administrator needs to work with the physical implementation.
Each instance of SQL Server has four system databases (master, model, tempdb, and msdb)
and one or more user databases. Some organizations have only one user database, containing
all the data for their organization. Some organizations have different databases for each group
in their organization, and sometimes a database used by a single application. For example, an
organization could have one database for sales, one for payroll, one for a document
management application, and so on. Sometimes an application uses only one database; other
applications may access several databases.

It is not necessary to run multiple copies of the SQL Server database engine to allow multiple
users to access the databases on a server. An instance of the SQL Server Standard or
Enterprise Edition is capable of handling thousands of users working in multiple databases at
the same time. Each instance of SQL Server makes all databases in the instance available to all
users that connect to the instance, subject to the defined security permissions.

When connecting to an instance of SQL Server, your connection is associated with a particular
database on the server. This database is called the current database. You are usually connected
to a database defined as your default database by the system administrator, although you can
use connection options in the database APIs to specify another database. You can switch from
one database to another using either the Transact-SQL USE database_name statement, or an
API function that changes your current database context.
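
For example, switching the current database with the USE statement might look like this (Northwind is the sample database installed with SQL Server 2000):

```sql
-- Change the current database context for this connection, then query in it.
USE Northwind
GO
SELECT COUNT(*) FROM Employees
GO
```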

SQL Server 2000 allows you to detach databases from an instance of SQL Server, then
reattach them to another instance, or even attach the database back to the same instance. If you
have a SQL Server database file, you can tell SQL Server when you connect to attach that
database file with a specific database name.
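
A sketch of the detach/attach sequence, using the system stored procedures sp_detach_db and sp_attach_db (the database name and file paths below are illustrative):

```sql
-- Detach the database from the current instance...
EXEC sp_detach_db @dbname = 'MusicOnline'

-- ...then reattach it, on the same or another instance, by naming its files.
EXEC sp_attach_db @dbname = 'MusicOnline',
    @filename1 = 'C:\Data\MusicOnline.mdf',
    @filename2 = 'C:\Data\MusicOnline_log.ldf'
```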

English Query Fundamentals

Using English Query, you can turn your relational databases into English Query applications,
which allow end users to pose questions in English instead of forming a query with an SQL
statement.

The English Query Model Editor appears within the Microsoft® Visual Studio® version 6.0
development environment. From there, you can choose one of the English Query project
wizards, the SQL Project Wizard or the OLAP Project Wizard, to automatically create an
English Query project and model. After the basic model is created, you can refine, test, and
compile it into an English Query application (*.eqd), and then deploy it (for example, to the
Web).
Creating an English Query Project and Model

Using the SQL Project Wizard or the OLAP Project Wizard, you incorporate the database
structure (table names, field names, keys, and joins) or cube information of the database into a
project and a model.
A model contains all the information needed for an English Query application, including the
database structure, or schema, of the underlying SQL database or cube and the semantic
objects (entities and relationships). You also define properties for an application and add
entries to the English Query dictionary, as well as manually add and modify entities and
relationships while testing questions and set other options to expand the model.
Creating Entities and Relationships

With the wizards, semantic objects are automatically created for the model. These include
entities and relationships (with phrasings such as customers buy products or Customer_Names
are the names of customers). Entities are usually represented by tables, fields, and OLAP
objects.

An entity is a real-world object, referred to by a noun (person, place, thing, or idea), for
example: customers, cities, products, shipments, and so forth. In databases, entities are
usually represented by tables, fields, and Analysis Services objects.
Relationships describe what the entities have to do with one another, for example: customers
purchase products. Command relationships are not represented in the database but refer to
actions to be executed. For example, a command to a compact disc player can allow requests
such as "Play the album with song X on it."
Deploying an English Query Application

You can deploy an English Query application in several ways, including within a Microsoft
Visual Basic® or Microsoft Visual C++® application and on a Web page running on
Microsoft Internet Information Services (IIS). In the Web scenario, the interface of the
application is with a set of Active Server Pages (ASP).
Meta Data Services Overview

Microsoft® SQL Server™ 2000 Meta Data Services is an object-oriented repository
technology that can be integrated with enterprise information systems or with applications
that process meta data.
A number of Microsoft technologies use Meta Data Services as a native store for object
definitions or as a platform for deploying meta data. One of the ways in which SQL Server
2000 uses Meta Data Services is to store versioned Data Transformation Services (DTS)
packages. In Microsoft Visual Studio®, Meta Data Services supports the exchange of model
data with other development tools.

You can use Meta Data Services for your own purposes: as a component of an integrated
information system, as a native store for custom applications that process meta data, or as a
storage and management service for sharing reusable models. You can also extend Meta Data
Services to provide support for new tools for resale, or customize it to satisfy internal tool
requirements.

Troubleshooting Overview

As a starting point to troubleshooting a problem in Microsoft® SQL Server™ 2000, you may
find the solution in one of the online troubleshooters from SQL Server Product Support
Services (PSS). For more information, see Online Troubleshooters from PSS. In addition,
review current error logs for information that may pinpoint the problem. Other current
information about troubleshooting SQL Server 2000 can be found on the FAQs & Highlights
for SQL Server page, available at Microsoft Web site.

Error Logs

The error log in SQL Server 2000 provides complete information about events in SQL Server.
You may also want to view the Microsoft Windows® 2000 or Windows NT® 4.0 application
log, which provides an overall picture of events that occur on the Windows NT 4.0 and
Windows 2000 operating systems, as well as events in SQL Server and SQL Server Agent.
Both logs include informational messages (such as startup data), and both record the date and
time of all events automatically.

SQL Server events are logged according to the way you start SQL Server.

• When SQL Server is started as a service under the Windows 2000 or Windows NT 4.0
operating system, events are logged to the SQL Server error log, to the Windows 2000
or Windows NT application log, or to both logs.

• When SQL Server is started from the command prompt, events are logged to the SQL
Server error log and to standard output (typically the monitor, unless output has been
redirected elsewhere).

Backward Compatibility Issues

If you encounter a problem regarding compatibility between SQL Server 2000 and earlier
versions of SQL Server, see SQL Server 2000 and SQL Server version 7.0 and SQL Server
2000 and SQL Server version 6.5, which cover the feature changes between versions.

Additional Resources

For access to the Microsoft Knowledge Base and other current information, a subscription to
Microsoft TechNet or MSDN® can be helpful. For more information, see:

• The Microsoft TechNet page at Microsoft Web site.

• The MSDN page at Microsoft Web site.

Viewing Web-Based Information

Numerous links to Microsoft Product Support Services (PSS) Web pages are provided in the
Troubleshooting topics. Links to the new online troubleshooters, as well as to pertinent
Microsoft Knowledge Base articles and white papers, are also available. Every effort has been
made to ensure that the Web links are correct and will remain stable over time. However, if a
link does not work, go to the MSDN Online Support Web page at the Microsoft Web site and
navigate to the correct location.

System Development Methodology

1. Initiation Phase
The initiation of a system (or project) begins when a business need or opportunity is
identified. A Project Manager should be appointed to manage the project. This business need
is documented in a Concept Proposal. After the Concept Proposal is approved, the System
Concept Development Phase begins.
2. System Concept Development Phase

Once a business need is approved, the approaches for accomplishing the concept are
reviewed for feasibility and appropriateness. The Systems Boundary Document
identifies the scope of the system and requires Senior Official approval and funding
before beginning the Planning Phase.

3. Planning Phase

The concept is further developed to describe how the business will operate once the approved
system is implemented, and to assess how the system will impact employee and customer
privacy. To ensure the products and /or services provide the required capability on-time and
within budget, project resources, activities, schedules, tools, and reviews are defined.
Additionally, security certification and accreditation activities begin with the identification of
system security requirements and the completion of a high level vulnerability assessment.

4. Requirements Analysis Phase

Functional user requirements are formally defined, delineating the requirements in
terms of data, system performance, security, and maintainability. All requirements are
defined to a level of detail sufficient for systems design to proceed. All requirements
must be measurable and testable, and must relate to the business need or opportunity
identified in the Initiation Phase.

5. Design Phase

The physical characteristics of the system are designed during this phase. The operating
environment is established, major subsystems and their inputs and outputs are defined, and
processes are allocated to resources. Everything requiring user input or approval must be
documented and reviewed by the user. The physical characteristics of the system are specified
and a detailed design is prepared. Subsystems identified during design are used to create a
detailed structure of the system. Each subsystem is partitioned into one or more design units or
modules. Detailed logic specifications are prepared for each software module.
6. Development Phase

The detailed specifications produced during the design phase are translated into hardware,
communications, and executable software. Software shall be unit tested, integrated, and
retested in a systematic manner. Hardware is assembled and tested.

7. Integration and Test Phase

The various components of the system are integrated and systematically tested. The user tests
the system to ensure that the functional requirements, as defined in the functional
requirements document, are satisfied by the developed or modified system. Prior to installing
and operating the system in a production environment, the system must undergo certification
and accreditation activities.

8. Implementation Phase

The system or system modifications are installed and made operational in a production
environment. The phase is initiated after the system has been tested and accepted by the user.
This phase continues until the system is operating in production in accordance with the
defined user requirements.

9. Operations and Maintenance Phase

The system operation is ongoing. The system is monitored for continued performance in
accordance with user requirements, and needed system modifications are incorporated. The
operational system is periodically assessed through In-Process Reviews to determine how the
system can be made more efficient and effective. Operations continue as long as the system
can be effectively adapted to respond to an organization’s needs. When modifications or
changes are identified as necessary, the system may reenter the planning phase.
10. Disposition Phase

The disposition activities ensure the orderly termination of the system and preserve the vital
information about the system so that some or all of the information may be reactivated in the
future if necessary. Particular emphasis is given to proper preservation of the data processed
by the system, so that the data is effectively migrated to another system or archived in
accordance with applicable records management regulations and policies, for potential future
access.

SDLC Objectives

This guide was developed to disseminate proven practices to system developers, project
managers, program/account analysts and system owners/users throughout the DOJ. The
specific objectives expected include the following:

• To reduce the risk of project failure


• To consider system and data requirements throughout the entire life of the system
• To identify technical and management issues early
• To disclose all life cycle costs to guide business decisions
• To foster realistic expectations of what the systems will and will not provide
• To provide information to better balance programmatic, technical, management, and
cost aspects of proposed system development or modification
• To encourage periodic evaluations to identify systems that are no longer effective
• To measure progress and status for effective corrective action
• To support effective resource management and budget planning
• To consider meeting current and future business requirements

Key Principles

This guidance document refines traditional information system life cycle management
approaches to reflect the principles outlined in the following subsections. These are the
foundations for life cycle management.
Life Cycle Management Should be used to Ensure a Structured Approach
to Information Systems Development, Maintenance, and Operation

This SDLC describes an overall structured approach to information management.


Primary emphasis is placed on the information and systems decisions to be made and the
proper timing of decisions. The manual provides a flexible framework for approaching a
variety of systems projects. The framework enables system developers, project managers,
program/account analysts, and system owners/users to combine activities, processes, and
products, as appropriate, and to select the tools and methodologies best suited to the unique
needs of each project.

1. Support the use of an Integrated Product Team

The establishment of an Integrated Product Team (IPT) can aid in the success of a project. An
IPT is a multidisciplinary group of people who support the Project Manager in the planning,
execution, delivery and implementation of life cycle decisions for the project. The IPT is
composed of qualified empowered individuals from all appropriate functional disciplines that
have a stake in the success of the project. Working together in a proactive, open
communication, team oriented environment can aid in building a successful project and
providing decision makers with the necessary information to make the right decisions at the
right time.

2. Each System Project must have a Program Sponsor

To help ensure effective planning, management, and commitment to information systems,


each project must have a clearly identified program sponsor. The program sponsor serves in a
leadership role, providing guidance to the project team and securing, from senior
management, the required reviews and approvals at specific points in the life cycle. An
approval from senior management is required after the completion of the first seven of the
SDLC phases, annually during Operations and Maintenance Phase and six-months after the
Disposition Phase. Senior management approval authority may vary based on dollar
value, visibility level, congressional interest, or a combination of these.
The program sponsor is responsible for identifying who will be responsible for formally
accepting the delivered system at the end of the Implementation Phase.

3. A Single Project Manager must be Selected for Each System Project

The Project Manager has responsibility for the success of the project and works through
a project team and other supporting organization structures, such as working groups or user
groups, to accomplish the objectives of the project. Regardless of organizational affiliation,
the Project Manager is accountable and responsible for ensuring that project activities and
decisions consider the needs of all organizations that will be affected by the system. The
Project Manager develops a project charter to define and clearly identify the lines of authority
between and within the agency’s executive management, program sponsor, (user/customer),
and developer for purposes of management and oversight.

4. A Comprehensive Project Management Plan is Required for Each System Project

The project management plan is a pivotal element in the successful solution of an
information management requirement. The project management plan must describe how each
life cycle phase will be accomplished to suit the specific characteristics of the project. The
project management plan is a vehicle for documenting the project scope, tasks, schedule,
allocated resources, and interrelationships with other projects. The plan is used to provide
direction to the many activities of the life cycle and must be refined and expanded throughout
the life cycle.

5. Specific Individuals must be Assigned to Perform Key Roles throughout the Life Cycle

Certain roles are considered vital to a successful system project, and at least one individual
must be designated as responsible for each key role. Assignments may be made on a full- or
part-time basis as appropriate. Key roles include program/functional management, quality
assurance, security, telecommunications management, data administration, database
administration, logistics, financial, systems engineering, test and evaluation, contracts
management, and configuration management. For most projects, more than one individual
should represent the actual or potential users of the system (that is, program staff) and should
be designated by the Program Manager of the program and organization.


FEASIBILITY STUDY

A feasibility study is conducted to select the best system that meets performance
requirements. This entails an identification description, an evaluation of candidate systems,
and the selection of the best system for the job. The required system performance is defined
by a statement of constraints, the identification of specific system objectives, and a
description of outputs.
The key considerations in feasibility analysis are:

1. Economic Feasibility
2. Technical Feasibility
3. Operational Feasibility

Economic Feasibility

It looks at the financial aspects of the project. It determines whether the management has
enough resources and budget to invest in the proposed system, and estimates the time
required to recover the cost incurred. It also determines whether it is worthwhile to invest the
money in the proposed project. Economic feasibility is determined by means of cost-benefit
analysis. The proposed system is economically feasible because the costs involved in
purchasing the hardware and the software are within reach. Personnel costs, such as the
salaries of employees hired, are also nominal, because working with this system does not
require a highly qualified professional. The operating-environment costs are marginal. The
short development time also contributed to its economic feasibility. It was observed that the
organization was already using computers for other purposes, so there is no additional cost
to be incurred in adding this system to its computers.

The backend required for storing other details is the same database, that is, SQL Server. The
computers in the organization are sufficiently capable and do not need extra components to
load the software. Hence the organization can implement the new system without any
additional expenditure, and it is economically feasible.

The result of the feasibility study is a formal proposal. This is simply a report: a formal
document detailing the nature and the scope of the proposed solution. The proposal
summarizes what is known and what is going to be done. Three key considerations are
involved in the feasibility analysis: economic, technical, and operational behavior.

2.3.1 Economic Feasibility: Economic analysis is the most frequently used method for
evaluating the effectiveness of a candidate system. More commonly known as cost-benefit
analysis, the procedure is to determine the benefits and savings that are expected from a
candidate system and compare them with costs. If benefits outweigh costs, the decision is
made to design and implement the system. Otherwise, further justification or alterations in
the proposed system will have to be made if it is to have a chance of being approved. This is
an ongoing effort that improves in accuracy at each phase of the system life cycle.
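As a toy illustration of the cost-benefit comparison described above, a simple payback-period calculation divides the initial cost by the annual net saving. This is a minimal sketch; the class name and the figures used are invented for illustration only.

```csharp
using System;

class CostBenefit
{
    // Simple payback period: years needed for annual savings to repay the cost.
    public static double PaybackYears(double initialCost, double annualSaving)
    {
        return initialCost / annualSaving;
    }

    static void Main()
    {
        // Hypothetical figures: Rs. 60,000 system cost, Rs. 24,000 saved per year
        Console.WriteLine(PaybackYears(60000, 24000)); // 2.5
    }
}
```

If the payback period falls within the system's expected useful life, the benefits outweigh the costs and the project clears this check.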

2.3.2 Technical Feasibility: Technical feasibility centers on the existing computer system
(hardware, software, etc.) and to what extent it can support the proposed addition. For
example, if the current computer is operating at 80% capacity - an arbitrary ceiling - then
running another application could overload the system or require additional hardware. This
involves financial considerations to accommodate technical enhancements. If the budget is a
serious constraint, then the project is judged not feasible.

2.3.3 Operational Feasibility: It is common knowledge that computer installations have
something to do with turnover, transfers, retraining, and changes in employee job status.
Therefore, it is understandable that the introduction of a candidate system requires special
effort to educate, sell, and train the staff on new ways of conducting business.

2.3.4 Choice of Platform

In any organization a lot of data is generated as a result of day-to-day operations. In the
past, all kinds of data - be it the business data of a company - were maintained manually.
Since the task was performed manually, it was time consuming and error prone. With the
advent of the computer, the task of maintaining large amounts of data has undergone a sea
change. Today computer systems have become so user friendly that even first-time users can
create their own applications with the help of tools such as MS-Access, FoxPro, and SQL
Server. These tools are very visual and hence user friendly. They provide a point-and-click
environment for building applications that can interact with large amounts of data.

Technical Feasibility
It is a measure of the practicality of a specific technical solution and the availability of
technical resources and expertise.
• The proposed system uses ASP.NET with C# as the front-end and SQL Server
as the back-end tool.
• SQL Server is a popular tool used to design and develop database
objects such as tables, views, and indexes.
• The above tools are readily available, easy to work with, and
widely used for developing commercial applications.
The hardware used in this project is a P4 processor at 2.4 GHz, 128 MB RAM, a 40 GB hard
disk, and a floppy drive. This hardware was already available on the existing computer
system. The software - SQL Server, IIS, the .NET Framework, and the Windows XP
operating system - was already installed on the existing computer system. So no additional
hardware or software was required, and the project is technically feasible.
The technical feasibility also lies in employing computers in the organization. The
organization is equipped with enough computers, so updating is easy. Hence the organization
has no technical difficulty in adding this system.

Operational Feasibility

The system will be used; if it is developed well, resistance from users should be
minimal.
• No major training or new skills are required, as it is based on the DBMS
model.
• It will help in time saving and in the fast processing and dispersal of user
requests and applications.
• The new product will provide all the benefits of the present system with better
performance, improved information, better management and collection of reports,
and user support.
• User involvement in the building of the present system is sought, to keep in
mind user-specific requirements and needs.
• Users will have control over their own information. Important
information such as a pay-slip can be generated at the click of a button.
• Faster and more systematic processing of user applications, approval,
allocation of IDs, payments, etc. The old process had greater chances of error due to
wrong information entered by mistake.
Behavioral Feasibility

People are inherently resistant to change. In this type of feasibility check, we come to know
whether the newly developed system will be accepted by the working force, i.e., the people
who will use it.

Data Flow Diagram Overview


The DFD is an important tool used by system analysts. The main merit of a
DFD is that it can provide an overview of what data a system processes,
what transformations of data are done, what files are used, and where the
results flow. The graphical representation of the system makes it a good
communication tool between the user and the analyst. DFDs are structured in
such a way that, starting from a simple diagram which provides a broad
overview at a glance, they can be expanded into a hierarchy of diagrams
giving more and more detail.

• Square: source or destination of data (external or internal). As the name
suggests, it does not fall within the system boundary; hence it is defined as a
source or destination of data.

• Rounded rectangle/Circle: process. This is defined at a place where
transformation of data takes place; this transformation includes addition,
modification, deletion, or accumulation of data.

• Open-ended rectangle/parallel lines: data store. This symbolically represents
a place where data is stored. The data can be stored for future processing, or
it can be processed for future use; any place where data is stored is called a
data store.

Data flow can take place:

1. Between processes
2. From a file to a process
3. From an external entity to a process
4. From a process to an external entity
5. From a process to a file
Information Flow of Data for Testing

[Figure: Information flow of data for testing. The software configuration and test
configuration feed into testing; test results are compared with expected results during
evaluation; detected errors are passed to debugging, which produces corrections; the
error rate feeds a reliability model.]


PERT CHART

Program Evaluation and Review Technique (PERT) and Critical Path Method (CPM) are
project scheduling techniques that can be applied to software development. Both techniques
are driven by information already gathered in earlier project planning activities:

 Estimates of effort

 A decomposition of the product function

 The selection of the appropriate process model and task set

 Decomposition of tasks

Both PERT and CPM provide quantitative tools that allow the software planner to determine
the critical path - the chain of tasks that determines the duration of the project; establish
"most likely" time estimates for individual tasks by applying statistical models; and calculate
"boundary times" that define a time "window" for a particular task.

Both PERT and CPM have been implemented in a wide variety of automated tools that are
available for the personal computer. Such tools are easy to use and make the scheduling
methods described previously available to every software project manager.
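The "most likely" time estimate mentioned above is conventionally computed in PERT from three estimates per task - optimistic (O), most likely (M), and pessimistic (P) - as E = (O + 4M + P) / 6. A minimal sketch follows; the class name and sample values are illustrative only.

```csharp
using System;

class PertEstimate
{
    // Classic PERT three-point (beta) estimate: E = (O + 4M + P) / 6
    public static double ExpectedTime(double optimistic, double mostLikely,
                                      double pessimistic)
    {
        return (optimistic + 4 * mostLikely + pessimistic) / 6.0;
    }

    static void Main()
    {
        // Example task: 2 days (best case), 4 days (likely), 9 days (worst case)
        Console.WriteLine(ExpectedTime(2, 4, 9)); // 4.5
    }
}
```

Summing the expected times of the tasks along each path through the task network, and taking the longest such path, yields the critical path.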


GANTT CHART

When creating a software project schedule, the planner begins with a set of tasks (the work
breakdown structure). If automated tools are used, the work breakdown is input as a task
network or task outline. Effort, duration, and start date are then input for each task. In
addition, tasks may be assigned to specific individuals.

As a consequence of this input, a timeline chart, also called a Gantt chart, is generated. A
Gantt chart can be developed for the entire project; alternatively, separate charts can be
developed for each project function or individual. For example, a chart might depict the part
of a software project schedule that covers the concept scoping task for a new word-processing
software project. All project tasks (for concept scoping) are listed in the left-hand column.
The horizontal bars indicate the duration of each task; when bars occur at the same time on
the calendar, task concurrency is implied. The diamonds indicate milestones.

Once the information necessary for the generation of the Gantt chart has been input, the
majority of software project scheduling tools produce project tables: a tabular listing of all
project tasks, their planned and actual start and end dates, and a variety of related
information. Used in conjunction with the Gantt chart, project tables enable the project
manager to track progress.
Work Flow of Music Online

DFD

0-Level DFD

[Figure: Context-level DFD. The Administrator manages and creates the project; Users
search for and play music through the Music Online system, which performs music searches
against the Database and returns music information.]

1st-Level DFD

[Figure: First-level DFD for song searching, playing, uploading, and downloading. The user
applies search criteria in "Search For Music"; the system queries the Database and shows the
search results, from which the user can play music. Before a download or upload, the system
checks for Sign In / Log In; if the user is not signed in, they are directed to Sign Up,
otherwise the Download page is shown.]
Module Details of Music OnLine

Database Module:-

Table Entity Relationship of Music Online

Table Structures

Database Design

In the Music site about 6 database tables are used, such as:

Table Name: UserLogin
Description: used to store user login information.

Table Name: UserInfo
Description: used to store user information.

Table Name: Singer_Master
Description: used to store singer information.

Table Name: Music_Master
Description: used to store all music information.

Table Name: File_Master

Table Name: Actor_Master
Description: used to store actor information.
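The search pages shown later build SQL strings by concatenating user input directly into the query text. A safer pattern for querying a table such as Music_Master is a parameterized command; the sketch below follows the table and column names used in the project code (music_master, film_master, song_name, film_id, film_name), but the helper name is an assumption.

```csharp
using System;
using System.Data.SqlClient;

static class MusicQueries
{
    // Build a parameterized command for the Music_Master film search.
    // Unlike string concatenation, a parameter value cannot break out of
    // the SQL text, so injection through the search box is not possible.
    public static SqlCommand SongsByFilm(SqlConnection con, string filmName)
    {
        SqlCommand cmd = new SqlCommand(
            "select song_name from music_master " +
            "where film_id = (select film_id from film_master " +
            "where film_name = @film)", con);
        cmd.Parameters.AddWithValue("@film", filmName);
        return cmd;
    }
}
```

A SqlDataAdapter can be constructed from the returned command and used to fill a DataSet exactly as the existing bind() methods do.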
Output of Music Online Pages
Coding
DownLoad.aspx

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Collections.Specialized;
using Com.Xyz.UI;
using Com.Xyz.DataBase;
using System.Net;
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
if (!WebFactory.ValidateUser(Session.SessionID))
{
Button1.Text = "Sign In";
Label1.Text = "To DownLoad Music Please";
}
}
    protected void Button1_Click(object sender, EventArgs e)
    {
        Button b = (Button)sender;
        if (b.Text != "Sign In")
        {
            //string str = Request.QueryString["dfile"];
            //System.IO.FileInfo ff = new System.IO.FileInfo(str);
            string Name = "DO PAL RUKA(VEER ZARA)";
            string Format = "mpg";
            string strFileName = Name + "." + Format;

            //THIS CODE IS WRITTEN TO INCREMENT THE NO OF COUNT IN THE DATABASE
            //IncrementFileDownloadCount(RingtoneID);
            //CODE END
            string originalFilename = strFileName;
            string localfilename = Server.MapPath("~/songs") + "/" + strFileName;
            WebClient req = new WebClient();
            CredentialCache mycache = new CredentialCache();
            mycache.Add(new Uri(localfilename), "Basic",
                new NetworkCredential("administrator", "admin$123"));
            req.Credentials = mycache;
            HttpResponse response = HttpContext.Current.Response;
            response.Clear();
            response.ClearContent();
            response.ClearHeaders();
            response.Buffer = true;
            response.AddHeader("Content-Disposition",
                "attachment;filename=\"" + originalFilename + "\"");
            Response.AddHeader("Content-Type", "audio/mpeg");
            byte[] data = req.DownloadData(localfilename);
            response.BinaryWrite(data);
            response.End();
            return;
        }
        else
        {
            Response.Redirect("Login.aspx?" + MakeQuery);
        }
    }
    public string MakeQuery
    {
        get
        {
            NameValueCollection nv = Request.QueryString;
            string sq = "";
            foreach (string s in nv.AllKeys)
            {
                // Separator goes between pairs only; the original counter-based
                // logic could emit a stray trailing "& ".
                if (sq.Length > 0)
                {
                    sq += "&";
                }
                sq += s + "=" + nv[s];
            }
            return sq;
        }
    }
}
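MakeQuery copies raw query-string values into the redirect URL; a value that itself contains '&', '=', or spaces would still corrupt the rebuilt string. URL-encoding each part guards against this. A minimal sketch follows; the helper class and method names are illustrative, not part of the project.

```csharp
using System.Collections.Specialized;
using System.Web;

static class QueryStringHelper
{
    // Rebuild a query string from a key/value collection, URL-encoding
    // keys and values so embedded '&', '=', or spaces cannot corrupt it.
    public static string Rebuild(NameValueCollection nv)
    {
        string sq = "";
        foreach (string key in nv.AllKeys)
        {
            if (sq.Length > 0) sq += "&";
            sq += HttpUtility.UrlEncode(key) + "=" + HttpUtility.UrlEncode(nv[key]);
        }
        return sq;
    }
}
```

For example, a value of "a b" is emitted as "a+b", which ASP.NET decodes back to "a b" when the target page reads Request.QueryString.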

Editsection.aspx
using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using Com.Xyz.UI;
using System.Drawing;

public partial class Editsection : System.Web.UI.Page
{
    public void bind()
    {
        WebSession wb = WebFactory.getsessionobject(Session.SessionID);
        TextBox1.Text = wb.UserName;
        TextBox2.Text = wb.Password;
        TextBox4.Text = wb.Fname;
        TextBox5.Text = wb.Lname;
        TextBox6.Text = wb.Emailid;
    }
protected void Page_Load(object sender, EventArgs e)
{
//{
// if (!IsPostBack)
// {
// if (WebFactory.ValidateUser(Session.SessionID))
// {
// bind();
// }
// else
// {
// Response.Redirect("Signup.aspx");
// }
// }
}
protected void Button1_Click(object sender, EventArgs e)
{
TextBox1.ReadOnly = false;
TextBox1.BackColor = Color.White;
TextBox1.BorderStyle = BorderStyle.NotSet;
TextBox1.BorderWidth = 1;
TextBox2.ReadOnly = false;
TextBox3.ReadOnly = false;
TextBox2.BackColor = Color.White;
TextBox2.BorderStyle = BorderStyle.NotSet;
TextBox2.BorderWidth = 1;
TextBox3.BackColor = Color.White;
TextBox3.BorderStyle = BorderStyle.NotSet;
TextBox3.BorderWidth = 1;

}
    protected void Button4_Click(object sender, EventArgs e)
    {
        int i = Com.Xyz.DataBase.Connection.CreateConnection().ExecuteNonQuery(
            "update login set fname='" + TextBox4.Text.Trim() +
            "',lname='" + TextBox5.Text + "',email='" + TextBox6.Text +
            "' where uid='" + TextBox1.Text.Trim() + "'");
        if (i > 0)
        {
            Label9.Visible = true;
            Label9.Text = "Your record has been updated successfully";
        }
        else
        {
            Label9.Visible = true;
            Label9.Text = "Your record has not been updated";
        }
        TextBox4.ReadOnly = true;
        TextBox4.BackColor = Color.Azure;
        TextBox4.BorderStyle = BorderStyle.None;
        TextBox4.BorderWidth = 0;
        TextBox5.ReadOnly = true;
        TextBox5.BackColor = Color.Azure;
        TextBox5.BorderStyle = BorderStyle.None;
        TextBox5.BorderWidth = 0;
        TextBox6.ReadOnly = true;
        TextBox6.BackColor = Color.Azure;
        TextBox6.BorderStyle = BorderStyle.None;
        TextBox6.BorderWidth = 0;
    }
    protected void Button2_Click(object sender, EventArgs e)
    {
        int i = Com.Xyz.DataBase.Connection.CreateConnection().ExecuteNonQuery(
            "update login set pwd='" + TextBox2.Text.Trim() +
            "' where uid='" + TextBox1.Text.Trim() + "'");
        if (i > 0)
        {
            Label10.Visible = true;
            Label10.Text = "Your password has been changed successfully";
        }
        else
        {
            Label10.Visible = true;
            Label10.Text = "Your password has not been changed";
        }
        TextBox1.ReadOnly = true;
        TextBox1.BackColor = Color.Azure;
        TextBox1.BorderStyle = BorderStyle.None;
        TextBox1.BorderWidth = 0;
        TextBox2.ReadOnly = true;
        TextBox2.BackColor = Color.Azure;
        TextBox2.BorderStyle = BorderStyle.None;
        TextBox2.BorderWidth = 0;
        TextBox3.ReadOnly = true;
        TextBox3.BackColor = Color.Azure;
        TextBox3.BorderStyle = BorderStyle.None;
        TextBox3.BorderWidth = 0;
    }
protected void Button3_Click(object sender, EventArgs e)
{
TextBox4.ReadOnly = false;
TextBox4.BackColor = Color.White;
TextBox4.BorderStyle = BorderStyle.NotSet;
TextBox4.BorderWidth = 1;
TextBox5.ReadOnly = false;
TextBox5.BackColor = Color.White;
TextBox5.BorderStyle = BorderStyle.NotSet;
TextBox5.BorderWidth = 1;
TextBox6.ReadOnly = false;
TextBox6.BackColor = Color.White;
TextBox6.BorderStyle = BorderStyle.NotSet;
TextBox6.BorderWidth = 1;

}
}
Email.aspx

using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Web.Mail;

public partial class _Default : System.Web.UI.Page


{
protected void Page_Load(object sender, EventArgs e)
{

}
    protected void Button1_Click(object sender, EventArgs e)
    {
        string strTo = "ajitkumarjha85@gmail.Com";
        string strFrom = "webmaster@mylocal.com";
        string strSubject = "Hi Chris";

        SmtpMail.Send(strFrom, strTo, strSubject, "A real body text here");

        Response.Write("Email was queued to disk");
    }
protected void Image1_ServerClick(object sender,
ImageClickEventArgs e)
{
MailMessage mms = new MailMessage();
mms.To = to.Text;
mms.From = from.Text;
mms.Body = msg.Text;
mms.Subject = subject.Text;
        SmtpMail.Send(mms);
        Response.Write("Email has been sent.");
}
}
Login.aspx

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Collections.Specialized;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using Com.Xyz.UI;
public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Com.Xyz.UI.WebFactory.ValidateUser(Session.SessionID))
        {
            Response.Redirect("SearchForMusic.aspx");
        }
    }
    protected void ImageButton2_Click(object sender, ImageClickEventArgs e)
    {
        Response.Redirect("SignUp.aspx?" + MakeQuery);
    }
    public string MakeQuery
    {
        get
        {
            NameValueCollection nv = Request.QueryString;
            string sq = "";
            foreach (string s in nv.AllKeys)
            {
                // Separator goes between pairs only; the original counter-based
                // logic could emit a stray trailing "& ".
                if (sq.Length > 0)
                {
                    sq += "&";
                }
                sq += s + "=" + nv[s];
            }
            return sq;
        }
    }
    protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
    {
        WebFactory.BeginSession(Session.SessionID, TextBox1.Text, TextBox2.Text);
        if (WebFactory.ValidateUser(Session.SessionID))
        {
            if (Request.QueryString["dfile"] != null)
                Response.Redirect("download.aspx?" + MakeQuery);
            else
            {
                Response.Redirect("Home.aspx");
            }
        }
    }
}
Playmusic.aspx

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Collections.Generic;

public partial class Default4 : System.Web.UI.Page


{
    protected void Page_Load(object sender, EventArgs e)
    {
        //if (!Com.Xyz.UI.WebFactory.ValidateUser(Session.SessionID))
        //{
        if (Request.QueryString["hsong"] == null)
        {
            if (((List<string>)Session["PlayList"]).Count == 0)
            {
                Response.Redirect("SearchForMusic.aspx");
            }
            else
            {
                int i = 0;
                if (!IsPostBack)
                {
                    GridView1.DataSource = (List<string>)Session["PlayList"];
                    GridView1.DataBind();
                }
                if (ddd.Attributes["src"] != null)
                {
                    if (ViewState["p"] == null)
                    {
                        ViewState.Add("p", i);
                    }
                    else
                    {
                        i = (int)ViewState["p"];
                    }
                    ddd.Attributes["src"] = "Songs/" +
                        ((List<string>)Session["PlayList"])[i] + ".mp3";
                    GridView1.SelectedIndex = i;
                }
                else
                {
                    if (ViewState["p"] == null)
                    {
                        ViewState.Add("p", i);
                    }
                    else
                    {
                        i = (int)ViewState["p"];
                    }
                    ddd.Attributes.Add("src", "Songs/" +
                        ((List<string>)Session["PlayList"])[0] + ".mp3");
                    GridView1.SelectedIndex = i;
                }
            }
        }
        else
        {
            if (ddd.Attributes["src"] != null)
            {
                if (Request.QueryString["stype"] == "audio")
                {
                    ddd.Attributes["src"] = "Songs/" +
                        Request.QueryString["hsong"] + ".mp3";
                    ddd.Attributes["width"] = "400";
                    ddd.Attributes["height"] = "50";
                }
                else
                {
                    ddd.Attributes["src"] = "Songs/" +
                        Request.QueryString["hsong"] + ".mpg";
                    ddd.Attributes["width"] = "770";
                    ddd.Attributes["height"] = "400";
                }
            }
        }
        //}
        //else
        //{
        //    if (Request.QueryString["song"] != null)
        //    {
        //        if (ddd.Attributes["src"] != null)
        //            ddd.Attributes["src"] = "Songs/" + Request.QueryString["song"] + ".mp3";
        //        else
        //            ddd.Attributes.Add("src", "Songs/" + Request.QueryString["song"] + ".mp3");
        //    }
        //}
    }
    public void play()
    {
        int i = (int)ViewState["p"];
        ddd.Attributes["src"] = "Songs/" +
            ((List<string>)Session["PlayList"])[i] + ".mp3";
        GridView1.SelectedIndex = i;
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
    }

    protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)
    {
        if (e.CommandName == "play")
        {
            ViewState["p"] = Convert.ToInt32(e.CommandArgument);
            play();
            //Response.Redirect("PlayMusic.aspx?song=" +
            //    GridView1.Rows[Convert.ToInt32(e.CommandArgument)].Cells[0].Text);
        }
    }
}
SearchForMusic.aspx

using System;
using System.Data.SqlClient;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Collections.Generic;

public partial class Default7 : System.Web.UI.Page


{
    public void bind(string str)
    {
        SqlDataAdapter da = new SqlDataAdapter(str,
            "initial catalog=music;user id=sa;password=sa;data source=.");
        da.Fill(ds);
        GridView1.DataSource = ds;
        GridView1.DataBind();
    }
DataSet ds = new DataSet();
    protected void Page_Load(object sender, EventArgs e)
    {
        //if (!Com.Xyz.UI.WebFactory.ValidateUser(Session.SessionID))
        //{
        //    Response.Redirect("Login.aspx");
        //}
    }

    protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
    {
        DataTable dt = new DataTable();

        DataColumn dc1 = new DataColumn("song_name", typeof(string));
        dt.Columns.Add(dc1);

        string s = TextBox1.Text;
        string search = DropDownList1.SelectedItem.ToString();
        bool issolo = RadioButton1.Checked;
        bool isdual = RadioButton2.Checked; // the original read RadioButton1 twice,
                                            // which looks like a copy-paste slip
        if (search == "Films")
        {
            string Query = "select song_name from music_master " +
                "where film_id=(select film_id from film_master " +
                "where film_name='" + s + "')" + " and dual=";
            Query += (isdual == true) ? "1" : "0";
            bind(Query);
        }
        else if (search == "Actor")
        {
            string Query = "select song_name from music_master " +
                "where actor_id=(select actor_id from actor_master " +
                "where actor_name='" + DropDownList2.SelectedItem.ToString() + "')";
            bind(Query);
        }
        else if (search == "Singer")
        {
            string Query = "select song_name from music_master " +
                "where singer_id=(select singer_id from singer_master " +
                "where singer_name='" + DropDownList2.SelectedItem + "')";
            bind(Query);
        }
        else
        {
            //string ss = s.Substring(0, (s.IndexOf(" ") == -1) ? s.Length : s.IndexOf(" "));
            //string Query = "select song_name from music_master where song_name like '%" + ss + "%'";
            //bind(Query);
            string st12 = TextBox1.Text;
            DataSet ds4 = Com.Xyz.DataBase.Connection.CreateConnection().GetDataSet(
                "select song_name from music_master");

for (int i = 0; i < ds4.Tables[0].Rows.Count; i++)


{
string str11 = ds4.Tables[0].Rows[i]
[0].ToString();
if (str11.Contains(st12))
{

DataRow dr = dt.NewRow();
dr[0] = str11;
dt.Rows.Add(dr);
GridView1.DataSource = dt;
GridView1.DataBind();

}
    public void CheckAdd(object sender, EventArgs e)
    {
        CheckBox cb = (CheckBox)sender;
        foreach (GridViewRow gvr in GridView1.Rows)
        {
            if (cb.Equals(gvr.Cells[0].Controls[1]) && cb.Checked == true)
            {
                if (!((List<string>)Session["PlayList"]).Contains(gvr.Cells[1].Text))
                    ((List<string>)Session["PlayList"]).Add(gvr.Cells[1].Text);
            }
            else if (cb.Equals(gvr.Cells[0].Controls[1]) && cb.Checked == false)
            {
                if (((List<string>)Session["PlayList"]).Contains(gvr.Cells[1].Text))
                    ((List<string>)Session["PlayList"]).Remove(gvr.Cells[1].Text);
            }
        }
    }
    protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)
    {
        if (e.CommandName == "play")
        {
            if (Session["PlayList"] != null)
            {
                // Add the song to the playlist unless its checkbox already did so.
                if (((CheckBox)GridView1.Rows[Convert.ToInt32(e.CommandArgument)].Cells[0].Controls[1]).Checked == false)
                {
                    ((List<string>)Session["PlayList"]).Add(GridView1.Rows[Convert.ToInt32(e.CommandArgument)].Cells[1].Text);
                }
                Response.Redirect("PlayMusic.aspx");
            }
        }
        else if (e.CommandName == "Download")
        {
            if (Com.Xyz.UI.WebFactory.ValidateUser(Session.SessionID))
            {
                if (Session["Download"] != null)
                {
                    string str = Server.MapPath("~/Songs/" + GridView1.Rows[Convert.ToInt32(e.CommandArgument)].Cells[1].Text + ".mp3");
                    ((List<string>)Session["Download"]).Add(str);
                    Response.Redirect("DownLoad.aspx");
                }
            }
            else
            {
                Response.Redirect("Login.aspx");
            }
        }
    }
    protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        DataControlRowType dtype = e.Row.RowType;
        if (!(dtype == DataControlRowType.Header || dtype == DataControlRowType.Footer || dtype == DataControlRowType.Pager))
        {
            ImageButton b = (ImageButton)e.Row.Cells[2].Controls[0];
            b.Attributes.Add("onmouseover", "this.src='images/playover.JPG'");
            b.Attributes.Add("onmouseout", "this.src='images/play.JPG'");
            e.Row.Cells[1].Attributes.Add("onmouseover", "this.style.zoom='120%'");
            e.Row.Cells[1].Attributes.Add("onmouseout", "this.style.zoom='normal'");
        }
    }
    protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (DropDownList1.SelectedItem.ToString() == "Actor")
        {
            TextBox1.Visible = false;
            DropDownList2.Visible = true;
            string str2 = "select * from actor_master";
            DataSet ds3 = Com.Xyz.DataBase.Connection.CreateConnection().GetDataSet(str2);
            DropDownList2.DataSource = ds3;
            DropDownList2.DataTextField = "actor_name";
            DropDownList2.DataValueField = "actor_id";
            DropDownList2.DataBind();
            DropDownList2.Items.Insert(0, "--Choose Actor--");
        }

        if (DropDownList1.SelectedItem.ToString() == "Singer")
        {
            TextBox1.Visible = false;
            DropDownList2.Visible = true;
            string str2 = "select * from singer_master";
            DataSet ds3 = Com.Xyz.DataBase.Connection.CreateConnection().GetDataSet(str2);
            DropDownList2.DataSource = ds3;
            DropDownList2.DataTextField = "singer_name";
            DropDownList2.DataValueField = "singer_id";
            DropDownList2.DataBind();
            DropDownList2.Items.Insert(0, "--Choose Singer--");
        }
    }
}
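The handlers above splice TextBox input directly into the SQL text, which breaks on names containing a quote and is open to SQL injection. A sketch of the film search rewritten with parameters — the connection string and table names are taken from the listing, but `SongSearch.GetSongsForFilm` is an illustrative helper, not part of the project:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch only: the same film search as ImageButton1_Click, but with typed
// parameters so a film name containing a quote cannot alter the statement.
public static class SongSearch
{
    public static DataTable GetSongsForFilm(string filmName, bool isDual)
    {
        const string query =
            "select song_name from music_master " +
            "where film_id = (select film_id from film_master where film_name = @film) " +
            "and dual = @dual";

        DataTable dt = new DataTable();
        using (SqlConnection con = new SqlConnection(
                   "initial catalog=music;user id=sa;password=sa;data source=."))
        using (SqlDataAdapter da = new SqlDataAdapter(query, con))
        {
            // Values travel as parameters, never spliced into the SQL text.
            da.SelectCommand.Parameters.AddWithValue("@film", filmName);
            da.SelectCommand.Parameters.AddWithValue("@dual", isDual ? 1 : 0);
            da.Fill(dt);
        }
        return dt;
    }
}
```

The returned DataTable can then be bound to the GridView exactly as bind() does.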
SignUp.aspx

using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Collections.Specialized;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using Com.Xyz.UI;
using Com.Xyz.DataBase;

public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
if
(Com.Xyz.UI.WebFactory.ValidateUser(Session.SessionID))
{
Response.Redirect("Home.aspx");
}
}
protected void TextBox1_TextChanged(object sender,
EventArgs e)
{

}
    public string MakeQuery
    {
        get
        {
            NameValueCollection nv = Request.QueryString;
            string sq = "";
            foreach (string s in nv.AllKeys)
            {
                if (sq.Length > 0)
                {
                    sq += "&"; // separator only between pairs, with no stray space
                }
                sq += s + "=" + nv[s];
            }
            return sq;
        }
    }
    protected void Button1_Click(object sender, EventArgs e)
    {
        Connection con = Com.Xyz.DataBase.ConnectionPool.GetConnection();
        int i = con.ExecuteNonQuery("insert into userinfo values('" + t1.Text + "','" + TextBox1.Text + "','" + TextBox3.Text + "','" + TextBox4.Text + "','" + TextBox5.Text + "','" + TextBox2.Text + "')");
        if (i > 0)
        {
            if (Request.QueryString["dfile"] != null)
            {
                Response.Redirect("LOGIN.ASPX?" + MakeQuery);
            }
        }
        else
        {
            Response.Redirect("Signup.aspx?" + MakeQuery);
        }
    }

    protected void Button2_Click(object sender, EventArgs e)
    {
    }
}
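The MakeQuery property above rebuilds the incoming query string by hand. A standalone sketch of the same idea (the `QueryStringHelper` name is illustrative) that inserts the separator only between pairs and URL-encodes keys and values with Uri.EscapeDataString, so values containing spaces or "&" survive the round trip:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Illustrative helper: rebuilds a query string from key/value pairs,
// adding "&" only between entries and URL-encoding every key and value.
public static class QueryStringHelper
{
    public static string Build(IEnumerable<KeyValuePair<string, string>> pairs)
    {
        StringBuilder sb = new StringBuilder();
        foreach (KeyValuePair<string, string> p in pairs)
        {
            if (sb.Length > 0)
                sb.Append('&');                      // separator between entries only
            sb.Append(Uri.EscapeDataString(p.Key));
            sb.Append('=');
            sb.Append(Uri.EscapeDataString(p.Value));
        }
        return sb.ToString();
    }
}
```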
UploadMusic.aspx

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;

public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{

}
    protected void Button1_Click(object sender, EventArgs e)
    {
        bool fileOK = false;
        String path = Server.MapPath("~/uploadedsongs/");
        if (FileUpload1.HasFile)
        {
            String fileExtension = System.IO.Path.GetExtension(FileUpload1.FileName).ToLower();
            string[] allowedExtensions = { ".gif", ".png", ".mp3", ".mpg", ".mpeg" };
            for (int i = 0; i < allowedExtensions.Length; i++)
            {
                if (fileExtension == allowedExtensions[i])
                {
                    fileOK = true;
                }
            }
        }

        if (fileOK)
        {
            try
            {
                FileUpload1.PostedFile.SaveAs(path + FileUpload1.FileName);
                Label1.Text = "File uploaded!";
            }
            catch (Exception)
            {
                Label1.Text = "File could not be uploaded.";
            }
        }
        else
        {
            Label1.Text = "Cannot accept files of this type.";
        }
    }
}
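Button1_Click saves the file under the name supplied by the browser. A hedged sketch (the `UploadHelper.SafeFileName` name is illustrative, not part of the project) that strips any path portion from the client-supplied name and re-applies the same extension whitelist before saving:

```csharp
using System.IO;

// Illustrative sketch: normalize a client-supplied upload name before saving.
// Path.GetFileName drops any directory part (e.g. "../evil.mp3" -> "evil.mp3"),
// and the extension is checked against the same whitelist as the listing.
public static class UploadHelper
{
    static readonly string[] AllowedExtensions = { ".gif", ".png", ".mp3", ".mpg", ".mpeg" };

    public static string SafeFileName(string clientFileName)
    {
        string name = Path.GetFileName(clientFileName);      // drop any path prefix
        string ext = Path.GetExtension(name).ToLowerInvariant();
        foreach (string allowed in AllowedExtensions)
        {
            if (ext == allowed)
                return name;
        }
        return null;                                         // caller rejects the upload
    }
}
```

A null return means the upload should be rejected, mirroring the fileOK flag above.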
Web.Config

<?xml version="1.0"?>
<!-- Note: As an alternative to hand editing this file you can
use the
web admin tool to configure settings for your application.
Use
the Website->Asp.Net Configuration option in Visual Studio.
A full list of settings and comments can be found in
machine.config.comments usually located in
\Windows\Microsoft.Net\Framework\v2.x\Config
-->
<configuration>
<appSettings>
<add key="dbname" value="sqlserver"/>
<add key="maxconnection" value="5"/>
<add key="connectionstring" value="initial catalog=music;data source=.;uid=sa;pwd=sa;"/>
<add key="username" value="ajit"/>
<add key="password" value="jha"/>
<add key="role" value="admin"/>
</appSettings>
<connectionStrings/>
<system.web>
<httpRuntime maxRequestLength="51200"/>
<webServices>

</webServices>
<!-- Set compilation debug="true" to insert debugging
symbols into the compiled page. Because this
affects performance, set this value to true only
during development. -->
<compilation debug="true"/>
<!--
The <authentication> section enables configuration
of the security authentication mode used by
ASP.NET to identify an incoming user.
-->
<authentication mode="Windows"/>
<!--
The <customErrors> section enables configuration
of what to do if/when an unhandled error occurs
during the execution of a request. Specifically,
it enables developers to configure html error pages
to be displayed in place of a error stack trace.

<customErrors mode="RemoteOnly"
defaultRedirect="GenericErrorPage.htm">
<error statusCode="403" redirect="NoAccess.htm" />
<error statusCode="404" redirect="FileNotFound.htm"
/>
</customErrors>
-->
</system.web>
<system.net>
<mailSettings>
<smtp from="hodroom">
<network host="." password="" userName="" />
</smtp>
</mailSettings>
</system.net>
</configuration>
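The appSettings block above already stores the connection string, yet pages such as the search page hard-code it again. A sketch of reading those keys through ConfigurationManager — key names are taken from the Web.Config above; a reference to System.Configuration is required:

```csharp
using System.Configuration;

// Sketch: read the keys defined in <appSettings> instead of hard-coding
// the connection string in each page (as bind() currently does).
public static class AppConfig
{
    public static string ConnectionString
    {
        get { return ConfigurationManager.AppSettings["connectionstring"]; }
    }

    public static int MaxConnections
    {
        get { return int.Parse(ConfigurationManager.AppSettings["maxconnection"]); }
    }
}
```

Changing the database server then means editing one line of Web.Config rather than every page.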
MAINTENANCE

Maintenance of the project is very easy due to its modular design and concept; any modification can be made very easily. All the data are stored in the software as per user need, and if the user wants to change something he has to change only that particular data, as the change will be reflected everywhere in the software. Some of the maintenance applied is: -

(1) BREAKDOWN MAINTENANCE: -

This maintenance is applied when an error occurs, the system halts, and further processing cannot be done. At this time the user can view the documentation or consult us for rectification, and we will analyze and change the code if needed. Example: if the user gets the error “report width is larger than paper size” while printing a report and reports cannot be generated, then viewing the help documentation and changing the paper size of the default printer to ‘A4’ will rectify the problem.

(2) PREVENTIVE MAINTENANCE: -

The user performs this maintenance at regular intervals for the smooth functioning (operation) of the software, as per the procedure and steps mentioned in the manual.
Some reasons for maintenance are: -
(a) Error correction: - Errors which were not caught during testing surface after the system has been implemented. Rectification of such errors is called corrective maintenance.
(b) New or changed requirements: - When organizational requirements change due to changing opportunities.
(c) Improved performance or maintenance requirements: - Changes that are made to improve system performance, or to make the system easier to maintain in the future, are called preventive maintenance.
(d) Advances in technology (adaptive maintenance): - Adaptive maintenance includes all the changes made to a system in order to introduce a new technology.
SECURITY MEASURES

The security measures imposed in the software are: -

• A login password is provided in the software. The user must log in to activate the application.
• The user cannot change the password; to change it he must contact the administrator.
• Usernames and passwords are managed through SQL Server 2000. If it is installed on NT 4.0 the system is highly secure; if it is installed on Windows 98 it runs in a degraded mode.
• Data security, correctness and integrity are checked before any save, update or delete; if errors are found, the procedure is aborted.
• Primary key and foreign key constraints are implemented to avoid incorrect data entry and the intentional or accidental deletion or modification of data.
• When the user tries to delete data, the system first checks whether the data is referenced by other data; if it is, the deletion is aborted.
• Various securities are also provided at the user level and on forms.
• Security on the LAN is provided with the help of the status of the user.
Future Scope
On the basis of the work done in the dissertation entitled “Music Online”, the following
conclusions emerge from the development.

1. This project has achieved the objective of replacing/augmenting the conventional system of arranging manpower as could be conducted by a typical telecom dept.
2. The development of this package has been achieved by using C#.NET, which is very conducive to developing the package with regard to time and the specific needs of the user.
3. This package is highly user friendly, requiring minimal input from the user while providing highly relevant and focused outputs.
4. It is fully automated, avoiding human intervention, and hence provides a very rapid, cost-effective alternative to the conventional manual operations/procedures; the visual outputs are more reliable than the audio forms of manual communication.
5. The system can be further extended, as per user and administrative requirements, to encompass other aspects of connection management for the telecom dept.

LIMITATIONS: -

 This project does not edit the date of connection or store the date of transfer in case of connection transfer.
 The system date is like a backbone for the proposed system; the system depends on it, so it must be correct.
 The system cannot be connected to the Internet.
 There are some inherent constraints, such as time and finance, on elaborating the study further.
Glossary of My Project

Access
Microsoft Access is an entry-level database management software from
Microsoft, which allows you to organize, access, and share information easily.
Access is very user-friendly and easy to use for inexperienced users, while
sophisticated enough for database and software developers.

ACID
ACID is short for Atomicity – Consistency – Isolation – Durability and describes
the four properties of an enterprise-level transaction:

• ATOMICITY: a transaction should be done or undone completely. In the


event of an error or failure, all data manipulations should be undone, and
all data should rollback to its previous state.
• CONSISTENCY: a transaction should transform a system from one
consistent state to another consistent state.
• ISOLATION: each transaction should happen independently of other
transactions occurring at the same time.
• DURABILITY: Completed transactions should remain stable/permanent,
even during system failure.
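In ADO.NET these four properties are obtained by grouping statements in a SqlTransaction. A hedged sketch — the accounts table here is illustrative, not part of the project schema:

```csharp
using System.Data.SqlClient;

// Sketch: two UPDATEs committed as one atomic unit of work.
// The "accounts" table is illustrative only.
public static class TransactionExample
{
    public static void TransferAtomically(string connectionString)
    {
        using (SqlConnection con = new SqlConnection(connectionString))
        {
            con.Open();
            SqlTransaction tx = con.BeginTransaction();
            try
            {
                new SqlCommand("update accounts set balance = balance - 10 where id = 1", con, tx).ExecuteNonQuery();
                new SqlCommand("update accounts set balance = balance + 10 where id = 2", con, tx).ExecuteNonQuery();
                tx.Commit();   // durability: both changes are kept together
            }
            catch
            {
                tx.Rollback(); // atomicity: neither change is kept
                throw;
            }
        }
    }
}
```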

ADO
Short for Microsoft ActiveX Data Objects. ADO enables your client applications
to access and manage data from a range of sources through an OLE DB
provider. ADO is built on top of OLE DB and its main benefits are ease of use,
high speed, and low memory overhead. ADO makes the task of building
complex database enabled client/server and web applications a breeze.

Column
Database tables are made of different columns (fields) corresponding to the
attributes of the object described by the table.

COMMIT
The COMMIT command in SQL marks the finalization of a database
transaction.

Cursor
Short for Current Set Of Records in some database languages. The cursor is a
database object pointing to a currently selected set of records.

Data
Piece of information collected and formatted in a specific way. The term data is
frequently used to describe binary (machine-readable) information.

Database
A database is a collection of information organized into related tables of data
and definitions of data objects. The data within a database can be easily
accessed and manipulated through a computer program.

DB2
DB2 is a relational database management system developed by IBM. DB2
runs on a variety of platforms including Sun Solaris, Linux and Windows.

Field
See Column definition

First Normal Form


See Normalization definition

Flat File
Flat file is a data file that has no structured relationships between its records.

Foreign Key
A foreign key is a key field (column) that identifies records in a table, by
matching a primary key in a different table. The foreign keys are used to cross-
reference tables.
Index
An index is a database feature (a list of keys or keywords), allowing searching
and locating data quickly within a table. Indexes are created for frequently
searched attributes (table columns) in order to optimize the database
performance.

INSERT
The INSERT is a SQL command used to add a new record to a table within a
database.

Isolation
See ACID definition

JOIN
The JOIN is a SQL command used to retrieve data from 2 or more database
tables with existing relationship based upon a common attribute.

Key
See Primary Key and Foreign Key definitions

Lock
Locks are used by Database management systems to facilitate concurrency
control. Locks enable different users to access different records/tables within
the same database without interfering with one another. Locking mechanisms
can be enforced at the record or table levels.

MySQL
MySQL is an open source relational database management system. MySQL
can be used on various platforms including UNIX, Linux and Windows (there
are OLE DB and ODBC providers as well as .NET native provider for MySQL).
MySQL is widely used as a backend database for Web applications and is a
viable and cheaper alternative to enterprise database systems like MS SQL
Server and Oracle.

Normalization
Normalization is the process of organizing data to minimize redundancy and
remove ambiguity. Normalization involves separating a database into tables
and defining relationships between the tables. There are three main stages of
normalization called normal forms. Each one of those stages increases the
level of normalization.

NULL
The NULL SQL keyword is used to represent a missing value.

ODBC
Short for Open DataBase Connectivity, a standard database access
technology developed by Microsoft Corporation. The purpose of ODBC is to
allow accessing any DBMS (DataBase Management System) from any
application (as long as the application and the database are ODBC compliant),
regardless of which DBMS is managing the data. ODBC achieves this by using
a middle layer, called a database driver, between an application and the
DBMS. The purpose of this layer is to transform the application's data queries
into commands that the DBMS understands. As we said earlier, both the
application and the DBMS must be ODBC compliant meaning, the application
must be capable of sending ODBC commands and the DBMS must be capable
of responding back to them.

PostgreSQL
PostgreSQL is an object-oriented open source relational database
management system, which uses a subset of SQL language.

Primary Key
The primary key of a relational table holds a unique value, which identifies
each record in the table. It can either be a normal field (column) that is
guaranteed to be unique or it can be generated by the database system itself
(GUID or Identity field in MS SQL Server for example). Primary keys may be
composed of more than 1 field (column) in a table.

Query
Queries are the main way to make a request for information from a database.
Queries consist of questions presented to the database in a predefined format,
in most cases SQL (Structured Query Language) format.

Record
The record is a complete set of information presented within a RDBMS.
Records are composed of different fields (columns) in a table and each record
is represented with a separate row in this table.

ROLLBACK
The ROLLBACK is a SQL command which cancels/undoes the proposed
changes in a pending database transaction and marks the end of the
transaction.

Row
See Record definition

Second Normal Form


See Normalization definition

SELECT
The SELECT is a SQL command, which is the primary means for retrieving
data from a RDBMS.

SQL
SQL is short for Structured Query Language and is an industry standard
language used for manipulation of data in a RDBMS. There are several
different dialects of SQL like, ANSI SQL, T-SQL, etc.

Stored Procedure
Stored Procedure is a set of SQL statements stored within a database server
and is executed as single entity. Using stored procedures has several
advantages over using inline SQL statements, like improved performance and
separation of the application logic layer from database layer in n-tier
applications.
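Calling a stored procedure from ADO.NET only requires setting CommandType.StoredProcedure. A hedged sketch — the procedure name `usp_GetSongsByFilm` is illustrative, not part of the project:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: executing a stored procedure instead of inline SQL text.
// "usp_GetSongsByFilm" is an illustrative procedure name.
public static class StoredProcExample
{
    public static DataTable GetSongsByFilm(string connectionString, string filmName)
    {
        DataTable dt = new DataTable();
        using (SqlConnection con = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("usp_GetSongsByFilm", con))
        {
            cmd.CommandType = CommandType.StoredProcedure;   // name the procedure, not SQL text
            cmd.Parameters.AddWithValue("@film_name", filmName);
            new SqlDataAdapter(cmd).Fill(dt);
        }
        return dt;
    }
}
```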

Table
A Table in RDBMS refers to data arranged in rows and columns, which defines
a database entity.

Third Normal Form


See Normalization definition

Transaction
Transaction is a group of SQL database commands regarded and executed as
a single atomic entity.

UPDATE
The UPDATE is a SQL command used to edit/update existing records in a
database table.

.Net Framework Glossary

A1—The bundle of Microsoft antivirus and antispyware development lines.

Abstract IL (ILX)—A toolkit for accessing the contents of .NET Common IL binaries.
Among its features, it lets you transform the binaries into structured abstract syntax
trees that can be manipulated.

Acceleration Server 2000—See Internet Security and Acceleration Server 2000.

Access modifiers—Language keywords used to specify the visibility of the methods and
member variables declared within a class. The five access modifiers in the C# language
are public, private, protected, internal, and protected internal.

Acrylic— Codename for an innovative illustration, painting and graphics tool that
provides creative capabilities for designers working in print, web, video, and interactive
media.

Active Server Pages (ASP)—A Microsoft technology for creating server-side, Web-based
application services. ASP applications are typically written using a scripting language,
such as JScript, VBScript, or PerlScript. ASP first appeared as part of Internet Information
Server 2.0 and was code-named Denali.

B2B—Business-to-Business. The exchange of information between business entities.

B2C—Business-to-Consumer. The exchange of information between business and


consumer (i.e., customer) entities.

BackOffice Server 2000—A suite of Microsoft server applications used for B2B and
B2C services. Included in this suite are Windows 2000 Server, Exchange Server 2000,
SQL Server 2000, Internet Security and Acceleration Server 2000, Host Integration
Server 2000, and Systems Management Server 2.0. These server applications are now
referred to as the .NET Enterprise Server product family.

Base class—The parent class of a derived class. Classes may be used to create other
classes. A class that is used to create (or derive) another class is called the base class or
super class. See Derived Class, Inheritance.

Behave!—A project for building tools that check properties such as deadlock freedom,
invariants, and message-understood properties of asynchronous, message-passing
programs.
BizTalk Server 2000—A set of Microsoft Server applications that allow the integration,
automation, and management of different applications and data within and between
business organizations. BizTalk Server is a key B2B component of the .NET Enterprise
Server product family.

Boxing—Conversion of a value type to a reference type object (i.e. System.Object).


Value types are stored in stack memory and must be converted (i.e., boxed) to a new
object in heap memory before they can be manipulated as objects. The methods,
functions, and events of the new object are invoked to perform operations on the value
(e.g., converting an integer to a string). Boxing is implicitly performed by the CLR at
runtime. See Unboxing.
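A minimal C# illustration of the round trip:

```csharp
// Minimal boxing/unboxing demonstration.
public static class BoxingDemo
{
    public static int RoundTrip(int value)
    {
        object boxed = value;     // boxing: the int is copied into a heap object
        int unboxed = (int)boxed; // unboxing: an explicit cast copies it back out
        return unboxed;
    }
}
```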

Built-in Types—See Pre-defined types.

Cω (C-Omega)—An experimental programming language — actually an extension to C#


— that focuses on distributed asynchronous concurrency and XML manipulation. This is a
combination of research projects that were formally known as polymorphic C# and Xen
(and X#).

C# (C-Sharp)—An object-oriented and type-safe programming language supported by


Microsoft for use with the .NET Framework. C# (pronounced "see-sharp") was created
specifically for building enterprise-scale applications using the .NET Framework. It is
similar in syntax to both C++ and Java and is considered by Microsoft as the natural
evolution of the C and C++ languages. C# was created by Anders Hejlsberg (author of
Turbo Pascal and architect of Delphi), Scott Wiltamuth, and Peter Golde. C# is defined by
the standard ECMA-334.

Callback Method—A method used to return the results of an asynchronous processing


call. Typically, methods are called in a synchronous fashion, where the call does not
return until the results (i.e., the output or return value) of the call are available. An
asynchronous method call returns prior to the results, and then sometime later a callback
method is called to return the actual results. The callback method itself contains program
statements that are executed in response to the reception of the results. Also referred to
as a callback function under the Win32 API. See Event.
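A minimal sketch of the callback pattern — names are illustrative, and the "work" is kept synchronous so the flow is easy to follow; a real asynchronous call would run it on another thread:

```csharp
// Minimal callback demonstration: the caller passes a method that is
// invoked with the result once the work is done.
public static class DownloadSimulator
{
    public delegate void DownloadCompleted(string fileName, int bytes);

    public static void Start(string fileName, DownloadCompleted callback)
    {
        int bytes = fileName.Length * 1024;   // stand-in for real work
        callback(fileName, bytes);            // hand the result back via the callback
    }
}
```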
Casting—Conversion of a value from one type to another. Implicit casting is performed
silently by the compiler when the casting would not cause any information to be lost
(e.g., converting a 16-bit integer to a 32-bit integer value). Explicit casting is coded by
the programmer using the particular language's cast operator. This is necessary when the
use of a value would cause a possible loss of data (e.g., converting a 32-bit integer to a
16-bit integer value).
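Both kinds of cast in a few lines of C#:

```csharp
// Implicit vs. explicit casting.
public static class CastingDemo
{
    public static long Widen(int value)
    {
        return value;            // implicit cast: no data can be lost (32 -> 64 bits)
    }

    public static short Narrow(int value)
    {
        return (short)value;     // explicit cast: the high bits may be discarded
    }
}
```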

Data provider—A set of classes in the .NET Framework that allow access to the
information in a data source. The data may be located in a file, in the Windows registry, or
in any type of database server or network resource. A .NET data provider also allows
information in a data source to be accessed as an ADO.NET DataSet. Programmers may
also author their own data providers for use with the .NET Framework. See Managed
providers.

DCOM (Distributed Component Object Model)—An extension of the Microsoft Component


Object Model (COM) that allows COM components to communicate across network
boundaries. Traditional COM components can only perform interprocess communication
across process boundaries on the same machine. DCOM uses the Remote Procedure Call
(RPC) mechanism to transparently send and receive information between COM
components (i.e., clients and servers) on the same network. DCOM was first made
available in 1996 with the initial release of Windows NT 4.

Delegate—A mechanism used to implement event handling in .NET Framework code. A


class that needs to raise events must define one delegate per event. Types that use the
class must implement one event handler method per event that must be processed.
Delegates are often described as a managed version of a C++ function pointer. However,
delegates can reference both instance and static (also called shared) methods, while
function pointers can only reference static methods.
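A minimal delegate/event pair in the style of this project — the class and event names are illustrative:

```csharp
// One delegate type per event; handlers attach with += and run when raised.
public class SongPlayer
{
    public delegate void SongStartedHandler(string songName);
    public event SongStartedHandler SongStarted;

    public void Play(string songName)
    {
        if (SongStarted != null)
            SongStarted(songName);   // raise the event; all attached handlers run
    }
}
```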

Deployment—The process of installing an application, service, or content on to one or


more computer systems. In .NET, deployment is performed using XCOPY or the Windows
Installer. More complex deployment applications, such as System Management Server,
can also be used. See Installer Tool.

ECMA (European Computer Manufacturers Association)—The ECMA (known since 1994 as
ECMA International) is an industry association founded in 1961 and dedicated to the
standardization of information and communication systems. The C# and CLI specifications
were ratified by the ECMA on December 13, 2001 as international standards and
assigned the ECMA standards designations ECMA-334 (C#) and ECMA-335
(CLI), plus Technical Report TR-84. These standards are available at www.ecma.ch.

Enterprise Instrumentation Framework (EIF)—A feature that expands the program


execution tracing capabilities found in the initial release of the .NET Framework. EIF
allows the use of configurable event filtering and tracing by integrating .NET applications
with the event log and tracing services built into the Windows operating system.
Warnings, errors, business events, and diagnostic information can be monitored and
reported for immediate, runtime analysis by developers, or collected and stored for later
use by technical support personnel. Support for EIF will be included in the next release of
Visual Studio.NET.

Event—A notification by a program or operating system that "something has happened."


An event may be fired (or raised) in response to the occurrence of a pre-defined action
(e.g., a window getting focus, a user clicking a button, a timer indicating a specific
interval of time has passed, or a program starting up or shutting down). In response to
an event, an event handler is called.

Fields—Same as member variables.

Finalize—A class-only method that is automatically called when an object is destroyed by


the garbage collector. The Finalize method is primarily used to free up unmanaged
resources allocated by the object before the object itself is removed from memory. A
Finalize method is not needed when only managed resources are used by the object,
which are automatically freed by the garbage collector. In C#, when a destructor is
defined in a class it is mapped to a Finalize method. Also called a finalizer. See
Dispose.

Finally block—A block of program statements that will be executed regardless if an


exception is thrown or not. A finally block is typically associated with a try/catch block
(although a catch block need not be present to use a finally block). This is useful for
operations that must be performed regardless if an exception was thrown or not (e.g.,
closing a file, writing to a database, deallocating unmanaged memory, etc).
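A small C# illustration; the finally block runs on both the normal and the exceptional path:

```csharp
using System;
using System.Collections.Generic;

// Demonstrates that finally executes whether or not an exception is thrown.
public static class FinallyDemo
{
    public static List<string> Log = new List<string>();

    public static void Run(bool fail)
    {
        try
        {
            if (fail) throw new InvalidOperationException("boom");
            Log.Add("work done");
        }
        catch (InvalidOperationException)
        {
            Log.Add("handled");
        }
        finally
        {
            Log.Add("cleanup");   // always runs — e.g. closing a file
        }
    }
}
```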

Garbage Collection (GC)—The process of implicitly reclaiming unused memory by the


CLR. Stack values are collected when the stack frame they are declared within ends (e.g.,
when a method returns). Heap objects are collected sometime after the final reference to
them is destroyed.

GDI (Graphics Device Interface)—A Win32 API that provides Windows applications the
ability to access graphical device drivers for displaying 2D graphics and formatted text on
both the video and printer output devices. GDI (pronounced "gee dee eye") is found on
all versions of Windows. See GDI+.

GDI+ (Graphics Device Interface Plus)—The next generation graphics subsystem for
Windows. GDI+ (pronounced "gee dee eye plus") provides a set of APIs for rendering 2D
graphics, images, and text, and adds new features and an improved programming model
not found in its predecessor GDI. GDI+ is found natively in Windows XP and the Windows
Server 2003 family, and as a separate installation for Windows 2000, NT, 98, and ME.
GDI+ is currently the only drawing API used by the .NET Framework.

Hash Code—A unique number generated to identify each module in an assembly. The
hash is used to ensure that only the proper version of a module is loaded at runtime. The
hash number is based on the actual code in the module itself.

"Hatteras"—Codename for Team Foundation Version Control tool. This is the new
version control in Visual Studio 2005.

Heap—An area of memory reserved for use by the CLR for a running program. In
.NET languages, reference types are allocated on the heap. See Stack.

Host Integration Server 2000—A set of Microsoft server applications used to integrate the
.NET platform and applications with non-Microsoft operating systems and hardware (e.g.,
Unix and AS/400), security systems (e.g., ACF/2 and RACF), data stores (e.g., DB2), and
transaction environments (e.g., CICS and IMS).

HTML (HyperText Markup Language)—A document-layout and hyperlink-specification


language. HTML is used to describe how the contents of a document (e.g., text, images,
and graphics) should be displayed on a video monitor or a printed page. HTML also
enables a document to become interactive with other documents and resources by using
hypertext links embedded into its content. HTML is the standard content display language
of the World Wide Web (WWW), and is typically conveyed between network hosts using
the HTTP protocol. See XHTML.

Identifiers—The names that programmers choose for namespaces, types, type


members, and variables. In C# and VB.NET, identifiers must begin with a letter or
underscore and cannot be the same name as a reserved keyword. Microsoft no longer
recommends the use of Hungarian Notation (e.g., strMyString, nMyInteger) or delimiting
underscores (e.g., Temp_Count) when naming identifiers. See Qualified identifiers.

ILASM—See MSIL Assembler.

ILDASM—See MSIL Disassembler.

Indigo—The code name for Windows Communication Foundation (WCF), which is the
communications portion of Longhorn that is built around Web services. This
communications technology focuses on providing spanning transports, security,
messaging patterns, encoding, networking and hosting, and more.

"Indy"—The code-name for a capacity-planning tool being developed by Microsoft. This
was originally a part of Longhorn, but is speculated to ship earlier.

Interface Definition Language (IDL)—A language used to describe object interfaces by


their names, methods, parameters, events, and return types. A compiler uses the IDL
information to generate code to pass data between machines. Microsoft's IDL, called COM
IDL, is compiled using the Microsoft IDL compiler (MIDL). MIDL generates both type
libraries and proxy and stub code for marshaling parameters between COM interfaces.

Just In Time (JIT)—The concept of only compiling units of code just as they are needed
at runtime. The JIT compiler in the CLR compiles MSIL instructions to native machine
code as a .NET application is executed. The compilation occurs as each method is called;
the JIT-compiled code is cached in memory, so each method is compiled at most once
during the program's execution.

Keywords—Names that have been reserved for special use in a programming language.
The C# language defines about 80 keywords, such as bool, namespace, class,
static, and while. The 160 or so keywords reserved in VB.NET include Boolean,
Event, Function, Public, and WithEvents. Keywords may not be used as
identifiers in program code.


"Ladybug"—Code name for the product officially known as the Microsoft Developer Network


Product Feedback Center where testers can submit online bug reports and provide
product suggestions via the Web.

License Compiler—A .NET programming tool (lc.exe) used to produce .licenses files
that can be embedded in a CLR executable.

Lifetime—The duration of an object's existence, from the time an object is
instantiated to the time it is destroyed by the garbage collector.

Local assembly cache—The assembly cache that stores the compiled classes and
methods specific to an application. Each application directory contains a \bin subdirectory
which stores the files of the local assembly cache.

"Magneto"—The code-name for Windows Mobile 5.0. This version is to unify the
Windows CE, PocketPC, and SmartPhone platforms. This platform includes a new user
interface, improved video support, better keyboard support, and more.

Make Utility—A .NET programming tool (nmake.exe) used to interpret script files (i.e.,
makefiles) that contain instructions that detail how to build applications, resolve file
dependency information, and access a source code control system. Microsoft's nmake
program has no relation to the nmake program originally created by AT&T Bell Labs and
now maintained by Lucent. Although identical in name and purpose, these two tools are
not compatible.

Managed ASP—Same as ASP.NET.

Managed C++—Same as Visual C++ .NET.

Managed code—Code that is executed by the CLR. Managed code provides information
(i.e., metadata) to allow the CLR to locate methods encoded in assembly modules, store
and retrieve security information, handle exceptions, and walk the program stack.
Managed code can access both managed data and unmanaged data.

Namespace—A logical grouping of the names (i.e., identifiers) used within a program. A
programmer defines multiple namespaces as a way to logically group identifiers based on
their use. For example, System.Drawing and System.Windows are two
namespaces, each containing types used for different purposes. The name
used for any identifier may only appear once in any namespace. A namespace only
contains the name of a type and not the type itself. Also called name scope.
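
A minimal sketch of how namespaces group identifiers (the namespace and type names are invented for illustration):

```csharp
namespace MusicOnline.Data      // hypothetical namespace for this project
{
    public class Song { }       // fully qualified name: MusicOnline.Data.Song
}

namespace MusicOnline.Web
{
    public class Song { }       // a different type: MusicOnline.Web.Song
}
```

Both classes are named Song, but because each lives in its own namespace the names do not collide.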

Native code—Machine-readable instructions that are created for a specific CPU


architecture. Native code for a specific family of CPUs is not usable by a computer with a
different CPU architecture (cf. Intel x86 vs. Sun UltraSPARC). Also called object code
and machine code.

Object—The instance of a class that is unique and self-describing. A class defines an


object, and an object is the functional realization of the class. Analogously, if a class is a
cookie cutter then the cookies are the objects the cutter was used to create.

Object type—The most fundamental base type (System.Object) that all other .NET
Framework types are derived from.

OLE (Object Linking and Embedding)—A Microsoft technology that allows an application
to link or embed into itself documents created by another type of application. Common
examples include using Microsoft Word to embed an Excel spreadsheet file into a Word
document file, or emailing a Microsoft PowerPoint file as an attachment (link) in Microsoft
Outlook. OLE is often confused with the Component Object Model (COM), because COM
was released as part of OLE2. However, COM and OLE are two separate technologies.

Orcas—The code name for the version of Visual Studio .NET to be released near the time
Microsoft Longhorn is released. This follows the release of Visual Studio .NET Whidbey.

Palladium—Former code name for Microsoft's Next-Generation Secure Computing Base


(NGSCB) project.

"Phoenix"—A software optimization and analysis framework that is to be the basis for all
future Microsoft compiler technologies.
"Photon"—A feature-rich upgrade to Windows Mobile that includes improvements such as
longer battery life. This version will follow Windows Mobile 5.0 (code-named "Magneto").

Pinned—A block of memory that is marked as unmovable. Blocks of memory are


normally moved at the discretion of the CLR, typically at the time of garbage collection.
Pinning is necessary for managed pointer types that will be used to work with unmanaged
code and expect the data to always reside at the same location in memory. A common
example is when a pointer is used to pass a reference to a buffer to a Win32 API function.
If the buffer were to be moved in memory, the pointer reference would become invalid,
so it must be pinned to its initial location.
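
In C#, pinning is expressed with the fixed statement inside unsafe code. A minimal sketch (compile with the /unsafe option; the buffer use is hypothetical):

```csharp
using System;

class PinningDemo
{
    unsafe static void Main()
    {
        byte[] buffer = new byte[256];
        fixed (byte* p = buffer)   // buffer is pinned for the duration of this block
        {
            // p can be passed safely to unmanaged code here; the garbage
            // collector will not move the buffer while it is pinned.
            p[0] = 1;
        }
    }   // buffer is unpinned (movable again) once the fixed block exits
}
```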

Pre-JIT compiler—Another name for the Native Image Generator tool (Ngen.exe) used to
convert MSIL and metadata assemblies to native machine code executables.

Qualified identifiers—Two or more identifiers that are connected by a dot character (.).
Only namespace declarations use qualified identifiers (e.g.,
System.Windows.Forms).


R2—The code name for the Windows Server 2003 update due in 2005.

Register Assembly Tool—Same as Assembly Registration Tool.

Register Services Utility—Same as .NET Services Installation Tool.

Reference types—A variable that stores a reference to data located elsewhere in


memory rather than to the actual data itself. Reference types include array, class,
delegate, and interface. See Value types, Pointer types.

Satellite assembly—An assembly that contains only resources and no executable code.
Satellite assemblies are typically used by .NET applications to store localized data. Satellite
assemblies can be added, modified, and loaded into a .NET application at runtime without
the need to recompile the code. Satellite assemblies are created by compiling
.resources files using the Assembly Linking Utility (al.exe).

Saturn—The code name for the original ASP.NET Web Matrix product.

Seamless Computing—A term indicating that a user should be able to find and use
information effortlessly. The hardware and software within a system should work in an
intuitive manner to make it seamless for the user. Seamless computing is being realized
with the improvements in hardware (voice, ink, multimedia) and software.
Secure Execution Environment (SEE)—A secure, managed-code runtime
environment within the Microsoft Longhorn operating system that helps to protect
against deviant applications. This is a part of Microsoft's "Trustworthy Computing"
initiative.

"Talisker"—The pre-release code name for Windows CE .NET (a.k.a., Windows CE 4.x).

Throwing—When an abnormal or unexpected condition occurs in a running application,


the CLR generates an exception as an alert that the condition occurred. The exception is
said to be thrown. Programmers can also programmatically force an exception to be
thrown by the use of the throw statement. See Exception Handling.

Trustbridge—A directory-enabled middleware that supports the federating of identities


across corporate boundaries.

Try/Catch block—An exception handling mechanism in program code. A try block


contains a set of program statements that may possibly throw an exception when
executed. The associated catch block contains program statements that handle any
exception that is thrown in the try block. Multiple catch blocks may be defined to catch
specific exceptions (e.g., divide by zero, overflow, etc.). See Finally block.
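
The mechanism above can be sketched in C# (a divide-by-zero example; the variable names are invented for illustration):

```csharp
using System;

class ExceptionDemo
{
    static void Main()
    {
        int dividend = 10, divisor = 0;
        try
        {
            int quotient = dividend / divisor;   // throws DivideByZeroException
            Console.WriteLine(quotient);         // never reached
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine("Cannot divide by zero: " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Runs whether or not an exception was thrown.");
        }
    }
}
```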

UDDI (Universal Description, Discovery, and Integration)—An XML- and SOAP-based


lookup service for Web service consumers to locate Web Services and programmable
resources available on a network. Also used by Web service providers to advertise the
existence of their Web services to consumers.

Unboxing—Conversion of a reference type object (i.e., System.Object) to its value


type instance. Unboxing must be explicitly performed in code, usually in the form of a
cast operation. See Boxing.
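
A short C# sketch of boxing and unboxing (the variable names are invented for illustration):

```csharp
using System;

class UnboxingDemo
{
    static void Main()
    {
        int count = 42;
        object boxed = count;       // boxing: the value is copied into a heap object
        int unboxed = (int)boxed;   // unboxing: an explicit cast back to the value type
        Console.WriteLine(unboxed);
        // short wrong = (short)boxed;  // would throw InvalidCastException:
        //                              // unboxing must target the exact value type
    }
}
```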

Value types—A variable that stores actual data rather than a reference to data, which is
stored elsewhere in memory. Simple value types include the integer, floating point
number, decimal, character, and boolean types. Value types have the minimal memory
overhead and are the fastest to access. See Reference types, Pointer types.

Variable—A typed storage location in memory. The type of the variable determines what
kind of data it can store. Examples of variables include local variables, parameters, array
elements, static fields and instance fields. See Types.

Version number—See Assembly version number


W

Web Form—A .NET Framework object that allows development of Web-based


applications and Web sites. See Windows form.

The Web Matrix Project—A free WYSIWYG development product (IDE) for doing ASP.NET
development that was released as a community project. The most recent version is The
Web Matrix Project (Revisited).

Web service—An application hosted on a Web server that provides information and
services to other network applications using the HTTP and XML protocols. A Web service
is conceptually a URL-addressable library of functionality that is completely independent
of the consumer and stateless in its operation.

XAML—(Extensible Application Markup Language) The declarative markup language for


Longhorn that allows an interface to be defined. Longhorn applications can be created by
using XAML for the interface definition and managed procedure code for other logic.

XCOPY—An MS-DOS file copy program used to deploy .NET applications. Because .NET
assemblies are self-describing and not bound to the Windows registry as COM-based
applications are, most .NET applications can be installed by simply being copied from one
location (e.g., directory, machine, CD-ROM, etc.) to another. Applications requiring more
complex tasks to be performed during installation require the use of the Microsoft
Windows Installer.

XDR (XML Data-Reduced)—A reduced version of XML Schema used prior to the release
of XML Schema 1.0.

XML Schema Definition Tool— A .NET programming tool (Xsd.exe) used to generate
XML schemas (XSD files) from XDR and XML files, or from class information in an
assembly. This tool can also generate runtime classes, or DataSet classes, from an XSD
schema file.
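
Typical invocations might look like the following (the file name Songs is invented for illustration):

```
rem Convert an XDR schema to an XSD schema
xsd.exe Songs.xdr

rem Infer an XSD schema from an XML document
xsd.exe Songs.xml

rem Generate runtime classes (or, with /dataset, a typed DataSet) from the schema
xsd.exe Songs.xsd /classes
```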

XML Web services—Web-based .NET applications that provide services (i.e., data and
functionality) to other Web-based applications (i.e. Web service consumers). XML Web
services are accessed via standard Web protocols and data formats such as HTTP, XML,
and SOAP.

Yukon—The code name for the release of Microsoft SQL Server 2005 (a.k.a., SQL Server 9). Yukon
offers a tighter integration with both the .NET Framework and the Visual Studio .NET IDE. Yukon
includes full support for ADO.NET and the CLR, allowing .NET languages to be used for
writing stored procedures.
