
Introduction to Databases and SQL Server

Having a database and a DBMS such as SQL Server is beneficial to any organization, especially given the continued growth of information technology.

In This Chapter:

Introduction to Databases
A brief introduction to the benefits of working with databases
SQL Server Features
The features, processes, and specific uses of SQL Server
SQL Server Architecture
A structural overview of the SQL Server system and the functions of its components

An Introduction to Databases

A brief introduction to the benefits of working with databases

Data storage and management is an issue that confronts all organizations and
businesses in today's information age. Businesses are faced with the task of not only storing
customer, product, and order data but also with the task of 'mining' existing data for relevant
statistics and qualitative analysis. Computing in general and Database Management Systems
(DBMS) in particular automate data storage and manipulation and provide businesses with
the capacity to view their data in custom formats, display it over the Internet or on a
workstation, connect it to other data repositories, perform qualitative and quantitative
analysis, and more in an efficient and secure manner.

Examples of Databases

Database management systems first appeared in the 1960s and early 1970s. Early DBMSs were not the flexible data containers we encounter today. Rather, they were tailored to fit the needs of a single organization; adding new types of data or reorganizing existing data was problematic.
Relational Databases were conceived in the early seventies and they have replaced the
earlier legacy systems over the span of a few decades. Relational databases are standardized
in a few ways that make them useful across the board. They also present data in a way that
most people can easily work with and manipulate. This presentation, or logical representation, however, does not mirror the actual way in which data is stored on the computer's hard disk.

The relational data model was defined by E.F. Codd in 1970 through a landmark research
paper called "A Relational Model of Data for Large Shared Data Banks". A relational
database is a group of related 2-dimensional tables. Each table describes the relation of the
main subject of the table to important information. The columns of the table contain the
types of information stored about the main subject. For example, the main subject may be
EMPLOYEE and the columns may contain EMPLOYEE_ID,
EMPLOYEE_DEPARTMENT, and EMPLOYEE_SALARY. Each column has a range of
permissible values, called a domain. For example, EMPLOYEE_SALARY should be a
number greater than zero. The rows in the database (referred to as tuples) would contain
information on different employees.
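To make this concrete, the EMPLOYEE relation described above might be declared in T-SQL roughly as follows. This is an illustrative sketch, not a schema from the text; the CHECK constraint enforces the salary domain mentioned in the example.

```sql
CREATE TABLE EMPLOYEE (
    EMPLOYEE_ID         INT PRIMARY KEY,       -- uniquely identifies each tuple
    EMPLOYEE_DEPARTMENT VARCHAR(50),
    EMPLOYEE_SALARY     DECIMAL(10, 2)
        CHECK (EMPLOYEE_SALARY > 0)            -- domain: salary must be greater than zero
);
```

Each row inserted into this table is one tuple; an insert that violates the salary domain (for example, a negative salary) is rejected by the CHECK constraint.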

Searching for an employee in a relational database is quite easy; a user may match the value
in the EMPLOYEE_ID column to find an entry in the table. How the relational database is
represented in the hard drive of the computer - physical storage - becomes unimportant
compared to how users and database administrators view the table - the logical view.

The ER, or Entity-Relationship, model was developed in the late seventies. In many ways, it is simply a refinement of the relational model. EMPLOYEE, according to this model, would form an entity. Other entities, such as SKILLS, SHIFT, and HOURS_WORKED, would be connected to EMPLOYEE by a series of well-defined relationships. For example, the relationship between EMPLOYEE and SKILLS would be many-to-many: a given employee may have many skills, and a given skill may be possessed by many employees. However, the relationship between EMPLOYEE and SHIFT would be many-to-one: many employees may work one shift, but a single employee cannot work many shifts. The ER model remains the standard today. Another standard that came to us from the seventies is the Structured Query Language, or SQL, a programming language based on the relational model that can be used to efficiently retrieve and alter data in any relational database.
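As a sketch (table and column names here are illustrative, not from a real schema), a many-to-many relationship such as EMPLOYEE-SKILLS is typically implemented with a junction table holding pairs of foreign keys:

```sql
CREATE TABLE EMPLOYEE_SKILL (
    EMPLOYEE_ID INT NOT NULL REFERENCES EMPLOYEE (EMPLOYEE_ID),
    SKILL_ID    INT NOT NULL REFERENCES SKILL (SKILL_ID),
    PRIMARY KEY (EMPLOYEE_ID, SKILL_ID)  -- each employee/skill pairing appears at most once
);
```

A many-to-one relationship, by contrast, needs no junction table; a single SHIFT_ID foreign key column on EMPLOYEE would suffice.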

The ER Model

Microsoft's proprietary SQL Server relational database management system is among the
most popular DBMS today. This is partially due to the popularity of Microsoft products and partially due to SQL Server's powerful features and flexibility. SQL Server supports an
extended version of SQL known as T-SQL or Transact SQL. Although SQL server has been
traditionally used by small to medium vendors and businesses, it is increasingly becoming
the DBMS of choice among large enterprises.

SQL Server offers ease of management via its standard GUI and provides an ideal environment for storing and deploying data used by Microsoft's ASP and ASP.NET platforms. However, SQL Server runs only on the Microsoft Windows OS, and there are as yet no plans to make it available on other platforms.

Microsoft SQL Server's codebase originated in the early Sybase SQL Server DBMS, a competitor of Oracle and IBM. The first independently marketed version of SQL
Server (version 4.2) was created by Microsoft and Sybase in 1989. This version was
identical to version 4.0 of the Sybase SQL Server for UNIX and VMS. The SQL Server
version customized for Windows was first shipped in 1992. Version 6.5 was designed purely
for Windows NT and did not contain any connection to Sybase. Version 7.0 was the first to
institute a Graphical User Interface.

Two versions of SQL Server are widely used today: Microsoft SQL Server 2000, released in August 2000, and SQL Server 2005, to which several organizations have upgraded. Free Express Editions that offer a substantial portion of the features of the enterprise editions are available for download, as are evaluation versions that allow users to preview the capabilities of SQL Server.

© 2007-2008 Spark Publishing Inc.

SQL Server Features

The features, processes and specific uses of an SQL Server

We will first discuss the features of SQL Server 2000 and later look at the SQL Server 2005 enhancements. SQL Server's Enterprise Manager console serves as its administrative
GUI tool. The Enterprise Manager may be used to view all the SQL Server installations and databases on the network and perform high-level administrative functions that involve one or
more servers. Common backups, restores and other maintenance related tasks may also be
scheduled through the Enterprise Manager. This GUI allows Database Administrators
(DBAs) to make changes to individual databases by adding or modifying data objects such
as tables. The SQL Server Query Analyzer may be used to run SQL queries and obtain results. The Query Analyzer has considerable potential as a development tool for new reports, analyses, stored procedures, and so on. In SQL Server 2005, the Enterprise Manager and Query Analyzer have been consolidated into SQL Server Management Studio.

The SQL Server Program Group

The SQL Profiler is a performance monitoring tool. DBAs may observe and record
database and query efficiency in real time through custom views and 'traces' that capture the
effect of various commands and events on the system in log files. The Service Manager is used to control the main SQL Server services - MSSQLServer, MSDTC (Microsoft Distributed Transaction Coordinator), and SQLServerAgent (the job and event scheduler). You can see these processes by pressing Ctrl-Alt-Del and clicking on the 'Processes' tab of the Task Manager. The icon for these services is usually present in the system tray at the bottom right-hand side of the screen. The Service Manager (or the Services applet in the Control Panel) may be used to start or stop these services.

Data Transformation Services (DTS) is one of the most powerful and flexible features of
SQL Server 2000. DTS allows developers to perform complicated data imports, exports, and a wide variety of transformations.

Analysis Services provides tools that allow users to specify dimensions and fact data for large data warehouses. Dimensions extend flat, two-dimensional database tables into multidimensional data 'cubes'. A wide variety of qualitative and
quantitative analyses may be performed on the data (data mining) through these cubes. The
cubes may be secured for use by specific users with specific roles.

The above are the main component level features of the SQL Server. DBMS features on a
finer level of granularity include the incorporation of user-defined functions, indexed views, distributed partitioned views, INSTEAD OF and AFTER triggers, new datatypes like bigint
and xml, cascading RI constraints, multiple instances, XML support, and log shipping. Let
us look at these features in detail.

Functions are used within queries, stored procedures, and other T-SQL commands. They
return the end product of a mathematical or statistical formula or expression. SQL Server
contains a number of built-in functions such as abs (to return the absolute value of a
number), sin (to return the trigonometric sine of a given number) and so on. SQL Server
2000 and 2005 allow users to write custom functions to perform user-specified calculations
that take zero or more input parameters and return either a single value or a set of rows.
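As a brief sketch (function name and formula are hypothetical), a scalar user-defined function in T-SQL takes input parameters and returns a single value:

```sql
-- Hypothetical example: compute an annual salary from a monthly figure
CREATE FUNCTION dbo.AnnualSalary (@monthly DECIMAL(10, 2))
RETURNS DECIMAL(12, 2)
AS
BEGIN
    RETURN @monthly * 12;   -- the single value returned to the caller
END;
```

Once created, it may be used inside queries just like a built-in function, e.g. SELECT dbo.AnnualSalary(EMPLOYEE_SALARY) FROM EMPLOYEE.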

A view is a custom way of looking at the data. Normally, data from two or more tables or
summaries and aggregations are consolidated via complex SQL queries and displayed in an
easy-to-comprehend view. Running underlying queries every time a view is invoked by a
user (views may be used just like tables) is expensive. SQL Server 2000 and 2005 allow
users to define indexes on views - a query for an indexed view is a lot more efficient.
Essentially, the result of the query is indexed and stored in the database.
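A sketch of an indexed view follows (names are illustrative). Indexed views must be created WITH SCHEMABINDING, must reference tables by two-part names, and the first index on the view must be a unique clustered index:

```sql
CREATE VIEW dbo.DeptSalaries
WITH SCHEMABINDING             -- required before the view can be indexed
AS
SELECT EMPLOYEE_DEPARTMENT,
       SUM(EMPLOYEE_SALARY) AS TotalSalary,
       COUNT_BIG(*)         AS RowCnt     -- COUNT_BIG is required in indexed views with GROUP BY
FROM dbo.EMPLOYEE
GROUP BY EMPLOYEE_DEPARTMENT;
GO

CREATE UNIQUE CLUSTERED INDEX IX_DeptSalaries
    ON dbo.DeptSalaries (EMPLOYEE_DEPARTMENT);  -- materializes and indexes the view's result
```

After the clustered index is built, queries against the view read the stored result rather than re-running the aggregation.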

A trigger is a block of SQL code that is executed every time a certain event (such as an insert into a table, a delete from another table, etc.) occurs. SQL Server 2000 and higher versions allow users to write INSTEAD OF triggers that are executed in place of the triggering event. Further, each table may have multiple AFTER triggers that are executed after update, insert, and delete events.
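For example (the table column and trigger name are hypothetical), an INSTEAD OF trigger can replace a physical delete with a "soft delete" that merely flags rows:

```sql
-- Runs instead of the DELETE statement; assumes EMPLOYEE has an IS_ACTIVE flag column
CREATE TRIGGER trg_Employee_SoftDelete
ON dbo.EMPLOYEE
INSTEAD OF DELETE
AS
BEGIN
    UPDATE e
    SET    e.IS_ACTIVE = 0                         -- flag rows inactive instead of removing them
    FROM   dbo.EMPLOYEE AS e
    JOIN   deleted      AS d                       -- 'deleted' holds the rows the DELETE targeted
           ON d.EMPLOYEE_ID = e.EMPLOYEE_ID;
END;
```

An AFTER trigger would use the same `inserted`/`deleted` pseudo-tables but run once the original statement has completed.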

Large 8-Byte integers may be stored in tables using the datatype 'bigint'. A variable sized
column that adapts itself to a variety of data types such as int, varchar and so on may be
defined using the data type sql_variant. Further, a table datatype may be defined to
temporarily hold sets of rows of tabular data. Several SQL server instances (an instance
corresponds to a SQLServer process) with custom configurations and sets of databases, may
run on a single computer.
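These datatypes can be sketched in a short batch (values are illustrative; SQL Server 2005 requires a separate SET after DECLARE):

```sql
DECLARE @big BIGINT;
SET @big = 9223372036854775807;        -- largest value an 8-byte integer can hold

DECLARE @anything SQL_VARIANT;
SET @anything = 'text today, a number tomorrow';  -- adapts to the value assigned

DECLARE @rows TABLE (EMPLOYEE_ID INT, NAME VARCHAR(50));  -- table variable: temporary rowset
INSERT INTO @rows VALUES (1, 'Ann');
SELECT * FROM @rows;
```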

Extensible Markup Language (XML) is a data description and display standard. XML is
rapidly becoming the tool of choice for data needs. SQL Server provides native support for
XML and allows users to obtain query output in XML format and retrieve data from XML
documents as if they were SQL Server tables. Further, data in SQL Server tables may be
directly viewed on the Internet using XML tools and IIS (Internet Information Server).
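For instance, the FOR XML clause returns query results as XML rather than a rowset (a sketch against the hypothetical EMPLOYEE table used earlier):

```sql
SELECT EMPLOYEE_ID, EMPLOYEE_DEPARTMENT
FROM   dbo.EMPLOYEE
FOR XML AUTO;   -- returns each row as an <EMPLOYEE .../> element with attributes
```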

A transaction log records every command that is issued to a DBMS. Log shipping allows a
transaction log for a specific time interval to be applied to another database. That is, all the
commands in the log for the time period are applied to the other database. This mechanism
may be used to perform incremental backups (i.e. backup changes rather than the entire
databases) or to restore a database that has been recovered from a backup and bring it up to
date. That is, if a backup is performed at 5:00am on Saturday and the system crashes at
5:00pm on Saturday, the transaction log from 5:00am to 5:00pm may be executed or applied
on the backup to bring it up to date and get the database running.
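The scenario above might be handled with commands along these lines (database name and file paths are illustrative, and the database must use a recovery model that retains the log):

```sql
-- Saturday 5:00am: full backup
BACKUP DATABASE Northwind TO DISK = 'f:\backup\northwind_full.bak';

-- Periodically thereafter: back up the transaction log
BACKUP LOG Northwind TO DISK = 'f:\backup\northwind_log.trn';

-- After the 5:00pm crash: restore the full backup, then apply the log
RESTORE DATABASE Northwind FROM DISK = 'f:\backup\northwind_full.bak'
    WITH NORECOVERY;     -- leave the database ready to accept log restores
RESTORE LOG Northwind FROM DISK = 'f:\backup\northwind_log.trn'
    WITH RECOVERY;       -- bring the database online, up to date
```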

In addition to the above, the latest versions of SQL Server support several enhancements to
Full-Text search such as change tracking and image filtering (for documents saved in image
columns). Versions 2000 and above provide built-in clustering support, which means that several computers may be 'clustered' together to form a fail-safe DBMS server. Differential (incremental) and secure backups are also broadly supported.

In SQL Server 2005, DTS has been replaced by the more comprehensive SQL Server Integration Services (SSIS), which provides many more transformations and extensive programming support. SQL Server 2005 also supports ad hoc reporting through Reporting Services and the new Business Intelligence Development Studio (a development tool for Analysis Services, SSIS, etc.).



Installing and Getting around SQL Server

Going through the step by step installation of an SQL server and learning the basic how-to's

Several versions of the SQL Server are available. An appropriate version should be
procured and installed only after a thorough assessment of organizational data management
needs is done. SQL Server 2000 is a robust version that is still preferred by many
organizations. SQL Server 2005 offers many unique technical features and components. SQL
Server 2005 Express is a handy free version that may be used to support small to medium
enterprise data needs. The Windows Installer takes users through the SQL Server
installation.

We will examine SQL Server 2005 installation in detail in this chapter. SQL Server 2005
allows installation and upgrade of individual components. For example, you may simply
install the SQL Server database engine without analysis services or choose to install analysis
services later. Significant improvements have been made to Error logging and the installation
directory structure. One of the best new features of the SQL Server 2005 installation is the
system configuration checker. This component checks the computer on which SQL Server is being installed to make sure it has the requisite software, hard disk space, and so on. Unattended and silent installs are also supported.

Inserting the SQL Server CD or DVD, or running the appropriate file from a download, opens an introductory screen that outlines product components and requirements. Select the
'Server components, Tools, Books Online, and Samples' option for a comprehensive install.
You will have to accept the license agreement by clicking on the appropriate checkbox.
Click Next to continue to the next screen.

SQL Server 2005 installation requires prior installation of the .NET Framework and
MSXML. Some components use IIS - if you do not have IIS installed, go to settings ->
Control Panel -> Add or Remove Programs. Click on the 'Add or Remove Windows
Components' icon on the left hand bar. If the box next to 'Internet Information Services' is
empty, click on it so that it contains a check mark and click next. IIS will be installed on
your machine. You may have to insert the Windows XP CD to complete installation.
Proceed to the next step once the prerequisites have been installed.

SQL Server Prerequisites

Next, the System Configuration Checker scans the system to make sure it contains the minimum set of resources required for a SQL Server 2005 install. Any problems are relayed to the user via warnings on a result screen. If you are installing inside a virtual machine, you may fix minimum-hardware warnings by allocating more RAM to the virtual machine running SQL Server.

System Configuration Check

You will next be prompted for the business name and other details, and your registration key.
Afterwards, you should select the components you wish to install along with SQL Server
2005. Select the SQL Server Database Services option. Also select Analysis Services and
SSIS (SQL Server Integration Services) as we will be using these in this tutorial. You could
either go with a default installation or click the Advanced button and select the features
yourself.

SQL Server Components for Install

Install SQL Server with the default instance rather than a named instance as it is simplest to
work with a default instance. Click on the radio button next to 'Default Instance' and proceed
to the security configuration step. The SQL Server service should be able to log into the system in order to access key resources. The simplest way to allow this is to specify a single account and a set of credentials for all SQL Server services. You may also choose to start services automatically after installation. Check the box next to SQL Server Agent to start this service upon installation, and leave the other default selections as they are.

Automatic Service Startup

Select Windows Authentication Mode for simple configuration and connection. This way, SQL Server is as secure as the system itself. However, you should change the password of the DBA account 'sa' once you open the SQL Server Management Studio GUI.

Security

A collation is a set of characters and sort orders that customizes the SQL Server installation to a particular language and region of the world. The default collation is Latin1_General, used for English and other languages that use the Latin script. Leave the default as it is. Note that the option 'Dictionary order, Case-Insensitive, for use with 1252 Character Set' is highlighted in the lower window. This makes your SQL Server installation case-insensitive. Windows collations are generally more consistent than SQL collations in the XP environment.

Collation Specification

If you selected Reporting Services, install it with the default configuration options. You may
even install Reporting services without specifying configuration at this time. The final step
summarizes your installation configuration. Click the 'Install' button to start installation. A
progress window will continuously display installation status and messages. After
installation, you may browse the SQL Server program group from the 'Start' menu.

The Management Studio GUI tool allows users to administer an SQL Server installation. The
GUI combines the features of the Query Analyzer tool and the Enterprise Manager GUI of
SQL Server 2000, along with value-added enhancements. Click on the 'SQL Server Management Studio' item in the SQL Server program group to open the Management Studio GUI. You may have to log into the server with the credentials you specified at installation time to open the default local server (if you chose Windows Authentication, you need not enter the username and password again). You will see an initial screen with several panels. Frequently used panels include the Object Explorer, the Registered Servers panel, Database Object Properties, and the Query panel.

Management Studio with Object Explorer

You may restrict the panels you wish to see displayed at any time by clicking on the
appropriate item from the 'View' menu on the top tool bar. The Registered Servers panel
contains all the database servers you have connected to from your installation. The
constituent databases of any registered server may be displayed and manipulated via the
Object Explorer as if the registered server were a local server.

Register Server Dialogue

You may create a new registration by clicking on the Database Engine Cylinder icon on the
top bar of the Registered Servers panel. Right click on 'Database Engine' and click on 'New
-> Server Registration'. Now, enter the server and connection information to register the new
server. A server group is a way of organizing several servers in a common folder. You may
start, stop, or configure a server by right clicking on its name and clicking on an item in the
list. If you have not already done so, right click on the entry for your default local server and select 'Connect' to connect to the database.

The Object Explorer shows a tree view of all objects including databases, procedures, views,
security, and jobs that constitute the currently connected Server. The Databases folder
contains one folder per database with sub folders for tables, views, and Programmed Objects
such as functions and procedures. Note that every SQL Server Installation comes with a set
of System databases and associated objects. The Security folder under each database
contains lists of user account login names and roles. The Server Objects folder contains
Linked Servers, Triggers and other server-wide objects. The database mailbox, Maintenance
related plans, Full Text configuration, Transaction Coordinator and so on are stored under
the Management folder. A folder is also included for Notification Services.

One of the first things you should do if you chose Windows Authentication is to expand the
security folder and change the default password assigned to user 'sa'. To do this, click on the
'+' sign next to the security folder and the '+' sign next to logins. Right click on the 'sa' user
entry and click on 'properties'. Enter a new password and confirmation in the pop-up
dialogue and click 'OK'.
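Alternatively, the sa password may be changed from a query window using ALTER LOGIN (SQL Server 2005 syntax; the password shown is only a placeholder):

```sql
ALTER LOGIN sa WITH PASSWORD = 'N3w$trongPassw0rd';  -- substitute a strong password of your own
```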

Change sa password

To open a new query window, click on the 'New Query' icon on the left hand side of the top
bar. You may enter one or more T-SQL commands in the new Query panel and execute the
commands by clicking the Execute '!' icon or hitting the F5 key. A Query window for the
'Master' database opens up by default; to open a window to query a specific database, expand
'Databases' by clicking the '+' sign, right click on the database name and select 'New Query'.
The Query editor allows users to design queries graphically rather than by typing a command. We will use this tool to create a new view later in the tutorial. This editor may be
used to design both data manipulation/search queries and data modification (UPDATE,
INSERT, DELETE) queries.

The Solution Explorer allows users to organize groups of T-SQL scripts into a .NET
framework-like project. Such solutions may be version controlled and managed through
Microsoft Visual Source Safe. Maintenance tasks such as backups and logging may be
packaged into a Maintenance Plan. Such plans are saved in the Management Folder under
Maintenance Plans. A job wizard allows users to create schedules for maintenance plans and
other jobs. The activity monitor in the Management folder allows users to view system
processes and locks on processes and objects. The DBA may use the constituent views to
monitor the server.


Basic Administrative Tasks

Discussing the basic administrative tasks

In This Chapter:

SQL Server Roles
Learning the different kinds of server roles and their functions
SQL Server Backups
Learning the importance and general functions of SQL Server backups


SQL Server Roles

Learning the different kinds of server roles and their functions

An SQL server role corresponds to a set of database and data object access permissions
assigned to operators. SQL Server supports server roles, database roles and application roles.
Server roles are common across the entire server installation and not specific to any single
database. The number and type of server roles are fixed by the system; new server roles may
not be added. However, new or existing users may be assigned to any server role. The
following is a list of server roles and descriptions. You may view these roles by expanding
the main server security folder and the 'Server Roles' folder under this folder.

Role           Description
bulkadmin      Perform bulk inserts and related operations
dbcreator      Create, modify, and resize databases
diskadmin      Manage disk files
processadmin   Manage SQL Server processes
securityadmin  Create and manage server logins, perform audits, read error logs
serveradmin    Change server configuration, shut down the server
setupadmin     Manage linked servers, replication, stored procedures, and execute certain system stored procedures
sysadmin       Broad access and complete control over all database functions (DBA)

You may add users to roles by using the Management Server GUI or by running special
System stored procedures on the command line within a new query window. To assign a
server role to a user through the GUI, right click on the role (expand the main folder for
security -> Server Roles) and click 'properties'. Click 'Add' to open the 'select login' dialogue
box. Here, you may click the browse button to view available logins. Check the box next to a
login and click OK to add the login to the selected role.

Add Logins to Role

Alternatively, you may execute a system stored procedure on a query window to add a user
to a role. The sp_addsrvrolemember procedure may be used for this purpose:

Add User to Role:
sp_addsrvrolemember <login_name>, <role_name>

Drop User from Role:
sp_dropsrvrolemember <login_name>, <role_name>

SQL Server supports certain fixed database roles, a public database role, and user-defined
database roles. Database roles are relevant to each database and establish access permissions
at the database level. You may view the fixed roles for a database by expanding the 'Security'
folder under the database. Fixed roles are defined by the system; a DBA may add new users
to existing fixed roles. The following are some fixed roles and their descriptions:

Role                Description
db_accessadmin      Add or remove groups, users, or SQL Server users in the database
db_backupoperator   Back up the database
db_datareader       View data in all user tables in the database
db_datawriter       Add, modify, or delete data in all user tables
db_ddladmin         Execute data definition language commands in the database (create, alter, and drop tables, etc.)
db_denydatareader   Deny privileges to view/select data
db_denydatawriter   Deny privileges to modify/delete data
db_owner            Database owner; broad DBA privileges and full control
db_securityadmin    Manage statement and data object permissions

You may add users to fixed database roles by using the Management Server GUI or by
running special System stored procedures on the command line within a new query window.
To assign a database role to a user through the GUI, right click on the role and click
'properties' (expand the database by clicking on the '+' sign next to its name; now, expand
'Security', 'Roles', and 'Database Roles' and right click on the role). Click 'Add' to open the
'select login' dialogue box. Here, you may click the browse button to view available logins.
Check the box next to a login and click OK to add the login to the selected role.

View Database Role Properties

Alternatively, you may execute a system stored procedure on a query window to add a user
to a database role. The sp_addrolemember procedure may be used for this purpose:

Add Account to Role:
sp_addrolemember <role_name>, <member_account>

Drop Account from Role:
sp_droprolemember <role_name>, <member_account>

Every database user is assigned the Public role. This role contains default access permissions
for all users of the database. Each user of the database has at least the permissions assigned
to the Public role. The public role cannot be dropped. If you require custom roles and groups
with permissions that range beyond common tasks, you may create and define new database
roles using the Management Server GUI or the sp_addrole stored procedure.

Add new Role:
sp_addrole <role_name>, <role_owner>

Remove Role:
sp_droprole <role_name>

An application role defines the permissions for specific database applications. For example,
consider a GUI based software that allows users to create and save graphics schemes using
graphics elements stored in a specific SQL server database. This software hides the actual
database retrieval and insertion of the schemes from the user but performs these operations
internally. In such cases, it makes more sense to assign roles to the application rather than to
all users who access the application. An application role may be defined to accomplish this
type of security. A new application role may be added using the SQL server GUI. Expand
the database, 'Security', and 'Roles'. Right click on 'Application Role' -> 'New Application
Role...' and enter information in the pop up dialogue. Alternatively, you may use the
sp_addapprole stored procedure.

Add Application Role:
sp_addapprole <role_name>, <account_password>

Delete Application Role:
sp_dropapprole <role_name>


SQL Server Backups

Learning the importance and general functions of SQL server backups

The Backup Database tool within SQL Server Management Studio may be used to back up databases. Backups generated through this tool conventionally carry a .bak extension. Right click on the name of a database in the Object Explorer and select Tasks -> Back Up... to open the backup database dialogue box.

SQL Server Backup Dialogue

You may specify a recovery type, a timestamp for Backup set expiry, and a destination for
the backup files. You may use the options tab (click on 'options' in the left hand window) to
either append to an existing database backup file or overwrite backup files on the selected
drive and media. Upon completion of the backup, you may choose to verify the backup for
integrity. The options dialogue also allows users to maintain a log of the backup and use a
tape drive for the backup. Click the 'OK' button to start the backup. Once the backup is
complete, a pop-up box will appear to indicate operation status (success, error messages
etc.).

The following command may be used to backup a database through a command executed on
a Query panel:

BACKUP DATABASE Northwind TO DISK = 'f:\backup\mydbase.bak'

SQL Server offers three recovery models - FULL, BULK_LOGGED, and SIMPLE. A new
database's default recovery model is FULL. This method offers maximum flexibility. You
may recover either the entire database or just a part of the database via the FULL recovery
model. The full method uses a significant amount of transaction log space and is slightly
expensive in terms of CPU usage. The BULK_LOGGED model is less flexible than the
FULL model but also consumes smaller quantities of system resources. The SIMPLE
recovery model is the easiest to implement; because the transaction log is truncated
automatically, recovery is limited to the point of the most recent backup. The following
command executed within a new query panel will return the current recovery model of the
database:

SELECT DATABASEPROPERTYEX('Northwind', 'Recovery')

The following command may be used to change the recovery model on the command line
(rather than through the backup dialogue on the Management Studio GUI).

ALTER DATABASE database_name SET RECOVERY {FULL | SIMPLE | BULK_LOGGED}

SQL Server backs up database structure and metadata along with data, and a backup may be
performed while users are still logged on to the server. You may back up to an explicitly
specified physical device or to a logical device mapped to an actual physical device.
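
Under the FULL recovery model, a typical routine pairs periodic full backups with more frequent transaction log backups; a restore then replays the chain. A sketch, reusing the hypothetical f:\backup location from the example above:

```sql
-- Full backup, then a later transaction log backup (FULL recovery model)
BACKUP DATABASE Northwind TO DISK = 'f:\backup\mydbase.bak' WITH INIT
BACKUP LOG Northwind TO DISK = 'f:\backup\mydbase_log.trn'

-- To restore: apply the full backup first with NORECOVERY so the database
-- stays ready for further restores, then apply the log backup.
RESTORE DATABASE Northwind FROM DISK = 'f:\backup\mydbase.bak' WITH NORECOVERY
RESTORE LOG Northwind FROM DISK = 'f:\backup\mydbase_log.trn' WITH RECOVERY
```

The WITH INIT option overwrites any existing backup sets on the target file rather than appending, mirroring the append/overwrite choice on the options tab of the GUI dialogue.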



4

SQL Server - A Practical Introduction to SSIS

Understanding the SSIS or SQL Server Integration Services

DTS or Data Transformation Services is a powerful and flexible tool shipped with the
SQL Server. The latest version of the SQL Server (SQL Server 2005) ships with SSIS or
SQL Server Integration Services, a remodeled version of DTS with extra features. We will
go over SSIS concepts in this chapter and learn to work with SSIS through a simple example.

DTS and SSIS allow users to automate data import and export from a variety of sources,
perform sundry transformations, and make format-related changes during the process. An SSIS
package consists of a series of tasks related to this sort of data extraction and transformation;
the tasks are performed in a pre-defined order. Constituent parts of DTS/SSIS tasks are
called DTS/SSIS objects and may be related broadly to either flow of control (the order in
which execution happens) or data flow (where is the information extracted from? how?
where does it go?).

DTS objects may contain connection managers that handle connection to the data source
via configuration parameters such as location, type, information on columns and data etc.
Variables are placeholders for entities and parameters that change during the course of the
execution of the package. Event handlers contain code that executes every time a certain
predefined event occurs and log providers record execution related information. Following
are some basic DTS objects with short descriptions:

• Containers - Objects that group objects into a common 'bin'. Every SSIS package is a
container that holds objects; other SSIS containers include ForEach loops, for loops,
and event handlers.
• Destination Adapters - Destination adapters contain information about the output
format and settings and load data from a variety of sources into the specified output
format (Excel file, flat file, SQL Server etc.).
• Precedence Constraints - These evaluate certain conditions and use the results to
order tasks for execution. The status of current task (success, failure etc.) often
determines the flow of control at points where there are branches (i.e. a choice of two
or more tasks).
• Source Adapters - These integral components actually make data fit for transfer into
SQL Server or other formats by adapting data housed in several sources (flat text files,
Excel files, other SQL Server databases etc.). Scripting components allow developers
more control over this process; developers may use a scripting component to house
code that readies data in a custom proprietary format for transfer.
• Tasks - In large part, SSIS tasks deal with data extraction and loading. They may
allow the package to communicate with other objects and Windows entities to
accomplish goals. Often used tasks include Data Flow tasks, Workflow tasks,
Scripting tasks, and Database Maintenance tasks.
• Transformations - Data transformations include aggregations, sorting, merging,
grouping etc. on the simple level. SSIS also allows a wide variety of complex
transformations.

We will work with a books database over this tutorial. In this chapter, we will learn to import
data into SQL Server through an SSIS package that converts text files to SQL Server
database table rows. We will also learn to schedule a job that executes the package at pre-
defined intervals of time via the SQL Server Agent. Download the files author1.txt and
author2.txt onto an appropriate directory on your computer.

Next, create an 'author' table in SQL Server. We will go over table creation in detail in the
next chapter; for now, let us just create a table in the 'master' system database. Open SQL
Server Management Studio through the corresponding entry in the SQL Server program
group in the 'Start' Menu. First, connect to the default database either when Management
Studio automatically opens the 'Connect' dialogue box or by right-clicking the corresponding
entry in the object explorer and clicking 'Connect'.

Next, click on the 'New Query' icon on the left hand side of the top bar. A new
blank window will open in the right hand panel. Copy and paste the following command into
the window and click the '!' Execute icon on the top bar or simply hit the F5 key on your
keyboard. This action will create a table named 'author' within the master database with
columns authorid (an automatically increasing counter), lastname, and firstname. You may
browse this table by expanding the 'Tables' folder within the 'Master' database within 'System
Databases' (you may have to right click the folder and click 'Refresh' first).

USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[author](
[authorid] [int] IDENTITY(1,1) NOT NULL,
[lastname] [varchar](100) NOT NULL,
[firstname] [varchar](100) NOT NULL,
CONSTRAINT [PK_author] PRIMARY KEY CLUSTERED
(
[authorid] ASC
)WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

The author Table
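
Before building the SSIS package, it is worth noting that for a one-off load of a single delimited file, T-SQL's BULK INSERT statement offers a lighter-weight alternative. The sketch below assumes author1.txt was saved to a hypothetical c:\data directory; a temporary staging table is used because the author table's identity column is not present in the file:

```sql
-- Stage the two-column, tab-separated file, then copy into author
-- (authorid is generated automatically by the identity column)
CREATE TABLE #author_stage (firstname varchar(100), lastname varchar(100))

BULK INSERT #author_stage
FROM 'c:\data\author1.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')

INSERT INTO dbo.author (firstname, lastname)
SELECT firstname, lastname FROM #author_stage

DROP TABLE #author_stage
```

SSIS earns its keep over this approach when the load involves multiple files, transformations, scheduling, or error handling, as the rest of this chapter demonstrates.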

We are now ready to create our SSIS package. Go to the SQL Server program group and
click on the Business Intelligence Development Studio Entry. The Visual Studio window
will open up. Click on 'New Project'. In the New Project dialogue, enter suitable values next
to the 'Name:', 'Location:', and 'Solution Name:' fields and select 'Integration Services
Project'. Click the OK button.

The New Project Dialogue

The following screen will open. This screen's main tab level should say 'package.dtsx'; we
are working on the SSIS package creation GUI. Right click on the bottom 'Connection
Managers' tab and select 'New Flat File Connection' from the list.

The SSIS Package designer

The flat file connection manager dialogue should open. The 'File Name:' field should reflect
the name and location of author1.txt on your computer. Select the other fields as shown in
the next screen shot.

Flat File Connection Manager General Properties

If you open the 'author1.txt' file, you will see a tab separated list of author First Names and
Last Names. We will need to enter this specification in our flat file connection manager.
Click on the 'Advanced' properties tab on the left hand panel of the Connection Manager
Editor. You will see default names (column1 and column2) for each column. Click on each
column name to view properties for the column. Change the names to 'firstname' and
'lastname'. Because the file is tab separated, the tab character '{t}' should be automatically
chosen in the 'Column Delimiter' field for column 1. For the second column, this field should
contain '{CR}{LF}'. Enter a length of 50 for each field in the 'OutputColumnWidth' field.
Make sure that the DataType field contains 'string [DT_STR]'. Once you have specified these
details, click OK. You will now
see a small icon with the name you specified in the 'Connection Managers' panel.

Flat File Connection Advanced Properties

Let us now create a destination connection manager that points to our SQL Server author
table. Right click on the 'Connection Managers' panel once more. This time, select 'OLE DB
Connection'. In the box that opens, select 'Native OLE DB\SQL Native Client' next to
'Provider'. Next, enter 'localhost' under the server name for your local default server and
select or enter 'master' under 'Connect to Database' under the checked 'Select or Enter a
Database Name' Radio Button. Click OK after testing the connection. Leave the next
dialogue as is and create the connection. You have now created an OLE DB Connection
Manager. You will now see an icon for the new connection in the 'Connection Managers'
Panel.

OLE DB Connection Editor

The next step is to create a data flow object to facilitate data transfer between our two
connections. This part is simple. Make sure that you are within the 'control flow' tab. Place
your mouse over the 'toolbox' icon on the left hand side to view a list of available SSIS tasks.
Scroll down and click on 'Data Flow Task'; hold the mouse button down and drag an instance
of the task onto the central panel (you can access the toolbox through View -> Toolbox also).
Enter a suitable name for the task. Once you have entered a name and clicked OK, you
should see an icon labeled with the name you assigned to the task appear on the control flow
tab. The green arrow underneath the task allows you to hook up the successful completion of
this task with the start of the next.

Data Flow Task

Data Flow Task Icon After Creation

You may now specify input and output for the data flow task. Double-click on the data flow
task to get to the 'Data Flow Task' Tab to edit your task. Now, position your mouse over the
'toolbox' icon on the left hand side. Scroll down and click on 'Flat File Source' under data
flow source. Hold the mouse button down and drag an instance of the Source onto the central
panel. Double-click on the source icon to open the connection dialogue. In the initial
dialogue of the Flat File Source editor, select the name of the flat file connection manager we
created earlier by clicking on the down arrow at the end of the 'Flat File Connection Manager'
box. Now, click on 'columns' in the left hand tab and make sure both columns are checked.
Click OK.

Flat File Source Properties

Flat File Source Columns

The red cross in the Data Source task is now a green check. Now, click on the connection
point at the center of the bottom border of the Source box and connect it to the OLE DB
Destination box. You will see the two items connected by a green arrow.

A data flow task

Next, double click on the destination icon and specify basic properties. Select the
localhost.master connection we created earlier. Select [dbo].author in the 'Name of Table or
View' Field. Leave the default values in the other entries. Click on 'Mappings' in the left
hand tab. Since matching names were provided, the columns in the source and destination
should already be mapped. If not, simply click on 'firstname' in the source box and hold the
mouse down. Drag the mouse onto the 'firstname' column in the destination box and let go.

OLE DB Destination Editor

Column Mappings

Your simple package is now ready. It will import data from the specified flat file to SQL
Server. Let us add an extra function to the package. Large databases often use more than one
flat file as a source. The author information has been split into author1.txt and author2.txt for
demonstration. Let us use a 'foreach' task to import data into the author table from all author
flat files in a directory. Save your package by clicking 'Save All' or right clicking on the
package tab and selecting 'Save'. Now, click on the control flow tab once again. Move over
the toolbox and drag a 'foreach container' onto the work area.

Double click on the container. In the editor dialogue, select 'Foreach File enumerator' next to
'Enumerator'. Select or enter the directory where you have saved the author1.txt and
author2.txt files under 'folder'. Type author*.txt under 'Files'. It is better that you use a folder
that just holds author data files as we have just set up our package to loop through all files in
the directory of format author<some_string>.txt. Next, click on the 'Variable Mappings' tab.
Enter a name for a new variable; create it and map it to index 0. This step causes the name
and path of the file being processed to be held in the variable you just created and assigned.
Foreach Collection Properties Editor

Foreach Variable Mapping

Next, drag the data flow task into the foreach container as shown below.

Foreach Container with Data Flow Task

Now, right click on the name of the Flat File Connection Manager in the bottom panel and
select properties. You will see the properties window for this object in the right hand panel
(usually the bottom half of this panel) of BIDS. Click on the ellipsis '...' symbol next to the
connection String Property. An expression dialogue opens up. Now, map the connection
string property to the variable you created earlier and click OK. You should see the value of
the first file in your list in the Connection String box. Now, your package will take input
from each author*.txt file in turn as the flat file connection manager for the source refers to a
dynamic variable rather than a single source. Save the package. Build it using the top bar
'Build' menu.
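
A built package may also be run outside BIDS with the dtexec command line utility that ships with SQL Server; the package path below is a hypothetical example, not a location created earlier:

```shell
dtexec /FILE "C:\SSISProjects\AuthorImport\Package.dtsx"
```

Running the package from the command line in this way is also what the SQL Server Agent job we create next does behind the scenes.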

Properties of Flat File Connection Manager

Set ConnectionString to Dynamic Variable.

A DBA may expect another process to add data files to a certain location at specific times
during the day. To create a job that runs the package automatically at set intervals of time,
open management studio, connect to the default database (the database that contains the
author table we created earlier) and expand the SQL Server Agent. Right-click Jobs and then
click New. In the new job dialogue, select 'Enabled'.

Enter a suitable name next to 'Step Name' and select 'SQL Server Integration Services
package execution' from the 'Type' list. Click on the general tab and use file system as the
package source. Click the ellipsis '...' icon next to package and browse to the location where
you saved the package. Select the Windows account you normally connect to and administer
your database from the 'Run as' list. Click 'OK' to save your job with an appropriate name.
Refresh to see your new job under the 'Jobs' folder.

Browse New Job

Right click on your new job and click on properties. Select the 'Schedules' tab. Click New.
Type a name for the schedule. Enable the job by clicking the 'Enable box' if it is not already
checked by default. Click the 'Recurring' radio button. Click 'change...' and enter the
appropriate values in the Frequency, Daily Frequency, and Duration groups. Apply the
changes and save your job by clicking OK.

Schedule New Job

This tutorial should have given you a basic understanding of the capabilities of SSIS. Typical
SSIS packages are many times more complex than ours. Logging, or on-failure execution
links, may be used to handle task failure. Processed files may be moved to a separate folder
or even deleted in order to prevent duplicate inserts. Several tasks may be sequentially
connected to create a regular program flow.



5

Normalization and Sample Database Creation

Tips and guidelines in creating a well designed database

In This Chapter:

Database Normalization
Guidelines for a well-designed database
Creating a New Table
Basic steps in creating a new table
5

Database Normalization

Guidelines for a well-designed database

The Database design process should comply with certain broad guidelines. A
well designed database stores information in the smallest amount of space possible and
responds to queries speedily. Our sample database stores data of books and customers of a
bookstore. Consider a scenario where customers should be mapped to their book category
preferences for new release mailing lists:

Data Redundancy

The above table stores the word Fiction four times and the words Non-Fiction, Poetry, and
Mythology three times. If the customer base grew to, say, 10000 people, the same words
would appear many thousands of times. This table contains redundant (unnecessarily
repeated) data. How would a good design eliminate this problem and streamline the
database? Take a look at the following tables.

First Normal Form

The CustomerCategories have now been split into two tables. Each customer now has a
unique ID. This ID is called a Primary Key. Assigning such unique IDs or primary keys
makes a database compliant with the 1st normal form. The lookup becomes easier now; all
we have to do is search for 'Poetry' in the second table to get a list of customers who have a
preference for this category. However, it still looks untidy and numbers still get repeated a
lot. Also, if the last customer subscribed to a particular category is removed from the
database, the category will no longer be listed in the database!

Second Normal Form

Look at the above set of tables. A new Category table has been created with unique IDs for
each category. Now, the new CustomerCategories table contains fewer repetitions and is much
clearer. We have separated the old CustomerCategories table because it depended on two
sets of keys - customerID and categoryID - not a good practice. By doing this, we have
achieved the 2nd normal form. To make the design compliant with the 3rd normal form, all the
data within a table should pertain to a single subject. For example, let us say that the
bookstore expands to many cities. Rather than adding branch name and location columns to
the customers table, we would create a Branch table with a unique primary key and use this
primary key to indicate a customer's primary branch. This concept is called a foreign key.
Using such keys makes the database comply with the 3rd Normal Form. Making a database
comply with these normal forms will ensure that it is efficient.

3rd Normal Form
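
With the normalized design, the lookup described above becomes a single join query. The following sketch assumes table and column names in the spirit of the booksdb schema created later in this chapter; the customercategories mapping table and its columns are hypothetical here:

```sql
-- List the names of customers subscribed to the Poetry category
SELECT c.firstname, c.lastname
FROM customer c
JOIN customercategories cc ON cc.customerid = c.customerid
JOIN category cat ON cat.categoryid = cc.categoryid
WHERE cat.categorydesc = 'Poetry'
```

The word 'Poetry' is stored exactly once, in the category table; every other table refers to it only through its numeric id.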

A variant of the SQL query language known as T-SQL is the standard way of talking to a
SQL Server database. SQL as a language is declarative and not procedural. A procedural
language uses a number of steps or a procedure to get a specified result. A declarative
language, on the other hand, simply makes a statement - a single declaration that is sent to
the DBMS. The DBMS then executes internal programs and returns an answer. For example,
the SQL statement 'CREATE TABLE' with a list of parameters creates a table on the
database.
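
The declarative style is easiest to see in a statement that would otherwise require an explicit loop. The single statement below simply describes the result, and the DBMS decides how to visit the rows; the 10% discount scenario is a made-up illustration:

```sql
-- Reduce the price of every 2006 title by 10% in one declaration;
-- no row-by-row loop is written by the user
UPDATE book SET price = price * 0.90 WHERE pubdate = '2006'
```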

A SQL Server table may contain many columns with varying data types. The data type is the
sort of value (number, string, date etc.) that the column contains. A table is made up of rows
and columns. The columns define the subject of the data (or domain) each row contains.
Table names should be unique within a database. The structure of a table consists of its
columns and their data types. The order in which rows appear when a table is opened has
nothing to do with the way they are stored.

We will learn T-SQL and use T-SQL to create structured programs with a flow of control
over the next three chapters. We will use a sample bookstore database for our exercises. Let
us first work on creating an appropriate database and populating tables with sample data.
Most commands may be executed by clicking on the 'New Query' icon on the top left hand
side window and entering a query or sets of queries in the window that pops up. Highlighting
a portion of the queries or the entire set and clicking the 'Execute' exclamation point ('!') icon
will run the highlighted queries. We will predominantly use the SQL query window panel to
test our queries. However, we will also look at the GUI way of accomplishing each task.

Our database will be called 'booksdb'. Right click on the 'databases' folder on the tree on the
left hand window of Management Studio and select 'New Database'. Enter 'booksdb' in the
empty field next to 'Database Name:' and click the OK button. The new database will be
created with folders for all associated objects. Now, let us learn how to create a table using
the Management Studio GUI before running a T-SQL script to create all the booksdb
database tables. We have already created an author table using the GUI in the last chapter.


Creating a New Table

Basic steps in creating a new table

1. Click on the plus sign next to the newly created 'booksdb' database to
expand the folder's contents.
2. Right click on the table folder and click on 'New Table...'
3. A new table editor that looks like the next screen shot will appear.
4. Let us create the simple 'category' table first. Enter 'categoryid' under 'Column Name'.
5. Press tab or the right arrow key and select 'int' from the list that appears when you
press the left mouse button over the downward arrow symbol (you may also enter 'int'
using the keyboard).
6. Click on the check mark inside the 'Allow Nulls' box so that the box becomes empty.
7. Right click the black right arrow at the start of the column and select 'Set Primary
Key'. This will be the primary key of our table. After you do this, you should see a
gold key next to the 'categoryid' column definition.
8. Click on the column under 'Column Name' in the next row and enter 'categorydesc'.
9. Enter 'varchar(100)' in the next field under 'Data Type'
10.Uncheck the 'Allow Nulls' box next to categorydesc.
11.Now, let us define an automatic increment that SQL server keeps track of for the
categoryid field so that we do not have to work out and specify one manually. Click on
the 'categoryid' column.
12.Click on any item within the 'ColumnProperties' tab underneath the table tab.
13.Use the down arrow key to scroll through the properties until you get to 'Identity
Specification'. Expand this item by clicking on the plus sign next to it.
14.Type or select 'Yes' next to '(Is Identity)' and press enter.
15.The Identity Increment and Identity Seed properties should now be automatically
populated with '1'. Your table definition will now look like the following screenshot:

Category Table Definition

16.Right click on 'Table-dbo.table1' tab and click on the first item in the menu (Save
table_1)
17.Enter 'category' under 'Enter a Name for the Table' and press OK
18.This action creates and saves the new table. Right click on the 'Tables' folder and click
'Refresh'. You will now see 'dbo.category' underneath the 'Tables' folder under
booksdb.
19.Now, delete this table by right clicking on its name and selecting 'delete' from the
menu.
20.Select 'OK' on the 'Delete Object' pop up window.

Now that we have examined the GUI method of table creation, let us look at the equivalent
in T-SQL code:

CREATE TABLE [dbo].[category](


[categoryid] [int] IDENTITY(1,1) NOT NULL,
[categorydesc] [nvarchar](100) NOT NULL,
CONSTRAINT [PK_category] PRIMARY KEY CLUSTERED
(
[categoryid] ASC
)WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

• To run this code, click on the 'New Query' icon on the left hand side of the top toolbar.
• A Query tab will open up. Copy the above query into the tab.
• Select the entire Query and click the 'Execute' exclamation point symbol on the top
toolbar (or Query->Execute from the menu or by just hitting the F5 key). Note that all
the contents of the Query window will automatically be executed if no commands are
selected.
• Right click on the 'Tables' folder and click 'Refresh'. Once again, you should see the
new category table identical to the one you created with the GUI.
• Now, delete the contents of the query editor window and enter 'drop table category'.
• Highlight the above command and hit the F5 key. You will see the 'Command(s)
Completed Successfully' confirmation in the Messages tab in the lower half of the
window. Note that the SQL command to drop a table does not evoke a confirmation
step unlike the GUI-based command. You may have to refresh the tables folder to see
your changes.

Included in this tutorial are several T-SQL and SQL scripts that automatically create the
books database table and populate them. Download the scripts booksdbddl.sql, book.sql,
author.sql, category.sql, customer.sql, custorder.sql, orderdetails.sql, and subject.sql onto an
appropriate directory on your computer.

Now, run the booksdbddl.sql script. Click on Open -> File in the file menu on the top bar and
browse to the location of booksdbddl.sql. Click on the file and then the open button. The file
will open on a query window. Execute the contents of the window by hitting the F5 key or
clicking the 'Execute' exclamation point (!) icon.

After the above command, run the rest of the files in the exact order specified below. Each of
these files populates a table created in the previous step. You may see many '(1 row(s)
affected)' messages flash past after running each script.

author.sql
category.sql
subject.sql
book.sql
customer.sql
custorder.sql
orderdetails.sql

Now, go ahead and familiarize yourself with the tables by opening the definition of each
table in the booksdb database. Go through the data in the tables and the table structure.

Books Database Tables

The above diagram displays all the relationships in the books database. Note that every table
except orderdetails has a primary key (the first column) and most dependent tables have
foreign keys. The author, category, and customer tables are top level tables that do not
depend on other tables. The subject table contains a foreign key that refers to a category id.
The book table contains foreign keys for both the author id and subject id fields. The
custorder table contains the id of the customer who placed the order and the order date. The
orderdetails table is organized by the id corresponding to each item within an order. We use
four data types - varchar for string/character data, date, integer based types, and money. The
price column in the book table and the unitprice column in the orderdetails table alone are of
the money datatype; all other number fields contain integers.

Let us start by going over some of the scripts in the booksdbddl.sql files. Note that this file
contains mostly Data Definition Language (DDL) commands. DDL deals with table and
data format and creation. DML or Data Manipulation Language deals with data retrieval,
querying and so on. First, let us look at the statements that created the author table and
associated objects.

CREATE TABLE [dbo].[author](
[authorid] [int] IDENTITY(1,1) NOT NULL,
[lastname] [varchar](100) NOT NULL,
[firstname] [varchar](100) NOT NULL,
CONSTRAINT [PK_author] PRIMARY KEY CLUSTERED
(
[authorid] ASC
)WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

The 'create table' statement creates a new table within the current database. The table's
column definitions and other constraints should be specified within a pair of brackets after
the table name. Note that the square brackets are optional and added to make the statement
more readable. The author table has three columns. LASTNAME and FIRSTNAME contain
strings (VARCHAR(100)) that may be up to a hundred characters long. The AUTHORID
field contains a whole number (int is a four-byte integer type). Further, we define authorid
to be an identity column with a seed of 1 and an increment of 1
through the directive IDENTITY(1,1). Recall that we did this through setting the '(Is
Identity)' property to Yes in the GUI. Also, we declare authorid to be a Primary key by
associating it with a constraint called PK_author.
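
Because authorid is an identity column, insert statements omit it and let SQL Server generate the next value; the sample name below is arbitrary:

```sql
-- authorid is supplied automatically by the identity mechanism
INSERT INTO dbo.author (lastname, firstname) VALUES ('Austen', 'Jane')

-- SCOPE_IDENTITY() returns the identity value just generated in this scope
SELECT SCOPE_IDENTITY() AS newauthorid
```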

The PK_author constraint restricts the AUTHORID field so that it contains no duplicate
or NULL values. Such constraints maintain the integrity of the data
and are therefore called integrity constraints. A constraint does not have to be a primary key
or a foreign key. We could even define 'check' constraints through a formula based on
constants or values in other fields. The following command adds a constraint to the
orderdetails table. This constraint restricts discounts: a discount cannot be applied to a single
copy order of a book that costs below $25. The NOCHECK directive causes SQL Server to
skip checking existing rows against the constraint.

ALTER TABLE [dbo].[orderdetails] WITH NOCHECK


ADD CONSTRAINT [CK_orderdetails] CHECK
(([QUANTITY]>(1) OR [UNITPRICE]>(25.00)) OR [DISCOUNT] IS NULL)
GO
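
With CK_orderdetails in place, an insert that breaks the rule is rejected at once. The statement below orders a single copy of a book priced under $25.00 with a discount applied; the column names and values are illustrative, based on the fields named in the constraint:

```sql
-- Rejected by CK_orderdetails: quantity 1, unit price below 25.00,
-- and a non-NULL discount
INSERT INTO dbo.orderdetails (orderid, bookid, quantity, unitprice, discount)
VALUES (1, 1, 1, 19.99, 0.10)
```

SQL Server raises a constraint violation error and the row is never written, which is exactly how integrity constraints protect the data.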

Issue the following command to delete this constraint:

ALTER TABLE [dbo].[orderdetails]


DROP CONSTRAINT [CK_orderdetails]
GO

Let us now use the Management Server GUI to run the same command.

• Click on the plus sign next to the icon for the orderdetails table under the booksdb
database in the tree on the left hand side window.
• The expanded view will reveal a folder named 'Constraints'. Right click on this folder
and click on 'New Constraint...'
• In the pop-up window, click on the add button. Leave the name in the left hand
window as is.
• Click on the ellipsis symbol ('...') next to 'Expression' under General. Copy and paste
the following text into the pop up window:

((QUANTITY>1 OR UNITPRICE>25.00) OR DISCOUNT IS NULL)

• Click OK to close the window.


• Now, click the 'close' button.
Constraint Definition

• Right click on the 'orderdetails' table tab and choose 'Save orderdetails'.
• You have now added a constraint through the GUI.
• Do not forget to delete the constraint - right click on the constraints folder, select 'New
Constraint' and hit the delete button twice (once to delete the default new constraint
and once to delete our constraint). Save the table after deleting the constraint to apply
changes. You may also delete the constraint by issuing the 'ALTER TABLE [dbo].
[orderdetails] DROP CONSTRAINT' command.

Let us take a look at the create command for the 'book' table to get a better idea about
establishing foreign key constraints. The bookid field is the primary key of the book table.
The table contains a title column, a price column, and a pubdate column. Note that the price
column uses the money data type, which stores currency values to four decimal places. The
book table also contains references to the book's author and subject through the AUTHORID
and SUBJECTID foreign keys.

CREATE TABLE [dbo].[book](


[bookid] [int] IDENTITY(1,1) NOT NULL,
[authorid] [int] NOT NULL,
[subjectid] [int] NOT NULL,
[title] [varchar](200) NOT NULL,
[price] [money] NOT NULL,
[pubdate] [char](4) NOT NULL,
CONSTRAINT [PK_book] PRIMARY KEY CLUSTERED
(
[bookid] ASC
)WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY],
CONSTRAINT [FK_book1] FOREIGN KEY([authorid])
REFERENCES [dbo].[author] ([authorid]),
CONSTRAINT [FK_book2] FOREIGN KEY([subjectid])
REFERENCES [dbo].[subject] ([subjectid])
) ON [PRIMARY]
GO

Note that the foreign key constraint is defined through the CONSTRAINT - FOREIGN KEY
- REFERENCES clause of the create table command. The references section specifies the
table whose primary key is referenced by the foreign key (in our case, author for the foreign
key authorid and subject for the foreign key subjectid).

An index on a column improves efficiency when specific values of the column are often
looked up within large tables. An index works by storing the values of the indexed
column(s) in sorted order, together with a locator for the row where each value occurs in
the table. Thus, the value is looked up in the index (quite fast, as it is sorted) and the row is
retrieved by 'jumping' to the specified location. All tables are automatically indexed on the
primary key. Indexes are useful in cases where there are many requests for single rows of
data via the indexed column. We create two indexes - one based on the author's name and
another based on the book's title. The index command is quite basic: it creates the named
index on the specified table on the columns within the parentheses, in the order specified. A
nonclustered index is a structure separate from the table that points back to the table's rows;
this is in contrast to a clustered index, which determines the physical order of the rows
themselves (the primary key constraint above created a clustered index). The 'WITH'
directive specifies certain optional parameters such as sorting space for the index and so on.

CREATE NONCLUSTERED INDEX [IX_author] ON [dbo].[author]


(
[lastname] ASC,
[firstname] ASC
)WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF,
IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]
GO

The data types we used are just a small subset of SQL Server data types; following are the
names and descriptions of some of the more frequently used data types:

• bit - An integer with value either 0 or 1.
• int - A standard integer in the range -2^31 (-2,147,483,648) to 2^31 - 1
(2,147,483,647).
• smallint - An integer in the range -2^15 (-32,768) to 2^15 - 1 (32,767).
• decimal - Fixed precision numeric data in the range -10^38 + 1 to 10^38 - 1 (may also
be referred to as numeric - a synonym).
• money - A data type that may be used for currency, from
-922,337,203,685,477.5808 to +922,337,203,685,477.5807, with four decimal
places.
• float - Floating precision number data from -1.79E+308 to 1.79E+308.
• datetime - Date and time data from January 1, 1753, to December 31, 9999.
• char - Fixed-length character data of up to 8,000 characters.
• varchar - Variable-length strings or character data of up to 8,000 characters.
• text - Variable-length non-Unicode strings of up to 2^31 - 1 (2,147,483,647) characters.
• nvarchar - Variable-length Unicode data of up to 4,000 characters.
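
As a quick illustration, a hypothetical table using several of these types might be declared
as follows (a sketch only - neither this table nor its columns are part of the booksdb schema):

CREATE TABLE [dbo].[sampletypes](
[sampleid] [int] IDENTITY(1,1) NOT NULL, -- auto-incrementing integer key
[active] [bit] NOT NULL, -- flag holding 0 or 1
[qty] [smallint] NULL, -- small-range integer
[amount] [money] NULL, -- currency with four decimal places
[ratio] [float] NULL, -- floating point number
[created] [datetime] NULL, -- date and time
[code] [char](4) NULL, -- fixed-length string
[notes] [varchar](200) NULL -- variable-length string
)
GO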

The DROP TABLE command may be used to delete a table. Right clicking on a table
name and clicking 'Delete' also opens a confirmation dialog and deletes the table. DROP
and ALTER statements should be used with great care to prevent data loss or invalidation.
Note that 'dbo' is prefixed to every table and object name. This is because we are assuming
that the user 'dbo' owns the booksdb database and all constituent objects.

DROP TABLE User.MYTABLE

The ALTER TABLE command may be used to redefine a table and can be used in different
ways. The following commands change the type of the price column, add a new column to
the author table, and then delete the newly added sample column.

ALTER TABLE dbo.book ALTER column price smallmoney


ALTER TABLE dbo.author ADD totalbooks smallint
ALTER TABLE dbo.author DROP column totalbooks
6

Basic Queries and Data Modification

Answering basic queries and data modification

In This Chapter:

Select Statement
Overview of a select statement and its basic functions
Functions
Defines server functions and its uses
Operators
Defines server operators and its uses
Data Insertion and Modification
A view on data insertion and modification

Select Statement

Overview of a select statement and its basic functions

The select command is the crux of all DML and is used extensively to query
and retrieve data from databases. The select command has the following basic syntax;
optional clauses are in square brackets:

select [distinct] <comma_separated_column(s)>


from <table>
[ where <condition1> [and/or] <condition2> ...]
[ order by <comma_separated_column(s) [asc/desc]> ]

A simple SELECT query for the books database and results follow:

select subjectdesc, categoryid from subject

Select Query Results

As you can see, the query just returned the columns subjectdesc and categoryid from the
subject table. Columns are returned in the exact order specified.

Now that we have the basic SELECT query under our belts, let us try something new. When
we need all the columns in a table in the same order they appear within it, there is really no
need to specify each column's name in the SQL command. We may use the asterisk '*'
symbol instead. This symbol stands for 'all columns' when used in this context.

select * from category

Select All Columns Results

If we want a list of customers and their addresses in alphabetical order, we could use the
ORDER BY clause after the SELECT statement

SELECT lastname, firstname, address


from customer ORDER BY lastname, firstname

Select with Order by - Partial Results

Specifying the letters 'ASC' or nothing at all next to the column name sorts the column in
ascending order while specifying 'DESC' sorts the column in descending order. Sometimes,
there is a need to leave out duplicates when we get an answer to a question. For example, let
us say that we need to know which subjects are represented in the list of books. Obviously,
the book table has as many entries for a subject as there are books on the subject. Ten books
may be classified under subject 1, five under subject 2 and so on, in which case 1 will appear
ten times and 2 will appear five times. We only want a list of the subjects represented; we
don't want to see duplicates.

select distinct subjectid from book


order by subjectid

Select with Distinct Results

We can clearly see that all 15 subjects are represented. Run the same query without using the
distinct keyword and see what results you will get.

Say you have a special promotion on books that cost under $30 and you want to send flyers
containing just these books. How would you get a list of books that cost under $30? SQL
offers the 'WHERE' clause to help you do this.

SELECT title, price from book WHERE price <= 30


order by title

Simple Where Clause

If your query returns a large result set that you only need a part of, you may use the TOP
keyword to obtain partial results. The following command returns the title of the first 10%
(24 rows) of the total number of books in the book table:

SELECT TOP (10) PERCENT title from book
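
The TOP keyword may also be used without PERCENT to return a fixed number of rows;
for instance, the following variation on the command above would return just the first five
titles:

SELECT TOP (5) title from book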

Basically, the 'where' clause made the command go to the database and grab all the rows of
information where the column price contained a value less than or equal to 30. The select
clause may also contain arithmetic expressions, function results and so on. The following
command returns the price of discounted order items after the application of discount against
the orderid. The 'AS' keyword allows us to specify an alias for the computed column. The
result of the computation is displayed under the header that appears after the AS keyword.

select orderid, unitprice*(100-discount)/100 AS FINALPRICE


from orderdetails
where discount is not null

Expression Usage


Functions

Defines server functions and its uses

Several operators and functions may be used for the appropriate datatypes
while performing arithmetic operations after the select clause, the where clause and so on.
Following are some important SQL Server built-in functions.

Aggregate functions

• sum([all | distinct] expression) - Total of all or distinct values in a numeric column.
• avg([all | distinct] expression) - Average of all or distinct values in a numeric column.
• count([all | distinct] expression) - Number of non-null values in the column.
• count(*) - Number of selected rows.
• max(expression) - Highest value of the expression or column.
• min(expression) - Lowest value of the expression or column.

Datatype conversion functions

• convert (datatype, expression[, style]) - Converts between datatypes; reformats
date/time and money data for display.

Date functions

• dateadd (datepart, numeric_expression, date) - Adds the specified number of the
specified date parts (such as month, day, etc.) to the date.
• datediff (datepart, date1, date2) - Returns date2 - date1 in the specified date part.
• datename (datepart, date) - Returns the name of the specified date part (such as the
month "August") of a datetime value, as a character string. If the result is numeric,
such as "12" for the day, a character string containing the number is returned.
• datepart (datepart, date) - Returns the integer value of the specified part.
• getdate () - Returns the current system date and time.
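
A few of these date functions may be combined in simple queries. The following sketches
return a date 30 days from now, the number of days elapsed since January 1, 2006, and the
name of the current month; the exact output naturally depends on the current date:

select dateadd(day, 30, getdate())
select datediff(day, '1-Jan-2006', getdate())
select datename(month, getdate())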

Mathematical functions

• abs (numeric) - Returns the absolute value of the given expression.
• acos (numeric) - Returns the angle whose cosine is the specified value.
• asin (numeric) - Returns the angle whose sine is the specified value.
• atan (numeric) - Returns the angle whose tangent is the specified value.
• atn2 (numeric1, numeric2) - Returns the angle (in radians) whose tangent is
numeric1/numeric2.
• ceiling (numeric) - Returns the smallest integer greater than or equal to the specified
value.
• cos (numeric) - Returns the cosine of the specified angle.
• cot (numeric) - Returns the cotangent of the specified angle.
• degrees (numeric) - Converts radians to degrees.
• exp (numeric) - Returns the exponential value of the specified value.
• floor (numeric) - Returns the largest integer less than or equal to the specified value.
• log (numeric) - Returns the natural logarithm of the specified value.
• log10 (numeric) - Returns the base 10 logarithm of the specified value.
• pi () - Returns the constant value 3.14159...
• power (numeric, power) - Returns the value of numeric raised to the given power.
• radians (numeric) - Converts degrees to radians.
• rand ([integer]) - Returns a random floating point value between 0 and 1, using the
optional integer as a seed.
• round (numeric, integer) - Rounds numeric to the specified number of decimal places.
• sign (numeric) - Returns positive (+1), zero (0), or negative (-1).
• sin (numeric) - Returns the sine of the specified angle.
• sqrt (numeric) - Returns the square root of the specified value.
• tan (numeric) - Returns the tangent of the specified angle.

String Functions

• ascii (char_expr) - Returns the ASCII code for the first character in the expression.
• char (integer) - Converts a single-byte integer value to a character value.
• charindex (expression1, expression2) - Searches expression2 for the first occurrence
of expression1 and returns an integer representing its starting position. If expression1
is not found, returns 0. Wildcards are treated as literals.
• lower (chars) - Converts uppercase to lowercase, returning a character value.
• ltrim (chars) - Removes leading blanks from the character expression.
• replace (expression, char1, char2) - Replaces occurrences of char1 with char2 in
expression.
• reverse (chars) - Returns the reverse of chars; if chars is "xyz", "zyx" is returned.
• right (chars, integer) - Returns the part of the character expression starting the
specified number of characters from the right.
• rtrim (chars) - Removes trailing blanks.
• space (integer) - Returns a string with the indicated number of spaces.
• str (numeric[, length[, decimal]]) - Returns a character representation of the floating
point number. length is the total number of characters to be returned; decimal is the
number of decimal digits to be returned.
• substring (expression, start, length) - Returns part of a character or binary string,
starting at position 'start'; 'length' characters are returned.
• upper (char_expr) - Converts lowercase to uppercase.
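
A few of these string functions applied to the books database might look as follows
(sketches against the booksdb tables used earlier). The first query returns trimmed,
upper-cased last names; the second returns the position of the first comma in each title
(0 if there is none); the third returns the first ten characters of each title:

select upper(rtrim(lastname)) from author
select charindex(',', title) from book
select substring(title, 1, 10) from book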

System functions

• col_name (object_id, column_id[, database_id]) - Returns the column name.
• col_length (object_name, column_name) - Returns the defined length of the column.
• datalength (expression) - Returns the actual length of the expression in bytes.
• db_id ([database_name]) - Returns the database ID number.
• db_name ([database_id]) - Returns the database name. database_id must be a
numeric expression. If no database_id is supplied, db_name returns the name of the
current database.
• host_name () - Returns the current host computer name of the client process (not the
Server process).
• isnull (expression1, expression2) - Substitutes expression2 for expression1 when
expression1 evaluates to NULL.
• object_id (object_name) - Returns the object ID.
• object_name (object_id[, database_id]) - Returns the object name.
• user - Returns the user's name.
• user_id ([user_name]) - Returns the user's ID number from the sysusers table.
• user_name ([user_id]) - Returns the user's name.

Following are some important examples that illustrate function use:

• SELECT subjectid, isnull(keywords, 'none') AS KEYWORDS from subject

Returns 'none' if the keywords column contains a null; otherwise returns keywords.

• select replace(title,',',';') as semititle from book

Returns the book titles with ';' substituted in places where a ',' occurs.

• select bookid, unitprice/100*(case when discount is null then 100
  else 100-discount end) as FinalPrice from orderdetails

Returns the bookid and price after discount of ordered books. If the discount value
contains null, the regular price is returned; otherwise the discounted price is returned.

• select title, price from book where floor(price) between 25 and 30

Returns all the books with cost in the range 25.00 - 30.99.

SQL Server functions are most useful when called from T-SQL queries. Users may obtain
data from SQL Server through select statements even when the returned data is not from a
database, in the following way:

select abs(sin(12.5))
select user
select getdate()

Operators

Defines server operators and its uses

Following is a list of operators used in WHERE clauses, with descriptions and examples:

• = - All data that equals the value.
Example: SELECT * from book WHERE authorid = 119
• <> - All data that does not equal the value (T-SQL also accepts != as a synonym).
Example: SELECT * from category WHERE categoryid != 4
• > - All data that is greater than the value.
Example: SELECT * from book WHERE pubdate > '2000'
• < - All data that is less than the value.
Example: SELECT * from custorder WHERE orderdate < '1-Jan-2006'
• >= - All data that is greater than or equal to the value.
Example: SELECT * from book WHERE pubdate >= '2000'
• <= - All data that is less than or equal to the value.
Example: SELECT * from custorder WHERE orderdate <= '1-Jan-2006'
• AND, OR - Used to combine the conditions being compared.
Example: SELECT * from book WHERE subjectid = 5 OR subjectid = 10
• BETWEEN - Between two values, including those two values.
Example: SELECT * FROM book WHERE pubdate BETWEEN '1980' AND '1990'
• LIKE - Used to search for a string of letters, numbers, or other characters.
Example: SELECT * from author WHERE lastname LIKE 'Shakespeare' OR
firstname LIKE 'William'
• IN - Used to search for a set of values.
Example: SELECT * from book WHERE subjectid IN (1,5,10,15)

The WHERE clause is one of the most useful features of the language. Try out all the
examples in the above table in a T-SQL query window and see the results. The equals, less
than/greater than signs, and between keywords are used mainly for numbers and dates while
the LIKE clause is used for comparing words and other character strings. We can use the '='
operator for comparing short strings but using LIKE makes more sense.

Most of the time, we only know part of the string we are searching for. Can SQL Server get
us information about such strings? The '%' keyword is used as a wild card in matching
strings in SQL. How would someone look for all books that begin with an 'A'?

Select * from book where title LIKE 'A%'


Select * from book where title like 'Harry Potter%'

Another wildcard is the '_' underscore. While the % sign matches any number of characters,
the '_' matches just one. The following command would return all books that start with two
letter words with a leading 'A'. We match the 'A' followed by any character followed by a
space and any sequence.

Select * from book where title LIKE 'A_ %'

To specify a range of valid characters, enclose the range in a pair of square brackets. For
example, [a-z] matches any lowercase letter, [0-9] matches any digit, and [AaBbCc1-3]
matches a-c in upper or lower case and 1, 2, or 3. A caret '^' right after the opening square
bracket negates the character set - anything but the characters within the brackets is
matched. That is, [^a-c] matches anything but the lowercase letters a through c. To match a
wildcard character itself - the square brackets, the underscore, or the percent symbol -
literally, enclose the symbol in its own pair of brackets (for example, [%] or [_]), or define
an escape character with the ESCAPE clause of LIKE.
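
For example, the following sketches use character ranges and negation against the booksdb
tables. The first command returns authors whose last name begins with A, B, or C; the
second returns authors whose last name does not begin with a letter from A through M; the
third returns books whose title literally contains a percent symbol:

Select * from author where lastname LIKE '[A-C]%'
Select * from author where lastname LIKE '[^A-M]%'
Select * from book where title LIKE '%[%]%'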

Data Insertion and Modification

A view on data insertion and modification

Central to all DML is the simple INSERT statement that inserts data into the
database. For examples, see any of the SQL scripts other than booksdbddl. The following
statement inserts the values specified in the parentheses after the keyword VALUES into the
corresponding column names listed in the parentheses before the VALUES keyword. Here,
14 is entered into authorid, 8 into subjectid, and so on.

insert into book(authorid,subjectid,title,price,pubdate)


values (14,8,'Alice''s Adventures in Wonderland',49.25,'1965')

The two single quotes (Alice''s) are used to escape the single quote. All single quotes within
column values should be escaped in this way, as most SQL variants use single quotes as
string delimiters. Also note that we are not specifying a value for the 'bookid' column; it is
an automatically incrementing identity field, so the server supplies its value. If the values of
all the columns are specified in the right order, there is no need to specify the list of column
names. In the following statement, we do not have to worry about authorid as we have set it
to be an automatically incrementing identity field.

insert into author values ('Sample','Author')

The UPDATE command modifies existing rows based on values or expressions that follow
the clause 'UPDATE <tablename> SET ...'. The rest of the syntax is virtually identical to
that of select statements. Note however that update queries cannot use aggregate functions,
joins, or order by clauses. The following statement sets the price of the book with id 1 to
$50.00:

UPDATE book SET price=50 where bookid=1

The value to be updated appears after the SET keyword. If you want to update more than one
column, you separate them by commas as follows

UPDATE book SET price=50, subjectid=2 where bookid=1

The delete command deletes rows based on conditions following 'where' statements. The
following command deletes all data from the book table. Do not try this one out!

DELETE FROM book

There is almost no situation where you would need to update all the values in a column or
delete all information from a table. Update and delete statements are more complicated than
insert statements because we first have to find the value or set of values we need to change
or remove. We do this by using the WHERE clause just the way we did in the first part of the
book to select data. The following command replaces all single quotes in the book's title
column with the backtick character ('`'). Note that we escape the single quote with another
single quote.

UPDATE book SET title=replace(title,'''','`')

Delete the new sample author you created:

DELETE FROM author WHERE LASTNAME like 'Sample%'

Suppose all books that cost $100 or more were picked for a 25% discount; the following
update would change the concerned rows in orderdetails:

UPDATE orderdetails set discount=25 where unitprice >= 100

7

Complex Queries and Views

Understanding complex queries and views in SQL

In This Chapter:

Complex Queries
Definition and overview of complex queries
A View through the Management Studio GUI
The basics of the Management Studio GUI

Complex Queries

Definition and overview of complex queries

SQL provides powerful ways to summarize data in terms of row counts, computed
averages, sums, and so on. Aggregate functions include statistical functions like min, max,
stdev, and so on. These functions are used to calculate totals over entire columns or groups
of rows. For example, let us say that the book store owner wants to know the total number
of books sold that cost $100 or more. The following query would give him this information.
The AS keyword provides an alternate name or ALIAS for the calculated value. Regular
columns may also be renamed temporarily using the AS keyword. This query
returns '12':

SELECT SUM(quantity) AS TotalBooks
from orderdetails
WHERE unitprice>= 100

To calculate the average price of all books sold that cost $100 or more, the owner may use
the following query (this returns 113.5). The round function is used to round the returned
average to two decimal places:

SELECT round(avg(unitprice),2)
AS AveragePrice
from orderdetails
WHERE unitprice >= 100

The GROUP BY clause offers a way to group data in relevant columns to calculate sums,
counts, averages, etc. by group. For example, the following command returns a subject-wise
average book price:

SELECT subjectid,round(avg(price),2) AS AveragePrice


from book
GROUP BY subjectid
order by subjectid

Following is a table that lists out common SQL aggregate functions. Try out the examples
in a Query window to see the results.

• AVG(column) - SELECT AVG(price) from book where pubdate <= '2000' - The
average price of books published in or before 2000.
• COUNT(column) - SELECT Count(bookid) from book where subjectid = 1 - The
total number of books on subject 1.
• MAX(column) - SELECT MAX(quantity) from orderdetails - The largest order for a
single book.
• MIN(column) - SELECT MIN(price) from book - The lowest priced book.
• SUM(column) - SELECT SUM(unitprice*quantity) from orderdetails group by
orderid - The value of each order before application of discount.

SQL allows nested queries in which the result of one query is obtained via a sub-query.

• The following query returns all books that belong to subjects that come under the
'Dramas and Plays' category with ID 1.
• select title, price from book where subjectid in
(select subjectid from subject where categoryid = 1)

• The following query returns details of all orders made on or after January 1, 2006.
• select * from orderdetails where orderid in
(select orderid from custorder where orderdate >= '1-Jan-2006')

The any and all keywords are used in subqueries to evaluate query results. For example, the
following subquery returns the title and price of books that cost more than all the books
with subjectid = 1.

select title, price from book where price
> all (select price from book where subjectid = 1)
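
The any keyword works analogously: instead of requiring the comparison to hold against
every value returned by the subquery, it requires it to hold against at least one. The
following sketch returns books that cost more than at least one (that is, more than the
cheapest) book with subjectid = 1:

select title, price from book where price
> any (select price from book where subjectid = 1)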

Let us consider the following query: give me the authors and subjects of all the books
under the category Fiction.

How do we get this information?

• The book table contains book titles and the author and subject ids, but no information
about categories.
• The category table simply contains the category description and id.
• The author table contains author information alone.
• The subject table contains subject descriptions along with the categoryid of the
category each subject belongs to.

To get this information using SQL, we JOIN the four concerned tables together. JOIN can be
confusing because the actual JOIN clause is not really necessary while writing JOIN queries.
However, once we have the fundamentals of combining tables straight, using JOIN becomes
easy and intuitive. First, let us look at the query that will answer the above question. Partial
results follow the query; 64 rows are returned in all:

SELECT title, subjectdesc,


lastname+', '+firstname AS AUTHOR
FROM book, author, subject, category
WHERE category.categorydesc like 'Fiction'
AND book.authorid = author.authorid
AND book.subjectid = subject.subjectid
AND subject.categoryid = category.categoryid

What was the problem we had? The table that we needed information from contained only
IDs and not names. The filter provided was a string (Fiction) and the output sought was also
string data and descriptions. The authorID-to-name mapping is in the author table while the
subjectid-to-name mapping is in the subject table. A good understanding of the concept of
foreign keys is essential to writing JOIN queries in SQL as most common JOINs are
performed using primary key-foreign key pairs. When a primary key of a table appears as a
column in any other table, it is called a foreign key. In the book table, we have two foreign
keys: subjectid and authorid. To hook up tables, all we have to do is hookup the foreign key
in one table with the corresponding primary key in the other table after the WHERE clause.
This is exactly what the following parts of the command did

AND book.authorid = author.authorid


AND book.subjectid = subject.subjectid
AND subject.categoryid = category.categoryid

In each case, we used the syntax TableName.ColumnName in order to not confuse the
database engine - the names of the columns are the same in both tables, so this is the only
way to distinguish between them.

Also, we listed all concerned tables (book, author, subject, and category) after the FROM
clause with a comma between each table's name. The hooking up of the foreign keys with
primary keys and the listing of all the tables’ names constitute the bulk of the join. What is
left over is simply setting the input and getting the output.

SELECT title, subjectdesc,


lastname+', '+firstname AS AUTHOR

The above statement gets us the output we want. We combine the lastname of the author
with the firstname and a comma in between through the '+' concatenation operator and assign
the alias 'AUTHOR' to this field. Interestingly, specifying the table name before the column
name is unnecessary here because there is only one column called 'title' and one column
called 'lastname' across all four tables. The following line indicates our input condition:

WHERE category.categorydesc LIKE 'Fiction'

Here are a few more joins that you can try out:

• SELECT title, subjectdesc,
  lastname+', '+firstname AS AUTHOR
  FROM book, author, subject, category
  WHERE category.categorydesc like 'Fiction'
  AND book.authorid = author.authorid
  AND book.subjectid = subject.subjectid
  AND subject.categoryid = category.categoryid
  ORDER BY title

All this command does is perform the same JOIN we looked at earlier. Additionally, it
sorts the result by title.

• SELECT custorder.orderid, lastname, firstname,
  sum(unitprice*quantity) as totalprice
  from customer, custorder, orderdetails
  where customer.customerid = custorder.customerid
  and custorder.orderid = orderdetails.orderid
  group by custorder.orderid, customer.lastname, customer.firstname
  order by custorder.orderid

The above query returns the value of each customer's total order with the customer
name.

• SELECT lastname, firstname, orderdate, orderid from
  custorder, customer
  where custorder.customerid = customer.customerid
  order by orderdate

Retrieves orders with dates for each customer

When the JOIN keyword is left out, SQL performs an INNER JOIN by default. That is, for
each hooked-up pair of columns, it brings up only the rows of information where the values
match in both tables. A LEFT JOIN brings up all the rows of the table on the left hand side
of the LEFT JOIN keyword, and from the table on the right only the rows where the joined
fields are equal; a RIGHT JOIN brings up all the rows of the table on the right hand side of
the RIGHT JOIN keyword, and from the table on the left only the rows where the joined
fields are equal.

Let us use an example outside our database to understand the differences between the three
types of joins.

Table Authors:

AuthorID Name

01 Sample Author 1

02 Sample Author 2

03 Sample Author 3

04 Sample Author 4

Table SubjectAuthors:

SubjectID SubjectName AuthorID

1 Horror 04

2 Children 04

3 Western 02

4 Adventure

Join Types

INNER
Example:
SELECT Authors.Name, SubjectAuthors.SubjectName
FROM Authors, SubjectAuthors
WHERE Authors.AuthorID = SubjectAuthors.AuthorID

Output

Name SubjectName

Sample Author 4 Horror

Sample Author 4 Children

Sample Author 2 Western

LEFT

Example:
SELECT Authors.Name, SubjectAuthors.SubjectName
FROM Authors LEFT JOIN SubjectAuthors
ON Authors.AuthorID = SubjectAuthors.AuthorID

Output:

Name SubjectName

Sample Author 1

Sample Author 2 Western

Sample Author 3

Sample Author 4 Horror

Sample Author 4 Children

RIGHT

Example:
SELECT Authors.Name, SubjectAuthors.SubjectName
FROM Authors RIGHT JOIN SubjectAuthors
ON Authors.AuthorID = SubjectAuthors.AuthorID

Output:

Name SubjectName

Sample Author 2 Western

Sample Author 4 Horror

Sample Author 4 Children

Adventure


A View through the Management Studio GUI

The basics of management studio GUI

• Delete the bookinfo view under the 'Views' folder by right clicking on
it and clicking delete. Confirm your choice and make sure that the view does not exist.
• Right click on the 'Views' Folder and click 'New View...'
• Hold down the shift key and click on the author, book, and subject tables. Click on the
'Add' button
• Now, click the 'close' button.
• The resultant Query designer dialogue should resemble the following screen shot. The
designer automatically performs joins on foreign keys in included tables.
• First, scroll to the 'table' column of the first row. Select 'book'.
• Now, scroll to the 'Column' field for this row and select 'title' from the list.
• Select 'Ascending' in sort type and enter '1' under order.
• Copy the following string into the 'Column' field of the next row:

dbo.author.lastname + ', ' + dbo.author.firstname

• Enter 'Author' under Alias.


• Select 'Ascending' in sort type and enter '2' under order.
• Scroll to the 'table' column of the third row. Select 'subject'.
• Select 'subjectdesc' from the list in the leftmost 'Column' field.
• Select 'Ascending' in sort type and enter '3' under order. Following is the resultant
screenshot:

Bookinfo View Design

• Note that the T-SQL for the view is displayed in the bottom window. Right click on
the View tab and click on 'Save View..'. Enter 'bookinfo' in the popup window and

click 'OK'. You have now created the view through the GUI. You may test your view
by issuing a 'select * from bookinfo' command.
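
For reference, an equivalent view may also be created directly in T-SQL. The following
sketch assumes the join columns that the designer picks up from the foreign keys; note that
a view definition itself cannot contain a plain ORDER BY clause, so the ordering chosen in
the designer is best applied when querying the view:

CREATE VIEW [dbo].[bookinfo]
AS
SELECT book.title,
author.lastname + ', ' + author.firstname AS Author,
subject.subjectdesc
FROM dbo.book
JOIN dbo.author ON book.authorid = author.authorid
JOIN dbo.subject ON book.subjectid = subject.subjectid
GO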



8

Advanced Programming with T-SQL

Writing structured programs for the SQL server DBMS

Transact SQL may be used to write structured programs for the SQL Server DBMS.
Programs may contain a sequence of commands, take input, return output, and contain
variables that hold intermediate and final values. Programs that add structure to sets of SQL
commands and carry out tasks are referred to as stored procedures, while programs that
compute and return a value are called functions. Procedures carry out tasks via conditional
constructs, looping constructs that execute a piece of code over and over again, and so on.

print('Hello World')

The above command is a T-SQL directive, not a SQL query. The 'print' statement carries
out the simple task of printing a line to the output. A stored procedure may carry out a set
of such tasks. For example, the following set of commands greets the user and displays the
current date and time. This code may be executed by running it in a SQL Query window
and hitting the execute button or the F5 key.

print('Hello ' + user + '!')


print ('It is ' + convert(char(8),getdate(),8) +
' on ' + convert(char(12),getdate(),107))

Output of greeting Code

We could define this simple set of commands as a stored procedure which we can schedule
to execute each time a user logs into the server. The CREATE PROCEDURE statement is
used to create a new procedure. Note that a procedure is created in the database your query
window is connected to, so open a new query against the database to which you wish to add
the procedure.
Right click on the database of interest and click on 'New Query' to open a query window for
that database. Once you run the following set of commands, you will be able to see a new
Stored procedure when you expand and refresh the 'Stored Procedures' folder under the
'Programmability' folder for the database under which you run the creation command.

CREATE PROCEDURE greeting


AS
BEGIN
DECLARE @this_user varchar(15)
set @this_user = user
print('Hello ' + @this_user + '!')
print ('It is ' + convert(char(8),getdate(),8) +
' on ' + convert(char(12),getdate(),107))
END

We declare a variable called this_user and set it to equal the current user (the 'user' function
returns the current user). Note that all user variables must be preceded by the symbol '@'.
We use the string concatenation operator '+' to combine literal strings such as 'Hello ' with
the values of variables such as @this_user and of functions such as getdate(). The 'CREATE
PROCEDURE greeting' command creates a procedure with the name greeting. This is followed by
the keyword 'AS', variable declarations and initializations, and finally the code within the
'BEGIN' and 'END' statements. Note that BEGIN and END denote a code block. The
EXECUTE command may be used to run a stored procedure.

EXECUTE greeting
OR
EXEC greeting

You may delete a stored procedure by issuing a 'DROP PROCEDURE' command or by right
clicking on it and selecting delete.

DROP PROCEDURE greeting
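Dropping a procedure that does not exist raises an error, so re-runnable scripts usually guard the drop. A minimal sketch (SQL Server 2005 has no DROP PROCEDURE IF EXISTS syntax, so OBJECT_ID is the usual idiom):

```sql
-- Drop the procedure only if it exists, so the script can be re-run safely.
-- OBJECT_ID returns NULL for missing objects; 'P' restricts the lookup to
-- stored procedures.
IF OBJECT_ID('dbo.greeting', 'P') IS NOT NULL
    DROP PROCEDURE dbo.greeting
```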

Creating a procedure using the GUI is very similar to creating one using a SQL query
window. Right click on the 'Stored Procedures' Link under 'Programmability' and click on
'New Stored Procedure..'. A new query window with a Stored Procedure template will be
displayed. Change the template and save the stored procedure by right clicking on the Query
window tab and selecting 'Save'.

Stored Procedures are more efficient than raw queries as the SQL Server compiles each
stored procedure just once and then reuses the execution plan. Further, in cases where the
server is accessed remotely, a single execute statement saves bandwidth unlike lengthy
strings that contain sets of SQL queries. Also, users who are given access to stored
procedures need not necessarily have access to the underlying tables. Stored procedures may
be used to provide a layer of separation and security.

A T-SQL procedure may be far more complicated, including loops and conditional
statements. It may also take input parameters and return output. The following stored
procedure takes an author's lastname and firstname as parameters and returns the total
number of the author's books that have been placed on order.

CREATE PROCEDURE book_orders


(@lastname_param VARCHAR(100),
@firstname_param VARCHAR(100))
AS
BEGIN
DECLARE @book_count int
DECLARE @prn_message varchar(300)
set @book_count = (SELECT sum(quantity)
FROM book, author, orderdetails
WHERE book.authorid = author.authorid
AND orderdetails.bookid = book.bookid
AND author.lastname = @lastname_param
AND author.firstname = @firstname_param)
set @prn_message = CASE
WHEN @book_count > 1 THEN
convert(varchar(5), @book_count) + ' books by '
+ @lastname_param + ', ' + @firstname_param + ' ordered'
WHEN @book_count = 1 THEN
'One book by '
+ @lastname_param + ', ' + @firstname_param + ' ordered'
ELSE
'No books by '
+ @lastname_param + ', ' + @firstname_param + ' ordered'
END
print(@prn_message)
END

The procedure takes as input the lastname and firstname of the author. The variable
'book_count' is set to contain the result of the join query - a count of books by the
specified author. Also note the CASE expression (a conditional expression, not a loop),
which sets a custom message (@prn_message) based on the number of books found. T-SQL cannot
implicitly concatenate an int with a string, so @book_count should be converted with
convert(varchar, ...) before concatenation. Finally, the print function outputs the message.
Run the stored procedure in the following way after creation:

exec book_orders 'Shakespeare', 'William'
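Parameters may also be passed by name rather than by position, which makes calls with many parameters easier to read; a short sketch using the procedure defined above:

```sql
-- Named parameters may appear in any order.
EXEC book_orders
    @lastname_param = 'Shakespeare',
    @firstname_param = 'William'
```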

Output of book_orders

The above procedure is limited - the caller must supply the author's exact last and first
names. How could we make it more general? We could accept just the beginning of a last name
as input. The procedure should then return a sorted list of all authors whose lastname
matches the input string, along with the number of books by each author in the orderdetails
table. Returning this result, however, involves sending back a set of rows. This may be
accomplished by using a cursor in the following way:

create PROCEDURE book_orders_extended


(@lastname_param VARCHAR(100))
AS
BEGIN
DECLARE @author VARCHAR(100)
DECLARE @book_count integer
DECLARE books_sum CURSOR FOR
SELECT lastname+', '+firstname AS AUTHOR, sum(quantity)
FROM book, author, orderdetails
WHERE book.authorid = author.authorid
AND orderdetails.bookid = book.bookid
AND author.lastname like @lastname_param+'%'
GROUP BY lastname, firstname
ORDER BY lastname, firstname
OPEN books_sum
FETCH NEXT FROM books_sum
INTO @author,@book_count
WHILE @@FETCH_STATUS = 0
BEGIN
print(convert(varchar(5),@book_count) + ' books by ' + @author)
FETCH NEXT FROM books_sum
INTO @author,@book_count
END
CLOSE books_sum
DEALLOCATE books_sum
END

We declare a cursor called books_sum and variables author and book_count. We set the cursor
books_sum equal to the set of rows returned by the query we used within the program last
time (with a few modifications - mainly, we use a wildcard match on lastname). We then open
the cursor and loop through each available row (@@FETCH_STATUS is a system variable that
remains 0 while there are more rows to fetch). Each returned row is in turn stored in the
variables author and book_count and displayed using the 'print' command. We exit the 'while'
loop when no more rows are found. Finally, we close the cursor and deallocate the memory it
occupies in the SQL Server process. The following are a couple of sample runs of the new and
improved book_orders_extended procedure. You may now look for authors merely by typing part
of the last name, in either upper or lowercase if you left the default (case insensitive)
collation as is during your installation. Otherwise, you may have to match the case of the
partial name.

exec book_orders_extended 'b'


exec book_orders_extended 'car'

Output of Extended book Orders

A function is very similar to a stored procedure. However, a function always returns a single
value through the 'return' statement. A function may be used as part of a SQL command like
any SQL Server built-in function. The following function computes and returns the
discounted price of a book given the unitprice and discount percentage.

create function apply_discount


(@unitprice money,
@discount int)
returns money as
BEGIN
DECLARE @discountprice money
SET @discountprice = @unitprice*(100-@discount)/100
RETURN @discountprice
END
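Once created, the function can be sanity-checked with a simple SELECT before it is used inside a larger query:

```sql
-- 25% off a 10.00 unit price should yield 7.50.
SELECT dbo.apply_discount(10.00, 25) AS discountedprice
```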

The following query returns the total price of each order after applying the discount. It
uses the apply_discount function to calculate the total price. The ISNULL T-SQL function
returns the discount percent if available, or 0 if the discount column contains null. Note
that we use 'dbo.apply_discount' - user-defined scalar functions must be invoked with their
schema name, here dbo. Partial results are shown underneath.

select orderid,
sum(quantity*(dbo.apply_discount(unitprice,ISNULL(discount,0))))
AS TOTALPRICE
from orderdetails
group by orderid
order by orderid

Select Query that uses custom function

A trigger database object is used to execute sets of commands each time a certain event
happens. In other words, a specified event 'triggers' execution of the commands. The general
syntax of a trigger follows. The '|' symbol stands for or - just one of the keywords should be
used:

CREATE TRIGGER <trigger_name>


ON { table | view }
{
{ { FOR | AFTER | INSTEAD OF }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
AS
<All T-SQL Statements that make up the trigger>
}

A trigger may be defined for a table or view and set to execute after or instead of an
insert, update, or delete event (the keyword FOR is a synonym for AFTER; SQL Server has no
'before' triggers). The following simple trigger prints out the name of a new author after
each insertion into the author table:

CREATE TRIGGER SHOW_AUTHOR


ON AUTHOR
AFTER INSERT
AS
BEGIN
DECLARE @author VARCHAR(100)
SET @author = (SELECT firstname + ' ' + lastname FROM Inserted)
PRINT 'New author "' + @author + '" added'
END

The above trigger fires once per INSERT statement on the AUTHOR table. It performs just one
function: printing the name of the author just added. We declare a variable called author
and set it to contain the inserted values. The "Inserted" table is a virtual SQL Server
system table that contains all the column values of the insert that fired the trigger. We
use this table to extract the author's lastname and firstname. Note that because the result
is assigned to a single variable, this trigger only handles single-row inserts; a multi-row
insert would require a set-based approach. Finally, we print a message with the name of the
new author.

Testing our Trigger

9

An Introduction to SQL Server Tools and Components

Getting to know the components and tools that simplify Database administration and
monitoring

In This Chapter:

Business Intelligence Development Studio


Using BIDS to develop Integration Services projects
SQL Server Analysis Services
Designing simple multidimensional cubes for analysis and data mining
The SQLCMD Tool and SQL Server Management Objects
Typing commands at prompt and automating repetitive SQL Server administration
related tasks
SQL Server Monitoring
Concepts of server monitoring


Business Intelligence Development Studio

Using BIDS to develop Integration Services projects

BIDS is a Microsoft Visual Studio tool that may be used to develop Integration Services
projects, Reporting Services projects, Analysis Services projects, and other database
projects. BIDS is available in the SQL Server program group after installation.
Click on Start -> SQL Server -> Business Intelligence Development Studio to open BIDS.

Let us use BIDS to create a new database project and open the book_orders_extended stored
procedure for viewing and debugging. As you may have observed, a plain query window or the
SQLCMD tool is not well suited to debugging and testing stored procedures. BIDS offers a far
better environment for working with programmable database objects.

New Database Project

Upon opening BIDS, go to the 'File' menu and click New -> Project. The new project
window pops up; expand 'Other Project Types' and click 'Database'. Specify a suitable name
and location for the project and click 'OK'.

Choose Data Source

Select 'Microsoft SQL Server' in the choose datasource dialogue and click continue. Now,
select your server from the list after pressing the down arrow next to 'Server Name' in the
'Add Database Reference' window. You should now be able to pick the 'booksdb' database
from the list under 'Select Or Enter Database Name' (click the radio button next to this entry
if it is not checked by default). Click 'OK'.

Add Database Reference

Click View -> Server Explorer. You should now be able to see the booksdb database tree.
Expand stored procedures and right click to load, edit, or debug stored procedures. If you
choose 'Step into Stored Procedures' after right clicking on book_orders_extended, you will
be prompted for an input parameter. Enter 'a' for all authors whose last name starts with 'A'.
You may now step into Stored Procedure execution line by line. This should give you an
idea about the usefulness of BIDS while working with SQL Server databases.


SQL Server Analysis Services

Designing simple multidimensional cubes for analysis and data mining

SQL Server makes the design of multidimensional cubes for OLAP-related
analysis and data mining simple and intuitive. OLAP allows users to interactively
view and analyze large data sets in data-warehouse applications. OLAP uses a relational
data warehouse as a data source and places pre-specified measures (usually Key
Performance Indicators such as sales, course grades, and so on) into an appropriate context
through dimensions (a sales dimension may be months of the year; a course grade dimension
may be a semester or perhaps weekly exams). Essentially, multidimensional 'cubes' are
created by relating measures and dimensions stored in two-dimensional database tables. The
constituent data for the cubes are pre-aggregated and analysis ready. Business users (rather
than database developers) may now use tools to browse the information contained in the
cube and perform ad hoc analysis.

The SQL Server 2005 Business Intelligence Workbench suite utilizes a SQL Server
Relational Database to hold data; Analysis Services to create a multidimensional OLAP
cube; Reporting Services to design and output professional reports from OLAP cube data;
and a Data Mining component to extract and analyze data using algorithms that focus on
specific measures.

Once appropriate dimensions and KPI measures are identified, Analysis Services may be
used to construct an OLAP cube. Wizards make dimension creation quite intuitive. First, a
data source view that imports the relevant database objects should be created. Afterwards,
the dimensions and measures may be mapped to create the cube. A language called MDX
may be used to query multidimensional cubes for complex output called 'calculated
measures'. An Excel 'Pivot' table may also be used to query the OLAP cube.

The built-in cube browser may be used to view data in the cube. Users may drag and drop
dimensions and KPIs into x and y axes to see graphs and tables with aggregate data such as
KPI measured over time, shortfall from goals and so on.


The SQLCMD Tool and SQL Server Management Objects

Typing commands at prompt and automating repetitive SQL Server administration related
tasks

The SQLCMD Tool

Many DBMS include command line tools that allow DBAs and users to work with the
system by typing commands at a prompt. SQL Server 2005 contains a powerful new
command-prompt utility called SQLCMD. This tool communicates with the database server
using OLE DB. SQLCMD may be used to run database programming and maintenance
scripts with input parameters, ad hoc queries, and more. One of its most important features
is the dedicated administrator connection, which reserves resources for an emergency
connection. Consider a server that is 'hung' or stuck on some erroneously executing slow
process. SQLCMD may be used to obtain a guaranteed connection to the stalled server. The
DBA may use system monitoring tools called Dynamic Management Views to find the process
that is consuming all the CPU and kill it without having to shut down and restart the
server.

The SQL Server Query Editor that opens when the 'New Query' icon is clicked may be used
to develop SQLCMD scripts. Simply turn on the SQLCMD mode by clicking 'SQLCMD' in
the 'Query' drop down menu. SQLCMD can also be invoked from the DOS command
prompt by typing 'SQLCMD'.
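In SQLCMD mode, lines beginning with ':' are SQLCMD directives rather than T-SQL. A minimal sketch of such a script (the server name is a placeholder; substitute your own instance):

```sql
-- Run in a query window with SQLCMD mode enabled (Query -> SQLCMD Mode).
:CONNECT localhost          -- placeholder server name
:setvar dbname booksdb      -- scripting variable, expanded as $(dbname)
USE $(dbname)
GO
SELECT COUNT(*) AS bookcount FROM book
GO
```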

SQL Server Management Objects

SQL Server Management Objects (SMO) may be used to automate repetitive or frequently
executed SQL Server administration related tasks such as new database creation, T-SQL
script execution, SQL Server Agent Job creation, backups and so on. SMOs are implemented
as sets of .NET assemblies. SMOs may be used to create custom SQL Server instance
manipulation applications and web based applications.

Windows Management Instrumentation

SQL Server configuration may be managed through a Windows Management Instrumentation (WMI)
configuration provider. DBAs may write management programs and scripts to monitor,
configure, and control SQL Server by utilizing the WMI framework. For example, a DBA may
save a snapshot of all the configuration settings of a SQL Server instance into an Excel
file using WMI.

SQL Server Monitoring

Concepts of server monitoring

SQL server provides a number of tools and views that allow DBAs to
effectively monitor the server. This section provides an introduction to SQL Server
monitoring tools. SQL Server allows DBAs to access a new set of SQL views called
catalogue views to obtain system catalog information. These read-only views display
metadata and object definitions. Many catalogue views are available; they follow the sys.*
naming convention (sys.tables and sys.databases, for example), and associated system
functions such as OBJECT_ID may be used for lookups against them.
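For example, the sys.tables catalogue view lists user tables, and a system function can resolve an object name to its id; a short sketch:

```sql
-- List user tables with their creation dates via a catalogue view.
SELECT name, create_date
FROM sys.tables
ORDER BY name

-- Look up the object id of a table through a system function.
SELECT OBJECT_ID('dbo.book') AS book_object_id
```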

System Catalogue Views

The SQL Server Profiler uses a trace file to capture the load on a server at specific points and
replay the situation. A DBA may use the profiler to analyze heavy workloads that affect
performance. The profiler may be used to diagnose performance problems, identify slow
running queries, identify the reason behind slow execution, and capture sets of commands
that lead to system failure, etc. SQL Server Profiler trace files may be saved and exported to
other formats.

Profiler Options

A SQL Server 2005 database alert may be scheduled to respond to error flags thrown by the
database when errors occur. The error flag normally contains information about error
severity. This information is relayed to the DBA through the alert. For example, an alert may
be created and signaled once an executing query crosses a certain threshold in terms of time
taken for execution. Alerts may be designed and deployed using SQL Server Agent or T-
SQL commands.
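As a rough sketch of the T-SQL route, SQL Server Agent alerts are defined with the msdb stored procedure sp_add_alert; the alert name below is a placeholder, and SQL Server Agent must be running for alerts to fire:

```sql
-- Hedged sketch: define an Agent alert that fires on severity-17 errors.
USE msdb
GO
EXEC dbo.sp_add_alert
    @name = N'Severity 17 errors',        -- placeholder alert name
    @severity = 17,                       -- fire on this error severity
    @include_event_description_in = 1     -- include description in notification
GO
```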

SQL Server 2005 provides several new Dynamic Management Views (DMVs) that allow
DBAs to monitor instances from the server level down to the database level in real time.
DMVs appear under each database's System Views folder and have names prefixed with 'dm_'.
Execution of user modules and connections may be monitored via DMVs that start with the
prefix dm_exec_. Execution-schedule and lock related information may be obtained through
DMVs prefixed with dm_os_, transaction related information via DMVs prefixed with
dm_trans_, and disk input/output operations through DMVs with the prefix dm_io_.
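For instance, the dm_exec_ family can show what is running right now and how much CPU each request has consumed:

```sql
-- Currently executing requests, heaviest CPU consumers first.
SELECT session_id, status, command, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
ORDER BY cpu_time DESC
```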

The Database Tuning Advisor (DTA) recommends streamlined database designs that optimize
data manipulation and retrieval efficiency. Specifically, the advisor provides time-bound
tuning recommendations, multi-database tuning, partition tuning, scalability suggestions,
and event tuning. Results of DTA analyses may be output as T-SQL scripts, text reports, or
XML reports. The DTA component runs outside the SQL Server processes in its own
DTAShell.exe executable.

Database Tuning Advisor

The SQL Server Management Studio includes a current activity window that displays
information about current users, locks, processes, locked objects and so on at the database
level. An event notification is a database object similar to a trigger and may be designed to
execute in response to specific DDL statements and so on. Unlike a trigger, an event
notification is not synchronously executed along with the triggering event; it may even be
sent to wait on a message queue for processing at some later time. Event notification may be
used to monitor changes to database structure and so on in complex scenarios.




10

XML Programming in SQL Server 2005

Using the universal format to represent information


In This Chapter:

XML and SQL Server


XML programming in SQL server
Configure Web Site
How to configure a web site
Create Web User
How to create a web user

XML and SQL Server

XML programming in SQL server

XML is a universal format used to represent information. XML has already emerged as a
front runner among data manipulation and data transmission tools. An XML document
consists of data encapsulated within identifying tags. The tags may contain descriptive
attributes about the data. The following is an XML representation of a Book Object from our
database. We encapsulate information about the bookid, title, author, year first published,
and category.

<book>
<id>1</id>
<title>1000 Years, 1000 People: Ranking the
Men and Women Who Shaped the Millennium</title>
<author>Gottlieb, Agnes Hooper</author>
<pubdate>1999</pubdate>
<category>Non Fiction</category>
</book>

An XML document may be represented as a tree. The root of the tree is the base or root
element, and the innermost elements form the leaves. The book element in the above XML is
the root; all the other elements are leaves. More complex documents may have many
intermediate levels. An XML document may contain any sort of information and is extremely
useful as a
universal datasource. The utility of XML documents is enhanced by XML schemas - a
Schema defines the structure of an XML document. Schemas are themselves XML
documents with the extension ".xsd". XML validators check XML documents against a
specified schema. Validators perform an important function; entire databases may be
contained in a single XML document; the integrity of the data may be easily verified using a
schema.

<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="book">
<xs:complexType>
<xs:sequence>
<xs:element name="id" type="xs:integer"/>
<xs:element name="title" type="xs:string"/>
<xs:element name="author" type="xs:string"/>
<xs:element name="pubdate" type="xs:string"/>
<xs:element name="category" type="xs:string"/>
<xs:element name="price" type="xs:decimal"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>

The above lines declare the XML version and the schema namespace that book.xsd complies
with. The schema declares an element called book. 'complexType' elements contain other
elements, and the 'sequence' directive defines the exact order of elements within the
complex type. Here, the 'book' root element contains all the book property elements. Note
that id is declared as an integer (xs:integer) while 'title' is declared as a string. The
'xs:decimal' type may be used to hold currency values like our book price element. A
validator checking XML against this schema throws an error if the structure of the XML does
not conform to the structure specified in the xsd or if any element has a different
datatype.

XML Validation Through XSD

XML stylesheets contain specifications on translating the XML document to HTML or some
other markup. XSL Transformers are used to modify XML documents according to a
provided XML stylesheet's specifications. XML Stylesheets have the extension ".xsl".
Several commercial products such as XML Spy and others offer an Environment to create
XML documents, StyleSheets, and Schemas.

XSL Transformer

The booksbyauthor stylesheet converts a list of book elements held within start and
end <booksbyauthor> tags into an HTML document with the heading 'Books By Selected
Author', displaying the books in a tabular format in the resulting HTML. Take a look at this
file along with the book XML and XSD to get a better idea of how XML stylesheets work.

SQL Server 2005 supports an XML data type. This data type may be used to hold entire xml
documents or well formed XML fragments (like the book element xml above). Further, the
xml data type column may be 'Typed' by associating it with a schema stored within a SQL
Server schema collection. All new values for the XML data typed column are validated with
the schema before input into the database.

Also, XML columns associated with a schema are automatically broken down into individual
elements and stored internally in an efficient manner. SQL Server automatically reconstructs
the XML document or fragment when queried for the same. Untyped XML is stored as a
simple character string. However, even untyped columns should contain well formed XML -
all start tags should be complemented by corresponding end tags in the correct sequence.

In this chapter, we will create a bookxml table that holds a single XML data type column.
This column (bookelement) will house XML fragments containing important book related
information. We will associate the XML fragment with an xsd and run sample pass/fail
validations. We will also use programming techniques to use an xsl stylesheet (also stored in
a database table) to convert the xml into html and display the result on a web page. A stored
procedure will extract all books by authors whose lastname starts with a given string and
return an xml string containing the returned xml fragments. Some .NET CLR routines will
be used within this procedure to convert this xml list to html and send it back to the user who
performed the search.

Let us first create a bookxml table and xml schema collection for the bookelement XML
column in the table. We will then construct XML fragments for our existing books and insert
these into the table. First, create an XML schema collection. Open Management Studio and
connect to your default server. Right click booksDB and click on 'New Query'. Copy and
paste the following query in the query window and run it by clicking '!' Execute or hitting the
F5 key.

USE [booksdb]
GO
CREATE XML SCHEMA COLLECTION [dbo].[BookSchema]
AS
N'<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="book">
<xs:complexType>
<xs:sequence>
<xs:element name="id" type="xs:integer"/>
<xs:element name="title" type="xs:string"/>
<xs:element name="author" type="xs:string"/>
<xs:element name="pubdate" type="xs:string"/>
<xs:element name="category" type="xs:string"/>
<xs:element name="price" type="xs:decimal"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
'

Expand the booksdb database and the 'Programmability' folder. Expand 'Types'. Right click
on the XML Schema Collections and click 'Refresh'. Now, expand the 'XML Schema
Collections' Folder. You will see the BookSchema collection that we have just created.

The New Schema Collection


Next, let us create a bookxml table. Right click on the 'tables' folder under booksdb and click
'New Table..'. Enter 'bookelement' in the column name and select 'XML' from the Data Type
list. Click on the box under 'Allow Nulls' so that it allows fields that are empty. Click on the
down arrow at the end of the XML Type Specification in the 'Column Properties' panel and
select 'dbo.BookSchema' from the list. Now, right click the table tab and save the table with
the name 'bookxml'. You have now created a table with a typed xml field.

bookxml Table Definition

In general, tables like 'bookxml' are indexed on a primary key for ease of retrieval. We use a
single column in our example as we will not have a lot of data in our table. Let us now
construct some xml fragments from our existing books for insertion into the bookxml table.
Run the following command in a query editor to create a list of 'insert into' statements. This
command simply retrieves data out of a query that joins the author, book, category, and
subject tables and uses the string concatenation operator and several string literals to
construct a valid book xml fragment using existing data. We are just experimenting; a real
world scenario would not require this type of insertion; data would be procured from sources
in XML format.

select 'insert into bookxml values(''<book><id>'+


rtrim(convert(char(6),bookid)) + '</id><title>'
+ title + '</title>' +
'<author>' + lastname + ', ' + firstname + '</author>' +
'<pubdate>' + pubdate +
'</pubdate><category>' + categorydesc +
'</category><price>' +
ltrim(convert(char(10),price)) + '</price></book>'')'
from book, author, subject, category where
author.authorid = book.authorid and
subject.subjectid = book.subjectid and
category.categoryid = subject.categoryid

Once you run the above query, you will see a list of generated insert statements in the
bottom panel. Right click on the top bar of the results and click 'Select All'. Right click
once more and select 'Copy'. Open a new query window by right clicking on 'booksdb' and
clicking 'New Query'. Now, paste the copied contents into the new window (Ctrl-V).

Copy Paste Generated XML

There is a small problem with our generated xml: the single quotes inside the data have not
been escaped. SQL Server treats single quotes as string delimiters; any single quote that is
part of a string to be inserted into the database (e.g. O'Donell in an author name or It's a
Long Day in a title) must be 'escaped' by doubling it. To do this, click Edit -> Find and
Replace -> Quick Replace on the top bar menu, or hold down the Ctrl key and type 'H'
(Ctrl-H). This will open the quick replace dialogue.

Fixing single Quotes in generated Insert Statements

The string {[^(]}'{[^)]} is a regular expression (in the older Visual Studio syntax used by
the editor). It tells the editor to search for occurrences of single quotes that do not
immediately follow an opening bracket or immediately precede a closing bracket - that is,
the quotes inside the inserted strings rather than the delimiting ones. Enter this string in
the 'Find' box and \1''\2 in the 'Replace' box. The replacement indicates that each match
should be replaced by \1 (the text matched within the first pair of '{}') followed by two
single quotes (in place of one) followed by \2 (the text matched within the second pair of
'{}'). Now, select 'Current Document' under 'Look In'. Expand the 'Find Options' box by
clicking on the plus sign next to it. Check the 'Use' box and select 'Regular Expressions'.
Now, click 'Replace All'. This action should show you a pop-up window with 'Replaced 20
Occurrences'. Close the pop-up and the Find-Replace window.
Now, simply select all the contents of the query window with the generated inserts and click
'!' Execute or hit the f5 key to insert all the records. You should see several '1 Row(s)
affected' messages.
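As an aside, SQL Server 2005 can generate such fragments directly with the FOR XML PATH clause, avoiding hand-built string concatenation; a hedged sketch against the same join (the column aliases shape the element names):

```sql
-- Each row becomes a <book> fragment whose child elements take their
-- names from the column aliases.
SELECT bookid AS id,
       title,
       lastname + ', ' + firstname AS author,
       pubdate,
       categorydesc AS category,
       price
FROM book, author, subject, category
WHERE author.authorid = book.authorid
  AND subject.subjectid = book.subjectid
  AND category.categoryid = subject.categoryid
FOR XML PATH('book')
```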

The only reason we were able to insert all these XML fragments is because they comply with
the Schema Object dbo.BookSchema. Try to execute the following commands. One has a
value for price that is not of type 'xs:decimal' and the other has a pubdate element before the
author element. Both fail with the error shown below each. Thus, no invalid data may be
entered into the bookxml table.

insert into bookxml values('<book><id>235</id>


<title>What Is Life</title>
<author>Schrodinger, Erwin</author>
<pubdate>2044</pubdate>
<category>Non Fiction</category>
<price>##badprice##</price></book>')
Msg 6926, Level 16, State 1, Line 1
XML Validation: Invalid simple type value: '##badprice##'.
Location: /*:book[1]/*:price[1]
insert into bookxml values('<book><id>235</id>
<title>What Is Life</title>
<pubdate>2044</pubdate>
<author>Schrodinger, Erwin</author>
<category>Non Fiction</category>
<price>22.22</price></book>')
Msg 6965, Level 16, State 1, Line 1
XML Validation: Invalid content. Expected element(s):author
where element 'pubdate' was specified.
Location: /*:book[1]/*:pubdate[1]
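The xml data type also carries methods such as value() that extract individual pieces of each stored fragment without reconstructing the whole document; a short sketch against our bookxml table:

```sql
-- Pull the title and price out of each stored <book> fragment.
-- value() takes an XQuery path and the SQL type to convert the result to.
SELECT bookelement.value('(/book/title)[1]', 'varchar(200)') AS title,
       bookelement.value('(/book/price)[1]', 'money')        AS price
FROM bookxml
```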

We will now create a table that will hold XSL stylesheet specifications for XML data
columns. Create the table in booksdb using the following command. This command creates a
table with an automatically incremented id, a column name in the xmlcol field (identifying
the xml column that the xsl corresponds to), and an xml-typed xslspec column that contains
the xsl stylesheet used to transform the contents of the named column to HTML.

USE [booksdb]

GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[xsl](
[xslid] [smallint] IDENTITY(1,1) NOT NULL,
[xmlcol] [varchar](50) NOT NULL,
[xslspec] [xml] NOT NULL,
CONSTRAINT [PK_xsl] PRIMARY KEY CLUSTERED
(
[xslid] ASC
)WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF

Save the bookxsl.sql file onto your computer, open it using File -> Open -> File, and
execute its contents by selecting them and clicking '!' Execute or pressing the F5 key.
This runs an insert command that simply inserts a row with the xsl stylesheet we will use
later. Make sure that the row has been entered into the table.
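If you only want to see the shape of that insert, here is a hypothetical example. The stylesheet below is a minimal stand-in written for illustration (it just renders book titles as an HTML list) and is not the stylesheet shipped in bookxsl.sql:

```sql
-- Illustrative only: a row of the kind bookxsl.sql inserts.
-- This stand-in stylesheet just renders titles as an HTML list.
INSERT INTO [dbo].[xsl] (xmlcol, xslspec)
VALUES ('bookelement',
'<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/booksbyauthor">
    <ul>
      <xsl:for-each select="book">
        <li><xsl:value-of select="title"/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>')
```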

CLR stands for Common Language Runtime. This is a .NET mechanism that allows .NET
code to be registered and used within SQL Server. The code segment within the
XSLTTransform.cs file contains code (from msdn.com) that calls an XML transformer with
two input parameters - the xml file and xsl to use for conversion. A function called transform
within this code returns an HTML representation of the input XML. Knowing how this code
works is really not necessary; it is enough to know that we will create an assembly based on
this code within the SQL server and use its functions in a Stored Procedure. Save the file
onto your computer.

Open a command window (Run -> Cmd) and go to the
'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727' folder using the 'cd' command. If you
do not have the v2.0.50727 folder, browse to 'C:\WINDOWS\Microsoft.NET\Framework\' in
Windows Explorer, look for a directory with a 'v2' prefix, and 'cd' to that path instead.
Compile the above code with the C# compiler (csc.exe) found in that folder, passing it the
path to XSLTTransform.cs on your computer. This will create
an XSLTTransform.dll file in the 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727'
directory. Move the XSLTTransform.dll file to an appropriate location on your computer.
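The compile step just described might look like the following; this is a sketch that assumes the source file sits at C:\XSLT\XSLTTransform.cs (adjust the path to match your computer):

```bat
REM Run from the C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727 folder.
REM /target:library produces a .dll rather than an .exe.
csc /target:library /out:XSLTTransform.dll C:\XSLT\XSLTTransform.cs
```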
Now, go back to Management Studio, open a new query window for booksdb and execute
the following command:

Assembly Creation

CREATE ASSEMBLY XsltTransform
from 'C:\\XSLT\\XSLTTransform.dll'

The path to the .dll file should be customized to the path on your computer; double
backslashes work here, although T-SQL string literals do not actually require escaping
backslashes. The XSLTTransform.cs file contains a 'Transform'
function that does the work of XML to HTML translation. Information about this function is
held in the assembly we just created. However, we need to map it to a SQL Server function
to use it in stored procedures and commands. Execute the following command on a booksdb
query window to map this function to an applyXsltTransform function that takes the xml and
xsl stylesheet as input. The last line maps the Transform function within the XSLTTransform
assembly to the new function. This line has the syntax
<assembly_name>.<class_name>.<function_name>

create function ApplyXsltTransform( @inputXML xml, @inputTransform xml )
returns xml
as external name XSLTTransform.XSLTTransform.Transform
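Once the assembly and function exist, ApplyXsltTransform can be called like any scalar function. Here is a quick smoke test; the XML and stylesheet literals are made up for illustration:

```sql
-- Smoke test for the CLR mapping; inputs are illustrative only
DECLARE @x xml, @s xml
SET @x = '<book><title>Walden</title></book>'
SET @s = '<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/book">
    <p><xsl:value-of select="title"/></p>
  </xsl:template>
</xsl:stylesheet>'
SELECT dbo.ApplyXsltTransform(@x, @s)  -- expect something like <p>Walden</p>
```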

Once you have created the assembly and the function, save the findbooksbyauthors.sql file
onto your system and open it by clicking on the File -> Open -> File top bar menu in
Management Studio. Execute the contents of the file by hitting the F5 key or clicking '!'
Execute.

Take a look at the code in the stored procedure. The procedure takes as input a string with
characters from the required author's last name. It returns as output a long xml string
corresponding to an HTML file that displays information about the books by the given
author(s) in tabular format. Consider the 'XQuery' command that gets the corresponding
entries from the bookxml table. XQuery is a language used to query XML documents:

set @strLength = len(@search)
set @search = UPPER(LEFT(@search, 1)) +
LOWER(SUBSTRING(@search, 2, (@strLength - 1)))
set @bookXMLString = '<booksbyauthor>'
DECLARE books_sum CURSOR FOR
SELECT convert(varchar(500),bookelement)
FROM bookxml
WHERE bookelement.exist
('/book/author[substring(., 1,sql:variable("@strLength")) =
sql:variable("@search")]') = 1

The initial two lines obtain and save the length of the input parameter into a @strLength
variable and format @search to be a lowercase string with an uppercase initial character (the
format of our author name - an initial upper case letter followed by all lowercase). Note that
XQuery, unlike the default collation of SQL server, is case sensitive. This means that 'sh' or
'sH' will not match 'Shakespeare' or 'Shaw'. We reformat @search to take care of this issue.
We declare a bookXMLString variable and set it to a root '<booksbyauthor>' element. Then,
we declare a cursor for a select query that uses XPath. The bookelement.exist function
returns 1 each time it finds an 'author' element within a 'book' element ('/book/author[..')
whose first few letters, up to the length of the input string (substring(.,
1,sql:variable("@strLength"))), match the string we got as input
('...sql:variable("@search")]'). Note that stored procedure variables are used in SQL Server
XQueries by wrapping the variable name in sql:variable(), with the name inside parentheses
and double quotes.

We next iterate through the books_sum cursor and append each found xml fragment to the
bookXMLString variable. Once we close the cursor after processing every returned row, we
add a closing tag of the root element to bookXMLString (select @bookXMLString =
@bookXMLString + '</booksbyauthor>')
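The iteration itself is a standard T-SQL cursor loop; a sketch of what it looks like follows (the fragment variable name is assumed, not taken from the file):

```sql
-- Append each matching book fragment, then close the root element
DECLARE @fragment varchar(500)
OPEN books_sum
FETCH NEXT FROM books_sum INTO @fragment
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @bookXMLString = @bookXMLString + @fragment
    FETCH NEXT FROM books_sum INTO @fragment
END
CLOSE books_sum
DEALLOCATE books_sum
SET @bookXMLString = @bookXMLString + '</booksbyauthor>'
```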

The next few lines use the ApplyXsltTransform function to transform the @bookXMLString
variable's XML to HTML, using the XSL stylesheet stored for the bookelement column in the
xsl table we created earlier, and store this HTML in the output parameter @bookHTML for
retrieval.

select @xml = (select @bookXMLString)
select @xslt = (select xslspec from xsl where xmlcol = 'bookelement')
set @bookHTML = dbo.ApplyXsltTransform( @xml, @xslt )

Our approach is unusual in that all application steps, including HTML generation, are
performed in the database layer itself. This means that very little application code is required
to deploy our application. A more typical use for XML data is to create web services that
directly consume XML output (we would not have to transform the data in such cases). Web
services may be designed to use HTTP endpoints created for our stored procedures.

To test our stored procedure, run it using the following command. Right click on the result
and click 'Select All'; right click again and click 'Copy'. Paste into Notepad or another text
editor to view the output HTML.

declare @myHTML xml
exec findbooksbyauthor 'pre', @myHTML OUTPUT
select @myHTML

Deploying our stored procedure on the web is quite easy.

Discuss this in the forums

10
Configure Web Site

How to configure a web site

• You should have IIS installed on your system. Create a folder named
'findAuthor' under C:\inetpub\wwwroot\.
• Copy the files findAuthor.aspx, findAuthor.htm, and web.config to this folder.
• Open the findAuthor.aspx file using any text editor. Search for the string
'<YOUR_SQLSERVER_NAME>' and change it to reflect the name of your SQL
Server Installation. This is the name you use to login to your default server through
management studio.
• Open Settings -> Control Panel. Double click administrative tools. Double Click
Internet Information Services.
• Expand the main link and the 'Websites' link. You will see the 'FindAuthor' folder.
Right click on this folder and click 'properties'.
• In the properties dialogue, click on the 'Create' button under application settings and
click OK.

The findAuthor.htm file contains a simple web form that takes the author's name as input.
The findAuthor.aspx ASP.NET file (the action specified upon form submission) opens a
connection to the booksdb database, passes the input specified by the user to the
findbooksbyauthor stored procedure, retrieves the output and prints it onto the web page
using a 'response.write' statement.

10

Create Web User

How to create a web user identity

• Go to the management studio. Expand the main security tab, right click 'logins', and
click 'New Login...'.
• In the dialogue, enter '<Your_Windows_loginname>\ASPNET' next to 'Name:' (replace
the placeholder with your machine's name). Click the radio button next to 'Windows' to
check it if it is not already checked by default. Select 'booksdb' for the default database
and click 'OK'.
• Now, right click on the 'booksdb' database and click 'properties'. Click on the
'permissions' tab in the left panel in the property dialogue.
• Click on the user name to select the user you just created.
• In the bottom 'Permissions' panel, scroll to the 'Connect' row and click to check the
box under the 'Grant' column if it is not already checked. Similarly, check the
'Execute' box. One neat thing about using stored procedures instead of select
statements is that SPs allow you to give a minimal set of permissions to the web user
to prevent malicious actions.

You are now ready to browse your application. To run the application, simply open an
Internet Explorer Window and browse to the website
http://www.edumax.com/assets/files/sql-server/findAuthor.htm and enter a short string like
'ad' or 'sh' or one corresponding to any Author's last name and click the 'Get books' button.
You will see the output of the stored procedure for your query.

What is Business Intelligence?

Have you missed out on Business Intelligence for your business or organisation?

What is Business Intelligence

Here is a definition of Business Intelligence (BI). It is a process for increasing the
competitive advantage or performance of a business or organisation by the intelligent use of
available data in decision making.

BI helps organisations achieve their goals and objectives by giving them, firstly, a deeper
understanding of what is going on and, secondly, feedback on how progress towards those
goals is being made. It is a key component of business performance management.

First-class leadership is clearly required to establish business intelligence in an
organisation.

Another definition of business intelligence: a broad category of software applications and
technologies for gathering, storing, analysing, and providing access to data to help managers
and staff make better business decisions. BI can include decision support systems, query and
reporting, online analytical processing (OLAP), statistical analysis, forecasting, and data
mining.


Business intelligence has two main functions:

Here is what business intelligence is in practical terms:

• Routine Information Delivery through reports or dashboards
• Supporting Decision Making through ad hoc query and analysis

Why business intelligence?

You have to ask not only 'what is business intelligence?' but also 'why business
intelligence?'

Businesses and organisations are constantly faced with changing circumstances and
challenges. Nothing remains static for long.

Because of this changing environment, businesses and organisations need to be continually
making decisions to adjust their actions, to grow profitably or to enhance the services they
provide.

BI can help here on two counts by utilising the data held within the organisation, provided
that it is reasonably clean and accurate:

• Establishing Early Warning Systems and Detection of Trends
• Finding relevant Patterns and Insights

Establishing a Warning System

In establishing warnings, these questions must be asked:

What drives performance in your organisation? What constitutes top performance?

The goal of any BI or performance management system is to align the workforce around a set
of strategies, objectives and measures to achieve and maintain improvements. Ideally, before
a performance management system is set up, some strategic planning exercise should take
place using, say, a balanced scorecard or strategy mapping process.

A set of management measures should be derived so that management can continually assess
the performance of the organisation.

Each objective or goal derived from such an exercise should be associated with one or more
measures, e.g. the hours of training per staff member per year. The BI system can then be
used to monitor or track these measures.

Achieving Top Performance

To reach top performance it is sometimes necessary to participate in a benchmarking
exercise to reveal what is possible and realistic.

The BI system can be used to set or find targets. For example, if you had several units or
professionals doing the same activity, you could use BI to find the best performers and use
them as a benchmark.

It takes strong leadership to drive your business through data.

Note that BI can still be applied successfully at the departmental or sub-departmental level,
and it is sometimes best to start the BI process in a small way like this to gain some early
successes.

Unexpected patterns or trends

Unexpected patterns or trends can be discovered using BI, especially if you have a powerful
and easy-to-use ad hoc querying or graph presentation system.

By inspecting the data in a BI system where you can readily pose ad hoc questions, you may
discover a difference in performance which needs investigation. Is there a genuine
difference? Has all the data been recorded correctly?

These discoveries can sometimes lead to breakthroughs in performance, e.g. an unnoticed
growing market segment, or outstanding performance from one business unit.

The above summary answers the question: what is business intelligence.



