Page 1 of 226

Implementing High Availability


Item: 4 (Ref:Cert-70-432.8.1.3)

You are the database administrator of your company. You want to configure an eight-node failover cluster for SQL
Server 2008 in your network.

Which version of SQL Server 2008 should you use to achieve this objective?
- SQL Server 2008 Express Edition
- SQL Server 2008 Enterprise Edition
- SQL Server 2008 Standard Edition
- SQL Server 2008 Web Edition

Answer:
SQL Server 2008 Enterprise Edition

Explanation:
You should use SQL Server 2008 Enterprise Edition. Failover clusters are used to provide high availability of data. To install a SQL Server 2008 failover cluster, you must have Microsoft Cluster Server (MSCS) configured on at least one of the nodes in the cluster. Additionally, you must be using SQL Server 2008 Enterprise Edition or SQL Server 2008 Standard Edition in conjunction with MSCS. The Enterprise Edition of SQL Server supports eight-node failover clusters, and the Standard Edition supports two-node failover clusters. In this scenario, you want to install an eight-node failover cluster. Therefore, you must use SQL Server 2008 Enterprise Edition. After installing a failover cluster, you should confirm that the nodes are able to fail over. To test failover, you can manually cause a hardware error to occur. Several conditions can cause hardware errors, including a broken connection or network cable, a faulty network interface card, changes made to the firewall configuration, and unavailability of the drive on which the transaction log is stored.

You should not use SQL Server 2008 Express Edition, SQL Server 2008 Standard Edition, or SQL Server 2008
Web Edition because these editions of SQL Server 2008 do not support eight-node clusters. Only the Enterprise
Edition or the Standard Edition of SQL Server 2008 can be used to configure failover clusters.

Objective:
Implementing High Availability

Sub-Objective:
Implement database mirroring.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > Planning a SQL Server
Installation > High Availability Solutions Overview > Getting Started with SQL Server 2008 Failover Clustering >
Before Installing Failover Clustering

Item: 8 (Ref:Cert-70-432.8.4.1)

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.

You are the database administrator of your company. You have configured merge replication on an instance of SQL Server 2008.

You configure the Publisher and Distributor for error reporting and monitoring purposes. You create a publication
with the default retention period. Users want to be able to subscribe to the publication for a longer period of time.
You want to modify the expiration period for the subscription to the publication.

Which system stored procedure should you use?


- sp_changesubscription
- sp_changemergepublication
- sp_changesubscriptiondtsinfo
- sp_change_subscription_properties

Answer:
sp_changemergepublication

Explanation:
You should use the sp_changemergepublication system stored procedure. A subscription is a request to obtain a copy of the data and database objects in a publication. When creating a publication, you can specify the retention period for subscriptions. The retention period, or expiration period, specifies the period of time after which a subscription expires and is removed. When a subscription to a merge publication expires, it must be reinitialized because its metadata is removed when the subscription expires. When a subscription to a transactional publication expires, it must be re-created and synchronized because this type of subscription is dropped by the Expired subscription clean up job on the Publisher. To change the expiration period for subscriptions to a merge publication, you should execute the sp_changemergepublication system stored procedure, specifying retention for the @property parameter and the desired number of days for the new subscription expiration period for the @value parameter.
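Based on the parameters described above, the call might look like the following sketch; the publication name and the 30-day value are assumed for illustration only:

```sql
-- Run at the Publisher, on the publication database.
-- Sets the subscription expiration (retention) period to 30 days.
-- N'ProductsPub' is a hypothetical publication name.
EXEC sp_changemergepublication
    @publication = N'ProductsPub',
    @property    = N'retention',
    @value       = N'30';  -- new expiration period, in days
```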

You should not use the sp_changesubscription, sp_changesubscriptiondtsinfo, or sp_change_subscription_properties system stored procedures because none of these stored procedures can be used to modify the expiration period for the subscription to a merge replication. The sp_changesubscription system stored procedure modifies the properties of a snapshot or transactional push subscription or a pull subscription that is involved in queued updating transactional replication. The sp_changesubscriptiondtsinfo system stored procedure modifies the Data Transformation Services (DTS) package properties of a subscription. The sp_change_subscription_properties system stored procedure updates information for pull subscriptions.

Objective:
Implementing High Availability

Sub-Objective:
Implement replication.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Replication > Development > Designing and Implementing:
How-to Topics > Creating, Modifying, and Deleting Publications and Articles > How to: Set the Expiration Period
for Subscriptions (Replication Transact-SQL Programming)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Replication > Development > Designing and Implementing >
Subscribing to Publications > Subscription Expiration and Deactivation


Item: 11 (Ref:Cert-70-432.8.1.1)

You are the database administrator for your company. You administer a SQL server named Sql1 that contains the Prod1 database.

The Prod1 database is configured to use the Full Recovery model and implements a backup strategy to provide
minimum downtime if a failure occurs. You want to implement database mirroring to maximize the performance of
this database. The mirror database Prod2 is maintained on the Sql2 server.

The following conditions must be met:

- The database should be made available to the users even if the mirrored server becomes unavailable.
- Service should be forced on the mirrored server if the principal server fails.

What should you do?


- Set the transaction safety attribute to OFF.
- Set the transaction safety attribute to FULL.
- Configure a witness server for the mirrored session.
- Configure the mirrored sessions to operate in synchronous mode.

Answer:
Set the transaction safety attribute to OFF.

Explanation:
To meet the specified conditions, you should set the transaction safety attribute to OFF. This will enable the
mirrored database to operate in asynchronous mode. By configuring database mirroring, you can operate
mirrored sessions in asynchronous or synchronous mode. Operating mirrored sessions in asynchronous mode
improves database performance. In asynchronous mode, the transaction safety is set to OFF, and the principal
server does not wait for a transaction to be committed on the mirrored server. This minimizes the time required to
complete a transaction on the principal server and improves the performance. When the mirrored sessions are
operating in asynchronous mode, the principal server continues to function even if the mirrored server becomes
unavailable. In this scenario, the performance of the database is more critical than availability. If the principal
server fails, service can be forced on the mirrored server only if the mirrored sessions are operating in
asynchronous mode.
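Using the database name from the scenario, turning transaction safety off could be done with a statement like this, run on the principal server once the mirroring partnership exists:

```sql
-- Run on the principal server (Sql1) after the mirroring partnership
-- is established. SAFETY OFF switches the Prod1 mirroring session to
-- asynchronous (high-performance) mode.
ALTER DATABASE Prod1 SET PARTNER SAFETY OFF;
```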

You should not set the transaction safety attribute to FULL because it will configure the mirrored session to
operate in synchronous mode. Synchronous mode ensures high availability of the database and does not ensure
high performance. In synchronous mode, the transactions will be committed on the principal server only when
they are copied to the mirrored server. This increases the time required to commit transactions and hampers
database performance.

You should not configure a witness server for the mirrored session. A witness server is used while configuring
database mirroring to ensure high availability of the database. A witness server can be used only in synchronous
mode. The witness server constantly monitors the mirrored sessions and initiates an automatic failover if the
principal server fails. This process ensures maximum availability of the database. This scenario demands better
performance of the database. Therefore, you should not configure a witness server.

You should not configure the mirrored sessions to operate in synchronous mode because synchronous mode does not provide enhanced performance of the database. You should configure the mirrored sessions to operate in synchronous mode when you want increased database availability or increased protection for database transactions.

Objective:
Implementing High Availability

Sub-Objective:
Implement database mirroring.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Overview > Data Mirroring Sessions > Asynchronous Database Mirroring (High-
Performance Mode)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Overview > Data Mirroring Sessions > Synchronous Database Mirroring (High-
Safety Mode)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Overview

Item: 42 (Ref:Cert-70-432.8.1.2)

You are the database administrator for your company. You have set up database mirroring to increase the
availability of your database. You have a primary SQL Server 2008 instance named Primary1 and a mirroring
database instance named Secondary1.

You want to configure the database servers for high availability. To do this, you have decided to add a witness
server to your mirroring configuration. You have a SQL Server instance named DbInstance1 that will be
configured as the witness server. This instance may also act as a secondary server in some events.

You must create an endpoint named MirrorEndpoint on this instance.

Which statement should you execute to achieve this?


CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS TCP (LISTENER_PORT=1500)
FOR DATABASE_MIRRORING (
AUTHENTICATION=WINDOWS KERBEROS,
ROLE=WITNESS);

CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS HTTP (
AUTHENTICATION=(KERBEROS),
CLEAR_PORT=1500)
FOR DATABASE_MIRRORING (ROLE=ALL, MESSAGE_FORWARDING=ENABLED);

CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS TCP (LISTENER_PORT=1500)
FOR DATABASE_MIRRORING (
AUTHENTICATION=WINDOWS KERBEROS,
ROLE=PARTNER);

CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS TCP (LISTENER_PORT=1500)
FOR DATABASE_MIRRORING (
AUTHENTICATION=WINDOWS KERBEROS,
ROLE=ALL);

Answer:
CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS TCP (LISTENER_PORT=1500)
FOR DATABASE_MIRRORING (
AUTHENTICATION=WINDOWS KERBEROS,
ROLE=ALL);

Explanation:
You should execute the following statement:

CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS TCP (LISTENER_PORT=1500)
FOR DATABASE_MIRRORING (
AUTHENTICATION=WINDOWS KERBEROS,
ROLE=ALL);

To create an endpoint on a witness server, you must configure a transport protocol and specify a listener port. A database mirroring endpoint listens only on the TCP protocol; the HTTP protocol cannot be used for database mirroring. In this scenario, DbInstance1 can act as a partner or as a secondary server in other mirroring configurations. Therefore, you must specify the ROLE=ALL clause on a TCP port. The ROLE=ALL clause signifies that this instance can be used as both a partner and a witness.

You should not execute the following statement:

CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS TCP (LISTENER_PORT=1500)
FOR DATABASE_MIRRORING (
AUTHENTICATION=WINDOWS KERBEROS,
ROLE=WITNESS);

In this statement, the ROLE=WITNESS clause is specified. This will create an endpoint that can be used only by a witness server. In this scenario, the instance for which the endpoint is being created will be used as both a witness server and a partner server.

You should not execute the following statement:

CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS HTTP (
AUTHENTICATION=(KERBEROS),
CLEAR_PORT=1500)
FOR DATABASE_MIRRORING (ROLE=WITNESS, MESSAGE_FORWARDING=ENABLED);

This statement is syntactically incorrect. It attempts to create an endpoint on the HTTP protocol, but an endpoint created for database mirroring can listen only on the TCP protocol. Also, the MESSAGE_FORWARDING clause is not allowed when you create an endpoint for a database mirroring configuration.

You should not execute the following statement:

CREATE ENDPOINT MirrorEndpoint
STATE=STARTED
AS TCP (LISTENER_PORT=1500)
FOR DATABASE_MIRRORING (
AUTHENTICATION=WINDOWS KERBEROS,
ROLE=PARTNER);

In this statement, the ROLE=PARTNER clause is specified. This will create an endpoint that can be used only by a partner server. In this scenario, the instance for which the endpoint is being created will be used both as a partner server and as a witness server.

Objective:
Implementing High Availability

Sub-Objective:
Implement database mirroring.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Deployment > Setting Up Database Mirroring > Database Mirroring Endpoint

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE ENDPOINT (Transact-SQL)

Item: 55 (Ref:Cert-70-432.8.2.3)

You are the database administrator of Nutex Corporation. The company's network consists of a single Active
Directory domain named nutex.com.

You are in the process of creating a failover cluster for SQL Server 2008. You install SQL Server 2008 on three
computers named SQL1.nutex.com, SQL2.nutex.com, and SQL3.nutex.com. You create a three-node failover
cluster instance named SQLFC.nutex.com. You want to configure encryption for the failover cluster.

What should you do?


- Obtain a server certificate with the name SQLFC, and install it on SQL1.nutex.com, SQL2.nutex.com, and SQL3.nutex.com.
- Obtain a server certificate with the name SQLFC.nutex.com, and install it on SQL1.nutex.com, SQL2.nutex.com, and SQL3.nutex.com.
- Obtain a server certificate with the name nutex.com, and install it on SQL1.nutex.com, SQL2.nutex.com, and SQL3.nutex.com.
- Obtain a server certificate with the fully qualified domain name (FQDN) of each node in the cluster. Ensure that each node has a certificate issued to the other two nodes.

Answer:
Obtain a server certificate with the name SQLFC.nutex.com, and install it on SQL1.nutex.com, SQL2.nutex.com, and SQL3.nutex.com.

Explanation:
You should obtain a server certificate with the name SQLFC.nutex.com and install it on SQL1.nutex.com,
SQL2.nutex.com, and SQL3.nutex.com. Failover clusters provide high availability for an entire instance of SQL
Server. To be able to install a failover cluster, you must have Microsoft Cluster Server (MSCS) configured on at
least one of the nodes in the cluster. When you want to configure encryption for a failover cluster, you must obtain
a server certificate with the FQDN of the failover cluster instance and install the certificate on all nodes in the
failover cluster. In this scenario, the FQDN of the failover cluster instance is SQLFC.nutex.com. Therefore, you
should obtain a certificate with the name SQLFC.nutex.com and install it on SQL1.nutex.com,
SQL2.nutex.com, and SQL3.nutex.com. After installing the certificate, you should select the Force protocol
encryption check box in SQL Server Configuration Manager to enable encryption for the failover cluster. The
Force protocol encryption check box is available on the Flags tab in the SQL Native Client 10.0
Configuration dialog box. To open the SQL Native Client 10.0 Configuration dialog box, you should right-click
the SQL Native Client 10.0 Configuration node in SQL Server Configuration Manager and select the Properties
option.

You should not obtain a server certificate with the name SQLFC or nutex.com and install it on SQL1.nutex.com,
SQL2.nutex.com, and SQL3.nutex.com. When you want to configure encryption for a failover cluster, you must
obtain a server certificate with the FQDN of the failover cluster instance and install the certificate on all nodes in
the failover cluster. In this scenario, the FQDN of the failover cluster instance is SQLFC.nutex.com. Therefore,
you should obtain a certificate with the name SQLFC.nutex.com and install it on SQL1.nutex.com,
SQL2.nutex.com, and SQL3.nutex.com.

You should not obtain a server certificate with the FQDN of each node in the cluster and ensure that each node has a certificate issued to the other two nodes. When you want to configure encryption for a failover cluster, you must obtain a server certificate with the FQDN of the failover cluster instance and install the certificate on all nodes in the failover cluster. In this scenario, the FQDN of the failover cluster instance is SQLFC.nutex.com. Therefore, you should obtain a certificate with the name SQLFC.nutex.com and install it on SQL1.nutex.com, SQL2.nutex.com, and SQL3.nutex.com.

Objective:
Implementing High Availability

Sub-Objective:
Implement a SQL Server clustered instance.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > Planning a SQL Server
Installation > High Availability Solutions Overview > Getting Started with SQL Server 2008 Failover Clustering >
Before Installing Failover Clustering

Item: 58 (Ref:Cert-70-432.8.2.2)

You are the database administrator of your company. You want to configure high availability for a SQL Server 2008 database named Corpdb by using database mirroring. You want to configure the database mirroring sessions for automatic failover. You must also improve the reliability of automatic failover.

Which conditions must be met to achieve the stated objectives? (Choose three. Each correct answer represents
part of the solution.)
- The principal database and the mirror database must be stored on a single computer.
- The principal database and the mirror database must be stored on different computers.
- The database mirroring session must be running in synchronous mode.
- The database mirroring session must be running in asynchronous mode.
- The database mirroring session must have a witness server.

Answer:
The principal database and the mirror database must be stored on different computers.
The database mirroring session must be running in synchronous mode.
The database mirroring session must have a witness server.

Explanation:
To achieve the objectives stated in this scenario, the principal database and the mirror database must be stored
on different computers, the database mirroring session must be running in synchronous mode, and the database
mirroring session must have a witness server. Failover clusters provide high availability for an entire instance of
SQL Server, whereas database mirroring provides high availability for a single database. When you want to use
database mirroring with failover clusters, both the principal server and mirror server must reside on different
clusters. However, a mirroring session can also be established when one partner resides on the failover clustered
instance of a cluster and the other partner resides on a separate, unclustered computer. When you use database
mirroring with failover clusters, the database mirroring sessions can be configured for automatic failover. To
support automatic failover, the database mirroring session must be running with a witness in synchronous or high-
safety mode. In synchronous or high-safety mode, a transaction is committed on both servers, which ensures that
the mirror database is in synchronization with the principal database. To improve the reliability of automatic
failover, the principal database and the mirror database must reside on different computers.
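Using the scenario's database name, the synchronous-mode and witness requirements could be configured with statements along these lines; the witness endpoint address is an assumed example:

```sql
-- Run on the principal server after the mirroring partnership is established.
-- SAFETY FULL places the Corpdb session in synchronous (high-safety) mode.
ALTER DATABASE Corpdb SET PARTNER SAFETY FULL;

-- Assign a witness so the session supports automatic failover.
-- The witness server address below is hypothetical.
ALTER DATABASE Corpdb SET WITNESS = 'TCP://witness1.example.com:5022';
```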

The option stating that the principal database and the mirror database must be stored on a single computer is
incorrect because this will not improve reliability of automatic failover. When both the principal and mirror
databases are stored on a single computer, both databases will become unavailable if the computer fails. In this
case, automatic failover will not occur.

The option stating that the database mirroring session must be running in asynchronous mode is incorrect. To
support automatic failover, the database mirroring session must be running with a witness in synchronous or high-
safety mode. In asynchronous or high-performance mode, the transactions commit without waiting for the mirror
server to write the log to disk. This will not guarantee that the mirror database is updated with all the transactions
committed on the principal database.

Objective:
Implementing High Availability

Sub-Objective:
Implement a SQL Server clustered instance.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > High
Availability: Interoperability and Coexistence > Database Mirroring and Failover Clustering

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Overview > Database Mirroring Sessions > Role Switching During a Database
Mirroring Session > Automatic Failover


Item: 72 (Ref:Cert-70-432.8.2.4)

You are the database administrator of your company. You create a two-node SQL Server 2008 failover cluster
instance and configure database mirroring for a database named Products. The database mirroring sessions are
also configured to support automatic failover. You are relocating the two SQL servers. You want to disable
automatic failover.

Which actions could you take to accomplish this? (Choose two. Each correct answer presents a complete
solution.)
- Connect to either of the partner servers, and run the ALTER DATABASE Products SET WITNESS OFF statement.
- Connect to either of the partner servers, and run the ALTER DATABASE Products SET WITNESS ON statement.
- Connect to the principal server, and run the ALTER DATABASE Products SET PARTNER SAFETY OFF statement.
- Connect to the mirror server, and run the ALTER DATABASE Products SET PARTNER SAFETY OFF statement.

Answer:
Connect to either of the partner servers, and run the ALTER DATABASE Products SET WITNESS
OFF statement.
Connect to the principal server, and run the ALTER DATABASE Products SET PARTNER
SAFETY OFF statement.

Explanation:
You could connect to either of the partner servers and run the ALTER DATABASE Products SET WITNESS
OFF statement or connect to the principal server and run the ALTER DATABASE Products SET PARTNER
SAFETY OFF statement. When you use database mirroring with failover clusters, the database mirroring
sessions can be configured for automatic failover. Automatic failover ensures that if one node in the cluster fails,
the other node takes over immediately. To support automatic failover, the database mirroring session must be
running with a witness in synchronous or high-safety mode. To disable automatic failover, you can either change
the operating mode of the database mirroring session to asynchronous or turn off the witness. To change the
operating mode of the database mirroring session to asynchronous, you should connect to the principal server
and run the ALTER DATABASE Products SET PARTNER SAFETY OFF statement. To turn off the witness, you
should connect to any of the two partner servers and run the ALTER DATABASE Products SET WITNESS OFF
statement.

You should not connect to any of the two partner servers and run the ALTER DATABASE Products SET
WITNESS ON statement. To disable automatic failover, you should turn off the witness. To do this, the SET
WITNESS OFF parameter is used.

You should not connect to the mirror server and run the ALTER DATABASE Products SET PARTNER SAFETY
OFF statement. To disable automatic failover, this statement must be run on the principal server, not on the mirror
server.

Objective:
Implementing High Availability

Sub-Objective:
Implement a SQL Server clustered instance.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Overview > Database Mirroring Sessions > Role Switching During a Database
Mirroring Session > Automatic Failover

Item: 78 (Ref:Cert-70-432.8.4.2)

You are the database administrator of your company. You have two SQL Server 2008 servers in a peer-to-peer
transactional replication topology. You are in the process of adding a new node to the peer-to-peer transactional
replication. You want to ensure that all relevant transactions are replicated to the new node.

Which value should you configure for the @sync_type parameter of the sp_addsubscription system stored
procedure?
- initialize from lsn
- initialize with backup
- replication support only
- automatic

Answer:
initialize from lsn

Explanation:
You should configure the initialize from lsn value for the @sync_type parameter of the sp_addsubscription system stored procedure. Peer-to-peer replication provides high availability by maintaining copies of data on multiple instances of SQL Server. Each server in a peer-to-peer replication topology is referred to as a node. After you have configured a replication topology, you can add multiple nodes to it. To add a node to an existing peer-to-peer replication topology, you can use the Publication page of the Configure Peer-to-Peer Topology Wizard. To open the Configure Peer-to-Peer Topology Wizard, you should open SQL Server Management Studio and expand the Replication node. You should then expand the Local Publications node under the Replication node, right-click the publication to which you want to add a node, and select the Configure Peer-to-Peer Topology option. To ensure that all relevant transactions are replicated to the new node, you should configure the initialize from lsn value for the @sync_type parameter of the sp_addsubscription system stored procedure.
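A call at an existing peer might look like the following sketch. The publication, subscriber, and database names are hypothetical, and the @subscriptionlsn parameter (which supplies the log sequence number from which delivery should start) is assumed to be populated beforehand from the topology's replication metadata:

```sql
-- Run at an existing peer, on the publication database.
-- 'initialize from lsn' replicates to the new node all transactions
-- committed after the supplied LSN. All names below are illustrative.
DECLARE @lsn binary(10);  -- assumed type; populate from replication metadata

EXEC sp_addsubscription
    @publication       = N'P2P_Pub',
    @subscriber        = N'NEWNODE',
    @destination_db    = N'PeerDB',
    @subscription_type = N'push',
    @sync_type         = N'initialize from lsn',
    @subscriptionlsn   = @lsn;
```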

You should not configure the initialize with backup, replication support only, or automatic value for the
@sync_type parameter. These values are not used to replicate all relevant transactions to a new node that is
added to a peer-to-peer transactional replication. The initialize with backup value obtains the schema and initial
data for published tables from a backup of the publication database. The replication support only value
provides automatic generation of article custom stored procedures at the Subscriber and triggers that support
updating subscriptions. The automatic value ensures that schema and initial data for published tables are
transferred to the Subscriber first.

Objective:
Implementing High Availability

Sub-Objective:
Implement replication.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Replication > Technical Reference > Replication Stored
Procedures (Transact-SQL) > sp_addsubscription (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Replication > Development > Designing and Implementing:
Walkthroughs > How to: Configure Peer-to-Peer Transactional Replication (Replication Transact-SQL
Programming)

Item: 88 (Ref:Cert-70-432.8.3.3)

You are a database administrator for Verigon Corporation. The company stores all its product-related data in a
SQL Server 2008 database named Product that resides on the Sqlserver1 server. You have configured a
secondary database named Product_sec for the Product database to provide high availability for the database.

You must determine the log shipping specific configuration details of the Product database on the Sqlserver1
server.

Which table can you use to achieve the objective?


- log_shipping_primary_databases
- log_shipping_primary_secondaries
- log_shipping_monitor_primary
- log_shipping_monitor_history_detail

Answer:
log_shipping_primary_databases

Explanation:
The log_shipping_primary_databases table is used to determine the log shipping-specific configuration details of the Product database. The log_shipping_primary_databases table contains configuration information for every primary database on the server. In this scenario, the Product database is the primary database; therefore, you should refer to the log_shipping_primary_databases table to determine the configuration details of the database. This table contains one row for every primary database on the server, with information such as the backup directory, the backup retention time, and details of the most recent transaction log backup.
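The table is stored in the msdb database, so a query like the following, run on Sqlserver1, would surface the details mentioned above. The column list reflects the documented table layout, but verify the names against your server:

```sql
-- Run on the primary server (Sqlserver1); the table resides in msdb.
SELECT primary_database,
       backup_directory,
       backup_retention_period,  -- retention time, in minutes
       last_backup_file,
       last_backup_date
FROM msdb.dbo.log_shipping_primary_databases
WHERE primary_database = N'Product';
```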

The log_shipping_primary_secondaries table cannot be used to determine the configuration details of the
Product database. The log_shipping_primary_secondaries table displays a mapping of the primary databases
to the secondary databases.

The log_shipping_monitor_primary table cannot be used to determine the configuration details of the Product
database. The log_shipping_monitor_primary table is used to monitor transaction log specific activities on the
primary database.

The log_shipping_monitor_history_detail table cannot be used to determine the configuration details of the
Product database. The log_shipping_monitor_history_detail table stores history information for the log
shipping jobs related to the primary database.

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.
Page 12 of 226

Objective:
Implementing High Availability

Sub-Objective:
Implement log shipping.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Tables (Transact-SQL) > Log Shipping Tables (Transact-SQL) >
log_shipping_primary_databases (Transact-SQL)

Item: 95 (Ref:Cert-70-432.8.2.1)

You are the SQL administrator for your company. You currently manage a SQL Server 2008 failover cluster that
consists of six SQL Server 2008 instances named SQL1, SQL2, SQL3, SQL4, SQL5, and SQL6.

Because of performance issues, you decide to remove SQL6 from the failover cluster.

What should you do?


- Run the sp_configure system stored procedure on SQL6.
- Run setup.exe from the SQL Server 2008 installation media on SQL6.
- Run the sp_dropserver system stored procedure on SQL6.
- Run the sp_dropdevice system stored procedure on SQL6.

Answer:
Run setup.exe from the SQL Server 2008 installation media on SQL6.

Explanation:
You should run setup.exe from the SQL Server 2008 installation media on SQL6. To add or remove SQL Server
2008 nodes in a failover cluster, you must use SQL Server setup.
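As a sketch, the removal can also be scripted from a command prompt on the node being removed. The instance name below is an assumption; substitute the actual failover cluster instance name.

```cmd
Setup.exe /q /ACTION=RemoveNode /INSTANCENAME="MSSQLSERVER"
```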

You should not run the sp_configure system stored procedure on SQL6. This system stored procedure is used
to configure server-wide settings, including the fill factor, audit mode, and default language. It cannot be used to
remove a server from a failover cluster.

You should not run the sp_dropserver system stored procedure on SQL6. This system stored procedure is used
to remove a remote or linked server from the local SQL Server 2008 instance. Linked servers allow you to use
distributed queries to execute Transact-SQL statements across multiple servers. Although linked servers can
exist in a failover cluster, dropping a linked server would not remove a node in a failover cluster.

You should not run the sp_dropdevice system stored procedure on SQL6. This system stored procedure is used
to remove a database device or backup device from a SQL Server 2008 instance. It cannot be used to remove a
server from a failover cluster.

Objective:
Implementing High Availability


Sub-Objective:
Implement a SQL Server clustered instance.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > Planning a SQL Server
Installation > High Availability Solutions Overview > Getting Started with SQL Server 2008 Failover Clustering >
Failover Clustering How-to Topics > How To: Add or Remove Nodes in a SQL Server Failover Cluster (Setup)

Item: 98 (Ref:Cert-70-432.8.3.1)

You are a database administrator for your company. To provide high availability for the application environment,
you have configured a standby server in addition to the primary SQL Server 2008 server. This
standby server should appear online immediately if the primary server fails. The primary server has a large
volume of inserts, updates, and selects, and generates a transaction log that is almost 100 GB.

If the primary server fails, you should minimize the network impact while bringing the standby server online. You
should prevent any type of data loss.

Which strategies should you adopt to meet the requirements?


- Switch to the Simple Recovery model.
- Truncate the transaction log on the primary database every three minutes.
- Create a tail-log backup on the primary database, and restore all the transaction log backups on the standby server including the WITH STANDBY option.
- Create a tail-log backup on the primary database, and restore all the transaction log backups on the standby server including the WITH RECOVERY option.

Answer:
Create a tail-log backup on the primary database, and restore all the transaction log backups on
the standby server including the WITH STANDBY option.

Explanation:
You should create a tail-log backup on the primary database and restore all the transaction log backups on the
standby server by using the WITH STANDBY option. Restoring the logs on the standby server with the
WITH STANDBY option applies the transaction logs while leaving the database in a read-only, restorable state. If
the primary server fails, only the tail of the current transaction log needs to be applied, thereby reducing the
network impact.
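A minimal Transact-SQL sketch of the failover sequence, assuming a hypothetical database named ProdDB and hypothetical file paths:

```sql
-- On the primary, when it fails (if still accessible): back up the tail of the log.
BACKUP LOG ProdDB
    TO DISK = N'\\BackupShare\ProdDB_tail.trn'
    WITH NORECOVERY;

-- On the standby: apply the tail-log backup, keeping the database restorable.
RESTORE LOG ProdDB
    FROM DISK = N'\\BackupShare\ProdDB_tail.trn'
    WITH STANDBY = N'E:\SQLData\ProdDB_undo.dat';

-- Bring the standby database online.
RESTORE DATABASE ProdDB WITH RECOVERY;
```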

You should not switch to the Simple Recovery model because the Simple Recovery model does not use
transaction log backups and increases the possibility of data loss in the event of a server failure.

You should not truncate the transaction log on the primary database every three minutes. Truncating the
transaction log only reduces the logical size of the log by marking the space used by old log records as reusable.
It does not minimize the network impact, and it would break the sequence of transaction log backups needed to
keep the standby server current.

You should not create a tail-log backup on the primary database and restore all the transaction log backups on
the standby server including the WITH RECOVERY option. Restoring the log on the secondary database with the
WITH RECOVERY option applies the transaction log backup to the standby server and brings the standby
database online. In this scenario, the logs should be shipped to the standby database, but the server should be
brought online only when the primary server fails.


Objective:
Implementing High Availability

Sub-Objective:
Implement log shipping.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Server Management How-to Topics > How to: Set Up, Maintain, and Bring Online
a Warm Standby Server (Transact-SQL)

Item: 108 (Ref:Cert-70-432.8.1.4)

You manage an instance of SQL Server 2008 named SQL1. SQL1 contains a database named Sales. You want
to configure database mirroring for the Sales database. You install a new instance of SQL Server 2008 on a
server named Server2. You perform a full backup of the Sales database and its transaction logs.

What should you do to create the mirror database on Server2? (Choose two. Each correct answer presents part of
the solution.)
- Restore the backup of the Sales database with the same name on Server2 by using the WITH NORECOVERY clause.
- Restore the backup of the Sales database with the same name on Server2 by using the WITH STANDBY clause.
- Apply the transaction log backup to the Sales database on Server2 by using the WITH NORECOVERY clause.
- Apply the transaction log backup to the Sales database on Server2 by using the WITH STANDBY clause.

Answer:
Restore the backup of the Sales database with the same name on Server2 by using the WITH
NORECOVERY clause.
Apply the transaction log backup to the Sales database on Server2 by using the WITH
NORECOVERY clause.

Explanation:
You should restore the backup of the Sales database with the same name on Server2 by using the WITH
NORECOVERY clause and then apply the transaction log backup to the Sales database on Server2 by using the
WITH NORECOVERY clause. Database mirroring is primarily used for high availability of databases. Database
mirroring can only be implemented on a per-database basis for databases that are configured with the Full
Recovery model. Database mirroring is not supported by the Bulk-Logged and Simple Recovery models.

A database mirroring session operates by using either synchronous or asynchronous mode. In synchronous or
high-safety mode, a transaction is committed on both servers, which ensures that the mirror database is in
synchronization with the principal database. In asynchronous or high-performance mode, the transactions commit
without waiting for the mirror server to write the log to disk. Therefore, the asynchronous mode maximizes
performance by supporting large volumes of transactions. To ensure that a mirroring session can be established,
you must ensure that the mirror database is created and is configured for mirroring. To create a new mirror
database, a full backup of the principal database and its transaction log backup must be restored onto the mirror
server instance by using the WITH NORECOVERY clause.
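A hedged sketch of preparing the mirror on Server2; the backup file paths are hypothetical:

```sql
-- On Server2: restore the full backup of Sales, leaving it unrecovered.
RESTORE DATABASE Sales
    FROM DISK = N'\\SQL1\Backups\Sales_full.bak'
    WITH NORECOVERY;

-- Then apply the transaction log backup, also WITH NORECOVERY.
RESTORE LOG Sales
    FROM DISK = N'\\SQL1\Backups\Sales_log.trn'
    WITH NORECOVERY;
```

After both restores complete, the mirroring session itself is started with ALTER DATABASE Sales SET PARTNER, first on the mirror server and then on the principal.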


You should not restore the backup of the Sales database with the same name on Server2 by using the WITH
STANDBY clause or apply the transaction log backup to the Sales database on Server2 by using the WITH
STANDBY clause. Creating a new mirror database requires a full backup of the principal database and its
transaction log backup to be restored onto the mirror server instance by using the WITH NORECOVERY clause.

Objective:
Implementing High Availability

Sub-Objective:
Implement database mirroring.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Deployment > Setting Up Database Mirroring > Preparing a Mirror Database for
Mirroring

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Database
Mirroring > Database Mirroring Overview

Item: 142 (Ref:Cert-70-432.8.4.3)

You are the database administrator of your company. The company has a main office and five branch offices in
different cities. Each branch office contains a computer running SQL Server 2008. The main office network
contains an instance of SQL Server 2008 named SQL1. SQL1 contains a database named Inventory.

Your company wants to implement a new inventory tracking application that will access a table named
Inventory_items in the Inventory database. You want to enable users in all the offices to update the
Inventory_items table. You decide to implement merge replication with SQL1 as the Publisher and instances of
SQL Server in the branch offices as the Subscribers.

To minimize disk space usage on the Subscribers, you want to ensure that only the Publisher is used to store
conflict records.

Which system stored procedure should you run on SQL1 to configure this?
- the sp_addmergepublication system stored procedure with the @conflict_logging argument
- the sp_addmergesubscription system stored procedure with the @publication argument
- the sp_addmergepublication system stored procedure with the @centralized_conflicts argument
- the sp_addmergesubscription system stored procedure with the @centralized_conflicts argument

Answer:
the sp_addmergepublication system stored procedure with the @conflict_logging argument

Explanation:
You should run the sp_addmergepublication system stored procedure with the @conflict_logging argument.
The sp_addmergepublication system stored procedure creates a new merge publication. The
@conflict_logging parameter specifies where conflict records are stored. The possible values for this parameter
are publisher, subscriber, both, and NULL. Specifying the publisher value for the @conflict_logging


parameter ensures that the conflict records are stored at the Publisher.
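As a sketch, the call would be run at the Publisher in the publication database; the publication name below is hypothetical, and the remaining arguments take their defaults:

```sql
USE Inventory;
GO
-- Create the merge publication, logging conflict records only at the Publisher.
EXEC sp_addmergepublication
    @publication      = N'InventoryItemsPub',  -- hypothetical name
    @conflict_logging = N'publisher';
```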

You should not run the sp_addmergesubscription system stored procedure with the @publication or
@centralized_conflicts argument because this stored procedure cannot be used to configure the Publisher to
store conflict records. The sp_addmergesubscription system stored procedure creates a push or pull merge
subscription. The @publication argument specifies the name of the publication. The @centralized_conflicts
argument is not a valid argument for the sp_addmergesubscription system stored procedure.

You should not run the sp_addmergepublication system stored procedure with the @centralized_conflicts
argument. The @centralized_conflicts argument has been deprecated and is only supported for backward
compatibility. You should use the @conflict_logging argument to specify the location where conflict records are
stored.

Objective:
Implementing High Availability

Sub-Objective:
Implement replication.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Replication > Technical Reference > Replication Stored
Procedures (Transact-SQL) > sp_addmergepublication (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Replication > Technical Reference > Replication Stored
Procedures (Transact-SQL) > sp_addmergesubscription (Transact-SQL)

Item: 159 (Ref:Cert-70-432.8.3.2)

You are the database administrator for your company. You are in the process of enabling log shipping to improve
the availability of the database.

You have configured four servers of which Prod1 is the primary server instance that maintains your production
database, Prod2 and Prod3 are the secondary server instances for log shipping, and Prod4 is the monitor server
instance.

After configuring the servers, you must create jobs for the four server instances involved in log shipping.

Which jobs should you create? (Choose all that apply.)


- a copy job on Prod1
- a copy job on Prod3
- an alert job on Prod4
- an alert job on Prod1
- a restore job on Prod3
- a restore job on Prod4
- a backup job on Prod1
- a backup job on Prod2


Answer:
a copy job on Prod3
an alert job on Prod4
a restore job on Prod3
a backup job on Prod1

Explanation:
You should create a copy job on Prod3, an alert job on Prod4, a restore job on Prod3, and a backup job on
Prod1. When configuring log shipping, you create four different types of jobs. A backup job is created on a
primary server that maintains the production database for which log shipping should be configured. A backup job
performs backup operations. Therefore, in this scenario, a backup job should be created on the primary server
instance, Prod1. You must create a copy job on every secondary server instance in the configuration. A copy job
copies backup files from the primary server instance to the secondary server instance. Therefore, in this scenario,
a copy job should be created on the secondary server instances, Prod2 and Prod3. A restore job is created on
the secondary server instance and restores the copied backup files to the secondary server databases. In this
scenario, the restore job should be created on Prod2 and Prod3. An alert job can be optionally created on a
monitor server if the server has been configured in your log shipping environment. An alert job raises the
necessary alerts for the primary and secondary server instances using the monitor server instance. In this
scenario, an alert job should be created on Prod4, which is configured as the monitor server instance.
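For example, the backup job on the primary is created when the primary database is registered for log shipping. A hedged sketch follows; the database name, share paths, and retention value are hypothetical:

```sql
-- Run on Prod1: registers the primary database and creates its backup job.
EXEC master.dbo.sp_add_log_shipping_primary_database
    @database                = N'ProdDB',               -- hypothetical
    @backup_directory        = N'\\Prod1\LogShipping',
    @backup_share            = N'\\Prod1\LogShipping',
    @backup_job_name         = N'LSBackup_ProdDB',
    @backup_retention_period = 4320;                    -- minutes
```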

You should not create a copy job on Prod1 because a copy job is created on the secondary server. In this
configuration, Prod1 is the primary server instance.

You should not create a backup job on Prod2 because a backup job is created on the primary server. In this
configuration, Prod2 is a secondary server instance.

You should not create a restore job on Prod4 because a restore job is created on the secondary server. In this
configuration, Prod4 is the monitor server instance.

You should not create an alert job on Prod1 because an alert job is created on the monitor server. In this
configuration, Prod1 is the primary server instance.

Objective:
Implementing High Availability

Sub-Objective:
Implement log shipping.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > High Availability > Log
Shipping > Log Shipping Overview

Installing and Configuring SQL Server 2008


Item: 23 (Ref:Cert-70-432.1.2.4)

You are the SQL administrator for your company. A new SQL Server 2008 computer named SQL3 has been
deployed on your network. You create a default instance named MSSQLServer and two named instances named
HumanResources-SQL3 and Finance-SQL3.

After creating the three instances, you attempt to connect to each of them from your Windows Vista client
computer. You connect successfully to the default instance, but are unable to connect to the named instances.


You ensure that you enabled remote connections on the named instances. The appropriate ports for the named
instances are open on the firewall.

What else should you do?


- Restart the SQL server.
- Restart the two named instances.
- Enable the SQL Server Agent service.
- Enable the SQL Server Browser service.

Answer:
Enable the SQL Server Browser service.

Explanation:
You should enable the SQL Server Browser service. To connect remotely to named instances, the SQL Server
Browser service is used. Without this service enabled, errors will occur. The following three actions must be taken
to establish remote connections to a named instance:

- Enable remote connections on the named instance.
- Enable the SQL Server Browser service.
- Configure the firewall to allow traffic related to SQL Server and the SQL Server Browser service.

You should not restart the SQL server. The problem is not with the SQL server because you are able to connect
to the default instance on the SQL server.

You should not restart the two named instances. The problem is with the SQL Server Browser service. Until this
service is enabled, you will be unable to connect remotely to named instances.

You should not enable the SQL Server Agent service. This service is responsible for managing alerts and jobs in
SQL Server 2008. It is not used to connect remotely to named instances.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server instances.

References:
Microsoft Help and Support > How to configure SQL Server 2005 to allow remote connections

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > SQL Server Setup User
Interface Reference > Instance Configuration

Item: 38 (Ref:Cert-70-432.1.1.2)

You are the database administrator of your company. The network contains a server named SQL1 that has the
32-bit version of SQL Server 2005 Express edition installed. SQL1 hosts an application that backs up databases
stored on SQL1. You want to install a default instance of SQL Server 2008 on SQL1. You discover that the
application uses the DUMP statement that is no longer supported by SQL Server 2008.


You want to upgrade SQL1 to SQL Server 2008.

Which editions of SQL Server 2008 can you use to upgrade SQL1? (Each correct answer represents a part of the
correct solution. Choose two.)
- SQL Server 2008 Standard
- SQL Server 2008 Enterprise
- SQL Server 2008 Express
- SQL Server 2008 Express Advanced

Answer:
SQL Server 2008 Express
SQL Server 2008 Express Advanced

Explanation:
Only SQL Server 2008 Express and SQL Server 2008 Express Advanced can be used to upgrade a 32-bit
version of SQL Server 2005 Express edition to SQL Server 2008. Before upgrading your SQL server, you should
consider backward compatibility for applications that are installed on the server. This helps you identify
applications that rely on features that SQL Server 2008 no longer supports. If your SQL server contains an
application that contains features that are no longer supported by SQL Server 2008, such as the DUMP command
in this scenario, you should either modify your application or consider installing SQL Server 2008 as a named
instance. Installing SQL Server 2008 as a named instance will allow the default instance to remain as the older
version of SQL Server to support the deprecated features. If you upgrade an earlier version of SQL Server to SQL
Server 2008, the previous instance of SQL Server will be overwritten and will not be available on the computer
after the upgrade.

SQL Server 2008 Standard and SQL Server 2008 Enterprise cannot be used to upgrade a 32-bit version of SQL
Server 2005 Express edition to SQL Server 2008.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Install SQL Server 2008 and related services.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > Upgrading to SQL
Server 2008 > Version and Edition Upgrades

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Deployment > Upgrade > Considerations
for Upgrading the Database Engine

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > SQL Server Setup User
Interface Reference > Instance Configuration


Item: 60 (Ref:Cert-70-432.1.2.6)

You are a database administrator in your company. You create a new database named Prod1 on a SQL Server
2008 instance named Sql1. The users on your company's LAN should have access to the Prod1 database. All
the client computers are running Windows XP Professional and have the SQL Client component installed. The
shared memory protocol is also enabled on all the client computers in the SQL Native Client Configuration.

What could you do to enable clients to connect to Sql1? (Choose all that apply. Each correct answer presents a
unique solution.)
- Enable the VIA protocol for all the clients.
- Enable the Named Pipes protocol for Sql1 only.
- Enable the TCP/IP protocol for all the clients.
- Enable the shared memory protocol for Sql1.
- Enable the Named Pipes protocol for all the clients and Sql1.

Answer:
Enable the TCP/IP protocol for all the clients.
Enable the Named Pipes protocol for all the clients and Sql1.

Explanation:
You could enable the Named Pipes protocol for all the clients and Sql1 or you could enable the TCP/IP protocol
for all the clients. In this scenario, the client computers should access Sql1 on the company's LAN. Therefore, you
should enable the Named Pipes protocol on both Sql1 and the client computers. The Named Pipes protocol is
designed primarily for use with LANs. With this protocol, a part of memory is used by a process to pass
information to another process. The information passed by the first process is used as input to the second
process that is located on the same computer or on a remote computer. Alternatively, because the TCP/IP
protocol is enabled on the server by default, enabling TCP/IP for all the clients also allows them to connect to
Sql1.

You should not enable the Virtual Interface Adapter (VIA) protocol for all the clients. The VIA protocol works only
with VIA hardware and should only be used in configurations that use VIA hardware.

You should not enable the Named Pipes protocol for Sql1 only. You also need to enable this protocol for the
client.

You should not enable the shared memory protocol for Sql1 because the shared memory protocol does not
support access over LANs. Using the shared memory protocol, you can only connect to a server instance running
on the same computer.

To view the network protocols that are currently enabled for the server, you can open SQL Server Configuration
Manager and expand SQL Server Network Configuration. This will display the four possible protocols you can
use and the status of each, as shown in the following image:


The client network protocols are displayed in the Client Protocols section under SQL Native Client
Configuration.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server instances.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Connecting
to the SQL Server Database Engine > Client Network Configuration > Choosing a Network Protocol

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Connecting
to the SQL Server Database Engine > Server Network Configuration > Default SQL Server Network Configuration


Item: 61 (Ref:Cert-70-432.1.2.3)

You have been hired as the SQL administrator for your company. The IT director tells you that the company has
three SQL servers: one SQL Server 2005 server and two SQL Server 2008 servers. He shows you where the
computers are located.

You need to determine the version and build number for the SQL Server 2008 servers. You also need to
determine the collation used by the server.

From which location should you obtain this information?


- the General page of the SQL server's Server Properties dialog box
- the Connections page of the SQL server's Server Properties dialog box
- the Database Settings page of the SQL server's Server Properties dialog box
- the Advanced page of the SQL server's Server Properties dialog box

Answer:
the General page of the SQL server's Server Properties dialog box

Explanation:
To determine the version and build number, as well as the server collation, for the SQL Server 2008 servers, you
should access the General page of the SQL server's Server Properties dialog box, as shown in the following
image:


You can also obtain this information by running the sp_server_info system stored procedure without any
parameters or passing the sp_server_info system stored procedure an @attribute_id parameter value of 2. For
example, you could use the following statement to display the version and build:

sp_server_info @attribute_id = 2
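The same details can also be retrieved with the SERVERPROPERTY function, for example:

```sql
SELECT  SERVERPROPERTY('ProductVersion') AS Version,        -- version and build number
        SERVERPROPERTY('ProductLevel')   AS Level,          -- e.g. RTM or a service pack
        SERVERPROPERTY('Collation')      AS ServerCollation;
```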

You cannot determine the version and build number or server collation for the SQL Server 2008 servers by
accessing the Connections page of the SQL server's Server Properties dialog box. This page allows you to
configure different connection parameters, as shown in the following graphic:


You cannot determine the version and build number or server collation for the SQL Server 2008 servers by
accessing the Database Settings page of the SQL server's Server Properties dialog box. This page allows you
to configure different database settings, including backup/recovery settings and default file locations. This page is
shown in the following image:


You cannot determine the version and build number or server collation for the SQL Server 2008 servers by
accessing the Advanced page of the SQL server's Server Properties dialog box. This page allows you to view
many advanced settings, including filestream, network, and parallelism settings. This page is shown in the
following image:


Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server instances.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Management Studio F1 Help > Object Explorer F1 Help > Server Properties F1 Help >
Server Properties (General Page) - SQL Server Management Studio

Item: 68 (Ref:Cert-70-432.1.3.3)


You manage an instance of SQL Server 2008. You want to be able to back up and restore the SQL Server
instance by using the Volume Shadow Copy Service (VSS) framework.

Which service should you enable to provide this functionality?


- the SQL Writer service
- the SQL Server service
- the SQL Server Agent service
- the SQL Server Integration service

Answer:
the SQL Writer service

Explanation:
You should enable the SQL Writer service because this service provides the functionality for backing up and
restoring SQL Server through the VSS framework. The SQL Writer service is installed when you install SQL
Server, but it is not enabled by default. You must enable this service because this service is required by the VSS
application to perform a backup or restore.

You should not enable the SQL Server service, SQL Server Agent service, or SQL Server Integration service
because none of these services provide the functionality for backing up and restoring SQL Server through the
VSS framework. The SQL Server service is responsible for providing storage, processing, and controlled access
of data, and rapid transaction processing. The SQL Server Agent service is responsible for executing jobs,
monitoring SQL Server, and triggering alerts. The SQL Server Integration service is responsible for providing
management support for SQL Server Integration Services (SSIS) package storage and execution.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server services.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
the Database Engine Services > SQL Writer Service

Item: 85 (Ref:Cert-70-432.1.4.2)

You are your company's SQL administrator. A SQL Server 2008 computer named SQL3 is running SQL Server
Reporting Services (SSRS) in SharePoint integrated mode.

Your company has decided to discontinue the use of SharePoint Services due to lack of use. You must configure
SSRS to use native mode and still be able to access all SSRS content.

You start the SQL Reporting Service Configuration tool and configure SSRS to use native mode. You need to
complete the process.

Which three actions must you perform? (Choose three. Each correct answer represents part of the complete
solution.)
- Republish the content to SQL3.
- Assign roles on SQL3 that grant access to items and operations.
- Redefine subscriptions and scheduled operations.
- Re-create the report history for the SharePoint integrated mode data on SQL3.
- Point SQL3 to the SharePoint content location.

Answer:
Republish the content to SQL3.
Assign roles on SQL3 that grant access to items and operations.
Redefine subscriptions and scheduled operations.

Explanation:
You should republish the content to SQL3, assign roles on SQL3 that grant access to items and operations, and
redefine subscriptions and scheduled operations. All of these actions are required to complete the switch back to
native mode while still being able to access all of the previous content.

You cannot re-create the report history for the SharePoint integrated mode data on SQL3. It is not possible to
carry over any information or re-create the information when switching from SharePoint integrated mode to native
mode.

You should not point SQL3 to the SharePoint content location. A SQL Server 2008 SSRS computer cannot point
to the SharePoint content location. The content must be republished on the SSRS server.

Native mode is used when you want the report content to be accessed and managed from within SSRS. The
version history for SSRS in native mode can be viewed using the Report History feature. The version history for
SSRS in SharePoint integrated mode can be viewed from within SharePoint.

SharePoint integrated mode is used when you want the report content to be accessed and managed from within
SharePoint. The version history for SSRS in SharePoint integrated mode can be viewed from within the
SharePoint application.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure additional SQL Server components.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Reporting Services > Deployment > New Installation >
Reporting Services Configuration How-to Topics > How to: Switch Server Modes (Reporting Services
Configuration)

Item: 89 (Ref:Cert-70-432.1.2.2)

You are the SQL administrator for your company. You have three SQL Server 2008 computers named SQL1,
SQL2, and SQL3. Each server supports a single instance.


You have been asked to configure SQL1 to connect to remote data sources that use OLE DB. You have also
been asked to configure SQL2 to control CPU usage and to configure SQL3 to provide access to physical
memory.

Which Transact-SQL script must you run on all three servers?


- sp_configure 'ad hoc distributed queries', 1;
  GO
  RECONFIGURE;
  GO

- sp_configure 'awe enabled', 1;
  GO
  RECONFIGURE;
  GO

- sp_configure 'show advanced options', 1;
  GO
  RECONFIGURE;
  GO

- SELECT * FROM sys.configurations
  ORDER BY name;
  GO

Answer:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

Explanation:
You must run the following Transact-SQL script on all three servers:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

In this scenario, all three servers require the configuration of advanced options, which should only be changed by
an experienced database administrator. Before you can change advanced options, you must use sp_configure
to set the show advanced options configuration option to a value of 1.

You should not run the following Transact-SQL script on all three servers:

sp_configure 'ad hoc distributed queries', 1;
GO
RECONFIGURE;
GO

The ad hoc distributed queries configuration option configures a server to connect to remote database sources
that use OLE DB. This script should be run on SQL1 after running the script that configures the show advanced
options option.
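
Putting the two together, a sketch of the full configuration sequence for SQL1 would be:

```sql
-- Step 1: allow advanced options to be changed
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Step 2: enable ad hoc distributed queries against OLE DB data sources
EXEC sp_configure 'ad hoc distributed queries', 1;
RECONFIGURE;
GO
```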

You should not run the following Transact-SQL script on all three servers:

sp_configure 'awe enabled', 1;
GO
RECONFIGURE;
GO

The awe enabled configuration option configures a server to use physical memory. This allows the data cache to
expand to the amount of available physical memory, instead of being limited to the size of the virtual memory.
This script should be run on SQL3 after running the script that configures the show advanced options option.
You can also configure this setting from the Memory page of the SQL server's Server Properties dialog box by
selecting the Use AWE to allocate memory check box. The Memory page of the SQL server's Server
Properties dialog box is shown in the following image:

You should not run the following Transact-SQL script on all three servers:

SELECT * FROM sys.configurations
ORDER BY name;
GO


This script displays information for each configuration option you can pass to the sp_configure system stored
procedure. It is not necessary to view these settings to change them.

To configure SQL2 to control CPU usage, you would use the affinity I/O mask and affinity mask options
and the affinity64 I/O mask and affinity64 mask options (on a server with more than 32 processors
running the 64-bit version of SQL Server). For example, in this scenario, after running the script to set the
show advanced options configuration option, you could run the following script on SQL2 to set the affinity
mask configuration option:

sp_configure 'affinity mask', 38;
RECONFIGURE;
GO

You can also manage these options using the SQL Server Management Studio.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server instances.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
Servers > Setting Server Configuration Options

Item: 112 (Ref:Cert-70-432.1.3.1)

You are the database administrator of your company. The network contains an instance of SQL Server 2008. You
configure a domain user account that will be used by the SQL Server service for the default instance. After
several days, you discover that the SQL Server service is unable to start. You investigate and discover the
password for the domain account has expired.

To avoid this problem in the future, you create a new user account that will be used by the SQL Server service.
The user account's password does not expire.

What should you do next to ensure that the SQL Server service starts successfully?
- Grant the new user account the Act as part of the operating system right.
- Grant the new user account the Log on locally right.
- Grant the new user account the Log on as a service right.
- Grant the new user account the Impersonate a client after authentication right.

Answer:
Grant the new user account the Log on as a service right.

Explanation:
You should grant the new user account the Log on as a service right. The SQL Server service account for the
default instance requires the Log on as a service right to start successfully. This user right is automatically
granted to the user account that you configure for the SQL Server service during SQL Server setup. When you


configure a new user account that will be used by the SQL Server service, you must ensure that you grant this
user right to the new user account. If the user account that is used by the SQL Server service does not have this
user right, an error message stating The service did not start due to a logon failure is
displayed.

You should not grant the new user account the Act as part of the operating system, Log on locally, or
Impersonate a client after authentication rights because granting any of these rights will not enable the SQL
Server service to start successfully. The SQL Server service for the default instance requires the Log on as a
service right to start successfully. The Act as part of the operating system right enables a process to
impersonate any user without authentication, which provides the process with access to the local resources to
which the user has access. The Log on locally right determines which users are allowed to log on interactively to
the computer on which this setting is configured. The Impersonate a client after authentication right determines
which accounts are allowed to impersonate other accounts.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server services.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started Initial Installation > Planning a SQL Server
Installation > Setting Up Windows Service Accounts

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Managing Services How-to Topics > Managing Services How-to Topics (SQL
Server Configuration Manager) > How to: Change the Password of the Accounts Used by SQL Server (SQL
Server Configuration Manager)

Item: 117 (Ref:Cert-70-432.1.5.1)

You are the SQL administrator for your company. Two SQL Server 2008 computers, named SQL1 and SQL2,
have been deployed.

Several reports run on a daily basis and are e-mailed to the appropriate personnel using Database Mail. The
msdb database is growing much larger than expected, and you discover that this is due to the volume of e-mail
messages generated by Database Mail. You need to delete all e-mail messages that are more than 30 days old
from the msdb database.

You decide to create a job that will automatically run once each week.

Which system stored procedure should the job execute?


- sysmail_delete_mailitems_sp
- sysmail_delete_log_sp
- sysmail_configure_sp
- sp_processmail

Answer:
sysmail_delete_mailitems_sp


Explanation:
The job should execute the sysmail_delete_mailitems_sp system stored procedure. The following syntax is
used:

sysmail_delete_mailitems_sp [ [ @sent_before = ] 'sent_before' ]
    [ , [ @sent_status = ] 'sent_status' ]

If you execute this procedure without specifying the @sent_before or @sent_status parameter, all e-mails in the
Database Mail system are deleted. If you execute this procedure with the @sent_before parameter, all e-mail
messages older than the specified date are deleted. The following example executes the
sysmail_delete_mailitems_sp system stored procedure to delete all e-mail messages sent before May 30,
2008:

EXECUTE msdb.dbo.sysmail_delete_mailitems_sp
@sent_before = 'May 30, 2008';
GO

If you execute this procedure with the @sent_status parameter, you can delete e-mail messages based on their
status, such as sent, unsent, retrying, or failed. The following example executes the
sysmail_delete_mailitems_sp system stored procedure to delete all sent e-mail messages:

EXECUTE msdb.dbo.sysmail_delete_mailitems_sp
@sent_status = 'sent';
GO

The job should not execute the sysmail_delete_log_sp system stored procedure. This system stored procedure
deletes events from the Database Mail log, not e-mail messages.

The job should not execute the sysmail_configure_sp system stored procedure. This system stored procedure
changes the configuration settings for Database Mail, such as limiting the size of e-mail attachments or
configuring the logging level. It does not delete e-mail messages.

The job should not execute the sp_processmail system stored procedure. This system stored procedure
processes incoming e-mail messages from SQL Server's Inbox. It does not delete e-mail messages.
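
As a sketch of how the weekly job itself might be created (the job, step, and schedule names below are hypothetical, not part of the scenario), the SQL Server Agent stored procedures in msdb could be used:

```sql
USE msdb;
GO
-- Create the job with a single Transact-SQL step that deletes
-- Database Mail messages older than 30 days
EXEC dbo.sp_add_job @job_name = N'Purge old Database Mail';
EXEC dbo.sp_add_jobstep
    @job_name = N'Purge old Database Mail',
    @step_name = N'Delete mail older than 30 days',
    @subsystem = N'TSQL',
    @command = N'DECLARE @cutoff datetime = DATEADD(day, -30, GETDATE());
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @cutoff;';
-- Schedule the job to run once per week
-- (freq_type 8 = weekly; freq_interval 1 = Sunday)
EXEC dbo.sp_add_schedule
    @schedule_name = N'Weekly Database Mail purge',
    @freq_type = 8,
    @freq_interval = 1,
    @freq_recurrence_factor = 1;
EXEC dbo.sp_attach_schedule
    @job_name = N'Purge old Database Mail',
    @schedule_name = N'Weekly Database Mail purge';
-- Target the local server so SQL Server Agent will run the job
EXEC dbo.sp_add_jobserver @job_name = N'Purge old Database Mail';
GO
```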

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Implement database mail.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > Database Mail and SQL Mail Stored Procedures
(Transact-SQL) > sysmail_delete_mailitems_sp (Transact-SQL)

Item: 120 (Ref:Cert-70-432.1.3.4)

You are the database administrator of your company. The network contains a SQL Server 2008 computer that
has a default instance of SQL Server installed. You want to allow an assistant administrator named Adam to be
able to perform the following tasks:


- Execute, start, or stop all local jobs.
- Delete the job history for any local job.
- Enable or disable all local jobs and schedules.

To which role should you add Adam to allow him to perform these tasks without giving him unnecessary
permissions?
- sysadmin
- SQLAgentReaderRole
- SQLAgentUserRole
- SQLAgentOperatorRole

Answer:
SQLAgentOperatorRole

Explanation:
You should add Adam to the SQLAgentOperatorRole fixed database role. Membership in this role will allow
Adam to execute, start, or stop all local jobs, delete the job history for any local job, and enable or disable all local
jobs and schedules. The SQLAgentUserRole, SQLAgentReaderRole, and SQLAgentOperatorRole roles are
fixed database roles in the msdb database that prevent users who are not members of the sysadmin fixed server
role from accessing the SQL Server Agent. The SQLAgentUserRole fixed database role is the least privileged
role that provides permission on operators, local jobs, and job schedules. The SQLAgentReaderRole fixed
database role includes all permissions provided by the SQLAgentUserRole role. Additionally, membership in the
SQLAgentReaderRole fixed database role enables users to view the list of multiserver jobs and their properties
and history. The SQLAgentReaderRole fixed database role also allows users to view the list of all jobs and job
schedules including jobs created by other users. The SQLAgentOperatorRole fixed database role includes all
permissions provided by SQLAgentUserRole and SQLAgentReaderRole. In addition to these permissions, the
members of the SQLAgentOperatorRole fixed database role can execute, start, or stop all local jobs and delete
the job history for any local job. Members of this role can also enable or disable all local jobs and job schedules
on the server.

You should not add Adam to the sysadmin fixed server role because it provides access to all functionalities of
SQL Server Agent, including viewing the properties and configuring the SQL Server Agent. Therefore, adding
Adam to the sysadmin fixed server role will provide him unnecessary permissions.

You should not add Adam to the SQLAgentReaderRole fixed database role because membership in this role will
not enable Adam to perform tasks specified in this scenario. The SQLAgentReaderRole fixed database role
includes all permissions provided by the SQLAgentUserRole fixed database role, and also enables users to view
the list of multiserver jobs and their properties and history.

You should not add Adam to the SQLAgentUserRole fixed database role because membership in this role will
not enable Adam to perform tasks specified in this scenario. The SQLAgentUserRole fixed database role is the
least privileged role that provides permission on operators, local jobs, and job schedules.
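
Assuming Adam's login is already mapped to a database user named Adam in the msdb database (the user name is taken from the scenario), membership could be granted with the following sketch:

```sql
USE msdb;
GO
-- Add the msdb user Adam to the SQL Server Agent operator role
EXEC dbo.sp_addrolemember
    @rolename = N'SQLAgentOperatorRole',
    @membername = N'Adam';
GO
```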

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server services.

References:


TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > Implementing SQL Server Agent Security > SQL Server Agent Fixed
Database Roles

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > Implementing SQL Server Agent Security

Item: 122 (Ref:Cert-70-432.1.4.3)

You are the SQL administrator for your company. You have deployed two SQL Server 2008 computers named
SQL1 and SQL2 on your network.

You need to deploy SQL Server Reporting Services (SSRS) to support both ad hoc and scheduled reports. Your
deployment should provide maximum performance for SSRS.

Which SSRS deployment should you use?


- Deploy SSRS on one of the current SQL Server 2008 computers.
- Deploy a single SSRS server with a local SSRS catalog.
- Deploy a single SSRS server with a remote SSRS catalog.
- Deploy multiple SSRS servers with a remote SSRS catalog.

Answer:
Deploy multiple SSRS servers with a remote SSRS catalog.

Explanation:
You should deploy multiple SSRS servers with a remote SSRS catalog. A scaled-out SSRS solution provides
multiple servers for reporting. By deploying separate SSRS servers, you ensure that the performance of SSRS is
maximized. In addition, the SSRS deployment will minimally affect the performance of the SQL servers.

You should not deploy SSRS on one of the current SQL Server 2008 computers. This will not provide maximum
performance for SSRS. In addition, it will adversely affect the performance of the defined databases.

You should not deploy a single SSRS server with a local SSRS catalog. This will not provide maximum
performance for SSRS. Using a local catalog will require some resources to be consumed by the catalog. In
addition, a single SSRS server will not provide maximum performance for SSRS. This deployment will provide a
scaled-up solution. A scaled-up solution generally provides a single server that is deployed with maximum
resources, including the maximum amount of RAM and CPU resources.

You should not deploy a single SSRS server with a remote SSRS catalog. Although a remote SSRS catalog will
improve the performance of the SSRS solution, deploying a single SSRS server will not provide maximum
performance for SSRS. A scaled-up solution generally provides a single server that is deployed with maximum
resources, including the maximum amount of RAM and CPU resources.

When deploying SSRS, you must determine whether a scaled-up or scaled-out SSRS solution is best for your
organization. Although SSRS can be deployed on the same server as the SQL Server databases, performance
issues can occur, particularly if you have an application that performs many insert operations. To minimize
performance issues for the SQL Server databases and for the application, you should deploy SSRS on separate
computers from the SQL Server instances.


Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure additional SQL Server components.

References:
TechNet Home > Product & Technologies > SQL Server TechCenter Home > SQL Server 2005 > Planning for
Scalability and Performance with Reporting Services

Item: 130 (Ref:Cert-70-432.1.3.2)

You are the database administrator of your company. You install a SQL Server 2008 computer on the network.
You want to ensure that the SQL Server service is able to interact with other network services.

What is the recommended type of account that you should configure for this purpose?
- a Network Service account
- a local user account
- a Local Service account
- a domain user account

Answer:
a domain user account

Explanation:
You should configure a domain user account for the SQL Server service. To ensure that services installed on
SQL Server function properly, each service must be associated with a user account. You can configure SQL
Server services to use built-in system accounts or domain user accounts. A domain user account is
recommended when the SQL Server service must interact with network services or access network resources,
such as shared folders on file servers. Using a domain user account is also recommended if the SQL Server
service uses linked server connections to other SQL Server computers.

You should not configure a Network Service account for SQL Server. The Network Service account can also be
used to achieve the objective stated in this scenario, but using a Network Service account is not recommended
because it is a shareable account.

You should not configure a local user account or a Local Service account for SQL Server. The Local Service
account is one of the built-in accounts in Windows. This account has the same privileges as members of the
Users group. The services that use the Local Service account do not use credentials to access network
resources. The Local Service account cannot be configured for the SQL Server and the SQL Server Agent
services. A Local Service account or a local user account that does not have administrator privileges is
recommended if the server is not a member of the domain, such as a server installed on the perimeter network, or
if the service does not require access to network resources.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:


Configure SQL Server services.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started Initial Installation > Planning a SQL Server
Installation > Setting Up Windows Service Accounts

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > Planning a SQL Server
Installation > Setting Up Windows Service Accounts

Item: 132 (Ref:Cert-70-432.1.3.5)

You manage a SQL Server 2008 computer that has a default instance of SQL Server 2008 installed. You want to
integrate the Database Engine on the server with the NTFS file system. You enable FILESTREAM on the server.
You need to enable FILESTREAM for Transact-SQL and Win32 streaming access.

Which Transact-SQL code should you run?


- EXEC sp_configure filestream_access_level, 0
  RECONFIGURE

- EXEC sp_configure filestream_access_level, 1
  RECONFIGURE

- EXEC sp_configure filestream_access_level, 2
  RECONFIGURE

- EXEC sp_configure filestream_access_level, 3
  RECONFIGURE

Answer:
EXEC sp_configure filestream_access_level, 2
RECONFIGURE

Explanation:
You should run the following Transact-SQL code:

EXEC sp_configure filestream_access_level, 2
RECONFIGURE

The FILESTREAM feature integrates applications that use SQL Server with the NTFS file system. It stores
varbinary(max) binary large object (BLOB) data as files on the file system. To be able to use FILESTREAM, you
must enable it on the SQL Server instance that is running the Database Engine. To enable FILESTREAM, you
should perform the following steps:

1. Open SQL Server Configuration Manager.
2. Right-click the SQL Server Services node, and select the Open option.
3. Right-click the instance of SQL Server on which you want to enable FILESTREAM, and select the
Properties option.
4. Click the FILESTREAM tab in the SQL Server Properties dialog box of the SQL Server.
5. Select the Enable FILESTREAM for Transact-SQL access check box.
6. To read and write FILESTREAM data from Windows, select the Enable FILESTREAM for file I/O
streaming access option and type the name of the shared folder in the Windows Share Name field.
7. To enable remote clients to access the FILESTREAM data stored in the shared folder, select the Allow
remote clients to have streaming access to FILESTREAM data option.


8. Click the Apply button.
9. Open SQL Server Management Studio, and click the New Query button.
10. Type the following Transact-SQL in the Query Editor and click the Execute button:

EXEC sp_configure filestream_access_level, 2
RECONFIGURE

The value 2 for the filestream_access_level configuration option is used to enable FILESTREAM for
Transact-SQL and Win32 streaming access.

You should not run the following Transact-SQL code:

EXEC sp_configure filestream_access_level, 0
RECONFIGURE

Specifying the value for the filestream_access_level configuration option as 0 disables FILESTREAM
support for the instance.

You should not run the following Transact-SQL code:

EXEC sp_configure filestream_access_level, 1
RECONFIGURE

Specifying the value for the filestream_access_level configuration option as 1 enables FILESTREAM only
for Transact-SQL access.

You should not run the following Transact-SQL code:

EXEC sp_configure filestream_access_level, 3
RECONFIGURE

The value 3 is not a valid value for the filestream_access_level configuration option.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server services.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
Servers > Setting Server Configuration Options > filestream access level

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing FILESTREAM Storage > Designing and Implementing FILESTREAM How-to Topics > How to:
Enable FILESTREAM

Item: 141 (Ref:Cert-70-432.1.1.1)

You are the database administrator of your company. The network contains a firewall to secure the corporate
network. You are upgrading a SQL Server 2005 server to SQL Server 2008. However, the upgrade fails, and the
following error message is displayed:

SQL Server Setup could not connect to the database service for server
configuration.


You need to ensure that the upgrade process completes successfully without displaying this error message.

What should you do?


- Open User Datagram Protocol (UDP) port 1434 on the firewall.
- Stop the process that is using TCP port 1433, and run SQL Server Setup.
- Enable the Microsoft Distributed Transaction Coordinator (MSDTC) service on the SQL server.
- Ensure that the SQL Server Browser service is running on the SQL server.

Answer:
Stop the process that is using TCP port 1433, and run SQL Server Setup.

Explanation:
You should stop the process that is using TCP port 1433 and run SQL Server Setup. The error message given in
this scenario occurs when TCP port 1433 is not available during an upgrade to SQL Server 2008. To successfully
upgrade the SQL server, you should terminate the process that is using TCP port 1433 and run SQL Server
Setup. To view detailed information about the error message, you should examine the Setup log files. In SQL
Server 2008, the Setup log files are located in the <drive>:\Program Files\Microsoft SQL Server\100\Setup
Bootstrap\LOG\ folder and its subordinate folders. Detail and summary log files, containing information about
errors that occurred when running SQL Server Setup, are stored within the <drive>:\Program Files\Microsoft
SQL Server\100\Setup Bootstrap\LOG\ folder.

You should not open User Datagram Protocol (UDP) port 1434 on the firewall because this will not ensure that the
upgrade process completes successfully without displaying the error message. UDP port 1434 must be opened
when you want to use the SQL Browser service. The SQL Browser service allows users to connect to the
Database Engine instances that are configured to listen on ports other than port 1433.

You should not enable the Microsoft Distributed Transaction Coordinator (MSDTC) service on the SQL server
because this will not ensure that the upgrade process completes successfully without displaying the error
message. MSDTC is used to administer distributed transactions. MSDTC is required on the server on which you
are installing SQL Server for applications to enlist SQL Server resources in a distributed transaction.

You should not ensure that the SQL Server Browser service is running on the SQL Server because this will not
ensure that the upgrade process completes successfully without displaying the error message. The SQL Server
Browser service allows users to connect to the Database Engine instances that are configured to listen on ports
other than port 1433.
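
As a sketch of how to identify the conflicting process on Windows (the PID 1234 below is only an illustration, not a value from the scenario), the netstat and tasklist utilities can be combined:

```
netstat -ano | findstr ":1433"
tasklist /FI "PID eq 1234"
```

The last column of the netstat output is the process ID; passing it to tasklist shows which executable owns the port so that it can be stopped before running SQL Server Setup again.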

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Install SQL Server 2008 and related services.

References:
MSDN > MSDN Library > Troubleshooting an Installation of SQL Server

Item: 143 (Ref:Cert-70-432.1.2.1)


You are the database administrator for your company. You configure SQL Server instances, Sql1 and Sql2, as
the default instance and the named instance on your Server1 server, respectively. Both instances use fixed ports.
You create users and logins on these SQL Server instances, and grant the required permissions to connect.

You configure a firewall to ensure the security of the network. You also want to ensure that users are able to
successfully connect to the Sql1 and Sql2 instances.

What should you do? (Choose two.)


- Configure the firewall to open the TCP port 1433 to enable access to the default instance.
- Configure the firewall to open the TCP/UDP port 1434 to enable access to the default instance.
- Configure the firewall to open the TCP port 1433 to enable access to the named instance.
- Configure the firewall to open the TCP/UDP port 1434 to enable access to the named instance.
- Assign a port number for the default instance, and configure the firewall to open that port.
- Identify the port used by the named instance, and configure the firewall to open that TCP port.

Answer:
Configure the firewall to open the TCP port 1433 to enable access to the default instance.
Identify the port used by the named instance, and configure the firewall to open that TCP port.

Explanation:
You should configure the firewall to open the TCP port 1433 to enable access to the default instance. Then, you
should identify the port used by the named instance and configure the firewall to open that TCP port. After you
have configured a firewall to ensure the security of the network, the database engine listens for incoming
connections on the port for which the firewall is configured. The default instance on the server is automatically
configured to listen on TCP port 1433. For named instances on the server, you should specifically assign a port
other than TCP port 1433. The database engine will listen for incoming connections on the newly configured port.
After assigning the port, you must configure the firewall to open the respective ports for the instances.

To configure a named instance to use a specific port, you should open SQL Server Configuration Manager and
expand SQL Server Network Configuration. Expand Protocols for <instance_name>, and double-click
TCP/IP. For the appropriate interface, clear the TCP Dynamic Ports value and enter the port number you
want to use in the TCP Port field. You will need to restart the SQL Server service for the changes to take effect.

You should not configure the firewall to open the TCP/UDP port 1434 to enable access to the default instance.
The default instance on the server will always use TCP port 1433, by default. Therefore, you cannot configure the
firewall to open TCP/UDP port 1434 for the default instance.

You should not configure the firewall to open the TCP port 1433 to enable access to the named instance. The
named instance cannot use TCP port 1433 because TCP port 1433 is always used by the default instance on the
server. A named instance should be assigned any other available port, and the firewall should be configured to
open that particular port for the named instance.

You should not configure the firewall to open the TCP/UDP port 1434 to enable access to the named instance. In
SQL Server 2008, named instances use dynamic ports, by default. To configure a named instance to use a fixed
port, you need to set the port for that instance. Because you do not know which port the named instance uses,
you would need to first discover which port the instance uses and then open that port on the firewall.
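One way to discover the port in use (a sketch, assuming you can already connect locally to the named instance) is to query the sys.dm_exec_connections dynamic management view:

```sql
-- Run against the named instance itself; returns the TCP port(s) on which its
-- current connections arrived. Requires VIEW SERVER STATE permission.
SELECT DISTINCT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;
```

The port the instance is listening on is also written to the SQL Server error log at startup, which can be searched as an alternative.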

You should not assign a port number for the default instance and configure the firewall to open that port. The
default instance uses TCP port 1433, by default. You are only required to configure the firewall to open TCP port
1433 to ensure that the default instance enables clients to connect to the instance.

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.
Page 41 of 226

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server instances.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Database Engine Connectivity How-to Topics > Server Connectivity How-to
Topics > How to: Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager)

TechNet>TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Connecting
to the SQL Server Database Engine > Connecting to SQL Server over the Internet

Item: 146 (Ref:Cert-70-432.1.4.1)

You are the SQL administrator for your company. You have several SQL Server 2008 computers deployed
internationally. You have configured a SQL Server 2008 computer named SQLReport with SQL Server Reporting
Services (SSRS) enabled.

You need to configure an archive strategy. SQLReport manages thousands of reports. You need to manage your
reports according to the following requirements:

 All archived reports must be saved to the D:\Reports folder.
 For each archived report, one user must be able to access a fully interactive copy of the archived report.
 Delivery of archived reports must be automated.

What should you do? (Choose all that apply. Each correct answer represents part of the complete solution.)
Create a subscription for each report that saves the report as a file.
Create a subscription for each report that uses file-share delivery.
Use the Report History feature.
Create a subscription for each report that uses e-mail delivery.

Answer:
Create a subscription for each report that uses file-share delivery.
Create a subscription for each report that uses e-mail delivery.

Explanation:
You should create a subscription for each report that uses file-share delivery and create a subscription for each
report that uses e-mail delivery. The subscription that uses file-share delivery will allow all archived reports to be
saved to the D:\Reports folder. However, this version of the reports is static and does not include access to all
the reports' interactive features. To give a user access to a fully interactive copy of an archived report, the report
should be e-mailed to the user by creating a subscription that uses e-mail delivery.

You should not create a subscription for each report that saves the report as a file. This method should only be
used when you have a small number of reports that need archiving.


You should not use the Report History feature. This is an internal SQL Server Reporting Services (SSRS)
feature that creates historical copies of reports in the report server database each time reports are run.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure additional SQL Server components.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Reporting Services > Development > Designing and
Implementing Reports > Designing and Implementing Reports Using Report Designer > Viewing and Saving
Reports > Saving Reports

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Reporting Services > Operations > Administration >
Subscription and Delivery > File Share Delivery in Reporting Services

Item: 148 (Ref:Cert-70-432.1.6.1)

You are a database administrator for a toy manufacturing company named NetFx. The company stores all its
product-related data in a database named Netfx_data that resides on a SQL Server 2008 server named Server1.
A table named Product_details exists in the database and contains the details of all the products manufactured
by the company. The table contains a brief description of each product in the Prod_desc column that is defined
as varchar(300).

Users query the Product_details table by using SQL statements like the following:

SELECT Prod_name, Prod_id, Prod_desc
FROM Product_details
WHERE CONTAINS (Prod_desc, '"Blue Toy"');

You must ensure optimal performance of this query.

What should you do to accomplish the objective?


Create a clustered index on the Prod_desc column.
Create a nonclustered index on the Prod_desc column.
Create a full-text index on the Prod_desc column.
Create a clustered index on the Prod_name column.
Create an indexed view containing the Prod_name and Prod_id columns.

Answer:
Create a full-text index on the Prod_desc column.

Explanation:
You should create a full-text index on the Prod_desc column. A full-text index should be created on a column
when you want to perform keyword searches on the column. In this scenario, users are specifying keywords to


search for data in the Product_details table. Full-text indexes are designed for use in keyword-based queries. A
full-text index can be created on columns that contain the char, varchar, nvarchar, and varbinary(max) data
types. Full-text indexes cannot be created on columns containing numeric values.
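As a sketch of the setup for this scenario (the catalog name and key-index name below are assumptions, since the question does not give them), the index could be created as follows:

```sql
-- A full-text index requires a full-text catalog and a unique, single-column,
-- non-nullable key index on the table (assumed here to be PK_Product_details
-- on the Prod_id column).
CREATE FULLTEXT CATALOG ProductCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON Product_details (Prod_desc)
    KEY INDEX PK_Product_details
    ON ProductCatalog;
```

Once the index is populated, the CONTAINS query in the question can use it.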

You should not create a clustered index on the Prod_desc column because a clustered index will only enhance
the performance of a query when the column contains unique values. A clustered index on a column with a
varchar data type of 300 characters will not provide selectivity in the index. Clustered indexes should be created
on columns that uniquely identify a row.

You should not create a nonclustered index on the Prod_desc column. Similar to a clustered index, a
nonclustered index does not provide good selectivity in the index if the column contains a large number of non-
unique values. A nonclustered index will be beneficial when the column contains unique values such as numbers.
A nonclustered index should be created on columns that are not covered by the clustered index.

You should not create a clustered index on the Prod_name column because the query in this scenario uses the
Prod_desc column instead of the Prod_name column in the WHERE clause. A clustered index on the
Prod_name column will not affect the performance of the query.

You should not create an indexed view containing the Prod_name and Prod_id columns. Creating an indexed
view on the two columns will not affect the performance of the query. Indexed views are not used during full-text
searches. Creating a full-text index on the Prod_desc column will improve the performance in this scenario.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure full-text indexing.

References:
TechNet > TechNet Library > Full-Text Search Introduction

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data > Full-Text Search > Full-Text Search Concepts > Querying SQL Server Using Full-Text Search > Searching
for Specific Word or Phrase (Simple Term)

Item: 164 (Ref:Cert-70-432.1.2.5)

You are the SQL administrator for your company. A SQL Server 2008 computer named VMSQL01 is configured
with the following settings on the Security page of the Server Properties dialog box:


A new security policy has been adopted by your company that requires all SQL servers to provide maximum
security for login accounts. Backward compatibility should not be a consideration.

What should you do?


Select the Enable server proxy account check box, and enter the appropriate credentials.
Select the Windows Authentication mode option, and restart the service.
Select the Enable C2 audit tracing check box.
Select the Successful logins only option, and restart the service.

Answer:
Select the Windows Authentication mode option, and restart the service.


Explanation:
You should select the Windows Authentication mode option and restart the service. Any time you change the
authentication mode, a service restart is required. Windows Authentication provides better security than mixed
mode authentication. In this scenario, the Server Properties dialog box displayed for VMSQL01 shows it is
configured to use mixed mode authentication because the SQL Server and Windows Authentication mode
option is selected. Windows Authentication is more secure because it leverages the operating system's security
mechanisms, such as password policies and Kerberos authentication. If SQL Server authentication is allowed, accounts can be created within SQL
Server 2008. In addition, SQL Server authentication is provided only for backward compatibility. To identify the
authentication method used by a SQL Server 2008 computer, you can open SQL Server Management Studio,
right-click the instance name, and select Properties. Then, from the Server Properties dialog box, select the
Security page.
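The current authentication mode can also be checked from Transact-SQL; for example:

```sql
-- Returns 1 when the instance accepts Windows Authentication only,
-- 0 when mixed mode (SQL Server and Windows Authentication) is enabled.
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;
```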

You should not select the Enable server proxy account check box and enter the appropriate credentials. A
server proxy account is used by the xp_cmdshell extended stored procedure for impersonation. The
xp_cmdshell extended stored procedure executes operating system commands. This setting will not improve the
server's security.

You should not select the Enable C2 audit tracing check box. When this option is enabled, audit trails are
maintained for any attempts to access statements and objects. Although this will provide you with an audit trail of
events that happened, it does not ensure maximum security for login accounts.

You should not select the Successful logins only option and restart the service. When this option is enabled,
audit trails are maintained for all successful logins. Although this will provide you with an audit trail for all
successful logins, it does not ensure maximum security for login accounts.

Objective:
Installing and Configuring SQL Server 2008

Sub-Objective:
Configure SQL Server instances.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Management Studio F1 Help > Object Explorer F1 Help > Server Properties F1 Help >
Server Properties (Security Page)

Maintaining a SQL Server Database


Item: 5 (Ref:Cert-70-432.4.4.5)

You have been hired as the database administrator of your company. You are responsible for managing an
instance of SQL Server 2008 that contains a database named Corpdb.

There are several database snapshots created for the Corpdb database. You want to identify the names of the
sparse files that are created for the Corpdb database.

Which Transact-SQL statement should you execute?


j
k
l
m
n
SELECT name FROM sys.database_files
WHERE database_id = DB_ID(N'Corpdb');

j
k
l
m
n
SELECT physical_name FROM sys.database_files
WHERE database_id = DB_ID(N'Corpdb');

j
k
l
m
n
SELECT name FROM sys.master_files
WHERE database_id = DB_ID(N'Corpdb');

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.
Page 46 of 226

j
k
l
m
n
SELECT physical_name FROM sys.master_files
WHERE database_id = DB_ID(N'Corpdb');

Answer:
SELECT physical_name FROM sys.master_files
WHERE database_id = DB_ID(N'Corpdb');

Explanation:
You should execute the following Transact-SQL statement:

SELECT physical_name FROM sys.master_files
WHERE database_id = DB_ID(N'Corpdb');

Sparse files are used by database snapshots to store data. Sparse files are created with the file names that are
specified in the CREATE DATABASE statement when creating the database snapshot. These file names are
stored in the physical_name column of the sys.master_files catalog view. To query the physical_name column
of the sys.master_files catalog view for a particular database, you should run the following Transact-SQL
statement:

SELECT physical_name FROM sys.master_files
WHERE database_id = DB_ID(N'<name_of_the_database>');

The sparse files are automatically deleted when you drop the database snapshot. To delete a database snapshot,
you would require the DROP DATABASE permission.
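If you want only the sparse files that belong to snapshots of Corpdb (rather than files whose database_id matches Corpdb itself), one possible variant joins to sys.databases, whose source_database_id column is non-NULL for database snapshots:

```sql
-- Lists the sparse files of every snapshot created from Corpdb.
SELECT d.name AS snapshot_name, mf.physical_name
FROM sys.master_files AS mf
JOIN sys.databases AS d
    ON d.database_id = mf.database_id
WHERE d.source_database_id = DB_ID(N'Corpdb');
```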

You should not run the following Transact-SQL statement:

SELECT name FROM sys.database_files
WHERE database_id = DB_ID(N'Corpdb');

The sys.database_files view is a per-database view that contains information about each file of a database as
stored in the database itself. The sparse file names of a database snapshot are stored in the physical_name
column of the sys.master_files catalog view. Therefore, you should query the sys.master_files view. Also, the
name column contains the logical name of the file in the database. It does not display the operating-system file
name.

You should not run the following Transact-SQL statement:

SELECT physical_name FROM sys.database_files
WHERE database_id = DB_ID(N'Corpdb');

The physical_name column in the sys.database_files view always contains the file names of the source
database files.

You should not run the following Transact-SQL statement:

SELECT name FROM sys.master_files
WHERE database_id = DB_ID(N'Corpdb');

The name column contains the logical name of the file in the database. It does not display the operating-system
file name.

Objective:
Maintaining a SQL Server Database


Sub-Objective:
Manage database snapshots.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Database Snapshots > Understanding
Sparse File Sizes in Database Snapshots

Item: 10 (Ref:Cert-70-432.4.5.1)

You are a database administrator for an insurance firm and manage the SQL Server 2008 databases in the
company. The company stores all customer data in a database named Customers that resides on the SQL
server named Sql_1.

You notice that there are several allocation and consistency errors in the Cust_details table in the Customers
database. You want to repair these errors as quickly as possible.

Which statement should you execute to achieve the objective?


DBCC CHECKCATALOG (Customers);
DBCC CHECKTABLE ('Cust_details', REPAIR_ALLOW_DATA_LOSS);
DBCC CHECKDB (Cust_details, Customers, REPAIR_ALLOW_DATA_LOSS);
DBCC CHECKALLOC (Cust_details, REPAIR_FAST);

Answer:
DBCC CHECKTABLE ('Cust_details', REPAIR_ALLOW_DATA_LOSS);

Explanation:
You should execute the following statement:

DBCC CHECKTABLE ('Cust_details', REPAIR_ALLOW_DATA_LOSS);

The DBCC CHECKTABLE statement performs an integrity check on all the pages and structures that constitute
the specified table or indexed view. In this scenario, the consistency and allocation errors exist only in the
Cust_details table. Therefore, to accomplish the task in the minimum possible time, you should check the
integrity at the table level. The REPAIR_ALLOW_DATA_LOSS argument used in the statement attempts to
automatically fix all the reported errors while performing the integrity check. Some data might be lost by using the
REPAIR_ALLOW_DATA_LOSS argument. When you need to repair corrupt tables and do not want to affect the
database, you should use a DBCC CHECKTABLE statement.
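Note that the repair options require the database to be in single-user mode, so a sketch of the full sequence might look like this:

```sql
-- REPAIR_ALLOW_DATA_LOSS can only run while the database is in single-user mode.
ALTER DATABASE Customers SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKTABLE ('Cust_details', REPAIR_ALLOW_DATA_LOSS);

-- Return the database to normal multi-user access afterward.
ALTER DATABASE Customers SET MULTI_USER;
```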

You should not execute the following statement:

DBCC CHECKCATALOG (Customers);

In this scenario, you only want to check the allocation and the integrity of the Cust_details table. The DBCC
CHECKCATALOG statement performs integrity checks of the system tables within a database.

You should not execute the following statement:

DBCC CHECKDB (Cust_details, Customers, REPAIR_ALLOW_DATA_LOSS);


This statement would generate a syntax error. You cannot specify the name of a specific table in the DBCC
CHECKDB statement; you can only specify a database name. Also, you should not perform
the allocation and integrity check on the whole database because, in this scenario, you must finish the task in the
minimum possible time. An integrity check on the whole database will consume more time than an integrity check
on a single table.

You should not execute the following statement:

DBCC CHECKALLOC (Cust_details, REPAIR_FAST);

This statement would generate a syntax error. The DBCC CHECKALLOC statement checks the consistency for
disk allocation structures for a particular database. You cannot specify the name of a table as an argument for this
statement.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain database integrity.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference >DBCC (Transact-SQL) > DBCC CHECKTABLE (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC CHECKDB (Transact-SQL)

Item: 13 (Ref:Cert-70-432.4.5.5)

You are responsible for managing an instance of SQL Server 2008. The SQL server contains a database named
Custdb that is configured with the full recovery model. Your company has a partner company named nutex.com.
The partner company also maintains its customer data on an instance of SQL Server 2008. You import the
customer data from the partner company into the Custdb database.

After the import, users report that Custdb is performing slowly. You investigate and discover that the transaction
log for the Custdb database has increased in size dramatically, and currently occupies most of the free space on
the hard disk. You want to reclaim as much free space as possible.

Which Transact-SQL statement should you use?


DBCC FREEPROCCACHE
DBCC INDEXDEFRAG
DBCC SHRINKFILE
DBCC CHECKDB

Answer:
DBCC SHRINKFILE


Explanation:
You should use the DBCC SHRINKFILE statement. Database Console Commands (DBCC) are Transact-SQL
statements that allow you to perform various maintenance tasks on a database, transaction log, filegroup, or
index. When you want to reclaim hard disk space, you can shrink a data or log file by using the DBCC
SHRINKFILE statement. The DBCC SHRINKFILE statement reduces the size of the specified data or log file for
the current database. You can use the following Transact-SQL statement to shrink a log file named Custdb_Log
to 2 MB:

DBCC SHRINKFILE (Custdb_Log, 2);
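
Because Custdb uses the full recovery model, the log file usually cannot shrink past its active portion. A typical sequence (the backup path here is an assumption) backs up the log first, so that inactive virtual log files can be truncated, and then shrinks the file:

```sql
-- Back up the transaction log to free inactive virtual log files, then shrink.
BACKUP LOG Custdb TO DISK = N'D:\Backups\Custdb_Log.trn';
DBCC SHRINKFILE (Custdb_Log, 2);
```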

You should not use the DBCC FREEPROCCACHE statement because this statement does not allow you to
reduce the size of the data or log file to recover the hard disk space. The DBCC FREEPROCCACHE statement
allows you to delete all elements from the plan cache, delete all workload groups from a specified resource pool,
or specify a plan handle or SQL handle to delete a particular plan from the plan cache.

You should not use the DBCC INDEXDEFRAG statement because this statement does not allow you to reduce
the size of the data or log file to recover the hard disk space. The DBCC INDEXDEFRAG statement defragments
indexes of the specified view or table.

You should not use the DBCC CHECKDB statement because this statement does not allow you to reduce the
size of the data or log file to recover the hard disk space. The Check Database Integrity task of a maintenance
plan performs the same functions as the DBCC CHECKDB statement. Both allow you to verify the allocation and
structural integrity of database objects. When you perform a database integrity check, an entry is recorded in the
Windows Application log, which you can use to verify whether the database integrity check was performed for a
particular database. This entry is also recorded in the SQL Server log.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain database integrity.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC SHRINKFILE (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL)

Item: 22 (Ref:Cert-70-432.4.1.2)

You administer a SQL Server 2008 database named Products. The Products database contains four filegroups,
named Products1, Products2, Products3, and Products4. The database is used for online transaction
processing (OLTP) with a high volume of transactions. Data is evenly distributed across the four filegroups.

The customer demographic information is contained in the Products1 filegroup, and its nonclustered indexes are
located in the Products3 filegroup. No other tables or indexes span multiple filegroups.

You are designing a backup schedule for the Products database. Transaction log backups are performed every
30 minutes. All other backups must occur in the evening. While implementing a backup plan, you discover that
only half of the database can be backed up each night. You want to provide the maximum protection to the
environment.

Which backup schedule should you implement?


Back up the Products1 filegroup on the first day. Back up the Products2 filegroup on the second day. Back up the Products3 filegroup on the third day. Back up the Products4 filegroup on the fourth day.
Back up the Products1 and Products2 filegroups on Monday, Wednesday, and Friday. Back up the Products3 and Products4 filegroups on Tuesday, Thursday, and Saturday.
Back up the Products1 and Products4 filegroups on Monday, Wednesday, and Friday. Back up the Products2 and Products3 filegroups on Tuesday, Thursday, and Saturday.
Back up the Products1 and Products3 filegroups on Monday, Wednesday, and Friday. Back up the Products2 and Products4 filegroups on Tuesday, Thursday, and Saturday.

Answer:
Back up the Products1 and Products3 filegroups on Monday, Wednesday, and Friday. Back up
the Products2 and Products4 filegroups on Tuesday, Thursday, and Saturday.

Explanation:
You should back up the Products1 and Products3 filegroups on Monday, Wednesday, and Friday and back up
the Products2 and Products4 filegroups on Tuesday, Thursday, and Saturday. Nonclustered indexes must be
backed up with the tables on which they are based to ensure that data is in a consistent state.

All options that back up Products1 with any filegroup other than Products3 are incorrect. The Products1 and
Products3 filegroups must be backed up together because Products3 contains the nonclustered indexes for
Products1's table. If you do not back them up at the same time, inconsistent data could be a result.

You should not back up the Products1 filegroup on the first day, the Products2 filegroup on the second day, the
Products3 filegroup on the third day, and the Products4 filegroup on the fourth day. This solution would not
allow you to restore Products1 and Products3 so that they are consistent with each other.

For SQL Server to re-create an index, all database files that contain the base table and all database files that are
affected by the index creation must be in the same condition in which they were in when the index was first
created.

If the index and the base table are contained in the same filegroup, you should back up the entire filegroup as a
single unit. If the index and the base table are contained in separate filegroups, you should back up all the
filegroups as a single unit. This will allow you to restore the indexes to a state consistent with the base table.
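Filegroups that must stay consistent with each other can be backed up in a single BACKUP statement. A sketch of the Monday/Wednesday/Friday job (the backup device path is an assumption):

```sql
-- Back up Products1 and its nonclustered-index filegroup Products3 as one unit.
BACKUP DATABASE Products
    FILEGROUP = N'Products1',
    FILEGROUP = N'Products3'
TO DISK = N'E:\Backups\Products_FG1_FG3.bak';
```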

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Back up databases.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Backup Overview (SQL Server)

Item: 24 (Ref:Cert-70-432.4.5.7)

You are the database administrator of your company. The network contains a SQL Server 2008 computer that
stores several databases. You want to identify all databases that have torn pages.

Which Transact-SQL statement should you run?


SELECT database_id
FROM msdb..suspect_pages
WHERE event_type = 1;

SELECT database_id
FROM msdb..suspect_pages
WHERE event_type = 2;

SELECT database_id
FROM msdb..suspect_pages
WHERE event_type = 3;

SELECT database_id
FROM msdb..suspect_pages
WHERE event_type = 4;

Answer:
SELECT database_id
FROM msdb..suspect_pages
WHERE event_type = 3;

Explanation:
You should run the following Transact-SQL statement:

SELECT database_id
FROM msdb..suspect_pages
WHERE event_type = 3;

The suspect_pages table is stored in the msdb database, and it contains information about suspect pages. A
suspect page is a page that the database engine fails to read due to an 823 or 824 error. However, to retrieve
only torn pages, you need to use an event_type value of 3. An 823 error indicates a severe system-level error
that threatens database integrity, such as a disk error. An 824 error indicates that SQL Server detected an I/O
error based on logical consistency, such as a bad page ID. The status for each page in the suspect_pages table
is listed in the event_type column. The pages with event_type value of 3 indicate torn pages.
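For reference, the documented event_type values can be inspected alongside the other columns of the table:

```sql
-- event_type values: 1 = 823/824 error other than bad checksum or torn page,
-- 2 = bad checksum, 3 = torn page, 4 = restored, 5 = repaired by DBCC,
-- 7 = deallocated by DBCC.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;
```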

You should not run the following Transact-SQL statement:

SELECT database_id FROM msdb..suspect_pages
WHERE event_type = 1;

An event_type value of 1 indicates an 823 error that results in a suspect page, or an 824 error other than a bad
checksum or a torn page. Bad checksum errors can be retrieved using an event_type value of 2. Torn page
errors can be retrieved using an event_type value of 3.

You should not run the following Transact-SQL statement:

SELECT database_id FROM msdb..suspect_pages
WHERE event_type = 2;

An event_type value of 2 indicates a bad checksum, not a torn page.

You should not run the following Transact-SQL statement:

SELECT database_id FROM msdb..suspect_pages
WHERE event_type = 4;

An event_type value of 4 indicates a restored page, not a torn page.


Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain database integrity.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Tables (Transact-SQL) > Backup and Restore Tables (Transact-SQL) > suspect_pages
(Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Implementing Restore Scenarios for SQL Server Databases >
Performing Page Restores > Understanding and Managing the suspect_pages Table

Item: 27 (Ref:Cert-70-432.4.3.2)

You are responsible for managing a SQL Server 2008 instance. You want to create a new database named
Salesdb that supports FILESTREAM data. You execute the following Transact-SQL statement to create the new
database:

CREATE DATABASE Salesdb
ON PRIMARY
    (NAME = Salesdb1,
     FILENAME = 'c:\FileData\SalesdbData.mdf'),
FILEGROUP FStreamFileGroup1 CONTAINS FILESTREAM
    (NAME = Salesdb2,
     FILENAME = 'c:\FileData\Fstream')
LOG ON
    (NAME = SalesdbLog,
     FILENAME = 'c:\FileData\SalesdbLog.ldf');

Which two conditions must be met for this statement to execute successfully? (Choose two.)
The C:\FileData\Fstream folder must exist.
The C:\FileData folder must exist.
The Fstream subfolder in the C:\FileData folder must not exist.
The C:\FileData folder must not exist.

Answer:
The C:\FileData folder must exist.
The Fstream subfolder in the C:\FileData folder must not exist.

Explanation:
The C:\FileData folder must exist, and the Fstream subfolder in the C:\FileData folder must not exist. The
FILESTREAM feature integrates applications that use SQL Server with the NTFS file system. It stores varbinary
(max) binary large object (BLOB) data as files on the file system. To be able to use FILESTREAM, you must
enable it on the SQL Server instance that is running the Database Engine. Permissions to the FILESTREAM data
are managed by NTFS, and the SQL Server service is the only service that can access this data.
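
Enabling FILESTREAM is a two-step process: it is first enabled for the Windows service in SQL Server Configuration Manager, and the instance-level access level is then set with sp_configure. A sketch of the second step:

```sql
-- 0 = disabled, 1 = Transact-SQL access only,
-- 2 = Transact-SQL and Win32 streaming access.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
```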


FILESTREAM provides security to the data. Only the SQL Server service account is granted NTFS permission to
the FILESTREAM container. The FILESTREAM feature uses a special type of filegroup. Therefore, when creating
the database, you must use the CONTAINS FILESTREAM clause when creating at least one filegroup. The
FILENAME parameter in the FILESTREAM filegroup is used to specify a path. All folders specified in the path up
to the last folder must exist. The last folder specified in the path must not exist. For example, in this scenario the
C:\FileData folder must exist, and the Fstream subfolder in the C:\FileData folder must not exist.
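
Before a CREATE DATABASE statement such as the one above can use a FILESTREAM filegroup, the feature must also be enabled on the instance. As a minimal sketch (choosing access level 2 is an assumption; level 1 allows Transact-SQL access only, while level 2 also allows Win32 streaming access):

```sql
-- Enable FILESTREAM access for the instance.
-- 0 = disabled, 1 = Transact-SQL access, 2 = Transact-SQL and Win32 streaming access.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
```

Note that FILESTREAM must also be enabled for the SQL Server service itself through SQL Server Configuration Manager before this instance-level setting takes effect.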

The option stating that the C:\FileData\Fstream folder must exist is incorrect. All folders specified in the path up
to the last folder must exist. The last folder specified in the path must not exist. Therefore, in this scenario the
Fstream subfolder in the C:\FileData folder must not exist.

The option stating that the C:\FileData folder must not exist is incorrect. All folders specified in the path up to the
last folder must exist. Therefore, the C:\FileData folder must exist.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Manage and configure databases.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing FILESTREAM Storage > Designing and Implementing FILESTREAM How-to Topics > How to:
Create a FILESTREAM-Enabled Database

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing FILESTREAM Storage > FILESTREAM Overview

Item: 29 (Ref:Cert-70-432.4.2.2)

You are a database administrator managing all the SQL Server 2008 databases of your company. All the customer-related data is stored in the Prod database, which operates in the full recovery model. You adhere to the following backup strategy for the Prod database:

 A full backup is taken every Sunday.
 A differential backup is taken every weekday at 8:00 P.M.
 Transaction log backups are taken every weekday at 10:00 A.M. and 2:00 P.M.

At 4:00 P.M. on Wednesday, the hard disk on which your database files are stored fails. You lose all the data
stored on the disk and are required to recover the database from the existing database backups.

To most efficiently recover the database, which action should you take?
 Restore the full backup taken on Sunday, the differential backup taken on Tuesday, and the transaction log backups taken at 10:00 A.M. and 2:00 P.M. on Wednesday.
 Restore the full backup taken on Sunday, the differential backup taken on Wednesday, and the transaction log backup taken at 2:00 P.M. on Wednesday.
 Restore the full backup taken on Sunday, the differential backups taken on Monday and Tuesday, the transaction log backups taken on Tuesday, and the transaction log backups taken on Wednesday at 10:00 A.M. and 2:00 P.M.
 Restore the full backup taken on Sunday, the differential backup taken on Monday, and all the transaction log backups taken on Tuesday and Wednesday.

Answer:
Restore the full backup taken on Sunday, the differential backup taken on Tuesday, and the
transaction log backups taken at 10:00 A.M. and 2:00 P.M. on Wednesday.

Explanation:
In this scenario, you should recover the database up to 2:00 P.M. on Wednesday by restoring the full backup
taken on Sunday, the differential backup taken on Tuesday, and the transaction log backups taken at 10:00 A.M.
and 2:00 P.M. on Wednesday. To recover a database from a set of full and differential backups, you should
restore the most recent full backup and the most recent differential backup created since the last full backup. This
differential backup will contain all the data that has been modified in the database since the last full backup. You
can also use the transaction log backups since the most recent differential backup. In this scenario, the most
recent full backup was taken on Sunday, and the most recent differential backup was taken on Tuesday. After
restoring these two backups, you can also restore the two transaction log backups that were taken after the
differential backup. Therefore, after recovery, the database will contain all the data up to 2:00 P.M. on
Wednesday.
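
The restore sequence described above can be sketched in Transact-SQL as follows; the backup device paths and file names are assumptions for illustration:

```sql
-- Restore the Sunday full backup, leaving the database nonrecovered
-- so that further backups can be applied.
RESTORE DATABASE Prod FROM DISK = 'D:\Backups\Prod_Full_Sunday.bak' WITH NORECOVERY;
-- Apply the most recent differential backup (Tuesday, 8:00 P.M.).
RESTORE DATABASE Prod FROM DISK = 'D:\Backups\Prod_Diff_Tuesday.bak' WITH NORECOVERY;
-- Apply the transaction log backups taken after the differential backup,
-- recovering the database with the final restore.
RESTORE LOG Prod FROM DISK = 'D:\Backups\Prod_Log_Wed_1000.trn' WITH NORECOVERY;
RESTORE LOG Prod FROM DISK = 'D:\Backups\Prod_Log_Wed_1400.trn' WITH RECOVERY;
```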

You should not restore the full backup taken on Sunday, the differential backup taken on Wednesday, and the transaction log backup taken at 2:00 P.M. on Wednesday. There is no differential backup of the database on
Wednesday because the differential backups are taken on weekdays at 8:00 P.M., and the database was
corrupted at 4:00 P.M. Therefore, you can only use the differential backup created on Tuesday.

You should not restore the full backup taken on Sunday, the differential backups taken on Monday and Tuesday,
the transaction log backups taken on Tuesday, and the transaction log backups taken on Wednesday at 10:00
A.M. and 2:00 P.M. To recover a database by restoring the backups, you should only restore the most recent
differential backup since the most recent full backup. You should not recover the differential backup created on
Monday.

You should not restore the full backup taken on Sunday, the differential backup taken on Monday, and all the
transaction log backups created on Tuesday and Wednesday. To recover a database from backups, you should
restore the most recent backup. This enables you to recover the database in less time by applying only those
transactions logs that are generated after the last differential backup. If you restore the differential backup taken
on Monday, you will be required to apply all the transaction logs generated on Tuesday and on Wednesday. This
process will require additional time and is not the most efficient method to recover the database.

The backup and restore process will vary based on the design of the database layout as well as the backup
requirements. For example, if you have a database that resides on drive G, a transaction log that resides on drive
H, and an archive that resides on drive I, you would design your backup and restore process based on the
requirements. In such a situation, it would be best to implement a separate backup strategy for each drive. You
could use a full backup/differential backup strategy to back up the database and use transaction log backups to back up the transaction log. A separate backup strategy could be implemented for the archive. If the database needs to be recovered, you would use a combination of the full backup, differential backups, and transaction log backups. However, if the drive on which the archive or transaction log resides is the only drive to fail, the database would still be operational. You would simply need to replace the failed drive.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Restore databases.


References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Understanding Recovery Performance in SQL Server > Reducing
Recovery Time When Restoring a Database

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration
>Administration: How-to Topics > Backing Up and Restoring How-to Topics > How to: Restore to a Point in Time
(Transact-SQL)

Item: 31 (Ref:Cert-70-432.4.4.1)

You are the database administrator for your company. You are administering two SQL Server databases, Sales1
and Sales2, on your company's server.

According to changes in your company policy, you should create and maintain snapshots of the Sales1 database
to provide a backup and recovery solution for the database.

Which restriction applies while creating a snapshot of the Sales1 database?

 All the filegroups in the Sales1 database must be online to create a snapshot.
 The snapshot must be placed on the same SQL Server instance as the Sales1 database.
 You must delete the earlier snapshots of the Sales1 database before creating a new snapshot.
 A snapshot of the model database must be created along with the snapshot of the Sales1 database.

Answer:
The snapshot must be placed on the same SQL Server instance as the Sales1 database.

Explanation:
The snapshot must be placed on the same SQL Server instance as the Sales1 database. A snapshot is a read-
only copy of a database at a specific point in time. Any change made to the database after this point in time is not
available in the snapshot. A snapshot is always placed on the SQL Server instance on which the SQL Server
database resides. Snapshots are primarily used for reporting purposes and as test databases. They can also be
used to recover the database from an unwanted change. They provide a copy of a database using minimal disk
space.
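
As a minimal sketch, a snapshot of Sales1 might be created as follows; the logical file name and the snapshot file path are assumptions:

```sql
-- Create a read-only, point-in-time snapshot of Sales1 on the same instance.
-- NAME must match the logical name of a data file in the source database.
CREATE DATABASE Sales1_Snapshot ON
    (NAME = Sales1_Data, FILENAME = 'C:\Snapshots\Sales1_Data.ss')
AS SNAPSHOT OF Sales1;
```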

The option stating that all the filegroups in the Sales1 database must be online to create a snapshot is incorrect. It
is not necessary to keep all the filegroups online while creating a snapshot of a database. A snapshot can be
created if some filegroups in the database are offline. The offline filegroups will not generate sparse files and will
remain offline in the database snapshot.

The option stating that you must delete the earlier snapshots of the Sales1 database before creating a new
snapshot is incorrect. You can create multiple snapshots of a database, but the names of two snapshots of a
database cannot be identical. The database snapshots should be named so that you can easily identify the point
in time at which the snapshot was created.

The option stating that a snapshot of the model database must be created along with a snapshot of the Sales1
database is incorrect. You are not required to create a snapshot of the model database along with the snapshot
of the Sales1 database. Creating a snapshot of the model, master, or tempdb database is not allowed.


Objective:
Maintaining a SQL Server Database

Sub-Objective:
Manage database snapshots.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Database Snapshots

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Database Snapshots > How Database
Snapshots Work

Item: 36 (Ref:Cert-70-432.4.5.4)

You are the database administrator for your company. You maintain a production database named Products on
your company's server. There are approximately 35 objects in the Products database.

Your Products database has crashed due to a hardware failure. You were able to copy the .mdf and .ldf files of
this database from the hard disks onto a different server. You have attached these files to a new server instance
and named the new database Products2.

You want to perform the following actions:

 Check the integrity of all objects in the newly created database.
 Minimize the time required to perform the integrity check.
 Repair errors without the risk of data loss.
 Suppress any informational messages displayed while performing the integrity check.

However, repairing errors without the risk of data loss is more important than reducing the time to perform the
operation or suppressing informational messages.

Which statement will you issue to achieve the stated objectives?


 DBCC CHECKDB WITH NO_INFOMSGS;
 DBCC CHECKDB('Products2', NOINDEX, REPAIR_FAST) WITH NO_INFOMSGS;
 DBCC CHECKDB('Products2', NOINDEX, REPAIR_REBUILD) WITH NO_INFOMSGS;
 DBCC CHECKDB('Products2', REPAIR_REBUILD) WITH NO_INFOMSGS;

Answer:
DBCC CHECKDB('Products2', REPAIR_REBUILD) WITH NO_INFOMSGS;

Explanation:
You should issue the following statement:

DBCC CHECKDB('Products2', REPAIR_REBUILD) WITH NO_INFOMSGS;


This statement fulfills two of the three secondary requirements: it repairs errors without the risk of data loss and suppresses informational messages, although it cannot also minimize the time required. The DBCC CHECKDB statement checks the allocation, logical integrity, and physical integrity of all the objects in the specified database. In this scenario, the following arguments are relevant:

 NOINDEX: Specifies that intensive checks for nonclustered indexes are not performed. Using NOINDEX
reduces the time required to perform the integrity checks. In this scenario, the time required must be
minimized. However, this is not the most important requirement. You cannot use this option with any repair
options.
 REPAIR_REBUILD: Specifies that errors will be repaired and ensures that there is no loss of data in the
repair process. In this scenario, this was the most important requirement. Therefore, you should include
this argument.
 NO_INFOMSGS: Specifies that no informational messages are displayed.

The database name can be specified by using the database_name argument. If the database name is not
specified, the current database will be used.
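
One caveat worth noting: the repair options of DBCC CHECKDB require the database to be in single-user mode. A hedged sketch of the full sequence (using ROLLBACK IMMEDIATE to disconnect other sessions is an assumed choice):

```sql
-- Repair options require the database to be in single-user mode.
ALTER DATABASE Products2 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB('Products2', REPAIR_REBUILD) WITH NO_INFOMSGS;
-- Return the database to normal multi-user access.
ALTER DATABASE Products2 SET MULTI_USER;
```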

You should not issue the following statement:

DBCC CHECKDB WITH NO_INFOMSGS;

This statement does not specify any clause to reduce the time required by the integrity check. In addition, the
REPAIR_REBUILD argument is not used to ensure that the errors reported are repaired without incurring any
loss of data.

You should not issue the following statement:

DBCC CHECKDB('Products2', NOINDEX, REPAIR_FAST) WITH NO_INFOMSGS;

This statement does not fulfill the objectives specified in this scenario. To perform repairs without the risk of data
loss, you should use the REPAIR_REBUILD argument. You cannot use both NOINDEX and REPAIR_FAST in a
single DBCC CHECKDB statement.

You should not issue the following statement:

DBCC CHECKDB('Products2', NOINDEX, REPAIR_REBUILD) WITH NO_INFOMSGS;

You cannot use both NOINDEX and REPAIR_REBUILD in a single DBCC CHECKDB statement.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain database integrity.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC CHECKDB (Transact-SQL)

Item: 40 (Ref:Cert-70-432.4.2.6)

You are the database administrator of your company. The network contains an instance of SQL Server 2008. You
are implementing a backup strategy for a database named Empdb stored on the SQL server. You want to be able
to instantly create a copy of the Empdb database to create reports.


Which type of backup should you create?


 a copy-only backup
 a full backup
 a snapshot backup
 a differential backup

Answer:
a snapshot backup

Explanation:
You should create a snapshot backup of the Empdb database. A snapshot backup is a special type of backup
that is created instantaneously without affecting the server. Snapshot backups use a split-mirror provided by an
independent software and hardware vendor, which prevents the use of SQL Server resources to create the
backup. You should use the following syntax to create a snapshot backup:

CREATE DATABASE database_snapshot_name
ON (
    NAME = logical_file_name,
    FILENAME = 'os_file_name'
) [ ,...n ]
AS SNAPSHOT OF source_database_name;

Snapshot backups are beneficial because they allow you to perform a restore operation as quickly as the
snapshot backup is created. A snapshot backup cannot be used to perform an online restore because SQL
Server does not support this. When you restore a snapshot backup, the database is automatically taken offline. A
snapshot backup can be included in a piecemeal restore. A piecemeal restore is a process that allows you to
restore and recover a database with multiple filegroups in stages.

You should not create a copy-only backup of the Empdb database because copy-only backups are not created
as quickly as snapshot backups. A copy-only backup is a special type of backup that is taken outside of your
conventional SQL Server backup processing. The following syntax is used to create a copy-only backup:

BACKUP DATABASE <database_name> TO <backup_device>...
WITH COPY_ONLY;

You should not create a full backup of the Empdb database because full backups are not created as quickly as
snapshot backups. A full backup contains an entire copy of the specified database. A full backup takes a
considerable amount of time to complete depending on the amount of data stored in the database.

You should not create a differential backup of the Empdb database. A differential backup contains only data that
has changed since the last full database backup. Differential backups are smaller and can be taken more quickly
than full database backups, but they are not created as quickly as snapshot backups.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Restore databases.

References:


MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Backup and Restore in Large Mission-Critical Environments >
Snapshot Backups
Item: 62 (Ref:Cert-70-432.4.5.6)

You are the database administrator of your company. The network contains five instances of SQL Server 2008.
An instance named SQL1 contains a database named Salesdb that is used by users in the sales department.

Some users report that when they query the Salesdb database, they receive out-of-range values from several
tables. You want to identify out-of-range values for columns in all the tables in the Salesdb database.

Which Transact-SQL statement should you run?


 the DBCC CHECKDB statement with the DATA_PURITY clause
 the DBCC CHECKDB statement with the EXTENDED_LOGICAL_CHECKS clause
 the DBCC CHECKTABLE statement with the DATA_PURITY clause
 the DBCC CHECKTABLE statement with the EXTENDED_LOGICAL_CHECKS clause

Answer:
the DBCC CHECKDB statement with the DATA_PURITY clause

Explanation:
You should run the DBCC CHECKDB statement with the DATA_PURITY clause. The DBCC CHECKDB
statement allows you to verify the allocation and structural integrity of all the objects in the database specified in
the statement. When you specify the DATA_PURITY clause in the DBCC CHECKDB statement, it identifies all
invalid or out-of-range values for columns in all the tables in the database.
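
For this scenario, the check described above might be run as follows; adding NO_INFOMSGS to reduce output is an optional assumption:

```sql
-- Check column values in every table of Salesdb for invalid or out-of-range data.
DBCC CHECKDB ('Salesdb') WITH DATA_PURITY, NO_INFOMSGS;
```

For databases created in SQL Server 2005 or later, column-value checks are included in DBCC CHECKDB by default; the DATA_PURITY clause is chiefly needed to force them for databases upgraded from earlier versions.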

You should not run the DBCC CHECKDB statement with the EXTENDED_LOGICAL_CHECKS clause. The
EXTENDED_LOGICAL_CHECKS clause verifies the logical consistency of indexed views, XML indexes, and
spatial indexes when the compatibility level is SQL Server 2008 or higher. The EXTENDED_LOGICAL_CHECKS
clause cannot be used to identify invalid values for columns in the database.

You should not run the DBCC CHECKTABLE statement with the DATA_PURITY clause. In this scenario, you
want to identify invalid values in all the tables in the Salesdb database. This can be achieved by using the DBCC
CHECKDB statement. The DBCC CHECKTABLE statement is used for a particular table in the database, not all
tables in the database.

You should not run the DBCC CHECKTABLE statement with the EXTENDED_LOGICAL_CHECKS clause. The
EXTENDED_LOGICAL_CHECKS clause verifies the logical consistency of indexed views, XML indexes, and
spatial indexes when the compatibility level is SQL Server 2008 or higher. In addition, the DBCC CHECKTABLE
statement will only check a table within a database, not the entire database.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain database integrity.

References:


TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC CHECKDB (Transact-SQL)

Item: 70 (Ref:Cert-70-432.4.6.1)

You are a database administrator for Verigon Corporation. You maintain all the SQL Server 2008 databases for
the company. The company receives data regarding the processed orders from the various branches of the
company. After the data is verified by the audit department, the appropriate tables in the database are updated
with the data by 6 A.M.

You are required to perform the following tasks after the tables have been updated:

 Rebuild the indexes in the tables at 8:00 A.M.
 Update the statistics for the tables and indexes in the database at 10:00 P.M.
 Perform a differential backup of the database at 7:00 P.M.

Which method should you use to perform the tasks by using the least administrative effort?
 Create a new job for all the specified tasks.
 Create a new maintenance plan for the database by using the Maintenance Plan Wizard.
 Create new maintenance plans for each of the specified tasks by using the Maintenance Plan Wizard.
 Create a new maintenance plan for the database by using the design surface.

Answer:
Create a new maintenance plan for the database by using the Maintenance Plan Wizard.

Explanation:
You should create a new maintenance plan for the database by using the Maintenance Plan Wizard. The
Maintenance Plan Wizard creates the necessary jobs to perform the specified tasks. With SQL Server 2005
Service Pack 2 and SQL Server 2008, a maintenance plan can schedule a group of tasks to be performed at a
specific time or you can use a specific schedule for each task. You can create a maintenance plan using the
Maintenance Plan Wizard or using the design surface, but using the Maintenance Plan Wizard would involve
the least administrative effort. In this scenario, you have tasks that should run at different times. Therefore, you
should create a separate schedule for each task in the maintenance plan.

You should not create new maintenance plans for each of the specified tasks by using the Maintenance Plan
Wizard. This would require more administrative effort than creating a single maintenance plan. With SQL Server
2005 Service Pack 2 and SQL Server 2008, a maintenance plan can be used to schedule the different tasks to be
performed at a specific time or you can use a specific schedule for each task.

You should not create a new job for all the specified tasks because a single job cannot perform tasks that are scheduled to run at different times. You can create a job containing different tasks, but all the tasks within the job are scheduled to run at the same time. To run tasks at different times, you would have to create a separate job for each task.

You should not create a new maintenance plan for the database using the design surface. Using the design surface would involve more administrative effort than using the Maintenance Plan Wizard. The Maintenance Plan Wizard is beneficial if you must perform basic maintenance tasks. In this scenario, the tasks to be performed are basic maintenance tasks. Therefore, the use of the Maintenance Plan Wizard is recommended. You would use the design surface if you needed to create more complex maintenance plans that use enhanced workflow features.


Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain a database by using maintenance plans.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Server Management How-to Topics > SQL Server Management Studio How-to
Topics > How To: Create a Maintenance Plan

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Maintaining Databases (Database Engine) >
Maintenance Plans

Item: 80 (Ref:Cert-70-432.4.4.4)

You manage an instance of SQL Server 2008 that contains five databases. You perform database snapshots for
all five databases on a daily basis.

A user reports that he accidentally deleted a table from the Mktgdb database. You verify that a snapshot named
Mktgdbsnapshot contains the deleted table. You want to revert the Mktgdb database to the Mktgdbsnapshot
snapshot.

Which permission do you require to perform this task?


 the LOAD DATABASE permission
 the RESTORE DATABASE permission
 the ALTER DATABASE permission
 the CREATE DATABASE permission

Answer:
the RESTORE DATABASE permission

Explanation:
You require the RESTORE DATABASE permission. A database snapshot is a special type of backup that
captures the state of data in a database at a point in time at which the snapshot creation was started. You can
revert to a snapshot backup when data in an online source database is damaged.

You should use the following Transact-SQL statement to revert to a database snapshot:

RESTORE DATABASE <database_name>
FROM DATABASE_SNAPSHOT = <'snapshot_name'>;

To perform a revert operation, you must have the RESTORE DATABASE permission on the source database.
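
Applied to this scenario, the revert statement would be:

```sql
-- Revert Mktgdb to its state at the time Mktgdbsnapshot was created.
RESTORE DATABASE Mktgdb FROM DATABASE_SNAPSHOT = 'Mktgdbsnapshot';
```

Note that reverting requires Mktgdbsnapshot to be the only snapshot of the database; any other snapshots of Mktgdb would need to be dropped first.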

You do not need the LOAD DATABASE permission because this permission is no longer available in SQL Server 2008. This permission has been replaced with the RESTORE DATABASE permission.

You do not need the ALTER DATABASE or CREATE DATABASE permission because neither of these
permissions is required to revert to a database snapshot. The ALTER DATABASE permission allows you to
modify a database, or modify the files and filegroups that are associated with the database. The CREATE
DATABASE permission allows you to create a new database or database snapshot. This permission also allows
you to attach a database using a set of existing detached files.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Manage database snapshots.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Designing and Implementing Structured Storage How-to
Topics > Databases How-to Topics > Database Snapshots How-to Topics > How to: Revert a Database to a
Database Snapshot (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Database Snapshots > Reverting to a
Database Snapshot

Item: 83 (Ref:Cert-70-432.4.2.3)

You are the database administrator for your company. You manage the production database named Prod. The
database is configured to use the Simple Recovery model. You perform a full backup of the database at 1:00 A.M.
and 1:00 P.M. daily. A differential backup is performed every two hours beginning at midnight.

A user drops a table in the database at 11:30 A.M.

Which restoration method will you use to recover the dropped table to the most recent point in time?
 Restore the database from the most recent full backup only.
 Restore the database from the most recent full backup, and apply all the differential backups.
 Restore the database from the most recent full backup, and apply the most recent differential backup since the last full database backup.
 Recover the table from the most recent full backup, apply the most recent differential backup, and apply the most recent transaction log backup.

Answer:
Restore the database from the most recent full backup, and apply the most recent differential
backup since the last full database backup.

Explanation:
You should restore the database from the most recent full backup and apply the latest differential backup since the last full database backup. To recover a table from a series of full and differential database backups, you should restore the database from the most recent full database backup and apply the latest differential backup
that was taken after the most recent full database backup. In this scenario, the database is configured to use the
simple recovery model. Therefore, you cannot use the transaction logs for recovery. In a database that uses the
simple recovery model, backups of the transaction logs are not maintained.

You should not restore the database from the most recent full backup. Restoring the most recent full database
backup will only restore the table to the point when the full database backup was taken. To restore the table to the
most recent point in time, you should apply the latest differential backup performed after the most recent full
database backup.

You should not restore the database from the most recent full backup and apply all the differential backups. After
restoring the table from the most recent full database backup, you are only required to restore the latest
differential backup since the most recent full database backup. You are not required to restore all the differential
database backups.

You should not recover the table from the most recent full backup, followed by the most recent differential
backups, and finally by the most recent transaction log backup. In this scenario, the database is using the simple
recovery model, and a backup of transaction logs is not maintained in the simple recovery model.

When restoring a database, you can restore to a particular point in time. You would execute the RESTORE
DATABASE statement with the NO RECOVERY clause for the database backups. When you restore the
transaction log backups, you execute the RESTORE LOG statement. The RESTORE LOG statement includes
the RECOVERY and STOPAT clauses, which ensure that the database is restored to full functionality and is
restored to a certain point in time, respectively.
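
The point-in-time restore described above can be sketched as follows for a database that uses the full recovery model; the device paths and the timestamp are assumptions for illustration:

```sql
-- Restore the full backup without recovery so the log can be applied.
RESTORE DATABASE Prod FROM DISK = 'D:\Backups\Prod_Full.bak' WITH NORECOVERY;
-- Roll the log forward, stopping just before the table was dropped,
-- and recover the database to full functionality.
RESTORE LOG Prod FROM DISK = 'D:\Backups\Prod_Log.trn'
    WITH RECOVERY, STOPAT = '2011-06-01 11:29:00';
```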

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Restore databases.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Backing Up and Restoring How-to Topics > How to: Create a Differential
Database Backup (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Database in SQL Server > Creating Full and Differential Backups of a SQL Server database >
Using Differential Backups > Differential Database Backups

Item: 91 (Ref:Cert-70-432.4.1.4)

You manage an instance of SQL Server 2008. The server contains a database named Corpdb. You configure the options for the Corpdb database as shown in the Options page of the Database Properties dialog box, where the recovery model is set to Simple.


You are creating a transaction log backup of the Corpdb database. You open the Back Up Database - Corpdb
dialog box to perform the transaction log backup. However, you discover that the Transaction Log option is not
available in the Backup type drop-down list. You want to ensure that the Transaction Log option is available in
the Backup type drop-down list.

What should you do? (Choose two. Each answer represents a complete solution.)
- Change the recovery model of Corpdb to Full.
- Ensure that you have the BACKUP LOG permission.
- Ensure that you are a member of the db_backupoperator fixed database role.
- Change the recovery model of Corpdb to Bulk-logged.

Answer:
Change the recovery model of Corpdb to Full.
Change the recovery model of Corpdb to Bulk-logged.

Explanation:
You should change the recovery model of Corpdb to Full or Bulk-logged. Transaction logs allow you to restore a
database to a particular point in time or to the point of failure if the database fails. In SQL Server 2008, transaction
logs are stored in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA
folder by default. Transaction log backups can only be created for databases that use the full or bulk-logged
recovery model. You can use the following Transact-SQL statement to change the recovery model for the Corpdb
database:

ALTER DATABASE Corpdb
SET RECOVERY FULL;

To perform a transaction log backup for the Corpdb database, you can use the following Transact-SQL
statement:

BACKUP LOG Corpdb
TO DISK = 'D:\LogBackups\CorpdbLog.bak';

You should not ensure that you have the BACKUP LOG permission. This permission allows users to create
transaction log backups. However, having the BACKUP LOG permission is not useful unless the recovery model
of the database is set to either full or bulk-logged.

You should not ensure that you are member of the db_backupoperator fixed database role. This fixed database
role allows users to back up databases and transaction logs and create checkpoints. However, being a member
of the db_backupoperator fixed database role is not useful unless the recovery model of the database is set to
either full or bulk-logged.
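Before attempting the log backup, you can confirm the database's current recovery model. A minimal sketch (the Corpdb name comes from the scenario):

```sql
-- Returns SIMPLE, FULL, or BULK_LOGGED for the scenario database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'Corpdb';
```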

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Back up databases.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > BACKUP (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server >
SQL Server 2008 > Product Documentation > SQL Server 2008 Books Online > Database Engine > Operations >
Administration > Administration: How-to Topics > Backing Up and Restoring How-to Topics (Transact-SQL) >
How to: Create a Transaction Log Backup (Transact-SQL)

Item: 97 (Ref:Cert-70-432.4.4.3)

You are the database administrator for your company. The network contains a SQL Server 2008 computer. The
SQL server contains a database named Marketingdb. On Wednesday, you create two snapshots of the
Marketingdb database, named MktgSnapshot1 and MktgSnapshot2. The next day, you discover that the
Marketingdb database is damaged.

You verify that MktgSnapshot2 contains the most recent data. You run the following Transact-SQL statement to
revert to MktgSnapshot2:

RESTORE DATABASE Marketingdb
FROM DATABASE_SNAPSHOT = 'MktgSnapshot2';

An error occurs, and you discover that the database is not reverted to MktgSnapshot2. You think that the error
occurs because you have more than one snapshot. You want to ensure that reverting to the MktgSnapshot2
snapshot completes successfully.

What should you do?


- Ensure that the source database does not contain any read-only or compressed filegroups.
- Detach the Marketingdb database.
- Bring the Marketingdb database offline.
- Delete MktgSnapshot1.

Answer:
Delete MktgSnapshot1.

Explanation:
You should delete MktgSnapshot1. A database snapshot is a special type of backup that captures the state of
the data in a database at the point in time at which snapshot creation started. You can revert to a snapshot
backup when data in an online source database is damaged. However, you must ensure that the snapshot was
created before the data in the source database became damaged, so that it does not contain the corrupted data.
You will not be able to revert to a database snapshot if:

- any read-only or compressed filegroups exist in the source database
- any files that were online at the time of snapshot creation are now offline
- more than one snapshot of the same database exists; for the revert to succeed, only the snapshot that you are reverting to can exist
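Applied to this scenario, the fix described in the explanation can be sketched as follows (both object names come from the scenario):

```sql
-- Remove the extra snapshot so that only MktgSnapshot2 remains...
DROP DATABASE MktgSnapshot1;

-- ...then revert the source database to the remaining snapshot.
RESTORE DATABASE Marketingdb
FROM DATABASE_SNAPSHOT = 'MktgSnapshot2';
```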

You should not ensure that the source database does not contain any read-only or compressed filegroups.
Although this is required for reverting to a database snapshot to succeed, the problem in this scenario is that
more than one snapshot of the same database exists. The scenario specifically states that you think the problem
is due to having more than one snapshot.

You should not detach the Marketingdb database because this will not ensure that reverting to
MktgSnapshot2 is successful. Detaching a database is useful when you want to move the database to a different
SQL Server instance. When you perform the revert operation, the source database must be online.

You should not bring the Marketingdb database offline because this will not ensure that reverting to
MktgSnapshot2 is successful. When you restore a snapshot backup, the database is automatically taken offline.
However, the source database must be online when you perform the revert operation.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Manage database snapshots.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and

Implementing Structured Storage (Database Engine) > Databases > Database Snapshots > Reverting to a
Database Snapshot

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Database Snapshots > Limitations on
Database Snapshots

Item: 107 (Ref:Cert-70-432.4.5.2)

You are your company's SQL administrator. A SQL Server 2008 instance named SQL1 contains several
databases, including a database named Products.

Several users are experiencing problems with the Product_details table of the Products database. The
Product_details table is very large.

You decide to check the integrity of the table using a DBCC CHECKTABLE statement. The statement you
execute must meet the following goals:

- The run time for the statement must be minimized.
- Other users should only be able to read the table's data during the statement's execution.

Which arguments should you use? (Choose all that apply.)


- REPAIR_FAST
- TABLOCK
- PHYSICAL_ONLY
- ESTIMATEONLY
- DATA_PURITY

Answer:
TABLOCK
PHYSICAL_ONLY

Explanation:
You should use the TABLOCK and PHYSICAL_ONLY arguments with the DBCC CHECKTABLE statement. The
TABLOCK argument ensures that other users are only able to read the table's data during the statement's
execution. If you do not specify this argument, other users will have full access to the table's data. The
PHYSICAL_ONLY argument minimizes the run time for the statement. Using this argument is recommended for
large tables. However, you should periodically run the DBCC CHECKTABLE statement without the
PHYSICAL_ONLY argument.
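A sketch of the resulting statement (the table name comes from the scenario):

```sql
-- Shared table lock (readers still allowed) plus physical-only checks
-- to minimize run time on a large table.
DBCC CHECKTABLE ('Product_details')
WITH PHYSICAL_ONLY, TABLOCK;
```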

You should not use the REPAIR_FAST argument with the DBCC CHECKTABLE statement. The REPAIR_FAST
argument is only included in SQL Server 2008 for backward compatibility. The repair modes include
REPAIR_FAST, REPAIR_REBUILD, and REPAIR_ALLOW_DATA_LOSS. When these arguments are used,
the database must be in single-user mode, meaning users would not be able to read the table's data.

You should not use the ESTIMATEONLY argument with the DBCC CHECKTABLE statement. The
ESTIMATEONLY argument is used to determine the amount of tempdb space needed to run the statement.

You should not specify the DATA_PURITY argument with the DBCC CHECKTABLE statement. The
DATA_PURITY argument allows you to check column values to ensure that the data in the columns is valid and
within the allowed range.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain database integrity.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC CHECKTABLE (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data > Accessing and Changing Database Data > Locking and Row Versioning > Locking in the Database Engine
> Lock Modes

Item: 111 (Ref:Cert-70-432.4.1.3)

You are the database administrator of your company. Your server named Server1 contains an instance of SQL
Server 2008 named SQL1. Server1 contains two hard disks named Disk1 and Disk2. A database named
Corpdb contains sensitive information and is stored on Disk1. The transaction log for Corpdb is stored on Disk2.
Your company policy states that data loss must never exceed one hour's worth of data.

You configure a full backup to be performed on SQL1 every Sunday at 1:00 A.M. You configure a differential
backup to be performed daily on SQL1 at 4:00 A.M. You also configure the transaction log to be backed up every
hour. The database becomes corrupt on Wednesday at 3:20 P.M. You want to restore the database to the point of
failure.

What should you do first?


- Perform a tail-log backup.
- Apply the tail-log backup.
- Apply all transaction log backups in the order they were created after the differential backup created on Wednesday at 4:00 A.M.
- Restore the differential backup that was created on Wednesday at 4:00 A.M.
- Restore the full backup that was created on Sunday.

Answer:
Perform a tail-log backup.

Explanation:
You should perform a tail-log backup. In a typical backup strategy that uses full database, differential database,
and transaction log backups, the full database backup is created at less frequent intervals, the differential backup
is created at medium intervals, and the transaction log backup is created at more frequent intervals. The backup
strategy that uses full database, differential database, and transaction log backups reduces the amount of time
required to restore a database to any point in time after the database backup was created. This strategy also
requires less disk space because the differential database backups take much less space than a full database
backup.

To restore a database to the point of failure, you should first perform the backup of the tail-log before restoring the
most recent full and differential database backups. The tail-log backup is the backup of the active transaction log
that was not included in the most recent transaction log backup. After creating the tail-log backup, you should
restore the most recent full database backup. Next, you should restore the most recent differential database
backup. Then, you should apply all transaction log backups taken after the last differential backup in sequential
order. Finally, you should apply the tail-log backup. If the transaction log is also corrupt or lost, you will not be able
to perform the tail-log backup and will not be able to restore the data to the point of failure.
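The restore sequence above can be sketched in Transact-SQL; the file paths are hypothetical, and only the Corpdb name and backup schedule come from the scenario:

```sql
-- 1. Back up the tail of the log first. NO_TRUNCATE allows the log backup
--    even though the database itself is damaged.
BACKUP LOG Corpdb
TO DISK = 'D:\Backups\Corpdb_tail.trn'
WITH NO_TRUNCATE;

-- 2. Restore Sunday's full backup, then Wednesday's 4:00 A.M. differential,
--    leaving the database ready to accept log backups.
RESTORE DATABASE Corpdb FROM DISK = 'D:\Backups\Corpdb_full.bak' WITH NORECOVERY;
RESTORE DATABASE Corpdb FROM DISK = 'D:\Backups\Corpdb_diff.bak' WITH NORECOVERY;

-- 3. Apply each hourly log backup taken after 4:00 A.M. WITH NORECOVERY
--    (not shown), then finish with the tail-log backup.
RESTORE LOG Corpdb FROM DISK = 'D:\Backups\Corpdb_tail.trn' WITH RECOVERY;
```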

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Back up databases.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Understanding Recovery Performance in SQL Server > Reducing
Recovery Time When Restoring a Database

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Backup Under the Full Recovery Model

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > How To > Transact-SQL > Administering SQL Server >
Backing Up and Restoring Databases > How to restore to a point in time (Transact-SQL)

Item: 123 (Ref:Cert-70-432.4.4.2)

You are the database administrator for your company. You are managing the SQL Server database named
Sql_prod1 located on your company's server.

You adhere to the following backup strategy for the Sql_prod1 database:

- a complete backup of the database at 8:00 A.M.
- a transaction log backup at noon
- a differential backup of the database at noon and at 5:00 P.M.
- a snapshot of the database at 3:00 P.M.

Between 3:00 P.M. and 3:14 P.M., no changes are made. At 3:15 P.M., a programmer issues the following
statement:

DELETE FROM emp WHERE department_id = 40;

After this statement is issued and the changes have been committed, all the data for the employees in department
ID 40 is lost.

You want to recover the lost data using the least administrative effort.

What should you do?

- Bulk copy the deleted rows from the snapshot created at 3:00 P.M., and merge the data into the database.
- Restore the full database backup that was taken at 8:00 A.M., and recover the database by applying the transaction log files.
- Restore the full database backup followed by the differential backup taken at noon, and recover the database by applying the transaction log files.
- Restore the database from the snapshot created at 3:00 P.M.

Answer:
Bulk copy the deleted rows from the snapshot created at 3:00 P.M., and merge the data into the
database.

Explanation:
In this scenario, you should bulk copy the deleted rows from the snapshot that was created at 3:00 P.M., and
merge the data into the database. Snapshots can be used to undo a delete operation from the database without
involving much overhead. The easiest method of recovering the deleted rows in this scenario is to identify the
deleted rows and subsequently transfer them from the snapshot to the source database. Snapshots can be used
to recover the database after an unwanted deletion, after a row has been erroneously updated, or after a table
has been accidentally dropped. Snapshots can also be used to completely recover a database by reverting to a
previously created snapshot.
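One way to sketch the bulk-copy approach; the snapshot name is hypothetical, the emp table and department_id column come from the scenario, and the example assumes emp has no identity column:

```sql
-- Copy the deleted department 40 rows back from the 3:00 P.M. snapshot.
-- The snapshot name Sql_prod1_snap_1500 is hypothetical.
INSERT INTO Sql_prod1.dbo.emp
SELECT s.*
FROM Sql_prod1_snap_1500.dbo.emp AS s
WHERE s.department_id = 40;
```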

You should not restore the database from the snapshot created at 3:00 P.M. To restore the database from a
previously created snapshot, you should delete all the other existing snapshots of the database. You cannot
revert to a snapshot of a database if there is more than one snapshot of the database. Reverting the complete
database is not required in this scenario because you can easily undo the deletion by inserting the rows from the
snapshot created at 3:00 P.M.

You should not restore the full backup of the database taken at 8:00 A.M. and recover the database by applying
the transaction log files. You can recover the lost data by using this method, but it involves more administrative
effort than using a snapshot. This option will only restore the database to the state that it was in at 12:00 P.M. If
any changes to the emp table were made after 12:00 P.M., the changes would be lost. Therefore, this option is
not appropriate in this scenario. A database can be recovered by using this technique when one or more data files
or filegroups have been damaged.

You should not restore the full backup of the database followed by the differential backup taken at noon and then
recover the database by applying the transaction log files. You can recover the lost data by this method, but it
involves more administrative effort than using a snapshot. This option will only restore the database to the state
that it was in at 12:00 P.M. If any changes to the emp table were made after 12:00 P.M., the changes would be
lost. Therefore, this option is not appropriate in this scenario. A database can be recovered by using this
technique when one or more data files or filegroups have been damaged.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Manage database snapshots.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Database Snapshots > Typical Uses of
Database Snapshots

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Database Snapshots > Reverting to a
Database Snapshot

Item: 124 (Ref:Cert-70-432.4.6.2)

You are the SQL administrator for your company. A SQL Server 2008 instance named SQL_Prod contains
several databases your company uses on a daily basis.

You decide to create a maintenance plan to meet the following objectives:

 The database structure of all databases and indexes on SQL_Prod must be verified.
 All user databases must be backed up before the structure of the databases is verified.
 You must be notified when the maintenance plan has completed.

What should you do?


- Create a maintenance plan that includes tasks, as follows:
  - the Back Up Database Task with the All databases option selected
  - the Check Database Integrity Task with the All user databases and Include indexes options selected
  - the Notify Operator Task configured to notify you when the plan completes
- Create a maintenance plan that includes tasks, as follows:
  - the Back Up Database Task with the All databases option selected
  - the Check Database Integrity Task with the All user databases option selected
  - the Rebuild Index Task with the All user databases option selected
  - the Notify Operator Task configured to notify you when the plan completes
- Create a maintenance plan that includes tasks, as follows:
  - the Back Up Database Task with the All user databases option selected
  - the Check Database Integrity Task with the All databases option selected
  - the Rebuild Index Task with the All databases option selected
  - the Notify Operator Task configured to notify you when the plan completes
- Create a maintenance plan that includes tasks, as follows:
  - the Back Up Database Task with the All user databases option selected
  - the Check Database Integrity Task with the All databases and Include indexes options selected
  - the Notify Operator Task configured to notify you when the plan completes

Answer:
Create a maintenance plan that includes tasks, as follows:

 the Back Up Database Task with the All user databases option selected
 the Check Database Integrity Task with the All databases and Include indexes options
selected
 the Notify Operator Task configured to notify you when the plan completes

Explanation:
You should create a maintenance plan that includes tasks, as follows:

 the Back Up Database Task with the All user databases option selected
 the Check Database Integrity Task with the All databases and Include indexes options selected
 the Notify Operator Task configured to notify you when the plan completes

The Back Up Database Task will back up the databases you select. In this scenario, you must select the All
user databases option because you only wanted to back up the user databases. The Check Database Integrity
Task will verify the structural integrity of the databases and indexes. The All databases option must be selected
because you wanted to verify the structural integrity of all the databases. The Include indexes option must also
be selected because you want to verify the structural integrity of the indexes. The Notify Operator Task must run
last because you want the maintenance plan to notify you when it completes.

You should not create a maintenance plan that includes tasks, as follows:

 the Back Up Database Task with the All databases option selected
 the Check Database Integrity Task with the All user databases and Include indexes options selected
 the Notify Operator Task configured to contact you when the plan completes

The Back Up Database Task should only be run on the user databases, not all databases. In addition, the
Check Database Integrity Task should be run on all databases, not just the user databases.

You should not create a maintenance plan that includes tasks, as follows:

 the Back Up Database Task with the All databases option selected
 the Check Database Integrity Task with the All user databases option selected
 the Rebuild Index Task with the All user databases option selected
 the Notify Operator Task configured to contact you when the plan completes

The Back Up Database Task should only be run on the user databases, not all databases. In addition, the
Check Database Integrity Task should be run on all databases, not just the user databases. Finally, you should
not run the Rebuild Index Task because it rebuilds indexes but does not verify the structural integrity of indexes.
You need to enable the Include indexes option of the Check Database Integrity Task to accomplish that.

You should not create a maintenance plan that includes tasks, as follows:

 the Back Up Database Task with the All user databases option selected
 the Check Database Integrity Task with the All databases option selected
 the Rebuild Index Task with the All databases option selected
 the Notify Operator Task configured to contact you when the plan completes

You should not run the Rebuild Index Task because it rebuilds indexes but does not verify the structural integrity
of indexes. You need to enable the Include indexes option of the Check Database Integrity Task to accomplish
that.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain a database by using maintenance plans.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Integration Services > Development > Designing and
Implementing Packages > Understanding the Components of an Integration Services Package > Control Flow
Elements > Integration Services Tasks > Maintenance Tasks

Item: 133 (Ref:Cert-70-432.4.2.4)

You are the database administrator for your company. You are maintaining the SQL Server instance named Sql1
on the Sqlserver1 server. The Sql1 instance stores the Prod1 database. The Prod1 database is configured to
use the full recovery model.

You are required to configure a secondary database for Prod1 to provide high availability of the database. You
are required to initialize the secondary database from a full database backup of the Prod1 database and restore
the logs from the online primary server to the secondary server. You do not want to bring the secondary database
online immediately.

Which actions should you perform to initialize the secondary database from the primary database? (Choose two.
Each correct answer represents part of the solution.)
- Issue the RESTORE DATABASE statement with the WITH NORECOVERY clause.
- Issue the RESTORE DATABASE statement with the WITH RECOVERY clause.
- Issue the RESTORE LOG statement with the WITH NORECOVERY clause.
- Issue the RESTORE LOG statement with the WITH RECOVERY clause.
- Issue the RESTORE LOG statement with the WITH STANDBY clause.

Answer:
Issue the RESTORE DATABASE statement with the WITH NORECOVERY clause.
Issue the RESTORE LOG statement with the WITH NORECOVERY clause.

Explanation:
You should perform the following steps to initialize a secondary database from a primary database without
bringing the secondary database online immediately:

1. Issue the RESTORE DATABASE statement with the WITH NORECOVERY option.
2. Issue the RESTORE LOG statement with the WITH NORECOVERY option.

To initialize a secondary database from a primary database, you must first restore a full database backup of the
primary database to the location where the secondary database will be stored. This is done by issuing the
RESTORE DATABASE statement. Specifying the WITH NORECOVERY clause with the statement will initialize
the secondary database. The WITH NORECOVERY clause is used when you need to restore additional
transaction logs on the database. Using the WITH RECOVERY clause with the RESTORE DATABASE
statement will perform a recovery on the secondary database and bring it online immediately. In this scenario, you
are not required to bring the secondary database online immediately. Using the WITH NORECOVERY clause
with the RESTORE LOG statement restores the transaction log. The WITH NORECOVERY clause should be
used with the RESTORE LOG statement when additional transaction logs must be applied to the secondary
database.
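The two steps above can be sketched as follows; the file paths are hypothetical, and only the Prod1 database name comes from the scenario:

```sql
-- 1. Initialize the secondary from a full backup of the primary.
RESTORE DATABASE Prod1
FROM DISK = 'E:\LogShip\Prod1_full.bak'
WITH NORECOVERY;   -- the database stays in the RESTORING state, not online

-- 2. Apply a transaction log backup from the primary, still without
--    recovering, so that further log backups can be applied later.
RESTORE LOG Prod1
FROM DISK = 'E:\LogShip\Prod1_log_1.trn'
WITH NORECOVERY;
```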

You should not issue the RESTORE DATABASE statement with the WITH RECOVERY clause. Using the WITH
RECOVERY clause with the RESTORE DATABASE statement performs a recovery on the secondary database
and brings the secondary database online immediately. In this scenario, you are not required to bring the
secondary database online.

You should not issue the RESTORE LOG statement with the WITH RECOVERY clause. The WITH RECOVERY
clause with the RESTORE LOG statement performs a recovery on the secondary database transaction log and
brings the secondary database online immediately. The WITH RECOVERY clause is used with the RESTORE
LOG statement when you want to apply the last transaction log to the secondary database and bring it online. In
this scenario, you are not required to bring the secondary database online.

You should not issue the RESTORE LOG statement with the WITH STANDBY clause because this is not allowed
for an online restore. For an online restore, you must use the RECOVERY or NORECOVERY clause. Using the
WITH STANDBY clause leaves the database in read-only, standby mode.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Restore databases.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > RESTORE Statements for Restoring, Recovering, and Managing Backups (Transact-SQL) >
RESTORE Arguments (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Backing Up and Restoring How-to Topics > How to: Apply a Transaction Log
Backup (Transact-SQL)

Item: 135 (Ref:Cert-70-432.4.5.3)

You are the database administrator for a banking firm. Your company stores information regarding all its
customers in a database named Customers. A power failure causes your server to shut down unexpectedly. This
causes the database to close improperly.

After restoring the power, you restart the server and start up the database. To ensure that the database is not
corrupted, you issue the following statement:

DBCC CHECKDB;

What is the result of issuing this statement? (Choose all that apply.)
- All databases on the server are checked for the logical integrity of their objects.
- All database tables in the current database are checked for consistency and logical integrity.
- The DBCC CHECKFILEGROUP statement is issued on the database.
- The DBCC CHECKCATALOG statement is issued on the database.
- The current database statistics are displayed.
- Errors reported as a result of the integrity check are repaired.

Answer:
All database tables in the current database are checked for consistency and logical integrity.
The DBCC CHECKCATALOG statement is issued on the database.

Explanation:
The statement checks all the database tables in the current database for consistency and logical integrity, and the
DBCC CHECKCATALOG statement is issued on the database. When you issue the DBCC CHECKDB
statement, it checks the structural and logical integrity of all the objects in the specified database. If the name of
the database is not specified, the current database will be checked. The DBCC CHECKDB command internally
issues the following additional statements on the database:

 DBCC CHECKCATALOG
 DBCC CHECKTABLE
 DBCC CHECKALLOC

The DBCC CHECKDB statement also validates the Service Broker data and the indexed views in the database.
You can also check database integrity by selecting the Check Database Integrity Task, which is part of the
Maintenance Tasks section of the SSIS Designer.
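For example, the check in this scenario could name the database explicitly rather than relying on the current database context:

```sql
-- Check only the Customers database; NO_INFOMSGS suppresses
-- informational messages so that only errors are reported.
DBCC CHECKDB ('Customers') WITH NO_INFOMSGS;
```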

The statement does not check all the databases on the server for the logical integrity of their objects. The DBCC
CHECKDB statement only checks the specified database or the current database for the logical integrity of its
objects, not all the databases on the server.

The DBCC CHECKFILEGROUP statement is not issued on the database. The DBCC CHECKDB statement does
not run the DBCC CHECKFILEGROUP statement on the database. The DBCC CHECKFILEGROUP statement
checks the allocation and structural integrity of the objects residing in the specified filegroup.

The current database statistics are not displayed. The current database statistics are displayed by using the
DBCC SHOW_STATISTICS statement.

Errors reported as a result of the integrity check are not repaired. The DBCC CHECKDB statement does not
repair any errors during the integrity check unless you specify the REPAIR_REBUILD or
REPAIR_ALLOW_DATA_LOSS argument in the statement.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Maintain database integrity.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC CHECKDB (Transact-SQL)

Item: 137 (Ref:Cert-70-432.4.2.1)

You are a database administrator for your company. You implement the following backup strategy for your
Prod_details database:

- Take a full backup at 10:00 P.M. every day.
- Take a differential level 1 backup at 6:00 A.M. every day.
- Take a transaction log backup every four hours starting at 12 midnight.
- Take a database snapshot at 2:00 P.M.

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.
Page 76 of 226

A user in your database inadvertently deletes some important data from the Tbl_product table at 8:30 A.M. You
are required to recover the maximum possible data in the table.

Which activities should you perform to recover the database in the least amount of time?
- Restore the last full backup and the last differential backup of the database, apply the transaction log backup taken at 8:00 A.M., and perform the recovery.
- Restore the differential backup of the database, apply the transaction log backup taken at 8:00 A.M., and perform the recovery.
- Restore the full database backup, apply all the transaction log backups taken since 8:00 P.M. on the previous day, and perform the recovery.
- Restore the data in the Tbl_product table by performing a bulk copy from the database snapshot created at 2:00 P.M.

Answer:
Restore the last full backup and the last differential backup of the database, apply the
transaction log backup taken at 8:00 A.M., and perform the recovery.

Explanation:
You should restore the last full backup and the last differential backup of the database, apply the transaction log
backup taken at 8:00 A.M., and perform the recovery. The full database backup taken at 10:00 P.M. on the
previous day contains all the data in the database until 10:00 P.M. The differential backup of the database taken
at 6:00 A.M. contains the backup of the data that has changed since the full backup at 10:00 P.M. on the previous
day. Therefore, when you restore the full backup and differential backup of the database, the data until 6:00 A.M.
can be recovered. Additionally, you have taken a backup of the transaction log at 8:00 A.M., which contains any
changes made to the database after the differential database backup was taken at 6:00 A.M. Therefore, you can
recover the data up to 8:00 A.M. by restoring the full backup taken at 10:00 P.M., differential backup taken at 6:00
A.M., and then applying the transaction log backup taken at 8:00 A.M. This method requires less time than any
of the other options.

You should not restore the differential backup of the database, apply the transaction log backup taken at 8:00
A.M., and perform the recovery. To recover a database from a failure, you must first restore the full backup and
then restore the differential backup of the database. Restoring the differential backup before the full backup will
generate an error in the restore and recovery process.

You should not restore the full database backup, apply all the transaction log backups taken since 8:00 P.M. on
the previous day, and perform the recovery. This would take longer to recover the database. You should restore
the differential backup taken at 6:00 A.M. after restoring the full database backup. Then, you should apply the
transaction logs. This process will take less time because you will not be required to apply all the changes since
the full backup.

You should not restore the data in the Tbl_product table by performing a bulk copy from the database snapshot
created at 2:00 P.M. A snapshot of the database taken at 2:00 P.M. will not contain the changes made after 2:00
P.M. In this scenario, you should recover the maximum possible data in the table. Therefore, you should not use
the snapshot to recover the database.

When restoring a database, you can restore to a particular point in time. You would execute the RESTORE
DATABASE statement with the NORECOVERY clause for the database backups. When you restore the
transaction log backups, you execute the RESTORE LOG statement. The RESTORE LOG statement includes
the RECOVERY and STOPAT clauses, which ensure that the database is restored to full functionality and is
restored to a certain point in time, respectively.
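The restore sequence described above might be sketched as follows; the backup file paths and the STOPAT time are illustrative assumptions, not values from the scenario:

```sql
-- Restore the full and differential backups without recovering the database.
RESTORE DATABASE Prod_details
FROM DISK = 'D:\Backups\Prod_full.bak' WITH NORECOVERY;
RESTORE DATABASE Prod_details
FROM DISK = 'D:\Backups\Prod_diff.bak' WITH NORECOVERY;

-- Apply the log backup, recover, and stop at a chosen point in time.
RESTORE LOG Prod_details
FROM DISK = 'D:\Backups\Prod_log_0800.trn'
WITH RECOVERY, STOPAT = '2011-06-01 07:59:00';  -- illustrative point in time
```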

Administrators should also consider performing a tail-log backup to back up log records. This prevents work loss
and keeps the log chain intact.


Objective:
Maintaining a SQL Server Database

Sub-Objective:
Restore databases.

References:
TechNet>TechNet Library>Server Products and Technologies>SQL Server>SQL Server 2008>Product
Documentation>SQL Server 2008 Books Online>Database Engine>Operations>Administration>Backing Up and
Restoring Database in SQL Server>Understanding Recovery Performance in SQL Server>Reducing Recovery
Time When Restoring a Database

TechNet>TechNet Library>Server Products and Technologies>SQL Server>SQL Server 2008>Product
Documentation>SQL Server 2008 Books Online>Database Engine>Operations>Administration>Administration:
How-to Topics>Backing Up and Restoring How-to Topics>How to: Restore to a Point in Time (Transact-SQL)

Item: 139 (Ref:Cert-70-432.4.3.1)

You are a database administrator managing the SQL Server databases of your company. You have created a
new database named Db01 to store the data for a new division in your company. While creating the database,
you did not specify the recovery model for the database.

What will be the default recovery model for the Db01 database?
- the Simple Recovery model
- the Bulk-Logged Recovery model
- the recovery model of the model database
- the recovery model of the master database

Answer:
the recovery model of the model database

Explanation:
The recovery model for the Db01 database will be the same as the recovery model for the model database.
When you create a new database in SQL Server 2008, the recovery model, by default, is the same as that of the
model database. The recovery model of a database specifies the type of backup and restore operations that may
be performed on the database. Depending on the type of backup and restore operation you want to perform on a
database, you can specify one of the following three recovery models:

- Simple Recovery model: This model requires minimum overhead to maintain the database. The Simple
Recovery model requires less disk space than the other two recovery models. This recovery model does
not require maintenance of transaction log backups. After the transactions in a particular log are complete,
the inactive log is truncated. In the Simple Recovery model, recovering to the point of failure is not
possible. You can recover only up to the most recent data backup.
- Full Recovery model: This model involves a backup of the transaction logs and provides recovery of the
database up to a specific point in time if all the transaction logs up to that point in time are intact. This
recovery model should be used if the database must ensure minimum data loss in the event of a failure.
- Bulk-Logged Recovery model: This model supports large bulk operations. Transaction log backups similar
to the Full Recovery model are required. This recovery model does not support point-in-time recovery. The
database can be recovered only up to the last complete backup.
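To illustrate, the recovery model can be inspected and changed with statements such as the following (using the scenario's Db01 database):

```sql
-- View the current recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'Db01';

-- Set the recovery model explicitly (SIMPLE, FULL, or BULK_LOGGED).
ALTER DATABASE Db01 SET RECOVERY FULL;
```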


All the other options are incorrect because they do not represent the default recovery model of the Db01
database. The default recovery setting for the model database is the Full Recovery model, but this setting can be
changed.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Manage and configure databases.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Transaction Log Management > Recovery Models and Transaction Log Management > Choosing the Recovery
Model for a Database

Item: 145 (Ref:Cert-70-432.4.2.5)

You are the database administrator for your company. You maintain a SQL Server 2008 instance named Prod1.
This instance has two databases named Sql1 and Sql2. You take regular backups of these two databases, and
store them on tape drive media sets. The filenames in the two databases are the same.

Due to a media failure, you lose some data files of the Sql1 database. You are required to restore the data files of
the Sql1 database, but you notice that you have used the same name for the backup of Sql1 and Sql2.

Which restore statement should you use to identify the backup file that belongs to the Sql1 database?
- RESTORE LABELONLY
- RESTORE VERIFYONLY
- RESTORE HEADERONLY
- RESTORE FILELISTONLY

Answer:
RESTORE HEADERONLY

Explanation:
You should use the RESTORE HEADERONLY statement. This statement provides details regarding the
backed-up database and returns all of the backup's header information. The output of this statement will enable you to
differentiate between the two backups.
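A sketch of such a check, assuming the media set is reachable as the tape device \\.\Tape0 (the device name is an assumption):

```sql
-- Lists one row per backup set on the media; the DatabaseName column
-- identifies whether a given backup belongs to Sql1 or Sql2.
RESTORE HEADERONLY FROM TAPE = '\\.\Tape0';
```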

You should not use the RESTORE LABELONLY statement. This statement returns a result set containing
information about the backup media on which the backup is stored. The output of this statement includes only
partial header information. In this scenario, you must identify the details about the databases to which each of the
backups belong. Therefore, you cannot use the RESTORE LABELONLY statement.

You should not use the RESTORE VERIFYONLY statement. This statement only verifies whether the backup
data is complete. This statement will not provide details of the database backups.

You should not use the RESTORE FILELISTONLY statement. This statement returns a list of data files and log
files in the backup. In this scenario, you are required to differentiate between the two database backups.
Therefore, you cannot use the RESTORE FILELISTONLY statement because the scenario states that the two
databases have the same file name.


Objective:
Maintaining a SQL Server Database

Sub-Objective:
Restore databases.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > RESTORE Statements for Restoring, Recovering, and Managing Backups (Transact-SQL) >
RESTORE HEADERONLY (Transact-SQL)

Item: 156 (Ref:Cert-70-432.4.1.1)

You are the SQL administrator for your company. A new SQL Server 2008 computer named SQL_test has been
deployed for the application developers employed by your company. SQL_test has two databases named
Customers and Products.

The SQL_test databases must be backed up on a weekly basis. You need to ensure that any backups taken for
the databases on SQL_test minimize the space occupied by the backup files using the least amount of
administrative effort.

What should you do?


- Execute the BACKUP statement with the WITH COMPRESSION argument for each of the database backups.
- Use the sp_configure system stored procedure to set the backup compression default configuration option to ON.
- On the Options page of the Backup Database dialog box for each of the database backups, select the Compress backup option of the Set backup compression list.
- Create a maintenance plan that backs up the databases with the Compress backup option of the Set backup compression list.

Answer:
Use the sp_configure system stored procedure to set the backup compression default
configuration option to ON.

Explanation:
You should use the sp_configure system stored procedure to set the backup compression default
configuration option to ON. This will ensure that any backups taken for the databases on SQL_test minimize the
space occupied by the backup files using the least amount of administrative effort. By enabling default backup
compression, you ensure that all backups taken for the server are compressed.
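A minimal sketch of enabling the server-wide default:

```sql
-- 1 = compress backups by default; 0 = do not compress.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```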

You should not execute the BACKUP statement with the WITH COMPRESSION argument for each of the
database backups. Although this will minimize the space occupied by the backup files created with the BACKUP
statement, it will not do so for all backups. Future backups taken using a BACKUP statement will also need to
include the WITH COMPRESSION argument.

You should not select the Compress backup option of the Set backup compression list on the Options page of
the Backup Database dialog box for each of the database backups. Although this will minimize the space
occupied by the backup files created, it will not do so for all backups. Any future backups that are taken using the
Backup Database dialog box will also need to include the Compress backup option.

You should not create a maintenance plan that backs up the databases with the Compress backup option of the
Set backup compression list. Although this will minimize the space occupied by the backup files created by the
maintenance plan, it will not do so for all backups. Any future backups that are taken using a maintenance plan
will also need to include the Compress backup option.

Objective:
Maintaining a SQL Server Database

Sub-Objective:
Back up databases.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Database in SQL Server > Backup Overview > Backup Compression

Maintaining SQL Server Instances


Item: 3 (Ref:Cert-70-432.2.1.7)

You manage an instance of SQL Server 2008 named SQL1. SQL1 has several jobs configured on it. The job
steps in each job are configured to include the step output in the job history. You want to view the job history for
all jobs and job steps that did not run successfully.

Which Transact-SQL script should you execute?


USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 0;
GO

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 1;
GO

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 2;
GO

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 3;
GO

Answer:

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 0;
GO

Explanation:
You should run the following Transact-SQL script:

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 0;
GO

The sp_help_jobhistory system stored procedure is used to view the history of jobs. If you do not specify a
particular job, the output contains the history of all jobs. The @run_status parameter displays jobs that have the
specified status. Specifying a value of 0 for the @run_status parameter returns history for all failed jobs.

The history of jobs is stored in the SQL Server Agent job history log. To ensure that the output of a job step is
logged in the job history log, you should select the Include step output in history check box on the Advanced
page of the New Job Step dialog box. You can view the history of a job after you run the job.

You should not run the following Transact-SQL script:

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 1;
GO

Specifying a value of 1 for the @run_status parameter returns history for all jobs that completed successfully.

You should not run the following Transact-SQL script:

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 2;
GO

Specifying a value of 2 for the @run_status parameter returns history for all job steps that have a retry status.

You should not run the following Transact-SQL script:

USE msdb;
GO
EXEC dbo.sp_help_jobhistory
@run_status = 3;
GO

Specifying a value of 3 for the @run_status parameter returns history for all jobs that were canceled.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.


References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
> sp_help_jobhistory (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > Implementing Jobs > Viewing and Modifying Jobs

Item: 7 (Ref:Cert-70-432.2.1.5)

You manage an instance of SQL Server 2008 for your company. The instance contains a database named
MarketingDB. You have been asked to create an alert that will notify you when any user with insufficient
permission attempts to access the database. You decide to create an alert that will be triggered based on severity
level. You open the New Alert dialog box to create a new alert.

Which value should you select for the Type option to create the alert?
- SQL Server event alert
- SQL Server performance condition alert
- WMI event alert
- Windows performance alert

Answer:
SQL Server event alert

Explanation:
You should select SQL Server event alert for the Type option. An alert is an automated response to an event or
performance condition that is triggered when the event or performance condition occurs. In SQL Server 2008, an
alert can be defined to respond to SQL Server events, SQL Server performance conditions, and WMI events. The
type of event you select determines the parameters that can be used in the alert definition. When you select the
SQL Server event alert event type in the Type drop-down list in the New Alert dialog box, you can configure the
alert to occur in response to a particular error number, severity level, or when the event message contains a
particular text string as shown in the following image:


While configuring an alert, you can also specify how the alert should respond to an event. You can configure an
alert to execute a job or notify operators when the event occurs. For example, you can configure an alert to send
you an e-mail when any file in a particular database grows. To do this, you can configure a job that uses the
xp_sendmail extended stored procedure or sp_send_dbmail stored procedure on the Response page of the
New Alert dialog box.
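As an illustrative sketch (the alert name is hypothetical; severity 14 covers insufficient-permission errors), a severity-based alert can also be created in Transact-SQL:

```sql
EXEC msdb.dbo.sp_add_alert
@name = N'MarketingDB permission alert', -- hypothetical alert name
@severity = 14,                          -- insufficient permission errors
@database_name = N'MarketingDB';
```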

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > Monitoring and Responding to Events > Defining Alerts

Item: 26 (Ref:Cert-70-432.2.3.2)


You are the SQL administrator of your company. The network contains 10 SQL Server 2008 computers. All
servers are members of a group named Corporate SQL Servers.

You are in the process of creating an operator for a master SQL Server Agent. You run the sp_add_operator
system stored procedure to configure the master SQL Server Agent operator on a new server named SQL1. Now,
you want to add SQL1 to the Corporate SQL Servers group.

Which SQL Server Agent system stored procedure should you use?
- sp_add_targetservergroup
- sp_add_jobserver
- sp_msx_enlist
- sp_add_targetsvrgrp_member

Answer:
sp_add_targetsvrgrp_member

Explanation:
You should use the sp_add_targetsvrgrp_member system stored procedure. You can use SQL Server Agent
stored procedures to create an operator for local and master server jobs. Operators are aliases for users or
groups that can receive notifications. You can create operators by using SQL Server Management Studio or using
the sp_add_operator system stored procedure. To create an operator for a local job, you should run the
sp_add_operator system stored procedure. To create an operator for master server jobs, you should perform
the following steps:

1. Specify the master SQL Server Agent operator by using the sp_add_operator system stored procedure.
2. Add the specified target server to the target server group by using the sp_add_targetsvrgrp_member
system stored procedure.
3. Enlist the target server with the master server by using the sp_msx_enlist system stored procedure.
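The three steps above might be sketched as follows; the operator name, e-mail address, and master server name are illustrative assumptions, while SQL1 and the group name follow the scenario:

```sql
-- 1. Create the operator on the master server.
EXEC msdb.dbo.sp_add_operator
@name = N'MSXOperator',
@email_address = N'dba@company.com';  -- illustrative address

-- 2. Add the target server SQL1 to the target server group.
EXEC msdb.dbo.sp_add_targetsvrgrp_member
@group_name = N'Corporate SQL Servers',
@server_name = N'SQL1';

-- 3. Run on the target server to enlist it with the master server.
EXEC msdb.dbo.sp_msx_enlist
@msx_server_name = N'MASTER1';  -- illustrative master server name
```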

You should not use the sp_add_targetservergroup system stored procedure because this stored procedure
cannot be used to add a target server to a target server group. The sp_add_targetservergroup system stored
procedure is used to create a target server group. In this scenario, the target server group, Corporate SQL
Servers, is already created.

You should not use the sp_add_jobserver system stored procedure because this stored procedure cannot be
used to add a target server to a target server group. The sp_add_jobserver system stored procedure is used to
add a specified job to the specified server.

You should not use the sp_msx_enlist system stored procedure because this stored procedure cannot be used
to add a target server to a target server group. The sp_msx_enlist system stored procedure is used to add the
current server to the list of available servers on the master server.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent operators.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics (Transact-SQL) > Automated Administration How-to Topics (Transact-SQL) > How
to: Create Operators (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks > (SQL Server Agent) > Monitoring and Responding to Events > Defining Operators

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
> sp_add_operator (Transact-SQL)

Item: 30 (Ref:Cert-70-432.2.2.3)

You are the SQL administrator for your company. A SQL Server 2008 computer named Sql3 has several alerts
configured. One of the alerts was created using the following Transact-SQL statement:

EXEC dbo.sp_add_alert
@name = 'Backup Alert',
@enabled = 1,
@message_id = 0,
@severity = 20,
@notification_message = 'Error. The database will be backed up.',
@job_name = 'Back up the Production Database';

You need to edit the alert so that the alert has a message ID of 56100.

Which Transact-SQL statement should you use?


EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100;

EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100,
@severity = 1;

EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100,
@severity = 0;

EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100,
@severity = 56100;

Answer:
EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100,
@severity = 0;

Explanation:


You should use the following Transact-SQL statement:

EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100,
@severity = 0;

To configure a message ID value, the severity level must be set to 0. If the alert has a severity level configured,
the message ID value must be set to 0. You cannot have a value for both the message ID and severity level
configured on an alert.

You should not use the following Transact-SQL statement:

EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100;

This statement does not set the severity level to 0. A message ID cannot be assigned if the severity level is set to
20, which is what the original alert creation in this scenario specified.

You should not use the following Transact-SQL statement:

EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100,
@severity = 1;

This statement does not set the severity level to 0. A message ID cannot be assigned if the severity level is set to
1.

You should not use the following Transact-SQL statement:

EXEC dbo.sp_update_alert
@name = 'Backup Alert',
@message_id = 56100,
@severity = 56100;

This statement does not set the severity level to 0. A message ID cannot be assigned if a severity level is
configured.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent alerts.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server

Item: 43 (Ref:Cert-70-432.2.1.9)

You are the database administrator of your company. The network contains an instance of SQL Server 2008.

You create several jobs on the SQL server. After several days, the SQL Server Agent service stops. You want to
view the jobs that were being executed when the service stopped.

Which system table should you check?


- sysjobstepslogs
- sysjobhistory
- sysjobs
- sysjobactivity

Answer:
sysjobactivity

Explanation:
You should check the sysjobactivity system table. The sysjobactivity system table is a SQL Server Agent table
in the msdb database that records information about the current SQL Server Agent job activity and status. When
you start the SQL Server Agent service, a new session is created, and all the existing defined jobs are recorded in
the sysjobactivity system table. If the SQL Server Agent service stops unexpectedly, you can check the
sysjobactivity system table to identify jobs that were being executed when the service stopped.
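For example, a query along the following lines (a sketch, not from the scenario) lists jobs that had started but never recorded a stop time:

```sql
-- Jobs with a start time but no stop time were still executing
-- when the SQL Server Agent service stopped.
SELECT j.name, a.start_execution_date
FROM msdb.dbo.sysjobactivity AS a
JOIN msdb.dbo.sysjobs AS j
ON j.job_id = a.job_id
WHERE a.start_execution_date IS NOT NULL
AND a.stop_execution_date IS NULL;
```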

You should not check the sysjobstepslogs system table because this table does not contain information about
current job activity and status. The sysjobstepslogs system table contains information about job step logs.

You should not check the sysjobhistory system table because this table does not contain information about
current job activity and status. The sysjobhistory system table contains information about the execution of SQL
Server Agent scheduled jobs.

You should not check the sysjobs system table because this table does not contain information about current job
activity and status. The sysjobs system table records the information for each SQL Server Agent scheduled job
that is to be executed.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Automated Administration How-to Topics (SQL Server Management Studio) >
How to: View Job Activity (SQL Server Management Studio)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Tables (Transact-SQL) > SQL Server Agent Tables (Transact-SQL)

Item: 44 (Ref:Cert-70-432.2.1.2)

You are your company's SQL administrator. You have two SQL Server 2008 computers named SQL1 and SQL2.
Both servers have several jobs that are configured to run on a daily basis.

Over recent weeks, the job history log has grown much larger than anticipated. You need to ensure that job
history that is older than 60 days is removed from the job history log.

Which tool should you use?


- SQL Server Configuration Manager
- Database Engine Tuning Advisor
- SQL Server Profiler
- SQL Server Management Studio

Answer:
SQL Server Management Studio

Explanation:
You should use SQL Server Management Studio to ensure that job history that is older than 60 days is removed
from the job history log. To configure this, you can perform the following steps:

1. Open SQL Server Management Studio.
2. Connect to the SQL Server instance you want to configure, and expand that instance.
3. Right-click SQL Server Agent, and click Properties.
4. Select the History page in the SQL Server Agent Properties dialog box.
5. Select the Automatically remove agent history option, and configure an appropriate time period.

You can also use SQL Server Management Studio to clear the SQL Server Agent event logs.
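The same cleanup can also be scripted. As a hedged sketch, the msdb sp_purge_jobhistory procedure removes history older than a cutoff date:

```sql
-- Remove job history older than 60 days.
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(DAY, -60, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;
```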

You should not use SQL Server Configuration Manager. This tool is used to configure the SQL server, including
managing services, configuring network protocols, and managing network connectivity configuration.

You should not use Database Engine Tuning Advisor. This tool is used to automatically identify the recommended
indexes, views, and partitions for a particular workload.

You should not use SQL Server Profiler. This tool is used to capture events that are occurring on the SQL server
for later analysis.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Automated Administration How-to Topics > How To: Set Up the Job History Log
(SQL Server Management Studio)

Item: 49 (Ref:Cert-70-432.2.2.4)

You are the SQL administrator for your company. You install a new instance of SQL Server 2008 on a server
named SQL1. SQL1 contains a database named SalesDB that is accessed by all users on the network.

You are in the process of creating an alert using the sp_add_alert system stored procedure that will be triggered
when the number of deadlocks for the SalesDB database exceeds two. You want to ensure that you receive a
notification message through e-mail that includes the description of the error.

How should you configure the @include_event_description_in parameter?


- Specify a value of 0.
- Specify a value of 1.
- Specify a value of 2.
- Specify a value of 4.

Answer:
Specify a value of 1.

Explanation:
You should specify a value of 1 for the @include_event_description_in parameter. An alert is an automated
response to an event or performance condition, which is triggered when a particular event or performance
condition occurs. You can use SQL Server Management Studio, the sp_add_alert system stored procedure, or
System Monitor to create a new alert. While configuring an alert, you can also specify how the alert should
respond to an event. You can configure an alert to execute a job or notify operators when a particular event
occurs. The notification can be sent via e-mail, a net send message, or a pager message. The
@include_event_description_in parameter can be used if you want to include the description of the error
message in the notification. Specifying a value of 1 for the @include_event_description_in parameter indicates
that the description of the error should be included in the notification. To complete the requirements in this
scenario, you would also use the sp_add_notification system stored procedure passing a value of 1 for the
@notification_method parameter to create an e-mail notification.
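
As a sketch, the two calls might look like the following. The alert and operator names, and the exact performance-counter path, are illustrative assumptions and are not given in the scenario:

```sql
-- Hypothetical names; adjust to your environment.
EXECUTE msdb.dbo.sp_add_alert
    @name = N'SalesDB deadlocks',
    -- Performance-condition syntax is 'object|counter|instance|comparator|value'.
    @performance_condition = N'SQLServer:Locks|Number of Deadlocks/sec|_Total|>|2',
    @include_event_description_in = 1;  -- 1 = include the error text in the e-mail

EXECUTE msdb.dbo.sp_add_notification
    @alert_name = N'SalesDB deadlocks',
    @operator_name = N'DBAdmin',        -- assumed operator name (must already exist)
    @notification_method = 1;           -- 1 = e-mail
GO
```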

You should not specify a value of 0 for the @include_event_description_in parameter because this value
indicates that the description of the error message should not be included in the notification message.

You should not specify a value of 2 for the @include_event_description_in parameter because this value is
used to include the description of the error message in a pager message.

You should not specify a value of 4 for the @include_event_description_in parameter because this value is
used to include the description of the error message in a net send message.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent alerts.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
> sp_add_alert (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance >
Performance Monitoring and Tuning How-to Topics > Server Performance and Activity Monitoring How-to Topics
> How to: Set Up a SQL Server Database Alert (Windows)


Item: 52 (Ref:Cert-70-432.2.5.2)

You are the database administrator for your company. You maintain a database named Hr1 on a SQL Server
2008 instance named Sql1. The database size is approximately 45 GB and uses the Full Recovery model. The
Hr1 database consists of a primary filegroup and four secondary filegroups. Each filegroup contains three data
files that are located on different disks.

The Hr1 database is an online transaction processing database. You perform a full backup of the database every
two days. The Transaction_details table in the database contains all the transaction details and is updated
frequently. You are required to frequently back up the data in the table to prevent loss of transaction details in the
event of a failure.

Which backup strategy should you implement?


- You should increase the frequency of performing a full database backup.
- You should increase the frequency of performing a differential database backup.
- You should increase the frequency of performing backups of data files containing the Transaction_details table.
- You should increase the frequency of performing backups of the filegroups containing the Transaction_details table.

Answer:
You should increase the frequency of performing backups of the filegroups containing the
Transaction_details table.

Explanation:
You should increase the frequency of performing backups of the filegroups containing the Transaction_details
table. In this scenario, you must ensure that the data in the Transaction_details table is backed up more often
because the data in this table changes frequently. You should only perform frequent backups of filegroups
containing the table without backing up the entire database or taking a differential backup of the database. To
perform a filegroup backup, you should use the BACKUP DATABASE statement with the FILEGROUP clause.
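
A minimal sketch of such a filegroup backup follows; the filegroup name and backup device path are assumptions for illustration:

```sql
-- Back up only the filegroup that holds the Transaction_details table.
BACKUP DATABASE Hr1
FILEGROUP = 'FG_Transactions'                    -- assumed filegroup name
TO DISK = 'D:\Backups\Hr1_FG_Transactions.bak';
GO
-- Because Hr1 uses the full recovery model, continue backing up the log as well:
BACKUP LOG Hr1 TO DISK = 'D:\Backups\Hr1_log.trn';
GO
```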

You should not increase the frequency of performing a full database backup. In this scenario, only the modified
data in the Transaction_details table should be backed up by taking a backup of the filegroup containing the
table.

You should not increase the frequency of performing a differential database backup. In this scenario, only the
modified data in the Transaction_details table should be backed up. Therefore, a differential backup that will
backup all the changes since the last full backup need not be performed. Additionally, the existing database is
large. Therefore, the backup size will increase every time a differential backup is performed, which is not
recommended. You should instead perform a new full backup of the database as a base for differential backups.
To perform a differential database backup, you should use the BACKUP DATABASE statement with the
DIFFERENTIAL clause.

You should not increase the frequency of performing backups of data files containing the Transaction_details
table. In this scenario, you do not know which data files contain the Transaction_details table. All the filegroups
in the database contain three data files each, and this table can exist in more than one data file. Therefore, you
should back up the filegroup containing the Transaction_details table. This will back up all the files in the
filegroup.

Objective:
Maintaining SQL Server Instances


Sub-Objective:
Back up a SQL Server environment.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases > Creating Full and Differential Backups of a SQL Server Database > Full File
Backups

Item: 59 (Ref:Cert-70-432.2.4.2)

You are the SQL administrator of your company. The network contains an instance of SQL Server 2008. You are
in the process of creating a Policy-Based Management policy on the SQL server to ensure that SQL logins are
created using a certain naming convention. You want to ensure that the policy is automated and not violated.

Which evaluation mode should you configure for the policy?


- On demand
- On schedule
- On change: log only
- On change: prevent

Answer:
On change: prevent

Explanation:
You should configure the On change: prevent evaluation mode for the policy. Policy-Based Management is used
to manage entities on an SQL Server 2008 instance. To create Policy-Based Management policies, you should
first select a facet that contains the properties that you want to configure. Facets are SQL Server entities in which
one or more related configuration options are defined. You should then define a condition that defines the state of
the facet. Then, you should specify the target, such as servers, databases, or logins, to which the policy should be
applied. You should select an evaluation mode that defines how the policy should be evaluated. You can select
one of the following four evaluation modes:

 On demand: This mode evaluates the policy when it is manually run by the user.
 On schedule: This mode allows the policy to be evaluated automatically by using a SQL Server Agent job.
This mode also logs any violations of the policy.
 On change: log only: This mode evaluates a policy automatically when a relevant change occurs. This
mode also logs any violations of the policy.
 On change: prevent: This mode is an automated mode that prevents any violation of the policy by using
DDL triggers.

You should not configure the On demand evaluation mode for the policy because this mode evaluates the policy
only when it is manually run by the user. In this scenario, you want the policy to be automated.

You should not configure the On schedule evaluation mode for the policy. In this scenario, you want to prevent
policy violations. The On schedule evaluation mode does not prevent policy violations. This mode only logs any
violations of the policy.


You should not configure the On change: log only evaluation mode for the policy because this mode does not
prevent policy violations. The On change: log only evaluation mode only logs any violations of the policy.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Implement the declarative management framework (DMF).

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administering Servers by Using Policy-Based Management

Item: 65 (Ref:Cert-70-432.2.1.4)

You are your company's SQL administrator. Your company has several SQL Server 2008 computers that you
manage. One of the SQL Server 2008 computers named SQL5 is configured with dozens of jobs.

While performing routine maintenance, you decide that a job named ProdDBBackup is no longer needed. You
disable the job's schedule. The next day, you discover that several important maintenance jobs did not run. You
need to ensure that these jobs run as scheduled, but that the ProdDBBackup job does not run.

What should you do?


- Re-enable the schedule and disable the ProdDBBackup job.
- Detach the schedule from the ProdDBBackup job.
- Disable the ProdDBBackup job.
- Re-enable the jobs that must be run.

Answer:
Re-enable the schedule and disable the ProdDBBackup job.

Explanation:
You should re-enable the job schedule and disable the ProdDBBackup job. You can create a schedule in Object
Explorer and attach that schedule to one or more jobs. When you disable a schedule, the schedule will not be in
effect for any job attached to the schedule. In this scenario, you disabled a schedule, and the schedule will no
longer be in effect for any jobs attached to that schedule. Re-enabling the schedule will allow all the jobs attached
to the schedule to run accordingly. Disabling the ProdDBBackup job will prevent that specific job from running,
but still allow other jobs using the same schedule to run at the desired times. Alternatively, you could detach the
ProdDBBackup job from the schedule to prevent the job from running per the schedule. If the job is detached
from the schedule, the job can still be run manually or in response to an alert.
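
Using the SQL Server Agent stored procedures in msdb, the fix might be sketched as follows. The schedule name is a hypothetical placeholder; only ProdDBBackup comes from the scenario:

```sql
-- Re-enable the shared schedule so the maintenance jobs run again.
EXECUTE msdb.dbo.sp_update_schedule
    @name = N'NightlyMaintenance',     -- assumed schedule name
    @enabled = 1;

-- Disable just the unneeded job; other jobs on the schedule are unaffected.
EXECUTE msdb.dbo.sp_update_job
    @job_name = N'ProdDBBackup',
    @enabled = 0;
GO
```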

You should not detach the schedule from the ProdDBBackup job. This solution would only work if you also re-
enabled the schedule.

Although you should disable the ProdDBBackup job to prevent the job from running, you would also need to re-
enable the schedule to ensure that the other jobs attached to that schedule run at the desired times.

You should not re-enable the jobs you need. The jobs are not disabled. The problem is that the schedule that is


attached to the jobs has been disabled.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > Implementing Jobs > Creating and Attaching Schedules to Jobs

Item: 84 (Ref:Cert-70-432.2.5.1)

You are a database administrator for your company. You are required to perform a full backup of the Sales
database.

When performing the backup, you must fulfill the following requirements:

- The backup should be written to the disk backup device named 'C:\sales1a.bak'.
- The size of each block for the backup should be 64 KB.
- The backup should expire on the 11th day.
- The new backup sets should be appended to existing backup sets on the backup media.
- The newly created backup should be named FULL BACKUP OF SALES.

Which statement will fulfill all the requirements in this scenario?


BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10,
NAME = 'FULL BACKUP OF SALES';

BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10,
FORMAT,
NAME = 'FULL BACKUP OF SALES';

BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 64,
NOFORMAT,
NAME = 'FULL BACKUP OF SALES';

BACKUP DATABASE sales
NAME = 'FULL BACKUP OF SALES',
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10;


Answer:
BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10,
NAME = 'FULL BACKUP OF SALES';

Explanation:
You should issue the following statement because it fulfills all the requirements in this scenario:

BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10,
NAME = 'FULL BACKUP OF SALES';

The following clauses are used in this statement:

 TO DISK: Specifies the location of the disk backup device on which the backup of the database will be
stored.
 BLOCKSIZE: Specifies the size of the blocks in which the data will be stored on the backup device. The
size is specified in bytes. If not specified, SQL Server automatically selects the block size that is appropriate
either for the disk or for the tape device.
 RETAINDAYS: Specifies the retention time of the backup in number of days. After the specified days have
elapsed, the backup can be overwritten.
 NAME: Specifies the name of the backup set.

The BACKUP DATABASE statement specifies the location, the block size, the number of days to retain the
backup, and the name of the backup set required in this scenario. The BLOCKSIZE value specified is 65536
bytes or 64 KB. The RETAINDAYS clause is set to 10, specifying that the backups should expire on the 11th day.

The following option is incorrect:

BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10,
FORMAT,
NAME = 'FULL BACKUP OF SALES';

You should not use the FORMAT clause in the statement because it specifies that a new backup media should be
created. By using the FORMAT clause, you can write a new media header for all the volumes in the backup set,
and invalidate all the data present on the media. In this scenario, the new backup sets should be appended to
existing backup sets on the backup media. If the FORMAT clause is not specified, it defaults to NOFORMAT.
This indicates that the new backup sets should be appended to existing backup sets on the backup media.

The following option is incorrect:

BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 64,
NOFORMAT,
NAME = 'FULL BACKUP OF SALES';

This statement specifies an incorrect value for the BLOCKSIZE clause. To specify a block size of 64 KB, you
should set the BLOCKSIZE to 65536 because the value is specified in bytes.

The following option is incorrect because the statement is syntactically invalid:

BACKUP DATABASE sales
NAME = 'FULL BACKUP OF SALES',
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10;

The NAME clause that specifies the name of the backup set must appear in the WITH clause. To rectify the error in
this statement, use the following modified statement:

BACKUP DATABASE sales
TO DISK = 'C:\sales1a.bak'
WITH
BLOCKSIZE = 65536,
RETAINDAYS = 10,
NAME = 'FULL BACKUP OF SALES';

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Back up a SQL Server environment.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Backing Up and Restoring How-to Topics > How to: Create a Full Database
Backup (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > BACKUP (Transact-SQL)

Item: 87 (Ref:Cert-70-432.2.2.2)

You are the database administrator for your company. You have been assigned the task of managing alerts and
issuing notifications for events occurring in the database.

John is a new employee in the organization. You add an operator for John by using the following Transact-SQL
batch:

EXECUTE dbo.sp_add_operator
@name = 'John',
@enabled = 1,
@email_address = 'john@esoft.com',
@weekday_pager_start_time = 070000,
@weekday_pager_end_time = 200000,
@pager_days = 10,
@pager_address = '568923@pager.esoft.com';
GO

You want John to be notified each time the alert named Alert1 is generated. To add the notification, you issue the
following Transact-SQL batch:

EXECUTE dbo.sp_add_notification
@alert_name = 'Alert1',
@operator_name = 'John',
@notification_method = 1;
GO

Which statement is true of the notification received by John for the Alert1 alert?
- The SQL Server Agent sends an e-mail notification to John on Mondays and Wednesdays each time the alert is generated between 7:00 A.M. and 8:00 P.M.
- The SQL Server Agent sends an e-mail notification to John on all weekdays each time the alert is generated between 7:00 A.M. and 8:00 P.M.
- The SQL Server Agent sends pager notifications to John on weekdays each time the alert is generated between 7:00 A.M. and 8:00 P.M.
- The SQL Server Agent sends pager notifications to John on Mondays, Tuesdays, and Wednesdays each time the alert is generated between 7:00 A.M. and 8:00 P.M.

Answer:
The SQL Server Agent sends an e-mail notification to John on Mondays and Wednesdays each
time the alert is generated between 7:00 A.M. and 8:00 P.M.

Explanation:
The SQL Server Agent sends e-mail notifications to John on Mondays and Wednesdays each time the alert is
generated between 7:00 A.M. and 8:00 P.M. In this scenario, the notification method is set to a value of 1 while
adding the notification. A notification method value of 1 specifies that the notifications will be sent through e-mail.

The notification will be sent on Mondays and Wednesdays because the @pager_days argument is set to a value
of 10 while executing the sp_add_operator system stored procedure. The @weekday_pager_start_time,
@weekday_pager_end_time, @saturday_pager_start_time, @saturday_pager_end_time,
@sunday_pager_start_time, and @sunday_pager_end_time argument values determine when notifications
are sent. The @pager_days argument determines when the operator is on duty and available to receive
notifications. Each day of the week has a corresponding value (Sunday = 1, Monday = 2, Tuesday = 4, Wednesday = 8, Thursday = 16, Friday = 32, and Saturday = 64), and @pager_days is the sum of the values for the selected days.

In this scenario, the @pager_days argument is set to a value of 10. This value indicates that the notifications will
be received on Mondays and Wednesdays because the sum of the corresponding values for Mondays and
Wednesdays is 10.

The option stating that the SQL Server Agent sends e-mail notifications to John on all weekdays between 7:00
A.M. and 8:00 P.M. is incorrect. The e-mail notifications will be sent to John only on Mondays and Wednesdays


because the @pager_days argument is set to a value of 10. To receive the notification on all weekdays, the
@pager_days argument should be set to a value of 62. This value is the sum of all the values corresponding to
weekdays.

The option stating that the SQL Server Agent sends pager notifications to John on weekdays between 7:00 A.M.
and 8:00 P.M. is incorrect because the SQL Server Agent will not send pager notifications. E-mail notifications will
be sent because the @notification_method argument is set to a value of 1 while executing the
sp_add_notification system stored procedure. To send pager notifications, the @notification_method
argument should be set to a value of 2. Notifications will be sent to John only on Mondays and Wednesdays
because the @pager_days argument is set to a value of 10.

The option stating that the SQL Server Agent sends pager notifications to John on Mondays, Tuesdays, and
Wednesdays between 7:00 A.M. and 8:00 P.M. is incorrect because the SQL Server Agent will not send pager
notifications. E-mail notifications will be sent because the @notification_method argument is set to a value or 1
while calling the sp_add_notification system stored procedure. To send pager notifications, the
@notification_method argument should be set to a value of 2. Notifications will be sent to John only on
Mondays and Wednesdays because the @pager_days argument is set to a value of 10. To send notifications on
Mondays, Tuesdays, and Wednesdays, the @pager_days argument should be set to a value of 14. This value is
the sum of the corresponding values for Mondays, Tuesdays, and Wednesdays.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent alerts.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
> sp_add_operator (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
> sp_add_notification (Transact-SQL)

Item: 93 (Ref:Cert-70-432.2.4.1)

You manage an instance of SQL Server 2008. You are creating a Policy-Based Management policy to enforce a
naming convention for databases. You want to ensure that the policy is enabled immediately after it is created.

Which evaluation mode will prevent you from creating such a policy?
- On demand
- On schedule
- On change: log only
- On change: prevent

Answer:
On demand


Explanation:
The On demand evaluation mode prevents you from creating a policy that is enabled immediately after it is
created. Policy-Based Management is used to manage entities on an instance of SQL Server 2008. While
creating Policy-Based Management policies, you are required to select an evaluation mode that defines how the
policy should be evaluated. You can select one of the following four evaluation modes:

 On demand: This mode evaluates the policy when it is manually run by the user. When this evaluation
mode is selected, the Enabled check box on the General page of the Create New Policy dialog box
cannot be selected. This prevents the policy from being enabled after it is created.
 On schedule: This mode allows the policy to be evaluated automatically by using a SQL Server Agent job.
This mode also logs any violations of the policy.
 On change: log only: This mode evaluates a policy automatically when a relevant change occurs. This
mode also logs any violations of the policy.
 On change: prevent: This mode is an automated mode that prevents any violation of the policy by using
DDL triggers.

You can select the On schedule, On change: log only, or On change: prevent evaluation mode because all of
these modes allow Policy-Based Management policies to be enabled immediately after they are created.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Implement the declarative management framework (DMF).

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administering Servers by Using Policy-Based Management

Item: 113 (Ref:Cert-70-432.2.1.3)

You are a database administrator for your company. You are managing the production database Prod1 that
resides on the SQL Server 2008 instance named Sql1.

You perform the following tasks on Prod1 as a part of your daily activities:

- Perform a differential database backup at 8:00 P.M.
- Rebuild indexes on database tables at 9:00 A.M.
- Perform a backup of the transaction log in the database at 12:00 P.M.
- Update the database statistics at 8:00 A.M.

You want to automate these tasks to occur at the scheduled time on all weekdays.

What should you do? (Choose two. Each correct answer represents part of the solution.)
- Ensure that the SQL Server Agent is running.
- Ensure that SQL Server Reporting Services is running.
- Create one job for all the tasks, and schedule each task to run at the desired time.
- Create four different jobs for the tasks, and schedule each job to run at the desired time.


Answer:
Ensure that the SQL Server Agent is running.
Create four different jobs for the tasks, and schedule each job to run at the desired time.

Explanation:
You should ensure that the SQL Server Agent is running, create four different jobs for the tasks, and schedule
each job to run at the desired time. When you create a job in SQL Server, you can schedule the job to run at a particular time.
You cannot schedule the different tasks within a job to be performed at different times. If you want each task to be
performed at a different time, you should create one job for each task. The SQL server will then perform these
tasks at the scheduled time. Additionally, to enable the SQL server to perform the tasks successfully, the SQL
Server Agent should be running. If the SQL Server Agent is not running, the scheduled jobs will not be run. The
SQL Server Agent is the service responsible for executing scheduled SQL administrative tasks.
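
One of the four jobs might be created as sketched below; the job, step, and schedule names, and the maintenance command, are assumptions for illustration:

```sql
-- Create the job and a single T-SQL step.
EXECUTE msdb.dbo.sp_add_job @job_name = N'Prod1 - Rebuild indexes';
EXECUTE msdb.dbo.sp_add_jobstep
    @job_name  = N'Prod1 - Rebuild indexes',
    @step_name = N'Rebuild',
    @subsystem = N'TSQL',
    @command   = N'ALTER INDEX ALL ON dbo.SomeTable REBUILD;';  -- placeholder

-- Schedule it for 9:00 A.M. on weekdays (Mon-Fri = 2+4+8+16+32 = 62).
EXECUTE msdb.dbo.sp_add_jobschedule
    @job_name = N'Prod1 - Rebuild indexes',
    @name = N'Weekdays 9 AM',
    @freq_type = 8,                    -- weekly
    @freq_interval = 62,               -- weekday bitmask
    @freq_recurrence_factor = 1,       -- every week
    @active_start_time = 090000;       -- HHMMSS

-- Target the local server so the Agent will run the job.
EXECUTE msdb.dbo.sp_add_jobserver @job_name = N'Prod1 - Rebuild indexes';
GO
```

The other three jobs would be created the same way, each with its own command and start time.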

After each of the jobs is created, you can change the properties of the job by viewing the Job Schedule
Properties dialog box. To disable a job, you can clear the Enabled check box in the Job Schedule Properties
dialog box. Disabling a job is a better choice than deleting the job if you think you may need to re-enable the job in
the future.


You can also change the schedule type and schedule frequency from this dialog box and configure the duration
for the job. The Schedule type option can be configured with one of the following settings:

- Start automatically when SQL Server Agent starts
- Start whenever the CPUs become idle
- Recurring
- One time

The Frequency section only becomes available if the Recurring schedule type is selected.

You should not ensure that SQL Server Reporting Services is running. SQL Server Reporting Services is not
required to perform scheduled tasks on SQL Server. SQL Server Reporting Services is a server-based tool that
can be used to create reports that contain data from different sources, such as SQL Server, Analysis Services, or
Microsoft .NET data providers. You can create different types of reports using SQL Server Reporting Services,
such as tabular, matrix, and free-form reports.

You should not create one job for all the tasks and schedule each task to run at the desired time. You cannot
schedule different tasks within a job to run at different times. All the tasks in a job must be scheduled to run at the
same time. If you want to schedule different tasks to be performed at different times, you should create a separate
job for each task.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
MSDN > MSDN Library >Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > About SQL Server Agent

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > Implementing Jobs > Creating Jobs

Item: 115 (Ref:Cert-70-432.2.1.6)

You are the database administrator for your company. The network contains five SQL Server 2008 computers.
One of the servers named MKTGSQL1 is located in a remote branch office.

You want to enable a helpdesk technician in the branch office to create a job for database maintenance on
MKTGSQL1. You want to ensure that when the technician executes the job steps, the output is written to an
operating system file. To achieve this, you want to assign the helpdesk technician to an appropriate role.

To which role should you assign the helpdesk technician?


- sysadmin
- SQLAgentUserRole
- SQLAgentReaderRole
- SQLAgentOperatorRole


Answer:
sysadmin

Explanation:
You should assign the helpdesk technician to the sysadmin fixed server role. A job step is an action that is
performed on a database or a server by a job. You must create at least one job step for every job. You can
configure a job step to run executable programs, operating system commands, Transact-SQL statements,
ActiveX and PowerShell scripts, replication tasks, Analysis Services tasks, and Integration Services packages.
The output generated by a job can either be written to the sysjobstepslogs system table in the msdb database
or to an operating system file. Job steps that run executable programs, operating system commands, Transact-
SQL statements, or Analysis Services tasks can write output to both destinations. The output of a job step is
written to an operating system file only when the user who executed the job has been assigned to the sysadmin
fixed server role. When a job step is executed by users who have been assigned the SQLAgentUserRole,
SQLAgentReaderRole, or SQLAgentOperatorRole fixed database role, the output of the job step can only be
written to the sysjobstepslogs system table. You should perform the following steps to create a job step that
executes Transact-SQL:

1. Open SQL Server Management Studio, and expand the instance of the Database Engine.
2. Expand the SQL Server Agent node in the Object Explorer.
3. Create a new job or modify an existing job by right-clicking the existing job and selecting the Properties
option.
4. Open the Steps page in the Job Properties dialog box, and click the New button to open the New Job
Step dialog box.


5. Type a name for the job step in the Step name textbox.
6. Select Transact-SQL script (T-SQL) from the Type drop-down list.
Note: If the Run as drop-down list is available, you can select a different SQL login under which the job
step will run.
7. Type the Transact-SQL statement batches in the Command box. To use a Transact-SQL file, you can
click the Open button and select the desired Transact-SQL file.
8. Click the Parse button to verify your syntax. If the syntax is correct, a message stating that Parse
succeeded will be displayed.
9. Use the Advanced page to configure the job step options.


10. Select an action from the On success action drop-down list that should be performed if the job succeeds.
11. Type a number from 0 to 9999 in the Retry attempts field to specify the number of retry attempts.
12. Type a number from 0 to 9999 in the Retry interval (minutes) field to specify the retry interval.
13. Select an action from the On failure action drop-down list that should be performed if the job fails.
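
The UI steps above can also be performed in Transact-SQL. The following is a minimal sketch using the msdb job stored procedures; the job name, step name, and command are hypothetical:

```sql
USE msdb;
GO
-- Create a job (the name is illustrative)
EXEC dbo.sp_add_job
    @job_name = N'NightlyCleanup';
-- Add a Transact-SQL job step with retry and completion actions
EXEC dbo.sp_add_jobstep
    @job_name = N'NightlyCleanup',
    @step_name = N'Purge old rows',
    @subsystem = N'TSQL',
    @command = N'DELETE FROM dbo.AuditLog
                 WHERE LoggedAt < DATEADD(DAY, -90, GETDATE());',
    @retry_attempts = 3,       -- corresponds to the Retry attempts field
    @retry_interval = 5,       -- minutes, corresponds to Retry interval (minutes)
    @on_success_action = 1,    -- 1 = quit the job reporting success
    @on_fail_action = 2;       -- 2 = quit the job reporting failure
GO
```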

If the job is a Transact-SQL script, you can configure the following options:

 Output file - Specifies the name of an operating system file to which the output of the job step
should be written. The file is overwritten by default each time the job step is executed. To prevent
the output file from being overwritten, select the Append output to existing file check box. This
check box is only available to members of the sysadmin fixed server role.
 Log to table - Specifies that output for the job step should be written to the sysjobstepslogs
system table. The table contents are overwritten by default each time the job step is executed. To
prevent the table contents from being overwritten, select the Append output to existing entry in
table check box. After the job step is executed, you can view the contents of the table by clicking
the View button.


 Include step output in history - Specifies that output from the job step should be written to the job
history.

Note: If you are a member of the sysadmin fixed server role, you can configure the job step to run by using a
different SQL login. To do this, select the desired SQL login from the Run as user list.

You should not assign the helpdesk technician to the SQLAgentUserRole, SQLAgentReaderRole, or
SQLAgentOperatorRole fixed database role. When a job step is executed by a user with these roles, the output
of the job step can only be written to the sysjobstepslogs table, not to operating system files.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > Implementing Jobs > Creating Job Steps

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Automated Administration How-to Topics (SQL Server Management Studio) >
How to: Create a Transact-SQL Job Step (SQL Server Management Studio)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Automated Administration How-to Topics (SQL Server Management Studio) >
How to: Define Transact-SQL Job Step Options (SQL Server Management Studio)

Item: 116 (Ref:Cert-70-432.2.1.1)

You are the database administrator for a major shipping company. You manage all the SQL Server 2008
instances of the company. For one of your instances, you have created jobs to perform regular administrative
activities.

Some of these jobs failed because the server went down due to a power failure. You want to
analyze these failed jobs and determine the tasks they performed.

Which query will you use to achieve the objective?


SELECT job_id, step_id, step_name from msdb.dbo.sysjobhistory

WHERE run_status = 1

SELECT job_id, step_id, step_name from msdb.dbo.sysjobactivity

WHERE run_status = -1

SELECT job_id, step_id, step_name from msdb.dbo.sysjobhistory

WHERE run_status = 0

SELECT job_id, step_id, step_name from msdb.dbo.sysjobactivity

WHERE run_status = 0


Answer:
SELECT job_id, step_id, step_name from msdb.dbo.sysjobhistory

WHERE run_status = 0

Explanation:
The following option is correct:

SELECT job_id, step_id, step_name from msdb.dbo.sysjobhistory
WHERE run_status = 0

The sysjobhistory system table provides a historical record of the jobs that previously executed. This table
resides in the msdb database and contains columns, such as job_id, step_id, and step_name, which identify
jobs and the steps involved in the jobs. To retrieve the details of tasks being performed by the failed jobs, you
should retrieve rows for only the jobs that failed. You can accomplish this by specifying a condition of
run_status=0 in the WHERE clause of the query. A value of 0 in the run_status column indicates a failed job.
Therefore, the job_id, step_id, and step_name columns will be retrieved only for the jobs that have failed.
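
Because sysjobhistory identifies jobs only by job_id, it is often useful to join it to the sysjobs table to see job names as well. A sketch of such a query (the column list is illustrative):

```sql
USE msdb;
GO
-- List failed job steps together with the owning job's name
SELECT j.name AS job_name,
       h.step_id,
       h.step_name,
       h.run_date,
       h.run_time
FROM dbo.sysjobhistory AS h
INNER JOIN dbo.sysjobs AS j
    ON j.job_id = h.job_id
WHERE h.run_status = 0;   -- 0 = failed
```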

You should not use the following query:

SELECT job_id, step_id, step_name
FROM msdb.dbo.sysjobhistory
WHERE run_status = 1;

This query uses a condition of run_status=1 in the WHERE clause. A value of 1 in the run_status column
indicates that the jobs completed successfully. In this scenario, you must retrieve details about the jobs that failed,
not the ones that successfully completed.

You should not use the following query:

SELECT job_id, step_id, step_name
FROM msdb.dbo.sysjobactivity
WHERE run_status = -1;

This query uses the sysjobactivity system table, which does not provide information for failed jobs. The
sysjobactivity system table provides details on current jobs. This query will generate an error because the
sysjobactivity system table does not contain a step_id, step_name, or run_status column.

You should not use the following query:

SELECT job_id, step_id, step_name
FROM msdb.dbo.sysjobactivity
WHERE run_status = 0;

This query will generate an error because the sysjobactivity system table does not contain a step_id,
step_name, or run_status column. The sysjobactivity system table does not provide information for failed jobs.
The sysjobactivity system table provides details on current jobs and their job steps. This table contains
information, such as the time the last step in the job executed and the time at which the job is scheduled to run
next.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.


References:
MSDN > MSDN Library> Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Tables (Transact-SQL) > SQL Server Agent Tables (Transact-SQL) > sysjobhistory
(Transact-SQL)

Item: 118 (Ref:Cert-70-432.2.4.3)

You are responsible for managing an instance of SQL Server 2008. The instance contains several databases that
are used to store information. Users create new tables in these databases. You discover that users are not
following any naming conventions while creating new tables in the databases. You want to create a Policy-Based
Management policy to enforce table naming conventions for all the databases.

You create a new category named CorpPolicies. You want to ensure that the Policy-Based Management policies
in the CorpPolicies category are applied to the entire instance of SQL Server 2008.

What should you do?


Select the Mandate Database Subscriptions check box for the CorpPolicies category in the Manage Policy Categories dialog box.
Clear the Mandate Database Subscriptions check box for the CorpPolicies category in the Manage Policy Categories dialog box.
Ensure that the Server restriction option for all policies in the CorpPolicies category is set to None.
Ensure that the instance of SQL Server 2008 is specified in the Against target field for all policies in the CorpPolicies category.

Answer:
Select the Mandate Database Subscriptions check box for the CorpPolicies category in the
Manage Policy Categories dialog box.

Explanation:

You should select the Mandate Database Subscriptions check box for the CorpPolicies category in the
Manage Policy Categories dialog box as shown in the following graphic:


The Manage Policy Categories dialog box allows you to easily apply all Policy-Based Management policies in a
particular category to the entire instance of SQL Server 2008. You should select the Mandate Database
Subscriptions check box for a category to ensure that all policies in that category are applied to the entire
instance. When the Mandate Database Subscriptions check box is not selected for a policy category, the policy
category must be applied individually to each relevant portion of the server. To open the Manage Policy
Categories dialog box, you should perform the following steps:

1. Open SQL Server Management Studio, and expand the Management node in the Object Explorer.
2. Right-click the Policy Management node, and click the Manage Categories option.

You should not clear the Mandate Database Subscriptions check box for the CorpPolicies category in the
Manage Policy Categories dialog box. When you clear the Mandate Database Subscriptions check box for a
category, the policy category must be applied individually to each relevant portion of the server, which is not what
was required in this scenario. To manually apply a policy, you can configure the policy with the On demand
evaluation mode.

You should not ensure that the Server restriction option for all policies in the CorpPolicies category is set to
None. The Server restriction option is used to limit a policy to a subset of the target types. This option is
available on the General page of the Create New Policy dialog box or the Open Policy dialog box if you open an
existing policy. The Server restriction option cannot be used to apply the policy to the entire instance of the SQL
Server 2008.

You should not ensure that the instance of the SQL Server 2008 is specified in the Against targets field for all
policies in the CorpPolicies category because it is not possible to specify an instance of SQL Server 2008 in the
Against targets field. The Against targets field is available on the General page of the Create New Policy
dialog box or the Open Policy dialog box if you open an existing policy, and it is used to specify entities of the
server, such as databases.


Objective:
Maintaining SQL Server Instances

Sub-Objective:
Implement the declarative management framework (DMF).

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Management Studio F1 Help > Object Explorer F1 Help > Management Node (Object
Explorer) > Policy Management Node (Object Explorer) > Manage Policy Categories Dialog Box

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administering Servers by Using Policy-Based Management

Item: 127 (Ref:Cert-70-432.2.3.1)

You are the database administrator for your company. The company's network consists of a single Active
Directory domain named nutex.com.

You are configuring the SQL Server Agent operator information for a user named Paul. You want to ensure that
Paul receives notifications by pager Monday through Friday from 9:00 A.M. to 6:00 P.M.

Which Transact-SQL should you run?


USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 62;
GO

USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 32;
GO

USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 64;
GO

USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 127;
GO

Answer:
USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 62;
GO

Explanation:
You should run the following Transact-SQL:

USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 62;
GO

The SQL Server Agent service allows notifications to be sent through operators when a job has completed or an
alert is triggered. Operators are aliases for users or groups that can receive notifications. You can create
operators by using SQL Server Management Studio or using the sp_add_operator system stored procedure.
While configuring an operator, you must ensure that you specify a unique name for the operator and provide the
operator's contact information. To specify the start and end time for all weekdays when the operator can receive
notifications by pager, you should use the @weekday_pager_start_time and @weekday_pager_end_time
parameters. The time for these parameters must be entered by using the HHMMSS format. To specify the days
the operator will receive notifications, you should use the @pager_days parameter. The valid values for this
parameter are from 0 to 127, where 0 indicates that the operator is never available for receiving pager messages.
Each day of the week is assigned a value as follows: Sunday = 1, Monday = 2, Tuesday = 4,
Wednesday = 8, Thursday = 16, Friday = 32, and Saturday = 64. The value for the @pager_days parameter is
calculated by adding the values of the required days for which notifications should be sent. For example, the
value for weekdays from Monday through Friday would be 2+4+8+16+32 = 62.
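
Because the day values are powers of two, the @pager_days value is simply a bitmask. You can verify the weekday total with a quick calculation:

```sql
-- Monday through Friday as a sum of day values
SELECT 2 + 4 + 8 + 16 + 32 AS pager_days;  -- 62
-- The same result expressed with bitwise OR
SELECT 2 | 4 | 8 | 16 | 32 AS pager_days;  -- 62
```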

You should not run the following Transact-SQL:

USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 32;
GO

The value for the @pager_days parameter is calculated by adding the values of the required days for which
notifications should be sent. A value of 32 is for Friday. Specifying 32 for the @pager_days parameter will allow
Paul to receive notifications by pager only on Fridays.

You should not run the following Transact-SQL:

USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 64;
GO

The value for the @pager_days parameter is calculated by adding the values of the required days for which
notifications should be sent. A value of 64 is for Saturday. Specifying 64 for the @pager_days parameter will
allow Paul to receive notifications by pager only on Saturdays.

You should not run the following Transact-SQL:

USE msdb;
GO
EXEC dbo.sp_add_operator
@name = N'Paul',
@enabled = 1,
@email_address = N'paul',
@pager_address = N'1234567AW@pager.nutex.com',
@weekday_pager_start_time = 090000,
@weekday_pager_end_time = 180000,
@pager_days = 127;
GO

The value for the @pager_days parameter is calculated by adding the values of the required days for which
notifications should be sent. Specifying 127 for the @pager_days parameter will allow Paul to receive
notifications by pager from Sunday through Saturday. In this scenario, you want to ensure that Paul receives
notifications by pager from Monday through Friday. Therefore, you should specify a value of 62 for the
@pager_days parameter.


Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent operators.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
> sp_add_operator (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks > (SQL Server Agent) > Monitoring and Responding to Events > Defining Operators

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Automated Administration How-to Topics (SQL Server Management Studio) >
How to: Create an Operator (SQL Server Management Studio)

Item: 152 (Ref:Cert-70-432.2.2.1)

You are the database administrator for your company. You maintain a database named Sql_ny on a SQL Server
2008 instance named Sql1. This server is located at the head office in New York. You have created 30 different
alerts on this database server.

Your company has recently opened two new branch offices in London and Paris. The databases, named Sql_lon
and Sql_par, in the London and Paris branch offices reside on SQL Server 2008 instances named Sql2 and
Sql3, respectively. The data in these databases is similar to the data stored in the Sql_ny database.

You are now required to define the same alerts for the Sql_lon and Sql_par databases that exist for the Sql_ny database.

What should you do to define the alerts for the databases in this scenario?
Create an SSIS package to copy the master database from Sql1 onto the two destination servers.
Create an SSIS package to copy the msdb database from Sql1 onto the two destination servers.
Create a Transact-SQL script for all the alerts on Sql1 by using the Alert.Script() method, and run the script on the destination servers.
Create a Transact-SQL script for each alert on Sql1 by using SQL Server Management Studio, and run each script on the destination servers.

Answer:
Create a Transact-SQL script for each alert on Sql1 by using SQL Server Management Studio,
and run each script on the destination servers.

Explanation:
You should create a Transact-SQL script for each alert on Sql1 by using SQL Server Management Studio and run
each script on the destination servers. To copy the alerts from one database server to another, you can use SQL
Server Management Studio to create scripts for the required alerts and then run these scripts on the servers to
which the alerts should be copied. To script an alert by using SQL Server Management Studio, you should
perform the following steps:

1. Expand the Alerts section in SQL Server Agent after connecting to an instance.
2. Right-click the alert, point to Script Alert As, and select either the CREATE TO or the DROP TO option to
create a script and save it in a text file or the clipboard.
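
The script that Management Studio generates with the CREATE TO option is essentially a call to the sp_add_alert system stored procedure, so running it on the destination servers re-creates the alert there. A simplified sketch of such a script (the alert name and severity are hypothetical):

```sql
USE msdb;
GO
-- Re-create an alert on the destination server
EXEC dbo.sp_add_alert
    @name = N'Severity 16 errors',
    @severity = 16,
    @enabled = 1;
GO
```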

You should not create a SQL Server Integration Services (SSIS) package to copy the master database to the two
destination servers. The master database only contains database startup settings, such as locations of the
various files, in the database. The master database does not contain any information about the alerts.

You should not create an SSIS package to copy the msdb database to the two destination servers. Copying the
msdb database to the destination servers will involve copying all the SQL Server Agent data. In this scenario,
only the alerts should be copied. Therefore, you should use a method that copies only the alerts to the destination
servers.

You should not create a Transact-SQL script for all the alerts on Sql1 by using the Alert.Script() method and
then run the script on the destination servers. The AlertCollection.Script() method is the correct method used to
create a Transact-SQL script for all the alerts in the database. You can use this method to create a script for all
the alerts in the database, and then run this script on the destination servers to re-create the alerts. The
Alert.Script() method is used to create a Transact-SQL script for an alert in the database.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent alerts.

References:
MSDN > Developer Centers > Microsoft Developer Network > Script Method

MSDN > Developer Centers > Microsoft Developer Network > How to: Script Alerts Using Transact-SQL (SQL
Server Management Studio)

MSDN > Developer Centers > Microsoft Developer Network > AlertCollection.Script Method

Item: 160 (Ref:Cert-70-432.2.1.8)

You are the database administrator of your company. The network contains a SQL Server 2008 computer named
SQL1. You are in the process of creating a SQL Server Agent job that will use an ActiveX script to monitor
database file sizes.

To ensure that the job runs successfully, you want to configure the job step to use a proxy account that has
permission for the ActiveX Script subsystem of the SQL Server Agent. To achieve this, you want to create a proxy
account for the job step.

What must you do before creating a proxy account?


Create a new operator.
Create a new SQL login.
Create a credential.
Create a non-administrative local user account.

Answer:
Create a credential.


Explanation:
You should create a credential. By default, every job step runs under a specific security context. If you configure a
proxy account for a job step, the job step runs under the security context of the proxy's credentials. If you do not
configure a proxy account for a job step, the job step runs under the security context of the SQL Server Agent
service account. However, you should configure a proxy account with appropriate permission for the respective
SQL Server Agent subsystem when creating job steps that use ActiveX scripts, PowerShell scripts, and operating
system commands, as shown in the following graphic:

If the proxy account does not have permission for the appropriate subsystem, the job step may not run
successfully. To be able to create a proxy account, you must have at least one credential created that will be used
by the proxy account. A credential is a record that contains the authentication information required to connect to a
resource outside of SQL Server.
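
The sequence can be sketched in Transact-SQL as follows; the credential name, proxy name, and Windows account are hypothetical:

```sql
USE master;
GO
-- 1. Create a credential that stores the Windows account's authentication information
CREATE CREDENTIAL AgentScriptCredential
    WITH IDENTITY = N'DOMAIN\AgentScriptUser',
    SECRET = N'P@ssw0rd!';
GO
-- 2. Create a proxy account that uses the credential
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'ActiveXProxy',
    @credential_name = N'AgentScriptCredential';
-- 3. Grant the proxy access to the ActiveX Script subsystem
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'ActiveXProxy',
    @subsystem_name = N'ActiveScripting';
GO
```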

You should not create a new operator because that is not required by a proxy account. SQL Server operators are
used to receive alert and job status notifications in response to events generated by the server.

You should not create a new SQL login or a non-administrative local user account because these are not required
by a proxy account; a proxy account requires a credential. A SQL login or a non-administrative local user account
may be used to create a credential, but neither can be directly specified in a proxy account.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Automated Administration How-to Topics (SQL Server Management Studio) >
How to: Create a Proxy (SQL Server Management Studio)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Server Management How-to Topics > SQL Server Management Studio How-to
Topics > How to: Create a Credential (SQL Server Management Studio)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Automated Administration How-to Topics (SQL Server Management Studio) >
How to: Create an ActiveX Script Job Step (SQL Server Management Studio)

Item: 161 (Ref:Cert-70-432.2.1.10)

You are responsible for managing an instance of SQL Server 2008 that is installed on a server that has four
processors. The instance contains a database named FinanceDB.

The FinanceDB database is accessed by several users using different types of Transact-SQL queries. You want
to ensure that the SQL server creates and runs a parallel plan for a query if the estimated elapsed time for
running a serial plan for the query is greater than one second.

Which Transact-SQL should you execute?


sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO
sp_configure 'cost threshold for parallelism', 1;
GO
RECONFIGURE;
GO

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'cost threshold for parallelism', 1;
GO
RECONFIGURE;
GO


sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO

sp_configure 'show advanced options', 0;
GO
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO

Answer:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'cost threshold for parallelism', 1;
GO
RECONFIGURE;
GO

Explanation:
You should execute the following Transact-SQL:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'cost threshold for parallelism', 1;
GO
RECONFIGURE;
GO

The cost threshold for parallelism configuration option allows you to specify when the SQL server creates and
runs a parallel plan for a query. A parallel plan for a query is created only when the estimated elapsed time to run
a serial plan for the query is greater than the value configured in the cost threshold for parallelism option.
Therefore, to ensure that a parallel plan is created for a query if the estimated elapsed time for running a serial
plan for the query is greater than one second, you should set a value of 1 for the cost threshold for parallelism
option. The cost threshold for parallelism option is an advanced configuration option. To be able to change
advanced configuration options by using the sp_configure system stored procedure, you must first ensure that
the value for the show advanced options configuration option is set to 1.
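
After running the statements, you can confirm the change by querying the sys.configurations catalog view; a quick sanity check might look like this:

```sql
-- Verify the configured and running values
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'cost threshold for parallelism';
```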

You should not execute the following Transact-SQL:

sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO
sp_configure 'cost threshold for parallelism', 1;
GO
RECONFIGURE;
GO

The cost threshold for parallelism option is an advanced option. To be able to change this option by using the
sp_configure system stored procedure, you must first ensure that the value for the show advanced options
configuration option is set to 1.

You should not execute the following Transact-SQL because it does not allow you to specify the threshold at
which SQL Server creates and executes a parallel plan for a query:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO

When you install SQL Server 2008 on a multiprocessor computer, it automatically detects the optimal number of
processors that should be used to run a single statement for each parallel plan execution. This is referred to as
the degree of parallelism. The max degree of parallelism configuration option can be used to prevent SQL
Server from deciding the number of processors to use for each parallel plan. You can use the max degree of
parallelism option to explicitly specify the number of processors that should be used for executing parallel plans.
The default value for this setting is 0, which indicates that all available processors should be used. The max
degree of parallelism option is an advanced configuration option. To be able to change advanced options by
using the sp_configure system stored procedure, you must first ensure that the value for the show advanced
options configuration option is set to 1.

You should not execute the following Transact-SQL:

sp_configure 'show advanced options', 0;
GO
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO

The max degree of parallelism option is an advanced configuration option. To be able to change advanced
configuration options by using the sp_configure system stored procedure, you must first ensure that the value for
the show advanced options configuration option is set to 1. Also, the max degree of parallelism option is used
to prevent SQL Server from deciding the number of processors to use for each parallel plan. It does not allow you
to specify the threshold at which SQL Server creates and executes a parallel plan for a query.

Objective:
Maintaining SQL Server Instances

Sub-Objective:
Manage SQL Server Agent jobs.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
Servers > Setting Server Configuration Options > cost threshold for parallelism Option


Managing SQL Server Security


Item: 6 (Ref:Cert-70-432.3.3.1)

You are the SQL administrator for your company. Two SQL Server 2008 computers named SQL_prod and
SQL_test are used by your company.

Due to recent security policies established by your company, you need to ensure that a user cannot establish
more than two sessions with SQL_prod. Users can establish as many sessions as necessary with SQL_test.

What should you do?


Create a DML trigger on SQL_prod.
Create a DDL trigger on SQL_prod.
Create a logon trigger on SQL_prod.
Create a stored procedure on SQL_prod.

Answer:
Create a logon trigger on SQL_prod.

Explanation:
You should create a logon trigger on SQL_prod. Logon triggers fire in response to a LOGON event. The FOR
LOGON clause specifies that a trigger is a LOGON trigger. For example, the following script creates a trigger that
fires if there are already two user sessions created by the JohnD user:

CREATE TRIGGER connection_limit_trigger
ON ALL SERVER WITH EXECUTE AS 'JohnD'
FOR LOGON
AS
BEGIN
IF ORIGINAL_LOGIN()= 'JohnD' AND
(SELECT COUNT(*) FROM sys.dm_exec_sessions
WHERE is_user_process = 1 AND
original_login_name = 'JohnD') > 2
ROLLBACK;
END;

You should not create a Data Manipulation Language (DML) trigger on SQL_prod. DML triggers fire when DML
statements, such as INSERT, UPDATE, and DELETE statements, are issued on tables or views.

You should not create a Data Definition Language (DDL) trigger on SQL_prod. DDL triggers fire when DDL
statements such as CREATE, ALTER, DROP, GRANT, DENY, REVOKE, or UPDATE STATISTICS statements,
are issued.

You should not create a stored procedure on SQL_prod. A stored procedure cannot be configured to execute
automatically in response to a certain event. Triggers are configured to execute automatically in response to
certain events.

Objective:
Managing SQL Server Security


Sub-Objective:
Manage SQL Server instance permissions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Logon
Triggers

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE TRIGGER (Transact-SQL)

Item: 17 (Ref:Cert-70-432.3.7.1)

You are the database administrator of your company. The network contains an instance of SQL Server 2008
named SQL1. SQL1 contains a database named CorpData that contains confidential information. You want to
encrypt the CorpData database.

Which Transact-SQL statement should you run first?


- ALTER DATABASE CorpData
  SET ENCRYPTION ON;

- USE CorpData;
  GO
  CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_128
  ENCRYPTION BY SERVER CERTIFICATE SrvCertificate;

- CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';

- CREATE CERTIFICATE SrvCertificate WITH SUBJECT = 'Server Certificate';

Answer:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';

Explanation:
You should run the following Transact-SQL first:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';

In SQL Server 2008, Transparent Data Encryption (TDE) is used to encrypt the contents of an entire database.
TDE uses a database encryption key to encrypt the data. To encrypt a database by using TDE, the following
steps must be performed:

1. Create a master key. To do this, you should run the following statement:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';
2. Create a server level certificate by using the following Transact-SQL statement:
CREATE CERTIFICATE SrvCertificate WITH SUBJECT = 'Server Certificate';
3. Create a database encryption key by using the server certificate. You can use the following Transact-SQL
statement to do this:
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE SrvCertificate;
4. Alter the database and set encryption on. You can use the following statement to do this:
ALTER DATABASE CorpData
SET ENCRYPTION ON;
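The four steps above can be combined into one script. As a sketch: the master key and certificate belong in the master database, while the database encryption key and SET ENCRYPTION ON apply to the user database (names follow the scenario):

```sql
USE master;
GO
-- 1. Master key, created in the master database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';
GO
-- 2. Server-level certificate protected by the master key
CREATE CERTIFICATE SrvCertificate WITH SUBJECT = 'Server Certificate';
GO
USE CorpData;
GO
-- 3. Database encryption key protected by the certificate
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE SrvCertificate;
GO
-- 4. Turn on TDE for the database
ALTER DATABASE CorpData SET ENCRYPTION ON;
GO
```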


When you encrypt a database file, the encryption is performed at the page level. SQL Server also allows
encryption at the cell level. Cell-level encryption provides a more granular level of encryption than database-level
encryption. With cell-level encryption, data is not decrypted until it is used, even if a page is loaded into memory.
This prevents sensitive data from being displayed in clear text.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage transparent data encryption.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Secure
Operation > SQL Server Encryption > Understanding Transparent Data Encryption (TDE)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Secure
Operation > SQL Server Encryption > SQL Server and Database Encryption Keys > SQL Server and Database
Encryption Keys How-to Topics > How to: Enable TDE using EKM

Item: 18 (Ref:Cert-70-432.3.4.2)

You are the database administrator for a banking firm and maintain all the SQL Server 2008 databases of the
firm.

The company stores customer-related data in the Cust_details table in a database named Customers. You have
created several roles in the database to effectively manage the permissions granted to database users.

The permissions granted to the different roles on the Cust_details table are as follows:

Role       Permissions
Clerks     SELECT
Auditors   SELECT, INSERT
Managers   SELECT, UPDATE, DELETE

John is a new user in the audit department. To create reports on the data in the Cust_details table, John is
required to view the data in this table. John should be able to add new records to the table and update the
existing data in the table. You must ensure that John only has the required privileges. You want to accomplish this
with the least administrative effort.

What should you do? (Choose all that apply. Each correct answer presents a portion of the solution.)
- Add John to the Clerks role.
- Add John to the Auditors role.
- Add John to the Managers role.
- Grant John UPDATE permission on the table.
- Grant John INSERT permission on the table.
- Revoke DELETE permission on the table from John.


Answer:
Add John to the Auditors role.
Grant John UPDATE permission on the table.

Explanation:
You should add John to the Auditors role and grant John UPDATE permission on the table. In this scenario,
John must be able to perform select, insert, and update operations on the database. Therefore, John should be
granted the appropriate permissions to do so. The Auditors role is granted SELECT and INSERT permissions on
the table. Therefore, you are only required to grant the UPDATE permission to John to fulfill the requirement in
this scenario with the least administrative effort.

You should not add John to the Clerks role. Adding John to the Clerks role will grant John SELECT permission
on the table. Therefore, in this scenario, you would be required to grant two additional privileges to John. This will
involve additional administrative effort compared to granting John the Auditors role and UPDATE permission on
the table.

You should not add John to the Managers role because this will grant additional permissions to John. The
Managers role has been granted SELECT, UPDATE, and DELETE permissions on the table. Granting John the
Managers role will grant John DELETE permission on the table, which is not required in this scenario.

You should not grant John INSERT permission on the table. In this scenario, the least administrative effort would
be required by granting John the Auditors role, and additional UPDATE permission on the table. The Auditors
role has INSERT permission on the table.

You should not revoke DELETE permission on the table from John. In this scenario, the least administrative effort
will be involved in granting the Auditors role, and this role does not have DELETE permission on the table.
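As a sketch, the chosen solution maps to two statements (assuming John already exists as a user in the Customers database and that Cust_details is in the dbo schema):

```sql
-- Add John to the Auditors role, which already holds SELECT and INSERT
EXEC sp_addrolemember 'Auditors', 'John';
-- Grant only the single missing permission directly to John
GRANT UPDATE ON dbo.Cust_details TO John;
```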

Objective:
Managing SQL Server Security

Sub-Objective:
Manage database permissions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > GRANT (Transact-SQL) > GRANT Database Permissions (Transact-SQL)

Item: 19 (Ref:Cert-70-432.3.1.6)

You are the database administrator of your company. The network contains two instances of SQL Server 2008
named SQL1 and SQL2. You move a database named ProdDB from SQL1 to SQL2.

A database user in the ProdDB database reports that he is unable to log in to SQL2. You investigate and
discover that the user has become orphaned. You want to detect all orphaned users in the ProdDB database on
SQL2.

Which code should you execute?


- USE ProdDB;
  GO
  EXEC sp_change_users_login @action='Report', @UserNamePattern = 'user', @LoginName ='login';
  GO

- USE ProdDB;
  GO
  sp_change_users_login @Action='Report';
  GO

- USE ProdDB;
  GO
  EXEC sp_helplogins
  GO

- USE ProdDB;
  GO
  EXEC sp_helplogins 'Report';
  GO

Answer:
USE ProdDB;
GO
sp_change_users_login @Action='Report';
GO

Explanation:
You should execute the following code:

USE ProdDB;
GO
sp_change_users_login @Action='Report';
GO

A database user requires a valid SQL Server login to log in to an instance of SQL Server 2008. A database user
can become orphaned if you restore, attach, or copy a database from one SQL Server instance to another. You
can use the sp_change_users_login system stored procedure to detect orphaned users in a database. The
@Action parameter of the sp_change_users_login system stored procedure defines the action that should be
performed by the procedure. Specifying the Report value for the @Action parameter displays a list of users and
their security identifiers (SID) in the specified database that are not linked to any SQL login.
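As a follow-up, the same procedure can relink an orphaned user once a matching login exists on the new instance; a sketch, with JohnD as a hypothetical user name:

```sql
USE ProdDB;
GO
-- 'Auto_Fix' links the orphaned user to a login of the same name,
-- creating the login (with the supplied password) if it does not exist;
-- @LoginName must be NULL when using 'Auto_Fix'
EXEC sp_change_users_login @Action='Auto_Fix', @UserNamePattern='JohnD',
     @LoginName=NULL, @Password='p@ssw0rd';
GO
```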

You should not run the following code:

USE ProdDB;
GO
EXEC sp_change_users_login @action='Report', @UserNamePattern = 'user', @LoginName
='login';
GO
This code is syntactically incorrect.

You should not run the following code:

USE ProdDB;
GO
EXEC sp_helplogins
GO

This code provides information about SQL logins and the database users associated with them. If you do not
specify a particular login, information about all SQL logins is displayed.


You should not run the following code:

USE ProdDB;
GO
EXEC sp_helplogins 'Report';
GO

'Report' is not a valid parameter value for the sp_helplogins system stored procedure, which accepts only a
login name pattern. Also, the sp_helplogins system stored procedure provides information about SQL logins
and the database users associated with those logins, not orphaned accounts.

After generating a list of the orphaned accounts, you can then go to the original instance and create a script that
will create the SQL logins. After creating the script, run the script on the new instance to create the accounts.

Another possible solution is to transfer the logins to the other SQL Server 2008 instance. However, keep in mind
that transferred logins will no longer work on the original instance. The Transfer Logins task of SQL Server
Integration Services (SSIS) will transfer the logins between SQL Server 2008 instances.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage logins and server roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Troubleshooting > Troubleshooting
Concepts > Troubleshooting Orphaned Users

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
Servers > Managing Metadata When Making a Database Available on Another Server Instance

Item: 21 (Ref:Cert-70-432.3.4.1)

You are a database administrator for your company. Your company stores all its product data in a SQL Server
2005 database named Products. You upgrade the Products database to SQL Server 2008. After the upgrade,
you create the respective logins and users in the new database on SQL Server 2008.

Eric, a user in your database, complains that since the upgrade he is not able to access the tables created by
another user, Adam, through a stored procedure. However, Adam is able to access the tables through the stored
procedure. You need to ensure that Eric is able to access the data while minimizing excessive permissions.

What should you do?


- Grant the CREATE TABLE permission to Eric so that he can create new tables.
- Ask Adam to re-create the stored procedure with the RECOMPILE option.
- Grant the db_owner role to Eric.
- Ask Adam to re-create the stored procedure with the EXECUTE AS OWNER clause.

Answer:
Ask Adam to re-create the stored procedure with the EXECUTE AS OWNER clause.

Explanation:
You should ask Adam to re-create the stored procedure with the EXECUTE AS OWNER clause. When you
upgrade a database to a new version, all the database objects are transferred to the new database. The database
logins and users are not transferred, and you should explicitly create the database logins and users as they
existed on the old server. You must also grant the required permissions to users in the new database who access
objects owned by other users. In this scenario, Eric is trying to access tables through a stored procedure. Adam is
able to execute the stored procedure and access the data. You could either have Eric run the stored procedure as
Adam, give Eric permissions to the stored procedure, or edit the procedure to run under the owner's security
context.
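A sketch of the suggested fix; the procedure and table names here are illustrative, not from the scenario:

```sql
-- Re-create the procedure so it runs in the owner's (Adam's) security context
CREATE PROCEDURE dbo.GetCustomerOrders
WITH EXECUTE AS OWNER
AS
BEGIN
    SELECT * FROM dbo.CustomerOrders;  -- table owned by Adam
END;
GO
-- Eric still needs permission to execute the procedure itself
GRANT EXECUTE ON dbo.GetCustomerOrders TO Eric;
```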

You should not grant the CREATE TABLE permission to Eric so that he can create new tables. Granting the
CREATE TABLE permission to Eric will allow Eric to create new tables but will not give him permissions to the
existing tables that contain data.

You should not ask Adam to re-create the stored procedure with the RECOMPILE option. Re-creating the stored
procedure with the RECOMPILE option will not give Eric permissions to the stored procedure or the tables.
Including the RECOMPILE option will cause the stored procedure to be recompiled each time it is run.

You should not grant the db_owner role to Eric because granting the db_owner fixed database role to Eric would
grant Eric more permissions than required in this scenario. The db_owner fixed database role allows users to
perform all types of configuration and maintenance activities in the database, and is not required in this scenario.

While permissions are not required on the EXECUTE statement, keep in mind that users that execute a stored
procedure must also have permissions to execute the underlying statements within the stored procedure, such as
permission to use the INSERT statement. If users do not have permissions to run the underlying statements, the
user can impersonate another user that has the permissions using an EXECUTE AS statement.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage database permissions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online >Database Engine > Technical Reference > Transact-SQL
Reference > EXECUTE AS (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > EXECUTE (Transact-SQL)

Item: 28 (Ref:Cert-70-432.3.3.2)

You are the database administrator of your company. The network contains an instance of SQL Server 2008 that
is accessed by all users on the network.

The company's policy states that users must not have direct permission to create database triggers on the SQL
server. Any user that needs to create triggers must use the execution context of a different user or login that has
the appropriate permissions. To achieve this, you create a new login that has appropriate permissions to create
triggers. You want to ensure that users are able to use this login with the EXECUTE AS statement.


What should you do?


- Ensure that the new login exists in the sys.database_principals view.
- Ensure that the new login exists in the sys.server_principals view.
- Ensure that the new login exists in the sys.database_role_members view.
- Ensure that the new login exists in the sys.server_role_members view.

Answer:
Ensure that the new login exists in the sys.server_principals view.

Explanation:
You should ensure that the new login exists in the sys.server_principals view. The EXECUTE AS statement
defines the execution context for a user's session. During a session when a user performs any operation, SQL
Server checks the user's permission to determine whether the user has the required permission to perform that
operation. If the user does not have the required permission, the operation is not performed. When a user
performs an operation by using the EXECUTE AS statement, the permissions are checked against the user or
login name specified in the EXECUTE AS statement. The EXECUTE AS statement runs successfully only if the
user or login name specified in the statement exists as a principal in the sys.database_principals or
sys.server_principals view. If the user needs to execute the statement across the entire server, the principal is
located in the sys.server_principals view. If the user needs to execute the statement only within a single
database, the principal is located in the sys.database_principals view.
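A quick way to verify this is to query the catalog view directly; a minimal sketch, where the login name is an assumption:

```sql
-- Returns a row only if the login exists as a server-level principal
SELECT name, type_desc, create_date
FROM sys.server_principals
WHERE name = 'TriggerCreatorLogin';  -- hypothetical login name
```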

You should not ensure that the new login name exists in the sys.database_principals view. The
sys.database_principals view contains information about each principal in a database. This answer would have
been correct if you had used a database user name in the EXECUTE AS statement. In this scenario, you have
created a new SQL login, which is a server-level object. Therefore, it must exist in the sys.server_principals
view because this view contains information about every server-level principal.

You should not ensure that the new login name exists in the sys.database_role_members view or in the
sys.server_role_members view. The EXECUTE AS statement runs successfully only if the user or login name
specified in the statement exists as a principal in the sys.database_principals or sys.server_principals view.
The sys.database_role_members view contains information about each member of each database role. The
sys.server_role_members view contains information about each member of each fixed server role.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage SQL Server instance permissions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > EXECUTE AS (Transact-SQL)

Item: 48 (Ref:Cert-70-432.3.1.7)

You are the database administrator of the Verigon corporation. Your network contains a SQL Server 2005
computer running on Windows 2000 Server with Mixed Mode authentication in the verigon.com domain. You
want to upgrade the SQL server to SQL Server 2008 and enforce the domain password policy for the users of the
verigon.com domain on the SQL Server. In addition, you want to use Kerberos authentication to provide better
security.

What should you do? (Choose all that apply. Each correct answer represents part of the solution.)
- Upgrade the server to Windows Server 2003 Service Pack 2 (SP2).
- Install Windows 2000 Service Pack 4 (SP4) on the server.
- Upgrade the SQL server to SQL Server 2008.
- Configure the SQL server to use Windows authentication.
- Configure the SQL server to use SQL Server authentication.

Answer:
Upgrade the server to Windows Server 2003 Service Pack 2 (SP2).
Upgrade the SQL server to SQL Server 2008.
Configure the SQL server to use Windows authentication.

Explanation:
You should upgrade the server to Windows Server 2003 Service Pack 2 (SP2), upgrade the SQL server to SQL
Server 2008, and configure the SQL server to use Windows authentication. SQL Server 2008 supports two
authentication modes: Windows authentication and Mixed Mode. Windows authentication enables users to connect
to an instance of SQL Server by using Windows credentials. Mixed Mode enables users to connect by using either
Windows authentication or SQL Server authentication. In this scenario, users from the domain will be connecting
to the SQL server, so you should use Windows authentication instead of SQL Server authentication. Windows
authentication provides better security because it uses the Kerberos security protocol and supports features, such
as account lockout and password expiration. When you install SQL Server 2008 on a computer running Windows
Server 2003 or higher, Windows authentication also supports password policy enforcement.

You should not install Windows 2000 Service Pack 4 (SP4) on the server. You can enforce password policies by
using Windows authentication when SQL Server 2008 is installed on a Windows Server 2003 or higher operating
system.

You should not configure the SQL server to use SQL Server authentication because SQL Server authentication
does not support Kerberos authentication. SQL Server authentication does support password policies.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage logins and server roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administering Servers by Using Policy-Based Management > Monitoring and Enforcing Best Practices by Using
Policy-Based Management > Authentication Mode

Item: 63 (Ref:Cert-70-432.3.1.5)


You are a SQL administrator for your company. You manage all of your company's SQL Server 2008 computers.
An administrator for another department needs access to the SQL Server 2008 computers that you manage. You
want to allow the other administrator to add and remove linked servers without granting the administrator
unnecessary permissions.

To which fixed server role should you add the other administrator?
- sysadmin
- serveradmin
- setupadmin
- diskadmin

Answer:
setupadmin

Explanation:
You should add the other administrator to the setupadmin fixed server role. This role can add and remove linked
servers on a SQL Server 2008 computer.

You should not add the other administrator to the sysadmin fixed server role. This role would grant the other
administrator all rights on the server.

You should not add the other administrator to the serveradmin fixed server role. This role does not grant the
member the permission to add and remove linked servers. Members of this role can change server-wide
configuration settings and reboot the server.

You should not add the other administrator to the diskadmin fixed server role. This role does not grant the
member the permission to add and remove linked servers. Members of this role can manage disk files.
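On SQL Server 2008, the membership change itself can be scripted; a sketch, where the login name is an assumption:

```sql
-- Add the other department's administrator to setupadmin only,
-- granting just the ability to add and remove linked servers
EXEC sp_addsrvrolemember @loginame = 'DOMAIN\OtherAdmin',  -- hypothetical login
                         @rolename = 'setupadmin';
```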

Objective:
Managing SQL Server Security

Sub-Objective:
Manage logins and server roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Server-Level Roles

Item: 64 (Ref:Cert-70-432.3.6.3)

You have been hired as the database administrator for your company. The previous database administrator has
configured automatic auditing on an instance of SQL Server 2008 by using SQL Server Audit.

The ON_FAILURE=SHUTDOWN parameter for the SQL Server Audit is configured on the server to ensure that
the server shuts down if an audit failure occurs. You want to bypass the shutdowns caused by audit failures.

What should you do?


- Start the server including the -c parameter.
- Start the server including the -m parameter.
- Start the server including the -s parameter.
- Start the server including the -n parameter.

Answer:
Start the server including the -m parameter.

Explanation:
You should start the server including the -m parameter. Auditing allows you to track and log various types of
events on an instance of SQL Server. You can use various types of auditing methods in SQL Server 2008, such
as the SQL Server Audit feature, C2 auditing mode, SQL Trace, and data definition language (DDL) triggers.
Audit events can be stored in a file, in the Windows Application event log, or in the Windows Security event log.
To ensure that audit events are written to the Windows Security log, you must add the SQL Server service
account to the Generate security audits policy and configure the Audit object access security policy for both
Success and Failure. When you configure the ON_FAILURE=SHUTDOWN parameter for SQL Server Audit, the
server shuts down when an audit failure occurs. To bypass the shutdowns caused by audit failures, an
administrator can specify the -m parameter when starting the server.
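The shutdown behavior referenced in the scenario is declared when the server audit is created; a sketch, where the audit name and file path are assumptions:

```sql
USE master;
GO
-- Create a server audit that shuts the instance down on audit failure
CREATE SERVER AUDIT CorpAudit
TO FILE (FILEPATH = 'C:\SQLAudit\')
WITH (ON_FAILURE = SHUTDOWN);
GO
-- Audits are created in a disabled state and must be enabled explicitly
ALTER SERVER AUDIT CorpAudit WITH (STATE = ON);
GO
```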

You should not start the server including the -c parameter because this will not allow you to bypass shutdowns
caused by audit failures. The -c parameter shortens the startup time to start SQL Server from a command prompt.

You should not start the server including the -s parameter because this will not allow you to bypass shutdowns
caused by audit failures. The -s parameter starts a named instance of SQL Server.

You should not start the server including the -n parameter because this will not allow you to bypass shutdowns
caused by audit failures. The -n parameter prevents the use of the Windows application log to store SQL Server
events.

Objective:
Managing SQL Server Security

Sub-Objective:
Audit SQL Server instances.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Secure
Operation > SQL Server Encryption > Auditing (Database Engine) > Understanding SQL Server Audit

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
the Database Engine Services > Starting and Restarting Services > Using the SQL Server Service Startup
Options

Item: 73 (Ref:Cert-70-432.3.6.2)

You are the database administrator of your company. The network contains a default instance of SQL Server
2008 named SQL1. You configure the C2 auditing mode on SQL1 to audit all successful and failed attempts to
access databases stored on the server. The auditing is configured to start automatically.

After several days, you discover that SQL1 has shut down due to lack of space in the data directory in which the
audit log files are being saved. You want to restart SQL1.

What should you do?


- Restart SQL1 including the -f parameter.
- Restart SQL1 including the -m parameter.
- Restart SQL1 including the -s parameter.
- Restart SQL1 including the -x parameter.

Answer:
Restart SQL1 including the -f parameter.

Explanation:
You should restart SQL1 including the -f parameter. The C2 auditing mode allows you to configure an instance of
SQL Server to audit both successful and failed attempts to access statements and objects. You can configure the
C2 auditing mode by using SQL Server Management Studio or using the sp_configure system stored procedure.
The C2 audit trace data is stored in a file in the default data directory of the SQL Server instance. The maximum
file size for this file is 200 MB. When this file reaches its size limit, SQL Server creates a new file to write the
auditing data and closes the old file. The file in which the auditing data is stored can grow very quickly because
C2 auditing mode saves information about several events in the log file. When the data directory that contains the
log files runs out of space, SQL Server shuts down. In this situation, if auditing is configured to start automatically,
you must either restart the instance including the -f parameter or free up hard disk space for the audit log. To start
a default instance with the -f parameter from a command prompt, you should run the sqlservr.exe -f command.
The -f parameter starts an instance of SQL Server with minimal configuration, which bypasses the C2 auditing.
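For reference, C2 audit mode is itself enabled through sp_configure; a minimal sketch:

```sql
-- c2 audit mode is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;
GO
```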

You should not restart SQL1 including the -m parameter. The -m parameter starts an instance of SQL Server in
single-user mode. Using the -m parameter does not bypass C2 auditing, which is preventing SQL1 from starting
in this scenario.

You should not restart SQL1 including the -s parameter. The -s parameter starts a named instance of SQL
Server. In this scenario, SQL1 is a default instance of SQL Server. Also, the -s parameter does not bypass the C2
auditing mode.

You should not restart SQL1 including the -x parameter. The -x parameter disables monitoring features, such as
performance monitor counters. The -x parameter does not allow you to restart an instance of SQL Server that has
shut down due to lack of space in the data directory in which the audit log files are being saved.

Objective:
Managing SQL Server Security

Sub-Objective:
Audit SQL Server instances.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
Servers > Setting Server Configuration Options > c2 audit mode Option


TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
the Database Engine Services > Starting and Restarting Services > Using the SQL Server Service Startup
Options

Item: 74 (Ref:Cert-70-432.3.8.1)

You are the SQL administrator of your company. The network contains a default instance of SQL Server 2008
that was upgraded from SQL Server 2005.

You discover that several unwanted settings and features are configured on the SQL server. You want to reduce
the SQL server surface area to prevent the server from attacks by malicious users. To achieve this, you want to
configure the connection, protocols, and startup options for the SQL server.

What should you use to perform this task?


- SQL Server Management Studio
- SQL Server Configuration Manager
- a SET statement
- the sys.configurations view

Answer:
SQL Server Configuration Manager

Explanation:
You should use SQL Server Configuration Manager. When you perform a new installation of SQL Server, several
features are not enabled by default. This reduces the surface area to prevent malicious users from attacking the
server. However, when you upgrade the SQL server from a previous version, all services, settings, and features
that were enabled before the upgrade remain enabled. This exposes additional surface area to malicious users.

You can reduce the surface area by disabling or turning off unnecessary services and settings on the SQL server.
SQL Server 2008 provides various tools to achieve this. You can use the SQL Server Configuration Manager to
configure protocols, services, connection, and startup options. You will need to use an account that is a member
of the sysadmin fixed server role to be able to perform tasks such as stopping or configuring the SQL Server or
SQL Server Agent services. You can use SQL Server Management Studio to configure Database Engine
features. You can use the Invoke-PolicyEvaluation PowerShell cmdlet to invoke Surface Area Configuration
policies.

You should not use SQL Server Management Studio because it does not allow you to configure the connection,
protocols, and startup options for the SQL server. You can use SQL Server Management Studio to configure
Database Engine features.

You should not use a SET statement because it does not allow you to configure the connection, protocols, and
startup options for the SQL server. SET statements are used to change the current session handling of specific
information.

You should not use the sys.configurations view because it does not allow you to configure the connection,
protocols, and startup options for the SQL server. The sys.configurations view is a catalog view that contains
information about server-wide configuration options.
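
To inspect the current option values without changing them, the sys.configurations view can simply be queried:

```sql
-- Read-only catalog view: lists server-wide configuration options and their values.
SELECT name, value, value_in_use, description
FROM sys.configurations
ORDER BY name;
```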


Objective:
Managing SQL Server Security

Sub-Objective:
Configure surface area.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Secure
Deployment > Understanding Surface Area Configuration

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > SQL Server Setup User
Interface Reference > Minimize SQL Server 2008 Surface Area

Item: 77 (Ref:Cert-70-432.3.5.1)

You are the database administrator of your company. The network contains an instance of SQL Server 2008. The
network also contains a Web server that hosts a Web application to provide product information to customers.
The product information is stored in a database named Products on the SQL server.

You want to create a stored procedure for use by the Web application that will return information about products
from the Products database that matches the search criteria specified by the customer.

Which two permissions will you require to perform this task? (Choose two. Each correct answer represents a part
of the solution.)
 the CREATE PROCEDURE permission in the Products database
 the CREATE PROCEDURE permission in the master database
 the ALTER permission on the schema in which the stored procedure is being created
 the UPDATE permission on the schema in which the stored procedure is being created

Answer:
the CREATE PROCEDURE permission in the Products database
the ALTER permission on the schema in which the stored procedure is being created

Explanation:
You will require the CREATE PROCEDURE permission in the Products database and the ALTER permission on
the schema in which the stored procedure is being created. A stored procedure is a collection of Transact-SQL
statements that are saved in the database. A stored procedure can accept parameters supplied by a user and
return output parameters to the user. To create stored procedures in a database, you must have the CREATE
PROCEDURE permission in the database and ALTER permission on the schema in which the stored procedure
is being created.
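
As an illustrative sketch, the two required permissions could be granted to a hypothetical developer login mapped to a database user named WebAppDev (the dbo schema is assumed here):

```sql
USE Products;
GO
-- Permission to create stored procedures in the Products database.
GRANT CREATE PROCEDURE TO WebAppDev;
GO
-- Permission on the schema that will contain the procedure.
GRANT ALTER ON SCHEMA::dbo TO WebAppDev;
GO
```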

The three types of stored procedures are user-defined stored procedures, extended stored procedures, and
system stored procedures. Transact-SQL and Common Language Runtime (CLR) stored procedures are referred
to as user-defined stored procedures. Extended stored procedures allow you to create your own external routines
by using an alternate programming language. Extended stored procedures are dynamic link libraries (DLLs) that
can be dynamically loaded and run by an instance of SQL Server. System stored procedures are built into the
instance of SQL Server and allow you to perform several administrative tasks.


Note: PowerShell providers allow you to access the hierarchy of SQL Server objects by using a drive and path
structure similar to the Windows file system.

You will not require the CREATE PROCEDURE permission in the master database. A stored procedure can be
created for permanent or temporary use. A stored procedure can only be created in the current database except
for the temporary procedures because temporary stored procedures are always created in the tempdb database.
In this scenario, the product information is stored in the Products database. Therefore, you will require the
CREATE PROCEDURE permission in the Products database.

You will not require the UPDATE permission on the schema in which the stored procedure is being created. To
create a stored procedure, you must have the ALTER permission on the schema in which the stored procedure is
being created. The UPDATE permission allows users to update existing data in a table or a view.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage schema permissions and object permissions.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data > Stored Procedures > Implementing Stored Procedures > Creating Stored Procedures (Database Engine)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE PROCEDURE (Transact-SQL)

Item: 94 (Ref:Cert-70-432.3.1.1)

You are your company's SQL administrator. A SQL Server 2008 computer named SQL_main contains a
database named Finance. The Finance database is used by the accounting department to manage your
company's accounts payable and accounts receivable. All users use SQL logins to access this database.
Password expiration is not enabled by default.

A user named Amy in the accounting department took a medical leave of absence and has just returned. During
that time, her SQL login, Amy16, was disabled. She has forgotten her password. She contacts you requesting
that you change her password. However, you must ensure that your password change is only temporary and that
she resets the password at login.

Which Transact-SQL statement should you use first?


 ALTER LOGIN Amy16 ENABLE WITH PASSWORD='@Bc12345' MUST_CHANGE;
 ALTER LOGIN Amy16 ENABLE WITH PASSWORD='@Bc12345' MUST_CHANGE CHECK_POLICY=ON CHECK_EXPIRATION=ON;
 ALTER LOGIN Amy16 ENABLE;
 ALTER LOGIN Amy16 WITH PASSWORD='@Bc12345' MUST_CHANGE;
 ALTER LOGIN Amy16 WITH CHECK_POLICY=ON, CHECK_EXPIRATION=ON;

Answer:
ALTER LOGIN Amy16 ENABLE;


Explanation:
You should use the following Transact-SQL statement first:
ALTER LOGIN Amy16 ENABLE;

You must first enable the user account.

You should not use the following Transact-SQL statement:


ALTER LOGIN Amy16 ENABLE WITH PASSWORD='@Bc12345' MUST_CHANGE;

You cannot reset the password until after the user account is enabled. In addition, you cannot issue the
MUST_CHANGE parameter until after the password policies are enabled.
You should not use the following Transact-SQL statement:
ALTER LOGIN Amy16 ENABLE WITH PASSWORD='@Bc12345' MUST_CHANGE CHECK_POLICY=ON
CHECK_EXPIRATION=ON;

You cannot reset the password until after the user account is enabled. In addition, you cannot issue the
MUST_CHANGE parameter until after the password policies are enabled. Finally, you cannot enable the
password policies until after the user account is enabled. This command will return a syntax error.

You should not use the following Transact-SQL statement:


ALTER LOGIN Amy16 WITH PASSWORD='@Bc12345' MUST_CHANGE;

This step should be completed after the account is enabled and after the password policies are
enabled. It will be the third step in the process to completing the scenario requirements.

You should not use the following Transact-SQL statement:


ALTER LOGIN Amy16 WITH CHECK_POLICY=ON, CHECK_EXPIRATION=ON;

This step should be completed after the account is enabled. It will be the second step in the process to
completing the scenario requirements.

The ALTER LOGIN statement supports the following arguments:

 ENABLE / DISABLE - Enables or disables the SQL login.
 PASSWORD='password' - Configures the value for the password.
 MUST_CHANGE - Forces the user to reset the password the next time the SQL login is used.
 CHECK_EXPIRATION={ON/OFF} - Specifies whether the password expiration policy is enforced. The OFF setting is the default.
 CHECK_POLICY={ON/OFF} - Specifies whether the Windows password policies are enforced. The ON setting is the default. To reset a bad password count, issue the ALTER LOGIN statement with the CHECK_POLICY = OFF argument, followed by another ALTER LOGIN statement with the CHECK_POLICY = ON argument. When this setting is OFF, the CHECK_EXPIRATION argument is also set to OFF.
 UNLOCK - Unlocks a locked account.
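
Putting the three steps described above together, the complete sequence for this scenario would be:

```sql
-- Step 1: enable the disabled login.
ALTER LOGIN Amy16 ENABLE;
-- Step 2: enforce the Windows password policies and password expiration.
ALTER LOGIN Amy16 WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
-- Step 3: assign a temporary password that must be changed at the next login.
ALTER LOGIN Amy16 WITH PASSWORD = '@Bc12345' MUST_CHANGE;
```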

Objective:
Managing SQL Server Security

Sub-Objective:
Manage logins and server roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online> Database Engine > Technical Reference > Transact-SQL
Reference > ALTER LOGIN (Transact-SQL)

Item: 100 (Ref:Cert-70-432.3.3.3)

You are the database administrator of your company. You create a logon trigger on a SQL Server 2008 computer.
The trigger will be used to deny login attempts to the server that are initiated by a login named Admin_login if
there are already two existing sessions for that login. You want to view the metadata for the logon trigger you
created.

Which SQL Server entity should you use?


 the sys.trigger_events catalog view
 the sp_helptrigger system stored procedure
 the sys.server_triggers catalog view
 the sp_monitor system stored procedure

Answer:
the sys.server_triggers catalog view

Explanation:
You should use the sys.server_triggers catalog view. Logon triggers are used to fire stored procedures when a
LOGON event occurs. A LOGON event occurs when a user establishes a session with a SQL Server instance.
Logon triggers fire after the authentication for the login process has finished, but before the user session is
established. Logon triggers do not fire if authentication fails. You can use logon triggers for auditing and
controlling server sessions, restricting logins to a SQL server during a specified time, or limiting the number of
sessions for a particular login. You can use the CREATE TRIGGER Transact-SQL statement to create logon
triggers. The metadata for logon triggers can be viewed by querying the sys.server_triggers catalog view. The
sys.server_triggers catalog view contains information about all server-level Data Definition Language (DDL)
triggers that have an object type of assembly trigger or SQL trigger.

You should not use the sys.trigger_events catalog view because it does not contain metadata for logon triggers.
In addition, you should not use the sp_helptrigger or sp_monitor system stored procedure because neither of
these stored procedures retrieve metadata for logon triggers. The sys.trigger_events catalog view contains
information about each event for which a trigger fires. The sp_helptrigger system stored procedure provides
information on the type or types of DML triggers that are defined on a specified table for the database in use. The
sp_monitor system stored procedure shows statistics for SQL Server.
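
For example, a simple query against the catalog view returns the metadata for all server-level triggers, including the logon trigger created in this scenario:

```sql
-- Metadata for all server-level DDL and logon triggers.
SELECT name, type_desc, create_date, is_disabled
FROM sys.server_triggers;
```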


Objective:
Managing SQL Server Security

Sub-Objective:
Manage SQL Server instance permissions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Logon
Triggers

Item: 102 (Ref:Cert-70-432.3.1.2)

You are the SQL administrator for your company. You have installed SQL Server 2008 on two Windows Server
2008 computers. You create a new database named Research for the research department on the SQL Server
2008 instance named SQL_production.

You have been asked to ensure that all SQL logins created for the research department employees meet the
following criteria:

 The logins should be able to issue commands within the Research database without specifically including
the database name in the command.
 The Windows Server 2008 password policies and password expiration should be enforced for the SQL
logins.
 The account should create a temporary password and force the user to reset the password.

You decide to create a template account to use to create all the research department accounts. You plan to
create a CREATE LOGIN script from the template account for future use.

You open the SQL Server Management Studio, and open the SQL_production instance. Then you right-click the
Security folder, point to New, and select Login. On the General page, you type Template in the Login name
box.

What else should you do? (Choose all that apply. Each correct answer represents part of the solution.)

 On the General page, select the SQL Server authentication option.
 On the General page, select the Windows authentication option.
 On the General page, enter and confirm the temporary password in the appropriate fields.
 On the General page, select the Enforce password policy, Enforce password expiration, and User must change password at next login check boxes.
 On the General page, change the Default database option to Research.
 On the General page, change the Default database option to msdb.
 On the Status page, select Enabled.
 On the Status page, select Disabled.

Answer:
On the General page, select the SQL Server authentication option.
On the General page, enter and confirm the temporary password in the appropriate fields.


On the General page, select the Enforce password policy, Enforce password expiration, and
User must change password at next login check boxes.
On the General page, change the Default database option to Research.
On the Status page, select Disabled.

Explanation:
On the General page, you should complete the following steps:

1. Select the SQL Server authentication option. This option configures the login to use SQL Server authentication.
2. Enter and confirm the temporary password in the appropriate fields. This will create a temporary password
for the template account.
3. Select the Enforce password policy, Enforce password expiration, and User must change password
at next login check boxes. This will ensure that the Windows Server 2008 password policies and
password expiration are enforced for the SQL logins.

4. Change the Default database option to Research. This will ensure that the research department employees
are able to issue commands within the Research database without specifically including the database name in
the command. These settings are configured on the General page of the Login - New dialog box.


5. Finally, you should select Disabled on the Status page because the account you are creating is a template
account. You should not specifically enable template accounts.

You should not select the Windows authentication option on the General page. You need to use SQL
authentication for SQL logins. SQL authentication is mainly used for backward compatibility. Using Windows
Authentication is recommended because it provides better security.

You should not change the Default Database option to msdb. This scenario specifically stated that you want the
logins to be able to issue commands within the Research database without specifically including the database
name in the command. Therefore, the default database should be the Research database, not msdb.

You should not select Enabled on the Status page. Because you are creating a template account, you should
disable the account. You do not want the template account to become a possible security issue later. In addition,
if you enabled the account, a user could log in with the account, change its password at login, and have access to
the database.


All of these settings can be configured or changed using the CREATE LOGIN or ALTER LOGIN statement.

After creating the template account, you could create a CREATE LOGIN script from the template account. The
CREATE LOGIN script created from the template account with the settings for this scenario would be as follows:

CREATE LOGIN [Template] WITH PASSWORD=N'@Bc12345', DEFAULT_DATABASE=[Research],
DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=ON, CHECK_POLICY=ON
GO
ALTER LOGIN [Template] DISABLE
GO

You should change the login name and the status to use this script to create the account.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage logins and server roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Principals > Managing Logins and Users How-to Topics > How To: Create a SQL Server Login

Item: 110 (Ref:Cert-70-432.3.6.4)

You are the database administrator of your company. You configure automatic auditing by using SQL Server
Audit on a server that runs an instance of SQL Server 2008. You want to ensure that the failed login attempts to
the instance are logged in the Windows Security event log.

What should you do to achieve this?


 Add the SQL Server Agent service account to the Generate security audits policy.
 Add the SQL Server Writer service to the Generate security audits policy.
 Add the SQL Server Integration service to the Generate security audits policy.
 Add the SQL Server service account to the Generate security audits policy.

Answer:
Add the SQL Server service account to the Generate security audits policy.

Explanation:
You should add the SQL Server service account to the Generate security audits policy. Audit events can be
stored in a file, in the Windows Application event log, or in the Windows Security event log. The Windows
Application event log contains events logged by applications or programs running on the computer. Events that

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.
Page 138 of 226

are logged to the Windows Application event log are determined by users who developed the application or
program. The Windows Security event log records only security-related events. You require administrative
privileges to be able to use and specify events that should be logged in the Windows Security event log. To
ensure that failed login attempts are recorded in the Windows Security event log, the SQL Server service account
must be added to the Generate security audits policy. Additionally, you must also configure the Audit object
access security policy to audit successful login attempts, failed login attempts, or both successful and failed login
attempts. You can configure these policies by using the secpol.msc tool.
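
For context, the server-side audit objects themselves are created with Transact-SQL. A sketch targeting the Security log might look like the following; the audit and specification names are illustrative, and enabling the audit can fail if the SQL Server service account lacks the Generate security audits right:

```sql
USE master;
GO
-- Audit destination: the Windows Security event log.
CREATE SERVER AUDIT FailedLoginAudit
TO SECURITY_LOG;
GO
-- Record failed login attempts at the server level.
CREATE SERVER AUDIT SPECIFICATION FailedLoginAuditSpec
FOR SERVER AUDIT FailedLoginAudit
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);
GO
-- Enable the audit.
ALTER SERVER AUDIT FailedLoginAudit WITH (STATE = ON);
GO
```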

You should not add the SQL Server Agent service account, SQL Server Writer service, or the SQL Server
Integration service to the Generate security audits policy. Adding these services to the Generate security
audits policy will not enable the SQL server to write audit events to the Windows Security event log. To ensure
that failed login attempts are recorded in the Windows Security log, the SQL Server service account must be
added to the Generate security audits policy.

Objective:
Managing SQL Server Security

Sub-Objective:
Audit SQL Server instances.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Secure
Operation > SQL Server Encryption > Auditing (Database Engine) > SQL Server Audit How-to Topics > How to:
Write Server Audit Events to the Security Log

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Secure
Operation > SQL Server Encryption > Auditing (Database Engine) > Understanding SQL Server Audit

Item: 114 (Ref:Cert-70-432.3.1.4)

You are the SQL administrator for your company. You have two SQL Server 2008 computers named SQL_corp
and SQL_public. Multiple databases reside on these two servers.

A new user named John has been hired in your department. John must be able to manage permissions for the
SQL_corp server. He should also be able to manage permissions on the Sales database on the SQL_public
computer. You should not grant John any permissions he does not need.

To which role should you add John's account?


 the sysadmin role on both servers
 the securityadmin role on both servers
 the securityadmin role on SQL_corp and the db_securityadmin role on the Sales database on SQL_public
 the db_securityadmin role on all databases on both servers

Answer:
the securityadmin role on SQL_corp and the db_securityadmin role on the Sales database on
SQL_public


Explanation:
You should add John's account to the securityadmin fixed server role on SQL_corp and the db_securityadmin
fixed database role on the Sales database on SQL_public. Adding John's account to the securityadmin fixed
server role on SQL_corp will ensure that John is able to manage permissions for the SQL_corp server. Adding
John's account to the db_securityadmin fixed database role on the Sales database on SQL_public will ensure
that John can manage permissions on the Sales database.

You should not add John's account to the sysadmin fixed server role on both servers. This role membership
would allow John to configure all server settings, including all permissions on all databases. This is more
permission than John requires to complete his duties.

You should not add John's account to the securityadmin fixed server role on both servers. On the SQL_public
server, John should only be able to manage permissions for a single database, Sales. Adding his account to the
securityadmin fixed server role on the SQL_public server will allow him to manage permissions for all the
databases on the server.

You should not add John's account to the db_securityadmin fixed database role on all databases on both
servers. John only needs to manage the permissions for the Sales database on SQL_public.
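
In SQL Server 2008, one way to grant exactly these two memberships (assuming John's login and database user are both named John) is:

```sql
-- On SQL_corp: server-level security administration.
EXEC sp_addsrvrolemember @loginame = 'John', @rolename = 'securityadmin';
GO
-- On SQL_public, inside the Sales database only.
USE Sales;
GO
EXEC sp_addrolemember @rolename = 'db_securityadmin', @membername = 'John';
GO
```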

Objective:
Managing SQL Server Security

Sub-Objective:
Manage logins and server roles.

References:
TechNet > TechNet Library > Server Products and Technologies >SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Database-Level Roles

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Server-Level Roles

Item: 128 (Ref:Cert-70-432.3.1.3)

You are a database administrator in your company. You maintain the Corpdb database on the Sql1 SQL Server
2008 computer. The database contains data that is specific to the corporate department of the company and is
accessed only by corporate department members. Due to an organizational restructuring, the corporate
department has been relocated. You are required to transfer the Corpdb database to a SQL Server 2008
instance named Sql2 installed at the new location.

You successfully transfer the database to Sql2 by using SSIS packages. The managers of the corporate
department complain they cannot log in to Sql2. Although you can log in, you receive a Select permission
denied error after executing Transact-SQL queries on the database tables. You want to ensure that managers in
the corporate department are able to successfully access the Corpdb database and configure security for the
users in the corporate department.

What should you do?


 Grant the db_owner fixed database role to the corporate department managers on the Corpdb database.
 Grant the db_accessadmin fixed database role to the corporate department managers on the Corpdb database.
 Grant the db_securityadmin fixed database role to the corporate department managers on the Corpdb database.
 Grant the db_backupoperator fixed database role to the corporate department managers on the Corpdb database.

Answer:
Grant the db_owner fixed database role to the corporate department managers on the Corpdb
database.

Explanation:
You should grant the db_owner fixed database role to the managers in the corporate department on the Corpdb
database. To enable the managers in the corporate department to access the Corpdb database, you should
grant the required read/write permissions to the managers. These permissions can be granted by assigning them
the db_owner fixed database role. The db_owner fixed database role grants the managers the permission to
read, write, or modify any object in the database. If the users are assigned this role, they will be able to access all
the objects in the Corpdb database.

You should not grant the db_accessadmin fixed database role to the managers in the corporate department on
the Corpdb database. The db_accessadmin fixed database role will not enable you to access the database
tables. The db_accessadmin fixed database role enables users to enable or disable logins and groups. These
logins and groups can be Windows authenticated or SQL Server authenticated.

You should not grant the db_securityadmin fixed database role to the managers in the corporate department on
the Corpdb database. The db_securityadmin fixed database role will not enable users to access the database
tables. The db_securityadmin fixed database role enables users to manage permissions and modify role
memberships.

You should not grant the db_backupoperator fixed database role to the managers in the corporate department
on the Corpdb database. The db_backupoperator fixed database role will not enable users to access the
database tables. The db_backupoperator fixed database role enables users to back up the database.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage logins and server roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Database-Level Roles

Item: 138 (Ref:Cert-70-432.3.2.2)

You are the database administrator for a banking firm and maintain all the SQL Server 2008 databases of the
firm. You create the Accounts table in the Sales schema of the database to store all information regarding the
firm's customers. To update the account details of a customer's account, you instruct another database
administrator, Michelle, to create a stored procedure named Upd_account_details in the Sales schema in the
database. Michelle runs the following script to create the Upd_account_details procedure:

CREATE PROC Upd_account_details
@AccountName nvarchar(40),
@Address1 nvarchar(40),
@Address2 nvarchar(40),
@City nvarchar(25),
@State nvarchar(2),
@Phone nvarchar(20)
AS
INSERT INTO Accounts
VALUES
(@AccountName, @Address1, @Address2, @City, @State, @Phone)
GO
ALTER AUTHORIZATION ON OBJECT::Upd_account_details TO Michelle;
GO

You want to assign permissions on this procedure to the members of the Role_Admin role.

You issue the following statement:

GRANT ALL ON OBJECT::Sales.Upd_account_details to Role_Admin

When John, a member of the Role_Admin role, attempts to update a record with the Upd_account_details
stored procedure, he receives an error and is unable to update the record.

What should you do to allow the members of the Role_Admin role to update the Accounts table with the least
administrative effort?
 Issue the following statement:
GRANT Update ON OBJECT::Sales.Upd_account_details to Role_Admin
 Issue the following statement:
GRANT Update, Delete ON OBJECT::Sales.Upd_account_details to Role_Admin
 Add the Role_Admin role to the db_datawriter role.
 Run the ALTER AUTHORIZATION statement to change the ownership of the Upd_account_details stored procedure.

Answer:
Run the ALTER AUTHORIZATION statement to change the ownership of the
Upd_account_details stored procedure.

Explanation:
You should run the ALTER AUTHORIZATION statement to change the ownership of the Upd_account_details
stored procedure. John is unable to update information in the Accounts table because the owner of the table and
the owner of the stored procedure are different. You created the table, but the stored procedure was created by
Michelle. Michelle ran a script to create the stored procedure that ran the ALTER AUTHORIZATION statement on
the stored procedure. The ALTER AUTHORIZATION statement changes the ownership of the stored procedure
to Michelle. In this scenario, the owner of stored procedure is not the same as the owner of the table, and the
ownership chain will break. A broken ownership chain will cause the permissions to be checked for each object. If
the ownership chain is not broken, permissions are checked only at the procedure level. John does not have
permissions on the table and therefore will not be able to update the table using the Upd_account_details stored
procedure. You should run the ALTER AUTHORIZATION statement to remove Michelle as owner of the stored
procedure. To change the owner of the stored procedure, you could also drop the object, and re-create it. By
doing so, the owner of the stored procedure will default to the owner of the schema.
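
For example, assuming the Sales schema is owned by the same principal that owns the Accounts table, handing the procedure back to the schema owner restores the chain:

```sql
-- Reset the procedure's owner to the owner of the Sales schema.
ALTER AUTHORIZATION ON OBJECT::Sales.Upd_account_details TO SCHEMA OWNER;
```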

You should not issue the GRANT Update ON OBJECT::Sales.Upd_account_details to Role_Admin


statement because you cannot explicitly grant the update permission on a stored procedure. To execute a stored
procedure, you require the execute permission on the stored procedure.

You should not issue the GRANT Update, Delete ON OBJECT::Sales.Upd_account_details to Role_Admin statement because you cannot explicitly grant the update and delete permissions on a stored procedure. To execute a stored procedure, you require the execute permission on the stored procedure.

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.
Page 142 of 226

You should not add the Role_Admin role to the db_datawriter fixed database role because this will give the Role_Admin role the update, insert, and delete permissions on all the tables in the database. The role only needs permission on the Accounts table.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage users and database roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > ALTER AUTHORIZATION (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Securables > Ownership Chains

Item: 144 (Ref:Cert-70-432.3.6.5)

You are responsible for managing an instance of SQL Server 2008. You create a stored procedure named
MaintenanceTasks_sp to perform maintenance tasks in the tempdb database. You want to ensure that the
MaintenanceTasks_sp stored procedure executes automatically when the SQL Server starts.

Which system stored procedure should you use to configure the MaintenanceTasks_sp stored procedure?
- sp_configure
- sp_serveroption
- sp_procoption
- sp_addextendedproc

Answer:
sp_procoption

Explanation:
You should use the sp_procoption system stored procedure. When you want to configure a stored procedure to
perform some operations on a regular basis, you can configure the stored procedure for automatic execution. A
stored procedure that is configured for automatic execution runs automatically every time the SQL server starts.
The stored procedures that are configured for automatic execution run with the same permissions as members of
the sysadmin fixed server role. To configure a stored procedure for automatic execution, the sp_procoption
system stored procedure is used.
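A minimal sketch of the call, assuming the procedure has been created in the master database (startup procedures must reside there):

```sql
USE master;
GO
-- Mark the procedure for automatic execution each time SQL Server starts.
EXEC sp_procoption @ProcName = N'MaintenanceTasks_sp',
                   @OptionName = 'startup',
                   @OptionValue = 'on';
```

Passing 'off' as @OptionValue removes the procedure from the startup list.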

When you create user-defined stored procedures, you should avoid naming your procedures with sp_ as a prefix
because the sp_ prefix is used by SQL Server for system stored procedures. You can create a Data Definition
Language (DDL) trigger to ensure that all user-defined stored procedures are created with a particular prefix. You
can also configure a Policy-Based Management policy that ensures that all user-defined stored procedures start
with a pre-determined prefix.


You should not use the sp_configure system stored procedure because this stored procedure does not allow you
to configure stored procedures for automatic execution. The sp_configure system stored procedure allows you to
display or change global configuration settings for the current server.

You should not use the sp_serveroption system stored procedure because this stored procedure does not allow
you to configure stored procedures for automatic execution. The sp_serveroption system stored procedure
allows you to configure server options for remote servers and linked servers.

You should not use the sp_addextendedproc system stored procedure because this stored procedure does not
allow you to configure stored procedures for automatic execution. The sp_addextendedproc system stored
procedure is used to register the name of a new extended stored procedure with SQL Server.

Objective:
Managing SQL Server Security

Sub-Objective:
Audit SQL Server instances.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data > Stored Procedures > Implementing Stored Procedures > Executing Stored Procedures (Database Engine)
> Automatic Execution of Stored Procedures

Item: 149 (Ref:Cert-70-432.3.2.3)

You are the SQL administrator for your company. You manage all of the SQL Server 2008 computers for your
company. All of your databases use Windows Authentication.

Because of recent security issues with human resources information, you have been asked to grant a user in
human resources the right to remove access for Windows logins to the HR database. You do not want to grant
the user more permissions than required.

What should you do?


- Add the user's account to the db_accessadmin fixed database role for the HR database.
- Create a new database role that allows its members to remove access for Windows logins to the HR database. Add the user to the new database role.
- Add the user's account to the db_securityadmin fixed database role for the HR database.
- Add the user's account to the db_denydatareader and db_denydatawriter fixed database roles for the HR database.

Answer:
Create a new database role that allows its members to remove access for Windows logins to the
HR database. Add the user to the new database role.

Explanation:
You should create a new database role that allows its members to remove access for Windows logins to the HR
database and add the user to the new database role. This will ensure that the user can remove access for
Windows logins to the HR database.
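A sketch of that approach; the role name and member name are hypothetical, and ALTER ANY USER is assumed here to be the narrowest permission that covers removing database users:

```sql
USE HR;
GO
CREATE ROLE LoginRemovers;                        -- hypothetical role name
GRANT ALTER ANY USER TO LoginRemovers;            -- lets members drop database users
EXEC sp_addrolemember 'LoginRemovers', 'HRUser';  -- HRUser is a hypothetical member
```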


You should not add the user's account to the db_accessadmin fixed database role for the HR database.
Members of this role can add and remove database access for Windows logins, Windows groups, and SQL
logins. You only wanted the user to be able to remove Windows logins.

You should not add the user's account to the db_securityadmin fixed database role for the HR database.
Members of this role can add and remove role members and manage database permissions. However, members
of this role cannot remove access for Windows logins.

You should not add the user's account to the db_denydatareader and db_denydatawriter fixed database roles
for the HR database. The db_denydatareader fixed database role prevents users from reading data in the
database. The db_denydatawriter fixed database role prevents users from writing data to the database.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage users and database roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Database-Level Roles

Item: 151 (Ref:Cert-70-432.3.6.1)

You are the database administrator for your company and manage all the SQL Server 2008 databases of the
company. You are responsible for performing the daily backup activities on all the databases on the server.
Database users regularly create new objects in one of the databases. You want to be notified when new objects
are created in the database.

You want to create a trigger that will fire whenever a new user-defined function is created in the database. The
trigger should insert statement-specific data, such as the name of the database user who issued the CREATE
FUNCTION statement, the time the statement was issued, and the Transact-SQL statement that was issued, into
a database table named Log_structures. The Log_structures table was created by using the following
statement:

CREATE TABLE Log_structures (


user1 nvarchar(100),
createtime datetime,
SQL_stat nvarchar(2000));

Which statement should you use to create the trigger?


- CREATE TRIGGER Audit_functions
  ON DATABASE
  AFTER CREATE_FUNCTION
  AS
  DECLARE @mydata XML
  SET @mydata = EVENTDATA()
  INSERT INTO Log_structures(USER1, CREATETIME, SQL_stat)
  VALUES(
  CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
  @mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));

- CREATE TRIGGER Audit_functions
  AFTER CREATE_FUNCTION
  ON DATABASE
  AS
  DECLARE @mydata XML
  SET @mydata = EVENTDATA()
  INSERT Log_structures(USER1, CREATETIME, SQL_stat)
  VALUES(
  CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
  @mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));

- CREATE TRIGGER Audit_functions
  ON DATABASE
  AFTER CREATE_FUNCTION
  AS
  SET @mydata = EVENTDATA()
  INSERT INTO Log_structures(USER1, CREATETIME, SQL_stat)
  VALUES(
  CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
  @mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));

- CREATE TRIGGER Audit_functions
  ON DATABASE
  AFTER CREATE_FUNCTION
  AS
  DECLARE @mydata XML
  SET @mydata = EVENTDATA()
  INSERT Log_structures(USER1, CREATETIME, SQL_stat)
  VALUES(
  CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
  @mydata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(2000)'));

Answer:
CREATE TRIGGER Audit_functions
ON DATABASE
AFTER CREATE_FUNCTION
AS
DECLARE @mydata XML
SET @mydata = EVENTDATA()
INSERT INTO Log_structures(USER1, CREATETIME, SQL_stat)
VALUES(
CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
@mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));

Explanation:
You should use the following statement to create the trigger:

CREATE TRIGGER Audit_functions


ON DATABASE
AFTER CREATE_FUNCTION
AS
DECLARE @mydata XML
SET @mydata = EVENTDATA()
INSERT INTO Log_structures(USER1, CREATETIME, SQL_stat)
VALUES(
CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
@mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));


This statement uses the correct syntax to create a trigger that will fire whenever a new function is created in the
database. The trigger will insert the name of the current user, the time, and the Transact-SQL statement that
caused the trigger to fire into the Log_structures table. The CREATE_FUNCTION event is fired when a function
is created in the database. The ON DATABASE clause in the statement specifies that the scope of the trigger is
the current database. Therefore, this trigger will be fired whenever a function is created in the current database.
The trigger body also uses the EVENTDATA function. This function returns information regarding the database event that caused the trigger to fire, such as the name of the instance on which the event was fired, the Transact-SQL statement that caused the event to fire, and the type of event that was fired. This function can be called from within a trigger to return the specified information. The following line in the CREATE TRIGGER statement specifies
that the Transact-SQL statement that caused the trigger to be fired should be returned:

@mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'))

The value returned is stored in the Log_structures table. The Log_structures table also stores the name of the
database user who created the function and the time at which the function was created. The CURRENT_USER
function returns the name of the user who executed the statement, and the GETDATE() function returns the date
and time the statement was executed.

You should not use the following statement:

CREATE TRIGGER Audit_functions


AFTER CREATE_FUNCTION
ON DATABASE
AS
DECLARE @mydata XML
SET @mydata = EVENTDATA()
INSERT INTO Log_structures(USER1, CREATETIME, SQL_stat)
VALUES(
CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
@mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));

This statement is syntactically incorrect. In a CREATE TRIGGER statement, the ON DATABASE clause appears
before the AFTER event_name clause. To correct this statement, you should place the ON DATABASE clause
before the AFTER CREATE_FUNCTION clause.

You should not use the following statement:

CREATE TRIGGER Audit_functions


ON DATABASE
AFTER CREATE_FUNCTION
AS
SET @mydata = EVENTDATA()
INSERT INTO Log_structures(USER1, CREATETIME, SQL_stat)
VALUES(
CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
@mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));

The @mydata variable is not declared. You cannot use a variable until you declare it by using the DECLARE statement.

You should not use the following statement:

CREATE TRIGGER Audit_functions


ON DATABASE
AFTER CREATE_FUNCTION
AS
DECLARE @mydata XML
SET @mydata = EVENTDATA()
INSERT Log_structures(USER1, CREATETIME, SQL_stat)
VALUES(
CONVERT(nvarchar(100), CURRENT_USER), GETDATE(),
@mydata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(2000)'));

This statement inserts the EventType property returned by the EVENTDATA function. In this scenario, you are required to insert the Transact-SQL statement into the Log_structures table. To do this, you should replace @mydata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(2000)') with @mydata.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)').

Objective:
Managing SQL Server Security

Sub-Objective:
Audit SQL Server instances.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE TRIGGER (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > EVENTDATA (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > DDL
Triggers > Designing DDL Triggers > DDL Events

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > DDL
Triggers > Designing DDL Triggers > Using the EVENTDATA Function

Item: 155 (Ref:Cert-70-432.3.5.2)

You are responsible for managing an instance of SQL Server 2008 named SQL1. SQL1 contains a schema
named HumanResource. You want to transfer the ownership of the schema to a user named HRHead.

Which Transact-SQL statement should you use?


- ALTER SCHEMA
- OBJECTPROPERTY
- OBJECTPROPERTYEX
- ALTER AUTHORIZATION

Answer:
ALTER AUTHORIZATION

Explanation:
You should use the ALTER AUTHORIZATION Transact-SQL statement. A schema is a container of objects that exists as a distinct namespace independently of the database user in SQL Server 2008. The ownership of a schema is transferable, and you can configure any user as the owner of the schema using the ALTER AUTHORIZATION Transact-SQL statement. This statement allows you to change the ownership of any entity that has an owner. The ALTER AUTHORIZATION statement requires the following syntax:

ALTER AUTHORIZATION
ON [ <entity_type> :: ] entity_name
TO { SCHEMA OWNER | principal_name };

In this scenario, you want to transfer the ownership of the HumanResource schema to the HRHead user. To
achieve this, you can use the following Transact-SQL statement:

ALTER AUTHORIZATION ON SCHEMA::HumanResource TO HRHead;

You should not use the ALTER SCHEMA Transact-SQL statement because this statement does not allow you to
change the ownership of a schema. The ALTER SCHEMA Transact-SQL statement allows you to transfer a
securable from one schema to another.
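For contrast, a sketch of what ALTER SCHEMA actually does; the target schema and table names here are hypothetical:

```sql
-- Moves the object into the Archive schema; ownership is unchanged.
-- Archive and HumanResource.EmployeeList are hypothetical names.
ALTER SCHEMA Archive TRANSFER HumanResource.EmployeeList;
```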

You should not use the OBJECTPROPERTY or OBJECTPROPERTYEX Transact-SQL statement because
neither of these statements allows you to change the ownership of a schema. These statements only provide
information about schema-scoped objects.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage schema permissions and object permissions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > ALTER AUTHORIZATION (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Principals > User-Schema Separation

Item: 158 (Ref:Cert-70-432.3.4.3)

You manage an instance of SQL Server 2008 named SQL1. SQL1 contains a stored procedure named Sales_SP, which is stored in a database named SalesDB in the Sales schema.

You discover that a temporary employee named Mary is able to run Sales_SP. You want to prevent Mary from running the Sales_SP stored procedure. You also want to prevent Mary from inheriting permission to run the stored procedure through any group or role membership.

Which Transact-SQL script should you use?


- USE SalesDB;
  REVOKE EXECUTE ON OBJECT::SalesDB.Sales_SP TO Mary;
  GO

- USE SalesDB;
  REVOKE EXECUTE ON OBJECT::Sales_SP TO Mary;
  GO

- USE SalesDB;
  DENY EXECUTE ON OBJECT::SalesDB.Sales_SP TO Mary;
  GO

- USE SalesDB;
  DENY EXECUTE ON OBJECT::Sales.Sales_SP TO Mary;
  GO

Answer:
USE SalesDB;
DENY EXECUTE ON OBJECT::Sales.Sales_SP TO Mary;
GO

Explanation:
You should run the following Transact-SQL script:

USE SalesDB;
DENY EXECUTE ON OBJECT::Sales.Sales_SP TO Mary;
GO

Stored procedures are members of the OBJECT class. The DENY Transact-SQL statement is used to deny
permissions on members of the OBJECT class. The DENY statement prevents a principal from inheriting
permissions through group or role membership. In this scenario, you want to prevent Mary from running the
stored procedure. To achieve this, you should deny the EXECUTE permission to Mary on the stored procedure.
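A sketch of why DENY is the right tool here: even if a role Mary belongs to is granted EXECUTE, the explicit DENY takes precedence. The SalesStaff role is hypothetical:

```sql
USE SalesDB;
GO
GRANT EXECUTE ON OBJECT::Sales.Sales_SP TO SalesStaff;  -- hypothetical role containing Mary
DENY EXECUTE ON OBJECT::Sales.Sales_SP TO Mary;         -- overrides the inherited grant
GO
```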

You should not run the following Transact-SQL script:

USE SalesDB;
REVOKE EXECUTE ON OBJECT::SalesDB.Sales_SP TO Mary;
GO

The REVOKE Transact-SQL statement is used to remove a previously assigned granted or denied permission.
The REVOKE Transact-SQL statement does not prevent a principal from inheriting permissions through group or
role membership.

You should not run the following Transact-SQL script:

USE SalesDB;
REVOKE EXECUTE ON OBJECT::Sales_SP TO Mary;
GO

The REVOKE Transact-SQL statement is used to remove a previously assigned granted or denied permission.
The REVOKE Transact-SQL statement does not prevent a principal from inheriting permissions through group or
role membership.

You should not run the following Transact-SQL script:

USE SalesDB;
DENY EXECUTE ON OBJECT::SalesDB.Sales_SP TO Mary;
GO

The OBJECT clause requires the following syntax:

[ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]

Therefore, you should specify the complete path for the Sales_SP stored procedure, which in this scenario is
Sales.Sales_SP, not SalesDB.Sales_SP.


Objective:
Managing SQL Server Security

Sub-Objective:
Manage database permissions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > Deny (Transact-SQL) > DENY Object Permissions (Transact-SQL)

Item: 162 (Ref:Cert-70-432.3.2.1)

You are a database administrator for your company. You implement application roles to manage the security of
the applications accessed by your company's users.

You create an application role with the following code:

CREATE APPLICATION ROLE Cust_role1


WITH PASSWORD = '1254mtnl',
DEFAULT_SCHEMA = Cust;
GO

You have granted all the required permissions to the Cust_role1 role to execute the application. You also grant
permissions on this role to the user Amy who is using the application.

When Amy logs in to the application, which system stored procedure is executed by the application?
- sp_setapprole
- sp_addapprole
- sp_unsetapprole
- sp_addrolemember

Answer:
sp_setapprole

Explanation:
The sp_setapprole system stored procedure is executed by the application when Amy logs in to the application.
When a user logs in to an application, the application connects to the SQL Server instance by using the
credentials of the user. Then, the application executes the sp_setapprole system stored procedure by providing
the password specified while creating the application role. The password is known only to the application. After
the application role and password are authenticated, the application role is activated, and the connection resumes
with the permissions of the application role. An application role is a role that enables you to restrict data access to
users who log in through a particular application.
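A sketch of the call the application would issue after connecting, using the role name and password from the CREATE APPLICATION ROLE statement above:

```sql
-- Activates the application role; the connection now carries
-- the role's permissions instead of Amy's.
EXEC sp_setapprole @rolename = 'Cust_role1', @password = '1254mtnl';
```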

The sp_addapprole system stored procedure is not executed by an application when a user logs in to the
application. The sp_addapprole system stored procedure adds an application role to the database.

The sp_unsetapprole system stored procedure is not executed by an application when a user logs in to the
application. The sp_unsetapprole system stored procedure reverts to the user's permissions.

The sp_addrolemember system stored procedure is not executed by an application when a user logs in to the application. The sp_addrolemember system stored procedure adds database users, database roles, and Windows logins to an existing database role.

Objective:
Managing SQL Server Security

Sub-Objective:
Manage users and database roles.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > Security Stored Procedures (Transact-SQL) >
sp_setapprole (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Security and Protection > Identity and
Access Control > Principals > Application Roles

Monitoring and Troubleshooting SQL Server


Item: 2 (Ref:Cert-70-432.6.2.4)

You are the database administrator for your company and manage all the SQL Server 2008 databases of the
company. The production database named Prod1 contains all the product and sales-related data of the company.

John, a database user, complains that he is not able to update the data in the tables. You suspect that a user
session is blocking John's request. You want to identify the blocking sessions.

Which dynamic management view should you use to identify these sessions?
- sys.dm_tran_locks
- sys.dm_exec_requests
- sys.dm_exec_sessions
- sys.dm_tran_active_transactions

Answer:
sys.dm_exec_requests

Explanation:
The sys.dm_exec_requests dynamic management view should be used to determine the session IDs of the
blocking sessions in the database. The sys.dm_exec_requests view provides information about every request
being executed on the SQL server. The columns in the view can be used to determine details of the requests. The
blocking_session_id column returns the session ID of the session that is blocking the request. Other columns,
such as wait_type and wait_time, can be used to determine the type of waits held and the duration of the waits
in milliseconds.
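A sketch of a query against this view that surfaces only blocked requests and the sessions blocking them:

```sql
-- Requests with a nonzero blocking_session_id are currently blocked.
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```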

The sys.dm_tran_locks dynamic management view cannot be used to determine the session IDs of the blocking sessions in the database. The sys.dm_tran_locks dynamic management view provides information regarding the currently active lock manager resources. Every row in the view represents an active request either for a lock that has been granted or for a lock that is waiting to be granted.

The sys.dm_exec_sessions dynamic management view cannot be used to determine the session IDs of the
blocking sessions in the database. The sys.dm_exec_sessions view provides information regarding each
authenticated session in the SQL server. The view contains one row for each authenticated session in the
database.

The sys.dm_tran_active_transactions dynamic management view cannot be used to determine the session IDs
of the blocking sessions in the database. The sys.dm_tran_active_transactions view provides information
regarding the different transactions in the SQL Server instance. The view contains information, such as the
transaction ID of the transaction, the start time of the transaction, and the state of the transaction. The transaction ID is assigned at the level of the SQL Server instance, not at the database level, and is unique across all databases.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL)
>Execution Related Dynamic Management Views and Functions (Transact-SQL) > sys.dm_exec_requests
(Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL) >
Transaction Related Dynamic Management Views and Functions (Transact-SQL) > sys.dm_tran_locks (Transact-
SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL)>Dynamic Management Views and Functions (Transact-SQL)
>Execution Related Dynamic Management Views and Functions (Transact-SQL) > sys.dm_exec_sessions
(Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL) >
Transaction Related Dynamic Management Views and Functions (Transact-SQL) >
sys.dm_tran_active_transactions (Transact-SQL)

Item: 9 (Ref:Cert-70-432.6.1.2)

You are the database administrator of your company. You maintain an instance of SQL Server 2008 named
SQL1. SQL1 contains a database named Employees that contains information about the company's employees.
You create a maintenance job for the Employees database. You configure the job to run when the processor
utilization on the server is below 20 percent.

After several days, you discover that the maintenance job has not run. You discover that the SQL Server Agent
service is not running. You try to start the SQL Server Agent by using SQL Server Management Studio, but the
agent will not start. You want to diagnose the problem with the SQL Server Agent.

Which command-line utility should you use to do this?


- sqlcmd.exe
- sqldiag.exe
- sqlagent.exe
- sqlservr.exe

Answer:
sqlagent.exe

Explanation:
You should use the sqlagent.exe utility. The sqlagent.exe utility allows you to start SQL Server Agent from the
command prompt. It is recommended that you run the sqlagent.exe utility from the command prompt only when
you want to diagnose SQL Server Agent. The syntax for using this utility is as follows:

sqlagent -c [-v] [-i instance_name]

The -c parameter indicates that SQL Server Agent is running from the command prompt and is independent of
the Microsoft Windows Service Control Manager. The -v parameter runs SQL Server Agent in verbose mode, writing diagnostic information to the command-prompt window. The -i parameter specifies the named
instance of SQL Server to which SQL Server Agent will connect.

You should not use the sqlcmd.exe utility because this utility does not allow you to diagnose SQL Server Agent.
The sqlcmd.exe utility allows you to enter Transact-SQL statements, system procedures, and script files in
different locations, including at the command prompt, in Query Editor in SQLCMD mode, in a Windows script file,
or in an operating system job step of a SQL Server Agent job.

You should not use the sqldiag.exe utility because this utility does not allow you to diagnose SQL Server Agent.
The sqldiag.exe utility is used to collect various types of diagnostic information from SQL Server and other types
of servers.

You should not use the sqlservr.exe utility because this utility does not allow you to diagnose SQL Server Agent.
The sqlservr.exe utility is a command-line utility that allows you to start, stop, and pause an instance of SQL
Server 2008.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify SQL Server service problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Tools Reference >
Command Prompt Utilities > sqlagent90 Application

Item: 16 (Ref:Cert-70-432.6.2.7)

You are the database administrator of your company. The network contains a SQL Server 2008 computer that
has 15 databases. Users report that some queries take a long time to complete. You investigate and discover
deadlocks are causing this problem.


You want to receive information about the resources and types of locks that are causing deadlocks along with the
current command affected by the deadlocks. You want to receive this information in Extensible Markup Language
(XML) format.

Which Transact-SQL statement should you run?


- DBCC TRACEON (1204);
- DBCC TRACEON (1204, -1);
- DBCC TRACEON (1222);
- DBCC TRACEON (1222, -1);

Answer:
DBCC TRACEON (1222, -1);

Explanation:
You should run the following Transact-SQL statement:

DBCC TRACEON (1222, -1);

A deadlock is a condition in which two or more tasks are blocked permanently by each other when each task has
a lock on a resource that the other task is trying to lock. When a deadlock occurs, the database engine decides
which task participating in the deadlock should be ended. This decision is made based on the transaction that is
the least expensive to roll back. Alternatively, you can use the SET DEADLOCK_PRIORITY statement to define
the priority of sessions participating in a deadlock. You can set deadlock priority to LOW, NORMAL, or HIGH, or it
can also be set to a value in the range from -10 to 10. When you manually set deadlock priority, the session with
lower priority is terminated by the database engine. When deadlocks occur, trace flags 1204 and 1222 provide
information about the deadlocks, which is recorded in the SQL Server 2008 error log. Trace flag 1204 returns
information about resources and types of locks that are causing deadlocks. Trace flag 1222 also provides the
same information in an XML format that does not comply with any XML Schema Definition (XSD) schema. Trace
flags 1204 and 1222 are global trace flags that can be enabled either by using the DBCC TRACEON statement or
by using the sqlservr.exe -T startup parameter. The complete syntax for the DBCC TRACEON statement is as follows:

DBCC TRACEON ( trace# [ ,...n ][ , -1 ] ) [ WITH NO_INFOMSGS ];

The trace# parameter specifies the number of the trace flag that you want to enable. The n parameter indicates
that multiple trace flags can be specified in a single command. The -1 parameter is used to globally turn on trace
flags. The WITH NO_INFOMSGS parameter can be used to suppress all informational messages.
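As an illustrative sketch of this syntax, the following statements enable both deadlock trace flags globally, confirm their status, and turn them off again; run them on a test instance first:

```sql
-- Enable deadlock trace flags 1204 and 1222 for all connections (-1 = global).
DBCC TRACEON (1204, 1222, -1);

-- Verify which trace flags are currently enabled globally.
DBCC TRACESTATUS (-1);

-- Disable both flags when the deadlock investigation is complete.
DBCC TRACEOFF (1204, 1222, -1);
```

DBCC TRACEOFF and DBCC TRACESTATUS follow the same parameter pattern as DBCC TRACEON.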

You should not run the following Transact-SQL statement:

DBCC TRACEON (1204);

Trace flag 1204 provides the required information stated in the scenario, but it does not provide the information in
XML format as required. Also, because trace flag 1204 is a global trace flag, you must specify the -1 parameter in
the statement.

You should not run the following Transact-SQL statement:

DBCC TRACEON (1204, -1);

Trace flag 1204 provides the required information stated in the scenario, but it does not provide the information in
XML format as required.


You should not run the following Transact-SQL statement:

DBCC TRACEON (1222);

Trace flag 1222 is a global trace flag. Therefore, you must specify the -1 parameter in the statement.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > Trace Flags (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data (Database Engine) > Accessing and Changing Database Data > Locking and Row Versioning > Locking in
the Database Engine > Deadlocking > Detecting and Ending Deadlocks

Item: 20 (Ref:Cert-70-432.6.2.9)

You are responsible for managing an instance of SQL Server 2008. You discover that the server is performing
very poorly. You investigate and discover that deadlocks are causing the problem.

You want to identify which processes and resources are causing the problem. You want to ensure that the
information you receive about deadlocks is formatted by processes and then by resources.

Which trace flag should you use?


trace flag 2528

trace flag 1211

trace flag 1222

trace flag 1224

Answer:
trace flag 1222

Explanation:
You should use trace flag 1222. Trace flags can be used to diagnose performance issues or debug stored
procedures. Trace flags allow you to set characteristics of a server on a temporary basis or to turn off specific
behavior. Trace flags 1204 and 1222 provide information about deadlocks that is recorded in the SQL Server
2008 error log when deadlocks occur. Trace flag 1222 provides information about resources and types of locks
that are causing deadlocks along with information about the statements affected by deadlocks. The information is
formatted by process and then by resource.
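Because trace flags enabled with DBCC TRACEON are lost when the service restarts, trace flag 1222 is often set as a startup parameter instead; a minimal sketch for a default instance started from the command prompt:

```
REM Start the default instance with trace flag 1222 enabled at startup.
REM (In production, add -T1222 to the startup parameters in SQL Server
REM Configuration Manager rather than running the service from a console.)
sqlservr.exe -T1222
```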

You should not use trace flag 2528. Trace flag 2528 disables parallel checking of objects by DBCC CHECKDB,
DBCC CHECKFILEGROUP, and DBCC CHECKTABLE. This trace flag does not provide information about
deadlocks.

You should not use trace flag 1211 because this trace flag does not provide information about deadlocks. Trace
flag 1211 disables lock escalation based on memory pressure or the number of locks.

You should not use trace flag 1224 because this trace flag does not provide information about deadlocks. Trace
flag 1224 disables lock escalation based on the number of locks.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > Trace Flags (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data (Database Engine) > Accessing and Changing Database Data > Locking and Row Versioning > Locking in
the Database Engine > Deadlocking > Detecting and Ending Deadlocks

Item: 25 (Ref:Cert-70-432.6.4.3)

You are the database administrator of your company. Several users connect to an instance of SQL Server 2008
named SQL1. A user reports that his session terminated unexpectedly. You want to identify the cause of the
problem.

In which two log files could you look to obtain additional information about the error? (Choose two. Each correct
answer represents a complete solution.)
the Windows Application log

the Windows System log

the Windows Security log

the SQL Server Error log

Answer:
the Windows Application log
the SQL Server Error log

Explanation:
You can use the Windows Application log or the SQL Server Error log to obtain additional information about the
error. SQL Server records system events and user-defined events to the SQL Server Error log and the Windows
Application log. All events that are related to SQL Server sessions are written to the Windows Application log.
SQL Server reads these events from the Windows Application log and stores them in the SQL Server Error log,
which can be viewed by using the SQL Server Log File Viewer.
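The SQL Server Error log can also be searched from Transact-SQL with the sp_readerrorlog system procedure; a sketch only, because the procedure is undocumented and the parameter meanings shown are an assumption based on common usage:

```sql
-- Read the current SQL Server Error log (archive 0, log type 1 = SQL Server)
-- and return only the entries containing the text 'Error'.
EXEC sp_readerrorlog 0, 1, N'Error';
```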


You should not use the Windows System log or the Windows Security log because these logs do not contain
information about SQL Server sessions. The Windows System log contains events that are logged by the
Windows operating system. The Windows Security log records security events such as failed login attempts.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Locate error information.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > Monitoring the Error Logs > Viewing the Windows Application Log

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > Monitoring the Error Logs

Item: 32 (Ref:Cert-70-432.6.4.2)

You are the SQL administrator for your company. You manage two SQL Server 2008 instances named
SQL_Prod and SQL_Test. The SQL_Test instance is used by application developers to test new applications.

A developer wants to be able to view the error log files for SQL Server, SQL Agent, SQL auditing, and Windows
Events using the Log File Viewer component in SQL Server Management Studio. You need to grant the
developer the appropriate permission.

What should you do?


Add the user's account to the securityadmin fixed server role.

Add the user's account to the setupadmin fixed server role.

Add the user's account to the db_securityadmin fixed database role in all databases on SQL_Test.

Add the user's account to the db_accessadmin fixed database role in all databases on SQL_Test.

Answer:
Add the user's account to the securityadmin fixed server role.

Explanation:
You should add the user's account to the securityadmin fixed server role. Members of the securityadmin role
can view the error log files for SQL Server, SQL Agent, SQL auditing, and Windows Events using the Log File
Viewer component in SQL Server Management Studio. Because these error logs exist at the server level, you
must use a fixed server role to access them.
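On SQL Server 2008, fixed server role membership is granted with the sp_addsrvrolemember system procedure; a minimal sketch, where the developer's Windows login name is a hypothetical placeholder:

```sql
-- Add the developer's login (hypothetical name) to the securityadmin
-- fixed server role so the Log File Viewer error logs become visible.
EXEC sp_addsrvrolemember @loginame = N'DOMAIN\Developer',
                         @rolename = N'securityadmin';
```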

You should not add the user's account to the setupadmin fixed server role. Members of this role can add and
remove linked servers.

You should not add the user's account to the db_securityadmin or db_accessadmin fixed database role in all
databases on SQL_Test. SQL Server error logs are not accessed on a per database basis. They are accessed at
the server level. Members of the db_securityadmin fixed database role can modify database role membership
and manage database permissions. Members of the db_accessadmin fixed database role can manage database
access for Windows logins and groups and SQL Server logins.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Locate error information.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Management Studio F1 Help > Object Explorer F1 Help > Management Node (Object
Explorer) > Log File Viewer

Item: 33 (Ref:Cert-70-432.6.1.3)

You are the SQL administrator for your company. A SQL Server 2008 instance named SQL_Prod contains all the
production databases.

From SQL_Prod, you need to connect to another SQL Server 2008 instance named SQL_Test. However, you
cannot obtain a list of the available servers. You suspect that the appropriate service is not started.

Which command should you issue to start the appropriate service?


sqlagent.exe

sqlcmd.exe

sqlbrowser.exe

sqlservr.exe

Answer:
sqlbrowser.exe

Explanation:
You should issue the sqlbrowser.exe command to start the SQL Server Browser service. The SQL Server
Browser service allows you to browse available servers and connect to other SQL Server instances.
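In practice, the Browser service is usually controlled as a Windows service rather than by running the executable directly; a sketch of both approaches from an elevated command prompt:

```
REM Start the SQL Server Browser as a Windows service:
net start SQLBrowser

REM Or run the executable in console mode for troubleshooting:
sqlbrowser.exe -c
```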

None of the other command-line utilities can be used start the SQL Server Browser service. The sqlagent.exe
command is used to start, stop, and pause the SQL Server Agent service. The sqlcmd.exe command is used to
enter Transact-SQL commands outside the Query Editor window. The sqlservr.exe command is used to start,
stop, and pause the SQL Server service.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:

Identify SQL Server service problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Browser Service

Item: 35 (Ref:Cert-70-432.6.3.3)

You are the database administrator of your company. You are responsible for managing all databases stored on a
SQL Server 2008 computer named SQL1. You create a job with multiple steps that collects data from multiple
tables for reporting purposes. Each job step collects data for a particular report. You configure the job to run daily.

Users report that data for several reports is not updated regularly. You suspect that not all steps configured in the
job are being performed. You want to ensure that each job step is performed even if an error occurs.

What should you do?


Modify the On success action option for all the steps in the job.

Modify the On failure action option for all the steps in the job.

Modify the Retry attempts option for all the steps in the job.

Modify the Retry interval (minutes) option for all the steps in the job.

Answer:
Modify the On failure action option for all the steps in the job.

Explanation:
You should modify the On failure action option for all the steps in the job. You can specify an action that SQL
Server should take when a job step executes successfully or when a failure occurs during the execution of a job
step. You can configure the On success action option and the On failure action option on the Advanced page
of the job step to specify the action that SQL Server should take in each situation. You can configure the following
three actions: Go to the next step, Quit the job reporting failure, or Quit the job reporting success. By
default, the Quit the job reporting failure option is selected for each job step if any error occurs during the
execution of the job step. This prevents the remaining steps configured in the job from being performed. To
ensure that each job step is performed even when errors occur, you should select the Go to the next step action
for the On failure action option for each job step.
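The same setting can be scripted with the msdb.dbo.sp_update_jobstep procedure, where @on_fail_action = 3 corresponds to the Go to the next step action; the job name and step number below are hypothetical:

```sql
-- Make step 1 of the reporting job continue to the next step when it fails.
-- @on_fail_action: 1 = quit reporting success, 2 = quit reporting failure,
-- 3 = go to the next step.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'Collect report data',   -- hypothetical job name
    @step_id = 1,
    @on_fail_action = 3;
```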

You should not modify the On success action option for all the steps in the job. By default, the Go to the next
step action is selected for the On success action option. This ensures that when a job step is successfully
performed, the next job step will be performed. To ensure that all job steps are performed even when error
occurs, you should select the Go to the next step action for the On failure action option of each job step.

You should not modify the Retry attempts option for all the steps in the job. The Retry attempts option specifies
the number of times a job step should be repeated before it is considered to have failed. When a job is considered
to have failed, the action specified in the On failure action option is taken. By default, the Quit the job reporting
failure option is selected for each job step if any error occurs during the execution of the job step. To ensure that
all job steps are performed even when an error occurs, you should select the Go to the next step action for the
On failure action option for each job step.

You should not modify the Retry interval (minutes) option for all the steps in the job. The Retry interval
(minutes) option is used to specify the number of minutes that must pass before a job step is retried. Modifying
the Retry interval (minutes) option for all the steps will not ensure that each job step is performed even when
errors occur.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify SQL Agent job execution problems.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics (Database Engine) > Automated Administration How-to Topics (SQL Server
Management Studio) > How to: Set Job Step Success or Failure Flow (SQL Server Management Studio)

Item: 50 (Ref:Cert-70-432.6.2.5)

You are the database administrator for your company. You manage all the SQL Server 2008 databases in your
company. The Prod_details database is the main database accessed by the company's users. The head office
receives data from other branches of the company. After the data is verified by employees of the audit
department, the database is updated with this information.

Some employees in the audit department complain they cannot update data in certain tables. You suspect that
other database users are holding locks on these tables.

You must identify the oldest transaction in the database and the SQL Server logins associated with the
transaction.

Which statements or functions should you use to obtain the desired results? (Choose two. Each answer
represents a part of the solution.)
the DBCC OPENTRAN statement

the DBCC ROWLOCK statement

the USER_NAME function

the SUSER_SNAME function

the SUSER_SID function

the USER_ID function

Answer:
the DBCC OPENTRAN statement
the SUSER_SNAME function

Explanation:
You should use the DBCC OPENTRAN statement and the SUSER_SNAME function. The DBCC OPENTRAN
statement is used to obtain the details of the oldest transaction in the database. This statement returns the
security identification number (SID), the server process ID (SPID), the name, and the start time of the transaction.
The SID value returned can be used to obtain the SQL Server login associated with the transaction. The
SUSER_SNAME function is used to obtain the SQL Server login associated with the SID. When you pass the SID
returned by the DBCC OPENTRAN statement to the SUSER_SNAME function, you can obtain the SQL Server
login associated with the oldest transaction in the database.
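A sketch of the two-step lookup against the Prod_details database; the SPID value shown is a placeholder taken from a hypothetical DBCC OPENTRAN result:

```sql
-- Report the oldest active transaction in the Prod_details database.
DBCC OPENTRAN ('Prod_details');

-- Suppose the output identifies SPID 52; map its security ID to a login.
SELECT SUSER_SNAME(sid) AS login_name
FROM sys.sysprocesses
WHERE spid = 52;
```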

You should not use the DBCC ROWLOCK statement because this statement does not provide information about
the oldest transaction in the database or the SQL Server login associated with the transaction. The DBCC
ROWLOCK statement was valid in earlier versions of SQL Server, but is not supported in SQL Server
2008.

You should not use the USER_NAME function because the USER_NAME function does not provide the
information required in this scenario. The USER_NAME function returns the database user name associated with
the user identification number passed as an argument.

You should not use the SUSER_SID function because this function does not provide the information required in
this scenario. The SUSER_SID function returns the SID associated with the login name passed as an
argument.

You should not use the USER_ID function because this function does not provide information required in this
scenario. The USER_ID function returns the user ID associated with the database user name passed as an
argument.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC OPENTRAN (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > SUSER_SNAME (Transact-SQL)

Item: 54 (Ref:Cert-70-432.6.4.5)

You manage an instance of SQL Server 2008. You use SQL Server Agent to notify you of certain performance
conditions. You want to change the file to which SQL Server Agent will write error messages.

From which dialog box can you make this change?


the SQL Server Agent Properties dialog box

the Configure SQL Server Agent Error Logs dialog box

the SQL Server Agent (MSSQLSERVER) Properties dialog box

the Server Properties dialog box

Answer:
the Configure SQL Server Agent Error Logs dialog box

Explanation:

You can change the file to which SQL Server Agent will write error messages from the Configure SQL Server
Agent Error Logs dialog box. By default, the information about errors related to SQL Server Agent are written to
the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.OUT file.
However, you can change this location by using the Configure SQL Server Agent Error Logs dialog box. To do
this, you should perform the following steps:

1. Open SQL Server Management Studio, and expand the SQL Server Agent node.
2. Right-click the Error Logs node, and select the Configure option.
3. In the Configure SQL Server Agent Error Logs dialog box, specify the new file name and path in the
Error log file field.

You cannot use the SQL Server Agent Properties dialog box to change the file to which SQL Server Agent will
write error messages. This dialog box displays the error log file name on the General page, but it does not allow
you to change the file name or the file's location.

You cannot use the SQL Server Agent (MSSQLSERVER) Properties dialog box to change the file to which SQL
Server Agent will write error messages. This dialog box is used to configure properties of the SQL Server Agent
service.

You cannot use the Server Properties dialog box to change the file to which SQL Server Agent will write error
messages. The Server Properties dialog box is used to configure the server-level properties of the SQL server.
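For completeness, the same change can be made from Transact-SQL with msdb.dbo.sp_set_sqlagent_properties; treat this as a sketch only, because the procedure and its @errorlog_file parameter are undocumented, and the path shown is hypothetical:

```sql
-- Point the SQL Server Agent error log at a new file (hypothetical path).
-- Restart the SQL Server Agent service for the change to take effect.
EXEC msdb.dbo.sp_set_sqlagent_properties
    @errorlog_file = N'D:\SQLLogs\SQLAGENT.OUT';
```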


Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Locate error information.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Management Studio F1 Help > Object Explorer F1 Help > SQL Server Agent F1 Help >
Configure SQL Server Agent Error Logs (General Page)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Automating
Administrative Tasks (SQL Server Agent) > SQL Server Agent > Using the SQL Server Agent Error Log

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > SQL-DMO
Reference > Properties (SQL-DMO) > A (SQL-DMO Properties) > AgentLogFile Property

Item: 71 (Ref:Cert-70-432.6.1.1)

You have been hired by your company to manage an instance of SQL Server 2008. The server contains several
user-defined stored procedures that query the system databases. You discover that queries against some system
databases are not returning the correct results. You decide to repair the corrupt system databases. To achieve
this, you want to start the SQL Server instance in single-user mode without using the minimal configuration.

Which command should you use to restart the instance?


sqlservr.exe -f

sqlservr.exe -x

sqlservr.exe -m

sqlservr.exe -s

Answer:
sqlservr.exe -m

Explanation:
You should use the sqlservr.exe -m command. The sqlservr.exe command is a command-line utility that allows
you to start, stop, and pause an instance of SQL Server 2008. Specifying the -m parameter with the sqlservr.exe
command starts a SQL Server instance in single-user mode. Starting a SQL Server instance in single-user mode
is required when you experience a problem with system databases and want to repair them.
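A sketch of the restart sequence from an elevated command prompt, assuming the default instance; the named-instance form is shown as a comment:

```
REM Stop the default instance, then restart it in single-user mode:
net stop MSSQLSERVER
sqlservr.exe -m

REM For a named instance, add the -s parameter:
REM sqlservr.exe -m -s INSTANCENAME
```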

You should not use the sqlservr.exe -f command because the -f parameter starts an instance of SQL Server in
single-user mode with the minimal configuration, and the scenario states that you do not want to use the minimal
configuration.

You should not use the sqlservr.exe -x command because the -x parameter does not start an instance of SQL
Server in single-user mode. The -x parameter disables monitoring features, such as performance monitor
counters.

You should not use the sqlservr.exe -s command because the -s parameter does not start an instance of SQL
Server in single-user mode. The -s parameter starts a named instance of SQL Server.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify SQL Server service problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Tools Reference >
Command Prompt Utilities > sqlservr Application

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Managing
the Database Engine Services > Starting and Restarting Services > Using the SQL Server Service Startup
Options

Item: 99 (Ref:Cert-70-432.6.2.1)

You are the database administrator for your company. You maintain a database named Prod1 on a SQL Server
2008 instance named Sql1. The Prod1 database contains all the orders and product-related information for your
company. Indexes have been created on the key columns of the database tables. The data in the tables is
constantly updated, and new rows are added to the table.

You notice performance on the database server is degrading. You consider a high number of indexes in the
database to be the cause of this problem.

You are required to identify the lock and latch information and the access methods used for these indexes.

Which function should you use to obtain the required information?


sys.dm_tran_current_transaction

sys.dm_db_index_operational_stats

sys.dm_db_index_physical_stats

sys.dm_db_index_usage_stats

Answer:
sys.dm_db_index_operational_stats

Explanation:
You should use the sys.dm_db_index_operational_stats function to obtain the required information. The
sys.dm_db_index_operational_stats function provides the current locking and latching information about a
partition, either in a database table or in an index. This function also provides statistics related to the access
method and the physical I/O of the table or index partition. The information retrieved from this function is useful in
analyzing the characteristics of a database table or index.
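A minimal sketch of querying the function for the Prod1 database; the column selection is illustrative, and the query assumes you are connected to Prod1 so that the join to sys.indexes resolves correctly:

```sql
-- Summarize lock and latch activity per index in the Prod1 database.
SELECT OBJECT_NAME(os.object_id, os.database_id) AS table_name,
       i.name                                    AS index_name,
       os.row_lock_count,
       os.row_lock_wait_count,
       os.page_latch_wait_count,
       os.range_scan_count
FROM sys.dm_db_index_operational_stats(DB_ID('Prod1'), NULL, NULL, NULL) AS os
JOIN sys.indexes AS i
    ON i.object_id = os.object_id
   AND i.index_id = os.index_id;
```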


You should not use the sys.dm_tran_current_transaction function. The sys.dm_tran_current_transaction
function displays the state information of the transaction in the current session, but does not provide lock or latch
information.

You should not use the sys.dm_db_index_physical_stats function. The sys.dm_db_index_physical_stats
function obtains information regarding the fragmentation of data and indexes of either a database table or a
database view. The details provided in the columns of the returned table can be used to detect both logical and
extent fragmentation of data in a database.

You should not use the sys.dm_db_index_usage_stats function. The sys.dm_db_index_usage_stats function
calculates the different operation types performed on indexes and the time at which these operations were last
performed. The different types of operations displayed in the output of this function are seek, scan, lookup, and
update.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL) >
Index Related Dynamic Management Views and Functions (Transact-SQL) > sys.dm_db_index_operational_stats
(Transact-SQL)

Item: 103 (Ref:Cert-70-432.6.2.3)

You are the database administrator for a banking company. You manage all the SQL Server 2008 databases of
the company. The company stores customer-related data in the database named Cust01. This database is
accessed by most users in the company for different purposes. Users perform daily inserts and updates to the
database through a .NET application.

Eric, a database user, complains that his transaction is frozen and that he cannot perform any operation in the
database. You determine that the problem is caused by a deadlock. You want to use SQL Server Profiler to
identify the user who is the other participant in the deadlock.

Which event should you trace?


the Lock: Deadlock event

the Lock: Deadlock chain event

the Lock: Deadlock cancel event

the Lock: Escalation event

Answer:
the Lock: Deadlock chain event

Explanation:

You should trace the Lock: Deadlock chain event to detect the other participant in the deadlock. The Lock:
Deadlock chain event is produced for each participant in the deadlock. The columns in the Lock: Deadlock
chain event provide information, such as the ID of the two sessions participating in the deadlock, the IDs of the
objects involved in the deadlock, and the ID of the transaction in which the deadlock was detected. Tracing the
Lock: Deadlock chain event will provide you with all the information necessary to identify the cause of the
deadlock.

You should not trace the Lock: Deadlock event to detect the other participant in the deadlock. The Lock:
Deadlock event contains information to track the transaction in the database that has requested a lock on a
resource that was already locked by another transaction and caused a deadlock. This information is useful when
determining whether performance of an application is being degraded by the occurrence of a deadlock.

You cannot trace the Lock: Deadlock cancel event because this is an invalid event. The correct event is the
Lock: Cancel event. This event indicates that the process of acquiring a lock on a resource has been cancelled.

You should not trace the Lock: Escalation event to detect the other participant in the deadlock. The Lock:
Escalation event indicates that many fine-grained locks have been converted to a single coarser-grained lock.
For example, row-level locks on a table are escalated to a single table-level lock.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference > Locks Event Category > Lock: Deadlock Chain Event Class

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference > Locks Event Category

Item: 106 (Ref:Cert-70-432.6.2.8)

You are the database administrator of your company. You are in the process of configuring transaction isolation
level to define the locking and row versioning behavior of Transact-SQL statements. You want to ensure that
phantom reads do not occur when you configure the transaction isolation level.

Which two transaction isolation levels can you configure to accomplish this? (Choose two. Each correct answer
represents a complete solution.)
READ UNCOMMITTED

READ COMMITTED

REPEATABLE READ

SNAPSHOT

SERIALIZABLE

Answer:
SNAPSHOT
SERIALIZABLE


Explanation:
You can configure the SNAPSHOT or SERIALIZABLE isolation level. Only these two isolation levels prevent
phantom reads. Isolation levels determine the degree of isolation for a transaction from changes made to
resources or data by other transactions. In SQL Server 2008, you can configure the following five types of isolation
levels for transactions: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SNAPSHOT, and
SERIALIZABLE. The READ UNCOMMITTED isolation level is the least restrictive isolation level. This isolation
level allows statements to read rows that have been modified by other transactions but not yet committed. The READ
COMMITTED isolation level prevents statements from reading data that has been modified by other transactions
but not committed. The REPEATABLE READ isolation level prevents statements from reading data that has
been modified by other transactions but not yet committed, and it also prevents other transactions from modifying
data that has been read by the current transaction until the current transaction completes. The SNAPSHOT
isolation level ensures that the data read by any statement in a transaction will remain consistent until the
transaction is complete. The SERIALIZABLE isolation level is the most restrictive isolation level; transactions
are completely isolated from one another. Only the SNAPSHOT and SERIALIZABLE isolation levels do not allow
any concurrency side effects, such as dirty reads, phantom reads, or non-repeatable reads. To
define an isolation level for transactions, you can use the SET TRANSACTION ISOLATION LEVEL Transact-
SQL statement.
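As a hedged sketch of that statement (the table and column names below are hypothetical, not from this item):

```sql
-- Prevent phantom reads for every statement in this transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
    -- The key range satisfying this predicate is locked, so other
    -- transactions cannot insert matching (phantom) rows until COMMIT.
    SELECT OrderID, Quantity
    FROM dbo.Orders          -- hypothetical table
    WHERE Quantity > 100;
COMMIT TRANSACTION;
```

SNAPSHOT is set the same way, but only after ALLOW_SNAPSHOT_ISOLATION has been enabled on the database with ALTER DATABASE.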

You cannot configure the READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ isolation
levels to accomplish the desired objective because these isolation levels allow phantom reads to occur.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > SET (Transact-SQL) > SET TRANSACTION ISOLATION LEVEL (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data (Database Engine) > Accessing and Changing Database Data > Locking and Row Versioning > Managing
Concurrent Data Access > Isolation Levels in the Database Engine

Item: 119 (Ref:Cert-70-432.6.2.10)

You are the database administrator of your company. Users report an instance of SQL Server 2008 named SQL1
is performing poorly. You suspect that deadlocks are causing the server to perform slowly. You want to identify
the cause of the deadlocks. To achieve this, you want to collect information about the deadlocks and save the
information to a file.

What should you do to achieve this objective?


Create a SQL Server Profiler trace.
Create a trace flag.
Create a Policy-based Management policy.
Create a System Monitor counter.


Answer:
Create a SQL Server Profiler trace.

Explanation:
You should create a SQL Server Profiler trace. SQL Server Profiler is a tool that provides a graphical user
interface for monitoring an instance of the SQL Server Database Engine or Analysis Service services. SQL Server
Profiler allows you to capture and save data about an event to a table or a file for analysis. To trace deadlock
events, you should add the Deadlock graph event class to a trace. To do this, you should create a new trace by
using SQL Server Profiler. You should perform the following steps to create a new trace:

1. Open SQL Server Profiler, select the New Trace option on the File menu, and then connect to an instance
of SQL Server.
2. In the Trace Properties dialog box, type a name for the trace in the Trace name field.
 If you want to base your trace on a template, select the template from the Use the template drop-
down list. Otherwise, select the Blank option.
 If you want to capture the trace to a file, select the Save to file check box. Specify a value in the Set
maximum file size field to specify the maximum file size for the trace file. You can also optionally
select the Enable file rollover option and the Server processes trace data option.
 If you want to capture the trace to a table, select the Save to table check box. To specify the
number of maximum rows in the table, specify a value in the Set maximum rows field.
 To stop the trace at a specified time, specify a stop date and time in the Enable trace stop time
field.
3. Click the Events Selection tab.
4. Expand the Locks event category in the Events data column, and select the Deadlock graph check box.
If the Locks event category is not available, select the Show all events check box to display it. The
Events Extraction Settings tab will be added to the Trace Properties dialog box.
5. Select the Save Deadlock XML Events Separately option on the Events Extraction Settings tab.
6. Specify a file name for the trace file in the Save As dialog box.
7. Select the All Deadlock XML batches in a single file option to save all deadlock graph events in a single
Extensible Markup Language (XML) file, or select the Each Deadlock XML batch in a distinct file option
to create a new XML file for each deadlock graph.

You cannot create a new trace flag in SQL Server 2008. Trace flags can be used to diagnose performance issues
or debug stored procedures. Trace flags allow you to set particular characteristics of a server on a temporary
basis or to turn off specific behavior. SQL Server contains some existing trace flags that can be used for this
purpose. SQL Server 2008 does not allow you to create new trace flags.

You should not create a Policy-based Management policy because it cannot be used to collect information about
deadlock events and save the information to a file. Policy-Based Management policies manage entities, such as
databases or other SQL Server objects, on an instance of SQL Server 2008.

You should not create a System Monitor counter. System Monitor is primarily used for monitoring resource usage
associated with server processes. A System Monitor counter does not allow you to capture information about
deadlock events and save the information to a file.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance >
Performance Monitoring and Tuning How-to Topics > Server Performance and Activity Monitoring How-to Topics
> How to: Save Deadlock Graphs (SQL Server Profiler)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance > Monitoring
and Tuning for Performance > Tools for Performance Monitoring and Tuning

Item: 134 (Ref:Cert-70-432.6.2.2)

You are the database administrator for a major electronics chain. You manage all the SQL Server 2008
databases of the company. The data is stored in databases named Prod1 and Prod2. These databases are
accessed by users in the company by using four different applications named Cust_sales, Cust_audit,
Retail_sales, and Yearly_total.

Users of the Cust_sales and Cust_audit applications complain that either the transactions time out frequently or
take too long to complete.

You want to monitor the applications causing the transaction blocks by using the Activity Monitor.

What should you do to achieve this objective?


In the Process Info page, filter by using the host name.
In the Process Info page, filter by using the process ID.
In the Process Info page, filter by using the application name.
In the Process Info page, filter by using the database name.

Answer:
In the Process Info page, filter by using the application name.

Explanation:
You should filter by using the application name in the Process Info page to achieve this objective. You can
specify different filters in the Process Info page of the Activity Monitor. These filters are used to determine the
object-specific data that should be displayed in the output page. The different options available are as follows:

 Application: Specifies the name of the application from which the connections should be monitored.
 Host: Specifies the name of the hosts from which the connections should be monitored.
 Processes: Specifies the ID of the processes that should be monitored.
 Database: Specifies the name of the database that should be monitored.
 User: Specifies the name of the database users whose processes should be monitored.

In this scenario, the users accessing the Cust_sales and Cust_audit applications are experiencing problems.
Therefore, you should monitor connections to the Cust_sales and Cust_audit applications by specifying the
name of these two applications in the Application filter on the Process Info page.


All the other options are incorrect because these options cannot be used to monitor the connections to the
Cust_sales and Cust_audit applications.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Management Studio F1 Help > Object Explorer F1 Help > Management Node (Object
Explorer) > Activity Monitor F1 Help > Activity Monitor (Process Info Page)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Feature
Reference > SQL Server Management Studio F1 Help > Object Explorer F1 Help > Management Node (Object
Explorer) > Activity Monitor F1 Help > Activity Monitor (Filter Properties Page)

Item: 140 (Ref:Cert-70-432.6.3.1)

You are the database administrator for a major shipping company. You manage all the SQL Server 2008
databases of the company. You have created different jobs in the database to perform different administrative
functions.

Due to recent scheduling changes, you must reschedule the jobs to be executed at a different time. You want to
identify details of the last job that was successfully executed on the server.

Which Transact-SQL should you use?


DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM master.dbo.sysjobhistory
WHERE run_status = 2
ORDER BY run_date, run_time;

DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM master.dbo.sysjobhistory
WHERE run_status = 1
ORDER BY run_date DESC, run_time DESC;

DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM msdb.dbo.sysjobhistory
WHERE run_status = 1
ORDER BY run_date, run_time;

DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM msdb.dbo.sysjobhistory
WHERE run_status = 1
ORDER BY run_date DESC, run_time DESC;

Answer:
DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM msdb.dbo.sysjobhistory
WHERE run_status = 1
ORDER BY run_date DESC, run_time DESC;

Explanation:
You should use the following Transact-SQL:

DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM msdb.dbo.sysjobhistory
WHERE run_status = 1
ORDER BY run_date DESC, run_time DESC;

The dbo.sysjobhistory system table provides the history of previously executed jobs. The table is in the msdb
database; therefore, you should only query the msdb database. This is accomplished in the query by prepending
the database name to the table name. The dbo.sysjobhistory table contains columns, such as run_date and
run_time, which indicate the date and time at which the jobs were executed, respectively. To retrieve the details
of the jobs executed last, you should sort the result set of the query in descending order on the run_date and
run_time columns. To retrieve the jobs that executed successfully, you should search for a value of 1 in the
run_status column in the WHERE clause condition. A value of 1 in the run_status column indicates jobs that
have been completed successfully. The TOP expression used in the query uses a value of 1. This indicates that
only the first row should be retrieved from the result set.

You should not use the following Transact-SQL:

DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM master.dbo.sysjobhistory
WHERE run_status = 2
ORDER BY run_date, run_time;

This query will return an error because the dbo.sysjobhistory table is located in the msdb database, not in the
master database. In addition, specifying run_status = 2 in the WHERE clause condition will retrieve jobs with a
retry status. In this scenario, you must retrieve jobs that completed successfully. Therefore, you should use the
run_status = 1 in the WHERE clause condition.

You should not use the following Transact-SQL:

DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM master.dbo.sysjobhistory
WHERE run_status = 1
ORDER BY run_date DESC, run_time DESC;

This query will return an error because the dbo.sysjobhistory table is located in the msdb database, not in the
master database.

You should not use the following Transact-SQL:

DECLARE @num AS int;
SET @num = '1';
SELECT TOP(@num) job_id, run_status, run_date, run_time
FROM msdb.dbo.sysjobhistory
WHERE run_status = 1
ORDER BY run_date, run_time;

Because you omitted the DESC keyword in the ORDER BY clause of the query, the rows returned will be sorted
in ascending order by default. In this scenario, the rows returned by the query should be sorted in descending
order by using the DESC keyword in the ORDER BY clause. Sorting the rows in a descending order will retrieve
the job that was the last to be executed successfully. If you sort the rows in the ascending order, the first job that
was executed successfully will be returned.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify SQL Agent job execution problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Tables (Transact-SQL) > SQL Server Agent Tables (Transact-SQL) > sysjobhistory
(Transact-SQL)
Item: 147 (Ref:Cert-70-432.6.4.4)


You are the database administrator of your company. You configure a new SQL Server Agent job on a SQL
Server 2008 computer named SQL1. You confirm that the job runs successfully.

After a month, you discover that the SQL Server Agent has stopped unexpectedly. You try to start the SQL Server
Agent manually but the agent will not start.

You want to identify the cause of the problem.

Which file should you check?


C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.1
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.OUT
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG.1

Answer:
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.OUT

Explanation:
You should check the C:\Program Files\Microsoft SQL
Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.OUT file. The SQLAGENT.OUT file is the default
SQL Server Agent log, which contains information about errors related to SQL Server Agent. This file is located in
the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log folder by default. To
change the location where the file is stored, you should perform the following steps:

1. Open SQL Server Management Studio, and expand the SQL Server Agent node.
2. Right-click the Error Logs node, and click the Configure option.
3. In the Configure SQL Server Agent Error Logs dialog box, specify the new file name and path in the
Error log file field.

You should not check the C:\Program Files\Microsoft SQL
Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.1 file. The SQLAGENT.1 file is an archived
error log. SQL Server maintains up to nine error logs that contain information about errors related to SQL Server
Agent. These archived logs are given extensions from .1 to .9, which indicate the age of the log file. The .1
extension indicates the latest archived error log. To find information about recent errors related to SQL Server
Agent, you should check the current SQL Server Agent error log, which is SQLAGENT.OUT by default.

You should not check the C:\Program Files\Microsoft SQL
Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG file or the C:\Program Files\Microsoft SQL
Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG.1 file. The ERRORLOG and ERRORLOG.1 files
are SQL Server error logs that record information about server-level error messages. These logs do not contain
information about SQL Server Agent errors. To find information about recent errors related to SQL Server Agent,
you should check the current SQL Server Agent error log, which is SQLAGENT.OUT by default.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Locate error information.


References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > SQL Server Database Engine > Administering the Database
Engine > Automating Administrative Tasks (SQL Server Agent) > SQL Server Agent > Using the SQL Server
Agent Error Log

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > SQL-DMO
Reference > Properties (SQL-DMO) > A (SQL-DMO Properties) > AgentLogFile Property

Item: 150 (Ref:Cert-70-432.6.4.1)

You are the SQL administrator for your company. You manage two SQL Server 2008 instances named SQL1 and
SQL2. Each instance maintains its own error log.

You discover that the hard drive on which the error log for SQL1 resides is full. You delete non-vital information
from the hard drive to free up space. After cleaning up the hard drive, you decide that you want to start a new
error log.

What should you do? (Choose two. Each correct answer represents a complete solution.)
Restart SQL1.
Restart the computer on which the SQL1 instance resides.
Run the sp_cycle_errorlog system stored procedure.
Run the sp_cycle_agent_errorlog system stored procedure.

Answer:
Restart SQL1.
Run the sp_cycle_errorlog system stored procedure.

Explanation:
You have two options for starting a new error log:

 Restart SQL1.
 Run the sp_cycle_errorlog system stored procedure.

If you are able, you can simply restart the instance. A new error log is started with every instance restart.
However, if you do not want to restart the instance, you can run the sp_cycle_errorlog system stored procedure
to generate a new error log.
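Because sp_cycle_errorlog takes no parameters, cycling the log is a single call; a minimal sketch:

```sql
-- Close the current SQL Server error log and start a new one
-- without restarting the SQL1 instance.
EXEC sp_cycle_errorlog;
```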

The SQL Server error logs are stored in the Program Files\Microsoft SQL
Server\MSSQL.n\MSSQL\LOG directory by default. The current error log is named ERRORLOG. The previous
six error logs are also maintained in the directory and named ERRORLOG.1, ERRORLOG.2, and so on, with the
most recent log ending with the .1 extension. The SQL Server error log can be viewed using a text editor or SQL
Server Management Studio.

You should not restart the computer on which the SQL1 instance resides. It is not necessary to restart the entire
computer. You only need to restart the SQL Server 2008 instance.

You should not run the sp_cycle_agent_errorlog system stored procedure. This system stored procedure
creates a new SQL Server Agent error log, not a SQL Server error log.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Locate error information.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > Monitoring the Error Logs > Viewing the SQL Server Error Log

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
>sp_cycle_errorlog (Transact-SQL)

Item: 153 (Ref:Cert-70-432.6.2.6)

You are a database administrator for your company managing all your company's SQL Server 2008 databases.
The Prod_details database is the main database accessed by users in the company. The head office of the
company receives data from other branches. After the employees of the audit department have verified the data,
the database is updated with the data.

Some employees in the audit department complain that either the update process is very slow, or timeout error
messages are displayed when they try to update the data in the database. You suspect concurrency issues to be
the cause of the problem and decide to monitor the transaction locks held on the database objects by using the
SQL Server Profiler.

Which event should you monitor in the SQL Server Profiler?


the Lock:Timeout event
the Lock:Acquired event
the Lock:Released event
the Lock:Cancel event

Answer:
the Lock:Acquired event

Explanation:
You should monitor the Lock:Acquired event in the SQL Server Profiler to view details regarding the transaction
locks held on the database objects. The Lock:Acquired event indicates that a lock has been acquired on a
resource in the database. The details provided in the data columns for this event can be used to determine
information, such as the time at which a lock was acquired on a database object, the type of lock held on an
object, and the users holding locks on an object.

You should not monitor the Lock:Timeout event. The Lock:Timeout event indicates that the request for a lock
on a database resource has timed out because the resource was locked by another process. This event is used
to determine when timeouts for locking conditions occurred in a database. The details provided in the data
columns for this event can be used to determine whether the timeouts are adversely affecting database
performance and to determine the objects involved in locks that resulted in timeouts. The Lock:Timeout event
cannot be used to monitor the time and the type of locks held on the database objects.

You should not monitor the Lock:Released event because this event indicates the time at which a lock on a
database resource is released. This information will not be helpful in this scenario because you are required to
monitor the time at which locks are acquired on database resources.

You should not monitor the Lock:Cancel event because this event indicates the time at which the process of
acquiring a lock is cancelled. This information will not be helpful in this scenario because you are required to
monitor the time at which locks are acquired on database resources.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify concurrency problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference > Locks Event Category > Lock:Acquired Event Class

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference > Locks Event Category

Item: 163 (Ref:Cert-70-432.6.3.2)

You manage an instance of SQL Server 2008 named SQL1. SQL1 contains a database named Products that
stores product data for the company. You create a maintenance job for the Products database. The job runs
three Transact-SQL statements to perform maintenance tasks on the Products database. You configure the job
to run every evening at 8:00 P.M. You verify the next day that the job executed as scheduled.

After three weeks, you discover that the job has failed and is no longer running as scheduled. You want to
determine the cause of the failure.

Which system stored procedure should you use?


sp_help_jobactivity
sp_help_jobhistory
sp_help_jobsteplog
sp_help_notification

Answer:
sp_help_jobhistory

Explanation:
You should use the sp_help_jobhistory system stored procedure. The sp_help_jobhistory system stored
procedure provides information about the jobs configured on SQL servers in a multiserver administration domain.
You can call this stored procedure to obtain information about the cause of a job failure.
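For example, a sketch of such a call (the job name is hypothetical):

```sql
-- Return the full job history, including step-level messages,
-- for a single SQL Server Agent job.
EXEC msdb.dbo.sp_help_jobhistory
    @job_name = N'Products maintenance',  -- hypothetical job name
    @mode = N'FULL';                      -- FULL returns all history columns
```

The run_status and message columns in the output indicate which step failed and why.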


You should not use the sp_help_jobactivity system stored procedure because this stored procedure does not
provide information about the cause of a SQL Server Agent job failure. The sp_help_jobactivity system stored
procedure provides information about the runtime state of SQL Server Agent jobs.

You should not use the sp_help_jobsteplog system stored procedure because this stored procedure does not
provide information about the cause of a SQL Server Agent job failure. The sp_help_jobsteplog system stored
procedure provides metadata for a particular SQL Server Agent job step.

You should not use the sp_help_notification system stored procedure because this stored procedure does not
provide information about the cause of a SQL Server Agent job failure. The sp_help_notification system stored
procedure returns a list of alerts for a given operator or a list of operators for a given alert.

Objective:
Monitoring and Troubleshooting SQL Server

Sub-Objective:
Identify SQL Agent job execution problems.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Agent Stored Procedures (Transact-SQL)
> sp_help_jobhistory (Transact-SQL)

Optimizing SQL Server Performance


Item: 1 (Ref:Cert-70-432.7.5.1)

You manage an instance of SQL Server 2008 named SQL1. You install an Online Transaction Processing
(OLTP) database application on SQL1.

You want to monitor how server performance is affected by a large number of users connecting and
disconnecting from SQL1. You want to identify the number of users connecting and disconnecting per second
from SQL1.

Which performance object should you use?


SQLServer:General Statistics
SQLServer:Exec Statistics
SQLServer:SQL Statistics
SQLServer:Wait Statistics

Answer:
SQLServer:General Statistics

Explanation:
You should use the SQLServer:General Statistics performance object. The SQLServer:General Statistics
performance object provides counters that can be used to monitor general server-wide activity. For example, you
can use this performance object to identify the number of current connections made to an instance of SQL Server
and the number of users connecting and disconnecting per second from an instance of SQL Server. You can also
query the sys.dm_exec_sessions Dynamic Management View (DMV) to identify all active user connections.
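A sketch of such a DMV query:

```sql
-- Count the active user sessions per login; is_user_process = 1
-- filters out the internal system sessions.
SELECT login_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY login_name
ORDER BY session_count DESC;
```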

You should not use the SQLServer:Exec Statistics performance object because this performance object cannot
be used to identify the number of users connecting and disconnecting per second from a SQL Server instance.
The SQLServer:Exec Statistics object is a SQL Server performance object that provides counters to monitor
various execution statistics on a SQL Server instance.

You should not use the SQLServer:SQL Statistics performance object because this performance object cannot
be used to identify the number of users connecting and disconnecting per second from a SQL Server instance.
The SQLServer:SQL Statistics object provides counters that monitor compilation and the type of requests sent
to a SQL Server instance.

You should not use the SQLServer:Wait Statistics performance object because this performance object cannot
be used to identify the number of users connecting and disconnecting per second from a SQL Server instance.
The SQLServer:Wait Statistics object provides performance counters that report information about wait status.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Collect performance data by using System Monitor.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Resource Usage (System Monitor) > Using SQL Server Objects > SQL Server, General Statistics Object

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance > Monitoring
and Tuning for Performance > Evaluating Performance > Determining User Activity

Item: 12 (Ref:Cert-70-432.7.3.3)

You manage an instance of SQL Server 2008. You discover some deadlocks on one of the databases stored in
the instance. You want to start the global trace flag 1204 to obtain information about locks and resources
participating in these deadlocks.

What is the recommended action you should take to start global trace flags?
j Use the sqlservr.exe -T command.
k
l
m
n
j Use the sqlservr.exe -m command.
k
l
m
n

j Use the DBCC TRACEON Transact-SQL statement.


k
l
m
n

j Use the sp_trace_setstatus system stored procedure.


k
l
m
n

Answer:
Use the sqlservr.exe -T command.

Explanation:
You should use the sqlservr.exe -T command to start global trace flags. Trace flag 1204 is a global trace flag.
You must enable a global trace flag globally. Otherwise, the trace flag will not have any effect. The recommended
method of enabling global trace flags is to start the instance by using the sqlservr.exe -T command. The -T
parameter is used to specify the trace number that should be enabled at startup. When you enable the deadlock
flags globally in SQL Server 2008, the deadlock reporting information is recorded in the SQL Server error log.
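As a sketch of the commands discussed above (the flag number and verification steps shown are illustrative):

```sql
-- Start the instance with the deadlock trace flag enabled globally
-- (run from the command line; -T can be repeated for multiple flags):
--   sqlservr.exe -T1204

-- Once the instance is running, list the trace flags currently active:
DBCC TRACESTATUS(-1);

-- DBCC TRACEON can also enable the flag globally with the -1 argument,
-- although it is not the recommended method for global flags:
DBCC TRACEON (1204, -1);
```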

You should not use the sqlservr.exe -m command because the -m parameter is used to start an instance of SQL
Server in single-user mode. The -m parameter cannot be used to start global trace flags.

You should not use the DBCC TRACEON Transact-SQL statement. This statement is used to enable a specified
trace flag, but it is not the recommended method for starting global trace flags.

You should not use the sp_trace_setstatus system stored procedure because this stored procedure does not
allow you to start global trace flags. The sp_trace_setstatus system stored procedure is used to start, stop, or
close a trace created by using the sp_trace_create system stored procedure.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Collect trace data by using SQL Server Profiler.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > Trace Flags (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > DBCC (Transact-SQL) > DBCC TRACEON (Transact-SQL)

Item: 14 (Ref:Cert-70-432.7.6.1)

You are the database administrator for your company and manage all the company's SQL Server 2008
databases. Database users report that overall performance of the database has degraded. You suspect that lack
of memory is the cause of the degrading performance.

Before you add memory to the SQL server, you want to monitor the amount of memory used by SQL Server.

Which performance monitor counters should you add to the counter log? (Choose three.)

Processor: %Processor Time

Process: Working Set

SQLServer: Buffer Manager: Buffer Cache Hit Ratio

SQLServer: Locks: Number of deadlocks/sec

SQLServer: Memory Manager: Total Server Memory (KB)

Answer:
Process: Working Set
SQLServer: Buffer Manager: Buffer Cache Hit Ratio
SQLServer: Memory Manager: Total Server Memory (KB)


Explanation:
You can use a counter log in System Monitor to monitor the amount of memory used by SQL Server 2008. You
should use the following performance counters to monitor the amount of memory being used by SQL Server:

 Process: Working Set - Monitors if SQL Server 2008 is using too much memory. This number should not
be consistently below the amount of memory that is set in the min server memory and max server
memory server options.
 SQLServer: Buffer Manager: Buffer Cache Hit Ratio - Reflects the percentage of data requests that
were fulfilled from the cache without requiring the need to fetch the data from the disk. This number should
be greater than 90 percent.
 SQLServer: Memory Manager: Total Server Memory (KB) - Indicates the total amount of dynamic
memory currently consumed by the server. This value should be compared against the amount of physical
memory. If the Total Server Memory is consistently high, you should add memory to the server.

The Processor: %Processor Time counter cannot be used to monitor the amount of memory used by SQL Server.
This counter indicates the percentage of elapsed time that the processor spends executing non-idle threads.

The SQLServer: Locks: Number of deadlocks/sec counter measures the number of lock requests per second
that resulted in a deadlock. This counter cannot be used to determine if memory needs to be added to the SQL
Server.

There are many SQL Server performance objects that are added to the Performance Monitor when you install
SQL Server 2008. The performance objects are divided into 23 categories. A description of some of the SQL
Server performance objects is as follows:

 SQLServer: Access Methods - Searches through and categorizes access methods. For example, you
could use its counters to determine the number of table scans.
 SQLServer: Buffer Manager - Provides memory buffer information.
 SQLServer: CLR - Provides common language runtime (CLR) information.
 SQLServer: Databases - Provides performance information about a specific database.
 SQLServer: Deprecated Features - Provides deprecated features information.
 SQLServer: Locks - Provides individual lock request information.
 SQLServer: Memory Manager - Provides SQL Server memory usage information.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Use Performance Studio.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Resource Usage (System Monitor) > Monitoring Memory Usage

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Resource Usage (System Monitor) > Using SQL Server Objects

Item: 15 (Ref:Cert-70-432.7.6.2)


You are the SQL administrator for your company. A SQL Server 2008 instance named SQL3 contains a database
named Sales. The Sales database is accessed throughout the day.

You need to monitor the amount of I/O generated by the instance.

Which two performance monitor counters should you monitor? (Choose two. Each correct answer represents part
of the solution.)
SQLServer: Databases: Transactions/sec

SQLServer: Databases: Write Transactions/sec

SQL Server: Buffer Manager: Page reads/sec

SQL Server: Buffer Manager: Page writes/sec

SQL Server: General Statistics: Logins/sec

SQL Server: General Statistics: Logouts/sec

Answer:
SQL Server: Buffer Manager: Page reads/sec
SQL Server: Buffer Manager: Page writes/sec

Explanation:
You should use the SQL Server: Buffer Manager: Page reads/sec and SQL Server: Buffer Manager: Page
writes/sec counters to monitor the amount of I/O generated by the instance. These counters are helpful when
monitoring disk usage.

All of the other options are incorrect because none of these counters would be helpful in determining the amount
of I/O generated by the instance. The SQLServer: Databases: Transactions/sec counter displays the number of
transactions started each second. The SQLServer: Databases: Write Transactions/sec counter displays the
number of write transactions committed each second. The SQL Server: General Statistics: Logins/sec counter
displays the total number of logins per second. The SQL Server: General Statistics: Logouts/sec counter
displays the total number of logouts per second.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Use Performance Studio.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Resource Usage (System Monitor) > Monitoring Disk Usage

Item: 37 (Ref:Cert-70-432.7.3.2)

You are database administrator for your company. You manage an instance of SQL Server 2008 named SQL1.
You create a new trace for tracing server-level events by using the sp_trace_create system stored procedure on
SQL1. Next, you want to start the trace.


Which system stored procedure should you use?


sp_trace_generateevent

sp_trace_setevent

sp_trace_setfilter

sp_trace_setstatus

Answer:
sp_trace_setstatus

Explanation:
You should use the sp_trace_setstatus system stored procedure. SQL Server 2008 contains Transact-SQL
system stored procedures that can be used for creating traces on a SQL Server instance. These system stored
procedures can be used as an alternative to SQL Server Profiler for creating and running traces. The
sp_trace_create system stored procedure allows you to create a new trace definition. The new trace created by
using this stored procedure remains in a stopped state unless it is started. While creating a trace by using the
sp_trace_create system stored procedure, you can use the @tracefile parameter to specify the location in which
the trace file will be stored. When you create a trace by using SQL Server Profiler, it is saved in the C:\Program
Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG folder by default. To start, stop, or close
a trace, you should use the sp_trace_setstatus system stored procedure. The complete syntax for the
sp_trace_setstatus system stored procedure is as follows:

sp_trace_setstatus [ @traceid = ] trace_id , [ @status = ] status;

The @traceid parameter specifies the ID of the trace that you want to modify. The @status parameter is used to
configure the action that should be implemented for the trace. The possible values for the @status parameter are
0, 1, and 2. A value of 0 is used to stop a trace. A value of 1 is used to start a trace. A value of 2 is used to close
a trace and remove the trace's information from the server.
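The create/start/stop/close lifecycle described above can be sketched as follows (the file path is illustrative, and error handling is omitted):

```sql
DECLARE @TraceID int;

-- Create a trace definition; the trace starts in a stopped state.
-- N'C:\Traces\DeadlockTrace' is an illustrative path.
EXEC sp_trace_create @traceid = @TraceID OUTPUT,
                     @options = 0,
                     @tracefile = N'C:\Traces\DeadlockTrace';

-- Start the trace (@status = 1).
EXEC sp_trace_setstatus @traceid = @TraceID, @status = 1;

-- Stop the trace (@status = 0).
EXEC sp_trace_setstatus @traceid = @TraceID, @status = 0;

-- Close the trace and remove its definition from the server (@status = 2).
EXEC sp_trace_setstatus @traceid = @TraceID, @status = 2;
```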

You should not use the sp_trace_generateevent system stored procedure because this stored procedure does
not allow you to start a trace. The sp_trace_generateevent system stored procedure is used to create a user-
defined event.

You should not use the sp_trace_setevent system stored procedure because this stored procedure does not
allow you to start a trace. The sp_trace_setevent system stored procedure is used to add or remove an event or
event column in a trace.

You should not use the sp_trace_setfilter system stored procedure because this stored procedure does not
allow you to start a trace. The sp_trace_setfilter system stored procedure is used to apply a filter to a trace.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Collect trace data by using SQL Server Profiler.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > Introducing SQL Trace > Using SQL Trace


MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Stored Procedures (Transact-SQL) > SQL Server Profiler Stored Procedures (Transact-
SQL) > sp_trace_setstatus (Transact-SQL)

Item: 41 (Ref:Cert-70-432.7.1.2)

You are the SQL administrator for your company. A SQL Server 2008 instance named Prod contains all
production databases. You use the Resource Governor to manage resource usage on Prod.

You configure a new resource pool and workload group. Later that day, you discover that the new workload is not
being enforced.

What should you do?


Restart the Prod instance.

Run the ALTER RESOURCE GOVERNOR RECONFIGURE statement.

Run the ALTER RESOURCE GOVERNOR RESET STATISTICS statement.

Restart the SQL Server service.

Answer:
Run the ALTER RESOURCE GOVERNOR RECONFIGURE statement.

Explanation:
You should run the ALTER RESOURCE GOVERNOR RECONFIGURE statement. Whenever you change
Resource Governor settings, including workload group or resource policy settings, you need to issue this
statement for the changes to take effect. By default, the Resource Governor is not enabled.
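For example, a workload group change only takes effect after the reconfigure step; the group name below is hypothetical:

```sql
-- ReportsGroup is a hypothetical workload group name.
ALTER WORKLOAD GROUP ReportsGroup
WITH (REQUEST_MAX_CPU_TIME_SEC = 30);

-- Without this statement, the change above is not enforced:
ALTER RESOURCE GOVERNOR RECONFIGURE;
```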

You should not restart the Prod instance. Restarting the instance will result in the old Resource Governor settings
being enforced. The Resource Governor must be explicitly reconfigured for the new settings to take effect.

You should not run the ALTER RESOURCE GOVERNOR RESET STATISTICS statement. This statement only
resets the Resource Governor statistics. It does not reconfigure the Resource Governor.

You should not restart the SQL Server service. Restarting the service will result in the old Resource Governor
settings being enforced. The Resource Governor must be explicitly reconfigured for the new settings to take
effect.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Implement Resource Governor.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > ALTER RESOURCE GOVERNOR (Transact-SQL)


Item: 47 (Ref:Cert-70-432.7.6.5)

You are the database administrator for your company. You upgrade a SQL Server 2000 computer named SQL1
to SQL Server 2008. You configure the SQLServer:Deprecated Features performance object to monitor the use
of features that have been deprecated in SQL Server 2008. During the monitoring process, you discover that the
text, ntext, and image data types are being used in an application on SQL1.

You want to modify the data types in the application to ensure that these data types are not encountered by the
SQLServer:Deprecated Features performance object.

Which data type should you use to replace the text data types in the application?
char

nchar

varchar

varchar(max)

Answer:
varchar(max)

Explanation:
You should use the varchar(max) data type to replace the text data types. Several features have been
deprecated in SQL Server 2008. The use of these deprecated features can be monitored by using the
SQLServer:Deprecated Features performance object and trace events. When you configure the
SQLServer:Deprecated Features performance object on an instance of SQL Server 2008, the use of text, ntext,
or image data types is also monitored by SQL Server. Because these data types are deprecated and will be
removed in a future version of SQL Server, you should modify the application to replace them. You can use the
nvarchar(max) data type to replace ntext data types, and the varbinary(max) data type to replace image data types.

You should not use the char, nchar, or varchar data types to replace the text data types in the application. The
char, nchar, and varchar data types do not support large amounts of data. Therefore, it is recommended that
you use the varchar(max), nvarchar(max), and varbinary(max) data types instead of the text, ntext, and
image data types.
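A sketch of the replacement, using a hypothetical table and column names:

```sql
-- dbo.ProductDocs and its columns are hypothetical examples.
ALTER TABLE dbo.ProductDocs ALTER COLUMN Description varchar(max);   -- replaces text
ALTER TABLE dbo.ProductDocs ALTER COLUMN LocalNotes nvarchar(max);   -- replaces ntext
ALTER TABLE dbo.ProductDocs ALTER COLUMN Photo varbinary(max);       -- replaces image
```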

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Use Performance Studio.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Resource Usage (System Monitor) > Using SQL Server Objects > SQL Server, Deprecated Features Object

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Getting Started > Initial Installation > Upgrading to SQL
Server 2008 > Backward Compatibility > SQL Server Database Engine Backward Compatibility > Deprecated
Database Engine Features in SQL Server 2008


Item: 66 (Ref:Cert-70-432.7.1.3)

You are the database administrator of your company. You need to manage the resource usage on an instance of
SQL Server 2008 named SQL1. You decide to enable the Resource Governor.

Which Transact-SQL statement should you run?


ALTER RESOURCE GOVERNOR ENABLE;

ALTER RESOURCE GOVERNOR WITH RECONFIGURE;

ALTER RESOURCE GOVERNOR RECONFIGURE;

ALTER RESOURCE GOVERNOR RESET STATISTICS;

Answer:
ALTER RESOURCE GOVERNOR RECONFIGURE;

Explanation:
You should run the following Transact-SQL statement:

ALTER RESOURCE GOVERNOR RECONFIGURE;

The Resource Governor is a new feature in SQL Server 2008 that allows you to control SQL Server workload and
resources by specifying limits on resource consumption by incoming requests. The Resource Governor uses
resource pools, workload groups, and classification functions to control resource consumption.

Resource pools represent the physical resources of the SQL server. By using resource pools, you can configure a
minimum and maximum percentage for CPU use and a minimum and maximum percentage for memory use.
Workload groups are containers for session requests that match the classification criteria that are applied to each
request. Workload groups are assigned to resource pools, and they are used to group applications and limit SQL
Server resources. By using workload groups, you can configure settings, such as a maximum memory allocation
and a maximum CPU time limit for requests. Classification functions are used to assign workloads to workload
groups.

When you install SQL Server 2008, the Resource Governor is disabled by default. To be able to use the
Resource Governor, you must first enable it. To enable the Resource Governor, you can either use the ALTER
RESOURCE GOVERNOR RECONFIGURE Transact-SQL statement or use SQL Server Management Studio. To
enable the Resource Governor by using SQL Server Management Studio, you should right-click the Resource
Governor node under the Management node and select the Enable option.
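A minimal sketch of enabling (and later disabling) the Resource Governor with Transact-SQL; dbo.fnClassifier is a hypothetical classifier function:

```sql
-- Enable the Resource Governor with its current configuration:
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- A classifier function can be registered before reconfiguring:
-- ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
-- ALTER RESOURCE GOVERNOR RECONFIGURE;

-- To disable the Resource Governor again:
-- ALTER RESOURCE GOVERNOR DISABLE;
```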

You should not run the following Transact-SQL statement:

ALTER RESOURCE GOVERNOR ENABLE;

ENABLE is not a valid clause of the ALTER RESOURCE GOVERNOR statement.

You should not run the following Transact-SQL statement:

ALTER RESOURCE GOVERNOR WITH RECONFIGURE;

The WITH clause is used to specify a CLASSIFIER_FUNCTION parameter value. Specifying the WITH clause
with the RECONFIGURE parameter will generate an error.

You should not run the following Transact-SQL statement:

ALTER RESOURCE GOVERNOR RESET STATISTICS;


The RESET STATISTICS parameter is used to reset statistics on all workload groups and resource pools. This
parameter cannot be used to enable the Resource Governor.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Implement Resource Governor.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > ALTER RESOURCE GOVERNOR (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance > Managing
SQL Server Workloads with Resource Governor > Introducing Resource Governor

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance > Managing
SQL Server Workloads with Resource Governor > Resource Governor Concepts

Item: 69 (Ref:Cert-70-432.7.6.4)

You are the SQL administrator for your company. You manage two SQL Server 2008 instances named
SQL_Prod and SQL_Test. You have configured data collection to occur on both instances. The instances upload
the data collection set data to the Management Data Warehouse located on SQL_Test.

A non-cached data collection set named Perf_collection that you created runs on a schedule to collect data from
SQL_Test. However, you notice that the performance of SQL_Test is adversely affected when Perf_collection
runs. You decide to configure Perf_collection so that different schedules are used for data collection and upload.

What should you do?


Stop data collection on SQL_Test. Edit the schedule of the Perf_collection data collection set. Restart data collection on SQL_Test.

Stop data collection on SQL_Test. Change the Perf_collection data collection set to cached, and edit the schedule of the data collection set. Restart data collection on SQL_Test.

Stop the Perf_collection data collection set. Edit the schedule of the Perf_collection data collection set. Restart the Perf_collection data collection set.

Stop the Perf_collection data collection set. Change the Perf_collection data collection set to cached, and edit the schedule of the data collection set. Restart the Perf_collection data collection set.

Answer:
Stop the Perf_collection data collection set. Change the Perf_collection data collection set to
cached, and edit the schedule of the data collection set. Restart the Perf_collection data
collection set.

Explanation:
You should perform the following steps:


1. Stop the Perf_collection data collection set.
2. Change the Perf_collection data collection set to cached, and edit the schedule of the data collection set.
3. Restart the Perf_collection data collection set.

To change the settings of a data collection set, you only need to stop the set. Because the data collection set is
non-cached, both the collection and upload use the same schedule. To configure them to use different schedules,
you will need to change the data collection set to cached and edit the schedules accordingly. Then, you need to
restart the data collection set.
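The same steps can also be performed with the data collector stored procedures in msdb. This is a sketch under the assumption that the set is registered under the name Perf_collection:

```sql
USE msdb;

DECLARE @id int;
SELECT @id = collection_set_id
FROM dbo.syscollector_collection_sets
WHERE name = N'Perf_collection';

-- 1. Stop the collection set.
EXEC dbo.sp_syscollector_stop_collection_set @collection_set_id = @id;

-- 2. Switch it to cached mode (@collection_mode: 0 = cached, 1 = non-cached),
--    then assign separate collection and upload schedules.
EXEC dbo.sp_syscollector_update_collection_set
     @collection_set_id = @id,
     @collection_mode = 0;

-- 3. Restart the collection set.
EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = @id;
```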

You should not stop data collection on SQL_Test, edit the schedule of the Perf_collection data collection set,
and restart data collection on SQL_Test. Data collection can only be stopped when all data collection sets are
stopped. In addition, because the data collection set is non-cached, collection and upload share a single
schedule; you must change the mode to cached before you can configure separate schedules.

You should not stop data collection on SQL_Test, change the Perf_collection data collection set to cached, edit
the schedule of the data collection set, and restart data collection on SQL_Test. Data collection can only be
stopped when all data collection sets are stopped. You only need to edit a single data collection set. Stopping
data collection would prevent all other data collection sets from running.

You should not stop the Perf_collection data collection set, edit the schedule of the Perf_collection data
collection set, and restart the Perf_collection data collection set. Because the data collection set is non-cached,
you must change the mode to cached before you can configure separate collection and upload schedules.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Use Performance Studio.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Data
Collection > Managing Data Collection How-to Topics > Managing Data Collection Using SQL Server
Management Studio > How to: View or Change Collection Set Schedules

Item: 75 (Ref:Cert-70-432.7.3.1)

You are a database administrator for your company. The company stores all its business information in a
database named Prod1 residing on a SQL Server instance Srv1, which in turn resides on the Sql1 computer. The
Prod1 database is 35 GB in size and occupies almost 90 percent of the hard disk space allocated to Sql1.
Another SQL Server instance Srv2, located on server computer Sql2 is configured as a linked server on Srv1.

You manage the Prod1 database. To improve query performance and optimize indexes, queries and stored
procedures, you frequently run traces using SQL Server Profiler. The trace information is accessed by other team
members using Transact-SQL queries. When you run SQL Server Profiler, other team members complain that the
performance of Srv1 is adversely affected.

You want to ensure that when you run SQL Server Profiler, the performance of the Prod1 database is not
affected.

What should you do?


Run SQL Profiler on Srv1, and save trace logs in a database table on Sql2.

Run SQL Profiler on Srv1, and save trace logs in a database table on Sql1.

Run SQL Profiler on Srv1, and save trace logs in a trace file on Sql1.

Run SQL Profiler on Srv1, and save trace logs in a trace file on Sql2.


Answer:
Run SQL Profiler on Srv1, and save trace logs in a database table on Sql2.

Explanation:
You should run SQL Profiler on Srv1 and save trace logs in a database table on Sql2. In this scenario, there is
limited free space on Sql1 and storing the logs on Sql1 might degrade the performance of the Prod1 database.
The trace logs should be stored in a database table because other team members will access the information
using Transact-SQL queries. Therefore, storing the trace log information in a trace file will not meet the
requirements in this scenario.

You should not run SQL Profiler on Srv1 and save the trace logs in a database table on Sql1. Saving the trace
logs on Sql1 might cause degradation in the performance of the Prod1 database because there is limited free
space on Sql1. Additionally, the Srv2 instance on Sql2 is configured as a linked server on Srv1. This can result in
the trace log information being transferred from Srv1 to Srv2, and vice versa.

You should not run SQL Profiler on Srv1 and save trace logs in a trace file on Sql1. In this scenario, other team
members will access the tracing information using Transact-SQL queries. Storing the trace log information in a
trace file will not allow users to access this information using Transact-SQL queries. Additionally, saving the trace
logs on the Sql1 computer might cause performance of the Prod1 database to degrade because there is limited
free space on the Sql1 computer.

You should not run SQL Profiler on Srv1 and save trace logs in a trace file on Sql2. In this scenario, other team
members will access the tracing information using Transact-SQL queries. Storing the trace log information in a
trace file will not allow users to access this information using Transact-SQL queries.

SQL Server Profiler traces are used to record information based on events that occur. They are used mainly to
diagnose computer issues. Unlike SQL Server Profiler traces, trace flags are used to temporarily configure or
disable a particular behavior to diagnose performance issues or debug a system. Two types of trace flags are
available: global and session. Configured at the server level, global trace flags are visible to every server
connection and are enabled using the sqlservr.exe -T tracenumber command. Active for a single connection,
session trace flags are visible only to that connection and are enabled using the DBCC TRACEON statement.
The following are some of the trace flags available in SQL Server 2008:

 260 - A global or session trace flag that returns extended stored procedure dynamic link library (DLL)
version information.
 1204 - A global trace flag that returns information on a deadlock, including the statement affected by the
deadlock.
 1222 - A global trace flag that returns information on a deadlock, including the statement affected by the
deadlock.
 3205 - A global or session trace flag that disables hardware compression for tape drives.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Collect trace data by using SQL Server Profiler.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > Introducing SQL Server Profiler > Using SQL Server Profiler


TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > Introducing SQL Server Profiler >Using SQL Server Profiler > Saving Traces and Trace Templates

Item: 76 (Ref:Cert-70-432.7.2.1)

You are the database administrator for your company and manage all the company's SQL Server 2008
databases. The production database named Prod1 contains all the product and sales data for the company.

Database users complain that the database performance has degraded. You must diagnose the problem and
tune the database by using the Database Engine Tuning Advisor.

You create a workload by using SQL Server Profiler and want to use this workload to tune the database using the
Database Engine Tuning Advisor.

Which statement is true?


The trace file and the trace table workload must exist on the local server if you want to use them to tune the database using the Database Engine Tuning Advisor.

The data generated as tuning recommendations is stored in the trace file used while tuning the database.

The Database Engine Tuning Advisor will generate recommendations for all the databases on the server.

The Database Engine Tuning Advisor cannot use a trace table as a workload while trace events are being written to it.

Answer:
The Database Engine Tuning Advisor cannot use a trace table as a workload while trace events are
being written to it.

Explanation:
The Database Engine Tuning Advisor cannot use a trace table as a workload while trace events are still being
written to it. The process of writing trace events to the trace table must be finished before the trace table can
be used as a workload for tuning the database.

The option stating that the trace file and trace table workload must exist on the local server if you want to use
them for tuning the database using the Database Engine Tuning Advisor is incorrect. The trace file to be used as
a workload can also exist on the remote server. The Database Engine Tuning Advisor can use a trace file located
either on a local or on a remote server as a workload for tuning the database. If you are specifying a trace table to
be used as a workload, the trace table must exist on the local server only. The Database Engine Tuning Advisor
cannot use a trace table located on a remote server as a workload.

The option stating that the data generated as tuning recommendations is stored in the trace file used while tuning
the database is incorrect. The data generated as tuning recommendations is stored in the msdb database.

The option stating that the Database Engine Tuning Advisor will generate recommendations for all the databases
on the server is incorrect. The Database Engine Tuning Advisor will generate recommendations only for those
databases or database objects that you have selected while performing the tuning operation.

Objective:
Optimizing SQL Server Performance


Sub-Objective:
Use the Database Engine Tuning Advisor.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance >
Performance Monitoring and Tuning How-to Topics > Database Engine Tuning Advisor How-to Topics > How to:
Tune a Database > How to: Tune a Database by Using Database Engine Tuning Advisor

TechNet > TechNet Library > Server Products and Technologies > SQL Server >SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Performance > Monitoring
and Tuning for Performance > Tuning the Physical Database Design > Using Database Engine Tuning Advisor >
Using Session Monitor to Evaluate Tuning Recommendations

Item: 79 (Ref:Cert-70-432.7.1.1)

You are the SQL administrator for your company. A SQL Server 2008 instance named SQL1 contains several
production databases. Because of the number of users accessing the databases, performance is the primary
concern.

You configure a SQL trace that monitors the CPU Threshold Exceeded event class. After the trace has been running
for several days, it has not generated any warnings for the condition. You are concerned that the specified
threshold is too large. You want to reset the threshold to its default value.

What should you do?


Run the sp_configure system stored procedure.
Execute the ALTER WORKLOAD GROUP statement with the WITH REQUEST_MAX_CPU_TIME_SEC parameter set to 0 seconds.
Run the sp_trace_setevent system stored procedure.
Execute the CREATE WORKLOAD GROUP statement with the WITH REQUEST_MAX_CPU_TIME_SEC parameter set to 0 seconds.

Answer:
Execute the ALTER WORKLOAD GROUP statement with the WITH
REQUEST_MAX_CPU_TIME_SEC parameter set to 0 seconds.

Explanation:
You should execute the ALTER WORKLOAD GROUP statement with the WITH
REQUEST_MAX_CPU_TIME_SEC parameter set to 0 seconds. This will configure the threshold to a value of 0
seconds, which is the default value.

You should not run the sp_configure system stored procedure. The sp_configure system stored procedure is
not used to manage the threshold for the CPU Threshold Exceeded event class. It is used to manage server-
wide settings.

You should not run the sp_trace_setevent system stored procedure. This system stored procedure is used to
add or remove events from a trace. It cannot be used to manage the CPU threshold for the Resource Governor.

You should not execute the CREATE WORKLOAD GROUP statement with the WITH
REQUEST_MAX_CPU_TIME_SEC parameter set to 0 seconds. The workload group already exists, so this setting is
already configured with a value. Therefore, you need to edit the value by using the ALTER WORKLOAD GROUP
statement.
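As a sketch (the workload group name below is hypothetical; the scenario does not name one), the reset looks like this:

```sql
-- Reset the per-request CPU threshold of an existing workload group to
-- its default of 0 seconds, then apply the pending configuration.
ALTER WORKLOAD GROUP MyGroup
WITH (REQUEST_MAX_CPU_TIME_SEC = 0);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```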

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Implement Resource Governor.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 >Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference > Errors and Warnings Event Category (Database Engine) > CPU
Threshold Exceeded Event Class

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > ALTER WORKLOAD GROUP (Transact-SQL)

Item: 86 (Ref:Cert-70-432.7.6.7)

You are the database administrator of your company. You are creating a new trace on an instance of SQL Server
2008 named SQL1. You want to capture events when a compiled plan is cached for the first time, recompiled, or
evicted from the plan cache.

You navigate to the Events Selection tab in the Trace Properties dialog box.

Which event class should you monitor?


Plan Guide Successful
Performance Statistics
Showplan All
Plan Guide Unsuccessful

Answer:
Performance Statistics

Explanation:
You should monitor the Performance Statistics event class. You can create a new trace in SQL Server Profiler
to capture events as they occur on an instance of SQL Server 2008. The Events Selection tab in the Trace
Properties dialog box contains a complete list of event classes and their event categories that you can configure
to capture information about specific events. The Performance Statistics event class under the Performance
event category allows you to capture events that occur when a compiled plan is cached for the first time,
recompiled, or evicted from the plan cache.

You can also use a trace to detect queries that have exceeded the CPU threshold value configured for Resource
Governor. To do this, you should configure the CPU Threshold Exceeded event class under the Errors and
Warnings event category.

You should not monitor the Plan Guide Successful, Showplan All, or Plan Guide Unsuccessful event classes
because these event classes cannot be used to capture events when a compiled plan is cached for the first time,
recompiled, or evicted from the plan cache. The Plan Guide Successful event class captures events when SQL
Server successfully produces an execution plan for a query or batch that contains a plan guide. The Showplan
All event class displays the query plan along with complete compile-time details of the SQL statement that is
being executed. The Plan Guide Unsuccessful event class captures events when SQL Server is unsuccessful in
producing an execution plan for a query or batch that contains a plan guide.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Use Performance Studio.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference > Performance Event Category > Performance Statistics Event
Class

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference

Item: 92 (Ref:Cert-70-432.7.4.1)

You are the SQL administrator for your company. A SQL Server 2008 instance named SQL_Prod resides on a
computer named Server1. SQL_Prod contains three databases named Accounting, Research, and HR.

You need to obtain a list of transactions that are occurring in the HR database. The information must include the
transaction ID, transaction state, and transaction type.

What should you do?


Query the sys.dm_tran_database_transactions dynamic management view in the Accounting database.
Query the sys.dm_tran_database_transactions dynamic management view in the HR database.
Query the sys.dm_tran_database_transactions dynamic management view in the Research database.
Query the sys.dm_tran_database_transactions dynamic management view in the SQL_Prod instance.

Answer:
Query the sys.dm_tran_database_transactions dynamic management view in the HR database.

Explanation:
You should query the sys.dm_tran_database_transactions dynamic management view in the HR database.
This dynamic management view (DMV) returns the transaction ID, database ID, transaction start time, transaction
type, transaction state, and other information on the transactions for a database.
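For example, a query along these lines (the column list is drawn from the DMV's documented columns) returns the requested details for transactions in the HR database:

```sql
-- Run in the context of the HR database; the WHERE clause keeps only
-- rows for this database, since the DMV reports a database_id column.
USE HR;
SELECT transaction_id,
       database_transaction_state,
       database_transaction_type,
       database_transaction_begin_time
FROM sys.dm_tran_database_transactions
WHERE database_id = DB_ID();
```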

You should not query the sys.dm_tran_database_transactions DMV in the Accounting or Research database.
You need to obtain the information for the HR database, not for the Accounting or Research database.

You should not query the sys.dm_tran_database_transactions DMV in the SQL_Prod instance. The
sys.dm_tran_database_transactions DMV returns values for a particular database, not for all databases on the
instance.

DMVs are scoped either to the server or to a database. Using server-scoped DMVs requires the VIEW SERVER
STATE permission, and using database-scoped DMVs requires the VIEW DATABASE STATE permission.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Collect performance data by using Dynamic Management Views (DMVs).

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online >Database Engine > Technical Reference >Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL)
>Transaction Related Dynamic Management Views and Functions (Transact-SQL) >
sys.dm_tran_database_transactions (Transact-SQL)

TechNet > TechNet Library > Server Products and Technologies > SQL Server >SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL)

Item: 104 (Ref:Cert-70-432.7.6.6)

You are the database administrator of your company. You are creating a new trace on an instance of SQL Server
2008 named SQL1. You want to monitor all connection events when a client requests a connection to SQL1.

You navigate to the Events Selection tab in the Trace Properties dialog box.

Which event class should you monitor?


Audit Login
Audit Login GDR Event
Audit Login Failed
ExistingConnection

Answer:
Audit Login

Explanation:
You should monitor the Audit Login event class. SQL Server Profiler allows you to capture and save event data
to a table or a file for analysis. You can create a new trace in SQL Server Profiler to capture events as they occur
on an instance of SQL Server 2008. The Events Selection tab in the Trace Properties dialog box contains a
complete list of event classes and their event categories that you can configure to capture information about
specific events. To open the Trace Properties dialog box, you should open SQL Server Profiler, select the New
Trace option from the File menu, and connect to an instance of SQL Server. To collect information about all
connection events when a client requests a connection to an instance of SQL Server, you should configure the
Audit Login event class under the Security Audit event category. This event class allows you to capture all new
connection events since the trace was started.

You should not monitor the Audit Login GDR Event, Audit Login Failed, or ExistingConnection event classes
because these event classes cannot be used to capture information about when a client requests a connection to
an instance of SQL Server. The Audit Login GDR Event event class captures information about grant, revoke,
and deny actions on Windows account login rights for the sp_grantlogin, sp_revokelogin, and sp_denylogin
system stored procedures. The Audit Login Failed event class captures information about failed client login
attempts. The ExistingConnection event class captures information about properties of existing connections
when the trace was started.
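The same trace can also be defined in Transact-SQL rather than in SQL Server Profiler. The sketch below assumes a trace already created with sp_trace_create as trace ID 1; the event and column IDs used (14 for Audit Login, 11 for LoginName) should be verified against the sp_trace_setevent documentation:

```sql
-- Add the Audit Login event class (event ID 14) to an existing trace,
-- capturing the LoginName column (column ID 11) for each new connection.
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @traceid = 1, @eventid = 14, @columnid = 11, @on = @on;
```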

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Use Performance Studio.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference > Security Audit Event Category (SQL Server Profiler) > Audit Login
Event Class

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Monitoring
Events > SQL Server Event Class Reference

Item: 109 (Ref:Cert-70-432.7.6.3)

You are the SQL administrator for your company. You manage a SQL Server 2008 instance named SQL_1. The
instance contains two production databases used by your company.

You need to collect performance data on the growth of all the database and log files within both databases. The
data must be collected in the Management Data Warehouse. Your solution needs to use the same schedule for
the collection and upload of the data. Your solution must meet the requirements while using the least
administrative effort.

What should you do?


Start the Query Statistics Collection Set.
Start the Server Activity Collection Set.
Start the Disk Usage Collection Set.
Create and start a scheduled, cached custom data collection set.
Create and start a scheduled, non-cached custom data collection set.

Answer:
Start the Disk Usage Collection Set.

Explanation:
You should start the Disk Usage Collection Set, one of the three system data collector sets installed with SQL
Server 2008. This data collector set collects information about database and log file growth for all databases and
provides statistical information collected throughout the day. This data collector set is a non-cached data collector
set, meaning that the same schedule is used for both the collection and upload of the data.
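As a sketch, the system collection set can be started with the data collector stored procedures in msdb (the set name 'Disk Usage' matches the name shown in Object Explorer; verify it on your instance):

```sql
-- Start the Disk Usage system collection set; data collection and the
-- Management Data Warehouse must already be configured on the instance.
USE msdb;
EXEC dbo.sp_syscollector_start_collection_set @name = N'Disk Usage';
```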

You should not start the Query Statistics Collection Set. This data collection set is also installed with SQL
Server 2008. It collects statistical information regarding queries and is a cached data collection set. A cached data
collection set uses different schedules for the collection and upload of the data. It does not collect information
about database or log file growth.

You should not start the Server Activity Collection Set. This data collection set is also installed with SQL Server
2008. It collects statistical information on resource usage and performance information from the server and is a
cached data collection set. It does not collect information about database or log file growth.

You should not create and start a scheduled, cached custom data collection set. You do not need to create a
custom data collection set because a system data collection set already exists that will collect the required data.
Creating a custom data collection set would require more administrative effort than is necessary. In addition, you
should not create a cached data collection set because a cached data collection set uses different schedules for
the collection and upload of the data.

You should not create and start a scheduled, non-cached custom data collection set. You do not need to create a
custom data collection set because a system data collection set already exists that will collect the required data.
Creating a custom data collection set would require more administrative effort than is necessary.

A non-cached data collection set can be configured to run on a schedule or on demand. A scheduled data
collection set runs on a pre-configured schedule. An on-demand data collection set runs only when started by an
administrator.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Use Performance Studio.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Monitoring > Data
Collection > System Data Collection Sets

Item: 136 (Ref:Cert-70-432.7.4.2)

You are the database administrator of a SQL Server 2008 instance that contains several query plans. You want to
obtain information about a query plan that is cached by SQL Server for faster query execution.

Which Dynamic Management View (DMV) should you use?


sys.dm_exec_query_stats
sys.dm_exec_query_plan
sys.dm_exec_text_query_plan
sys.dm_exec_cached_plans

Answer:
sys.dm_exec_cached_plans

Explanation:
You should use the sys.dm_exec_cached_plans DMV. DMVs return information about server state, which can
be used to monitor the health of a server instance, tune the performance of the server, and diagnose problems.
The sys.dm_exec_cached_plans DMV is an execution-related DMV that can be used to obtain information about
query plans that are cached by SQL Server for faster query execution.
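As a sketch of typical usage, the DMV is often joined to sys.dm_exec_sql_text to pair each cached plan with the statement text it was compiled from:

```sql
-- List cached plans with their reuse counts and cached statement text.
SELECT cp.usecounts,
       cp.cacheobjtype,
       cp.objtype,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;
```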

You should not use the sys.dm_exec_query_stats DMV. The sys.dm_exec_query_stats DMV provides
aggregate performance data for cached query plans. This DMV can be used to gather performance information,
such as identifying the longest-running queries.

You should not use the sys.dm_exec_query_plan DMV. The sys.dm_exec_query_plan DMV provides query
plan information in Extensible Markup Language (XML) format for a specified batch.

You should not use the sys.dm_exec_text_query_plan DMV. The sys.dm_exec_text_query_plan DMV
provides query plan information in text format for a Transact-SQL batch or for a particular statement within a
batch.

Objective:
Optimizing SQL Server Performance

Sub-Objective:
Collect performance data by using Dynamic Management Views (DMVs).

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL) >
Execution Related Dynamic Management Views and Functions (Transact-SQL) > sys.dm_exec_cached_plans
(Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL) >
Execution Related Dynamic Management Views and Functions (Transact-SQL)

Performing Data Management Tasks


Item: 34 (Ref:Cert-70-432.5.1.3)

You are a database administrator for your company. You transfer data from the Prod1 database maintained on
the Sql1 instance of SQL Server 2008 into an external data file before sending the data file to vendors. You must
send all the data from the Tbl_auction_rate table.

You must use the bcp utility to accomplish this.

Which two bcp commands could you use to accomplish this? (Choose two. Each correct answer represents a
complete solution.)
bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" sql c:\Auctionrates.dat -c -T
bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" queryout c:\Auctionrates.dat -c -T
bcp Prod1.dbo.Tbl_auction_rate out c:\Auctionrates.dat -c -T
bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" out c:\Auctionrates.dat -c -T
bcp Prod1.dbo.Tbl_auction_rate sql c:\Auctionrates.dat -c -T

Answer:
bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" queryout c:\Auctionrates.dat -c -T
bcp Prod1.dbo.Tbl_auction_rate out c:\Auctionrates.dat -c -T

Explanation:
You could execute either of the following bcp commands:

bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" queryout c:\Auctionrates.dat -c -T

bcp Prod1.dbo.Tbl_auction_rate out c:\Auctionrates.dat -c -T

The bcp utility is used to copy data out of and into SQL Server. It is typically used to import delimited files or
XML data into a database, but you can also use it to export data. You can specify the format of the file you are
importing by creating format files; the bcp utility can generate these format files, including XML format files for
XML data.

In this scenario, the following command will export all the rows from the Tbl_auction_rate table to the
c:\Auctionrates.dat file:

bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" queryout c:\Auctionrates.dat -c -T

This statement specifies a Transact-SQL query that selects a result set containing all table rows from the
Prod1.dbo.Tbl_auction_rate table. The result set output is then directed to the c:\Auctionrates.dat file by using
the queryout argument. The queryout argument is specified when you want to perform a bulk copy operation
by executing a Transact-SQL query.

The following statement will also export all rows from the Tbl_auction_rate table to the c:\Auctionrates.dat file:

bcp Prod1.dbo.Tbl_auction_rate out c:\Auctionrates.dat -c -T

This statement specifies the name of the table from which the rows will be exported. The out argument specifies
that the contents are copied from a database table or a view to a file.

The following option is incorrect because sql is not a valid argument for the bcp utility:

bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" sql c:\Auctionrates.dat -c -T

The following option is incorrect because you cannot execute a Transact-SQL query with the out argument:

bcp "SELECT * FROM Prod1.dbo.Tbl_auction_rate" out c:\Auctionrates.dat -c -T

You must use the queryout argument if you specify a Transact-SQL query.

The following option is incorrect because sql is not a valid argument specified for the bcp utility:

bcp Prod1.dbo.Tbl_auction_rate sql c:\auctionrates.dat -c -T

Objective:
Performing Data Management Tasks

Sub-Objective:
Import and export data.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Tools Reference >
Command Prompt Utilities > bcp Utility

Item: 39 (Ref:Cert-70-432.5.3.3)

You are the database administrator of your company. You manage an instance of SQL Server 2008 that stores a
database named UserData. UserData is configured with the full recovery model and contains a large amount of
static data. To minimize the amount of space required for the data, you create a new read-only filegroup named
UserDataFileGrp and move the data to that filegroup. You also compress the UserDataFileGrp filegroup using
NTFS compression.

After several days, you receive a request to update the data for one of the users in the UserData database. To
achieve this, you want to modify the compressed data stored in UserDataFileGrp.

Which action or actions must you perform to be able to complete this task using minimum administrative effort?
(Choose all that apply. Each correct answer represents part of the solution.)
Uncompress the .ndf files for the UserDataFileGrp filegroup.
Move the data to the primary filegroup of the UserData database.
Set the UserDataFileGrp filegroup to read/write.
Set the recovery model of the UserData database to simple.

Answer:
Uncompress the .ndf files for the UserDataFileGrp filegroup.
Set the UserDataFileGrp filegroup to read/write.

Explanation:
You should uncompress the .ndf files for the UserDataFileGrp filegroup and set the UserDataFileGrp filegroup
to read/write. SQL Server 2008 allows you to mark filegroups as read-only. You can mark any existing filegroup
as read-only except the primary filegroup. You can also compress read-only filegroups. Compressing data stored
in read-only filegroups is useful when you have a large amount of historical or static data that must be available to
users for limited read-only access or when you want to save disk space. To compress data in read-only
filegroups, only Windows NTFS compression can be used. When you mark a filegroup as read-only, it cannot be
modified. To be able to modify data in a compressed read-only filegroup, you must ensure that the files are
uncompressed and the filegroup is set to read/write.

You should not move the data to the primary filegroup of the UserData database. Although this will allow you to
modify the data, it will require a considerable amount of administrative effort.

You should not set the recovery model of the UserData database to simple. Setting the recovery model of the
UserData database to simple will not be of any use in this scenario. To be able to modify data in a compressed
read-only filegroup, you must ensure that the files are uncompressed and the filegroup is set to read/write.
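The filegroup change itself is a single statement. NTFS compression is removed outside SQL Server (for example, from the file properties in Windows Explorer or with compact.exe); afterward, a statement along these lines, using the names from the scenario, makes the filegroup writable:

```sql
-- Mark the read-only filegroup as read/write so its data can be modified.
ALTER DATABASE UserData
MODIFY FILEGROUP UserDataFileGrp READ_WRITE;
```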

Objective:
Performing Data Management Tasks

Sub-Objective:
Implement data compression.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Designing Databases > Designing Files and
Filegroups > Read-Only Filegroups and Compression

Item: 45 (Ref:Cert-70-432.5.2.2)

You manage an instance of SQL Server 2008 named SQL1. SQL1 contains two schemas named Corp and
Customer. You want to move a table named Products from the Corp schema to the Customer schema.

Which two permissions will you require to perform this task? (Choose two. Each correct answer represents part of
the solution.)
the CONTROL permission on the Products table
the ALTER permission on the Customer schema
the ALTER permission on the Corp schema
the CONTROL permission on the Customer schema
the CONTROL permission on the Corp schema
the ALTER permission on the Products table

Answer:
the CONTROL permission on the Products table
the ALTER permission on the Customer schema

Explanation:
To perform this task, you require the CONTROL permission on the Products table and the ALTER permission on
the Customer schema. The ALTER SCHEMA Transact-SQL statement allows you to transfer a securable from
one schema to another. The syntax for using this command is given below:

ALTER SCHEMA <schema_name> TRANSFER <securable_name>;

For example, to move a table named Products from the Corp schema to the Customer schema, you should run
the following Transact-SQL statement:

ALTER SCHEMA Customer TRANSFER Corp.Products;

To perform this task, the current user must have CONTROL permission on the securable and the ALTER
permission on the target schema.

All other options are incorrect because they do not specify the correct permissions for the securable and the
target schema required to perform this task.

Objective:
Performing Data Management Tasks

Sub-Objective:
Manage data partitions.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > ALTER SCHEMA (Transact-SQL)

Item: 46 (Ref:Cert-70-432.5.3.4)

You manage an instance of SQL Server 2008 named SQL1. SQL1 contains a database named CorpData that is
configured for full recovery. You want to compress the primary filegroup and transaction logs for the CorpData
database.

What should you do?


Change the recovery model of the database to simple.
Set the database to read-only.
Configure Transparent Data Encryption (TDE) on the database.
Change the recovery model of the database to bulk-logged.

Answer:
Set the database to read-only.

Explanation:
You should set the database to read-only. SQL Server 2008 allows you to mark filegroups as read-only. You can
mark any existing filegroup as read-only except the primary filegroup. You can also compress read-only
filegroups. To compress data in filegroups, you can only use Windows NTFS compression. You cannot compress
the primary filegroup and transaction logs unless the database itself is configured as read-only. Therefore, to be
able to compress the primary filegroup and transaction logs, you should set the database as read-only.
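Using the database name from the scenario, the change is a single statement (existing connections may need to be closed first):

```sql
-- Mark CorpData read-only; the primary filegroup and transaction log
-- files can then be compressed with NTFS compression.
ALTER DATABASE CorpData SET READ_ONLY;
```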

You should not change the recovery model of the database to simple or bulk-logged. Recovery models are
designed for controlling the maintenance of transaction logs. Changing the recovery model of the database will
not allow you to compress the primary filegroup and transaction logs for a database.

You should not configure Transparent Data Encryption (TDE) on the database. In SQL Server 2008, TDE is used
to encrypt the contents of an entire database. TDE does not allow you to compress the primary filegroup and
transaction logs for a database.

Objective:
Performing Data Management Tasks

Sub-Objective:
Implement data compression.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Databases > Designing Databases > Designing Files and
Filegroups > Read-Only Filegroups and Compression

Item: 51 (Ref:Cert-70-432.5.4.1)

You are the database administrator for your company and manage all the SQL Server 2008 databases of the
company. A production database named Prod1 contains all the product and sales data for your company. The
Prod_details table contains detailed data for all of the products manufactured by your company and is frequently
accessed by the employees of the company.

Most of the employees' queries include the Prod_id column of the Prod_details table in the WHERE clause. You
create an online index on the Prod_id column by using the following statement:

CREATE INDEX Prod_id_indx
ON Prod_details (Prod_id) WITH (ONLINE = ON);

You must understand the restrictions that apply to an online index.

Which statements are true for online indexes? (Choose all that apply.)
An online index cannot be created on a global temporary table.
An online index cannot be created on a local temporary table.
An online index cannot be created on an XML column.
A unique clustered index cannot be rebuilt online.
An index defined with text or ntext type columns cannot be rebuilt online.

Answer:
An online index cannot be created on a local temporary table.
An online index cannot be created on an XML column.
An index defined with text or ntext type columns cannot be rebuilt online.

Explanation:
An online index can be created by specifying the ONLINE = ON argument while creating an index. This argument
specifies that the tables and associated indexes remain online and are available for modifications and queries
while an operation is being performed on the index. The following restrictions apply to creating or rebuilding
indexes online:

- Online indexes cannot be created on local temporary tables.
- Disabled clustered indexes cannot be rebuilt online.
- Clustered indexes cannot be rebuilt online if the underlying table contains large object (LOB) data types. LOB
  data types include the image, ntext, text, varchar(max), nvarchar(max), varbinary(max), and xml data types.
- Nonclustered indexes created on LOB data type columns cannot be rebuilt online.
- XML indexes cannot be rebuilt online.

The option stating that an online index cannot be created on a global temporary table is incorrect because you
can create an online index on a global temporary table. You cannot create an online index on a local temporary
table.

A unique clustered index can be rebuilt online provided the underlying table does not contain LOB data type
columns.
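For comparison, an existing index is rebuilt online with ALTER INDEX; using the scenario's names, a rebuild looks like this (it fails with an error if the index falls under one of the restrictions above):

```sql
-- Rebuild the index while keeping the table available for queries.
ALTER INDEX Prod_id_indx
ON Prod_details
REBUILD WITH (ONLINE = ON);
```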

Objective:
Performing Data Management Tasks

Sub-Objective:
Maintain indexes.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE INDEX (Transact-SQL)

Item: 53 (Ref:Cert-70-432.5.4.6)

You are the database administrator of your company. The network contains an instance of SQL Server 2008. You
run the following Transact-SQL statements to create a table named PartnerInfo and a spatial index on the
PartnerLocation column in the PartnerInfo table in a database named Partners:

CREATE TABLE PartnerInfo(
id int primary key,
PartnerLocation geography);

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation
ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = MEDIUM));

You want to rebuild the spatial index to change the grid density to high. You want to ensure that the table and its
associated indexes are not available for queries while rebuilding the spatial index.

Which Transact-SQL statement should you run?


CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation

ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
ALLOW_ROW_LOCKS = ON,
DROP_EXISTING = ON);

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation

ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
ALLOW_PAGE_LOCKS = ON,
DROP_EXISTING = ON);

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation

ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
ONLINE = OFF,
DROP_EXISTING = ON);

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation

ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
STATISTICS_NORECOMPUTE = OFF,
DROP_EXISTING = ON);

Answer:
CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation

ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
ONLINE = OFF,
DROP_EXISTING = ON);

Explanation:
You should run the following Transact-SQL statement:

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation
ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
ONLINE = OFF,
DROP_EXISTING = ON);

You can create a spatial index on a spatial column of a table that contains spatial data. SQL Server 2008 allows
you to create spatial indexes on columns that have the geography or geometry data type. To create a spatial
index on a table, you must ensure that the table has a primary key. SQL Server 2008 does not support online
index builds for spatial indexes. When you want to prevent users from querying the tables and their associated
indexes during the index operation, you should either specify the ONLINE = OFF parameter or omit the ONLINE
parameter. When you set the ONLINE parameter to ON, the index operation fails, and an error occurs. While
rebuilding a spatial index with the same name, you must also specify the DROP_EXISTING = ON parameter. This
parameter ensures that the existing spatial index is dropped and rebuilt. If you do not specify this parameter or set
this parameter to OFF, an error is raised when an existing spatial index with the same name is found.

You should not run the following Transact-SQL statement:

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation
ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
ALLOW_ROW_LOCKS = ON,
DROP_EXISTING = ON);

The ALLOW_ROW_LOCKS = ON parameter specifies whether row locks are allowed when the index is
accessed. It cannot be used to ensure that the table and its associated indexes are not available for queries
during an index operation.

You should not run the following Transact-SQL statement:

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation
ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
ALLOW_PAGE_LOCKS = ON,
DROP_EXISTING = ON);

The ALLOW_PAGE_LOCKS = ON parameter specifies whether page locks are allowed when the index is
accessed. It cannot be used to ensure that the table and its associated indexes are not available for queries
during an index operation.

You should not run the following Transact-SQL statement:

CREATE SPATIAL INDEX SI_PartnerInfo_PartnerLocation
ON PartnerInfo(PartnerLocation)
WITH (BOUNDING_BOX = (0, 0, 300, 100),
GRIDS = (LEVEL_3 = HIGH),
STATISTICS_NORECOMPUTE = OFF,
DROP_EXISTING = ON);

When the STATISTICS_NORECOMPUTE parameter is set to OFF, out-of-date distribution statistics are automatically recomputed. When this parameter is set to ON, out-of-date statistics are not automatically recomputed. In either case, this parameter cannot be used to ensure that the table and its associated indexes are not available for queries during an index operation.

Objective:
Performing Data Management Tasks

Sub-Objective:
Maintain indexes.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE INDEX ( Transact-SQL) > CREATE SPATIAL INDEX (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Spatial Storage (Database Engine) > Working with Spatial Indexes (Database Engine) > Spatial
Indexing Overview

Item: 56 (Ref:Cert-70-432.5.1.2)

You manage a SQL Server 2008 database for a banking firm. You currently perform a full database backup of the
database by using the Full Recovery model with transaction log backups. This ensures that there is no data loss
in case of a database failure.

You regularly receive large files from other trading and investment firms. The data in the files is imported into
temporary tables. After the import, you insert this data into the appropriate tables in your database by using BULK
INSERT statements. This import process exhausts all the log space assigned to the database.

This bulk insert operation is not critical because the bulk operation can be restarted without affecting the database
operations if it fails. You must prevent the log file from growing. Data loss is unacceptable.

What should you do to achieve this?


Switch to the Simple Recovery model.
Switch to the Bulk-Logged Recovery model.
Manually truncate the transaction log on the server.

Take a full backup of the database before the bulk operation, and restore the database if the bulk operation fails.

Answer:
Switch to the Bulk-Logged Recovery model.

Explanation:
You should switch to the Bulk-Logged Recovery model. The Bulk-Logged Recovery model ensures that bulk
operations are minimally logged. With the Bulk-Logged Recovery model, the BULK INSERT, SELECT INTO, and
INSERT...SELECT * FROM OPENROWSET statements are minimally logged. After the bulk operation is
completed, the recovery model should be switched back to the Full Recovery model.
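
As a sketch, the switch can be performed with ALTER DATABASE (the database name here is hypothetical):

ALTER DATABASE TradingData SET RECOVERY BULK_LOGGED;

-- run the BULK INSERT statements here

ALTER DATABASE TradingData SET RECOVERY FULL;
BACKUP LOG TradingData TO DISK = 'D:\Backups\TradingData_log.trn';

Backing up the transaction log after returning to the Full Recovery model ensures that point-in-time recovery is again possible from that backup onward.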

You should not switch to the Simple Recovery model. With the Simple Recovery model, the transaction log is
truncated after each backup. If you switch to this mode, the information stored in the transaction log for the other
users currently using the system will be lost. You will also be unable to recover the database to a point of failure
and data loss is probable. In this scenario, any type of data loss is unacceptable.

You should not truncate the transaction log because recovering to the point of failure would not be possible if the
transaction log was truncated. The information stored in the transaction log for the other users currently using the
system will be lost. Typically, you should only truncate the transaction log after you have performed a full backup
of the database and transaction logs.

You should not take a full backup of the database before the bulk operation and restore the database if the bulk
operation fails. Taking a full backup of the database does not reduce the size of the transaction log. In addition,
this requires more effort than is necessary.

Objective:
Performing Data Management Tasks

Sub-Objective:
Import and export data.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Backup Under the Bulk-Logged Recovery Model

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Backup Under the Full Recovery Model

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Backup Under the Simple Recovery Model

Item: 57 (Ref:Cert-70-432.5.3.1)

You are the database administrator of your company. The network contains an instance of SQL Server 2008
Enterprise Edition named SQL1. SQL1 contains a database named EmpDetails. You create a partition for a
column named EmpID in the Employees table in the EmpDetails database.

You configure page compression on the partition. You want to monitor the page compression statistics for the partition.

Which dynamic management function should you use?


sys.dm_db_index_physical_stats
sys.dm_db_index_usage_stats
sys.dm_db_index_operational_stats
sys.dm_db_partition_stats

Answer:
sys.dm_db_index_operational_stats

Explanation:
You should use the sys.dm_db_index_operational_stats dynamic management function. SQL Server 2008
Enterprise and Developer editions allow you to configure compression for tables and indexes. The two types of
data compression are row compression and page compression. When you have configured partitions on a table
or index, you can configure different types of compression for each partition. You can use the
DATA_COMPRESSION parameter of the CREATE TABLE, CREATE INDEX, ALTER TABLE, and ALTER
INDEX Transact-SQL statements to configure data compression. The possible values for the
DATA_COMPRESSION parameter are NONE, ROW, and PAGE. When you configure the page compression on
partitions, you can use the sys.dm_db_index_operational_stats dynamic management function to obtain page
compression statistics.
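
For example, a query along the following lines returns the page compression counters for each partition of the Employees table:

SELECT partition_number,
    page_compression_attempt_count,
    page_compression_success_count
FROM sys.dm_db_index_operational_stats(DB_ID('EmpDetails'), OBJECT_ID('Employees'), NULL, NULL);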

You should not use the sys.dm_db_index_physical_stats dynamic management function because this function
does not provide page compression statistics for a partition. The sys.dm_db_index_physical_stats function
returns information about the size and fragmentation for the data and indexes of the specified table or view.

You should not use the sys.dm_db_index_usage_stats dynamic management function because this function
does not provide page compression statistics for a partition. The sys.dm_db_index_usage_stats function
returns information about the number of different types of index operations along with the time each type of
operation was last performed.

You should not use the sys.dm_db_partition_stats dynamic management function because this function does
not provide page compression statistics for a partition. The sys.dm_db_partition_stats function returns
information about page and row counts for every partition in the current database.

Objective:
Performing Data Management Tasks

Sub-Objective:
Implement data compression.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Tables > Creating and Modifying Tables > Creating
Compressed Tables and Indexes

Item: 67 (Ref:Cert-70-432.5.2.1)


You are the database administrator for your company and manage all your company's SQL Server 2008
databases. A production database named Prod1 contains all the product and sales data of the company.

You must create a table named Prod_details to hold detailed data about products manufactured by your
company. Company employees will frequently access data in this table in such a way that you want to create the
table as a partitioned table.

You use the following statements to create the partition function My_pf and the partition scheme My_ps1:

CREATE PARTITION FUNCTION My_pf (int)
AS RANGE RIGHT FOR VALUES (1, 1000, 2000);

CREATE PARTITION SCHEME My_ps1
AS PARTITION My_pf
TO (FG1, FG2, FG3, FG4);

Which statement should you use to create the Prod_details table as a partitioned table?
CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_ps1(prod_id);

CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_ps1(prod_name);

CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_pf(prod_id);

CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_ps1;

Answer:
CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_ps1(prod_id);

Explanation:
You should use the following statement to create the table as a partitioned table:

CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_ps1(prod_id);


This statement uses the correct syntax for creating a partitioned table. A partitioned table initially involves the
creation of a partition function that specifies how the values in the table that use the function will be partitioned.
Then, a partition scheme is created based on the previously created partition function. The partition scheme maps
the different partitions created by the partition function to filegroups. One or more filegroups can be specified in
the partition scheme. The partition table is created by using the following syntax that includes the partition
scheme:

CREATE TABLE table_name (column_def1, column_def2, ...)
ON partition_scheme_name (partition_column_name);

The arguments used in the statement syntax are as follows:

 table_name: Specifies the name of the table to be created
 column_defn: Specifies the details of the column(s) in the table
 partition_scheme_name: Specifies the name of the partition scheme that identifies the filegroups to
which the partitions of the table will be written
 partition_column_name: Specifies the name of the column in the table on which the table will be
partitioned. The column specified must match the column definition specified in the partition function in
terms of the data type, length, and precision

In the Transact-SQL statement used to create the Prod_details table, the data type of the partition column is int.
This data type is the same as the data type that was specified in the My_pf partition function. The name of the
partition scheme is correctly specified as My_ps1. Therefore, the statement will successfully create a table
partitioned on the prod_id column by using the My_ps1 partition scheme.

You should not use the following statement:

CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_ps1(prod_name);

This statement specifies the partition column as prod_name. The partition column does not match the column
specifications provided while creating the partition My_pf function. In a partitioned table, the partition column must
match the column definition specified in the partition function in terms of the data type, length, and precision. Any
violation of this will generate an error while creating the partitioned table.

You should not use the following statement:

CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_pf(prod_id);

When you create a partitioned table, you must specify the name of the partition scheme instead of the partition
function. In this scenario, My_ps1 is the partition scheme, and it must be specified while creating the table.

You should not use the following statement:

CREATE TABLE Prod_details (
prod_id int,
prod_desc varchar(50),
prod_name varchar(20))
ON My_ps1;

When creating a partitioned table, you must specify the column name on which the table will be partitioned. In this statement, no partition column was specified for My_ps1; therefore, SQL Server will interpret My_ps1 as a filegroup name rather than as a partition scheme.

Objective:
Performing Data Management Tasks

Sub-Objective:
Manage data partitions.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE TABLE (Transact-SQL)

Item: 81 (Ref:Cert-70-432.5.3.2)

You are responsible for managing a SQL Server 2008 computer. The server contains several databases that
contain mission-critical information. You configure daily backups for all databases. To ensure that the backup
process takes less time, you want to configure backup compression on the server.

Which fixed server role or roles will allow you to perform this task? (Choose all that apply.)
dbcreator
sysadmin
serveradmin
setupadmin
securityadmin

Answer:
sysadmin
serveradmin

Explanation:
The sysadmin and serveradmin fixed server roles will allow you to perform the task. Backup compression is a
new feature introduced in SQL Server 2008 Enterprise Edition. A compressed backup is smaller than an
uncompressed backup of the same data. Therefore, compressed backups require significantly less time than
uncompressed backups. Compressed backups are useful when you want to move backups from one instance of
SQL Server to another, such as in log shipping. After configuring log shipping, it is good practice to compress
transaction log backups so that they consume less space and transfer more quickly to secondary databases on
separate SQL Server 2008 instances. To ensure that all new backups are compressed by default, you can
configure the backup compression default option by using SQL Server Management Studio. To perform this
task, you must be a member of the sysadmin or serveradmin fixed server role. You can also configure backup
compression using the sp_configure system stored procedure.
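
A sketch of the sp_configure approach:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

Individual BACKUP statements can still override this server-wide default with the COMPRESSION or NO_COMPRESSION option.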

The dbcreator, setupadmin, and securityadmin fixed server roles do not allow you to configure the backup
compression default option on the SQL server. The dbcreator fixed server role allows its members to create,
modify, drop, and restore any database on the server. The setupadmin fixed server role allows its members to
add and remove linked servers. The securityadmin fixed server role allows its members to manage logins and
their properties.


Objective:
Performing Data Management Tasks

Sub-Objective:
Implement data compression.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration >
Administration: How-to Topics > Server Management How-to Topics > Server Configuration How-to Topics > How
to: View or Change the backup compression default Option (SQL Server Management Studio)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Backing
Up and Restoring Databases in SQL Server > Backup Overview > Backup Compression (SQL Server)

Item: 82 (Ref:Cert-70-432.5.5.2)

You are responsible for managing an instance of SQL Server 2008. You run the following Transact-SQL
statements in the Query Editor to create a new table named CorpData to test different collation settings for
different columns:
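
A sketch of such a table, assuming varchar columns and using the column names and collations cited in the explanation (the lengths are assumed):

CREATE TABLE CorpData (
    Collation1 varchar(50) COLLATE SQL_Latin1_General_CP1_CI_AS,
    Collation2 varchar(50) COLLATE SQL_Latin1_General_CP1_CS_AS);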

You want to generate output that displays the values of both the columns without producing any errors.

Which Transact-SQL statement or statements can you use? (Choose all that apply.)
SELECT * FROM CorpData;

SELECT * FROM CorpData
WHERE Collation1 = Collation2;

SELECT * FROM CorpData
WHERE Collation1 = Collation2 COLLATE SQL_Latin1_General_CP1_CI_AS;

SELECT * FROM CorpData
WHERE Collation1 = Collation2 COLLATE SQL_Latin1_General_CP1_CS_AS;

Answer:
SELECT * FROM CorpData;
SELECT * FROM CorpData
WHERE Collation1 = Collation2 COLLATE SQL_Latin1_General_CP1_CI_AS;


Explanation:
You can use the following two Transact-SQL statements:

SELECT * FROM CorpData;

or

SELECT * FROM CorpData
WHERE Collation1 = Collation2 COLLATE SQL_Latin1_General_CP1_CI_AS;

Running either of these two queries returns the rows of CorpData without raising a collation error.

When you have configured more than one collation setting in a table, SQL Server uses collation precedence to
determine the collation of the final result of an expression that evaluates to a character string. The collation
precedence rules apply only to the character string data types. There are four categories that are used to identify
the collations of all objects: coercible-default, implicit, explicit, and no-collation. In coercible-default collation, the
object is assigned the default collation of the database in which the object is created. In implicit collation, the
collation defined for the column in the table or view is assigned to the specified expression. Even if you explicitly
assign a collation to the column by using the COLLATE clause in the CREATE TABLE or CREATE VIEW
statement, the column reference is labeled as implicit. The explicit collation is assigned to expressions that
explicitly specify a collation by using a COLLATE clause in the expression. No-collation indicates that the value of
an expression is obtained due to conflicting collations between two strings that have implicit collation.

You should not use the following Transact-SQL statement:

SELECT * FROM CorpData
WHERE Collation1 = Collation2;

Running this statement generates the following error message:

Cannot resolve collation conflict between 'SQL_Latin1_General_CP1_CI_AS' and
'SQL_Latin1_General_CP1_CS_AS' in the equal to operation.

In this scenario, the collations defined for the two columns will be taken as implicit collation. When you combine
two implicit expressions that have different collations, it results in no-collation. Therefore, the error is raised.
Collation conflict prevents the database engine from joining or comparing two values with each other. To prevent
this error from occurring, you should specify an explicit collation for the expression before the equals sign in the
WHERE clause condition. In this scenario, the collation for the Collation1 column is
SQL_Latin1_General_CP1_CI_AS. Therefore, you should use this collation with the COLLATE clause.

You should not use the following Transact-SQL statement:

SELECT * FROM CorpData
WHERE Collation1 = Collation2 COLLATE SQL_Latin1_General_CP1_CS_AS;

Running this statement will not generate any error, but it will also not generate any output because in this
statement the COLLATE clause specifies the collation for the Collation2 column. To display the values of both
the columns, you should specify an explicit collation for the expression before the equals sign in the WHERE
clause condition, which is Collation1 in this scenario.

Objective:


Performing Data Management Tasks

Sub-Objective:
Manage collations.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > Data Types (Transact-SQL) > Collation Precedence (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data (Database Engine) > International Considerations for Databases and Database Engine Applications >
Working with Collations > Setting and Changing Collations > Setting and Changing the Column Collation

Item: 90 (Ref:Cert-70-432.5.1.1)

You are a database administrator for a financial firm. You receive data files from other financial firms. Records in
these input files are delimited with a new line character, and column values are delimited with tab character.

You must validate the data in the input files, and insert the data into multiple tables. To achieve this, you decide to
move the data to a local temporary table, and then read the data in the temporary table to insert the data into your
database tables.

Which statement or tool will best suit your purpose in this scenario?
the bcp utility
the SSIS Import and Export Wizard
the BULK INSERT statement
the SELECT...INTO statement

Answer:
the BULK INSERT statement

Explanation:
The BULK INSERT statement is the best option in this scenario. The BULK INSERT statement can be used to
read data from a flat file and write the data to a permanent or temporary database table. After the data is copied
into a temporary table, it can be validated and then moved into multiple tables. The BULK INSERT statement
allows you to specify the FIELDTERMINATOR and ROWTERMINATOR arguments to assign the row and column
delimiters for the import process.
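
A sketch of this approach (the file path and column definitions are hypothetical):

CREATE TABLE #ImportStaging (
    TradeID int,
    FirmName varchar(50),
    Amount money);

BULK INSERT #ImportStaging
FROM 'D:\Imports\trades.txt'
WITH (FIELDTERMINATOR = '\t', -- tab-delimited columns
    ROWTERMINATOR = '\n');    -- newline-delimited records

-- validate the staged rows, then INSERT...SELECT them into the destination tables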

The bcp utility cannot be used in this scenario because the bcp utility does not provide any facility for the
intermediate processing of data. The bcp utility can be used to directly transfer data from a file into a database
table or from a database table to a file. However, the bcp utility would not allow you to transfer the data into a
local temporary table for intermediate validation processing.

The SSIS Import and Export Wizard cannot be used in this scenario because the SSIS Import and Export Wizard
cannot write to temporary tables.

The SELECT...INTO statement cannot be used in this scenario because it cannot be used to extract data from a
flat file. The SELECT...INTO statement is used to transfer data between two tables in the database or between two databases.

Objective:
Performing Data Management Tasks

Sub-Objective:
Import and export data.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Importing
and Exporting Bulk Data > Scenarios for Bulk Importing and Exporting Data > Exporting Data from or Importing
Data to a Temporary Table

Item: 96 (Ref:Cert-70-432.5.4.7)

You manage an instance of SQL Server 2008 named SQL1. SQL1 contains a database named
HumanResource. The HumanResource database contains a table named Employees. You want to create a
spatial index on a column named EmpAddress in the Employees table.

Which permissions will you require to perform this task? (Choose all that apply.)
ALTER permission on the table
ALTER permission on the database
membership in the sysadmin fixed server role
membership in the serveradmin fixed server role

Answer:
ALTER permission on the table
membership in the sysadmin fixed server role

Explanation:
You will require ALTER permission on the table and membership in the sysadmin fixed server role. You can
create a spatial index on a spatial column of a table that contains spatial data. SQL Server 2008 allows you to
create spatial indexes on columns that have the geography or geometry data type. To be able to create a spatial
index on a table, you must ensure that the table has a primary key. To create a spatial index, you must have
ALTER permission on the table or view. You must also have membership in the sysadmin fixed server role or the
db_ddladmin and db_owner fixed database roles.

The options stating ALTER permission on the database and membership in the serveradmin fixed server role
are incorrect because these permissions are not required to create a spatial index.

Objective:
Performing Data Management Tasks

Sub-Objective:
Maintain indexes.


References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE INDEX ( Transact-SQL) > CREATE SPATIAL INDEX (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Spatial Storage (Database Engine) > Working with Spatial Indexes (Database Engine) >
Restrictions on Spatial Indexes

Item: 101 (Ref:Cert-70-432.5.5.3)

You manage an instance of SQL Server 2008. The server contains several databases that have different
collations. An assistant administrator named Paul wants to create a table in a database named Products, but he
is unfamiliar with collation precedence rules.

You want to inform Paul about collation precedence rules that he should consider when specifying different
collations for columns in the table.

Which statements are true? (Choose all that apply.)


Explicit collation takes precedence over implicit collation.
Implicit collation takes precedence over coercible-default collation.
You can combine two expressions that have been assigned different explicit collations.
When you combine two expressions that have been assigned different implicit collations, the output is displayed using implicit collation.

Answer:
Explicit collation takes precedence over implicit collation.
Implicit collation takes precedence over coercible-default collation.

Explanation:
Explicit collation takes precedence over implicit collation, and implicit collation takes precedence over coercible-
default collation. SQL Server uses collation precedence to determine the collation of the final result of an
expression that evaluates to a character string. There are four categories that are used to identify the collations of
all objects. These categories are coercible-default, implicit, explicit, and no-collation. In coercible-default collation,
the object is assigned the default collation of the database in which the object is created. In implicit collation, the
collation defined for the column in the table or view is assigned to the specified expression. Even if you explicitly
assign a collation to the column by using the COLLATE clause in the CREATE TABLE or CREATE VIEW
statement, the column reference is labeled as implicit. The explicit collation is assigned to expressions that
explicitly specify a collation by using a COLLATE clause in the expression. No-collation indicates that the value of
an expression is obtained due to conflicting collations between two strings that have implicit collation. For
example, when you run the following Transact-SQL statement:

SELECT * FROM CorpData WHERE Column1 = Column2;

In this statement, Column1 and Column2 have SQL_Latin1_General_CP1_CI_AS and


SQL_Latin1_General_CP1_CS_AS collation, respectively, the following error message will be displayed:

Cannot resolve collation conflict between 'SQL_Latin1_General_CP1_CI_AS' and


'SQL_Latin1_General_CP1_CS_AS' in the equal to operation.

Copyright © 2011 Transcender LLC, a Kaplan Professional Company. All Rights Reserved.

When you combine two implicit expressions that have different collations, the result is no-collation. A collation conflict prevents the Database Engine from joining or comparing two values with each other. To prevent this error
from occurring, you should specify an explicit collation for the expression before the equals sign in the WHERE
clause condition as shown in the following example:

SELECT * FROM CorpData
WHERE Column1 = Column2 COLLATE SQL_Latin1_General_CP1_CI_AS;

The options stating that you can combine two expressions that have been assigned different explicit collations
and when you combine two expressions that have been assigned different implicit collations, the output is
displayed using implicit collation are incorrect. When you combine two expressions that have been assigned
different explicit collations, the statement generates an error. When you combine two implicit expressions that
have different collations, the statement results in no-collation.
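The explicit-versus-explicit case can be illustrated against the same hypothetical CorpData table used above, with the two collation names from the earlier example:

```sql
-- Combining two expressions that carry different explicit collations is not
-- allowed: instead of choosing a winner, SQL Server raises a collation-conflict
-- error for the comparison.
SELECT * FROM CorpData
WHERE Column1 COLLATE SQL_Latin1_General_CP1_CI_AS
    = Column2 COLLATE SQL_Latin1_General_CP1_CS_AS;
```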

Objective:
Performing Data Management Tasks

Sub-Objective:
Manage collations.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > Data Types (Transact-SQL) > Collation Precedence (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data (Database Engine) > International Considerations for Databases and Database Engine Applications >
Working with Collations > Setting and Changing Collations > Setting and Changing the Column Collation

Item: 105 (Ref:Cert-70-432.5.4.4)

You are a database administrator for a toy manufacturing company named NetFx. The company stores all its
product-related data in a database named Netfx_data that resides on a SQL Server 2008 server named Server1.
A table named Invoice_details exists in the database and contains the details of all the new invoices generated
by the company in the last month. The table was created by using the following CREATE TABLE statement:

CREATE TABLE Invoice_details (
    Invoice_id int NOT NULL,
    Customer_id int NOT NULL,
    Invoice_date datetime DEFAULT GETDATE() NULL,
    Amount_total int NULL,
    Serial_num int IDENTITY (1,1) NOT NULL)

A clustered index exists on the Serial_num column, and a composite nonclustered index exists on the Invoice_id
and the Customer_id columns.

An application will be used to execute queries to retrieve all the invoices for a particular customer. The following
query is used by the application to retrieve data:

SELECT Customer_id, Invoice_id, Invoice_date
FROM Invoice_details
WHERE Customer_id = 1234
ORDER BY Invoice_date DESC;

You must ensure optimal performance of the query.

Which action should you perform to achieve this?


- Create a clustered index on the Customer_id column.
- Create a nonclustered index on the Customer_id column.
- Create a nonclustered index on the Invoice_date column.
- Alter the nonclustered index on the table to include the Invoice_date column.
- Alter the nonclustered index on the table to remove the Customer_id column.

Answer:
Alter the nonclustered index on the table to include the Invoice_date column.

Explanation:
You should alter the nonclustered index on the table to include the Invoice_date column. Including the
Invoice_date column in the nonclustered index on the table will create a covering index for all three columns in
the SELECT list of the query. A covering index on all the columns present in the SELECT list increases the
performance of the query because all the data required for the query can be retrieved from the index. Only the
index pages containing the index must be scanned to return the required data. This improves performance of the
query.
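As a sketch of the recommended change (the scenario does not name the existing composite index, so IX_Invoice_Customer is a hypothetical name):

```sql
-- Recreate the existing composite nonclustered index in place, adding
-- Invoice_date as an included (non-key) column so the index covers
-- the query's entire SELECT list.
CREATE NONCLUSTERED INDEX IX_Invoice_Customer
ON Invoice_details (Invoice_id, Customer_id)
INCLUDE (Invoice_date)
WITH (DROP_EXISTING = ON);
```

Because Invoice_date is an included column rather than a key column, it is stored only at the leaf level of the index, which keeps the index keys small while still covering the query.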

You should not create a clustered index on the Customer_id column because a clustered index exists in the
table. A table can contain only one clustered index.

You should not create a nonclustered index on the Customer_id column. Creating a nonclustered index on the
Customer_id column on the table will not improve the performance in this scenario because the SQL Server
engine would scan both the nonclustered indexes in the table to retrieve the rows required by the query. A
composite nonclustered index including all three columns in the SELECT list should be used in this scenario.

You should not create a nonclustered index on the Invoice_date column. Creating a nonclustered index on the
Invoice_date column will not improve the performance in this scenario because the SQL Server engine will have
to scan both nonclustered indexes in the table to retrieve the rows required by the query. A composite
nonclustered index including all three columns in the SELECT list should be used in this scenario.

You should not alter the nonclustered index on the table to remove the Customer_id column. Removing the
Customer_id column from the nonclustered index will not improve the performance in this scenario. The
nonclustered index should instead be altered to include the Invoice_date column. This will create an index that
contains all the data required by the query in this scenario.

Objective:
Performing Data Management Tasks

Sub-Objective:
Maintain indexes.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Indexes > Implementing Indexes > Creating Indexes
(Database Engine)

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Indexes > Implementing Indexes > Creating Indexes


(Database Engine) > Creating Indexes with Included Columns

Item: 121 (Ref:Cert-70-432.5.4.2)

You are the SQL administrator for your company. A SQL Server 2008 instance named SQL_Prod includes a
database named Research. The Research database contains many indexes.

Users report that the Research database is experiencing performance issues. Upon investigation, you discover
that the indexes for the Research database are showing an average fragmentation of 20 percent. You decide to
create a maintenance plan that will run on a weekly basis to resolve this issue.

Which task should you include in the maintenance plan?


- Check Database Integrity Task
- Rebuild Index Task
- Reorganize Index Task
- Update Statistics Task
n

Answer:
Reorganize Index Task

Explanation:
You should include the Reorganize Index Task in the maintenance plan. Reorganizing defragments and
compacts the indexes. When indexes are fragmented, you can either reorganize or rebuild them. Reorganizing is
suggested when average fragmentation is under 30 percent, as is the case here at 20 percent.
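Within a maintenance plan, the Reorganize Index Task issues ALTER INDEX ... REORGANIZE statements. The equivalent Transact-SQL, using a hypothetical table name since the scenario does not name the fragmented tables, is:

```sql
-- Reorganize (defragment and compact) every index on a table; replace ALL
-- with a specific index name to target a single index.
ALTER INDEX ALL ON dbo.ResearchData REORGANIZE;
```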

The maintenance plan should not include the Check Database Integrity Task. This task verifies the structural
integrity of a database.

The maintenance plan should not include the Rebuild Index Task. Rebuilding an index is suggested when
average fragmentation is 30 percent or more.

The maintenance plan should not include the Update Statistics Task. This task updates the statistics for a table
or view.

Objective:
Performing Data Management Tasks

Sub-Objective:
Maintain indexes.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Indexes > Optimizing Indexes > Reorganizing and
Rebuilding Indexes

TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Integration Services > Development > Designing and
Implementing Packages > Understanding the Components of an Integration Services Package > Control Flow


Elements > Integration Services Tasks > Maintenance Tasks

Item: 125 (Ref:Cert-70-432.5.2.3)

You are the database administrator for your company. The network contains a SQL Server 2008 computer. The
SQL server contains a database named Sales. You want to create four partitions on the CustID column in the
Customers table of the Sales database. The values for each partition will be as follows:

You want to create a partition function that will be used to create the appropriate partitions for the Customers
table as per these requirements.

Which Transact-SQL statement should you run?


- CREATE PARTITION FUNCTION PFsales1 (int)
  AS RANGE LEFT FOR VALUES (1, 1000, 2000);
- CREATE PARTITION FUNCTION PFsales1 (int)
  AS RANGE RIGHT FOR VALUES (1, 1000, 2000);
- CREATE PARTITION FUNCTION PFsales1 (int)
  AS RANGE LEFT FOR VALUES (1, 1000, 2000, >2000);
- CREATE PARTITION FUNCTION PFsales1 (int)
  AS RANGE RIGHT FOR VALUES (1, 1000, 2000, >2000);

Answer:
CREATE PARTITION FUNCTION PFsales1 (int)
AS RANGE RIGHT FOR VALUES (1, 1000, 2000);

Explanation:
You should run the following Transact-SQL statement:

CREATE PARTITION FUNCTION PFsales1 (int)
AS RANGE RIGHT FOR VALUES (1, 1000, 2000);

Partitions allow you to place a subset of a table or index on a specified filegroup. The CREATE PARTITION
FUNCTION statement is used to create a partition function. A partition function maps the rows of a table or index
into partitions based on the values of a specified column. The complete syntax for the CREATE PARTITION
FUNCTION statement is:

CREATE PARTITION FUNCTION name_of_partition_function (type_of_input_parameter)
AS RANGE [ LEFT | RIGHT ]
FOR VALUES ( [ boundary_value [ ,...n ] ] );

The LEFT or RIGHT arguments specify the side of each boundary value interval to which the boundary_value
[ ,...n ] belongs when interval values are sorted in ascending order from left to right by the database engine. For
example, running the following Transact-SQL statement will create a partition function that can be used to
partition a table or index into four partitions:


CREATE PARTITION FUNCTION PFsales1 (int)
AS RANGE RIGHT FOR VALUES (1, 1000, 2000);

If you use the above partition function on a partitioning column named CustID, partition 1 would hold CustID values less than 1, partition 2 values from 1 through 999, partition 3 values from 1000 through 1999, and partition 4 values of 2000 and higher.
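A partition function by itself does not place data anywhere; it is typically paired with a partition scheme and then applied to the table. The filegroup names below are hypothetical, and the Customers table is shown as a fresh table purely for illustration:

```sql
-- Map the four partitions produced by PFsales1 to filegroups, then create
-- a table partitioned on the CustID column using that scheme.
CREATE PARTITION SCHEME PSsales1
AS PARTITION PFsales1 TO (fg1, fg2, fg3, fg4);

CREATE TABLE Customers (
    CustID   int          NOT NULL,
    CustName nvarchar(50) NULL
) ON PSsales1 (CustID);
```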

You should not run the following Transact-SQL statement:

CREATE PARTITION FUNCTION PFsales1 (int)
AS RANGE LEFT FOR VALUES (1, 1000, 2000);

If you use this partition function on a partitioning column named CustID, partition 1 would hold CustID values of 1 or less, partition 2 values from 2 through 1000, partition 3 values from 1001 through 2000, and partition 4 values greater than 2000.

This would not meet the requirements of the scenario. To ensure that the values for each partition meet the
requirements, you should use the RIGHT range direction instead of the LEFT range direction when creating the
partition function.

You should not run the following Transact-SQL statement:

CREATE PARTITION FUNCTION PFsales1 (int)
AS RANGE LEFT FOR VALUES (1, 1000, 2000, >2000);

This statement is syntactically incorrect because values in the values list cannot contain symbols, such as a
relational operator.

You should not run the following Transact-SQL statement:

CREATE PARTITION FUNCTION PFsales1 (int)
AS RANGE RIGHT FOR VALUES (1, 1000, 2000, >2000);

This statement is syntactically incorrect because boundary values cannot contain symbols, such as a relational
operator. Also, a fourth boundary value is unnecessary: with three boundary values and the RIGHT range
direction, the function already yields four partitions, the last of which holds all values above the highest boundary.

Objective:
Performing Data Management Tasks

Sub-Objective:
Manage data partitions.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > CREATE PARTITION FUNCTION (Transact-SQL)

Item: 126 (Ref:Cert-70-432.5.5.1)


You are the SQL administrator for your company. You manage a SQL Server 2008 instance named SQL1 that
hosts several databases. SQL1 was originally configured to use a SQL Server collation. You need to change the
instance to a Windows collation.

You create scripts for creating all the user databases and the objects in them. Then, you export all your data and
drop all user databases.

What should you do next?


- Rebuild the master database with the new collation, and import your data.
- Rebuild the master database with the new collation, create the user databases and objects using the scripts, and import your data.
- Rebuild the master database with the new collation, and create the user databases and objects using the scripts.
- Rebuild the master database with the new collation, import your data, and create the user databases and objects using the scripts.

Answer:
Rebuild the master database with the new collation, create the user databases and objects using
the scripts, and import your data.

Explanation:
You should rebuild the master database with the new collation, create the user databases and objects using the
scripts, and import your data. This will ensure that all future user databases will use the new collation. It will also
ensure that the current user databases are changed to use the new collation. The data should be imported after
the collation is changed and the user databases have been created.
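The rebuild step itself is performed with SQL Server 2008 setup from the command line; the instance name, sysadmin account, and collation shown below are placeholders for your own values:

```shell
rem Rebuild the system databases (including master) with the new Windows collation.
Setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER ^
  /SQLSYSADMINACCOUNTS=DOMAIN\SQLAdmins /SQLCOLLATION=Latin1_General_CI_AS
```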

All other options are incorrect because they do not specify the required steps in the correct order or because they
do not include all the steps needed.

Objective:
Performing Data Management Tasks

Sub-Objective:
Manage collations.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Querying and Changing
Data > International Considerations for Databases and Database Engine Applications > Working with
Collations > Setting and Changing Collations > Setting and Changing the Server Collation

Item: 129 (Ref:Cert-70-432.5.4.3)

You are the database administrator for your company. One of the databases that you maintain contains large
volumes of XML data. An XML column appears in the WHERE clauses of most queries that users execute against
the database. The large binary objects (BLOBs) are shredded at runtime for each row of the query. This process
becomes very expensive for the database server in terms of resources and performance.

In this scenario, you decide to create an index on the XML columns. You need to ensure that you understand


XML indexes.

Which statements about XML indexes are true? (Choose all that apply.)
- PATH, VALUE, and XML are types of secondary XML indexes.
- XML indexes exist in the same namespace as non-XML indexes.
- You can create only one primary XML index on an XML data type column.
- A primary key clustered index must exist before an XML index can be created on a table.
- An XML index can be created on an XML column in a database table or view.
- It is not necessary to have a primary XML index on an XML column to create a secondary XML index.

Answer:
XML indexes exist in the same namespace as non-XML indexes.
You can create only one primary XML index on an XML data type column.
A primary key clustered index must exist before an XML index can be created on a table.

Explanation:
The following statements about XML indexes are true:

- XML indexes exist in the same namespace as non-XML indexes.
- You can create only one primary XML index on an XML data type column.
- A primary key clustered index must exist before creating an XML index on a table. An XML index is created on a column of the xml data type.

You can create two types of XML indexes, primary and secondary. Primary XML indexes shred the XML BLOB in
the XML data type column and save the XML BLOB as several rows of data. Secondary indexes are created on
the path, value, and property of the column. XML indexes exist in the same namespace as non-XML indexes, and
you can create only one primary XML index on an XML data type column. When you create an XML index, a clustered
index on the primary key should always exist to ensure that if the table containing the XML column is partitioned,
the primary XML index on the columns can also be partitioned based on the same partitioning scheme and
partitioning function.
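These rules can be seen together in a short sketch; the table and column names are hypothetical:

```sql
-- A clustered primary key must exist before any XML index can be created.
CREATE TABLE dbo.CustOrders (
    OrderID  int NOT NULL PRIMARY KEY CLUSTERED,
    OrderDoc xml NULL
);

-- Only one primary XML index is allowed per xml column, and it must be
-- created on a table (not a view).
CREATE PRIMARY XML INDEX PXML_CustOrders_OrderDoc
ON dbo.CustOrders (OrderDoc);

-- Secondary XML indexes (PATH, VALUE, or PROPERTY) require the primary index.
CREATE XML INDEX SXML_CustOrders_OrderDoc_Path
ON dbo.CustOrders (OrderDoc)
USING XML INDEX PXML_CustOrders_OrderDoc
FOR PATH;
```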

The option stating that PATH, VALUE, and XML are types of secondary XML indexes is incorrect. The three types
of secondary indexes are PATH, VALUE, and PROPERTY. XML is not a type of secondary XML index.

The option stating that you can create an XML index on an XML column in a database table or view is incorrect.
You can create an XML index on an XML column only in a database table. You cannot create an XML index on
an XML column in a view.

The option stating that it is not necessary to have a primary XML index on an XML column to create a secondary
XML index is incorrect. You must have a primary XML index on an XML column to create a secondary XML index.

Objective:
Performing Data Management Tasks

Sub-Objective:
Maintain indexes.

References:
TechNet > TechNet Library > Server Products and Technologies > SQL Server > SQL Server 2008 > Product


Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Semistructured Storage (Database Engine) > Implementing XML in SQL Server > XML Data Type
Variables and Columns > Indexes on XML Data Type Columns

Item: 131 (Ref:Cert-70-432.5.4.5)

You are responsible for managing an instance of SQL Server 2008 named SQL1. SQL1 contains a database
named Customers. The Customers database contains a table named CustDetails. You want to create a spatial
index on a column named CustAddress.

Which two data types can you configure for the CustAddress column to perform the required task? (Choose two.
Each correct answer represents a complete solution.)
- sql_variant
- geography
- ntext
- geometry
- table

Answer:
geography
geometry

Explanation:
You can configure the CustAddress column as a geography or geometry data type. SQL Server 2008 provides
support for spatial data, which identifies geographic locations and shapes. You can create a spatial index on a
column of a table that contains spatial data. SQL Server 2008 allows you to create spatial indexes only on
columns that have the geography or geometry data type. The geography data type represents data in a round-
earth coordinate system, such as latitude and longitude coordinates. The geometry data type represents data in
a Euclidean or flat coordinate system. To be able to create a spatial index on a table, you must ensure that the
table has a primary key. To create spatial indexes, you can either use the CREATE SPATIAL INDEX Transact-
SQL statement or SQL Server Management Studio.
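A minimal sketch of both steps, assuming CustDetails already has a primary key and that the CustAddress column is being added here purely for illustration:

```sql
-- The geography column stores round-earth (latitude/longitude) data;
-- a spatial index can then be created directly on it.
ALTER TABLE CustDetails
ADD CustAddress geography NULL;

CREATE SPATIAL INDEX SIdx_CustDetails_CustAddress
ON CustDetails (CustAddress);
```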

You cannot create a spatial index on columns of the sql_variant, ntext, and table data types. The sql_variant
data type enables a single column, variable, or parameter to store values of different data types. The ntext data
type is a variable-length data type that is used for storing large non-Unicode and Unicode character and binary
data. The table data type is a special data type that allows you to store a result set for processing at a later time.

Objective:
Performing Data Management Tasks

Sub-Objective:
Maintain indexes.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Spatial Storage (Database Engine) > Working with Spatial Indexes (Database Engine) >
Restrictions on Spatial Indexes


MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Spatial Storage (Database Engine) > Working with Spatial Indexes (Database Engine) > Spatial
Indexing Overview

Item: 154 (Ref:Cert-70-432.5.2.4)

You are the database administrator of your company. The network contains a SQL Server 2008 computer that
has two partitioned tables named Sales and Sales_Hy. You want to switch a partition from the Sales table to the
Sales_Hy table.

Which Transact-SQL statement should you use?


- ALTER SCHEMA
- ALTER PARTITION FUNCTION
- ALTER PARTITION SCHEME
- ALTER TABLE

Answer:
ALTER TABLE

Explanation:
You should use the ALTER TABLE statement. The ALTER TABLE statement modifies a table definition to
modify, add, or drop columns and constraints, switch partitions, or disable or enable triggers. The ALTER
TABLE...SWITCH statement allows you to switch a partition from one partitioned table to another, which helps
you transfer subsets of data quickly and efficiently. To switch the first partition of the Sales table into the third
partition of the Sales_Hy table, you should use the following Transact-SQL statement:

ALTER TABLE Sales SWITCH PARTITION 1 TO Sales_Hy PARTITION 3

To be able to switch partitions, you must ensure that the following requirements are met:

- Before you perform the switch operation, you must ensure that the table that contains the partition and the table that will receive the partition exist in the database. You must also ensure that both tables share the same filegroup.
- The target partition must be empty.
- The partitions must be created on the same column.

You should not use the ALTER SCHEMA statement because this statement does not allow you to switch a
partition from one partitioned table to another. The ALTER SCHEMA statement is used to transfer a securable
from one schema to another.

You should not use the ALTER PARTITION FUNCTION statement because this statement does not allow you to
switch a partition from one partitioned table to another. This statement modifies a partition function by splitting or
merging its boundary values. For example, you can use the ALTER PARTITION FUNCTION statement to split a
partition of any table or index that uses the partition function into two partitions, or merge two partitions into one
partition. The complete syntax for the ALTER PARTITION FUNCTION statement is as follows:

ALTER PARTITION FUNCTION partition_function_name()
{ SPLIT RANGE ( boundary_value ) | MERGE RANGE ( boundary_value ) };


You should not use the ALTER PARTITION SCHEME statement because this statement does not allow you to
switch a partition from one partitioned table to another. This statement adds or modifies the designation of a
filegroup to a partition scheme.

Objective:
Performing Data Management Tasks

Sub-Objective:
Manage data partitions.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Transact-SQL
Reference > ALTER TABLE (Transact-SQL)

MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Development > Designing and
Implementing Structured Storage (Database Engine) > Partitioned Tables and Indexes > Designing Partitioned
Tables and Indexes > Designing Partitions to Manage Subsets of Data

Item: 157 (Ref:Cert-70-432.5.1.4)

You are the database administrator for Nutex Corporation. The company has a main office and a branch office.
Each office contains an instance of SQL Server 2008.

You have received a text file named emp.txt from the human resources department in the branch office. The text
file contains information about employees, and the data for each column is separated by a comma. You want to
import the data from the files into a table named employees in a database named hrdb using minimum
administrative effort.

Which command should you run?


- bcp hrdb.employees in emp.txt
- bcp hrdb.employees in emp.txt -t ,
- bcp hrdb.employees in -f emp.txt -t ,
- bcp hrdb.employees in -i emp.txt -t ,

Answer:
bcp hrdb.employees in emp.txt -t ,

Explanation:

You should run the bcp hrdb.employees in emp.txt -t , command. The bcp utility allows you to copy
bulk data between a SQL server and a data file in a user-specified format. You can use the bcp command to
import new rows into SQL Server tables or to export data stored in SQL Server tables to data files. In this
scenario, you include the in parameter in the bcp command. Specifying the in parameter copies data from the
specified file into the table or view. The -t parameter allows you to specify the field terminator, which in this
scenario is a comma.

You should not run the bcp hrdb.employees in emp.txt command. The default field terminator used by


the bcp utility is the tab character. In this scenario, the data for each column is separated by a comma in the text
file. Therefore, you must specify the -t parameter to specify the comma (,) terminator.

You should not run the bcp hrdb.employees in -f emp.txt -t , command. This command is
syntactically incorrect. You must specify the full path to the data file directly after the in parameter. Also, the -f
parameter is used to specify the full path of a format file. A format file stores format information for each field in a
data file that is related to a particular table. You can use a format file while bulk importing data into a SQL Server
table or bulk exporting data from a SQL Server table.

SQL Server 2005 and SQL Server 2008 support Extensible Markup Language (XML) format files in addition to
standard format files. While XML and non-XML format files are interchangeable, it is recommended that you use
XML format files because they provide several advantages over non-XML format files. The following is an
example of an XML format file generated from a table named SampleFormatFile by using the bcp utility:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<RECORD>
<FIELD ID="1" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="7"/>
<FIELD ID="2" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100"
COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
<FIELD ID="3" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="100"
COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
<FIELD ID="4" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="100"
COLLATION="SQL_Latin1_General_CP1_CI_AS"/>
</RECORD>
<ROW>
<COLUMN SOURCE="1" NAME="Col1" xsi:type="SQLSMALLINT"/>
<COLUMN SOURCE="2" NAME="Col2" xsi:type="SQLNVARCHAR"/>
<COLUMN SOURCE="3" NAME="Col3" xsi:type="SQLNVARCHAR"/>
<COLUMN SOURCE="4" NAME="Col4" xsi:type="SQLNVARCHAR"/>
</ROW>
</BCPFORMAT>

The XML format file above was generated with a command similar to the following:

bcp Testdb..SampleFormatFile format nul -c -t, -x -f SampleFormatFile.Xml -T

In this scenario, you already have text files that contain data to be imported into the hrdb database. Therefore,
creating a non-XML or XML format file would require additional administrative effort.

You should not run the bcp hrdb.employees in -i emp.txt -t , command. This command is
syntactically incorrect. You must specify the full path to the data file directly after the in parameter. Also, the -i
parameter is used to specify the name of a response file that contains answers for the command prompt
questions for each data file when you perform the bulk copy by using interactive mode.

Objective:
Performing Data Management Tasks

Sub-Objective:
Import and export data.

References:
MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Technical Reference > Tools Reference >
Command Prompt Utilities > bcp Utility


MSDN > MSDN Library > Servers and Enterprise Development > SQL Server > SQL Server 2008 > Product
Documentation > SQL Server 2008 Books Online > Database Engine > Operations > Administration > Importing
and Exporting Bulk Data > Format Files for Importing or Exporting Data > Introduction to Format Files
