
Error Handling in SQL Server 2000 - a Background
An SQL text by Erland Sommarskog, SQL Server MVP. Last revision 2009-11-29.
http://www.sommarskog.se/error-handling-I.html

This is one of two articles about error handling in SQL Server 2000. This article
focuses on how SQL Server and to some extent ADO behave when an error
occurs. The other article, Implementing Error Handling with Stored Procedures,
gives advice for how you should check for errors when you write stored
procedures. Logically, this article is part one, and Implementing... is part two.
However, you can read the articles in any order, and if you are relatively new
to SQL Server, I recommend that you start with Implementing.... The article here
gives a deeper background and may answer more advanced users' questions about
error handling in SQL Server.
Note: this article was written for SQL 2000 and earlier versions. All I have for SQL
2005 is an unfinished article with a section Jumpstart Error Handling. The content in
this article is to some extent applicable to SQL 2005 as well, but you will have to
use your imagination to map what I say to SQL 2005. The article includes a short
section on TRY-CATCH. I hope to produce a complete article for error handling in
SQL 2005 later on.
Table of Contents:
Introduction
The Basics
The Anatomy of an Error Message
How to Detect an Error in T-SQL - @@error
Return Values from Stored Procedures
@@rowcount
@@trancount
More on Severity Levels
What Happens when an Error Occurs?
The Possible Actions
When Does SQL Server Take which Action?
Connection-termination
Scope-abortion
Statement-termination and Batch-abortion
Trigger Context
Errors in User-Defined Functions
Control Over Error Handling
SET XACT_ABORT
ARITHABORT, ARITHIGNORE and ANSI_WARNINGS
RAISERROR WITH NOWAIT
Duplicates
Using Linked Servers
Retrieving the Text of an Error Message
TRY-CATCH in SQL 2005
Client-side Error Handling
DB-Library
ODBC
ADO
ADO .Net
Acknowledgements and Feedback
Revision History
Introduction
In many aspects SQL Server is a very good DBMS that permits you to implement
powerful solutions with good performance. However, when it comes to error
handling... To be blunt: error handling in SQL Server is poor. It is a patchwork of
not-always-so-consistent behaviour. It's also weak in that you have fairly little
control over error handling, and for advanced error handling like suppressing
errors or logging errors, you must take help from the client-side. Unfortunately,
depending on which client library you use, you may find that the client library has
its own quirks, sometimes painting you into a corner where there is no real good
solution.
In this article, I will first look at what parts an error message consists of, and how
you can detect that an error has occurred in T-SQL code. Next, I describe the
possible actions SQL Server can take in case of an error. I then proceed to
describe the few possibilities you have to control SQL Server's error handling.
Finally, there is a section on how the different client libraries from Microsoft
behave, with most of the focus on ADO and ADO.Net.
General disclaimer: whereas some information in this text is drawn from Books
Online and other documentation from Microsoft, a lot of what I say is based on
observations that I have made from working with SQL Server, and far from all of
this is documented in Books Online. Therefore, you should be wary of relying on a
specific behaviour like "this error has this-and-this effect", as it could be different
in another version of SQL Server, even different between service packs.
The Basics
The Anatomy of an Error Message
Here is a typical error message you can get from SQL Server when working from
Query Analyzer.
Server: Msg 547, Level 16, State 1, Procedure error_demo_sp, Line 2
UPDATE statement conflicted with COLUMN FOREIGN KEY constraint
'fk7_acc_cur'.
The conflict occurred in database 'bos_sommar', table 'currencies',
column 'curcode'.
The statement has been terminated.
Note: Under Tools->Options->Connections, I have checked Parse ODBC Message Prefixes.
The error information that SQL Server passes to the client consists of several
components, and the client is responsible for the final interpretation of the
message. These are the components that SQL Server passes to the client.
Message number - each error message has a number. You can find most of the
message numbers in the table sysmessages in the master database. (There are some
special numbers like 0 and 50000 that do not appear there.) In this example, the
message number is 547. Since most interesting messages are errors, I will also use
the term error number. Message numbers from 50001 and up are user-defined.
Lower numbers are system-defined.
Severity level - a number from 0 to 25. The short story is that if the severity level
is in the range 0-10, the message is informational or a warning, and not an error.
Errors resulting from programming errors in your SQL code have a severity level in
the range 11-16. Severity levels 17-25 indicate resource problems, hardware
problems or internal problems in SQL Server, and if the severity is 20 or higher, the
connection is terminated. For the long story, see the section More on Severity
Levels for some interesting tidbits. For system messages you can find the severity
level in master..sysmessages, but for some messages SQL Server employs a
different severity level than what's in sysmessages.
State - a value between 0 and 127. The meaning of this item is specific to the error
message, but Microsoft has not documented these values, so this value is rarely of
interest to you.
Procedure - the stored procedure, trigger or user-defined function in which the error
occurred. Blank if the error occurred in a plain batch of SQL statements (including
dynamic SQL).
Line - the line number within the procedure/function/trigger/batch where the error occurred.
A line number of 0 indicates that the problem occurred when the procedure was
invoked.
Message text - the actual text of the message that tells you what went wrong. You
can find this text in master..sysmessages, or rather a template for it, with
placeholders for names of databases, tables etc.
As I mentioned, the client is responsible for the formatting of the error message,
and for messages with a severity level of 10 or lower, most client programs print
only the message text, but not severity level, procedure etc. In fact, we see an
example of this above. The text The statement has been terminated is a message on
its own, message 3621.
When you write your own client program, you can choose your own way to display
error messages. You may be somewhat constrained by what your client library
supplies to you. The full information is available with low-level interfaces such as
DB-Library, ODBC or the OLE DB provider for SQL Server. On the other hand,
in ADO you only have access to the error number and the text of the message.
There are two ways an error message can appear: 1) an SQL statement can result in
an error (or a warning); 2) you emit it yourself with RAISERROR (or PRINT). Let's take
a brief look at RAISERROR here. Here is a sample statement:
RAISERROR('This is a test', 16, 1)
Here you supply the message text, the severity level and the state. The output is:
Server: Msg 50000, Level 16, State 1, Line 1
This is a test
Thus, SQL Server supplies the message number 50000, which is the error number
you get when you supply a text string to RAISERROR. (There is no procedure name
here, since I ran the statement directly from Query Analyzer.) Rather than a string,
you could have supplied a number of 50001 or greater, and SQL Server would
have looked up that number in sysmessages to find the message text. You would
have stored that message with the system procedure sp_addmessage. (If you just
supply a random number, you will get an error message, saying that the message is
missing.) Whichever method you use, the message can include placeholders, and
you can provide values for these placeholders as parameters to RAISERROR,
something I do not cover here. Please refer to Books Online for details.
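To make the user-defined route a little more concrete, here is a small sketch; the message number 50001 and the message text are my own inventions for this example:

```sql
-- Store a user-defined message with a placeholder (%s).
-- This needs to be done only once per server.
EXEC sp_addmessage @msgnum = 50001, @severity = 16,
                   @msgtext = 'Customer %s does not exist.'
go
-- Raise the message, supplying a value for the placeholder.
RAISERROR(50001, 16, 1, 'Hallsten')
```

The output follows the same pattern as the example above, but with 50000 replaced by 50001 and the placeholder expanded to Hallsten.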
As I mentioned, State is rarely of interest. With RAISERROR, you can use it as you
wish. If you raise the same message in several places, you can provide different
values for State so that you can conclude which RAISERROR statement fired. The
command-line tools OSQL and ISQL have a special handling of state: if you use a
state of 127, the two tools abort and set the DOS variable ERRORLEVEL to the
message number. This can be handy in installation scripts if you want to abort the
script if you detect some serious condition. (For instance, that the database is not on
the level that the installation script is written for.) This behaviour is entirely
client-dependent; for instance, Query Analyzer does not react to state 127.
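What such a guard could look like in an installation script, sketched here with an assumed version table of my own:

```sql
-- When run through OSQL or ISQL: if the database is not at the
-- expected version, state 127 makes the tool abort the script
-- and set ERRORLEVEL to the message number (here 50000).
IF NOT EXISTS (SELECT * FROM versiontbl WHERE version = '2.10')
   RAISERROR('Database is not on the level this script requires.', 16, 127)
```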
How to Detect an Error in T-SQL - @@error
After each statement in T-SQL, with one single exception that I cover in the next
section, SQL Server sets the global variable @@error to 0, unless an error occurs, in
which case @@error is set to the number of that error. (Note: these days, the SQL
Server documentation refers to @@error as a "function". Being an old-timer, I prefer "global
variables" for the entities whose names start with @@.)
More precisely, if SQL Server emits a message with a severity of 11 or higher,
@@error will hold the number of that message. And if SQL Server emits a message
with a severity level of 10 or lower, SQL Server does not set @@error, and thus you
cannot tell from T-SQL that the message was produced.
But the message number is also the only field of the error message that you can
easily access from T-SQL. A common question on the newsgroups is how to retrieve
the text of an error message, and for a long time the answer was "you can't". But
Mark Williams pointed out to me a way to do it. However, it requires the user
to have sysadmin privileges, so you cannot easily use it in an application. I will
return to this topic in the section Retrieving the Text of an Error Message.
There is no way to prevent SQL Server from raising error messages. There is a small
set of conditions for which you can use SET commands to control whether these
conditions are errors or not. We will look closer at these possibilities later, but I
repeat that this is a small set, and there is no general way in T-SQL to suppress error
messages. You will need to take care of that in your client code. (Another common
question on the newsgroups.)
As I mentioned, @@error is set after each statement. Therefore, you should always
save the value of @@error into a local variable, before you do anything
with it. Here is an example of what happens if you don't:
CREATE TABLE notnull(a int NOT NULL)
DECLARE @value int
INSERT notnull VALUES (@value)
IF @@error <> 0
PRINT '@@error is ' + ltrim(str(@@error)) + '.'
The output is:
Server: Msg 515, Level 16, State 2, Line 3
Cannot insert the value NULL into column 'a', table
'tempdb.dbo.notnull'; column does not allow nulls. INSERT fails.
The statement has been terminated.
@@error is 0.
Here is the correct way.
CREATE TABLE notnull(a int NOT NULL)
DECLARE @err int,
@value int
INSERT notnull VALUES (@value)
SELECT @err = @@error
IF @err <> 0
PRINT '@err is ' + ltrim(str(@err)) + '.'
The output is:
Server: Msg 515, Level 16, State 2, Line 3
Cannot insert the value NULL into column 'a', table
'tempdb.dbo.notnull'; column does not allow nulls. INSERT fails.
The statement has been terminated.
@err is 515.
Return Values from Stored Procedures
All stored procedures have a return value, determined by the RETURN statement.
The RETURN statement takes one optional argument, which should be a numeric
value. If you say RETURN without providing a value, the return value is 0 if there is
no error during execution. If an error occurs during execution of the procedure, the
return value may be 0, or it may be a negative number. The same is true if there is
no RETURN statement at all in the procedure: the return value may be a negative
number or it may be 0.
Whether these negative numbers have any meaning is a bit difficult to tell. It used
to be the case that the return values -1 to -99 were reserved for system-generated
return values, and Books Online for earlier versions of SQL Server specified
meanings for the values -1 to -14. However, Books Online for SQL 2000 is silent on
any such reservations, and does not explain what -1 to -14 would mean.
With some occasional exception, the system stored procedures that Microsoft ships
with SQL Server return 0 to indicate success and any non-zero value indicates
failure.
While there is no law that requires you to follow the same convention for your
stored procedures, my strong recommendation is that you use return values solely
to indicate success/failure. If you want to return data such as the id for an inserted
row, number of affected rows or whatever, use an OUTPUT parameter instead. It
follows from the fact that a blank RETURN may return 0, even if there has been an
error during execution, that you should be careful to return an explicit value
yourself if an error occurs in the procedure.
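A sketch of what this convention could look like in practice; the table customers with an IDENTITY column is assumed only for the sake of the example:

```sql
CREATE PROCEDURE insert_customer @custname nvarchar(40),
                                 @custid   int OUTPUT AS
DECLARE @err int
INSERT customers (custname) VALUES (@custname)
SELECT @err = @@error, @custid = scope_identity()
IF @err <> 0
   RETURN @err   -- Explicit non-zero return value on failure.
RETURN 0         -- 0 for success; the new id travels in @custid.
```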
There is one situation when a stored procedure does not return any value at all,
leaving the variable receiving the return value unaffected. This is when the
procedure is aborted because of a scope-aborting error. We will look more into this
later. There is also one situation when the return value is NULL: this happens with
remote procedures and occurs when the batch is aborted on the remote server.
(Batch-abortion is also something we will look into more later on.)
There is one curious exception to the rule that @@error is set after each statement:
a RETURN statement without parameters does not set @@error to 0, but leaves the
value unchanged. In my opinion, this is not really practically useful. (I owe this
information to a correspondent who gave me this tip by e-mail. Alas, I lost his mail due to
problems at my ISP, so I cannot credit him by name.)
@@rowcount
@@rowcount is a global variable that reports the number of affected rows in the most
recently executed statement. Just like @@error you need to save it in a local
variable if you want to use the value later, since @@rowcount is set
after each statement. Since with SET you can only assign one variable at a time, you
must use SELECT if you need to save both @@error and @@rowcount into local
variables:
SELECT @err = @@error, @rowc = @@rowcount
(For this reason, I prefer to always use SELECT for variable assignment, despite Microsoft's
recommendations to use SET.)
In T-SQL it is not an error if, for instance, an UPDATE statement did not affect any
rows. But it can of course indicate an error in your application, just as it could be an
error if a SELECT returns more than one row. For these situations, you can check
@@rowcount and raise an error and set a return value, if @@rowcount is not the
expected value.
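Sketched as a fragment from the body of a stored procedure (the table accounts and the local variables are assumed to be in place):

```sql
UPDATE accounts
SET    balance = balance - @amount
WHERE  accno = @accno
SELECT @err = @@error, @rowc = @@rowcount   -- Capture both at once.
IF @err <> 0
   RETURN @err
IF @rowc <> 1
BEGIN
   -- No such account (or more than one): raise our own error.
   RAISERROR('Account %d not found.', 16, 1, @accno)
   RETURN 1
END
```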
@@trancount
@@trancount is a global variable which reflects the level of nested transactions.
Each BEGIN TRANSACTION increases @@trancount by 1, and each COMMIT
TRANSACTION decreases @@trancount by 1. Nothing is actually committed until
@@trancount reaches 0. ROLLBACK TRANSACTION rolls back everything to the
outermost BEGIN TRANSACTION (unless you have used the fairly exotic SAVE
TRANSACTION), and forces @@trancount to 0, regardless of the previous value.
When you exit a stored procedure, if @@trancount does not have the same value
as it had when the procedure commenced execution, SQL Server raises error 266.
This error is not raised, though, if the procedure is called from a trigger, directly or
indirectly. Neither is it raised if you are running with SET
IMPLICIT_TRANSACTIONS ON.
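You can provoke error 266 yourself with a small sketch like this one:

```sql
CREATE PROCEDURE bad_proc AS
   BEGIN TRANSACTION   -- @@trancount goes from 0 to 1, but the
                       -- procedure exits without COMMIT or ROLLBACK.
go
EXEC bad_proc          -- Raises error 266.
ROLLBACK TRANSACTION   -- Clean up the transaction left open.
```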
More on Severity Levels
In this section we will look a little closer at the various severity levels.
0
Messages with level 0 are purely informational. A PRINT statement produces a message
on severity level 0. These messages do not set @@error. Most query tools print only the
text part of a level 0 message.
1-9
These levels, too, are for informational messages/warnings. I cannot recall that I have
encountered this from SQL Server, but I've used it myself in RAISERROR at times. Query
Analyzer and SQL Management Studio print the message number, the level and the
state, but not the procedure and line number for these messages.
10
This level does not really exist. It appears that SQL Server internally converts level 10 to
level 0, both for its own messages and when you use level 10 in RAISERROR.
11-16
These levels indicate a regular programming error of some sort. But it is not the case that
level 16 is more serious than level 11. Rather it appears to be a somewhat random
categorisation. Books Online gives no details on what the levels might mean,
but SQL Server MVP Jacco Schalkwijk pointed out to me that there is a drop-down box in
the dialog for defining alerts in Enterprise Manager and SQL Management Studio which
has description on these levels. Here is what the drop-down box has to say:
11 Specified Database Object Not Found
12 Unused
13 User Transaction Syntax Error
14 Insufficient Permission
15 Syntax Error in SQL Statements
16 Miscellaneous User Error
My experience is that it may not always be this way, but there certainly are matches.
Deadlock, for instance is level 13. (So now you know what a User Transaction Syntax
Error is!)
17-25
Messages with any of these severity levels indicate some sort of resource problem (for
instance running out of disk space), or internal error in SQL Server, or a problem with the
operating system or hardware. The higher the severity, the more serious the problems. These
levels are documented in the section Troubleshooting->Error Messages->Error
Message Formats->Error Message Severity Levels in Books Online.
19-25
To use level 19 or higher in RAISERROR you must use the WITH LOG option, and you
must have sysadmin rights.
20-25
Errors with these severity levels are so fatal, that they always terminate the connection.
What Happens when an Error Occurs?
Many programming languages have a fairly consistent behaviour when there is a
run-time error. Common is that the execution simply terminates in case of an error,
unless you have set up an exception handler that takes care of the error. In other
languages, some error variable is set and you have to check this variable. T-SQL is
confusing, because depending on which error occurs and in which context it
occurs, SQL Server can take no less than four different actions. I first give an
overview of these alternatives, followed by a more detailed discussion of which
errors cause which actions. I then discuss two special cases: trigger context
and user-defined functions.
The Possible Actions
These are the four main possible actions SQL Server can take:
Statement-termination. The current statement is aborted and rolled back.
Execution continues on the next statement. Any open transaction is not rolled back.
@@error is set to the number of the error. Since the statement is rolled back, this
means that if you run an UPDATE statement that affects 1000 rows, and for one row
a CHECK constraint is violated, none of the rows will be updated. But if
the UPDATE statement was part of a longer transaction, the effect of the
preceding INSERT, UPDATE or DELETE statements are not affected. You need to issue
a ROLLBACK TRANSACTION yourself to undo them.
Scope-abortion. The current scope (stored procedure, user-defined function, or
block of loose SQL statements, including dynamic SQL) is aborted, and execution
continues on the next statement in the calling scope. That is, if stored procedure A
calls B and B runs into a scope-aborting error, execution continues in A, just after
the call to B. @@error is set, but the aborted procedure does not produce a return
value; the variable to receive the return value is left unaffected. As for
statement-termination, any outstanding transaction is not affected, not even if it
was started by the aborted procedure.
Batch-abortion. The execution of the entire batch (that is, the block
of SQL statements that the client submitted to SQL Server) is aborted. Any open
transaction is rolled back. @@error is still set, so if you would retrieve @@error
first in the next batch, you would see a non-zero value. There is no way you can
intercept batch-abortion in T-SQL code. (Almost. We will look at a possibility
using linked servers later on.)
Connection-termination. The client is disconnected and any open transaction is
rolled back. In this case there is no @@error to access.
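To see the difference between the first and the third alternative, here is a small sketch you can run in tempdb:

```sql
CREATE TABLE t (a int NOT NULL CHECK (a > 0))
go
-- Constraint violation: statement-termination only.
INSERT t (a) VALUES (-1)
PRINT 'This prints.'
go
-- Conversion error: the whole batch is aborted.
INSERT t (a) VALUES (convert(int, 'abc'))
PRINT 'This does not print.'
go
DROP TABLE t
```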
One can note from this, that there are two things that cannot happen:
The transaction is rolled back, but execution of the current batch continues.
The batch is aborted, but the transaction is not rolled back.
But I would like to stress that this is based on my own observations. I have found no
documentation that actually states that these two cases cannot occur under any
circumstances.
The above caters for most of the error situations in SQL Server, but since a hallmark
of the error handling in SQL Server is inconsistency, every now and then I discover
some new odd situation. I am overlooking these cases here, not to burden the
reader with too many nitty-gritty details.
There is however, one more situation you should be aware of and that is batch-
cancellation. The client may at any time tell SQL Server to stop executing the
batch, and SQL Server will comply more or less immediately. In this
situation SQL Server will not roll back any open transaction. (In the general case that
is. It seems that if the T-SQL execution is in a trigger, when the cancellation request comes,
then there is a rollback.) However, if the current statement when the cancellation
request comes in is an UPDATE, INSERT or DELETE statement, then SQL Server will
roll back the updates from that particular statement. Batch-cancellation may occur
because of an explicit call to a cancellation method in the client code, but the most
common reason is that a query timeout in the client library expires. ODBC, OLE DB,
ADO and ADO.Net all have a default timeout of 30 seconds. (Which judging from
the questions on the newsgroups, many programmers believe to come
from SQL Server, but not so.)
When Does SQL Server Take which Action?
As you may guess, which action SQL Server takes depends on the error, but not
only that. Context also matters. One is the setting of the command SET XACT_ABORT,
which we shall look at in a later section. A special case is trigger context, in which
almost all errors abort the batch and this will be the topic for the next section.
Right now we will discuss the default context, that is outside triggers and when the
setting XACT_ABORT is OFF.
You may guess that the more severe the error is, the more drastic action SQL Server
takes, but this is only really true for connection-termination. When it comes
to scope-abortion, this occurs for a fairly well-defined family of errors, but I am not
sure that I agree that these errors are less severe than the errors that abort the batch.
And there is not really any clear distinction between the errors that abort the batch
on the one hand, and those that merely terminate the statement on the other. For
this reason, I will first cover connection-termination, then scope-abortion and then
the other two together.
Connection-termination
When SQL Server terminates the connection, this is because something really bad
happened. The most common reason is an execution error in the SQL Server process
itself, e.g. an access violation (that is, attempt to access an illegal memory
address), a stack overflow, or an assertion error (a programmer-added check for a
certain condition that must be true for his code to work). It could also be a protocol
error in the communication between the client library and SQL Server. These errors
are normally due to bugs in SQL Server or in the client library, but they can also
appear due to hardware problems, network problems, database corruption or severe
resource problems.
SQL Server terminates the connection, because it would not be safe to continue
execution, as internal process structures may be damaged. In some cases, not only
is your connection terminated, but SQL Server as such crashes.
Connection-termination can sometimes be due to errors in your application in so
far that you may have written some bad SQL that SQL Server could not cope with.
But in such a case it is still an SQL Server bug if the connection terminates, because
you should get a proper error message. (The error messages in conjunction with
connection-termination are often very opaque.)
There is one case, though, where a bug in application code can cause connection-
termination on its own, and that is if you have written your own extended
stored procedures or your own OLE objects that you call through the sp_OAxxxxx
procedures. An unhandled execution error in such code will terminate your
connection and may crash SQL Server as well.
There is one way to terminate the connection from T-SQL: if you issue
a RAISERROR statement with a severity level >= 20. To do this you must
provide WITH LOG, and you must be sysadmin. Since errors with severities >= 19
may trigger an operator alert, and eventually may alert someone's pager, don't do
this just for fun.
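For the record, such a statement would look like this; needless to say, do not run it on a server where it matters:

```sql
-- Terminates your own connection; requires sysadmin membership.
-- The error is also written to the SQL Server error log.
RAISERROR('Testing connection-termination.', 20, 1) WITH LOG
```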
Scope-abortion
This appears to be confined to compilation errors. At least I have not seen it
happen with any other sort of error. Due to the feature known as deferred name
resolution (in my opinion this is a misfeature), compilation errors can happen
during run-time too. Consider this example (you can run it in the Northwind
database):
CREATE PROCEDURE inner_sp @productid int AS

CREATE TABLE #temp (orderid   int      NOT NULL,
                    orderdate datetime NOT NULL)

PRINT 'This prints.'
BEGIN TRANSACTION

INSERT #temp (orderid, orderdate)
   SELECT o.OrderID, o.OrderDate
   FROM   Orders
   WHERE  EXISTS (SELECT *
                  FROM   [Order Details] od
                  WHERE  od.OrderID = o.OrderID
                    AND  od.ProductID = @productid)

COMMIT TRANSACTION
PRINT 'This does not print.'
go
CREATE PROCEDURE outer_sp AS

DECLARE @ret int
SET @ret = 4711
EXEC @ret = inner_sp 76

PRINT '@@error is ' + ltrim(str(@@error)) + '.'
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
PRINT '@ret is ' + coalesce(ltrim(str(@ret)), 'NULL') + '.'
IF @@trancount > 0 ROLLBACK TRANSACTION
go
EXEC outer_sp
go
Because the table #temp does not exist when you create inner_sp, SQL Server
defers examination of the entire INSERT-SELECT statement until run-time. Again,
when you invoke inner_sp, SQL Server cannot find #temp and defers building a
query plan for the INSERT-SELECT statement until it actually comes to execute the
statement. It is only at this point that SQL Server discovers that
the SELECT statement is incorrect (the alias for Orders is missing). And at that
precise point, the execution of inner_sp is aborted. Here is the output:
This prints.
Server: Msg 266, Level 16, State 2, Procedure inner_sp, Line 18
Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK
TRANSACTION
statement is missing. Previous count = 0, current count = 1.
Server: Msg 107, Level 16, State 1, Procedure inner_sp, Line 9
The column prefix 'o' does not match with a table name or alias name
used in the query.
Server: Msg 107, Level 16, State 1, Procedure inner_sp, Line 9
The column prefix 'o' does not match with a table name or alias name
used in the query.
Server: Msg 107, Level 16, State 1, Procedure inner_sp, Line 9
The column prefix 'o' does not match with a table name or alias name
used in the query.
@@error is 266.
@@trancount is 1.
@ret is 4711.
Note the next-to-last line in the output: inner_sp started a transaction. But just
because inner_sp was aborted does not mean that the transaction was rolled back.
When you implement your error handling, this is something you need to consider,
and I look closer at this in the accompanying article on error handling.
Also observe that @ret never was set, but retained the value it had prior to the call.
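This is why outer_sp sets @ret to 4711 before the call. As a general pattern, sketched here with a hypothetical procedure some_sp:

```sql
DECLARE @err int, @ret int
SELECT @ret = 4711            -- Sentinel value no procedure returns.
EXEC @ret = some_sp
SELECT @err = @@error
-- @ret still 4711 means some_sp was scope-aborted before any RETURN;
-- coalesce also catches the NULL from an aborted remote procedure.
IF @err <> 0 OR coalesce(@ret, 4711) <> 0
   PRINT 'some_sp failed.'
```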
Not all compilation errors pass unnoticed when SQL Server loads a procedure. A
pure syntax error like a missing parenthesis will be reported when you try to create
the procedure. But the list of errors not detected because of deferred name
resolution is longer than you might expect. After all, one would expect SQL Server
to be able to detect the missing alias even if #temp is missing. With some effort, it
could even detect the missing alias with the Orders table missing, couldn't it?
Actually, I can offer a way to avoid this problem altogether. My toolset AbaPerls, offered as
freeware, includes a load tool, ABASQL. Before creating a procedure, ABASQL extracts
all temp tables in the procedure and creates them, so that SQL Server will flag errors such as
missing aliases or columns. ABASQL also checks the SQL code for references to non-
existing tables.
Statement-termination and Batch-abortion
These two groups comprise regular run-time errors, such as duplicates in unique
indexes, running out of disk space etc. As I have already discussed, which
error causes which action is not always easy to predict beforehand. This table
lists some common errors, and whether they abort the current statement or the
entire batch.
Error Aborts
Duplicate primary key. Statement
NOT NULL violation. Statement
Violation of CHECK or FOREIGN KEY constraint. Statement
Most conversion errors, for instance conversion of a non-numeric string to a numeric value. Batch
Attempt to execute a non-existing stored procedure. Statement
Missing or superfluous parameter to a stored procedure that has parameters. Statement
Superfluous parameter to a parameterless stored procedure. Batch
Exceeding the maximum nesting level of stored procedures, triggers and functions. Batch
Being selected as a deadlock victim. Batch
Permission denied to table or stored procedure. Statement
ROLLBACK or COMMIT without any active transaction. Statement
Mismatch in number of columns in INSERT-EXEC. Batch
Declaration of an existing cursor. Statement
Column mismatch between cursor declaration and FETCH statement. Statement
Running out of space for data file or transaction log. Batch
I am only able to make out a semi-consistency. Some real fatal errors after which I
would not really be interested in continuing execution do abort the batch. The
examples here are deadlock victim and running out of disk space. But why would it
be more severe to pass a superfluous parameter to a parameterless one, than to one
that has parameters? And conversion errors? Are they more severe than a
constraint violation? And why not all conversion errors? (We will return to
conversion errors, as well as arithmetic errors that I purposely excluded from this
table, when we discuss the SET commands ANSI_WARNINGS and ARITHABORT. They
belong to the small set of errors where you have some sort of a choice.)
And don't look to severity levels for help. As noted above, the severity levels 11-16
are another classification that does not reflect any difference in severity. Most of the
errors above have severity level 16, but being a deadlock victim has severity level
13. (Running out of disk space, which is a resource problem, is level 17.)
Trigger Context
You have trigger context when you are in a trigger, or you are in a stored
procedure, user-defined function or block of dynamic SQL that has been called
directly or indirectly from a trigger. That is, somewhere on the call stack, there is a
trigger. If you are in trigger context, all errors terminate the batch and roll back the
transaction on the spot. (Connection-terminating errors still terminate the
connection, of course.)
Well, almost. When it comes to error handling in SQL Server, no rule is valid
without an exception. Errors you raise yourself with RAISERROR do not abort the
batch, not even in trigger context. Neither does error 266, Transaction count
after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is
missing. This error is simply not raised at all when this condition occurs in trigger
context. No, this is not a bug, but it is documented in Books Online, and according
to Books Online, error 266 is informational only. (Now, taste that concept: an
informational error.)
There is one more way that a trigger can terminate the batch. This happens
if @@trancount is 0 when the trigger exits. A trigger always executes in the
context of a transaction, since even if there is no multi-statement transaction in
progress, each INSERT, UPDATE and DELETE statement is its own transaction
in SQL Server, and the trigger is part of that transaction. Thus, @@trancount is at
least 1 when you enter a trigger, and if it is 0 on exit, this means that somewhere
there has been a ROLLBACK statement. (Or sufficiently many COMMITs to bring @@trancount
to 0.) Why does this have to abort the batch? Because the sky is blue. Seriously, I
don't know, but it has always been that way, and there is no way you can change it.
The normal use for this is that if you have an integrity check in a trigger you raise a
message and roll back the transaction, as in this example.
IF EXISTS (SELECT *
           FROM   inserted i
           JOIN   abainstallhistory inh ON i.inhid = inh.inhid
           WHERE  inh.ss_label <> i.ss_label
              OR  inh.ss_label IS NULL AND i.ss_label IS NOT NULL
              OR  inh.ss_label IS NOT NULL AND i.ss_label IS NULL)
BEGIN
   ROLLBACK TRANSACTION
   RAISERROR('Values on ss_label does not match abainstallhistory.', 16, 1)
   RETURN
END
Thus, this trigger aborts the batch, not because of the RAISERROR, but because of
the ROLLBACK TRANSACTION, and since the trigger is permitted to execute to the end,
the RAISERROR statement is executed.
Errors in User-Defined Functions
User-defined functions are usually invoked as part of a SET, SELECT, INSERT,
UPDATE or DELETE statement. What I have found is that if an error appears in
a multi-statement table-valued function or in a scalar function, the execution of the
function is aborted immediately, and so is the statement the function is part of.
Execution continues on the next line, unless the error aborted the batch. In either
case, @@error is 0. Thus, there is no way to detect that an error occurred in a
function from T-SQL.
The problem does not appear with inline table-functions, since an inline table-
valued function is basically a macro that the query processor pastes into the query.
You can also execute scalar functions with the EXEC statement. In this case,
execution continues if an error occurs (unless it is a batch-aborting error).
@@error is set, and you can check the value of @@error within the function. It
can be problematic to communicate the error to the caller though.
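To illustrate the EXEC case, here is a minimal sketch; the function dbo.to_int is a hypothetical example of mine, not something from SQL Server:

```sql
-- Hypothetical scalar function, for illustration only.
CREATE FUNCTION dbo.to_int (@str varchar(10)) RETURNS int AS
BEGIN
   RETURN convert(int, @str)   -- Fails for non-numeric input.
END
go
DECLARE @ret int
EXEC @ret = dbo.to_int 'ABC'   -- The conversion error is raised here,
SELECT @@error                 -- and since we used EXEC, @@error is set.
```

Had the function instead been invoked as part of a SELECT, the statement would have been terminated and @@error would have been 0 afterwards.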
Control Over Error Handling
No, SQL Server does not offer much in this area, but we will look at the few
possibilities, of which the most important is SET XACT_ABORT ON.
SET XACT_ABORT
What I have said this far applies to when XACT_ABORT is OFF, which is the default.
When you issue SET XACT_ABORT ON, most of the statement-
terminating errors instead become batch-aborting errors. Thus, if you don't want to
litter your T-SQL code with checks on @@error, and if you are not interested in
trying to recover from the error or invoke some error-logging routine in T-SQL, but
you are content with execution being aborted on first error, then XACT_ABORT is for
you.
Beware, though, that even when XACT_ABORT is ON, not all errors terminate the
batch. Here are the exceptions I know of:
Errors you raise yourself with RAISERROR.
Compilation errors (which normally terminate the scope) do not terminate the
batch.
Error 266, Transaction count after EXECUTE indicates that
a COMMIT or ROLLBACK TRANSACTION statement is missing.
So at a minimum you still need to check @@error after the execution of a stored
procedure or a block of dynamic SQL even if you use XACT_ABORT ON.
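A minimal sketch of the effect, assuming a plain primary-key violation (which is normally a statement-terminating error):

```sql
CREATE TABLE #t (a int NOT NULL PRIMARY KEY)
go
SET XACT_ABORT ON
BEGIN TRANSACTION
INSERT #t (a) VALUES (1)
INSERT #t (a) VALUES (1)   -- PK violation; with XACT_ABORT ON this
                           -- aborts the batch and rolls back the transaction.
PRINT 'Never reached'
COMMIT TRANSACTION
go
SELECT @@trancount         -- 0: the transaction was rolled back for us.
```

With XACT_ABORT OFF, the same script would print the message, commit, and leave one row in the table.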
ARITHABORT, ARITHIGNORE and ANSI_WARNINGS
These three SET commands give you very fine-grained control over a very small set
of errors. When a division by zero or an overflow occurs, there are no less than four
choices.
No action at all, result is NULL when ARITHIGNORE is ON.
Warning message, result is NULL when all are OFF.
Statement-termination when ANSI_WARNINGS is ON.
Batch-abortion when ARITHABORT is ON and ANSI_WARNINGS is OFF.
ARITHABORT and ARITHIGNORE also control domain errors, such as an attempt to take
the square root of a negative number. But this error is not covered
by ANSI_WARNINGS, so here you only have three choices.
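A sketch that demonstrates two of the choices for division by zero; run each batch separately and observe the difference:

```sql
-- With everything OFF, you merely get a warning and a NULL result.
SET ARITHABORT OFF
SET ARITHIGNORE OFF
SET ANSI_WARNINGS OFF
SELECT 1/0 AS ratio        -- NULL, plus a division-by-zero warning message.
SELECT @@error             -- 0: the warning does not set @@error.
go
-- With ANSI_WARNINGS ON, the same division is a statement-terminating error.
SET ANSI_WARNINGS ON
SELECT 1/0 AS ratio        -- Error 8134, statement terminated.
SELECT @@error             -- 8134.
```

Turn on ARITHABORT as well (with ANSI_WARNINGS OFF) and the second batch is aborted outright instead.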
As for what counts as an overflow, SQL Server has extended the domain of this error to
datetime values in a way which is not really intuitive. Consider these two
statements:
select convert(datetime, '2003123') -- This causes a conversion error
select @@error
go
select convert(datetime, '20031234') -- This causes an overflow
select @@error
Thus, if you have a string which conforms syntactically to some date format, but
some element is out of range, this particular form of conversion error aborts
the batch only with certain settings, and with other settings it may not cause an error at
all.
ANSI_WARNINGS controls a few more errors and warnings. With ANSI_WARNINGS ON,
it is an error to assign to a character or binary column a value that exceeds the
maximum length of the column, and this terminates the statement.
When ANSI_WARNINGS is OFF, this condition is not an error, but the value is silently
truncated. The error is never raised for variable assignment. Also,
with ANSI_WARNINGS ON, if an aggregate function such as SUM() or MIN() sees
a NULL value, you get a warning message. (Thus it does not set @@error, nor
terminate the statement.)
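The truncation behaviour can be sketched like this:

```sql
CREATE TABLE #t (s varchar(5) NOT NULL)
go
SET ANSI_WARNINGS OFF
INSERT #t (s) VALUES ('Too long for five')   -- Silently truncated to 'Too l'.
SET ANSI_WARNINGS ON
INSERT #t (s) VALUES ('Too long for five')   -- Error 8152, statement terminated.
go
DECLARE @s varchar(5)
SELECT @s = 'Too long for five'   -- Variable assignment is never an error;
SELECT @s                         -- silently truncated in either setting.
```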
When you use ODBC, OLE DB and Query Analyzer
(SQL 2000), ANSI_WARNINGS is ON by default. Since some features (indexed views,
indexes on computed columns and distributed queries) in SQL Server
require ANSI_WARNINGS to be ON, I strongly recommend that you stick to this.
Indexed views and indexes on computed columns also require ARITHABORT to be ON,
but I don't think you can rely on it being ON by default.
Finally, I should mention that there is one more SET command in this
area: NUMERIC_ROUNDABORT. When ON, the batch is aborted if an operation with a
decimal data type results in loss of precision. The option is OFF by default, and it
must be OFF for indexed views and indexes on computed columns to work.
RAISERROR WITH NOWAIT
SQL Server buffers the output, so an error message or a result set may not appear
directly at the client. In many cases, this is not an issue, but if you are running a
long-running procedure, you may want to produce diagnostic messages. To have
them displayed immediately in the client, you can use the WITH NOWAIT clause to
the RAISERROR statement, as in this example:
PRINT 'This message does not display immediately'
WAITFOR DELAY '00:00:05'
RAISERROR ('But this one does', 0, 1) WITH NOWAIT
WAITFOR DELAY '00:00:05'
PRINT 'It''s over now'
Once there is a message with NOWAIT, all that is ahead of the message in the buffer
is also passed to the client.
Unfortunately, there is a bug in SQL Server with NOWAIT, which affects you only if
you are calling a procedure through RPC (remote procedure call); in this
case, SQL Server buffers the messages nevertheless. RPC is the normal way to call a
procedure from an application (at least it should be), but if you are running a script
from OSQL or Query Analyzer, this bug does not affect you.
Duplicates
Normally when you try to insert a value that would be a duplicate in a unique
index, this is an error and the statement is rolled back. However, the syntax for
the CREATE INDEX statement includes the option IGNORE_DUP_KEY. When this option
is in effect, duplicates are merely discarded. The statement is not rolled back, and
if the INSERT statement comprised several rows, the rows that do not violate the
uniqueness of the index are inserted.
According to Books Online, SQL Server issues a warning when ignoring a duplicate
row. However, in real life the message has severity level 16, and thus comes across
to the client as an error. Nevertheless, SQL Server does not set @@error, and as I
noted, the statement is not rolled back, so this message falls into none of the four
categories I have presented. Microsoft has acknowledged the incorrect severity level as a bug,
so hopefully this will be fixed in some future version of SQL Server.
This option applies to unique indexes only. It is not available for PRIMARY
KEY or UNIQUE constraints.
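Here is a sketch of the behaviour:

```sql
CREATE TABLE #t (a int NOT NULL)
CREATE UNIQUE INDEX duplicate_ix ON #t (a) WITH IGNORE_DUP_KEY
INSERT #t (a) VALUES (1)
-- Multi-row insert with one duplicate: the message "Duplicate key was
-- ignored." is produced, but the non-duplicate row is still inserted.
INSERT #t (a) SELECT 1 UNION ALL SELECT 2
SELECT @@error                 -- 0, despite the level-16 message.
SELECT a FROM #t               -- 1 and 2; the duplicate 1 was discarded.
```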
Using Linked Servers
There is no way to switch off batch-abortion on a general level. But there is
actually one way to handle the case in T-SQL, and that is through linked servers. If
you call a remote stored procedure, and the procedure runs into a batch-aborting
error, the batch in the calling server is not aborted. On return to the local server,
@@error holds the value of the error that aborted the batch on the remote server,
and the return value of the stored procedure is set to NULL. (At least my tests
indicate this. Thus, it is not the same case as when a local procedure dies
with scope-abortion, when the return value is not set at all.) It goes without saying
that this is a fairly lame workaround that is only applicable in special situations.
Some notes:
It must be a truly remote server. If you call a procedure in the local server
with four-part notation, SQL Server is too smart for you.
Set up the remote server with SQLOLEDB. When I set up the remote server with
the OLE DB-over-ODBC provider (MSDASQL), the diagnostics about the error were
poorer on the calling server.
Retrieving the Text of an Error Message
There is no supported way to retrieve the full text of an error message in SQL 2000.
You can get a text from master.dbo.sysmessages, but then you only get
placeholders for interesting things like which constraint was violated. To get
the full text of the error message in a proper way, you need a client to pick it up
and log it.
However, Mark Williams pointed out that you can retrieve the full message text from
within T-SQL with the help of DBCC OUTPUTBUFFER. To wit, after an error has been
raised, the message text is in the output buffer for the process.
The output from DBCC OUTPUTBUFFER is a single column, where each row has a byte
number, a list of hex values, and a textual representation of the hex values. Mark
made the effort to extract the message from the last part, and was kind enough to send me a
stored procedure he had written. As I looked at the output from DBCC
OUTPUTBUFFER, I found a byte that appeared to hold the length of the message,
which helped me to improve Mark's procedure.
Much later I was contacted by Paulo Santos, who looked even deeper into the
output from DBCC OUTPUTBUFFER and was able to significantly improve the
procedure, and dig out not only the error message, but also severity, procedure and
more. My testing shows that it is still not perfect. Sometimes one of several
messages is dropped, junk characters appear and not all line numbers are reported
correctly. But it is far better than nothing at all, and you should not expect
something which relies on undocumented behaviour to be perfect. (Of course, on
SQL 2005 you would use TRY-CATCH and call error_message() in
your CATCH handler.)
There is a very significant restriction with this trick: to run DBCC
OUTPUTBUFFER you need sysadmin rights even to look at your own spid, so you
cannot put this in an application that is to be run by plain users.
The prime source for the stored procedure is at Paulo's web site, where you find the
code and some background. In case his site is down or unavailable, you can find a
copy of his spGET_LastErrorMessage here as well. (But check his site first, as he
may have updates.) If you are curious about the history, you can also look at the
original showErrorMessage that Mark and I produced.
TRY-CATCH in SQL 2005
The next version of SQL Server, SQL 2005, code-named Yukon, introduces significant
improvements to error handling in SQL Server. Here is a simple example:
BEGIN TRY
SELECT convert(smallint, '2003121')
END TRY
BEGIN CATCH
PRINT 'errno: ' + ltrim(str(error_number()))
PRINT 'errmsg: ' + error_message()
END CATCH
The output is:
errno: 244
errmsg: The conversion of the varchar value '2003121' overflowed
an INT2 column. Use a larger integer column.
The construct is similar to error-handling concepts in languages like C++. If an
error occurs in the TRY block, or in a stored procedure called by the TRY block,
execution is transferred to the CATCH block. In the CATCH block, you have access to
six new
functions: error_number(), error_severity(), error_state(), error_message(),
error_procedure() and error_line(), that give you all parts of the message
associated with the error. And, yes, error_message() is the expanded message
with the parameters filled in.
If you are in a transaction, and the error that occurred is a batch-aborting error, your
transaction will be doomed. This means that you cannot commit or perform any
more updates within the transaction; you must roll back.
One caveat is that if you catch an error in this way, the client will never see the
error, unless you call RAISERROR in the error handler. Unfortunately, you cannot
reraise the exact error message, since RAISERROR does not permit you to use error
numbers less than 50000.
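A common workaround is to reraise the text of the original error with a number of your own, as in this sketch; the %s format string guards against per-cent signs in the message text:

```sql
BEGIN TRY
   SELECT convert(smallint, '2003121')
END TRY
BEGIN CATCH
   DECLARE @errmsg nvarchar(2048)
   SELECT @errmsg = error_message()
   -- Reraise with a generic severity and state; the original error
   -- number is lost, but at least the client sees the message text.
   RAISERROR('%s', 16, 1, @errmsg)
END CATCH
```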
Client-side Error Handling
The various client libraries from which you can access SQL Server have their quirks
too. Some libraries are low-level libraries like DB-Library, ODBC and
the SQLOLEDB provider. Others are higher-level libraries that sit on top of one of the
low-level libraries, one example is ADO. If the low-level library has some quirk or
limitation, the high-level library is likely to inherit that. The high-level library
might also add its own quirks and limitations.
I am covering four libraries here: DB-Library, ODBC, ADO and ADO .Net, although
the first two I discuss very briefly, since most developers today
use ADO or ADO .Net.
DB-Library
When it comes to error handling, DB-Library is probably the best in the game.
When SQL Server produces a message (be that an error, a warning or just an
informational message such as the output of a PRINT statement), DB-Library invokes a callback
routine, and in that callback routine you have full access to all parts of the
message: error number, severity level, state, procedure, line number and of course
the message itself. You can then set some global variable to determine what should
happen when you come back from the DB-Library call that caused the error.
Unfortunately, Microsoft stopped developing DB-Library with SQL 6.5, and you
have poor or no support for new features in SQL Server with DB-Library. Thus, I
cannot but discourage you from using DB-Library.
ODBC
With ODBC, you have to rely on return-status values, and then retrieve the error
message yourself. Exactly how, I have to admit that I am a bit foggy on at this point.
However, you do have access to all parts of the error message, and you get all
messages. This is evidenced by the fact that you get all this information in Query
Analyzer which connects through ODBC.
ADO
ADO is not that good when it comes to error handling. First, you don't have full
access to the error message. You only get the error number and the error text. You
do not get the severity level (so you don't know whether it really is an error at all),
nor do you get state, procedure or line number. You do get something called
SQLState, which is a five-letter code, not related to SQL Server but inherited
from ODBC. Another problem is that you do far from always get all error messages,
as I will detail below.
The basic operation with ADO appears simple: You submit a command
to SQL Server and if there is an error in the T-SQL execution, ADO raises an error,
and if you have set up an error handler with On Error Goto, this is where you will
wind up. (On Error is for Basic-derived languages. In C++ I suppose you can
use try-catch, but I have not verified this.) You can retrieve all messages
from SQL Server in the Errors collection on the Connection object.
But there are quite some surprises hiding here. One thing that
makes ADO complicated is that there are so many ways that you can submit a
command and retrieve the results. Partly, this is because ADO permits you to
access other data sources than SQL Server, including non-relational ones. Also, as
your "command" you can simply provide a table name. Since this text is about
error handling with stored procedures in SQL Server, I disregard other possibilities.
But even if you want to invoke a stored procedure, there are a whole lot of choices:
Which provider. You can use SQLOLEDB or MSDASQL (OLE DB over ODBC).
Cursor location. Server-side cursor or client-side cursor? (The concept of a
cursor in this context confused me for a long time. Being an SQL programmer, I think
cursors are bad and should be avoided. Eventually, I have understood that a client-side
cursor is not really a cursor at all. You get the entire data to the client in one go. A
server-side cursor gets the data from the server in pieces, which may or may not involve
an SQL cursor, depending on the cursor type.)
From which object to invoke the stored procedure. You can use
the .Execute method of the Connection and Command objects or
the .Open method of the Recordset object.
Command type. You can construct an EXEC command as a string and
use adCmdText. You can also use adCmdText with ODBC syntax and supply
parameters through the .Parameters collection. And you can
use adCmdStoredProc to supply the name of a stored procedure and use
the .Parameters collection.
Cursor type. Cursors can be forward-only, static, dynamic or keyset.
Lock type. You can choose between read-only, optimistic, batch optimistic
and pessimistic.
And that's not really all.
What errors you see in your client code, depends on which combination of all these
parameters you use. I developed a form, from which I could choose between these
parameters, and then I played with a fairly stupid stored procedure which
depending on input could cause some errors, generate some PRINT messages and
produce some results sets. And there was a great difference in what I got back.
When I used SQLOLEDB and client-side cursors, I did not get either of my
two PRINT messages in my .Errors collection if there were no errors, whereas
with SQLOLEDB and server-side cursors I got both messages. With MSDASQL, I got
the first PRINT message, but not the second, no matter the cursor location.
If there were error messages, I did not always get all of them, but at least one error
was communicated and an error was raised in the VB code. However, there is a
gotcha here, or two depending on how you see it. The first gotcha is that if the
stored procedure produces one or more recordsets before the error occurs, ADO will
not raise an error until you have walked past those preceding recordsets
with .NextRecordset. This is not peculiar to ADO, but as far as I know applies to all
client libraries, and is how SQL Server passes the information to the client. The only
odd thing with ADO is that many programmers do not use .NextRecordset, or even
know about it. I have also found that in some situations ADO may raise an error and
say that .NextRecordset is not supported for your provider or cursor type.
The second gotcha is that your procedure may have more recordsets than you can
imagine. To wit, INSERT, UPDATE and DELETE statements generate recordsets to
report the rowcount, unless the setting NOCOUNT is ON.
Another irritating feature with ADO that I found was that as soon as there had been an
error in the stored procedure, all subsequent result sets from the stored procedure
were discarded. I could still tell from the return value of the stored procedure that
execution had continued. I have found no combination where you can get the result
sets that were produced after an error.
ADO also takes the liberty to make its own considerations about what is an error. I
found that ADO always considers division by zero to be an error, even if
both ARITHABORT and ANSI_WARNINGS are OFF. In this case, SQL Server merely
produces a warning, but ADO opts to handle this warning as an error. A good thing
in my opinion. From what I have found, this only happens with division by zero; not
with arithmetic errors such as overflow.
Above I said that even if I did not get all errors from SQL Server, ADO would raise
an error. This is true as long as we are talking about commands you submit
yourself. But ADO can submit commands behind your back, and if they result in
errors, ADO may not alert you, even if they abort the batch and thereby roll back any
outstanding transaction. This ugly situation is described further in KB article
810100.
Finally, a note on the return value and value of output parameters from a stored
procedure. They are accessible from ADO, even if there is an error during execution
of the stored procedure (as long as the error does not cause the procedure to terminate
execution). If you use a client-side cursor you can normally access them directly
after executing the procedure, whereas with a server-side cursor you must first
retrieve all rows in all result sets. (Which means that if .NextRecordset is not
supported for your cursor, you may not be able to retrieve the return value.)
Beware that if you try to retrieve these values too soon, you will not be able to
retrieve them even when you have retrieved all rows.
It is not really the topic for this text, but the reader might want to know my recommendation
of what to choose from all these possibilities. And I say that you should use the SQLOLEDB
provider (note that MSDASQL is the default), client-side cursors (note that server-side
cursors are the default), and invoke your stored procedures from the Command object
using adCmdStoredProc. Not because this is the best for error handling, but this
appears to be the best from an overall programming perspective. (If you make these choices
you will get a static read-only cursor.)
ADO .Net
Note: this applies to ADO .Net 1.1. Since some behaviour I describe may be due to
bugs or design flaws, earlier or later versions of ADO .Net may be different in some
points.
To some extent, ADO .Net is much better fitted than ADO to handle errors and
informational messages from SQL Server, but unfortunately ADO .Net is not
without shortcomings either.
The ADO .Net classes can be divided into two groups: the disconnected classes that
are common for all data sources, and the connected classes that are data-source
specific, but derived from a common interface. Such a group of connected classes
makes up a .Net Data Provider, and each provider has its own namespace. Three
providers can connect to SQL Server: there is SqlClient, which is specific
to SQL Server, and there are the OLE DB and ODBC .Net Data Providers that connect
to anything for which there is an OLE DB provider or an ODBC driver. I will refer to
them here as OleDb and Odbc, as this is how their namespaces are spelled in the
.Net Framework.
If the only data source you target is SQL Server, SqlClient is of course the natural
choice. As we shall see, however, there are situations where OleDb may be
preferable. There is even the odd case where Odbc is the best choice, but as I will
detail later, you do best to avoid Odbc when connecting to SQL Server.
The three data providers have some common characteristics when it comes to
handling of errors and messages from SQL Server, but there are also significant
differences. I will first cover the common features.
To invoke a stored procedure from ADO .Net, you need a Command object.
(SqlCommand, OleDbCommand or OdbcCommand). Normally you specify
the CommandType as StoredProcedure and provide the procedure name as the
command text, but you can also use the CommandType Text and specify
an EXEC statement.
There are four methods that you can use to invoke a stored procedure
from ADO .Net, and I list them here in the order you are most likely to use them:
DataAdapter.Fill: Fills a DataTable or a DataSet with the data from the stored procedure.
There are several overloaded Fill methods, some of which permit you to
pass a CommandBehavior to specify that you want key or schema
information, or that you want only a single row or a single result set.
ExecuteNonQuery: Performs a command that does not return any result set (or if it does, you
are not interested in it). One example is a stored procedure that updates
data.
ExecuteReader: Returns a DataReader object, through which you can access the rows as
they come from SQL Server. If there are several result sets, you
use .NextResult to traverse them. This is the most general method to
access data. Also here you can specify CommandBehavior.
ExecuteScalar: Use this method to run a command that produces a result set of a single
value.
To test the possible variations, I wrote a simple application in VB .Net, from which
I could pass an SQL command or a stored procedure, and select which data provider
and which call method to use. For most of the tests, I used a procedure that
depending on input parameters would produce results sets, informational or error
messages, possibly interleaved. What follows is based on my observations when
playing with this application.
If an error occurs during execution of a stored procedure, the method you used to
invoke the procedure will raise an exception. Thus, you should always call these
methods within a Try-Catch block, so that you can handle the error message in
some way. In the exception handler you have access to a provider-
specific Exception object with an ErrorCollection that contains information
about the error. What information is available is specific to the provider.
If you are interested in informational messages, that is messages with a severity of
10 or lower, you can set up an InfoMessage event handler, which you register with
the Connection object. It seems, though, if there are both errors and informational
messages, that the informational messages come with the exception. In the event
handler, too, you have access to the ErrorCollection from where you can retrieve
the individual messages.
As long as you stick to Fill, ExecuteNonQuery and ExecuteScalar, your life is
very simple, as all data has been retrieved once you come back, and if there is an
error you wind up in your exception handler. Thus, in difference to ADO, you don't
have to bother about unexpected result sets and all that. If you want the return
value of a stored procedure or the value of output parameters, these are available in
the Parameters collection. However, the OleDb and Odbc providers normally do
not fill in these values, if an error occurs during execution of a stored procedure.
If you use ExecuteReader, there are a few extra precautions. If the stored
procedure first produces a result set, and then a message, you must first
call .NextResult before you get an exception, or, for an informational message,
before any InfoMessage event handler is invoked. In difference to ADO, ADO .Net does not
produce extra result sets for the rowcount of INSERT,
UPDATE and DELETE statements. However, under some circumstances, errors and
messages may give cause to extraneous result sets.
Beware that if .NextResult throws an exception, it does not return a value, so if
you have something like:
Do
....
Try
more_results = reader.NextResult()
Catch e as Exception
MsgBox(e.Message)
End Try
Loop Until Not more_results
more_results retains the value it had before you called .NextResult. (Caveat: I'm
not an experienced .Net programmer, but this is my observation.)
To get the return value from a stored procedure and the value of output parameters
when you use ExecuteReader, you first have to retrieve all rows and all result sets
for these values to be available.
Just like ADO, ADO .Net can sometimes generate commands behind your back; this
appears mainly to happen when you use
the CommandBehaviors KeyInfo and SchemaOnly. But in difference to ADO,
ADO .Net communicates any SQL errors from these extra commands, and throws an
exception in this case too.
So far, it may seem that ADO .Net is a lot more well-behaved than ADO. To some
extent it is, but I will now proceed to the specifics for each data provider, and
this mainly deals with their respective shortcomings.
SqlClient
One very nice thing with SqlClient is that the SqlError class includes
all components of an SQL Server message: server, error number, message text,
severity level, state, procedure and line number.
Another good thing with SqlClient is that in difference to the other two providers,
you almost always get the return value and the value of output parameters from
a stored procedure, even if there is an error during execution (provided that the
error does not terminate the execution of the procedure, of course).
But there are a couple of bad things too:
If the procedure produces more than one error, you only get one error
message, unless you are using ExecuteNonQuery. This may be addressed by
the fix described in KB 823679.
If the procedure produces an error before the first result set, you cannot access
any data with any of the methods. (ExecuteReader does not even return
a SqlDataReader object.) If you need to access data in this case, Odbc is your
sole possibility.
If the stored procedure produces a result set, then an error, then another result
set, there is only one way to retrieve the second result set:
use ExecuteReader and be sure to have SET NOCOUNT ON. If you run
with NOCOUNT OFF, things can go really bad, and data may linger on the
connection and come back when the connection is reused from the pool.
Eventually SqlClient may get stuck in an infinite loop or throw some
nonsensical exception.
RAISERROR WITH NOWAIT does not work with ExecuteNonQuery; the
messages are buffered as if there was no NOWAIT. Use any of the other
methods if you need RAISERROR WITH NOWAIT. (Note that to use NOWAIT, you
must use CommandType Text, and a single unparameterized SQL string, due
to a bug in SQL Server.)
OleDb
In an OleDbErrorCollection, you don't have access to all information about the
error from SQL Server, but only the message text and the message number.
Notes on OleDb:
If there is an error message during execution, OleDb does in most situations
not provide the return value of the stored procedure or the value of any output
parameters.
If the procedure produces more than one error, you only get one error message
if NOCOUNT is OFF. If NOCOUNT is ON, you may get all messages, unless there
are result sets interleaved with the messages. For some reason the error
messages come in reverse order.
If the procedure produces an error before the first result set, you cannot access
any data with any of the methods. (ExecuteReader does not even return
a OleDbDataReader object.) If you need to access data in this case, Odbc is
your sole possibility.
If the stored procedure produces a result set, then an error, then another result
set, there is only one way to retrieve the second and successive result sets:
use ExecuteReader and be sure to have SET NOCOUNT OFF. If you
have NOCOUNT ON, you will still get a lot of result sets, but most of them will
be empty.
RAISERROR WITH NOWAIT does not always work with OleDb; the messages
are sometimes buffered anyway. I have not been able to find a pattern for this.
For NOWAIT to work at all, you must use CommandType Text, because of a bug
in SQL 2000.
Odbc
In an OdbcErrorCollection, you don't have access to all information about the
error from SQL Server, but only the message text and the message number.
Odbc has all sorts of problems with errors and informational messages. If there are
several informational messages, Odbc may lose control and fail to return data,
including providing the return value and the values of output parameters of stored
procedures. It does not matter whether you have declared an InfoMessage event
handler. If there are error messages, and you try to retrieve data, you may get
exceptions from the ODBC SQL Server driver saying Function sequence
error or Associated statement not prepared.
If there are error messages before any result sets are produced, Odbc may not
throw an exception for the first error message, but only invoke
your InfoMessage event handler. And if you don't have one, you will not even
notice that there was an error. Under some circumstances more than one error
message may be dropped this way.
Some of these problems may go away if you run with SET NOCOUNT ON, but not all.
In general, therefore, I advise against using the Odbc .Net Data Provider to
access SQL Server.
Still, there is one situation where Odbc is your sole choice, and that is if you call a
stored procedure that first produces an error message and then a result set. The
other two providers never return any data in this situation. With Odbc you can do it
but it is a narrow path to follow. You must have SET NOCOUNT ON. If you only
have one result set, you can probably use OdbcDataAdapter.Fill. If there are
more than one result set, you must use ExecuteReader, and you must specify
the CommandBehavior SingleResult (!). You may get an exception
about Function Sequence Error at the end, but by then you have retrieved all your
data.
Acknowledgements and Feedback
Thanks to Trevor Morris who pointed out the tidbit on IMPLICIT_TRANSACTIONS and
error 266, Mark Williams and Paulo Santos who investigated DBCC
OUTPUTBUFFER and SQL Server MVP Jacco Schalkwijk who found the definition of
the severity levels 11-16.
If you have suggestions for improvements or corrections on contents, language or
formatting, please mail me at esquel@sommarskog.se. If you have technical
questions that any knowledgeable person could answer, I encourage you to post to
any of the
newsgroups microsoft.public.sqlserver.programming or comp.databases.ms-
sqlserver.
Revision History
2009-11-29 Added a reference for the new, and still unfinished article for error
handling in SQL 2005.
2008-03-31 Thanks to Paulo Santos, there is now an improved stored procedure
for retrieving the error message through DBCC OUTPUTBUFFER. Actually, the best
way to get the scoop is to go to his web site directly.
2006-01-21 Added a section on how to retrieve the text from an error message,
and a description of severity levels 11-16.
2004-12-26 Rewritten the section on TRY-CATCH in SQL 2005, to adapt for the
Beta 2 changes.
2004-05-30 One more case where error 266 is not raised.
2004-02-17 Added a section on ADO .Net.
2003-12-03 Rewrote section on the RETURN statement. Added section on TRY-
CATCH in Yukon.
Back to my home page.
Implementing Error Handling with Stored
Procedures
in SQL 2000
An SQL text by Erland Sommarskog, SQL Server MVP. Last revision 2009-11-29.

This is one of two articles about error handling in SQL 2000. This article gives you
recommendations for how you should implement error handling when you write
stored procedures, including when you call them from ADO. The other
article, Error Handling in SQL Server - a Background, gives a deeper description
of the idiosyncrasies with error handling in SQL Server and ADO. That article is in
some sense part one in the series. However, you can read this article without
reading the background article first, and if you are not a very experienced user of
SQL Server, I recommend that you start here. In places there are links to the
background article, if you want more information about a certain issue.
Note: this article is aimed at SQL 2000 and earlier versions of SQL Server.
SQL 2005 offers significantly improved methods for error handling with TRY-
CATCH. This article is not apt if you are using SQL 2005 or later. I don't have a
complete article on error handling for SQL 2005, but I have an unfinished
article with a section Jumpstart Error Handling that still can be useful.
Table of Contents:
Introduction
The Presumptions
A General Example
Checking Calls to Stored Procedures
The Philosophy of Error Handling
General Requirements
Why Do We Check for Errors?
When Should You Check @@error?
ROLLBACK or not to ROLLBACK - That's the Question
SET XACT_ABORT ON revisited
Error Handling with Cursors
Error Handling with Triggers
Error Handling with User-Defined Functions
Error Handling with Dynamic SQL
Error Handling in Client Code
What to Do in Case of an Error?
Command Timeouts
Why is My Error Not Raised?
Getting the Return Value from a Stored Procedure
Acknowledgements and Feedback
Revision History
Introduction
Error handling in stored procedures is a very tedious task, because T-SQL offers
no exception mechanism, or any On Error Goto. All you have is the global variable
@@error which you need to check after each statement for a non-zero value to be
perfectly safe. If you call a stored procedure, you also need to check the return
value from the procedure.
In fact, this is so tedious that you will find that you have to make
compromises, and in some situations assume that nothing can go wrong.
Still, you cannot just ignore checking for errors, because ignoring an error could
cause your updates to be incomplete, and compromise the integrity of your data. Or
it can cause a transaction to run for a much longer time than intended, leading to
blocking and the risk that the user loses all his updates when he logs out.
In the first section, I summarize the most important points of the material in
the background article, so you know under which presumptions you have to work.
Next, I show you a general example that covers the most essential parts of how to
do error handling, which I follow with the special considerations when you call a
stored procedure. I then wander into a section where I discuss some philosophical
questions on how error handling should be implemented; this is a section you can
skip if you are short on time. I recommend that you read the section When Should
You Check @@error, though. I take a look at SET XACT_ABORT ON, which can
simplify your error handling but not as much as you might hope. I then look at
error handling for four special areas: cursors, triggers, user-defined functions and
dynamic SQL. Finally, I look at error handling in client code, with focus on ADO
and ADO .Net.
To save space, I am focusing on stored procedures that run as part of an
application. I am not covering loose SQL statements sent from a client, and I
disregard administrative scripts like scripts for backup or scripts that create or
change tables. Neither do I consider distributed transactions, nor situations where
you use SAVE TRANSACTION.
I'm not discussing different versions of SQL Server. The recommendations are
based from how SQL 2000 works, but they apply equally well to SQL 7 and
SQL 6.5. (The situation in SQL 6.5 is actually slightly less complex, but since you
presumably will move to SQL 7 or SQL 2000 you might as well write your error handling for
these versions directly.) As noted above SQL 2005 offers new options for error
handling which are much easier to use.
The Presumptions
This is a brief summary of the presumptions for implementing error handling in T-
SQL. The points below are detailed in the background article, but here we just
accept these points as the state of affairs.
After each statement, SQL Server sets @@error to 0 if the statement was
successful. If the statement results in an error, @@error holds the number of
that error. Because @@error is so volatile, you should always save @@error
to a local variable before doing anything else with it.
In some situations when an error occurs, SQL Server aborts the batch and
rolls back any open transaction, but for many errors SQL Server
only terminates the statement where the error occurred, and it is your
responsibility to roll back any transaction. Since SQL Server is not very
consistent in which action it takes, your basic approach to error handling
should be that SQL Server might permit execution to continue.
Yet another action SQL Server can take in case of an error is to abandon execution
of the current stored procedure, but return control to the calling procedure
without rolling back any transaction, even if it was started by the aborted
procedure.
The return value from a stored procedure should only serve to indicate
whether the stored procedure was successful or not, by returning 0 in case of
success, and a non-zero value in case of an error. You are the one who is
responsible for ensuring that the procedure returns a non-zero value in case of an
error.
With SET XACT_ABORT ON, you can get SQL Server to abort the batch and
roll back the transaction for most errors, but not all errors. Even if you use SET
XACT_ABORT ON, you must at a minimum error-check calls to stored
procedures.
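To make the second point concrete, here is a minimal sketch (the table tbl and its primary key are hypothetical) showing that a constraint violation terminates only the statement, while execution continues with the transaction still open:

```sql
BEGIN TRANSACTION
-- Assume id = 1 already exists, so this INSERT violates the primary key.
-- The statement is terminated, but the batch keeps running.
INSERT tbl (id) VALUES (1)
PRINT '@@trancount is still ' + ltrim(str(@@trancount))   -- prints 1
ROLLBACK TRANSACTION
```

Since SQL Server did not roll back for us here, the explicit ROLLBACK TRANSACTION is what keeps the transaction from being left open.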
A General Example
There is not any single universal truth on how to implement error handling in
stored procedures. There are several considerations on whether to roll back in all
situations or not, to use GOTO to an error label etc. Some of these considerations, I
am covering in this text. Some I have opted to stay silent on, since this text is long
enough already. Take what I present in this article as recommendations. If they are
in conflict with your common sense, it might be your common sense that you
should follow.
In this example I show how I implement error checking in a stored procedure that
creates a temp table, performs some manipulation on the temp table, calls another
stored procedure, and eventually starts a transaction to update a couple of
permanent tables. The procedure accepts a char(1) parameter for which only
certain values are permitted. In the interest of brevity, I am only outlining the actual
logic of the procedure.
CREATE PROCEDURE error_test_demo @mode char(1) AS

CREATE TABLE #temp (...)

DECLARE @err int,
...

IF @mode NOT IN ('A', 'B', 'C')
BEGIN
RAISERROR('Illegal value "%s" passed for @mode.', 16, -1, @mode)
RETURN 50000
END

INSERT #temp (...)
SELECT ...
SELECT @err = @@error IF @err <> 0 RETURN @err

UPDATE #temp
SET ...
FROM ...
SELECT @err = @@error IF @err <> 0 RETURN @err

EXEC @err = some_other_sp @value OUTPUT
SELECT @err = coalesce(nullif(@err, 0), @@error)
IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END

BEGIN TRANSACTION

INSERT permanent_tbl1 (...)
SELECT ...
FROM ...
SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END

UPDATE permanent_tbl2
SET ...
FROM #temp ....
SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END

DELETE permanent_tbl3
WHERE ...
SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END

EXEC @err = one_more_sp @value
SELECT @err = coalesce(nullif(@err, 0), @@error)
IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END

COMMIT TRANSACTION
SELECT @err = @@error IF @err <> 0 RETURN @err
Comments on various points:
Atomicity of transactions: Without all these tests on @@error, an error in the
UPDATE statement could mean that we commit a transaction that only
includes the results of the INSERT and DELETE statements and whatever
update one_more_sp performs. New users to SQL Server are sometimes shocked
when they find out the state of affairs, since they have been taught that transactions
are atomic. And in theory they are right, but this is how SQL Server works. (And
there is no reason to feel stupid if you held this belief. Many years ago, this was an
unpleasant surprise to me as well.)
Always save @@error into a local variable. Even if you can write error checking
without any local variable, you would still need it as soon as you want to do
something "fancy", so you might as well always use it. (To repeat: @@error is set
after each statement, so the snippet IF @@error <> 0 SELECT @@error will never
return anything but 0, because the IF statement is successful. See also
the background article for an example.)
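As a minimal sketch of this volatility (the table name is hypothetical), compare capturing @@error immediately with testing it a second time:

```sql
DECLARE @err int
UPDATE tbl SET col = 1 WHERE id = 'x'   -- suppose this statement fails
SELECT @err = @@error                   -- capture at once; the error number is preserved
-- Correct: test the saved value.
IF @err <> 0 PRINT 'Error ' + ltrim(str(@err))
-- Wrong: the IF above was itself successful, so @@error is back to 0
-- by now, and this test never fires.
IF @@error <> 0 PRINT 'Never printed'
```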
Exit on first error. As soon as there is an error, I abandon the rest of the
procedure and return a non-zero value to the caller. No attempt at recovery or local
error handling, not even an error exit. Keep it as simple as possible. There are
situations where you might want to have some alternate action in case of error, for
instance to set a status column in some table. In such a case, you would use IF @err
<> 0 GOTO err_handle, but in my experience this is too uncommon to warrant
using GOTO in all cases. (There is one generic situation where my proposed
strategy needs some modification, and that is when you use cursors, which we will
look at later.)
Rollback or not. In the example, when I perform an SQL statement outside my
own transaction I don't include an explicit ROLLBACK TRANSACTION, but I do
it inside my transaction. When I call a stored procedure, I always have a
ROLLBACK. This may seem inconsistent, but for the moment take this as a fact. I
discuss the issue further in the next section and in the section ROLLBACK or not to
ROLLBACK.
Error check on stored procedures. You may be bewildered by the complex
expression. The point is that you must check @@error as well as the return value
from the procedure. We will look closer at this in the next section.
Assertion. Notice the initial check for @mode where I raise an error in case of an
illegal mode and exit the procedure with a non-zero value to indicate an error.
Overall, it is a good recommendation to validate your input data, and raise an error
if data is something your code does not handle. Particularly this is important, if the
procedure is of a more general nature that could be called from many sources. This
is a programming technique that also is used in traditional languages, and these
checks are generally known as assertions.
Return value. You can see that I am returning the actual error code, and 50000 for
the RAISERROR. This is basically a habit I have. I don't think there are many
places in our application where the caller would actually look at it. So you can return
1, 4711 or whatever, as long as it is not zero. (One strategy I applied for a while was that the
first RETURN returned 1, next returned 2 and so on, with the idea that the return value would
identify where things went wrong. I cannot recall that I ever had any real use for it, though.)
Formatting. The formatting of the error checking merits a comment. The idea is
that I want the error checking as un-intrusive as possible so that the actual mission
of the procedure is not obscured. Thus, I put it all on one long line, and attach it
directly to the statement I am checking, as logically I see the error checking as part
of that statement. The checking for the stored procedure is on two lines, though,
since else that line would be very long.
Checking Calls to Stored Procedures
When checking a call to a stored procedure, it is not sufficient to check @@error.
By the time execution returns to the caller, @@error may again be 0, because the
statement that raised an error was not the last one executed. For instance, if the
DELETE statement in error_demo_test above fails on a constraint violation, the
last statement the procedure executes is RETURN @err, and this is likely to be
successful.
This is where the careful use of the RETURN statement comes in: if you get a non-
zero value back from a stored procedure, this indicates that an error occurred in
that procedure or a procedure it called in its turn. And unless you have any special
error handling, or have reasons to ignore any error, you should back out yourself.
But neither is checking the return value enough. If the invocation of the procedure
as such fails, for instance because of incorrect parameter count, SQL Server does
not set the return value at all, so that variable retains its old value. And if you are
like me and use the same variable throughout your procedure, that value is likely to
be 0.
This is why in error_test_demo, I have this somewhat complex check:
EXEC @err = some_other_sp @value OUTPUT
SELECT @err = coalesce(nullif(@err, 0), @@error)
IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END
You may ask what the second line means. The nullif function says that if @err
is 0, this is the same as NULL. coalesce is a function that returns the first non-
NULL value in its argument. That is, if the procedure returned a non-zero return
value, we use that value, else we use @@error. If any of them has a non-zero
value, an error has occurred somewhere.
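To spell out how the expression evaluates, here are the two interesting cases, with example values substituted for @err and @@error:

```sql
-- Case 1: the procedure returned 0, but the EXEC itself failed with error 547.
SELECT coalesce(nullif(0, 0), 547)    -- nullif gives NULL, so coalesce gives 547
-- Case 2: the procedure returned 13, and @@error is 0.
SELECT coalesce(nullif(13, 0), 0)     -- nullif gives 13, so coalesce gives 13
```

In both cases the non-zero indicator survives, which is exactly what the check needs.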
As I noted in the previous section, I suggest that you always have a ROLLBACK
TRANSACTION if a call to a stored procedure results in error. This is because the
procedure may start a transaction that it does not commit. This can happen either
because there is a BEGIN TRANSACTION without a matching COMMIT or
ROLLBACK TRANSACTION being executed, or because an error causes SQL
Server to abort execution of the procedure because of a compilation error that was
not detected when you loaded the procedure because of deferred name resolution.
See the discussion on scope-aborting errors in the background article for an
example. I discuss ROLLBACK more in the section ROLLBACK or not to
ROLLBACK.
Note: if you are calling a remote stored procedure, the return value will be NULL,
if the remote procedure runs into an error that aborts the batch. I would expect
@@error to have a non-zero value in this situation, but if you are really paranoid,
you can do something like this:
EXEC @err = REMOTESRV.db.dbo.remote_sp @value
SELECT @err = coalesce(nullif(@@error, 0), @err, -4711)
As for whether you should roll back in this situation or not, I have to admit that I
have not analysed this.
Finally, while most system procedures that come with SQL Server obey the
principle of returning 0 in case of success and a non-zero value in case of failure,
there are a few exceptions. Here I only mention one: sp_xml_removedocument,
which returns 1 in all situations, so for this procedure you should only check
@@error. (I believe Microsoft has acknowledged this as a bug.) For other system
procedures: when in doubt, consult the documentation for the system procedure in
question in Books Online.
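A sketch of what this exception looks like in practice, assuming @idoc holds a document handle from an earlier call to sp_xml_preparedocument:

```sql
-- sp_xml_removedocument always returns 1, so its return value is useless;
-- check @@error alone for this procedure.
EXEC sp_xml_removedocument @idoc
SELECT @err = @@error IF @err <> 0 RETURN @err
```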
The Philosophy of Error Handling
In this section, I try to give a rationale for the error handling I recommend, and try
to cover what trade-offs you may be forced to make when you implement your own error
handling. This section is somewhat philosophical in nature, and if all you want is a
cookbook on error handling, feel free to move to the next section (about SET
XACT_ABORT ON). You may however want to study the sub-section When
Should You Check @@error.
General Requirements
In an ideal world, this is what we would want from our error handling:
1. Simplicity. Error handling must be simple. If the error handling is too
complex, bugs might creep into the error handling, and what is the likelihood
that every single piece of error-handling code is tested? Particularly, when
error-handling appears after each statement?
2. Incomplete transactions must never be committed. This is a coin with two
sides. 1) When an error occurs in a statement, you should somewhere issue a
ROLLBACK TRANSACTION if there was an open transaction. 2) If a user-
defined transaction has been rolled back, you must not continue with the
processing, because you would only carry out the second part of the
transaction.
3. You must not leave incomplete transactions open. There are situations
where, if you are not careful, you could leave the process with an open
transaction. When the user continues his work, he will acquire more and
more locks as he updates data, with increased risk of blocking other users.
When he eventually disconnects, a big fat ROLLBACK sets in and he loses all
his changes.
4. Modularity, take one. A stored procedure should not assume that just
because it did not start a transaction itself, there is no transaction active, as
the calling procedure or client may have started a transaction.
5. Modularity, take two. Ideally, a stored procedure should not roll back a
transaction that was started by a caller, as the caller may want to do some
recovery or take some other action.
6. Avoid unnecessary error messages. If you roll back too much, or roll back in a
stored procedure that did not start the transaction, you will get the messages
266, Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK
TRANSACTION statement is missing, and 3903, The ROLLBACK
TRANSACTION request has no corresponding BEGIN TRANSACTION, which
become white noise in the error-message output and occlude the real error.
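As an illustration of point 6, this sketch (the procedure name is made up) provokes message 266: a procedure that exits with a different @@trancount than it entered with draws the warning.

```sql
CREATE PROCEDURE leaves_tran_open AS
   BEGIN TRANSACTION   -- started, but neither committed nor rolled back
go
EXEC leaves_tran_open  -- raises error 266 about the transaction count
ROLLBACK TRANSACTION   -- clean up the transaction the procedure left open
go
```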
These requirements tend to conflict with each other, particularly the requirements
2-6 tend to be in opposition to the requirement on simplicity. The order above
roughly reflects the priority of the requirements, with the sharp divider going
between the two modularity items. Actually, my opinion is that trying to address
the very last point on the list would incur too much complexity, so I almost always
overlook it entirely.
It may baffle some readers that I have put simplicity on the top of the list, but the
idea is that if your error handling is too complex, then you run the risk of messing
up the transaction just because of errors in the error handling.
Why Do We Check for Errors?
This question may seem to have an obvious answer, but it is worth considering this
question in some detail, to get a deeper understanding of what we are trying to
achieve.
The answer is that we don't want to continue execution after an error, because we
are likely to have incorrect data, and thus it is likely that the execution will yield an
incorrect result, either in terms of incorrect data being returned to the client, or
database tables being incorrectly updated.
If you look at error_test_demo above, you can easily see that if we get an error in
one of the statements between the BEGIN and COMMIT TRANSACTION, the
transaction will be incomplete if we don't check @@error and roll back and exit on
an error. For instance, we may delete the old data, without inserting any new. But it
is also important to check the manipulation of the temp table before the transaction
starts, because if any of these operations fail, the INSERT, UPDATE and DELETE
in the transaction will operate from the wrong data with unknown consequences.
Consider this outlined procedure:
CREATE PROCEDURE error_test_select @mode char(1) AS

CREATE TABLE #temp (...)

DECLARE @err int,
...

IF @mode NOT IN ('A', 'B', 'C')
BEGIN
RAISERROR('Illegal value "%s" passed for @mode.', 16, -1, @mode)
RETURN 50000
END

INSERT #temp (...)
SELECT ...
SELECT @err = @@error IF @err <> 0 RETURN @err

UPDATE #temp
SET ...
FROM ...
SELECT @err = @@error IF @err <> 0 RETURN @err

SELECT col1, col2, ...
FROM #temp
JOIN ...
As you see the initial part is similar to error_test_demo, but instead of a
transaction, there is a SELECT statement that produces a result set. We still check
for errors, so that we don't go on and produce a result set with incorrect data.
You may note that the SELECT statement itself is not followed by any error
checking. I will discuss this in the next section.
When Should You Check @@error?
After any statement in which an error could affect the result of the stored
procedure, or a stored procedure that has called it. And that is about any statement
in T-SQL.
In practice, this is not really workable. These are the statements for which I
recommend you to always check @@error:
DML statements, that is, INSERT, DELETE and UPDATE, even when they affect
temp tables or table variables.
SELECT INTO.
Invocation of stored procedures.
Invocation of dynamic SQL.
COMMIT TRANSACTION.
DECLARE and OPEN CURSOR.
FETCH from cursor.
WRITETEXT and UPDATETEXT.
SELECT is not on this list. That does not mean that I want to discourage you from
checking @@error after SELECT, but since I rarely do this myself, I felt I could
not put it on a list of recommendations. SELECT can occur in three different
situations:
Assignment of local variables. (This also includes use of SET for the same task.)
You can run into errors like overflow or permission problems that would
cause the variables to get incorrect values, and thus would be highly likely to
affect the result of the stored procedure. When in doubt, check @@error.
Producing a result set. Often a SELECT that produces a result set is the last
statement before control of execution returns to the client, and thus any
error will not affect the execution of T-SQL code. The client does not need any
non-zero return value, since it sees the error itself (you can never hide an
error from a client), and hopefully understands that the result set is not to be
trusted, should rows have been returned before the error occurs.
The construct INSERT-EXEC permits you to insert the output of a stored
procedure into a table in the calling procedure. In this case it would be best
to check @@error and set the return status after the SELECT. The problem is, you can
never tell if someone decides to call your procedure with INSERT-EXEC. This
construct is not that common, and personally I discourage use of it. (Follow
the link to it, to see why.) I'm inclined to say that it is up to the developer
who decides to call your procedure with INSERT-EXEC to make sure that he
gets correct error information back.
Conditional tests for IF and WHILE. This is where things definitely get out of
hand. For starters, where do you put the check of @@error? (You put it where
execution would end up if the condition does not yield a true value. With the error checking a
long way from what it checks, you get quite obscure code.) Workaround: write IF and
WHILE with SELECTs that are so simple that they cannot go wrong. Or save
the result of the test in a local variable, and check @@error before the
conditional.
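A sketch of the latter workaround (the table and the condition are made up): compute the test into a local variable, check @@error, and only then run the conditional.

```sql
DECLARE @cnt int, @err int
SELECT @cnt = COUNT(*) FROM sometbl WHERE status = 'NEW'
SELECT @err = @@error IF @err <> 0 RETURN @err
IF @cnt > 0
BEGIN
   -- Safe to act on the condition here; any error in the test
   -- has already been caught above.
   PRINT 'There are new rows to process.'
END
```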
Here I have not covered DDL statements (CREATE VIEW etc) or DBA statements
like BACKUP or DBCC. They are not in the scope for this article, since I am
restricting myself to application development. Only two DDL statements are likely
to appear in application code: CREATE and DROP TABLE for temp tables. It
seems that if there is an error in a CREATE TABLE statement, SQL Server always
aborts the batch. Were execution to continue, it is likely that any reference to the
table would cause an error, since the table never was created. Thus, I rarely check
@@error after CREATE TABLE.
A note on COMMIT TRANSACTION: the one error that can occur with
COMMIT TRANSACTION is that you do not have a transaction in progress. In
itself this is not likely to affect the continued processing, but it is a token that
something has already gone wrong, which is why it is best to back out, so that you
do not cause more damage.
Note: whereas I cover most of the statements above in one way or another in this
text, I am not giving any further coverage to text/image manipulation with
READTEXT, WRITETEXT and UPDATETEXT, as I have little experience from
working with these.
Finally, keep in mind that these recommendations cover the general
case. There are situations when checking @@error is unnecessary, or even
meaningless. This is when you basically have nowhere to go with the error. I'll
show you an example of this when we look at error handling with cursors.
ROLLBACK or not to ROLLBACK - That's the Question
You saw in error_test_demo that I only issued a ROLLBACK when 1) I had
started a transaction myself or 2) I had called a stored procedure. In this section, I
will further discuss when to roll back and not.
The quick answer on when to roll back, if you want maximum simplicity, is this:
whenever you get a non-zero value in @@error or a non-zero return value from a
stored procedure, your error checking should include a ROLLBACK
TRANSACTION, even if there is no transaction in sight in your stored procedure.
In such case you are taking care of the first four of the general requirements: #1
Simple. #2 ROLLBACK on first error. #3 Do not leave transactions open. #4
Caller may have started a transaction. But you are ignoring the last two
requirements: #5 The scope that started the transaction should also roll it back and
#6 Avoid unnecessary error messages.
I have already said that I don't care about #6. If you find the extra error messages
annoying, write your error handling in the client so that it ignores errors 266 and
3903 if they are accompanied by other error messages. (If they appear on their
own, they indicate that you have an error in the application, so then you should not
drop them on the floor.)
It is in an attempt to respect #5 (let the scope that started the transaction also roll
it back) that I, in error_demo_test, do not issue a ROLLBACK when I have not
started a transaction myself. But it is only half-hearted, because when I call a
stored procedure, I always roll back, since the procedure I called may have started
a transaction but not rolled it back as I discussed above. I cannot trust the guy who
called me to roll it back, because if he had no transaction in progress he has as
much reason as I to roll back. Thus, I have to sacrifice #5 in order to save the more
important requirement, #3: don't leave transactions open.
To fully respect point #5, we would have to save @@trancount in the beginning of
the procedure:
CREATE PROCEDURE error_test_modul2 @mode char(1) AS
CREATE TABLE #temp (...)

DECLARE @err int,
@save_tcnt int
...

SELECT @save_tcnt = @@trancount
...

EXEC @err = some_other_sp @value OUTPUT
SELECT @err = coalesce(nullif(@err, 0), @@error)
IF @err <> 0 BEGIN IF @save_tcnt = 0 ROLLBACK TRANSACTION RETURN @err END

BEGIN TRANSACTION

INSERT permanent_tbl1 (...)
SELECT ...
FROM ...
SELECT @err = @@error IF @err <> 0 BEGIN IF @save_tcnt = 0 ROLLBACK TRANSACTION RETURN @err END
Personally, I feel that this violates the simplicity requirement a bit too much to be
acceptable, but as they say, your mileage may vary.
When calling a procedure you may "know" that this is a read-only procedure, and
that it therefore cannot leave you with a stray transaction. However, this thinking is
somewhat dangerous. What if some developer next year decides that this procedure
should have a BEGIN TRANSACTION? Overall, the less you assume about the
code you call, the better.
There is a special case where you can skip the ROLLBACK entirely, even for
error-checks of calls to stored procedures:
CREATE PROCEDURE error_test_inner @mode char(1) AS

IF @@trancount = 0
BEGIN
RAISERROR ('This procedure must be called with a transaction in progress', 16, 1)
RETURN 50000
END

INSERT permanent_tbl1 (...)
SELECT ...
FROM ...
SELECT @err = @@error IF @err <> 0 RETURN @err
This procedure has an assertion that checks that there is an active transaction when
the procedure is invoked. This may be an idea that is new to you, but I have written
more than one procedure with this check. Such a procedure is part of a larger
operation and is a sub-procedure to a main procedure. It would be an error to
perform only the updates in this procedure. (Such procedures also commonly
check @@nestlevel.) Since we know that the caller has an active transaction, we
also trust it to handle the rollback for us.
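A sketch of the companion @@nestlevel assertion mentioned above; the exact check is up to you:

```sql
-- Refuse to run unless called from another procedure. @@nestlevel is 1
-- when the procedure is invoked directly from a batch.
IF @@nestlevel = 1
BEGIN
   RAISERROR('This procedure may only be called from another stored procedure.', 16, 1)
   RETURN 50000
END
```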
Before I close this section, I should add that I have made the tacit assumption that
all code in a set of nested procedures is written within the same organisation,
where all programmers obey the same error-handling principles. If your
procedure might be called by programmers in a different town in a different
country, you need to take extra precautions. Not least, you need
to document how you handle transactions in case of an error.
SET XACT_ABORT ON revisited
One way to make your error handling simpler is to run with SET XACT_ABORT
ON. With this setting, most errors abort the batch. This may give you the idea that
you don't need any error handling at all in your stored procedures, but not so fast! I
said most errors, not all errors.
Even if XACT_ABORT is ON, as a minimum you must check for errors when
calling stored procedures, and when you invoke dynamic SQL. This is because
XACT_ABORT does not affect compilation errors, and compilation errors are
typically those that cause SQL Server to abandon execution of a procedure and
return control to the caller. Nor will the batch be aborted because of a
RAISERROR, so if you detect an error condition, you still need to return a non-
zero value to the caller, that has to check for it.
Also, when XACT_ABORT is ON, error 266, Transaction count after EXECUTE
indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing,
does not abort the batch. This is not documented in Books Online, and it makes me
a little nervous that there might be more errors that SET XACT_ABORT ON does
not affect.
In any case, I would suggest that if you use SET XACT_ABORT ON, you should
use it consistently, preferably submitting the command from the client directly on
connection. What you should not do, is to use it sometimes and sometimes not.
It is particularly bad if you, as an individual programmer, insert a SET
XACT_ABORT ON in the procedures you write as your private standard, while
your colleagues do not. Say that another programmer calls your code. He might have
some error-handling code where he logs the error in a table. While SQL Server
may abort the batch for some errors, sufficiently many errors let execution
continue to make such a scheme worthwhile. As long as no joker starts to play
games with SET XACT_ABORT ON, that is. (Note: there are some situations with
distributed queries where SET XACT_ABORT ON is required for them to work. This is the
exception to the rule that you should not use XACT_ABORT ON sometimes.)
Error Handling with Cursors
When you use cursors or some other iterative scheme, there are some special
considerations for error handling. Some of this is due to the nature of cursors as
such, whereas other issues have to do with the iteration in general.
You create a cursor with the DECLARE CURSOR statement, which despite the
name is an executable statement. A cursor can be either process-global or local to
the scope where it was created. The default is process-global, but I recommend
that you use local cursors, which you specify by adding the keyword LOCAL after
the keyword CURSOR. However, you cannot use local cursors if you create the
cursor from dynamic SQL, or access the cursor from several procedures or from
dynamic SQL.
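As a syntax sketch (tbl and col are placeholder names):

```sql
DECLARE some_cur CURSOR LOCAL FOR
   SELECT col FROM tbl

-- Being LOCAL, the cursor goes out of scope when the procedure exits,
-- so an error that aborts the procedure cannot leave a stray open cursor
-- behind for the next call on the same connection.
```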
If you apply the standard error handling we have used this far with a process-
global cursor, you will leave the cursor as existing and open. Next time the same
process calls the procedure, you will get an error saying that the cursor already
exists and is open. If you ignore the error, the cursor will continue where you left it
last time, although the input parameters say that a completely different set of data
should be handled. That's bad.
To deal with this, you need this error-checking code for a global cursor:
DECLARE some_cur CURSOR FOR
SELECT col FROM tbl
SELECT @err = @@error IF @err <> 0 BEGIN DEALLOCATE some_cur RETURN @err END
That is, if DECLARE CURSOR fails, issue a statement to deallocate the cursor. (I
did not include ROLLBACK TRANSACTION here, but this may be a good idea,
as I discussed in the previous section.)
In the cursor loop, assuming that you want to abort the procedure on first error, this
is how you should do it:
OPEN some_cur
WHILE 1 = 1
BEGIN
FETCH some_cur INTO @var
SELECT @err = @@error IF @err <> 0 BREAK
IF @@fetch_status <> 0
BREAK

UPDATE some_tbl
SET ...
SELECT @err = @@error IF @err <> 0 BREAK
...
END

DEALLOCATE some_cur

IF @err <> 0
BEGIN
ROLLBACK TRANSACTION
RETURN @err
END
...
That is, when running a global cursor you cannot exit immediately, but you must
first make sure that the cursor is closed and deallocated. Once this has been done,
you can check @err, and leave the procedure. At this point, it is safest to always
include a ROLLBACK TRANSACTION, as we no longer know at which point the
error occurred, and there could have been a transaction in progress there.
In passing, note here how I write the cursor loop with regards to FETCH. This
style with a single FETCH statement is highly recommendable, because if you
change the column list in the cursor declaration, there is only one FETCH to
change, and one possible source of error less.
When you do iterative processing, there are cases when you do not want to exit the
procedure on first error. You go through a set of rows that are handled
independently, and if an operation fails for one row, you may still want to try to
process remaining rows, possibly setting an error flag for the failed row. Here is an
outline of what such a procedure may look like:
CREATE PROCEDURE error_demo_cursor AS

DECLARE @err int,
...
IF @@trancount > 0
BEGIN
RAISERROR ('This procedure must not be called with a transaction
in progress', 16, 1)
RETURN 50000
END

DECLARE some_cur CURSOR FOR
SELECT id, col1, col2, ...
FROM tbl
WHERE status = 'New'
...
SELECT @err = @@error IF @err <> 0 BEGIN DEALLOCATE some_cur RETURN @err END

OPEN some_cur
SELECT @err = @@error IF @err <> 0 BEGIN DEALLOCATE some_cur RETURN @err END

WHILE 1 = 1
BEGIN
FETCH some_cur INTO @id, @par1, @par2, ...
SELECT @err = @@error
IF @err <> 0 OR @@fetch_status <> 0
BREAK

BEGIN TRANSACTION

EXEC @err = some_sp @par1, ...
SELECT @err = coalesce(nullif(@err, 0), @@error) IF @err <> 0 GOTO Fail

INSERT other_tbl (...)
SELECT @err = @@error IF @err <> 0 GOTO Fail

UPDATE tbl
SET status = 'OK'
WHERE id = @id
SELECT @err = @@error IF @err <> 0 GOTO Fail

COMMIT TRANSACTION
SELECT @err = @@error IF @err <> 0 BREAK

-- Handle next guy
CONTINUE

Fail:
ROLLBACK TRANSACTION
UPDATE tbl
SET status = 'Error'
WHERE id = @id
-- No error-checking here.
END

DEALLOCATE some_cur

RETURN @err
Here, if we get an error while we are handling the row, we don't want to exit the
procedure, but only set an error status for this row and then move on to the next.
The particular UPDATE statement where we set the status to 'Error' has no error
checking, because well, there is not really any action we can take if this
UPDATE fails. If we for some reason cannot set the status, this is not reason to
abort the procedure. As you see, there is a comment that explicitly says that there is
no error checking, so that anyone who reviews the code can see that the omission
of error checking is intentional.
If you look closer, you see that in some cases we abort the procedure in case of an
error even within the loop. We do so for FETCH, because the most likely error
with a FETCH statement is a mismatch between the variables and the column list
in the cursor. In this case, all executions of the FETCH statement will fail, so there
is no reason to hang around. A similar reasoning applies when it comes to
COMMIT TRANSACTION. Errors with COMMIT are so unexpected that, if they
occur, we have very little idea of what is going on, so the best thing is to leave
there and then.
I've also added an assertion to disallow the caller to have an open transaction when
calling error_demo_cursor. If we were to start with an open transaction, and there
is an error with the processing of the fourth element in the cursor, the processing of
the first three will be rolled back. Since the idea is that we want rows committed as
we handle them, there is little reason to embed error_demo_cursor in a
transaction. (If you really need this, you could play with the obscure command SAVE
TRANSACTION, but I'm not going into details here.)
Finally, you can see that I permitted myself to simplify the exit somewhat, by just
saying RETURN @err, since I know that I cannot have any transaction active at this
point that was not active when error_demo_cursor was called.
Error Handling with Triggers
Triggers differ from stored procedures in some aspects. If you are lazy, you can
actually skip error checking in triggers, because as soon as an error occurs in a
trigger, SQL Server aborts the batch. With one exception: if you raise an error
yourself with RAISERROR, the batch is not aborted. However, if you issue a
ROLLBACK TRANSACTION, the batch is aborted when the trigger exits. So
here is how you would do:
IF EXISTS(SELECT *
FROM inserted i
JOIN deleted d ON d.accno = i.accno
WHERE d.acctype <> i.acctype)
BEGIN
ROLLBACK TRANSACTION
RAISERROR('Change of account type not permitted', 16, 1)
RETURN
END
In the philosophical section, I had a discussion on whether you should rollback or
not. These considerations do not apply in a trigger, but in a trigger you should
always roll back when you detect a breach against a business rule. Forget all ideas
about not rolling back someone else's transaction. The reason for this is simple: In
a trigger, @@trancount is always 1, because if there was no transaction in
progress, the INSERT, UPDATE or DELETE statement is its own transaction. But
if you wrap the statement in an explicit transaction, @@trancount is still 1 and not
2. So you don't have any knowledge of whether the caller has a transaction in
progress or not.
Note also a trivial difference to stored procedures: the RETURN statement does
not take parameters in triggers.
If you are really paranoid, there is one check you may want to add to triggers that
call stored procedures. Normally, if you call a stored procedure and it starts a
transaction which it for some reason does not commit or rollback, SQL Server
raises error 266, Transaction count after EXECUTE indicates that a COMMIT or
ROLLBACK TRANSACTION statement is missing. But for some reason, this error
is not raised when the procedure is invoked from a trigger. (It is documented in
Books Online, so it is not a bug.) This could lead to violation of general
requirement #3, don't leave transactions open. Consider this very stupid example:
CREATE TABLE stray_trans_demo (a int NOT NULL)
go
CREATE PROCEDURE start_trans AS BEGIN TRANSACTION
go
CREATE TRIGGER stray_trans_trigger ON stray_trans_demo
FOR INSERT AS
EXEC start_trans
go
INSERT stray_trans_demo (a) VALUES (4711)
SELECT @@error, @@trancount
ROLLBACK TRANSACTION
The output is:
(1 row(s) affected)

----------- -----------
0 1

(1 row(s) affected)
Not an error in sight, but we came out of the INSERT statement with an open
transaction, with all the problems it could lead to.
The remedy for this would be to save @@trancount in the beginning of the trigger,
and then compare this value against @@trancount after call to each stored
procedure, and raise an error and roll back in case there is a difference.
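Here is a sketch of that remedy (the trigger, table and procedure names are mine):

```sql
CREATE TRIGGER careful_trigger ON some_tbl FOR INSERT AS
   DECLARE @trancount_save int
   SELECT @trancount_save = @@trancount

   EXEC some_sp

   -- SQL Server does not raise error 266 in trigger context, so we have
   -- to detect a stray transaction ourselves.
   IF @@trancount <> @trancount_save
   BEGIN
      ROLLBACK TRANSACTION
      RAISERROR ('Transaction count changed by call to some_sp', 16, 1)
   END
```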
Note here that this situation can only occur because of a stray BEGIN
TRANSACTION. The other reason that a procedure may leave you with an orphan
transaction because it was aborted by an error is not an issue here, because in
trigger context, these errors do abort the batch.
Error Handling with User-Defined Functions
If an error occurs in a user-defined function (with the exception of table-valued
inline functions), this is very difficult for the caller to detect. Normally a UDF is
invoked as part of a query. When an error occurs in a UDF, execution of the
function is aborted immediately and so is the query, and unless the error is one that
aborts the batch, execution continues on the next statement but @@error is 0!
If you want it waterproof, I can only see one way to go:
Run with SET XACT_ABORT ON, so that SQL Server aborts the batch on most
errors.
To cover the compilation errors, that SET XACT_ABORT does not affect, use
WITH SCHEMABINDING in all your functions. With this option in effect, SQL
Server requires that all tables and views that the function refers to must exist,
and furthermore you cannot drop them, as long as the function exists.
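A minimal sketch of a schema-bound scalar function (the table dbo.orders is a made-up example; note that SCHEMABINDING requires two-part names):

```sql
CREATE FUNCTION dbo.order_total (@orderid int)
RETURNS int
WITH SCHEMABINDING AS
BEGIN
   -- Thanks to SCHEMABINDING, dbo.orders must exist when the function
   -- is created and cannot be dropped while the function exists, so the
   -- function cannot fail at run-time because a table has gone missing.
   DECLARE @total int
   SELECT @total = SUM(qty) FROM dbo.orders WHERE orderid = @orderid
   RETURN @total
END
```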
If you find this too heavy-duty, what are your choices?
Write simple functions that are simple to test and verify that they absolutely cannot
cause any error. If they use table variables, declare all columns as nullable, so that
you cannot get a NOT NULL error in the function. If the UDF is used in an
INSERT or UPDATE statement, you may get a NOT NULL violation in the target
table instead, but in this case @@error is set. For the same reason, don't use
constraints in your table variables.
If the logic of your UDF is complex, write a stored procedure instead. This makes
the calling code a little clumsier, but multi-valued table functions are mainly
syntactic sugar. I have an article on sharing data between stored procedures that
discusses this more in detail. As for scalar functions, you should be wary to use
them anyway, because they often lead to serialization of the query leading to
extreme performance penalties.
In all fairness, the risk for errors in user-defined functions is smaller than in a
stored procedure, since you are limited in what you can do in a function.
Note that the problems I have mentioned do not apply to table-valued inline
functions. These functions are basically macros that are pasted into the query, so
they are never called in the true sense of the word.
Note: you can invoke a scalar function through EXEC as well. In this case, when
an error occurs in the function, execution continues and you can check @@error
within the UDF. The problem with communicating the error to the caller remains,
as the caller will not see the value of @@error. You would have to define a certain
return value, for instance NULL, to indicate that an error occurred.
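A sketch of that convention (my_udf is a hypothetical scalar function that returns NULL when it detects an error internally):

```sql
DECLARE @ret int
EXEC @ret = dbo.my_udf @par = 1
IF @ret IS NULL
   -- NULL is the agreed-upon signal that the UDF hit an error.
   RAISERROR ('my_udf failed', 16, 1)
```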
Error Handling with Dynamic SQL
If you invoke a batch of dynamic SQL like this:
EXEC(@sql)
SELECT @@error
@@error will hold the status of the last command executed in @sql. This means
that if there was an error in one of the statements in @sql, but other statements
were executed after this statement, @@error will be 0. Thus, there is a potential
risk that an error goes unnoticed.
But this applies only if your dynamic SQL includes several statements. I
would suppose that most batches of dynamic SQL consist of a single SELECT
command, in which case error-detection is not a problem.
Also, the most likely errors from a batch of dynamic SQL are probably syntax
errors. This means that these errors are not taken care of by SET XACT_ABORT
ON. So by all means, check @@error after all invocations of dynamic SQL.
If you use sp_executesql you also have a return value:
exec @err = sp_executesql @sql
select @@error, @err
However, the return value from sp_executesql appears to always be the final value
of @@error, so in practice you only have one value.
If you really run multi-statement batches of dynamic SQL, use sp_executesql
rather than EXEC and use an @err OUTPUT parameter to indicate whether there
were errors in the batch, and check both this parameter and @@error. See my
article on dynamic SQL for an example of using OUTPUT parameters with
sp_executesql.
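Here is a sketch of that pattern (tbl1 and tbl2 are placeholder tables); the error flag travels back in the OUTPUT parameter, and we still check @@error as well:

```sql
DECLARE @sql nvarchar(4000), @err int

SELECT @sql = N'UPDATE tbl1 SET col = 1
                SELECT @err = @@error IF @err <> 0 RETURN
                UPDATE tbl2 SET col = 2
                SELECT @err = @@error'

EXEC sp_executesql @sql, N'@err int OUTPUT', @err OUTPUT
-- Check both the OUTPUT parameter and @@error.
SELECT @err = coalesce(nullif(@err, 0), @@error)
IF @err <> 0 ...
```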
Error Handling in Client Code
Since the capabilities for error handling in T-SQL are limited, and you cannot
suppress errors from being raised, you have to somehow handle T-SQL errors in
your client code too. There are plenty of client libraries you can use to access SQL
Server. Here I mainly cover ADO and ADO .Net, since I would expect these to be
the most commonly used client libraries. I give more attention to ADO, for the
simple reason that ADO is more messy to use.
Note: I'm mainly an SQL developer. Therefore, I am not inclined to make any
distinction between "real" clients and middle-tiers. For me they are all clients. For
the same reason, my experience of ADO and ADO .Net programming is not on par
with my SQL knowledge. Therefore, I will be fairly brief and short on code
samples. For more articles on error handling in .Net languages, there is a good
collection on ErrorBank.com.
I will jump straight to what you have to take care of. If you want to know about
how ADO and ADO .Net handle errors in general, the accompanying background
article on error handling has one section each on ADO and ADO .Net.
What to Do in Case of an Error?
By now, you probably know that when calling a stored procedure from T-SQL, the
recommendation is that your error handling should include a ROLLBACK
TRANSACTION, since the stored procedure could have started a transaction that
it failed to roll back, because of a stray BEGIN TRANSACTION or because of an
error that aborted execution and returned control to the caller.
This applies when you call a stored procedure from a client as well. In your error
handling code, you should have something like this (example for ADO):
If cnn Is Not Nothing Then _
cnn.Execute "IF @@trancount > 0 ROLLBACK TRANSACTION", ,
adExecuteNoRecords
Note: if you have started a transaction on ADO level with the .BeginTrans method
on the Connection object, you should probably use the .RollbackTrans method
rather than issuing an SQL batch.
In ADO .Net, there are ways to tell ADO .Net that you want to disconnect
immediately after a query. I have not explored this, but I suppose that in this
situation it may be difficult to issue a ROLLBACK command. You may think that
if you are disconnected, you don't have a problem, but see the next section
about connection pooling.
Command Timeouts
Command timeout is an error that can occur only on the client level. Most client
libraries from Microsoft (ADO, ODBC and ADO .Net are all among them) have a default
command timeout of 30 seconds, so that if the library has not received any
response from SQL Server within 30 seconds, the client library cancels the SQL
command and raises the error Timeout Expired.
Nevertheless, it is very important that you handle a timeout error as you would
handle any other error from a stored procedure: issue IF @@trancount > 0
ROLLBACK TRANSACTION, (or Connection.RollbackTrans). This is necessary
because, if the procedure started a transaction, neither SQL Server nor the client
library will roll it back. (There is one exception to this in ADO .Net: if you have
associated a transaction with the Connection object, ADO .Net will issue a
rollback.)
You may get the idea that you can skip the rollback, because you will close the
connection anyway. But both ADO and ADO .Net (but not ODBC or DB-Library)
employ connection pooling, which means that when you close a connection, ADO
and ADO .Net keep it open for some 30-60 seconds in case the application would
reconnect. Once you reconnect, ADO and ADO .Net issue sp_reset_connection to
give you a clean connection, which includes rollback of any open transaction. But
at the moment you close the connection, nothing at all happens, so the locks taken
out during the transaction linger, and may block other users.
All client libraries I know of, permit you to change the command timeout. In ADO
there is a .CommandTimeout property on the Connection and Command objects.
from the Connection object. In ADO .Net, CommandTimeout is only on
from the Connection object. In ADO .Net, CommandTimeout is only on
the Command object. My recommendation is to set the timeout to 0 which means
"no timeout", unless you have a clear understanding what you want to use the
timeout for.
Note: several of the issues that I have covered here, are also discussed in KB article
224453, in the section Common Blocking Scenarios and Resolution, point 2.
Why is My Error Not Raised?
Sometimes you see people on the newsgroups having a problem with ADO not
raising an error, despite that the stored procedure they call produces an error
message. Short answer: use SET NOCOUNT ON, but there are a few more
alternatives. To discuss them, I first need to explain what is going on:
Say you have a procedure like this one:
CREATE PROCEDURE some_sp AS
CREATE TABLE #temp (...)
INSERT #temp (...)
UPDATE #temp ...
SELECT ... FROM #temp
Assume that the UPDATE statement generates an error. If you run the procedure
from Query Analyzer, you will see something like:
(19 row(s) affected)

Server: Msg 547, Level 16, State 1, Procedure some_sp, Line 4
UPDATE statement conflicted with COLUMN CHECK ...
The statement has been terminated.
a
-----------
1
2
3

(3 row(s) affected)
But if you invoke the procedure from ADO in what appears to be a normal way,
you will see nothing. No error, no result set. The reason for this is that this
procedure generates two recordsets. The first recordset is a closed recordset that
only carries with it the 19 row(s) affected message for the INSERT statement. It is
not until you retrieve the next recordset, the one for the UPDATE statement, that
the error will be raised. This is the way ADO works.
ADO .Net is different: here you do not get these extra recordsets. And anyway,
most often you use DataAdapter.Fill which does not return until it has retrieved
all data, and if there is an SQL error, it throws an exception.
In ADO, there are several ways of handling this situation, and they can be
combined. (The next three sections apply to ADO only.)
SET NOCOUNT ON
This is the most important method. With SET NOCOUNT ON you instruct SQL
Server to not produce these rows affected messages, and the problem vanishes into
thin air. (Unless you generate a real result set, and then produce an error in your
stored procedure, but this is not a common scenario.) Put this command in your
stored procedure. You can also issue it directly as you connect. Unfortunately,
there is no way to get this into the connection string, so if you connect in many
places, you need to issue SET NOCOUNT ON in many places. And, as if that is
not enough, there are situations when ADO opens a second physical connection to
SQL Server for the same Connection object behind your back. This is an attempt
to be helpful, when you initiate an operation and there is unprocessed data on the
connection, but can be a real source for confusion.
If you don't have any code which actually retrieves the number of affected rows,
then I strongly recommend that you use SET NOCOUNT ON. Not only does it
make error handling easier, but you also gain performance by reducing network traffic.
(You can even make SET NOCOUNT ON the default for your server, by setting the
configuration option user options, but I am hesitant to recommend this. While the rows
affected messages are rarely of use in an application, I find them handy when running ad hoc
statements from Query Analyzer.)
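Applied to the procedure shown earlier, the cure is a single line at the top:

```sql
CREATE PROCEDURE some_sp AS
   SET NOCOUNT ON
   -- No "rows affected" messages now, so ADO does not see a closed
   -- recordset for the INSERT and UPDATE, and an error from the
   -- UPDATE is raised directly.
   CREATE TABLE #temp (...)
   INSERT #temp (...)
   UPDATE #temp ...
   SELECT ... FROM #temp
```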
.NextRecordset
You can continue to retrieve recordsets with Recordset.NextRecordset until you
get Nothing in return. Once you have consumed all the recordsets that come
before the error, the error will be raised.
For me who has programmed a lot with DB-Library this is a natural thing to do.
But more experienced ADO programmers have warned me that this causes
round-trips to the server (which I have not been able to detect), and this does not really
seem to be the ADO way of thinking. I still like the idea from the perspective of
robust programming. What if your stored procedure has a stray result set, because
of a debug SELECT that was accidentally left behind? Then again, I have noticed
that with some server-side cursor types, .NextRecordset does not always seem to
be supported.
adExecuteNoRecords
You can specify this option in the third parameter to the .Execute methods of
the Connection and Command objects. This option instructs ADO to discard any
result sets. Obviously, this is not a good idea if you want data back. But if you
have a procedure which only performs updates to the database, this option gives
some performance improvement by discarding the rows affected messages. And
since there are no recordsets, any errors from the stored procedure are raised
immediately.
Getting the Return Value from a Stored Procedure
When checking for errors from a stored procedure in T-SQL, we noted that it is
important to check both the return status and @@error. When you have called a
stored procedure from a client, this is not equally interesting, because any error
from the procedure should raise an error in the client code, if not always
immediately.
Nevertheless, if you want to get the return value, this is fairly straightforward. In
ADO, you use the .Parameters collection, and use the parameter 0 for the return
value. For Parameter.Direction you specify adParamReturnValue.
If you use a client-side cursor, you can retrieve the return value at any time. But if
you use a server-side cursor, you must first retrieve all recordsets, before you can
retrieve the return value.
The procedure for getting the return value is similar in ADO .Net. If you
use ExecuteReader, you must first retrieve all rows and result sets for the return
value to be available. Beware that the OleDb and Odbc .Net Data Providers, do not
always provide the return value, if there was an errur during the execution of the
procedure.
Acknowledgements and Feedback
Thanks to Thomas Hummel who pointed out a weakness in error_demo_cursor.
If you have suggestions for improvements or corrections on contents, language or
formatting, please mail me at esquel@sommarskog.se. If you have technical
questions that any knowledgeable person could answer, I encourage you to post to
any of the newsgroups microsoft.public.sqlserver.programming
or comp.databases.ms-sqlserver.
For more articles on error handling in .Net, check out ErrorBank.com.
Revision History
2009-11-29 Added a note that there is now at least an unfinished article for SQL
2005 with an introduction that can be useful.
2006-01-21 Minor edits to reflect that SQL 2005 is no longer in beta.
2005-01-23 Added assertion for @@trancount = 0 in error_demo_cursor.
2004-06-12 Added link for further reading on ErrorBank.com.
2004-02-17 Also covering ADO .Net in the client-code section.
2003-12-05 Added brief comment on Yukon to introduction.
Back to my home page.
