T-SQL Error Handling in SQL Server 2005

My first session at Tech Ed Malaysia focused on the new error handling features of T-SQL in SQL Server 2005.

Although it's a cool session, I have to wonder about the practicalities of doing error handling in SQL Server. After all, as a server application, it's usually our job to receive requests from clients and then report back on the results of the request, whether they're happy noises or error messages. And I see error handling as the ability to actually handle an error - as in, detect it, fix the problem and retry the previously error-causing SQL statement. So when would you ever do this?

The only logical place I can think of is deadlocking. Deadlocking is an artifact of SQL Server, not of clients... okay, granted, a poorly written client can generate lots of deadlocks, but no client ever sends the request “can I have a deadlock please”... although for some clients it's a pretty close thing.

Deadlocks, for those out there new to the concept, are not blocking, which is where one SQL statement has to wait while another finishes its business. A deadlock occurs when one connection holds a lock and requires another lock, while a second connection holds that other lock and requires the one held by the first connection. It's unresolvable, so SQL Server detects it and picks a loser, whose transaction fails... the winner goes on to complete its transaction with no awareness of the suffering spread.

With SQL Server 2005, we can actually handle deadlocks - recover from them transparently so that the client applications don't even know they're happening. That doesn't mean we shouldn't continue to avoid deadlocks in the first place. The best way is to avoid the scenario I just described: make sure all clients lock resources in the same order, so that they block each other rather than deadlock. Blocks will naturally resolve on their own (or else time out); deadlocks are uglier. Stored procedures can help by making sure that clients can't grab data themselves - they have to go through the stored procedures. Then you have rules for writing stored procedures, so that the same tables are modified in the same order in every procedure. As long as you make certain you're locking in the same order, you'll block rather than deadlock.
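
To make that concrete, here's a rough sketch of what that rule looks like - the table and procedure names are made up for illustration, not from the session:

-- Both procedures hit tblOrder first and tblOrderDetail second, so two
-- concurrent calls can block each other but can't deadlock on these tables.
CREATE PROCEDURE UpdateOrderStatus @OrderID INT, @Status VARCHAR(20) AS
SET NOCOUNT ON
BEGIN TRANSACTION
  UPDATE tblOrder SET Status = @Status WHERE OrderID = @OrderID
  UPDATE tblOrderDetail SET Status = @Status WHERE OrderID = @OrderID
COMMIT TRANSACTION
GO

CREATE PROCEDURE CancelOrder @OrderID INT AS
SET NOCOUNT ON
BEGIN TRANSACTION
  -- Same order again: tblOrder before tblOrderDetail.
  UPDATE tblOrder SET Status = 'Cancelled' WHERE OrderID = @OrderID
  UPDATE tblOrderDetail SET Status = 'Cancelled' WHERE OrderID = @OrderID
COMMIT TRANSACTION
GO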

But even with those efforts, you'll still get the occasional deadlock, as transactional velocities rise and queries take longer and longer to run. That's where this clever bit of code comes in:

CREATE PROCEDURE DeadLock_Test AS

SET NOCOUNT ON
SET XACT_ABORT ON
SET DEADLOCK_PRIORITY LOW  -- volunteer to be the deadlock victim

DECLARE @Err INTEGER
DECLARE @ErrMsg VARCHAR(200)

RETRY:
BEGIN TRY
  BEGIN TRANSACTION
  UPDATE tblContact SET LastName = 'SP_LastName_1' WHERE ContactID = 1
  UPDATE tblContact SET LastName = 'SP_LastName_2' WHERE ContactID = 2
  COMMIT TRANSACTION
END TRY
BEGIN CATCH
  -- @@ERROR has to be captured on the first statement of the CATCH block
  SET @Err = @@ERROR

  -- With XACT_ABORT ON the failed transaction is doomed, so roll it back
  -- before attempting to write to the ErrorLog table
  IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION

  IF @Err = 1205  -- chosen as the deadlock victim
  BEGIN
    INSERT INTO ErrorLog (ErrID, ErrMsg) VALUES (@Err, 'Deadlock recovery attempt.')
    WAITFOR DELAY '00:00:10'  -- give the other transaction time to finish
    GOTO RETRY
  END

  IF @Err = 2627
    SET @ErrMsg = 'PK Violation.'
  IF @ErrMsg IS NULL
    SET @ErrMsg = 'Other Error.'

  INSERT INTO ErrorLog (ErrID, ErrMsg) VALUES (@Err, @ErrMsg)
END CATCH
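
I haven't shown the table definitions here; if you want to run the procedure yourself, something like this minimal sketch will do (not the exact schema from the demo):

-- Assumed minimal shapes for the demo tables.
CREATE TABLE tblContact (
  ContactID INT PRIMARY KEY,
  LastName  VARCHAR(50)
)

CREATE TABLE ErrorLog (
  ErrID    INT,
  ErrMsg   VARCHAR(200),
  LoggedAt DATETIME DEFAULT GETDATE()
)

INSERT INTO tblContact (ContactID, LastName) VALUES (1, 'LastName_1')
INSERT INTO tblContact (ContactID, LastName) VALUES (2, 'LastName_2')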

So in this sample stored procedure, I use the new TRY and CATCH blocks to provide error handling. The code in the TRY block executes my intended update statements - in the example I'm just updating two contact rows, changing the last names - and the two updates are wrapped in a transaction. If there are no problems, the transaction starts, the two rows are updated, the transaction is committed, and the stored procedure finishes.

Now, to introduce some fun, on a separate connection I do this:

BEGIN TRANSACTION
  UPDATE tblContact SET LastName = 'Direct_LastName_2' WHERE ContactID = 2

And then in another connection I execute the stored procedure. So the first connection is holding a lock on ContactID = 2, and the stored procedure connection has a lock on ContactID = 1 and is waiting for the lock on ContactID = 2. Now I hop back to the first connection and execute the following:

UPDATE tblContact SET LastName = 'Direct_LastName_1' WHERE ContactID = 1

Now I've created a deadlock situation. The first connection has a lock on ContactID = 2, waiting for a lock on ContactID = 1, and the stored procedure connection has a lock on ContactID = 1 and is waiting for a lock on ContactID = 2. SQL Server detects the deadlock and chooses a loser. Since the stored procedure has its deadlock priority set low, it always loses.
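
As an aside - this wasn't part of the demo - SQL Server 2005's new dynamic management views make it easy to watch the blocking from a third connection while the two are stuck:

-- Run from a third connection while the blocking is in place; once the
-- deadlock monitor picks a victim, the stuck request drops off the list.
SELECT session_id, blocking_session_id, wait_type, wait_resource, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0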

When the deadlock error occurs in the stored procedure, the code in the CATCH block executes. The first thing that happens is I retrieve the error code from @@ERROR, roll back the transaction so far, and take a look at the code. If it's error 1205, I know that's a deadlock, so I can do my deadlock recovery. Rolling back releases the lock the stored procedure was holding on ContactID = 1, so the other connection's blocked update completes. Since it's finished, I can commit that transaction, changing the last names to Direct_LastName_1 and Direct_LastName_2.
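
One thing to keep in mind about @@ERROR: it only reflects the most recent statement, which is why I capture it on the very first line of the CATCH block. SQL Server 2005 also adds CATCH-scoped functions that hold their values for the whole block; a tiny sketch of them (not from the session code):

-- The new error functions are only meaningful inside a CATCH block.
BEGIN TRY
  SELECT 1 / 0  -- force an error (divide by zero, error 8134)
END TRY
BEGIN CATCH
  SELECT ERROR_NUMBER()   AS ErrNumber,
         ERROR_SEVERITY() AS ErrSeverity,
         ERROR_STATE()    AS ErrState,
         ERROR_LINE()     AS ErrLine,
         ERROR_MESSAGE()  AS ErrMessage
END CATCH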

Meantime, back at the stored procedure: after rolling back the transaction, it records the fact that it hit a deadlock in an error logging table, then waits a few seconds to give the other side of the deadlock a chance to finish. Once the wait is over, the GOTO statement jumps execution back up to the RETRY label just above the BEGIN TRY block, and the updates are attempted again. Since the other transaction has already finished, all is well - the transaction completes and the names are changed to SP_LastName_1 and SP_LastName_2. Who said being the deadlock loser sucked?
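
One caveat on the GOTO approach: as written, the procedure will keep retrying for as long as it keeps getting picked as the victim. The session code doesn't cap the retries, so treat this as a sketch of one way you might, layered over the same pattern:

-- Sketch only: same retry pattern as DeadLock_Test, capped at three attempts.
DECLARE @Retries INT
SET @Retries = 0

RETRY:
BEGIN TRY
  BEGIN TRANSACTION
  UPDATE tblContact SET LastName = 'SP_LastName_1' WHERE ContactID = 1
  UPDATE tblContact SET LastName = 'SP_LastName_2' WHERE ContactID = 2
  COMMIT TRANSACTION
END TRY
BEGIN CATCH
  IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION
  IF ERROR_NUMBER() = 1205 AND @Retries < 3  -- give up after three deadlocks
  BEGIN
    SET @Retries = @Retries + 1
    WAITFOR DELAY '00:00:10'
    GOTO RETRY
  END
  INSERT INTO ErrorLog (ErrID, ErrMsg)
    VALUES (ERROR_NUMBER(), LEFT(ERROR_MESSAGE(), 200))
END CATCH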

Next session - the SQL Server/Windows interop with Oracle/Linux session... supposed to be a duet with Steve Forte, but his flights got messed up and he has to leave early, so I'm doing this one solo again, just like in Montreal. Right after the interop session I'm reprising my SQL Server Profiler for the Developer session.

Wednesday, September 15, 2004 6:00:29 PM (Pacific Standard Time, UTC-08:00)

Thursday, February 10, 2005 4:49:10 AM (Pacific Standard Time, UTC-08:00)
Does SQL 2005 handle nested transactions differently than 2000? In 2000, the rollback tran will rollback to the *outermost* nested transaction. If the nested transaction semantics are unchanged in 2005, then your code will re-submit the local transaction change, but miss the changes from the enclosing transaction.
Tuesday, January 22, 2008 3:33:01 AM (Pacific Standard Time, UTC-08:00)
I think you worry too much on this subject, but you posted a good question there!
Wednesday, March 19, 2008 1:18:42 PM (Pacific Standard Time, UTC-08:00)
A very excellent article. Very articulately and clearly put. Thank you for explaining this useful technique.
Tuesday, June 17, 2008 8:46:52 PM (Pacific Standard Time, UTC-08:00)
Great stuff Richard; just one problem...

IF @Err = 1205
ROLLBACK TRANSACTION
INSERT INTO ErrorLog (ErrID, ErrMsg) VALUES (@Err, 'Deadlock recovery attempt.')
WAITFOR DELAY '00:00:10'
GOTO RETRY

You are missing BEGIN...END...

ROLLBACK TRANSACTION

IF @Err = 1205
BEGIN
INSERT INTO ErrorLog (ErrID, ErrMsg) VALUES (@Err, 'Deadlock recovery attempt.')
WAITFOR DELAY '00:00:10'
GOTO RETRY

END

All the best, Tony.