Saturday, February 25, 2012

Problem with deadlocks

We are experiencing some problems with deadlocks in our system. We call
SQL Server 2000 from a .NET application layer.
The flow is:
Create transaction object in .Net
FOR each table in a list (e.g. 11 tables in total)
DELETE FROM Table WHERE SomeID IN (x,y) // SomeID is foreign key
(done in stored procedure with no cursors)
// END FOR
FOR each table in a list (same list as before, but in the opposite sequence)
BULK INSERT MyTable ..
(SQL-statement created in .Net and sent to Sql svr)
// END FOR
transactionObject.Commit();
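In T-SQL terms, what the .NET layer ends up running inside that single transaction is roughly the sketch below. The table names, the SomeID values and the file paths are invented for illustration; the real job covers 11 tables.

BEGIN TRANSACTION

-- First pass: delete existing rows, child tables before their parent tables
DELETE FROM dbo.ChildTable WHERE SomeID IN (1, 2)
DELETE FROM dbo.ParentTable WHERE SomeID IN (1, 2)
-- ... one DELETE per table, 11 in total ...

-- Second pass: reload from the external files in the opposite sequence
BULK INSERT dbo.ParentTable FROM 'C:\load\ParentTable.dat'
BULK INSERT dbo.ChildTable FROM 'C:\load\ChildTable.dat'
-- ... one BULK INSERT per table ...

COMMIT TRANSACTION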
We receive data from an external system once every week. The external
system cannot give us delta information, i.e. only the new data, so they send
us their entire data set. That is why we delete all the data in
our system and insert it again.
The problem occurs at BULK INSERT, but oddly not always. The database in
question is offline to the rest of the system, so nothing else takes place on
the server apart from this job. The transaction object is sent to the
server using the Microsoft patterns & practices Enterprise Library data
access block to enable us to roll back the transaction if one of the statements fails.
We are rather at a loss as to how to solve this - do you guys have
any ideas for a solution?
Thanks,
:o)
Jesper Stocholm
http://stocholm.dk
Is your kiosk on the net? See http://ekiosk.dk|||Have you followed the basic recommendations on deadlocks in Books Online?
Start here, if you haven't:
http://msdn.microsoft.com/library/d...a_3hdf.asp
Tracing deadlocks might also be much help:
http://msdn.microsoft.com/library/d...tabse_5xrn.asp
ML|||I have two suggestions. (1) change the mechanism so that the deletes are
committed prior to starting the bulk inserts, or (2) generate a temporary
stored procedure or dynamic SQL to execute the entire job in a single batch,
complete with error handling, etc.
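As a rough illustration of suggestion (2) on SQL Server 2000, where error handling means checking @@ERROR after each statement (there is no TRY/CATCH), something along these lines could work. The procedure, table and file names are invented.

CREATE PROCEDURE dbo.ReloadExternalData
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @err int

    BEGIN TRANSACTION

    -- Deletes, child tables first (repeat the pattern for all 11 tables)
    DELETE FROM dbo.ChildTable WHERE SomeID IN (1, 2)
    SET @err = @@ERROR
    IF @err <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN @err
    END

    -- Bulk inserts in the opposite sequence (repeat for all 11 tables)
    BULK INSERT dbo.ParentTable FROM 'C:\load\ParentTable.dat'
    SET @err = @@ERROR
    IF @err <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN @err
    END

    COMMIT TRANSACTION
    RETURN 0
END

Running the whole job as one server-side batch also keeps the time the transaction holds its locks down to the server-side work alone, rather than including round trips to the application layer.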
You could save off the rows you're deleting into a separate database, and
reinsert them if a failure occurs during the bulk inserts.
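One hedged way to keep those rows around on SQL Server 2000, assuming a scratch database exists (here called Staging, an invented name):

-- SELECT ... INTO creates the backup table and copies the rows about to be deleted
SELECT * INTO Staging.dbo.MyTable_backup
FROM dbo.MyTable
WHERE SomeID IN (1, 2)

-- If a later BULK INSERT fails, the saved rows can be put back:
-- INSERT INTO dbo.MyTable SELECT * FROM Staging.dbo.MyTable_backup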
My preference is to migrate transaction processing to the data tier whenever
possible. It's a lot easier to modify a stored procedure than to recompile
and redeploy a middle-tier component or, God forbid, a client application.
"Jesper Stocholm" <j@.stocholm.invalid> wrote in message
news:Xns97105AA34EC40stocholmdk@.207.46.248.16...
> [original post snipped]|||"Brian Selzer" <brian@.selzer-software.com> wrote in
news:epOZG6w6FHA.3416@.TK2MSFTNGP15.phx.gbl:

> I have two suggestions. (1) change the mechanism so that the deletes
> are committed prior to starting the bulk inserts, or (2) generate a
> temporary stored procedure or dynamic SQL to execute the entire job in
> a single batch, complete with error handling, etc.
I will look into this, thanks.

> My preference is to migrate transaction processing to the data tier
> whenever possible. It's a lot easier to modify a stored procedure
> than to recompile and redeploy a middle-tier component or God forbid,
> a client application.
Since my first post we have moved the SQL into a stored procedure in the
database that builds some dynamic SQL and executes it, so it is no longer
in the application layer. The problem, however, did not go away.
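For illustration only - this is not the poster's actual code - a skeleton of such a procedure on SQL Server 2000 might look like this, with an invented parameter carrying the ID list:

CREATE PROCEDURE dbo.DeleteRowsForIDs
    @IDList varchar(500)  -- e.g. '1,2'; hypothetical parameter, not validated here
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @sql nvarchar(4000)

    -- Build the DELETE dynamically and execute it on the server
    SET @sql = N'DELETE FROM dbo.ChildTable WHERE SomeID IN (' + @IDList + N')'
    EXEC sp_executesql @sql
END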
Also, we cannot move it all to the database layer. The entire process is an
ETL process, where the data is (heavily) transformed before being loaded
into the (other) database. It is not possible to put some of the key parts
in the database layer - for various reasons we need to do the
transformation in the application layer.
I talked to one of our DBAs, and he suggested that another benefit of
moving the SQL to a stored procedure would be that it would complete the
current batch after each "GO" statement at the end of executing the stored
procedure. We moved the SQL code and the problem seems to have disappeared
... for now, at least. We are keeping our fingers crossed. If this works, we
are happy ... a bit nervous that we did not find the cause of the error ...
but happy, nonetheless.
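One note on the "GO" remark, as far as I know: GO is not a T-SQL statement but a batch separator recognized by client tools such as osql and Query Analyzer, so it cannot sit inside a stored procedure; a script that uses it is simply sent to the server as separate batches. A small sketch with invented names:

-- Each GO ends a batch; with no explicit BEGIN TRANSACTION each statement autocommits
DELETE FROM dbo.ChildTable WHERE SomeID IN (1, 2)
GO
BULK INSERT dbo.ChildTable FROM 'C:\load\ChildTable.dat'
GO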
:o)
Jesper Stocholm
http://stocholm.dk
