C# - Retryable SqlBulkCopy for SQL Server 2008 R2
I come from a database background and am new to the .NET stuff, so please bear with me if this question sounds silly.
I am using SqlBulkCopy in my code to transfer data from one SQL Server to another. It is failing due to network issues. To avoid that, I am planning two things:
- Decrease the batch size (from 5000 to 1000) and increase the timeout (from 3 min. to 1 min.)
- Implement retry logic
My questions:
- What is the best way to implement the retry, i.e., at the table level or at the batch level (if that is at all possible)?
- I found a framework for resiliency for SQL Azure here: https://msdn.microsoft.com/en-us/library/hh680934(v=pandp.50).aspx Is there anything similar for SQL Server 2008 R2?
Sample code I am using:
```csharp
private void BulkCopyTable(string schemaName, string tableName)
{
    using (var reader = srcConnection.ExecuteReader(
        $"select * from [{sourceDbName}].[{schemaName}].[{tableName}]"))
    {
        const SqlBulkCopyOptions bulkCopyOptions =
            SqlBulkCopyOptions.TableLock |
            SqlBulkCopyOptions.FireTriggers |
            SqlBulkCopyOptions.KeepNulls |
            SqlBulkCopyOptions.KeepIdentity;
        using (var bcp = new SqlBulkCopy(dstConnection.ConnectionString, bulkCopyOptions))
        {
            const int threeMinutes = 60 * 3;
            bcp.BulkCopyTimeout = threeMinutes; // timeout for a single batch
            bcp.BatchSize = 5000;
            bcp.DestinationTableName = $"[{destinationDb}].[{schemaName}].[{tableName}]";
            bcp.EnableStreaming = true;
            foreach (var col in table.Columns.Cast<Column>().Where(c => !c.Computed))
            {
                bcp.ColumnMappings.Add(col.Name, col.Name);
            }
            bcp.WriteToServer(reader);
        }
    }
}
```
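As a minimal sketch of the retry logic the question plans to add, a generic wrapper like the following could be used (the helper name, attempt count, and backoff delays are illustrative assumptions, not from the question; real code would filter on transient SqlException error numbers rather than catching every exception):

```csharp
using System;
using System.Threading;

// Hypothetical generic retry helper with exponential backoff.
public static class Retry
{
    public static void Execute(Action action, int maxAttempts = 3,
                               int initialDelayMs = 1000)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Back off before the next attempt: delay doubles each time.
                Thread.Sleep(initialDelayMs * (1 << (attempt - 1)));
            }
        }
    }
}
```

The exception filter (`when`) means the last failed attempt rethrows to the caller instead of being swallowed.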
A simple approach is to:
1. Implement the batches yourself. This results in a minor inefficiency, because SqlBulkCopy needs to query metadata for each WriteToServer call, so don't make the batches too small. Experiment.
2. Insert into a temporary table (not a #temp table, but a durable one, so that you can lose the connection and still continue).
3. Then, as the final step, execute an INSERT...SELECT to move the rows from the temp table to the real table.
This dance splits the work into retryable batches but acts as if it was one transaction.
If you don't need atomicity you can leave it at step (1).
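The three steps above could be sketched as follows. Everything concrete here is an assumption for illustration: the staging table `dbo.Staging_Orders` and real table `dbo.Orders` (assumed to already exist with matching schemas), the chunk size, and the retry policy.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class RetryableBulkCopy
{
    public static void Copy(IDataReader source, string destConnectionString)
    {
        const int rowsPerChunk = 1000; // illustrative; experiment per the answer

        // Build an empty DataTable matching the source schema.
        var buffer = new DataTable();
        for (int i = 0; i < source.FieldCount; i++)
            buffer.Columns.Add(source.GetName(i), source.GetFieldType(i));

        // Steps 1+2: buffer rows into chunks and bulk-copy each chunk into
        // the durable staging table. Because the chunk is held in memory,
        // a failed WriteToServer can be retried without re-reading the source.
        while (source.Read())
        {
            var row = buffer.NewRow();
            for (int i = 0; i < source.FieldCount; i++)
                row[i] = source.GetValue(i);
            buffer.Rows.Add(row);

            if (buffer.Rows.Count == rowsPerChunk)
            {
                WriteWithRetry(buffer, destConnectionString);
                buffer.Clear();
            }
        }
        if (buffer.Rows.Count > 0)
            WriteWithRetry(buffer, destConnectionString);

        // Step 3: one INSERT...SELECT moves everything from the staging
        // table into the real table; a single statement is atomic.
        using (var conn = new SqlConnection(destConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.Orders SELECT * FROM dbo.Staging_Orders; " +
            "TRUNCATE TABLE dbo.Staging_Orders;", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    private static void WriteWithRetry(DataTable chunk, string connectionString)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (var bcp = new SqlBulkCopy(connectionString))
                {
                    bcp.DestinationTableName = "dbo.Staging_Orders";
                    bcp.WriteToServer(chunk);
                }
                return;
            }
            catch (SqlException) when (attempt < 3)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
            }
        }
    }
}
```

One caveat on the retry: if a WriteToServer call fails partway through, some rows of that chunk may already be committed to the staging table, so production code would also tag each chunk with a batch id and delete that batch before retrying it, to avoid duplicates.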