Wednesday, March 28, 2012

Optimizing a large update

Hi,
I'm running a script that updates the same column on every row in a huge
table. The table is being updated with data from another table in another
database, but they are in the same SQL instance.
The log and data files for the database being read from are on an internal
drive (D:\). The log and data files for the database being updated are on an
external SAN (X:\).
Would any of the following speed the process up significantly?
1) Remove the indexes from the table being updated?
2) Set the recovery model to 'simple' on the database being updated?
3) Put the log and data files of the database being updated on different
physical disks (not necessary, I believe, if I perform number 2)?
thanks,
Kevin|||Kevin,
1. If the column that you are updating has an index on it, you might consider
dropping that index until after the update and recreating it afterwards (see
the sketch after this list).
2. This won't change the number of records logged or the amount of data logged
during an UPDATE; ordinary updates are fully logged under every recovery
model. And if you do the whole update in one transaction, the log has to hold
all of it either way.
3. It's always best to place the log and data files on separate drive arrays.
It makes no difference where the log for the database being read from lives,
since the log is not used for reads.
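For point 1, a minimal sketch of what I mean, assuming a nonclustered index on
the column being updated. All of the names here (BigTable, TargetCol,
IX_BigTable_TargetCol) are placeholders, not names from your post:

-- Drop the index on the column being updated before running the big UPDATE.
-- (Old-style DROP INDEX syntax; still accepted on current versions.)
DROP INDEX BigTable.IX_BigTable_TargetCol

-- ... run the UPDATE here ...

-- Recreate the index once the update has finished.
CREATE NONCLUSTERED INDEX IX_BigTable_TargetCol ON dbo.BigTable (TargetCol)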
I would recommend you attempt the updates in smaller batches. This keeps the
log in check and usually results in a faster overall operation. You can
usually achieve this with a loop of some sort, along the lines of the sketch
below.
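Here is a minimal sketch of that kind of loop, batching on a key range.
Everything in it is a placeholder (TargetDB, SourceDB, dbo.BigTable,
dbo.SourceTable, KeyCol, TargetCol, SourceCol), and it assumes KeyCol is an
integer key present in both tables:

DECLARE @BatchSize int, @LastKey int, @MaxKey int
SET @BatchSize = 10000
SET @LastKey   = 0
SELECT @MaxKey = MAX(KeyCol) FROM TargetDB.dbo.BigTable

WHILE @LastKey < @MaxKey
BEGIN
    -- Each pass updates one slice of the key range and commits on its own,
    -- so the log never has to hold the whole table's worth of changes.
    UPDATE t
    SET    t.TargetCol = s.SourceCol
    FROM   TargetDB.dbo.BigTable    AS t
    JOIN   SourceDB.dbo.SourceTable AS s
           ON s.KeyCol = t.KeyCol
    WHERE  t.KeyCol >  @LastKey
      AND  t.KeyCol <= @LastKey + @BatchSize

    SET @LastKey = @LastKey + @BatchSize
END

Under the simple recovery model the log space from each committed batch can be
reused; under full recovery you would take log backups between batches to get
the same effect.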
Andrew J. Kelly
SQL Server MVP
"Kevin" <anonymous@.discussions.microsoft.com> wrote in message
news:715A5D36-5F4F-43D5-8B5B-854B711F3A9A@.microsoft.com...
|||Thanks for your response.
Does the frequency of log checkpoints significantly slow down a long-running
update statement? Should I create a transaction log large enough that it will
not need to grow dynamically, and set the recovery interval to a high value so
that the frequency of checkpoints is reduced?
thanks,
Kevin|||Kevin,
First off, you should always have the log file larger than it needs to be for
any given operation; any time it has to grow, it will impact performance to
some degree. As for checkpoints, that depends. Checkpoints certainly can add
overhead to the system, especially disk I/O and CPU, but whether they have a
large negative effect in your situation is hard to say from here. If you set a
large recovery interval, it will most definitely affect the other users
adversely when a checkpoint finally does happen, as there will be a lot more
work to do at one time. The key in your situation is still to do the updates
in smaller batches, and to size the log up front, along the lines of the
sketch below.
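To make the log sizing concrete, a made-up example: assuming the database
being updated is called TargetDB and the logical name of its log file is
TargetDB_log (sp_helpfile will show the real one), you would grow it once, up
front, rather than letting it autogrow in small increments in the middle of
the update. The size below is only an example; pick one that fits your data
volume.

ALTER DATABASE TargetDB
MODIFY FILE (NAME = TargetDB_log, SIZE = 20480MB)

The recovery interval, on the other hand, is a server-wide setting, so I would
leave it alone for the reason above.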
Andrew J. Kelly
SQL Server MVP
"Kevin" <anonymous@.discussions.microsoft.com> wrote in message
news:4A65CF81-2DD6-4F53-AA7A-E7B00FF50CE2@.microsoft.com...
