Showing posts with label size.

Wednesday, March 28, 2012

Log shipping seems to fail because of database size

Hi,
I am having a problem setting up log shipping for a large, 45GB,
database. I had log shipping working for this database. Then one
evening it failed because it could not apply a log file. I cleaned up
by removing log shipping from the maintenance plan, deleting the plans,
jobs, suspect destination database, log files and entries in the msdb
log shipping tables. When I try to re-establish log shipping for the
same database I get the error "unable to copy the initialization file
to the secondary server". I can see the initial backup file being
created on the source server and being copied to the destination server
but at some point it fails. What is strange is that I can still get log
shipping working for a small test database in exactly the same
environment and using the same steps.
Most of the posts on this error I have read suggest that permissions
are at the root. However, I cannot see how permissions can be causing
my problem as I can get it to work for a small database. It seems as
though size is the issue! Has anyone else experienced such a problem?
Any help would be very gratefully received as I've been struggling with
this one for a few days now.
Thanks in advance,
Gavin
Have you thought about integrating backup compression software with log
shipping - like LiteSpeed, or RedGate's SQL Backup? These should be able to
get you around this size problem. I also find I have more control when I
roll my own log shipping solution as opposed to using the log shipping
wizard.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
<geetastic@.hotmail.com> wrote in message
news:1169489699.082941.198830@.l53g2000cwa.googlegroups.com...
> Hi,
> I am having a problem setting up log shipping for a large, 45GB,
> database. I had log shipping working for this database. Then one
> evening it failed because it could not apply a log file. I cleaned up
> by removing log shipping from the maintenance plan, deleting the plans,
> jobs, suspect destination database, log files and entries in the msdb
> log shipping tables. When I try and re-establish log shipping for the
> same database I get the error "unable to copy the initialization file
> to the secondary server". I can see the initial backup file being
> created on the source server and being copied to the destination server
> but at some point it fails. What is strange is that I can still get log
> shipping working for a small test database in exactly the same
> environment and using the same steps.
> Most of the posts on this error I have read suggest that permissions
> are at the root. However, I cannot see how permissions can be causing
> my problem as I can get it to work for a small database. It seems as
> though size is the issue! Has anyone else experienced such a problem?
> Any help would be very gratefully received as I've been struggling with
> this one for a few days now.
> Thanks in advance,
> Gavin
>
|||Hi Hilary,
Thank you for this. Yes, I have thought of using these tools, but have
been put off by having already spent a fortune on 2 x 4-CPU licences for
SQL Server 2000 Enterprise, and by having had this database log
shipping happily for over 2 years in another hosting environment. If we
have to use some 3rd party software like this then we have to, but it
just seems wrong.
Which of these 2 have you used? Have you found them to be reliable?
Cheers,
Gavin
Hilary Cotter wrote:
> have you thought about integrating backup compression software with log
> shipping - like LiteSpeed, or RedGate's SQL Backup? These should be able to
> get you around this size problem. I find also I have more control when I
> roll my own log shipping solution as opposed to using the log shipping
> wizard.
> --
|||I've used LiteSpeed and it has hooks built into it that will work with the
log shipping wizard.
SQL Backup from RedGate is a highly competitive product - cheaper too.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Geetastic" <geetastic@.hotmail.com> wrote in message
news:1169541651.197139.116160@.s48g2000cws.googlegroups.com...
> Hi Hilary,
> Thank you for this. Yes I have thought of using these tools but have
> been put off by having already spent a fortune on 2 * 4CPU licences for
> SQL Server 2000 Enterprise and having already had this database log
> shipping happily for over 2 years in another hosting environment. If we
> have to use some 3rd party software like this then we have to but it
> just seems wrong.
> Which of these 2 have you used? Have you found them to be reliable?
> Cheers,
> Gavin
> Hilary Cotter wrote:
>
|||Fixed the problem by rebooting both servers! I didn't want to do this
straight away because this is not something I want to have to do in
production. However, it works.
Thanks for your help.
On Jan 23, 1:11 pm, "Hilary Cotter" <hilary.cot...@.gmail.com> wrote:
> I've used LiteSpeed and it has hooks built into it that will work with the
> log shipping wizard.
> SQL Backup from RedGate is a highly competitive product - cheaper too.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "Geetastic" <geetas...@.hotmail.com> wrote in message
> news:1169541651.197139.116160@.s48g2000cws.googlegroups.com...

Monday, March 19, 2012

log shipping initial backup restore

hi,
SQL 2000 Enterprise Edition. Planning for 15-minute log shipping.
The database size is around 6 GB. I took the full backup and tried to copy
it to the secondary server over the network, but it is taking too much time
(roughly 0.5 MB/sec). Meanwhile I stopped the transaction log backups,
because the full backup has not yet been restored on the secondary server.
My questions:
1) Can I restore the log backups after the full backup restore on the
secondary to bring it into sync, and then start the log shipping process?
Or is a peer-to-peer connection needed between primary and secondary?
2) After the reindex process, the log file size will be similar to the db
size, so transferring the log file over the network will take a long time
and log shipping may break because of the copy time. How can I handle this?
thanks
Hello
I had a quick question - Did you start the file copy process for your
Complete backup through the Log Shipping Setup wizard?
Assuming that you did not, then regarding your first question :
You can actually perform the transaction log backups on your primary server
while the complete backup is copying over.
1. Once the complete backup is finished copying, start the restore of this
complete backup in NORECOVERY mode and in the meantime start copying over
the transaction log backups.
2. When the restore of the complete backup is complete, start applying the
transaction logs with NORECOVERY option.
3. At some point, stop performing transaction log backups on your primary
server and complete copying/restoring the transaction log backups on the
secondary server.
4. Once the secondary database is ready, start the Log Shipping Setup
Wizard (through the Maintenance Plan) and on the Log Shipping secondary
dialog, select the Existing database option and choose the NORECOVERY
database that you created in the earlier steps. Selecting this option will prevent
the Wizard from actually initiating a copy/load of the complete backup
during setup.
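For reference, the manual sequence in steps 1 and 2 might look something like this in Transact-SQL (the database name and file paths below are placeholders, to be adjusted for your environment):

```sql
-- 1. Restore the copied full backup WITHOUT recovering the database,
--    so that transaction log backups can still be applied:
RESTORE DATABASE MyDB
    FROM DISK = N'D:\LogShip\MyDB_full.bak'
    WITH NORECOVERY;

-- 2. Apply each transaction log backup in sequence, also WITH NORECOVERY:
RESTORE LOG MyDB
    FROM DISK = N'D:\LogShip\MyDB_20040901_1200.trn'
    WITH NORECOVERY;
-- ...repeat for each subsequent log backup, oldest first...
```

Leave the database in the NORECOVERY state so that, in step 4, the wizard can continue applying shipped logs to it.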
Regarding your second question:
Since rebuilding an index is a fully logged operation, there is no way for
you to avoid the large log backup it produces. The only alternative is to
reinitialize log shipping, which is going to be even more time consuming.
Think of this process as "keeping your secondary database in complete sync
with your primary". Assume for a second that, as soon as you've finished
restoring the large transaction log backup (performed after the index
rebuild operation) to the secondary standby database, your primary database
goes offline. At this point, since you have to bring the secondary online,
you would expect it to perform just as fast as your primary (given all
other factors, including hardware etc., are the same between the 2 machines).
If there were a way for you to avoid transferring the info related to the
index rebuild to the secondary, the performance on your secondary would most
likely be very poor, since no index updates would have been performed on it.
Let me know if you have further questions.
Thank you for using Microsoft newsgroups.
Sincerely
Pankaj Agarwal
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.
|||pankaj,
Thanks for your suggestion.
I took the full backup and copied it over the network to the standby server
before starting the log shipping setup, but it was very slow. For setting
up log shipping in the production environment, I don't want any sync
problems like copy delays, or log restores lagging after the reindex
process, so I want to know whether I missed something.
I am thinking of adding one more NIC to my production server, connected to
the standby, to improve the copy process.
"Pankaj Agarwal [MSFT]" <pankaja@.online.microsoft.com> wrote in message
news:k0XK9CVkEHA.2656@.cpmsftngxa10.phx.gbl...
> Hello
> I had a quick question - Did you start the file copy process for your
> Complete backup through the Log Shipping Setup wizard?
> Assuming that you did not, then regarding your first question :
> You can actually perform the transaction log backups on your primary
> server while the complete backup is copying over.
> 1. Once the complete backup is finished copying, start the restore of this
> complete backup in NORECOVERY mode and in the meantime start copying over
> the transaction log backups.
> 2. When the restore of the complete backup is complete, start applying the
> transaction logs with NORECOVERY option.
> 3. At some point, stop performing transaction log backups on your primary
> server and complete copying/restoring the transaction log backups on the
> secondary server.
> 4. Once the secondary database is ready, start the Log Shipping Setup
> Wizard (through the Maintenance Plan) and on the Log Shipping secondary
> dialog, select Existing database option and select the NORECOVERY database
> that you have created in steps earlier. Selecting this option will prevent
> the Wizard from actually initiating a copy/load of the complete backup
> during setup.
> Regarding your second question:
> Since rebuilding an index is a logged operation, there is no way for you
> to get around this problem. The only way you can avoid this is by
> reinitializing log shipping which is going to be more time consuming.
> Think of this process as "keeping your secondary database in complete
> sync with your primary". Assume for a second that as soon as you've
> finished restoring the large transaction log backup (performed after the
> index rebuild operation) to the secondary standby database, your primary
> database goes offline. At this point since you have to bring the
> secondary online, you would expect it to perform just as fast as your
> primary (given all other factors including hardware etc are the same
> between the 2 machines). Well if there was a way for you to avoid
> transferring the info related to index rebuild to the secondary, the
> performance on your secondary most likely would be very slow since there
> were no index updates performed on it.
> Let me know if you have further questions.
> Thank you for using Microsoft newsgroups.
> Sincerely
> Pankaj Agarwal
> Microsoft Corporation
> This posting is provided AS IS with no warranties, and confers no rights.
>
|||Unfortunately there is no way to get around the large transaction log when
you are performing an index rebuild. If you feel that adding another NIC
will provide better bandwidth and hence a faster copy, then that may be
something to consider.
Thank you for using Microsoft newsgroups.
Sincerely
Pankaj Agarwal
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.


Monday, March 12, 2012

Log shipping goes out of sync

Hi
I have two SQL 2000 boxes set up to log ship. The box being shipped to
is also the monitor.
The size of the db being shipped is around 110GB - so the initial log
ship first creates and copies across the entire db (takes a while),
then begins the transaction logs.
The problem is that the log ship goes out of sync almost immediately -
from testing I've managed to get the first shipped transaction log to
load, and sometimes the second, but never further as it gets out of
sync.
Things I've tried:
Changing the schedule from the default 15 minutes to 2 hours (for copy
and load)
Ensuring the log ship process doesn't clash with a routine backup
Any suggestions?
Thanks
Toby
On Feb 19, 7:40 am, "Toby" <tjbeaum...@.gmail.com> wrote:
> Hi
> I have two SQL 2000 boxes setup to log ship. The box being shipped to
> is also the monitor
> The size of the db being shipped is around 110GB - so the initial log
> ship first creates and copies across the entire db (takes a while)
> then begins the transaction logs.
> The problem is that the log ship goes out of sync also immediately -
> from testing I've managed to get the first shipped transaction log to
> load, and sometimes the second, but never further as it gets out of
> sync.
> Things I've tried:
> Changing the schedule from the default 15 minutes to 2 hours (for copy
> and load)
> Ensuring the log ship process doesn't clash with a routine backup
> Any suggestions?
> Thanks
> Toby
Please explain this further:
"Ensuring the log ship process doesn't clash with a routine backup"
Are you running transaction log backups IN ADDITION to the log
shipping process? This will break the log shipping chain. Log
shipping works by taking a backup of the transaction log and restoring
that backup onto another database. If you run your own independent
log backup, you're advancing the LSN pointer, throwing the log
shipping backups out of sync.
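One way to check for a second process taking log backups is to query the backup history on the primary. `msdb.dbo.backupset` and `msdb.dbo.backupmediafamily` are standard SQL Server system tables; the database name below is a placeholder:

```sql
-- List every transaction log backup taken for the database, with the
-- device it was written to. A backup written to an unexpected location
-- reveals a second job that is truncating the log and advancing the LSN
-- chain. In an unbroken chain, each backup's first_lsn matches the
-- previous backup's last_lsn.
SELECT bs.backup_start_date,
       bs.first_lsn,
       bs.last_lsn,
       bmf.physical_device_name
FROM   msdb.dbo.backupset bs
JOIN   msdb.dbo.backupmediafamily bmf
       ON bmf.media_set_id = bs.media_set_id
WHERE  bs.database_name = 'MyDB'   -- placeholder
  AND  bs.type = 'L'               -- 'L' = transaction log backup
ORDER BY bs.backup_start_date;
```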

|||Hi
In reply to your question "Are you running transaction log backups IN
ADDITION to the log shipping process?" - the answer is yes I am, so
clearly here lies the problem.
I'm running a daily log backup, truncate and shrink - which I realise
now is going to cause issues with the log shipping. However, and
forgive me if this appears trivial, the reason for running a log
backup, truncate and shrink is to prevent the log file getting too
big, as it currently grows at the rate of 20-30GB a day. If I rely on
the log shipping process only, will this provide adequate truncation
of the log?
Thanks|||On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> Hi
> In reply to your question "Are you running transaction log backups IN
> ADDITION to the log
>
> I'm running a daily log backup, truncate and shrink - which I realise
> now is going to cause issues with the log shipping. However, and
> forgive me if this appears trivial, but the reason for run a log
> backup, truncate and shrink is to prevent the log file getting too
> big, as it currently grows at the rate of 20-30gb a day. If I rely on
> the log shipping process only, will this provide adequate truncating
> of the log?
> Thanks
Couple of key things here:
1. Log backups truncate the log - the more frequently that you run a
log backup, the quicker committed transactions will get truncated, and
the less likely your log is to grow. Note that LARGE transactions can
still cause growth, because they can't be truncated until fully
committed.
2. You are hurting your overall performance in one, possibly two,
ways. By repeatedly shrinking the log file, you are forcing SQL
Server to grow it again as needed, which introduces additional
overhead, possibly during a busy period. Also, repeatedly growing/
shrinking/growing/shrinking will lead to disk fragmentation, which
will also ultimately hurt your performance.
My advice would be to not use the log-shipping wizard that is built-
in. You can kill two birds with one stone by writing your own backup
routines. Create a log backup job that runs every hour (we do 5-
minute intervals here), have that job create backup files that contain
a date/time stamp, place these files into some folder, let's say
"FolderX". Write a log-shipping routine that monitors FolderX for new
files. When a new file is detected, your log shipping routine should
restore it and record the file name in a logging table. It then goes
back to monitoring FolderX for new files that aren't in the logging
table.
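As a rough illustration of the routine described above, the "which files are new?" logic might be sketched as follows. This is a Python sketch only; the folder name, file names, and the restore call are placeholders, and in practice the restore step would shell out to osql/sqlcmd to run RESTORE LOG, with the logging table living in SQL Server rather than a Python list:

```python
import os

def files_to_restore(folder_listing, already_restored):
    """Given the file names found in FolderX and the names already recorded
    in the logging table, return the new log backups in restore order.

    Relies on the date/time stamp embedded in each file name (e.g.
    MyDB_20070222_2100.trn) so that a plain string sort gives the correct
    chronological order.
    """
    new_files = set(folder_listing) - set(already_restored)
    return sorted(new_files)

def run_once(folder, restored_log, restore_fn):
    """One polling pass: restore each new file, then record it."""
    for name in files_to_restore(os.listdir(folder), restored_log):
        restore_fn(os.path.join(folder, name))  # e.g. shell out to run RESTORE LOG
        restored_log.append(name)               # stands in for the logging table
```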
It's really not as complicated as it seems, and you'll solve all of
these problems...|||On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> Couple of key things here:
> 1. Log backups truncate the log - the more frequently that you run a
> log backup, the quicker committed transactions will get truncated, and
> the less likely your log is to grow. Note that LARGE transactions can
> still cause growth, because they can't be truncated until fully
> committed.
> 2. You are hurting your overall performance in one, possibly two,
> ways. By repeatedly shrinking the log file, you are forcing SQL
> Server to grow it again as needed, which introduces additional
> overhead, possibly during a busy period. Also, repeatedly growing/
> shrinking/growing/shrinking will lead to disk fragmentation, which
> will also ultimately hurt your performance.
> My advice would be to not use the log-shipping wizard that is built-
> in. You can kill two birds with one stone by writing your own backup
> routines. Create a log backup job that runs every hour (we do 5-
> minute intervals here), have that job create backup files that contain
> a date/time stamp, place these files into some folder, let's say
> "FolderX". Write a log-shipping routine that monitors FolderX for new
> files. When a new file is detected, your log shipping routine should
> restore it and record the file name in a logging table. It then goes
> back to monitoring FolderX for new files that aren't in the logging
> table.
> It's really not as complicated as it seems, and you'll solve all of
> these problems...
OK that's great thanks. I really don't know why I had the task shrink
the log in the first place, given the rate it increases.|||On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> Couple of key things here:
> 1. Log backups truncate the log - the more frequently that you run a
> log backup, the quicker committed transactions will get truncated, and
> the less likely your log is to grow. Note that LARGE transactions can
> still cause growth, because they can't be truncated until fully
> committed.
> 2. You are hurting your overall performance in one, possibly two,
> ways. By repeatedly shrinking the log file, you are forcing SQL
> Server to grow it again as needed, which introduces additional
> overhead, possibly during a busy period. Also, repeatedly growing/
> shrinking/growing/shrinking will lead to disk fragmentation, which
> will also ultimately hurt your performance.
> My advice would be to not use the log-shipping wizard that is built-
> in. You can kill two birds with one stone by writing your own backup
> routines. Create a log backup job that runs every hour (we do 5-
> minute intervals here), have that job create backup files that contain
> a date/time stamp, place these files into some folder, let's say
> "FolderX". Write a log-shipping routine that monitors FolderX for new
> files. When a new file is detected, your log shipping routine should
> restore it and record the file name in a logging table. It then goes
> back to monitoring FolderX for new files that aren't in the logging
> table.
> It's really not as complicated as it seems, and you'll solve all of
> these problems...
Hi again
I now have the transaction log shipping running fine - I have used the
Wizard for now so I'll see how it goes.
One thing though - having removed the additional process to backup and
truncate the log, the log is now not being truncated.
Any thoughts?
Thanks.|||On Feb 25, 6:44 am, "Toby" <tjbeaum...@.gmail.com> wrote:
> On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> Hi again
> I now have the transaction log shipping running fine - I have used the
> Wizard for now so I'll see how it goes.
> One thing though - having removed the additional process to backup and
> truncate the log, the log is now not being truncated.
> Any thoughts?
> Thanks.
As I said, I would suggest NOT using the wizards... I've never
used that log shipping wizard, I have no idea what sort of backup job
it creates. Create your OWN processes, then you know what's going
on...

Log shipping goes out of sync

Hi
I have two SQL 2000 boxes setup to log ship. The box being shipped to
is also the monitor
The size of the db being shipped is around 110GB - so the initial log
ship first creates and copies across the entire db (takes a while)
then begins the transaction logs.
The problem is that the log ship goes out of sync also immediately -
from testing I've managed to get the first shipped transaction log to
load, and sometimes the second, but never further as it gets out of
sync.
Things I've tried:
Changing the schedule from the default 15 minutes to 2 hours (for copy
and load)
Ensuring the log ship process doesn't clash with a routine backup
Any suggestions?
Thanks
TobyOn Feb 19, 7:40 am, "Toby" <tjbeaum...@.gmail.com> wrote:
> Hi
> I have two SQL 2000 boxes setup to log ship. The box being shipped to
> is also the monitor
> The size of the db being shipped is around 110GB - so the initial log
> ship first creates and copies across the entire db (takes a while)
> then begins the transaction logs.
> The problem is that the log ship goes out of sync also immediately -
> from testing I've managed to get the first shipped transaction log to
> load, and sometimes the second, but never further as it gets out of
> sync.
> Things I've tried:
> Changing the schedule from the default 15 minutes to 2 hours (for copy
> and load)
> Ensuring the log ship process doesn't clash with a routine backup
> Any suggestions?
> Thanks
> Toby
Please explain this further:
"Ensuring the log ship process doesn't clash with a routine backup"
Are you running transaction log backups IN ADDITION to the log
shipping process? This will break the log shipping chain. Log
shipping works by taking a backup of the transaction log and restoring
that backup onto another database. If you run your own independent
log backup, you're advancing the LSN pointer, throwing the log
shipping backups out of sync.|||Hi
In reply to your question "Are you running transaction log backups IN
ADDITION to the log
> shipping process? " - the answer is yes I am, so clearly here lies the problem.
I'm running a daily log backup, truncate and shrink - which I realise
now is going to cause issues with the log shipping. However, and
forgive me if this appears trivial, but the reason for run a log
backup, truncate and shrink is to prevent the log file getting too
big, as it currently grows at the rate of 20-30gb a day. If I rely on
the log shipping process only, will this provide adequate truncating
of the log?
Thanks

On Feb 22, 2:48 pm, "Toby" <tjbeaum...@gmail.com> wrote:
> [snip]
Couple of key things here:
1. Log backups truncate the log - the more frequently that you run a
log backup, the quicker committed transactions will get truncated, and
the less likely your log is to grow. Note that LARGE transactions can
still cause growth, because they can't be truncated until fully
committed.
2. You are hurting your overall performance in one, possibly two,
ways. By repeatedly shrinking the log file, you are forcing SQL
Server to grow it again as needed, which introduces additional
overhead, possibly during a busy period. Also, repeatedly growing/
shrinking/growing/shrinking will lead to disk fragmentation, which
will also ultimately hurt your performance.
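The log-backup job described in point 1 might look something like this in T-SQL (a rough sketch only - the database name, path, and file-naming convention are placeholders, and the statement would normally run from a scheduled SQL Agent job):

```sql
-- Hypothetical example: back up the log of "MyDB" to a
-- date/time-stamped file. Each log backup truncates the
-- committed portion of the log, so running this frequently
-- keeps log growth in check.
DECLARE @file VARCHAR(255)
SET @file = 'D:\LogShip\MyDB_log_'
    + CONVERT(VARCHAR(8), GETDATE(), 112)                   -- yyyymmdd
    + '_'
    + REPLACE(CONVERT(VARCHAR(8), GETDATE(), 108), ':', '') -- hhmmss
    + '.trn'

BACKUP LOG MyDB TO DISK = @file
```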
My advice would be to not use the log-shipping wizard that is built-
in. You can kill two birds with one stone by writing your own backup
routines. Create a log backup job that runs every hour (we do 5-
minute intervals here), have that job create backup files that contain
a date/time stamp, place these files into some folder, let's say
"FolderX". Write a log-shipping routine that monitors FolderX for new
files. When a new file is detected, your log shipping routine should
restore it and record the file name in a logging table. It then goes
back to monitoring FolderX for new files that aren't in the logging
table.
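A minimal sketch of the restore side of such a routine (all names here - the logging table, folder, and file name - are hypothetical, and the code that polls FolderX for new file names is omitted):

```sql
-- For each new .trn file found in FolderX that is not yet in the
-- logging table, restore it and record it. STANDBY keeps the
-- secondary readable between restores; NORECOVERY would also work.
IF NOT EXISTS (SELECT 1 FROM dbo.ShippedLogs
               WHERE file_name = 'MyDB_log_20070222_1500.trn')
BEGIN
    RESTORE LOG MyDB
        FROM DISK = '\\secondary\FolderX\MyDB_log_20070222_1500.trn'
        WITH STANDBY = 'D:\LogShip\undo_MyDB.dat'

    INSERT dbo.ShippedLogs (file_name, restored_at)
    VALUES ('MyDB_log_20070222_1500.trn', GETDATE())
END
```

Files must be restored in backup (LSN) order, which the date/time-stamped file names make easy to enforce.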
It's really not as complicated as it seems, and you'll solve all of
these problems...

On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@gmail.com> wrote:
> [snip]
OK that's great thanks. I really don't know why I had the task shrink
the log in the first place, given the rate it increases.

On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@gmail.com> wrote:
> [snip]
Hi again
I now have the transaction log shipping running fine - I have used the
Wizard for now so I'll see how it goes.
One thing though - having removed the additional process to back up and truncate the log, the log is now not being truncated.
Any thoughts?
Thanks.

On Feb 25, 6:44 am, "Toby" <tjbeaum...@gmail.com> wrote:
> [snip]
> Hi again
> I now have the transaction log shipping running fine - I have used the
> Wizard for now so I'll see how it goes.
> One thing though - having removed the additional process to backup and
> truncate the log, the log is now not being truncated.
> Any thoughts?
> Thanks.
As I said, I would suggest NOT using the wizards... I've never
used that log shipping wizard, I have no idea what sort of backup job
it creates. Create your OWN processes, then you know what's going
on...
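If you want to verify whether the log backups that do run are truncating the log, two standard checks on the primary will show it (the database name is a placeholder):

```sql
DBCC SQLPERF(LOGSPACE)   -- size and percent-used of every database's log
DBCC OPENTRAN('MyDB')    -- oldest open transaction pinning the log, if any
```

Note that truncation only frees space inside the log file for reuse - the file itself stays the same size unless it is explicitly shrunk - so watch the percent-used figure rather than the file size on disk.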

Log Shipping Fails Once a Week

Hi
I have log shipping set up on 2 servers - A and B. Both are SQL Server 2000 and the size of the database is 95GB. The average transaction log is around 100MB.
A ships to B and B is also the monitoring server.
The schedule is set to ship every 15 minutes.
Everything works fine, however every Sunday at 23:00 without fail the log ship fails due to the LSN counter being out of sync. The error reports an earlier backup is available, but any attempt to restore fails and the log ship schedule must be deleted and re-created.
There are no other backups/restores or any other scheduled jobs that
occur at this time.
Does anyone have any suggestions?
Thanks.
What is the exact error message you receive? Please can you post it up for
us...
Cheers,
Paul Ibison
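While waiting for the exact message, one thing worth checking on server A is whether something else takes a backup around Sunday 23:00 - an extra full or log backup from another tool will break the LSN chain even if no SQL Agent job is scheduled then. msdb records every backup, so a query like this will show them (the database name is a placeholder):

```sql
-- List recent backups of the shipped database, newest first.
SELECT  b.backup_start_date,
        b.type,                  -- D = full, L = log, I = differential
        m.physical_device_name
FROM    msdb.dbo.backupset b
JOIN    msdb.dbo.backupmediafamily m
        ON b.media_set_id = m.media_set_id
WHERE   b.database_name = 'MyDB'
ORDER BY b.backup_start_date DESC
```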