We are testing the canned log shipping built into SQL Server 2000 Enterprise.
Somehow, the monitor database that tracks what has been backed up, shipped,
etc. is out of sync. The primary and secondary databases are synced up and the
shipping is working fine, but the monitor doesn't seem to know that. We
suspect we know what happened (SQL Agent was stopped when log shipping was
started), but this raises a bigger question.
If the monitor database should get out of sync (say, for example, it is
stopped for some time), how do we resync it? Conceptually, we want to tell it
that the primary and secondary are in sync (which we can verify separately).
Any ideas? Is this even the right newsgroup?
-alan
|||Never mind. It was an NT domain security issue. We just needed to make the
user name match for all the services (SQL Server and SQL Agent) on all the
machines.
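If the monitor does drift out of sync again, one starting point is to look at what the monitor server actually believes. This is only a rough sketch - it assumes the SQL Server 2000 monitor keeps its state in the msdb tables log_shipping_primaries and log_shipping_secondaries, which may differ in your build:

```sql
-- Run on the monitor server: dump what the monitor thinks has been
-- backed up, copied and loaded for each log shipping pair.
USE msdb
GO
SELECT * FROM dbo.log_shipping_primaries
SELECT * FROM dbo.log_shipping_secondaries
GO
```

If the recorded backup/copy/load filenames lag behind what is actually on disk, that confirms the monitor (rather than the shipping itself) is what has gone stale.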
Wednesday, March 28, 2012
log shipping set up via scripts instead of GUI
Is it possible to set up log shipping via scripts instead
of the GUI? We have over 50 databases being log shipped, and
occasional (almost random) failures for various reasons
cause us to have to "reset" (redo) the log shipping.
Doing this via the GUI is a time-consuming process. I
found a 5/23/03 webcast with a PPT slide saying this is "not
supported". Does this mean it is not POSSIBLE? We are using
SQL Server 2000 on Windows 2000 boxes, with two separate
servers hosting the primary and secondary (log-shipped-to)
databases.
Thanks in advance!
Hi Tom
It certainly is possible. All that log shipping does under the covers is
basically back up the database log, FTP the file to the target server, then
restore it with STANDBY. That can easily be achieved in scripts - just BACKUP
LOG [dbname] TO DISK = 'c:\backupname.lbak', then perform an FTP using
xp_cmdshell, then run the restore command on the target server. Many people
do this, and if you search the newsgroup archive on Google you'll see lots of
sample scripts.
HTH
Regards,
Greg Linwood
SQL Server MVP
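The backup/copy/restore loop Greg describes can be sketched roughly as follows. The database name, paths and share below are made up for illustration:

```sql
-- On the primary: back up the transaction log to a local file.
BACKUP LOG [AppDB] TO DISK = 'c:\logship\AppDB.lbak'

-- Move the file to the secondary (here a plain copy via xp_cmdshell;
-- an ftp script works the same way).
EXEC master.dbo.xp_cmdshell 'copy c:\logship\AppDB.lbak \\secondary\logship\'

-- On the secondary: apply the log, keeping the database readable.
RESTORE LOG [AppDB]
    FROM DISK = '\\secondary\logship\AppDB.lbak'
    WITH STANDBY = 'c:\logship\AppDB_undo.ldf'
```

Scheduled as SQL Agent jobs on each server, these three steps are essentially the whole of log shipping.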
|||Some good information:
314515 INF: Frequently Asked Questions - SQL Server 2000 - Log Shipping
http://support.microsoft.com/?id=314515
323135 INF: Microsoft SQL Server 2000 - How to Set Up Log Shipping (White Paper)
http://support.microsoft.com/?id=323135
325220 Support WebCast: Microsoft SQL Server 2000 Log Shipping
http://support.microsoft.com/?id=325220
821786 Support WebCast: Microsoft SQL Server 2000: Using Log Shipping
http://support.microsoft.com/?id=821786
321247 HOW TO: Configure Security for Log Shipping
http://support.microsoft.com/?id=321247
329133 INF: Troubleshooting SQL Server 2000 Log Shipping "Out of Sync" Errors
http://support.microsoft.com/?id=329133
--
Keith
|||yes.
this is exactly what we are doing here.
Here is an example script to apply logs to the standby.
Cheers,
Greg Jackson
PDX, Oregon
-- RESTORE LOG SCRIPT FOR ALL TRAN BAKS IN A DIR.SQL

/****** Object:  Stored Procedure usp_Ratings_TransactionLog_Restore    Script Date: 05/12/2000 8:57:25 PM ******/
/*	Written By: Alex Wergeles
 *	Created:  05/01/2001
 *	Notes:
 *
 *	Altered By:  Alex Wergeles
 *	Date:  06/01/2001
 *	Notes:   Commented
 *
 *	Altered By:
 *	Date:
 *	Notes:   IF YOU ALTER THIS PROCEDURE, COPY AND PASTE THE 'ALTERED BY'
 *	SECTION TO DUPLICATE IT; THEN EDIT THE FIRST BLANK COPY, LEAVING
 *	A DUPLICATE IN PLACE FOR THE NEXT PERSON TO EDIT THIS PROC.
 */

--drop procedure usp_Ratings_TransactionLog_Restore
--as

set nocount on

declare	@string varchar(1000),
	@LogFile varchar(1000)

-- rebuild the worktable that will hold the directory listing
if exists (select * from tempdb..sysobjects where id = object_id('tempdb..#LogShip'))
	drop table #LogShip

create table #LogShip
	(
	LogFile varchar(1000)
	)

-- list the transaction log backups in the shipping directory, oldest first
insert #LogShip
exec master.dbo.xp_cmdshell 'dir \\datawhse01\SQLBACKUP\PRODSQL03\dbHAU\TRNS\* /B /OD'

--set @LogFile = (select * from #LogShip order by LogFile desc)
declare cur1 cursor for
select LogFile
from #LogShip

open cur1

fetch next from cur1 into @LogFile

while @@fetch_status = 0
begin
	set @string = 'RESTORE LOG dbHAU FROM DISK = N''' +
		'\\datawhse01\SQLBACKUP\PRODSQL03\dbHAU\TRNS\' + @LogFile + '''' +
		' WITH STANDBY = ''' +
		'\\datawhse01\SQLBACKUP\PRODSQL03\dbHAU\TRNS\dbHAU_logship_UndoFile.ldf'''

	print (@string)
	--use print to create scripts
	--can change above to exec to run inline
	fetch next from cur1 into @LogFile
end

deallocate cur1

GO
|||Tom,
The SQL Server Resource Kit has some scripts to do this,
which are particularly useful if you only have Standard
Edition or lower. Similar scripts are available for free
from various sites (e.g.
http://www.sql-server-performance.com/sql_server_log_shipping.asp).
In both cases you won't get a monitoring GUI, but
alerts/reports can be set up to perform this work.
Regards,
Paul Ibison
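One way to get the alerting Paul mentions without the monitor GUI is to watch the secondary's restore history. This sketch assumes a hypothetical database name and a one-hour threshold, and relies on the msdb.dbo.restorehistory table that SQL Server maintains for every restore:

```sql
-- Run periodically on the secondary (e.g. as a SQL Agent job): raise an
-- alertable error if no transaction log restore has completed recently.
IF NOT EXISTS (
    SELECT *
    FROM msdb.dbo.restorehistory
    WHERE destination_database_name = 'AppDB'
      AND restore_type = 'L'                       -- log restores only
      AND restore_date > DATEADD(hour, -1, GETDATE())
)
    RAISERROR ('Log shipping for AppDB appears to have stalled', 16, 1)
```

An operator alert on that error number (or a severity-based alert) then covers the "is it still shipping?" question for all 50 databases.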
Monday, March 26, 2012
Log shipping reversal - Two legs in SQL 2000
Anyone do log shipping role reversal when two legs are being shipped?
--
jl

Can you please better describe what you're looking for - one database being
shipped, two, etc.?
Many thanks.
Log shipping recovery trials
We have a Production Server A that's log shipped to a Standby Server B. We want
to simulate some disaster recovery runs.
What are some things to consider? Failing over from A to B seems attainable, but
how do I revert back to A again? Do I have to redo log shipping, this time
from B to A, which means restoring all the databases/logs in NORECOVERY state
onto A from B using the wizard?
Are there better ways to simulate this? I don't want it to be a
nightmare, especially since if we are going to test this we would want to
revert back to A as soon as possible during the run...

You don't have to set up reverse log shipping. You simply back up B, restore
it on A, and then re-initialise the log shipping process to continue testing.
Regards,
Greg Linwood
SQL Server MVP
"sql" <sql@.hotmail.com> wrote in message
news:uNMLuPA%23DHA.888@.tk2msftngp13.phx.gbl...
> We have a Production Server A thats log shipped to Standby Server B. We want
> to simulate some disaster recovery runs .
> What are some things to consider. It seems obtainable from A to B.. But how
> do I revert back to A again.. Do I have to redo log shipping and this time
> from B to A which means I need to restore all dbs/logs in norecovery state
> onto A from B using the wizard
> Are there better ways to simulate this ? I dont wish for it to be a
> nightmare especially since if we are going to test this and we would want to
> revert back to A as soon as possible during this run...
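Sketched as T-SQL, the failback Greg describes is just a full backup/restore cycle (database and file names are illustrative, not a prescribed procedure):

```sql
-- On B (acting primary after the failover test): take a full backup
BACKUP DATABASE MyAppDB
    TO DISK = 'E:\backup\MyAppDB_failback.bak'

-- On A: restore it WITH RECOVERY so A becomes the primary again
RESTORE DATABASE MyAppDB
    FROM DISK = 'E:\backup\MyAppDB_failback.bak'
    WITH RECOVERY, REPLACE
-- ...then re-initialise log shipping from A to B as originally configured
```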
When we fail to Standby Server B initially and simulate to run for 1 or 2
days before we recover Primary Server A, does that mean my downtime to
revert back to server A would be taking server B offline... doing a total
restore of all dbs to server A and then proceed. So my downtime is equal to
the amount of time it takes to restore the dbs as opposed to the initial
failover from A to B as the only process involved there is recovering the
dbs.. and other minor stuff such as logins,msdb related jobs and DTS
packages,sysmessages..assuming we are good to go here
"Greg Linwood" <g_linwoodQhotmail.com> wrote in message
news:O%23wVThA%23DHA.632@.TK2MSFTNGP12.phx.gbl...
> You don't have to set up reverse log-shipping. You simply backup B & restore
> it on A & then re-initialise the log shipping process to continue testing..
> Regards,
> Greg Linwood
> SQL Server MVP
> "sql" <sql@.hotmail.com> wrote in message
> news:uNMLuPA%23DHA.888@.tk2msftngp13.phx.gbl...
> want
> how
time
state
want
> to
>|||Hi Hassan.
That's one way of doing it. Another is to do it in multiple steps - eg leave
B online whilst you perform your full backup restore to A (DTS packages &
whatever else), then restore the remaining log tails to A & your window of
downtime is fairly small. If you want to keep that downtime window even
smaller you could iterate the log tail restores.
Regards,
Greg Linwood
SQL Server MVP
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:%23YHO71D%23DHA.132@.TK2MSFTNGP09.phx.gbl...
> When we fail to Standby Server B initially and simulate to run for 1 or 2
> days before we recover Primary Server A, does that mean my downtime to
> revert back to server A would be taking server B offline... doing a total
> restore of all dbs to server A and then proceed. So my downtime is equal to
> the amount of time it takes to restore the dbs as opposed to the initial
> failover from A to B as the only process involved there is recovering the
> dbs.. and other minor stuff such as logins, msdb related jobs and DTS
> packages, sysmessages.. assuming we are good to go here
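The multi-step variant Greg suggests keeps B online during the bulk of the restore and only takes downtime for the final log tail. A sketch, with illustrative file names (iterate the middle step for as long as B stays online):

```sql
-- On A: restore the full backup WITHOUT recovery while B is still serving users
RESTORE DATABASE MyAppDB
    FROM DISK = 'E:\backup\MyAppDB_full.bak'
    WITH NORECOVERY, REPLACE

-- Apply intermediate log backups as they are produced on B
RESTORE LOG MyAppDB
    FROM DISK = 'E:\backup\MyAppDB_log1.trn' WITH NORECOVERY

-- Downtime window: take B offline, back up its log tail, then recover A
RESTORE LOG MyAppDB
    FROM DISK = 'E:\backup\MyAppDB_tail.trn' WITH RECOVERY
```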
Monday, March 12, 2012
Log shipping goes out of sync
Hi
I have two SQL 2000 boxes setup to log ship. The box being shipped to
is also the monitor
The size of the db being shipped is around 110GB - so the initial log
ship first creates and copies across the entire db (takes a while)
then begins the transaction logs.
The problem is that the log ship goes out of sync almost immediately -
from testing I've managed to get the first shipped transaction log to
load, and sometimes the second, but never further as it gets out of
sync.
Things I've tried:
Changing the schedule from the default 15 minutes to 2 hours (for copy
and load)
Ensuring the log ship process doesn't clash with a routine backup
Any suggestions?
Thanks
Toby
On Feb 19, 7:40 am, "Toby" <tjbeaum...@.gmail.com> wrote:
> Hi
> I have two SQL 2000 boxes setup to log ship. The box being shipped to
> is also the monitor
> The size of the db being shipped is around 110GB - so the initial log
> ship first creates and copies across the entire db (takes a while)
> then begins the transaction logs.
> The problem is that the log ship goes out of sync also immediately -
> from testing I've managed to get the first shipped transaction log to
> load, and sometimes the second, but never further as it gets out of
> sync.
> Things I've tried:
> Changing the schedule from the default 15 minutes to 2 hours (for copy
> and load)
> Ensuring the log ship process doesn't clash with a routine backup
> Any suggestions?
> Thanks
> Toby
Please explain this further:
"Ensuring the log ship process doesn't clash with a routine backup"
Are you running transaction log backups IN ADDITION to the log
shipping process? This will break the log shipping chain. Log
shipping works by taking a backup of the transaction log and restoring
that backup onto another database. If you run your own independent
log backup, you're advancing the LSN pointer, throwing the log
shipping backups out of sync.
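One way to check whether an out-of-band log backup is breaking the chain is to inspect the LSN ranges recorded in msdb's backup history ('MyAppDB' is a placeholder database name):

```sql
-- List recent log backups for the database with their LSN ranges.
-- A backup whose device is not the log shipping share, or a gap between
-- one backup's last_lsn and the next one's first_lsn, points to a log
-- backup taken outside the log shipping jobs.
SELECT b.backup_start_date, b.first_lsn, b.last_lsn, f.physical_device_name
FROM msdb.dbo.backupset b
JOIN msdb.dbo.backupmediafamily f ON b.media_set_id = f.media_set_id
WHERE b.database_name = 'MyAppDB'
  AND b.type = 'L'            -- 'L' = transaction log backup
ORDER BY b.backup_start_date DESC
```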
Hi
In reply to your question "Are you running transaction log backups IN
ADDITION to the log shipping process?" - the answer is yes I am, so clearly here lies the problem.
I'm running a daily log backup, truncate and shrink - which I realise
now is going to cause issues with the log shipping. However, and
forgive me if this appears trivial, but the reason for running a log
backup, truncate and shrink is to prevent the log file getting too
big, as it currently grows at the rate of 20-30gb a day. If I rely on
the log shipping process only, will this provide adequate truncating
of the log?
Thanks
On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> Hi
> In reply to your question "Are you running transaction log backups IN
> ADDITION to the log
>
> I'm running a daily log backup, truncate and shrink - which I realise
> now is going to cause issues with the log shipping. However, and
> forgive me if this appears trivial, but the reason for run a log
> backup, truncate and shrink is to prevent the log file getting too
> big, as it currently grows at the rate of 20-30gb a day. If I rely on
> the log shipping process only, will this provide adequate truncating
> of the log?
> Thanks
Couple of key things here:
1. Log backups truncate the log - the more frequently that you run a
log backup, the quicker committed transactions will get truncated, and
the less likely your log is to grow. Note that LARGE transactions can
still cause growth, because they can't be truncated until fully
committed.
2. You are hurting your overall performance in one, possibly two,
ways. By repeatedly shrinking the log file, you are forcing SQL
Server to grow it again as needed, which introduces additional
overhead, possibly during a busy period. Also, repeatedly growing/
shrinking/growing/shrinking will lead to disk fragmentation, which
will also ultimately hurt your performance.
My advice would be to not use the log-shipping wizard that is built-
in. You can kill two birds with one stone by writing your own backup
routines. Create a log backup job that runs every hour (we do 5-
minute intervals here), have that job create backup files that contain
a date/time stamp, place these files into some folder, let's say
"FolderX". Write a log-shipping routine that monitors FolderX for new
files. When a new file is detected, your log shipping routine should
restore it and record the file name in a logging table. It then goes
back to monitoring FolderX for new files that aren't in the logging
table.
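A skeleton of the timestamped backup job described above might look like this (database name, path, and the logging table are hypothetical, not Tracy's actual scripts):

```sql
-- Hourly SQL Agent job step on the primary: build a timestamped file
-- name and back the log up into the shared folder ("FolderX")
DECLARE @file varchar(260)
SET @file = 'E:\FolderX\MyAppDB_'
    + CONVERT(varchar(8), GETDATE(), 112)                          -- yyyymmdd
    + '_' + REPLACE(CONVERT(varchar(8), GETDATE(), 108), ':', '')  -- hhmmss
    + '.trn'
BACKUP LOG MyAppDB TO DISK = @file

-- On the standby, the restore routine records each file it has applied:
-- CREATE TABLE dbo.applied_logs (
--     file_name  varchar(260) PRIMARY KEY,
--     applied_at datetime DEFAULT GETDATE())
-- and only restores files not yet present in that table.
```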
It's really not as complicated as it seems, and you'll solve all of
these problems...
On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> Couple of key things here:
> 1. Log backups truncate the log - the more frequently that you run a
> log backup, the quicker committed transactions will get truncated, and
> the less likely your log is to grow. Note that LARGE transactions can
> still cause growth, because they can't be truncated until fully
> committed.
> 2. You are hurting your overall performance in one, possibly two,
> ways. By repeatedly shrinking the log file, you are forcing SQL
> Server to grow it again as needed, which introduces additional
> overhead, possibly during a busy period. Also, repeatedly growing/
> shrinking/growing/shrinking will lead to disk fragmentation, which
> will also ultimately hurt your performance.
> My advice would be to not use the log-shipping wizard that is built-
> in. You can kill two birds with one stone by writing your own backup
> routines. Create a log backup job that runs every hour (we do 5-
> minute intervals here), have that job create backup files that contain
> a date/time stamp, place these files into some folder, let's say
> "FolderX". Write a log-shipping routine that monitors FolderX for new
> files. When a new file is detected, your log shipping routine should
> restore it and record the file name in a logging table. It then goes
> back to monitoring FolderX for new files that aren't in the logging
> table.
> It's really not as complicated as it seems, and you'll solve all of
> these problems...
OK that's great thanks. I really don't know why I had the task shrink
the log in the first place, given the rate it increases.
On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> Couple of key things here:
> 1. Log backups truncate the log - the more frequently that you run a
> log backup, the quicker committed transactions will get truncated, and
> the less likely your log is to grow. Note that LARGE transactions can
> still cause growth, because they can't be truncated until fully
> committed.
> 2. You are hurting your overall performance in one, possibly two,
> ways. By repeatedly shrinking the log file, you are forcing SQL
> Server to grow it again as needed, which introduces additional
> overhead, possibly during a busy period. Also, repeatedly growing/
> shrinking/growing/shrinking will lead to disk fragmentation, which
> will also ultimately hurt your performance.
> My advice would be to not use the log-shipping wizard that is built-
> in. You can kill two birds with one stone by writing your own backup
> routines. Create a log backup job that runs every hour (we do 5-
> minute intervals here), have that job create backup files that contain
> a date/time stamp, place these files into some folder, let's say
> "FolderX". Write a log-shipping routine that monitors FolderX for new
> files. When a new file is detected, your log shipping routine should
> restore it and record the file name in a logging table. It then goes
> back to monitoring FolderX for new files that aren't in the logging
> table.
> It's really not as complicated as it seems, and you'll solve all of
> these problems...
Hi again
I now have the transaction log shipping running fine - I have used the
Wizard for now so I'll see how it goes.
One thing though - having removed the additional process to backup and
truncate the log, the log is now not being truncated.
Any thoughts?
Thanks.
On Feb 25, 6:44 am, "Toby" <tjbeaum...@.gmail.com> wrote:
> On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> Hi again
> I now have the transaction log shipping running fine - I have used the
> Wizard for now so I'll see how it goes.
> One thing though - having removed the additional process to backup and
> truncate the log, the log is now not being truncated.
> Any thoughts?
> Thanks.
As I said, I would suggest NOT using the wizards... I've never
used that log shipping wizard, I have no idea what sort of backup job
it creates. Create your OWN processes, then you know what's going
on...
Log shipping goes out of sync
Hi
I have two SQL 2000 boxes setup to log ship. The box being shipped to
is also the monitor.
The size of the db being shipped is around 110GB - so the initial log
ship first creates and copies across the entire db (takes a while)
then begins the transaction logs.
The problem is that the log ship goes out of sync almost immediately -
from testing I've managed to get the first shipped transaction log to
load, and sometimes the second, but never further as it gets out of
sync.
Things I've tried:
Changing the schedule from the default 15 minutes to 2 hours (for copy
and load)
Ensuring the log ship process doesn't clash with a routine backup
Any suggestions?
Thanks
Toby

On Feb 19, 7:40 am, "Toby" <tjbeaum...@.gmail.com> wrote:
> Hi
> I have two SQL 2000 boxes setup to log ship. The box being shipped to
> is also the monitor
> The size of the db being shipped is around 110GB - so the initial log
> ship first creates and copies across the entire db (takes a while)
> then begins the transaction logs.
> The problem is that the log ship goes out of sync also immediately -
> from testing I've managed to get the first shipped transaction log to
> load, and sometimes the second, but never further as it gets out of
> sync.
> Things I've tried:
> Changing the schedule from the default 15 minutes to 2 hours (for copy
> and load)
> Ensuring the log ship process doesn't clash with a routine backup
> Any suggestions?
> Thanks
> Toby
Please explain this further:
"Ensuring the log ship process doesn't clash with a routine backup"
Are you running transaction log backups IN ADDITION to the log
shipping process? This will break the log shipping chain. Log
shipping works by taking a backup of the transaction log and restoring
that backup onto another database. If you run your own independent
log backup, you're advancing the LSN pointer, throwing the log
shipping backups out of sync.

Hi
In reply to your question "Are you running transaction log backups IN
ADDITION to the log
> shipping process? " - the answer is yes I am, so clearly here lies the problem.
I'm running a daily log backup, truncate and shrink - which I realise
now is going to cause issues with the log shipping. However, and
forgive me if this appears trivial, but the reason for running a log
backup, truncate and shrink is to prevent the log file getting too
big, as it currently grows at the rate of 20-30GB a day. If I rely on
the log shipping process only, will this provide adequate truncating
of the log?
Thanks

On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> Hi
> In reply to your question "Are you running transaction log backups IN
> ADDITION to the log
> > shipping process? " - the answer is yes I am, so clearly here lies the problem.
> I'm running a daily log backup, truncate and shrink - which I realise
> now is going to cause issues with the log shipping. However, and
> forgive me if this appears trivial, but the reason for run a log
> backup, truncate and shrink is to prevent the log file getting too
> big, as it currently grows at the rate of 20-30gb a day. If I rely on
> the log shipping process only, will this provide adequate truncating
> of the log?
> Thanks
Couple of key things here:
1. Log backups truncate the log - the more frequently that you run a
log backup, the quicker committed transactions will get truncated, and
the less likely your log is to grow. Note that LARGE transactions can
still cause growth, because they can't be truncated until fully
committed.
2. You are hurting your overall performance in one, possibly two,
ways. By repeatedly shrinking the log file, you are forcing SQL
Server to grow it again as needed, which introduces additional
overhead, possibly during a busy period. Also, repeatedly growing/
shrinking/growing/shrinking will lead to disk fragmentation, which
will also ultimately hurt your performance.
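Both points can be combined into one scheduled job: back up the log frequently with a timestamped file name (each log backup truncates the committed portion), and check actual log usage instead of shrinking. A minimal sketch, assuming a database named MyDB and an illustrative FolderX path - names and schedule are assumptions, not a tested implementation:

```sql
-- Frequent log backup with a timestamped file name.
-- Database name, path, and naming scheme are illustrative.
DECLARE @fname VARCHAR(260)
SET @fname = '\\Secondary\FolderX\MyDB_'
    + CONVERT(VARCHAR(8), GETDATE(), 112)                    -- yyyymmdd
    + '_'
    + REPLACE(CONVERT(VARCHAR(8), GETDATE(), 108), ':', '')  -- hhmmss
    + '.trn'

-- Each log backup truncates the committed portion of the log,
-- so frequent backups keep the file from growing.
BACKUP LOG MyDB TO DISK = @fname

-- Report log size and percent used, rather than assuming it must shrink.
DBCC SQLPERF(LOGSPACE)
```

Sizing the log file once for the daily peak (ALTER DATABASE ... MODIFY FILE) avoids the grow/shrink cycle described above.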
My advice would be to not use the log-shipping wizard that is built-
in. You can kill two birds with one stone by writing your own backup
routines. Create a log backup job that runs every hour (we do 5-
minute intervals here), have that job create backup files that contain
a date/time stamp, place these files into some folder, let's say
"FolderX". Write a log-shipping routine that monitors FolderX for new
files. When a new file is detected, your log shipping routine should
restore it and record the file name in a logging table. It then goes
back to monitoring FolderX for new files that aren't in the logging
table.
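The restore side of that routine might be sketched like this on the secondary server; the table name, paths, and the use of xp_cmdshell for the directory listing are all assumptions for illustration:

```sql
-- Logging table recording which backup files have been restored.
CREATE TABLE dbo.RestoredLogs (
    FileName   VARCHAR(255) NOT NULL PRIMARY KEY,
    RestoredAt DATETIME     NOT NULL DEFAULT (GETDATE())
)
GO

-- Scheduled job step: restore any new files found in FolderX.
DECLARE @f VARCHAR(255), @path VARCHAR(300)

CREATE TABLE #files (FileName VARCHAR(255))
INSERT INTO #files
    EXEC master.dbo.xp_cmdshell 'dir /b D:\FolderX\*.trn'

DECLARE c CURSOR FOR
    SELECT FileName FROM #files
    WHERE FileName IS NOT NULL
      AND FileName NOT IN (SELECT FileName FROM dbo.RestoredLogs)
    ORDER BY FileName          -- timestamped names sort chronologically

OPEN c
FETCH NEXT FROM c INTO @f
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @path = 'D:\FolderX\' + @f
    RESTORE LOG MyDB FROM DISK = @path
        WITH STANDBY = 'D:\FolderX\undo.dat'   -- leaves the db readable
    INSERT INTO dbo.RestoredLogs (FileName) VALUES (@f)
    FETCH NEXT FROM c INTO @f
END
CLOSE c
DEALLOCATE c
DROP TABLE #files
```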
It's really not as complicated as it seems, and you'll solve all of
these problems...

On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
>
> > Hi
> > In reply to your question "Are you running transaction log backups IN
> > ADDITION to the log
> > > shipping process? " - the answer is yes I am, so clearly here lies the problem.
> > I'm running a daily log backup, truncate and shrink - which I realise
> > now is going to cause issues with the log shipping. However, and
> > forgive me if this appears trivial, but the reason for run a log
> > backup, truncate and shrink is to prevent the log file getting too
> > big, as it currently grows at the rate of 20-30gb a day. If I rely on
> > the log shipping process only, will this provide adequate truncating
> > of the log?
> > Thanks
> Couple of key things here:
> 1. Log backups truncate the log - the more frequently that you run a
> log backup, the quicker committed transactions will get truncated, and
> the less likely your log is to grow. Note that LARGE transactions can
> still cause growth, because they can't be truncated until fully
> committed.
> 2. You are hurting your overall performance in one, possibly two,
> ways. By repeatedly shrinking the log file, you are forcing SQL
> Server to grow it again as needed, which introduces additional
> overhead, possibly during a busy period. Also, repeatedly growing/
> shrinking/growing/shrinking will lead to disk fragmentation, which
> will also ultimately hurt your performance.
> My advice would be to not use the log-shipping wizard that is built-
> in. You can kill two birds with one stone by writing your own backup
> routines. Create a log backup job that runs every hour (we do 5-
> minute intervals here), have that job create backup files that contain
> a date/time stamp, place these files into some folder, let's say
> "FolderX". Write a log-shipping routine that monitors FolderX for new
> files. When a new file is detected, your log shipping routine should
> restore it and record the file name in a logging table. It then goes
> back to monitoring FolderX for new files that aren't in the logging
> table.
> It's really not as complicated as it seems, and you'll solve all of
> these problems...
OK that's great thanks. I really don't know why I had the task shrink
the log in the first place, given the rate it increases.

On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
> On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
>
> > Hi
> > In reply to your question "Are you running transaction log backups IN
> > ADDITION to the log
> > > shipping process? " - the answer is yes I am, so clearly here lies the problem.
> > I'm running a daily log backup, truncate and shrink - which I realise
> > now is going to cause issues with the log shipping. However, and
> > forgive me if this appears trivial, but the reason for run a log
> > backup, truncate and shrink is to prevent the log file getting too
> > big, as it currently grows at the rate of 20-30gb a day. If I rely on
> > the log shipping process only, will this provide adequate truncating
> > of the log?
> > Thanks
> Couple of key things here:
> 1. Log backups truncate the log - the more frequently that you run a
> log backup, the quicker committed transactions will get truncated, and
> the less likely your log is to grow. Note that LARGE transactions can
> still cause growth, because they can't be truncated until fully
> committed.
> 2. You are hurting your overall performance in one, possibly two,
> ways. By repeatedly shrinking the log file, you are forcing SQL
> Server to grow it again as needed, which introduces additional
> overhead, possibly during a busy period. Also, repeatedly growing/
> shrinking/growing/shrinking will lead to disk fragmentation, which
> will also ultimately hurt your performance.
> My advice would be to not use the log-shipping wizard that is built-
> in. You can kill two birds with one stone by writing your own backup
> routines. Create a log backup job that runs every hour (we do 5-
> minute intervals here), have that job create backup files that contain
> a date/time stamp, place these files into some folder, let's say
> "FolderX". Write a log-shipping routine that monitors FolderX for new
> files. When a new file is detected, your log shipping routine should
> restore it and record the file name in a logging table. It then goes
> back to monitoring FolderX for new files that aren't in the logging
> table.
> It's really not as complicated as it seems, and you'll solve all of
> these problems...
Hi again
I now have the transaction log shipping running fine - I have used the
Wizard for now so I'll see how it goes.
One thing though - having removed the additional process to backup and
truncate the log, the log is now not being truncated.
Any thoughts?
Thanks.

On Feb 25, 6:44 am, "Toby" <tjbeaum...@.gmail.com> wrote:
> On 22 Feb, 21:06, "Tracy McKibben" <tracy.mckib...@.gmail.com> wrote:
>
> > On Feb 22, 2:48 pm, "Toby" <tjbeaum...@.gmail.com> wrote:
> > > Hi
> > > In reply to your question "Are you running transaction log backups IN
> > > ADDITION to the log
> > > > shipping process? " - the answer is yes I am, so clearly here lies the problem.
> > > I'm running a daily log backup, truncate and shrink - which I realise
> > > now is going to cause issues with the log shipping. However, and
> > > forgive me if this appears trivial, but the reason for run a log
> > > backup, truncate and shrink is to prevent the log file getting too
> > > big, as it currently grows at the rate of 20-30gb a day. If I rely on
> > > the log shipping process only, will this provide adequate truncating
> > > of the log?
> > > Thanks
> > Couple of key things here:
> > 1. Log backups truncate the log - the more frequently that you run a
> > log backup, the quicker committed transactions will get truncated, and
> > the less likely your log is to grow. Note that LARGE transactions can
> > still cause growth, because they can't be truncated until fully
> > committed.
> > 2. You are hurting your overall performance in one, possibly two,
> > ways. By repeatedly shrinking the log file, you are forcing SQL
> > Server to grow it again as needed, which introduces additional
> > overhead, possibly during a busy period. Also, repeatedly growing/
> > shrinking/growing/shrinking will lead to disk fragmentation, which
> > will also ultimately hurt your performance.
> > My advice would be to not use the log-shipping wizard that is built-
> > in. You can kill two birds with one stone by writing your own backup
> > routines. Create a log backup job that runs every hour (we do 5-
> > minute intervals here), have that job create backup files that contain
> > a date/time stamp, place these files into some folder, let's say
> > "FolderX". Write a log-shipping routine that monitors FolderX for new
> > files. When a new file is detected, your log shipping routine should
> > restore it and record the file name in a logging table. It then goes
> > back to monitoring FolderX for new files that aren't in the logging
> > table.
> > It's really not as complicated as it seems, and you'll solve all of
> > these problems...
> Hi again
> I now have the transaction log shipping running fine - I have used the
> Wizard for now so I'll see how it goes.
> One thing though - having removed the additional process to backup and
> truncate the log, the log is now not being truncated.
> Any thoughts?
> Thanks.
As I said, I would suggest NOT using the wizards... I've never
used that log shipping wizard, I have no idea what sort of backup job
it creates. Create your OWN processes, then you know what's going
on...
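For the earlier question about the log no longer truncating, two quick checks usually narrow it down on SQL 2000: confirm that log backups are actually completing, and look for a long-running open transaction that pins the log. The database name below is illustrative:

```sql
-- Most recent transaction log backups recorded in msdb for the database:
SELECT TOP 5 b.backup_start_date, m.physical_device_name
FROM msdb.dbo.backupset b
JOIN msdb.dbo.backupmediafamily m ON b.media_set_id = m.media_set_id
WHERE b.database_name = 'MyDB'
  AND b.type = 'L'                 -- 'L' = transaction log backup
ORDER BY b.backup_start_date DESC

-- An open transaction prevents truncation past its first log record:
DBCC OPENTRAN('MyDB')
```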
Log Shipping file deletes
I have log shipping working on my OLTP server. One and
only one of the log shipped databases is not deleting the
transaction log files on the destination server.
The deletes are happening on the source server, and they
are happening for all other databases on the destination
server.
I have looked at the various log shipping tables in msdb,
and I can't find any differences to explain this.
Has anyone got any ideas ?
TIA.

Take a look also at file permissions and ownership... Is there a difference
there?
--
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"im Trowbridge" <jtrowbridge@.adelaidebank.com.au> wrote in message
news:3bc001c37b2e$494b75a0$a601280a@.phx.gbl...
> I have log shipping working on my OLTP server. One and
> only one of the log shipped databases is not deleting the
> transaction log files on the destination server.
> The deletes are happening on the source server, and they
> are happening for all other databases on the destination
> server.
> I have looked at the various log shipping tables in msdb,
> and I can't find any differences to explain this.
> Has anyone got any ideas ?
> TIA.
Friday, March 9, 2012
Log shipping data
For log shipping, is all of the data entered into the primary shipped
via the transaction log to the secondary? I guess it is more of a
transaction log question. Does the transaction log also contain all
of the data stored in the db tables? If yes, does this mean there is
duplication of data - storage of the same data in the mdf and ldf
files?
All changes to application/system data in the database are recorded
serially in the transaction log. Using this information, the DBMS can track
which transaction made which changes to SQL Server data.
Information recorded in the transaction log includes:
1. The beginning of each transaction
2. The actual changes made to the data, and the info needed to undo the
modifications made during each transaction
3. Allocation and deallocation changes of database pages
Using this data, Microsoft SQL Server can accomplish data integrity
operations to ensure consistent data is maintained in the database. The
transaction log is used when SQL Server is restarted, when transactions are
rolled back, and to restore the database to the state prior to the
transaction.
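That recovery idea can be illustrated with a toy write-ahead log: on restart, redo only the changes belonging to transactions whose COMMIT record made it into the log, and discard the rest. The record shapes and names below are invented for illustration; real SQL Server log records are far richer.

```python
# Toy write-ahead log: each record is (txn_id, kind, payload).
def apply_log(log, data):
    """Redo committed transactions; changes from uncommitted
    transactions are effectively undone by being skipped."""
    committed = {txn for txn, kind, _ in log if kind == "COMMIT"}
    for txn, kind, payload in log:
        if kind == "CHANGE" and txn in committed:
            key, value = payload
            data[key] = value
    return data
```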
So the transaction log is not duplicated. Once a transaction is written to
disk, the LDF file is cleared automatically under the SIMPLE recovery
model; under FULL and BULK_LOGGED, the LDF file is cleared after a
transaction log backup.
This is what makes a recovery, or point-in-time recovery, possible.
This is a very broad topic; please go through the transaction log and
Recovery Model topics in Books Online.
Thanks
Hari
"erdos" <account@.cygen.com> wrote in message
news:1176439295.175918.296680@.w1g2000hsg.googlegroups.com...
> For log shipping, is all of the data entered into the primary shipped
> via the transaction log to the secondary? I guess it is more of a
> transaction log question. Does the transaction log also contain all
> of the data stored in the db tables? If yes, does this mean there is
> duplication of data - storage of the same data in the mdf and ldf
> files?
>
Log Shipping Copy Errors
We've been experiencing an above-average number of log shipping
log "copy" errors. This is the job on the server
which the log is shipped to that copies
the newest log files from the host server. For no apparent
reason, this job fails, but the next time it's run, it
works. Sometimes it fails 10 times in a row then catches
up all at once. Other times it's less frequent. Anyone
know what might cause this? Disk space and network access
isn't an issue, we've checked that.
thanks
Tom
Do you have a delay between the source server backing up
the database and the log shipping picking it up? It may be
that the target database is trying to copy the source
backup before it is finished. When you have problems is it
at a busy time, when for instance the log may be larger
than normal?
Regards
John|||What's the error message?
>--Original Message--
>We've been experiencing above average log shipping
>log "copy" errors. This is the job in which the server
>which the log is shipped to has a job on it that copies
>the newest log files from the host server. For no
apparent
>reason, this job fails , but the next time it's run, it
>works. Some times it fails 10 times in a row then catches
>up all at once. Other times it's less frequent. Anyone
>know what might cause this . Disk space and netowkr
access
>isn't an issue, we've checked that.
>thanks
>.
>
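John's suggestion above (the copy job racing a backup that is still being written) can be guarded against by copying only files that have not been touched for some settling period. This is a hedged sketch, not the built-in log shipping copy job; the paths, the `.trn` extension, and the 60-second threshold are all assumptions.

```python
import os
import shutil
import time

def copy_settled_files(src, dst, min_age_seconds=60):
    """Copy backup files untouched for `min_age_seconds`, so a
    backup still being written on the source is skipped this run."""
    copied = []
    now = time.time()
    for name in sorted(os.listdir(src)):
        path = os.path.join(src, name)
        if not name.endswith(".trn"):
            continue
        if now - os.path.getmtime(path) < min_age_seconds:
            continue  # likely still being written; pick it up next run
        if not os.path.exists(os.path.join(dst, name)):
            shutil.copy2(path, dst)
            copied.append(name)
    return copied
```

A skipped file is simply picked up on the next scheduled run, which matches the "fails, then catches up all at once" behavior described above.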