[rescue] Block size and the single DD

Joshua Boyd jdboyd at jdboyd.net
Wed Feb 11 10:47:44 CST 2004


On Wed, Feb 11, 2004 at 11:32:07AM -0500, Sheldon T. Hall wrote:

> Mine's also a DLT200xt, so I suppose it wants 1.25/2.50 MBs, too.
> 
> > Since the
> > internal disks are plain narrow SCSI it probably doesn't matter I guess).
> 
> Yeah, but even they are supposed to be 10 MBs, I think.

Having reviewed the archives a bit, I see you are trying to stream over
the network and straight to tape, so at the moment the Classic's local
disk speed is irrelevant anyway.

Looking back on the command you are trying to use:
> rsh $MACHINE xfsdump -l 0 -F - $FS | dd of=$TAPE bs=$BS

Why don't you try doing something like:

rsh $MACHINE xfsdump -l 0 -F - $FS > /dev/null

and see what the performance is like that way.  You might as well make
sure it isn't some strange network problem before looking for other
issues.  At least, that is what I would do.
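
If you want an actual number out of that, something like this rough
sketch should do (it assumes the same $MACHINE and $FS variables from
your script, and that your shell has a time command):

time rsh $MACHINE xfsdump -l 0 -F - $FS > /dev/null

Divide the filesystem size by the elapsed time and compare that against
the 1.25/2.5 MB/s the DLT wants to see.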

Also, is xfsdump fast enough?  Maybe just on the Challenge do something
like xfsdump -l 0 -F - $FS > /dev/null and see whether xfsdump is even
generating the data fast enough.
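
To see how much data actually moves in that local test, you could stick
dd in the middle as a byte counter (just a sketch; dd's closing
"records in/out" lines tell you the total it passed through):

xfsdump -l 0 -F - $FS | dd of=/dev/null

Run it under time, or just watch the clock, and you can turn that count
into a rate for xfsdump alone, with no network in the picture.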

Assuming that the network is fine, then I wonder if perhaps switching
from bs=whatever to ibs=something and obs=somethingelse would be
sensible.  I say this because I'm wondering if dd is blocking somewhat
inappropriately.  Perhaps you should have ibs be something really small,
and obs be something a few K in size?  There might be a way to look up
what the ideal outgoing block size is for the DLT, and I would imagine
the ideal incoming block size is going to be however much rsh can
transfer per packet, perhaps something less than 1024 bytes, like maybe
512 bytes?  In my experience, matching block sizes to the devices can
make dd perform much, much faster.
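
Concretely, that might look something like this (a sketch only; the
512/32k figures are guesses to illustrate the split, not known-good
numbers for your drive):

rsh $MACHINE xfsdump -l 0 -F - $FS | dd ibs=512 obs=32k of=$TAPE

With ibs and obs split like that, dd can accept the small reads rsh
hands it but still reblock them into fixed 32k records on the way out,
so the tape drive always sees full-size writes instead of a stream of
short ones.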

From a different post:
> The SPARCclassic _is_ the remote backup server, as viewed from the SGIs,
> albeit a remote backup server without a whole lot of disk space.  Or CPU
> power.  Or network bandwidth.  It is, however, what I have.  It also
> runs 24/7, which makes running "pull" backups from cron pretty easy.

I think the ideal would be to find a way to cache things on disk.  I
realize that you don't want to spend any money, but putting another 9
gig or 18 gig disk on the SS Classic would let you copy the filesystem
to local disk first and then go from local disk to the DLT.  It will
obviously cost a few dollars, but it will break the problem down into
smaller pieces.
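
As a sketch of what that two-stage run might look like (the /spool path
is made up, and obs=32k is just a placeholder block size):

rsh $MACHINE xfsdump -l 0 -F - $FS > /spool/challenge.dump0
dd if=/spool/challenge.dump0 obs=32k of=$TAPE

The network leg can then crawl along at whatever rate it manages, and
the tape only starts once the drive can be fed at full speed from local
disk.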

Does the Classic have enough space to back up a smaller filesystem on
the Challenge?  If so, try going from the Challenge to local disk, and
then from local disk to tape, as a first test.

But then, take all of the above with a grain of salt: I haven't
actually used a DLT, but I have spent a lot of time trying to make
piped setups like this work faster.


