script for determining space consumed by increments
dean gaudet
dean-list-rdiff-backup@arctic.org
Mon, 13 May 2002 10:22:19 -0700 (PDT)
On Sat, 11 May 2002, Ben Escoto wrote:
> Are you running ssh with compression enabled (-C)? If bandwidth
> is your problem that should really speed things up...
yeah i am... and i'm using blowfish rather than 3des.
i had the unfortunate luck of a cabling failure on my main server over
the weekend and i brought the system home for some tender loving care.
this gave me the opportunity to do a resync on my local network.
this still ended up taking about 3h. (i didn't do this resync with -C
because the local net is switched 100mbit and it didn't seem worth it.)
i haven't really done a very detailed analysis yet -- but what i was
seeing was somewhere around a maximum of 16kbyte/s transmitted from the
backup server to the main server. this should be the checksums and such.
since this wasn't maxing out the network i'm less convinced i've got an
uplink problem when i'm normally running this across my dsl.
i noticed a fair amount of superfluous syscalls in strace output on the
mirror side:
lstat64("tmp/pstTA5YUD", 0xbfffe4ec) = -1 ENOENT (No such file or directory)
lstat64("tmp/pstTA5YUD", 0xbfffde5c) = -1 ENOENT (No such file or directory)
lstat64("tmp/rdiff-backup.tmp.49", 0xbfffe32c) = -1 ENOENT (No such file or directory)
open("tmp/rdiff-backup.tmp.49", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 6
close(6) = 0
lstat64("tmp/rdiff-backup.tmp.49", {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
chown32(0x82c6ae8, 0x232, 0x232) = 0
chmod("tmp/rdiff-backup.tmp.49", 0600) = 0
gettimeofday({1021166300, 985930}, NULL) = 0
utime("tmp/rdiff-backup.tmp.49", [2002/05/11-18:18:20, 2002/04/15-13:27:12]) = 0
rename("tmp/rdiff-backup.tmp.49", "tmp/pstTA5YUD") = 0
lstat64("tmp/rdiff-backup.tmp.49", 0xbfffe72c) = -1 ENOENT (No such file or directory)
gettimeofday({1021166300, 987585}, NULL) = 0
one of the first two lstats seems extra ... and the last lstat shouldn't
be necessary either.
the lstat of rdiff-backup.tmp.49 right before the open could be removed
if you add O_EXCL to the open() -- that way if the file exists you'll
get an error from open(). (i'm assuming it's a test to see if the file
exists before opening it.)
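in python that open-with-O_EXCL trick looks something like this (just a sketch -- `create_exclusive` is a made-up helper, not rdiff-backup's actual code):

```python
import os

def create_exclusive(path, mode=0o600):
    # O_EXCL makes open() itself fail with EEXIST when the file
    # already exists, so the lstat-before-open can be dropped --
    # one syscall instead of two, and no race between the check
    # and the create.
    return os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, mode)
```

if the file is already there you get the error straight from open() instead of a separate existence test.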
on the system side, on linux, none of this should really be a huge perf
gain due to the dcache in 2.2.x and beyond (although on solaris it's
all worth the effort). but the reduced work in python might be good.
my systems aren't running at full cpu, or full disk bandwidth (dunno about
disk seeking), or full net ... which is why i suspect serialisation.
my systems are generally capable of handling several of the above snippets
in parallel -- especially on my main server where there's 4 disk spindles;
and in both cases i've got SMP boxes. but even on a single spindle,
single cpu box, there's advantage to issuing multiple i/o in parallel
-- because you then get to put more requests into the disk elevators,
and they may be satisfied faster than in series.
there's an async i/o interface named aio, but it's meant to give you
async i/o on long-term opened files. on programs which do a lot of fs
metadata manipulation, the only option for parallelism is multithreading
or multiprocesses.
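a thread-per-request sketch of what i mean (assuming a list of paths to stat; `parallel_lstat` is a hypothetical helper, nothing from rdiff-backup):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_lstat(paths, workers=4):
    # each thread blocks in its own lstat() syscall, so several
    # metadata requests are outstanding at once -- more work for
    # the disk elevator to reorder, instead of strictly serial i/o.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(os.lstat, paths))
```

on a multi-spindle box the kernel can then satisfy those requests out of order, which is the win over issuing them one at a time.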
mind you, i'm pretty damn happy with performance as is :)
-dean