Another idea
Tobias Polzin
polzin_spamprotect_@gmx.de
Thu, 29 Aug 2002 09:40:06 +0200
> TP> Using the idea from www.mikerubel.org I could use hard-links:
>
> TP> 1) BACKUPSERVER: cp -al backup/ mirror
> TP> rm -rf mirror/rdiff-backup-data
>
> TP> I tried it a little and it worked (even with copying
> TP> hard-links).
>
> TP> Do you see any problems with this?
>
> Seems like it should work and is a good idea - I hadn't considered
> combining the two approaches in that way. Of course, you still end up
> creating either an extra directory or an extra hard link for every
> file being backed up, but not much extra space should be taken up.
>
> You may be able to speed your scheme up, maybe by a factor of
> two or so, by not copying the backup/rdiff-backup-data directory to
> begin with (instead of copying it and then deleting it). Not sure
> what the easiest way of doing this would be (maybe a few lines of
> shell code).
Perhaps:

    mv backup/rdiff-backup-data tmp/
    cp -al backup/ mirror/
    mv tmp/rdiff-backup-data backup/
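
In Python, the "cp -al" step amounts to something like this (just a
sketch of the idea, ignoring symlinks and directory permissions):

    import os

    def link_tree(src, dst):
        # Recreate src's directory tree under dst, hard-linking every
        # regular file instead of copying its data ("cp -al" semantics).
        for root, dirs, files in os.walk(src):
            target = os.path.join(dst, os.path.relpath(root, src))
            if not os.path.isdir(target):
                os.makedirs(target)
            for name in files:
                os.link(os.path.join(root, name),
                        os.path.join(target, name))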
The best thing would be if rdiff-backup itself supported the "cp -l"
way of copying: creating hard links instead of copying file data.
Do you think this would be possible to implement? I assume I would
just have to add another switch and update Robust.copy and
Robust.copy_with_attributes. What do you think?
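
I was thinking of something along these lines (only a sketch; the real
Robust.copy interface surely looks different, and the hardlink switch
is invented here):

    import os, shutil

    def copy_with_attributes(src, dst, hardlink=False):
        # With hardlink=True, link regular files instead of copying
        # their data; anything else falls back to a normal copy.
        if hardlink and os.path.isfile(src) and not os.path.islink(src):
            try:
                os.link(src, dst)   # same inode, no extra data blocks
                return
            except OSError:
                pass                # e.g. cross-device: fall back
        shutil.copy2(src, dst)      # copies data, permissions, mtimes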
> so now you have a very scientific sample size of 2...
> I just accumulate them until I run out of disk space and then I use
> --remove-older-than. Currently I have about 45 days worth (at one
> session per day). I used to have 100+; time to get a bigger hard
> drive I suppose.
In both cases the increment files were only a few percent of the
mirror size, so I do not quite understand why going from 45 to 100+
days causes such a problem... Hm, I will find out for myself...
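(Back-of-the-envelope, to check my intuition: if a day's increments
average, say, 2% of the mirror, then 45 days of increments cost about
0.9 mirror-sizes of disk and 100+ days more than 2.)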
Tobias