--bwlimit how to

trevor@tecnopolis.ca
Sat, 7 Sep 2002 14:12:48 -0500 (CDT)


> Sorry, I just realized this is totally wrong.  With compression
> enabled, it looks like there is no way (using cstream) to correctly
> bandwidth limit, since the actual compressed traffic only occurs
> between the ssh client and server, and not over any pipe (duh).

I believe you are correct.  It can get confusing to figure out how
cstream relates to ssh in all these examples, but fundamentally your
conjecture about ssh compression is sound.

So I guess the two different methods I provided for each of the two
cases are in fact going to behave identically.

Since (I believe) ssh compression operates on small blocks of the data,
the compression you'll see on typical data files will be at most about
50%.  So if you double your cstream -t amount, you can probably
compensate.
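
For example (host and path are made up, and the 2x factor is just the
guess above), using tar as a stand-in for whatever you're piping
through ssh:

# The pipe cstream throttles carries plaintext; ssh -C compresses
# only on the wire between the ssh client and sshd.  To target
# ~10000 B/s on the wire, allow ~2x that on the pipe.
tar cf - /some/dir | cstream -t 20000 | ssh -C backuphost 'cat > backup.tar'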

It's not perfect, but then again, I think most people, like me, are
mainly interested in making sure their internet pipe does not get
saturated with backups.  We wouldn't want to screw up the UT game we're
playing while backups are running! :-)

As a follow-up to the questions about cstream:

I came up with an easy way to test cstream, and the results show that
my suspicions were half correct:

test 1
perl -e '$|=1; open(F,">o"); print F time."\n"; sleep 10; print F time."\n"; print "z\n"x50; print F time."\n";' | cstream -v1 -t 10
z
z
[...]
z
z
100 B 100.0 B 10.00 s 10 B/s 10.00 B/s

test 2
perl -e '$|=1; open(F,">o"); print F time."\n"; sleep 10; print F time."\n"; print "z"x50; print F time."\n";' | cstream -v1 -t 10
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz50 B 50.0 B 10.00 s 5 B/s 5.00 B/s


When I ran test 1, it waited 10 seconds (the perl sleep), then dumped
all 50 z's at once and ended.  So with test 1 you can see cstream is
bursty: after 10 idle seconds it had banked a 100-byte budget at
10 B/s, so it could release everything immediately while still
maintaining the ideal bandwidth usage over the life of the process.

Then I tried test 2, the only difference being that I don't print
newlines with the z's, and cstream reports 5 B/s.  At first that looks
weird, but it's just arithmetic: without the newlines the script emits
50 bytes instead of 100, and 50 bytes over the same 10-second run
averages 5 B/s.  The $|=1 in both scripts turns off stdout buffering,
and each script flushes its z's in a single print either way, so
buffering isn't a factor; in neither test did cstream actually have to
throttle, since the data arrived within its accumulated budget.
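
To see cstream actually throttle (a sketch along the same lines; I'd
expect, but haven't timed, these exact numbers), feed it more bytes
than its elapsed-time budget:

perl -e '$|=1; print "z" x 200;' | cstream -v1 -t 10
# 200 bytes at a 10 B/s cap should take roughly 20 seconds to drain,
# instead of being dumped all at once like the 50- and 100-byte tests.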

So this shouldn't affect rdiff-backup at all: cstream doesn't care
whether the data contains newlines, only how many bytes arrive and
when.

Bottom line: the ssh compression issue means the cstream limiter is
only approximate, and the bursty nature of cstream will still allow
big saturating bursts after rdiff-backup has been thinking.  You
should add these caveats to the FAQ.

>     Perhaps some kind of proxy/port redirector could do it though.

I think one of your original suggestions was right: this bandwidth
limiting should be implemented in ssh itself.  Since only ssh sees the
compressed byte stream, there really is no other way to ensure
perfection at the application level.

A friend suggested that perhaps one could use Linux's network-layer
rate limiting / quality of service capabilities instead (a rough
sketch of what that might look like is below).  I've never played with
those facilities, and since cstream (for now) fits the bill
adequately, I don't think I'll investigate further at this time.
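
For anyone who does want to try that route, here's an untested sketch
using the Linux tc tool; the device name, rates, and the choice to
match ssh's port 22 are all assumptions, and the Linux Advanced
Routing & Traffic Control HOWTO covers the real details:

# Untested sketch: shape outbound ssh (TCP port 22) traffic on eth0 to
# ~80 kbit/s by steering it into a token bucket filter on band 3 of a
# prio qdisc.
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: tbf rate 80kbit burst 10kb latency 50ms
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
   match ip dport 22 0xffff flowid 1:3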

Thanks for your help!