Stuart Clouston
2007-11-19 07:26:54 UTC
Hi All,
I am using the script below to limit download rates and manage traffic for a particular IP address, and I'm testing the results with iperf. The rates iperf reports are much higher than the rates I have configured on the HTB classes. It's probably just some newbie trap that's tripping me up, but I'm buggered if I can see it.
The following script is run on the server (192.168.10.30). I have simplified it and removed all of the ceil parameters while troubleshooting:
# Remove any existing qdisc
tc qdisc del dev eth0 root handle 1:
# Root queueing discipline
tc qdisc add dev eth0 root handle 1: htb default 10
# Root class
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit burst 1500 ceil 100mbit
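# (burst is given in bytes, so 1500 here is a single full-size Ethernet frame)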
# Default class
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30mbit ceil 100mbit burst 1500
# Rate limited classes
tc class add dev eth0 parent 1:1 classid 1:4 htb rate 300kbit
tc class add dev eth0 parent 1:4 classid 1:40 htb rate 50kbit
tc class add dev eth0 parent 1:4 classid 1:41 htb rate 50kbit
tc class add dev eth0 parent 1:4 classid 1:42 htb rate 200kbit
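# (with no ceil given, HTB should default each class's ceil to its rate,
#  so these classes are not expected to borrow beyond 50/50/200kbit)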
tc qdisc add dev eth0 parent 1:40 handle 40: sfq perturb 10
tc qdisc add dev eth0 parent 1:41 handle 41: sfq perturb 10
tc qdisc add dev eth0 parent 1:42 handle 42: sfq perturb 10
# Filters to direct traffic to the right classes:
U32="tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32"
$U32 match ip dst 192.168.10.85 match ip sport 3389 0xffff flowid 1:42
$U32 match ip dst 192.168.10.85 match ip sport 1352 0xffff flowid 1:41
$U32 match ip dst 192.168.10.85 flowid 1:40
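For reference, the per-class counters and filter hits can be inspected with the usual tc statistics commands (output omitted here):
# bytes/packets sent and observed rate per class
tc -s class show dev eth0
# installed u32 filters and their match rules
tc -s filter show dev eth0
# qdisc-level counters, including the default HTB class
tc -s qdisc show dev eth0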
The client (192.168.10.85) then runs iperf to test the results:
iperf -c 192.168.10.30 -p 1352 -P 5 -f k
[SUM] 0.0-11.4 sec 3016 KBytes 2163 Kbits/sec
iperf -c 192.168.10.30 -p 23 -P 5 -f k
[SUM] 0.0-11.4 sec 2856 KBytes 2053 Kbits/sec
iperf -c 192.168.10.30 -p 3389 -P 5 -f k
[SUM] 0.0-10.3 sec 11264 KBytes 8956 Kbits/sec
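For completeness, the server side is nothing special: just iperf listening on each of the ports above, something like the following (assuming plain TCP mode, each listener run in its own shell):
iperf -s -p 1352
iperf -s -p 23
iperf -s -p 3389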
The traffic is being shaped proportionally, as I'd hoped, but each class is running well in excess of its configured limit.
I am getting similar results on two separate machines:
1: Debian (testing), Kernel v2.6.16.19, iproute2 ss070313
2: Ubuntu (dapper), Kernel v2.6.23.1, iproute2 ss041019
I'd be very grateful for any information that could help me out.
Thanks,
Stu (newbie to HTB)