Discussion:
Unexpected results using HTB qdisc
Stuart Clouston
2007-11-19 07:26:54 UTC
Permalink
Hi All,

I am using the script below to limit download rates and manage traffic for a certain IP address and testing the results using iperf. The rate that iperf reports is much higher than the rate I have configured for the HTB qdisc. It's probably just some newbie trap that's messing things up but I'm buggered if I can see it.

The following script is run on the server (192.168.10.30): (I have simplified it and removed all of the ceil parameters during my troubleshooting process)

# Remove any existing qdisc
tc qdisc del dev eth0 root handle 1:

# Root queueing discipline
tc qdisc add dev eth0 root handle 1: htb default 10

# Root class
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit burst 1500 ceil 100mbit

# Default class
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30mbit ceil 100mbit burst 1500

# Rate limited classes
tc class add dev eth0 parent 1:1 classid 1:4 htb rate 300kbit
tc class add dev eth0 parent 1:4 classid 1:40 htb rate 50kbit
tc class add dev eth0 parent 1:4 classid 1:41 htb rate 50kbit
tc class add dev eth0 parent 1:4 classid 1:42 htb rate 200kbit

tc qdisc add dev eth0 parent 1:40 handle 40: sfq perturb 10
tc qdisc add dev eth0 parent 1:41 handle 41: sfq perturb 10
tc qdisc add dev eth0 parent 1:42 handle 42: sfq perturb 10

# Filters to direct traffic to the right classes:

U32="tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32"
$U32 match ip dst 192.168.10.85 match ip sport 3389 0xffff flowid 1:42
$U32 match ip dst 192.168.10.85 match ip sport 1352 0xffff flowid 1:41
$U32 match ip dst 192.168.10.85 flowid 1:40



The client (192.168.10.85) then runs iperf to test the results:

iperf -c 192.168.10.30 -p 1352 -P 5 -f k
[SUM] 0.0-11.4 sec 3016 KBytes 2163 Kbits/sec

iperf -c 192.168.10.30 -p 23 -P 5 -f k
[SUM] 0.0-11.4 sec 2856 KBytes 2053 Kbits/sec

iperf -c 192.168.10.30 -p 3389 -P 5 -f k
[SUM] 0.0-10.3 sec 11264 KBytes 8956 Kbits/sec


The traffic is being shaped proportionally as I'd hoped, but each class is well in excess of its configured limit.

I am getting similar results on two separate units:
1: Debian (testing), Kernel v2.6.16.19, iproute2 ss070313
2: Ubuntu (dapper), Kernel v2.6.23.1, iproute2 ss041019

I'd be very grateful for any information that could help me out.
Thanks,
Stu (newbie to HTB)
Derek Sims
2007-11-19 09:42:11 UTC
Permalink
Hi

I have a router with a large number of iptables rules and some extensive
traffic shaping (HTB + RED + ... ) + conntrack.

The router is running CentOS 5 on a P4 Celeron 2.4 with 512MB RAM.


30% soft interrupt cpu utilisation
7000 packets/second on each of eth1 and eth0 (forwarded packets)
20Mbit/second on both eth1 and eth0
e1000 ethernet on both eth0 and eth1 (eth1 running at 100Mbit)

I am trying to optimise the firewall rules and have already managed to
reduce CPU soft-interrupt utilisation by about 40%; however, I need to
get this router to handle a throughput rate of 100Mbit or more.

I have seen hints that using SMP (or multicore) processors will not help
with soft interrupts. My questions are these:

1. What processors should I be looking for in order to achieve the best
routing throughput on a linux router?

2. Is it true that multicore processors will not help much in this
situation?

Best regards,
Derek
Marek Kierdelewicz
2007-11-19 16:40:34 UTC
Permalink
Post by Derek Sims
Hi
Hi
Post by Derek Sims
I have a router with a large number of iptables rules and some
extensive traffic shaping (HTB + RED + ... ) + conntrack.
Performance boost tips:

- Use "set" module instead of sequential iptables rules. It can lower
cpu usage.

- Use hashing filters for shaping if you're using many u32 filters.

- configure conntrack to use bigger hashsize for better performance;
i'm passing following parameter to kernel in grub to achieve this:
ip_conntrack.hashsize=1048575

- configure routecache to use bigger to use more memory for better
performance; i'm passing following parameter to kernel in grub to
achieve this: rhash_entries=2400000
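
As an illustration of the hashing-filter tip, here is a rough sketch in
the style of the LARTC howto's hashing-filter example; eth0, the
256-bucket divisor and the addresses are placeholders, not taken from
your setup:

# Create a u32 hash table with 256 buckets:
tc filter add dev eth0 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256
# Hash on the last byte of the destination address (offset 16 in the IP
# header) and jump into table 2:
tc filter add dev eth0 parent 1:0 prio 5 protocol ip u32 ht 800:: \
    match ip dst 192.168.10.0/24 \
    hashkey mask 0x000000ff at 16 \
    link 2:
# Per-host rules then live in single buckets (0x55 = 85) instead of one
# long sequential filter chain:
tc filter add dev eth0 parent 1:0 prio 5 protocol ip u32 \
    ht 2:55: match ip dst 192.168.10.85 flowid 1:40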
Post by Derek Sims
1. What processors should I be looking for in order to achieve the
best routing throughput on a linux router?
I've had good experiences with P4 (with and without HT), Athlon64, Xeon
[dempsey], Xeon [woodcrest]. The last one is the best choice because of
the large cache and architecture. I think you can use Core 2 Duo too
if you want to save some money.
Post by Derek Sims
2. Is it true that multicore processors will not help much in this
situation?
Not true. In your setup with two nics under the same load you can easily
use two cores. You can assign each nic to a different core by means of
the smp_affinity setting in /proc/irq/... or by using the irqbalance
daemon.
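
A minimal sketch of the smp_affinity approach (the IRQ numbers below are
made up; check /proc/interrupts for the real ones):

# Find the IRQ lines for the NICs:
grep eth /proc/interrupts
# Pin eth0's IRQ (say 16) to CPU0 and eth1's IRQ (say 17) to CPU1;
# the value is a hex bitmask of allowed CPUs:
echo 1 > /proc/irq/16/smp_affinity
echo 2 > /proc/irq/17/smp_affinity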
Post by Derek Sims
Best regards,
Derek
pozdrawiam
Marek Kierdelewicz
KoBa ISP
Derek Sims
2007-11-19 17:55:20 UTC
Permalink
Post by Marek Kierdelewicz
Post by Derek Sims
Hi
Hi
Post by Derek Sims
I have a router with a large number of iptables rules and some
extensive traffic shaping (HTB + RED + ... ) + conntrack.
- Use "set" module instead of sequential iptables rules. It can lower
cpu usage.
Hmm - I don't know what the "set" module is - can you point me to some
documentation please?
Post by Marek Kierdelewicz
- Use hashing filters for shaping if you're using many u32 filters.
Only 3
Post by Marek Kierdelewicz
- configure conntrack to use bigger hashsize for better performance;
ip_conntrack.hashsize=1048575
I have 64k in conntrack_max and hashsize of 16000
Currently running with about 20000 conntrack connections

I will try increasing this
Post by Marek Kierdelewicz
- Configure the route cache to use more memory for better performance;
I'm passing the following parameter to the kernel in grub to achieve
this: rhash_entries=2400000
Post by Derek Sims
1. What processors should I be looking for in order to achieve the
best routing throughput on a linux router?
I've had good experiences with P4 (with and without HT), Athlon64, Xeon
[dempsey], Xeon [woodcrest]. The last one is the best choice because of
the large cache and architecture. I think you can use Core 2 Duo too
if you want to save some money.
Thanks - I will see what I can get
Post by Marek Kierdelewicz
Post by Derek Sims
2. Is it true that multicore processors will not help much in this
situation?
Not true. In your setup with two nics with same load you can easily use
two cores. You can assign each nic to different core by the means of
smp_affinity setting in /proc/irq/... or by using irqbalance daemon.
That is good news :) - however I guess 4 core with dual ethernet would
not help very much!
Post by Marek Kierdelewicz
Post by Derek Sims
Best regards,
Derek
pozdrawiam
Marek Kierdelewicz
KoBa ISP
Best regards,
Derek
Mohan Sundaram
2007-11-20 03:03:11 UTC
Permalink
Post by Derek Sims
Hmm - I don't know what the "set" module is - can you point me to some
documentation please?
Search for ipset extensions for iptables or look up extension projects
in netfilter.org.

ipset gives the facility to create sets of IPs and use the sets in
iptables rules. Makes the rules more orderly, easy to read, easy to
manage and is easier on the CPU.
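
A rough sketch of the idea (set name and addresses are only examples,
and the exact syntax differs between ipset versions):

# Create a hash-based set of IPs and add members to it:
ipset -N badhosts iphash
ipset -A badhosts 10.0.0.1
ipset -A badhosts 10.0.0.2
# One iptables rule then tests the whole set instead of needing one
# rule per address:
iptables -A FORWARD -m set --set badhosts src -j DROP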

Mohan
Mohan Sundaram
2007-11-20 03:18:48 UTC
Permalink
Post by Derek Sims
Post by Marek Kierdelewicz
Post by Derek Sims
1. What processors should I be looking for in order to achieve the
best routing throughput on a linux router?
I've had good experiences with P4 (with and without HT), Athlon64, Xeon
[dempsey], Xeon [woodcrest]. The last one is the best choice because of
the large cache and architecture. I think you can use Core 2 Duo too
if you want to save some money.
Thanks - I will see what I can get
I used an AMD Opteron 2GHz and it blazes. In my packet switching
benchmarks, on-chip cache gave no benefit. On a price-performance basis,
you'd be better off going for greater CPU speed than for cache.

Mohan
Marco C. Coelho
2007-11-20 16:03:47 UTC
Permalink
I like the multicore / multi-CPU AMD Opteron boards from Tyan. With a
dual core (later you can double to 4 cores) dual CPU motherboard, you
can route and shape to your heart's content. To really improve
throughput, you should use a higher end network controller with PCI
Express, with some smarts and multiple ports if you need them.

Hope that helps.
Post by Mohan Sundaram
Post by Derek Sims
Post by Marek Kierdelewicz
Post by Derek Sims
1. What processors should I be looking for in order to achieve the
best routing throughput on a linux router?
I've had good experiences with P4 (with and without HT), Athlon64, Xeon
[dempsey], Xeon [woodcrest]. The last one is the best choice because of
the large cache and architecture. I think you can use Core 2 Duo too
if you want to save some money.
Thanks - I will see what I can get
I used AMD Opteron 2Ghz and it blazes. In my packet switching
benchmarks, onchip cache gave no benefit. On price-performance basis,
you'd be better off going for greater CPU speed than for cache.
Mohan
"C." Bergström
2007-11-20 17:38:21 UTC
Permalink
I'm mostly just a lurker here, but with recent discussion just wanted to
toss this question out to the community/vendors.

My preferences are..

a) low thermals (maybe sbc)
b) routes 100-200Mbps with shaping
c) 1U-4U rack-mountable form factor
d) not sparc, alpha or mips based
e) pci-x /pcie slot so I can put some quad port nic (open to
suggestions)
f) priced between 300-1000 USD
g) serial port preferably with bios level access

Looking at the Vyatta project, which was recommended by a senior person
on this list, they are currently supporting Dell 860s, which fits some
of the requirements, but I'm open to different vendors such as Tyan.
RouterBoard and the Soekris 4801, in my limited testing, couldn't push
enough bandwidth; otherwise they'd be a nice fit. Feel free to email me
off list if you think this is too OT.

Thanks in advance!

Christopher
Mohan Sundaram
2007-11-21 01:17:16 UTC
Permalink
Post by "C." Bergström
I'm mostly just a lurker here, but with recent discussion just wanted to
toss this question out to the community/vendors.
My preferences are..
a) low thermals (maybe sbc)
b) routes 100-200Mbps with shaping
Most low-thermal SBCs come with low-end network controllers like the
100Mbps RTL8139. Getting 100Mbps through most of these is impossible.
Post by "C." Bergström
c) 1U-4U rack mountable factor
I'd recommend a 1U server machine. I used a Sunfire X2100 in March 2006
costing $750 on the net. I added a dual-port PCIe GigE Intel board for
$175 and was home, booting from a USB stick with a memory-based Linux
for routing (persistent storage on the USB stick for config).
Post by "C." Bergström
d) not sparc, alpha or mips based
e) pci-x /pcie slot so I can put some quad port nic (open to
suggestions)
Go Intel
Post by "C." Bergström
f) priced between 300-1000 USD
g) serial port preferably with bios level access
The Sunfire had a management processor independent of the Opteron, which
is a nice thing. I did not try to access it though. It helps if the
system hangs for some reason. You'd need to check whether you can log in
to the management processor through the onboard ethernet so that you can
reboot the system remotely without a KVM/power control.

The Opteron generates a lot of heat. A 1U chassis is well provisioned to
cool it. These machines, being servers, are rated to run non-stop for
long periods. You may want to look at water cooling rigs to be safer.

If you need more than just a router, companies like IEI make OEM network
boxes built for networking, with multiple high-end NICs and switches on
board. That is helpful if you want to run large stateful firewalls or
IDS etc. at the edge. I'd be satisfied with a 1U server for your
requirements though.

Mohan
sawar
2007-11-19 23:08:40 UTC
Permalink
Hi

Is there any how-to which can guide me through all the available tuning
options in the /proc filesystem?

Pozdrawiam
Szymon Turkiewicz
Post by Marek Kierdelewicz
Post by Derek Sims
Hi
Hi
Post by Derek Sims
I have a router with a large number of iptables rules and some
extensive traffic shaping (HTB + RED + ... ) + conntrack.
- Use "set" module instead of sequential iptables rules. It can lower
cpu usage.
- Use hashing filters for shaping if you're using many u32 filters.
- configure conntrack to use bigger hashsize for better performance;
ip_conntrack.hashsize=1048575
- Configure the route cache to use more memory for better performance;
I'm passing the following parameter to the kernel in grub to achieve
this: rhash_entries=2400000
Post by Derek Sims
1. What processors should I be looking for in order to achieve the
best routing throughput on a linux router?
I've had good experiences with P4 (with and without HT), Athlon64, Xeon
[dempsey], Xeon [woodcrest]. The last one is the best choice because of
the large cache and architecture. I think you can use Core 2 Duo too
if you want to save some money.
Post by Derek Sims
2. Is it true that multicore processors will not help much in this
situation?
Not true. In your setup with two nics with same load you can easily use
two cores. You can assign each nic to different core by the means of
smp_affinity setting in /proc/irq/... or by using irqbalance daemon.
Post by Derek Sims
Best regards,
Derek
pozdrawiam
Marek Kierdelewicz
KoBa ISP
Marek Kierdelewicz
2007-11-20 01:03:17 UTC
Permalink
Post by sawar
Hi
Hi
Post by sawar
is there any how-to which can guide me through all available tuning
options in /proc/ filesystem
The proc filesystem is described in Documentation/filesystems/proc.txt
in the Linux kernel sources. There you can find something about
smp_affinity and the Linux network stack parameters (and about many more
things). No info about netfilter-related settings is supplied in that
document.

As for a guide... the LARTC howto: lartc.org/howto/lartc.kernel.html,
and many more are available through a Google search.
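
As a small illustration, most of those knobs can also be inspected and
set with sysctl instead of echoing into /proc directly (the parameter
below is only an example):

# List the network-related tunables (these map to /proc/sys/net/...):
sysctl -a 2>/dev/null | grep ^net.
# Read one value, set it for the running kernel, or write /proc directly:
sysctl net.ipv4.ip_forward
sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/ipv4/ip_forward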

pozdrawiam
Marek Kierdelewicz
KoBa ISP
John Default
2007-11-19 21:49:03 UTC
Permalink
Hi
Post by Stuart Clouston
Hi All,
I am using the script below to limit download rates and manage traffic for a certain IP address and testing the results using iperf. The rate that iperf reports is much higher than the rate I have configured for the HTB qdisc. It's probably just some newbie trap that's messing things up but I'm buggered if I can see it.
The following script is run on the server (192.168.10.30): (I have simplified it and removed all of the ceil parameters during my troubleshooting process)
I think you should not have removed the ceil parameters :)
Post by Stuart Clouston
# Remove any existing qdisc
# Root queueing discipline
tc qdisc add dev eth0 root handle 1: htb default 10
# Root class
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit burst 1500 ceil 100mbit
# Default class
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30mbit ceil 100mbit burst 1500
# Rate limited classes
tc class add dev eth0 parent 1:1 classid 1:4 htb rate 300kbit
tc class add dev eth0 parent 1:4 classid 1:40 htb rate 50kbit
You should specify ceil here right after rate; otherwise the class can
borrow from its parent class, so your overall traffic will be shaped in
the correct proportions but not limited to the absolute bandwidth you
intended. Once you set a ceil value, the class will not get any more
throughput than that, even if the link is free...
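
For example (an untested sketch, just adding ceil back onto your leaf
classes):

# Same leaf classes with an explicit ceil so they cannot exceed their
# own rate even when the parent has spare bandwidth:
tc class add dev eth0 parent 1:4 classid 1:40 htb rate 50kbit ceil 50kbit
tc class add dev eth0 parent 1:4 classid 1:41 htb rate 50kbit ceil 50kbit
tc class add dev eth0 parent 1:4 classid 1:42 htb rate 200kbit ceil 200kbit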
Post by Stuart Clouston
tc class add dev eth0 parent 1:4 classid 1:41 htb rate 50kbit
tc class add dev eth0 parent 1:4 classid 1:42 htb rate 200kbit
tc qdisc add dev eth0 parent 1:40 handle 40: sfq perturb 10
tc qdisc add dev eth0 parent 1:41 handle 41: sfq perturb 10
tc qdisc add dev eth0 parent 1:42 handle 42: sfq perturb 10
U32="tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32"
$U32 match ip dst 192.168.10.85 match ip sport 3389 0xffff flowid 1:42
$U32 match ip dst 192.168.10.85 match ip sport 1352 0xffff flowid 1:41
$U32 match ip dst 192.168.10.85 flowid 1:40
iperf -c 192.168.10.30 -p 1352 -P 5 -f k
[SUM] 0.0-11.4 sec 3016 KBytes 2163 Kbits/sec
iperf -c 192.168.10.30 -p 23 -P 5 -f k
[SUM] 0.0-11.4 sec 2856 KBytes 2053 Kbits/sec
iperf -c 192.168.10.30 -p 3389 -P 5 -f k
[SUM] 0.0-10.3 sec 11264 KBytes 8956 Kbits/sec
The traffic is being shaped proportionally as I'd hoped, but each class is well in excess of its configured limit.
1: Debian (testing), Kernel v2.6.16.19, iproute2 ss070313
2: Ubuntu (dapper), Kernel v2.6.23.1, iproute2 ss041019
I'd be very grateful for any information that could help me out.
Thanks,
Stu (newbie to HTB)
I am a newbie too, so if I am wrong please someone correct me.
--
___________________________________
S pozdravom / Best regards

John Default
Stuart Clouston
2007-11-19 23:31:52 UTC
Permalink
Hi John,

Thanks for the reply. I removed the ceil parameters as a troubleshooting process to ensure that they weren't what was causing the excess of the configured rate. From what I can see if the ceil parameter is not specified it defaults to the same figure as the rate parameter. I have verified this by running "tc -s -d class list dev eth0". The output from this command also shows that the rate limited classes have not borrowed at all (see below). I have tried what you suggested anyway and it is still exceeding the configured rate. The output below was generated on the server immediately after the completion of the iperf tests. Another thing that doesn't make sense to me is that all but one of the classes are reported to have been lending but which class are they lending to? None of the classes have been recorded as borrowing.
# tc -s -d class list dev eth0
class htb 1:10 parent 1:1 prio 0 quantum 200000 rate 30000Kbit ceil 100000Kbit burst 39093b/8 mpu 0b overhead 0b cburst 126587b/8 mpu 0b overhead 0b level 0 Sent 574506 bytes 1223 pkts (dropped 0, overlimits 0) rate 63888bit 18pps lended: 1223 borrowed: 0 giants: 0 tokens: 10155 ctokens: 9883
class htb 1:1 root rate 100000Kbit ceil 100000Kbit burst 1487b/8 mpu 0b overhead 0b cburst 126587b/8 mpu 0b overhead 0b level 7 Sent 1006166 bytes 7723 pkts (dropped 0, overlimits 0) rate 181840bit 240pps lended: 0 borrowed: 0 giants: 0 tokens: 110 ctokens: 9883
class htb 1:40 parent 1:4 leaf 40: prio 0 quantum 1000 rate 50000bit ceil 50000bit burst 1661b/8 mpu 0b overhead 0b cburst 1661b/8 mpu 0b overhead 0b level 0 Sent 81010 bytes 1225 pkts (dropped 341, overlimits 0) rate 21272bit 40pps lended: 1225 borrowed: 0 giants: 0 tokens: -239487 ctokens: -239487
class htb 1:4 parent 1:1 rate 300000bit ceil 300000bit burst 1974b/8 mpu 0b overhead 0b cburst 1974b/8 mpu 0b overhead 0b level 6 Sent 431660 bytes 6500 pkts (dropped 0, overlimits 0) rate 117952bit 222pps lended: 0 borrowed: 0 giants: 0 tokens: 39055 ctokens: 39055
class htb 1:41 parent 1:4 leaf 41: prio 0 quantum 1000 rate 50000bit ceil 50000bit burst 1661b/8 mpu 0b overhead 0b cburst 1661b/8 mpu 0b overhead 0b level 0 Sent 78502 bytes 1189 pkts (dropped 294, overlimits 0) rate 20376bit 39pps lended: 1189 borrowed: 0 giants: 0 tokens: -176795 ctokens: -176795
class htb 1:42 parent 1:4 leaf 42: prio 0 quantum 2500 rate 200000bit ceil 200000bit burst 1849b/8 mpu 0b overhead 0b cburst 1849b/8 mpu 0b overhead 0b level 0 Sent 272120 bytes 4086 pkts (dropped 809, overlimits 0) rate 71768bit 135pps lended: 4086 borrowed: 0 giants: 0 tokens: 4616 ctokens: 4616
Stuart Clouston
2007-11-23 03:01:42 UTC
Permalink
Dear All,

I have compiled the latest iproute2 (ss071016) with the latest kernel (2.6.23.8) and my test client is still getting rates of approximately 40 times what I have configured. I observed from the output of the "tc -s -d class list dev eth0" command that the bit rate reported appears to be correct. This, in conjunction with other documentation I have read, leads me to think it may be the timer setting in the kernel (I tried both 100Hz and 250Hz). Just for fun I tried a simple tbf qdisc rate limited to 50mbit on the server; the client achieved almost 90mbit.

Also, since compiling the new iproute2, if I type "tc qdisc show" with only the default pfifo-fast qdisc enabled, linux responds with "Segmentation fault". I'm not too worried about this because I would prefer to get htb working as opposed to using pfifo-fast.

Has anyone out there actually got this working properly on a Debian or Ubuntu distro? If so, can you let me know what versions of iproute, kernel, etc you used?

Thanks,
Stuart Clouston



Mario Antonio Garcia
2007-11-23 13:50:42 UTC
Permalink
Stuart,

I am using Debian Etch with:

Customized Kernel: Linux Deb-Bridge 2.6.22.6-qos2 #1 SMP
I configured the Kernel with
CONFIG_HZ_1000=y
CONFIG_HZ=1000
But I think (perhaps I am wrong) these configs no longer apply to htb since:
http://lists.openwall.net/netdev/2007/03/16/22
"These patches convert the packet schedulers to use ktime as only clock source and kill off the manual clock source selection. Additionally all packet schedulers are converted to use hrtimer-based watchdogs, greatly increasing scheduling precision."

Package from Stable branch (Etch) iptables v1.3.6
Package from Stable branch (Etch) ip utility, iproute2-ss060323

I am in the testing phase (not in production yet), shaping just a Class C subnet.

So far it has been working fine (I am just playing a bit with it).

Regards,

Mario Antonio


----- Original Message -----
From: "Stuart Clouston" <***@hotmail.com>
To: ***@mailman.ds9a.nl
Sent: Thursday, November 22, 2007 10:01:42 PM (GMT-0500) America/New_York
Subject: RE: [LARTC] Unexpected results using HTB qdisc
Mario Antonio Garcia
2007-11-28 13:40:27 UTC
Permalink
Stuart,

FTP transfer (which includes application overhead) has been my tool to test bandwidth shaping.

I also used a bit of: btest (a bandwidth utility from MikroTik), netio, and iperf.

Regards,

Mario Antonio

----- Original Message -----
From: "Stuart Clouston" <***@hotmail.com>
To: "Mario Antonio Garcia" <***@webjogger.net>
Sent: Wednesday, November 28, 2007 3:20:09 AM (GMT-0500) America/New_York
Subject: RE: [LARTC] Unexpected results using HTB qdisc


Hi Mario,

Thanks for your reply. What utility have you used to test your deployment?