Dave Taht
2014-09-20 17:55:27 UTC
We'd had a very long thread on cerowrt-devel and in the end Sebastian
(I think) had developed some scripts to exhaustively (it took hours)
derive the right encapsulation frame size on a link. I can't find the
relevant link right now, ccing that list...

>> Hi,
>>
>> I am looking to figure out the most foolproof way to calculate stab
>> overheads for ADSL/VDSL connections.
>>
>> ppp0      Link encap:Point-to-Point Protocol
>>           inet addr:81.149.38.69  P-t-P:81.139.160.1  Mask:255.255.255.255
>>           UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
>>           RX packets:17368223 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:12040295 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:100
>>           RX bytes:17420109286 (16.2 GiB)  TX bytes:3611007028 (3.3 GiB)
>>
>> I am setting a longer txqueuelen as I am not currently using any fair
>> queuing (buffer bloat issues with sfq)

> Whatever txqlen is on ppp, there is likely some other buffer after it
> - the default can hurt with e.g. htb, as if you don't add qdiscs to
> classes it takes (last time I looked) its qlen from that.
>
> Sfq was only ever meant for bulk, so should really be in addition to
> some classification to separate interactive - I don't really get the

Hmm? sfq separates bulk from interactive pretty nicely. It tends to do
bad things to bulk as it doesn't manage queue length.
A little bit of prioritization or deprioritization for some traffic is
helpful, but most traffic is hard to classify.

> bufferbloat bit, you could make the default 128 limit lower if you wanted.

htb + fq_codel, if available, is the right thing here....
http://www.bufferbloat.net/projects/cerowrt/wiki/Wondershaper_Must_Die
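
(A minimal sketch of that combination - ppp0 and the 19500kbit rate are
placeholders, the rate just needs to sit a little below the uplink sync:

# tc qdisc add dev ppp0 root handle 1: htb default 10
# tc class add dev ppp0 parent 1: classid 1:10 htb rate 19500kbit ceil 19500kbit
# tc qdisc add dev ppp0 parent 1:10 handle 110: fq_codel

With fq_codel on the htb leaf, the txqueuelen-derived default queue
mentioned above is out of the picture.)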

>> The connection is a BT Infinity FTTC VDSL connection synced at
>> 80mbit/20mbit. The modem is connected directly to the ethernet port
>> on a server running a slightly tweaked HFSC setup that you folks
>> helped me set up in July - back when I was on ADSL. I am still
>> running pppoe I believe from my server.

> I have had similar since May 2013 and I still haven't got round to reading
> up on everything yet :-)
>
> I have extra geek score for using mini jumbos = running pppoe with mtu
> 1500, which works for me on plusnet. You need a recent pppd for this and
> a nic that works with mtu >= 1508.
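
(A sketch of that setup, assuming a pppd recent enough to negotiate
RFC 4638 and a NIC that accepts the larger frame; eth0 and the peers
file name are placeholders:

# ip link set dev eth0 mtu 1508

and in /etc/ppp/peers/provider:

plugin rp-pppoe.so
eth0
mtu 1500
mru 1500

The extra 8 bytes of Ethernet payload carry the PPPoE and PPP headers,
so the IP layer keeps a full 1500.)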

> As for overheads, initial searching indicated that it's not easy, or
> maybe not even truly possible, the way it was on adsl.

>> The largest ping packet that I can fit out onto the wire is 1464
>>
>> # ping -c 2 -s 1464 -M do google.com
>> PING google.com (31.55.166.216) 1464(1492) bytes of data.
>> 1472 bytes from 31.55.166.216: icmp_seq=1 ttl=58 time=11.7 ms
>> 1472 bytes from 31.55.166.216: icmp_seq=2 ttl=58 time=11.9 ms
>>
>> # ping -c 2 -s 1465 -M do google.com
>> PING google.com (31.55.166.212) 1465(1493) bytes of data.
>> From host81-149-38-69.in-addr.btopenworld.com (81.149.38.69) icmp_seq=1
>> Frag needed and DF set (mtu = 1492)
>> From host81-149-38-69.in-addr.btopenworld.com (81.149.38.69) icmp_seq=1
>> Frag needed and DF set (mtu = 1492)

> You can't work out your overheads like this.
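
(The arithmetic: 1464 bytes of ICMP payload + 8 bytes of ICMP header +
20 bytes of IPv4 header = 1492, which is just the PPP interface MTU.
A -M do probe rediscovers the configured MTU; it says nothing about the
framing overhead below it.)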

> On slow uplink adsl it was possible with ping to infer the fixed part
> but you needed to send loads of pings increasing in size and plot the
> best time for each to make a stepped graph.
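
(A rough sketch of that technique - the target host, payload range and
step size are placeholders, and it assumes the iputils ping whose
summary line starts "rtt min/avg/max/mdev":

for size in $(seq 100 4 700); do
    best=$(ping -c 20 -i 0.2 -s $size 192.0.2.1 | \
        awk -F' = ' '/^rtt/ {split($2, a, "/"); print a[1]}')
    echo "$size $best"
done > rtt-by-size.txt

Plotting rtt-by-size.txt on an ATM link shows the best-case RTT climbing
one step for every 48 bytes of extra payload, and where the steps fall
tells you how much fixed overhead sits in front of your payload.)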

>> Based on this I believe overhead should be set to 28, however with 28
>> set as my overhead and hfsc ls m2 20000kbit ul m2 20000kbit I seem
>> to be losing about 1.5mbit of upload...

> Even if you could do things perfectly I would back off a few kbit just
> to be safe. Timers may be different or there may be OAM/Reporting data
> going up, albeit rarely.

>> http://www.thinkbroadband.com/speedtest/results.html?id=141116089424883990118
>> http://www.thinkbroadband.com/speedtest/results.html?id=141116216621093133034
>>
>> Am I calculating overhead incorrectly?

> VDSL doesn't use ATM; I think the PTM it uses is 64/65 - so don't specify
> atm with stab. Unfortunately stab doesn't do 64/65.
>
> As for the fixed part - I am not sure, but roughly, starting with IP as
> that's what tc sees on ppp (as opposed to ip + 14 on eth):
>
> IP
> +8 for PPPoE
> +14 for ethertype and macs
> +4 because the Openreach modem uses vlan
> +2 CRC ??
> + "a few" 64/65
>
> That's it for fixed - of course 64/65 adds another one for every 64. TBH
> I didn't get the precise detail from the spec and, not having looked
> recently, I can't remember.
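
(Summing those fixed items gives 8 + 14 + 4 + 2 = 28 bytes on top of the
IP packet tc sees, which is where the figure of 28 above comes from. A
sketch of feeding that to stab - device, handle and child qdisc are
placeholders:

# tc qdisc add dev ppp0 root handle 1: stab linklayer ethernet overhead 28 htb default 10

Since stab can't model the PTM byte-per-64 cost, shave the 64/65 factor
off the shaped rate too: 20000kbit x 64/65 is about 19692kbit, so
something like 19500kbit leaves a little margin.)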

> BT SIN 498 does give some of this info and a couple of examples of
> throughput for different frame sizes - but it's rounded to kbit, which
> means I couldn't work out to the byte what the overheads were.
>
> Worse still, VDSL can use link layer retransmits and the SIN says that
> though currently (2013) not enabled, they would be in due course. I have
> no clue how these work.

--
Dave Täht
https://www.bufferbloat.net/projects/make-wifi-fast