* Problem to priorize SSH traffic
@ 2016-12-16 16:50 Ludovic Leroy
  2016-12-16 20:34 ` Alan Goodman
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Ludovic Leroy @ 2016-12-16 16:50 UTC (permalink / raw)
  To: lartc

[-- Attachment #1: Type: text/plain, Size: 6684 bytes --]

Hello LARTC community,

I am building a TC policy at home to suit my needs on a small 800kbit DSL uplink:
* High UDP responsiveness for DNS queries and ping (Leaf 1:10 prio 1)
* SSH traffic gets higher priority. I view my camera remotely via ssh tunnel (Leaf 1:20 prio 2)
* Guaranteed http(s)/IMAP (Leaf 1:30 prio 3)
* Torrent seeding (Leaf 1:40 prio 4)
* Default (Leaf 1:99 prio 5)
* Gigabit local network (Leaf 1:1000 prio 1000)

The problem is that torrent traffic consumes all the bandwidth, leaving little room for SSH traffic (<100kbit). See attached picture.
The SSH class has a higher priority than the torrent class, so it should be offered excess bandwidth first, but that is not the case.
The only solution I found is to reduce the torrent ceil value.
Could you help me?
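For reference, the relevant leaves correspond roughly to commands like these (a sketch reconstructed from the tc output below, not the actual script; only the SSH and torrent leaves are shown):

```shell
# Reconstruction (sketch) of the relevant part of the setup on eth1:
tc qdisc add dev eth1 root handle 1: htb default 99 r2q 5
tc class add dev eth1 parent 1: classid 1:1 htb rate 800kbit ceil 800kbit
tc class add dev eth1 parent 1:1 classid 1:20 htb rate 396kbit ceil 800kbit prio 2  # ssh
tc class add dev eth1 parent 1:1 classid 1:40 htb rate 66kbit ceil 800kbit prio 4   # torrent
```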

Regards,
  Ludovic L.

# tc -d class show dev eth1
class htb 1:99 parent 1:1 leaf 199: prio 5 quantum 1650 rate 66Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0 
class htb 1:10 parent 1:1 leaf 110: prio 1 quantum 1650 rate 66Kbit ceil 200Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0 
class htb 1:1000 root prio 0 quantum 200000 rate 100Mbit ceil 100Mbit linklayer ethernet burst 1600b/1 mpu 0b overhead 0b cburst 1600b/1 mpu 0b overhead 0b level 0 
class htb 1:1 root rate 800Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 7 
class htb 1:20 parent 1:1 leaf 120: prio 2 quantum 9900 rate 396Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0 
class htb 1:30 parent 1:1 leaf 130: prio 3 quantum 4950 rate 198Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0 
class htb 1:40 parent 1:1 leaf 140: prio 4 quantum 1650 rate 66Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0 
class sfq 140:22 parent 140: 
class sfq 140:34 parent 140: 
class sfq 140:3b parent 140: 
class sfq 140:6c parent 140: 
class sfq 140:a9 parent 140: 
class sfq 140:149 parent 140: 
class sfq 140:287 parent 140: 
class sfq 140:2fd parent 140: 
class sfq 140:318 parent 140: 
class sfq 140:376 parent 140: 
class sfq 140:3d6 parent 140: 
class sfq 140:3e3 parent 140:

# tc -d qdisc show dev eth1
qdisc htb 1: root refcnt 2 r2q 5 default 99 direct_packets_stat 2 ver 3.17 direct_qlen 1000
qdisc pfifo 110: parent 1:10 limit 1000p
qdisc pfifo 120: parent 1:20 limit 1000p
qdisc pfifo 130: parent 1:30 limit 1000p
qdisc sfq 140: parent 1:40 limit 127p quantum 1514b depth 127 flows 128/1024 divisor 1024 perturb 10sec 
qdisc sfq 199: parent 1:99 limit 127p quantum 1514b depth 127 flows 128/1024 divisor 1024 perturb 10sec
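The quantum values in the class dump above follow from each class's rate and the root qdisc's r2q. A small sketch of the derivation (assuming HTB's usual formula, quantum = rate in bytes per second divided by r2q):

```shell
# Sketch: reproduce the quantum values shown in the class dump,
# assuming quantum = (rate_in_bits / 8) / r2q, with r2q = 5 here.
htb_quantum() { echo $(( $1 / 8 / $2 )); }

htb_quantum 66000 5    # 1650  (classes 1:10, 1:40, 1:99)
htb_quantum 396000 5   # 9900  (class 1:20)
htb_quantum 198000 5   # 4950  (class 1:30)
```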

# tc -d filter show dev eth1
filter parent 1: protocol all pref 1 fw 
filter parent 1: protocol all pref 1 fw handle 0xa classid 1:10 
filter parent 1: protocol all pref 2 fw 
filter parent 1: protocol all pref 2 fw handle 0x14 classid 1:20 
filter parent 1: protocol all pref 3 fw 
filter parent 1: protocol all pref 3 fw handle 0x1e classid 1:30 
filter parent 1: protocol all pref 4 fw 
filter parent 1: protocol all pref 4 fw handle 0x28 classid 1:40 
filter parent 1: protocol all pref 99 fw 
filter parent 1: protocol all pref 99 fw handle 0x63 classid 1:99 
filter parent 1: protocol all pref 1000 fw 
filter parent 1: protocol all pref 1000 fw handle 0x3e8 classid 1:1000
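The fw filters above match firewall marks, so somewhere a netfilter rule must set them. A sketch of what such rules might look like (the ports and the chain are my assumptions; the actual marking script is not shown here):

```shell
# Sketch: set the marks the fw filters match (0xa -> 1:10, 0x14 -> 1:20).
# Marks set in mangle/POSTROUTING are visible to the egress qdisc.
iptables -t mangle -A POSTROUTING -o eth1 -p udp --dport 53 -j MARK --set-mark 0xa
iptables -t mangle -A POSTROUTING -o eth1 -p tcp --dport 22 -j MARK --set-mark 0x14
```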

# tc -s class show dev eth1
class htb 1:99 parent 1:1 leaf 199: prio 5 rate 66Kbit ceil 800Kbit burst 16Kb cburst 1599b 
 Sent 1705141 bytes 10742 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 8048bit 6pps backlog 0b 0p requeues 0 
 lended: 10742 borrowed: 0 giants: 0
 tokens: 29290142 ctokens: 198864

class htb 1:10 parent 1:1 leaf 110: prio 1 rate 66Kbit ceil 200Kbit burst 16Kb cburst 1599b 
 Sent 20229 bytes 229 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 48bit 0pps backlog 0b 0p requeues 0 
 lended: 229 borrowed: 0 giants: 0
 tokens: 30859841 ctokens: 943734

class htb 1:1000 root prio 0 rate 100Mbit ceil 100Mbit burst 1600b cburst 1600b 
 Sent 79426 bytes 563 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 784bit 1pps backlog 0b 0p requeues 0 
 lended: 563 borrowed: 0 giants: 0
 tokens: 1917 ctokens: 1917

class htb 1:1 root rate 800Kbit ceil 800Kbit burst 16Kb cburst 1599b 
 Sent 164307843 bytes 134601 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 796440bit 78pps backlog 0b 0p requeues 0 
 lended: 108779 borrowed: 0 giants: 0
 tokens: 2192729 ctokens: -117287

class htb 1:20 parent 1:1 leaf 120: prio 2 rate 396Kbit ceil 800Kbit burst 16Kb cburst 1599b 
 Sent 5042698 bytes 4448 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 64032bit 6pps backlog 0b 0p requeues 0 
 lended: 4448 borrowed: 0 giants: 0
 tokens: 5142031 ctokens: 235296

class htb 1:30 parent 1:1 leaf 130: prio 3 rate 198Kbit ceil 800Kbit burst 16Kb cburst 1599b 
 Sent 32111 bytes 216 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
 lended: 216 borrowed: 0 giants: 0
 tokens: 10309330 ctokens: 241546

class htb 1:40 parent 1:1 leaf 140: prio 4 rate 66Kbit ceil 800Kbit burst 16Kb cburst 1599b 
 Sent 157507664 bytes 118966 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 724312bit 65pps backlog 0b 27p requeues 0 
 lended: 10187 borrowed: 108779 giants: 0
 tokens: -2031932 ctokens: -222767

class sfq 140:56 parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 2814b 2p requeues 0 
 allot 1520 

class sfq 140:63 parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 7570b 5p requeues 0 
 allot 1520 

class sfq 140:9a parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 6056b 4p requeues 0 
 allot 1448 

class sfq 140:f8 parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 1310b 2p requeues 0 
 allot 528 

class sfq 140:1c7 parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 3028b 2p requeues 0 
 allot 1520 

class sfq 140:269 parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 4542b 3p requeues 0 
 allot 1304 

class sfq 140:2ff parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 6056b 4p requeues 0 
 allot -72 

class sfq 140:30d parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 1514b 1p requeues 0 
 allot 1520 

class sfq 140:326 parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 1502b 1p requeues 0 
 allot 1520 

class sfq 140:3ad parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 1514b 1p requeues 0 
 allot 1520 

class sfq 140:3c5 parent 140: 
 (dropped 0, overlimits 0 requeues 0) 
 backlog 1560b 2p requeues 0 
 allot 1520 

[-- Attachment #2: tcgraph.png --]
[-- Type: image/png, Size: 50235 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
@ 2016-12-16 20:34 ` Alan Goodman
  2016-12-16 21:16 ` Ludovic Leroy
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Alan Goodman @ 2016-12-16 20:34 UTC (permalink / raw)
  To: lartc

It might help if you provide the script you are using to build your tc 
queues...  I find this more readable than the output from the tc stats.

Alan




* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
  2016-12-16 20:34 ` Alan Goodman
@ 2016-12-16 21:16 ` Ludovic Leroy
  2016-12-17 10:23 ` Alan Goodman
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ludovic Leroy @ 2016-12-16 21:16 UTC (permalink / raw)
  To: lartc

[-- Attachment #1: Type: text/plain, Size: 7528 bytes --]

Thanks for replying.
The script is attached. Hope this helps.

Ludovic

On 16/12/2016 21:34, Alan Goodman wrote:
> It might help if you provide the script you are using to build your tc 
> queues...  I find this more readable than the output from the tc stats.
>
> Alan


[-- Attachment #2: qos.sh --]
[-- Type: application/x-shellscript, Size: 5112 bytes --]


* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
  2016-12-16 20:34 ` Alan Goodman
  2016-12-16 21:16 ` Ludovic Leroy
@ 2016-12-17 10:23 ` Alan Goodman
  2016-12-17 10:56 ` Andy Furniss
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Alan Goodman @ 2016-12-17 10:23 UTC (permalink / raw)
  To: lartc

Hi,

The obvious issue at present is that your upload rate isn't hitting the 
ceiling rate you have specified, or at least the rate estimator doesn't 
think it is.  This could be because you've set your upload rate 
incorrectly (what is your sync speed?), or because you are not 
accounting for the overhead and ATM framing characteristics of your 
connection (which makes the rate estimator inaccurate).

You should try adding stab overhead 40 linklayer atm to your root qdisc.
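A sketch of what that looks like on the setup from this thread (assuming the root qdisc is rebuilt on eth1; the overhead value is a placeholder until measured):

```shell
# Sketch: recreate the root HTB with a stab size table so HTB's rate
# accounting models ATM cell framing. "overhead 40" is a placeholder.
tc qdisc replace dev eth1 root handle 1: stab linklayer atm overhead 40 \
    htb default 99 r2q 5
```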

The overhead 40 part should be whatever number 
https://github.com/moeller0/ATM_overhead_detector figures out for you.  
If you don't have MATLAB, you can send me your ping collection result 
(bzip it first and send it off-list, or upload it somewhere) and 
I will process it for you.
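To illustrate why unaccounted ATM framing throws the estimator off: every packet is carried in 53-byte ATM cells holding 48 payload bytes each, so the on-wire cost is noticeably higher than the IP length. A sketch (the 40-byte per-packet overhead is an assumed value, exactly what the detector is meant to measure):

```shell
# Sketch: on-wire bytes for a packet on an ATM link. Cells are 53 bytes
# on the wire but carry 48 bytes of payload; per-packet overhead is
# added before rounding up to whole cells.
atm_wire_bytes() {  # $1 = IP packet length, $2 = per-packet overhead
  echo $(( ( ($1 + $2 + 47) / 48 ) * 53 ))
}

atm_wire_bytes 1500 40   # 1749 -> ~17% more than the raw IP length
atm_wire_bytes 64 40     # 159  -> small packets cost even more per byte
```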

Alan


On 16/12/16 21:16, Ludovic Leroy wrote:
> Thanks for replying.
> The script is attached. Hope this helps.
>
> Ludovic



* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (2 preceding siblings ...)
  2016-12-17 10:23 ` Alan Goodman
@ 2016-12-17 10:56 ` Andy Furniss
  2016-12-17 11:19 ` Andy Furniss
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Andy Furniss @ 2016-12-17 10:56 UTC (permalink / raw)
  To: lartc

Alan Goodman wrote:
> Hi,
>
> The obvious issue at present is that your upload rate isnt hitting
> the ceiling rate you have specified, or at least the rate estimator
> doesnt think it is.  This could be because you've set your upload
> incorrectly (what is your sync speed) or could be because you are not
> accounting for the overheads and ATM characteristics in your
> connection (makes the rate estimator inaccurate).
>
> You should try adding stab overhead 40 linklayer atm to your root
> qdisc.
>
> The overhead 40 bit will be whatever number
> https://github.com/moeller0/ATM_overhead_detector figures out for
> you.

Just in case: that only reveals the fixed overhead on top of IP. On eth, tc
already sees IP + 14, so you can subtract 14, which for ipoa or pppoa vcmux
means you should use a negative overhead.
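
For reference, the change being suggested would look something like this
against the poster's root qdisc line (the value 40 is only a placeholder
for whatever the detector reports, adjusted as above):

```shell
# Not a drop-in fix: eth1, 'default 99' and 'r2q 5' come from the
# poster's script; replace 'overhead 40' with your measured value.
tc qdisc add dev eth1 stab overhead 40 linklayer atm root handle 1: htb default 99 r2q 5
```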

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (3 preceding siblings ...)
  2016-12-17 10:56 ` Andy Furniss
@ 2016-12-17 11:19 ` Andy Furniss
  2016-12-17 12:27 ` Ludovic Leroy
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Andy Furniss @ 2016-12-17 11:19 UTC (permalink / raw)
  To: lartc

Ludovic Leroy wrote:
> Thanks for replying. The script is attached. Hope this helps.

$TC qdisc add dev $NETCARD root handle 1: htb default 99 r2q 5

SFQ saves the day, but you should be aware that using default on eth in
this case catches arp, and sending arp to a low-bandwidth class is not a
good thing to do.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (4 preceding siblings ...)
  2016-12-17 11:19 ` Andy Furniss
@ 2016-12-17 12:27 ` Ludovic Leroy
  2016-12-17 14:45 ` Ludovic Leroy
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ludovic Leroy @ 2016-12-17 12:27 UTC (permalink / raw)
  To: lartc

[-- Attachment #1: Type: text/plain, Size: 9175 bytes --]

Alan,

You are right. This change gives immediate relief.
I cannot believe that tuning the overheads has such an impact on upload
rates, but that is the way it is...
However, you can see in the attached picture that the upload bandwidth
drops from 800kbit to 700kbit.
Do you have an idea how to get back to the sync speed?
Thank you once again.

Ludovic

On 17/12/2016 11:23, Alan Goodman wrote:
> Hi,
>
> The obvious issue at present is that your upload rate isnt hitting the 
> ceiling rate you have specified, or at least the rate estimator doesnt 
> think it is.  This could be because you've set your upload incorrectly 
> (what is your sync speed) or could be because you are not accounting 
> for the overheads and ATM characteristics in your connection (makes 
> the rate estimator inaccurate).
>
> You should try adding stab overhead 40 linklayer atm to your root qdisc.
>
> The overhead 40 bit will be whatever number 
> https://github.com/moeller0/ATM_overhead_detector figures out for 
> you.  If you dont have matlab you can send me your ping collection 
> result (bzip it first and send it off list or upload it some place 
> please) and I will process it for you.
>
> Alan
>
>
> On 16/12/16 21:16, Ludovic Leroy wrote:
>> Thanks for replying.
>> The script is attached. Hope this helps.
>>
>> Ludovic
>>
>> On 16/12/2016 21:34, Alan Goodman wrote:
>>> It might help if you provide the script you are using to build your 
>>> tc queues...  I find this more readable than the output from the tc 
>>> stats.
>>>
>>> Alan
>>>
>>>
>>> On 16/12/16 16:50, Ludovic Leroy wrote:
>>>> Hello LARTC community,
>>>>
>>>> I am building a TC policy at home that answers my needs for a small 
>>>> DSL uplink 800kbit:
>>>> * High UDP responsiveness for DNS queries and ping (Leaf 1:10 prio 1)
>>>> * SSH traffic gets higher priority. I view my camera remotely via 
>>>> ssh tunnel (Leaf 1:20 prio 2)
>>>> * Guarantied http(s)/IMAP (Leaf 1:30 prio 3)
>>>> * Torrent seeding (Leaf 1:40 prio 4)
>>>> * Default (Leaf 1:99 prio 5)
>>>> * Gigabit local network (Leaf 1:1000 prio 1000)
>>>>
>>>> The problem is torrent traffic consumes all the bandwidth leaving 
>>>> little room for SSH traffic (<100kbit). See attached picture.
>>>> SSH traffic class with higher priority than torrent class should be 
>>>> offered excess bandwidth first, but that is not the case.
>>>> The only solution I found is to reduce the torrent ceil value.
>>>> Could you help me?
>>>>
>>>> Regards,
>>>>    Ludovic L.
>>>>
>>>> # tc -d class show dev eth1
>>>> class htb 1:99 parent 1:1 leaf 199: prio 5 quantum 1650 rate 66Kbit 
>>>> ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b 
>>>> cburst 1599b/1 mpu 0b overhead 0b level 0
>>>> class htb 1:10 parent 1:1 leaf 110: prio 1 quantum 1650 rate 66Kbit 
>>>> ceil 200Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b 
>>>> cburst 1599b/1 mpu 0b overhead 0b level 0
>>>> class htb 1:1000 root prio 0 quantum 200000 rate 100Mbit ceil 
>>>> 100Mbit linklayer ethernet burst 1600b/1 mpu 0b overhead 0b cburst 
>>>> 1600b/1 mpu 0b overhead 0b level 0
>>>> class htb 1:1 root rate 800Kbit ceil 800Kbit linklayer ethernet 
>>>> burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b 
>>>> level 7
>>>> class htb 1:20 parent 1:1 leaf 120: prio 2 quantum 9900 rate 
>>>> 396Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b 
>>>> overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0
>>>> class htb 1:30 parent 1:1 leaf 130: prio 3 quantum 4950 rate 
>>>> 198Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b 
>>>> overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0
>>>> class htb 1:40 parent 1:1 leaf 140: prio 4 quantum 1650 rate 66Kbit 
>>>> ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b overhead 0b 
>>>> cburst 1599b/1 mpu 0b overhead 0b level 0
>>>> class sfq 140:22 parent 140:
>>>> class sfq 140:34 parent 140:
>>>> class sfq 140:3b parent 140:
>>>> class sfq 140:6c parent 140:
>>>> class sfq 140:a9 parent 140:
>>>> class sfq 140:149 parent 140:
>>>> class sfq 140:287 parent 140:
>>>> class sfq 140:2fd parent 140:
>>>> class sfq 140:318 parent 140:
>>>> class sfq 140:376 parent 140:
>>>> class sfq 140:3d6 parent 140:
>>>> class sfq 140:3e3 parent 140:
>>>>
>>>> # tc -d qdisc show dev eth1
>>>> qdisc htb 1: root refcnt 2 r2q 5 default 99 direct_packets_stat 2 
>>>> ver 3.17 direct_qlen 1000
>>>> qdisc pfifo 110: parent 1:10 limit 1000p
>>>> qdisc pfifo 120: parent 1:20 limit 1000p
>>>> qdisc pfifo 130: parent 1:30 limit 1000p
>>>> qdisc sfq 140: parent 1:40 limit 127p quantum 1514b depth 127 flows 
>>>> 128/1024 divisor 1024 perturb 10sec
>>>> qdisc sfq 199: parent 1:99 limit 127p quantum 1514b depth 127 flows 
>>>> 128/1024 divisor 1024 perturb 10sec
>>>>
>>>> # tc -d filter show dev eth1
>>>> filter parent 1: protocol all pref 1 fw
>>>> filter parent 1: protocol all pref 1 fw handle 0xa classid 1:10
>>>> filter parent 1: protocol all pref 2 fw
>>>> filter parent 1: protocol all pref 2 fw handle 0x14 classid 1:20
>>>> filter parent 1: protocol all pref 3 fw
>>>> filter parent 1: protocol all pref 3 fw handle 0x1e classid 1:30
>>>> filter parent 1: protocol all pref 4 fw
>>>> filter parent 1: protocol all pref 4 fw handle 0x28 classid 1:40
>>>> filter parent 1: protocol all pref 99 fw
>>>> filter parent 1: protocol all pref 99 fw handle 0x63 classid 1:99
>>>> filter parent 1: protocol all pref 1000 fw
>>>> filter parent 1: protocol all pref 1000 fw handle 0x3e8 classid 1:1000
>>>>
>>>> # tc -s class show dev eth1
>>>> class htb 1:99 parent 1:1 leaf 199: prio 5 rate 66Kbit ceil 800Kbit 
>>>> burst 16Kb cburst 1599b
>>>>   Sent 1705141 bytes 10742 pkt (dropped 0, overlimits 0 requeues 0)
>>>>   rate 8048bit 6pps backlog 0b 0p requeues 0
>>>>   lended: 10742 borrowed: 0 giants: 0
>>>>   tokens: 29290142 ctokens: 198864
>>>>
>>>> class htb 1:10 parent 1:1 leaf 110: prio 1 rate 66Kbit ceil 200Kbit 
>>>> burst 16Kb cburst 1599b
>>>>   Sent 20229 bytes 229 pkt (dropped 0, overlimits 0 requeues 0)
>>>>   rate 48bit 0pps backlog 0b 0p requeues 0
>>>>   lended: 229 borrowed: 0 giants: 0
>>>>   tokens: 30859841 ctokens: 943734
>>>>
>>>> class htb 1:1000 root prio 0 rate 100Mbit ceil 100Mbit burst 1600b 
>>>> cburst 1600b
>>>>   Sent 79426 bytes 563 pkt (dropped 0, overlimits 0 requeues 0)
>>>>   rate 784bit 1pps backlog 0b 0p requeues 0
>>>>   lended: 563 borrowed: 0 giants: 0
>>>>   tokens: 1917 ctokens: 1917
>>>>
>>>> class htb 1:1 root rate 800Kbit ceil 800Kbit burst 16Kb cburst 1599b
>>>>   Sent 164307843 bytes 134601 pkt (dropped 0, overlimits 0 requeues 0)
>>>>   rate 796440bit 78pps backlog 0b 0p requeues 0
>>>>   lended: 108779 borrowed: 0 giants: 0
>>>>   tokens: 2192729 ctokens: -117287
>>>>
>>>> class htb 1:20 parent 1:1 leaf 120: prio 2 rate 396Kbit ceil 
>>>> 800Kbit burst 16Kb cburst 1599b
>>>>   Sent 5042698 bytes 4448 pkt (dropped 0, overlimits 0 requeues 0)
>>>>   rate 64032bit 6pps backlog 0b 0p requeues 0
>>>>   lended: 4448 borrowed: 0 giants: 0
>>>>   tokens: 5142031 ctokens: 235296
>>>>
>>>> class htb 1:30 parent 1:1 leaf 130: prio 3 rate 198Kbit ceil 
>>>> 800Kbit burst 16Kb cburst 1599b
>>>>   Sent 32111 bytes 216 pkt (dropped 0, overlimits 0 requeues 0)
>>>>   rate 0bit 0pps backlog 0b 0p requeues 0
>>>>   lended: 216 borrowed: 0 giants: 0
>>>>   tokens: 10309330 ctokens: 241546
>>>>
>>>> class htb 1:40 parent 1:1 leaf 140: prio 4 rate 66Kbit ceil 800Kbit 
>>>> burst 16Kb cburst 1599b
>>>>   Sent 157507664 bytes 118966 pkt (dropped 0, overlimits 0 requeues 0)
>>>>   rate 724312bit 65pps backlog 0b 27p requeues 0
>>>>   lended: 10187 borrowed: 108779 giants: 0
>>>>   tokens: -2031932 ctokens: -222767
>>>>
>>>> class sfq 140:56 parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 2814b 2p requeues 0
>>>>   allot 1520
>>>>
>>>> class sfq 140:63 parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 7570b 5p requeues 0
>>>>   allot 1520
>>>>
>>>> class sfq 140:9a parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 6056b 4p requeues 0
>>>>   allot 1448
>>>>
>>>> class sfq 140:f8 parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 1310b 2p requeues 0
>>>>   allot 528
>>>>
>>>> class sfq 140:1c7 parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 3028b 2p requeues 0
>>>>   allot 1520
>>>>
>>>> class sfq 140:269 parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 4542b 3p requeues 0
>>>>   allot 1304
>>>>
>>>> class sfq 140:2ff parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 6056b 4p requeues 0
>>>>   allot -72
>>>>
>>>> class sfq 140:30d parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 1514b 1p requeues 0
>>>>   allot 1520
>>>>
>>>> class sfq 140:326 parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 1502b 1p requeues 0
>>>>   allot 1520
>>>>
>>>> class sfq 140:3ad parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 1514b 1p requeues 0
>>>>   allot 1520
>>>>
>>>> class sfq 140:3c5 parent 140:
>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>   backlog 1560b 2p requeues 0
>>>>   allot 1520
>>>
>>
>


[-- Attachment #2: tcgraph2.png --]
[-- Type: image/png, Size: 53242 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (5 preceding siblings ...)
  2016-12-17 12:27 ` Ludovic Leroy
@ 2016-12-17 14:45 ` Ludovic Leroy
  2016-12-17 14:50 ` Ludovic Leroy
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ludovic Leroy @ 2016-12-17 14:45 UTC (permalink / raw)
  To: lartc

Hello Andy,

I explained it badly. My Linux system is used as a firewall/router for a
small local network connected to the eth0 interface.
I have created a dedicated class on the eth1 interface to access files on
a shared network drive integrated into the DSL modem.


On 17/12/2016 12:19, Andy Furniss wrote:
> Ludovic Leroy wrote:
>> Thanks for replying. The script is attached. Hope this helps.
>
> $TC qdisc add dev $NETCARD root handle 1: htb default 99 r2q 5
>
> SFQ saves the day, but you should be aware that using default on eth in
> this case catches arp and sending arp to a low bandwidth class is not a
> good thing to do.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (6 preceding siblings ...)
  2016-12-17 14:45 ` Ludovic Leroy
@ 2016-12-17 14:50 ` Ludovic Leroy
  2016-12-17 15:14 ` Alan Goodman
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Ludovic Leroy @ 2016-12-17 14:50 UTC (permalink / raw)
  To: lartc

Andy,

ip + 14? How do you get this result?

Thank you.

Ludovic

On 17/12/2016 11:56, Andy Furniss wrote:
> Alan Goodman wrote:
>> Hi,
>>
>> The obvious issue at present is that your upload rate isnt hitting
>> the ceiling rate you have specified, or at least the rate estimator
>> doesnt think it is.  This could be because you've set your upload
>> incorrectly (what is your sync speed) or could be because you are not
>> accounting for the overheads and ATM characteristics in your
>> connection (makes the rate estimator inaccurate).
>>
>> You should try adding stab overhead 40 linklayer atm to your root
>> qdisc.
>>
>> The overhead 40 bit will be whatever number
>> https://github.com/moeller0/ATM_overhead_detector figures out for
>> you.
>
> Just in case that just reveals the fixed overhead on IP, on eth tc sees
> ip + 14 already, so you can subtract 14, which for ipoa or pppoa vcmux
> means you should use a negative overhead.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (7 preceding siblings ...)
  2016-12-17 14:50 ` Ludovic Leroy
@ 2016-12-17 15:14 ` Alan Goodman
  2016-12-17 15:44 ` Andy Furniss
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Alan Goodman @ 2016-12-17 15:14 UTC (permalink / raw)
  To: lartc

In my experience those types of graphs don't take overhead into account.
Therefore the bandwidth used may look smaller than it is in reality. If
you look at tc -s qdisc show dev <interface> you'll see what the rate
estimator thinks the rate is, which ought to be more accurate.

I usually set the ceiling rate at 95-99% of the upload sync speed,
depending upon how many errors the line has on upstream.

The simplified explanation for why the bandwidth graphs can be lower
than expected is that they don't take the ATM overheads into account.

The slightly more in-depth explanation is that overhead estimation is
important on ATM-type connections because the cells transmitted on the
wire have to be a fixed size: if the data packet is smaller than that
size, the ATM cell is padded up to the minimum. If you don't account for
this, your rate estimation can be very inaccurate.
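
That padding arithmetic can be sketched in a couple of lines (a
simplified model: it assumes the AAL5 trailer and encapsulation bytes
are folded into the per-packet overhead argument):

```shell
# Bytes an ATM link actually transmits for one IP packet: the packet
# plus per-packet overhead is carved into 48-byte cell payloads (the
# last cell is padded), and each cell carries a 5-byte header, so
# every cell costs 53 bytes on the wire.
atm_wire_bytes() {  # args: ip_len overhead_bytes
    echo $(( (($1 + $2 + 47) / 48) * 53 ))
}

atm_wire_bytes 1500 10   # 32 cells -> 1696
atm_wire_bytes 40 10     # even a small ACK costs 2 cells -> 106
```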

I would run the ATM overhead detector first to ensure that you are not
overstating that overhead value. The output from the Matlab script will
give you the stab parameters that are approximately correct for your
connection. I find those to be a good starting point in general.

Alan


On 17/12/16 12:27, Ludovic Leroy wrote:
> Alan,
>
> You are right. This solution gives immediate relief.
> I cannot believe tuning the overheads has such an impact on upload 
> rates. That is the way it is...
> But you can see in the attached picture that the upload bandwidth 
> drops down to 700kbit from 800kbit.
> Do you have an idea how to return to the sync speed?
> Thank you once again.
>
> Ludovic
>
> On 17/12/2016 11:23, Alan Goodman wrote:
>> Hi,
>>
>> The obvious issue at present is that your upload rate isnt hitting 
>> the ceiling rate you have specified, or at least the rate estimator 
>> doesnt think it is.  This could be because you've set your upload 
>> incorrectly (what is your sync speed) or could be because you are not 
>> accounting for the overheads and ATM characteristics in your 
>> connection (makes the rate estimator inaccurate).
>>
>> You should try adding stab overhead 40 linklayer atm to your root qdisc.
>>
>> The overhead 40 bit will be whatever number 
>> https://github.com/moeller0/ATM_overhead_detector figures out for 
>> you.  If you dont have matlab you can send me your ping collection 
>> result (bzip it first and send it off list or upload it some place 
>> please) and I will process it for you.
>>
>> Alan
>>
>>
>> On 16/12/16 21:16, Ludovic Leroy wrote:
>>> Thanks for replying.
>>> The script is attached. Hope this helps.
>>>
>>> Ludovic
>>>
>>> On 16/12/2016 21:34, Alan Goodman wrote:
>>>> It might help if you provide the script you are using to build your 
>>>> tc queues...  I find this more readable than the output from the tc 
>>>> stats.
>>>>
>>>> Alan
>>>>
>>>>
>>>> On 16/12/16 16:50, Ludovic Leroy wrote:
>>>>> Hello LARTC community,
>>>>>
>>>>> I am building a TC policy at home that answers my needs for a 
>>>>> small DSL uplink 800kbit:
>>>>> * High UDP responsiveness for DNS queries and ping (Leaf 1:10 prio 1)
>>>>> * SSH traffic gets higher priority. I view my camera remotely via 
>>>>> ssh tunnel (Leaf 1:20 prio 2)
>>>>> * Guarantied http(s)/IMAP (Leaf 1:30 prio 3)
>>>>> * Torrent seeding (Leaf 1:40 prio 4)
>>>>> * Default (Leaf 1:99 prio 5)
>>>>> * Gigabit local network (Leaf 1:1000 prio 1000)
>>>>>
>>>>> The problem is torrent traffic consumes all the bandwidth leaving 
>>>>> little room for SSH traffic (<100kbit). See attached picture.
>>>>> SSH traffic class with higher priority than torrent class should 
>>>>> be offered excess bandwidth first, but that is not the case.
>>>>> The only solution I found is to reduce the torrent ceil value.
>>>>> Could you help me?
>>>>>
>>>>> Regards,
>>>>>    Ludovic L.
>>>>>
>>>>> # tc -d class show dev eth1
>>>>> class htb 1:99 parent 1:1 leaf 199: prio 5 quantum 1650 rate 
>>>>> 66Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b 
>>>>> overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0
>>>>> class htb 1:10 parent 1:1 leaf 110: prio 1 quantum 1650 rate 
>>>>> 66Kbit ceil 200Kbit linklayer ethernet burst 16Kb/1 mpu 0b 
>>>>> overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0
>>>>> class htb 1:1000 root prio 0 quantum 200000 rate 100Mbit ceil 
>>>>> 100Mbit linklayer ethernet burst 1600b/1 mpu 0b overhead 0b cburst 
>>>>> 1600b/1 mpu 0b overhead 0b level 0
>>>>> class htb 1:1 root rate 800Kbit ceil 800Kbit linklayer ethernet 
>>>>> burst 16Kb/1 mpu 0b overhead 0b cburst 1599b/1 mpu 0b overhead 0b 
>>>>> level 7
>>>>> class htb 1:20 parent 1:1 leaf 120: prio 2 quantum 9900 rate 
>>>>> 396Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b 
>>>>> overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0
>>>>> class htb 1:30 parent 1:1 leaf 130: prio 3 quantum 4950 rate 
>>>>> 198Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b 
>>>>> overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0
>>>>> class htb 1:40 parent 1:1 leaf 140: prio 4 quantum 1650 rate 
>>>>> 66Kbit ceil 800Kbit linklayer ethernet burst 16Kb/1 mpu 0b 
>>>>> overhead 0b cburst 1599b/1 mpu 0b overhead 0b level 0
>>>>> class sfq 140:22 parent 140:
>>>>> class sfq 140:34 parent 140:
>>>>> class sfq 140:3b parent 140:
>>>>> class sfq 140:6c parent 140:
>>>>> class sfq 140:a9 parent 140:
>>>>> class sfq 140:149 parent 140:
>>>>> class sfq 140:287 parent 140:
>>>>> class sfq 140:2fd parent 140:
>>>>> class sfq 140:318 parent 140:
>>>>> class sfq 140:376 parent 140:
>>>>> class sfq 140:3d6 parent 140:
>>>>> class sfq 140:3e3 parent 140:
>>>>>
>>>>> # tc -d qdisc show dev eth1
>>>>> qdisc htb 1: root refcnt 2 r2q 5 default 99 direct_packets_stat 2 
>>>>> ver 3.17 direct_qlen 1000
>>>>> qdisc pfifo 110: parent 1:10 limit 1000p
>>>>> qdisc pfifo 120: parent 1:20 limit 1000p
>>>>> qdisc pfifo 130: parent 1:30 limit 1000p
>>>>> qdisc sfq 140: parent 1:40 limit 127p quantum 1514b depth 127 
>>>>> flows 128/1024 divisor 1024 perturb 10sec
>>>>> qdisc sfq 199: parent 1:99 limit 127p quantum 1514b depth 127 
>>>>> flows 128/1024 divisor 1024 perturb 10sec
>>>>>
>>>>> # tc -d filter show dev eth1
>>>>> filter parent 1: protocol all pref 1 fw
>>>>> filter parent 1: protocol all pref 1 fw handle 0xa classid 1:10
>>>>> filter parent 1: protocol all pref 2 fw
>>>>> filter parent 1: protocol all pref 2 fw handle 0x14 classid 1:20
>>>>> filter parent 1: protocol all pref 3 fw
>>>>> filter parent 1: protocol all pref 3 fw handle 0x1e classid 1:30
>>>>> filter parent 1: protocol all pref 4 fw
>>>>> filter parent 1: protocol all pref 4 fw handle 0x28 classid 1:40
>>>>> filter parent 1: protocol all pref 99 fw
>>>>> filter parent 1: protocol all pref 99 fw handle 0x63 classid 1:99
>>>>> filter parent 1: protocol all pref 1000 fw
>>>>> filter parent 1: protocol all pref 1000 fw handle 0x3e8 classid 
>>>>> 1:1000
>>>>>
>>>>> # tc -s class show dev eth1
>>>>> class htb 1:99 parent 1:1 leaf 199: prio 5 rate 66Kbit ceil 
>>>>> 800Kbit burst 16Kb cburst 1599b
>>>>>   Sent 1705141 bytes 10742 pkt (dropped 0, overlimits 0 requeues 0)
>>>>>   rate 8048bit 6pps backlog 0b 0p requeues 0
>>>>>   lended: 10742 borrowed: 0 giants: 0
>>>>>   tokens: 29290142 ctokens: 198864
>>>>>
>>>>> class htb 1:10 parent 1:1 leaf 110: prio 1 rate 66Kbit ceil 
>>>>> 200Kbit burst 16Kb cburst 1599b
>>>>>   Sent 20229 bytes 229 pkt (dropped 0, overlimits 0 requeues 0)
>>>>>   rate 48bit 0pps backlog 0b 0p requeues 0
>>>>>   lended: 229 borrowed: 0 giants: 0
>>>>>   tokens: 30859841 ctokens: 943734
>>>>>
>>>>> class htb 1:1000 root prio 0 rate 100Mbit ceil 100Mbit burst 1600b 
>>>>> cburst 1600b
>>>>>   Sent 79426 bytes 563 pkt (dropped 0, overlimits 0 requeues 0)
>>>>>   rate 784bit 1pps backlog 0b 0p requeues 0
>>>>>   lended: 563 borrowed: 0 giants: 0
>>>>>   tokens: 1917 ctokens: 1917
>>>>>
>>>>> class htb 1:1 root rate 800Kbit ceil 800Kbit burst 16Kb cburst 1599b
>>>>>   Sent 164307843 bytes 134601 pkt (dropped 0, overlimits 0 
>>>>> requeues 0)
>>>>>   rate 796440bit 78pps backlog 0b 0p requeues 0
>>>>>   lended: 108779 borrowed: 0 giants: 0
>>>>>   tokens: 2192729 ctokens: -117287
>>>>>
>>>>> class htb 1:20 parent 1:1 leaf 120: prio 2 rate 396Kbit ceil 
>>>>> 800Kbit burst 16Kb cburst 1599b
>>>>>   Sent 5042698 bytes 4448 pkt (dropped 0, overlimits 0 requeues 0)
>>>>>   rate 64032bit 6pps backlog 0b 0p requeues 0
>>>>>   lended: 4448 borrowed: 0 giants: 0
>>>>>   tokens: 5142031 ctokens: 235296
>>>>>
>>>>> class htb 1:30 parent 1:1 leaf 130: prio 3 rate 198Kbit ceil 
>>>>> 800Kbit burst 16Kb cburst 1599b
>>>>>   Sent 32111 bytes 216 pkt (dropped 0, overlimits 0 requeues 0)
>>>>>   rate 0bit 0pps backlog 0b 0p requeues 0
>>>>>   lended: 216 borrowed: 0 giants: 0
>>>>>   tokens: 10309330 ctokens: 241546
>>>>>
>>>>> class htb 1:40 parent 1:1 leaf 140: prio 4 rate 66Kbit ceil 
>>>>> 800Kbit burst 16Kb cburst 1599b
>>>>>   Sent 157507664 bytes 118966 pkt (dropped 0, overlimits 0 
>>>>> requeues 0)
>>>>>   rate 724312bit 65pps backlog 0b 27p requeues 0
>>>>>   lended: 10187 borrowed: 108779 giants: 0
>>>>>   tokens: -2031932 ctokens: -222767
>>>>>
>>>>> class sfq 140:56 parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 2814b 2p requeues 0
>>>>>   allot 1520
>>>>>
>>>>> class sfq 140:63 parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 7570b 5p requeues 0
>>>>>   allot 1520
>>>>>
>>>>> class sfq 140:9a parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 6056b 4p requeues 0
>>>>>   allot 1448
>>>>>
>>>>> class sfq 140:f8 parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 1310b 2p requeues 0
>>>>>   allot 528
>>>>>
>>>>> class sfq 140:1c7 parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 3028b 2p requeues 0
>>>>>   allot 1520
>>>>>
>>>>> class sfq 140:269 parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 4542b 3p requeues 0
>>>>>   allot 1304
>>>>>
>>>>> class sfq 140:2ff parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 6056b 4p requeues 0
>>>>>   allot -72
>>>>>
>>>>> class sfq 140:30d parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 1514b 1p requeues 0
>>>>>   allot 1520
>>>>>
>>>>> class sfq 140:326 parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 1502b 1p requeues 0
>>>>>   allot 1520
>>>>>
>>>>> class sfq 140:3ad parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 1514b 1p requeues 0
>>>>>   allot 1520
>>>>>
>>>>> class sfq 140:3c5 parent 140:
>>>>>   (dropped 0, overlimits 0 requeues 0)
>>>>>   backlog 1560b 2p requeues 0
>>>>>   allot 1520
>>>>
>>>
>>
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (8 preceding siblings ...)
  2016-12-17 15:14 ` Alan Goodman
@ 2016-12-17 15:44 ` Andy Furniss
  2016-12-17 15:49 ` Andy Furniss
  2016-12-17 16:04 ` Dave Taht
  11 siblings, 0 replies; 13+ messages in thread
From: Andy Furniss @ 2016-12-17 15:44 UTC (permalink / raw)
  To: lartc

Ludovic Leroy wrote:
> Andy,
>
> ip + 14? How do you get this result?

It's IP + src/dst MAC + EtherType (which you can match with tc filters
using a negative offset).

On a VLAN interface it's IP + 18.

On ppp it's just the IP length.

If you make a filter/class for say icmp and play with ping you can
infer the overhead by looking at the byte counters.

This has nothing to do with stab - it's always the case, so you need to
take it into account when working out what number to use for overhead.

My old DSL link was pppoa vc mux, so the fixed overhead was IP + 10.

When I had a PCI modem and shaped on ppp0, the overhead param of stab
was 10.

When I later got a stand-alone modem connected by eth, I had to use -4
as the overhead to allow for the fact that tc already saw IP + 14.
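
My reading of the arithmetic above, as a one-liner: the stab overhead
parameter is the link's fixed per-packet overhead beyond IP, minus
whatever tc already counts on the shaping interface (0 on ppp, 14 on
plain eth, 18 on a VLAN interface):

```shell
stab_overhead() {  # args: fixed_overhead_beyond_ip bytes_tc_already_counts
    echo $(( $1 - $2 ))
}

stab_overhead 10 0    # pppoa vcmux, shaping on ppp0 -> 10
stab_overhead 10 14   # same link, shaping on eth    -> -4
```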


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (9 preceding siblings ...)
  2016-12-17 15:44 ` Andy Furniss
@ 2016-12-17 15:49 ` Andy Furniss
  2016-12-17 16:04 ` Dave Taht
  11 siblings, 0 replies; 13+ messages in thread
From: Andy Furniss @ 2016-12-17 15:49 UTC (permalink / raw)
  To: lartc

Ludovic Leroy wrote:
> Hello Andy,
>
> I have put it badly. My linux system is used as a firewall/router for a
> small local network that is connected to eth0 interface.
> I have created a dedicated class on eth1 interface to access files on a
> shared network drive integrated into the DSL modem.

If you are shaping on eth you need to be careful what you do with arp.

Using htb default in your setup sends arp to 1:99, but as you have sfq
on that class it will work. If, say, you had pfifo on it, then the arp
may get delayed long enough for the kernel to think that the remote has
gone, which is not good.

The default value of htb's default option lets unclassified traffic
through unshaped, which is good for arp.
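
One way to sketch the alternative: an explicit filter that steers ARP
into a fast class before the default catches it (eth1 and classid 1:10
are taken from the poster's script; 'match u32 0 0' matches every packet
of the given protocol):

```shell
# Hypothetical sketch: send all ARP frames to the prio-1 leaf
# instead of letting 'default 99' swallow them.
tc filter add dev eth1 parent 1: protocol arp prio 1 u32 match u32 0 0 flowid 1:10
```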

> On 17/12/2016 12:19, Andy Furniss wrote:
>> Ludovic Leroy wrote:
>>> Thanks for replying. The script is attached. Hope this helps.
>>
>> $TC qdisc add dev $NETCARD root handle 1: htb default 99 r2q 5
>>
>> SFQ saves the day, but you should be aware that using default on eth in
>> this case catches arp and sending arp to a low bandwidth class is not a
>> good thing to do.
>
> .
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Problem to priorize SSH traffic
  2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
                   ` (10 preceding siblings ...)
  2016-12-17 15:49 ` Andy Furniss
@ 2016-12-17 16:04 ` Dave Taht
  11 siblings, 0 replies; 13+ messages in thread
From: Dave Taht @ 2016-12-17 16:04 UTC (permalink / raw)
  To: lartc

We try to handle framing overhead and multiple tiers of qos sanely in
the cake qdisc, and we also have the "sqm-scripts", which do the same
sanely for htb + fq_codel (or sfq if you prefer).

https://www.bufferbloat.net/projects/codel/wiki/CakeTechnical/

https://github.com/tohojo/sqm-scripts
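
As a hedged sketch (assuming the out-of-tree cake module and a matching
tc build are installed, and a pppoa vcmux line like the one discussed),
the whole htb/sfq tree above collapses to roughly one line, with the
ATM framing compensated by cake itself:

```shell
# 'overhead 10 atm' approximates a pppoa vcmux link; adjust for yours.
tc qdisc add dev eth1 root cake bandwidth 800kbit overhead 10 atm
```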

On Sat, Dec 17, 2016 at 7:49 AM, Andy Furniss <adf.lists@gmail.com> wrote:
> Ludovic Leroy wrote:
>>
>> Hello Andy,
>>
>> I have put it badly. My linux system is used as a firewall/router for a
>> small local network that is connected to eth0 interface.
>> I have created a dedicated class on eth1 interface to access files on a
>> shared network drive integrated into the DSL modem.
>
>
> If you are shaping on eth you need to be careful what you do with arp.
>
> Using htb default in your setup sends arp to 99, but as you have sfq on
> that it will work. If say you had pfifo on it then the arp may get
> delayed long enough for the kernel to think that the remote has gone,
> which is not good.
>
> The default default for htb lets things through unshaped which is good
> for arp.
>
>
>> On 17/12/2016 12:19, Andy Furniss wrote:
>>>
>>> Ludovic Leroy wrote:
>>>>
>>>> Thanks for replying. The script is attached. Hope this helps.
>>>
>>>
>>> $TC qdisc add dev $NETCARD root handle 1: htb default 99 r2q 5
>>>
>>> SFQ saves the day, but you should be aware that using default on eth in
>>> this case catches arp and sending arp to a low bandwidth class is not a
>>> good thing to do.
>>
>>
>> .
>>
>
> --
> To unsubscribe from this list: send the line "unsubscribe lartc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2016-12-17 16:04 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-16 16:50 Problem to priorize SSH traffic Ludovic Leroy
2016-12-16 20:34 ` Alan Goodman
2016-12-16 21:16 ` Ludovic Leroy
2016-12-17 10:23 ` Alan Goodman
2016-12-17 10:56 ` Andy Furniss
2016-12-17 11:19 ` Andy Furniss
2016-12-17 12:27 ` Ludovic Leroy
2016-12-17 14:45 ` Ludovic Leroy
2016-12-17 14:50 ` Ludovic Leroy
2016-12-17 15:14 ` Alan Goodman
2016-12-17 15:44 ` Andy Furniss
2016-12-17 15:49 ` Andy Furniss
2016-12-17 16:04 ` Dave Taht

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.