From: Alan Goodman <notifications@yescomputersolutions.com>
To: lartc@vger.kernel.org
Subject: Re: HFSC not working as expected
Date: Sun, 06 Jul 2014 22:18:36 +0000
Message-ID: <53B9CB3C.9040302@yescomputersolutions.com>
In-Reply-To: <53AC30A8.2080403@yescomputersolutions.com>

Hi Andy/all,

Thanks for your further useful input.

On 06/07/14 21:42, Andy Furniss wrote:
> Wrong link - but that's data throughput, which is < ip throughput, which
> is < atm level throughput. Assuming you are using stab when setting the
> 17100kbit, 14mbit data throughput is only a bit below expected.
>
> I assume your mtu is 1492, and also that you have the default linux
> setting of tcp timestamps on (costs 12 bytes), so with tcp + ip headers
> there are only 1440 bytes of data per packet, each of which after
> allowing for ppp/aal5 overheads will probably use 32 cells = 1696 bytes.
>
> 1440 / 1696 ≈ 0.85, and 0.85 * 17.1 ≈ 14.5.
>
> I am not sure what overhead you should add with stab for your pppoe as
> tc already sees eth as ip + 14 - maybe adding 40 is too much and you are
> getting 33 cells per packet.

Sorry about the broken link.  What you would have seen if I had linked 
correctly is that the connection manages around 16.4mbit without traffic 
shaping.  With traffic shaping, the upper limit set to 17100kbit and stab 
overhead 40, I only see roughly 14.5mbit - which suggests we are likely 
being overly conservative?
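
For reference, here is that cell arithmetic written out as a quick shell 
check (the constants assume MTU 1492, TCP timestamps on, and my stab 
overhead of 40 - adjust to taste):

awk 'BEGIN {
    payload = 1492 - 20 - 20 - 12;         # IP hdr + TCP hdr + timestamps
    cells   = int((1492 + 40 + 47) / 48);  # round up to whole ATM cells
    wire    = cells * 53;                  # each cell is 53 bytes on the wire
    printf "goodput ~ %.2f mbit\n", 17.1 * payload / wire
}'
# prints: goodput ~ 14.52 mbit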

My download shaping is done on the outbound leg from the server...  
Traffic flows ADSL -> router in bridge mode -> eth0 -> ppp0 -> eth1 -> 
switch -> client device.  My download shaping occurs on the eth1 
device.

Quickly, regarding MTU: by default CentOS has CLAMPMSS enabled on the 
pppoe connection, set to 1412 bytes.  The MTU of the connection is 1492 
according to ifconfig.
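
For anyone wanting to verify their own clamp, something along these 
lines works (the example TCPMSS rule shows the usual shape, not 
necessarily the exact rule CentOS generates):

ip link show ppp0 | grep -o 'mtu [0-9]*'
iptables -t mangle -L -v -n | grep TCPMSS
# a typical clamp rule looks like:
# iptables -t mangle -A FORWARD -o ppp0 -p tcp \
#     --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1412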

> Well, it's quite a fast link, maybe 25 is too short - I would test; IIRC
> 128 is the default for sfq.

I have experimented with lots of sfq limit settings now and it really 
seems to make little difference, so I've decided to leave it at the 
default of 128 for now.
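
Set explicitly on one of my leaf classes that looks like this (1:13 here 
is just the example):

tc qdisc change dev ppp0 parent 1:13 handle 13: sfq limit 128 perturb 10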

> Thinking more about it, there could be other reasons that you got the
> latency you saw.
>
> As I said I don't know HFSC, but I notice on both your setups you give
> very little bandwidth to "syn ack rst". I assume ack here means you
> classified by length to get empty (s)acks, as almost every packet has ack
> set. Personally I would give those a lower prio than time critical, and
> you should be aware that on a highly asymmetric 20:1 adsl line they can
> eat a fair bit of your upstream (2 cells each, 1 for every 2 incoming
> best case, 1 per incoming in recovery after loss).

That's a very valid point.  I have decided to roll class 10 and class 11 
together for the time being, which should mean time critical + syn/ack 
etc. get roughly 50% of upload capacity.

I've been playing around with my worst case bittorrent scenario some 
more.  Whilst troubleshooting I decided to set ul 15000kbit on the 
download class 1:13 (which the torrent hits).  With the torrent using 
around 200 flows I immediately saw latency in the priority queue within 
acceptable limits.  So I thought bingo - perhaps I set my class 1:2 upper 
limit too high overall?  I reduced 17100kbit to 15000kbit, adjusted the 
sc rates so that the total was 15000kbit, deleted the upper limit on 
class 1:13 and reloaded the rules.  Unfortunately this behaved exactly 
like the 17100kbit upper limit, latency wise.  I don't understand why 
that is - could this be the crux of my issues, some hfsc 
misunderstanding?

While I had ul 15000kbit set on the bulk class I also played around 
with getting traffic to hit the 'interactive' 1:12 class.  I found this 
caused similar overall behaviour to when I didn't limit the bulk class at 
all - roundtrip hitting 100+ms.  It's as though it doesn't like 
hitting the branch linkshare/upper limit?
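
For anyone wanting to reproduce this, the per-class counters show where 
packets and drops land while the torrent runs:

watch -n 1 'tc -s class show dev eth1'
tc -s qdisc show dev eth1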

Below is my current script:

#QoS for Upload

tc qdisc del dev ppp0 root

tc qdisc add dev ppp0 stab mtu 1492 overhead 40 linklayer atm root \
    handle 1:0 hfsc default 14

tc class add dev ppp0 parent 1:0 classid 1:1 hfsc sc rate 90mbit
tc class add dev ppp0 parent 1:1 classid 1:2 hfsc sc rate 1100kbit \
    ul rate 1100kbit

tc class add dev ppp0 parent 1:2 classid 1:11 hfsc sc umax 1412b \
    dmax 20ms rate 495kbit                                # time critical
tc class add dev ppp0 parent 1:2 classid 1:12 hfsc sc rate 300kbit  # interactive
tc class add dev ppp0 parent 1:2 classid 1:13 hfsc sc rate 305kbit  # bulk
tc class add dev ppp0 parent 1:1 classid 1:14 hfsc sc rate 90mbit

tc qdisc add dev ppp0 parent 1:11 handle 11: sfq perturb 10
tc qdisc add dev ppp0 parent 1:12 handle 12: sfq perturb 10
tc qdisc add dev ppp0 parent 1:13 handle 13: sfq perturb 10
tc qdisc add dev ppp0 parent 1:14 handle 14: pfifo

tc filter add dev ppp0 parent 1:0 protocol ip prio 2 handle 11 fw flowid 1:11
tc filter add dev ppp0 parent 1:0 protocol ip prio 3 handle 12 fw flowid 1:12
tc filter add dev ppp0 parent 1:0 protocol ip prio 4 handle 13 fw flowid 1:13

#QoS for Download

tc qdisc del dev eth1 root

tc qdisc add dev eth1 stab overhead 40 linklayer atm root \
    handle 1:0 hfsc default 14

tc class add dev eth1 parent 1:0 classid 1:1 hfsc sc rate 90mbit
tc class add dev eth1 parent 1:1 classid 1:2 hfsc sc rate 17100kbit \
    ul rate 17100kbit

tc class add dev eth1 parent 1:2 classid 1:11 hfsc sc umax 1412b \
    dmax 20ms rate 1545kbit                               # time critical
tc class add dev eth1 parent 1:2 classid 1:12 hfsc sc rate 4955kbit  # interactive
tc class add dev eth1 parent 1:2 classid 1:13 hfsc sc rate 10600kbit  #ul rate 15000kbit  # bulk
tc class add dev eth1 parent 1:1 classid 1:14 hfsc sc rate 90mbit

tc qdisc add dev eth1 parent 1:11 handle 11: sfq perturb 10
tc qdisc add dev eth1 parent 1:12 handle 12: sfq perturb 10
tc qdisc add dev eth1 parent 1:13 handle 13: sfq perturb 10
tc qdisc add dev eth1 parent 1:14 handle 14: pfifo


tc filter add dev eth1 parent 1:0 protocol ip prio 2 handle 11 fw flowid 1:11
tc filter add dev eth1 parent 1:0 protocol ip prio 3 handle 12 fw flowid 1:12
tc filter add dev eth1 parent 1:0 protocol ip prio 4 handle 13 fw flowid 1:13

A quick note regarding class 1:14...  Class 1:14 only gets traffic 
which I don't mark in iptables.  My iptables ruleset guarantees that all 
traffic destined for the internet leaving via ppp0, and all traffic 
destined for a machine inside the NAT leaving via eth1 which hasn't 
already been marked as more important, gets marked 13.  Therefore traffic 
left unmarked must be source local subnet, destination local subnet.  
Hope this makes sense!
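
Purely for illustration, the catch-all marking rules have roughly this 
shape (these are not my actual rules, just the general form):

# anything leaving ppp0 or eth1 that is not already marked gets the bulk mark
iptables -t mangle -A POSTROUTING -o ppp0 -m mark --mark 0 -j MARK --set-mark 13
iptables -t mangle -A POSTROUTING -o eth1 -m mark --mark 0 -j MARK --set-mark 13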

Alan
