From: Andy Furniss <adf.lists@gmail.com>
To: lartc@vger.kernel.org
Subject: Re: HFSC not working as expected
Date: Sun, 06 Jul 2014 20:42:27 +0000	[thread overview]
Message-ID: <53B9B4B3.6010704@gmail.com> (raw)
In-Reply-To: <53AC30A8.2080403@yescomputersolutions.com>

Alan Goodman wrote:
> Thanks Andy,
>
> I have been playing around a bit and may have been slightly quick to
>  comment in regard of download...  With hfsc engaged and total limit
> set to 17100kbit the actual throughput I see is closer to 14mbit for
> some reason.
>
> No traffic shaping:
> http://www.thinkbroadband.com/speedtest/results.html?id=140466829057682641064
>
>
> hfsc->sfq perturb 10
> http://www.thinkbroadband.com/speedtest/results.html?id=140466829057682641064
>
Wrong link - but that's data throughput, which is < IP throughput, which
is < ATM-level throughput. Assuming you are using stab when setting the
17100kbit, 14mbit data throughput is only a bit below expected.

I assume your mtu is 1492, and also that you have the default linux
setting of TCP timestamps on (costs 12 bytes), so with TCP + IP headers
there are only 1440 bytes of data per packet, each of which, after
allowing for ppp/aal5 overheads, will probably use 32 cells = 1696 bytes.

1440 / 1696 = ~0.85, and 0.85 * 17.1 = ~14.5.
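In code form the sums above look like this (a sketch - the 40 byte
ppp/aal5 overhead is just an assumed figure that reproduces the 32-cell
result, not something read off your line):

```python
import math

# Assumptions from the thread: MTU 1492, TCP timestamps on.
MTU = 1492
IP_HEADER = 20
TCP_HEADER = 20
TCP_TIMESTAMPS = 12        # default Linux TCP timestamp option
PPP_AAL5_OVERHEAD = 40     # assumed per-packet overhead before cell padding
ATM_CELL_PAYLOAD = 48      # each 53-byte ATM cell carries 48 payload bytes
ATM_CELL_SIZE = 53

# Payload per full-size packet after TCP/IP headers + timestamps.
data_per_packet = MTU - IP_HEADER - TCP_HEADER - TCP_TIMESTAMPS  # 1440

# Whole cells needed on the wire, and the bytes they occupy.
cells = math.ceil((MTU + PPP_AAL5_OVERHEAD) / ATM_CELL_PAYLOAD)  # 32
wire_bytes = cells * ATM_CELL_SIZE                               # 1696

# Data throughput you'd expect under a 17.1 Mbit ATM-level shaper.
efficiency = data_per_packet / wire_bytes    # ~0.85
goodput_mbit = efficiency * 17.1             # ~14.5
```

So a speedtest reading close to 14.5mbit under a 17100kbit shaper is
about what the cell tax predicts.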

I am not sure what overhead you should add with stab for your pppoe, as
tc already sees eth as IP + 14 - maybe adding 40 is too much and you are
getting 33 cells per packet.


> On 06/07/14 17:42, Andy Furniss wrote:
>> If you have the choice of pppoa vs pppoe, why not use pppoa so you
>> can use overhead 10 and be more efficient for upload.
>>
>> The 88.2 thing is not an atm rate; they do limit slightly below
>> sync, but that is a marketing (inexact) approximate IP rate.
>>
>> If you were really matching their rate after allowing for overheads
>> your incoming shaping would do nothing at all.
>
> My understanding is that they limit the BRAS profile to 88.2% of your
>  downstream sync to prevent traffic backing up in the exchange
> links.

They do, but they also call it the "IP Profile", so in addition to
limiting slightly below sync rate they are also allowing for atm
overheads in the figure that they present.
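As a rough illustration with a made-up sync rate (the 19500kbit figure
is hypothetical, not taken from your line), the 88.2% profile sits
close to sync minus the cell tax:

```python
# BRAS "IP Profile" sketch: ~88.2% of downstream sync, which roughly
# covers the ATM cell tax (48 payload bytes per 53-byte cell) plus a
# small extra margin below sync.
sync_kbit = 19500                    # hypothetical sync rate
ip_profile_kbit = sync_kbit * 0.882  # ~17199 kbit

atm_payload_fraction = 48 / 53       # ~0.906: cell tax alone
cell_tax_only_kbit = sync_kbit * atm_payload_fraction  # ~17660 kbit
```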

>>> This works great in almost every test case except 'excessive
>>> p2p'. As a test I configured a 9mbit RATE and upper limit m2
>>> 10mbit on my bulk class.  I then started downloading a CentOS
>>> torrent with a very high maximum connection limit set.  I see
>>> 10mbit coming in on my ppp0 interface, however latency in my
>>> priority queue (sc umax 1412b dmax 20ms rate 460kbit) is
>>> hitting 100+ms roundtrip. Below is a clip from a ping session
>>> which shows what happens when I pause the torrent download.
>>
>> Shaping from the wrong end of the bottleneck is not ideal, if you
>> really care about latency you need to set lower limit for bulk and
>> short queue length.
>>
>> As you have found, hitting it hard with many connections is the
>> worst case.
>
> Are you saying that in addition to setting the 10mbit upper limit I
> should also set sfq limit to say 25 packets?

Well, it's quite a fast link, maybe 25 is too short - I would test.
IIRC 128 is the default for sfq.

Thinking more about it, there could be other reasons that you got the
latency you saw.

As I said I don't know HFSC, but I notice on both your setups you give
very little bandwidth to "syn ack rst". I assume ack here means you
classified by length to get empty (s)acks, as almost every packet has
the ack bit set. Personally I would give those lower prio than time
critical, and you should be aware that on a highly asymmetric 20:1 adsl
line they can eat a fair bit of your upstream (2 cells each; 1 for every
2 incoming packets in the best case, 1 per incoming packet in recovery
after loss).
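Roughly, for the downstream limit in this thread, that works out as
follows (a sketch under the best/worst case assumptions above):

```python
# Upstream bandwidth eaten by empty (s)acks on an asymmetric ADSL line.
ACK_CELLS = 2
ATM_CELL_SIZE = 53
ack_wire_bytes = ACK_CELLS * ATM_CELL_SIZE   # 106 bytes per empty ack

down_kbit = 17100            # downstream limit from the thread
incoming_wire_bytes = 1696   # 32 cells per full-size incoming packet

# Full-size incoming packets per second at the downstream limit.
pkts_per_sec = down_kbit * 1000 / 8 / incoming_wire_bytes  # ~1260

# Best case: one ack per two incoming packets.
ack_up_best_kbit = pkts_per_sec / 2 * ack_wire_bytes * 8 / 1000   # ~534
# Recovery after loss: one ack per incoming packet.
ack_up_worst_kbit = pkts_per_sec * ack_wire_bytes * 8 / 1000      # ~1069
```

Even in the best case that is over 500kbit of upstream just for acks,
which is why they deserve more than a token allocation on a 20:1 line.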

When using htb years ago, I found that latency was better if I way
over-allocated bandwidth for my interactive class and gave the bulk
classes a low rate so they had to borrow.





Thread overview: 23+ messages
2014-06-26 14:39 HFSC not working as expected Alan Goodman
2014-07-01 12:25 ` Michal Soltys
2014-07-01 13:19 ` Alan Goodman
2014-07-01 13:30 ` Michal Soltys
2014-07-01 14:33 ` Alan Goodman
2014-07-03  0:12 ` Michal Soltys
2014-07-03  0:56 ` Alan Goodman
2014-07-06  1:18 ` Michal Soltys
2014-07-06 15:34 ` Alan Goodman
2014-07-06 16:42 ` Andy Furniss
2014-07-06 16:49 ` Andy Furniss
2014-07-06 16:49 ` Alan Goodman
2014-07-06 16:54 ` Alan Goodman
2014-07-06 20:42 ` Andy Furniss [this message]
2014-07-06 22:18 ` Alan Goodman
2014-07-06 22:24 ` Andy Furniss
2014-07-07  0:01 ` Alan Goodman
2014-07-07  9:54 ` Michal Soltys
2014-07-07  9:58 ` Michal Soltys
2014-07-07 10:08 ` Michal Soltys
2014-07-07 10:10 ` Michal Soltys
2014-07-07 10:59 ` Alan Goodman
2014-07-07 15:38 ` Alan Goodman
