* Problem with HTB bandwidth slicing when using TCP traffic
@ 2014-04-10 19:17 Slavica Tomovic
  2014-04-10 22:48 ` Andy Furniss
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Slavica Tomovic @ 2014-04-10 19:17 UTC (permalink / raw)
  To: lartc

Hi all,

I am using CentOS 6.4 and have problems when I want to limit the
bandwidth of a TCP flow to a value smaller than 15 Mbit/s. Namely, I
used iperf to generate TCP traffic and limited the bandwidth for that
flow (with the tc command) to 6 Mbit/s. I got approximately 6 Mbit/s
on average, but iperf, which I set to report statistics every second,
showed that in one second the flow got 10 Mbit/s or more and then for
a few consecutive seconds 0 Mbit/s. With UDP traffic everything works
fine. I expected the TCP bandwidth to fluctuate because of the
congestion mechanism, but not like this. When I reserve more than 15
Mbit/s the situation is pretty much OK.

I also had a similar problem when I tried to split link bandwidth
(which I had previously throttled to 10 Mbit/s with tc) between two
TCP flows. On the other hand, TCP-vs-UDP and UDP-vs-UDP slicing works
fine.

I recently updated the kernel to version 2.6.32-431. I don't know
whether this caused the problem, because I didn't use the tc htb
mechanism with the older version.

Do you have any idea why this is happening and how I can fix it?

These are the commands I used to create the htb classes:

tc class add dev eth0 parent 1: classid 1:1 htb rate 10000kbps ceil 10000kbps
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 4000kbps ceil 4000kbps
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 6000kbps ceil 6000kbps

I would appreciate any help!

Slavica


* Re: Problem with HTB bandwidth slicing when using TCP traffic
  2014-04-10 19:17 Problem with HTB bandwidth slicing when using TCP traffic Slavica Tomovic
@ 2014-04-10 22:48 ` Andy Furniss
  2014-04-12 10:20 ` Slavica Tomovic
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Andy Furniss @ 2014-04-10 22:48 UTC (permalink / raw)
  To: lartc

Slavica Tomovic wrote:
> Hi all,
>
> I am using CentOS 6.4 and have problems when I want to limit the
> bandwidth of a TCP flow to a value smaller than 15 Mbit/s. Namely, I
> used iperf to generate TCP traffic and limited the bandwidth for that
> flow (with the tc command) to 6 Mbit/s. I got approximately 6 Mbit/s
> on average, but iperf, which I set to report statistics every second,
> showed that in one second the flow got 10 Mbit/s or more and then for
> a few consecutive seconds 0 Mbit/s. With UDP traffic everything works
> fine. I expected the TCP bandwidth to fluctuate because of the
> congestion mechanism, but not like this. When I reserve more than 15
> Mbit/s the situation is pretty much OK.
>
> I also had a similar problem when I tried to split link bandwidth
> (which I had previously throttled to 10 Mbit/s with tc) between two
> TCP flows. On the other hand, TCP-vs-UDP and UDP-vs-UDP slicing works
> fine.
>
> I recently updated the kernel to version 2.6.32-431. I don't know
> whether this caused the problem, because I didn't use the tc htb
> mechanism with the older version.
>
> Do you have any idea why this is happening and how I can fix it?
>
> These are the commands I used to create the htb classes:
>
> tc class add dev eth0 parent 1: classid 1:1 htb rate 10000kbps ceil 10000kbps
> tc class add dev eth0 parent 1:1 classid 1:10 htb rate 4000kbps ceil 4000kbps
> tc class add dev eth0 parent 1:1 classid 1:11 htb rate 6000kbps ceil 6000kbps

kbps in tc means kilobytes/sec; use kbit or mbit.
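
For example, your classes rewritten in bits per second would be (a
minimal sketch - the root htb qdisc line is assumed here, since your
commands didn't show it):

tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 4mbit ceil 4mbit
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 6mbit ceil 6mbit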

If you don't specify child qdiscs for htb it will use pfifo with the
device's txqueuelen as the limit, which may be a bit long on ethernet
(1000) or too short on ppp (3).
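
One way to check what is actually attached, and with what limit:

tc -s qdisc show dev eth0
ip link show eth0

The first shows each qdisc with its limit, backlog and drop counts;
the second reports the device's txqueuelen (qlen).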







* Re: Problem with HTB bandwidth slicing when using TCP traffic
  2014-04-10 19:17 Problem with HTB bandwidth slicing when using TCP traffic Slavica Tomovic
  2014-04-10 22:48 ` Andy Furniss
@ 2014-04-12 10:20 ` Slavica Tomovic
  2014-04-13 19:43 ` Andy Furniss
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Slavica Tomovic @ 2014-04-12 10:20 UTC (permalink / raw)
  To: lartc

Thank you Andy!

I changed the limit values and that solved the problem.

However, I did it by typing a command like this for each created queue:

tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 10

Is there some command with which I can change the default limit size,
so that I don't have to do this every time I create a class?


Thanks a lot one more time!

Slavica



2014-04-11 0:48 GMT+02:00 Andy Furniss <adf.lists@gmail.com>:
> Slavica Tomovic wrote:
>>
>> Hi all,
>>
>> I am using CentOS 6.4 and have problems when I want to limit the
>> bandwidth of a TCP flow to a value smaller than 15 Mbit/s. Namely, I
>> used iperf to generate TCP traffic and limited the bandwidth for that
>> flow (with the tc command) to 6 Mbit/s. I got approximately 6 Mbit/s
>> on average, but iperf, which I set to report statistics every second,
>> showed that in one second the flow got 10 Mbit/s or more and then for
>> a few consecutive seconds 0 Mbit/s. With UDP traffic everything works
>> fine. I expected the TCP bandwidth to fluctuate because of the
>> congestion mechanism, but not like this. When I reserve more than 15
>> Mbit/s the situation is pretty much OK.
>>
>> I also had a similar problem when I tried to split link bandwidth
>> (which I had previously throttled to 10 Mbit/s with tc) between two
>> TCP flows. On the other hand, TCP-vs-UDP and UDP-vs-UDP slicing works
>> fine.
>>
>> I recently updated the kernel to version 2.6.32-431. I don't know
>> whether this caused the problem, because I didn't use the tc htb
>> mechanism with the older version.
>>
>> Do you have any idea why this is happening and how I can fix it?
>>
>> These are the commands I used to create the htb classes:
>>
>> tc class add dev eth0 parent 1: classid 1:1 htb rate 10000kbps ceil 10000kbps
>> tc class add dev eth0 parent 1:1 classid 1:10 htb rate 4000kbps ceil 4000kbps
>> tc class add dev eth0 parent 1:1 classid 1:11 htb rate 6000kbps ceil 6000kbps
>
>
> kbps in tc means kilobytes/sec; use kbit or mbit.
>
> If you don't specify child qdiscs for htb it will use pfifo with the
> device's txqueuelen as the limit, which may be a bit long on ethernet
> (1000) or too short on ppp (3).
>
>
>
>
>


* Re: Problem with HTB bandwidth slicing when using TCP traffic
  2014-04-10 19:17 Problem with HTB bandwidth slicing when using TCP traffic Slavica Tomovic
  2014-04-10 22:48 ` Andy Furniss
  2014-04-12 10:20 ` Slavica Tomovic
@ 2014-04-13 19:43 ` Andy Furniss
  2014-04-13 20:17 ` Dave Taht
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Andy Furniss @ 2014-04-13 19:43 UTC (permalink / raw)
  To: lartc

Slavica Tomovic wrote:
> Thank you Andy!
>
> I changed the limit values and that solved the problem.
>
> However, I did it by typing a command like this for each created
> queue:
>
> tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 10
>
> Is there some command with which I can change the default limit
> size, so that I don't have to do this every time I create a class?

I guess you could change the txqueuelen on eth0 before setting up the
htb, but I would just make a script and add the qdiscs explicitly.
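
Something like this, as a sketch (assuming the 1:10/1:11 leaves from
your earlier commands - either change the device default, or attach
the leaf qdiscs explicitly):

ip link set dev eth0 txqueuelen 100
tc qdisc add dev eth0 parent 1:10 pfifo limit 100
tc qdisc add dev eth0 parent 1:11 pfifo limit 100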

limit 10 may be a bit short. Also, you could consider other qdiscs
than pfifo, e.g. if you wanted some fairness between flows you could
use sfq or fq_codel.

sfq defaults would be fine; fq_codel may need some parameter tweaking
for lower bit rates, though I haven't properly tested it yet, just
guessing from what man tc-fq_codel says.
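
For instance (a sketch - the handles and perturb value are arbitrary):

tc qdisc add dev eth0 parent 1:10 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:11 handle 30: fq_codel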



* Re: Problem with HTB bandwidth slicing when using TCP traffic
  2014-04-10 19:17 Problem with HTB bandwidth slicing when using TCP traffic Slavica Tomovic
                   ` (2 preceding siblings ...)
  2014-04-13 19:43 ` Andy Furniss
@ 2014-04-13 20:17 ` Dave Taht
  2014-04-14  1:30 ` Horace
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Dave Taht @ 2014-04-13 20:17 UTC (permalink / raw)
  To: lartc

On Sun, Apr 13, 2014 at 12:43 PM, Andy Furniss <adf.lists@gmail.com> wrote:
> Slavica Tomovic wrote:
>>
>> Thank you Andy!
>>
>> I changed the limit values and that solved the problem.
>>
>> However, I did it by typing a command like this for each created
>> queue:
>>
>> tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 10
>>
>> Is there some command with which I can change the default limit
>> size, so that I don't have to do this every time I create a class?
>
>
> I guess you could change the txqueuelen on eth0 before setting up the
> htb, but I would just make a script and add the qdiscs explicitly.
>
> limit 10 may be a bit short. Also, you could consider other qdiscs
> than pfifo, e.g. if you wanted some fairness between flows you could
> use sfq or fq_codel.

No, limit 10 is a lot short. 50-100 is more reasonable for a fifo at
these speeds.
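
If the pfifos are already attached, something like this should let
you resize one in place (a sketch, reusing the handle from earlier in
the thread):

tc qdisc change dev eth0 parent 1:10 handle 20: pfifo limit 100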

> sfq defaults would be fine; fq_codel may need some parameter tweaking
> for lower bit rates, though I haven't properly tested it yet, just
> guessing from what man tc-fq_codel says.

sfq is a pretty good choice at low bandwidths and on older versions of
linux (particularly as fq_codel is heavily backported from linux 3.6
but not available everywhere). In addition to fair queuing, sfq has a
hard packet limit of 127 packets by default, which gives decent
performance in the range of bandwidths below 10mbit.

At rates below 3Mbit fq_codel needs a bit of tweaking, we've
discovered; basically it helps to have a "target" parameter greater
than the transmit time of a single MTU-sized packet - e.g. at 1Mbit we
tend to use a target of 15 and an interval of 150; at .5mbit, 30 and
300. Above 4mbit, target and interval are fine with the defaults.

Changing the default quantum also helps at rates below 40Mbit - we use 300.
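
Put together, tuning a 1Mbit leaf would look something like this (a
sketch using the numbers above - tc takes target and interval in
milliseconds):

tc qdisc add dev eth0 parent 1:10 fq_codel target 15ms interval 150ms quantum 300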

For way more detail on how this stuff works,

http://www.bufferbloat.net/projects/cerowrt/wiki/Wondershaper_Must_Die

see cerowrt's SQM code...

and the related codel and fq_codel internet drafts...




-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article


* Re: Problem with HTB bandwidth slicing when using TCP traffic
  2014-04-10 19:17 Problem with HTB bandwidth slicing when using TCP traffic Slavica Tomovic
                   ` (3 preceding siblings ...)
  2014-04-13 20:17 ` Dave Taht
@ 2014-04-14  1:30 ` Horace
  2014-04-14 22:40 ` Andy Furniss
  2014-04-14 22:50 ` Dave Taht
  6 siblings, 0 replies; 8+ messages in thread
From: Horace @ 2014-04-14  1:30 UTC (permalink / raw)
  To: lartc

Thanks for your references. I have been looking for an updated HTB script for over two years and have done my own experiments. The results were not good and I was about to give up.

Horace Ng

----- Original Message -----
From: "Dave Taht" <dave.taht@gmail.com>
To: "Andy Furniss" <adf.lists@gmail.com>
Cc: "Slavica Tomovic" <slavicat.cg@gmail.com>, lartc@vger.kernel.org
Sent: Monday, April 14, 2014 4:17:29 AM
Subject: Re: Problem with HTB bandwidth slicing when using TCP traffic

On Sun, Apr 13, 2014 at 12:43 PM, Andy Furniss <adf.lists@gmail.com> wrote:
> Slavica Tomovic wrote:
>>
>> Thank you Andy!
>>
>> I changed the limit values and that solved the problem.
>>
>> However, I did it by typing a command like this for each created
>> queue:
>>
>> tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 10
>>
>> Is there some command with which I can change the default limit
>> size, so that I don't have to do this every time I create a class?
>
>
> I guess you could change the txqueuelen on eth0 before setting up the
> htb, but I would just make a script and add the qdiscs explicitly.
>
> limit 10 may be a bit short. Also, you could consider other qdiscs
> than pfifo, e.g. if you wanted some fairness between flows you could
> use sfq or fq_codel.

No, limit 10 is a lot short. 50-100 is more reasonable for a fifo at
these speeds.

> sfq defaults would be fine; fq_codel may need some parameter tweaking
> for lower bit rates, though I haven't properly tested it yet, just
> guessing from what man tc-fq_codel says.

sfq is a pretty good choice at low bandwidths and on older versions of
linux (particularly as fq_codel is heavily backported from linux 3.6
but not available everywhere). In addition to fair queuing, sfq has a
hard packet limit of 127 packets by default, which gives decent
performance in the range of bandwidths below 10mbit.

At rates below 3Mbit fq_codel needs a bit of tweaking, we've
discovered; basically it helps to have a "target" parameter greater
than the transmit time of a single MTU-sized packet - e.g. at 1Mbit we
tend to use a target of 15 and an interval of 150; at .5mbit, 30 and
300. Above 4mbit, target and interval are fine with the defaults.

Changing the default quantum also helps at rates below 40Mbit - we use 300.

For way more detail on how this stuff works,

http://www.bufferbloat.net/projects/cerowrt/wiki/Wondershaper_Must_Die

see cerowrt's SQM code...

and the related codel and fq_codel internet drafts...




-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article


* Re: Problem with HTB bandwidth slicing when using TCP traffic
  2014-04-10 19:17 Problem with HTB bandwidth slicing when using TCP traffic Slavica Tomovic
                   ` (4 preceding siblings ...)
  2014-04-14  1:30 ` Horace
@ 2014-04-14 22:40 ` Andy Furniss
  2014-04-14 22:50 ` Dave Taht
  6 siblings, 0 replies; 8+ messages in thread
From: Andy Furniss @ 2014-04-14 22:40 UTC (permalink / raw)
  To: lartc

Dave Taht wrote:
> On Sun, Apr 13, 2014 at 12:43 PM, Andy Furniss <adf.lists@gmail.com>
> wrote:

> At rates below 3Mbit fq_codel needs a bit of tweaking, we've
> discovered; basically it helps to have a "target" parameter greater
> than the transmit time of a single MTU-sized packet - e.g. at 1Mbit
> we tend to use a target of 15 and an interval of 150; at .5mbit, 30
> and 300. Above 4mbit, target and interval are fine with the defaults.
>
> Changing the default quantum also helps at rates below 40Mbit - we
> use 300.

Interesting info, thanks.

> For way more detail on how this stuff works,
>
> http://www.bufferbloat.net/projects/cerowrt/wiki/Wondershaper_Must_Die

Hmm, while I agree policers are often crap and wondershaper should
have died, or at least had the glaring htb misconfiguration fixed, the
author went incommunicado so it didn't happen.

To be fair to him, htb wasn't even in the kernel at the time, so it
was just a port of the CBQ script - I assume this is the reason.

It's a bit strange that the script is critically commented and is not
vanilla wondershaper (it has an extra class, making the configuration
error worse), yet there is no mention of that error.

"It" being that rates on htb leafs are not limited by the parent, so
when all classes are loaded it's way over the line rate.
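
As a made-up illustration (not the actual wondershaper numbers): with
a parent class of rate 1000kbit and leaves like

tc class add dev eth0 parent 1:1 classid 1:10 htb rate 800kbit ceil 1000kbit
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 800kbit ceil 1000kbit

htb will honour each leaf's rate regardless of the parent, so under
full load the two leaves get 1600kbit between them - well over the
1000kbit the parent was meant to enforce.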

As for fq_codel - it's good, but it wouldn't tempt me to bung my
interactive traffic in with bulk.



* Re: Problem with HTB bandwidth slicing when using TCP traffic
  2014-04-10 19:17 Problem with HTB bandwidth slicing when using TCP traffic Slavica Tomovic
                   ` (5 preceding siblings ...)
  2014-04-14 22:40 ` Andy Furniss
@ 2014-04-14 22:50 ` Dave Taht
  6 siblings, 0 replies; 8+ messages in thread
From: Dave Taht @ 2014-04-14 22:50 UTC (permalink / raw)
  To: lartc

On Mon, Apr 14, 2014 at 3:40 PM, Andy Furniss <adf.lists@gmail.com> wrote:
> Dave Taht wrote:
>>
>> On Sun, Apr 13, 2014 at 12:43 PM, Andy Furniss <adf.lists@gmail.com>
>> wrote:
>
>
>> At rates below 3Mbit fq_codel needs a bit of tweaking, we've
>> discovered; basically it helps to have a "target" parameter greater
>> than the transmit time of a single MTU-sized packet - e.g. at 1Mbit
>> we tend to use a target of 15 and an interval of 150; at .5mbit, 30
>> and 300. Above 4mbit, target and interval are fine with the defaults.
>>
>> Changing the default quantum also helps at rates below 40Mbit - we
>> use 300.
>
>
> Interesting info, thanks.
>
>
>> For way more detail on how this stuff works,
>>
>> http://www.bufferbloat.net/projects/cerowrt/wiki/Wondershaper_Must_Die
>
>
> Hmm, while I agree policers are often crap and wondershaper should
> have died, or at least had the glaring htb misconfiguration fixed,
> the author went incommunicado so it didn't happen.

I'd like to see an update happen.

> To be fair to him, htb wasn't even in the kernel at the time, so it
> was just a port of the CBQ script - I assume this is the reason.

I agree that most resources on shaping etc. are out of date.

> It's a bit strange that the script is critically commented and is not
> vanilla wondershaper (it has an extra class, making the configuration
> error worse), yet there is no mention of that error.

There must be a dozen variants of wondershaper out there. I think I
picked this one from the wondershaper "ng" project.

> "It" being that rates on htb leafs are not limited by the parent, so
> when all classes are loaded it's way over the line rate.

I missed that.

> As for fq_codel - it's good, but it wouldn't tempt me to bung my
> interactive traffic in with bulk.

Well, try it with and without. You might be surprised by the
effectiveness of the "fast queue" concept for most interactive
traffic.

https://tools.ietf.org/html/draft-hoeiland-joergensen-aqm-fq-codel-00

But a three-level shaper, "simple.qos", is what we mostly use. While
this code is mostly targeted at openwrt-derived systems, it can be
used with few modifications on any linux os (mostly making sure you
have modprobed the various needed modules and set a bunch of
variables right):

https://github.com/dtaht/ceropackages-3.10/tree/master/net/sqm-scripts/files/usr/lib/sqm

Example of use:

IFACE=eth0 UPLINK=4000 DOWNLINK=20000 INSMOD=modprobe ./simple.qos

The other variables are documented in functions.sh




-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article

