* Re: Traffic shaping at 10~300mbps at a 10Gbps link
       [not found] <20210617121839.39aefb12@babalu>
@ 2021-06-17 16:17 ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 13+ messages in thread
From: Jesper Dangaard Brouer @ 2021-06-17 16:17 UTC (permalink / raw)
  To: Ethy H. Brito
  Cc: brouer, Rich Brown, xdp-newbies, Robert Chacon, Yoel Caspersen


(Reply inlined below; please learn how to reply inline)

On Thu, 17 Jun 2021 12:20:32 -0300
"Ethy H. Brito" <ethy.brito@inexo.com.br> wrote:

> Hi, Mr. Brouer.
> 
> I read your comment at lists.bufferbloat.net about my issue shaping traffic.

 https://lists.bufferbloat.net/pipermail/bloat/2021-June/016441.html
 
> I don't know how it ended up there since I opened it at LARTC; I am
> not a subscriber to the bufferbloat list.

Yes, Rich (cc) moved your email to the bufferbloat list.  I just
pointed out that your reported issue was a classic TC root-qdisc
locking issue (which someone else had misinterpreted).

I'm Cc'ing the XDP-newbies list, as we should share our findings with the
community on how this TC-lock problem can be solved with XDP.


> About your XDP-project solution, how can I test it? 

There are two solutions in the XDP-project that involve TC-BPF in
combination with XDP.  The one you are talking about is:

 [2] https://github.com/xdp-project/xdp-cpumap-tc


> I read the "tc_mq_htb_setup_example.sh" script and did not understand
> it completely.

 [3] https://github.com/xdp-project/xdp-cpumap-tc/blob/master/bin/tc_mq_htb_setup_example.sh

> Do you think it will solve my problem?

Well, yes, *but* notice there is some "home-work" at the bottom of the
script.  You need to code up the redirect to the appropriate CPUs
yourself... whether you can partition the traffic to avoid the TC
root-lock is specific to your use-case.
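
For reference, the core idea in [3] is roughly this: replace the single
root qdisc with "mq", which gives every hardware TX queue its own child,
and hang an independent HTB tree (each with its own lock) under each of
them.  Below is only a minimal illustrative sketch of that layout; the
device name, handles and rates are invented, and the script in [3] is the
authoritative version:

DEV=eth0
TXQS=4     # number of hardware TX queues on $DEV

# mq creates one child class per TX queue: 7FFF:1, 7FFF:2, ...
tc qdisc replace dev $DEV root handle 7FFF: mq

for i in $(seq 1 $TXQS); do
    # Each TX queue gets its own HTB instance, hence its own qdisc lock.
    tc qdisc add dev $DEV parent 7FFF:$i handle $i: htb default 2
    tc class add dev $DEV parent $i:  classid $i:1 htb rate 10gbit  ceil 10gbit
    tc class add dev $DEV parent $i:1 classid $i:2 htb rate 300mbit ceil 300mbit
done

# The remaining "home-work": an XDP/TC-BPF program (see xdp-cpumap-tc) must
# steer each customer's traffic to a fixed CPU/TX queue, so all of that
# customer's packets always hit the same per-queue HTB tree.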

Can you explain your use-case for us?

> Would you help with my doubts so I can implement it?

Let's help each other.  It doesn't scale if I only help at an individual
level.  That is why I am bringing this to the mailing list
(xdp-newbies@vger.kernel.org).  I'm also Cc'ing Yoel and Robert, who have
related interests in this.

I do acknowledge that the documentation in [2] is lacking. Perhaps a
goal should be that we add documentation on how to use it?

A longer term goal is to add contents to this almost empty repo:
 https://github.com/xdp-project/BNG-router

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
                   ` (8 preceding siblings ...)
  2021-06-16 12:08 ` Erik Auerswald
@ 2021-07-07  2:10 ` L A Walsh
  9 siblings, 0 replies; 13+ messages in thread
From: L A Walsh @ 2021-07-07  2:10 UTC (permalink / raw)
  To: lartc

On 2021/06/09 08:04, Ethy H. Brito wrote:
> Hi Guys!
>
> Doesn't anybody can help me with this?
> Any help will be appreciated.
>
> Cheers
>
> Ethy
>   
You may need specialized HW, or powerful multi-CPU machines, to
process that much traffic in real time.

I tried just getting full throughput on a 10Gb interconnect
(no switches) between a server & Win-client.  Actually I first
tried pairing 2 of those interfaces since the Intel cards
came with 2 ports each.  That was way crazy -- I couldn't get
more than about 300-400MB/s, with lots of dropped packets amid
saturated CPUs.

(Note: MB = 1024**2 bytes (a byte already being 2**3 bits), while
anywhere I say Mb, that is 1000*1000 bits -- so base-2 prefixes
for base-2 units, and base-10 prefixes for counting 1's.)

With 1 port, I can hit 600MB/s reads + 400MB/s writes depending on
the tuning of packet sizes -- but smaller write-sizes put more load
(100% cpu) on the receiver, while larger ones put more load on the
sender.  Those numbers were samba speeds to/from memory (not even
to disk).  That was with NO shaping attempts.

I'm sure I'm doing things incorrectly, since trying to shape my
outside connection cost me efficiency even on a 25Mb down/10Mb up link,
which is fairly slow by most standards.

Trying to traffic shape any significant traffic @10Gb... you are
going to need multiple fast CPUs, but the problem is that splitting
the traffic between CPUs loses cache efficiency, which becomes
significant at those speeds.  So you would need dedicated HW (not
someone's workstation), and try to use on-card flow switching to
multiple on-card queues, trying to deliver separate queues to
separate CPUs.  It isn't likely to be pretty, but you might get
some benefit by leveraging the card's on-chip flow separation
before it hits your system's interrupt queues.

I know the Intel cards have a fair set of flow-differentiation
features, but I have never used them -- I would also try to bind
interrupt affinity so that separate queues go to separate CPUs
(if possible), though I'm not sure.

Looks like my ethernet card,
07:00.0 Ethernet controller: Intel Corporation Ethernet Controller 
10-Gigabit X540-AT2 (rev 01) (1 of 2)
has eth5-TxRx[0-11] mapped to interrupts [79-90].

If you can get the interrupt-servicing affinity set to the same
CPU where you process the packets, you'll get a large
optimization.  I really don't know about trying to run the IP
routing rules on a per-queue or affinity-bound CPU basis, though.
That doesn't mean it can't be done, but _ignorant_ me doesn't
know of anyone who has done it.
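
For the record, here is a concrete way to inspect and pin those queues on
Linux.  The IRQ and queue numbers below are only examples chosen to match
the eth5 listing above; check /proc/interrupts for the real values:

# Which IRQs the per-queue vectors use (eth5-TxRx-0 .. eth5-TxRx-11):
grep eth5 /proc/interrupts

# How many combined RX/TX queues the NIC exposes (and change the count):
ethtool -l eth5
ethtool -L eth5 combined 12

# Pin one queue's IRQ to one CPU (here IRQ 79 -> CPU 0):
echo 0 > /proc/irq/79/smp_affinity_list

# The Intel "flow differentiation" mentioned above is ntuple/Flow Director;
# it can steer a chosen flow to a chosen queue, e.g. (illustrative rule):
ethtool -K eth5 ntuple on
ethtool -N eth5 flow-type tcp4 dst-ip 10.16.224.5 action 3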

I know this is pretty general, but it's about the most depth
I have knowledge of in this area -- hopefully it is of
some help!

-linda Walsh


>
> On Mon, 7 Jun 2021 13:38:53 -0300
> "Ethy H. Brito" <ethy.brito@inexo.com.br> wrote:
>
>   
>> Hi
>>
>> I am having a hard time trying to shape 3000 users at ceil speeds from 10 to 300mbps in a 7/7Gbps link using
>> HTB+SFQ+TC(filter by IP hashkey mask) for a few days now tweaking HTB and SFQ parameters with no luck so far.
>>
>> Everything seems right, up 4Gbps overall download speed with shaping on.
>> I have no significant packets delay, no dropped packets and no high CPU average loads (not more than 20% - htop info)
>>
>> But when the speed comes to about 4.5Gbps download (upload is about 500mbps), chaos kicks in.
>> CPU load goes sky high (all 24x2.4Ghz physical cores above 90% - 48x2.4Ghz if count that virtualization is on) and as a
>> consequence packets are dropped (as reported by tc -s class sh ...), RTT goes above 200ms and a lots of ungry users. This
>> goes from about 7PM to 11 PM every day.
>>
>> If I turn shaping off, everything return to normality immediately and peaks of not more than 5Gbps (1 second average) are
>> observed and a CPU load of about 5%. So I infer the uplink is not crowded.
>>
>> I use one root HTB qdisc and one root (1:) HTB class.
>> Then about 20~30 same level (1:xx) inner classes to (sort of) separate the users by region 
>> And under these inner classes, goes the almost 3000 leaves (1:xxxx). 
>> I have one class with about 900 users and this quantity decreases by the other inner classes having some of them with just
>> one user.
>>
>> Is the way I'm using HTB+SFQ+TC suitable for this job?
>>
>> Since the script that creates the shaping environment is too long I do not post it here.
>>
>> What can I inform you guys to help me solve this?
>> Fragments of code, stats, some measurements? What?
>>
>> Thanks.
>>
>> Regards
>>
>> Ethy
>>     
>
>
>   

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Traffic shaping at 10~300mbps at a 10Gbps link
@ 2021-06-17 17:57 Ethy H. Brito
  0 siblings, 0 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-17 17:57 UTC (permalink / raw)
  To: xdp-newbies


Hi.

I'm having trouble shaping traffic for a few thousand users.
Every time the interface bandwidth reaches around 4~4.5Gbps, the CPU load goes from 10~20% to 90~100% when using HTB.  With HFSC this occurs around 2Gbps.
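
For what it's worth, this is the kind of diagnostic output I can collect while the problem is happening (assuming perf and sysstat are installed; nothing below is specific to my setup):

# Per-CPU and softirq load during the event:
mpstat -P ALL 1

# Per-qdisc drop/overlimit counters:
tc -s qdisc show dev eth0

# If most CPUs spin in a kernel lock (a symbol such as
# native_queued_spin_lock_slowpath near the top), that points at
# contention on the single root-qdisc lock:
perf top -g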

I googled the problem, bumped into this topic (the xdp-project), and Mr. Brouer told me about this list.

Please tell me what info I can feed you to help me with this issue.
I have run a lot of experiments with no luck.

I am not a top-level expert but I learn quickly.

Regards

Ethy

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
                   ` (7 preceding siblings ...)
  2021-06-16 11:43 ` cronolog+lartc
@ 2021-06-16 12:08 ` Erik Auerswald
  2021-07-07  2:10 ` L A Walsh
  9 siblings, 0 replies; 13+ messages in thread
From: Erik Auerswald @ 2021-06-16 12:08 UTC (permalink / raw)
  To: lartc

Hi,

this question has been discussed a bit on a different mailing list[1],
but the information given there[2][3] has not yet reached the LARTC list.
So here are URLs that may or may not provide insight into the problem,
but no solution so far:

[1] https://lists.bufferbloat.net/listinfo/bloat
[2] https://lists.bufferbloat.net/pipermail/bloat/2021-June/016440.html
[3] https://lists.bufferbloat.net/pipermail/bloat/2021-June/016441.html

I myself have no idea regarding the problem, sorry.

Thanks,
Erik
-- 
In the beginning, there was static routing.
                        -- RFC 1118


On Wed, Jun 16, 2021 at 12:43:12PM +0100, cronolog+lartc wrote:
> Hi,
> 
> The initial query mentioned the issue occurs at around 4.5Gbps throughput or higher.
> 
> I wonder if a 32-bit value is overflowing somewhere. If the traffic rate or some other
> value used for internal calculations of the qdisc uses an unsigned 32-bit value, then
> this can only go up to 4294967295 which is just short of 4.3Gbps. So would make sense
> problems start to occur when you go beyond this throughput rate. I don't think the
> filters or quantity of filters would be the problem, they just classify the traffic.
> The problem would likely be within how a qdisc is implemented and how and when it
> chooses to dequeue packets for transmission. More likely the bug is within HTB than
> SFQ, as I don't think SFQ cares about throughput rate, but that's just my guess.
> 
> How are the qdiscs/classes created? It would be interesting to compare with the
> configuration from the person who has it working at 5Gbps.
> 
> I also thought tc/iproute2 supported 64-bit values for traffic rate. The tc man page
> says otherwise i.e. only 32-bit, but the source code clearly shows 64-bit support for
> measuring rate, so I assume the man page is out of date. So if possible, might also be
> worth trying the setup on a newer kernel and version of iproute2 than what's used by
> default in the version of Ubuntu mentioned.
> 
> HFSC is also an option as already suggested. I've not used it myself so not familiar
> with its configuration or how it compares to HTB+SFQ. But if it doesn't have what
> appears to be a 32-bit calculation bug, then that's good.
> 
> Regards,
> 
> Ali
> 
> 
> On 14/06/2021 17:13, Ethy H. Brito wrote:
> 
> >On Sat, 12 Jun 2021 14:24:21 +0300
> >Anatoly Muliarski <x86ever@gmail.com> wrote:
> >
> >>HTB does not work correctly on high speed links.
> >>Try to use HFSC.
> >I've read some reports about problem of CPU consumption using HFSC+SFQ.
> >
> >What qdisc do you use associated with each leaf (users) HFSC classes?
> >
> >Does HFSC inner classes need associated queue disciplines?
> >
> >Cheers
> >
> >Ethy
> >
> >>2021-06-10 1:13 GMT+03:00, Ethy H. Brito <ethy.brito@inexo.com.br>:
> >>>On Wed, 9 Jun 2021 21:57:30 +0200
> >>>Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
> >>>>W dniu 09.06.2021 o 21:30, Ethy H. Brito pisze:
> >>>>>On Wed, 9 Jun 2021 21:11:03 +0200
> >>>>>Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
> >>>>>>How you make filters, it's one level or multilevel based?
> >>>>>Multi level.
> >>>>>
> >>>>>First test IPv4, IPv6 or Other Protocols.
> >>>>>Then test for two cidr "/19" and one /20 blocks we use.
> >>>>>Then for every /24 block inside the selected cidr block.
> >>>>>
> >>>>>I can send some fragments of each "test" block, if you want.
> >>>>Please send it will be easier.
> >>>>
> >>>>We have 5+ Gbps at peak, 12core cpu and it's working. We don't use ipv6
> >>>>on this router, but we use PPPoE.
> >>>>
> >>>>
> >>>>What's kernel version?, our is old 4.17.
> >>># lsb_release -a
> >>>No LSB modules are available.
> >>>Distributor ID:	Ubuntu
> >>>Description:	Ubuntu 20.04.2 LTS
> >>>Release:	20.04
> >>>Codename:	focal
> >>>
> >>># tc -V
> >>>tc utility, iproute2-ss200127
> >>>
> >>># uname -a
> >>>Linux quiron 5.4.0-66-generic #74-Ubuntu SMP Wed Jan 27 22:54:38 UTC 2021
> >>>x86_64 x86_64 x86_64 GNU/Linux
> >>>
> >>>
> >>>Here we go.
> >>>
> >>>eth0 is the internal interface (send packets to users - download)
> >>>The rules are inserted in the order bellow. There are no inversions.
> >>>
> >>>Just a very few lines were copied here.
> >>>I can send a WeTransfer link for those who may need the full script file.
> >>>
> >>>  ---------------------------8<------------------------------------
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 1 protocol ipv6 u32
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 2 protocol ip u32
> >>>
> >>># Some service networks
> >>>/sbin/tc filter add dev eth0 protocol ipv4 parent 1: pref 2 \
> >>>         u32 match ip dst 100.64.254.0/24 \
> >>>         flowid 1:fffe
> >>>/sbin/tc filter add dev eth0 protocol ipv6 parent 1: pref 1 \
> >>>         u32 match ip6 dst fe80::/10 \
> >>>         flowid 1:fffe
> >>>
> >>>#
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 1: u32 divisor 4
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 10: u32 divisor 256
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
> >>>ip6 dst 2001:12c4:a10:e000::/51  hashkey mask 0x0000ff00 at 28  link 10:
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 100: u32 divisor 256
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:e0:
> >>>match ip6 dst 2001:12c4:a10:e000::/56 hashkey mask 0x000000ff at 28 link
> >>>100:
> >>>
> >>>... links 101 (2001:12c4:a10:e100::/56), 102 (2001:12c4:a10:e200::/56) ...
> >>>up to
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:ff:
> >>>match ip6 dst 2001:12c4:a10:ff00::/56 hashkey mask 0x000000ff at 28 link
> >>>11f:
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 11: u32 divisor 256
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
> >>>ip6 dst 2001:12c4:6440::/51  hashkey mask 0x0000ff00 at 28  link 11:
> >>>
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 120: u32 divisor 256
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 11:00:
> >>>match ip6 dst 2001:12c4:6440:000::/56 hashkey mask 0x000000ff at 28 link
> >>>120:
> >>>
> >>>... same here up to link 13f (2001:12c4:6440:1f00::/56):
> >>>More IPv6 filters like these follows to others CIDRs.
> >>>
> >>>Now for IPv4 /24 blocks.
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 2: u32 divisor 4
> >>>
> >>># Hash Table para 256 Blocos /24 de endereços IP para o macro bloco
> >>>10.16.224.0/19
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 13: u32 divisor 256
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
> >>>ip dst 10.16.224.0/19  hashkey mask 0x0000ff00 at 16  link 13:
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 150: u32 divisor 256
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:e0:
> >>>match ip dst 10.16.224.0/24 hashkey mask 0x000000ff at 16 link 150:
> >>>
> >>>... link 151 (10.16.225.0/24), 153 (10.16.226.0/24) ... up to
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 16f: u32 divisor 256
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:ff:
> >>>match ip dst 10.16.255.0/24 hashkey mask 0x000000ff at 16 link 16f:
> >>>
> >>>Other CIDRs blocks for 100.64.0/19 follows like...
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 14: u32 divisor 256
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
> >>>ip dst 100.64.0.0/19  hashkey mask 0x0000ff00 at 16  link 14:
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 170: u32 divisor 256
> >>>/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 14:00:
> >>>match ip dst 100.64.0.0/24 hashkey mask 0x000000ff at 16 link 170:
> >>>
> >>>up to link 19f:
> >>>
> >>>And finally:
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 1 protocol ipv6 u32
> >>>match u32 0 0 link 1:
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 2 protocol ip u32
> >>>match u32 0 0 link 2:
> >>>
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 3 protocol arp  u32 match u32 0
> >>>0 classid 1:fffe
> >>>
> >>># Rede "default" para endereços IPV6 não Classificados anteriormente
> >>>/sbin/tc filter add dev eth0 parent 1:0 prio 1 handle ::ffe protocol ipv6
> >>>u32 match u32 0 0 classid 1:fffc
> >>>
> >>>Thanks for your time.
> >>>
> >>>Cheers
> >>>
> >>>Ethy
> >>
> >>-- 
> >>Best regards
> >>Anatoly Muliarski
> >
> >

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
                   ` (6 preceding siblings ...)
  2021-06-14 16:13 ` Ethy H. Brito
@ 2021-06-16 11:43 ` cronolog+lartc
  2021-06-16 12:08 ` Erik Auerswald
  2021-07-07  2:10 ` L A Walsh
  9 siblings, 0 replies; 13+ messages in thread
From: cronolog+lartc @ 2021-06-16 11:43 UTC (permalink / raw)
  To: lartc

Hi,

The initial query mentioned the issue occurs at around 4.5Gbps throughput or higher.

I wonder if a 32-bit value is overflowing somewhere. If the traffic rate or some other
value used in the qdisc's internal calculations is an unsigned 32-bit value, then it
can only go up to 4294967295, which is just short of 4.3G. So it would make sense that
problems start to occur when you go beyond this throughput rate. I don't think the
filters or the quantity of filters are the problem; they just classify the traffic.
The problem would more likely be in how a qdisc is implemented and how and when it
chooses to dequeue packets for transmission. The bug is more likely in HTB than in
SFQ, as I don't think SFQ cares about the throughput rate, but that's just my guess.
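
A quick back-of-the-envelope check of that suspicion (plain shell arithmetic, nothing measured here):

# If such a counter held *bits* per second in 32 bits:
echo $(( 2**32 ))        # 4294967296 -> overflows just below ~4.3 Gbit/s
# If it held *bytes* per second instead, the limit would be ~34 Gbit/s:
echo $(( 2**32 * 8 ))    # 34359738368 bits/s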

How are the qdiscs/classes created? It would be interesting to compare with the
configuration from the person who has it working at 5Gbps.

I also thought tc/iproute2 supported 64-bit values for traffic rates. The tc man page
says otherwise, i.e. only 32-bit, but the source code clearly shows 64-bit support for
measuring rates, so I assume the man page is out of date. So if possible, it might also
be worth trying the setup on a newer kernel and version of iproute2 than what is used
by default in the version of Ubuntu mentioned.

HFSC is also an option, as already suggested. I've not used it myself, so I'm not
familiar with its configuration or how it compares to HTB+SFQ. But if it doesn't have
what appears to be a 32-bit calculation bug, then that's good.
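
For anyone who wants to experiment with HFSC, a minimal skeleton comparable to one HTB leaf might look like the lines below. The rates, handles and the fallback class are invented for illustration; I have not tested this as a replacement for the original setup:

tc qdisc add dev eth0 root handle 1: hfsc default fffe
tc class add dev eth0 parent 1:  classid 1:1    hfsc ls rate 7gbit  ul rate 7gbit
tc class add dev eth0 parent 1:1 classid 1:1001 hfsc ls rate 10mbit ul rate 300mbit  # one "user"
tc class add dev eth0 parent 1:1 classid 1:fffe hfsc ls rate 100mbit                 # fallback/default
tc qdisc add dev eth0 parent 1:1001 sfq perturb 10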

Regards,

Ali


On 14/06/2021 17:13, Ethy H. Brito wrote:

> On Sat, 12 Jun 2021 14:24:21 +0300
> Anatoly Muliarski <x86ever@gmail.com> wrote:
>
>> HTB does not work correctly on high speed links.
>> Try to use HFSC.
> I've read some reports about problem of CPU consumption using HFSC+SFQ.
>
> What qdisc do you use associated with each leaf (users) HFSC classes?
>
> Does HFSC inner classes need associated queue disciplines?
>
> Cheers
>
> Ethy
>   
>
>> 2021-06-10 1:13 GMT+03:00, Ethy H. Brito <ethy.brito@inexo.com.br>:
>>> On Wed, 9 Jun 2021 21:57:30 +0200
>>> Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
>>>   
>>>> W dniu 09.06.2021 o 21:30, Ethy H. Brito pisze:
>>>>> On Wed, 9 Jun 2021 21:11:03 +0200
>>>>> Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
>>>>>   
>>>>>> How you make filters, it's one level or multilevel based?
>>>>> Multi level.
>>>>>
>>>>> First test IPv4, IPv6 or Other Protocols.
>>>>> Then test for two cidr "/19" and one /20 blocks we use.
>>>>> Then for every /24 block inside the selected cidr block.
>>>>>
>>>>> I can send some fragments of each "test" block, if you want.
>>>>>   
>>>> Please send it will be easier.
>>>>
>>>> We have 5+ Gbps at peak, 12core cpu and it's working. We don't use ipv6
>>>> on this router, but we use PPPoE.
>>>>
>>>>
>>>> What's kernel version?, our is old 4.17.
>>> # lsb_release -a
>>> No LSB modules are available.
>>> Distributor ID:	Ubuntu
>>> Description:	Ubuntu 20.04.2 LTS
>>> Release:	20.04
>>> Codename:	focal
>>>
>>> # tc -V
>>> tc utility, iproute2-ss200127
>>>
>>> # uname -a
>>> Linux quiron 5.4.0-66-generic #74-Ubuntu SMP Wed Jan 27 22:54:38 UTC 2021
>>> x86_64 x86_64 x86_64 GNU/Linux
>>>
>>>
>>> Here we go.
>>>
>>> eth0 is the internal interface (send packets to users - download)
>>> The rules are inserted in the order bellow. There are no inversions.
>>>
>>> Just a very few lines were copied here.
>>> I can send a WeTransfer link for those who may need the full script file.
>>>
>>>   ---------------------------8<------------------------------------
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 1 protocol ipv6 u32
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 2 protocol ip u32
>>>
>>> # Some service networks
>>> /sbin/tc filter add dev eth0 protocol ipv4 parent 1: pref 2 \
>>>          u32 match ip dst 100.64.254.0/24 \
>>>          flowid 1:fffe
>>> /sbin/tc filter add dev eth0 protocol ipv6 parent 1: pref 1 \
>>>          u32 match ip6 dst fe80::/10 \
>>>          flowid 1:fffe
>>>
>>> #
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 1: u32 divisor 4
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 10: u32 divisor 256
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
>>> ip6 dst 2001:12c4:a10:e000::/51  hashkey mask 0x0000ff00 at 28  link 10:
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 100: u32 divisor 256
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:e0:
>>> match ip6 dst 2001:12c4:a10:e000::/56 hashkey mask 0x000000ff at 28 link
>>> 100:
>>>
>>> ... links 101 (2001:12c4:a10:e100::/56), 102 (2001:12c4:a10:e200::/56) ...
>>> up to
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:ff:
>>> match ip6 dst 2001:12c4:a10:ff00::/56 hashkey mask 0x000000ff at 28 link
>>> 11f:
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 11: u32 divisor 256
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
>>> ip6 dst 2001:12c4:6440::/51  hashkey mask 0x0000ff00 at 28  link 11:
>>>
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 120: u32 divisor 256
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 11:00:
>>> match ip6 dst 2001:12c4:6440:000::/56 hashkey mask 0x000000ff at 28 link
>>> 120:
>>>
>>> ... same here up to link 13f (2001:12c4:6440:1f00::/56):
>>> More IPv6 filters like these follows to others CIDRs.
>>>
>>> Now for IPv4 /24 blocks.
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 2: u32 divisor 4
>>>
>>> # Hash Table para 256 Blocos /24 de endereços IP para o macro bloco
>>> 10.16.224.0/19
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 13: u32 divisor 256
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
>>> ip dst 10.16.224.0/19  hashkey mask 0x0000ff00 at 16  link 13:
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 150: u32 divisor 256
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:e0:
>>> match ip dst 10.16.224.0/24 hashkey mask 0x000000ff at 16 link 150:
>>>
>>> ... link 151 (10.16.225.0/24), 153 (10.16.226.0/24) ... up to
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 16f: u32 divisor 256
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:ff:
>>> match ip dst 10.16.255.0/24 hashkey mask 0x000000ff at 16 link 16f:
>>>
>>> Other CIDRs blocks for 100.64.0/19 follows like...
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 14: u32 divisor 256
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
>>> ip dst 100.64.0.0/19  hashkey mask 0x0000ff00 at 16  link 14:
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 170: u32 divisor 256
>>> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 14:00:
>>> match ip dst 100.64.0.0/24 hashkey mask 0x000000ff at 16 link 170:
>>>
>>> up to link 19f:
>>>
>>> And finally:
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 1 protocol ipv6 u32
>>> match u32 0 0 link 1:
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 2 protocol ip u32
>>> match u32 0 0 link 2:
>>>
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 3 protocol arp  u32 match u32 0
>>> 0 classid 1:fffe
>>>
>>> # Rede "default" para endereços IPV6 não Classificados anteriormente
>>> /sbin/tc filter add dev eth0 parent 1:0 prio 1 handle ::ffe protocol ipv6
>>> u32 match u32 0 0 classid 1:fffc
>>>
>>> Thanks for your time.
>>>
>>> Cheers
>>>
>>> Ethy
>>>   
>>
>> -- 
>> Best regards
>> Anatoly Muliarski
>
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
                   ` (5 preceding siblings ...)
  2021-06-12 11:24 ` Anatoly Muliarski
@ 2021-06-14 16:13 ` Ethy H. Brito
  2021-06-16 11:43 ` cronolog+lartc
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-14 16:13 UTC (permalink / raw)
  To: lartc

On Sat, 12 Jun 2021 14:24:21 +0300
Anatoly Muliarski <x86ever@gmail.com> wrote:

> HTB does not work correctly on high speed links.
> Try to use HFSC.

I've read some reports about problems with CPU consumption using HFSC+SFQ.

What qdisc do you use with each leaf (user) HFSC class?

Do HFSC inner classes need associated queue disciplines?

Cheers

Ethy
 

> 
> 2021-06-10 1:13 GMT+03:00, Ethy H. Brito <ethy.brito@inexo.com.br>:
> > On Wed, 9 Jun 2021 21:57:30 +0200
> > Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
> >  
> >> W dniu 09.06.2021 o 21:30, Ethy H. Brito pisze:  
> >> > On Wed, 9 Jun 2021 21:11:03 +0200
> >> > Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
> >> >  
> >> >> How you make filters, it's one level or multilevel based?  
> >> > Multi level.
> >> >
> >> > First test IPv4, IPv6 or Other Protocols.
> >> > Then test for two cidr "/19" and one /20 blocks we use.
> >> > Then for every /24 block inside the selected cidr block.
> >> >
> >> > I can send some fragments of each "test" block, if you want.
> >> >  
> >> Please send it will be easier.
> >>
> >> We have 5+ Gbps at peak, 12core cpu and it's working. We don't use ipv6
> >> on this router, but we use PPPoE.
> >>
> >>
> >> What's kernel version?, our is old 4.17.  
> >
> > # lsb_release -a
> > No LSB modules are available.
> > Distributor ID:	Ubuntu
> > Description:	Ubuntu 20.04.2 LTS
> > Release:	20.04
> > Codename:	focal
> >
> > # tc -V
> > tc utility, iproute2-ss200127
> >
> > # uname -a
> > Linux quiron 5.4.0-66-generic #74-Ubuntu SMP Wed Jan 27 22:54:38 UTC 2021
> > x86_64 x86_64 x86_64 GNU/Linux
> >
> >
> > Here we go.
> >
> > eth0 is the internal interface (send packets to users - download)
> > The rules are inserted in the order bellow. There are no inversions.
> >
> > Just a very few lines were copied here.
> > I can send a WeTransfer link for those who may need the full script file.
> >
> >  ---------------------------8<------------------------------------
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 1 protocol ipv6 u32
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 2 protocol ip u32
> >
> > # Some service networks
> > /sbin/tc filter add dev eth0 protocol ipv4 parent 1: pref 2 \
> >         u32 match ip dst 100.64.254.0/24 \
> >         flowid 1:fffe
> > /sbin/tc filter add dev eth0 protocol ipv6 parent 1: pref 1 \
> >         u32 match ip6 dst fe80::/10 \
> >         flowid 1:fffe
> >
> > #
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 1: u32 divisor 4
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 10: u32 divisor 256
> >
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
> > ip6 dst 2001:12c4:a10:e000::/51  hashkey mask 0x0000ff00 at 28  link 10:
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 100: u32 divisor 256
> >
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:e0:
> > match ip6 dst 2001:12c4:a10:e000::/56 hashkey mask 0x000000ff at 28 link
> > 100:
> >
> > ... links 101 (2001:12c4:a10:e100::/56), 102 (2001:12c4:a10:e200::/56) ...
> > up to
> >
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:ff:
> > match ip6 dst 2001:12c4:a10:ff00::/56 hashkey mask 0x000000ff at 28 link
> > 11f:
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 11: u32 divisor 256
> >
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
> > ip6 dst 2001:12c4:6440::/51  hashkey mask 0x0000ff00 at 28  link 11:
> >
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 120: u32 divisor 256
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 11:00:
> > match ip6 dst 2001:12c4:6440:000::/56 hashkey mask 0x000000ff at 28 link
> > 120:
> >
> > ... same here up to link 13f (2001:12c4:6440:1f00::/56):
> > More IPv6 filters like these follows to others CIDRs.
> >
> > Now for IPv4 /24 blocks.
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 2: u32 divisor 4
> >
> > # Hash Table para 256 Blocos /24 de endereços IP para o macro bloco
> > 10.16.224.0/19
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 13: u32 divisor 256
> >
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
> > ip dst 10.16.224.0/19  hashkey mask 0x0000ff00 at 16  link 13:
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 150: u32 divisor 256
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:e0:
> > match ip dst 10.16.224.0/24 hashkey mask 0x000000ff at 16 link 150:
> >
> > ... link 151 (10.16.225.0/24), 153 (10.16.226.0/24) ... up to
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 16f: u32 divisor 256
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:ff:
> > match ip dst 10.16.255.0/24 hashkey mask 0x000000ff at 16 link 16f:
> >
> > Other CIDRs blocks for 100.64.0/19 follows like...
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 14: u32 divisor 256
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
> > ip dst 100.64.0.0/19  hashkey mask 0x0000ff00 at 16  link 14:
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 170: u32 divisor 256
> > /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 14:00:
> > match ip dst 100.64.0.0/24 hashkey mask 0x000000ff at 16 link 170:
> >
> > up to link 19f:
> >
> > And finally:
> >
> > /sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 1 protocol ipv6 u32
> > match u32 0 0 link 1:
> >
> > /sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 2 protocol ip u32
> > match u32 0 0 link 2:
> >
> > /sbin/tc filter add dev eth0 parent 1:0 prio 3 protocol arp  u32 match u32 0
> > 0 classid 1:fffe
> >
> > # Rede "default" para endereços IPV6 não Classificados anteriormente
> > /sbin/tc filter add dev eth0 parent 1:0 prio 1 handle ::ffe protocol ipv6
> > u32 match u32 0 0 classid 1:fffc
> >
> > Thanks for your time.
> >
> > Cheers
> >
> > Ethy
> >  
> 
> 
> -- 
> Best regards
> Anatoly Muliarski


-- 

Ethy H. Brito         /"\
InterNexo Ltda.       \ /  CAMPANHA DA FITA ASCII - CONTRA MAIL HTML
+55 (12) 3797-6860     X   ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
S.J.Campos - Brasil   / \ 
 
PGP key: http://www.inexo.com.br/~ethy/0xC3F222A0.asc

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
                   ` (4 preceding siblings ...)
  2021-06-09 22:13 ` Ethy H. Brito
@ 2021-06-12 11:24 ` Anatoly Muliarski
  2021-06-14 16:13 ` Ethy H. Brito
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Anatoly Muliarski @ 2021-06-12 11:24 UTC (permalink / raw)
  To: lartc

HTB does not work correctly on high speed links.
Try to use HFSC.

2021-06-10 1:13 GMT+03:00, Ethy H. Brito <ethy.brito@inexo.com.br>:
> On Wed, 9 Jun 2021 21:57:30 +0200
> Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
>
>> W dniu 09.06.2021 o 21:30, Ethy H. Brito pisze:
>> > On Wed, 9 Jun 2021 21:11:03 +0200
>> > Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
>> >
>> >> How you make filters, it's one level or multilevel based?
>> > Multi level.
>> >
>> > First test IPv4, IPv6 or Other Protocols.
>> > Then test for two cidr "/19" and one /20 blocks we use.
>> > Then for every /24 block inside the selected cidr block.
>> >
>> > I can send some fragments of each "test" block, if you want.
>> >
>> Please send it will be easier.
>>
>> We have 5+ Gbps at peak, 12core cpu and it's working. We don't use ipv6
>> on this router, but we use PPPoE.
>>
>>
>> What's kernel version?, our is old 4.17.
>
> # lsb_release -a
> No LSB modules are available.
> Distributor ID:	Ubuntu
> Description:	Ubuntu 20.04.2 LTS
> Release:	20.04
> Codename:	focal
>
> # tc -V
> tc utility, iproute2-ss200127
>
> # uname -a
> Linux quiron 5.4.0-66-generic #74-Ubuntu SMP Wed Jan 27 22:54:38 UTC 2021
> x86_64 x86_64 x86_64 GNU/Linux
>
>
> Here we go.
>
> eth0 is the internal interface (send packets to users - download)
> The rules are inserted in the order bellow. There are no inversions.
>
> Just a very few lines were copied here.
> I can send a WeTransfer link for those who may need the full script file.
>
>  ---------------------------8<------------------------------------
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 1 protocol ipv6 u32
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 2 protocol ip u32
>
> # Some service networks
> /sbin/tc filter add dev eth0 protocol ipv4 parent 1: pref 2 \
>         u32 match ip dst 100.64.254.0/24 \
>         flowid 1:fffe
> /sbin/tc filter add dev eth0 protocol ipv6 parent 1: pref 1 \
>         u32 match ip6 dst fe80::/10 \
>         flowid 1:fffe
>
> #
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 1: u32 divisor 4
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 10: u32 divisor 256
>
> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
> ip6 dst 2001:12c4:a10:e000::/51  hashkey mask 0x0000ff00 at 28  link 10:
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 100: u32 divisor 256
>
> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:e0:
> match ip6 dst 2001:12c4:a10:e000::/56 hashkey mask 0x000000ff at 28 link
> 100:
>
> ... links 101 (2001:12c4:a10:e100::/56), 102 (2001:12c4:a10:e200::/56) ...
> up to
>
> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:ff:
> match ip6 dst 2001:12c4:a10:ff00::/56 hashkey mask 0x000000ff at 28 link
> 11f:
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 11: u32 divisor 256
>
> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match
> ip6 dst 2001:12c4:6440::/51  hashkey mask 0x0000ff00 at 28  link 11:
>
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 120: u32 divisor 256
> /sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 11:00:
> match ip6 dst 2001:12c4:6440:000::/56 hashkey mask 0x000000ff at 28 link
> 120:
>
> ... same here up to link 13f (2001:12c4:6440:1f00::/56):
> More IPv6 filters like these follows to others CIDRs.
>
> Now for IPv4 /24 blocks.
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 2: u32 divisor 4
>
> # Hash Table para 256 Blocos /24 de endereços IP para o macro bloco
> 10.16.224.0/19
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 13: u32 divisor 256
>
> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
> ip dst 10.16.224.0/19  hashkey mask 0x0000ff00 at 16  link 13:
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 150: u32 divisor 256
> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:e0:
> match ip dst 10.16.224.0/24 hashkey mask 0x000000ff at 16 link 150:
>
> ... link 151 (10.16.225.0/24), 153 (10.16.226.0/24) ... up to
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 16f: u32 divisor 256
> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:ff:
> match ip dst 10.16.255.0/24 hashkey mask 0x000000ff at 16 link 16f:
>
> Other CIDRs blocks for 100.64.0/19 follows like...
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 14: u32 divisor 256
> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match
> ip dst 100.64.0.0/19  hashkey mask 0x0000ff00 at 16  link 14:
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 170: u32 divisor 256
> /sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 14:00:
> match ip dst 100.64.0.0/24 hashkey mask 0x000000ff at 16 link 170:
>
> up to link 19f:
>
> And finally:
>
> /sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 1 protocol ipv6 u32
> match u32 0 0 link 1:
>
> /sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 2 protocol ip u32
> match u32 0 0 link 2:
>
> /sbin/tc filter add dev eth0 parent 1:0 prio 3 protocol arp  u32 match u32 0
> 0 classid 1:fffe
>
> # Rede "default" para endereços IPV6 não Classificados anteriormente
> /sbin/tc filter add dev eth0 parent 1:0 prio 1 handle ::ffe protocol ipv6
> u32 match u32 0 0 classid 1:fffc
>
> Thanks for your time.
>
> Cheers
>
> Ethy
>


-- 
Best regards
Anatoly Muliarski

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
                   ` (3 preceding siblings ...)
  2021-06-09 19:57 ` Adam Niescierowicz
@ 2021-06-09 22:13 ` Ethy H. Brito
  2021-06-12 11:24 ` Anatoly Muliarski
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-09 22:13 UTC (permalink / raw)
  To: lartc

On Wed, 9 Jun 2021 21:57:30 +0200
Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:

> W dniu 09.06.2021 o 21:30, Ethy H. Brito pisze:
> > On Wed, 9 Jun 2021 21:11:03 +0200
> > Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
> >  
> >> How you make filters, it's one level or multilevel based?  
> > Multi level.
> >
> > First test IPv4, IPv6 or Other Protocols.
> > Then test for two cidr "/19" and one /20 blocks we use.
> > Then for every /24 block inside the selected cidr block.
> >
> > I can send some fragments of each "test" block, if you want.
> >  
> Please send it will be easier.
> 
> We have 5+ Gbps at peak, 12core cpu and it's working. We don't use ipv6 
> on this router, but we use PPPoE.
> 
> 
> What's kernel version?, our is old 4.17.

# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.2 LTS
Release:	20.04
Codename:	focal

# tc -V
tc utility, iproute2-ss200127

# uname -a
Linux quiron 5.4.0-66-generic #74-Ubuntu SMP Wed Jan 27 22:54:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux


Here we go. 

eth0 is the internal interface (it sends packets to the users - download).
The rules are inserted in the order below. There are no inversions.

Only a very few lines were copied here.
I can send a WeTransfer link to those who may need the full script file.

 ---------------------------8<------------------------------------

/sbin/tc filter add dev eth0 parent 1:0 prio 1 protocol ipv6 u32

/sbin/tc filter add dev eth0 parent 1:0 prio 2 protocol ip u32

# Some service networks
/sbin/tc filter add dev eth0 protocol ipv4 parent 1: pref 2 \
        u32 match ip dst 100.64.254.0/24 \
        flowid 1:fffe
/sbin/tc filter add dev eth0 protocol ipv6 parent 1: pref 1 \
        u32 match ip6 dst fe80::/10 \
        flowid 1:fffe

# 
/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 1: u32 divisor 4

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 10: u32 divisor 256

/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match ip6 dst 2001:12c4:a10:e000::/51  hashkey mask 0x0000ff00 at 28  link 10:

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 100: u32 divisor 256

/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:e0: match ip6 dst 2001:12c4:a10:e000::/56 hashkey mask 0x000000ff at 28 link 100:

... links 101 (2001:12c4:a10:e100::/56), 102 (2001:12c4:a10:e200::/56) ... up to

/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 10:ff: match ip6 dst 2001:12c4:a10:ff00::/56 hashkey mask 0x000000ff at 28 link 11f:

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 11: u32 divisor 256

/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 1: match ip6 dst 2001:12c4:6440::/51  hashkey mask 0x0000ff00 at 28  link 11:


/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 120: u32 divisor 256
/sbin/tc filter add dev eth0 parent 1:0 protocol ipv6 prio 1 u32 ht 11:00: match ip6 dst 2001:12c4:6440:000::/56 hashkey mask 0x000000ff at 28 link 120:

... same here up to link 13f (2001:12c4:6440:1f00::/56):
More IPv6 filters like these follows to others CIDRs.

Now for IPv4 /24 blocks.

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 2: u32 divisor 4

# Hash table for 256 /24 blocks of IP addresses for the macro block 10.16.224.0/19

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 13: u32 divisor 256

/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match ip dst 10.16.224.0/19  hashkey mask 0x0000ff00 at 16  link 13:

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 150: u32 divisor 256
/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:e0: match ip dst 10.16.224.0/24 hashkey mask 0x000000ff at 16 link 150:

... link 151 (10.16.225.0/24), 153 (10.16.226.0/24) ... up to 

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 16f: u32 divisor 256
/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 13:ff: match ip dst 10.16.255.0/24 hashkey mask 0x000000ff at 16 link 16f:

Other CIDRs blocks for 100.64.0/19 follows like...
/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 14: u32 divisor 256
/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 2: match ip dst 100.64.0.0/19  hashkey mask 0x0000ff00 at 16  link 14:

/sbin/tc filter add dev eth0 parent 1:0 prio 99 handle 170: u32 divisor 256
/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 14:00: match ip dst 100.64.0.0/24 hashkey mask 0x000000ff at 16 link 170:

up to link 19f:

And finally:

/sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 1 protocol ipv6 u32 match u32 0 0 link 1:

/sbin/tc filter add dev eth0 parent 1:0 handle ::2 prio 2 protocol ip u32 match u32 0 0 link 2:

/sbin/tc filter add dev eth0 parent 1:0 prio 3 protocol arp  u32 match u32 0 0 classid 1:fffe

# "default" network for IPv6 addresses not classified above
/sbin/tc filter add dev eth0 parent 1:0 prio 1 handle ::ffe protocol ipv6 u32 match u32 0 0 classid 1:fffc
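
Note that the excerpt above only builds the hash tables and buckets; the per-user leaf filters that actually pick an HTB class were omitted. One such leaf entry looks roughly like this (address, bucket and classid here are invented for illustration):

# Hypothetical per-user filter: 10.16.224.5 hashes into bucket 05 of table 150:
# (last octet, per the hashkey mask above) and is sent to HTB class 1:1005.
/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 ht 150:05: match ip dst 10.16.224.5/32 flowid 1:1005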

Thanks for your time.

Cheers

Ethy

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
                   ` (2 preceding siblings ...)
  2021-06-09 19:30 ` Ethy H. Brito
@ 2021-06-09 19:57 ` Adam Niescierowicz
  2021-06-09 22:13 ` Ethy H. Brito
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Adam Niescierowicz @ 2021-06-09 19:57 UTC (permalink / raw)
  To: lartc

[-- Attachment #1: Type: text/plain, Size: 636 bytes --]

On 09.06.2021 at 21:30, Ethy H. Brito wrote:
> On Wed, 9 Jun 2021 21:11:03 +0200
> Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:
>
>> How you make filters, it's one level or multilevel based?
> Multi level.
>
> First test IPv4, IPv6 or Other Protocols.
> Then test for two cidr "/19" and one /20 blocks we use.
> Then for every /24 block inside the selected cidr block.
>
> I can send some fragments of each "test" block, if you want.
>
Please send it, that will be easier.

We have 5+ Gbps at peak on a 12-core CPU and it's working. We don't use IPv6
on this router, but we use PPPoE.


What's your kernel version? Ours is an old 4.17.




^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
  2021-06-09 15:04 ` Ethy H. Brito
  2021-06-09 19:11 ` Adam Niescierowicz
@ 2021-06-09 19:30 ` Ethy H. Brito
  2021-06-09 19:57 ` Adam Niescierowicz
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-09 19:30 UTC (permalink / raw)
  To: lartc

On Wed, 9 Jun 2021 21:11:03 +0200
Adam Niescierowicz <adam.niescierowicz@justnet.pl> wrote:

> How you make filters, it's one level or multilevel based?

Multi-level.

First we test for IPv4, IPv6 or other protocols.
Then we test for the two /19 and the one /20 CIDR blocks we use.
Then for every /24 block inside the selected CIDR block.

I can send some fragments of each "test" block, if you want.

Cheers

Ethy


> 
> W dniu 09.06.2021 o 17:04, Ethy H. Brito pisze:
> > Hi Guys!
> >
> > Doesn't anybody can help me with this?
> > Any help will be appreciated.
> >
> > Cheers
> >
> > Ethy
> >
> >
> > On Mon, 7 Jun 2021 13:38:53 -0300
> > "Ethy H. Brito" <ethy.brito@inexo.com.br> wrote:
> >  
> >> Hi
> >>
> >> I am having a hard time trying to shape 3000 users at ceil speeds from 10 to 300mbps in a 7/7Gbps link using
> >> HTB+SFQ+TC(filter by IP hashkey mask) for a few days now tweaking HTB and SFQ parameters with no luck so far.
> >>
> >> Everything seems right, up 4Gbps overall download speed with shaping on.
> >> I have no significant packets delay, no dropped packets and no high CPU average loads (not more than 20% - htop info)
> >>
> >> But when the speed comes to about 4.5Gbps download (upload is about 500mbps), chaos kicks in.
> >> CPU load goes sky high (all 24x2.4Ghz physical cores above 90% - 48x2.4Ghz if count that virtualization is on) and as a
> >> consequence packets are dropped (as reported by tc -s class sh ...), RTT goes above 200ms and a lots of ungry users. This
> >> goes from about 7PM to 11 PM every day.
> >>
> >> If I turn shaping off, everything return to normality immediately and peaks of not more than 5Gbps (1 second average) are
> >> observed and a CPU load of about 5%. So I infer the uplink is not crowded.
> >>
> >> I use one root HTB qdisc and one root (1:) HTB class.
> >> Then about 20~30 same level (1:xx) inner classes to (sort of) separate the users by region
> >> And under these inner classes, goes the almost 3000 leaves (1:xxxx).
> >> I have one class with about 900 users and this quantity decreases by the other inner classes having some of them with just
> >> one user.
> >>
> >> Is the way I'm using HTB+SFQ+TC suitable for this job?
> >>
> >> Since the script that creates the shaping environment is too long I do not post it here.
> >>
> >> What can I inform you guys to help me solve this?
> >> Fragments of code, stats, some measurements? What?
> >>
> >> Thanks.
> >>
> >> Regards
> >>
> >> Ethy  
> >  
> -- 
> ---
> Pozdrawiam
> Adam Nieścierowicz
> 


-- 

Ethy H. Brito         /"\
InterNexo Ltda.       \ /  CAMPANHA DA FITA ASCII - CONTRA MAIL HTML
+55 (12) 3797-6860     X   ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
S.J.Campos - Brasil   / \ 
 
PGP key: http://www.inexo.com.br/~ethy/0xC3F222A0.asc

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
  2021-06-09 15:04 ` Ethy H. Brito
@ 2021-06-09 19:11 ` Adam Niescierowicz
  2021-06-09 19:30 ` Ethy H. Brito
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Adam Niescierowicz @ 2021-06-09 19:11 UTC (permalink / raw)
  To: lartc

[-- Attachment #1: Type: text/plain, Size: 2050 bytes --]

How do you make the filters, is it one-level or multi-level based?

On 09.06.2021 at 17:04, Ethy H. Brito wrote:
> Hi Guys!
>
> Doesn't anybody can help me with this?
> Any help will be appreciated.
>
> Cheers
>
> Ethy
>
>
> On Mon, 7 Jun 2021 13:38:53 -0300
> "Ethy H. Brito" <ethy.brito@inexo.com.br> wrote:
>
>> Hi
>>
>> I am having a hard time trying to shape 3000 users at ceil speeds from 10 to 300mbps in a 7/7Gbps link using
>> HTB+SFQ+TC(filter by IP hashkey mask) for a few days now tweaking HTB and SFQ parameters with no luck so far.
>>
>> Everything seems right, up 4Gbps overall download speed with shaping on.
>> I have no significant packets delay, no dropped packets and no high CPU average loads (not more than 20% - htop info)
>>
>> But when the speed comes to about 4.5Gbps download (upload is about 500mbps), chaos kicks in.
>> CPU load goes sky high (all 24x2.4Ghz physical cores above 90% - 48x2.4Ghz if count that virtualization is on) and as a
>> consequence packets are dropped (as reported by tc -s class sh ...), RTT goes above 200ms and a lots of ungry users. This
>> goes from about 7PM to 11 PM every day.
>>
>> If I turn shaping off, everything return to normality immediately and peaks of not more than 5Gbps (1 second average) are
>> observed and a CPU load of about 5%. So I infer the uplink is not crowded.
>>
>> I use one root HTB qdisc and one root (1:) HTB class.
>> Then about 20~30 same level (1:xx) inner classes to (sort of) separate the users by region
>> And under these inner classes, goes the almost 3000 leaves (1:xxxx).
>> I have one class with about 900 users and this quantity decreases by the other inner classes having some of them with just
>> one user.
>>
>> Is the way I'm using HTB+SFQ+TC suitable for this job?
>>
>> Since the script that creates the shaping environment is too long I do not post it here.
>>
>> What can I inform you guys to help me solve this?
>> Fragments of code, stats, some measurements? What?
>>
>> Thanks.
>>
>> Regards
>>
>> Ethy
>
-- 
---
Regards
Adam Nieścierowicz




^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Traffic shaping at 10~300mbps at a 10Gbps link
  2021-06-07 16:38 Ethy H. Brito
@ 2021-06-09 15:04 ` Ethy H. Brito
  2021-06-09 19:11 ` Adam Niescierowicz
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-09 15:04 UTC (permalink / raw)
  To: lartc


Hi Guys!

Can't anybody help me with this?
Any help will be appreciated.

Cheers

Ethy


On Mon, 7 Jun 2021 13:38:53 -0300
"Ethy H. Brito" <ethy.brito@inexo.com.br> wrote:

> Hi
> 
> I am having a hard time trying to shape 3000 users at ceil speeds from 10 to 300mbps in a 7/7Gbps link using
> HTB+SFQ+TC(filter by IP hashkey mask) for a few days now tweaking HTB and SFQ parameters with no luck so far.
> 
> Everything seems right, up 4Gbps overall download speed with shaping on.
> I have no significant packets delay, no dropped packets and no high CPU average loads (not more than 20% - htop info)
> 
> But when the speed comes to about 4.5Gbps download (upload is about 500mbps), chaos kicks in.
> CPU load goes sky high (all 24x2.4Ghz physical cores above 90% - 48x2.4Ghz if count that virtualization is on) and as a
> consequence packets are dropped (as reported by tc -s class sh ...), RTT goes above 200ms and a lots of ungry users. This
> goes from about 7PM to 11 PM every day.
> 
> If I turn shaping off, everything return to normality immediately and peaks of not more than 5Gbps (1 second average) are
> observed and a CPU load of about 5%. So I infer the uplink is not crowded.
> 
> I use one root HTB qdisc and one root (1:) HTB class.
> Then about 20~30 same level (1:xx) inner classes to (sort of) separate the users by region 
> And under these inner classes, goes the almost 3000 leaves (1:xxxx). 
> I have one class with about 900 users and this quantity decreases by the other inner classes having some of them with just
> one user.
> 
> Is the way I'm using HTB+SFQ+TC suitable for this job?
> 
> Since the script that creates the shaping environment is too long I do not post it here.
> 
> What can I inform you guys to help me solve this?
> Fragments of code, stats, some measurements? What?
> 
> Thanks.
> 
> Regards
> 
> Ethy


-- 

Ethy H. Brito         /"\
InterNexo Ltda.       \ /  CAMPANHA DA FITA ASCII - CONTRA MAIL HTML
+55 (12) 3797-6860     X   ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
S.J.Campos - Brasil   / \ 
 
PGP key: http://www.inexo.com.br/~ethy/0xC3F222A0.asc

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Traffic shaping at 10~300mbps at a 10Gbps link
@ 2021-06-07 16:38 Ethy H. Brito
  2021-06-09 15:04 ` Ethy H. Brito
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-07 16:38 UTC (permalink / raw)
  To: lartc


Hi

I have been having a hard time for a few days now trying to shape 3000 users at ceil speeds from 10 to 300mbps on a 7/7Gbps link using HTB+SFQ+TC (filter by IP hashkey mask), tweaking HTB and SFQ parameters with no luck so far.

Everything seems right up to 4Gbps overall download speed with shaping on.
I have no significant packet delay, no dropped packets and no high CPU average load (not more than 20% - htop info).

But when the speed reaches about 4.5Gbps download (upload is about 500mbps), chaos kicks in.
CPU load goes sky high (all 24x2.4GHz physical cores above 90% - 48x2.4GHz if you count that virtualization is on) and as a consequence packets are dropped (as reported by tc -s class sh ...), RTT goes above 200ms and there are a lot of angry users. This goes on from about 7PM to 11PM every day.

If I turn shaping off, everything returns to normal immediately; peaks of not more than 5Gbps (1-second average) are observed and a CPU load of about 5%. So I infer the uplink is not congested.

I use one root HTB qdisc and one root (1:) HTB class.
Then there are about 20~30 same-level (1:xx) inner classes to (sort of) separate the users by region.
Under these inner classes go the almost 3000 leaves (1:xxxx).
I have one inner class with about 900 users, and the count decreases across the other inner classes, some of them having just one user.
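
For concreteness, that hierarchy corresponds roughly to commands of this shape (handles and rates here are invented, not taken from my real script):

tc qdisc add dev eth0 root handle 1: htb default fffe
tc class add dev eth0 parent 1:   classid 1:1    htb rate 7gbit  ceil 7gbit     # root class
tc class add dev eth0 parent 1:1  classid 1:10   htb rate 1gbit  ceil 7gbit     # one of the ~20-30 region classes
tc class add dev eth0 parent 1:10 classid 1:1001 htb rate 10mbit ceil 300mbit   # one of the ~3000 user leaves
tc qdisc add dev eth0 parent 1:1001 sfq perturb 10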

Is the way I'm using HTB+SFQ+TC suitable for this job?

Since the script that creates the shaping environment is too long, I am not posting it here.

What information can I give you guys to help me solve this?
Fragments of code, stats, some measurements? What?

Thanks.

Regards

Ethy

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2021-07-07  2:10 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20210617121839.39aefb12@babalu>
2021-06-17 16:17 ` Traffic shaping at 10~300mbps at a 10Gbps link Jesper Dangaard Brouer
2021-06-17 17:57 Ethy H. Brito
  -- strict thread matches above, loose matches on Subject: below --
2021-06-07 16:38 Ethy H. Brito
2021-06-09 15:04 ` Ethy H. Brito
2021-06-09 19:11 ` Adam Niescierowicz
2021-06-09 19:30 ` Ethy H. Brito
2021-06-09 19:57 ` Adam Niescierowicz
2021-06-09 22:13 ` Ethy H. Brito
2021-06-12 11:24 ` Anatoly Muliarski
2021-06-14 16:13 ` Ethy H. Brito
2021-06-16 11:43 ` cronolog+lartc
2021-06-16 12:08 ` Erik Auerswald
2021-07-07  2:10 ` L A Walsh
