* Traffic shaping at 10~300mbps at a 10Gbps link
@ 2021-06-17 17:57 Ethy H. Brito
  0 siblings, 0 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-17 17:57 UTC (permalink / raw)
  To: xdp-newbies


Hi.

I'm having trouble shaping traffic for a few thousand users.
Every time the interface bandwidth reaches around 4~4.5Gbps, CPU load goes from 10~20% to 90~100% when using HTB. With HFSC this happens at around 2Gbps.
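For reference, the load numbers above come from watching per-CPU utilisation; to see where the cycles actually go, one could sample the kernel while it happens (perf availability is an assumption on my part):

    # per-CPU utilisation, 1-second samples
    mpstat -P ALL 1

    # sample kernel hot spots under load
    perf top -g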

I googled the problem, came across the xdp-project, and Mr. Brouer pointed me to this list.

Please tell me what info I can provide to help with this issue.
I have run a lot of experiments with no luck.

I am not a top-level expert but I learn quickly.

Regards

Ethy

[parent not found: <20210617121839.39aefb12@babalu>]
* Traffic shaping at 10~300mbps at a 10Gbps link
@ 2021-06-07 16:38 Ethy H. Brito
  2021-06-09 15:04 ` Ethy H. Brito
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Ethy H. Brito @ 2021-06-07 16:38 UTC (permalink / raw)
  To: lartc


Hi

For a few days now I have been trying, with no luck so far, to shape 3000 users at ceil speeds from 10 to 300Mbps on a 7/7Gbps link using HTB+SFQ+TC (filter by IP hashkey mask), tweaking HTB and SFQ parameters.

Everything seems right up to about 4Gbps overall download speed with shaping on:
no significant packet delay, no dropped packets, and no high average CPU load (no more than 20%, per htop).

But when the speed reaches about 4.5Gbps download (upload is about 500Mbps), chaos kicks in.
CPU load goes sky high (all 24 x 2.4GHz physical cores above 90%; 48 x 2.4GHz if the virtual cores are counted) and, as a consequence, packets are dropped (as reported by tc -s class sh ...), RTT goes above 200ms, and there are a lot of angry users. This happens from about 7 PM to 11 PM every day.
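For completeness, the drop stats come from plain tc (eth0 here is a placeholder for the shaped interface):

    # per-class packet/byte/drop counters
    tc -s class show dev eth0

    # qdisc-level drops, overlimits, requeues
    tc -s qdisc show dev eth0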

If I turn shaping off, everything returns to normal immediately: peaks of no more than 5Gbps (1-second average) are observed, with a CPU load of about 5%. So I infer the uplink is not saturated.
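Something like sar from sysstat gives the same kind of 1-second averages, if that helps:

    # per-interface rx/tx throughput, 1-second samples
    sar -n DEV 1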

I use one root HTB qdisc and one root (1:) HTB class.
Under it there are about 20~30 same-level (1:xx) inner classes that (sort of) separate the users by region,
and under these inner classes go the almost 3000 leaves (1:xxxx).
One inner class has about 900 users; the others have progressively fewer, some with just one user.
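A stripped-down sketch of what the script builds (the handles, rates, and interface name here are illustrative, not the real values; the u32 hash tables are omitted for brevity):

    # root qdisc and root class
    tc qdisc add dev eth0 root handle 1: htb default 30
    tc class add dev eth0 parent 1: classid 1:1 htb rate 7gbit ceil 7gbit

    # one of the ~20-30 regional inner classes (1:xx)
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1gbit ceil 7gbit

    # one of the ~3000 per-user leaves (1:xxxx), with SFQ attached
    tc class add dev eth0 parent 1:10 classid 1:1001 htb rate 10mbit ceil 300mbit
    tc qdisc add dev eth0 parent 1:1001 handle 1001: sfq perturb 10

    # simplified filter; the real one uses hashkey/mask tables
    tc filter add dev eth0 parent 1: protocol ip u32 \
        match ip dst 192.0.2.1/32 flowid 1:1001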

Is the way I'm using HTB+SFQ+TC suitable for this job?

Since the script that creates the shaping environment is quite long, I am not posting it here.

What can I send you guys to help solve this?
Fragments of the script, stats, some measurements?

Thanks.

Regards

Ethy



Thread overview: 13+ messages
2021-06-17 17:57 Traffic shaping at 10~300mbps at a 10Gbps link Ethy H. Brito
     [not found] <20210617121839.39aefb12@babalu>
2021-06-17 16:17 ` Jesper Dangaard Brouer
2021-06-07 16:38 Ethy H. Brito
2021-06-09 15:04 ` Ethy H. Brito
2021-06-09 19:11 ` Adam Niescierowicz
2021-06-09 19:30 ` Ethy H. Brito
2021-06-09 19:57 ` Adam Niescierowicz
2021-06-09 22:13 ` Ethy H. Brito
2021-06-12 11:24 ` Anatoly Muliarski
2021-06-14 16:13 ` Ethy H. Brito
2021-06-16 11:43 ` cronolog+lartc
2021-06-16 12:08 ` Erik Auerswald
2021-07-07  2:10 ` L A Walsh
