* stress testing 40Gbps linux bridge with Mpps - is HFSC a bottleneck?
@ 2020-10-04 19:10 kaskada
From: kaskada @ 2020-10-04 19:10 UTC (permalink / raw)
  To: netfilter

Hi,

I have three multi-CPU servers connected in a row: serverA --- serverB --- serverC.
All of them are interconnected with 40Gbps Mellanox X5 PCIe v4 cards over optical MPO cables.

All servers are multi-core beasts with two CPUs each, and all of them have Hyper-Threading off.
- serverA (16 Xeon cores) sends iperf3 traffic (small packets, forced by
lowering the MTU to 90 bytes on the 40Gbps port) to serverC (32 Xeon cores)
- serverB (128 Epyc cores) is set up as a Linux bridge

Server A is able to send around 20 million packets per second (Mpps) of iperf3 traffic to server C through server B (no iptables/ebtables rules).
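
For illustration, the sending side is invoked roughly like this (the address,
port and stream count below are placeholders, not my exact values; the small
frames come from the lowered MTU rather than from iperf3 itself):

# on serverC (placeholder port)
iperf3 -s -p 5201
# on serverA - parallel UDP streams, unlimited rate, small payload
iperf3 -c 10.0.0.3 -p 5201 -u -b 0 -l 64 -P 16 -t 60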


The problem appears when I add an HFSC config, even one simplified as much as
possible, to a 40Gbps port on the tested server B. The packet rate drops to
some 2.6 Mpps, while the CPUs are nearly idling.
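
(For reference, that kind of observation can be made on serverB with nothing
more than standard tools, e.g.:)

# per-core utilisation, including softirq time, while the test runs
mpstat -P ALL 1
# where the kernel time goes during the test
perf top -g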

This is the HFSC config. It is just a minimalistic config - I'm looking for
the limits of HFSC/Linux bridge/... and I narrowed the problem down to this:


# root qdisc and class:
tc qdisc add dev eth0 root handle 1: hfsc default ffff
tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 34gbit ul m2 34gbit
# default qdisc and class
tc class add dev eth0 parent 1:1 classid 1:ffff hfsc ls m2 34gbit ul m2 34gbit
tc qdisc add dev eth0 parent 1:ffff handle ffff: sfq perturb 5

--> all iperf3 traffic is passing through the default class ffff, which is
expected in this testing setup (so no filters/iptables classifiers/ipsets
are necessary)
--> this alone is enough to drop from 20 Mpps to just 2.6 Mpps...
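
(This can be confirmed with the standard tc counters, e.g.:)

# every packet should land in class 1:ffff
tc -s class show dev eth0
# qdisc-level stats - drops/overlimits on hfsc and sfq
tc -s qdisc show dev eth0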

What am I missing?
Thank you.
Pep.



* stress testing 40Gbps linux bridge with Mpps - is HFSC a bottleneck?
@ 2020-10-05 18:55 kaskada
From: kaskada @ 2020-10-05 18:55 UTC (permalink / raw)
  To: lartc

Hi,

I have three multi-CPU servers connected in a row: serverA --- serverB --- serverC.
All of them are interconnected with 40Gbps Mellanox X5 PCIe v4 cards over optical MPO cables.

All servers are multi-core beasts with two CPUs each, and all of them have Hyper-Threading off.

- serverA (16 Xeon cores) sends iperf3 traffic (small packets, forced by lowering the MTU to 90 bytes on the 40Gbps port) to serverC (32 Xeon cores)
- serverB (128 Epyc cores) is set up as a Linux bridge

Server A is able to send around 20 million packets per second (Mpps) of iperf3 traffic to server C through server B (no iptables/ebtables rules).


The problem appears when I add an HFSC config, even one simplified as much as possible, to a 40Gbps port on the tested server B. The packet rate drops to some 2.6 Mpps, while the CPUs are nearly idling.


This is the HFSC config. It is just a minimalistic config - I'm looking for the limits of HFSC/Linux bridge/... and I narrowed the problem down to this:


# root qdisc and class:
tc qdisc add dev eth0 root handle 1: hfsc default ffff
tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 34gbit ul m2 34gbit
# default qdisc and class
tc class add dev eth0 parent 1:1 classid 1:ffff hfsc ls m2 34gbit ul m2 34gbit
tc qdisc add dev eth0 parent 1:ffff handle ffff: sfq perturb 5

--> all iperf3 traffic is passing through the default class ffff, which is expected in this testing setup (so no filters/iptables classifiers/ipsets are necessary)
--> this alone is enough to drop from 20 Mpps to just 2.6 Mpps...
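
One variant I can still try, to see whether a single root qdisc instance is the serializing point, is hanging one HFSC tree under each TX queue via mq (sketch only - note the ul limit then applies per queue, not per port):

tc qdisc replace dev eth0 root handle 1: mq
# repeat the block below for parent 1:2, 1:3, ... (one per TX queue),
# using a distinct handle (11:, 12:, ...) for each instance
tc qdisc add dev eth0 parent 1:1 handle 10: hfsc default ffff
tc class add dev eth0 parent 10: classid 10:1 hfsc ls m2 34gbit ul m2 34gbit
tc class add dev eth0 parent 10:1 classid 10:ffff hfsc ls m2 34gbit ul m2 34gbit
tc qdisc add dev eth0 parent 10:ffff handle 100: sfq perturb 5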


What am I missing?

Thank you.
Pep.


