* max channels for mlx5
From: David Ahern @ 2020-05-04 0:41 UTC
To: Saeed Mahameed, netdev
Hi Saeed:
When I saw this commit last year:
commit 57c7fce14b1ad512a42abe33cb721a2ea3520d4b
Author: Fan Li <fanl@mellanox.com>
Date: Mon Dec 16 14:46:15 2019 +0200
net/mlx5: Increase the max number of channels to 128
I was expecting to be able to increase the number of channels on larger
systems (e.g., 96 cpus), but that is not working as I expected.
This is on net-next as of today:
60bcbc41ffb3 ("Merge branch 'net-smc-add-and-delete-link-processing'")
$ sudo ethtool -L eth0 combined 95
Cannot set device channel parameters: Invalid argument
As it stands the maximum is 63 (or is it 64 and cpus 0-63?):
$ sudo ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 63
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 63
A side effect of this limit is that XDP_REDIRECT drops packets if a vhost
thread gets scheduled on CPUs 64 and up, since the TX queue is chosen by
processor id:
int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
                   u32 flags)
{
        ...
        sq_num = smp_processor_id();

        if (unlikely(sq_num >= priv->channels.num))
                return -ENXIO;
So in my example, if the redirect happens on CPUs 64-95, which is a third of
my hardware threads, the packet is just dropped.
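One way to confirm where this happens, as a rough sketch only (it assumes
bpftrace is installed and that mlx5e_xdp_xmit is not inlined), is to count
the function's return values per CPU; on this box I would expect -6 (ENXIO)
to show up only for CPUs 64 and above:

sudo bpftrace -e 'kretprobe:mlx5e_xdp_xmit { @ret[cpu, (int32)retval] = count(); }'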
Am I missing something about how to use the expanded maximum?
David
* Re: max channels for mlx5
From: Saeed Mahameed @ 2020-05-04 21:46 UTC
To: dsahern, netdev
On Sun, 2020-05-03 at 18:41 -0600, David Ahern wrote:
> Hi Saeed:
>
> When I saw this commit last year:
>
> commit 57c7fce14b1ad512a42abe33cb721a2ea3520d4b
> Author: Fan Li <fanl@mellanox.com>
> Date: Mon Dec 16 14:46:15 2019 +0200
>
> net/mlx5: Increase the max number of channels to 128
>
> I was expecting to be able to increase the number of channels on
> larger
> systems (e.g., 96 cpus), but that is not working as I expected.
>
That patch should help, unless you are limited by FW/system MSI-X ..
How many MSI-X vectors are available for the eth0 port?
businfo=$(ethtool -i eth0 | grep bus-info | cut -d":" -f2-)
cat /proc/interrupts | grep $businfo | wc -l
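As a cross-check (a sketch reusing the $businfo variable above), the MSI-X
table size the device itself advertises can be read from PCI config space;
if that also shows Count=64, it suggests the cap comes from the device/FW
side rather than from host MSI-X allocation:

sudo lspci -s $businfo -vv | grep -i msi-x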
> This is on net-next as of today:
> 60bcbc41ffb3 ("Merge branch 'net-smc-add-and-delete-link-
> processing'")
>
> $ sudo ethtool -L eth0 combined 95
> Cannot set device channel parameters: Invalid argument
>
> As it stands the maximum is 63 (or is it 64 and cpus 0-63?):
> $ sudo ethtool -l eth0
> Channel parameters for eth0:
> Pre-set maximums:
> RX: 0
> TX: 0
> Other: 0
> Combined: 63
> Current hardware settings:
> RX: 0
> TX: 0
> Other: 0
> Combined: 63
>
So if the number of MSI-X vectors is 64, we can only use 63 for data path
completions (one vector is reserved for async/control events) ..
Do you have SR-IOV enabled?
What is the FW version you have?
We need to figure out whether this is a system MSI-X limitation or a FW
limitation.
> A side effect of this limit is XDP_REDIRECT drops packets if a vhost
> thread gets scheduled on cpus 64 and up since the tx queue is based
> on
> processor id:
>
> int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame
> **frames,
> u32 flags)
> {
> ...
> sq_num = smp_processor_id();
> if (unlikely(sq_num >= priv->channels.num))
> return -ENXIO;
>
> So in my example if the redirect happens on cpus 64-95, which is 1/3
> of
> my hardware threads, the packet is just dropped.
>
Known XDP redirect issue: you need to tune RSS and IRQ affinity on the RX
side and match the TX queue count and affinity on the TX side, so you won't
end up on a wrong CPU on the TX side.
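As a rough illustration only (untested here, assuming eth0 and the 63
combined channels from above), keeping RX completions, and therefore the
redirecting CPU, on CPUs that own an XDP TX queue would look roughly like:

businfo=$(ethtool -i eth0 | grep bus-info | cut -d":" -f2- | tr -d ' ')
# spread the RSS indirection table across the configured channels only
sudo ethtool -X eth0 equal 63
# pin each completion IRQ to a CPU that owns an XDP TX queue (CPUs 0-62)
cpu=0
for irq in $(grep "$businfo" /proc/interrupts | cut -d":" -f1); do
    echo $cpu | sudo tee /proc/irq/$irq/smp_affinity_list >/dev/null
    cpu=$(( (cpu + 1) % 63 ))
done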
> Am I missing something about how to use the expanded maximum?
>
> David
* Re: max channels for mlx5
From: David Ahern @ 2020-05-04 23:04 UTC
To: Saeed Mahameed, netdev
On 5/4/20 3:46 PM, Saeed Mahameed wrote:
> what is the amount of msix avaiable for eth0 port ?
>
> businfo=$(ethtool -i eth0 | grep bus-info | cut -d":" -f2-)
> cat /proc/interrupts | grep $businfo | wc -l
64
>
> So if number of msix is 64, we can only use 63 for data path
> completions ..
>
> do you have sriov enabled ?
no
>
> what is the FW version you have ?
$ ethtool -i eth0
driver: mlx5_core
version: 5.0-0
firmware-version: 14.27.1016 (MT_2420110034)
> we need to figure out if this is a system MSIX limitation or a FW
> limitation.
>
>> A side effect of this limit is XDP_REDIRECT drops packets if a vhost
>> thread gets scheduled on cpus 64 and up since the tx queue is based
>> on
>> processor id:
>>
>> int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame
>> **frames,
>> u32 flags)
>> {
>> ...
>> sq_num = smp_processor_id();
>> if (unlikely(sq_num >= priv->channels.num))
>> return -ENXIO;
>>
>> So in my example if the redirect happens on cpus 64-95, which is 1/3
>> of
>> my hardware threads, the packet is just dropped.
>>
>
> Know XDP redirect issue, you need to tune the RSS and affinity on RX
> side and match TX count and affinity on TX side, so you won't end up on
> a wrong CPU on the TX side
Understood for port-to-port redirect.
This use case is a virtual machine with a tap device + vhost bypassing the
host kernel stack and redirecting to a port. Losing a third of the CPUs for
vhost threads is a huge limitation.
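To put that in concrete terms, the only stopgap I see today is confining
every vhost worker thread to the CPUs that actually have a channel, e.g.
(rough sketch, assuming 63 channels as above):

for pid in $(pgrep vhost); do
    sudo taskset -pc 0-62 $pid
done

i.e., a third of the machine becomes off-limits for vhost.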