bpf.vger.kernel.org archive mirror
From: "Björn Töpel" <bjorn.topel@intel.com>
To: Maxim Mikityanskiy <maximmi@mellanox.com>,
	Magnus Karlsson <magnus.karlsson@intel.com>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH bpf] xsk: Clear pool even for inactive queues
Date: Tue, 19 Jan 2021 15:08:36 +0100
Message-ID: <9236949b-df13-9505-8ada-69ad26e03a89@intel.com>
In-Reply-To: <20210118160333.333439-1-maximmi@mellanox.com>

On 2021-01-18 17:03, Maxim Mikityanskiy wrote:
> The number of queues can change by means other than ethtool. For
> example, attaching an mqprio qdisc with num_tc > 1 creates multiple
> sets of TX queues, which may then be destroyed when mqprio is
> deleted. If an AF_XDP socket is created while mqprio is active,
> dev->_tx[queue_id].pool will be filled in, but real_num_tx_queues may
> later decrease when mqprio is deleted, which means the pool won't be
> NULLed, and a subsequent increase in the number of TX queues may
> expose a dangling pointer.
> 
> To avoid any potential misbehavior, this commit clears the pool for
> both RX and TX queues, regardless of real_num_*_queues, while still
> bounding the index by num_*_queues to avoid out-of-bounds accesses.
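
For anyone following along, the reason the stale entry eventually bites
is that the lookup side still gates on the *active* queue counts, so as
soon as real_num_tx_queues grows back past the stale queue_id, the old
pool pointer becomes reachable again. A rough sketch of the lookup path
in the same file (net/xdp/xsk.c, as I read it around this hunk; treat it
as a sketch rather than a verbatim quote):

struct xsk_buff_pool *xsk_get_pool_from_qid(struct net_device *dev,
                                            u16 queue_id)
{
        /* Bounded by the active queue counts: a queue_id that was out
         * of range while mqprio was gone becomes valid again once
         * real_num_tx_queues grows back, and whatever is still stored
         * in _tx[queue_id].pool gets returned.
         */
        if (queue_id < dev->real_num_rx_queues)
                return dev->_rx[queue_id].pool;
        if (queue_id < dev->real_num_tx_queues)
                return dev->_tx[queue_id].pool;

        return NULL;
}

With the change below, xsk_clear_pool_at_qid() NULLs the slot over the
full allocated range, so this lookup can no longer return a pool that
has outlived its socket.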
> 
> Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
> Fixes: a41b4f3c58dd ("xsk: simplify xdp_clear_umem_at_qid implementation")
> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>

Thanks, Maxim!

Acked-by: Björn Töpel <bjorn.topel@intel.com>

> ---
>   net/xdp/xsk.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 8037b04a9edd..4a83117507f5 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -108,9 +108,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
>   
>   void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
>   {
> -	if (queue_id < dev->real_num_rx_queues)
> +	if (queue_id < dev->num_rx_queues)
>   		dev->_rx[queue_id].pool = NULL;
> -	if (queue_id < dev->real_num_tx_queues)
> +	if (queue_id < dev->num_tx_queues)
>   		dev->_tx[queue_id].pool = NULL;
>   }
>   
> 
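
One more note on the asymmetry, since it may look odd at first glance:
the registration side can keep using real_num_*_queues, because binding
an AF_XDP socket to a queue that is not currently active is rejected
anyway; only the teardown path has to cover the full allocated range,
since the active count may have shrunk between bind and close. Roughly
(same file, condensed from my reading of the tree, so again a sketch
rather than a verbatim quote):

int xsk_reg_pool_at_qid(struct net_device *dev, struct xsk_buff_pool *pool,
                        u16 queue_id)
{
        /* Refuse to bind to a queue id that is not currently active. */
        if (queue_id >= max_t(unsigned int,
                              dev->real_num_rx_queues,
                              dev->real_num_tx_queues))
                return -EINVAL;

        /* Registration only ever writes slots below the active counts,
         * but by the time xsk_clear_pool_at_qid() runs, those counts
         * may already have shrunk (e.g. mqprio deleted), which is
         * exactly why the clear path switches to the allocated
         * num_*_queues bounds.
         */
        if (queue_id < dev->real_num_rx_queues)
                dev->_rx[queue_id].pool = pool;
        if (queue_id < dev->real_num_tx_queues)
                dev->_tx[queue_id].pool = pool;

        return 0;
}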

Thread overview: 3+ messages
2021-01-18 16:03 [PATCH bpf] xsk: Clear pool even for inactive queues Maxim Mikityanskiy
2021-01-19 14:08 ` Björn Töpel [this message]
2021-01-19 22:00 ` patchwork-bot+netdevbpf
