* [PATCH bpf] xsk: Clear pool even for inactive queues
@ 2021-01-18 16:03 Maxim Mikityanskiy
  2021-01-19 14:08 ` Björn Töpel
  2021-01-19 22:00 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Maxim Mikityanskiy @ 2021-01-18 16:03 UTC (permalink / raw)
  To: Magnus Karlsson, Björn Töpel
  Cc: Jonathan Lemon, Alexei Starovoitov, Daniel Borkmann,
	David S. Miller, Jakub Kicinski, Jesper Dangaard Brouer,
	John Fastabend, netdev, bpf

The number of queues can change by means other than ethtool. For
example, attaching an mqprio qdisc with num_tc > 1 creates multiple
sets of TX queues, which may then be destroyed when mqprio is deleted.
If an AF_XDP socket is created while mqprio is active,
dev->_tx[queue_id].pool will be filled, but real_num_tx_queues may
later decrease when mqprio is deleted. The pool then won't be NULLed
on socket teardown, and a subsequent increase of the number of TX
queues may expose a dangling pointer.

To avoid any potential misbehavior, this commit clears the pool for RX
and TX queues regardless of real_num_*_queues, while still bounding
the queue index by num_*_queues to avoid out-of-bounds accesses.

Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
Fixes: a41b4f3c58dd ("xsk: simplify xdp_clear_umem_at_qid implementation")
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
---
 net/xdp/xsk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 8037b04a9edd..4a83117507f5 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -108,9 +108,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
 
 void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
 {
-	if (queue_id < dev->real_num_rx_queues)
+	if (queue_id < dev->num_rx_queues)
 		dev->_rx[queue_id].pool = NULL;
-	if (queue_id < dev->real_num_tx_queues)
+	if (queue_id < dev->num_tx_queues)
 		dev->_tx[queue_id].pool = NULL;
 }
 
-- 
2.25.1
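
To make the failure sequence concrete, the following is a minimal,
standalone C model of the scenario described in the commit message. It
is illustrative only: the array and counters merely mimic dev->_tx[],
num_tx_queues, and real_num_tx_queues, and the queue counts (16
allocated, 4 active) are hypothetical.

#include <stdio.h>
#include <stdlib.h>

#define NUM_TX_QUEUES 16	/* models dev->num_tx_queues: allocated size, fixed */

struct tx_queue { void *pool; };

static struct tx_queue txq[NUM_TX_QUEUES];	/* models dev->_tx[] */
static unsigned int real_num_tx = 4;	/* models dev->real_num_tx_queues */

/* Pre-patch behavior: only clears slots that are currently active. */
static void clear_pool_old(unsigned int qid)
{
	if (qid < real_num_tx)
		txq[qid].pool = NULL;
}

/* Patched behavior: clears any slot that exists in the array. */
static void clear_pool_new(unsigned int qid)
{
	if (qid < NUM_TX_QUEUES)
		txq[qid].pool = NULL;
}

int main(void)
{
	void *pool = malloc(64);	/* stands in for an xsk_buff_pool */

	real_num_tx = 16;	/* mqprio with num_tc > 1 attached: active count grows */
	txq[10].pool = pool;	/* AF_XDP socket bound to queue 10 */

	real_num_tx = 4;	/* mqprio deleted: active count shrinks */
	clear_pool_old(10);	/* socket teardown: no-op, since 10 >= 4 */
	free(pool);		/* the pool itself is gone */

	real_num_tx = 16;	/* queue count raised again */
	printf("old: txq[10].pool = %p (dangling)\n", txq[10].pool);

	clear_pool_new(10);	/* with the fix, teardown NULLs the slot */
	printf("new: txq[10].pool = %p\n", txq[10].pool);
	return 0;
}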



* Re: [PATCH bpf] xsk: Clear pool even for inactive queues
  2021-01-18 16:03 [PATCH bpf] xsk: Clear pool even for inactive queues Maxim Mikityanskiy
@ 2021-01-19 14:08 ` Björn Töpel
  2021-01-19 22:00 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Björn Töpel @ 2021-01-19 14:08 UTC (permalink / raw)
  To: Maxim Mikityanskiy, Magnus Karlsson
  Cc: Jonathan Lemon, Alexei Starovoitov, Daniel Borkmann,
	David S. Miller, Jakub Kicinski, Jesper Dangaard Brouer,
	John Fastabend, netdev, bpf

On 2021-01-18 17:03, Maxim Mikityanskiy wrote:
> The number of queues can change by means other than ethtool. For
> example, attaching an mqprio qdisc with num_tc > 1 creates multiple
> sets of TX queues, which may then be destroyed when mqprio is deleted.
> If an AF_XDP socket is created while mqprio is active,
> dev->_tx[queue_id].pool will be filled, but real_num_tx_queues may
> later decrease when mqprio is deleted. The pool then won't be NULLed
> on socket teardown, and a subsequent increase of the number of TX
> queues may expose a dangling pointer.
> 
> To avoid any potential misbehavior, this commit clears the pool for RX
> and TX queues regardless of real_num_*_queues, while still bounding
> the queue index by num_*_queues to avoid out-of-bounds accesses.
> 
> Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
> Fixes: a41b4f3c58dd ("xsk: simplify xdp_clear_umem_at_qid implementation")
> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>

Thanks, Maxim!

Acked-by: Björn Töpel <bjorn.topel@intel.com>

> ---
>   net/xdp/xsk.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 8037b04a9edd..4a83117507f5 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -108,9 +108,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid);
>   
>   void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id)
>   {
> -	if (queue_id < dev->real_num_rx_queues)
> +	if (queue_id < dev->num_rx_queues)
>   		dev->_rx[queue_id].pool = NULL;
> -	if (queue_id < dev->real_num_tx_queues)
> +	if (queue_id < dev->num_tx_queues)
>   		dev->_tx[queue_id].pool = NULL;
>   }
>   
> 
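
As context for the scenario above: userspace selects the queue when it
binds the AF_XDP socket, via sockaddr_xdp. Below is a heavily abridged
sketch; a real socket must also register a UMEM and set up the rings
before bind() succeeds, all of which is omitted here, and "eth0" and
queue 10 are hypothetical.

#include <linux/if_xdp.h>	/* struct sockaddr_xdp */
#include <net/if.h>		/* if_nametoindex() */
#include <sys/socket.h>
#include <stdio.h>

#ifndef AF_XDP
#define AF_XDP 44		/* from linux/socket.h; older libcs lack it */
#endif

int main(void)
{
	int fd = socket(AF_XDP, SOCK_RAW, 0);
	struct sockaddr_xdp sxdp = {
		.sxdp_family = AF_XDP,
		.sxdp_ifindex = if_nametoindex("eth0"),	/* hypothetical device */
		.sxdp_queue_id = 10,	/* only active while mqprio is attached */
	};

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Abridged: without UMEM registration and ring setup this bind()
	 * fails on a real system; it only shows where queue_id is
	 * supplied. On a successful bind, the kernel records the
	 * socket's buffer pool in dev->_rx[10].pool / dev->_tx[10].pool. */
	if (bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp)))
		perror("bind");
	return 0;
}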


* Re: [PATCH bpf] xsk: Clear pool even for inactive queues
  2021-01-18 16:03 [PATCH bpf] xsk: Clear pool even for inactive queues Maxim Mikityanskiy
  2021-01-19 14:08 ` Björn Töpel
@ 2021-01-19 22:00 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2021-01-19 22:00 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: magnus.karlsson, bjorn.topel, jonathan.lemon, ast, daniel, davem,
	kuba, hawk, john.fastabend, netdev, bpf

Hello:

This patch was applied to bpf/bpf.git (refs/heads/master):

On Mon, 18 Jan 2021 18:03:33 +0200 you wrote:
> The number of queues can change by means other than ethtool. For
> example, attaching an mqprio qdisc with num_tc > 1 creates multiple
> sets of TX queues, which may then be destroyed when mqprio is deleted.
> If an AF_XDP socket is created while mqprio is active,
> dev->_tx[queue_id].pool will be filled, but real_num_tx_queues may
> later decrease when mqprio is deleted. The pool then won't be NULLed
> on socket teardown, and a subsequent increase of the number of TX
> queues may expose a dangling pointer.
> 
> [...]

Here is the summary with links:
  - [bpf] xsk: Clear pool even for inactive queues
    https://git.kernel.org/bpf/bpf/c/b425e24a934e

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html


