* [PATCH bpf] xsk: fix possible memory leak at socket close
@ 2020-10-27 12:32 Magnus Karlsson
  2020-10-29  8:35 ` Björn Töpel
From: Magnus Karlsson @ 2020-10-27 12:32 UTC (permalink / raw)
  To: magnus.karlsson, bjorn.topel, ast, daniel, netdev
  Cc: bpf, jonathan.lemon, maciej.fijalkowski, maciejromanfijalkowski

From: Magnus Karlsson <magnus.karlsson@intel.com>

Fix a possible memory leak at xsk socket close that is caused by the
refcounting of the umem object being wrong. The reference count of the
umem was decremented only after the pool had been freed. Note that if
the buffer pool is destroyed, it is important that the umem is
destroyed after the pool, otherwise the umem would disappear while the
driver is still running. And as the buffer pool needs to be destroyed
in a work queue, the umem is also (if its refcount reaches zero)
destroyed after the buffer pool in that same work queue.
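
To make that ordering concrete, here is a condensed, illustrative sketch of the
deferred teardown. It is not the verbatim kernel function (the real
xp_release_deferred() also handles locking and queue teardown); it only uses
functions that appear elsewhere in this patch: xp_clear_dev(), xp_destroy() and
xdp_put_umem().

/* Illustrative sketch of the work item scheduled by xp_put_pool():
 * the pool is detached and freed first, and only then is the umem
 * reference dropped, so the umem cannot vanish while the driver is
 * still using the pool.
 */
static void xp_release_deferred(struct work_struct *work)
{
	struct xsk_buff_pool *pool = container_of(work, struct xsk_buff_pool, work);
	struct xdp_umem *umem = pool->umem;

	xp_clear_dev(pool);	/* detach the pool from the device/driver */
	xp_destroy(pool);	/* free the pool itself */
	xdp_put_umem(umem);	/* then drop the umem reference */
}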

What was missing is that the refcount also needs to be decremented
when the pool is not freed and when the pool has not even been
created. The first case happens when the refcount of the pool is
higher than 1, i.e. it is still being used by some other socket using
the same device and queue id. In this case, it is safe to decrement
the refcount of the umem outside of the work queue as the umem will
never be freed because the refcount of the umem is always greater than
or equal to the refcount of the buffer pool. The second case is if the
buffer pool has not been created yet, i.e. the socket was closed
before it was bound but after the umem was created. In this case, it
is safe to destroy the umem outside of the work queue, since there is
no pool that can use it by definition.
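
Putting the two cases together, the close path after this patch boils down to
the following decision. This is only a condensed restatement of the
xsk_destruct()/xp_put_pool() hunks below, with the two cases annotated:

/* At socket close: */
if (!xp_put_pool(xs->pool)) {
	/* No deferred work was scheduled. Either xs->pool is NULL
	 * (second case: socket closed before it was bound) or the pool
	 * is still in use by another socket (first case: its refcount
	 * stayed above zero). Drop the umem reference here.
	 */
	xdp_put_umem(xs->umem);
}
/* Otherwise xp_put_pool() returned true and scheduled
 * xp_release_deferred(), which frees the pool and then drops the
 * umem reference from the work queue.
 */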

Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
Reported-by: syzbot+eb71df123dc2be2c1456@syzkaller.appspotmail.com
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
---
 include/net/xsk_buff_pool.h | 2 +-
 net/xdp/xsk.c               | 3 ++-
 net/xdp/xsk_buff_pool.c     | 7 +++++--
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 0140d08..01755b8 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -86,7 +86,7 @@ int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
 void xp_destroy(struct xsk_buff_pool *pool);
 void xp_release(struct xdp_buff_xsk *xskb);
 void xp_get_pool(struct xsk_buff_pool *pool);
-void xp_put_pool(struct xsk_buff_pool *pool);
+bool xp_put_pool(struct xsk_buff_pool *pool);
 void xp_clear_dev(struct xsk_buff_pool *pool);
 void xp_add_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs);
 void xp_del_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs);
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index b71a32e..cfbec39 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -1146,7 +1146,8 @@ static void xsk_destruct(struct sock *sk)
 	if (!sock_flag(sk, SOCK_DEAD))
 		return;

-	xp_put_pool(xs->pool);
+	if (!xp_put_pool(xs->pool))
+		xdp_put_umem(xs->umem);

 	sk_refcnt_debug_dec(sk);
 }
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 64c9e55..8a3bf4e 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -251,15 +251,18 @@ void xp_get_pool(struct xsk_buff_pool *pool)
 	refcount_inc(&pool->users);
 }

-void xp_put_pool(struct xsk_buff_pool *pool)
+bool xp_put_pool(struct xsk_buff_pool *pool)
 {
 	if (!pool)
-		return;
+		return false;

 	if (refcount_dec_and_test(&pool->users)) {
 		INIT_WORK(&pool->work, xp_release_deferred);
 		schedule_work(&pool->work);
+		return true;
 	}
+
+	return false;
 }

 static struct xsk_dma_map *xp_find_dma_map(struct xsk_buff_pool *pool)
--
2.7.4

* Re: [PATCH bpf] xsk: fix possible memory leak at socket close
  2020-10-27 12:32 [PATCH bpf] xsk: fix possible memory leak at socket close Magnus Karlsson
@ 2020-10-29  8:35 ` Björn Töpel
From: Björn Töpel @ 2020-10-29  8:35 UTC (permalink / raw)
  To: Magnus Karlsson
  Cc: Karlsson, Magnus, Björn Töpel, Alexei Starovoitov,
	Daniel Borkmann, Netdev, bpf, Jonathan Lemon, Fijalkowski,
	Maciej, Maciej Fijalkowski

On Wed, 28 Oct 2020 at 08:32, Magnus Karlsson <magnus.karlsson@gmail.com> wrote:
>
> From: Magnus Karlsson <magnus.karlsson@intel.com>
>
> Fix a possible memory leak at xsk socket close that is caused by the
> refcounting of the umem object being wrong. The reference count of the
> umem was decremented only after the pool had been freed. Note that if
> the buffer pool is destroyed, it is important that the umem is
> destroyed after the pool, otherwise the umem would disappear while the
> driver is still running. And as the buffer pool needs to be destroyed
> in a work queue, the umem is also (if its refcount reaches zero)
> destroyed after the buffer pool in that same work queue.
>
> What was missing is that the refcount also needs to be decremented
> when the pool is not freed and when the pool has not even been
> created. The first case happens when the refcount of the pool is
> higher than 1, i.e. it is still being used by some other socket using
> the same device and queue id. In this case, it is safe to decrement
> the refcount of the umem outside of the work queue as the umem will
> never be freed because the refcount of the umem is always greater than
> or equal to the refcount of the buffer pool. The second case is if the
> buffer pool has not been created yet, i.e. the socket was closed
> before it was bound but after the umem was created. In this case, it
> is safe to destroy the umem outside of the work queue, since there is
> no pool that can use it by definition.
>
> Fixes: 1c1efc2af158 ("xsk: Create and free buffer pool independently from umem")
> Reported-by: syzbot+eb71df123dc2be2c1456@syzkaller.appspotmail.com
> Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>

Acked-by: Björn Töpel <bjorn.topel@intel.com>
