* [PATCH for-next] io_uring: fix null deref in early failed ring destruction
@ 2022-03-23 11:14 Pavel Begunkov
From: Pavel Begunkov @ 2022-03-23 11:14 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

[  830.443583] Running test nop-all-sizes:
[  830.551826] BUG: kernel NULL pointer dereference, address: 00000000000000c0
[  830.551900] RIP: 0010:io_kill_timeouts+0xc5/0xf0
[  830.551951] Call Trace:
[  830.551958]  <TASK>
[  830.551970]  io_ring_ctx_wait_and_kill+0xb0/0x117
[  830.551975]  io_uring_setup.cold+0x4dc/0xb97
[  830.551990]  __x64_sys_io_uring_setup+0x15/0x20
[  830.552003]  do_syscall_64+0x3b/0x80
[  830.552011]  entry_SYSCALL_64_after_hwframe+0x44/0xae

Apparently, not all of the io_commit_cqring() guarding was useless: some of
it was protecting against cases where io_ring_ctx_wait_and_kill() is called
for a ring that failed early during creation. This particular oops points to

(gdb) l *(io_kill_timeouts+0xc5)
0xffffffff81b26b19 is in io_kill_timeouts (fs/io_uring.c:1813).
1808    }
1809
1810    static inline void io_commit_cqring(struct io_ring_ctx *ctx)
1811    {
1812            /* order cqe stores with ring update */
1813            smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
1814    }
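
That is, io_commit_cqring(), inlined into io_kill_timeouts(), dereferences
ctx->rings, which is NULL for a ring that failed early in setup. The faulting
address 00000000000000c0 is consistent with the offset of cq.tail inside
struct io_rings. A minimal userspace sketch of that arithmetic (the *_mock
types are hypothetical stand-ins for the kernel structs, assuming the
upstream layout where struct io_uring's head and tail are each
____cacheline_aligned_in_smp and cachelines are 64 bytes):

#include <stddef.h>
#include <stdio.h>

/* hypothetical stand-in for struct io_uring: head and tail each
 * start a fresh 64-byte cacheline, as in the kernel definition */
struct io_uring_mock {
	unsigned int head __attribute__((aligned(64)));
	unsigned int tail __attribute__((aligned(64)));
};

/* hypothetical stand-in for the start of struct io_rings */
struct io_rings_mock {
	struct io_uring_mock sq;
	struct io_uring_mock cq;
};

int main(void)
{
	/* prints 0xc0: with ctx->rings == NULL, &ctx->rings->cq.tail
	 * evaluates to exactly this offset, matching the oops above */
	printf("0x%zx\n", offsetof(struct io_rings_mock, cq.tail));
	return 0;
}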

A better way to handle the problem is to not enter the request cancellation
paths at all when ctx->rings hasn't been allocated.

Fixes: c9be622494c01 ("io_uring: remove extra ifs around io_commit_cqring")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 594ed8bc4585..6ad81d39d81e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -10309,11 +10309,13 @@ static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 		io_unregister_personality(ctx, index);
 	mutex_unlock(&ctx->uring_lock);
 
-	io_kill_timeouts(ctx, NULL, true);
-	io_poll_remove_all(ctx, NULL, true);
-
-	/* if we failed setting up the ctx, we might not have any rings */
-	io_iopoll_try_reap_events(ctx);
+	/* failed during ring init, it couldn't have issued any requests */
+	if (ctx->rings) {
+		io_kill_timeouts(ctx, NULL, true);
+		io_poll_remove_all(ctx, NULL, true);
+		/* if we failed setting up the ctx, we might not have any rings */
+		io_iopoll_try_reap_events(ctx);
+	}
 
 	INIT_WORK(&ctx->exit_work, io_ring_exit_work);
 	/*
@@ -10405,6 +10407,10 @@ static __cold void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 	struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
 	struct io_uring_task *tctx = task ? task->io_uring : NULL;
 
+	/* failed during ring init, it couldn't have issued any requests */
+	if (!ctx->rings)
+		return;
+
 	while (1) {
 		enum io_wq_cancel cret;
 		bool ret = false;
-- 
2.35.1
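
For reference, a hypothetical userspace sketch of how the early-failure
teardown can be exercised: ask for a maximum-size ring so that, under memory
pressure, io_uring_setup() fails after the ctx has been allocated but before
the rings are. This is an assumption about how the nop-all-sizes run above
hit the path, not a guaranteed trigger:

#include <linux/io_uring.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct io_uring_params p;
	long fd;

	memset(&p, 0, sizeof(p));
	/* 32768 matches the kernel-internal IORING_MAX_ENTRIES
	 * (an assumption; the constant is not exported via uapi) */
	fd = syscall(__NR_io_uring_setup, 32768, &p);
	if (fd < 0)
		perror("io_uring_setup");	/* e.g. ENOMEM takes the teardown path */
	else
		close(fd);
	return 0;
}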

