stable.vger.kernel.org archive mirror
* [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io.
       [not found] <cover.1607293068.git.asml.silence@gmail.com>
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions Pavel Begunkov
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: Xiaoguang Wang, stable, Abaci Fuzz, Joseph Qi

From: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>

Abaci Fuzz reported a double-free or invalid-free BUG in io_commit_cqring():
[   95.504842] BUG: KASAN: double-free or invalid-free in io_commit_cqring+0x3ec/0x8e0
[   95.505921]
[   95.506225] CPU: 0 PID: 4037 Comm: io_wqe_worker-0 Tainted: G    B
W         5.10.0-rc5+ #1
[   95.507434] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[   95.508248] Call Trace:
[   95.508683]  dump_stack+0x107/0x163
[   95.509323]  ? io_commit_cqring+0x3ec/0x8e0
[   95.509982]  print_address_description.constprop.0+0x3e/0x60
[   95.510814]  ? vprintk_func+0x98/0x140
[   95.511399]  ? io_commit_cqring+0x3ec/0x8e0
[   95.512036]  ? io_commit_cqring+0x3ec/0x8e0
[   95.512733]  kasan_report_invalid_free+0x51/0x80
[   95.513431]  ? io_commit_cqring+0x3ec/0x8e0
[   95.514047]  __kasan_slab_free+0x141/0x160
[   95.514699]  kfree+0xd1/0x390
[   95.515182]  io_commit_cqring+0x3ec/0x8e0
[   95.515799]  __io_req_complete.part.0+0x64/0x90
[   95.516483]  io_wq_submit_work+0x1fa/0x260
[   95.517117]  io_worker_handle_work+0xeac/0x1c00
[   95.517828]  io_wqe_worker+0xc94/0x11a0
[   95.518438]  ? io_worker_handle_work+0x1c00/0x1c00
[   95.519151]  ? __kthread_parkme+0x11d/0x1d0
[   95.519806]  ? io_worker_handle_work+0x1c00/0x1c00
[   95.520512]  ? io_worker_handle_work+0x1c00/0x1c00
[   95.521211]  kthread+0x396/0x470
[   95.521727]  ? _raw_spin_unlock_irq+0x24/0x30
[   95.522380]  ? kthread_mod_delayed_work+0x180/0x180
[   95.523108]  ret_from_fork+0x22/0x30
[   95.523684]
[   95.523985] Allocated by task 4035:
[   95.524543]  kasan_save_stack+0x1b/0x40
[   95.525136]  __kasan_kmalloc.constprop.0+0xc2/0xd0
[   95.525882]  kmem_cache_alloc_trace+0x17b/0x310
[   95.533930]  io_queue_sqe+0x225/0xcb0
[   95.534505]  io_submit_sqes+0x1768/0x25f0
[   95.535164]  __x64_sys_io_uring_enter+0x89e/0xd10
[   95.535900]  do_syscall_64+0x33/0x40
[   95.536465]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   95.537199]
[   95.537505] Freed by task 4035:
[   95.538003]  kasan_save_stack+0x1b/0x40
[   95.538599]  kasan_set_track+0x1c/0x30
[   95.539177]  kasan_set_free_info+0x1b/0x30
[   95.539798]  __kasan_slab_free+0x112/0x160
[   95.540427]  kfree+0xd1/0x390
[   95.540910]  io_commit_cqring+0x3ec/0x8e0
[   95.541516]  io_iopoll_complete+0x914/0x1390
[   95.542150]  io_do_iopoll+0x580/0x700
[   95.542724]  io_iopoll_try_reap_events.part.0+0x108/0x200
[   95.543512]  io_ring_ctx_wait_and_kill+0x118/0x340
[   95.544206]  io_uring_release+0x43/0x50
[   95.544791]  __fput+0x28d/0x940
[   95.545291]  task_work_run+0xea/0x1b0
[   95.545873]  do_exit+0xb6a/0x2c60
[   95.546400]  do_group_exit+0x12a/0x320
[   95.546967]  __x64_sys_exit_group+0x3f/0x50
[   95.547605]  do_syscall_64+0x33/0x40
[   95.548155]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

The reason is that once we get a non-EAGAIN error in io_wq_submit_work(),
we complete the request by calling io_req_complete(), which holds
completion_lock while calling io_commit_cqring(). For polled io, however,
io_iopoll_complete() does not hold completion_lock when it calls
io_commit_cqring(), so there may be concurrent access to ctx->defer_list
and a double free may happen.

To fix this bug, we always let io_iopoll_complete() complete polled io.
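
For readers less familiar with the IOPOLL model, the hand-off the hunk
below implements can be sketched in plain userspace C (illustrative only;
fake_req, worker_fail_req and poller_reap are made-up names, not kernel
or liburing APIs). A worker that hits an error only marks the request as
done; a single reaper is the only code path that completes and frees it,
so a double free like the one reported above cannot happen:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_req {
	struct fake_req *next;
	int res;
	atomic_bool done;	/* set by the worker, reaped by the poller */
};

static struct fake_req *poll_list;	/* walked only by the poller */

/* worker error path: do NOT complete or free here, only mark it done */
static void worker_fail_req(struct fake_req *req, int err)
{
	req->res = err;
	atomic_store_explicit(&req->done, true, memory_order_release);
}

/* poller: the only place requests are completed and freed */
static void poller_reap(void)
{
	struct fake_req **pp = &poll_list;

	while (*pp) {
		struct fake_req *req = *pp;

		if (atomic_load_explicit(&req->done, memory_order_acquire)) {
			*pp = req->next;
			printf("completed, res=%d\n", req->res);
			free(req);
		} else {
			pp = &req->next;
		}
	}
}

int main(void)
{
	struct fake_req *req = calloc(1, sizeof(*req));

	req->next = poll_list;
	poll_list = req;
	worker_fail_req(req, -5);	/* say, -EIO reported by the worker */
	poller_reap();			/* the single owner frees exactly once */
	return 0;
}

In the kernel the reaper role is played by io_iopoll_complete() running
under uring_lock, which is why the error path below only calls
kiocb_done() and leaves the actual completion to it.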

Cc: <stable@vger.kernel.org> # 5.5+
Reported-by: Abaci Fuzz <abaci@linux.alibaba.com>
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index a2a7c65a77aa..c895a306f919 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6074,8 +6074,19 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
 	}
 
 	if (ret) {
-		req_set_fail_links(req);
-		io_req_complete(req, ret);
+		/*
+		 * io_iopoll_complete() does not hold completion_lock to complete
+		 * polled io, so here for polled io, just mark it done and still let
+		 * io_iopoll_complete() complete it.
+		 */
+		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
+			struct kiocb *kiocb = &req->rw.kiocb;
+
+			kiocb_done(kiocb, ret, NULL);
+		} else {
+			req_set_fail_links(req);
+			io_req_complete(req, ret);
+		}
 	}
 
 	return io_steal_work(req);
-- 
2.24.0



* [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions
       [not found] <cover.1607293068.git.asml.silence@gmail.com>
  2020-12-06 22:22 ` [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-07 18:31   ` Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 3/5] io_uring: fix racy IOPOLL flush overflow Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 4/5] io_uring: fix io_cqring_events()'s noflush Pavel Begunkov
  3 siblings, 1 reply; 5+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

IOPOLL allows buffer remove/provide requests, but they don't synchronise
by the rules of IOPOLL, namely they have to hold uring_lock.
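
The ordering rule the hunks below enforce can be shown in isolation with
a small standalone sketch (invented names such as finish_buffer_op and
complete_polled; this is not the real io_uring code): when polled
completions are reaped under the ring lock, they must also be queued
under it, i.e. before the unlock, while normal completions take their
own lock and may run after it.

#include <pthread.h>
#include <stdbool.h>

struct fake_cqe {
	struct fake_cqe *next;
	int res;
};

static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t completion_lock = PTHREAD_MUTEX_INITIALIZER;
static struct fake_cqe *polled_done;	/* reaped only under ring_lock */

/* polled completion: caller must already hold ring_lock */
static void complete_polled(struct fake_cqe *cqe, int res)
{
	cqe->res = res;
	cqe->next = polled_done;
	polled_done = cqe;
}

/* normal completion: serialised by its own lock, ring_lock not needed */
static void complete_normal(struct fake_cqe *cqe, int res)
{
	pthread_mutex_lock(&completion_lock);
	cqe->res = res;
	/* ... post to the CQ ring ... */
	pthread_mutex_unlock(&completion_lock);
}

static void finish_buffer_op(struct fake_cqe *cqe, int res, bool polled)
{
	pthread_mutex_lock(&ring_lock);
	/* ... buffer bookkeeping that needs ring_lock ... */
	if (polled) {
		complete_polled(cqe, res);	/* must happen before unlock */
		pthread_mutex_unlock(&ring_lock);
	} else {
		pthread_mutex_unlock(&ring_lock);
		complete_normal(cqe, res);
	}
}

int main(void)
{
	struct fake_cqe a = { 0 }, b = { 0 };

	finish_buffer_op(&a, 0, true);
	finish_buffer_op(&b, 0, false);
	return 0;
}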

Cc: <stable@vger.kernel.org> # 5.7+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c895a306f919..4fac02ea5f4c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3948,11 +3948,17 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
 	head = idr_find(&ctx->io_buffer_idr, p->bgid);
 	if (head)
 		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
-
-	io_ring_submit_lock(ctx, !force_nonblock);
 	if (ret < 0)
 		req_set_fail_links(req);
-	__io_req_complete(req, ret, 0, cs);
+
+	/* need to hold the lock to complete IOPOLL requests */
+	if (ctx->flags & IORING_SETUP_IOPOLL) {
+		__io_req_complete(req, ret, 0, cs);
+		io_ring_submit_unlock(ctx, !force_nonblock);
+	} else {
+		io_ring_submit_unlock(ctx, !force_nonblock);
+		__io_req_complete(req, ret, 0, cs);
+	}
 	return 0;
 }
 
@@ -4037,10 +4043,17 @@ static int io_provide_buffers(struct io_kiocb *req, bool force_nonblock,
 		}
 	}
 out:
-	io_ring_submit_unlock(ctx, !force_nonblock);
 	if (ret < 0)
 		req_set_fail_links(req);
-	__io_req_complete(req, ret, 0, cs);
+
+	/* need to hold the lock to complete IOPOLL requests */
+	if (ctx->flags & IORING_SETUP_IOPOLL) {
+		__io_req_complete(req, ret, 0, cs);
+		io_ring_submit_unlock(ctx, !force_nonblock);
+	} else {
+		io_ring_submit_unlock(ctx, !force_nonblock);
+		__io_req_complete(req, ret, 0, cs);
+	}
 	return 0;
 }
 
-- 
2.24.0



* [PATCH 5.10 3/5] io_uring: fix racy IOPOLL flush overflow
       [not found] <cover.1607293068.git.asml.silence@gmail.com>
  2020-12-06 22:22 ` [PATCH 5.10 1/5] io_uring: always let io_iopoll_complete() complete polled io Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  2020-12-06 22:22 ` [PATCH 5.10 4/5] io_uring: fix io_cqring_events()'s noflush Pavel Begunkov
  3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

It's not safe to call io_cqring_overflow_flush() for IOPOLL mode without
holding uring_lock, because it does synchronisation differently. Make
sure we hold it.

As for io_ring_exit_work(), we don't even need the flush there, because
io_ring_ctx_wait_and_kill() already sets the force flag, causing all
overflowed requests to be dropped.
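
The locking added below is a plain conditional guard; roughly this
pattern, sketched here with made-up names (fake_ring,
flush_overflow_guarded) rather than the kernel helpers:

#include <pthread.h>
#include <stdbool.h>

struct fake_ring {
	pthread_mutex_t uring_lock;
	bool iopoll;			/* stands in for IORING_SETUP_IOPOLL */
};

static void flush_overflow(struct fake_ring *ring)
{
	(void)ring;			/* ... drain overflowed completions ... */
}

/*
 * Take uring_lock only for IOPOLL rings: that is the mode whose
 * completion path is serialised by uring_lock instead of completion_lock.
 */
static void flush_overflow_guarded(struct fake_ring *ring)
{
	if (ring->iopoll)
		pthread_mutex_lock(&ring->uring_lock);
	flush_overflow(ring);
	if (ring->iopoll)
		pthread_mutex_unlock(&ring->uring_lock);
}

int main(void)
{
	struct fake_ring ring = {
		.uring_lock = PTHREAD_MUTEX_INITIALIZER,
		.iopoll = true,
	};

	flush_overflow_guarded(&ring);
	return 0;
}

In the hunks the conditional lock/unlock pair is provided by
io_ring_submit_lock()/io_ring_submit_unlock() with the IOPOLL flag as
the condition.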

Cc: <stable@vger.kernel.org> # 5.5+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4fac02ea5f4c..b1ba9a738315 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8393,8 +8393,6 @@ static void io_ring_exit_work(struct work_struct *work)
 	 * as nobody else will be looking for them.
 	 */
 	do {
-		if (ctx->rings)
-			io_cqring_overflow_flush(ctx, true, NULL, NULL);
 		io_iopoll_try_reap_events(ctx);
 	} while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20));
 	io_ring_ctx_free(ctx);
@@ -8404,6 +8402,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 {
 	mutex_lock(&ctx->uring_lock);
 	percpu_ref_kill(&ctx->refs);
+	if (ctx->rings)
+		io_cqring_overflow_flush(ctx, true, NULL, NULL);
 	mutex_unlock(&ctx->uring_lock);
 
 	io_kill_timeouts(ctx, NULL);
@@ -8413,8 +8413,6 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 		io_wq_cancel_all(ctx->io_wq);
 
 	/* if we failed setting up the ctx, we might not have any rings */
-	if (ctx->rings)
-		io_cqring_overflow_flush(ctx, true, NULL, NULL);
 	io_iopoll_try_reap_events(ctx);
 	idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
 
@@ -8691,7 +8689,9 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
 	else
 		io_cancel_defer_files(ctx, task, NULL);
 
+	io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 	io_cqring_overflow_flush(ctx, true, task, files);
+	io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 
 	while (__io_uring_cancel_task_requests(ctx, task, files)) {
 		io_run_task_work();
@@ -8993,8 +8993,10 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	 */
 	ret = 0;
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+		io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 		if (!list_empty_careful(&ctx->cq_overflow_list))
 			io_cqring_overflow_flush(ctx, false, NULL, NULL);
+		io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 		if (flags & IORING_ENTER_SQ_WAKEUP)
 			wake_up(&ctx->sq_data->wait);
 		if (flags & IORING_ENTER_SQ_WAIT)
-- 
2.24.0



* [PATCH 5.10 4/5] io_uring: fix io_cqring_events()'s noflush
       [not found] <cover.1607293068.git.asml.silence@gmail.com>
                   ` (2 preceding siblings ...)
  2020-12-06 22:22 ` [PATCH 5.10 3/5] io_uring: fix racy IOPOLL flush overflow Pavel Begunkov
@ 2020-12-06 22:22 ` Pavel Begunkov
  3 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-12-06 22:22 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

Checking !list_empty(&ctx->cq_overflow_list) around noflush in
io_cqring_events() is racy: if the check fails but a request overflows
just after it, io_cqring_overflow_flush() will still be called.

Remove the second check; it shouldn't be a problem for performance,
because there is a cq_check_overflow bit check just above.
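
As a standalone illustration of the check-then-act race being removed
(count_events and fake_ring are invented names; only the logic mirrors
io_cqring_events()):

#include <stdatomic.h>
#include <stdbool.h>

struct fake_ring {
	atomic_bool cq_check_overflow;	/* cheap hint, tested first */
};

static void flush_overflow(struct fake_ring *ring)
{
	(void)ring;	/* only legal from contexts that are allowed to flush */
}

static unsigned count_events(struct fake_ring *ring, bool noflush)
{
	if (atomic_load(&ring->cq_check_overflow)) {
		/*
		 * The old code also tested that the overflow list was
		 * non-empty before taking this early return. If the list
		 * happened to look empty here, another CPU could overflow
		 * a request right afterwards and we would fall through to
		 * flush_overflow() from a context that must not flush.
		 * Returning on noflush alone closes that window; the
		 * cq_check_overflow test above keeps the common path cheap.
		 */
		if (noflush)
			return -1U;
		flush_overflow(ring);
	}
	return 0;	/* would return the cached CQE count here */
}

int main(void)
{
	struct fake_ring ring = { .cq_check_overflow = true };

	return count_events(&ring, true) == -1U ? 0 : 1;
}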

Cc: <stable@vger.kernel.org> # 5.5+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b1ba9a738315..f707caed9f79 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2246,7 +2246,7 @@ static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
 		 * we wake up the task, and the next invocation will flush the
 		 * entries. We cannot safely to it from here.
 		 */
-		if (noflush && !list_empty(&ctx->cq_overflow_list))
+		if (noflush)
 			return -1U;
 
 		io_cqring_overflow_flush(ctx, false, NULL, NULL);
-- 
2.24.0



* Re: [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions
  2020-12-06 22:22 ` [PATCH 5.10 2/5] io_uring: fix racy IOPOLL completions Pavel Begunkov
@ 2020-12-07 18:31   ` Pavel Begunkov
  0 siblings, 0 replies; 5+ messages in thread
From: Pavel Begunkov @ 2020-12-07 18:31 UTC (permalink / raw)
  To: Jens Axboe, io-uring; +Cc: stable

On 06/12/2020 22:22, Pavel Begunkov wrote:
> IOPOLL allows buffer remove/provide requests, but they don't synchronise
> by the rules of IOPOLL, namely they have to hold uring_lock.
> 
> Cc: <stable@vger.kernel.org> # 5.7+
> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
> ---
>  fs/io_uring.c | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index c895a306f919..4fac02ea5f4c 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -3948,11 +3948,17 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
>  	head = idr_find(&ctx->io_buffer_idr, p->bgid);
>  	if (head)
>  		ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
> -
> -	io_ring_submit_lock(ctx, !force_nonblock);

This should be unlock(), not a second lock(). I didn't notice it at
first, but the patch fixes it. Maybe for 5.10?

>  	if (ret < 0)
>  		req_set_fail_links(req);
> -	__io_req_complete(req, ret, 0, cs);
> +
> +	/* need to hold the lock to complete IOPOLL requests */
> +	if (ctx->flags & IORING_SETUP_IOPOLL) {
> +		__io_req_complete(req, ret, 0, cs);
> +		io_ring_submit_unlock(ctx, !force_nonblock);
> +	} else {
> +		io_ring_submit_unlock(ctx, !force_nonblock);
> +		__io_req_complete(req, ret, 0, cs);
> +	}
>  	return 0;
>  }


-- 
Pavel Begunkov

