* [PATCHSET 0/33] Fixes queued up for 5.12
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring

Hi,

This is what is queued up for 5.12 so far. Mostly cleanups and removals
that are now enabled by the worker change, but of course also fixes for
that late change.

 fs/io-wq.c               | 158 ++++++++++------
 fs/io-wq.h               |   4 +-
 fs/io_uring.c            | 400 ++++++++++++++++-----------------------
 include/linux/io_uring.h |  11 +-
 kernel/cred.c            |   2 +
 kernel/fork.c            |   2 +
 6 files changed, 280 insertions(+), 297 deletions(-)

-- 
Jens Axboe




* [PATCH 01/33] io-wq: wait for worker startup when forking a new one
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We need to have our worker count updated before continuing, to avoid
cases where we repeatedly think we need a new worker while a fork is
already in progress.
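
For readers following the series, the create-side ordering after this change
is roughly the following (a condensed sketch based on the diff below, with
allocation and the bound/unbound choice trimmed):

  init_completion(&worker->started);      /* before forking the worker */
  pid = io_wq_fork_thread(task_thread_unbound, worker);
  if (pid < 0) {
          kfree(worker);
          return false;
  }
  /* io_worker_start() runs in the new worker and calls complete() once
   * its flags and accounting are set up, so the creator only returns
   * after the count is visible and won't fork a duplicate.
   */
  wait_for_completion(&worker->started);
  return true;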

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 44e20248805a..965022fe9961 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -56,6 +56,7 @@ struct io_worker {
 	const struct cred *saved_creds;
 
 	struct completion ref_done;
+	struct completion started;
 
 	struct rcu_head rcu;
 };
@@ -267,6 +268,7 @@ static void io_worker_start(struct io_worker *worker)
 {
 	worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING);
 	io_wqe_inc_running(worker);
+	complete(&worker->started);
 }
 
 /*
@@ -644,6 +646,7 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
 	worker->wqe = wqe;
 	spin_lock_init(&worker->lock);
 	init_completion(&worker->ref_done);
+	init_completion(&worker->started);
 
 	refcount_inc(&wq->refs);
 
@@ -656,6 +659,7 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
 		kfree(worker);
 		return false;
 	}
+	wait_for_completion(&worker->started);
 	return true;
 }
 
-- 
2.30.1



* [PATCH 02/33] io-wq: have manager wait for all workers to exit
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Instead of having to wait separately on workers and manager, just have
the manager wait on the workers. We use an atomic_t for the reference
here, as we need to start at 0 and allow incrementing from there. Since
the number of workers is naturally capped by the allowed number of
processes, and that uses an int, there is no risk of overflow.
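
The resulting lifetime handshake, condensed from the diff below (a sketch of
the pattern, not a literal copy):

  /* setup: no workers yet */
  init_completion(&wq->worker_done);
  atomic_set(&wq->worker_refs, 0);

  /* create: one reference per attempted worker fork */
  atomic_inc(&wq->worker_refs);

  /* worker exit (or failed fork): the last one out signals the manager */
  if (atomic_dec_and_test(&wq->worker_refs))
          complete(&wq->worker_done);

  /* manager exit: only wait if a worker reference was ever taken */
  if (atomic_read(&wq->worker_refs))
          wait_for_completion(&wq->worker_done);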

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 965022fe9961..a61f867f98f4 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -120,6 +120,9 @@ struct io_wq {
 	refcount_t refs;
 	struct completion done;
 
+	atomic_t worker_refs;
+	struct completion worker_done;
+
 	struct hlist_node cpuhp_node;
 
 	pid_t task_pid;
@@ -189,7 +192,8 @@ static void io_worker_exit(struct io_worker *worker)
 	raw_spin_unlock_irq(&wqe->lock);
 
 	kfree_rcu(worker, rcu);
-	io_wq_put(wqe->wq);
+	if (atomic_dec_and_test(&wqe->wq->worker_refs))
+		complete(&wqe->wq->worker_done);
 }
 
 static inline bool io_wqe_run_queue(struct io_wqe *wqe)
@@ -648,14 +652,15 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
 	init_completion(&worker->ref_done);
 	init_completion(&worker->started);
 
-	refcount_inc(&wq->refs);
+	atomic_inc(&wq->worker_refs);
 
 	if (index == IO_WQ_ACCT_BOUND)
 		pid = io_wq_fork_thread(task_thread_bound, worker);
 	else
 		pid = io_wq_fork_thread(task_thread_unbound, worker);
 	if (pid < 0) {
-		io_wq_put(wq);
+		if (atomic_dec_and_test(&wq->worker_refs))
+			complete(&wq->worker_done);
 		kfree(worker);
 		return false;
 	}
@@ -753,6 +758,9 @@ static int io_wq_manager(void *data)
 	} while (!test_bit(IO_WQ_BIT_EXIT, &wq->state));
 
 	io_wq_check_workers(wq);
+	/* we might not ever have created any workers */
+	if (atomic_read(&wq->worker_refs))
+		wait_for_completion(&wq->worker_done);
 	wq->manager = NULL;
 	io_wq_put(wq);
 	do_exit(0);
@@ -796,6 +804,7 @@ static int io_wq_fork_manager(struct io_wq *wq)
 	if (wq->manager)
 		return 0;
 
+	reinit_completion(&wq->worker_done);
 	clear_bit(IO_WQ_BIT_EXIT, &wq->state);
 	refcount_inc(&wq->refs);
 	current->flags |= PF_IO_WORKER;
@@ -1050,6 +1059,9 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
 	init_completion(&wq->done);
 	refcount_set(&wq->refs, 1);
 
+	init_completion(&wq->worker_done);
+	atomic_set(&wq->worker_refs, 0);
+
 	ret = io_wq_fork_manager(wq);
 	if (!ret)
 		return wq;
-- 
2.30.1



* [PATCH 03/33] io-wq: don't ask for a new worker if we're exiting
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If we're in the process of shutting down the async context, then don't
create new workers if we already have at least the fixed one.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index a61f867f98f4..d2b55ac817ef 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -673,6 +673,8 @@ static inline bool io_wqe_need_worker(struct io_wqe *wqe, int index)
 {
 	struct io_wqe_acct *acct = &wqe->acct[index];
 
+	if (acct->nr_workers && test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state))
+		return false;
 	/* if we have available workers or no work, no need */
 	if (!hlist_nulls_empty(&wqe->free_list) || !io_wqe_run_queue(wqe))
 		return false;
-- 
2.30.1



* [PATCH 04/33] io-wq: rename wq->done completion to wq->started
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

This is a leftover from a different use case; it's now used to wait for
the manager to start up. Rename it accordingly.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index d2b55ac817ef..a8ddca62f59e 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -118,7 +118,7 @@ struct io_wq {
 	struct io_wq_hash *hash;
 
 	refcount_t refs;
-	struct completion done;
+	struct completion started;
 
 	atomic_t worker_refs;
 	struct completion worker_done;
@@ -749,7 +749,7 @@ static int io_wq_manager(void *data)
 	current->flags |= PF_IO_WORKER;
 	wq->manager = current;
 
-	complete(&wq->done);
+	complete(&wq->started);
 
 	do {
 		set_current_state(TASK_INTERRUPTIBLE);
@@ -813,7 +813,7 @@ static int io_wq_fork_manager(struct io_wq *wq)
 	ret = io_wq_fork_thread(io_wq_manager, wq);
 	current->flags &= ~PF_IO_WORKER;
 	if (ret >= 0) {
-		wait_for_completion(&wq->done);
+		wait_for_completion(&wq->started);
 		return 0;
 	}
 
@@ -1058,7 +1058,7 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
 	}
 
 	wq->task_pid = current->pid;
-	init_completion(&wq->done);
+	init_completion(&wq->started);
 	refcount_set(&wq->refs, 1);
 
 	init_completion(&wq->worker_done);
-- 
2.30.1



* [PATCH 05/33] io-wq: wait for manager exit on wq destroy
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

The manager waits for the workers, hence the manager is always valid if
workers are running. Now also have wq destroy wait for the manager on
exit, so we know everything is gone.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index a8ddca62f59e..9e52a9877905 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -119,6 +119,7 @@ struct io_wq {
 
 	refcount_t refs;
 	struct completion started;
+	struct completion exited;
 
 	atomic_t worker_refs;
 	struct completion worker_done;
@@ -764,6 +765,7 @@ static int io_wq_manager(void *data)
 	if (atomic_read(&wq->worker_refs))
 		wait_for_completion(&wq->worker_done);
 	wq->manager = NULL;
+	complete(&wq->exited);
 	io_wq_put(wq);
 	do_exit(0);
 }
@@ -1059,6 +1061,7 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
 
 	wq->task_pid = current->pid;
 	init_completion(&wq->started);
+	init_completion(&wq->exited);
 	refcount_set(&wq->refs, 1);
 
 	init_completion(&wq->worker_done);
@@ -1088,8 +1091,10 @@ static void io_wq_destroy(struct io_wq *wq)
 	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
 
 	set_bit(IO_WQ_BIT_EXIT, &wq->state);
-	if (wq->manager)
+	if (wq->manager) {
 		wake_up_process(wq->manager);
+		wait_for_completion(&wq->exited);
+	}
 
 	rcu_read_lock();
 	for_each_node(node)
-- 
2.30.1



* [PATCH 06/33] io-wq: fix double put of 'wq' in error path
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, syzbot+7bf785eedca35ca05501

We are already freeing the wq struct in both spots, so don't put it and
get it freed twice.

Reported-by: syzbot+7bf785eedca35ca05501@syzkaller.appspotmail.com
Fixes: 4fb6ac326204 ("io-wq: improve manager/worker handling over exec")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 9e52a9877905..23f431747cd2 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -819,7 +819,6 @@ static int io_wq_fork_manager(struct io_wq *wq)
 		return 0;
 	}
 
-	io_wq_put(wq);
 	return ret;
 }
 
@@ -1071,7 +1070,6 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
 	if (!ret)
 		return wq;
 
-	io_wq_put(wq);
 	io_wq_put_hash(data->hash);
 err:
 	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
-- 
2.30.1



* [PATCH 07/33] io_uring: SQPOLL stop error handling fixes
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If we fail to fork an SQPOLL worker, we can hit cancel, and hence an
attempted thread stop, with the thread already being stopped. Ensure we
check for that.

Also guard thread stop fully by the sqd mutex, just like we do for
park.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4a088581b0f2..d55c9ab6314a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6793,9 +6793,9 @@ static int io_sq_thread(void *data)
 		ctx->sqo_exec = 1;
 		io_ring_set_wakeup_flag(ctx);
 	}
-	mutex_unlock(&sqd->lock);
 
 	complete(&sqd->exited);
+	mutex_unlock(&sqd->lock);
 	do_exit(0);
 }
 
@@ -7118,13 +7118,19 @@ static bool io_sq_thread_park(struct io_sq_data *sqd)
 
 static void io_sq_thread_stop(struct io_sq_data *sqd)
 {
-	if (!sqd->thread)
+	if (test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state))
 		return;
-
-	set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
-	WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state));
-	wake_up_process(sqd->thread);
-	wait_for_completion(&sqd->exited);
+	mutex_lock(&sqd->lock);
+	if (sqd->thread) {
+		set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
+		WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state));
+		wake_up_process(sqd->thread);
+		mutex_unlock(&sqd->lock);
+		wait_for_completion(&sqd->exited);
+		WARN_ON_ONCE(sqd->thread);
+	} else {
+		mutex_unlock(&sqd->lock);
+	}
 }
 
 static void io_put_sq_data(struct io_sq_data *sqd)
@@ -8867,6 +8873,11 @@ static void io_uring_cancel_sqpoll(struct io_ring_ctx *ctx)
 	if (!io_sq_thread_park(sqd))
 		return;
 	tctx = ctx->sq_data->thread->io_uring;
+	/* can happen on fork/alloc failure, just ignore that state */
+	if (!tctx) {
+		io_sq_thread_unpark(sqd);
+		return;
+	}
 
 	atomic_inc(&tctx->in_idle);
 	do {
-- 
2.30.1



* [PATCH 08/33] io_uring: run fallback on cancellation
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

io_uring_try_cancel_requests() matches not only current's requests, but
also those of other exiting tasks, so we need to actively cancel them and
not just wait, especially since the function can be called on flush during
do_exit() -> exit_files().
Even if it's not a problem for now, it's much nicer to know that the
function tries to cancel everything it can.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d55c9ab6314a..9d6696ff5748 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8518,9 +8518,10 @@ static int io_remove_personalities(int id, void *p, void *data)
 	return 0;
 }
 
-static void io_run_ctx_fallback(struct io_ring_ctx *ctx)
+static bool io_run_ctx_fallback(struct io_ring_ctx *ctx)
 {
 	struct callback_head *work, *head, *next;
+	bool executed = false;
 
 	do {
 		do {
@@ -8537,7 +8538,10 @@ static void io_run_ctx_fallback(struct io_ring_ctx *ctx)
 			work = next;
 			cond_resched();
 		} while (work);
+		executed = true;
 	} while (1);
+
+	return executed;
 }
 
 static void io_ring_exit_work(struct work_struct *work)
@@ -8677,6 +8681,7 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 		ret |= io_poll_remove_all(ctx, task, files);
 		ret |= io_kill_timeouts(ctx, task, files);
 		ret |= io_run_task_work();
+		ret |= io_run_ctx_fallback(ctx);
 		io_cqring_overflow_flush(ctx, true, task, files);
 		if (!ret)
 			break;
-- 
2.30.1



* [PATCH 09/33] io_uring: don't use complete_all() on SQPOLL thread exit
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We want to reuse this completion, and a single complete should do just
fine. Ensure that we park ourselves first if requested, as that is what
led to the initial deadlock in this area. If we've got someone attempting
to park us, then we can't proceed without having them finish first.
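
As background (not part of the diff): complete() wakes a single waiter and
can be paired again for the next cycle, while complete_all() marks the
completion permanently done, so every later wait_for_completion() returns
immediately until a reinit_completion(). A reusable park/exit handshake
therefore wants the former, combined with parking first:

  if (io_sq_thread_should_park(sqd))
          io_sq_thread_parkme(sqd);    /* let the parker finish first */

  complete(&sqd->completion);          /* one waiter, reusable next cycle */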

Fixes: 37d1e2e3642e ("io_uring: move SQPOLL thread io-wq forked worker")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9d6696ff5748..904bf0fecc36 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6783,10 +6783,13 @@ static int io_sq_thread(void *data)
 
 	io_run_task_work();
 
+	if (io_sq_thread_should_park(sqd))
+		io_sq_thread_parkme(sqd);
+
 	/*
 	 * Clear thread under lock so that concurrent parks work correctly
 	 */
-	complete_all(&sqd->completion);
+	complete(&sqd->completion);
 	mutex_lock(&sqd->lock);
 	sqd->thread = NULL;
 	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
-- 
2.30.1



* [PATCH 10/33] io-wq: provide an io_wq_put_and_exit() helper
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If we put the io-wq from io_uring, we really want it to exit. Provide
a helper that does that for us. Couple that with not having the manager
hold a reference to the 'wq', and the normal SQPOLL exit will tear down
the io-wq context appropriately.

On the io-wq side, our wq context is per task, so only the task itself
manipulates ->manager and hence it's safe to check and clear it without
any extra locking. We just need to ensure that the manager's task_struct
stays around, in case the manager exits on its own.
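
For readability, the manager lifetime handling consolidated from the two
hunks below (a sketch, not new code):

  /* startup: pin the task so ->manager stays a valid pointer even if
   * the manager thread exits on its own
   */
  wq->manager = get_task_struct(current);

  /* teardown, only ever called by the owning task */
  if (wq->manager) {
          wake_up_process(wq->manager);
          wait_for_completion(&wq->exited);
          put_task_struct(wq->manager);
          wq->manager = NULL;
  }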

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c    | 29 +++++++++++++++++++----------
 fs/io-wq.h    |  1 +
 fs/io_uring.c |  2 +-
 3 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 23f431747cd2..65ae35ca8dba 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -748,7 +748,7 @@ static int io_wq_manager(void *data)
 	sprintf(buf, "iou-mgr-%d", wq->task_pid);
 	set_task_comm(current, buf);
 	current->flags |= PF_IO_WORKER;
-	wq->manager = current;
+	wq->manager = get_task_struct(current);
 
 	complete(&wq->started);
 
@@ -764,9 +764,7 @@ static int io_wq_manager(void *data)
 	/* we might not ever have created any workers */
 	if (atomic_read(&wq->worker_refs))
 		wait_for_completion(&wq->worker_done);
-	wq->manager = NULL;
 	complete(&wq->exited);
-	io_wq_put(wq);
 	do_exit(0);
 }
 
@@ -809,8 +807,6 @@ static int io_wq_fork_manager(struct io_wq *wq)
 		return 0;
 
 	reinit_completion(&wq->worker_done);
-	clear_bit(IO_WQ_BIT_EXIT, &wq->state);
-	refcount_inc(&wq->refs);
 	current->flags |= PF_IO_WORKER;
 	ret = io_wq_fork_thread(io_wq_manager, wq);
 	current->flags &= ~PF_IO_WORKER;
@@ -1082,6 +1078,16 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
 	return ERR_PTR(ret);
 }
 
+static void io_wq_destroy_manager(struct io_wq *wq)
+{
+	if (wq->manager) {
+		wake_up_process(wq->manager);
+		wait_for_completion(&wq->exited);
+		put_task_struct(wq->manager);
+		wq->manager = NULL;
+	}
+}
+
 static void io_wq_destroy(struct io_wq *wq)
 {
 	int node;
@@ -1089,10 +1095,7 @@ static void io_wq_destroy(struct io_wq *wq)
 	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
 
 	set_bit(IO_WQ_BIT_EXIT, &wq->state);
-	if (wq->manager) {
-		wake_up_process(wq->manager);
-		wait_for_completion(&wq->exited);
-	}
+	io_wq_destroy_manager(wq);
 
 	rcu_read_lock();
 	for_each_node(node)
@@ -1110,7 +1113,6 @@ static void io_wq_destroy(struct io_wq *wq)
 	io_wq_put_hash(wq->hash);
 	kfree(wq->wqes);
 	kfree(wq);
-
 }
 
 void io_wq_put(struct io_wq *wq)
@@ -1119,6 +1121,13 @@ void io_wq_put(struct io_wq *wq)
 		io_wq_destroy(wq);
 }
 
+void io_wq_put_and_exit(struct io_wq *wq)
+{
+	set_bit(IO_WQ_BIT_EXIT, &wq->state);
+	io_wq_destroy_manager(wq);
+	io_wq_put(wq);
+}
+
 static bool io_wq_worker_affinity(struct io_worker *worker, void *data)
 {
 	struct task_struct *task = worker->task;
diff --git a/fs/io-wq.h b/fs/io-wq.h
index b6ca12b60c35..f6ef433df8a8 100644
--- a/fs/io-wq.h
+++ b/fs/io-wq.h
@@ -114,6 +114,7 @@ struct io_wq_data {
 
 struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data);
 void io_wq_put(struct io_wq *wq);
+void io_wq_put_and_exit(struct io_wq *wq);
 
 void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
 void io_wq_hash_work(struct io_wq_work *work, void *val);
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 904bf0fecc36..cb65e54c1b09 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8857,7 +8857,7 @@ void __io_uring_files_cancel(struct files_struct *files)
 	if (files) {
 		io_uring_remove_task_files(tctx);
 		if (tctx->io_wq) {
-			io_wq_put(tctx->io_wq);
+			io_wq_put_and_exit(tctx->io_wq);
 			tctx->io_wq = NULL;
 		}
 	}
-- 
2.30.1



* [PATCH 11/33] io_uring: fix race condition in task_work add and clear
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We clear the bit marking the ctx task_work as active after having run
the queued work, but we really should be clearing it before. Otherwise
we can hit a tiny race like:

CPU0					CPU1
io_task_work_add()			tctx_task_work()
					run_work
	add_to_list
	test_and_set_bit
					clear_bit
		already set

and CPU0 will return thinking the task_work is queued, while in reality
it's already being run. If we hit this window after __tctx_task_work()
has found no more work, but before the bit has been cleared, then the
adder thinks its work is queued and will be run. In reality the request
is on the list, but no ctx task_work was queued to ensure that it gets
run.
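
A condensed view of both sides with the fix applied (the add side is
paraphrased from the race diagram above and is not part of this diff):

  /* add side (io_task_work_add), after the request has been put on
   * tctx's list: only queue the ctx task_work if the active bit was
   * not already set
   */
  if (!test_and_set_bit(0, &tctx->task_state))
          task_work_add(tsk, &tctx->task_work, notify);

  /* run side (tctx_task_work) after this patch: clear the bit before
   * draining, so a concurrent add either sees the bit clear and queues
   * a new task_work, or its request is picked up by this drain
   */
  clear_bit(0, &tctx->task_state);
  while (__tctx_task_work(tctx))
          cond_resched();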

Fixes: 7cbf1722d5fc ("io_uring: provide FIFO ordering for task_work")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index cb65e54c1b09..83973f6b3c0a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1893,10 +1893,10 @@ static void tctx_task_work(struct callback_head *cb)
 {
 	struct io_uring_task *tctx = container_of(cb, struct io_uring_task, task_work);
 
+	clear_bit(0, &tctx->task_state);
+
 	while (__tctx_task_work(tctx))
 		cond_resched();
-
-	clear_bit(0, &tctx->task_state);
 }
 
 static int io_task_work_add(struct task_struct *tsk, struct io_kiocb *req,
-- 
2.30.1



* [PATCH 12/33] io_uring: signal worker thread unshare
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If the original task switches credentials or unshares any part of the
task state, then we should notify the io_uring workers so they can
re-fork as well. For credentials, this actually happens just fine for
the io-wq workers, as we grab and pass the creds down. For SQPOLL, we're
stuck with the original credentials, which means SQPOLL cannot be used
if the task does e.g. seteuid().

For unshare(2), the story is the same, except a task cannot do that and
expect the workers to assume the new identity.

Fix this up by just having the threads exit and re-fork if the ring task
does seteuid() (and friends), or does unshare(2) on any parts of the
task.
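
From the ring owner's point of view the effect is roughly the following
(illustrative userspace snippet using liburing; other_uid is a placeholder
and none of this is part of the patch):

  struct io_uring ring;

  /* workers and the SQPOLL thread inherit the current credentials */
  io_uring_queue_init(8, &ring, IORING_SETUP_SQPOLL);

  /* ... submit some work ... */

  if (seteuid(other_uid) == 0) {
          /* commit_creds() now calls io_uring_unshare(): the io-wq
           * manager and SQPOLL thread are torn down and re-created on
           * the next use of the ring, picking up the new credentials.
           */
  }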

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c               | 21 ++++++++++++++++-----
 fs/io-wq.h               |  1 +
 fs/io_uring.c            | 26 ++++++++++++++++++++++++--
 include/linux/io_uring.h |  9 +++++++++
 kernel/cred.c            |  2 ++
 kernel/fork.c            |  2 ++
 6 files changed, 54 insertions(+), 7 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 65ae35ca8dba..c24473231eee 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -744,6 +744,7 @@ static int io_wq_manager(void *data)
 {
 	struct io_wq *wq = data;
 	char buf[TASK_COMM_LEN];
+	int node;
 
 	sprintf(buf, "iou-mgr-%d", wq->task_pid);
 	set_task_comm(current, buf);
@@ -761,6 +762,12 @@ static int io_wq_manager(void *data)
 	} while (!test_bit(IO_WQ_BIT_EXIT, &wq->state));
 
 	io_wq_check_workers(wq);
+
+	rcu_read_lock();
+	for_each_node(node)
+		io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
+	rcu_read_unlock();
+
 	/* we might not ever have created any workers */
 	if (atomic_read(&wq->worker_refs))
 		wait_for_completion(&wq->worker_done);
@@ -1097,11 +1104,6 @@ static void io_wq_destroy(struct io_wq *wq)
 	set_bit(IO_WQ_BIT_EXIT, &wq->state);
 	io_wq_destroy_manager(wq);
 
-	rcu_read_lock();
-	for_each_node(node)
-		io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
-	rcu_read_unlock();
-
 	spin_lock_irq(&wq->hash->wait.lock);
 	for_each_node(node) {
 		struct io_wqe *wqe = wq->wqes[node];
@@ -1165,3 +1167,12 @@ static __init int io_wq_init(void)
 	return 0;
 }
 subsys_initcall(io_wq_init);
+
+void io_wq_unshare(struct io_wq *wq)
+{
+	refcount_inc(&wq->refs);
+	set_bit(IO_WQ_BIT_EXIT, &wq->state);
+	io_wq_destroy_manager(wq);
+	clear_bit(IO_WQ_BIT_EXIT, &wq->state);
+	io_wq_put(wq);
+}
diff --git a/fs/io-wq.h b/fs/io-wq.h
index f6ef433df8a8..57e478af1e1d 100644
--- a/fs/io-wq.h
+++ b/fs/io-wq.h
@@ -115,6 +115,7 @@ struct io_wq_data {
 struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data);
 void io_wq_put(struct io_wq *wq);
 void io_wq_put_and_exit(struct io_wq *wq);
+void io_wq_unshare(struct io_wq *wq);
 
 void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
 void io_wq_hash_work(struct io_wq_work *work, void *val);
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 83973f6b3c0a..f89d7375a7c3 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8955,6 +8955,24 @@ void __io_uring_task_cancel(void)
 	io_uring_remove_task_files(tctx);
 }
 
+void __io_uring_unshare(void)
+{
+	struct io_uring_task *tctx = current->io_uring;
+	struct file *file;
+	unsigned long index;
+
+	io_wq_unshare(tctx->io_wq);
+	if (!tctx->sqpoll)
+		return;
+
+	xa_for_each(&tctx->xa, index, file) {
+		struct io_ring_ctx *ctx = file->private_data;
+
+		if (ctx->sq_data)
+			io_sq_thread_stop(ctx->sq_data);
+	}
+}
+
 static int io_uring_flush(struct file *file, void *data)
 {
 	struct io_uring_task *tctx = current->io_uring;
@@ -9170,10 +9188,14 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 		io_cqring_overflow_flush(ctx, false, NULL, NULL);
 
 		if (unlikely(ctx->sqo_exec)) {
-			ret = io_sq_thread_fork(ctx->sq_data, ctx);
+			struct io_sq_data *sqd = ctx->sq_data;
+
+			ret = io_sq_thread_fork(sqd, ctx);
+			if (ret)
+				set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
+			complete(&sqd->startup);
 			if (ret)
 				goto out;
-			ctx->sqo_exec = 0;
 		}
 		ret = -EOWNERDEAD;
 		if (unlikely(ctx->sqo_dead))
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 51ede771cd99..bfe2fcb4f478 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -35,7 +35,13 @@ struct sock *io_uring_get_socket(struct file *file);
 void __io_uring_task_cancel(void);
 void __io_uring_files_cancel(struct files_struct *files);
 void __io_uring_free(struct task_struct *tsk);
+void __io_uring_unshare(void);
 
+static inline void io_uring_unshare(void)
+{
+	if (current->io_uring)
+		__io_uring_unshare();
+}
 static inline void io_uring_task_cancel(void)
 {
 	if (current->io_uring && !xa_empty(&current->io_uring->xa))
@@ -56,6 +62,9 @@ static inline struct sock *io_uring_get_socket(struct file *file)
 {
 	return NULL;
 }
+static inline void io_uring_unshare(void)
+{
+}
 static inline void io_uring_task_cancel(void)
 {
 }
diff --git a/kernel/cred.c b/kernel/cred.c
index 421b1149c651..324e3ee61e1d 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -16,6 +16,7 @@
 #include <linux/binfmts.h>
 #include <linux/cn_proc.h>
 #include <linux/uidgid.h>
+#include <linux/io_uring.h>
 
 #if 0
 #define kdebug(FMT, ...)						\
@@ -509,6 +510,7 @@ int commit_creds(struct cred *new)
 	/* release the old obj and subj refs both */
 	put_cred(old);
 	put_cred(old);
+	io_uring_unshare();
 	return 0;
 }
 EXPORT_SYMBOL(commit_creds);
diff --git a/kernel/fork.c b/kernel/fork.c
index d66cd1014211..5d1b00083c9e 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2999,6 +2999,8 @@ int ksys_unshare(unsigned long unshare_flags)
 			commit_creds(new_cred);
 			new_cred = NULL;
 		}
+
+		io_uring_unshare();
 	}
 
 	perf_event_namespaces(current);
-- 
2.30.1



* [PATCH 13/33] io_uring: warn on not destroyed io-wq
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

Make sure that we have killed the io-wq by the time a task is dead.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f89d7375a7c3..d495735de2d9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7843,6 +7843,8 @@ void __io_uring_free(struct task_struct *tsk)
 	struct io_uring_task *tctx = tsk->io_uring;
 
 	WARN_ON_ONCE(!xa_empty(&tctx->xa));
+	WARN_ON_ONCE(tctx->io_wq);
+
 	percpu_counter_destroy(&tctx->inflight);
 	kfree(tctx);
 	tsk->io_uring = NULL;
-- 
2.30.1



* [PATCH 14/33] io_uring: destroy io-wq on exec
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

Destroy current's io-wq backend and tctx on __io_uring_task_cancel(),
aka exec(). It looks like it's not strictly necessary, because it will
be done at some point when the task dies and changes of creds/files/etc.
are handled, but it's better to do that earlier to free io-wq and not
potentially lock previous mm and other resources for the time being.

It's safe to do because we wait for all requests of the current task to
complete, so no request will use tctx afterwards. Note, that
io_uring_files_cancel() may leave some requests for later reaping, so it
leaves tctx intact, that's ok as the task is dying anyway.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c            | 19 ++++++++++---------
 include/linux/io_uring.h |  2 +-
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d495735de2d9..4afe3dd1430c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8835,13 +8835,17 @@ static void io_uring_del_task_file(struct file *file)
 		fput(file);
 }
 
-static void io_uring_remove_task_files(struct io_uring_task *tctx)
+static void io_uring_clean_tctx(struct io_uring_task *tctx)
 {
 	struct file *file;
 	unsigned long index;
 
 	xa_for_each(&tctx->xa, index, file)
 		io_uring_del_task_file(file);
+	if (tctx->io_wq) {
+		io_wq_put_and_exit(tctx->io_wq);
+		tctx->io_wq = NULL;
+	}
 }
 
 void __io_uring_files_cancel(struct files_struct *files)
@@ -8856,13 +8860,8 @@ void __io_uring_files_cancel(struct files_struct *files)
 		io_uring_cancel_task_requests(file->private_data, files);
 	atomic_dec(&tctx->in_idle);
 
-	if (files) {
-		io_uring_remove_task_files(tctx);
-		if (tctx->io_wq) {
-			io_wq_put_and_exit(tctx->io_wq);
-			tctx->io_wq = NULL;
-		}
-	}
+	if (files)
+		io_uring_clean_tctx(tctx);
 }
 
 static s64 tctx_inflight(struct io_uring_task *tctx)
@@ -8954,7 +8953,9 @@ void __io_uring_task_cancel(void)
 
 	atomic_dec(&tctx->in_idle);
 
-	io_uring_remove_task_files(tctx);
+	io_uring_clean_tctx(tctx);
+	/* all current's requests should be gone, we can kill tctx */
+	__io_uring_free(current);
 }
 
 void __io_uring_unshare(void)
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index bfe2fcb4f478..796e0d7d186d 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -44,7 +44,7 @@ static inline void io_uring_unshare(void)
 }
 static inline void io_uring_task_cancel(void)
 {
-	if (current->io_uring && !xa_empty(&current->io_uring->xa))
+	if (current->io_uring)
 		__io_uring_task_cancel();
 }
 static inline void io_uring_files_cancel(struct files_struct *files)
-- 
2.30.1



* [PATCH 15/33] io_uring: remove unused argument 'tsk' from io_req_caches_free()
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We prune the full cache regardless, so get rid of the dead argument.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4afe3dd1430c..fa5b589b4516 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8395,7 +8395,7 @@ static void io_req_cache_free(struct list_head *list, struct task_struct *tsk)
 	}
 }
 
-static void io_req_caches_free(struct io_ring_ctx *ctx, struct task_struct *tsk)
+static void io_req_caches_free(struct io_ring_ctx *ctx)
 {
 	struct io_submit_state *submit_state = &ctx->submit_state;
 	struct io_comp_state *cs = &ctx->submit_state.comp;
@@ -8455,7 +8455,7 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
 
 	percpu_ref_exit(&ctx->refs);
 	free_uid(ctx->user);
-	io_req_caches_free(ctx, NULL);
+	io_req_caches_free(ctx);
 	if (ctx->hash_map)
 		io_wq_put_hash(ctx->hash_map);
 	kfree(ctx->cancel_hash);
@@ -8987,7 +8987,7 @@ static int io_uring_flush(struct file *file, void *data)
 
 	if (fatal_signal_pending(current) || (current->flags & PF_EXITING)) {
 		io_uring_cancel_task_requests(ctx, NULL);
-		io_req_caches_free(ctx, current);
+		io_req_caches_free(ctx);
 	}
 
 	io_run_ctx_fallback(ctx);
-- 
2.30.1



* [PATCH 16/33] io_uring: kill unnecessary REQ_F_WORK_INITIALIZED checks
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We're no longer checking anything that requires the work item to be
initialized, as we're not carrying any file related state there.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fa5b589b4516..3bd9198c5a86 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1080,8 +1080,6 @@ static bool io_match_task(struct io_kiocb *head,
 		return true;
 
 	io_for_each_link(req, head) {
-		if (!(req->flags & REQ_F_WORK_INITIALIZED))
-			continue;
 		if (req->file && req->file->f_op == &io_uring_fops)
 			return true;
 		if (req->task->files == files)
@@ -1800,15 +1798,7 @@ static void io_fail_links(struct io_kiocb *req)
 		trace_io_uring_fail_link(req, link);
 		io_cqring_fill_event(link, -ECANCELED);
 
-		/*
-		 * It's ok to free under spinlock as they're not linked anymore,
-		 * but avoid REQ_F_WORK_INITIALIZED because it may deadlock on
-		 * work.fs->lock.
-		 */
-		if (link->flags & REQ_F_WORK_INITIALIZED)
-			io_put_req_deferred(link, 2);
-		else
-			io_double_put_req(link);
+		io_put_req_deferred(link, 2);
 		link = nxt;
 	}
 	io_commit_cqring(ctx);
-- 
2.30.1



* [PATCH 17/33] io_uring: move cred assignment into io_issue_sqe()
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Pavel Begunkov

If we move it in there, then we no longer have to care about it in io-wq.
This means we can drop the cred handling in io-wq, and we can drop the
REQ_F_WORK_INITIALIZED flag and async init functions as that was the last
user of it since we moved to the new workers. Then we can also drop
io_wq_work->creds, and just hold the personality u16 in there instead.
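
The shape of the per-issue credential bracket this introduces, condensed
from the io_issue_sqe() hunk below (the uring_lock handling around the idr
lookup is elided here):

  const struct cred *creds = NULL;

  if (req->work.personality) {
          const struct cred *new_creds;

          new_creds = idr_find(&ctx->personality_idr, req->work.personality);
          if (!new_creds)
                  return -EINVAL;
          creds = override_creds(new_creds);
  }

  /* ... opcode switch runs with the personality's credentials ... */

  if (creds)
          revert_creds(creds);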

Suggested-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c    | 26 ------------------
 fs/io-wq.h    |  2 +-
 fs/io_uring.c | 75 +++++++++++++++------------------------------------
 3 files changed, 22 insertions(+), 81 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index c24473231eee..327e390bc0c2 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -52,9 +52,6 @@ struct io_worker {
 	struct io_wq_work *cur_work;
 	spinlock_t lock;
 
-	const struct cred *cur_creds;
-	const struct cred *saved_creds;
-
 	struct completion ref_done;
 	struct completion started;
 
@@ -180,11 +177,6 @@ static void io_worker_exit(struct io_worker *worker)
 	worker->flags = 0;
 	preempt_enable();
 
-	if (worker->saved_creds) {
-		revert_creds(worker->saved_creds);
-		worker->cur_creds = worker->saved_creds = NULL;
-	}
-
 	raw_spin_lock_irq(&wqe->lock);
 	if (flags & IO_WORKER_F_FREE)
 		hlist_nulls_del_rcu(&worker->nulls_node);
@@ -326,10 +318,6 @@ static void __io_worker_idle(struct io_wqe *wqe, struct io_worker *worker)
 		worker->flags |= IO_WORKER_F_FREE;
 		hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
 	}
-	if (worker->saved_creds) {
-		revert_creds(worker->saved_creds);
-		worker->cur_creds = worker->saved_creds = NULL;
-	}
 }
 
 static inline unsigned int io_get_work_hash(struct io_wq_work *work)
@@ -404,18 +392,6 @@ static void io_flush_signals(void)
 	}
 }
 
-static void io_wq_switch_creds(struct io_worker *worker,
-			       struct io_wq_work *work)
-{
-	const struct cred *old_creds = override_creds(work->creds);
-
-	worker->cur_creds = work->creds;
-	if (worker->saved_creds)
-		put_cred(old_creds); /* creds set by previous switch */
-	else
-		worker->saved_creds = old_creds;
-}
-
 static void io_assign_current_work(struct io_worker *worker,
 				   struct io_wq_work *work)
 {
@@ -465,8 +441,6 @@ static void io_worker_handle_work(struct io_worker *worker)
 			unsigned int hash = io_get_work_hash(work);
 
 			next_hashed = wq_next_work(work);
-			if (work->creds && worker->cur_creds != work->creds)
-				io_wq_switch_creds(worker, work);
 			wq->do_work(work);
 			io_assign_current_work(worker, NULL);
 
diff --git a/fs/io-wq.h b/fs/io-wq.h
index 57e478af1e1d..024a5f5f03af 100644
--- a/fs/io-wq.h
+++ b/fs/io-wq.h
@@ -79,8 +79,8 @@ static inline void wq_list_del(struct io_wq_work_list *list,
 
 struct io_wq_work {
 	struct io_wq_work_node list;
-	const struct cred *creds;
 	unsigned flags;
+	unsigned short personality;
 };
 
 static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3bd9198c5a86..7d309795d910 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -688,7 +688,6 @@ enum {
 	REQ_F_POLLED_BIT,
 	REQ_F_BUFFER_SELECTED_BIT,
 	REQ_F_NO_FILE_TABLE_BIT,
-	REQ_F_WORK_INITIALIZED_BIT,
 	REQ_F_LTIMEOUT_ACTIVE_BIT,
 	REQ_F_COMPLETE_INLINE_BIT,
 
@@ -730,8 +729,6 @@ enum {
 	REQ_F_BUFFER_SELECTED	= BIT(REQ_F_BUFFER_SELECTED_BIT),
 	/* doesn't need file table for this request */
 	REQ_F_NO_FILE_TABLE	= BIT(REQ_F_NO_FILE_TABLE_BIT),
-	/* io_wq_work is initialized */
-	REQ_F_WORK_INITIALIZED	= BIT(REQ_F_WORK_INITIALIZED_BIT),
 	/* linked timeout is active, i.e. prepared by link's head */
 	REQ_F_LTIMEOUT_ACTIVE	= BIT(REQ_F_LTIMEOUT_ACTIVE_BIT),
 	/* completion is deferred through io_comp_state */
@@ -1094,24 +1091,6 @@ static inline void req_set_fail_links(struct io_kiocb *req)
 		req->flags |= REQ_F_FAIL_LINK;
 }
 
-static inline void __io_req_init_async(struct io_kiocb *req)
-{
-	memset(&req->work, 0, sizeof(req->work));
-	req->flags |= REQ_F_WORK_INITIALIZED;
-}
-
-/*
- * Note: must call io_req_init_async() for the first time you
- * touch any members of io_wq_work.
- */
-static inline void io_req_init_async(struct io_kiocb *req)
-{
-	if (req->flags & REQ_F_WORK_INITIALIZED)
-		return;
-
-	__io_req_init_async(req);
-}
-
 static void io_ring_ctx_ref_free(struct percpu_ref *ref)
 {
 	struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);
@@ -1196,13 +1175,6 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
 
 static void io_req_clean_work(struct io_kiocb *req)
 {
-	if (!(req->flags & REQ_F_WORK_INITIALIZED))
-		return;
-
-	if (req->work.creds) {
-		put_cred(req->work.creds);
-		req->work.creds = NULL;
-	}
 	if (req->flags & REQ_F_INFLIGHT) {
 		struct io_ring_ctx *ctx = req->ctx;
 		struct io_uring_task *tctx = req->task->io_uring;
@@ -1215,8 +1187,6 @@ static void io_req_clean_work(struct io_kiocb *req)
 		if (atomic_read(&tctx->in_idle))
 			wake_up(&tctx->wait);
 	}
-
-	req->flags &= ~REQ_F_WORK_INITIALIZED;
 }
 
 static void io_req_track_inflight(struct io_kiocb *req)
@@ -1224,7 +1194,6 @@ static void io_req_track_inflight(struct io_kiocb *req)
 	struct io_ring_ctx *ctx = req->ctx;
 
 	if (!(req->flags & REQ_F_INFLIGHT)) {
-		io_req_init_async(req);
 		req->flags |= REQ_F_INFLIGHT;
 
 		spin_lock_irq(&ctx->inflight_lock);
@@ -1238,8 +1207,6 @@ static void io_prep_async_work(struct io_kiocb *req)
 	const struct io_op_def *def = &io_op_defs[req->opcode];
 	struct io_ring_ctx *ctx = req->ctx;
 
-	io_req_init_async(req);
-
 	if (req->flags & REQ_F_FORCE_ASYNC)
 		req->work.flags |= IO_WQ_WORK_CONCURRENT;
 
@@ -1250,8 +1217,6 @@ static void io_prep_async_work(struct io_kiocb *req)
 		if (def->unbound_nonreg_file)
 			req->work.flags |= IO_WQ_WORK_UNBOUND;
 	}
-	if (!req->work.creds)
-		req->work.creds = get_current_cred();
 }
 
 static void io_prep_async_link(struct io_kiocb *req)
@@ -3578,7 +3543,6 @@ static int __io_splice_prep(struct io_kiocb *req,
 		 * Splice operation will be punted aync, and here need to
 		 * modify io_wq_work.flags, so initialize io_wq_work firstly.
 		 */
-		io_req_init_async(req);
 		req->work.flags |= IO_WQ_WORK_UNBOUND;
 	}
 
@@ -5935,8 +5899,22 @@ static void __io_clean_op(struct io_kiocb *req)
 static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
+	const struct cred *creds = NULL;
 	int ret;
 
+	if (req->work.personality) {
+		const struct cred *new_creds;
+
+		if (!(issue_flags & IO_URING_F_NONBLOCK))
+			mutex_lock(&ctx->uring_lock);
+		new_creds = idr_find(&ctx->personality_idr, req->work.personality);
+		if (!(issue_flags & IO_URING_F_NONBLOCK))
+			mutex_unlock(&ctx->uring_lock);
+		if (!new_creds)
+			return -EINVAL;
+		creds = override_creds(new_creds);
+	}
+
 	switch (req->opcode) {
 	case IORING_OP_NOP:
 		ret = io_nop(req, issue_flags);
@@ -6043,6 +6021,9 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 		break;
 	}
 
+	if (creds)
+		revert_creds(creds);
+
 	if (ret)
 		return ret;
 
@@ -6206,18 +6187,10 @@ static struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
 static void __io_queue_sqe(struct io_kiocb *req)
 {
 	struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
-	const struct cred *old_creds = NULL;
 	int ret;
 
-	if ((req->flags & REQ_F_WORK_INITIALIZED) && req->work.creds &&
-	    req->work.creds != current_cred())
-		old_creds = override_creds(req->work.creds);
-
 	ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
 
-	if (old_creds)
-		revert_creds(old_creds);
-
 	/*
 	 * We async punt it if the file wasn't marked NOWAIT, or if the file
 	 * doesn't support non-blocking read/write attempts
@@ -6304,7 +6277,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 {
 	struct io_submit_state *state;
 	unsigned int sqe_flags;
-	int id, ret = 0;
+	int ret = 0;
 
 	req->opcode = READ_ONCE(sqe->opcode);
 	/* same numerical values with corresponding REQ_F_*, safe to copy */
@@ -6336,15 +6309,9 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	    !io_op_defs[req->opcode].buffer_select)
 		return -EOPNOTSUPP;
 
-	id = READ_ONCE(sqe->personality);
-	if (id) {
-		__io_req_init_async(req);
-		req->work.creds = idr_find(&ctx->personality_idr, id);
-		if (unlikely(!req->work.creds))
-			return -EINVAL;
-		get_cred(req->work.creds);
-	}
-
+	req->work.list.next = NULL;
+	req->work.flags = 0;
+	req->work.personality = READ_ONCE(sqe->personality);
 	state = &ctx->submit_state;
 
 	/*
-- 
2.30.1



* [PATCH 18/33] io_uring: kill unnecessary io_run_ctx_fallback() in io_ring_exit_work()
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

We already run the fallback task_work in io_uring_try_cancel_requests(),
no need to duplicate at ring exit explicitly.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7d309795d910..def9da1ddc3c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8519,7 +8519,6 @@ static void io_ring_exit_work(struct work_struct *work)
 	 */
 	do {
 		io_uring_try_cancel_requests(ctx, NULL, NULL);
-		io_run_ctx_fallback(ctx);
 	} while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20));
 	io_ring_ctx_free(ctx);
 }
-- 
2.30.1



* [PATCH 19/33] io_uring: kill io_uring_flush()
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

This was always a weird work-around for file referencing, and we don't
need it anymore. Get rid of it.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 47 -----------------------------------------------
 1 file changed, 47 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index def9da1ddc3c..f6bc0254cd01 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8932,52 +8932,6 @@ void __io_uring_unshare(void)
 	}
 }
 
-static int io_uring_flush(struct file *file, void *data)
-{
-	struct io_uring_task *tctx = current->io_uring;
-	struct io_ring_ctx *ctx = file->private_data;
-
-	/* Ignore helper thread files exit */
-	if (current->flags & PF_IO_WORKER)
-		return 0;
-
-	if (fatal_signal_pending(current) || (current->flags & PF_EXITING)) {
-		io_uring_cancel_task_requests(ctx, NULL);
-		io_req_caches_free(ctx);
-	}
-
-	io_run_ctx_fallback(ctx);
-
-	if (!tctx)
-		return 0;
-
-	/* we should have cancelled and erased it before PF_EXITING */
-	WARN_ON_ONCE((current->flags & PF_EXITING) &&
-		     xa_load(&tctx->xa, (unsigned long)file));
-
-	/*
-	 * fput() is pending, will be 2 if the only other ref is our potential
-	 * task file note. If the task is exiting, drop regardless of count.
-	 */
-	if (atomic_long_read(&file->f_count) != 2)
-		return 0;
-
-	if (ctx->flags & IORING_SETUP_SQPOLL) {
-		/* there is only one file note, which is owned by sqo_task */
-		WARN_ON_ONCE(ctx->sqo_task != current &&
-			     xa_load(&tctx->xa, (unsigned long)file));
-		/* sqo_dead check is for when this happens after cancellation */
-		WARN_ON_ONCE(ctx->sqo_task == current && !ctx->sqo_dead &&
-			     !xa_load(&tctx->xa, (unsigned long)file));
-
-		io_disable_sqo_submit(ctx);
-	}
-
-	if (!(ctx->flags & IORING_SETUP_SQPOLL) || ctx->sqo_task == current)
-		io_uring_del_task_file(file);
-	return 0;
-}
-
 static void *io_uring_validate_mmap_request(struct file *file,
 					    loff_t pgoff, size_t sz)
 {
@@ -9313,7 +9267,6 @@ static void io_uring_show_fdinfo(struct seq_file *m, struct file *f)
 
 static const struct file_operations io_uring_fops = {
 	.release	= io_uring_release,
-	.flush		= io_uring_flush,
 	.mmap		= io_uring_mmap,
 #ifndef CONFIG_MMU
 	.get_unmapped_area = io_uring_nommu_get_unmapped_area,
-- 
2.30.1



* [PATCH 20/33] io_uring: fix __tctx_task_work() ctx race
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, syzbot+a157ac7c03a56397f553, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

There is an unlikely but possible race using a freed context. That's
because req->task_work.func() can free a request, but we won't
necessarily find a completion in submit_state.comp and so all ctx refs
may be put by the time we do mutex_lock(&ctx->uring_lock);

There are several reasons why it can miss going through
submit_state.comp: 1) req->task_work.func() didn't complete it itself,
but punted to iowq (e.g. reissue) and it got freed later, or a similar
situation with it overflowing and getting flushed by someone else, or
being submitted to IRQ completion, 2) As we don't hold the uring_lock,
someone else can do io_submit_flush_completions() and put our ref.
3) Bugs and code obscurities, e.g. failing to propagate issue_flags
properly.

One example is as follows

  CPU1                                  |  CPU2
=======================================================================
@req->task_work.func()                  |
  -> @req overflowed,                   |
     so submit_state.comp.nr == 0       |
                                        | flush overflows, and free @req
                                        | ctx refs == 0, free it
ctx is dead, but we do                  |
	lock + flush + unlock           |

So take a ctx reference for each new ctx we see in __tctx_task_work(),
and don't release it until we've done all our flushing.

Fixes: 65453d1efbd2 ("io_uring: enable req cache for task_work items")
Reported-by: syzbot+a157ac7c03a56397f553@syzkaller.appspotmail.com
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
[axboe: fold in my one-liner and fix ref mismatch]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f6bc0254cd01..befa0f4dd575 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1800,6 +1800,18 @@ static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
 	return __io_req_find_next(req);
 }
 
+static void ctx_flush_and_put(struct io_ring_ctx *ctx)
+{
+	if (!ctx)
+		return;
+	if (ctx->submit_state.comp.nr) {
+		mutex_lock(&ctx->uring_lock);
+		io_submit_flush_completions(&ctx->submit_state.comp, ctx);
+		mutex_unlock(&ctx->uring_lock);
+	}
+	percpu_ref_put(&ctx->refs);
+}
+
 static bool __tctx_task_work(struct io_uring_task *tctx)
 {
 	struct io_ring_ctx *ctx = NULL;
@@ -1817,30 +1829,20 @@ static bool __tctx_task_work(struct io_uring_task *tctx)
 	node = list.first;
 	while (node) {
 		struct io_wq_work_node *next = node->next;
-		struct io_ring_ctx *this_ctx;
 		struct io_kiocb *req;
 
 		req = container_of(node, struct io_kiocb, io_task_work.node);
-		this_ctx = req->ctx;
-		req->task_work.func(&req->task_work);
-		node = next;
-
-		if (!ctx) {
-			ctx = this_ctx;
-		} else if (ctx != this_ctx) {
-			mutex_lock(&ctx->uring_lock);
-			io_submit_flush_completions(&ctx->submit_state.comp, ctx);
-			mutex_unlock(&ctx->uring_lock);
-			ctx = this_ctx;
+		if (req->ctx != ctx) {
+			ctx_flush_and_put(ctx);
+			ctx = req->ctx;
+			percpu_ref_get(&ctx->refs);
 		}
-	}
 
-	if (ctx && ctx->submit_state.comp.nr) {
-		mutex_lock(&ctx->uring_lock);
-		io_submit_flush_completions(&ctx->submit_state.comp, ctx);
-		mutex_unlock(&ctx->uring_lock);
+		req->task_work.func(&req->task_work);
+		node = next;
 	}
 
+	ctx_flush_and_put(ctx);
 	return list.first != NULL;
 }
 
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
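The per-ctx flush-and-put batching this patch introduces also works as a general pattern. Below is a minimal user-space C sketch of it under illustrative assumptions: struct ctx, struct req, ctx_get(), process() and the printf standing in for io_submit_flush_completions() are stand-ins, not kernel code, and the sketch only shows how each ctx is pinned until its own flush.

#include <stdio.h>

/* illustrative stand-ins for io_ring_ctx and a queued request */
struct ctx { int refs; int pending; const char *name; };
struct req { struct ctx *ctx; struct req *next; };

static void ctx_get(struct ctx *c) { c->refs++; }
static void ctx_put(struct ctx *c) { c->refs--; }

/* flush any batched completions for a ctx, then drop the ref we held */
static void ctx_flush_and_put(struct ctx *c)
{
	if (!c)
		return;
	if (c->pending) {
		printf("flushing %d completions for ctx %s\n", c->pending, c->name);
		c->pending = 0;
	}
	ctx_put(c);
}

/* walk a mixed list of requests, pinning each ctx until it is flushed */
static void process(struct req *node)
{
	struct ctx *cur = NULL;

	while (node) {
		struct req *next = node->next;

		if (node->ctx != cur) {
			ctx_flush_and_put(cur);	/* ctx switch: flush the old one */
			cur = node->ctx;
			ctx_get(cur);		/* hold it until its own flush */
		}
		cur->pending++;			/* "run" the request */
		node = next;
	}
	ctx_flush_and_put(cur);			/* final flush for the last ctx */
}

int main(void)
{
	struct ctx a = { .name = "A" }, b = { .name = "B" };
	struct req r3 = { &b, NULL }, r2 = { &a, &r3 }, r1 = { &a, &r2 };

	process(&r1);
	return 0;
}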

* [PATCH 21/33] io_uring: replace cmpxchg in fallback with xchg
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (19 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 20/33] io_uring: fix __tctx_task_work() ctx race Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 22/33] io_uring: ensure that SQPOLL thread is started for exit Jens Axboe
                   ` (11 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

io_run_ctx_fallback() can use xchg() instead of cmpxchg(). It's simpler
and faster.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index befa0f4dd575..5bd039aa97bb 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8484,15 +8484,11 @@ static int io_remove_personalities(int id, void *p, void *data)
 
 static bool io_run_ctx_fallback(struct io_ring_ctx *ctx)
 {
-	struct callback_head *work, *head, *next;
+	struct callback_head *work, *next;
 	bool executed = false;
 
 	do {
-		do {
-			head = NULL;
-			work = READ_ONCE(ctx->exit_task_work);
-		} while (cmpxchg(&ctx->exit_task_work, work, head) != work);
-
+		work = xchg(&ctx->exit_task_work, NULL);
 		if (!work)
 			break;
 
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
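The equivalence claimed above can be sketched in standalone C11: the loop keeps retrying until it atomically swaps the current pointer for NULL, while the exchange form does the same swap unconditionally. This is a user-space analogue using <stdatomic.h>, not the kernel's cmpxchg()/xchg(), and take_cmpxchg()/take_xchg() are illustrative names; memory-ordering details are ignored.

#include <stdatomic.h>
#include <stdio.h>

static _Atomic(int *) exit_task_work;

/* old pattern: re-read and retry until cmpxchg swaps the value for NULL */
static int *take_cmpxchg(void)
{
	int *work;

	do {
		work = atomic_load(&exit_task_work);
	} while (!atomic_compare_exchange_weak(&exit_task_work, &work, NULL));

	return work;
}

/* new pattern: unconditionally swap in NULL, get the old value back */
static int *take_xchg(void)
{
	return atomic_exchange(&exit_task_work, NULL);
}

int main(void)
{
	static int item = 42;

	atomic_store(&exit_task_work, &item);
	printf("cmpxchg grabbed %p\n", (void *)take_cmpxchg());

	atomic_store(&exit_task_work, &item);
	printf("xchg grabbed    %p\n", (void *)take_xchg());
	return 0;
}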

* [PATCH 22/33] io_uring: ensure that SQPOLL thread is started for exit
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (20 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 21/33] io_uring: replace cmpxchg in fallback with xchg Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 23/33] io_uring: ignore double poll add on the same waitqueue head Jens Axboe
                   ` (10 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, syzbot+fb5458330b4442f2090d

If we create it in a disabled state because IORING_SETUP_R_DISABLED is
set on ring creation, we need to ensure that we've kicked the thread if
we're exiting before it's been explicitly enabled. Otherwise we can run
into a deadlock where exit is waiting to park the SQPOLL thread, but the
SQPOLL thread itself is waiting to get a signal to start.

That results in the below trace of both tasks hung, waiting on each other:

INFO: task syz-executor458:8401 blocked for more than 143 seconds.
      Not tainted 5.11.0-next-20210226-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor458 state:D stack:27536 pid: 8401 ppid:  8400 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4324 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5075
 schedule+0xcf/0x270 kernel/sched/core.c:5154
 schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
 io_sq_thread_park fs/io_uring.c:7115 [inline]
 io_sq_thread_park+0xd5/0x130 fs/io_uring.c:7103
 io_uring_cancel_task_requests+0x24c/0xd90 fs/io_uring.c:8745
 __io_uring_files_cancel+0x110/0x230 fs/io_uring.c:8840
 io_uring_files_cancel include/linux/io_uring.h:47 [inline]
 do_exit+0x299/0x2a60 kernel/exit.c:780
 do_group_exit+0x125/0x310 kernel/exit.c:922
 __do_sys_exit_group kernel/exit.c:933 [inline]
 __se_sys_exit_group kernel/exit.c:931 [inline]
 __x64_sys_exit_group+0x3a/0x50 kernel/exit.c:931
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x43e899
RSP: 002b:00007ffe89376d48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 00000000004af2f0 RCX: 000000000043e899
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
RBP: 0000000000000000 R08: ffffffffffffffc0 R09: 0000000010000000
R10: 0000000000008011 R11: 0000000000000246 R12: 00000000004af2f0
R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000001
INFO: task iou-sqp-8401:8402 can't die for more than 143 seconds.
task:iou-sqp-8401    state:D stack:30272 pid: 8402 ppid:  8400 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4324 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5075
 schedule+0xcf/0x270 kernel/sched/core.c:5154
 schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
 io_sq_thread+0x27d/0x1ae0 fs/io_uring.c:6717
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task iou-sqp-8401:8402 blocked for more than 143 seconds.

Reported-by: syzbot+fb5458330b4442f2090d@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5bd039aa97bb..549a5c5ee0b5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7916,6 +7916,7 @@ static void io_sq_offload_start(struct io_ring_ctx *ctx)
 {
 	struct io_sq_data *sqd = ctx->sq_data;
 
+	ctx->flags &= ~IORING_SETUP_R_DISABLED;
 	if (ctx->flags & IORING_SETUP_SQPOLL)
 		complete(&sqd->startup);
 }
@@ -8692,6 +8693,8 @@ static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
 {
 	mutex_lock(&ctx->uring_lock);
 	ctx->sqo_dead = 1;
+	if (ctx->flags & IORING_SETUP_R_DISABLED)
+		io_sq_offload_start(ctx);
 	mutex_unlock(&ctx->uring_lock);
 
 	/* make sure callers enter the ring to get error */
@@ -9659,10 +9662,7 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
 	if (ctx->restrictions.registered)
 		ctx->restricted = 1;
 
-	ctx->flags &= ~IORING_SETUP_R_DISABLED;
-
 	io_sq_offload_start(ctx);
-
 	return 0;
 }
 
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
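The shape of the deadlock above (exit parks a thread that is itself still waiting to be started) can be reproduced and resolved in user space. The pthread sketch below is an analogue under illustrative assumptions, not kernel code: sq_thread(), start_thread() and the startup condition variable stand in for the SQPOLL thread and its startup completion. Build with -pthread.

#include <pthread.h>
#include <stdio.h>

/* illustrative stand-in for the SQPOLL "startup" completion */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  startup_cv = PTHREAD_COND_INITIALIZER;
static int started;

static void *sq_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!started)		/* created disabled: wait for explicit start */
		pthread_cond_wait(&startup_cv, &lock);
	pthread_mutex_unlock(&lock);
	puts("sq_thread: started, exiting");
	return NULL;
}

static void start_thread(void)
{
	pthread_mutex_lock(&lock);
	started = 1;
	pthread_cond_signal(&startup_cv);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, sq_thread, NULL);

	/*
	 * Exit path: the thread was created "disabled" and never enabled.
	 * Kick it first; joining (parking) it without this would wait
	 * forever, which is the shape of the deadlock the patch fixes.
	 */
	start_thread();
	pthread_join(t, NULL);
	return 0;
}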

* [PATCH 23/33] io_uring: ignore double poll add on the same waitqueue head
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (21 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 22/33] io_uring: ensure that SQPOLL thread is started for exit Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 24/33] io_uring: kill sqo_dead and sqo submission halting Jens Axboe
                   ` (9 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, stable, syzbot+28abd693db9e92c160d8

syzbot reports a deadlock, attempting to lock the same spinlock twice:

============================================
WARNING: possible recursive locking detected
5.11.0-syzkaller #0 Not tainted
--------------------------------------------
swapper/1/0 is trying to acquire lock:
ffff88801b2b1130 (&runtime->sleep){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:354 [inline]
ffff88801b2b1130 (&runtime->sleep){..-.}-{2:2}, at: io_poll_double_wake+0x25f/0x6a0 fs/io_uring.c:4960

but task is already holding lock:
ffff88801b2b3130 (&runtime->sleep){..-.}-{2:2}, at: __wake_up_common_lock+0xb4/0x130 kernel/sched/wait.c:137

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&runtime->sleep);
  lock(&runtime->sleep);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by swapper/1/0:
 #0: ffff888147474908 (&group->lock){..-.}-{2:2}, at: _snd_pcm_stream_lock_irqsave+0x9f/0xd0 sound/core/pcm_native.c:170
 #1: ffff88801b2b3130 (&runtime->sleep){..-.}-{2:2}, at: __wake_up_common_lock+0xb4/0x130 kernel/sched/wait.c:137

stack backtrace:
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.11.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0xfa/0x151 lib/dump_stack.c:120
 print_deadlock_bug kernel/locking/lockdep.c:2829 [inline]
 check_deadlock kernel/locking/lockdep.c:2872 [inline]
 validate_chain kernel/locking/lockdep.c:3661 [inline]
 __lock_acquire.cold+0x14c/0x3b4 kernel/locking/lockdep.c:4900
 lock_acquire kernel/locking/lockdep.c:5510 [inline]
 lock_acquire+0x1ab/0x730 kernel/locking/lockdep.c:5475
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
 spin_lock include/linux/spinlock.h:354 [inline]
 io_poll_double_wake+0x25f/0x6a0 fs/io_uring.c:4960
 __wake_up_common+0x147/0x650 kernel/sched/wait.c:108
 __wake_up_common_lock+0xd0/0x130 kernel/sched/wait.c:138
 snd_pcm_update_state+0x46a/0x540 sound/core/pcm_lib.c:203
 snd_pcm_update_hw_ptr0+0xa75/0x1a50 sound/core/pcm_lib.c:464
 snd_pcm_period_elapsed+0x160/0x250 sound/core/pcm_lib.c:1805
 dummy_hrtimer_callback+0x94/0x1b0 sound/drivers/dummy.c:378
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x609/0xe40 kernel/time/hrtimer.c:1583
 hrtimer_run_softirq+0x17b/0x360 kernel/time/hrtimer.c:1600
 __do_softirq+0x29b/0x9f6 kernel/softirq.c:345
 invoke_softirq kernel/softirq.c:221 [inline]
 __irq_exit_rcu kernel/softirq.c:422 [inline]
 irq_exit_rcu+0x134/0x200 kernel/softirq.c:434
 sysvec_apic_timer_interrupt+0x93/0xc0 arch/x86/kernel/apic/apic.c:1100
 </IRQ>
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:632
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:29 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
RIP: 0010:arch_irqs_disabled arch/x86/include/asm/irqflags.h:137 [inline]
RIP: 0010:acpi_safe_halt drivers/acpi/processor_idle.c:111 [inline]
RIP: 0010:acpi_idle_do_entry+0x1c9/0x250 drivers/acpi/processor_idle.c:516
Code: dd 38 6e f8 84 db 75 ac e8 54 32 6e f8 e8 0f 1c 74 f8 e9 0c 00 00 00 e8 45 32 6e f8 0f 00 2d 4e 4a c5 00 e8 39 32 6e f8 fb f4 <9c> 5b 81 e3 00 02 00 00 fa 31 ff 48 89 de e8 14 3a 6e f8 48 85 db
RSP: 0018:ffffc90000d47d18 EFLAGS: 00000293
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff8880115c3780 RSI: ffffffff89052537 RDI: 0000000000000000
RBP: ffff888141127064 R08: 0000000000000001 R09: 0000000000000001
R10: ffffffff81794168 R11: 0000000000000000 R12: 0000000000000001
R13: ffff888141127000 R14: ffff888141127064 R15: ffff888143331804
 acpi_idle_enter+0x361/0x500 drivers/acpi/processor_idle.c:647
 cpuidle_enter_state+0x1b1/0xc80 drivers/cpuidle/cpuidle.c:237
 cpuidle_enter+0x4a/0xa0 drivers/cpuidle/cpuidle.c:351
 call_cpuidle kernel/sched/idle.c:158 [inline]
 cpuidle_idle_call kernel/sched/idle.c:239 [inline]
 do_idle+0x3e1/0x590 kernel/sched/idle.c:300
 cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:397
 start_secondary+0x274/0x350 arch/x86/kernel/smpboot.c:272
 secondary_startup_64_no_verify+0xb0/0xbb

which is due to the driver doing poll_wait() twice on the same
wait_queue_head. That is perfectly valid, but from checking the rest
of the kernel tree, it's the only driver that does this.

We can handle this just fine; we just need to ignore the second addition,
as we'll get woken just fine on the first one.

Cc: stable@vger.kernel.org # 5.8+
Fixes: 18bceab101ad ("io_uring: allow POLL_ADD with double poll_wait() users")
Reported-by: syzbot+28abd693db9e92c160d8@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 549a5c5ee0b5..bdaeda5eefd5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4959,6 +4959,9 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
 			pt->error = -EINVAL;
 			return;
 		}
+		/* double add on the same waitqueue head, ignore */
+		if (poll->head == head)
+			return;
 		poll = kmalloc(sizeof(*poll), GFP_ATOMIC);
 		if (!poll) {
 			pt->error = -ENOMEM;
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
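A tiny user-space C sketch of the dedup check added above is shown below. struct wait_head, struct poll_entry and queue_proc() are illustrative stand-ins for the waitqueue head, the poll table entry and __io_queue_proc(); everything except the "same head, ignore the second add" case is left out.

#include <stdio.h>

/* illustrative stand-ins for a wait queue head and a poll table entry */
struct wait_head  { int waiters; };
struct poll_entry { struct wait_head *head; };

/* attach to a wait queue head, ignoring a second add on the same head */
static void queue_proc(struct poll_entry *pe, struct wait_head *head)
{
	/* double add on the same waitqueue head: one wakeup is enough */
	if (pe->head == head)
		return;

	pe->head = head;
	head->waiters++;
}

int main(void)
{
	struct wait_head sleep_q = { 0 };
	struct poll_entry pe = { 0 };

	queue_proc(&pe, &sleep_q);
	queue_proc(&pe, &sleep_q);	/* driver calls poll_wait() twice */
	printf("waiters on head: %d (expect 1)\n", sleep_q.waiters);
	return 0;
}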

* [PATCH 24/33] io_uring: kill sqo_dead and sqo submission halting
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (22 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 23/33] io_uring: ignore double poll add on the same waitqueue head Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 25/33] io_uring: remove sqo_task Jens Axboe
                   ` (8 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

As the SQPOLL task doesn't poke into ->sqo_task anymore, there is no need
to kill SQPOLL submission when the master task exits. Previously that was
necessary to avoid races accessing sqo_task->files while they were being
removed.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
[axboe: don't forget to enable SQPOLL before exit, if started disabled]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 45 ++++++++-------------------------------------
 1 file changed, 8 insertions(+), 37 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index bdaeda5eefd5..da90d877afd4 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -338,7 +338,6 @@ struct io_ring_ctx {
 		unsigned int		drain_next: 1;
 		unsigned int		eventfd_async: 1;
 		unsigned int		restricted: 1;
-		unsigned int		sqo_dead: 1;
 		unsigned int		sqo_exec: 1;
 
 		/*
@@ -1967,7 +1966,7 @@ static void __io_req_task_submit(struct io_kiocb *req)
 
 	/* ctx stays valid until unlock, even if we drop all ours ctx->refs */
 	mutex_lock(&ctx->uring_lock);
-	if (!ctx->sqo_dead && !(current->flags & PF_EXITING) && !current->in_execve)
+	if (!(current->flags & PF_EXITING) && !current->in_execve)
 		__io_queue_sqe(req);
 	else
 		__io_req_task_cancel(req, -EFAULT);
@@ -6578,8 +6577,7 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
 		if (!list_empty(&ctx->iopoll_list))
 			io_do_iopoll(ctx, &nr_events, 0);
 
-		if (to_submit && !ctx->sqo_dead &&
-		    likely(!percpu_ref_is_dying(&ctx->refs)))
+		if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)))
 			ret = io_submit_sqes(ctx, to_submit);
 		mutex_unlock(&ctx->uring_lock);
 	}
@@ -7818,7 +7816,7 @@ static int io_sq_thread_fork(struct io_sq_data *sqd, struct io_ring_ctx *ctx)
 
 	clear_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
 	reinit_completion(&sqd->completion);
-	ctx->sqo_dead = ctx->sqo_exec = 0;
+	ctx->sqo_exec = 0;
 	sqd->task_pid = current->pid;
 	current->flags |= PF_IO_WORKER;
 	ret = io_wq_fork_thread(io_sq_thread, sqd);
@@ -8529,10 +8527,6 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 {
 	mutex_lock(&ctx->uring_lock);
 	percpu_ref_kill(&ctx->refs);
-
-	if (WARN_ON_ONCE((ctx->flags & IORING_SETUP_SQPOLL) && !ctx->sqo_dead))
-		ctx->sqo_dead = 1;
-
 	/* if force is set, the ring is going away. always drop after that */
 	ctx->cq_overflow_flushed = 1;
 	if (ctx->rings)
@@ -8692,19 +8686,6 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 	}
 }
 
-static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
-{
-	mutex_lock(&ctx->uring_lock);
-	ctx->sqo_dead = 1;
-	if (ctx->flags & IORING_SETUP_R_DISABLED)
-		io_sq_offload_start(ctx);
-	mutex_unlock(&ctx->uring_lock);
-
-	/* make sure callers enter the ring to get error */
-	if (ctx->rings)
-		io_ring_set_wakeup_flag(ctx);
-}
-
 /*
  * We need to iteratively cancel requests, in case a request has dependent
  * hard links. These persist even for failure of cancelations, hence keep
@@ -8717,7 +8698,11 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
 	bool did_park = false;
 
 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
-		io_disable_sqo_submit(ctx);
+		/* never started, nothing to cancel */
+		if (ctx->flags & IORING_SETUP_R_DISABLED) {
+			io_sq_offload_start(ctx);
+			return;
+		}
 		did_park = io_sq_thread_park(ctx->sq_data);
 		if (did_park) {
 			task = ctx->sq_data->thread;
@@ -8838,7 +8823,6 @@ static void io_uring_cancel_sqpoll(struct io_ring_ctx *ctx)
 
 	if (!sqd)
 		return;
-	io_disable_sqo_submit(ctx);
 	if (!io_sq_thread_park(sqd))
 		return;
 	tctx = ctx->sq_data->thread->io_uring;
@@ -8883,7 +8867,6 @@ void __io_uring_task_cancel(void)
 	/* make sure overflow events are dropped */
 	atomic_inc(&tctx->in_idle);
 
-	/* trigger io_disable_sqo_submit() */
 	if (tctx->sqpoll) {
 		struct file *file;
 		unsigned long index;
@@ -9014,22 +8997,14 @@ static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
 	do {
 		if (!io_sqring_full(ctx))
 			break;
-
 		prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE);
 
-		if (unlikely(ctx->sqo_dead)) {
-			ret = -EOWNERDEAD;
-			goto out;
-		}
-
 		if (!io_sqring_full(ctx))
 			break;
-
 		schedule();
 	} while (!signal_pending(current));
 
 	finish_wait(&ctx->sqo_sq_wait, &wait);
-out:
 	return ret;
 }
 
@@ -9115,8 +9090,6 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 				goto out;
 		}
 		ret = -EOWNERDEAD;
-		if (unlikely(ctx->sqo_dead))
-			goto out;
 		if (flags & IORING_ENTER_SQ_WAKEUP)
 			wake_up(&ctx->sq_data->wait);
 		if (flags & IORING_ENTER_SQ_WAIT) {
@@ -9488,7 +9461,6 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
 	 */
 	ret = io_uring_install_fd(ctx, file);
 	if (ret < 0) {
-		io_disable_sqo_submit(ctx);
 		/* fput will clean it up */
 		fput(file);
 		return ret;
@@ -9497,7 +9469,6 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
 	trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
 	return ret;
 err:
-	io_disable_sqo_submit(ctx);
 	io_ring_ctx_wait_and_kill(ctx);
 	return ret;
 }
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 25/33] io_uring: remove sqo_task
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (23 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 24/33] io_uring: kill sqo_dead and sqo submission halting Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 26/33] io-wq: fix error path leak of buffered write hash map Jens Axboe
                   ` (7 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

Now sqo_task is only used for a warning that is no longer interesting
since sqo_dead is gone, so remove all of that, including ctx->sqo_task.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index da90d877afd4..28a360aac4a3 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -379,11 +379,6 @@ struct io_ring_ctx {
 
 	struct io_rings	*rings;
 
-	/*
-	 * For SQPOLL usage
-	 */
-	struct task_struct	*sqo_task;
-
 	/* Only used for accounting purposes */
 	struct mm_struct	*mm_account;
 
@@ -8747,10 +8742,6 @@ static int io_uring_add_task_file(struct io_ring_ctx *ctx, struct file *file)
 				fput(file);
 				return ret;
 			}
-
-			/* one and only SQPOLL file note, held by sqo_task */
-			WARN_ON_ONCE((ctx->flags & IORING_SETUP_SQPOLL) &&
-				     current != ctx->sqo_task);
 		}
 		tctx->last = file;
 	}
@@ -9398,7 +9389,6 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
 	ctx->compat = in_compat_syscall();
 	if (!capable(CAP_IPC_LOCK))
 		ctx->user = get_uid(current_user());
-	ctx->sqo_task = current;
 
 	/*
 	 * This is just grabbed for accounting purposes. When a process exits,
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 26/33] io-wq: fix error path leak of buffered write hash map
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (24 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 25/33] io_uring: remove sqo_task Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 27/33] io_uring: fix -EAGAIN retry with IOPOLL Jens Axboe
                   ` (6 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

The 'err' path should include the hash put; we've already grabbed a
reference by the time we get that far.

Fixes: e941894eae31 ("io-wq: make buffered file write hashed work map per-ctx")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 327e390bc0c2..1fdb2b621b51 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -1047,8 +1047,8 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
 	if (!ret)
 		return wq;
 
-	io_wq_put_hash(data->hash);
 err:
+	io_wq_put_hash(data->hash);
 	cpuhp_state_remove_instance_nocalls(io_wq_online, &wq->cpuhp_node);
 	for_each_node(node)
 		kfree(wq->wqes[node]);
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
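The cleanup-ordering rule behind this fix (anything grabbed before a failure point must be released under the error label, so every failure path drops it) looks like the following in a self-contained C sketch. struct hash, hash_get()/hash_put() and wq_create() are illustrative stand-ins, not the io-wq code.

#include <stdio.h>
#include <stdlib.h>

/* illustrative stand-in for the refcounted buffered-write hash map */
struct hash { int refs; };

static void hash_get(struct hash *h) { h->refs++; }
static void hash_put(struct hash *h) { h->refs--; }

static int wq_create(struct hash *h, int fail_early)
{
	int *nodes = NULL;

	hash_get(h);			/* reference grabbed up front */

	if (fail_early)
		goto err;		/* must undo the get above */

	nodes = calloc(4, sizeof(*nodes));
	if (!nodes)
		goto err;

	free(nodes);			/* success path (simplified) */
	return 0;
err:
	hash_put(h);			/* keep the put under the label so */
	free(nodes);			/* every failure path drops the ref */
	return -1;
}

int main(void)
{
	struct hash h = { 0 };

	wq_create(&h, 1);
	printf("refs after failed create: %d (expect 0)\n", h.refs);
	return 0;
}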

* [PATCH 27/33] io_uring: fix -EAGAIN retry with IOPOLL
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (25 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 26/33] io-wq: fix error path leak of buffered write hash map Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 28/33] io_uring: choose right tctx->io_wq for try cancel Jens Axboe
                   ` (5 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, stable, Abaci Robot, Xiaoguang Wang

We no longer revert the iovec on -EIOCBQUEUED, see commit ab2125df921d,
and this started causing issues for IOPOLL on devices that run out of
request slots. Turns out that outside of needing a revert for those, we
also had a bug where we didn't properly set up the retry inside the
submission path. That could cause re-import of the iovec, if any, and
that could lead to spurious results if the application had those
allocated on the stack.

Catch -EAGAIN retry and make the iovec stable for IOPOLL, just like we do
for !IOPOLL retries.

Cc: <stable@vger.kernel.org> # 5.9+
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Reported-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 36 +++++++++++++++++++++++++++++++-----
 1 file changed, 31 insertions(+), 5 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 28a360aac4a3..c765b7fba8a1 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2423,23 +2423,32 @@ static bool io_resubmit_prep(struct io_kiocb *req)
 		return false;
 	return !io_setup_async_rw(req, iovec, inline_vecs, &iter, false);
 }
-#endif
 
-static bool io_rw_reissue(struct io_kiocb *req)
+static bool io_rw_should_reissue(struct io_kiocb *req)
 {
-#ifdef CONFIG_BLOCK
 	umode_t mode = file_inode(req->file)->i_mode;
+	struct io_ring_ctx *ctx = req->ctx;
 
 	if (!S_ISBLK(mode) && !S_ISREG(mode))
 		return false;
-	if ((req->flags & REQ_F_NOWAIT) || io_wq_current_is_worker())
+	if ((req->flags & REQ_F_NOWAIT) || (io_wq_current_is_worker() &&
+	    !(ctx->flags & IORING_SETUP_IOPOLL)))
 		return false;
 	/*
 	 * If ref is dying, we might be running poll reap from the exit work.
 	 * Don't attempt to reissue from that path, just let it fail with
 	 * -EAGAIN.
 	 */
-	if (percpu_ref_is_dying(&req->ctx->refs))
+	if (percpu_ref_is_dying(&ctx->refs))
+		return false;
+	return true;
+}
+#endif
+
+static bool io_rw_reissue(struct io_kiocb *req)
+{
+#ifdef CONFIG_BLOCK
+	if (!io_rw_should_reissue(req))
 		return false;
 
 	lockdep_assert_held(&req->ctx->uring_lock);
@@ -2482,6 +2491,19 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
 {
 	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
 
+#ifdef CONFIG_BLOCK
+	/* Rewind iter, if we have one. iopoll path resubmits as usual */
+	if (res == -EAGAIN && io_rw_should_reissue(req)) {
+		struct io_async_rw *rw = req->async_data;
+
+		if (rw)
+			iov_iter_revert(&rw->iter,
+					req->result - iov_iter_count(&rw->iter));
+		else if (!io_resubmit_prep(req))
+			res = -EIO;
+	}
+#endif
+
 	if (kiocb->ki_flags & IOCB_WRITE)
 		kiocb_end_write(req);
 
@@ -3230,6 +3252,8 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
 	ret = io_iter_do_read(req, iter);
 
 	if (ret == -EIOCBQUEUED) {
+		if (req->async_data)
+			iov_iter_revert(iter, io_size - iov_iter_count(iter));
 		goto out_free;
 	} else if (ret == -EAGAIN) {
 		/* IOPOLL retry should happen for io-wq threads */
@@ -3361,6 +3385,8 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
 	/* no retry on NONBLOCK nor RWF_NOWAIT */
 	if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
 		goto done;
+	if (ret2 == -EIOCBQUEUED && req->async_data)
+		iov_iter_revert(iter, io_size - iov_iter_count(iter));
 	if (!force_nonblock || ret2 != -EAGAIN) {
 		/* IOPOLL retry should happen for io-wq threads */
 		if ((req->ctx->flags & IORING_SETUP_IOPOLL) && ret2 == -EAGAIN)
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
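The revert arithmetic used above, iov_iter_revert(iter, io_size - iov_iter_count(iter)), amounts to rewinding the iterator by however many bytes were consumed so a later retry sees the original buffers again. Below is a toy user-space sketch of that bookkeeping; struct iter, iter_consume() and iter_revert() are illustrative stand-ins for the kernel's iov_iter helpers, not their real implementation.

#include <stdio.h>

/* toy iterator: a data pointer plus a count of bytes left to consume */
struct iter { const char *buf; size_t count; };

static size_t iter_consume(struct iter *it, size_t n)
{
	if (n > it->count)
		n = it->count;
	it->buf += n;
	it->count -= n;
	return n;
}

/* rewind the iterator by 'unroll' bytes, like iov_iter_revert() */
static void iter_revert(struct iter *it, size_t unroll)
{
	it->buf -= unroll;
	it->count += unroll;
}

int main(void)
{
	const char data[] = "hello, ring";
	size_t io_size = sizeof(data) - 1;
	struct iter it = { data, io_size };

	iter_consume(&it, 5);		/* partial progress before -EAGAIN */

	/* revert by (requested size) - (bytes remaining), as the patch does */
	iter_revert(&it, io_size - it.count);

	printf("count restored: %zu of %zu\n", it.count, io_size);
	return 0;
}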

* [PATCH 28/33] io_uring: choose right tctx->io_wq for try cancel
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (26 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 27/33] io_uring: fix -EAGAIN retry with IOPOLL Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 29/33] io_uring: inline io_req_clean_work() Jens Axboe
                   ` (4 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

When we cancel SQPOLL, @task in io_uring_try_cancel_requests() will
differ from current. Use the right tctx from the passed-in @task, and don't
forget that it can be NULL when the io_uring ctx exits.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c765b7fba8a1..d5e546acae7d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8636,7 +8636,8 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 					 struct files_struct *files)
 {
 	struct io_task_cancel cancel = { .task = task, .files = files, };
-	struct io_uring_task *tctx = current->io_uring;
+	struct task_struct *tctx_task = task ?: current;
+	struct io_uring_task *tctx = tctx_task->io_uring;
 
 	while (1) {
 		enum io_wq_cancel cret;
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 29/33] io_uring: inline io_req_clean_work()
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (27 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 28/33] io_uring: choose right tctx->io_wq for try cancel Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 30/33] io_uring: inline __io_queue_async_work() Jens Axboe
                   ` (3 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

Inline io_req_clean_work(); it's less code and makes it easier to analyse
tctx dependencies and ref usage.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 30 +++++++++++++-----------------
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d5e546acae7d..ef6f225762e4 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1167,22 +1167,6 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
 	return false;
 }
 
-static void io_req_clean_work(struct io_kiocb *req)
-{
-	if (req->flags & REQ_F_INFLIGHT) {
-		struct io_ring_ctx *ctx = req->ctx;
-		struct io_uring_task *tctx = req->task->io_uring;
-		unsigned long flags;
-
-		spin_lock_irqsave(&ctx->inflight_lock, flags);
-		list_del(&req->inflight_entry);
-		spin_unlock_irqrestore(&ctx->inflight_lock, flags);
-		req->flags &= ~REQ_F_INFLIGHT;
-		if (atomic_read(&tctx->in_idle))
-			wake_up(&tctx->wait);
-	}
-}
-
 static void io_req_track_inflight(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -1671,7 +1655,19 @@ static void io_dismantle_req(struct io_kiocb *req)
 		io_put_file(req, req->file, (req->flags & REQ_F_FIXED_FILE));
 	if (req->fixed_rsrc_refs)
 		percpu_ref_put(req->fixed_rsrc_refs);
-	io_req_clean_work(req);
+
+	if (req->flags & REQ_F_INFLIGHT) {
+		struct io_ring_ctx *ctx = req->ctx;
+		struct io_uring_task *tctx = req->task->io_uring;
+		unsigned long flags;
+
+		spin_lock_irqsave(&ctx->inflight_lock, flags);
+		list_del(&req->inflight_entry);
+		spin_unlock_irqrestore(&ctx->inflight_lock, flags);
+		req->flags &= ~REQ_F_INFLIGHT;
+		if (atomic_read(&tctx->in_idle))
+			wake_up(&tctx->wait);
+	}
 }
 
 static inline void io_put_task(struct task_struct *task, int nr)
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 30/33] io_uring: inline __io_queue_async_work()
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (28 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 29/33] io_uring: inline io_req_clean_work() Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 31/33] io_uring: remove extra in_idle wake up Jens Axboe
                   ` (2 subsequent siblings)
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

__io_queue_async_work() is only called from io_queue_async_work(), so
inline it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ef6f225762e4..7ae413736c04 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1205,7 +1205,7 @@ static void io_prep_async_link(struct io_kiocb *req)
 		io_prep_async_work(cur);
 }
 
-static struct io_kiocb *__io_queue_async_work(struct io_kiocb *req)
+static void io_queue_async_work(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_kiocb *link = io_prep_linked_timeout(req);
@@ -1216,18 +1216,9 @@ static struct io_kiocb *__io_queue_async_work(struct io_kiocb *req)
 
 	trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req,
 					&req->work, req->flags);
-	io_wq_enqueue(tctx->io_wq, &req->work);
-	return link;
-}
-
-static void io_queue_async_work(struct io_kiocb *req)
-{
-	struct io_kiocb *link;
-
 	/* init ->work of the whole link before punting */
 	io_prep_async_link(req);
-	link = __io_queue_async_work(req);
-
+	io_wq_enqueue(tctx->io_wq, &req->work);
 	if (link)
 		io_queue_linked_timeout(link);
 }
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 31/33] io_uring: remove extra in_idle wake up
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (29 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 30/33] io_uring: inline __io_queue_async_work() Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:26 ` [PATCH 32/33] io_uring: ensure that threads freeze on suspend Jens Axboe
  2021-03-04  0:27 ` [PATCH 33/33] io-wq: ensure all pending work is canceled on exit Jens Axboe
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Pavel Begunkov, Jens Axboe

From: Pavel Begunkov <asml.silence@gmail.com>

io_dismantle_req() is always followed by io_put_task(), which already does
proper in_idle wake ups, so we can skip waking the owner task in
io_dismantle_req(). The rule is simpler now: do io_put_task() shortly
after ending a request, and it will be fine.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7ae413736c04..d9161b181a21 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1649,18 +1649,16 @@ static void io_dismantle_req(struct io_kiocb *req)
 
 	if (req->flags & REQ_F_INFLIGHT) {
 		struct io_ring_ctx *ctx = req->ctx;
-		struct io_uring_task *tctx = req->task->io_uring;
 		unsigned long flags;
 
 		spin_lock_irqsave(&ctx->inflight_lock, flags);
 		list_del(&req->inflight_entry);
 		spin_unlock_irqrestore(&ctx->inflight_lock, flags);
 		req->flags &= ~REQ_F_INFLIGHT;
-		if (atomic_read(&tctx->in_idle))
-			wake_up(&tctx->wait);
 	}
 }
 
+/* must be called somewhat shortly after putting a request */
 static inline void io_put_task(struct task_struct *task, int nr)
 {
 	struct io_uring_task *tctx = task->io_uring;
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 32/33] io_uring: ensure that threads freeze on suspend
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (30 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 31/33] io_uring: remove extra in_idle wake up Jens Axboe
@ 2021-03-04  0:26 ` Jens Axboe
  2021-03-04  0:27 ` [PATCH 33/33] io-wq: ensure all pending work is canceled on exit Jens Axboe
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, Alex Xu

Alex reports that his system fails to suspend using 5.12-rc1, with the
following dump:

[  240.650300] PM: suspend entry (deep)
[  240.650748] Filesystems sync: 0.000 seconds
[  240.725605] Freezing user space processes ...
[  260.739483] Freezing of tasks failed after 20.013 seconds (3 tasks refusing to freeze, wq_busy=0):
[  260.739497] task:iou-mgr-446     state:S stack:    0 pid:  516 ppid:   439 flags:0x00004224
[  260.739504] Call Trace:
[  260.739507]  ? sysvec_apic_timer_interrupt+0xb/0x81
[  260.739515]  ? pick_next_task_fair+0x197/0x1cde
[  260.739519]  ? sysvec_reschedule_ipi+0x2f/0x6a
[  260.739522]  ? asm_sysvec_reschedule_ipi+0x12/0x20
[  260.739525]  ? __schedule+0x57/0x6d6
[  260.739529]  ? del_timer_sync+0xb9/0x115
[  260.739533]  ? schedule+0x63/0xd5
[  260.739536]  ? schedule_timeout+0x219/0x356
[  260.739540]  ? __next_timer_interrupt+0xf1/0xf1
[  260.739544]  ? io_wq_manager+0x73/0xb1
[  260.739549]  ? io_wq_create+0x262/0x262
[  260.739553]  ? ret_from_fork+0x22/0x30
[  260.739557] task:iou-mgr-517     state:S stack:    0 pid:  522 ppid:   439 flags:0x00004224
[  260.739561] Call Trace:
[  260.739563]  ? sysvec_apic_timer_interrupt+0xb/0x81
[  260.739566]  ? pick_next_task_fair+0x16f/0x1cde
[  260.739569]  ? sysvec_apic_timer_interrupt+0xb/0x81
[  260.739571]  ? asm_sysvec_apic_timer_interrupt+0x12/0x20
[  260.739574]  ? __schedule+0x5b7/0x6d6
[  260.739578]  ? del_timer_sync+0x70/0x115
[  260.739581]  ? schedule_timeout+0x211/0x356
[  260.739585]  ? __next_timer_interrupt+0xf1/0xf1
[  260.739588]  ? io_wq_check_workers+0x15/0x11f
[  260.739592]  ? io_wq_manager+0x69/0xb1
[  260.739596]  ? io_wq_create+0x262/0x262
[  260.739600]  ? ret_from_fork+0x22/0x30
[  260.739603] task:iou-wrk-517     state:S stack:    0 pid:  523 ppid:   439 flags:0x00004224
[  260.739607] Call Trace:
[  260.739609]  ? __schedule+0x5b7/0x6d6
[  260.739614]  ? schedule+0x63/0xd5
[  260.739617]  ? schedule_timeout+0x219/0x356
[  260.739621]  ? __next_timer_interrupt+0xf1/0xf1
[  260.739624]  ? task_thread.isra.0+0x148/0x3af
[  260.739628]  ? task_thread_unbound+0xa/0xa
[  260.739632]  ? task_thread_bound+0x7/0x7
[  260.739636]  ? ret_from_fork+0x22/0x30
[  260.739647] OOM killer enabled.
[  260.739648] Restarting tasks ... done.
[  260.740077] PM: suspend exit

Play nice and ensure that any thread we create will call try_to_freeze()
at an opportune time so that memory suspend can proceed. For the io-wq
worker threads, mark them as PF_NOFREEZE. They could potentially be
blocked for a long time.

Reported-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
Tested-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c    | 3 +++
 fs/io_uring.c | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 1fdb2b621b51..a44bd22c045e 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -16,6 +16,7 @@
 #include <linux/rculist_nulls.h>
 #include <linux/cpu.h>
 #include <linux/tracehook.h>
+#include <linux/freezer.h>
 
 #include "../kernel/sched/sched.h"
 #include "io-wq.h"
@@ -263,6 +264,7 @@ static void io_wqe_dec_running(struct io_worker *worker)
 
 static void io_worker_start(struct io_worker *worker)
 {
+	current->flags |= PF_NOFREEZE;
 	worker->flags |= (IO_WORKER_F_UP | IO_WORKER_F_RUNNING);
 	io_wqe_inc_running(worker);
 	complete(&worker->started);
@@ -731,6 +733,7 @@ static int io_wq_manager(void *data)
 		set_current_state(TASK_INTERRUPTIBLE);
 		io_wq_check_workers(wq);
 		schedule_timeout(HZ);
+		try_to_freeze();
 		if (fatal_signal_pending(current))
 			set_bit(IO_WQ_BIT_EXIT, &wq->state);
 	} while (!test_bit(IO_WQ_BIT_EXIT, &wq->state));
diff --git a/fs/io_uring.c b/fs/io_uring.c
index d9161b181a21..7cf8d4a99d91 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -74,13 +74,11 @@
 #include <linux/fsnotify.h>
 #include <linux/fadvise.h>
 #include <linux/eventpoll.h>
-#include <linux/fs_struct.h>
 #include <linux/splice.h>
 #include <linux/task_work.h>
 #include <linux/pagemap.h>
 #include <linux/io_uring.h>
-#include <linux/blk-cgroup.h>
-#include <linux/audit.h>
+#include <linux/freezer.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/io_uring.h>
@@ -6736,6 +6734,7 @@ static int io_sq_thread(void *data)
 				io_ring_set_wakeup_flag(ctx);
 
 			schedule();
+			try_to_freeze();
 			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
 				io_ring_clear_wakeup_flag(ctx);
 		}
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 33/33] io-wq: ensure all pending work is canceled on exit
  2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
                   ` (31 preceding siblings ...)
  2021-03-04  0:26 ` [PATCH 32/33] io_uring: ensure that threads freeze on suspend Jens Axboe
@ 2021-03-04  0:27 ` Jens Axboe
  32 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04  0:27 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, syzbot+91b4b56ead187d35c9d3

If we race on shutting down the io-wq, then we should ensure that any
work that was queued after the workers shut down is canceled. Harden the
add-work check a bit too, checking for IO_WQ_BIT_EXIT and canceling if
it's set.

Add a WARN_ON() for having any work before we kill the io-wq context.

Reported-by: syzbot+91b4b56ead187d35c9d3@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io-wq.c | 42 +++++++++++++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 9 deletions(-)

diff --git a/fs/io-wq.c b/fs/io-wq.c
index a44bd22c045e..6d57f9b80213 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -129,6 +129,17 @@ struct io_wq {
 
 static enum cpuhp_state io_wq_online;
 
+struct io_cb_cancel_data {
+	work_cancel_fn *fn;
+	void *data;
+	int nr_running;
+	int nr_pending;
+	bool cancel_all;
+};
+
+static void io_wqe_cancel_pending_work(struct io_wqe *wqe,
+				       struct io_cb_cancel_data *match);
+
 static bool io_worker_get(struct io_worker *worker)
 {
 	return refcount_inc_not_zero(&worker->ref);
@@ -713,6 +724,23 @@ static void io_wq_check_workers(struct io_wq *wq)
 	}
 }
 
+static bool io_wq_work_match_all(struct io_wq_work *work, void *data)
+{
+	return true;
+}
+
+static void io_wq_cancel_pending(struct io_wq *wq)
+{
+	struct io_cb_cancel_data match = {
+		.fn		= io_wq_work_match_all,
+		.cancel_all	= true,
+	};
+	int node;
+
+	for_each_node(node)
+		io_wqe_cancel_pending_work(wq->wqes[node], &match);
+}
+
 /*
  * Manager thread. Tasked with creating new workers, if we need them.
  */
@@ -748,6 +776,8 @@ static int io_wq_manager(void *data)
 	/* we might not ever have created any workers */
 	if (atomic_read(&wq->worker_refs))
 		wait_for_completion(&wq->worker_done);
+
+	io_wq_cancel_pending(wq);
 	complete(&wq->exited);
 	do_exit(0);
 }
@@ -809,7 +839,8 @@ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
 	unsigned long flags;
 
 	/* Can only happen if manager creation fails after exec */
-	if (unlikely(io_wq_fork_manager(wqe->wq))) {
+	if (io_wq_fork_manager(wqe->wq) ||
+	    test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state)) {
 		work->flags |= IO_WQ_WORK_CANCEL;
 		wqe->wq->do_work(work);
 		return;
@@ -845,14 +876,6 @@ void io_wq_hash_work(struct io_wq_work *work, void *val)
 	work->flags |= (IO_WQ_WORK_HASHED | (bit << IO_WQ_HASH_SHIFT));
 }
 
-struct io_cb_cancel_data {
-	work_cancel_fn *fn;
-	void *data;
-	int nr_running;
-	int nr_pending;
-	bool cancel_all;
-};
-
 static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
 {
 	struct io_cb_cancel_data *match = data;
@@ -1086,6 +1109,7 @@ static void io_wq_destroy(struct io_wq *wq)
 		struct io_wqe *wqe = wq->wqes[node];
 
 		list_del_init(&wqe->wait.entry);
+		WARN_ON_ONCE(!wq_list_empty(&wqe->work_list));
 		kfree(wqe);
 	}
 	spin_unlock_irq(&wq->hash->wait.lock);
-- 
2.30.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread
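A user-space C sketch of the two behaviours this patch enforces is below: cancel whatever is still pending at exit, and cancel on the spot anything enqueued after the exit bit is set. struct wq, wq_enqueue() and wq_exit() are illustrative stand-ins for the io-wq structures, and "cancel" here just flags and runs the item, loosely mirroring how io-wq runs cancelled work with IO_WQ_WORK_CANCEL set.

#include <stdbool.h>
#include <stdio.h>

/* illustrative single-list work queue with an exit flag */
struct work { int id; bool cancelled; struct work *next; };
struct wq   { struct work *head; bool exiting; };

static void do_work(struct work *w)
{
	printf("work %d: %s\n", w->id, w->cancelled ? "cancelled" : "ran");
}

/* refuse (cancel) work queued once the queue has started exiting */
static void wq_enqueue(struct wq *q, struct work *w)
{
	if (q->exiting) {
		w->cancelled = true;
		do_work(w);		/* run it once, flagged as cancelled */
		return;
	}
	w->next = q->head;
	q->head = w;
}

/* on exit, cancel everything still pending so nothing is left behind */
static void wq_exit(struct wq *q)
{
	q->exiting = true;
	while (q->head) {
		struct work *w = q->head;

		q->head = w->next;
		w->cancelled = true;
		do_work(w);
	}
}

int main(void)
{
	struct wq q = { 0 };
	struct work a = { .id = 1 }, b = { .id = 2 }, late = { .id = 3 };

	wq_enqueue(&q, &a);
	wq_enqueue(&q, &b);
	wq_exit(&q);			/* a and b get cancelled here */
	wq_enqueue(&q, &late);		/* queued after exit: cancelled on the spot */
	return 0;
}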

* Re: [PATCH 12/33] io_uring: signal worker thread unshare
  2021-03-04  0:26 ` [PATCH 12/33] io_uring: signal worker thread unshare Jens Axboe
@ 2021-03-04 12:15   ` Stefan Metzmacher
  2021-03-04 14:05     ` Jens Axboe
  0 siblings, 1 reply; 36+ messages in thread
From: Stefan Metzmacher @ 2021-03-04 12:15 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring


Hi Jens,

> If the original task switches credentials or unshares any part of the
> task state, then we should notify the io_uring workers so they can
> re-fork as well. For credentials, this actually happens just fine for
> the io-wq workers, as we grab and pass that down. For SQPOLL, we're
> stuck with the original credentials, which means that it cannot be used
> if the task does eg seteuid().

I fear that this will be very problematic for Samba's use of io_uring.

We change credentials very often, switching between the impersonated users
and also root in order to run privileged sections.

Currently fd-based operations are not affected by the credential switches.

I guess any credential switch means that all pending io_uring requests
get canceled, correct?

It also means the usage of IORING_REGISTER_PERSONALITY isn't useful any longer,
as that requires a credential switch before (and most likely after) the
io_uring_register() syscall.

As seteuid(), unshare() and similar syscalls are per thread instead of process
in the kernel, the io_wq is also per userspace thread and not per io_ring_ctx, correct?

As I read the code, any credential change in any userspace thread will
cause the sq_thread to be stopped and restarted by the next io_uring_enter(),
which means that the sq_thread may change its main credentials randomly over
time, depending on which userspace thread calls io_uring_enter() first.
As unshare() applies only to the current task_struct, I'm wondering whether
we only want to refork the sq_thread if the current task is the parent of
the sq_thread.

I'm wondering if we'd better remove io_uring_unshare() from commit_creds()
and always handle the creds explicitly as req->work.creds.
io_init_req() would then take req->work.creds from ctx->personality_idr
or from current->cred. At the same time we'd re-add ctx->creds = get_current_cred();
in io_uring_setup() and use these ctx->creds in the io_sq_thread again
in order to make things sane again.

I'm also wondering if all this has an impact on IORING_SETUP_ATTACH_WQ,
in particular I'm thinking of the case where the fd was transferred via SCM_RIGHTS
or across fork(), when mm and files are not shared between the processes.

I think the IORING_FEAT_CUR_PERSONALITY section in io_uring_setup.2
should also talk about what credentials are used in the IORING_SETUP_SQPOLL case.

The IORING_SETUP_SQPOLL section should also be more detailed regarding
what state is used in particular in combination with IORING_SETUP_ATTACH_WQ.
Wasn't the idea up to 5.11 that the sq_thread would capture the whole
state at io_uring_setup()?

I think we need to maintain the overall behavior exposed to userspace...

metze

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 12/33] io_uring: signal worker thread unshare
  2021-03-04 12:15   ` Stefan Metzmacher
@ 2021-03-04 14:05     ` Jens Axboe
  0 siblings, 0 replies; 36+ messages in thread
From: Jens Axboe @ 2021-03-04 14:05 UTC (permalink / raw)
  To: Stefan Metzmacher; +Cc: io-uring

On 3/4/21 5:15 AM, Stefan Metzmacher wrote:
> 
> Hi Jens,
> 
>> If the original task switches credentials or unshares any part of the
>> task state, then we should notify the io_uring workers so they can
>> re-fork as well. For credentials, this actually happens just fine for
>> the io-wq workers, as we grab and pass that down. For SQPOLL, we're
>> stuck with the original credentials, which means that it cannot be used
>> if the task does eg seteuid().
> 
> I fear that this will be very problematic for Samba's use of io_uring.
> 
> We change credentials very often, switching between the impersonated
> users and also root in order to run privileged sections.
> 
> Currently fd-based operations are not affected by the credential
> switches.
> 
> I guess any credential switch means that all pending io_uring requests
> get canceled, correct?
> 
> It also means the usage of IORING_REGISTER_PERSONALITY isn't useful
> any longer, as that requires a credential switch before (and most
> likely after) the io_uring_register() syscall.
> 
> As seteuid(), unshare() and similar syscalls are per thread instead of
> process in the kernel, the io_wq is also per userspace thread and not
> per io_ring_ctx, correct?
> 
> As I read the code any credential change in any userspace thread will
> cause the sq_thread to be stopped and restarted by the next
> io_uring_enter(), which means that the sq_thread may change its main
> credentials randomly over time, depending on which userspace thread
> calls io_uring_enter() first. As unshare() applies only to the current
> task_struct I'm wondering if we only want to refork the sq_thread if
> the current task is the parent of the sq_thread.
> 
> I'm wondering if we better remove io_uring_unshare() from
> commit_creds() and always handle the creds explicitly as
> req->work.creds. io_init_req() would then take req->work.creds from
> ctx->personality_idr or from current->cred. At the same time we'd
> readd ctx->creds = get_current_cred(); in io_uring_setup() and use
> these ctx->creds in the io_sq_thread again in order to make things
> sane again.
> 
> I'm also wondering if all this has an impact on
> IORING_SETUP_ATTACH_WQ, in particular I'm thinking of the case where
> the fd was transferred via SCM_RIGHTS or across fork(), when mm and
> files are not shared between the processes.
> 
> I think the IORING_FEAT_CUR_PERSONALITY section in io_uring_setup.2
> should also talk about what credentials are used in the
> IORING_SETUP_SQPOLL case.
> 
> The IORING_SETUP_SQPOLL section should also be more detailed regarding
> what state is used in particular in combination with
> IORING_SETUP_ATTACH_WQ. Wasn't the idea up to 5.11 that the sq_thread
> would capture the whole state at io_uring_setup()?
> 
> I think we need to maintain the overall behavior exposed to
> userspace...

Totally agree - I'll get to this a bit later, I think we have better
ways of handling it. For now I'll drop this patch so we have time to
rethink it.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2021-03-04 14:07 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-04  0:26 [PATCHSET 0/33] Fixes queued up for 5.12 Jens Axboe
2021-03-04  0:26 ` [PATCH 01/33] io-wq: wait for worker startup when forking a new one Jens Axboe
2021-03-04  0:26 ` [PATCH 02/33] io-wq: have manager wait for all workers to exit Jens Axboe
2021-03-04  0:26 ` [PATCH 03/33] io-wq: don't ask for a new worker if we're exiting Jens Axboe
2021-03-04  0:26 ` [PATCH 04/33] io-wq: rename wq->done completion to wq->started Jens Axboe
2021-03-04  0:26 ` [PATCH 05/33] io-wq: wait for manager exit on wq destroy Jens Axboe
2021-03-04  0:26 ` [PATCH 06/33] io-wq: fix double put of 'wq' in error path Jens Axboe
2021-03-04  0:26 ` [PATCH 07/33] io_uring: SQPOLL stop error handling fixes Jens Axboe
2021-03-04  0:26 ` [PATCH 08/33] io_uring: run fallback on cancellation Jens Axboe
2021-03-04  0:26 ` [PATCH 09/33] io_uring: don't use complete_all() on SQPOLL thread exit Jens Axboe
2021-03-04  0:26 ` [PATCH 10/33] io-wq: provide an io_wq_put_and_exit() helper Jens Axboe
2021-03-04  0:26 ` [PATCH 11/33] io_uring: fix race condition in task_work add and clear Jens Axboe
2021-03-04  0:26 ` [PATCH 12/33] io_uring: signal worker thread unshare Jens Axboe
2021-03-04 12:15   ` Stefan Metzmacher
2021-03-04 14:05     ` Jens Axboe
2021-03-04  0:26 ` [PATCH 13/33] io_uring: warn on not destroyed io-wq Jens Axboe
2021-03-04  0:26 ` [PATCH 14/33] io_uring: destroy io-wq on exec Jens Axboe
2021-03-04  0:26 ` [PATCH 15/33] io_uring: remove unused argument 'tsk' from io_req_caches_free() Jens Axboe
2021-03-04  0:26 ` [PATCH 16/33] io_uring: kill unnecessary REQ_F_WORK_INITIALIZED checks Jens Axboe
2021-03-04  0:26 ` [PATCH 17/33] io_uring: move cred assignment into io_issue_sqe() Jens Axboe
2021-03-04  0:26 ` [PATCH 18/33] io_uring: kill unnecessary io_run_ctx_fallback() in io_ring_exit_work() Jens Axboe
2021-03-04  0:26 ` [PATCH 19/33] io_uring: kill io_uring_flush() Jens Axboe
2021-03-04  0:26 ` [PATCH 20/33] io_uring: fix __tctx_task_work() ctx race Jens Axboe
2021-03-04  0:26 ` [PATCH 21/33] io_uring: replace cmpxchg in fallback with xchg Jens Axboe
2021-03-04  0:26 ` [PATCH 22/33] io_uring: ensure that SQPOLL thread is started for exit Jens Axboe
2021-03-04  0:26 ` [PATCH 23/33] io_uring: ignore double poll add on the same waitqueue head Jens Axboe
2021-03-04  0:26 ` [PATCH 24/33] io_uring: kill sqo_dead and sqo submission halting Jens Axboe
2021-03-04  0:26 ` [PATCH 25/33] io_uring: remove sqo_task Jens Axboe
2021-03-04  0:26 ` [PATCH 26/33] io-wq: fix error path leak of buffered write hash map Jens Axboe
2021-03-04  0:26 ` [PATCH 27/33] io_uring: fix -EAGAIN retry with IOPOLL Jens Axboe
2021-03-04  0:26 ` [PATCH 28/33] io_uring: choose right tctx->io_wq for try cancel Jens Axboe
2021-03-04  0:26 ` [PATCH 29/33] io_uring: inline io_req_clean_work() Jens Axboe
2021-03-04  0:26 ` [PATCH 30/33] io_uring: inline __io_queue_async_work() Jens Axboe
2021-03-04  0:26 ` [PATCH 31/33] io_uring: remove extra in_idle wake up Jens Axboe
2021-03-04  0:26 ` [PATCH 32/33] io_uring: ensure that threads freeze on suspend Jens Axboe
2021-03-04  0:27 ` [PATCH 33/33] io-wq: ensure all pending work is canceled on exit Jens Axboe

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.