* [PATCH 0/8] a fix + cancellation unification
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

I suggest 1/8 goes into the current release, and the rest are for next.
Patches 2-8 finally unify how we do task and files cancellation, removing
boilerplate and making it easier to understand overall. As a bonus,
->inflight_entry is now used only for iopoll, so it could probably be put
into a union with something else and save 16B of io_kiocb if that's ever
needed.
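
For illustration only, a hypothetical sketch of what such a union could
look like; the pairing with task_work below is invented for the example,
not a claim that the two fields are never live at the same time:

	struct io_kiocb {
		...
		union {
			/* only used for iopoll after this series */
			struct list_head	inflight_entry;
			/* hypothetical union partner, also 16B */
			struct callback_head	task_work;
		};
		...
	};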

Pavel Begunkov (8):
  io_uring: close a small race gap for files cancel
  io_uring: further deduplicate #CQ events calc
  io_uring: account per-task #requests with files
  io_uring: explicitly pass tctx into del_task_file
  io_uring: draft files cancel based on inflight cnt
  io_uring: remove old files cancel mechanism
  io_uring: cleanup task cancel
  io_uring: kill not used anymore inflight_lock

 fs/io_uring.c            | 166 +++++++++++++--------------------------
 include/linux/io_uring.h |  13 ++-
 2 files changed, 59 insertions(+), 120 deletions(-)

-- 
2.24.0


* [PATCH 1/8] io_uring: close a small race gap for files cancel
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring; +Cc: stable

The purpose of io_uring_cancel_files() is to wait for all requests
matching ->files to complete or be cancelled. We should first drop the
files of a request in io_req_drop_files() and only then make it
undiscoverable to io_uring_cancel_files().

First drop, then delete from the list. It's ok to leave
req->work.identity->files dangling, because the cancellation code never
dereferences it, only compares against it. A canceller that went to
sleep in the meantime will be woken up by the wake_up() in
io_req_drop_files(), which now comes after the request is fully dropped.
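
A simplified sketch of the matching done on the cancellation side; the
field names follow this kernel version, but the helper itself is
illustrative. It shows why a dangling pointer is fine here: the files
pointer is only ever compared, never dereferenced:

	/* sketch, not verbatim kernel code */
	static bool io_match_files(struct io_kiocb *req,
				   struct files_struct *files)
	{
		return req->work.identity->files == files;
	}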

Fixes: 0f2122045b946 ("io_uring: don't rely on weak ->files references")
Cc: <stable@vger.kernel.org> # 5.5+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8cf6f22afc5e..b74957856e68 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6098,15 +6098,15 @@ static void io_req_drop_files(struct io_kiocb *req)
 	struct io_uring_task *tctx = req->task->io_uring;
 	unsigned long flags;
 
+	put_files_struct(req->work.identity->files);
+	put_nsproxy(req->work.identity->nsproxy);
 	spin_lock_irqsave(&ctx->inflight_lock, flags);
 	list_del(&req->inflight_entry);
-	if (atomic_read(&tctx->in_idle))
-		wake_up(&tctx->wait);
 	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
 	req->flags &= ~REQ_F_INFLIGHT;
-	put_files_struct(req->work.identity->files);
-	put_nsproxy(req->work.identity->nsproxy);
 	req->work.flags &= ~IO_WQ_WORK_FILES;
+	if (atomic_read(&tctx->in_idle))
+		wake_up(&tctx->wait);
 }
 
 static void __io_clean_op(struct io_kiocb *req)
-- 
2.24.0


* [PATCH 2/8] io_uring: further deduplicate #CQ events calc
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

Apparently, there is one more place hand-coding the calculation of the
number of CQ events in the ring. Use the __io_cqring_events() helper in
io_get_cqring() as well. Naturally, the generated assembly stays
identical.
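
As a standalone illustration of the arithmetic the helper wraps: head
and tail are free-running u32 counters, so their difference stays
correct even after wraparound (the function name here is made up for
the example):

	/* number of CQEs posted but not yet consumed */
	static inline unsigned int cq_pending(unsigned int cached_tail,
					      unsigned int head)
	{
		return cached_tail - head;	/* well-defined modulo 2^32 */
	}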

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b74957856e68..1401c1444e77 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1664,21 +1664,25 @@ static inline bool io_sqring_full(struct io_ring_ctx *ctx)
 	return READ_ONCE(r->sq.tail) - ctx->cached_sq_head == r->sq_ring_entries;
 }
 
+static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
+{
+	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
+}
+
 static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
 {
 	struct io_rings *rings = ctx->rings;
 	unsigned tail;
 
-	tail = ctx->cached_cq_tail;
 	/*
 	 * writes to the cq entry need to come after reading head; the
 	 * control dependency is enough as we're using WRITE_ONCE to
 	 * fill the cq entry
 	 */
-	if (tail - READ_ONCE(rings->cq.head) == rings->cq_ring_entries)
+	if (__io_cqring_events(ctx) == rings->cq_ring_entries)
 		return NULL;
 
-	ctx->cached_cq_tail++;
+	tail = ctx->cached_cq_tail++;
 	return &rings->cqes[tail & ctx->cq_mask];
 }
 
@@ -1693,11 +1697,6 @@ static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
 	return io_wq_current_is_worker();
 }
 
-static inline unsigned __io_cqring_events(struct io_ring_ctx *ctx)
-{
-	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
-}
-
 static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
 	if (waitqueue_active(&ctx->wait))
-- 
2.24.0


* [PATCH 3/8] io_uring: account per-task #requests with files
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

Similarly to how we keep tctx->inflight, add tctx->inflight_files
tracking the number of requests with ->files set. This is a preparation
patch, so the counter is kept unused for now. Also, as it's not as hot
as ->inflight, use atomics.
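
The intended pairing, extracted from the diff below as a sketch: the
counter mirrors REQ_F_INFLIGHT, going up when a request grabs ->files
and down when it drops them:

	/* in io_grab_identity(), under ctx->inflight_lock */
	atomic_inc(&current->io_uring->inflight_files);

	/* in io_req_drop_files(), under ctx->inflight_lock */
	atomic_dec(&tctx->inflight_files);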

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c            | 27 +++++++++++++++++++--------
 include/linux/io_uring.h |  1 +
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1401c1444e77..3a3177739b13 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1497,6 +1497,7 @@ static bool io_grab_identity(struct io_kiocb *req)
 		req->flags |= REQ_F_INFLIGHT;
 
 		spin_lock_irq(&ctx->inflight_lock);
+		atomic_inc(&current->io_uring->inflight_files);
 		list_add(&req->inflight_entry, &ctx->inflight_list);
 		spin_unlock_irq(&ctx->inflight_lock);
 		req->work.flags |= IO_WQ_WORK_FILES;
@@ -6101,6 +6102,7 @@ static void io_req_drop_files(struct io_kiocb *req)
 	put_nsproxy(req->work.identity->nsproxy);
 	spin_lock_irqsave(&ctx->inflight_lock, flags);
 	list_del(&req->inflight_entry);
+	atomic_dec(&tctx->inflight_files);
 	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
 	req->flags &= ~REQ_F_INFLIGHT;
 	req->work.flags &= ~IO_WQ_WORK_FILES;
@@ -8012,6 +8014,7 @@ static int io_uring_alloc_task_context(struct task_struct *task)
 	init_waitqueue_head(&tctx->wait);
 	tctx->last = NULL;
 	atomic_set(&tctx->in_idle, 0);
+	atomic_set(&tctx->inflight_files, 0);
 	tctx->sqpoll = false;
 	io_init_identity(&tctx->__identity);
 	tctx->identity = &tctx->__identity;
@@ -8927,13 +8930,17 @@ void __io_uring_files_cancel(struct files_struct *files)
 	atomic_dec(&tctx->in_idle);
 }
 
-static s64 tctx_inflight(struct io_uring_task *tctx)
+static s64 tctx_inflight(struct io_uring_task *tctx, bool files)
 {
 	unsigned long index;
 	struct file *file;
 	s64 inflight;
 
-	inflight = percpu_counter_sum(&tctx->inflight);
+	if (files)
+		inflight = atomic_read(&tctx->inflight_files);
+	else
+		inflight = percpu_counter_sum(&tctx->inflight);
+
 	if (!tctx->sqpoll)
 		return inflight;
 
@@ -8943,12 +8950,16 @@ static s64 tctx_inflight(struct io_uring_task *tctx)
 	 */
 	xa_for_each(&tctx->xa, index, file) {
 		struct io_ring_ctx *ctx = file->private_data;
+		struct io_uring_task *sqpoll_tctx;
 
-		if (ctx->flags & IORING_SETUP_SQPOLL) {
-			struct io_uring_task *__tctx = ctx->sqo_task->io_uring;
+		if (!(ctx->flags & IORING_SETUP_SQPOLL))
+			continue;
 
-			inflight += percpu_counter_sum(&__tctx->inflight);
-		}
+		sqpoll_tctx = ctx->sqo_task->io_uring;
+		if (files)
+			inflight += atomic_read(&sqpoll_tctx->inflight_files);
+		else
+			inflight += percpu_counter_sum(&sqpoll_tctx->inflight);
 	}
 
 	return inflight;
@@ -8969,7 +8980,7 @@ void __io_uring_task_cancel(void)
 
 	do {
 		/* read completions before cancelations */
-		inflight = tctx_inflight(tctx);
+		inflight = tctx_inflight(tctx, false);
 		if (!inflight)
 			break;
 		__io_uring_files_cancel(NULL);
@@ -8980,7 +8991,7 @@ void __io_uring_task_cancel(void)
 		 * If we've seen completions, retry. This avoids a race where
 		 * a completion comes in before we did prepare_to_wait().
 		 */
-		if (inflight != tctx_inflight(tctx))
+		if (inflight != tctx_inflight(tctx, false))
 			continue;
 		schedule();
 	} while (1);
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 35b2d845704d..e1ff6f235d03 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -28,6 +28,7 @@ struct io_uring_task {
 	struct wait_queue_head	wait;
 	struct file		*last;
 	struct percpu_counter	inflight;
+	atomic_t		inflight_files;
 	struct io_identity	__identity;
 	struct io_identity	*identity;
 	atomic_t		in_idle;
-- 
2.24.0


* [PATCH 4/8] io_uring: explicitly pass tctx into del_task_file
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

Pass tctx into io_uring_del_task_file() explicitly from the callers
instead of looking it up via current. No functional changes.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3a3177739b13..1794ad4bfa39 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8882,10 +8882,9 @@ static int io_uring_add_task_file(struct io_ring_ctx *ctx, struct file *file)
 /*
  * Remove this io_uring_file -> task mapping.
  */
-static void io_uring_del_task_file(struct file *file)
+static void io_uring_del_task_file(struct io_uring_task *tctx,
+				   struct file *file)
 {
-	struct io_uring_task *tctx = current->io_uring;
-
 	if (tctx->last == file)
 		tctx->last = NULL;
 	file = xa_erase(&tctx->xa, (unsigned long)file);
@@ -8907,7 +8906,7 @@ static void io_uring_attempt_task_drop(struct file *file)
 	 */
 	if (fatal_signal_pending(current) || (current->flags & PF_EXITING) ||
 	    atomic_long_read(&file->f_count) == 2)
-		io_uring_del_task_file(file);
+		io_uring_del_task_file(current->io_uring, file);
 }
 
 void __io_uring_files_cancel(struct files_struct *files)
@@ -8924,7 +8923,7 @@ void __io_uring_files_cancel(struct files_struct *files)
 
 		io_uring_cancel_task_requests(ctx, files);
 		if (files)
-			io_uring_del_task_file(file);
+			io_uring_del_task_file(tctx, file);
 	}
 
 	atomic_dec(&tctx->in_idle);
-- 
2.24.0


* [PATCH 5/8] io_uring: draft files cancel based on inflight cnt
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

Do the same thing for files cancellation as we do for task
cancellation, in particular keep trying to cancel while the
corresponding inflight counters are non-zero.

It's a preparation patch: io_uring_try_task_cancel() still guarantees
to kill all requests matching ->files on the first attempt. It also
deduplicates the two functions a bit, so that only
__io_uring_task_cancel() needs to be exported to the core.
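
The core retry pattern, lifted from the diff below for readability: the
counter is re-read after prepare_to_wait(), so a completion racing with
the check changes the counter and forces a retry instead of a missed
wakeup:

	do {
		/* read completions before cancelations */
		inflight = tctx_inflight(tctx, !!files);
		if (!inflight)
			break;
		io_uring_try_task_cancel(files);

		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
		/* if we've seen completions, retry rather than sleep */
		if (inflight != tctx_inflight(tctx, !!files))
			continue;
		schedule();
	} while (1);
	finish_wait(&tctx->wait, &wait);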

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c            | 29 +++++++++++++++--------------
 include/linux/io_uring.h | 12 +++++-------
 2 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1794ad4bfa39..d20a2a96c3f8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8909,7 +8909,7 @@ static void io_uring_attempt_task_drop(struct file *file)
 		io_uring_del_task_file(current->io_uring, file);
 }
 
-void __io_uring_files_cancel(struct files_struct *files)
+static void io_uring_try_task_cancel(struct files_struct *files)
 {
 	struct io_uring_task *tctx = current->io_uring;
 	struct file *file;
@@ -8917,15 +8917,8 @@ void __io_uring_files_cancel(struct files_struct *files)
 
 	/* make sure overflow events are dropped */
 	atomic_inc(&tctx->in_idle);
-
-	xa_for_each(&tctx->xa, index, file) {
-		struct io_ring_ctx *ctx = file->private_data;
-
-		io_uring_cancel_task_requests(ctx, files);
-		if (files)
-			io_uring_del_task_file(tctx, file);
-	}
-
+	xa_for_each(&tctx->xa, index, file)
+		io_uring_cancel_task_requests(file->private_data, files);
 	atomic_dec(&tctx->in_idle);
 }
 
@@ -8968,7 +8961,7 @@ static s64 tctx_inflight(struct io_uring_task *tctx, bool files)
  * Find any io_uring fd that this task has registered or done IO on, and cancel
  * requests.
  */
-void __io_uring_task_cancel(void)
+void __io_uring_task_cancel(struct files_struct *files)
 {
 	struct io_uring_task *tctx = current->io_uring;
 	DEFINE_WAIT(wait);
@@ -8979,10 +8972,10 @@ void __io_uring_task_cancel(void)
 
 	do {
 		/* read completions before cancelations */
-		inflight = tctx_inflight(tctx, false);
+		inflight = tctx_inflight(tctx, !!files);
 		if (!inflight)
 			break;
-		__io_uring_files_cancel(NULL);
+		io_uring_try_task_cancel(files);
 
 		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
 
@@ -8990,13 +8983,21 @@ void __io_uring_task_cancel(void)
 		 * If we've seen completions, retry. This avoids a race where
 		 * a completion comes in before we did prepare_to_wait().
 		 */
-		if (inflight != tctx_inflight(tctx, false))
+		if (inflight != tctx_inflight(tctx, !!files))
 			continue;
 		schedule();
 	} while (1);
 
 	finish_wait(&tctx->wait, &wait);
 	atomic_dec(&tctx->in_idle);
+
+	if (files) {
+		struct file *file;
+		unsigned long index;
+
+		xa_for_each(&tctx->xa, index, file)
+			io_uring_del_task_file(tctx, file);
+	}
 }
 
 static int io_uring_flush(struct file *file, void *data)
diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index e1ff6f235d03..282f02bd04a5 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -37,19 +37,17 @@ struct io_uring_task {
 
 #if defined(CONFIG_IO_URING)
 struct sock *io_uring_get_socket(struct file *file);
-void __io_uring_task_cancel(void);
-void __io_uring_files_cancel(struct files_struct *files);
+void __io_uring_task_cancel(struct files_struct *files);
 void __io_uring_free(struct task_struct *tsk);
 
-static inline void io_uring_task_cancel(void)
+static inline void io_uring_files_cancel(struct files_struct *files)
 {
 	if (current->io_uring && !xa_empty(&current->io_uring->xa))
-		__io_uring_task_cancel();
+		__io_uring_task_cancel(files);
 }
-static inline void io_uring_files_cancel(struct files_struct *files)
+static inline void io_uring_task_cancel(void)
 {
-	if (current->io_uring && !xa_empty(&current->io_uring->xa))
-		__io_uring_files_cancel(files);
+	io_uring_files_cancel(NULL);
 }
 static inline void io_uring_free(struct task_struct *tsk)
 {
-- 
2.24.0


* [PATCH 6/8] io_uring: remove old files cancel mechanism
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

Files cancellation is now based on counters; remove the old mechanism
that kept a list of such requests.
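
After this patch both teardown paths funnel through a single helper; a
sketch of the two call shapes, where files == NULL means "cancel all of
the task's requests":

	__io_uring_cancel_task_requests(ctx, task, NULL);	/* task exit */
	__io_uring_cancel_task_requests(ctx, task, files);	/* files teardown */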

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 65 +++++++--------------------------------------------
 1 file changed, 9 insertions(+), 56 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d20a2a96c3f8..4bf709d9db32 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -379,7 +379,6 @@ struct io_ring_ctx {
 		bool			poll_multi_file;
 
 		spinlock_t		inflight_lock;
-		struct list_head	inflight_list;
 	} ____cacheline_aligned_in_smp;
 
 	struct delayed_work		file_put_work;
@@ -719,10 +718,7 @@ struct io_kiocb {
 	struct io_kiocb			*link;
 	struct percpu_ref		*fixed_file_refs;
 
-	/*
-	 * 1. used with ctx->iopoll_list with reads/writes
-	 * 2. to track reqs with ->files (see io_op_def::file_table)
-	 */
+	/* ctx->iopoll_list, tracks rw requests for iopoll_list */
 	struct list_head		inflight_entry;
 	struct callback_head		task_work;
 	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
@@ -1308,7 +1304,6 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	INIT_LIST_HEAD(&ctx->defer_list);
 	INIT_LIST_HEAD(&ctx->timeout_list);
 	spin_lock_init(&ctx->inflight_lock);
-	INIT_LIST_HEAD(&ctx->inflight_list);
 	INIT_DELAYED_WORK(&ctx->file_put_work, io_file_put_work);
 	init_llist_head(&ctx->file_put_llist);
 	return ctx;
@@ -1498,7 +1493,6 @@ static bool io_grab_identity(struct io_kiocb *req)
 
 		spin_lock_irq(&ctx->inflight_lock);
 		atomic_inc(&current->io_uring->inflight_files);
-		list_add(&req->inflight_entry, &ctx->inflight_list);
 		spin_unlock_irq(&ctx->inflight_lock);
 		req->work.flags |= IO_WQ_WORK_FILES;
 	}
@@ -6101,7 +6095,6 @@ static void io_req_drop_files(struct io_kiocb *req)
 	put_files_struct(req->work.identity->files);
 	put_nsproxy(req->work.identity->nsproxy);
 	spin_lock_irqsave(&ctx->inflight_lock, flags);
-	list_del(&req->inflight_entry);
 	atomic_dec(&tctx->inflight_files);
 	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
 	req->flags &= ~REQ_F_INFLIGHT;
@@ -8739,48 +8732,12 @@ static void io_cancel_defer_files(struct io_ring_ctx *ctx,
 	}
 }
 
-static void io_uring_cancel_files(struct io_ring_ctx *ctx,
-				  struct task_struct *task,
-				  struct files_struct *files)
-{
-	while (!list_empty_careful(&ctx->inflight_list)) {
-		struct io_task_cancel cancel = { .task = task, .files = files };
-		struct io_kiocb *req;
-		DEFINE_WAIT(wait);
-		bool found = false;
-
-		spin_lock_irq(&ctx->inflight_lock);
-		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
-			if (req->task != task ||
-			    req->work.identity->files != files)
-				continue;
-			found = true;
-			break;
-		}
-		if (found)
-			prepare_to_wait(&task->io_uring->wait, &wait,
-					TASK_UNINTERRUPTIBLE);
-		spin_unlock_irq(&ctx->inflight_lock);
-
-		/* We need to keep going until we don't find a matching req */
-		if (!found)
-			break;
-
-		io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
-		io_poll_remove_all(ctx, task, files);
-		io_kill_timeouts(ctx, task, files);
-		/* cancellations _may_ trigger task work */
-		io_run_task_work();
-		schedule();
-		finish_wait(&task->io_uring->wait, &wait);
-	}
-}
-
 static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
-					    struct task_struct *task)
+					    struct task_struct *task,
+					    struct files_struct *files)
 {
 	while (1) {
-		struct io_task_cancel cancel = { .task = task, .files = NULL, };
+		struct io_task_cancel cancel = { .task = task, .files = files };
 		enum io_wq_cancel cret;
 		bool ret = false;
 
@@ -8789,18 +8746,18 @@ static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
 			ret = true;
 
 		/* SQPOLL thread does its own polling */
-		if (!(ctx->flags & IORING_SETUP_SQPOLL)) {
+		if (!(ctx->flags & IORING_SETUP_SQPOLL) && !files) {
 			while (!list_empty_careful(&ctx->iopoll_list)) {
 				io_iopoll_try_reap_events(ctx);
 				ret = true;
 			}
 		}
 
-		ret |= io_poll_remove_all(ctx, task, NULL);
-		ret |= io_kill_timeouts(ctx, task, NULL);
+		ret |= io_poll_remove_all(ctx, task, files);
+		ret |= io_kill_timeouts(ctx, task, files);
+		io_run_task_work();
 		if (!ret)
 			break;
-		io_run_task_work();
 		cond_resched();
 	}
 }
@@ -8825,11 +8782,7 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
 	io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
 	io_cqring_overflow_flush(ctx, true, task, files);
 	io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
-
-	if (!files)
-		__io_uring_cancel_task_requests(ctx, task);
-	else
-		io_uring_cancel_files(ctx, task, files);
+	__io_uring_cancel_task_requests(ctx, task, files);
 
 	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
 		atomic_dec(&task->io_uring->in_idle);
-- 
2.24.0


* [PATCH 7/8] io_uring: cleanup task cancel
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

There is no need for a separate io_uring_try_task_cancel() anymore:
inline it, kill the extra in_idle inc/dec as it's already done by
__io_uring_task_cancel(), and do a bit of renaming.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4bf709d9db32..134ea0e3373d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8767,8 +8767,8 @@ static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
  * hard links. These persist even for failure of cancelations, hence keep
  * looping until none are found.
  */
-static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
-					  struct files_struct *files)
+static void io_uring_try_task_cancel(struct io_ring_ctx *ctx,
+				     struct files_struct *files)
 {
 	struct task_struct *task = current;
 
@@ -8862,19 +8862,6 @@ static void io_uring_attempt_task_drop(struct file *file)
 		io_uring_del_task_file(current->io_uring, file);
 }
 
-static void io_uring_try_task_cancel(struct files_struct *files)
-{
-	struct io_uring_task *tctx = current->io_uring;
-	struct file *file;
-	unsigned long index;
-
-	/* make sure overflow events are dropped */
-	atomic_inc(&tctx->in_idle);
-	xa_for_each(&tctx->xa, index, file)
-		io_uring_cancel_task_requests(file->private_data, files);
-	atomic_dec(&tctx->in_idle);
-}
-
 static s64 tctx_inflight(struct io_uring_task *tctx, bool files)
 {
 	unsigned long index;
@@ -8917,6 +8904,8 @@ static s64 tctx_inflight(struct io_uring_task *tctx, bool files)
 void __io_uring_task_cancel(struct files_struct *files)
 {
 	struct io_uring_task *tctx = current->io_uring;
+	unsigned long index;
+	struct file *file;
 	DEFINE_WAIT(wait);
 	s64 inflight;
 
@@ -8928,7 +8917,8 @@ void __io_uring_task_cancel(struct files_struct *files)
 		inflight = tctx_inflight(tctx, !!files);
 		if (!inflight)
 			break;
-		io_uring_try_task_cancel(files);
+		xa_for_each(&tctx->xa, index, file)
+			io_uring_try_task_cancel(file->private_data, files);
 
 		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
 
@@ -8945,9 +8935,6 @@ void __io_uring_task_cancel(struct files_struct *files)
 	atomic_dec(&tctx->in_idle);
 
 	if (files) {
-		struct file *file;
-		unsigned long index;
-
 		xa_for_each(&tctx->xa, index, file)
 			io_uring_del_task_file(tctx, file);
 	}
-- 
2.24.0


* [PATCH 8/8] io_uring: kill not used anymore inflight_lock
From: Pavel Begunkov @ 2020-12-18 13:12 UTC
  To: Jens Axboe, io-uring

ctx->inflight_lock doesn't protect anything that needs protecting
anymore -- tctx->inflight_files is atomic, and the inflight list is
gone. Time to eradicate it.
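
A minimal sketch of why dropping the lock is safe: the only update it
still guarded was a single atomic op, which serialises itself, unlike
the list_add()/list_del() pairs the lock originally protected:

	/* drop side after this patch -- no external locking needed */
	atomic_dec(&tctx->inflight_files);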

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 134ea0e3373d..a678920b1c8d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -377,8 +377,6 @@ struct io_ring_ctx {
 		struct hlist_head	*cancel_hash;
 		unsigned		cancel_hash_bits;
 		bool			poll_multi_file;
-
-		spinlock_t		inflight_lock;
 	} ____cacheline_aligned_in_smp;
 
 	struct delayed_work		file_put_work;
@@ -1303,7 +1301,6 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	INIT_LIST_HEAD(&ctx->iopoll_list);
 	INIT_LIST_HEAD(&ctx->defer_list);
 	INIT_LIST_HEAD(&ctx->timeout_list);
-	spin_lock_init(&ctx->inflight_lock);
 	INIT_DELAYED_WORK(&ctx->file_put_work, io_file_put_work);
 	init_llist_head(&ctx->file_put_llist);
 	return ctx;
@@ -1433,7 +1430,6 @@ static bool io_grab_identity(struct io_kiocb *req)
 {
 	const struct io_op_def *def = &io_op_defs[req->opcode];
 	struct io_identity *id = req->work.identity;
-	struct io_ring_ctx *ctx = req->ctx;
 
 	if (def->work_flags & IO_WQ_WORK_FSIZE) {
 		if (id->fsize != rlimit(RLIMIT_FSIZE))
@@ -1491,9 +1487,7 @@ static bool io_grab_identity(struct io_kiocb *req)
 		get_nsproxy(id->nsproxy);
 		req->flags |= REQ_F_INFLIGHT;
 
-		spin_lock_irq(&ctx->inflight_lock);
 		atomic_inc(&current->io_uring->inflight_files);
-		spin_unlock_irq(&ctx->inflight_lock);
 		req->work.flags |= IO_WQ_WORK_FILES;
 	}
 
@@ -6088,15 +6082,11 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 
 static void io_req_drop_files(struct io_kiocb *req)
 {
-	struct io_ring_ctx *ctx = req->ctx;
 	struct io_uring_task *tctx = req->task->io_uring;
-	unsigned long flags;
 
 	put_files_struct(req->work.identity->files);
 	put_nsproxy(req->work.identity->nsproxy);
-	spin_lock_irqsave(&ctx->inflight_lock, flags);
 	atomic_dec(&tctx->inflight_files);
-	spin_unlock_irqrestore(&ctx->inflight_lock, flags);
 	req->flags &= ~REQ_F_INFLIGHT;
 	req->work.flags &= ~IO_WQ_WORK_FILES;
 	if (atomic_read(&tctx->in_idle))
-- 
2.24.0


* Re: [PATCH 0/8] a fix + cancellation unification
From: Jens Axboe @ 2020-12-18 15:16 UTC
  To: Pavel Begunkov, io-uring

On 12/18/20 6:12 AM, Pavel Begunkov wrote:
> I suggest 1/8 goes into the current release, and the rest are for next.
> Patches 2-8 finally unify how we do task and files cancellation, removing
> boilerplate and making it easier to understand overall. As a bonus,
> ->inflight_entry is now used only for iopoll, so it could probably be put
> into a union with something else and save 16B of io_kiocb if that's ever
> needed.

I've added 1/8 to the 5.11 mix for now; I'll get back to the other ones.

-- 
Jens Axboe


* Re: [PATCH 0/8] a fix + cancellation unification
From: Pavel Begunkov @ 2020-12-18 15:22 UTC
  To: Jens Axboe, io-uring

On 18/12/2020 15:16, Jens Axboe wrote:
> On 12/18/20 6:12 AM, Pavel Begunkov wrote:
>> I suggest 1/8 goes into the current release, and the rest are for next.
>> Patches 2-8 finally unify how we do task and files cancellation, removing
>> boilerplate and making it easier to understand overall. As a bonus,
>> ->inflight_entry is now used only for iopoll, so it could probably be put
>> into a union with something else and save 16B of io_kiocb if that's ever
>> needed.
> 
> I've added 1/8 to the 5.11 mix for now; I'll get back to the other ones.

Sure, I'll keep track of the rest. Thanks

-- 
Pavel Begunkov
