* [PATCH 0/5] fixes for REQ_F_COMP_LOCKED
From: Pavel Begunkov @ 2020-10-13  8:43 UTC
  To: Jens Axboe, io-uring

This removes REQ_F_COMP_LOCKED to fix a couple of problems with it.

[5/5] is heavy-handed, and some follow-up work should be done to ease
the aftermath, e.g. in io_submit_flush_completions() and maybe
fail_links().

An alternative would be to replace the flag with a comp_locked
argument to put_req(), free_req() and so on, but IMHO removing it is
better in the long run.
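
For illustration, that rejected alternative would look roughly like
this (a hypothetical sketch; the signatures below are assumptions, not
anything this series implements):

/*
 * Every put/free helper would grow an explicit argument saying
 * whether ->completion_lock is held, and every caller would have
 * to know and propagate it:
 */
static void io_put_req(struct io_kiocb *req, bool comp_locked);
static void io_free_req(struct io_kiocb *req, bool comp_locked);
static void io_double_put_req(struct io_kiocb *req, bool comp_locked);

That bool would spread through the whole call graph, which is why
removing the state entirely looks preferable.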

Note: there is a new io_req_task_work_add() call in [5/5]. Jens, could
you please verify whether passing @twa_signal_ok=true is OK? I don't
really understand the difference.

Pavel Begunkov (5):
  io_uring: don't set COMP_LOCKED if won't put
  io_uring: don't unnecessarily clear F_LINK_TIMEOUT
  io_uring: don't put a poll req under spinlock
  io_uring: dig out COMP_LOCK from deep call chain
  io_uring: fix REQ_F_COMP_LOCKED by killing it

 fs/io_uring.c | 158 ++++++++++++++++++--------------------------------
 1 file changed, 57 insertions(+), 101 deletions(-)

-- 
2.24.0



* [PATCH 1/5] io_uring: don't set COMP_LOCKED if won't put
From: Pavel Begunkov @ 2020-10-13  8:43 UTC
  To: Jens Axboe, io-uring

__io_kill_linked_timeout() sets REQ_F_COMP_LOCKED for a linked timeout
even when it can't cancel it, e.g. when the timer callback is already
running. That not only races with io_link_timeout_fn() over the ->flags
field, but also leaves the flag set, so io_link_timeout_fn() may find it
and wrongly conclude that it holds the completion lock. Hopefully, the
second problem is only theoretical. Set the flag in
io_link_cancel_timeout() instead, only after hrtimer_try_to_cancel()
has confirmed that the timer callback isn't running.
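
To make the ->flags race concrete, here is a minimal userspace
analogue (an illustration only, not the kernel code; pthreads stand in
for the two kernel contexts and all names are invented):

#include <pthread.h>
#include <stdio.h>

#define F_COMP_LOCKED (1u << 0)
#define F_LINK_HEAD   (1u << 1)

/* stand-in for req->flags: a plain word, no atomicity */
static unsigned flags = F_LINK_HEAD;

/* like __io_kill_linked_timeout() setting the flag unconditionally */
static void *killer(void *arg)
{
    (void)arg;
    flags |= F_COMP_LOCKED;     /* non-atomic read-modify-write */
    return NULL;
}

/* like io_link_timeout_fn() running concurrently on the same word */
static void *timer_fn(void *arg)
{
    (void)arg;
    flags &= ~F_LINK_HEAD;      /* concurrent RMW on the same word */
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, killer, NULL);
    pthread_create(&b, NULL, timer_fn, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* build with: cc -fsanitize=thread -pthread; TSan reports the
     * race, and one RMW can overwrite the other's update */
    printf("flags=%#x\n", flags);
    return 0;
}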

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index bbac9a40f42c..e73f63b57e0b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1778,6 +1778,7 @@ static bool io_link_cancel_timeout(struct io_kiocb *req)
 
 	ret = hrtimer_try_to_cancel(&io->timer);
 	if (ret != -1) {
+		req->flags |= REQ_F_COMP_LOCKED;
 		io_cqring_fill_event(req, -ECANCELED);
 		io_commit_cqring(ctx);
 		req->flags &= ~REQ_F_LINK_HEAD;
@@ -1800,7 +1801,6 @@ static bool __io_kill_linked_timeout(struct io_kiocb *req)
 		return false;
 
 	list_del_init(&link->link_list);
-	link->flags |= REQ_F_COMP_LOCKED;
 	wake_ev = io_link_cancel_timeout(link);
 	req->flags &= ~REQ_F_LINK_TIMEOUT;
 	return wake_ev;
-- 
2.24.0



* [PATCH 2/5] io_uring: don't unnecessarily clear F_LINK_TIMEOUT
From: Pavel Begunkov @ 2020-10-13  8:43 UTC
  To: Jens Axboe, io-uring

If a request had REQ_F_LINK_TIMEOUT set, it would have already been
cleared in __io_kill_linked_timeout() by the time __io_fail_links()
runs, so there is no need to clear it there.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index e73f63b57e0b..3ccc7939d863 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1861,7 +1861,6 @@ static void __io_fail_links(struct io_kiocb *req)
 		io_cqring_fill_event(link, -ECANCELED);
 		link->flags |= REQ_F_COMP_LOCKED;
 		__io_double_put_req(link);
-		req->flags &= ~REQ_F_LINK_TIMEOUT;
 	}
 
 	io_commit_cqring(ctx);
-- 
2.24.0



* [PATCH 3/5] io_uring: don't put a poll req under spinlock
From: Pavel Begunkov @ 2020-10-13  8:43 UTC
  To: Jens Axboe, io-uring

Move the io_put_req() call in io_poll_task_handler() out from under the
spinlock. Minimising the time spent inside spinlock sections is a good
rule in general, and performance-wise this should only affect rare
cases / the slow path.
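
As a generic illustration of that rule (not the io_uring code; the
helpers below are empty stand-ins):

#include <pthread.h>

static pthread_spinlock_t lock;

static void fill_completion(void) { /* work that needs the lock */ }
static void put_ref(void)         { /* may free; lock not required */ }

/* before: the put extends the critical section */
static void handler_before(void)
{
    pthread_spin_lock(&lock);
    fill_completion();
    put_ref();
    pthread_spin_unlock(&lock);
}

/* after: same work, but the put runs with the lock already dropped */
static void handler_after(void)
{
    pthread_spin_lock(&lock);
    fill_completion();
    pthread_spin_unlock(&lock);
    put_ref();
}

int main(void)
{
    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    handler_before();
    handler_after();
    return 0;
}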

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3ccc7939d863..ca1cff579873 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4857,10 +4857,9 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
 
 	hash_del(&req->hash_node);
 	io_poll_complete(req, req->result, 0);
-	req->flags |= REQ_F_COMP_LOCKED;
-	*nxt = io_put_req_find_next(req);
 	spin_unlock_irq(&ctx->completion_lock);
 
+	*nxt = io_put_req_find_next(req);
 	io_cqring_ev_posted(ctx);
 }
 
-- 
2.24.0



* [PATCH 4/5] io_uring: dig out COMP_LOCK from deep call chain
From: Pavel Begunkov @ 2020-10-13  8:43 UTC
  To: Jens Axboe, io-uring

io_req_clean_work() checks REQ_F_COMP_LOCKED and passes the result two
layers up just so the fs_struct put can be deferred. Move the check up
into __io_free_req(); at least it doesn't look as ugly there, and it
facilitates further changes.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 33 +++++++++------------------------
 1 file changed, 9 insertions(+), 24 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ca1cff579873..d5baa5bba895 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1190,14 +1190,10 @@ static void __io_commit_cqring(struct io_ring_ctx *ctx)
 	}
 }
 
-/*
- * Returns true if we need to defer file table putting. This can only happen
- * from the error path with REQ_F_COMP_LOCKED set.
- */
-static bool io_req_clean_work(struct io_kiocb *req)
+static void io_req_clean_work(struct io_kiocb *req)
 {
 	if (!(req->flags & REQ_F_WORK_INITIALIZED))
-		return false;
+		return;
 
 	req->flags &= ~REQ_F_WORK_INITIALIZED;
 
@@ -1216,9 +1212,6 @@ static bool io_req_clean_work(struct io_kiocb *req)
 	if (req->work.fs) {
 		struct fs_struct *fs = req->work.fs;
 
-		if (req->flags & REQ_F_COMP_LOCKED)
-			return true;
-
 		spin_lock(&req->work.fs->lock);
 		if (--fs->users)
 			fs = NULL;
@@ -1227,8 +1220,6 @@ static bool io_req_clean_work(struct io_kiocb *req)
 			free_fs_struct(fs);
 		req->work.fs = NULL;
 	}
-
-	return false;
 }
 
 static void io_prep_async_work(struct io_kiocb *req)
@@ -1708,7 +1699,7 @@ static inline void io_put_file(struct io_kiocb *req, struct file *file,
 		fput(file);
 }
 
-static bool io_dismantle_req(struct io_kiocb *req)
+static void io_dismantle_req(struct io_kiocb *req)
 {
 	io_clean_op(req);
 
@@ -1717,7 +1708,7 @@ static bool io_dismantle_req(struct io_kiocb *req)
 	if (req->file)
 		io_put_file(req, req->file, (req->flags & REQ_F_FIXED_FILE));
 
-	return io_req_clean_work(req);
+	io_req_clean_work(req);
 }
 
 static void __io_free_req_finish(struct io_kiocb *req)
@@ -1740,21 +1731,15 @@ static void __io_free_req_finish(struct io_kiocb *req)
 static void io_req_task_file_table_put(struct callback_head *cb)
 {
 	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
-	struct fs_struct *fs = req->work.fs;
-
-	spin_lock(&req->work.fs->lock);
-	if (--fs->users)
-		fs = NULL;
-	spin_unlock(&req->work.fs->lock);
-	if (fs)
-		free_fs_struct(fs);
-	req->work.fs = NULL;
+
+	io_dismantle_req(req);
 	__io_free_req_finish(req);
 }
 
 static void __io_free_req(struct io_kiocb *req)
 {
-	if (!io_dismantle_req(req)) {
+	if (!(req->flags & REQ_F_COMP_LOCKED)) {
+		io_dismantle_req(req);
 		__io_free_req_finish(req);
 	} else {
 		int ret;
@@ -2066,7 +2051,7 @@ static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req)
 	}
 	rb->task_refs++;
 
-	WARN_ON_ONCE(io_dismantle_req(req));
+	io_dismantle_req(req);
 	rb->reqs[rb->to_free++] = req;
 	if (unlikely(rb->to_free == ARRAY_SIZE(rb->reqs)))
 		__io_req_free_batch_flush(req->ctx, rb);
-- 
2.24.0



* [PATCH 5/5] io_uring: fix REQ_F_COMP_LOCKED by killing it
From: Pavel Begunkov @ 2020-10-13  8:44 UTC
  To: Jens Axboe, io-uring

REQ_F_COMP_LOCKED is used and implemented in a buggy way. The problem
is that the flag is set before io_put_req() but never cleared
afterwards, so if that put wasn't the final reference, the request will
later be freed, with the flag still set, from some other context that
may not actually hold the completion lock. That means possible races
when removing linked timeouts, and unsynchronised completions (e.g. CQ
accesses without the lock held).

Instead of fixing REQ_F_COMP_LOCKED, kill the flag and use
task_work_add() to move such requests to a fresh context and free them
from there, as was previously done for __io_free_req_finish().
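
A minimal userspace analogue of the stale-flag problem (an
illustration only; the names and the lock are stand-ins for the kernel
ones):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define F_COMP_LOCKED 1u

struct req {
    atomic_int refs;
    unsigned flags;
};

static pthread_spinlock_t completion_lock;

static void free_req(struct req *r, bool ctx_holds_lock)
{
    /* the freeing path trusts the flag, not the actual lock state */
    if ((r->flags & F_COMP_LOCKED) && !ctx_holds_lock)
        fprintf(stderr, "freed with stale COMP_LOCKED, lock not held!\n");
}

static void put_req(struct req *r, bool ctx_holds_lock)
{
    if (atomic_fetch_sub(&r->refs, 1) == 1)
        free_req(r, ctx_holds_lock);
}

int main(void)
{
    struct req r = { .refs = 2, .flags = 0 };

    pthread_spin_init(&completion_lock, PTHREAD_PROCESS_PRIVATE);

    /* context A: completes under the lock and drops one reference */
    pthread_spin_lock(&completion_lock);
    r.flags |= F_COMP_LOCKED;   /* set before the put... */
    put_req(&r, true);          /* ...but never cleared afterwards */
    pthread_spin_unlock(&completion_lock);

    /* context B: drops the final reference with the lock not held,
     * yet the request still carries F_COMP_LOCKED */
    put_req(&r, false);
    return 0;
}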

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 129 ++++++++++++++++++++------------------------------
 1 file changed, 51 insertions(+), 78 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d5baa5bba895..cb2640f6fdb2 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -581,7 +581,6 @@ enum {
 	REQ_F_NOWAIT_BIT,
 	REQ_F_LINK_TIMEOUT_BIT,
 	REQ_F_ISREG_BIT,
-	REQ_F_COMP_LOCKED_BIT,
 	REQ_F_NEED_CLEANUP_BIT,
 	REQ_F_POLLED_BIT,
 	REQ_F_BUFFER_SELECTED_BIT,
@@ -620,8 +619,6 @@ enum {
 	REQ_F_LINK_TIMEOUT	= BIT(REQ_F_LINK_TIMEOUT_BIT),
 	/* regular file */
 	REQ_F_ISREG		= BIT(REQ_F_ISREG_BIT),
-	/* completion under lock */
-	REQ_F_COMP_LOCKED	= BIT(REQ_F_COMP_LOCKED_BIT),
 	/* needs cleanup */
 	REQ_F_NEED_CLEANUP	= BIT(REQ_F_NEED_CLEANUP_BIT),
 	/* already went through poll handler */
@@ -972,8 +969,8 @@ static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
 			     struct io_comp_state *cs);
 static void io_cqring_fill_event(struct io_kiocb *req, long res);
 static void io_put_req(struct io_kiocb *req);
+static void io_put_req_deferred(struct io_kiocb *req, int nr);
 static void io_double_put_req(struct io_kiocb *req);
-static void __io_double_put_req(struct io_kiocb *req);
 static struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req);
 static void __io_queue_linked_timeout(struct io_kiocb *req);
 static void io_queue_linked_timeout(struct io_kiocb *req);
@@ -1325,9 +1322,8 @@ static void io_kill_timeout(struct io_kiocb *req)
 		atomic_set(&req->ctx->cq_timeouts,
 			atomic_read(&req->ctx->cq_timeouts) + 1);
 		list_del_init(&req->timeout.list);
-		req->flags |= REQ_F_COMP_LOCKED;
 		io_cqring_fill_event(req, 0);
-		io_put_req(req);
+		io_put_req_deferred(req, 1);
 	}
 }
 
@@ -1378,8 +1374,7 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx)
 		if (link) {
 			__io_queue_linked_timeout(link);
 			/* drop submission reference */
-			link->flags |= REQ_F_COMP_LOCKED;
-			io_put_req(link);
+			io_put_req_deferred(link, 1);
 		}
 		kfree(de);
 	} while (!list_empty(&ctx->defer_list));
@@ -1607,8 +1602,7 @@ static void io_submit_flush_completions(struct io_comp_state *cs)
 		list_del(&req->compl.list);
 		__io_cqring_fill_event(req, req->result, req->compl.cflags);
 		if (!(req->flags & REQ_F_LINK_HEAD)) {
-			req->flags |= REQ_F_COMP_LOCKED;
-			io_put_req(req);
+			io_put_req_deferred(req, 1);
 		} else {
 			spin_unlock_irq(&ctx->completion_lock);
 			io_put_req(req);
@@ -1711,10 +1705,14 @@ static void io_dismantle_req(struct io_kiocb *req)
 	io_req_clean_work(req);
 }
 
-static void __io_free_req_finish(struct io_kiocb *req)
+static void __io_free_req(struct io_kiocb *req)
 {
-	struct io_uring_task *tctx = req->task->io_uring;
-	struct io_ring_ctx *ctx = req->ctx;
+	struct io_uring_task *tctx;
+	struct io_ring_ctx *ctx;
+
+	io_dismantle_req(req);
+	tctx = req->task->io_uring;
+	ctx = req->ctx;
 
 	atomic_long_inc(&tctx->req_complete);
 	if (tctx->in_idle)
@@ -1728,33 +1726,6 @@ static void __io_free_req_finish(struct io_kiocb *req)
 	percpu_ref_put(&ctx->refs);
 }
 
-static void io_req_task_file_table_put(struct callback_head *cb)
-{
-	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
-
-	io_dismantle_req(req);
-	__io_free_req_finish(req);
-}
-
-static void __io_free_req(struct io_kiocb *req)
-{
-	if (!(req->flags & REQ_F_COMP_LOCKED)) {
-		io_dismantle_req(req);
-		__io_free_req_finish(req);
-	} else {
-		int ret;
-
-		init_task_work(&req->task_work, io_req_task_file_table_put);
-		ret = task_work_add(req->task, &req->task_work, TWA_RESUME);
-		if (unlikely(ret)) {
-			struct task_struct *tsk;
-
-			tsk = io_wq_get_task(req->ctx->io_wq);
-			task_work_add(tsk, &req->task_work, 0);
-		}
-	}
-}
-
 static bool io_link_cancel_timeout(struct io_kiocb *req)
 {
 	struct io_timeout_data *io = req->async_data;
@@ -1763,11 +1734,10 @@ static bool io_link_cancel_timeout(struct io_kiocb *req)
 
 	ret = hrtimer_try_to_cancel(&io->timer);
 	if (ret != -1) {
-		req->flags |= REQ_F_COMP_LOCKED;
 		io_cqring_fill_event(req, -ECANCELED);
 		io_commit_cqring(ctx);
 		req->flags &= ~REQ_F_LINK_HEAD;
-		io_put_req(req);
+		io_put_req_deferred(req, 1);
 		return true;
 	}
 
@@ -1794,17 +1764,12 @@ static bool __io_kill_linked_timeout(struct io_kiocb *req)
 static void io_kill_linked_timeout(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
+	unsigned long flags;
 	bool wake_ev;
 
-	if (!(req->flags & REQ_F_COMP_LOCKED)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&ctx->completion_lock, flags);
-		wake_ev = __io_kill_linked_timeout(req);
-		spin_unlock_irqrestore(&ctx->completion_lock, flags);
-	} else {
-		wake_ev = __io_kill_linked_timeout(req);
-	}
+	spin_lock_irqsave(&ctx->completion_lock, flags);
+	wake_ev = __io_kill_linked_timeout(req);
+	spin_unlock_irqrestore(&ctx->completion_lock, flags);
 
 	if (wake_ev)
 		io_cqring_ev_posted(ctx);
@@ -1844,27 +1809,20 @@ static void __io_fail_links(struct io_kiocb *req)
 		trace_io_uring_fail_link(req, link);
 
 		io_cqring_fill_event(link, -ECANCELED);
-		link->flags |= REQ_F_COMP_LOCKED;
-		__io_double_put_req(link);
+		io_put_req_deferred(link, 2);
 	}
 
 	io_commit_cqring(ctx);
-	io_cqring_ev_posted(ctx);
 }
 
 static void io_fail_links(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
+	unsigned long flags;
 
-	if (!(req->flags & REQ_F_COMP_LOCKED)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&ctx->completion_lock, flags);
-		__io_fail_links(req);
-		spin_unlock_irqrestore(&ctx->completion_lock, flags);
-	} else {
-		__io_fail_links(req);
-	}
+	spin_lock_irqsave(&ctx->completion_lock, flags);
+	__io_fail_links(req);
+	spin_unlock_irqrestore(&ctx->completion_lock, flags);
 
 	io_cqring_ev_posted(ctx);
 }
@@ -2078,6 +2036,34 @@ static void io_put_req(struct io_kiocb *req)
 		io_free_req(req);
 }
 
+static void io_put_req_deferred_cb(struct callback_head *cb)
+{
+	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+
+	io_free_req(req);
+}
+
+static void io_free_req_deferred(struct io_kiocb *req)
+{
+	int ret;
+
+	init_task_work(&req->task_work, io_put_req_deferred_cb);
+	ret = io_req_task_work_add(req, true);
+	if (unlikely(ret)) {
+		struct task_struct *tsk;
+
+		tsk = io_wq_get_task(req->ctx->io_wq);
+		task_work_add(tsk, &req->task_work, 0);
+		wake_up_process(tsk);
+	}
+}
+
+static inline void io_put_req_deferred(struct io_kiocb *req, int refs)
+{
+	if (refcount_sub_and_test(refs, &req->refs))
+		io_free_req_deferred(req);
+}
+
 static struct io_wq_work *io_steal_work(struct io_kiocb *req)
 {
 	struct io_kiocb *nxt;
@@ -2094,17 +2080,6 @@ static struct io_wq_work *io_steal_work(struct io_kiocb *req)
 	return nxt ? &nxt->work : NULL;
 }
 
-/*
- * Must only be used if we don't need to care about links, usually from
- * within the completion handling itself.
- */
-static void __io_double_put_req(struct io_kiocb *req)
-{
-	/* drop both submit and complete references */
-	if (refcount_sub_and_test(2, &req->refs))
-		__io_free_req(req);
-}
-
 static void io_double_put_req(struct io_kiocb *req)
 {
 	/* drop both submit and complete references */
@@ -5134,9 +5109,8 @@ static bool io_poll_remove_one(struct io_kiocb *req)
 	if (do_complete) {
 		io_cqring_fill_event(req, -ECANCELED);
 		io_commit_cqring(req->ctx);
-		req->flags |= REQ_F_COMP_LOCKED;
 		req_set_fail_links(req);
-		io_put_req(req);
+		io_put_req_deferred(req, 1);
 	}
 
 	return do_complete;
@@ -5318,9 +5292,8 @@ static int __io_timeout_cancel(struct io_kiocb *req)
 	list_del_init(&req->timeout.list);
 
 	req_set_fail_links(req);
-	req->flags |= REQ_F_COMP_LOCKED;
 	io_cqring_fill_event(req, -ECANCELED);
-	io_put_req(req);
+	io_put_req_deferred(req, 1);
 	return 0;
 }
 
-- 
2.24.0



* Re: [PATCH 0/5] fixes for REQ_F_COMP_LOCKED
From: Pavel Begunkov @ 2020-10-13  9:46 UTC
  To: Jens Axboe, io-uring

On 13/10/2020 09:43, Pavel Begunkov wrote:
> This removes REQ_F_COMP_LOCKED to fix a couple of problems with it.
> 
> [5/5] is heavy-handed, and some follow-up work should be done to ease
> the aftermath, e.g. in io_submit_flush_completions() and maybe
> fail_links().
> 
> An alternative would be to replace the flag with a comp_locked
> argument to put_req(), free_req() and so on, but IMHO removing it is
> better in the long run.
> 
> Note: there is a new io_req_task_work_add() call in [5/5]. Jens, could
> you please verify whether passing @twa_signal_ok=true is OK? I don't
> really understand the difference.

Btw, when I copied task_work_add(TWA_RESUME) from __io_free_req(),
tasks were hanging, sleeping uninterruptibly, and fail_links() wasn't
waking them. It looks like the deferring branch of __io_free_req() is
buggy as well and should use io_req_task_work_add().
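
For reference, the suspected difference (kernel-style sketch, not a
buildable unit; based on the fallback added in 5/5): queueing task_work
with notify == 0 doesn't prod the target task at all, so a task that is
already asleep won't run the work until something else wakes it. The
5/5 fallback therefore pairs the queueing with an explicit wake:

    task_work_add(tsk, &req->task_work, 0);    /* queue only, no notify */
    wake_up_process(tsk);                      /* actually wake the task */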


> 
> Pavel Begunkov (5):
>   io_uring: don't set COMP_LOCKED if won't put
>   io_uring: don't unnecessarily clear F_LINK_TIMEOUT
>   io_uring: don't put a poll req under spinlock
>   io_uring: dig out COMP_LOCK from deep call chain
>   io_uring: fix REQ_F_COMP_LOCKED by killing it
> 
>  fs/io_uring.c | 158 ++++++++++++++++++--------------------------------
>  1 file changed, 57 insertions(+), 101 deletions(-)
> 

-- 
Pavel Begunkov


* Re: [PATCH 3/5] io_uring: don't put a poll req under spinlock
From: Jens Axboe @ 2020-10-13 14:54 UTC
  To: Pavel Begunkov, io-uring

On 10/13/20 2:43 AM, Pavel Begunkov wrote:
> Move the io_put_req() call in io_poll_task_handler() out from under the
> spinlock. Minimising the time spent inside spinlock sections is a good
> rule in general, and performance-wise this should only affect rare
> cases / the slow path.

It's actually more important not to bounce on the lock, especially when
the extra path isn't that long. You'll end up with the same total hold
time, just split in two, but the downside is that you'll dirty that lock
twice instead of once.

So while I think the patch is fine as it avoids REQ_F_COMP_LOCKED,
I don't think the commit message really reflects reality. I'll
adjust it a bit.

-- 
Jens Axboe



* Re: [PATCH 0/5] fixes for REQ_F_COMP_LOCKED
From: Jens Axboe @ 2020-10-13 14:57 UTC
  To: Pavel Begunkov, io-uring

On 10/13/20 3:46 AM, Pavel Begunkov wrote:
> On 13/10/2020 09:43, Pavel Begunkov wrote:
>> This removes REQ_F_COMP_LOCKED to fix a couple of problems with it.
>>
>> [5/5] is heavy-handed, and some follow-up work should be done to ease
>> the aftermath, e.g. in io_submit_flush_completions() and maybe
>> fail_links().
>>
>> An alternative would be to replace the flag with a comp_locked
>> argument to put_req(), free_req() and so on, but IMHO removing it is
>> better in the long run.
>>
>> Note: there is a new io_req_task_work_add() call in [5/5]. Jens, could
>> you please verify whether passing @twa_signal_ok=true is OK? I don't
>> really understand the difference.

It should be fine; the only case that can't use 'true' is when it's
called from within the waitqueue handler, as we can recurse on that
lock.

Luckily that'll all go away once the TWA_SIGNAL improvement patches
are ready.

> Btw, when I copied task_work_add(TWA_RESUME) from __io_free_req(),
> tasks were hanging, sleeping uninterruptibly, and fail_links() wasn't
> waking them. It looks like the deferring branch of __io_free_req() is
> buggy as well and should use io_req_task_work_add().

Probably related to exit conditions.

-- 
Jens Axboe



* Re: [PATCH 0/5] fixes for REQ_F_COMP_LOCKED
From: Jens Axboe @ 2020-10-13 15:14 UTC
  To: Pavel Begunkov, io-uring

On 10/13/20 2:43 AM, Pavel Begunkov wrote:
> This removes REQ_F_COMP_LOCKED to fix a couple of problems with it.
> 
> [5/5] is heavy-handed, and some follow-up work should be done to ease
> the aftermath, e.g. in io_submit_flush_completions() and maybe
> fail_links().
> 
> An alternative would be to replace the flag with a comp_locked
> argument to put_req(), free_req() and so on, but IMHO removing it is
> better in the long run.
> 
> Note: there is a new io_req_task_work_add() call in [5/5]. Jens, could
> you please verify whether passing @twa_signal_ok=true is OK? I don't
> really understand the difference.

As mentioned in the other email, looks good to me.

Thanks, I have applied the series.

-- 
Jens Axboe



* Re: [PATCH 0/5] fixes for REQ_F_COMP_LOCKED
From: Pavel Begunkov @ 2020-10-13 17:02 UTC
  To: Jens Axboe, io-uring

On 13/10/2020 15:57, Jens Axboe wrote:
> On 10/13/20 3:46 AM, Pavel Begunkov wrote:
>> On 13/10/2020 09:43, Pavel Begunkov wrote:
>>> This removes REQ_F_COMP_LOCKED to fix a couple of problems with it.
>>>
>>> [5/5] is heavy-handed, and some follow-up work should be done to ease
>>> the aftermath, e.g. in io_submit_flush_completions() and maybe
>>> fail_links().
>>>
>>> An alternative would be to replace the flag with a comp_locked
>>> argument to put_req(), free_req() and so on, but IMHO removing it is
>>> better in the long run.
>>>
>>> Note: there is a new io_req_task_work_add() call in [5/5]. Jens, could
>>> you please verify whether passing @twa_signal_ok=true is OK? I don't
>>> really understand the difference.
> 
> It should be fine, the only case that can't use 'true' is when it's
> called from within the waitqueue handler as we can recurse on that
> lock.

Got it. And thanks for fixing descriptions!

> 
> Luckily that'll all go away once the TWA_SIGNAL improvement patches
> are ready.
> 
>> Btw, when I copied task_work_add(TWA_RESUME) from __io_free_req(),
>> tasks were hanging, sleeping uninterruptibly, and fail_links() wasn't
>> waking them. It looks like the deferring branch of __io_free_req() is
>> buggy as well and should use io_req_task_work_add().
> 
> Probably related to exit conditions.

Yep, kind of:
    close() -> ->flush() -> io_uring_cancel_files() -> schedule()

-- 
Pavel Begunkov

