* [PATCH 0/5] fixes around request overflows
@ 2020-12-17  0:24 Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 1/5] io_uring: cancel reqs shouldn't kill overflow list Pavel Begunkov
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-12-17  0:24 UTC (permalink / raw)
  To: Jens Axboe, io-uring

[1/5] fixes a recent regression, which should be pretty easily
discoverable (backport?). 3-5 are just regular easy cleanups.

Pavel Begunkov (5):
  io_uring: cancel reqs shouldn't kill overflow list
  io_uring: remove racy overflow list fast checks
  io_uring: consolidate CQ nr events calculation
  io_uring: inline io_cqring_mark_overflow()
  io_uring: limit {io|sq}poll submit locking scope

 fs/io_uring.c | 59 +++++++++++++++++++++++----------------------------
 1 file changed, 26 insertions(+), 33 deletions(-)

-- 
2.24.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/5] io_uring: cancel reqs shouldn't kill overflow list
  2020-12-17  0:24 [PATCH 0/5] fixes around request overflows Pavel Begunkov
@ 2020-12-17  0:24 ` Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 2/5] io_uring: remove racy overflow list fast checks Pavel Begunkov
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-12-17  0:24 UTC (permalink / raw)
  To: Jens Axboe, io-uring

io_uring_cancel_task_requests() doesn't imply that the ring is going
away; it may continue to work well after that. The problem is that it
sets ->cq_overflow_flushed, effectively disabling the CQ overflow
feature.

Split setting ->cq_overflow_flushed from the flush, and do the former
only on exit. It's ok in terms of cancellations because there is an
io_uring->in_idle check in __io_cqring_fill_event().

It also fixes a race with setting ->cq_overflow_flushed in
io_uring_cancel_task_requests(), which is not atomic and is part of a
bitmask with other flags. Though, the only other flag that's not set
during init is ->drain_next, so it's not as bad for sane architectures.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f53356ced5ab..7115147a25a8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1734,10 +1734,6 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 
 	spin_lock_irqsave(&ctx->completion_lock, flags);
 
-	/* if force is set, the ring is going away. always drop after that */
-	if (force)
-		ctx->cq_overflow_flushed = 1;
-
 	cqe = NULL;
 	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
 		if (!io_match_task(req, tsk, files))
@@ -8663,6 +8659,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 {
 	mutex_lock(&ctx->uring_lock);
 	percpu_ref_kill(&ctx->refs);
+	/* if force is set, the ring is going away. always drop after that */
+	ctx->cq_overflow_flushed = 1;
 	if (ctx->rings)
 		io_cqring_overflow_flush(ctx, true, NULL, NULL);
 	mutex_unlock(&ctx->uring_lock);
-- 
2.24.0



* [PATCH 2/5] io_uring: remove racy overflow list fast checks
  2020-12-17  0:24 [PATCH 0/5] fixes around request overflows Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 1/5] io_uring: cancel reqs shouldn't kill overflow list Pavel Begunkov
@ 2020-12-17  0:24 ` Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 3/5] io_uring: consolidate CQ nr events calculation Pavel Begunkov
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-12-17  0:24 UTC (permalink / raw)
  To: Jens Axboe, io-uring

list_empty_careful() is only race-free if certain conditions are met,
i.e. no re-adds after del_init. io_cqring_overflow_flush() does
list_move(), so it's actually racy here.

Remove those checks; we have ->cq_check_overflow for the fast path.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 7115147a25a8..3c8134be4a05 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1725,8 +1725,6 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 	LIST_HEAD(list);
 
 	if (!force) {
-		if (list_empty_careful(&ctx->cq_overflow_list))
-			return true;
 		if ((ctx->cached_cq_tail - READ_ONCE(rings->cq.head) ==
 		    rings->cq_ring_entries))
 			return false;
@@ -6831,8 +6829,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
 
 	/* if we have a backlog and couldn't flush it all, return BUSY */
 	if (test_bit(0, &ctx->sq_check_overflow)) {
-		if (!list_empty(&ctx->cq_overflow_list) &&
-		    !io_cqring_overflow_flush(ctx, false, NULL, NULL))
+		if (!io_cqring_overflow_flush(ctx, false, NULL, NULL))
 			return -EBUSY;
 	}
 
-- 
2.24.0



* [PATCH 3/5] io_uring: consolidate CQ nr events calculation
  2020-12-17  0:24 [PATCH 0/5] fixes around request overflows Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 1/5] io_uring: cancel reqs shouldn't kill overflow list Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 2/5] io_uring: remove racy overflow list fast checks Pavel Begunkov
@ 2020-12-17  0:24 ` Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 4/5] io_uring: inline io_cqring_mark_overflow() Pavel Begunkov
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-12-17  0:24 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Add a helper which calculates the number of events in the CQ. The
handcoded version in io_cqring_overflow_flush() is not the clearest
thing, so this makes it slightly more readable.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3c8134be4a05..8f3588f4cb38 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1694,6 +1694,11 @@ static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
 	return io_wq_current_is_worker();
 }
 
+static inline unsigned __io_cqring_events(struct io_ring_ctx *ctx)
+{
+	return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
+}
+
 static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 {
 	if (waitqueue_active(&ctx->wait))
@@ -1724,14 +1729,10 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 	unsigned long flags;
 	LIST_HEAD(list);
 
-	if (!force) {
-		if ((ctx->cached_cq_tail - READ_ONCE(rings->cq.head) ==
-		    rings->cq_ring_entries))
-			return false;
-	}
+	if (!force && __io_cqring_events(ctx) == rings->cq_ring_entries)
+		return false;
 
 	spin_lock_irqsave(&ctx->completion_lock, flags);
-
 	cqe = NULL;
 	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
 		if (!io_match_task(req, tsk, files))
@@ -2315,8 +2316,6 @@ static void io_double_put_req(struct io_kiocb *req)
 
 static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
 {
-	struct io_rings *rings = ctx->rings;
-
 	if (test_bit(0, &ctx->cq_check_overflow)) {
 		/*
 		 * noflush == true is from the waitqueue handler, just ensure
@@ -2331,7 +2330,7 @@ static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
 
 	/* See comment at the top of this file */
 	smp_rmb();
-	return ctx->cached_cq_tail - READ_ONCE(rings->cq.head);
+	return __io_cqring_events(ctx);
 }
 
 static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
-- 
2.24.0



* [PATCH 4/5] io_uring: inline io_cqring_mark_overflow()
  2020-12-17  0:24 [PATCH 0/5] fixes around request overflows Pavel Begunkov
                   ` (2 preceding siblings ...)
  2020-12-17  0:24 ` [PATCH 3/5] io_uring: consolidate CQ nr events calculation Pavel Begunkov
@ 2020-12-17  0:24 ` Pavel Begunkov
  2020-12-17  0:24 ` [PATCH 5/5] io_uring: limit {io|sq}poll submit locking scope Pavel Begunkov
  2020-12-17  2:26 ` [PATCH 0/5] fixes around request overflows Jens Axboe
  5 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-12-17  0:24 UTC (permalink / raw)
  To: Jens Axboe, io-uring

There is only one user of it and the name is misleading, so get rid of
it by inlining. While at it, make overflow_flush's return value
deduction simpler.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8f3588f4cb38..ce580678b8ed 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1709,15 +1709,6 @@ static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 		eventfd_signal(ctx->cq_ev_fd, 1);
 }
 
-static void io_cqring_mark_overflow(struct io_ring_ctx *ctx)
-{
-	if (list_empty(&ctx->cq_overflow_list)) {
-		clear_bit(0, &ctx->sq_check_overflow);
-		clear_bit(0, &ctx->cq_check_overflow);
-		ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
-	}
-}
-
 /* Returns true if there are no backlogged entries after the flush */
 static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 				     struct task_struct *tsk,
@@ -1727,13 +1718,13 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 	struct io_kiocb *req, *tmp;
 	struct io_uring_cqe *cqe;
 	unsigned long flags;
+	bool all_flushed;
 	LIST_HEAD(list);
 
 	if (!force && __io_cqring_events(ctx) == rings->cq_ring_entries)
 		return false;
 
 	spin_lock_irqsave(&ctx->completion_lock, flags);
-	cqe = NULL;
 	list_for_each_entry_safe(req, tmp, &ctx->cq_overflow_list, compl.list) {
 		if (!io_match_task(req, tsk, files))
 			continue;
@@ -1754,9 +1745,14 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 		}
 	}
 
-	io_commit_cqring(ctx);
-	io_cqring_mark_overflow(ctx);
+	all_flushed = list_empty(&ctx->cq_overflow_list);
+	if (all_flushed) {
+		clear_bit(0, &ctx->sq_check_overflow);
+		clear_bit(0, &ctx->cq_check_overflow);
+		ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
+	}
 
+	io_commit_cqring(ctx);
 	spin_unlock_irqrestore(&ctx->completion_lock, flags);
 	io_cqring_ev_posted(ctx);
 
@@ -1766,7 +1762,7 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 		io_put_req(req);
 	}
 
-	return cqe != NULL;
+	return all_flushed;
 }
 
 static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
-- 
2.24.0



* [PATCH 5/5] io_uring: limit {io|sq}poll submit locking scope
  2020-12-17  0:24 [PATCH 0/5] fixes around request overflows Pavel Begunkov
                   ` (3 preceding siblings ...)
  2020-12-17  0:24 ` [PATCH 4/5] io_uring: inline io_cqring_mark_overflow() Pavel Begunkov
@ 2020-12-17  0:24 ` Pavel Begunkov
  2020-12-17  2:26 ` [PATCH 0/5] fixes around request overflows Jens Axboe
  5 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-12-17  0:24 UTC (permalink / raw)
  To: Jens Axboe, io-uring

We don't need to take uring_lock for SQPOLL|IOPOLL to do
io_cqring_overflow_flush() when cq_overflow_list is empty, so remove it
from the hot path.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ce580678b8ed..fdb7271127c9 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -9162,10 +9162,13 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	 */
 	ret = 0;
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
-		io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
-		if (!list_empty_careful(&ctx->cq_overflow_list))
+		if (!list_empty_careful(&ctx->cq_overflow_list)) {
+			bool needs_lock = ctx->flags & IORING_SETUP_IOPOLL;
+
+			io_ring_submit_lock(ctx, needs_lock);
 			io_cqring_overflow_flush(ctx, false, NULL, NULL);
-		io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL));
+			io_ring_submit_unlock(ctx, needs_lock);
+		}
 		if (flags & IORING_ENTER_SQ_WAKEUP)
 			wake_up(&ctx->sq_data->wait);
 		if (flags & IORING_ENTER_SQ_WAIT)
-- 
2.24.0



* Re: [PATCH 0/5] fixes around request overflows
  2020-12-17  0:24 [PATCH 0/5] fixes around request overflows Pavel Begunkov
                   ` (4 preceding siblings ...)
  2020-12-17  0:24 ` [PATCH 5/5] io_uring: limit {io|sq}poll submit locking scope Pavel Begunkov
@ 2020-12-17  2:26 ` Jens Axboe
  2020-12-17  2:38   ` Pavel Begunkov
  5 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2020-12-17  2:26 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 12/16/20 5:24 PM, Pavel Begunkov wrote:
> [1/5] fixes a recent regression, which should be pretty easily
> discoverable (backport?). 3-5 are just regular easy cleanups.

If 1/5 is a recent regression, why doesn't it have a Fixes line?
Any commit should have that, if applicable, as it makes it a lot
easier to judge what needs to be backported.

-- 
Jens Axboe



* Re: [PATCH 0/5] fixes around request overflows
  2020-12-17  2:26 ` [PATCH 0/5] fixes around request overflows Jens Axboe
@ 2020-12-17  2:38   ` Pavel Begunkov
  2020-12-17 15:21     ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Pavel Begunkov @ 2020-12-17  2:38 UTC (permalink / raw)
  To: Jens Axboe, io-uring

On 17/12/2020 02:26, Jens Axboe wrote:
> On 12/16/20 5:24 PM, Pavel Begunkov wrote:
>> [1/5] fixes a recent regression, which should be pretty easily
>> discoverable (backport?). 3-5 are just regular easy cleanups.
> 
> If 1/5 is a recent regression, why doesn't it have a Fixes line?

I was lazy enough to just ask before actually looking for it.

Fixes: 0f2122045b946 ("io_uring: don't rely on weak ->files references")
Which is backported to v5.5+

> Any commit should have that, if applicable, as it makes it a lot
> easier to judge what needs to be backported.

-- 
Pavel Begunkov


* Re: [PATCH 0/5] fixes around request overflows
  2020-12-17  2:38   ` Pavel Begunkov
@ 2020-12-17 15:21     ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2020-12-17 15:21 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 12/16/20 7:38 PM, Pavel Begunkov wrote:
> On 17/12/2020 02:26, Jens Axboe wrote:
>> On 12/16/20 5:24 PM, Pavel Begunkov wrote:
>>> [1/5] fixes a recent regression, which should be pretty easily
>>> discoverable (backport?). 3-5 are just regular easy cleanups.
>>
>> If 1/5 is a recent regression, why doesn't it have a Fixes line?
> 
> I was lazy enough to just ask before actually looking for it.
> 
> Fixes: 0f2122045b946 ("io_uring: don't rely on weak ->files references")
> Which is backported to v5.5+

Thanks, added, and applied the rest.

-- 
Jens Axboe


