io-uring.vger.kernel.org archive mirror
* [PATCH for-next 0/5] overflow CQE cleanups
@ 2024-04-10  1:26 Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 1/5] io_uring: unexport io_req_cqe_overflow() Pavel Begunkov
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Pavel Begunkov @ 2024-04-10  1:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Refactoring for overflow CQE flushing and posting. The next related
problem would be to make io_cqring_event_overflow()'s locking saner.
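
For context, below is a minimal userspace sketch of how a CQ ends up overflowing
in the first place. It is not part of this series; it assumes liburing and a
reasonably recent kernel, and it checks the raw IORING_SQ_CQ_OVERFLOW flag
(newer liburing also offers io_uring_cq_has_overflow()):

    /* overflow_demo.c: gcc overflow_demo.c -o overflow_demo -luring */
    #include <stdio.h>
    #include <liburing.h>

    int main(void)
    {
            struct io_uring ring;
            int batch, i;

            /* 4 SQ entries; the CQ defaults to twice that, i.e. 8 entries */
            if (io_uring_queue_init(4, &ring, 0) < 0)
                    return 1;

            /* submit 16 nops without reaping anything: 8 CQEs fit, the rest overflow */
            for (batch = 0; batch < 4; batch++) {
                    for (i = 0; i < 4; i++)
                            io_uring_prep_nop(io_uring_get_sqe(&ring));
                    io_uring_submit(&ring);
            }

            if (*ring.sq.kflags & IORING_SQ_CQ_OVERFLOW)
                    printf("CQ overflowed; overflow CQEs queued in the kernel\n");

            io_uring_queue_exit(&ring);
            return 0;
    }

Reaping CQEs and re-entering the kernel (e.g. via io_uring_wait_cqe()) is what
eventually flushes those queued entries back into the CQ ring; that flushing
path is what this series reworks.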

Pavel Begunkov (5):
  io_uring: unexport io_req_cqe_overflow()
  io_uring: remove extra SQPOLL overflow flush
  io_uring: open code io_cqring_overflow_flush()
  io_uring: always lock __io_cqring_overflow_flush
  io_uring: consolidate overflow flushing

 io_uring/io_uring.c | 60 +++++++++++++++++----------------------------
 io_uring/io_uring.h |  1 -
 2 files changed, 23 insertions(+), 38 deletions(-)

-- 
2.44.0



* [PATCH for-next 1/5] io_uring: unexport io_req_cqe_overflow()
  2024-04-10  1:26 [PATCH for-next 0/5] overflow CQE cleanups Pavel Begunkov
@ 2024-04-10  1:26 ` Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 2/5] io_uring: remove extra SQPOLL overflow flush Pavel Begunkov
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2024-04-10  1:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

There are no users of io_req_cqe_overflow() outside of io_uring.c, so make
it static.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 2 +-
 io_uring/io_uring.h | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 8df9ad010803..7e90c37084a9 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -819,7 +819,7 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
 	return true;
 }
 
-void io_req_cqe_overflow(struct io_kiocb *req)
+static void io_req_cqe_overflow(struct io_kiocb *req)
 {
 	io_cqring_event_overflow(req->ctx, req->cqe.user_data,
 				req->cqe.res, req->cqe.flags,
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 1eb65324792a..624ca9076a50 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -62,7 +62,6 @@ static inline bool io_should_wake(struct io_wait_queue *iowq)
 }
 
 bool io_cqe_cache_refill(struct io_ring_ctx *ctx, bool overflow);
-void io_req_cqe_overflow(struct io_kiocb *req);
 int io_run_task_work_sig(struct io_ring_ctx *ctx);
 void io_req_defer_failed(struct io_kiocb *req, s32 res);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
-- 
2.44.0



* [PATCH for-next 2/5] io_uring: remove extra SQPOLL overflow flush
  2024-04-10  1:26 [PATCH for-next 0/5] overflow CQE cleanups Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 1/5] io_uring: unexport io_req_cqe_overflow() Pavel Begunkov
@ 2024-04-10  1:26 ` Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 3/5] io_uring: open code io_cqring_overflow_flush() Pavel Begunkov
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2024-04-10  1:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

c1edbf5f081be ("io_uring: flag SQPOLL busy condition to userspace") added an
extra overflowed CQE flush to the SQPOLL submission path to deal with
backpressure; that backpressure handling has since been removed. Drop the
extra flush and let io_cqring_wait() / iopoll handle it.
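
For reference, the remaining flush on this path lives in io_cqring_wait();
after patch 3 of this series it boils down to the following (simplified from
that patch's diff):

	if (unlikely(test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)))
		io_cqring_do_overflow_flush(ctx);
	if (__io_cqring_events_user(ctx) >= min_events)
		return 0;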

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7e90c37084a9..55838813ac3e 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3238,8 +3238,6 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	 */
 	ret = 0;
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
-		io_cqring_overflow_flush(ctx);
-
 		if (unlikely(ctx->sq_data->thread == NULL)) {
 			ret = -EOWNERDEAD;
 			goto out;
-- 
2.44.0



* [PATCH for-next 3/5] io_uring: open code io_cqring_overflow_flush()
  2024-04-10  1:26 [PATCH for-next 0/5] overflow CQE cleanups Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 1/5] io_uring: unexport io_req_cqe_overflow() Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 2/5] io_uring: remove extra SQPOLL overflow flush Pavel Begunkov
@ 2024-04-10  1:26 ` Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 4/5] io_uring: always lock __io_cqring_overflow_flush Pavel Begunkov
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2024-04-10  1:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

There is only one caller of io_cqring_overflow_flush(), so open code it there.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 55838813ac3e..9424659c5856 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -726,12 +726,6 @@ static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx)
 		mutex_unlock(&ctx->uring_lock);
 }
 
-static void io_cqring_overflow_flush(struct io_ring_ctx *ctx)
-{
-	if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
-		io_cqring_do_overflow_flush(ctx);
-}
-
 /* can be called by any task */
 static void io_put_task_remote(struct task_struct *task)
 {
@@ -2452,8 +2446,9 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 	if (!llist_empty(&ctx->work_llist))
 		io_run_local_work(ctx, min_events);
 	io_run_task_work();
-	io_cqring_overflow_flush(ctx);
-	/* if user messes with these they will just get an early return */
+
+	if (unlikely(test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)))
+		io_cqring_do_overflow_flush(ctx);
 	if (__io_cqring_events_user(ctx) >= min_events)
 		return 0;
 
-- 
2.44.0



* [PATCH for-next 4/5] io_uring: always lock __io_cqring_overflow_flush
  2024-04-10  1:26 [PATCH for-next 0/5] overflow CQE cleanups Pavel Begunkov
                   ` (2 preceding siblings ...)
  2024-04-10  1:26 ` [PATCH for-next 3/5] io_uring: open code io_cqring_overflow_flush() Pavel Begunkov
@ 2024-04-10  1:26 ` Pavel Begunkov
  2024-04-10  1:26 ` [PATCH for-next 5/5] io_uring: consolidate overflow flushing Pavel Begunkov
  2024-04-10  2:39 ` [PATCH for-next 0/5] overflow CQE cleanups Jens Axboe
  5 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2024-04-10  1:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Conditional locking is never great, and for __io_cqring_overflow_flush(),
which is a slow path, it's not justified. Don't handle IOPOLL separately;
always grab uring_lock for overflow flushing.
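
With that, the flush helper becomes a plain lock/flush/unlock sequence, and
the lockdep_assert_held() calls added below document the rule for any direct
callers of __io_cqring_overflow_flush(). Roughly (simplified from the diff
below):

	static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx)
	{
		mutex_lock(&ctx->uring_lock);
		__io_cqring_overflow_flush(ctx);	/* asserts uring_lock is held */
		mutex_unlock(&ctx->uring_lock);
	}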

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 9424659c5856..d6cb7d0d5e1d 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -673,6 +673,8 @@ static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
 	struct io_overflow_cqe *ocqe;
 	LIST_HEAD(list);
 
+	lockdep_assert_held(&ctx->uring_lock);
+
 	spin_lock(&ctx->completion_lock);
 	list_splice_init(&ctx->cq_overflow_list, &list);
 	clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
@@ -689,6 +691,8 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 {
 	size_t cqe_size = sizeof(struct io_uring_cqe);
 
+	lockdep_assert_held(&ctx->uring_lock);
+
 	if (__io_cqring_events(ctx) == ctx->cq_entries)
 		return;
 
@@ -718,12 +722,9 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 
 static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx)
 {
-	/* iopoll syncs against uring_lock, not completion_lock */
-	if (ctx->flags & IORING_SETUP_IOPOLL)
-		mutex_lock(&ctx->uring_lock);
+	mutex_lock(&ctx->uring_lock);
 	__io_cqring_overflow_flush(ctx);
-	if (ctx->flags & IORING_SETUP_IOPOLL)
-		mutex_unlock(&ctx->uring_lock);
+	mutex_unlock(&ctx->uring_lock);
 }
 
 /* can be called by any task */
@@ -1522,6 +1523,8 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
 	unsigned int nr_events = 0;
 	unsigned long check_cq;
 
+	lockdep_assert_held(&ctx->uring_lock);
+
 	if (!io_allowed_run_tw(ctx))
 		return -EEXIST;
 
-- 
2.44.0



* [PATCH for-next 5/5] io_uring: consolidate overflow flushing
  2024-04-10  1:26 [PATCH for-next 0/5] overflow CQE cleanups Pavel Begunkov
                   ` (3 preceding siblings ...)
  2024-04-10  1:26 ` [PATCH for-next 4/5] io_uring: always lock __io_cqring_overflow_flush Pavel Begunkov
@ 2024-04-10  1:26 ` Pavel Begunkov
  2024-04-10  2:39 ` [PATCH for-next 0/5] overflow CQE cleanups Jens Axboe
  5 siblings, 0 replies; 7+ messages in thread
From: Pavel Begunkov @ 2024-04-10  1:26 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

Consolidate __io_cqring_overflow_flush() and io_cqring_overflow_kill() back
into a single function, as it once was; it's easier to work with that way.
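
In rough terms (simplified from the diff below), the merged helper walks the
overflow list once and uses a "dying" flag to pick between the two behaviours:
a normal flush copies each overflow CQE into the CQ and stops when the CQ
fills up again, while the ring-teardown path skips the copy and just frees the
entries:

	while (!list_empty(&ctx->cq_overflow_list)) {
		ocqe = list_first_entry(&ctx->cq_overflow_list,
					struct io_overflow_cqe, list);
		if (!dying) {
			if (!io_get_cqe_overflow(ctx, &cqe, true))
				break;	/* CQ full again, keep the rest queued */
			memcpy(cqe, &ocqe->cqe, cqe_size);
		}
		list_del(&ocqe->list);
		kfree(ocqe);
	}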

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/io_uring.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index d6cb7d0d5e1d..7a9bfbc1c080 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -668,26 +668,7 @@ static void io_cq_unlock_post(struct io_ring_ctx *ctx)
 	io_commit_cqring_flush(ctx);
 }
 
-static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
-{
-	struct io_overflow_cqe *ocqe;
-	LIST_HEAD(list);
-
-	lockdep_assert_held(&ctx->uring_lock);
-
-	spin_lock(&ctx->completion_lock);
-	list_splice_init(&ctx->cq_overflow_list, &list);
-	clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
-	spin_unlock(&ctx->completion_lock);
-
-	while (!list_empty(&list)) {
-		ocqe = list_first_entry(&list, struct io_overflow_cqe, list);
-		list_del(&ocqe->list);
-		kfree(ocqe);
-	}
-}
-
-static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
+static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
 {
 	size_t cqe_size = sizeof(struct io_uring_cqe);
 
@@ -704,11 +685,14 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 		struct io_uring_cqe *cqe;
 		struct io_overflow_cqe *ocqe;
 
-		if (!io_get_cqe_overflow(ctx, &cqe, true))
-			break;
 		ocqe = list_first_entry(&ctx->cq_overflow_list,
 					struct io_overflow_cqe, list);
-		memcpy(cqe, &ocqe->cqe, cqe_size);
+
+		if (!dying) {
+			if (!io_get_cqe_overflow(ctx, &cqe, true))
+				break;
+			memcpy(cqe, &ocqe->cqe, cqe_size);
+		}
 		list_del(&ocqe->list);
 		kfree(ocqe);
 	}
@@ -720,10 +704,16 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 	io_cq_unlock_post(ctx);
 }
 
+static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
+{
+	if (ctx->rings)
+		__io_cqring_overflow_flush(ctx, true);
+}
+
 static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx)
 {
 	mutex_lock(&ctx->uring_lock);
-	__io_cqring_overflow_flush(ctx);
+	__io_cqring_overflow_flush(ctx, false);
 	mutex_unlock(&ctx->uring_lock);
 }
 
@@ -1531,7 +1521,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
 	check_cq = READ_ONCE(ctx->check_cq);
 	if (unlikely(check_cq)) {
 		if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
-			__io_cqring_overflow_flush(ctx);
+			__io_cqring_overflow_flush(ctx, false);
 		/*
 		 * Similarly do not spin if we have not informed the user of any
 		 * dropped CQE.
-- 
2.44.0



* Re: [PATCH for-next 0/5] overflow CQE cleanups
  2024-04-10  1:26 [PATCH for-next 0/5] overflow CQE cleanups Pavel Begunkov
                   ` (4 preceding siblings ...)
  2024-04-10  1:26 ` [PATCH for-next 5/5] io_uring: consolidate overflow flushing Pavel Begunkov
@ 2024-04-10  2:39 ` Jens Axboe
  5 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2024-04-10  2:39 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov


On Wed, 10 Apr 2024 02:26:50 +0100, Pavel Begunkov wrote:
> Refactoring for overflow CQE flushing and posting. The next related
> problem would be to make io_cqring_event_overflow()'s locking saner.
> 
> Pavel Begunkov (5):
>   io_uring: unexport io_req_cqe_overflow()
>   io_uring: remove extra SQPOLL overflow flush
>   io_uring: open code io_cqring_overflow_flush()
>   io_uring: always lock __io_cqring_overflow_flush
>   io_uring: consolidate overflow flushing
> 
> [...]

Applied, thanks!

[1/5] io_uring: unexport io_req_cqe_overflow()
      commit: 3de3cc01f18fc7f6c9a5f8f28d97c5e36912e78b
[2/5] io_uring: remove extra SQPOLL overflow flush
      commit: 2aa2ddefbe584264ee618e15b1a0d1183e8e37b8
[3/5] io_uring: open code io_cqring_overflow_flush()
      commit: bd08cb7a6f5b05bc1b122117a922da21c081c58e
[4/5] io_uring: always lock __io_cqring_overflow_flush
      commit: 678b1aa58dffc01d9359a3fc093192746350f137
[5/5] io_uring: consolidate overflow flushing
      commit: ed50ebf24b391a6a3b17a7f6bf968303f0277bb7

Best regards,
-- 
Jens Axboe





