* [PATCH 1/1] io_uring: optimise mb() in io_req_local_work_add
@ 2022-10-06  1:06 Pavel Begunkov
  2022-10-06 13:40 ` Jens Axboe
  0 siblings, 1 reply; 2+ messages in thread
From: Pavel Begunkov @ 2022-10-06  1:06 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe, asml.silence

io_cqring_wake() needs a barrier for the waitqueue_active() check.
However, io_req_local_work_add() calls llist_add() just prior, which
implies an atomic operation, and with that we can replace smp_mb()
with smp_mb__after_atomic().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
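For reference, below is a minimal userspace sketch of the barrier pairing
that waitqueue_active() relies on, written with C11 atomics instead of the
kernel primitives; all names are illustrative, not kernel APIs. The producer
must make its queued work visible before checking for a sleeper, and the
waiter must register itself before re-checking for work, so a wakeup can
never be lost. llist_add()'s atomic RMW plus smp_mb__after_atomic() provides
the producer-side ordering; the sketch approximates it with seq_cst.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool work_pending;   /* stands in for the llist of work */
static atomic_bool waiter_present; /* stands in for a non-empty cq_wait */

/* Producer side: publish the work, then check for a sleeper. */
static void queue_work_and_wake(void)
{
	/* Plays the role of llist_add() + smp_mb__after_atomic(). */
	atomic_store_explicit(&work_pending, true, memory_order_seq_cst);

	/* The waitqueue_active()-style check: skip the wakeup only if
	 * no waiter had registered before our store became visible. */
	if (atomic_load_explicit(&waiter_present, memory_order_seq_cst))
		printf("wake_up_all()\n");
}

/* Waiter side: register first, then re-check for pending work. */
static bool prepare_to_wait_and_check(void)
{
	atomic_store_explicit(&waiter_present, true, memory_order_seq_cst);

	/* With full ordering on both sides, either the producer sees
	 * waiter_present or we see work_pending here; never neither. */
	return atomic_load_explicit(&work_pending, memory_order_seq_cst);
}

int main(void)
{
	if (!prepare_to_wait_and_check())
		printf("would sleep\n");
	queue_work_and_wake();
	return 0;
}
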
 io_uring/io_uring.c |  5 +++--
 io_uring/io_uring.h | 11 +++++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 5e7c086685bf..355fc1f3083d 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1106,6 +1106,8 @@ static void io_req_local_work_add(struct io_kiocb *req)
 
 	if (!llist_add(&req->io_task_work.node, &ctx->work_llist))
 		return;
+	/* need it for the following io_cqring_wake() */
+	smp_mb__after_atomic();
 
 	if (unlikely(atomic_read(&req->task->io_uring->in_idle))) {
 		io_move_task_work_from_local(ctx);
@@ -1117,8 +1119,7 @@ static void io_req_local_work_add(struct io_kiocb *req)
 
 	if (ctx->has_evfd)
 		io_eventfd_signal(ctx);
-	io_cqring_wake(ctx);
-
+	__io_cqring_wake(ctx);
 }
 
 static inline void __io_req_task_work_add(struct io_kiocb *req, bool allow_local)
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 177bd55357d7..e733d31f31d2 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -203,17 +203,24 @@ static inline void io_commit_cqring(struct io_ring_ctx *ctx)
 	smp_store_release(&ctx->rings->cq.tail, ctx->cached_cq_tail);
 }
 
-static inline void io_cqring_wake(struct io_ring_ctx *ctx)
+/* requires smp_mb() prior, see wq_has_sleeper() */
+static inline void __io_cqring_wake(struct io_ring_ctx *ctx)
 {
 	/*
 	 * wake_up_all() may seem excessive, but io_wake_function() and
 	 * io_should_wake() handle the termination of the loop and only
 	 * wake as many waiters as we need to.
 	 */
-	if (wq_has_sleeper(&ctx->cq_wait))
+	if (waitqueue_active(&ctx->cq_wait))
 		wake_up_all(&ctx->cq_wait);
 }
 
+static inline void io_cqring_wake(struct io_ring_ctx *ctx)
+{
+	smp_mb();
+	__io_cqring_wake(ctx);
+}
+
 static inline bool io_sqring_full(struct io_ring_ctx *ctx)
 {
 	struct io_rings *r = ctx->rings;
-- 
2.37.3



* Re: [PATCH 1/1] io_uring: optimise mb() in io_req_local_work_add
  2022-10-06  1:06 [PATCH 1/1] io_uring: optimise mb() in io_req_local_work_add Pavel Begunkov
@ 2022-10-06 13:40 ` Jens Axboe
  0 siblings, 0 replies; 2+ messages in thread
From: Jens Axboe @ 2022-10-06 13:40 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov

On Thu, 6 Oct 2022 02:06:10 +0100, Pavel Begunkov wrote:
> io_cqring_wake() needs a barrier for the waitqueue_active() check.
> However, io_req_local_work_add() calls llist_add() just prior, which
> implies an atomic operation, and with that we can replace smp_mb()
> with smp_mb__after_atomic().
> 
> 

Applied, thanks!

[1/1] io_uring: optimise mb() in io_req_local_work_add
      commit: b4f5d4f4e12def53462ea7f35dafa132f2d54156

Best regards,
-- 
Jens Axboe



