From: Pavel Begunkov <asml.silence@gmail.com>
To: Jens Axboe <axboe@kernel.dk>, io-uring@vger.kernel.org
Subject: [PATCH v2 17/24] io_uring: remove drain_active check from hot path
Date: Fri, 24 Sep 2021 21:59:57 +0100
Message-ID: <d7e7ddc63c15e8a300833132abb3eb8fd3918aef.1632516769.git.asml.silence@gmail.com>
In-Reply-To: <cover.1632516769.git.asml.silence@gmail.com>

Checking req->ctx->drain_active in the hot path is a bit too expensive,
partially because of the two pointer dereferences. Do a trick instead:
if we see it set in io_init_req(), mark the request REQ_F_FORCE_ASYNC so
it automatically goes through the slower path, where we can catch and
drain it. This is nearly free to do in io_init_req() because the
->restricted check is already there and both flags sit in the same byte
of the same bitmask.
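
For illustration only, a minimal user-space sketch of the same idea
(hypothetical names, not the kernel code): the per-context flag is
folded into the per-request flags word at init time, so the submission
hot path tests only a word it already has loaded and never touches ctx.

    #include <stdbool.h>
    #include <stdio.h>

    #define RF_FORCE_ASYNC	(1u << 0)
    #define RF_FAIL		(1u << 1)

    struct sketch_ctx { bool drain_active; };
    struct sketch_req { unsigned int flags; struct sketch_ctx *ctx; };

    /* init path: arm the slow path once, while draining is active */
    static void sketch_init_req(struct sketch_req *req, struct sketch_ctx *ctx)
    {
    	req->ctx = ctx;
    	req->flags = 0;
    	if (ctx->drain_active)
    		req->flags |= RF_FORCE_ASYNC;
    }

    /* hot path: a single test on req->flags, no ctx dereference */
    static void sketch_queue_req(struct sketch_req *req)
    {
    	if (!(req->flags & (RF_FORCE_ASYNC | RF_FAIL))) {
    		puts("fast inline submission");
    		return;
    	}
    	puts("slow path: force-async / drain / failure handling");
    }

    int main(void)
    {
    	struct sketch_ctx ctx = { .drain_active = true };
    	struct sketch_req req;

    	sketch_init_req(&req, &ctx);
    	sketch_queue_req(&req);	/* slow path, because draining was armed */
    	return 0;
    }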

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 53 ++++++++++++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 24 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 271d921508f8..25f6096269c5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6446,23 +6446,15 @@ static bool io_drain_req(struct io_kiocb *req)
 	int ret;
 	u32 seq;
 
-	if (req->flags & REQ_F_FAIL) {
-		io_req_complete_fail_submit(req);
-		return true;
-	}
-
-	/*
-	 * If we need to drain a request in the middle of a link, drain the
-	 * head request and the next request/link after the current link.
-	 * Considering sequential execution of links, IOSQE_IO_DRAIN will be
-	 * maintained for every request of our link.
-	 */
-	if (ctx->drain_next) {
-		req->flags |= REQ_F_IO_DRAIN;
-		ctx->drain_next = false;
-	}
 	/* not interested in head, start from the first linked */
 	io_for_each_link(pos, req->link) {
+		/*
+		 * If we need to drain a request in the middle of a link, drain
+		 * the head request and the next request/link after the current
+		 * link. Considering sequential execution of links,
+		 * IOSQE_IO_DRAIN will be maintained for every request of our
+		 * link.
+		 */
 		if (pos->flags & REQ_F_IO_DRAIN) {
 			ctx->drain_next = true;
 			req->flags |= REQ_F_IO_DRAIN;
@@ -6954,13 +6946,12 @@ static void __io_queue_sqe(struct io_kiocb *req)
 static inline void io_queue_sqe(struct io_kiocb *req)
 	__must_hold(&req->ctx->uring_lock)
 {
-	if (unlikely(req->ctx->drain_active) && io_drain_req(req))
-		return;
-
 	if (likely(!(req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
 		__io_queue_sqe(req);
 	} else if (req->flags & REQ_F_FAIL) {
 		io_req_complete_fail_submit(req);
+	} else if (unlikely(req->ctx->drain_active) && io_drain_req(req)) {
+		return;
 	} else {
 		int ret = io_req_prep_async(req);
 
@@ -6980,9 +6971,6 @@ static inline bool io_check_restriction(struct io_ring_ctx *ctx,
 					struct io_kiocb *req,
 					unsigned int sqe_flags)
 {
-	if (likely(!ctx->restricted))
-		return true;
-
 	if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
 		return false;
 
@@ -7023,11 +7011,28 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
 		    !io_op_defs[req->opcode].buffer_select)
 			return -EOPNOTSUPP;
-		if (sqe_flags & IOSQE_IO_DRAIN)
+		if (sqe_flags & IOSQE_IO_DRAIN) {
+			struct io_submit_link *link = &ctx->submit_state.link;
+
 			ctx->drain_active = true;
+			req->flags |= REQ_F_FORCE_ASYNC;
+			if (link->head)
+				link->head->flags |= IOSQE_IO_DRAIN | REQ_F_FORCE_ASYNC;
+		}
+	}
+	if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
+		if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
+			return -EACCES;
+		/* knock it to the slow queue path, will be drained there */
+		if (ctx->drain_active)
+			req->flags |= REQ_F_FORCE_ASYNC;
+		/* if there is no link, we're at "next" request and need to drain */
+		if (unlikely(ctx->drain_next) && !ctx->submit_state.link.head) {
+			ctx->drain_next = false;
+			ctx->drain_active = true;
+			req->flags |= REQ_F_FORCE_ASYNC | IOSQE_IO_DRAIN;
+		}
 	}
-	if (!io_check_restriction(ctx, req, sqe_flags))
-		return -EACCES;
 
 	personality = READ_ONCE(sqe->personality);
 	if (personality) {
-- 
2.33.0


