* [PATCH 0/5] bunch of random cleanup for-next
From: Pavel Begunkov @ 2021-10-06 15:06 UTC
  To: io-uring; +Cc: Jens Axboe, asml.silence

Mix of small optimisations and some additions for safety

Pavel Begunkov (5):
  io_uring: optimise plugging
  io_uring: safer fallback_work free
  io_uring: reshuffle io_submit_state bits
  io_uring: optimise out req->opcode reloading
  io_uring: remove extra io_ring_exit_work wake up

 fs/io_uring.c | 72 +++++++++++++++++++++++++--------------------------
 1 file changed, 35 insertions(+), 37 deletions(-)

-- 
2.33.0


* [PATCH 1/5] io_uring: optimise plugging
From: Pavel Begunkov @ 2021-10-06 15:06 UTC
  To: io-uring; +Cc: Jens Axboe, asml.silence

Plugging is only needed for requests that also need a file, so hide
the plugging under a ->needs_file check. Also, place the ->needs_file
and ->plug bits into the same byte of io_op_defs; this may matter to
compilers, e.g. only with this change did one tested compiler optimise
two memory testb instructions into a mov with two register testb
instructions.
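
To see the layout effect outside the kernel, here is a minimal
compilable sketch; the struct and all names are simplified stand-ins,
not the io_uring definitions:

#include <stdio.h>

/* Simplified stand-in for io_op_def: both flags share byte 0. */
struct op_def {
	unsigned needs_file : 1;	/* bit 0 */
	unsigned plug       : 1;	/* bit 1 */
};

static const struct op_def defs[] = {
	{ .needs_file = 1, .plug = 1 },	/* read/write-like op */
	{ .needs_file = 0, .plug = 0 },	/* nop-like op */
};

int main(void)
{
	for (unsigned i = 0; i < 2; i++) {
		/*
		 * Back-to-back checks of two bits in the same byte let a
		 * compiler load that byte once (one mov) and test it twice
		 * in a register, instead of issuing two memory testb.
		 */
		if (defs[i].needs_file && defs[i].plug)
			printf("op %u needs a file and plugs\n", i);
	}
	return 0;
}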

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 62dc128e9b6b..fabb3580e27c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -894,12 +894,12 @@ struct io_defer_entry {
 struct io_op_def {
 	/* needs req->file assigned */
 	unsigned		needs_file : 1;
+	/* should block plug */
+	unsigned		plug : 1;
 	/* hash wq insertion if file is a regular file */
 	unsigned		hash_reg_file : 1;
 	/* unbound wq insertion if file is a non-regular file */
 	unsigned		unbound_nonreg_file : 1;
-	/* opcode is not supported by this kernel */
-	unsigned		not_supported : 1;
 	/* set if opcode supports polled "wait" */
 	unsigned		pollin : 1;
 	unsigned		pollout : 1;
@@ -907,8 +907,8 @@ struct io_op_def {
 	unsigned		buffer_select : 1;
 	/* do prep async if is going to be punted */
 	unsigned		needs_async_setup : 1;
-	/* should block plug */
-	unsigned		plug : 1;
+	/* opcode is not supported by this kernel */
+	unsigned		not_supported : 1;
 	/* size of async data needed, if any */
 	unsigned short		async_size;
 };
@@ -6981,7 +6981,6 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		       const struct io_uring_sqe *sqe)
 	__must_hold(&ctx->uring_lock)
 {
-	struct io_submit_state *state;
 	unsigned int sqe_flags;
 	int personality;
 
@@ -7028,19 +7027,20 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		get_cred(req->creds);
 		req->flags |= REQ_F_CREDS;
 	}
-	state = &ctx->submit_state;
-
-	/*
-	 * Plug now if we have more than 1 IO left after this, and the target
-	 * is potentially a read/write to block based storage.
-	 */
-	if (state->need_plug && io_op_defs[req->opcode].plug) {
-		blk_start_plug(&state->plug);
-		state->plug_started = true;
-		state->need_plug = false;
-	}
 
 	if (io_op_defs[req->opcode].needs_file) {
+		struct io_submit_state *state = &ctx->submit_state;
+
+		/*
+		 * Plug now if we have more than 2 IO left after this, and the
+		 * target is potentially a read/write to block based storage.
+		 */
+		if (state->need_plug && io_op_defs[req->opcode].plug) {
+			state->plug_started = true;
+			state->need_plug = false;
+			blk_start_plug(&state->plug);
+		}
+
 		req->file = io_file_get(ctx, req, READ_ONCE(sqe->fd),
 					(sqe_flags & IOSQE_FIXED_FILE));
 		if (unlikely(!req->file))
-- 
2.33.0


* [PATCH 2/5] io_uring: safer fallback_work free
From: Pavel Begunkov @ 2021-10-06 15:06 UTC
  To: io-uring; +Cc: Jens Axboe, asml.silence

Add extra wq flushing for fallback_work; it's not necessary now, but
it is safer should the invariants of io_fallback_req_func() ever
change.
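
The pattern this hardens is flush-before-free. A kernel-style sketch
with made-up names (not the io_uring code):

#include <linux/workqueue.h>
#include <linux/slab.h>

struct foo_ctx {
	struct delayed_work fallback_work;
	/* ... */
};

static void foo_ctx_free(struct foo_ctx *ctx)
{
	/*
	 * Wait for any queued or still-running work before freeing the
	 * memory it operates on; a no-op if nothing is pending.
	 */
	flush_delayed_work(&ctx->fallback_work);
	kfree(ctx);
}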

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fabb3580e27c..b61ffb1e7990 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1262,7 +1262,6 @@ static __cold void io_fallback_req_func(struct work_struct *work)
 		mutex_unlock(&ctx->uring_lock);
 	}
 	percpu_ref_put(&ctx->refs);
-
 }
 
 static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
@@ -9215,6 +9214,7 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 	if (ctx->rsrc_backup_node)
 		io_rsrc_node_destroy(ctx->rsrc_backup_node);
 	flush_delayed_work(&ctx->rsrc_put_work);
+	flush_delayed_work(&ctx->fallback_work);
 
 	WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
 	WARN_ON_ONCE(!llist_empty(&ctx->rsrc_put_llist));
-- 
2.33.0


* [PATCH 3/5] io_uring: reshuffle io_submit_state bits
From: Pavel Begunkov @ 2021-10-06 15:06 UTC
  To: io-uring; +Cc: Jens Axboe, asml.silence

struct io_submit_state's ->free_list and ->link are hotter and smaller
than ->plug, so place them first.
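
A toy userspace demonstration of why the order matters; the sizes and
names are invented, with struct blk_plug modelled as an opaque blob:

#include <stdio.h>
#include <stddef.h>

struct big_blob { char pad[128]; };	/* stand-in for struct blk_plug */

struct cold_first {
	struct big_blob plug;	/* pushes the hot fields past 128 bytes */
	void *free_list;
	void *link;
};

struct hot_first {
	void *free_list;	/* hot, small fields land in cache line 0 */
	void *link;
	struct big_blob plug;
};

int main(void)
{
	printf("cold_first: free_list at offset %zu\n",
	       offsetof(struct cold_first, free_list));
	printf("hot_first:  free_list at offset %zu\n",
	       offsetof(struct hot_first, free_list));
	return 0;
}

With 64-byte cache lines, hot_first keeps ->free_list and ->link in
the first line of the struct no matter how large the plug is.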

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b61ffb1e7990..3f8bfa309b16 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -308,19 +308,15 @@ struct io_submit_link {
 };
 
 struct io_submit_state {
-	struct blk_plug		plug;
+	/* inline/task_work completion list, under ->uring_lock */
+	struct io_wq_work_node	free_list;
+	/* batch completion logic */
+	struct io_wq_work_list	compl_reqs;
 	struct io_submit_link	link;
 
 	bool			plug_started;
 	bool			need_plug;
-
-	/*
-	 * Batch completion logic
-	 */
-	struct io_wq_work_list	compl_reqs;
-
-	/* inline/task_work completion list, under ->uring_lock */
-	struct io_wq_work_node	free_list;
+	struct blk_plug		plug;
 };
 
 struct io_ring_ctx {
-- 
2.33.0


* [PATCH 4/5] io_uring: optimise out req->opcode reloading
From: Pavel Begunkov @ 2021-10-06 15:06 UTC
  To: io-uring; +Cc: Jens Axboe, asml.silence

Looking at the assembly, the compiler decided to reload req->opcode for
the io_op_defs[opcode].needs_file check instead of using the copy it
already had in a register, so store the opcode in a temporary variable
and let the reload be optimised out. Also move the personality block
later; that's better for register spilling/etc., as it only depends on
@sqe, which we're keeping around anyway.

While at it, zero req->opcode if it is over IORING_OP_LAST; that's not
a problem at the moment, but it is safer.
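
A userspace sketch of the reload problem; volatile stands in for
READ_ONCE() and all names are invented:

#include <stdint.h>

struct req { uint8_t opcode; };
struct op_def { unsigned needs_file : 1; };

static const struct op_def defs[256];

/*
 * The compiler may re-read r->opcode from memory for the table index,
 * because stores through other pointers in between could alias it.
 */
int init_reloading(struct req *r, const volatile uint8_t *sqe_opcode)
{
	r->opcode = *sqe_opcode;
	return defs[r->opcode].needs_file;
}

/* Keeping a local copy pins the value in a register. */
int init_cached(struct req *r, const volatile uint8_t *sqe_opcode)
{
	uint8_t opcode = *sqe_opcode;

	r->opcode = opcode;
	return defs[opcode].needs_file;
}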

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3f8bfa309b16..1f4aa2cdaaa2 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6978,9 +6978,10 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 {
 	unsigned int sqe_flags;
 	int personality;
+	u8 opcode;
 
 	/* req is partially pre-initialised, see io_preinit_req() */
-	req->opcode = READ_ONCE(sqe->opcode);
+	req->opcode = opcode = READ_ONCE(sqe->opcode);
 	/* same numerical values with corresponding REQ_F_*, safe to copy */
 	req->flags = sqe_flags = READ_ONCE(sqe->flags);
 	req->user_data = READ_ONCE(sqe->user_data);
@@ -6988,14 +6989,16 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	req->fixed_rsrc_refs = NULL;
 	req->task = current;
 
-	if (unlikely(req->opcode >= IORING_OP_LAST))
+	if (unlikely(opcode >= IORING_OP_LAST)) {
+		req->opcode = 0;
 		return -EINVAL;
+	}
 	if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
 		/* enforce forwards compatibility on users */
 		if (sqe_flags & ~SQE_VALID_FLAGS)
 			return -EINVAL;
 		if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
-		    !io_op_defs[req->opcode].buffer_select)
+		    !io_op_defs[opcode].buffer_select)
 			return -EOPNOTSUPP;
 		if (sqe_flags & IOSQE_IO_DRAIN)
 			io_init_req_drain(req);
@@ -7014,23 +7017,14 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		}
 	}
 
-	personality = READ_ONCE(sqe->personality);
-	if (personality) {
-		req->creds = xa_load(&ctx->personalities, personality);
-		if (!req->creds)
-			return -EINVAL;
-		get_cred(req->creds);
-		req->flags |= REQ_F_CREDS;
-	}
-
-	if (io_op_defs[req->opcode].needs_file) {
+	if (io_op_defs[opcode].needs_file) {
 		struct io_submit_state *state = &ctx->submit_state;
 
 		/*
 		 * Plug now if we have more than 2 IO left after this, and the
 		 * target is potentially a read/write to block based storage.
 		 */
-		if (state->need_plug && io_op_defs[req->opcode].plug) {
+		if (state->need_plug && io_op_defs[opcode].plug) {
 			state->plug_started = true;
 			state->need_plug = false;
 			blk_start_plug(&state->plug);
@@ -7042,6 +7036,15 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			return -EBADF;
 	}
 
+	personality = READ_ONCE(sqe->personality);
+	if (personality) {
+		req->creds = xa_load(&ctx->personalities, personality);
+		if (!req->creds)
+			return -EINVAL;
+		get_cred(req->creds);
+		req->flags |= REQ_F_CREDS;
+	}
+
 	return io_req_prep(req, sqe);
 }
 
-- 
2.33.0


* [PATCH 5/5] io_uring: remove extra io_ring_exit_work wake up
From: Pavel Begunkov @ 2021-10-06 15:06 UTC
  To: io-uring; +Cc: Jens Axboe, asml.silence

task_work_add() already takes care of waking up the thread, so remove
the redundant wake_up_process() call.
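
For reference, the TWA_SIGNAL notify path in task_work_add() already
kicks the target; roughly, paraphrasing kernel/task_work.c rather than
quoting it verbatim:

	/* inside task_work_add(), after queueing the callback: */
	switch (notify) {
	case TWA_SIGNAL:
		/*
		 * Sets TIF_NOTIFY_SIGNAL and wakes or kicks the task,
		 * which makes a following wake_up_process() redundant.
		 */
		set_notify_signal(task);
		break;
	/* ... */
	}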

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 1f4aa2cdaaa2..73135c5c6168 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -9370,7 +9370,6 @@ static __cold void io_ring_exit_work(struct work_struct *work)
 		ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
 		if (WARN_ON_ONCE(ret))
 			continue;
-		wake_up_process(node->task);
 
 		mutex_unlock(&ctx->uring_lock);
 		wait_for_completion(&exit.completion);
-- 
2.33.0


* Re: [PATCH 4/5] io_uring: optimise out req->opcode reloading
From: Jens Axboe @ 2021-10-06 16:48 UTC
  To: Pavel Begunkov, io-uring

On 10/6/21 9:06 AM, Pavel Begunkov wrote:
> Looking at the assembly, the compiler decided to reload req->opcode for
> the io_op_defs[opcode].needs_file check instead of using the copy it
> already had in a register, so store the opcode in a temporary variable
> and let the reload be optimised out. Also move the personality block
> later; that's better for register spilling/etc., as it only depends on
> @sqe, which we're keeping around anyway.
> 
> While at it, zero req->opcode if it is over IORING_OP_LAST; that's not
> a problem at the moment, but it is safer.

That had me a bit worried, but you mean it's reloading req->opcode,
not sqe->opcode. Phew.

-- 
Jens Axboe

