IO-Uring Archive on lore.kernel.org
* [PATCH 0/3] io_uring: clean wq path
@ 2020-02-05 19:07 Pavel Begunkov
  2020-02-05 19:07 ` [PATCH 1/3] io_uring: pass sqe for link head Pavel Begunkov
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-05 19:07 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel

This is the first series of shaving some overhead off wq-offloading.
The 1st patch removes extra allocations, and the 3rd removes req->refs abuse.

There are plenty of opportunities to leak memory in ways similar to
the one mentioned in [PATCH 1/3], and I'm working on a generic fix,
as I need it to close holes for the pending splice(2) patches.

The untold idea behind [PATCH 3/3] is to get rid of referencing even
further. As the submission ref always pins the request, there is no
need for the second (i.e. completion) ref. Even more, with a bit of
reshuffling we can get rid of req->refs altogether by using a
non-atomic ref under @compl_lock, which will usually be bundled with
fill_event(). I'll play with it soon. Any ideas or concerns regarding it?
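
The non-atomic-ref idea above can be sketched in plain C. This is a toy model, not the kernel code: struct req, req_get()/req_put() and the lock stubs are illustrative stand-ins, and the `completed` flag stands in for posting a completion via fill_event().

```c
#include <assert.h>
#include <stdbool.h>

/* illustrative stand-in for a request; the kernel uses struct io_kiocb */
struct req {
	int refs;		/* plain int instead of an atomic refcount_t */
	bool completed;
};

/* stand-ins for taking/releasing @compl_lock (a spinlock in the kernel) */
static void compl_lock(void)   { }
static void compl_unlock(void) { }

static void req_get(struct req *r)
{
	compl_lock();
	r->refs++;		/* non-atomic: the lock serialises all access */
	compl_unlock();
}

/* the final put posts the completion in the same locked section */
static bool req_put(struct req *r)
{
	bool last;

	compl_lock();
	last = (--r->refs == 0);
	if (last)
		r->completed = true;	/* stands in for fill_event() */
	compl_unlock();
	return last;
}
```

The point of the bundling is that the common-case put already holds the lock that completion posting needs, so no separate atomic refcount is required.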

Regarding [PATCH 3/3], is there a better way to do it for io_poll_add()?


Pavel Begunkov (3):
  io_uring: pass sqe for link head
  io_uring: deduce force_nonblock in io_issue_sqe()
  io_uring: pass submission ref to async

 fs/io_uring.c | 60 +++++++++++++++++++++++++++++----------------------
 1 file changed, 34 insertions(+), 26 deletions(-)

-- 
2.24.0


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/3] io_uring: pass sqe for link head
  2020-02-05 19:07 [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
@ 2020-02-05 19:07 ` Pavel Begunkov
  2020-02-05 21:39   ` Pavel Begunkov
  2020-02-05 19:07 ` [PATCH 2/3] io_uring: deduce force_nonblock in io_issue_sqe() Pavel Begunkov
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-05 19:07 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel

Save the sqe for the head of a link, so it neither goes through the
switch in io_req_defer_prep() nor allocates an async context in
advance.

Also, it fixes a potential memleak caused by double-preparing head
requests, e.g. prep in io_submit_sqe() and then again in
io_req_defer(), which leaks the iovec for vectored reads/writes.
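
The deferral pattern of this patch can be sketched as a toy model (all names are hypothetical, not the io_uring ones; `prep_count` stands in for io_req_defer_prep() and the allocations it performs):

```c
#include <assert.h>
#include <stddef.h>

/* hypothetical stand-ins; not the io_uring types */
struct sqe { int opcode; };
struct req {
	const struct sqe *link_sqe;	/* saved sqe of the link head */
	int prep_count;			/* times "prep" (iovec alloc etc.) ran */
};

/* submit step: the old code prepped here; now we only remember the sqe */
static void submit_link_head(struct req *head, const struct sqe *sqe)
{
	head->link_sqe = sqe;
}

/* queue step: prepare exactly once, when the whole link is enqueued */
static void queue_link_head(struct req *head)
{
	if (head->link_sqe) {
		head->prep_count++;	/* stands in for io_req_defer_prep() */
		head->link_sqe = NULL;	/* consumed: cannot double-prep */
	}
}
```

By moving the single prep to queue time, the submit/defer paths can no longer each run their own prep for the same head, which is the double-prepare leak the message describes.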

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f00c2c9c67c0..e18056af5672 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4721,20 +4721,22 @@ static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	}
 }
 
-static inline void io_queue_link_head(struct io_kiocb *req)
+static inline void io_queue_link_head(struct io_kiocb *req,
+				      const struct io_uring_sqe *sqe)
 {
 	if (unlikely(req->flags & REQ_F_FAIL_LINK)) {
 		io_cqring_add_event(req, -ECANCELED);
 		io_double_put_req(req);
 	} else
-		io_queue_sqe(req, NULL);
+		io_queue_sqe(req, sqe);
 }
 
 #define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK|	\
 				IOSQE_IO_HARDLINK | IOSQE_ASYNC)
 
 static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
-			  struct io_submit_state *state, struct io_kiocb **link)
+			  struct io_submit_state *state, struct io_kiocb **link,
+			  const struct io_uring_sqe **link_sqe)
 {
 	const struct cred *old_creds = NULL;
 	struct io_ring_ctx *ctx = req->ctx;
@@ -4812,7 +4814,7 @@ static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 
 		/* last request of a link, enqueue the link */
 		if (!(sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK))) {
-			io_queue_link_head(head);
+			io_queue_link_head(head, *link_sqe);
 			*link = NULL;
 		}
 	} else {
@@ -4823,10 +4825,8 @@ static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 		if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) {
 			req->flags |= REQ_F_LINK;
 			INIT_LIST_HEAD(&req->link_list);
-			ret = io_req_defer_prep(req, sqe);
-			if (ret)
-				req->flags |= REQ_F_FAIL_LINK;
 			*link = req;
+			*link_sqe = sqe;
 		} else {
 			io_queue_sqe(req, sqe);
 		}
@@ -4924,6 +4924,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 	struct io_kiocb *link = NULL;
 	int i, submitted = 0;
 	bool mm_fault = false;
+	const struct io_uring_sqe *link_sqe = NULL;
 
 	/* if we have a backlog and couldn't flush it all, return BUSY */
 	if (test_bit(0, &ctx->sq_check_overflow)) {
@@ -4983,7 +4984,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		req->needs_fixed_file = async;
 		trace_io_uring_submit_sqe(ctx, req->opcode, req->user_data,
 						true, async);
-		if (!io_submit_sqe(req, sqe, statep, &link))
+		if (!io_submit_sqe(req, sqe, statep, &link, &link_sqe))
 			break;
 	}
 
@@ -4993,7 +4994,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		percpu_ref_put_many(&ctx->refs, nr - ref_used);
 	}
 	if (link)
-		io_queue_link_head(link);
+		io_queue_link_head(link, link_sqe);
 	if (statep)
 		io_submit_state_end(&state);
 
-- 
2.24.0



* [PATCH 2/3] io_uring: deduce force_nonblock in io_issue_sqe()
  2020-02-05 19:07 [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
  2020-02-05 19:07 ` [PATCH 1/3] io_uring: pass sqe for link head Pavel Begunkov
@ 2020-02-05 19:07 ` Pavel Begunkov
  2020-02-05 19:07 ` [PATCH 3/3] io_uring: pass submission ref to async Pavel Begunkov
  2020-02-05 22:29 ` [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
  3 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-05 19:07 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel

io_issue_sqe() is passed @sqe == NULL iff it's called from wq context.
Don't pass @force_nonblock explicitly, but deduce it from @sqe.
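
The pattern here can be shown with a minimal stand-alone sketch; struct sqe and issue() are hypothetical stand-ins for the kernel types, and the invariant relied on is the one the message states: only the io-wq worker (which may block) passes a NULL sqe.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* stand-in for struct io_uring_sqe; the real layout doesn't matter here */
struct sqe { int opcode; };

/*
 * Instead of threading a boolean through every caller, derive it from
 * state the callee already sees: a non-NULL sqe means the inline
 * submission path, which must not block.
 */
static bool issue(const struct sqe *sqe)
{
	bool force_nonblock = (sqe != NULL);

	return force_nonblock;
}
```

Collapsing a parameter into an existing invariant like this only works while the invariant holds for every caller, which is what the patch asserts.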

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index e18056af5672..b24d3b975344 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4207,10 +4207,11 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 }
 
 static int io_issue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
-			struct io_kiocb **nxt, bool force_nonblock)
+			struct io_kiocb **nxt)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	int ret;
+	bool force_nonblock = (sqe != NULL);
 
 	switch (req->opcode) {
 	case IORING_OP_NOP:
@@ -4448,7 +4449,7 @@ static void io_wq_submit_work(struct io_wq_work **workptr)
 		req->has_user = (work->flags & IO_WQ_WORK_HAS_MM) != 0;
 		req->in_async = true;
 		do {
-			ret = io_issue_sqe(req, NULL, &nxt, false);
+			ret = io_issue_sqe(req, NULL, &nxt);
 			/*
 			 * We can get EAGAIN for polled IO even though we're
 			 * forcing a sync submission from here, since we can't
@@ -4643,7 +4644,7 @@ static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 again:
 	linked_timeout = io_prep_linked_timeout(req);
 
-	ret = io_issue_sqe(req, sqe, &nxt, true);
+	ret = io_issue_sqe(req, sqe, &nxt);
 
 	/*
 	 * We async punt it if the file wasn't marked NOWAIT, or if the file
-- 
2.24.0



* [PATCH 3/3] io_uring: pass submission ref to async
  2020-02-05 19:07 [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
  2020-02-05 19:07 ` [PATCH 1/3] io_uring: pass sqe for link head Pavel Begunkov
  2020-02-05 19:07 ` [PATCH 2/3] io_uring: deduce force_nonblock in io_issue_sqe() Pavel Begunkov
@ 2020-02-05 19:07 ` Pavel Begunkov
  2020-02-05 22:29 ` [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
  3 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-05 19:07 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel

Whenever going async, __io_queue_sqe() won't put the submission ref,
so it's dropped either by io_wq_submit_work() in a generic way (1), or
compensated for in an opcode handler (2). By removing (2) in favour of
(1), requests in async are always pinned by the submission ref, so the
extra referencing in io_{get,put}_work() can be removed.

The patch makes the flow as follows: if going async, pass the
submission ref into it; when async handling is done, put it in
io_put_work(). The benefit is killing one extra get/put pair, plus
further, though still vague, optimisation prospects.

- remove referencing from io_{get,put}_work()
- remove (2) from opcodes that specialise a custom @work->func
- refcount_inc() in io_poll_add() to restore the submission ref
- put a ref in io_uring_cancel_files(), as io_put_work() won't be
  called on cancel
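
The bookkeeping this patch changes can be modelled with a toy counter. This is illustrative only, not the kernel code: struct kiocb, ref_get() and ref_put() stand in for refcount_inc()/io_put_req(), and the two functions count how many atomic ref operations each scheme performs per offloaded request.

```c
#include <assert.h>

struct kiocb {
	int refs;	/* a request starts with 2: submission + completion */
	int ref_ops;	/* how many atomic ref operations were issued */
};

static void ref_get(struct kiocb *r) { r->refs++; r->ref_ops++; }
static void ref_put(struct kiocb *r) { r->refs--; r->ref_ops++; }

/* old flow: extra ref taken for the wq, submission ref dropped early */
static void offload_old(struct kiocb *r)
{
	ref_get(r);	/* io_get_work() pins the request for the worker */
	ref_put(r);	/* io_wq_submit_work() drops the submission ref */
	ref_put(r);	/* io_put_work() drops the worker's extra ref */
}

/* new flow: the submission ref itself pins the request across async */
static void offload_new(struct kiocb *r)
{
	ref_put(r);	/* io_put_work() now drops the submission ref */
}
```

Both flows end with the same refcount, but the new one issues two fewer atomic operations per offloaded request, which is the get/put pair the message says is killed.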

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b24d3b975344..00a682ec2efe 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2466,7 +2466,6 @@ static int io_fsync(struct io_kiocb *req, struct io_kiocb **nxt,
 
 	/* fsync always requires a blocking context */
 	if (force_nonblock) {
-		io_put_req(req);
 		req->work.func = io_fsync_finish;
 		return -EAGAIN;
 	}
@@ -2513,7 +2512,6 @@ static int io_fallocate(struct io_kiocb *req, struct io_kiocb **nxt,
 
 	/* fallocate always requiring blocking context */
 	if (force_nonblock) {
-		io_put_req(req);
 		req->work.func = io_fallocate_finish;
 		return -EAGAIN;
 	}
@@ -2894,6 +2892,13 @@ static int io_close(struct io_kiocb *req, struct io_kiocb **nxt,
 
 eagain:
 	req->work.func = io_close_finish;
+
+	/*
+	 * As return 0, submission ref will be put, but we need it for
+	 * async context. Grab one.
+	 */
+	refcount_inc(&req->refs);
+
 	/*
 	 * Do manual async queue here to avoid grabbing files - we don't
 	 * need the files, and it'll cause io_close_finish() to close
@@ -2947,7 +2952,6 @@ static int io_sync_file_range(struct io_kiocb *req, struct io_kiocb **nxt,
 
 	/* sync_file_range always requires a blocking context */
 	if (force_nonblock) {
-		io_put_req(req);
 		req->work.func = io_sync_file_range_finish;
 		return -EAGAIN;
 	}
@@ -3322,7 +3326,6 @@ static int io_accept(struct io_kiocb *req, struct io_kiocb **nxt,
 	ret = __io_accept(req, nxt, force_nonblock);
 	if (ret == -EAGAIN && force_nonblock) {
 		req->work.func = io_accept_finish;
-		io_put_req(req);
 		return -EAGAIN;
 	}
 	return 0;
@@ -3409,6 +3412,8 @@ static void io_poll_remove_one(struct io_kiocb *req)
 	WRITE_ONCE(poll->canceled, true);
 	if (!list_empty(&poll->wait.entry)) {
 		list_del_init(&poll->wait.entry);
+		/* compensate submission ref */
+		refcount_inc(&req->refs);
 		io_queue_async_work(req);
 	}
 	spin_unlock(&poll->head->lock);
@@ -3634,8 +3639,11 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
 				req->work.func = io_poll_flush;
 		}
 	}
-	if (req)
+	if (req) {
+		/* compensate submission ref */
+		refcount_inc(&req->refs);
 		io_queue_async_work(req);
+	}
 
 	return 1;
 }
@@ -4461,9 +4469,6 @@ static void io_wq_submit_work(struct io_wq_work **workptr)
 		} while (1);
 	}
 
-	/* drop submission reference */
-	io_put_req(req);
-
 	if (ret) {
 		req_set_fail_links(req);
 		io_cqring_add_event(req, ret);
@@ -5826,15 +5831,12 @@ static void io_put_work(struct io_wq_work *work)
 {
 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
 
+	/* drop submission ref */
 	io_put_req(req);
 }
 
 static void io_get_work(struct io_wq_work *work)
-{
-	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
-
-	refcount_inc(&req->refs);
-}
+{}
 
 static int io_init_wq_offload(struct io_ring_ctx *ctx,
 			      struct io_uring_params *p)
@@ -6358,6 +6360,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 
 	while (!list_empty_careful(&ctx->inflight_list)) {
 		struct io_kiocb *cancel_req = NULL;
+		enum io_wq_cancel ret;
 
 		spin_lock_irq(&ctx->inflight_lock);
 		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
@@ -6378,7 +6381,10 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 		if (!cancel_req)
 			break;
 
-		io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
+		ret = io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
+		/* put submission ref instead of never-called io_put_work() */
+		if (ret != IO_WQ_CANCEL_RUNNING)
+			io_put_req(cancel_req);
 		io_put_req(cancel_req);
 		schedule();
 	}
-- 
2.24.0



* Re: [PATCH 1/3] io_uring: pass sqe for link head
  2020-02-05 19:07 ` [PATCH 1/3] io_uring: pass sqe for link head Pavel Begunkov
@ 2020-02-05 21:39   ` Pavel Begunkov
  2020-02-05 21:43     ` Pavel Begunkov
  0 siblings, 1 reply; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-05 21:39 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel


On 05/02/2020 22:07, Pavel Begunkov wrote:
> Save the sqe for the head of a link, so it neither goes through the
> switch in io_req_defer_prep() nor allocates an async context in
> advance.
> 
> Also, it fixes a potential memleak caused by double-preparing head
> requests, e.g. prep in io_submit_sqe() and then again in
> io_req_defer(), which leaks the iovec for vectored reads/writes.

Looking through -rc1, I remembered that Jens has already fixed this,
so that part may be struck out.


> 
> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
> ---
>  fs/io_uring.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index f00c2c9c67c0..e18056af5672 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -4721,20 +4721,22 @@ static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>  	}
>  }
>  
> -static inline void io_queue_link_head(struct io_kiocb *req)
> +static inline void io_queue_link_head(struct io_kiocb *req,
> +				      const struct io_uring_sqe *sqe)
>  {
>  	if (unlikely(req->flags & REQ_F_FAIL_LINK)) {
>  		io_cqring_add_event(req, -ECANCELED);
>  		io_double_put_req(req);
>  	} else
> -		io_queue_sqe(req, NULL);
> +		io_queue_sqe(req, sqe);
>  }
>  
>  #define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK|	\
>  				IOSQE_IO_HARDLINK | IOSQE_ASYNC)
>  
>  static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
> -			  struct io_submit_state *state, struct io_kiocb **link)
> +			  struct io_submit_state *state, struct io_kiocb **link,
> +			  const struct io_uring_sqe **link_sqe)
>  {
>  	const struct cred *old_creds = NULL;
>  	struct io_ring_ctx *ctx = req->ctx;
> @@ -4812,7 +4814,7 @@ static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>  
>  		/* last request of a link, enqueue the link */
>  		if (!(sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK))) {
> -			io_queue_link_head(head);
> +			io_queue_link_head(head, *link_sqe);
>  			*link = NULL;
>  		}
>  	} else {
> @@ -4823,10 +4825,8 @@ static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>  		if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) {
>  			req->flags |= REQ_F_LINK;
>  			INIT_LIST_HEAD(&req->link_list);
> -			ret = io_req_defer_prep(req, sqe);
> -			if (ret)
> -				req->flags |= REQ_F_FAIL_LINK;
>  			*link = req;
> +			*link_sqe = sqe;
>  		} else {
>  			io_queue_sqe(req, sqe);
>  		}
> @@ -4924,6 +4924,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>  	struct io_kiocb *link = NULL;
>  	int i, submitted = 0;
>  	bool mm_fault = false;
> +	const struct io_uring_sqe *link_sqe = NULL;
>  
>  	/* if we have a backlog and couldn't flush it all, return BUSY */
>  	if (test_bit(0, &ctx->sq_check_overflow)) {
> @@ -4983,7 +4984,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>  		req->needs_fixed_file = async;
>  		trace_io_uring_submit_sqe(ctx, req->opcode, req->user_data,
>  						true, async);
> -		if (!io_submit_sqe(req, sqe, statep, &link))
> +		if (!io_submit_sqe(req, sqe, statep, &link, &link_sqe))
>  			break;
>  	}
>  
> @@ -4993,7 +4994,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>  		percpu_ref_put_many(&ctx->refs, nr - ref_used);
>  	}
>  	if (link)
> -		io_queue_link_head(link);
> +		io_queue_link_head(link, link_sqe);
>  	if (statep)
>  		io_submit_state_end(&state);
>  
> 

-- 
Pavel Begunkov



* Re: [PATCH 1/3] io_uring: pass sqe for link head
  2020-02-05 21:39   ` Pavel Begunkov
@ 2020-02-05 21:43     ` Pavel Begunkov
  0 siblings, 0 replies; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-05 21:43 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel


On 06/02/2020 00:39, Pavel Begunkov wrote:
> On 05/02/2020 22:07, Pavel Begunkov wrote:
>> Save the sqe for the head of a link, so it neither goes through the
>> switch in io_req_defer_prep() nor allocates an async context in
>> advance.
>>
>> Also, it fixes a potential memleak caused by double-preparing head
>> requests, e.g. prep in io_submit_sqe() and then again in
>> io_req_defer(), which leaks the iovec for vectored reads/writes.
> 
> Looking through -rc1, I remembered that Jens has already fixed this,
> so that part may be struck out.

Just to clarify, I was talking about removing the last argument in the patch
message.

> 
> 
>>
>> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
>> ---
>>  fs/io_uring.c | 19 ++++++++++---------
>>  1 file changed, 10 insertions(+), 9 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index f00c2c9c67c0..e18056af5672 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -4721,20 +4721,22 @@ static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>>  	}
>>  }
>>  
>> -static inline void io_queue_link_head(struct io_kiocb *req)
>> +static inline void io_queue_link_head(struct io_kiocb *req,
>> +				      const struct io_uring_sqe *sqe)
>>  {
>>  	if (unlikely(req->flags & REQ_F_FAIL_LINK)) {
>>  		io_cqring_add_event(req, -ECANCELED);
>>  		io_double_put_req(req);
>>  	} else
>> -		io_queue_sqe(req, NULL);
>> +		io_queue_sqe(req, sqe);
>>  }
>>  
>>  #define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK|	\
>>  				IOSQE_IO_HARDLINK | IOSQE_ASYNC)
>>  
>>  static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>> -			  struct io_submit_state *state, struct io_kiocb **link)
>> +			  struct io_submit_state *state, struct io_kiocb **link,
>> +			  const struct io_uring_sqe **link_sqe)
>>  {
>>  	const struct cred *old_creds = NULL;
>>  	struct io_ring_ctx *ctx = req->ctx;
>> @@ -4812,7 +4814,7 @@ static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>>  
>>  		/* last request of a link, enqueue the link */
>>  		if (!(sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK))) {
>> -			io_queue_link_head(head);
>> +			io_queue_link_head(head, *link_sqe);
>>  			*link = NULL;
>>  		}
>>  	} else {
>> @@ -4823,10 +4825,8 @@ static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>>  		if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) {
>>  			req->flags |= REQ_F_LINK;
>>  			INIT_LIST_HEAD(&req->link_list);
>> -			ret = io_req_defer_prep(req, sqe);
>> -			if (ret)
>> -				req->flags |= REQ_F_FAIL_LINK;
>>  			*link = req;
>> +			*link_sqe = sqe;
>>  		} else {
>>  			io_queue_sqe(req, sqe);
>>  		}
>> @@ -4924,6 +4924,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>>  	struct io_kiocb *link = NULL;
>>  	int i, submitted = 0;
>>  	bool mm_fault = false;
>> +	const struct io_uring_sqe *link_sqe = NULL;
>>  
>>  	/* if we have a backlog and couldn't flush it all, return BUSY */
>>  	if (test_bit(0, &ctx->sq_check_overflow)) {
>> @@ -4983,7 +4984,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>>  		req->needs_fixed_file = async;
>>  		trace_io_uring_submit_sqe(ctx, req->opcode, req->user_data,
>>  						true, async);
>> -		if (!io_submit_sqe(req, sqe, statep, &link))
>> +		if (!io_submit_sqe(req, sqe, statep, &link, &link_sqe))
>>  			break;
>>  	}
>>  
>> @@ -4993,7 +4994,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>>  		percpu_ref_put_many(&ctx->refs, nr - ref_used);
>>  	}
>>  	if (link)
>> -		io_queue_link_head(link);
>> +		io_queue_link_head(link, link_sqe);
>>  	if (statep)
>>  		io_submit_state_end(&state);
>>  
>>
> 

-- 
Pavel Begunkov



* Re: [PATCH 0/3] io_uring: clean wq path
  2020-02-05 19:07 [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
                   ` (2 preceding siblings ...)
  2020-02-05 19:07 ` [PATCH 3/3] io_uring: pass submission ref to async Pavel Begunkov
@ 2020-02-05 22:29 ` Pavel Begunkov
  2020-02-06  2:50   ` Jens Axboe
  3 siblings, 1 reply; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-05 22:29 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel


On 05/02/2020 22:07, Pavel Begunkov wrote:
> This is the first series of shaving some overhead off wq-offloading.
> The 1st patch removes extra allocations, and the 3rd removes req->refs abuse.

Rechecked a couple of assumptions, this patchset is messed up.
Drop it for now.

> There are plenty of opportunities to leak memory in ways similar to
> the one mentioned in [PATCH 1/3], and I'm working on a generic fix,
> as I need it to close holes for the pending splice(2) patches.
> 
> The untold idea behind [PATCH 3/3] is to get rid of referencing even
> further. As the submission ref always pins the request, there is no
> need for the second (i.e. completion) ref. Even more, with a bit of
> reshuffling we can get rid of req->refs altogether by using a
> non-atomic ref under @compl_lock, which will usually be bundled with
> fill_event(). I'll play with it soon. Any ideas or concerns regarding it?
> 
> Regarding [PATCH 3/3], is there a better way to do it for io_poll_add()?
> 
> 
> Pavel Begunkov (3):
>   io_uring: pass sqe for link head
>   io_uring: deduce force_nonblock in io_issue_sqe()
>   io_uring: pass submission ref to async
> 
>  fs/io_uring.c | 60 +++++++++++++++++++++++++++++----------------------
>  1 file changed, 34 insertions(+), 26 deletions(-)
> 

-- 
Pavel Begunkov



* Re: [PATCH 0/3] io_uring: clean wq path
  2020-02-05 22:29 ` [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
@ 2020-02-06  2:50   ` Jens Axboe
  2020-02-06  9:51     ` Pavel Begunkov
  0 siblings, 1 reply; 10+ messages in thread
From: Jens Axboe @ 2020-02-06  2:50 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, linux-kernel

On 2/5/20 3:29 PM, Pavel Begunkov wrote:
> On 05/02/2020 22:07, Pavel Begunkov wrote:
>> This is the first series of shaving some overhead off wq-offloading.
>> The 1st patch removes extra allocations, and the 3rd removes req->refs abuse.
> 
> Rechecked a couple of assumptions, this patchset is messed up.
> Drop it for now.

OK, will do, haven't had time to look at it yet anyway.

Are you going to do the ->has_user removal? We should just do that
separately first.

-- 
Jens Axboe



* Re: [PATCH 0/3] io_uring: clean wq path
  2020-02-06  2:50   ` Jens Axboe
@ 2020-02-06  9:51     ` Pavel Begunkov
  2020-02-06 14:26       ` Jens Axboe
  0 siblings, 1 reply; 10+ messages in thread
From: Pavel Begunkov @ 2020-02-06  9:51 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-kernel

On 2/6/2020 5:50 AM, Jens Axboe wrote:
> On 2/5/20 3:29 PM, Pavel Begunkov wrote:
>> On 05/02/2020 22:07, Pavel Begunkov wrote:
>>> This is the first series of shaving some overhead off wq-offloading.
>>> The 1st patch removes extra allocations, and the 3rd removes req->refs abuse.
>>
>> Rechecked a couple of assumptions, this patchset is messed up.
>> Drop it for now.
> 
> OK, will do, haven't had time to look at it yet anyway.

Sorry for the fuss. I'll return to it later.

> 
> Are you going to do the ->has_user removal? We should just do that
> separately first.
Yes. I've spotted a few bugs, so I'm going to patch them first with
merging/backporting in mind, and then deal with ->has_user. IMO, this
order makes more sense.

-- 
Pavel Begunkov


* Re: [PATCH 0/3] io_uring: clean wq path
  2020-02-06  9:51     ` Pavel Begunkov
@ 2020-02-06 14:26       ` Jens Axboe
  0 siblings, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2020-02-06 14:26 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, linux-kernel

On 2/6/20 2:51 AM, Pavel Begunkov wrote:
> On 2/6/2020 5:50 AM, Jens Axboe wrote:
>> On 2/5/20 3:29 PM, Pavel Begunkov wrote:
>>> On 05/02/2020 22:07, Pavel Begunkov wrote:
>>>> This is the first series of shaving some overhead off wq-offloading.
>>>> The 1st patch removes extra allocations, and the 3rd removes req->refs abuse.
>>>
>>> Rechecked a couple of assumptions, this patchset is messed up.
>>> Drop it for now.
>>
>> OK, will do, haven't had time to look at it yet anyway.
> 
> Sorry for the fuss. I'll return to it later.

No worries

>> Are you going to do the ->has_user removal? We should just do that
>> separately first.
> Yes. I've spotted a few bugs, so I'm going to patch them first with
> merging/backporting in mind, and then deal with ->has_user. IMO, this
> order makes more sense.

I think it probably makes sense to do it in the opposite order, as the
->has_user cleanup/clarification should go into 5.6 whereas the other
stuff is likely 5.7 material.

-- 
Jens Axboe




Thread overview: 10+ messages
-- links below jump to the message on this page --
2020-02-05 19:07 [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
2020-02-05 19:07 ` [PATCH 1/3] io_uring: pass sqe for link head Pavel Begunkov
2020-02-05 21:39   ` Pavel Begunkov
2020-02-05 21:43     ` Pavel Begunkov
2020-02-05 19:07 ` [PATCH 2/3] io_uring: deduce force_nonblock in io_issue_sqe() Pavel Begunkov
2020-02-05 19:07 ` [PATCH 3/3] io_uring: pass submission ref to async Pavel Begunkov
2020-02-05 22:29 ` [PATCH 0/3] io_uring: clean wq path Pavel Begunkov
2020-02-06  2:50   ` Jens Axboe
2020-02-06  9:51     ` Pavel Begunkov
2020-02-06 14:26       ` Jens Axboe
