* [RFC 0/3] Inline sqe_submit
@ 2019-11-05 23:04 Pavel Begunkov
  2019-11-05 23:04 ` [PATCH 1/3] io_uring: allocate io_kiocb upfront Pavel Begunkov
From: Pavel Begunkov @ 2019-11-05 23:04 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block

The proposal is to not pass struct sqe_submit as a separate entity,
but always use req->submit instead, so there will be less stuff to
care about. The reasoning behind it is code simplification.

Also, I've got steady +1% throughput improvement for nop tests.
Though, it's highly system-dependent, and I wouldn't count on it.
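
To illustrate the shape of the change, here is how the io_submit_sqe()
prototype looks before and after the series (just a sketch; the patches
below have the real thing):

    /* before: submit info travels next to the request */
    static void io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
                              struct io_submit_state *state,
                              struct io_kiocb **link);

    /* after: only the request is passed, req->submit is the one copy */
    static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
                              struct io_submit_state *state,
                              struct io_kiocb **link);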

P.S. I'll double check the patches, if the idea is accepted.


Pavel Begunkov (3):
  io_uring: allocate io_kiocb upfront
  io_uring: Use submit info inlined into req
  io_uring: use inlined struct sqe_submit

 fs/io_uring.c | 127 ++++++++++++++++++++++++--------------------------
 1 file changed, 61 insertions(+), 66 deletions(-)

-- 
2.23.0



* [PATCH 1/3] io_uring: allocate io_kiocb upfront
  2019-11-05 23:04 [RFC 0/3] Inline sqe_submit Pavel Begunkov
@ 2019-11-05 23:04 ` Pavel Begunkov
  2019-11-05 23:04 ` [PATCH 2/3] io_uring: Use submit info inlined into req Pavel Begunkov
From: Pavel Begunkov @ 2019-11-05 23:04 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block

Preparation patch.
Make io_submit_sqes() allocate the io_kiocb and then pass it further
down the call chain. Another difference is that the request is now
allocated before an sqe is fetched.
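
Condensed, the per-sqe loop in io_submit_sqes() becomes the following
(sketch only; mm handling, drain/link logic and tracing are omitted):

    for (i = 0; i < nr; i++) {
        struct sqe_submit s;
        struct io_kiocb *req;

        /* allocate the request first ... */
        req = io_get_req(ctx, statep);
        if (unlikely(!req))
            break;

        /* ... then fetch the sqe; put the request back if there is none */
        if (!io_get_sqring(ctx, &s)) {
            __io_free_req(req);
            break;
        }

        io_submit_sqe(ctx, req, &s, statep, &link);
        submitted++;
    }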

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 82c2da99cb5c..920ad731db01 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2538,30 +2538,23 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 
 #define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK)
 
-static void io_submit_sqe(struct io_ring_ctx *ctx, struct sqe_submit *s,
-			  struct io_submit_state *state, struct io_kiocb **link)
+static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
+			  struct sqe_submit *s, struct io_submit_state *state,
+			  struct io_kiocb **link)
 {
 	struct io_uring_sqe *sqe_copy;
-	struct io_kiocb *req;
 	int ret;
 
 	/* enforce forwards compatibility on users */
 	if (unlikely(s->sqe->flags & ~SQE_VALID_FLAGS)) {
 		ret = -EINVAL;
-		goto err;
-	}
-
-	req = io_get_req(ctx, state);
-	if (unlikely(!req)) {
-		ret = -EAGAIN;
-		goto err;
+		goto err_req;
 	}
 
 	ret = io_req_set_file(ctx, s, state, req);
 	if (unlikely(ret)) {
 err_req:
 		io_free_req(req, NULL);
-err:
 		io_cqring_add_event(ctx, s->sqe->user_data, ret);
 		return;
 	}
@@ -2697,9 +2690,15 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 
 	for (i = 0; i < nr; i++) {
 		struct sqe_submit s;
+		struct io_kiocb *req;
 
-		if (!io_get_sqring(ctx, &s))
+		req = io_get_req(ctx, statep);
+		if (unlikely(!req))
 			break;
+		if (!io_get_sqring(ctx, &s)) {
+			__io_free_req(req);
+			break;
+		}
 
 		if (io_sqe_needs_user(s.sqe) && !*mm) {
 			mm_fault = mm_fault || !mmget_not_zero(ctx->sqo_mm);
@@ -2727,7 +2726,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		s.in_async = async;
 		s.needs_fixed_file = async;
 		trace_io_uring_submit_sqe(ctx, s.sqe->user_data, true, async);
-		io_submit_sqe(ctx, &s, statep, &link);
+		io_submit_sqe(ctx, req, &s, statep, &link);
 		submitted++;
 
 		/*
-- 
2.23.0



* [PATCH 2/3] io_uring: Use submit info inlined into req
  2019-11-05 23:04 [RFC 0/3] Inline sqe_submit Pavel Begunkov
  2019-11-05 23:04 ` [PATCH 1/3] io_uring: allocate io_kiocb upfront Pavel Begunkov
@ 2019-11-05 23:04 ` Pavel Begunkov
  2019-11-05 23:42   ` Jens Axboe
  2019-11-05 23:04 ` [PATCH 3/3] io_uring: use inlined struct sqe_submit Pavel Begunkov
  2019-11-05 23:37 ` [RFC 0/3] Inline sqe_submit Jens Axboe
From: Pavel Begunkov @ 2019-11-05 23:04 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block

A stack-allocated struct sqe_submit is passed down the submission path
along with a request (a.k.a. struct io_kiocb) and is copied into
req->submit for async requests.

As space for it is already allocated, fill req->submit in the first
place instead of using the on-stack copy. As a result:

1. req->submit is the only place sqe_submit lives and is always valid,
so we don't need to track which copy to use.
2. no need to copy it for the async case
3. the code is simplified by not carrying it as an argument all the
way down
4. fewer function arguments / potentially better register spilling

The downside is that the stack is most probably cached, which isn't
necessarily true for freshly allocated request memory. Another concern
is cache pollution. Though, the request will be touched and fetched
along with req->submit at some point anyway, so it shouldn't be a
problem.
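
Condensed, the submission loop goes from filling a local copy and
memcpy()ing it into the request later:

    struct sqe_submit s;

    io_get_sqring(ctx, &s);
    ...
    memcpy(&req->submit, &s, sizeof(s));    /* async/link paths only */

to writing straight into the request, with no second copy (sketch,
other fields elided):

    io_get_sqring(ctx, &req->submit);
    req->submit.ring_file = ring_file;
    req->submit.ring_fd = ring_fd;
    ...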

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 920ad731db01..ecb5a4336389 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2443,7 +2443,6 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		sqe_copy = kmemdup(s->sqe, sizeof(*sqe_copy), GFP_KERNEL);
 		if (sqe_copy) {
 			s->sqe = sqe_copy;
-			memcpy(&req->submit, s, sizeof(*s));
 			if (req->work.flags & IO_WQ_WORK_NEEDS_FILES) {
 				ret = io_grab_files(ctx, req);
 				if (ret) {
@@ -2578,13 +2577,11 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		}
 
 		s->sqe = sqe_copy;
-		memcpy(&req->submit, s, sizeof(*s));
 		trace_io_uring_link(ctx, req, prev);
 		list_add_tail(&req->list, &prev->link_list);
 	} else if (s->sqe->flags & IOSQE_IO_LINK) {
 		req->flags |= REQ_F_LINK;
 
-		memcpy(&req->submit, s, sizeof(*s));
 		INIT_LIST_HEAD(&req->link_list);
 		*link = req;
 	} else {
@@ -2689,18 +2686,17 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 	}
 
 	for (i = 0; i < nr; i++) {
-		struct sqe_submit s;
 		struct io_kiocb *req;
 
 		req = io_get_req(ctx, statep);
 		if (unlikely(!req))
 			break;
-		if (!io_get_sqring(ctx, &s)) {
+		if (!io_get_sqring(ctx, &req->submit)) {
 			__io_free_req(req);
 			break;
 		}
 
-		if (io_sqe_needs_user(s.sqe) && !*mm) {
+		if (io_sqe_needs_user(req->submit.sqe) && !*mm) {
 			mm_fault = mm_fault || !mmget_not_zero(ctx->sqo_mm);
 			if (!mm_fault) {
 				use_mm(ctx->sqo_mm);
@@ -2708,7 +2704,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 			}
 		}
 
-		if (link && (s.sqe->flags & IOSQE_IO_DRAIN)) {
+		if (link && (req->submit.sqe->flags & IOSQE_IO_DRAIN)) {
 			if (!shadow_req) {
 				shadow_req = io_get_req(ctx, NULL);
 				if (unlikely(!shadow_req))
@@ -2716,24 +2712,25 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 				shadow_req->flags |= (REQ_F_IO_DRAIN | REQ_F_SHADOW_DRAIN);
 				refcount_dec(&shadow_req->refs);
 			}
-			shadow_req->sequence = s.sequence;
+			shadow_req->sequence = req->submit.sequence;
 		}
 
 out:
-		s.ring_file = ring_file;
-		s.ring_fd = ring_fd;
-		s.has_user = *mm != NULL;
-		s.in_async = async;
-		s.needs_fixed_file = async;
-		trace_io_uring_submit_sqe(ctx, s.sqe->user_data, true, async);
-		io_submit_sqe(ctx, req, &s, statep, &link);
+		req->submit.ring_file = ring_file;
+		req->submit.ring_fd = ring_fd;
+		req->submit.has_user = *mm != NULL;
+		req->submit.in_async = async;
+		req->submit.needs_fixed_file = async;
+		trace_io_uring_submit_sqe(ctx, req->submit.sqe->user_data,
+					  true, async);
+		io_submit_sqe(ctx, req, &req->submit, statep, &link);
 		submitted++;
 
 		/*
 		 * If previous wasn't linked and we have a linked command,
 		 * that's the end of the chain. Submit the previous link.
 		 */
-		if (!(s.sqe->flags & IOSQE_IO_LINK) && link) {
+		if (!(req->submit.sqe->flags & IOSQE_IO_LINK) && link) {
 			io_queue_link_head(ctx, link, &link->submit, shadow_req);
 			link = NULL;
 			shadow_req = NULL;
-- 
2.23.0



* [PATCH 3/3] io_uring: use inlined struct sqe_submit
  2019-11-05 23:04 [RFC 0/3] Inline sqe_submit Pavel Begunkov
  2019-11-05 23:04 ` [PATCH 1/3] io_uring: allocate io_kiocb upfront Pavel Begunkov
  2019-11-05 23:04 ` [PATCH 2/3] io_uring: Use submit info inlined into req Pavel Begunkov
@ 2019-11-05 23:04 ` Pavel Begunkov
  2019-11-05 23:37   ` Jens Axboe
  2019-11-05 23:37 ` [RFC 0/3] Inline sqe_submit Jens Axboe
From: Pavel Begunkov @ 2019-11-05 23:04 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block

req->submit is always kept up to date, so use it directly.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 85 +++++++++++++++++++++++++--------------------------
 1 file changed, 42 insertions(+), 43 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ecb5a4336389..e40a6ed54adf 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1157,10 +1157,9 @@ static bool io_file_supports_async(struct file *file)
 	return false;
 }
 
-static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,
-		      bool force_nonblock)
+static int io_prep_rw(struct io_kiocb *req, bool force_nonblock)
 {
-	const struct io_uring_sqe *sqe = s->sqe;
+	const struct io_uring_sqe *sqe = req->submit.sqe;
 	struct io_ring_ctx *ctx = req->ctx;
 	struct kiocb *kiocb = &req->rw;
 	unsigned ioprio;
@@ -1408,8 +1407,8 @@ static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb,
 	return ret;
 }
 
-static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
-		   struct io_kiocb **nxt, bool force_nonblock)
+static int io_read(struct io_kiocb *req, struct io_kiocb **nxt,
+		   bool force_nonblock)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
 	struct kiocb *kiocb = &req->rw;
@@ -1418,7 +1417,7 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 	size_t iov_count;
 	ssize_t read_size, ret;
 
-	ret = io_prep_rw(req, s, force_nonblock);
+	ret = io_prep_rw(req, force_nonblock);
 	if (ret)
 		return ret;
 	file = kiocb->ki_filp;
@@ -1426,7 +1425,7 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 	if (unlikely(!(file->f_mode & FMODE_READ)))
 		return -EBADF;
 
-	ret = io_import_iovec(req->ctx, READ, s, &iovec, &iter);
+	ret = io_import_iovec(req->ctx, READ, &req->submit, &iovec, &iter);
 	if (ret < 0)
 		return ret;
 
@@ -1458,7 +1457,7 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 			ret2 = -EAGAIN;
 		/* Catch -EAGAIN return for forced non-blocking submission */
 		if (!force_nonblock || ret2 != -EAGAIN)
-			kiocb_done(kiocb, ret2, nxt, s->in_async);
+			kiocb_done(kiocb, ret2, nxt, req->submit.in_async);
 		else
 			ret = -EAGAIN;
 	}
@@ -1466,8 +1465,8 @@ static int io_read(struct io_kiocb *req, const struct sqe_submit *s,
 	return ret;
 }
 
-static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
-		    struct io_kiocb **nxt, bool force_nonblock)
+static int io_write(struct io_kiocb *req, struct io_kiocb **nxt,
+		    bool force_nonblock)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
 	struct kiocb *kiocb = &req->rw;
@@ -1476,7 +1475,7 @@ static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
 	size_t iov_count;
 	ssize_t ret;
 
-	ret = io_prep_rw(req, s, force_nonblock);
+	ret = io_prep_rw(req, force_nonblock);
 	if (ret)
 		return ret;
 
@@ -1484,7 +1483,7 @@ static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
 	if (unlikely(!(file->f_mode & FMODE_WRITE)))
 		return -EBADF;
 
-	ret = io_import_iovec(req->ctx, WRITE, s, &iovec, &iter);
+	ret = io_import_iovec(req->ctx, WRITE, &req->submit, &iovec, &iter);
 	if (ret < 0)
 		return ret;
 
@@ -1521,7 +1520,7 @@ static int io_write(struct io_kiocb *req, const struct sqe_submit *s,
 		else
 			ret2 = loop_rw_iter(WRITE, file, kiocb, &iter);
 		if (!force_nonblock || ret2 != -EAGAIN)
-			kiocb_done(kiocb, ret2, nxt, s->in_async);
+			kiocb_done(kiocb, ret2, nxt, req->submit.in_async);
 		else
 			ret = -EAGAIN;
 	}
@@ -2177,9 +2176,9 @@ static int io_async_cancel(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }
 
-static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			const struct io_uring_sqe *sqe)
+static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req)
 {
+	const struct io_uring_sqe *sqe = req->submit.sqe;
 	struct io_uring_sqe *sqe_copy;
 
 	if (!io_sequence_defer(ctx, req) && list_empty(&ctx->defer_list))
@@ -2206,10 +2205,10 @@ static int io_req_defer(struct io_ring_ctx *ctx, struct io_kiocb *req,
 }
 
 static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			   const struct sqe_submit *s, struct io_kiocb **nxt,
-			   bool force_nonblock)
+			   struct io_kiocb **nxt, bool force_nonblock)
 {
 	int ret, opcode;
+	struct sqe_submit *s = &req->submit;
 
 	req->user_data = READ_ONCE(s->sqe->user_data);
 
@@ -2221,18 +2220,18 @@ static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	case IORING_OP_READV:
 		if (unlikely(s->sqe->buf_index))
 			return -EINVAL;
-		ret = io_read(req, s, nxt, force_nonblock);
+		ret = io_read(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_WRITEV:
 		if (unlikely(s->sqe->buf_index))
 			return -EINVAL;
-		ret = io_write(req, s, nxt, force_nonblock);
+		ret = io_write(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_READ_FIXED:
-		ret = io_read(req, s, nxt, force_nonblock);
+		ret = io_read(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_WRITE_FIXED:
-		ret = io_write(req, s, nxt, force_nonblock);
+		ret = io_write(req, nxt, force_nonblock);
 		break;
 	case IORING_OP_FSYNC:
 		ret = io_fsync(req, s->sqe, nxt, force_nonblock);
@@ -2307,7 +2306,7 @@ static void io_wq_submit_work(struct io_wq_work **workptr)
 		s->has_user = (work->flags & IO_WQ_WORK_HAS_MM) != 0;
 		s->in_async = true;
 		do {
-			ret = __io_submit_sqe(ctx, req, s, &nxt, false);
+			ret = __io_submit_sqe(ctx, req, &nxt, false);
 			/*
 			 * We can get EAGAIN for polled IO even though we're
 			 * forcing a sync submission from here, since we can't
@@ -2359,9 +2358,10 @@ static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
 	return table->files[index & IORING_FILE_TABLE_MASK];
 }
 
-static int io_req_set_file(struct io_ring_ctx *ctx, const struct sqe_submit *s,
+static int io_req_set_file(struct io_ring_ctx *ctx,
 			   struct io_submit_state *state, struct io_kiocb *req)
 {
+	struct sqe_submit *s = &req->submit;
 	unsigned flags;
 	int fd;
 
@@ -2425,12 +2425,11 @@ static int io_grab_files(struct io_ring_ctx *ctx, struct io_kiocb *req)
 	return ret;
 }
 
-static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			struct sqe_submit *s)
+static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
 {
 	int ret;
 
-	ret = __io_submit_sqe(ctx, req, s, NULL, true);
+	ret = __io_submit_sqe(ctx, req, NULL, true);
 
 	/*
 	 * We async punt it if the file wasn't marked NOWAIT, or if the file
@@ -2438,6 +2437,7 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	 */
 	if (ret == -EAGAIN && (!(req->flags & REQ_F_NOWAIT) ||
 	    (req->flags & REQ_F_MUST_PUNT))) {
+		struct sqe_submit *s = &req->submit;
 		struct io_uring_sqe *sqe_copy;
 
 		sqe_copy = kmemdup(s->sqe, sizeof(*sqe_copy), GFP_KERNEL);
@@ -2475,31 +2475,30 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return ret;
 }
 
-static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			struct sqe_submit *s)
+static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
 {
 	int ret;
 
-	ret = io_req_defer(ctx, req, s->sqe);
+	ret = io_req_defer(ctx, req);
 	if (ret) {
 		if (ret != -EIOCBQUEUED) {
 			io_free_req(req, NULL);
-			io_cqring_add_event(ctx, s->sqe->user_data, ret);
+			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
 		}
 		return 0;
 	}
 
-	return __io_queue_sqe(ctx, req, s);
+	return __io_queue_sqe(ctx, req);
 }
 
 static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			      struct sqe_submit *s, struct io_kiocb *shadow)
+			      struct io_kiocb *shadow)
 {
 	int ret;
 	int need_submit = false;
 
 	if (!shadow)
-		return io_queue_sqe(ctx, req, s);
+		return io_queue_sqe(ctx, req);
 
 	/*
 	 * Mark the first IO in link list as DRAIN, let all the following
@@ -2507,12 +2506,12 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	 * list.
 	 */
 	req->flags |= REQ_F_IO_DRAIN;
-	ret = io_req_defer(ctx, req, s->sqe);
+	ret = io_req_defer(ctx, req);
 	if (ret) {
 		if (ret != -EIOCBQUEUED) {
 			io_free_req(req, NULL);
 			__io_free_req(shadow);
-			io_cqring_add_event(ctx, s->sqe->user_data, ret);
+			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
 			return 0;
 		}
 	} else {
@@ -2530,7 +2529,7 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	spin_unlock_irq(&ctx->completion_lock);
 
 	if (need_submit)
-		return __io_queue_sqe(ctx, req, s);
+		return __io_queue_sqe(ctx, req);
 
 	return 0;
 }
@@ -2538,10 +2537,10 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
 #define SQE_VALID_FLAGS	(IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK)
 
 static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
-			  struct sqe_submit *s, struct io_submit_state *state,
-			  struct io_kiocb **link)
+			  struct io_submit_state *state, struct io_kiocb **link)
 {
 	struct io_uring_sqe *sqe_copy;
+	struct sqe_submit *s = &req->submit;
 	int ret;
 
 	/* enforce forwards compatibility on users */
@@ -2550,7 +2549,7 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		goto err_req;
 	}
 
-	ret = io_req_set_file(ctx, s, state, req);
+	ret = io_req_set_file(ctx, state, req);
 	if (unlikely(ret)) {
 err_req:
 		io_free_req(req, NULL);
@@ -2585,7 +2584,7 @@ static void io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		INIT_LIST_HEAD(&req->link_list);
 		*link = req;
 	} else {
-		io_queue_sqe(ctx, req, s);
+		io_queue_sqe(ctx, req);
 	}
 }
 
@@ -2723,7 +2722,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		req->submit.needs_fixed_file = async;
 		trace_io_uring_submit_sqe(ctx, req->submit.sqe->user_data,
 					  true, async);
-		io_submit_sqe(ctx, req, &req->submit, statep, &link);
+		io_submit_sqe(ctx, req, statep, &link);
 		submitted++;
 
 		/*
@@ -2731,14 +2730,14 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
 		 * that's the end of the chain. Submit the previous link.
 		 */
 		if (!(req->submit.sqe->flags & IOSQE_IO_LINK) && link) {
-			io_queue_link_head(ctx, link, &link->submit, shadow_req);
+			io_queue_link_head(ctx, link, shadow_req);
 			link = NULL;
 			shadow_req = NULL;
 		}
 	}
 
 	if (link)
-		io_queue_link_head(ctx, link, &link->submit, shadow_req);
+		io_queue_link_head(ctx, link, shadow_req);
 	if (statep)
 		io_submit_state_end(&state);
 
-- 
2.23.0



* Re: [PATCH 3/3] io_uring: use inlined struct sqe_submit
  2019-11-05 23:04 ` [PATCH 3/3] io_uring: use inlined struct sqe_submit Pavel Begunkov
@ 2019-11-05 23:37   ` Jens Axboe
  2019-11-05 23:43     ` Pavel Begunkov
From: Jens Axboe @ 2019-11-05 23:37 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, linux-block

On 11/5/19 4:04 PM, Pavel Begunkov wrote:
> @@ -2475,31 +2475,30 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>   	return ret;
>   }
>   
> -static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
> -			struct sqe_submit *s)
> +static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>   {
>   	int ret;
>   
> -	ret = io_req_defer(ctx, req, s->sqe);
> +	ret = io_req_defer(ctx, req);
>   	if (ret) {
>   		if (ret != -EIOCBQUEUED) {
>   			io_free_req(req, NULL);
> -			io_cqring_add_event(ctx, s->sqe->user_data, ret);
> +			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);

Cases like these are now (or can be) use-after-free. Same with this one:

> @@ -2507,12 +2506,12 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
>   	 * list.
>   	 */
>   	req->flags |= REQ_F_IO_DRAIN;
> -	ret = io_req_defer(ctx, req, s->sqe);
> +	ret = io_req_defer(ctx, req);
>   	if (ret) {
>   		if (ret != -EIOCBQUEUED) {
>   			io_free_req(req, NULL);
>   			__io_free_req(shadow);
> -			io_cqring_add_event(ctx, s->sqe->user_data, ret);
> +			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
>   			return 0;

Free the req, then deref it...
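
A sketch of one possible fix, not taken from this thread: read
user_data out of the request before it can be freed, e.g.

    if (ret != -EIOCBQUEUED) {
        u64 user_data = req->submit.sqe->user_data;

        io_free_req(req, NULL);
        io_cqring_add_event(ctx, user_data, ret);
    }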

-- 
Jens Axboe



* Re: [RFC 0/3] Inline sqe_submit
  2019-11-05 23:04 [RFC 0/3] Inline sqe_submit Pavel Begunkov
  2019-11-05 23:04 ` [PATCH 3/3] io_uring: use inlined struct sqe_submit Pavel Begunkov
@ 2019-11-05 23:37 ` Jens Axboe
  2019-11-05 23:45   ` Pavel Begunkov
From: Jens Axboe @ 2019-11-05 23:37 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, linux-block

On 11/5/19 4:04 PM, Pavel Begunkov wrote:
> The proposal is to not pass struct sqe_submit as a separate entity,
> but always use req->submit instead, so there will be less stuff to
> care about. The reasoning behind it is code simplification.
> 
> Also, I've got steady +1% throughput improvement for nop tests.
> Though, it's highly system-dependent, and I wouldn't count on it.
> 
> P.S. I'll double check the patches, if the idea is accepted.

I like the idea (a lot), makes the whole thing easier to follow as well.
Just one comment on patch 3, that needs fixing.

-- 
Jens Axboe



* Re: [PATCH 2/3] io_uring: Use submit info inlined into req
  2019-11-05 23:04 ` [PATCH 2/3] io_uring: Use submit info inlined into req Pavel Begunkov
@ 2019-11-05 23:42   ` Jens Axboe
From: Jens Axboe @ 2019-11-05 23:42 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, linux-block

On 11/5/19 4:04 PM, Pavel Begunkov wrote:
 				if (unlikely(!shadow_req))
> @@ -2716,24 +2712,25 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>   				shadow_req->flags |= (REQ_F_IO_DRAIN | REQ_F_SHADOW_DRAIN);
>   				refcount_dec(&shadow_req->refs);
>   			}
> -			shadow_req->sequence = s.sequence;
> +			shadow_req->sequence = req->submit.sequence;
>   		}
>   
>   out:
> -		s.ring_file = ring_file;
> -		s.ring_fd = ring_fd;
> -		s.has_user = *mm != NULL;
> -		s.in_async = async;
> -		s.needs_fixed_file = async;
> -		trace_io_uring_submit_sqe(ctx, s.sqe->user_data, true, async);
> -		io_submit_sqe(ctx, req, &s, statep, &link);
> +		req->submit.ring_file = ring_file;
> +		req->submit.ring_fd = ring_fd;
> +		req->submit.has_user = *mm != NULL;
> +		req->submit.in_async = async;
> +		req->submit.needs_fixed_file = async;
> +		trace_io_uring_submit_sqe(ctx, req->submit.sqe->user_data,
> +					  true, async);
> +		io_submit_sqe(ctx, req, &req->submit, statep, &link);
>   		submitted++;
>   
>   		/*
>   		 * If previous wasn't linked and we have a linked command,
>   		 * that's the end of the chain. Submit the previous link.
>   		 */
> -		if (!(s.sqe->flags & IOSQE_IO_LINK) && link) {
> +		if (!(req->submit.sqe->flags & IOSQE_IO_LINK) && link) {
>   			io_queue_link_head(ctx, link, &link->submit, shadow_req);
>   			link = NULL;
>   			shadow_req = NULL;

Another potential use-after-free here, as 'req' might have completed by
the time you go and check for IOSQE_IO_LINK.
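
A sketch of one way around this, again not taken from the thread
itself: sample the sqe flags into a local before the request is
submitted, e.g.

    unsigned int sqe_flags = req->submit.sqe->flags;

    io_submit_sqe(ctx, req, &req->submit, statep, &link);
    submitted++;

    if (!(sqe_flags & IOSQE_IO_LINK) && link) {
        io_queue_link_head(ctx, link, &link->submit, shadow_req);
        link = NULL;
        shadow_req = NULL;
    }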

-- 
Jens Axboe



* Re: [PATCH 3/3] io_uring: use inlined struct sqe_submit
  2019-11-05 23:37   ` Jens Axboe
@ 2019-11-05 23:43     ` Pavel Begunkov
From: Pavel Begunkov @ 2019-11-05 23:43 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block


On 06/11/2019 02:37, Jens Axboe wrote:
> On 11/5/19 4:04 PM, Pavel Begunkov wrote:
>> @@ -2475,31 +2475,30 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>   	return ret;
>>   }
>>   
>> -static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>> -			struct sqe_submit *s)
>> +static int io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req)
>>   {
>>   	int ret;
>>   
>> -	ret = io_req_defer(ctx, req, s->sqe);
>> +	ret = io_req_defer(ctx, req);
>>   	if (ret) {
>>   		if (ret != -EIOCBQUEUED) {
>>   			io_free_req(req, NULL);
>> -			io_cqring_add_event(ctx, s->sqe->user_data, ret);
>> +			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
> 
> Cases like these are now (or can be) use-after-free. Same with this one:
> 
Hmm, lost this in rebasing. Good catch!

>> @@ -2507,12 +2506,12 @@ static int io_queue_link_head(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>   	 * list.
>>   	 */
>>   	req->flags |= REQ_F_IO_DRAIN;
>> -	ret = io_req_defer(ctx, req, s->sqe);
>> +	ret = io_req_defer(ctx, req);
>>   	if (ret) {
>>   		if (ret != -EIOCBQUEUED) {
>>   			io_free_req(req, NULL);
>>   			__io_free_req(shadow);
>> -			io_cqring_add_event(ctx, s->sqe->user_data, ret);
>> +			io_cqring_add_event(ctx, req->submit.sqe->user_data, ret);
>>   			return 0;
> 
> Free the req, then deref it...
> 

-- 
Pavel Begunkov




* Re: [RFC 0/3] Inline sqe_submit
  2019-11-05 23:37 ` [RFC 0/3] Inline sqe_submit Jens Axboe
@ 2019-11-05 23:45   ` Pavel Begunkov
  2019-11-05 23:48     ` Jens Axboe
From: Pavel Begunkov @ 2019-11-05 23:45 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-block


On 06/11/2019 02:37, Jens Axboe wrote:
> On 11/5/19 4:04 PM, Pavel Begunkov wrote:
>> The proposal is to not pass struct sqe_submit as a separate entity,
>> but always use req->submit instead, so there will be less stuff to
>> care about. The reasoning behind it is code simplification.
>>
>> Also, I've got steady +1% throughput improvement for nop tests.
>> Though, it's highly system-dependent, and I wouldn't count on it.
>>
>> P.S. I'll double check the patches, if the idea is accepted.
> 
> I like the idea (a lot), makes the whole thing easier to follow as well.

Great, then I'll prepare the patches properly and resend them.

> Just one comment on patch 3, that needs fixing.
> 

-- 
Pavel Begunkov




* Re: [RFC 0/3] Inline sqe_submit
  2019-11-05 23:45   ` Pavel Begunkov
@ 2019-11-05 23:48     ` Jens Axboe
From: Jens Axboe @ 2019-11-05 23:48 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring, linux-block

On 11/5/19 4:45 PM, Pavel Begunkov wrote:
> On 06/11/2019 02:37, Jens Axboe wrote:
>> On 11/5/19 4:04 PM, Pavel Begunkov wrote:
>>> The proposal is to not pass struct sqe_submit as a separate entity,
>>> but always use req->submit instead, so there will be less stuff to
>>> care about. The reasoning behind it is code simplification.
>>>
>>> Also, I've got steady +1% throughput improvement for nop tests.
>>> Though, it's highly system-dependent, and I wouldn't count on it.
>>>
>>> P.S. I'll double check the patches, if the idea is accepted.
>>
>> I like the idea (a lot), makes the whole thing easier to follow as well.
> 
> Great, then I'll prepare the patches properly and resend them.

Perfect - doesn't look like it'll conflict with the submission path
cleanup (which also looks good), but if it does, just collate them into
a single series.

-- 
Jens Axboe


