* [PATCH 0/2] prep and grab_files cleanup
From: Pavel Begunkov @ 2020-07-23 17:25 UTC
  To: Jens Axboe, io-uring

Not sure we currently have the problem mentioned in [1/2],
but it's definitely nice to have such a guarantee.

Pavel Begunkov (2):
  io_uring: don't do opcode prep twice
  io_uring: deduplicate io_grab_files() calls

 fs/io_uring.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

-- 
2.24.0



* [PATCH 1/2] io_uring: don't do opcode prep twice
From: Pavel Begunkov @ 2020-07-23 17:25 UTC
  To: Jens Axboe, io-uring

Calling into an opcode prep handler twice is dangerous: the handler
re-reads the SQE but might not re-initialise the request completely. If
io_req_defer() has passed the fast checks and is already done with
preparation, punt the request to async work instead of returning 0.

As all other cases are covered by nulling @sqe after preparation, this
guarantees that io_[opcode]_prep() is called only once per request.
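
For context, the caller treats -EIOCBQUEUED as "ownership handed off"
rather than as a failure. A condensed, hypothetical sketch of the
caller-side handling (not verbatim kernel code; the error path is
paraphrased):

	ret = io_req_defer(req, sqe);
	if (ret) {
		if (ret != -EIOCBQUEUED) {
			/* genuine error: fail linked requests and complete */
			req_set_fail_links(req);
			io_req_complete(req, ret);
		}
		/* -EIOCBQUEUED: deferred or punted async, nothing to do */
		return;
	}
	/* ret == 0: not deferred, continue with inline submission */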

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index cda729de4ea3..bb356c56f57c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5438,7 +5438,8 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
 		spin_unlock_irq(&ctx->completion_lock);
 		kfree(de);
-		return 0;
+		io_queue_async_work(req);
+		return -EIOCBQUEUED;
 	}
 
 	trace_io_uring_defer(ctx, req, req->user_data);
-- 
2.24.0



* [PATCH 2/2] io_uring: deduplicate io_grab_files() calls
From: Pavel Begunkov @ 2020-07-23 17:25 UTC
  To: Jens Axboe, io-uring

Move io_req_init_async() into io_grab_files(); it's safer this way. Note
that io_queue_async_work() does init_async() itself, so it's valid to
drop the call from the __io_queue_sqe() punt path. Also, add a helper,
io_prep_work_files(), around io_grab_files().
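
The move is safe because io_req_init_async() is idempotent. Paraphrased,
it is roughly of this shape (the REQ_F_WORK_INITIALIZED guard makes a
second call a no-op):

	static inline void io_req_init_async(struct io_kiocb *req)
	{
		if (req->flags & REQ_F_WORK_INITIALIZED)
			return;
		memset(&req->work, 0, sizeof(req->work));
		req->flags |= REQ_F_WORK_INITIALIZED;
	}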

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index bb356c56f57c..c22c2a3c8357 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -912,7 +912,7 @@ static void io_queue_linked_timeout(struct io_kiocb *req);
 static int __io_sqe_files_update(struct io_ring_ctx *ctx,
 				 struct io_uring_files_update *ip,
 				 unsigned nr_args);
-static int io_grab_files(struct io_kiocb *req);
+static int io_prep_work_files(struct io_kiocb *req);
 static void io_complete_rw_common(struct kiocb *kiocb, long res,
 				  struct io_comp_state *cs);
 static void __io_clean_op(struct io_kiocb *req);
@@ -5285,13 +5285,9 @@ static int io_req_defer_prep(struct io_kiocb *req,
 
 	if (io_alloc_async_ctx(req))
 		return -EAGAIN;
-
-	if (io_op_defs[req->opcode].file_table) {
-		io_req_init_async(req);
-		ret = io_grab_files(req);
-		if (unlikely(ret))
-			return ret;
-	}
+	ret = io_prep_work_files(req);
+	if (unlikely(ret))
+		return ret;
 
 	switch (req->opcode) {
 	case IORING_OP_NOP:
@@ -5842,6 +5838,8 @@ static int io_grab_files(struct io_kiocb *req)
 	int ret = -EBADF;
 	struct io_ring_ctx *ctx = req->ctx;
 
+	io_req_init_async(req);
+
 	if (req->work.files || (req->flags & REQ_F_NO_FILE_TABLE))
 		return 0;
 	if (!ctx->ring_file)
@@ -5867,6 +5865,13 @@ static int io_grab_files(struct io_kiocb *req)
 	return ret;
 }
 
+static inline int io_prep_work_files(struct io_kiocb *req)
+{
+	if (!io_op_defs[req->opcode].file_table)
+		return 0;
+	return io_grab_files(req);
+}
+
 static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
 {
 	struct io_timeout_data *data = container_of(timer,
@@ -5978,14 +5983,9 @@ static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 			goto exit;
 		}
 punt:
-		io_req_init_async(req);
-
-		if (io_op_defs[req->opcode].file_table) {
-			ret = io_grab_files(req);
-			if (ret)
-				goto err;
-		}
-
+		ret = io_prep_work_files(req);
+		if (unlikely(ret))
+			goto err;
 		/*
 		 * Queued up for async execution, worker will release
 		 * submit reference when the iocb is actually submitted.
-- 
2.24.0



* Re: [PATCH 0/2] prep and grab_files cleanup
From: Jens Axboe @ 2020-07-23 18:24 UTC
  To: Pavel Begunkov, io-uring

On 7/23/20 11:25 AM, Pavel Begunkov wrote:
> Not sure we currently have the problem mentioned in [1/2],
> but it's definitely nice to have such a guarantee.

I think better safe than sorry, and it doesn't impact the fast path.

I've applied the series, thanks.

-- 
Jens Axboe


