* [PATCHSET 0/6] Misc cleanups
@ 2022-05-24 21:37 Jens Axboe
  2022-05-24 21:37 ` [PATCH 1/6] io_uring: make timeout prep handlers consistent with other prep handlers Jens Axboe
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Jens Axboe @ 2022-05-24 21:37 UTC (permalink / raw)
  To: io-uring

Hi,

Just a few minor consistency fixups that I found while starting the
io_uring.c move and split. Aiming to put these into 5.19 to avoid
any extra hassle on the 5.20 front.

-- 
Jens Axboe




* [PATCH 1/6] io_uring: make timeout prep handlers consistent with other prep handlers
  2022-05-24 21:37 [PATCHSET 0/6] Misc cleanups Jens Axboe
@ 2022-05-24 21:37 ` Jens Axboe
  2022-05-25  6:16   ` Kanchan Joshi
  2022-05-24 21:37 ` [PATCH 2/6] io_uring: make prep and issue side of req handlers named consistently Jens Axboe
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2022-05-24 21:37 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

All other opcodes take a {req, sqe} pair for prep handling. Split out
a common timeout prep helper so that timeouts and linked timeouts can
both use the same one while following that convention.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9f1c682d7caf..c3991034b26a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7698,8 +7698,9 @@ static int io_timeout_remove(struct io_kiocb *req, unsigned int issue_flags)
 	return 0;
 }
 
-static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
-			   bool is_timeout_link)
+static int __io_timeout_prep(struct io_kiocb *req,
+			     const struct io_uring_sqe *sqe,
+			     bool is_timeout_link)
 {
 	struct io_timeout_data *data;
 	unsigned flags;
@@ -7754,6 +7755,18 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }
 
+static int io_timeout_prep(struct io_kiocb *req,
+			   const struct io_uring_sqe *sqe)
+{
+	return __io_timeout_prep(req, sqe, false);
+}
+
+static int io_link_timeout_prep(struct io_kiocb *req,
+				const struct io_uring_sqe *sqe)
+{
+	return __io_timeout_prep(req, sqe, true);
+}
+
 static int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -8039,13 +8052,13 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	case IORING_OP_CONNECT:
 		return io_connect_prep(req, sqe);
 	case IORING_OP_TIMEOUT:
-		return io_timeout_prep(req, sqe, false);
+		return io_timeout_prep(req, sqe);
 	case IORING_OP_TIMEOUT_REMOVE:
 		return io_timeout_remove_prep(req, sqe);
 	case IORING_OP_ASYNC_CANCEL:
 		return io_async_cancel_prep(req, sqe);
 	case IORING_OP_LINK_TIMEOUT:
-		return io_timeout_prep(req, sqe, true);
+		return io_link_timeout_prep(req, sqe);
 	case IORING_OP_ACCEPT:
 		return io_accept_prep(req, sqe);
 	case IORING_OP_FALLOCATE:
-- 
2.35.1



* [PATCH 2/6] io_uring: make prep and issue side of req handlers named consistently
  2022-05-24 21:37 [PATCHSET 0/6] Misc cleanups Jens Axboe
  2022-05-24 21:37 ` [PATCH 1/6] io_uring: make timeout prep handlers consistent with other prep handlers Jens Axboe
@ 2022-05-24 21:37 ` Jens Axboe
  2022-05-25  6:06   ` Kanchan Joshi
  2022-05-24 21:37 ` [PATCH 3/6] io_uring: add io_op_defs 'def' pointer in req init and issue Jens Axboe
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2022-05-24 21:37 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Almost all of them already are; the odd ones out are the poll remove
and the files update requests. Name them like the others, which is:

io_#cmdname_prep	for request preparation
io_#cmdname		for request issue

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c3991034b26a..01a96fcdb7c6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7390,7 +7390,7 @@ static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe,
 	return demangle_poll(events) | (events & (EPOLLEXCLUSIVE|EPOLLONESHOT));
 }
 
-static int io_poll_update_prep(struct io_kiocb *req,
+static int io_poll_remove_prep(struct io_kiocb *req,
 			       const struct io_uring_sqe *sqe)
 {
 	struct io_poll_update *upd = &req->poll_update;
@@ -7454,7 +7454,7 @@ static int io_poll_add(struct io_kiocb *req, unsigned int issue_flags)
 	return 0;
 }
 
-static int io_poll_update(struct io_kiocb *req, unsigned int issue_flags)
+static int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_cancel_data cd = { .data = req->poll_update.old_user_data, };
 	struct io_ring_ctx *ctx = req->ctx;
@@ -7983,7 +7983,7 @@ static int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags)
 	return 0;
 }
 
-static int io_rsrc_update_prep(struct io_kiocb *req,
+static int io_files_update_prep(struct io_kiocb *req,
 				const struct io_uring_sqe *sqe)
 {
 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
@@ -8038,7 +8038,7 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	case IORING_OP_POLL_ADD:
 		return io_poll_add_prep(req, sqe);
 	case IORING_OP_POLL_REMOVE:
-		return io_poll_update_prep(req, sqe);
+		return io_poll_remove_prep(req, sqe);
 	case IORING_OP_FSYNC:
 		return io_fsync_prep(req, sqe);
 	case IORING_OP_SYNC_FILE_RANGE:
@@ -8068,7 +8068,7 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	case IORING_OP_CLOSE:
 		return io_close_prep(req, sqe);
 	case IORING_OP_FILES_UPDATE:
-		return io_rsrc_update_prep(req, sqe);
+		return io_files_update_prep(req, sqe);
 	case IORING_OP_STATX:
 		return io_statx_prep(req, sqe);
 	case IORING_OP_FADVISE:
@@ -8334,7 +8334,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 		ret = io_poll_add(req, issue_flags);
 		break;
 	case IORING_OP_POLL_REMOVE:
-		ret = io_poll_update(req, issue_flags);
+		ret = io_poll_remove(req, issue_flags);
 		break;
 	case IORING_OP_SYNC_FILE_RANGE:
 		ret = io_sync_file_range(req, issue_flags);
-- 
2.35.1



* [PATCH 3/6] io_uring: add io_op_defs 'def' pointer in req init and issue
  2022-05-24 21:37 [PATCHSET 0/6] Misc cleanups Jens Axboe
  2022-05-24 21:37 ` [PATCH 1/6] io_uring: make timeout prep handlers consistent with other prep handlers Jens Axboe
  2022-05-24 21:37 ` [PATCH 2/6] io_uring: make prep and issue side of req handlers named consistently Jens Axboe
@ 2022-05-24 21:37 ` Jens Axboe
  2022-05-25  6:41   ` Kanchan Joshi
  2022-05-24 21:37 ` [PATCH 4/6] io_uring: unify calling convention for async prep handling Jens Axboe
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2022-05-24 21:37 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Define and set it when appropriate, and use it consistently in the
function rather than using io_op_defs[opcode].

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 01a96fcdb7c6..d2c99176b11a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8301,6 +8301,7 @@ static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags)
 
 static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 {
+	const struct io_op_def *def = &io_op_defs[req->opcode];
 	const struct cred *creds = NULL;
 	int ret;
 
@@ -8310,7 +8311,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 	if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
 		creds = override_creds(req->creds);
 
-	if (!io_op_defs[req->opcode].audit_skip)
+	if (!def->audit_skip)
 		audit_uring_entry(req->opcode);
 
 	switch (req->opcode) {
@@ -8449,7 +8450,7 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
 		break;
 	}
 
-	if (!io_op_defs[req->opcode].audit_skip)
+	if (!def->audit_skip)
 		audit_uring_exit(!ret, ret);
 
 	if (creds)
@@ -8801,12 +8802,14 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		       const struct io_uring_sqe *sqe)
 	__must_hold(&ctx->uring_lock)
 {
+	const struct io_op_def *def;
 	unsigned int sqe_flags;
 	int personality;
 	u8 opcode;
 
 	/* req is partially pre-initialised, see io_preinit_req() */
 	req->opcode = opcode = READ_ONCE(sqe->opcode);
+	def = &io_op_defs[opcode];
 	/* same numerical values with corresponding REQ_F_*, safe to copy */
 	req->flags = sqe_flags = READ_ONCE(sqe->flags);
 	req->cqe.user_data = READ_ONCE(sqe->user_data);
@@ -8823,7 +8826,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		if (sqe_flags & ~SQE_VALID_FLAGS)
 			return -EINVAL;
 		if (sqe_flags & IOSQE_BUFFER_SELECT) {
-			if (!io_op_defs[opcode].buffer_select)
+			if (!def->buffer_select)
 				return -EOPNOTSUPP;
 			req->buf_index = READ_ONCE(sqe->buf_group);
 		}
@@ -8849,12 +8852,12 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		}
 	}
 
-	if (!io_op_defs[opcode].ioprio && sqe->ioprio)
+	if (!def->ioprio && sqe->ioprio)
 		return -EINVAL;
-	if (!io_op_defs[opcode].iopoll && (ctx->flags & IORING_SETUP_IOPOLL))
+	if (!def->iopoll && (ctx->flags & IORING_SETUP_IOPOLL))
 		return -EINVAL;
 
-	if (io_op_defs[opcode].needs_file) {
+	if (def->needs_file) {
 		struct io_submit_state *state = &ctx->submit_state;
 
 		req->cqe.fd = READ_ONCE(sqe->fd);
@@ -8863,7 +8866,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		 * Plug now if we have more than 2 IO left after this, and the
 		 * target is potentially a read/write to block based storage.
 		 */
-		if (state->need_plug && io_op_defs[opcode].plug) {
+		if (state->need_plug && def->plug) {
 			state->plug_started = true;
 			state->need_plug = false;
 			blk_start_plug_nr_ios(&state->plug, state->submit_nr);
-- 
2.35.1



* [PATCH 4/6] io_uring: unify calling convention for async prep handling
  2022-05-24 21:37 [PATCHSET 0/6] Misc cleanups Jens Axboe
                   ` (2 preceding siblings ...)
  2022-05-24 21:37 ` [PATCH 3/6] io_uring: add io_op_defs 'def' pointer in req init and issue Jens Axboe
@ 2022-05-24 21:37 ` Jens Axboe
  2022-05-24 21:37 ` [PATCH 5/6] io_uring: drop confusion between cleanup flags Jens Axboe
  2022-05-24 21:37 ` [PATCH 6/6] io_uring: move shutdown under the general net section Jens Axboe
  5 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-05-24 21:37 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Make them consistent in preparation for defining a req async prep
handler. The readv/writev requests share a prep handler; move it one
level down so the initial one is consistent with the others.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d2c99176b11a..408265a03563 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4176,6 +4176,16 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
 	return 0;
 }
 
+static int io_readv_prep_async(struct io_kiocb *req)
+{
+	return io_rw_prep_async(req, READ);
+}
+
+static int io_writev_prep_async(struct io_kiocb *req)
+{
+	return io_rw_prep_async(req, WRITE);
+}
+
 /*
  * This is our waitqueue callback handler, registered through __folio_lock_async()
  * when we initially tried to do the IO with the iocb armed our waitqueue.
@@ -8136,9 +8146,9 @@ static int io_req_prep_async(struct io_kiocb *req)
 
 	switch (req->opcode) {
 	case IORING_OP_READV:
-		return io_rw_prep_async(req, READ);
+		return io_readv_prep_async(req);
 	case IORING_OP_WRITEV:
-		return io_rw_prep_async(req, WRITE);
+		return io_writev_prep_async(req);
 	case IORING_OP_SENDMSG:
 		return io_sendmsg_prep_async(req);
 	case IORING_OP_RECVMSG:
-- 
2.35.1
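
Where the unified calling convention leads: once every async prep handler
takes only the request, the switch in io_req_prep_async() can be driven by
a per-opcode handler pointer. The sketch below is illustrative only; the
->prep_async field and the table layout are assumptions, not the actual
io_op_def definition in this tree.

	/* Hypothetical sketch, not the current io_op_def layout. */
	struct io_op_def {
		/* ...existing per-opcode flags... */
		int (*prep_async)(struct io_kiocb *req);
	};

	static const struct io_op_def io_op_defs[] = {
		[IORING_OP_READV]  = { .prep_async = io_readv_prep_async },
		[IORING_OP_WRITEV] = { .prep_async = io_writev_prep_async },
		/* ... */
	};

	static int io_req_prep_async(struct io_kiocb *req)
	{
		const struct io_op_def *def = &io_op_defs[req->opcode];

		if (!def->prep_async)
			return 0;
		/* allocation of req->async_data elided in this sketch */
		return def->prep_async(req);
	}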



* [PATCH 5/6] io_uring: drop confusion between cleanup flags
  2022-05-24 21:37 [PATCHSET 0/6] Misc cleanups Jens Axboe
                   ` (3 preceding siblings ...)
  2022-05-24 21:37 ` [PATCH 4/6] io_uring: unify calling convention for async prep handling Jens Axboe
@ 2022-05-24 21:37 ` Jens Axboe
  2022-05-25  8:46   ` Kanchan Joshi
  2022-05-24 21:37 ` [PATCH 6/6] io_uring: move shutdown under the general net section Jens Axboe
  5 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2022-05-24 21:37 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

If the opcode only stores data that needs to be kfree'ed in
req->async_data, then it doesn't need special handling in
our cleanup handler.

This has the added bonus of removing knowledge of those kinds of
special async_data from the io_uring core.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 408265a03563..8188c47956ad 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8229,24 +8229,6 @@ static void io_clean_op(struct io_kiocb *req)
 
 	if (req->flags & REQ_F_NEED_CLEANUP) {
 		switch (req->opcode) {
-		case IORING_OP_READV:
-		case IORING_OP_READ_FIXED:
-		case IORING_OP_READ:
-		case IORING_OP_WRITEV:
-		case IORING_OP_WRITE_FIXED:
-		case IORING_OP_WRITE: {
-			struct io_async_rw *io = req->async_data;
-
-			kfree(io->free_iovec);
-			break;
-			}
-		case IORING_OP_RECVMSG:
-		case IORING_OP_SENDMSG: {
-			struct io_async_msghdr *io = req->async_data;
-
-			kfree(io->free_iov);
-			break;
-			}
 		case IORING_OP_OPENAT:
 		case IORING_OP_OPENAT2:
 			if (req->open.filename)
-- 
2.35.1



* [PATCH 6/6] io_uring: move shutdown under the general net section
  2022-05-24 21:37 [PATCHSET 0/6] Misc cleanups Jens Axboe
                   ` (4 preceding siblings ...)
  2022-05-24 21:37 ` [PATCH 5/6] io_uring: drop confusion between cleanup flags Jens Axboe
@ 2022-05-24 21:37 ` Jens Axboe
  5 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-05-24 21:37 UTC (permalink / raw)
  To: io-uring; +Cc: Jens Axboe

Gets rid of some ifdefs and lets shutdown use the net stub defines
when CONFIG_NET isn't set.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 66 +++++++++++++++++++++++----------------------------
 1 file changed, 30 insertions(+), 36 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8188c47956ad..9624ab114a40 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5113,42 +5113,6 @@ static int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
 	return 0;
 }
 
-static int io_shutdown_prep(struct io_kiocb *req,
-			    const struct io_uring_sqe *sqe)
-{
-#if defined(CONFIG_NET)
-	if (unlikely(sqe->off || sqe->addr || sqe->rw_flags ||
-		     sqe->buf_index || sqe->splice_fd_in))
-		return -EINVAL;
-
-	req->shutdown.how = READ_ONCE(sqe->len);
-	return 0;
-#else
-	return -EOPNOTSUPP;
-#endif
-}
-
-static int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
-{
-#if defined(CONFIG_NET)
-	struct socket *sock;
-	int ret;
-
-	if (issue_flags & IO_URING_F_NONBLOCK)
-		return -EAGAIN;
-
-	sock = sock_from_file(req->file);
-	if (unlikely(!sock))
-		return -ENOTSOCK;
-
-	ret = __sys_shutdown_sock(sock, req->shutdown.how);
-	io_req_complete(req, ret);
-	return 0;
-#else
-	return -EOPNOTSUPP;
-#endif
-}
-
 static int __io_splice_prep(struct io_kiocb *req,
 			    const struct io_uring_sqe *sqe)
 {
@@ -6073,6 +6037,34 @@ static int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
 }
 
 #if defined(CONFIG_NET)
+static int io_shutdown_prep(struct io_kiocb *req,
+			    const struct io_uring_sqe *sqe)
+{
+	if (unlikely(sqe->off || sqe->addr || sqe->rw_flags ||
+		     sqe->buf_index || sqe->splice_fd_in))
+		return -EINVAL;
+
+	req->shutdown.how = READ_ONCE(sqe->len);
+	return 0;
+}
+
+static int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
+{
+	struct socket *sock;
+	int ret;
+
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
+
+	sock = sock_from_file(req->file);
+	if (unlikely(!sock))
+		return -ENOTSOCK;
+
+	ret = __sys_shutdown_sock(sock, req->shutdown.how);
+	io_req_complete(req, ret);
+	return 0;
+}
+
 static bool io_net_retry(struct socket *sock, int flags)
 {
 	if (!(flags & MSG_WAITALL))
@@ -6777,8 +6769,10 @@ IO_NETOP_PREP_ASYNC(recvmsg);
 IO_NETOP_PREP_ASYNC(connect);
 IO_NETOP_PREP(accept);
 IO_NETOP_PREP(socket);
+IO_NETOP_PREP(shutdown);
 IO_NETOP_FN(send);
 IO_NETOP_FN(recv);
+IO_NETOP_FN(shutdown);
 #endif /* CONFIG_NET */
 
 struct io_poll_table {
-- 
2.35.1
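
For context, the IO_NETOP_PREP(shutdown) and IO_NETOP_FN(shutdown) lines added
at the end of the diff rely on the existing !CONFIG_NET stub generators, which
expand to -EOPNOTSUPP handlers. Roughly (a simplified sketch, not quoted
verbatim from io_uring.c):

	/* Simplified sketch of the !CONFIG_NET stub macros. */
	#define IO_NETOP_FN(op)						\
	static int io_##op(struct io_kiocb *req, unsigned int issue_flags) \
	{								\
		return -EOPNOTSUPP;					\
	}

	#define IO_NETOP_PREP(op)					\
	IO_NETOP_FN(op)							\
	static int io_##op##_prep(struct io_kiocb *req,			\
				  const struct io_uring_sqe *sqe)	\
	{								\
		return -EOPNOTSUPP;					\
	}

That is what allows the per-function #ifdef CONFIG_NET blocks around
io_shutdown_prep() and io_shutdown() to be dropped: with CONFIG_NET the real
implementations now sit in the net section, and without it the stubs above
provide the same -EOPNOTSUPP behavior.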



* Re: [PATCH 2/6] io_uring: make prep and issue side of req handlers named consistently
  2022-05-24 21:37 ` [PATCH 2/6] io_uring: make prep and issue side of req handlers named consistently Jens Axboe
@ 2022-05-25  6:06   ` Kanchan Joshi
  0 siblings, 0 replies; 13+ messages in thread
From: Kanchan Joshi @ 2022-05-25  6:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring


On Tue, May 24, 2022 at 03:37:23PM -0600, Jens Axboe wrote:
>Almost all of them already are; the odd ones out are the poll remove
>and the files update requests. Name them like the others, which is:
>
>io_#cmdname_prep	for request preparation
>io_#cmdname		for request issue

        case IORING_OP_READV:
        case IORING_OP_READ_FIXED:
        case IORING_OP_READ:
        case IORING_OP_WRITEV:
        case IORING_OP_WRITE_FIXED:
        case IORING_OP_WRITE:
                return io_prep_rw(req, sqe);

Should io_prep_rw be renamed to io_rw_prep as well?
Looks good.
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>






* Re: [PATCH 1/6] io_uring: make timeout prep handlers consistent with other prep handlers
  2022-05-24 21:37 ` [PATCH 1/6] io_uring: make timeout prep handlers consistent with other prep handlers Jens Axboe
@ 2022-05-25  6:16   ` Kanchan Joshi
  0 siblings, 0 replies; 13+ messages in thread
From: Kanchan Joshi @ 2022-05-25  6:16 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring


On Tue, May 24, 2022 at 03:37:22PM -0600, Jens Axboe wrote:
>All other opcodes take a {req, sqe} pair for prep handling. Split out
>a common timeout prep helper so that timeouts and linked timeouts can
>both use the same one while following that convention.

Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>





* Re: [PATCH 3/6] io_uring: add io_op_defs 'def' pointer in req init and issue
  2022-05-24 21:37 ` [PATCH 3/6] io_uring: add io_op_defs 'def' pointer in req init and issue Jens Axboe
@ 2022-05-25  6:41   ` Kanchan Joshi
  2022-05-25 11:37     ` Jens Axboe
  0 siblings, 1 reply; 13+ messages in thread
From: Kanchan Joshi @ 2022-05-25  6:41 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring


On Tue, May 24, 2022 at 03:37:24PM -0600, Jens Axboe wrote:
>Define and set it when appropriate, and use it consistently in the
>function rather than using io_op_defs[opcode].

Seems you skipped doing this inside io_alloc_async_data() because the
access there is not that repetitive.

Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>





* Re: [PATCH 5/6] io_uring: drop confusion between cleanup flags
  2022-05-24 21:37 ` [PATCH 5/6] io_uring: drop confusion between cleanup flags Jens Axboe
@ 2022-05-25  8:46   ` Kanchan Joshi
  2022-05-25 11:34     ` Jens Axboe
  0 siblings, 1 reply; 13+ messages in thread
From: Kanchan Joshi @ 2022-05-25  8:46 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring


On Tue, May 24, 2022 at 03:37:26PM -0600, Jens Axboe wrote:
>If the opcode only stores data that needs to be kfree'ed in
>req->async_data, then it doesn't need special handling in
>our cleanup handler.
>
>This has the added bonus of removing knowledge of those kinds of
>special async_data from the io_uring core.
>
>Signed-off-by: Jens Axboe <axboe@kernel.dk>
>---
> fs/io_uring.c | 18 ------------------
> 1 file changed, 18 deletions(-)
>
>diff --git a/fs/io_uring.c b/fs/io_uring.c
>index 408265a03563..8188c47956ad 100644
>--- a/fs/io_uring.c
>+++ b/fs/io_uring.c
>@@ -8229,24 +8229,6 @@ static void io_clean_op(struct io_kiocb *req)
>
> 	if (req->flags & REQ_F_NEED_CLEANUP) {
> 		switch (req->opcode) {
>-		case IORING_OP_READV:
>-		case IORING_OP_READ_FIXED:
>-		case IORING_OP_READ:
>-		case IORING_OP_WRITEV:
>-		case IORING_OP_WRITE_FIXED:
>-		case IORING_OP_WRITE: {
>-			struct io_async_rw *io = req->async_data;
>-
>-			kfree(io->free_iovec);

Removing this kfree may cause a leak.
For READV/WRITEV at least, io->free_iovec will hold the address of the
allocated iovec array when the request passes more than UIO_FASTIOV
segments.
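
To make the ownership concrete, the pattern is roughly the following, with
hypothetical names rather than the actual io_uring helpers: import_iovec()
falls back to a heap allocation once the caller passes more than UIO_FASTIOV
segments, and that allocation is parked in ->free_iovec precisely so a later
cleanup path can kfree() it.

	/* Illustrative sketch, hypothetical names, not the io_uring code. */
	struct example_async_rw {
		struct iovec fast_iov[UIO_FASTIOV];
		struct iovec *free_iovec;	/* kmalloc'ed array, or NULL */
		struct iov_iter iter;
	};

	static int example_import(struct example_async_rw *io,
				  const struct iovec __user *uvec,
				  unsigned int nr_segs)
	{
		struct iovec *iov = io->fast_iov;
		ssize_t ret;

		ret = import_iovec(READ, uvec, nr_segs, UIO_FASTIOV,
				   &iov, &io->iter);
		if (ret < 0)
			return ret;
		/*
		 * import_iovec() leaves 'iov' NULL when the inline fast_iov
		 * array was big enough, and points it at a kmalloc'ed array
		 * otherwise.  That allocation is what the deleted
		 * kfree(io->free_iovec) above used to release.
		 */
		io->free_iovec = iov;
		return 0;
	}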





* Re: [PATCH 5/6] io_uring: drop confusion between cleanup flags
  2022-05-25  8:46   ` Kanchan Joshi
@ 2022-05-25 11:34     ` Jens Axboe
  0 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-05-25 11:34 UTC (permalink / raw)
  To: Kanchan Joshi; +Cc: io-uring

On 5/25/22 2:46 AM, Kanchan Joshi wrote:
> On Tue, May 24, 2022 at 03:37:26PM -0600, Jens Axboe wrote:
>> If the opcode only stores data that needs to be kfree'ed in
>> req->async_data, then it doesn't need special handling in
>> our cleanup handler.
>>
>> This has the added bonus of removing knowledge of those kinds of
>> special async_data from the io_uring core.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>> fs/io_uring.c | 18 ------------------
>> 1 file changed, 18 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 408265a03563..8188c47956ad 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -8229,24 +8229,6 @@ static void io_clean_op(struct io_kiocb *req)
>>
>>     if (req->flags & REQ_F_NEED_CLEANUP) {
>>         switch (req->opcode) {
>> -        case IORING_OP_READV:
>> -        case IORING_OP_READ_FIXED:
>> -        case IORING_OP_READ:
>> -        case IORING_OP_WRITEV:
>> -        case IORING_OP_WRITE_FIXED:
>> -        case IORING_OP_WRITE: {
>> -            struct io_async_rw *io = req->async_data;
>> -
>> -            kfree(io->free_iovec);
> 
> Removing this kfree may cause a leak.
> For READV/WRITEV at least, io->free_iovec will hold the address of the
> allocated iovec array when the request passes more than UIO_FASTIOV
> segments.

I think I was too tired when I did this and saw it as freeing ->async_data,
which is obviously not the case. I'll drop this one.


-- 
Jens Axboe



* Re: [PATCH 3/6] io_uring: add io_op_defs 'def' pointer in req init and issue
  2022-05-25  6:41   ` Kanchan Joshi
@ 2022-05-25 11:37     ` Jens Axboe
  0 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-05-25 11:37 UTC (permalink / raw)
  To: Kanchan Joshi; +Cc: io-uring

On 5/25/22 12:41 AM, Kanchan Joshi wrote:
> On Tue, May 24, 2022 at 03:37:24PM -0600, Jens Axboe wrote:
>> Define and set it when appropriate, and use it consistently in the
>> function rather than using io_op_defs[opcode].
> 
> Seems you skipped doing this inside io_alloc_async_data() because the
> access there is not that repetitive.

Right, and because it won't get impacted by abstracting out opcode
handlers.

-- 
Jens Axboe


