From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org,
ZiyangZhang <ZiyangZhang@linux.alibaba.com>,
Harris James R <james.r.harris@intel.com>,
Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V2 2/7] ublk: cleanup io cmd code path by adding ublk_fill_io()
Date: Thu, 27 Apr 2023 20:44:09 +0800
Message-ID: <20230427124414.319945-3-ming.lei@redhat.com>
In-Reply-To: <20230427124414.319945-1-ming.lei@redhat.com>

Add one small helper to clean up the io command handling code path.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
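
As an aside for readers new to the driver: the cleanup is easy to try
out in isolation. The standalone userspace C sketch below mirrors the
pattern of this patch -- the demo_* names and types are invented for
illustration and are not ublk code. Three paths that each hand-rolled
the same three field assignments, in differing orders, now share a
single helper, so the per-io state is always filled the same way.

/*
 * Standalone sketch of the cleanup pattern -- NOT driver code.
 * demo_fill_io() plays the role of ublk_fill_io(): one place that
 * fills the per-io state, replacing three open-coded copies.
 */
#include <stdio.h>

struct demo_cmd { int id; };

struct demo_io {
	struct demo_cmd *cmd;
	unsigned int flags;
	unsigned long addr;
};

#define DEMO_IO_FLAG_ACTIVE 0x1

static inline void demo_fill_io(struct demo_io *io, struct demo_cmd *cmd,
		unsigned long buf_addr)
{
	io->cmd = cmd;
	io->flags |= DEMO_IO_FLAG_ACTIVE;
	io->addr = buf_addr;
}

int main(void)
{
	struct demo_cmd cmd = { .id = 1 };
	struct demo_io io = { 0 };

	/* Every command path funnels through the one helper. */
	demo_fill_io(&io, &cmd, 0x1000);
	printf("cmd=%d flags=%#x addr=%#lx\n", io.cmd->id, io.flags, io.addr);
	return 0;
}
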
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 068d8b99605b..0980c9fec669 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1238,6 +1238,14 @@ static inline int ublk_check_cmd_op(u32 cmd_op)
 	return 0;
 }
 
+static inline void ublk_fill_io(struct ublk_io *io, struct io_uring_cmd *cmd,
+		unsigned long buf_addr)
+{
+	io->cmd = cmd;
+	io->flags |= UBLK_IO_FLAG_ACTIVE;
+	io->addr = buf_addr;
+}
+
 static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 		unsigned int issue_flags,
 		struct ublksrv_io_cmd *ub_cmd)
@@ -1304,10 +1312,8 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 		/* FETCH_RQ has to provide IO buffer if NEED GET DATA is not enabled */
 		if (!ub_cmd->addr && !ublk_need_get_data(ubq))
 			goto out;
-		io->cmd = cmd;
-		io->flags |= UBLK_IO_FLAG_ACTIVE;
-		io->addr = ub_cmd->addr;
 
+		ublk_fill_io(io, cmd, ub_cmd->addr);
 		ublk_mark_io_ready(ub, ubq);
 		break;
 	case UBLK_IO_COMMIT_AND_FETCH_REQ:
@@ -1320,17 +1326,13 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 			goto out;
 		if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
 			goto out;
-		io->addr = ub_cmd->addr;
-		io->flags |= UBLK_IO_FLAG_ACTIVE;
-		io->cmd = cmd;
+		ublk_fill_io(io, cmd, ub_cmd->addr);
 		ublk_commit_completion(ub, ub_cmd);
 		break;
 	case UBLK_IO_NEED_GET_DATA:
 		if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
 			goto out;
-		io->addr = ub_cmd->addr;
-		io->cmd = cmd;
-		io->flags |= UBLK_IO_FLAG_ACTIVE;
+		ublk_fill_io(io, cmd, ub_cmd->addr);
 		ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag);
 		break;
 	default:
--
2.40.0
Thread overview: 8+ messages
2023-04-27 12:44 [PATCH V2 0/7] ublk: cleanup and support user copy Ming Lei
2023-04-27 12:44 ` [PATCH V2 1/7] ublk: kill queuing request by task_work_add Ming Lei
2023-04-27 12:44 ` Ming Lei [this message]
2023-04-27 12:44 ` [PATCH V2 3/7] ublk: cleanup ublk_copy_user_pages Ming Lei
2023-04-27 12:44 ` [PATCH V2 4/7] ublk: grab request reference when the request is handled by userspace Ming Lei
2023-04-27 12:44 ` [PATCH V2 5/7] ublk: support to copy any part of request pages Ming Lei
2023-04-27 12:44 ` [PATCH V2 6/7] ublk: add read()/write() support for ublk char device Ming Lei
2023-04-27 12:44 ` [PATCH V2 7/7] ublk: support user copy Ming Lei