* [PATCHv2 0/2] nvme driver pdu enhancements for passthrough
@ 2021-03-17 20:37 Keith Busch
2021-03-17 20:37 ` [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu Keith Busch
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Keith Busch @ 2021-03-17 20:37 UTC (permalink / raw)
To: joshi.k, hch, sagi, linux-nvme; +Cc: Keith Busch
v1->v2:
Improved change logs, added reviews.
Retain clearing SGL in passthrough commands (hch)
Added a comment explaining why REQ_OP_DRV_IN/OUT is a no-op for
nvme_setup_cmd()
Keith Busch (2):
nvme-pci: allocate nvme_command within driver pdu
nvme: use driver pdu command for passthrough
drivers/nvme/host/core.c | 23 ++++++++++-------------
drivers/nvme/host/fc.c | 5 ++---
drivers/nvme/host/nvme.h | 3 +--
drivers/nvme/host/pci.c | 12 +++++++-----
drivers/nvme/host/rdma.c | 5 +++--
drivers/nvme/host/tcp.c | 5 ++++-
drivers/nvme/target/loop.c | 4 +++-
7 files changed, 30 insertions(+), 27 deletions(-)
--
2.25.4
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
* [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu
2021-03-17 20:37 [PATCHv2 0/2] nvme driver pdu enhancements for passthrough Keith Busch
@ 2021-03-17 20:37 ` Keith Busch
2021-03-17 21:39 ` Himanshu Madhani
` (2 more replies)
2021-03-17 20:37 ` [PATCHv2 2/2] nvme: use driver pdu command for passthrough Keith Busch
` (2 subsequent siblings)
3 siblings, 3 replies; 10+ messages in thread
From: Keith Busch @ 2021-03-17 20:37 UTC (permalink / raw)
To: joshi.k, hch, sagi, linux-nvme; +Cc: Keith Busch
Except for pci, all the nvme transport drivers allocate a command within
the driver's pdu. Align pci with everyone else by allocating the nvme
command within pci's pdu and replace the .queue_rq() stack variable with
this.
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/nvme/host/pci.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index ecd11b1febf8..1a0912146c74 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -224,6 +224,7 @@ struct nvme_queue {
*/
struct nvme_iod {
struct nvme_request req;
+ struct nvme_command cmd;
struct nvme_queue *nvmeq;
bool use_sgl;
int aborted;
@@ -917,7 +918,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
struct nvme_dev *dev = nvmeq->dev;
struct request *req = bd->rq;
struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
- struct nvme_command cmnd;
+ struct nvme_command *cmnd = &iod->cmd;
blk_status_t ret;
iod->aborted = 0;
@@ -931,24 +932,24 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
return BLK_STS_IOERR;
- ret = nvme_setup_cmd(ns, req, &cmnd);
+ ret = nvme_setup_cmd(ns, req, cmnd);
if (ret)
return ret;
if (blk_rq_nr_phys_segments(req)) {
- ret = nvme_map_data(dev, req, &cmnd);
+ ret = nvme_map_data(dev, req, cmnd);
if (ret)
goto out_free_cmd;
}
if (blk_integrity_rq(req)) {
- ret = nvme_map_metadata(dev, req, &cmnd);
+ ret = nvme_map_metadata(dev, req, cmnd);
if (ret)
goto out_unmap_data;
}
blk_mq_start_request(req);
- nvme_submit_cmd(nvmeq, &cmnd, bd->last);
+ nvme_submit_cmd(nvmeq, cmnd, bd->last);
return BLK_STS_OK;
out_unmap_data:
nvme_unmap_data(dev, req);
--
2.25.4
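For readers following along outside the diff context, the shape of this change can be sketched in plain, self-contained C. The types and names below are hypothetical simplifications (the real struct nvme_command is a 64-byte submission queue entry, and the real struct nvme_iod carries DMA-mapping state); the point is only that the command moves from a .queue_rq() stack variable into the per-request pdu:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-in for the kernel's 64-byte SQE. */
struct nvme_command {
	uint8_t  opcode;
	uint8_t  flags;
	uint32_t nsid;
};

/* Per-request driver pdu: after this patch the command lives here
 * instead of on the .queue_rq() stack, as the other transports do. */
struct nvme_iod {
	struct nvme_command cmd;
	int aborted;
};

/* Sketch of the new nvme_queue_rq() flow: take a pointer to the
 * embedded command and build it in place rather than copying a
 * stack-local command around. */
static struct nvme_command *iod_setup_cmd(struct nvme_iod *iod,
					  uint8_t opcode, uint32_t nsid)
{
	struct nvme_command *cmnd = &iod->cmd;

	memset(cmnd, 0, sizeof(*cmnd));
	cmnd->opcode = opcode;
	cmnd->nsid = nsid;
	return cmnd;
}
```

Keeping the command embedded in the pdu also makes it addressable for the request's whole lifetime, which patch 2/2 in this series relies on.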
* [PATCHv2 2/2] nvme: use driver pdu command for passthrough
2021-03-17 20:37 [PATCHv2 0/2] nvme driver pdu enhancements for passthrough Keith Busch
2021-03-17 20:37 ` [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu Keith Busch
@ 2021-03-17 20:37 ` Keith Busch
2021-03-17 21:39 ` Himanshu Madhani
2021-03-17 22:21 ` Sagi Grimberg
2021-03-18 2:00 ` [PATCHv2 0/2] nvme driver pdu enhancements " Jens Axboe
2021-03-18 5:18 ` Christoph Hellwig
3 siblings, 2 replies; 10+ messages in thread
From: Keith Busch @ 2021-03-17 20:37 UTC (permalink / raw)
To: joshi.k, hch, sagi, linux-nvme; +Cc: Keith Busch
All nvme transport drivers preallocate an nvme command for each request.
Use that command in nvme_setup_cmd() instead of requiring drivers to
pass a pointer to it. All nvme drivers must initialize the
generic nvme_request 'cmd' to point to the transport's preallocated
nvme_command.
The generic nvme_request cmd pointer had previously been used only as a
temporary copy for passthrough commands. Since it now points to the
command that gets dispatched, passthrough commands must directly set it
up prior to executing the request.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/nvme/host/core.c | 23 ++++++++++-------------
drivers/nvme/host/fc.c | 5 ++---
drivers/nvme/host/nvme.h | 3 +--
drivers/nvme/host/pci.c | 3 ++-
drivers/nvme/host/rdma.c | 5 +++--
drivers/nvme/host/tcp.c | 5 ++++-
drivers/nvme/target/loop.c | 4 +++-
7 files changed, 25 insertions(+), 23 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 512e5578c6e4..bfc868d6bda9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -575,6 +575,9 @@ EXPORT_SYMBOL_NS_GPL(nvme_put_ns, NVME_TARGET_PASSTHRU);
static inline void nvme_clear_nvme_request(struct request *req)
{
+ struct nvme_command *cmd = nvme_req(req)->cmd;
+
+ memset(cmd, 0, sizeof(*cmd));
nvme_req(req)->retries = 0;
nvme_req(req)->flags = 0;
req->rq_flags |= RQF_DONTPREP;
@@ -593,9 +596,12 @@ static inline void nvme_init_request(struct request *req,
else /* no queuedata implies admin queue */
req->timeout = NVME_ADMIN_TIMEOUT;
+ /* passthru commands should let the driver set the SGL flags */
+ cmd->common.flags &= ~NVME_CMD_SGL_ALL;
+
req->cmd_flags |= REQ_FAILFAST_DRIVER;
nvme_clear_nvme_request(req);
- nvme_req(req)->cmd = cmd;
+ memcpy(nvme_req(req)->cmd, cmd, sizeof(*cmd));
}
struct request *nvme_alloc_request(struct request_queue *q,
@@ -724,14 +730,6 @@ static void nvme_assign_write_stream(struct nvme_ctrl *ctrl,
req->q->write_hints[streamid] += blk_rq_bytes(req) >> 9;
}
-static inline void nvme_setup_passthrough(struct request *req,
- struct nvme_command *cmd)
-{
- memcpy(cmd, nvme_req(req)->cmd, sizeof(*cmd));
- /* passthru commands should let the driver set the SGL flags */
- cmd->common.flags &= ~NVME_CMD_SGL_ALL;
-}
-
static inline void nvme_setup_flush(struct nvme_ns *ns,
struct nvme_command *cmnd)
{
@@ -886,19 +884,18 @@ void nvme_cleanup_cmd(struct request *req)
}
EXPORT_SYMBOL_GPL(nvme_cleanup_cmd);
-blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
- struct nvme_command *cmd)
+blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
{
+ struct nvme_command *cmd = nvme_req(req)->cmd;
blk_status_t ret = BLK_STS_OK;
if (!(req->rq_flags & RQF_DONTPREP))
nvme_clear_nvme_request(req);
- memset(cmd, 0, sizeof(*cmd));
switch (req_op(req)) {
case REQ_OP_DRV_IN:
case REQ_OP_DRV_OUT:
- nvme_setup_passthrough(req, cmd);
+ /* these are setup prior to execution in nvme_init_request() */
break;
case REQ_OP_FLUSH:
nvme_setup_flush(ns, cmd);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index fe8c32f0efb1..0d4fb8edbf7c 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2128,6 +2128,7 @@ nvme_fc_init_request(struct blk_mq_tag_set *set, struct request *rq,
op->op.fcp_req.first_sgl = op->sgl;
op->op.fcp_req.private = &op->priv[0];
nvme_req(rq)->ctrl = &ctrl->ctrl;
+ nvme_req(rq)->cmd = &op->op.cmd_iu.sqe;
return res;
}
@@ -2759,8 +2760,6 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
struct nvme_fc_ctrl *ctrl = queue->ctrl;
struct request *rq = bd->rq;
struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
- struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
- struct nvme_command *sqe = &cmdiu->sqe;
enum nvmefc_fcp_datadir io_dir;
bool queue_ready = test_bit(NVME_FC_Q_LIVE, &queue->flags);
u32 data_len;
@@ -2770,7 +2769,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
!nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
return nvmf_fail_nonready_command(&queue->ctrl->ctrl, rq);
- ret = nvme_setup_cmd(ns, rq, sqe);
+ ret = nvme_setup_cmd(ns, rq);
if (ret)
return ret;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 76de7ed55d90..b0863c59fac4 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -623,8 +623,7 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
struct request *nvme_alloc_request(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags);
void nvme_cleanup_cmd(struct request *req);
-blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
- struct nvme_command *cmd);
+blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req);
int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
void *buf, unsigned bufflen);
int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 1a0912146c74..d47bb18b976a 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -430,6 +430,7 @@ static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req,
iod->nvmeq = nvmeq;
nvme_req(req)->ctrl = &dev->ctrl;
+ nvme_req(req)->cmd = &iod->cmd;
return 0;
}
@@ -932,7 +933,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
return BLK_STS_IOERR;
- ret = nvme_setup_cmd(ns, req, cmnd);
+ ret = nvme_setup_cmd(ns, req);
if (ret)
return ret;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 9b0a61841851..f94c6e7d5d93 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -314,6 +314,7 @@ static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
NVME_RDMA_DATA_SGL_SIZE;
req->queue = queue;
+ nvme_req(rq)->cmd = req->sqe.data;
return 0;
}
@@ -2038,7 +2039,7 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
struct request *rq = bd->rq;
struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
struct nvme_rdma_qe *sqe = &req->sqe;
- struct nvme_command *c = sqe->data;
+ struct nvme_command *c = nvme_req(rq)->cmd;
struct ib_device *dev;
bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
blk_status_t ret;
@@ -2061,7 +2062,7 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
ib_dma_sync_single_for_cpu(dev, sqe->dma,
sizeof(struct nvme_command), DMA_TO_DEVICE);
- ret = nvme_setup_cmd(ns, rq, c);
+ ret = nvme_setup_cmd(ns, rq);
if (ret)
goto unmap_qe;
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index ee25492ec130..4d9ea6c2ef87 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -417,6 +417,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
{
struct nvme_tcp_ctrl *ctrl = set->driver_data;
struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+ struct nvme_tcp_cmd_pdu *pdu;
int queue_idx = (set == &ctrl->tag_set) ? hctx_idx + 1 : 0;
struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx];
u8 hdgst = nvme_tcp_hdgst_len(queue);
@@ -427,8 +428,10 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
if (!req->pdu)
return -ENOMEM;
+ pdu = req->pdu;
req->queue = queue;
nvme_req(rq)->ctrl = &ctrl->ctrl;
+ nvme_req(rq)->cmd = &pdu->cmd;
return 0;
}
@@ -2259,7 +2262,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
u8 hdgst = nvme_tcp_hdgst_len(queue), ddgst = 0;
blk_status_t ret;
- ret = nvme_setup_cmd(ns, rq, &pdu->cmd);
+ ret = nvme_setup_cmd(ns, rq);
if (ret)
return ret;
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 8fd53df5cf7d..6665da3b634f 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -141,7 +141,7 @@ static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
if (!nvmf_check_ready(&queue->ctrl->ctrl, req, queue_ready))
return nvmf_fail_nonready_command(&queue->ctrl->ctrl, req);
- ret = nvme_setup_cmd(ns, req, &iod->cmd);
+ ret = nvme_setup_cmd(ns, req);
if (ret)
return ret;
@@ -205,8 +205,10 @@ static int nvme_loop_init_request(struct blk_mq_tag_set *set,
unsigned int numa_node)
{
struct nvme_loop_ctrl *ctrl = set->driver_data;
+ struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req);
nvme_req(req)->ctrl = &ctrl->ctrl;
+ nvme_req(req)->cmd = &iod->cmd;
return nvme_loop_init_iod(ctrl, blk_mq_rq_to_pdu(req),
(set == &ctrl->tag_set) ? hctx_idx + 1 : 0);
}
--
2.25.4
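The indirection this patch introduces can likewise be sketched in self-contained C. The names here are hypothetical simplifications: transport_pdu stands in for pci's nvme_iod, fc's fcp_op, tcp's pdu, and loop's iod. Each transport wires the generic request's cmd pointer to its preallocated command once, at init_request time, and the core then builds the command through that pointer:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified types. In the kernel, nvme_req(rq) returns
 * the generic struct nvme_request embedded in each transport's pdu. */
struct nvme_command {
	uint8_t opcode;
	uint8_t flags;
};

struct nvme_request {
	struct nvme_command *cmd;	/* points at the transport's command */
	int retries;
};

/* A transport's per-request pdu: the generic request plus the
 * preallocated command the transport will actually dispatch. */
struct transport_pdu {
	struct nvme_request req;
	struct nvme_command cmd;
};

/* Done once per request at tag-set setup, mirroring the
 * nvme_req(rq)->cmd = ... hunks added to each transport. */
static void transport_init_request(struct transport_pdu *pdu)
{
	pdu->req.cmd = &pdu->cmd;
}

/* Core-side setup no longer takes a command pointer: it reaches the
 * transport's preallocated command through the generic request. */
static struct nvme_command *setup_cmd(struct nvme_request *req,
				      uint8_t opcode)
{
	struct nvme_command *cmd = req->cmd;

	memset(cmd, 0, sizeof(*cmd));
	cmd->opcode = opcode;
	return cmd;
}
```

Because the generic cmd pointer now aliases the command that gets dispatched, passthrough setup can write directly into it up front, which is why the nvme_setup_passthrough() copy helper could be deleted.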
* Re: [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu
2021-03-17 20:37 ` [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu Keith Busch
@ 2021-03-17 21:39 ` Himanshu Madhani
2021-03-18 1:08 ` Chaitanya Kulkarni
2021-03-18 5:27 ` Kanchan Joshi
2 siblings, 0 replies; 10+ messages in thread
From: Himanshu Madhani @ 2021-03-17 21:39 UTC (permalink / raw)
To: Keith Busch; +Cc: joshi.k, hch, sagi, linux-nvme
> On Mar 17, 2021, at 3:37 PM, Keith Busch <kbusch@kernel.org> wrote:
>
> Except for pci, all the nvme transport drivers allocate a command within
> the driver's pdu. Align pci with everyone else by allocating the nvme
> command within pci's pdu and replace the .queue_rq() stack variable with
> this.
>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Looks Good
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
--
Himanshu Madhani Oracle Linux Engineering
* Re: [PATCHv2 2/2] nvme: use driver pdu command for passthrough
2021-03-17 20:37 ` [PATCHv2 2/2] nvme: use driver pdu command for passthrough Keith Busch
@ 2021-03-17 21:39 ` Himanshu Madhani
2021-03-17 22:21 ` Sagi Grimberg
1 sibling, 0 replies; 10+ messages in thread
From: Himanshu Madhani @ 2021-03-17 21:39 UTC (permalink / raw)
To: Keith Busch; +Cc: joshi.k, hch, sagi, linux-nvme
> On Mar 17, 2021, at 3:37 PM, Keith Busch <kbusch@kernel.org> wrote:
>
> All nvme transport drivers preallocate an nvme command for each request.
> Use that command in nvme_setup_cmd() instead of requiring drivers to
> pass a pointer to it. All nvme drivers must initialize the
> generic nvme_request 'cmd' to point to the transport's preallocated
> nvme_command.
>
> The generic nvme_request cmd pointer had previously been used only as a
> temporary copy for passthrough commands. Since it now points to the
> command that gets dispatched, passthrough commands must directly set it
> up prior to executing the request.
>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Looks Good
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
--
Himanshu Madhani Oracle Linux Engineering
* Re: [PATCHv2 2/2] nvme: use driver pdu command for passthrough
2021-03-17 20:37 ` [PATCHv2 2/2] nvme: use driver pdu command for passthrough Keith Busch
2021-03-17 21:39 ` Himanshu Madhani
@ 2021-03-17 22:21 ` Sagi Grimberg
1 sibling, 0 replies; 10+ messages in thread
From: Sagi Grimberg @ 2021-03-17 22:21 UTC (permalink / raw)
To: Keith Busch, joshi.k, hch, linux-nvme
> All nvme transport drivers preallocate an nvme command for each request.
> Use that command in nvme_setup_cmd() instead of requiring drivers to
> pass a pointer to it. All nvme drivers must initialize the
> generic nvme_request 'cmd' to point to the transport's preallocated
> nvme_command.
>
> The generic nvme_request cmd pointer had previously been used only as a
> temporary copy for passthrough commands. Since it now points to the
> command that gets dispatched, passthrough commands must directly set it
> up prior to executing the request.
This looks good,
Did you get to run blktests with this?
* Re: [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu
2021-03-17 20:37 ` [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu Keith Busch
2021-03-17 21:39 ` Himanshu Madhani
@ 2021-03-18 1:08 ` Chaitanya Kulkarni
2021-03-18 5:27 ` Kanchan Joshi
2 siblings, 0 replies; 10+ messages in thread
From: Chaitanya Kulkarni @ 2021-03-18 1:08 UTC (permalink / raw)
To: Keith Busch, joshi.k, hch, sagi, linux-nvme
On 3/17/21 14:04, Keith Busch wrote:
> Except for pci, all the nvme transport drivers allocate a command within
> the driver's pdu. Align pci with everyone else by allocating the nvme
> command within pci's pdu and replace the .queue_rq() stack variable with
> this.
>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
Looks good.
Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
* Re: [PATCHv2 0/2] nvme driver pdu enhancements for passthrough
2021-03-17 20:37 [PATCHv2 0/2] nvme driver pdu enhancements for passthrough Keith Busch
2021-03-17 20:37 ` [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu Keith Busch
2021-03-17 20:37 ` [PATCHv2 2/2] nvme: use driver pdu command for passthrough Keith Busch
@ 2021-03-18 2:00 ` Jens Axboe
2021-03-18 5:18 ` Christoph Hellwig
3 siblings, 0 replies; 10+ messages in thread
From: Jens Axboe @ 2021-03-18 2:00 UTC (permalink / raw)
To: Keith Busch, joshi.k, hch, sagi, linux-nvme
On 3/17/21 2:37 PM, Keith Busch wrote:
> v1->v2:
>
> Improved change logs, added reviews.
>
> Retain clearing SGL in passthrough commands (hch)
>
> Added a comment explaining why REQ_OP_DRV_IN/OUT is a no-op for
> nvme_setup_cmd()
>
> Keith Busch (2):
> nvme-pci: allocate nvme_command within driver pdu
> nvme: use driver pdu command for passthrough
These are both nice.
Reviewed-by: Jens Axboe <axboe@kernel.dk>
--
Jens Axboe
* Re: [PATCHv2 0/2] nvme driver pdu enhancements for passthrough
2021-03-17 20:37 [PATCHv2 0/2] nvme driver pdu enhancements for passthrough Keith Busch
` (2 preceding siblings ...)
2021-03-18 2:00 ` [PATCHv2 0/2] nvme driver pdu enhancements " Jens Axboe
@ 2021-03-18 5:18 ` Christoph Hellwig
3 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2021-03-18 5:18 UTC (permalink / raw)
To: Keith Busch; +Cc: joshi.k, hch, sagi, linux-nvme
Thanks,
applied to nvme-5.13.
* Re: [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu
2021-03-17 20:37 ` [PATCHv2 1/2] nvme-pci: allocate nvme_command within driver pdu Keith Busch
2021-03-17 21:39 ` Himanshu Madhani
2021-03-18 1:08 ` Chaitanya Kulkarni
@ 2021-03-18 5:27 ` Kanchan Joshi
2 siblings, 0 replies; 10+ messages in thread
From: Kanchan Joshi @ 2021-03-18 5:27 UTC (permalink / raw)
To: Keith Busch; +Cc: KANCHAN JOSHI, Christoph Hellwig, Sagi Grimberg, linux-nvme
On Thu, Mar 18, 2021 at 2:32 AM Keith Busch <kbusch@kernel.org> wrote:
>
> Except for pci, all the nvme transport drivers allocate a command within
> the driver's pdu. Align pci with everyone else by allocating the nvme
> command within pci's pdu and replace the .queue_rq() stack variable with
> this.
>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Keith Busch <kbusch@kernel.org>
LGTM.
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
--
Kanchan