Linux-NVME Archive on lore.kernel.org
* [PATCH V3 0/6] nvmet: passthru fixes and improvements
@ 2020-10-22  1:02 Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 1/6] nvme-core: add a helper to init req from nvme cmd Chaitanya Kulkarni
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-22  1:02 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

Hi,

This patch series has several small trivial fixes and a few code
optimizations.

Regards,
Chaitanya

Changes from V2:-

1. Remove the "nvme-core: annotate nvme_alloc_request()" patch and
   split nvme_alloc_request() into nvme_alloc_request_qid_any() and
   nvme_alloc_request_qid(), with the addition of a prep patch for
   the same.
2. Remove the cleanup patches and trim down the series.
3. Remove the code for setting up the op_flags for passthru.  
4. Rebase and retest on the nvme-5.10.

Changes from V1:-

1. Remove the sg_cnt check and nvmet_passthru_sg_map() annotation.
2. Add annotations and performance numbers for newly added patch #1.
3. Move the ctrl refcount and module refcount into nvme_dev_open()
   and nvme_dev_release(). Add a preparation patch for the same.
4. Add reviewed-by tags.

Chaitanya Kulkarni (6):
  nvme-core: add a helper to init req from nvme cmd
  nvme-core: split nvme_alloc_request()
  nvmet: remove op_flags for passthru commands
  block: move blk_rq_bio_prep() to linux/blk-mq.h
  nvmet: use minimized version of blk_rq_append_bio
  nvmet: use inline bio for passthru fast path

 block/blk.h                    | 12 --------
 drivers/nvme/host/core.c       | 56 +++++++++++++++++++++++-----------
 drivers/nvme/host/lightnvm.c   |  5 ++-
 drivers/nvme/host/nvme.h       |  4 +--
 drivers/nvme/host/pci.c        |  6 ++--
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c | 38 +++++++++++++----------
 include/linux/blk-mq.h         | 12 ++++++++
 8 files changed, 81 insertions(+), 53 deletions(-)

-- 
2.22.1


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH V3 1/6] nvme-core: add a helper to init req from nvme cmd
  2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
@ 2020-10-22  1:02 ` Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 2/6] nvme-core: split nvme_alloc_request() Chaitanya Kulkarni
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-22  1:02 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

This is a preparation patch that adds a helper used by the next
patch, which splits nvme_alloc_request() into
nvme_alloc_request_qid_any() and nvme_alloc_request_qid(). The new
functions share the code that initializes the allocated request from
an NVMe command.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 56e2a22e8a02..5bc52594fe63 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -514,6 +514,14 @@ static inline void nvme_clear_nvme_request(struct request *req)
 	}
 }
 
+static inline void nvme_init_req_from_cmd(struct request *req,
+		struct nvme_command *cmd)
+{
+	req->cmd_flags |= REQ_FAILFAST_DRIVER;
+	nvme_clear_nvme_request(req);
+	nvme_req(req)->cmd = cmd;
+}
+
 struct request *nvme_alloc_request(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
 {
@@ -529,9 +537,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
 	if (IS_ERR(req))
 		return req;
 
-	req->cmd_flags |= REQ_FAILFAST_DRIVER;
-	nvme_clear_nvme_request(req);
-	nvme_req(req)->cmd = cmd;
+	nvme_init_req_from_cmd(req, cmd);
 
 	return req;
 }
-- 
2.22.1



* [PATCH V3 2/6] nvme-core: split nvme_alloc_request()
  2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 1/6] nvme-core: add a helper to init req from nvme cmd Chaitanya Kulkarni
@ 2020-10-22  1:02 ` Chaitanya Kulkarni
  2020-11-03 18:24   ` Christoph Hellwig
  2020-10-22  1:02 ` [PATCH V3 3/6] nvmet: remove op_flags for passthru commands Chaitanya Kulkarni
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-22  1:02 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

Right now nvme_alloc_request() allocates a request from the block
layer based on the value of the qid. When qid is set to NVME_QID_ANY
it uses blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx().

nvme_alloc_request() is called from several contexts; the only place
where a non-NVME_QID_ANY value is used is for the fabrics connect
commands :-

nvme_submit_sync_cmd()		NVME_QID_ANY
nvme_features()			NVME_QID_ANY
nvme_sec_submit()		NVME_QID_ANY
nvmf_reg_read32()		NVME_QID_ANY
nvmf_reg_read64()		NVME_QID_ANY
nvmf_reg_write32()		NVME_QID_ANY
nvmf_connect_admin_queue()	NVME_QID_ANY
nvme_submit_user_cmd()		NVME_QID_ANY
	nvme_alloc_request()
nvme_keep_alive()		NVME_QID_ANY
	nvme_alloc_request()
nvme_timeout()			NVME_QID_ANY
	nvme_alloc_request()
nvme_delete_queue()		NVME_QID_ANY
	nvme_alloc_request()
nvmet_passthru_execute_cmd()	NVME_QID_ANY
	nvme_alloc_request()
nvmf_connect_io_queue() 	QID
	__nvme_submit_sync_cmd()
		nvme_alloc_request()

With passthru, nvme_alloc_request() now falls into the I/O fast path,
where blk_mq_alloc_request_hctx() never gets called, yet the qid
check adds an extra branch and extra code to the fast path.

Split the nvme_alloc_request() into nvme_alloc_request_qid_any() and
nvme_alloc_request_qid().

Replace each call to nvme_alloc_request() with the NVME_QID_ANY param
with a call to the newly added nvme_alloc_request_qid_any().

Replace the call to nvme_alloc_request() with the QID param with a
call to either nvme_alloc_request_qid_any() or
nvme_alloc_request_qid(), based on the qid value set in
__nvme_submit_sync_cmd().
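The effect of the split -- hoisting the qid branch out of the common
allocator so the fast path is branch-free -- can be sketched in
userspace with toy stand-ins (hypothetical names, not the kernel API):

```c
#include <assert.h>

#define QID_ANY -1

struct toy_req { int qid; };

/* Before: one entry point, every caller pays for the qid branch. */
static struct toy_req *toy_alloc_request(int qid)
{
	static struct toy_req r;

	if (qid == QID_ANY)
		r.qid = 0;		/* "any queue" placement */
	else
		r.qid = qid - 1;	/* explicit hctx placement */
	return &r;
}

/* After: the fast path calls this branch-free variant directly... */
static struct toy_req *toy_alloc_request_qid_any(void)
{
	static struct toy_req r;

	r.qid = 0;
	return &r;
}

/* ...and only the fabrics connect path uses the qid-aware variant. */
static struct toy_req *toy_alloc_request_qid(int qid)
{
	static struct toy_req r;

	r.qid = qid ? qid - 1 : 0;
	return &r;
}
```

Both variants must reproduce the behavior of the combined helper for
their respective callers; only the branch moves to the one caller that
actually needs it.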

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c       | 44 +++++++++++++++++++++++-----------
 drivers/nvme/host/lightnvm.c   |  5 ++--
 drivers/nvme/host/nvme.h       |  4 ++--
 drivers/nvme/host/pci.c        |  6 ++---
 drivers/nvme/target/passthru.c |  2 +-
 5 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5bc52594fe63..87e56ef48f5d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -522,26 +522,38 @@ static inline void nvme_init_req_from_cmd(struct request *req,
 	nvme_req(req)->cmd = cmd;
 }
 
-struct request *nvme_alloc_request(struct request_queue *q,
+static inline unsigned int nvme_req_op(struct nvme_command *cmd)
+{
+	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+}
+
+struct request *nvme_alloc_request_qid_any(struct request_queue *q,
+		struct nvme_command *cmd, blk_mq_req_flags_t flags)
+{
+	struct request *req;
+
+	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
+	if (unlikely(IS_ERR(req)))
+		return req;
+
+	nvme_init_req_from_cmd(req, cmd);
+	return req;
+}
+EXPORT_SYMBOL_GPL(nvme_alloc_request_qid_any);
+
+static struct request *nvme_alloc_request_qid(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
 {
-	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
 	struct request *req;
 
-	if (qid == NVME_QID_ANY) {
-		req = blk_mq_alloc_request(q, op, flags);
-	} else {
-		req = blk_mq_alloc_request_hctx(q, op, flags,
-				qid ? qid - 1 : 0);
-	}
+	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
+			qid ? qid - 1 : 0);
 	if (IS_ERR(req))
 		return req;
 
 	nvme_init_req_from_cmd(req, cmd);
-
 	return req;
 }
-EXPORT_SYMBOL_GPL(nvme_alloc_request);
 
 static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable)
 {
@@ -899,7 +911,11 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 	struct request *req;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, flags, qid);
+	if (qid == NVME_QID_ANY)
+		req = nvme_alloc_request_qid_any(q, cmd, flags);
+	else
+		req = nvme_alloc_request_qid(q, cmd, flags, qid);
+
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1069,7 +1085,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	void *meta = NULL;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
+	req = nvme_alloc_request_qid_any(q, cmd, 0);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1143,8 +1159,8 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl)
 {
 	struct request *rq;
 
-	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, BLK_MQ_REQ_RESERVED,
-			NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(ctrl->admin_q, &ctrl->ka_cmd,
+			BLK_MQ_REQ_RESERVED);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 8e562d0f2c30..b1ee1a0310f6 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -653,7 +653,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
 
 	nvme_nvm_rqtocmd(rqd, ns, cmd);
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(q, (struct nvme_command *)cmd, 0);
 	if (IS_ERR(rq))
 		return rq;
 
@@ -767,8 +767,7 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q,
 	DECLARE_COMPLETION_ONSTACK(wait);
 	int ret = 0;
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0,
-			NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(q, (struct nvme_command *)vcmd, 0);
 	if (IS_ERR(rq)) {
 		ret = -ENOMEM;
 		goto err_cmd;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index cc111136a981..f39a0a387a51 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -608,8 +608,8 @@ int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
 void nvme_start_freeze(struct nvme_ctrl *ctrl);
 
 #define NVME_QID_ANY -1
-struct request *nvme_alloc_request(struct request_queue *q,
-		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid);
+struct request *nvme_alloc_request_qid_any(struct request_queue *q,
+		struct nvme_command *cmd, blk_mq_req_flags_t flags);
 void nvme_cleanup_cmd(struct request *req);
 blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmd);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index df8f3612107f..94f329b5f980 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1289,8 +1289,8 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		"I/O %d QID %d timeout, aborting\n",
 		 req->tag, nvmeq->qid);
 
-	abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
-			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	abort_req = nvme_alloc_request_qid_any(dev->ctrl.admin_q, &cmd,
+			BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(abort_req)) {
 		atomic_inc(&dev->ctrl.abort_limit);
 		return BLK_EH_RESET_TIMER;
@@ -2204,7 +2204,7 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
 	cmd.delete_queue.opcode = opcode;
 	cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
 
-	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	req = nvme_alloc_request_qid_any(q, &cmd, BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 56c571052216..76affbc3bd9a 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -236,7 +236,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 		q = ns->queue;
 	}
 
-	rq = nvme_alloc_request(q, req->cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(q, req->cmd, BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(rq)) {
 		status = NVME_SC_INTERNAL;
 		goto out_put_ns;
-- 
2.22.1



* [PATCH V3 3/6] nvmet: remove op_flags for passthru commands
  2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 1/6] nvme-core: add a helper to init req from nvme cmd Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 2/6] nvme-core: split nvme_alloc_request() Chaitanya Kulkarni
@ 2020-10-22  1:02 ` Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 4/6] block: move blk_rq_bio_prep() to linux/blk-mq.h Chaitanya Kulkarni
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-22  1:02 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

Setting op_flags has no effect for passthru commands. Remove the code
that sets up the op_flags in nvmet_passthru_map_sg().

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/passthru.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 76affbc3bd9a..e5b5657bcce2 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -182,18 +182,12 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
 	int sg_cnt = req->sg_cnt;
 	struct scatterlist *sg;
-	int op_flags = 0;
 	struct bio *bio;
 	int i, ret;
 
-	if (req->cmd->common.opcode == nvme_cmd_flush)
-		op_flags = REQ_FUA;
-	else if (nvme_is_write(req->cmd))
-		op_flags = REQ_SYNC | REQ_IDLE;
-
 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
 	bio->bi_end_io = bio_put;
-	bio->bi_opf = req_op(rq) | op_flags;
+	bio->bi_opf = req_op(rq);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
-- 
2.22.1



* [PATCH V3 4/6] block: move blk_rq_bio_prep() to linux/blk-mq.h
  2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (2 preceding siblings ...)
  2020-10-22  1:02 ` [PATCH V3 3/6] nvmet: remove op_flags for passthru commands Chaitanya Kulkarni
@ 2020-10-22  1:02 ` Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 5/6] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-22  1:02 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

This is a preparation patch to provide a minimal block-layer
request-bio append helper to the NVMeOF passthru driver, which is in
the fast path and doesn't need the extra work done by
blk_rq_append_bio().

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/blk.h            | 12 ------------
 include/linux/blk-mq.h | 12 ++++++++++++
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index dfab98465db9..e05507a8d1e3 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -91,18 +91,6 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
 	return __bvec_gap_to_prev(q, bprv, offset);
 }
 
-static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
-		unsigned int nr_segs)
-{
-	rq->nr_phys_segments = nr_segs;
-	rq->__data_len = bio->bi_iter.bi_size;
-	rq->bio = rq->biotail = bio;
-	rq->ioprio = bio_prio(bio);
-
-	if (bio->bi_disk)
-		rq->rq_disk = bio->bi_disk;
-}
-
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 void blk_flush_integrity(void);
 bool __bio_integrity_endio(struct bio *);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b23eeca4d677..d1d277073761 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -591,6 +591,18 @@ static inline void blk_mq_cleanup_rq(struct request *rq)
 		rq->q->mq_ops->cleanup_rq(rq);
 }
 
+static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
+{
+	rq->nr_phys_segments = nr_segs;
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio->bi_disk)
+		rq->rq_disk = bio->bi_disk;
+}
+
 blk_qc_t blk_mq_submit_bio(struct bio *bio);
 
 #endif
-- 
2.22.1



* [PATCH V3 5/6] nvmet: use minimized version of blk_rq_append_bio
  2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (3 preceding siblings ...)
  2020-10-22  1:02 ` [PATCH V3 4/6] block: move blk_rq_bio_prep() to linux/blk-mq.h Chaitanya Kulkarni
@ 2020-10-22  1:02 ` Chaitanya Kulkarni
  2020-10-22  1:02 ` [PATCH V3 6/6] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
  2020-11-03 18:32 ` [PATCH V3 0/6] nvmet: passthru fixes and improvements Christoph Hellwig
  6 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-22  1:02 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

The function blk_rq_append_bio() is a generic API written for all
types of drivers (including those with bounce buffers) and for
contexts where the request may already have a bio (i.e. rq->bio !=
NULL).

It mainly does three things: calculate the segments, handle the
bounce queue, and either call blk_rq_bio_prep() when rq->bio == NULL
or handle the low-level merge case.

The NVMe PCIe and fabrics transports currently do not use the queue
bounce mechanism, yet blk_rq_append_bio() performs this check for
every passthru request, doing extra work in the fast path.

When I ran I/Os with different block sizes on the passthru controller
I found that we can reuse req->sg_cnt instead of iterating over the
bvecs to find out nr_segs in blk_rq_append_bio(). This calculation in
blk_rq_append_bio() duplicates work, given that we already have the
value in req->sg_cnt. (Correct me here if I'm wrong.)

With the NVMe passthru request-based driver we allocate a fresh
request each time, so on every call to blk_rq_append_bio() rq->bio is
NULL, i.e. we don't really need the second condition in
blk_rq_append_bio() or the resulting error check in its caller.

So for the NVMeOF passthru driver, recalculating the segments, the
bounce check, and the ll_back_merge code are not needed, and we can
get away with the minimal version of blk_rq_append_bio(), which
removes the error check from the fast path along with the extra
variable in nvmet_passthru_map_sg().

This patch updates nvmet_passthru_map_sg() so that it only appends
the bio to the request in the context of the NVMeOF passthru driver.
Following are the perf numbers :-

With current implementation (blk_rq_append_bio()) :-
----------------------------------------------------
+    5.80%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.88%     0.00%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.86%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.17%     0.00%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd

With this patch using blk_rq_bio_prep() :-
----------------------------------------------------
+    3.14%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    3.26%     0.01%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.37%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.18%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.84%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.87%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
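
The contrast between the full append path and the minimal prep call
can be sketched in userspace with toy stand-ins (none of the names
below are the real kernel API; this is an illustration, not the
kernel code):

```c
#include <assert.h>
#include <stddef.h>

struct toy_bio { int vcnt; size_t size; };

struct toy_rq {
	struct toy_bio *bio;
	int nr_phys_segments;
	size_t data_len;
};

/* Analogue of blk_rq_append_bio(): recounts segments by iterating and
 * carries a merge branch that passthru, with rq->bio always NULL, can
 * never take. */
static int toy_rq_append_bio(struct toy_rq *rq, struct toy_bio *bio)
{
	int i, nr_segs = 0;

	for (i = 0; i < bio->vcnt; i++)	/* duplicated work: count is known */
		nr_segs++;

	if (rq->bio)			/* merge path: dead code for passthru */
		return -1;

	rq->bio = bio;
	rq->nr_phys_segments = nr_segs;
	rq->data_len = bio->size;
	return 0;
}

/* Analogue of blk_rq_bio_prep(): the caller already knows the segment
 * count (req->sg_cnt in the driver), so just assign the fields. */
static void toy_rq_bio_prep(struct toy_rq *rq, struct toy_bio *bio,
		int nr_segs)
{
	rq->bio = bio;
	rq->nr_phys_segments = nr_segs;
	rq->data_len = bio->size;
}
```

For a caller that always has a fresh request and a known segment count,
both paths produce the same request state; the prep variant just skips
the recount and the dead merge branch.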

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/target/passthru.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index e5b5657bcce2..496ffedb77dc 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -183,7 +183,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	int sg_cnt = req->sg_cnt;
 	struct scatterlist *sg;
 	struct bio *bio;
-	int i, ret;
+	int i;
 
 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
 	bio->bi_end_io = bio_put;
@@ -198,11 +198,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 		sg_cnt--;
 	}
 
-	ret = blk_rq_append_bio(rq, &bio);
-	if (unlikely(ret)) {
-		bio_put(bio);
-		return ret;
-	}
+	blk_rq_bio_prep(rq, bio, req->sg_cnt);
 
 	return 0;
 }
-- 
2.22.1



* [PATCH V3 6/6] nvmet: use inline bio for passthru fast path
  2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (4 preceding siblings ...)
  2020-10-22  1:02 ` [PATCH V3 5/6] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
@ 2020-10-22  1:02 ` Chaitanya Kulkarni
  2020-10-22 15:57   ` Logan Gunthorpe
  2020-11-03 18:32 ` [PATCH V3 0/6] nvmet: passthru fixes and improvements Christoph Hellwig
  6 siblings, 1 reply; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-22  1:02 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

In nvmet_passthru_execute_cmd(), which is a high-frequency function,
we use bio_alloc(), which leads to a memory allocation from the fs
pool for each I/O.

For NVMeOF, nvmet_req already has an inline_bvec allocated as part of
request allocation; it can back a preallocated bio, since we already
know the size of the request before bio allocation.

Introduce a bio member in the nvmet_req passthru anon union. In the
fast path, check whether we can get away with the inline bvec and bio
from nvmet_req, using bio_init() instead of allocating with
bio_alloc().

This avoids any new memory allocation under high memory pressure and
gets rid of the extra work of allocation (bio_alloc()) vs
initialization (bio_init()) when the transfer len is <=
NVMET_MAX_INLINE_DATA_LEN, which the user can configure at compile
time.
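
The scheme is essentially a small-buffer optimization; a userspace
sketch with hypothetical toy names (not the kernel API) of both the
setup and the matching completion check:

```c
#include <assert.h>
#include <stdlib.h>

#define TOY_INLINE_LEN 4096	/* stand-in for NVMET_MAX_INLINE_DATA_LEN */

struct toy_req {
	char inline_buf[TOY_INLINE_LEN];	/* like req->inline_bvec */
	char *buf;
};

/* Fast path: reuse the preallocated inline storage (bio_init()-style);
 * slow path: allocate (bio_alloc()-style). */
static int toy_map(struct toy_req *req, size_t transfer_len)
{
	if (transfer_len <= TOY_INLINE_LEN)
		req->buf = req->inline_buf;
	else
		req->buf = malloc(transfer_len);
	return req->buf ? 0 : -1;
}

/* Completion mirrors nvmet_passthru_bio_done(): only release what was
 * actually allocated; the inline storage lives in the request itself. */
static void toy_done(struct toy_req *req)
{
	if (req->buf != req->inline_buf)
		free(req->buf);
}
```

The completion side must know which case it is in; here, as in the
patch, it distinguishes the two by comparing against the inline
object's address.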

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c | 20 ++++++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 559a15ccc322..408a13084fb4 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -330,6 +330,7 @@ struct nvmet_req {
 			struct work_struct      work;
 		} f;
 		struct {
+			struct bio		inline_bio;
 			struct request		*rq;
 			struct work_struct      work;
 			bool			use_workqueue;
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 496ffedb77dc..32498b4302cc 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -178,6 +178,14 @@ static void nvmet_passthru_req_done(struct request *rq,
 	blk_mq_free_request(rq);
 }
 
+static void nvmet_passthru_bio_done(struct bio *bio)
+{
+	struct nvmet_req *req = bio->bi_private;
+
+	if (bio != &req->p.inline_bio)
+		bio_put(bio);
+}
+
 static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
 	int sg_cnt = req->sg_cnt;
@@ -186,13 +194,20 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	int i;
 
-	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
-	bio->bi_end_io = bio_put;
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->p.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+	} else {
+		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
+	}
+
+	bio->bi_end_io = nvmet_passthru_bio_done;
 	bio->bi_opf = req_op(rq);
+	bio->bi_private = req;
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
 				    sg->offset) < sg->length) {
-			bio_put(bio);
+			nvmet_passthru_bio_done(bio);
 			return -EINVAL;
 		}
 		sg_cnt--;
-- 
2.22.1



* Re: [PATCH V3 6/6] nvmet: use inline bio for passthru fast path
  2020-10-22  1:02 ` [PATCH V3 6/6] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
@ 2020-10-22 15:57   ` Logan Gunthorpe
  2020-10-29 19:02     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 14+ messages in thread
From: Logan Gunthorpe @ 2020-10-22 15:57 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-block, linux-nvme; +Cc: kbusch, hch, sagi

On 2020-10-21 7:02 p.m., Chaitanya Kulkarni wrote:
> In nvmet_passthru_execute_cmd(), which is a high-frequency function,
> we use bio_alloc(), which leads to a memory allocation from the fs
> pool for each I/O.
> 
> For NVMeOF, nvmet_req already has an inline_bvec allocated as part of
> request allocation; it can back a preallocated bio, since we already
> know the size of the request before bio allocation.
> 
> Introduce a bio member in the nvmet_req passthru anon union. In the
> fast path, check whether we can get away with the inline bvec and bio
> from nvmet_req, using bio_init() instead of allocating with
> bio_alloc().
> 
> This avoids any new memory allocation under high memory pressure and
> gets rid of the extra work of allocation (bio_alloc()) vs
> initialization (bio_init()) when the transfer len is <=
> NVMET_MAX_INLINE_DATA_LEN, which the user can configure at compile
> time.
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>  drivers/nvme/target/nvmet.h    |  1 +
>  drivers/nvme/target/passthru.c | 20 ++++++++++++++++++--
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index 559a15ccc322..408a13084fb4 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -330,6 +330,7 @@ struct nvmet_req {
>  			struct work_struct      work;
>  		} f;
>  		struct {
> +			struct bio		inline_bio;
>  			struct request		*rq;
>  			struct work_struct      work;
>  			bool			use_workqueue;
> diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
> index 496ffedb77dc..32498b4302cc 100644
> --- a/drivers/nvme/target/passthru.c
> +++ b/drivers/nvme/target/passthru.c
> @@ -178,6 +178,14 @@ static void nvmet_passthru_req_done(struct request *rq,
>  	blk_mq_free_request(rq);
>  }
>  
> +static void nvmet_passthru_bio_done(struct bio *bio)
> +{
> +	struct nvmet_req *req = bio->bi_private;
> +
> +	if (bio != &req->p.inline_bio)
> +		bio_put(bio);
> +}
> +
>  static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>  {
>  	int sg_cnt = req->sg_cnt;
> @@ -186,13 +194,20 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>  	int i;
>  
> -	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
> -	bio->bi_end_io = bio_put;
> +	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
> +		bio = &req->p.inline_bio;
> +		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
> +	} else {
> +		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
> +	}
> +
> +	bio->bi_end_io = nvmet_passthru_bio_done;

I still think it's cleaner to change bi_end_io for the inline/alloc'd
cases by simply setting bi_end_io to bio_put() only in the bio_alloc()
case. This should also be more efficient as it's one less indirect
call and condition for the inline case.
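
A userspace sketch of that suggestion (hypothetical toy names, not the
posted kernel code): set a release callback only in the allocated
case, so the inline completion involves no pointer comparison and no
indirect call.

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

#define TOY_INLINE_LEN 4096

struct toy_req {
	char inline_buf[TOY_INLINE_LEN];
	char *buf;
	void (*release)(struct toy_req *);	/* like bio->bi_end_io */
};

static void toy_release_heap(struct toy_req *req)
{
	free(req->buf);
}

static int toy_map(struct toy_req *req, size_t transfer_len)
{
	if (transfer_len <= TOY_INLINE_LEN) {
		req->buf = req->inline_buf;
		req->release = NULL;		/* inline: nothing to put */
	} else {
		req->buf = malloc(transfer_len);
		req->release = toy_release_heap; /* only the alloc'd case */
	}
	return req->buf ? 0 : -1;
}

/* Generic completion still checks for a callback (as bio_endio() does
 * for bi_end_io), but the inline case never takes an indirect call. */
static void toy_done(struct toy_req *req)
{
	if (req->release)
		req->release(req);
}
```

The decision moves from completion time to setup time, where the
inline/allocated distinction is already known.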

Besides that, the entire series looks good to me.

Reviewed-by: Logan Gunthorpe <logang@deltatee.com>

Logan


* Re: [PATCH V3 6/6] nvmet: use inline bio for passthru fast path
  2020-10-22 15:57   ` Logan Gunthorpe
@ 2020-10-29 19:02     ` Chaitanya Kulkarni
  2020-11-03 18:25       ` hch
  0 siblings, 1 reply; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-29 19:02 UTC (permalink / raw)
  To: hch, sagi; +Cc: linux-block, kbusch, Logan Gunthorpe, linux-nvme

On 10/22/20 08:58, Logan Gunthorpe wrote:
>
>
> On 2020-10-21 7:02 p.m., Chaitanya Kulkarni wrote:
>> In nvmet_passthru_execute_cmd(), which is a high-frequency function,
>> we use bio_alloc(), which leads to a memory allocation from the fs
>> pool for each I/O.
>>
>> For NVMeOF, nvmet_req already has an inline_bvec allocated as part of
>> request allocation; it can back a preallocated bio, since we already
>> know the size of the request before bio allocation.
>>
>> Introduce a bio member in the nvmet_req passthru anon union. In the
>> fast path, check whether we can get away with the inline bvec and bio
>> from nvmet_req, using bio_init() instead of allocating with
>> bio_alloc().
>>
>> This avoids any new memory allocation under high memory pressure and
>> gets rid of the extra work of allocation (bio_alloc()) vs
>> initialization (bio_init()) when the transfer len is <=
>> NVMET_MAX_INLINE_DATA_LEN, which the user can configure at compile
>> time.
>>
>> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
>> ---
>>  drivers/nvme/target/nvmet.h    |  1 +
>>  drivers/nvme/target/passthru.c | 20 ++++++++++++++++++--
>>  2 files changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
>> index 559a15ccc322..408a13084fb4 100644
>> --- a/drivers/nvme/target/nvmet.h
>> +++ b/drivers/nvme/target/nvmet.h
>> @@ -330,6 +330,7 @@ struct nvmet_req {
>>  			struct work_struct      work;
>>  		} f;
>>  		struct {
>> +			struct bio		inline_bio;
>>  			struct request		*rq;
>>  			struct work_struct      work;
>>  			bool			use_workqueue;
>> diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
>> index 496ffedb77dc..32498b4302cc 100644
>> --- a/drivers/nvme/target/passthru.c
>> +++ b/drivers/nvme/target/passthru.c
>> @@ -178,6 +178,14 @@ static void nvmet_passthru_req_done(struct request *rq,
>>  	blk_mq_free_request(rq);
>>  }
>>  
>> +static void nvmet_passthru_bio_done(struct bio *bio)
>> +{
>> +	struct nvmet_req *req = bio->bi_private;
>> +
>> +	if (bio != &req->p.inline_bio)
>> +		bio_put(bio);
>> +}
>> +
>>  static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>>  {
>>  	int sg_cnt = req->sg_cnt;
>> @@ -186,13 +194,21 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>>  	int i;
>>  
>> -	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>> -	bio->bi_end_io = bio_put;
>> +	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
>> +		bio = &req->p.inline_bio;
>> +		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
>> +	} else {
>> +		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>> +	}
>> +
>> +	bio->bi_end_io = nvmet_passthru_bio_done;
> I still think it's cleaner to change bi_end_io for the inline/alloc'd
> cases by simply setting bi_end_io to bio_put() only in the bio_alloc
> case. This should also be more efficient, as it's one less indirect
> call and one less condition for the inline case.
>
> Besides that, the entire series looks good to me.
>
> Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
>
> Logan
>
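The trade-off Logan describes can be modeled outside the kernel. The sketch below is a user-space stand-in only: `fake_bio`, `fake_req`, `pick_bio`, and the size limit are hypothetical analogues of the kernel's `struct bio`, `struct nvmet_req`, and `NVMET_MAX_INLINE_DATA_LEN`, not the real API. It illustrates the suggested shape, where only the heap-allocated bio gets a completion callback that frees it:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_INLINE_DATA_LEN 4096    /* stand-in for NVMET_MAX_INLINE_DATA_LEN */

/* user-space analogue of struct bio (hypothetical) */
struct fake_bio {
	void (*end_io)(struct fake_bio *);  /* completion callback; stays NULL for inline */
};

/* user-space analogue of struct nvmet_req (hypothetical) */
struct fake_req {
	struct fake_bio inline_bio;         /* preallocated, like req->p.inline_bio */
	size_t transfer_len;
};

static int put_calls;

/* analogue of bio_put(): only heap-allocated bios are freed */
static void fake_bio_put(struct fake_bio *bio)
{
	put_calls++;
	free(bio);
}

/*
 * Suggested fast path: short transfers reuse the embedded inline bio and
 * never set a callback; long transfers allocate and install the put as
 * end_io, so the inline case pays no indirect call at completion.
 */
static struct fake_bio *pick_bio(struct fake_req *req)
{
	struct fake_bio *bio;

	if (req->transfer_len <= MAX_INLINE_DATA_LEN) {
		bio = &req->inline_bio;
		memset(bio, 0, sizeof(*bio));    /* bio_init() analogue */
	} else {
		bio = calloc(1, sizeof(*bio));   /* bio_alloc() analogue */
		bio->end_io = fake_bio_put;
	}
	return bio;
}

/* completion path: invoke end_io only when one was set (heap case) */
static void complete_bio(struct fake_bio *bio)
{
	if (bio->end_io)
		bio->end_io(bio);
}
```

Under this model the inline case touches no allocator and no callback at all, which is the efficiency point being made above.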
Sagi/Christoph, any comments on this one ?

This series has been sitting out for a while now.


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH V3 2/6] nvme-core: split nvme_alloc_request()
  2020-10-22  1:02 ` [PATCH V3 2/6] nvme-core: split nvme_alloc_request() Chaitanya Kulkarni
@ 2020-11-03 18:24   ` Christoph Hellwig
  2020-11-04 21:03     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2020-11-03 18:24 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: sagi, linux-nvme, linux-block, kbusch, logang, hch

On Wed, Oct 21, 2020 at 06:02:30PM -0700, Chaitanya Kulkarni wrote:
> +static inline unsigned int nvme_req_op(struct nvme_command *cmd)
> +{
> +	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
> +}

Why is this added here while nvme_init_req_from_cmd is added in a prep
patch?  I'm actually fine either way, but doing it differently for the
different helpers is a little inconsistent.

> +
> +struct request *nvme_alloc_request_qid_any(struct request_queue *q,
> +		struct nvme_command *cmd, blk_mq_req_flags_t flags)

I'd call this just nvme_alloc_request to keep the short name for the
normal use case.

> +	struct request *req;
> +
> +	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
> +	if (unlikely(IS_ERR(req)))
> +		return req;
> +
> +	nvme_init_req_from_cmd(req, cmd);
> +	return req;

Could be simplified to:

	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
	if (!IS_ERR(req))
		nvme_init_req_from_cmd(req, cmd);
	return req;

Note that IS_ERR already contains an embedded unlikely().
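The simplification leans on the kernel's error-pointer convention: an allocation either returns a valid pointer or an errno encoded in the pointer value, so "initialize only on success, return once" needs no early return. Below is a user-space sketch of that shape with minimal, hypothetical analogues of the kernel's ERR_PTR/IS_ERR helpers (the real ones live in linux/err.h) and of the allocation pair being discussed:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* minimal user-space analogues of the kernel's error-pointer helpers */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	/* error pointers occupy the top MAX_ERRNO values of the address space */
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

struct fake_request { int initialized; };

/* stand-in for blk_mq_alloc_request(): fails with -ENOMEM on demand */
static struct fake_request *fake_alloc(int fail)
{
	if (fail)
		return ERR_PTR(-ENOMEM);
	return calloc(1, sizeof(struct fake_request));
}

/*
 * The suggested shape: allocate, initialize only on success, and
 * return the (possibly error-encoded) pointer exactly once.
 */
static struct fake_request *alloc_and_init(int fail)
{
	struct fake_request *req = fake_alloc(fail);

	if (!IS_ERR(req))
		req->initialized = 1;   /* nvme_init_req_from_cmd() analogue */
	return req;
}
```

The error pointer propagates to the caller unchanged, which is why the success and failure paths collapse into a single return statement.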

> +static struct request *nvme_alloc_request_qid(struct request_queue *q,
>  		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
>  {
>  	struct request *req;
>  
> +	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
> +			qid ? qid - 1 : 0);
>  	if (IS_ERR(req))
>  		return req;
>  
>  	nvme_init_req_from_cmd(req, cmd);
>  	return req;

Same here.

>  }
> -EXPORT_SYMBOL_GPL(nvme_alloc_request);

I think nvme_alloc_request_qid needs to be exported as well.

FYI, this also doesn't apply to the current nvme-5.10 tree any more.


* Re: [PATCH V3 6/6] nvmet: use inline bio for passthru fast path
  2020-10-29 19:02     ` Chaitanya Kulkarni
@ 2020-11-03 18:25       ` hch
  0 siblings, 0 replies; 14+ messages in thread
From: hch @ 2020-11-03 18:25 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: sagi, linux-nvme, linux-block, kbusch, Logan Gunthorpe, hch

On Thu, Oct 29, 2020 at 07:02:27PM +0000, Chaitanya Kulkarni wrote:
> > I still think it's cleaner to change bi_end_io for the inline/alloc'd
> > cases by simply setting bi_end_io to bio_put() only in the bio_alloc
> > case. This should also be more efficient, as it's one less indirect
> > call and one less condition for the inline case.
> >
> > Besides that, the entire series looks good to me.
> >
> > Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
> >
> > Logan
> >
> Sagi/Christoph, any comments on this one ?
> 
> This series has been sitting out for a while now.

I think the suggestion from Logan makes sense.


* Re: [PATCH V3 0/6] nvmet: passthru fixes and improvements
  2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (5 preceding siblings ...)
  2020-10-22  1:02 ` [PATCH V3 6/6] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
@ 2020-11-03 18:32 ` Christoph Hellwig
  2020-11-03 23:50   ` Chaitanya Kulkarni
  6 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2020-11-03 18:32 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: sagi, linux-nvme, linux-block, kbusch, logang, hch

Btw, what is fixed by the series?  It looks like just (useful) cleanups
to me.


* Re: [PATCH V3 0/6] nvmet: passthru fixes and improvements
  2020-11-03 18:32 ` [PATCH V3 0/6] nvmet: passthru fixes and improvements Christoph Hellwig
@ 2020-11-03 23:50   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-03 23:50 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, kbusch, logang, sagi, linux-nvme

On 11/3/20 10:32, Christoph Hellwig wrote:
> Btw, what is fixed by the series?  It looks like just (useful) cleanups
> to me.
>
No fixes, as we did a separate series for those; I will remove it.



* Re: [PATCH V3 2/6] nvme-core: split nvme_alloc_request()
  2020-11-03 18:24   ` Christoph Hellwig
@ 2020-11-04 21:03     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-04 21:03 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, kbusch, logang, sagi, linux-nvme

On 11/3/20 10:24, Christoph Hellwig wrote:
> On Wed, Oct 21, 2020 at 06:02:30PM -0700, Chaitanya Kulkarni wrote:
>> +static inline unsigned int nvme_req_op(struct nvme_command *cmd)
>> +{
>> +	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
>> +}
> Why is this added here while nvme_init_req_from_cmd is added in a prep
> patch?  I'm actually fine either way, but doing it differently for the
> different helpers is a little inconsistent.

I'll move this into the first prep patch.

>> +
>> +struct request *nvme_alloc_request_qid_any(struct request_queue *q,
>> +		struct nvme_command *cmd, blk_mq_req_flags_t flags)
> I'd call this just nvme_alloc_request to keep the short name for the
> normal use case.
Okay.
>
>> +	struct request *req;
>> +
>> +	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
>> +	if (unlikely(IS_ERR(req)))
>> +		return req;
>> +
>> +	nvme_init_req_from_cmd(req, cmd);
>> +	return req;
> Could be simplified to:
>
> 	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
> 	if (!IS_ERR(req))
> 		nvme_init_req_from_cmd(req, cmd);
> 	return req;
>
> Note that IS_ERR already contains an embedded unlikely().
Sure.
>> +static struct request *nvme_alloc_request_qid(struct request_queue *q,
>>  		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
>>  {
>>  	struct request *req;
>>  
>> +	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
>> +			qid ? qid - 1 : 0);
>>  	if (IS_ERR(req))
>>  		return req;
>>  
>>  	nvme_init_req_from_cmd(req, cmd);
>>  	return req;
> Same here.
Will do.
>>  }
>> -EXPORT_SYMBOL_GPL(nvme_alloc_request);
> I think nvme_alloc_request_qid needs to be exported as well.
>
> FYI, this also doesn't apply to the current nvme-5.10 tree any more.
>
Since it conflicts with the timeout series, I will rebase and resend
once the timeout series gets in; otherwise it makes reviews confusing
and stale at times.



end of thread, back to index

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-22  1:02 [PATCH V3 0/6] nvmet: passthru fixes and improvements Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 1/6] nvme-core: add a helper to init req from nvme cmd Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 2/6] nvme-core: split nvme_alloc_request() Chaitanya Kulkarni
2020-11-03 18:24   ` Christoph Hellwig
2020-11-04 21:03     ` Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 3/6] nvmet: remove op_flags for passthru commands Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 4/6] block: move blk_rq_bio_prep() to linux/blk-mq.h Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 5/6] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
2020-10-22  1:02 ` [PATCH V3 6/6] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
2020-10-22 15:57   ` Logan Gunthorpe
2020-10-29 19:02     ` Chaitanya Kulkarni
2020-11-03 18:25       ` hch
2020-11-03 18:32 ` [PATCH V3 0/6] nvmet: passthru fixes and improvements Christoph Hellwig
2020-11-03 23:50   ` Chaitanya Kulkarni
