All of lore.kernel.org
* [PATCH V2 00/12] nvmet: passthru fixes and improvements
@ 2020-08-31 22:26 Chaitanya Kulkarni
  2020-08-31 22:26 ` [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request() Chaitanya Kulkarni
                   ` (11 more replies)
  0 siblings, 12 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:26 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

Hi,

This patch series has several small, trivial fixes and a few code
optimizations.

Regards,
Chaitanya

Changes from V1: -

1. Remove the sg_cnt check and nvmet_passthru_sg_map() annotation.
2. Add annotations and performance numbers for newly added patch #1.
3. Move ctrl refcount and module refcount into nvme_dev_open() and
   nvme_dev_release(). Add a preparation patch for the same.
4. Add reviewed-by tags.

Chaitanya Kulkarni (12):
  nvme-core: annotate nvme_alloc_request()
  nvmet: for pt I/O commands use likely for ns check
  nvmet: use consistent type with id->nlbaf
  nvmet: use consistent type for op_flag
  nvmet: use unlikely for uncommon commands
  nvmet: remove op_flags for write commands
  nvmet: decouple nvme_ctrl_get_by_path()
  nvmet: move get/put ctrl into dev open/release
  nvmet: fix nvme module ref count Oops
  block: move blk_rq_bio_prep() to linux/blk-mq.h
  nvmet: use minimized version of blk_rq_append_bio
  nvmet: use inline bio for passthru fast path

 block/blk.h                    | 12 -------
 drivers/nvme/host/core.c       | 31 ++++++++++------
 drivers/nvme/host/nvme.h       |  2 +-
 drivers/nvme/target/nvmet.h    |  2 ++
 drivers/nvme/target/passthru.c | 64 +++++++++++++++++++++-------------
 include/linux/blk-mq.h         | 12 +++++++
 6 files changed, 76 insertions(+), 47 deletions(-)

-- 
2.22.1


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 44+ messages in thread

* [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
@ 2020-08-31 22:26 ` Chaitanya Kulkarni
  2020-09-01  1:56   ` Baolin Wang
  2020-09-03 16:17   ` Christoph Hellwig
  2020-08-31 22:26 ` [PATCH V2 02/12] nvmet: for pt I/O commands use likely for ns check Chaitanya Kulkarni
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:26 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

The function nvme_alloc_request() is called from different contexts.
The only place where it uses a non-NVME_QID_ANY value is for the
fabrics connect commands :-

nvme_submit_sync_cmd()		NVME_QID_ANY
nvme_features()			NVME_QID_ANY
nvme_sec_submit()		NVME_QID_ANY
nvmf_reg_read32()		NVME_QID_ANY
nvmf_reg_read64()		NVME_QID_ANY
nvmf_reg_write32()		NVME_QID_ANY
nvmf_connect_admin_queue()	NVME_QID_ANY
nvmf_connect_io_queue() 	QID
	__nvme_submit_sync_cmd()
		nvme_alloc_request()

nvme_submit_user_cmd()		NVME_QID_ANY
	nvme_alloc_request()
nvme_keep_alive()		NVME_QID_ANY
	nvme_alloc_request()
nvme_timeout()			NVME_QID_ANY
	nvme_alloc_request()
nvme_delete_queue()		NVME_QID_ANY
	nvme_alloc_request()
nvmet_passthru_execute_cmd()	NVME_QID_ANY
	nvme_alloc_request()

With passthru, nvme_alloc_request() now falls into the I/O fast path.
Annotate the block layer request allocation case with likely and the
error condition with unlikely.

The only case where this annotation is wrong is for the fabrics connect
commands, which are low-frequency commands anyway.
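
For reference, the kernel's likely()/unlikely() hints compile down to
__builtin_expect(). Below is a minimal userspace sketch of the dispatch
being annotated; the macro definitions mirror <linux/compiler.h>, and
pick_alloc_path() is a hypothetical stand-in for the qid check in
nvme_alloc_request(), not the kernel function itself :-

```c
#include <assert.h>

/* Userspace mirrors of the kernel's branch-prediction hints. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

#define NVME_QID_ANY (-1)

/*
 * Hypothetical stand-in for the qid dispatch in nvme_alloc_request():
 * returns 0 for the plain blk_mq_alloc_request() path (taken by the
 * passthru fast path) and 1 for the per-hctx path used only by the
 * low-frequency fabrics connect commands.
 */
static int pick_alloc_path(int qid)
{
	if (likely(qid == NVME_QID_ANY))
		return 0;
	return 1;
}
```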

This leads to the following performance change in the passthru code :-

Bandwidth up (better) by 1.7 MB/s :-
-----------------------------------------------------------------------
* Average Bandwidth nvme-alloc-default: 272.10000 MB
* Average Bandwidth nvme-alloc-likely:  273.80000 MB

Latency down (better) by ~5 usec :-
-----------------------------------------------------------------------
* Average Latency util nvme-alloc-default: 917.90300 (usec)
* Average Latency util nvme-alloc-likely:  912.57100 (usec)

nvme-alloc-default.fio.10.log:read: IOPS=69.2k, BW=270MiB/s
nvme-alloc-default.fio.1.log: read: IOPS=69.6k, BW=272MiB/s
nvme-alloc-default.fio.2.log: read: IOPS=71.1k, BW=278MiB/s
nvme-alloc-default.fio.3.log: read: IOPS=70.8k, BW=276MiB/s
nvme-alloc-default.fio.4.log: read: IOPS=70.3k, BW=275MiB/s
nvme-alloc-default.fio.5.log: read: IOPS=69.8k, BW=273MiB/s
nvme-alloc-default.fio.6.log: read: IOPS=67.5k, BW=264MiB/s
nvme-alloc-default.fio.7.log: read: IOPS=67.0k, BW=266MiB/s
nvme-alloc-default.fio.8.log: read: IOPS=69.5k, BW=271MiB/s
nvme-alloc-default.fio.9.log: read: IOPS=70.7k, BW=276MiB/s

nvme-alloc-likely.fio.10.log: read: IOPS=69.5k, BW=272MiB/s
nvme-alloc-likely.fio.1.log:  read: IOPS=70.1k, BW=274MiB/s
nvme-alloc-likely.fio.2.log:  read: IOPS=68.4k, BW=267MiB/s
nvme-alloc-likely.fio.3.log:  read: IOPS=69.3k, BW=271MiB/s
nvme-alloc-likely.fio.4.log:  read: IOPS=69.9k, BW=273MiB/s
nvme-alloc-likely.fio.5.log:  read: IOPS=70.2k, BW=274MiB/s
nvme-alloc-likely.fio.6.log:  read: IOPS=71.6k, BW=280MiB/s
nvme-alloc-likely.fio.7.log:  read: IOPS=70.5k, BW=276MiB/s
nvme-alloc-likely.fio.8.log:  read: IOPS=70.6k, BW=276MiB/s
nvme-alloc-likely.fio.9.log:  read: IOPS=70.3k, BW=275MiB/s

nvme-alloc-default.fio.10.log:lat (usec): avg=924.12
nvme-alloc-default.fio.1.log: lat (usec): avg=917.80
nvme-alloc-default.fio.2.log: lat (usec): avg=898.58
nvme-alloc-default.fio.3.log: lat (usec): avg=903.38
nvme-alloc-default.fio.4.log: lat (usec): avg=908.74
nvme-alloc-default.fio.5.log: lat (usec): avg=915.63
nvme-alloc-default.fio.6.log: lat (usec): avg=946.41
nvme-alloc-default.fio.7.log: lat (usec): avg=940.14
nvme-alloc-default.fio.8.log: lat (usec): avg=920.30
nvme-alloc-default.fio.9.log: lat (usec): avg=903.93

nvme-alloc-likely.fio.10.log: lat (usec): avg=919.55
nvme-alloc-likely.fio.1.log:  lat (usec): avg=912.23
nvme-alloc-likely.fio.2.log:  lat (usec): avg=934.59
nvme-alloc-likely.fio.3.log:  lat (usec): avg=921.68
nvme-alloc-likely.fio.4.log:  lat (usec): avg=914.25
nvme-alloc-likely.fio.5.log:  lat (usec): avg=910.31
nvme-alloc-likely.fio.6.log:  lat (usec): avg=892.22
nvme-alloc-likely.fio.7.log:  lat (usec): avg=906.25
nvme-alloc-likely.fio.8.log:  lat (usec): avg=905.41
nvme-alloc-likely.fio.9.log:  lat (usec): avg=909.22

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5702a3843746..a62fdcbfd1cc 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -508,7 +508,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
 	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
 	struct request *req;
 
-	if (qid == NVME_QID_ANY) {
+	if (likely(qid == NVME_QID_ANY)) {
 		req = blk_mq_alloc_request(q, op, flags);
 	} else {
 		req = blk_mq_alloc_request_hctx(q, op, flags,
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 02/12] nvmet: for pt I/O commands use likely for ns check
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
  2020-08-31 22:26 ` [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request() Chaitanya Kulkarni
@ 2020-08-31 22:26 ` Chaitanya Kulkarni
  2020-09-03 16:18   ` Christoph Hellwig
  2020-08-31 22:26 ` [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf Chaitanya Kulkarni
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:26 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

I/O commands (nvme_cmd_read, nvme_cmd_write) are the most common
commands when accessing a passthru controller. For I/O commands ns is
always present, so that condition is marked as likely. Annotate the
post-request-submission ns check with likely as well, since it depends
on the earlier likely ns check.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/passthru.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 8bd7f656e240..18e96eda39b1 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -268,7 +268,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 				      nvmet_passthru_req_done);
 	}
 
-	if (ns)
+	if (likely(ns))
 		nvme_put_ns(ns);
 
 	return;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
  2020-08-31 22:26 ` [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request() Chaitanya Kulkarni
  2020-08-31 22:26 ` [PATCH V2 02/12] nvmet: for pt I/O commands use likely for ns check Chaitanya Kulkarni
@ 2020-08-31 22:26 ` Chaitanya Kulkarni
  2020-09-01 15:55   ` Keith Busch
  2020-08-31 22:26 ` [PATCH V2 04/12] nvmet: use consistent type for op_flag Chaitanya Kulkarni
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:26 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

In nvmet_passthru_override_id_ns(), the loop variable used to iterate
over the namespace LBA formats is declared as int, which is
inconsistent with the type of id->nlbaf, which is u8.

Make loop variable of the same type as id->nlbaf.
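
One thing to watch when matching the counter type to the bound: a u8
counter can never exceed 255, so a loop such as "for (i = 0; i <=
nlbaf; i++)" would never terminate if nlbaf were 255 (i++ wraps back
to 0). A minimal userspace sketch of iterating the 0's-based format
count; struct lbaf and clear_lbaf_ms() are illustrative stand-ins, not
the kernel definitions, and the counter is deliberately kept wider
than the bound :-

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in, not the kernel's struct nvme_lbaf. */
struct lbaf {
	uint16_t ms;	/* metadata size */
};

/*
 * Clear the metadata size of every reported LBA format.  nlbaf is a
 * 0's-based count, so nlbaf + 1 formats are valid.  The counter is
 * kept wider than u8 on purpose: a u8 counter with "i <= nlbaf"
 * would wrap and loop forever if nlbaf were 255.
 */
static unsigned int clear_lbaf_ms(struct lbaf *lbaf, uint8_t nlbaf)
{
	unsigned int i;

	for (i = 0; i < (unsigned int)nlbaf + 1; i++)
		lbaf[i].ms = 0;

	return i;	/* number of formats touched */
}
```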

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/passthru.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 18e96eda39b1..15dd63e14227 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -112,7 +112,7 @@ static u16 nvmet_passthru_override_id_ns(struct nvmet_req *req)
 {
 	u16 status = NVME_SC_SUCCESS;
 	struct nvme_id_ns *id;
-	int i;
+	u8 i;
 
 	id = kzalloc(sizeof(*id), GFP_KERNEL);
 	if (!id)
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 04/12] nvmet: use consistent type for op_flag
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (2 preceding siblings ...)
  2020-08-31 22:26 ` [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf Chaitanya Kulkarni
@ 2020-08-31 22:26 ` Chaitanya Kulkarni
  2020-09-03 16:20   ` Christoph Hellwig
  2020-08-31 22:27 ` [PATCH V2 05/12] nvmet: use unlikely for uncommon commands Chaitanya Kulkarni
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:26 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

In nvmet_passthru_map_sg() we set bio->bi_opf to the request op OR'ed
with op_flags derived from the NVMe command. The variable bio->bi_opf
is declared as unsigned int, so use the same type for op_flags.

Also, adjust the declaration order to keep the longest-to-shortest
ordering with the new line length.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/target/passthru.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 15dd63e14227..682291fb6d7b 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -180,9 +180,9 @@ static void nvmet_passthru_req_done(struct request *rq,
 
 static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
+	unsigned int op_flags = 0;
 	int sg_cnt = req->sg_cnt;
 	struct scatterlist *sg;
-	int op_flags = 0;
 	struct bio *bio;
 	int i, ret;
 
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 05/12] nvmet: use unlikely for uncommon commands
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (3 preceding siblings ...)
  2020-08-31 22:26 ` [PATCH V2 04/12] nvmet: use consistent type for op_flag Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-09-03 16:20   ` Christoph Hellwig
  2020-08-31 22:27 ` [PATCH V2 06/12] nvmet: remove op_flags for write commands Chaitanya Kulkarni
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

I/O commands (nvme_cmd_read, nvme_cmd_write) are the most common
commands when accessing a passthru controller, and most controllers
should not have command effects set for read/write I/O commands (at
least none that I know of at this moment). Also, req->p.use_workqueue
is set to true only for two admin commands, which are low-frequency
commands.

Annotate the use_workqueue and command effects check with unlikely in
the fast path.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/passthru.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 682291fb6d7b..8f9c2144a103 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -258,7 +258,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 	 * which is typically in interrupt context.
 	 */
 	effects = nvme_command_effects(ctrl, ns, req->cmd->common.opcode);
-	if (req->p.use_workqueue || effects) {
+	if (unlikely(req->p.use_workqueue || effects)) {
 		INIT_WORK(&req->p.work, nvmet_passthru_execute_cmd_work);
 		req->p.rq = rq;
 		schedule_work(&req->p.work);
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 06/12] nvmet: remove op_flags for write commands
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (4 preceding siblings ...)
  2020-08-31 22:27 ` [PATCH V2 05/12] nvmet: use unlikely for uncommon commands Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-08-31 22:27 ` [PATCH V2 07/12] nvmet: decouple nvme_ctrl_get_by_path() Chaitanya Kulkarni
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

The function nvmet_passthru_map_sg() sets op_flags to
(REQ_SYNC | REQ_IDLE) for write commands. In the block layer these
flags are currently only checked to avoid throttling WRITE_ODIRECT
requests with the REQ_OP_WRITE opcode.

Remove the op_flags assignment for nvme_is_write() case.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/passthru.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 8f9c2144a103..fbe2678aea6a 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -188,8 +188,6 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 
 	if (req->cmd->common.opcode == nvme_cmd_flush)
 		op_flags = REQ_FUA;
-	else if (nvme_is_write(req->cmd))
-		op_flags = REQ_SYNC | REQ_IDLE;
 
 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
 	bio->bi_end_io = bio_put;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 07/12] nvmet: decouple nvme_ctrl_get_by_path()
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (5 preceding siblings ...)
  2020-08-31 22:27 ` [PATCH V2 06/12] nvmet: remove op_flags for write commands Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-09-01 17:00   ` Logan Gunthorpe
  2020-08-31 22:27 ` [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release Chaitanya Kulkarni
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

Right now nvme_ctrl_get_by_path() accepts a ctrl path; based on that
it opens a file and calls nvme_get_ctrl(). In order to take the module
refcount it is important to distinguish between a filp_open() error
and a failed module file ops check, so that the caller
nvmet_passthru_ctrl_enable() can unwind correctly in the error path.

Rename nvme_ctrl_get_by_path() -> nvme_ctrl_get_by_file() and lift
the file opening and error handling in the caller so that we can unwind
appropriately in the error path.
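
The split unwind described above follows the kernel's usual goto-label
error-handling pattern: each acquisition gets its own label, so a later
failure releases exactly what was already taken. A hypothetical
userspace sketch; open_resource(), get_ref() and the counters stand in
for filp_open() and nvme_ctrl_get_by_file(), they are not the kernel
API :-

```c
#include <assert.h>

/* Hypothetical stand-ins for filp_open() / nvme_ctrl_get_by_file(). */
static int open_ok, get_ok;	/* test knobs */
static int opened, closed;	/* resource accounting */

static int open_resource(void)
{
	if (!open_ok)
		return -1;
	opened++;
	return 0;
}

static void close_resource(void) { closed++; }

static int get_ref(void) { return get_ok ? 0 : -1; }

/*
 * Sketch of the two-step enable with distinct unwind labels, so a
 * failed get_ref() still closes the file it no longer owns, while a
 * failed open unwinds nothing.
 */
static int enable(void)
{
	int ret;

	ret = open_resource();
	if (ret)
		goto out;		/* nothing to undo yet */

	ret = get_ref();
	if (ret)
		goto out_close;		/* must release the open file */

	return 0;

out_close:
	close_resource();
out:
	return ret;
}
```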

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c       | 10 ++--------
 drivers/nvme/host/nvme.h       |  2 +-
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c | 17 +++++++++++++----
 4 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index a62fdcbfd1cc..425a3c16d5a5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4641,14 +4641,9 @@ void nvme_sync_queues(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvme_sync_queues);
 
-struct nvme_ctrl *nvme_ctrl_get_by_path(const char *path)
+struct nvme_ctrl *nvme_ctrl_get_by_file(struct file *f)
 {
 	struct nvme_ctrl *ctrl;
-	struct file *f;
-
-	f = filp_open(path, O_RDWR, 0);
-	if (IS_ERR(f))
-		return ERR_CAST(f);
 
 	if (f->f_op != &nvme_dev_fops) {
 		ctrl = ERR_PTR(-EINVAL);
@@ -4659,10 +4654,9 @@ struct nvme_ctrl *nvme_ctrl_get_by_path(const char *path)
 	nvme_get_ctrl(ctrl);
 
 out_close:
-	filp_close(f, NULL);
 	return ctrl;
 }
-EXPORT_SYMBOL_NS_GPL(nvme_ctrl_get_by_path, NVME_TARGET_PASSTHRU);
+EXPORT_SYMBOL_NS_GPL(nvme_ctrl_get_by_file, NVME_TARGET_PASSTHRU);
 
 /*
  * Check we didn't inadvertently grow the command structure sizes:
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 2910f6caab7d..590cc880834a 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -836,7 +836,7 @@ static inline void nvme_hwmon_init(struct nvme_ctrl *ctrl) { }
 u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 			 u8 opcode);
 void nvme_execute_passthru_rq(struct request *rq);
-struct nvme_ctrl *nvme_ctrl_get_by_path(const char *path);
+struct nvme_ctrl *nvme_ctrl_get_by_file(struct file *f);
 struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
 void nvme_put_ns(struct nvme_ns *ns);
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 47ee3fb193bd..477439acb8e1 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -248,6 +248,7 @@ struct nvmet_subsys {
 #ifdef CONFIG_NVME_TARGET_PASSTHRU
 	struct nvme_ctrl	*passthru_ctrl;
 	char			*passthru_ctrl_path;
+	struct file             *passthru_ctrl_file;
 	struct config_group	passthru_group;
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
 };
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index fbe2678aea6a..b121f532a6ff 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -471,6 +471,7 @@ u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
 
 int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
 {
+	const char *pt_path = subsys->passthru_ctrl_path;
 	struct nvme_ctrl *ctrl;
 	int ret = -EINVAL;
 	void *old;
@@ -478,7 +479,7 @@ int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
 	mutex_lock(&subsys->lock);
-	if (!subsys->passthru_ctrl_path)
+	if (!pt_path)
 		goto out_unlock;
 	if (subsys->passthru_ctrl)
 		goto out_unlock;
 
 	if (subsys->nr_namespaces) {
@@ -486,13 +487,18 @@ int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
 		goto out_unlock;
 	}
 
-	ctrl = nvme_ctrl_get_by_path(subsys->passthru_ctrl_path);
+	subsys->passthru_ctrl_file = filp_open(pt_path, O_RDWR, 0);
+	if (IS_ERR(subsys->passthru_ctrl_file)) {
+		ret = PTR_ERR(subsys->passthru_ctrl_file);
+		goto out_unlock;
+	}
+
+	ctrl = nvme_ctrl_get_by_file(subsys->passthru_ctrl_file);
 	if (IS_ERR(ctrl)) {
 		ret = PTR_ERR(ctrl);
 		pr_err("failed to open nvme controller %s\n",
 		       subsys->passthru_ctrl_path);
-
-		goto out_unlock;
+		goto out_put_file;
 	}
 
 	old = xa_cmpxchg(&passthru_subsystems, ctrl->cntlid, NULL,
@@ -520,6 +526,8 @@ int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
 
 out_put_ctrl:
 	nvme_put_ctrl(ctrl);
+out_put_file:
+	filp_close(subsys->passthru_ctrl_file, NULL);
 out_unlock:
 	mutex_unlock(&subsys->lock);
 	return ret;
@@ -529,6 +537,7 @@ static void __nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys)
 {
 	if (subsys->passthru_ctrl) {
 		xa_erase(&passthru_subsystems, subsys->passthru_ctrl->cntlid);
+		filp_close(subsys->passthru_ctrl_file, NULL);
 		nvme_put_ctrl(subsys->passthru_ctrl);
 	}
 	subsys->passthru_ctrl = NULL;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (6 preceding siblings ...)
  2020-08-31 22:27 ` [PATCH V2 07/12] nvmet: decouple nvme_ctrl_get_by_path() Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-09-01 17:02   ` Logan Gunthorpe
  2020-08-31 22:27 ` [PATCH V2 09/12] nvmet: fix nvme module ref count Oops Chaitanya Kulkarni
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

Introduce an nvme_dev_release() ctrl file release callback, and move
the ctrl get and put operations from the target passthru code into the
host core's ctrl open and release file operations, respectively.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c       | 12 +++++++++++-
 drivers/nvme/target/passthru.c |  7 ++-----
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 425a3c16d5a5..f82c6a283b15 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3262,6 +3262,16 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
 	}
 
 	file->private_data = ctrl;
+	nvme_get_ctrl(ctrl);
+	return 0;
+}
+
+static int nvme_dev_release(struct inode *inode, struct file *file)
+{
+	struct nvme_ctrl *ctrl =
+		container_of(inode->i_cdev, struct nvme_ctrl, cdev);
+
+	nvme_put_ctrl(ctrl);
 	return 0;
 }
 
@@ -3327,6 +3337,7 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
 static const struct file_operations nvme_dev_fops = {
 	.owner		= THIS_MODULE,
 	.open		= nvme_dev_open,
+	.release	= nvme_dev_release,
 	.unlocked_ioctl	= nvme_dev_ioctl,
 	.compat_ioctl	= compat_ptr_ioctl,
 };
@@ -4651,7 +4662,6 @@ struct nvme_ctrl *nvme_ctrl_get_by_file(struct file *f)
 	}
 
 	ctrl = f->private_data;
-	nvme_get_ctrl(ctrl);
 
 out_close:
 	return ctrl;
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index b121f532a6ff..20944c48095c 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -505,11 +505,11 @@ int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
 			 subsys, GFP_KERNEL);
 	if (xa_is_err(old)) {
 		ret = xa_err(old);
-		goto out_put_ctrl;
+		goto out_put_file;
 	}
 
 	if (old)
-		goto out_put_ctrl;
+		goto out_put_file;
 
 	subsys->passthru_ctrl = ctrl;
 	subsys->ver = ctrl->vs;
@@ -524,8 +524,6 @@ int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
 	mutex_unlock(&subsys->lock);
 	return 0;
 
-out_put_ctrl:
-	nvme_put_ctrl(ctrl);
 out_put_file:
 	filp_close(subsys->passthru_ctrl_file, NULL);
 out_unlock:
@@ -538,7 +536,6 @@ static void __nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys)
 	if (subsys->passthru_ctrl) {
 		xa_erase(&passthru_subsystems, subsys->passthru_ctrl->cntlid);
 		filp_close(subsys->passthru_ctrl_file, NULL);
-		nvme_put_ctrl(subsys->passthru_ctrl);
 	}
 	subsys->passthru_ctrl = NULL;
 	subsys->ver = NVMET_DEFAULT_VS;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 09/12] nvmet: fix nvme module ref count Oops
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (7 preceding siblings ...)
  2020-08-31 22:27 ` [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-09-01 17:23   ` Logan Gunthorpe
  2020-08-31 22:27 ` [PATCH V2 10/12] block: move blk_rq_bio_prep() to linux/blk-mq.h Chaitanya Kulkarni
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

In the passthru controller enable path the current code doesn't take a
reference on the passthru ctrl module, which produces the following
Oops :-

Entering kdb (current=0xffff8887f8290000, pid 3128) on processor 30 Oops: (null)
due to oops @ 0xffffffffa01019ad
CPU: 30 PID: 3128 Comm: bash Tainted: G        W  OE     5.8.0-rc4nvme-5.9+ #35
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.4
RIP: 0010:nvme_free_ctrl+0x234/0x285 [nvme_core]
Code: 57 10 a0 e8 73 bf 02 e1 ba 3d 11 00 00 48 c7 c6 98 33 10 a0 48 c7 c7 1d 57 10 a0 e8 5b bf 02 e1 8
RSP: 0018:ffffc90001d63de0 EFLAGS: 00010246
RAX: ffffffffa05c0440 RBX: ffff8888119e45a0 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff8888177e9550 RDI: ffff8888119e43b0
RBP: ffff8887d4768000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: ffffc90001d63c90 R12: ffff8888119e43b0
R13: ffff8888119e5108 R14: dead000000000100 R15: ffff8888119e5108
FS:  00007f1ef27b0740(0000) GS:ffff888817600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffa05c0470 CR3: 00000007f6bee000 CR4: 00000000003406e0
Call Trace:
 device_release+0x27/0x80
 kobject_put+0x98/0x170
 nvmet_passthru_ctrl_disable+0x4a/0x70 [nvmet]
 nvmet_passthru_enable_store+0x4c/0x90 [nvmet]
 configfs_write_file+0xe6/0x150
 vfs_write+0xba/0x1e0
 ksys_write+0x5f/0xe0
 do_syscall_64+0x52/0xb0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f1ef1eb2840
Code: Bad RIP value.
RSP: 002b:00007fffdbff0eb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f1ef1eb2840
RDX: 0000000000000002 RSI: 00007f1ef27d2000 RDI: 0000000000000001
RBP: 00007f1ef27d2000 R08: 000000000000000a R09: 00007f1ef27b0740
R10: 0000000000000001 R11: 0000000000000246 R12: 00007f1ef2186400
R13: 0000000000000002 R14: 0000000000000001 R15: 0000000000000000

Fix this by taking a module ref count in nvme_dev_open() and releasing
it in nvme_dev_release().
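
The fix pairs each reference taken in open with a release in release,
and unwinds the ctrl reference when the module get fails. A
hypothetical userspace model of that pairing; the names mimic the
kernel helpers but the counters are purely illustrative, not the
kernel's kref/module machinery :-

```c
#include <assert.h>

/* Illustrative counters, not the kernel's kref/module machinery. */
static int ctrl_refs, module_refs;
static int module_alive = 1;

static void get_ctrl(void)	{ ctrl_refs++; }
static void put_ctrl(void)	{ ctrl_refs--; }
static void module_put_(void)	{ module_refs--; }

static int try_module_get_(void)
{
	if (!module_alive)
		return 0;	/* module is going away: refuse */
	module_refs++;
	return 1;
}

/* Model of nvme_dev_open(): take the ctrl ref, then the module ref. */
static int dev_open(void)
{
	get_ctrl();
	if (!try_module_get_()) {
		put_ctrl();	/* unwind the ctrl ref on failure */
		return -1;
	}
	return 0;
}

/* Model of nvme_dev_release(): drop the refs in reverse order. */
static void dev_release(void)
{
	module_put_();
	put_ctrl();
}
```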

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f82c6a283b15..c940f49a30bf 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3263,6 +3263,12 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
 
 	file->private_data = ctrl;
 	nvme_get_ctrl(ctrl);
+	if (!try_module_get(ctrl->ops->module)) {
+		pr_err("try_module_get failed for cntlid 0x%x\n", ctrl->cntlid);
+		nvme_put_ctrl(ctrl);
+		return -EINVAL;
+	}
+
 	return 0;
 }
 
@@ -3271,6 +3277,7 @@ static int nvme_dev_release(struct inode *inode, struct file *file)
 	struct nvme_ctrl *ctrl =
 		container_of(inode->i_cdev, struct nvme_ctrl, cdev);
 
+	module_put(ctrl->ops->module);
 	nvme_put_ctrl(ctrl);
 	return 0;
 }
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 10/12] block: move blk_rq_bio_prep() to linux/blk-mq.h
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (8 preceding siblings ...)
  2020-08-31 22:27 ` [PATCH V2 09/12] nvmet: fix nvme module ref count Oops Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-08-31 22:27 ` [PATCH V2 11/12] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
  2020-08-31 22:27 ` [PATCH V2 12/12] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
  11 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

This is a preparation patch to provide minimal block layer request bio
append functionality for the NVMeOF passthru driver, which falls in the
fast path and doesn't need the extra work done by blk_rq_append_bio().

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/blk.h            | 12 ------------
 include/linux/blk-mq.h | 12 ++++++++++++
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index 49e2928a1632..6f60b39fc6a8 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -91,18 +91,6 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
 	return __bvec_gap_to_prev(q, bprv, offset);
 }
 
-static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
-		unsigned int nr_segs)
-{
-	rq->nr_phys_segments = nr_segs;
-	rq->__data_len = bio->bi_iter.bi_size;
-	rq->bio = rq->biotail = bio;
-	rq->ioprio = bio_prio(bio);
-
-	if (bio->bi_disk)
-		rq->rq_disk = bio->bi_disk;
-}
-
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 void blk_flush_integrity(void);
 bool __bio_integrity_endio(struct bio *);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 9d2d5ad367a4..02b211e39b25 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -582,6 +582,18 @@ static inline void blk_mq_cleanup_rq(struct request *rq)
 		rq->q->mq_ops->cleanup_rq(rq);
 }
 
+static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
+{
+	rq->nr_phys_segments = nr_segs;
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio->bi_disk)
+		rq->rq_disk = bio->bi_disk;
+}
+
 blk_qc_t blk_mq_submit_bio(struct bio *bio);
 
 #endif
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 11/12] nvmet: use minimized version of blk_rq_append_bio
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (9 preceding siblings ...)
  2020-08-31 22:27 ` [PATCH V2 10/12] block: move blk_rq_bio_prep() to linux/blk-mq.h Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-09-03 16:23   ` Christoph Hellwig
  2020-08-31 22:27 ` [PATCH V2 12/12] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

The function blk_rq_append_bio() is a generic API written for all
types of drivers (including those with bounce buffers) and for
different contexts (e.g. where the request already has a bio, i.e.
rq->bio != NULL).

It mainly does three things: calculating the segments, checking the
bounce queue, and, if rq->bio == NULL, calling blk_rq_bio_prep(),
otherwise handling the low-level merge case.

The NVMe PCIe driver does not use the queue bounce mechanism, yet
blk_rq_append_bio() does this extra work in the fast path for every
passthru request just to find that out.

When I ran I/Os with different block sizes on the passthru controller,
I found that we can reuse req->sg_cnt instead of iterating over the
bvecs to find nr_segs in blk_rq_append_bio(). This calculation in
blk_rq_append_bio() is a duplication of work, given that we already
have the value in req->sg_cnt. (Correct me here if I'm wrong.)

With the NVMe passthru request-based driver we allocate a fresh request
each time, so on every call to blk_rq_append_bio() rq->bio will be NULL,
i.e. we don't really need the second condition in blk_rq_append_bio()
or the resulting error handling in its caller.

So for the NVMeOF passthru driver, recalculating the segments, the
bounce check, and the ll_back_merge code are not needed, and we can get
away with a minimal version of blk_rq_append_bio() that removes the
error check from the fast path along with an extra variable in
nvmet_passthru_map_sg().

This patch updates nvmet_passthru_map_sg() so that, in the context of
the NVMeOF passthru driver, it only appends the bio to the request.
Following are the perf numbers :-

With current implementation (blk_rq_append_bio()) :-
----------------------------------------------------
+    5.80%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.88%     0.00%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.86%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.17%     0.00%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd

With this patch :-
----------------------------------------------------
+    3.14%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    3.26%     0.01%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.37%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.18%     0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.84%     0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.87%     0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/target/passthru.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 20944c48095c..89d848006bd5 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -9,6 +9,7 @@
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #include <linux/module.h>
+#include <linux/blk-mq.h>
 
 #include "../host/nvme.h"
 #include "nvmet.h"
@@ -184,7 +185,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	int sg_cnt = req->sg_cnt;
 	struct scatterlist *sg;
 	struct bio *bio;
-	int i, ret;
+	int i;
 
 	if (req->cmd->common.opcode == nvme_cmd_flush)
 		op_flags = REQ_FUA;
@@ -202,11 +203,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 		sg_cnt--;
 	}
 
-	ret = blk_rq_append_bio(rq, &bio);
-	if (unlikely(ret)) {
-		bio_put(bio);
-		return ret;
-	}
+	blk_rq_bio_prep(rq, bio, req->sg_cnt);
 
 	return 0;
 }
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 12/12] nvmet: use inline bio for passthru fast path
  2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
                   ` (10 preceding siblings ...)
  2020-08-31 22:27 ` [PATCH V2 11/12] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
@ 2020-08-31 22:27 ` Chaitanya Kulkarni
  2020-09-01 17:29   ` Logan Gunthorpe
  11 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-08-31 22:27 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, logang, hch, Chaitanya Kulkarni, sagi

In nvmet_passthru_execute_cmd(), which is a high-frequency function,
we use bio_alloc(), which leads to a memory allocation from the fs pool
for each I/O.

For the NVMeOF nvmet_req we already have an inline_bvec allocated as
part of request allocation; it can be used with a preallocated bio when
we already know the size of the request before the bio is allocated
with bio_alloc(), which we do.

Introduce a bio member in the nvmet_req passthru anonymous union. In
the fast path, check if we can get away with the inline bvec and bio
from nvmet_req via a bio_init() call before actually allocating with
bio_alloc().

This is useful to avoid any new memory allocation under high memory
pressure and gets rid of the extra work of allocation (bio_alloc()) vs.
initialization (bio_init()) when the transfer length is <=
NVMET_MAX_INLINE_DATA_LEN, which the user can configure at compile
time.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 477439acb8e1..6b1430f8ac78 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -331,6 +331,7 @@ struct nvmet_req {
 			struct work_struct      work;
 		} f;
 		struct {
+			struct bio		inline_bio;
 			struct request		*rq;
 			struct work_struct      work;
 			bool			use_workqueue;
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 89d848006bd5..ff39f0635451 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -179,6 +179,14 @@ static void nvmet_passthru_req_done(struct request *rq,
 	blk_mq_free_request(rq);
 }
 
+static void nvmet_passthru_bio_done(struct bio *bio)
+{
+	struct nvmet_req *req = bio->bi_private;
+
+	if (bio != &req->p.inline_bio)
+		bio_put(bio);
+}
+
 static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
 	unsigned int op_flags = 0;
@@ -190,14 +198,21 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	if (req->cmd->common.opcode == nvme_cmd_flush)
 		op_flags = REQ_FUA;
 
-	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
-	bio->bi_end_io = bio_put;
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->p.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+	} else {
+		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
+	}
+
+	bio->bi_end_io = nvmet_passthru_bio_done;
 	bio->bi_opf = req_op(rq) | op_flags;
+	bio->bi_private = req;
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
 				    sg->offset) < sg->length) {
-			bio_put(bio);
+			nvmet_passthru_bio_done(bio);
 			return -EINVAL;
 		}
 		sg_cnt--;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-08-31 22:26 ` [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request() Chaitanya Kulkarni
@ 2020-09-01  1:56   ` Baolin Wang
  2020-09-01 17:21     ` Sagi Grimberg
  2020-09-03 16:17   ` Christoph Hellwig
  1 sibling, 1 reply; 44+ messages in thread
From: Baolin Wang @ 2020-09-01  1:56 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, logang, hch, linux-nvme, sagi

Hi Chaitanya,

On Mon, Aug 31, 2020 at 03:26:56PM -0700, Chaitanya Kulkarni wrote:
> The function nvme_alloc_request() is called from different context, The
> only place where it uses non NVME_QID_ANY value is for fabrics connect
> commands :-
> 
> nvme_submit_sync_cmd()		NVME_QID_ANY
> nvme_features()			NVME_QID_ANY
> nvme_sec_submit()		NVME_QID_ANY
> nvmf_reg_read32()		NVME_QID_ANY
> nvmf_reg_read64()		NVME_QID_ANY
> nvmf_reg_write32()		NVME_QID_ANY
> nvmf_connect_admin_queue()	NVME_QID_ANY
> nvmf_connect_io_queue() 	QID
> 	__nvme_submit_sync_cmd()
> 		nvme_alloc_request()
> 
> nvme_submit_user_cmd()		NVME_QID_ANY
> 	nvme_alloc_request()
> nvme_keep_alive()		NVME_QID_ANY
> 	nvme_alloc_request()
> nvme_timeout()			NVME_QID_ANY
> 	nvme_alloc_request()
> nvme_delete_queue()		NVME_QID_ANY
> 	nvme_alloc_request()
> nvmet_passthru_execute_cmd()	NVME_QID_ANY
> 	nvme_alloc_request()
> 
> With passthru nvme_alloc_request now falls into the I/O fast path.
> Annotate the case for block layer request allocation with likely and
> error condition with unlikely.
> 
> The only case this annotation will break for fabrics connect commands
> which are low frequency commands anyway.
> 
> This leads to following performance change in the passthru code :-
> 
> Bandwidth up (better) by 1.7 MB/s :-
> -----------------------------------------------------------------------
> * Average Bandwidth nvme-alloc-default: 272.10000 MB
> * Average Bandwidth nvme-alloc-likely:  273.80000 MB
> 
> Latency down (better) by ~5 usec :-
> -----------------------------------------------------------------------
> * Average Latency util nvme-alloc-default: 917.90300 (usec)
> * Average Latency util nvme-alloc-likely:  912.57100 (usec)
> 
> nvme-alloc-default.fio.10.log:read: IOPS=69.2k, BW=270MiB/s
> nvme-alloc-default.fio.1.log: read: IOPS=69.6k, BW=272MiB/s
> nvme-alloc-default.fio.2.log: read: IOPS=71.1k, BW=278MiB/s
> nvme-alloc-default.fio.3.log: read: IOPS=70.8k, BW=276MiB/s
> nvme-alloc-default.fio.4.log: read: IOPS=70.3k, BW=275MiB/s
> nvme-alloc-default.fio.5.log: read: IOPS=69.8k, BW=273MiB/s
> nvme-alloc-default.fio.6.log: read: IOPS=67.5k, BW=264MiB/s
> nvme-alloc-default.fio.7.log: read: IOPS=67.0k, BW=266MiB/s
> nvme-alloc-default.fio.8.log: read: IOPS=69.5k, BW=271MiB/s
> nvme-alloc-default.fio.9.log: read: IOPS=70.7k, BW=276MiB/s
> 
> nvme-alloc-likely.fio.10.log: read: IOPS=69.5k, BW=272MiB/s
> nvme-alloc-likely.fio.1.log:  read: IOPS=70.1k, BW=274MiB/s
> nvme-alloc-likely.fio.2.log:  read: IOPS=68.4k, BW=267MiB/s
> nvme-alloc-likely.fio.3.log:  read: IOPS=69.3k, BW=271MiB/s
> nvme-alloc-likely.fio.4.log:  read: IOPS=69.9k, BW=273MiB/s
> nvme-alloc-likely.fio.5.log:  read: IOPS=70.2k, BW=274MiB/s
> nvme-alloc-likely.fio.6.log:  read: IOPS=71.6k, BW=280MiB/s
> nvme-alloc-likely.fio.7.log:  read: IOPS=70.5k, BW=276MiB/s
> nvme-alloc-likely.fio.8.log:  read: IOPS=70.6k, BW=276MiB/s
> nvme-alloc-likely.fio.9.log:  read: IOPS=70.3k, BW=275MiB/s
> 
> nvme-alloc-default.fio.10.log:lat (usec): avg=924.12
> nvme-alloc-default.fio.1.log: lat (usec): avg=917.80
> nvme-alloc-default.fio.2.log: lat (usec): avg=898.58
> nvme-alloc-default.fio.3.log: lat (usec): avg=903.38
> nvme-alloc-default.fio.4.log: lat (usec): avg=908.74
> nvme-alloc-default.fio.5.log: lat (usec): avg=915.63
> nvme-alloc-default.fio.6.log: lat (usec): avg=946.41
> nvme-alloc-default.fio.7.log: lat (usec): avg=940.14
> nvme-alloc-default.fio.8.log: lat (usec): avg=920.30
> nvme-alloc-default.fio.9.log: lat (usec): avg=903.93
> 
> nvme-alloc-likely.fio.10.log: lat (usec): avg=919.55
> nvme-alloc-likely.fio.1.log:  lat (usec): avg=912.23
> nvme-alloc-likely.fio.2.log:  lat (usec): avg=934.59
> nvme-alloc-likely.fio.3.log:  lat (usec): avg=921.68
> nvme-alloc-likely.fio.4.log:  lat (usec): avg=914.25
> nvme-alloc-likely.fio.5.log:  lat (usec): avg=910.31
> nvme-alloc-likely.fio.6.log:  lat (usec): avg=892.22
> nvme-alloc-likely.fio.7.log:  lat (usec): avg=906.25
> nvme-alloc-likely.fio.8.log:  lat (usec): avg=905.41
> nvme-alloc-likely.fio.9.log:  lat (usec): avg=909.22
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>  drivers/nvme/host/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 5702a3843746..a62fdcbfd1cc 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -508,7 +508,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
>  	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
>  	struct request *req;
>  
> -	if (qid == NVME_QID_ANY) {
> +	if (unlikely(qid == NVME_QID_ANY)) {

From your commit message, this should be likely(), right?

>  		req = blk_mq_alloc_request(q, op, flags);
>  	} else {
>  		req = blk_mq_alloc_request_hctx(q, op, flags,
> -- 
> 2.22.1
> 
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf
  2020-08-31 22:26 ` [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf Chaitanya Kulkarni
@ 2020-09-01 15:55   ` Keith Busch
  2020-09-01 21:17     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 44+ messages in thread
From: Keith Busch @ 2020-09-01 15:55 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: logang, hch, linux-nvme, sagi

On Mon, Aug 31, 2020 at 03:26:58PM -0700, Chaitanya Kulkarni wrote:
> In function nvmet_passthru_override_id_ns() while iterating
> over namespace lba format loop variable is declared as int that is
> inconsistent with the id->nlbaf type which is u8.
> 
> Make loop variable of the same type as id->nlbaf.

The loop is id->nlbaf + 1, so it'd overflow if the nlbaf value is the max
u8. Sure, it doesn't get that large today, but they're always trying to
make it bigger. And there's really nothing wrong with counting for loops
with 'int i' anyway.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 07/12] nvmet: decouple nvme_ctrl_get_by_path()
  2020-08-31 22:27 ` [PATCH V2 07/12] nvmet: decouple nvme_ctrl_get_by_path() Chaitanya Kulkarni
@ 2020-09-01 17:00   ` Logan Gunthorpe
  0 siblings, 0 replies; 44+ messages in thread
From: Logan Gunthorpe @ 2020-09-01 17:00 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, sagi



On 2020-08-31 4:27 p.m., Chaitanya Kulkarni wrote:
> Right now nvme_ctrl_get_by_path accepts ctrl path, based on that it
> opens a file and calls nvme_get_ctrl(). In order to take module
> refcount it is important to distinguish the error between file_open()
> and module file ops check so that we can unwind the code in the caller
> nvmet_passthru_ctrl_enable() in the error path.
> 
> Rename nvme_ctrl_get_by_path() -> nvme_ctrl_get_by_file() and lift
> the file opening and error handling in the caller so that we can unwind
> appropriately in the error path.
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>

I don't really like this. It just seems like a lot of extra mess to keep
a reference to a file object that we don't really need.

Logan


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release
  2020-08-31 22:27 ` [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release Chaitanya Kulkarni
@ 2020-09-01 17:02   ` Logan Gunthorpe
  2020-09-01 17:18     ` Sagi Grimberg
  0 siblings, 1 reply; 44+ messages in thread
From: Logan Gunthorpe @ 2020-09-01 17:02 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, sagi



On 2020-08-31 4:27 p.m., Chaitanya Kulkarni wrote:
> Introduce nvme_dev_release ctrl file release callback and move ctrl get
> and put operations from target passthru into host core in ctrl open and
> release file operations respectively.
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>  drivers/nvme/host/core.c       | 12 +++++++++++-
>  drivers/nvme/target/passthru.c |  7 ++-----
>  2 files changed, 13 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 425a3c16d5a5..f82c6a283b15 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3262,6 +3262,16 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
>  	}
>  
>  	file->private_data = ctrl;
> +	nvme_get_ctrl(ctrl);
> +	return 0;
> +}
> +
> +static int nvme_dev_release(struct inode *inode, struct file *file)
> +{
> +	struct nvme_ctrl *ctrl =
> +		container_of(inode->i_cdev, struct nvme_ctrl, cdev);
> +
> +	nvme_put_ctrl(ctrl);
>  	return 0;
>  }
>  
> @@ -3327,6 +3337,7 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
>  static const struct file_operations nvme_dev_fops = {
>  	.owner		= THIS_MODULE,
>  	.open		= nvme_dev_open,
> +	.release	= nvme_dev_release,
>  	.unlocked_ioctl	= nvme_dev_ioctl,
>  	.compat_ioctl	= compat_ptr_ioctl,
>  };

Seems like everything above this is necessary and fixes a real and
obvious bug and should go in separately, first.

Everything below this is rather unrelated to the bug...


> @@ -4651,7 +4662,6 @@ struct nvme_ctrl *nvme_ctrl_get_by_file(struct file *f)
>  	}
>  
>  	ctrl = f->private_data;
> -	nvme_get_ctrl(ctrl);
>  
>  out_close:
>  	return ctrl;
> diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
> index b121f532a6ff..20944c48095c 100644
> --- a/drivers/nvme/target/passthru.c
> +++ b/drivers/nvme/target/passthru.c
> @@ -505,11 +505,11 @@ int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
>  			 subsys, GFP_KERNEL);
>  	if (xa_is_err(old)) {
>  		ret = xa_err(old);
> -		goto out_put_ctrl;
> +		goto out_put_file;
>  	}
>  
>  	if (old)
> -		goto out_put_ctrl;
> +		goto out_put_file;
>  
>  	subsys->passthru_ctrl = ctrl;
>  	subsys->ver = ctrl->vs;
> @@ -524,8 +524,6 @@ int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
>  	mutex_unlock(&subsys->lock);
>  	return 0;
>  
> -out_put_ctrl:
> -	nvme_put_ctrl(ctrl);
>  out_put_file:
>  	filp_close(subsys->passthru_ctrl_file, NULL);
>  out_unlock:
> @@ -538,7 +536,6 @@ static void __nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys)
>  	if (subsys->passthru_ctrl) {
>  		xa_erase(&passthru_subsystems, subsys->passthru_ctrl->cntlid);
>  		filp_close(subsys->passthru_ctrl_file, NULL);
> -		nvme_put_ctrl(subsys->passthru_ctrl);
>  	}
>  	subsys->passthru_ctrl = NULL;
>  	subsys->ver = NVMET_DEFAULT_VS;
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release
  2020-09-01 17:02   ` Logan Gunthorpe
@ 2020-09-01 17:18     ` Sagi Grimberg
  2020-09-02  0:31       ` Chaitanya Kulkarni
  2020-09-03 16:21       ` Christoph Hellwig
  0 siblings, 2 replies; 44+ messages in thread
From: Sagi Grimberg @ 2020-09-01 17:18 UTC (permalink / raw)
  To: Logan Gunthorpe, Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch


>> @@ -3327,6 +3337,7 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd,
>>   static const struct file_operations nvme_dev_fops = {
>>   	.owner		= THIS_MODULE,
>>   	.open		= nvme_dev_open,
>> +	.release	= nvme_dev_release,
>>   	.unlocked_ioctl	= nvme_dev_ioctl,
>>   	.compat_ioctl	= compat_ptr_ioctl,
>>   };
> 
> Seems like everything above this is necessary and fixes a real and
> obvious bug and should go in separately, first.
> 
> Everything below this is rather unrelated to the bug...

Let's split bug fixes from enhancements in general. Bug fixes
should go in directly, before we handle enhancements.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-09-01  1:56   ` Baolin Wang
@ 2020-09-01 17:21     ` Sagi Grimberg
  2020-09-01 21:16       ` Chaitanya Kulkarni
                         ` (2 more replies)
  0 siblings, 3 replies; 44+ messages in thread
From: Sagi Grimberg @ 2020-09-01 17:21 UTC (permalink / raw)
  To: Baolin Wang, Chaitanya Kulkarni; +Cc: kbusch, logang, hch, linux-nvme


>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 5702a3843746..a62fdcbfd1cc 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -508,7 +508,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
>>   	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
>>   	struct request *req;
>>   
>> -	if (qid == NVME_QID_ANY) {
>> +	if (unlikely(qid == NVME_QID_ANY)) {
> 
>  From your commit message, this should be likely(), right?
> 
>>   		req = blk_mq_alloc_request(q, op, flags);
>>   	} else {
>>   		req = blk_mq_alloc_request_hctx(q, op, flags,

Chaitanya, can you check with objdump that the change actually
resulted in different machine code? I've seen patches of this
sort lately where this annotation didn't change anything at all.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 09/12] nvmet: fix nvme module ref count Oops
  2020-08-31 22:27 ` [PATCH V2 09/12] nvmet: fix nvme module ref count Oops Chaitanya Kulkarni
@ 2020-09-01 17:23   ` Logan Gunthorpe
  0 siblings, 0 replies; 44+ messages in thread
From: Logan Gunthorpe @ 2020-09-01 17:23 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, sagi



On 2020-08-31 4:27 p.m., Chaitanya Kulkarni wrote:
> In the passthru controller enable path current code doesn't take the
> reference to the passthru ctrl module. Which produces following Oops :-

Seems like this change should be justifiable without passthru, simply
by opening the char device and removing the module. The tag in the
title should certainly be nvme, not nvmet...

> We fix that by taking a module ref count in nvme_dev_open() and release
> that ref count in nvme_dev_release().
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>  drivers/nvme/host/core.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index f82c6a283b15..c940f49a30bf 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3263,6 +3263,12 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
>  
>  	file->private_data = ctrl;
>  	nvme_get_ctrl(ctrl);
> +	if (!try_module_get(ctrl->ops->module)) {
> +		pr_err("try_module_get failed for cntlid 0x%x\n", ctrl->cntlid);

Is the error print necessary? We don't print any errors in the similar
path in nvme_open()...

Logan


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 12/12] nvmet: use inline bio for passthru fast path
  2020-08-31 22:27 ` [PATCH V2 12/12] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
@ 2020-09-01 17:29   ` Logan Gunthorpe
  2020-09-02  0:39     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 44+ messages in thread
From: Logan Gunthorpe @ 2020-09-01 17:29 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, sagi



On 2020-08-31 4:27 p.m., Chaitanya Kulkarni wrote:
> In nvmet_passthru_execute_cmd(), which is a high-frequency function,
> we use bio_alloc(), which leads to a memory allocation from the fs pool
> for each I/O.
> 
> For the NVMeOF nvmet_req we already have an inline_bvec allocated as
> part of request allocation; it can be used with a preallocated bio when
> we already know the size of the request before the bio is allocated
> with bio_alloc(), which we do.
> 
> Introduce a bio member in the nvmet_req passthru anonymous union. In
> the fast path, check if we can get away with the inline bvec and bio
> from nvmet_req via a bio_init() call before actually allocating with
> bio_alloc().
> 
> This is useful to avoid any new memory allocation under high memory
> pressure and gets rid of the extra work of allocation (bio_alloc()) vs.
> initialization (bio_init()) when the transfer length is <=
> NVMET_MAX_INLINE_DATA_LEN, which the user can configure at compile
> time.
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>  drivers/nvme/target/nvmet.h    |  1 +
>  drivers/nvme/target/passthru.c | 21 ++++++++++++++++++---
>  2 files changed, 19 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index 477439acb8e1..6b1430f8ac78 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -331,6 +331,7 @@ struct nvmet_req {
>  			struct work_struct      work;
>  		} f;
>  		struct {
> +			struct bio		inline_bio;
>  			struct request		*rq;
>  			struct work_struct      work;
>  			bool			use_workqueue;
> diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
> index 89d848006bd5..ff39f0635451 100644
> --- a/drivers/nvme/target/passthru.c
> +++ b/drivers/nvme/target/passthru.c
> @@ -179,6 +179,14 @@ static void nvmet_passthru_req_done(struct request *rq,
>  	blk_mq_free_request(rq);
>  }
>  
> +static void nvmet_passthru_bio_done(struct bio *bio)
> +{
> +	struct nvmet_req *req = bio->bi_private;
> +
> +	if (bio != &req->p.inline_bio)
> +		bio_put(bio);
> +}
> +
>  static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>  {
>  	unsigned int op_flags = 0;
> @@ -190,14 +198,21 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>  	if (req->cmd->common.opcode == nvme_cmd_flush)
>  		op_flags = REQ_FUA;
>  
> -	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
> -	bio->bi_end_io = bio_put;
> +	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
> +		bio = &req->p.inline_bio;
> +		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
> +	} else {
> +		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
> +	}
> +
> +	bio->bi_end_io = nvmet_passthru_bio_done;
>  	bio->bi_opf = req_op(rq) | op_flags;
> +	bio->bi_private = req;

I still think this would be cleaner to just do:

if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
	bio = &req->p.inline_bio;
	bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
        bio->bi_end_io = bio_put;
}

>  	for_each_sg(req->sg, sg, req->sg_cnt, i) {
>  		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
>  				    sg->offset) < sg->length) {
> -			bio_put(bio);
> +			nvmet_passthru_bio_done(bio);

Then just open code this function here...

Logan


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-09-01 17:21     ` Sagi Grimberg
@ 2020-09-01 21:16       ` Chaitanya Kulkarni
  2020-09-02  3:28       ` Chaitanya Kulkarni
  2020-09-02 22:02       ` Chaitanya Kulkarni
  2 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-01 21:16 UTC (permalink / raw)
  To: Sagi Grimberg, Baolin Wang; +Cc: kbusch, logang, hch, linux-nvme

On 9/1/20 10:21, Sagi Grimberg wrote:
>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>> index 5702a3843746..a62fdcbfd1cc 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -508,7 +508,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
>>>    	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
>>>    	struct request *req;
>>>    
>>> -	if (qid == NVME_QID_ANY) {
>>> +	if (unlikely(qid == NVME_QID_ANY)) {
>>   From your commit message, this should be likely(), right?
Yes, my bad when sending this.
>>
>>>    		req = blk_mq_alloc_request(q, op, flags);
>>>    	} else {
>>>    		req = blk_mq_alloc_request_hctx(q, op, flags,
> Chaitanya, can you check the objdump that the change actually
> resulted in a different machine code, I've seen patches of this
> sort lately where this annotation didn't change anything at all.
> 
Okay.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf
  2020-09-01 15:55   ` Keith Busch
@ 2020-09-01 21:17     ` Chaitanya Kulkarni
  2020-09-01 21:31       ` Logan Gunthorpe
  2020-09-01 22:03       ` Keith Busch
  0 siblings, 2 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-01 21:17 UTC (permalink / raw)
  To: Keith Busch; +Cc: logang, hch, linux-nvme, sagi

On 9/1/20 08:56, Keith Busch wrote:
> On Mon, Aug 31, 2020 at 03:26:58PM -0700, Chaitanya Kulkarni wrote:
>> In function nvmet_passthru_override_id_ns() while iterating
>> over namespace lba format loop variable is declared as int that is
>> inconsistent with the id->nlbaf type which is u8.
>>
>> Make loop variable of the same type as id->nlbaf.
> The loop is id->nlbaf + 1, so it'd overflow if the nlbaf value is the max
> u8. Sure, it doesn't get that large today, but they're always trying to
> make it bigger. And there's really nothing wrong with counting for loops
> with 'int i' anyway.
> 

In the past, when people ran a proprietary static checker, it generated
warnings for such cases.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf
  2020-09-01 21:17     ` Chaitanya Kulkarni
@ 2020-09-01 21:31       ` Logan Gunthorpe
  2020-09-01 22:03       ` Keith Busch
  1 sibling, 0 replies; 44+ messages in thread
From: Logan Gunthorpe @ 2020-09-01 21:31 UTC (permalink / raw)
  To: Chaitanya Kulkarni, Keith Busch; +Cc: hch, linux-nvme, sagi



On 2020-09-01 3:17 p.m., Chaitanya Kulkarni wrote:
> On 9/1/20 08:56, Keith Busch wrote:
>> On Mon, Aug 31, 2020 at 03:26:58PM -0700, Chaitanya Kulkarni wrote:
>>> In nvmet_passthru_override_id_ns(), while iterating over the
>>> namespace LBA formats, the loop variable is declared as an int, which
>>> is inconsistent with the type of id->nlbaf, which is u8.
>>>
>>> Make the loop variable the same type as id->nlbaf.
>> The loop is id->nlbaf + 1, so it'd overflow if the nlbaf value is the max
>> u8. Sure, it doesn't get that large today, but they're always trying to
>> make it bigger. And there's really nothing wrong with counting for loops
>> with 'int i' anyway.
>>
> 
> In the past, when people ran a proprietary static checker, it
> generated warnings for such cases.

I'd ignore such warnings. Sounds like the proprietary tool is producing
ridiculous false positives. And, per Keith's example, is just making the
code wrong for no reason.

Logan


* Re: [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf
  2020-09-01 21:17     ` Chaitanya Kulkarni
  2020-09-01 21:31       ` Logan Gunthorpe
@ 2020-09-01 22:03       ` Keith Busch
  1 sibling, 0 replies; 44+ messages in thread
From: Keith Busch @ 2020-09-01 22:03 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: logang, hch, linux-nvme, sagi

On Tue, Sep 01, 2020 at 09:17:45PM +0000, Chaitanya Kulkarni wrote:
> On 9/1/20 08:56, Keith Busch wrote:
> > On Mon, Aug 31, 2020 at 03:26:58PM -0700, Chaitanya Kulkarni wrote:
> >> In nvmet_passthru_override_id_ns(), while iterating over the
> >> namespace LBA formats, the loop variable is declared as an int, which
> >> is inconsistent with the type of id->nlbaf, which is u8.
> >>
> >> Make the loop variable the same type as id->nlbaf.
> > The loop is id->nlbaf + 1, so it'd overflow if the nlbaf value is the max
> > u8. Sure, it doesn't get that large today, but they're always trying to
> > make it bigger. And there's really nothing wrong with counting for loops
> > with 'int i' anyway.
> > 
> 
> In the past, when people ran a proprietary static checker, it
> generated warnings for such cases.

Sounds broken. Adding a constant coerces to an int, so comparing with a
declared int should be what static analysis wants if it's being
pedantic about it.


* Re: [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release
  2020-09-01 17:18     ` Sagi Grimberg
@ 2020-09-02  0:31       ` Chaitanya Kulkarni
  2020-09-03 16:21       ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-02  0:31 UTC (permalink / raw)
  To: Sagi Grimberg, Logan Gunthorpe; +Cc: kbusch, hch, linux-nvme

On 9/1/20 10:18, Sagi Grimberg wrote:
>> Seems like everything above this is necessary and fixes a real and
>> obvious bug and should go in separately, first.
>>
>> Everything below this is rather unrelated to the bug...
> Lets overall split bug fixes from any enhancement. Bug fixes
> should go directly before we handle enhancements.
> 

Okay, let me re-spin and we can continue the discussion.


* Re: [PATCH V2 12/12] nvmet: use inline bio for passthru fast path
  2020-09-01 17:29   ` Logan Gunthorpe
@ 2020-09-02  0:39     ` Chaitanya Kulkarni
  2020-09-02 15:41       ` Logan Gunthorpe
  0 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-02  0:39 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-nvme; +Cc: kbusch, hch, sagi

On 9/1/20 10:29, Logan Gunthorpe wrote:
>> +	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
>> +		bio = &req->p.inline_bio;
>> +		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
>> +	} else {
>> +		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>> +	}
>> +
>> +	bio->bi_end_io = nvmet_passthru_bio_done;
>>   	bio->bi_opf = req_op(rq) | op_flags;
>> +	bio->bi_private = req;
> I still think this would be cleaner to just do:
> 
> if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
> 	bio = &req->p.inline_bio;
> 	bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
> } else {
> 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>          bio->bi_end_io = bio_put
This is intentionally kept consistent with what we have for bdev-ns.
> }
> 
>>   	for_each_sg(req->sg, sg, req->sg_cnt, i) {
>>   		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
>>   				    sg->offset) < sg->length) {
>> -			bio_put(bio);
>> +			nvmet_passthru_bio_done(bio);
> Then just open code this function here...

> Logan
> 



* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-09-01 17:21     ` Sagi Grimberg
  2020-09-01 21:16       ` Chaitanya Kulkarni
@ 2020-09-02  3:28       ` Chaitanya Kulkarni
  2020-09-02 22:02       ` Chaitanya Kulkarni
  2 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-02  3:28 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: kbusch, logang, hch, linux-nvme

Sagi,

On 9/1/20 10:21, Sagi Grimberg wrote:
>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>> index 5702a3843746..a62fdcbfd1cc 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -508,7 +508,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
>>>    	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
>>>    	struct request *req;
>>>    
>>> -	if (qid == NVME_QID_ANY) {
>>> +	if (unlikely(qid == NVME_QID_ANY)) {
>>   From your commit message, this should be likely(), right?
>>
>>>    		req = blk_mq_alloc_request(q, op, flags);
>>>    	} else {
>>>    		req = blk_mq_alloc_request_hctx(q, op, flags,
> Chaitanya, can you check the objdump that the change actually
> resulted in a different machine code, I've seen patches of this
> sort lately where this annotation didn't change anything at all.
> 

Agreed, that is why I've added performance numbers.

Here is the objdump for likely [1] vs default [2]. The generated
assembly is different with likely. If someone reports a performance
regression, we can always revert this patch, as it is very independent
and trivial.

With likely [1] :-

 00000000000008b0 <nvme_alloc_request>:
  904 {
  905      8b0:       e8 00 00 00 00          callq  8b5 <nvme_alloc_request+0x5>
  906      8b5:       53                      push   %rbx
  907         /*
  908          * What a mess...
  909          *
  910          * Why can't we simply have a Fabrics In and Fabrics out command?
  911          */
  912         if (unlikely(nvme_is_fabrics(cmd)))
  913      8b6:       0f b6 06                movzbl (%rsi),%eax
  914      8b9:       48 89 f3                mov    %rsi,%rbx
  915                 return cmd->fabrics.fctype & 1;
  916         return cmd->common.opcode & 1;
  917      8bc:       89 c6                   mov    %eax,%esi
  918      8be:       83 e6 01                and    $0x1,%esi
  919         if (unlikely(nvme_is_fabrics(cmd)))
  920      8c1:       3c 7f                   cmp    $0x7f,%al
  921      8c3:       74 53                   je     918 <nvme_alloc_request+0x68>
  922         unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
  923      8c5:       83 e6 01                and    $0x1,%esi
  924      8c8:       83 c6 22                add    $0x22,%esi
  925         if (likely(qid == NVME_QID_ANY)) {
  926      8cb:       83 f9 ff                cmp    $0xffffffff,%ecx
  927      8ce:       75 34                   jne    904 <nvme_alloc_request+0x54>
  928                 req = blk_mq_alloc_request(q, op, flags);
  929      8d0:       e8 00 00 00 00          callq  8d5 <nvme_alloc_request+0x25>
  930         if (IS_ERR(req))
  931      8d5:       48 3d 00 f0 ff ff       cmp    $0xfffffffffffff000,%rax
  932      8db:       77 25                   ja     902 <nvme_alloc_request+0x52>
  933         if (!(req->rq_flags & RQF_DONTPREP)) {
  934      8dd:       8b 50 1c                mov    0x1c(%rax),%edx
  935         req->cmd_flags |= REQ_FAILFAST_DRIVER;
  936      8e0:       81 48 18 00 04 00 00    orl    $0x400,0x18(%rax)
  937         if (!(req->rq_flags & RQF_DONTPREP)) {
  938      8e7:       f6 c2 80                test   $0x80,%dl
  939      8ea:       75 0f                   jne    8fb <nvme_alloc_request+0x4b>
  940                 nvme_req(req)->retries = 0;
  941      8ec:       31 c9                   xor    %ecx,%ecx
  942                 req->rq_flags |= RQF_DONTPREP;
  943      8ee:       80 ca 80                or     $0x80,%dl
  944                 nvme_req(req)->retries = 0;
  945      8f1:       66 89 88 28 01 00 00    mov    %cx,0x128(%rax)
  946                 req->rq_flags |= RQF_DONTPREP;
  947      8f8:       89 50 1c                mov    %edx,0x1c(%rax)
  948         nvme_req(req)->cmd = cmd;
  949      8fb:       48 89 98 18 01 00 00    mov    %rbx,0x118(%rax)
  950 }
  951      902:       5b                      pop    %rbx
  952      903:       c3                      retq
  953                                 qid ? qid - 1 : 0);
  954      904:       85 c9                   test   %ecx,%ecx
  955      906:       8d 41 ff                lea    -0x1(%rcx),%eax
  956      909:       b9 00 00 00 00          mov    $0x0,%ecx
  957      90e:       0f 45 c8                cmovne %eax,%ecx
  958                 req = blk_mq_alloc_request_hctx(q, op, flags,
  959      911:       e8 00 00 00 00          callq  916 <nvme_alloc_request+0x66>
  960      916:       eb bd                   jmp    8d5 <nvme_alloc_request+0x25>
  961                 return cmd->fabrics.fctype & 1;
  962      918:       0f b6 73 04             movzbl 0x4(%rbx),%esi
  963      91c:       83 e6 01                and    $0x1,%esi
  964      91f:       eb a4                   jmp    8c5 <nvme_alloc_request+0x15>
  965      921:       0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
  966      926:       66 2e 0f 1f 84 00 00    nopw   %cs:0x0(%rax,%rax,1)
  967      92d:       00 00 00



Default :-

 00000000000008b0 <nvme_alloc_request>:
  904 {
  905      8b0:       e8 00 00 00 00          callq  8b5 <nvme_alloc_request+0x5>
  906      8b5:       53                      push   %rbx
  907         /*
  908          * What a mess...
  909          *
  910          * Why can't we simply have a Fabrics In and Fabrics out command?
  911          */
  912         if (unlikely(nvme_is_fabrics(cmd)))
  913      8b6:       0f b6 06                movzbl (%rsi),%eax
  914      8b9:       48 89 f3                mov    %rsi,%rbx
  915                 return cmd->fabrics.fctype & 1;
  916         return cmd->common.opcode & 1;
  917      8bc:       89 c6                   mov    %eax,%esi
  918      8be:       83 e6 01                and    $0x1,%esi
  919         if (unlikely(nvme_is_fabrics(cmd)))
  920      8c1:       3c 7f                   cmp    $0x7f,%al
  921      8c3:       74 53                   je     918 <nvme_alloc_request+0x68>
  922         unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
  923      8c5:       83 e6 01                and    $0x1,%esi
  924      8c8:       83 c6 22                add    $0x22,%esi
  925         if (qid == NVME_QID_ANY) {
  926      8cb:       83 f9 ff                cmp    $0xffffffff,%ecx
  927      8ce:       74 41                   je     911 <nvme_alloc_request+0x61>
  928                                 qid ? qid - 1 : 0);
  929      8d0:       8d 41 ff                lea    -0x1(%rcx),%eax
  930      8d3:       85 c9                   test   %ecx,%ecx
  931      8d5:       b9 00 00 00 00          mov    $0x0,%ecx
  932      8da:       0f 45 c8                cmovne %eax,%ecx
  933                 req = blk_mq_alloc_request_hctx(q, op, flags,
  934      8dd:       e8 00 00 00 00          callq  8e2 <nvme_alloc_request+0x32>
  935         if (IS_ERR(req))
  936      8e2:       48 3d 00 f0 ff ff       cmp    $0xfffffffffffff000,%rax
  937      8e8:       77 25                   ja     90f <nvme_alloc_request+0x5f>
  938         if (!(req->rq_flags & RQF_DONTPREP)) {
  939      8ea:       8b 50 1c                mov    0x1c(%rax),%edx
  940         req->cmd_flags |= REQ_FAILFAST_DRIVER;
  941      8ed:       81 48 18 00 04 00 00    orl    $0x400,0x18(%rax)
  942         if (!(req->rq_flags & RQF_DONTPREP)) {
  943      8f4:       f6 c2 80                test   $0x80,%dl
  944      8f7:       75 0f                   jne    908 <nvme_alloc_request+0x58>
  945                 nvme_req(req)->retries = 0;
  946      8f9:       31 c9                   xor    %ecx,%ecx
  947                 req->rq_flags |= RQF_DONTPREP;
  948      8fb:       80 ca 80                or     $0x80,%dl
  949                 nvme_req(req)->retries = 0;
  950      8fe:       66 89 88 28 01 00 00    mov    %cx,0x128(%rax)
  951                 req->rq_flags |= RQF_DONTPREP;
  952      905:       89 50 1c                mov    %edx,0x1c(%rax)
  953         nvme_req(req)->cmd = cmd;
  954      908:       48 89 98 18 01 00 00    mov    %rbx,0x118(%rax)
  955 }
  956      90f:       5b                      pop    %rbx
  957      910:       c3                      retq
  958                 req = blk_mq_alloc_request(q, op, flags);
  959      911:       e8 00 00 00 00          callq  916 <nvme_alloc_request+0x66>
  960      916:       eb ca                   jmp    8e2 <nvme_alloc_request+0x32>
  961                 return cmd->fabrics.fctype & 1;
  962      918:       0f b6 73 04             movzbl 0x4(%rbx),%esi
  963      91c:       83 e6 01                and    $0x1,%esi
  964      91f:       eb a4                   jmp    8c5 <nvme_alloc_request+0x15>
  965      921:       0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
  966      926:       66 2e 0f 1f 84 00 00    nopw   %cs:0x0(%rax,%rax,1)
  967      92d:       00 00 00
  968
  969 0000000000000930 <nvme_end_sync_rq>:
  970 {
  971      930:       e8 00 00 00 00          callq  935 <nvme_end_sync_rq+0x5>
  972      935:       48 89 f8                mov    %rdi,%rax
  973         struct completion *waiting = rq->end_io_data;
  974      938:       48 8b bf 10 01 00 00    mov    0x110(%rdi),%rdi
  975         rq->end_io_data = NULL;
  976      93f:       48 c7 80 10 01 00 00    movq   $0x0,0x110(%rax)
  977      946:       00 00 00 00
  978         complete(waiting);
  979      94a:       e9 00 00 00 00          jmpq   94f <nvme_end_sync_rq+0x1f>
  980      94f:       90                      nop
  981



* Re: [PATCH V2 12/12] nvmet: use inline bio for passthru fast path
  2020-09-02  0:39     ` Chaitanya Kulkarni
@ 2020-09-02 15:41       ` Logan Gunthorpe
  2020-09-02 21:04         ` Chaitanya Kulkarni
  0 siblings, 1 reply; 44+ messages in thread
From: Logan Gunthorpe @ 2020-09-02 15:41 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, sagi



On 2020-09-01 6:39 p.m., Chaitanya Kulkarni wrote:
> On 9/1/20 10:29, Logan Gunthorpe wrote:
>>> +	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
>>> +		bio = &req->p.inline_bio;
>>> +		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
>>> +	} else {
>>> +		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>>> +	}
>>> +
>>> +	bio->bi_end_io = nvmet_passthru_bio_done;
>>>   	bio->bi_opf = req_op(rq) | op_flags;
>>> +	bio->bi_private = req;
>> I still think this would be cleaner to just do:
>>
>> if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
>> 	bio = &req->p.inline_bio;
>> 	bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
>> } else {
>> 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
>>          bio->bi_end_io = bio_put
> This is intentionally kept consistent with what we have for bdev-ns.

So then cleanup bdev-ns... Doing something in a less clean way because
somebody else did it is not good reasoning.

Logan


* Re: [PATCH V2 12/12] nvmet: use inline bio for passthru fast path
  2020-09-02 15:41       ` Logan Gunthorpe
@ 2020-09-02 21:04         ` Chaitanya Kulkarni
  0 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-02 21:04 UTC (permalink / raw)
  To: linux-nvme, hch, sagi; +Cc: Logan Gunthorpe

Christoph/Sagi,

On 9/2/20 08:41, Logan Gunthorpe wrote:
>> This is intentionally kept consistent with what we have for bdev-ns.
> So then cleanup bdev-ns... Doing something in a less clean way because
> somebody else did it is not good reasoning.
> 
> Logan
> 

Are you guys okay with this?


* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-09-01 17:21     ` Sagi Grimberg
  2020-09-01 21:16       ` Chaitanya Kulkarni
  2020-09-02  3:28       ` Chaitanya Kulkarni
@ 2020-09-02 22:02       ` Chaitanya Kulkarni
  2 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-02 22:02 UTC (permalink / raw)
  To: Sagi Grimberg, Baolin Wang; +Cc: kbusch, logang, hch, linux-nvme

On 9/1/20 10:21, Sagi Grimberg wrote:
> Chaitanya, can you check the objdump that the change actually
> resulted in a different machine code, I've seen patches of this
> sort lately where this annotation didn't change anything at all.
> 

I took the numbers again with and without this patch :-

# grep BW nvme-alloc*log
nvme-alloc-default10.fio.log:   read: IOPS=66.6k, BW=260MiB/s (273MB/s)
nvme-alloc-default1.fio.log:   read: IOPS=64.9k, BW=254MiB/s (266MB/s)
nvme-alloc-default2.fio.log:   read: IOPS=65.4k, BW=256MiB/s (268MB/s)
nvme-alloc-default3.fio.log:   read: IOPS=65.1k, BW=254MiB/s (267MB/s)
nvme-alloc-default4.fio.log:   read: IOPS=67.2k, BW=263MiB/s (275MB/s)
nvme-alloc-default5.fio.log:   read: IOPS=66.3k, BW=259MiB/s (272MB/s)
nvme-alloc-default6.fio.log:   read: IOPS=64.6k, BW=252MiB/s (264MB/s)
nvme-alloc-default7.fio.log:   read: IOPS=66.0k, BW=258MiB/s (270MB/s)
nvme-alloc-default8.fio.log:   read: IOPS=64.5k, BW=252MiB/s (264MB/s)
nvme-alloc-default9.fio.log:   read: IOPS=65.4k, BW=256MiB/s (268MB/s)
nvme-alloc-likely10.fio.log:   read: IOPS=65.5k, BW=256MiB/s (268MB/s)
nvme-alloc-likely1.fio.log:   read: IOPS=65.6k, BW=256MiB/s (269MB/s)
nvme-alloc-likely2.fio.log:   read: IOPS=65.9k, BW=257MiB/s (270MB/s)
nvme-alloc-likely3.fio.log:   read: IOPS=65.4k, BW=255MiB/s (268MB/s)
nvme-alloc-likely4.fio.log:   read: IOPS=66.2k, BW=259MiB/s (271MB/s)
nvme-alloc-likely5.fio.log:   read: IOPS=66.7k, BW=261MiB/s (273MB/s)
nvme-alloc-likely6.fio.log:   read: IOPS=66.7k, BW=261MiB/s (273MB/s)
nvme-alloc-likely7.fio.log:   read: IOPS=67.8k, BW=265MiB/s (278MB/s)
nvme-alloc-likely8.fio.log:   read: IOPS=66.5k, BW=260MiB/s (273MB/s)
nvme-alloc-likely9.fio.log:   read: IOPS=65.6k, BW=256MiB/s (269MB/s)
# ./get_fio_avg.sh

Average Bandwidth nvme-alloc-default 256.40000
Average Bandwidth nvme-alloc-likely 258.60000

Average lat util nvme-alloc-default 974.25900
Average lat util nvme-alloc-likely 965.78000


* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-08-31 22:26 ` [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request() Chaitanya Kulkarni
  2020-09-01  1:56   ` Baolin Wang
@ 2020-09-03 16:17   ` Christoph Hellwig
  2020-09-03 19:40     ` Chaitanya Kulkarni
  1 sibling, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2020-09-03 16:17 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, logang, hch, linux-nvme, sagi

Before we start micro-optimizing this stuff I suspect we might want
to just split the submit_cmd handlers so that we have a fast path
and the rest.  We've grown a few too many arguments by now...


* Re: [PATCH V2 02/12] nvmet: for pt I/O commands use likely for ns check
  2020-08-31 22:26 ` [PATCH V2 02/12] nvmet: for pt I/O commands use likely for ns check Chaitanya Kulkarni
@ 2020-09-03 16:18   ` Christoph Hellwig
  2020-09-03 18:21     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2020-09-03 16:18 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, logang, hch, linux-nvme, sagi

On Mon, Aug 31, 2020 at 03:26:57PM -0700, Chaitanya Kulkarni wrote:
> I/O commands (nvme_cmd_read, nvme_cmd_write) are the most common
> commands when accessing the passthru controller. Since the ns is
> always present for I/O commands, that condition is marked as likely.
> Annotate the post-submission ns check with likely as well, since it
> depends on the earlier likely ns check.

This really seems like weird over-optimization.  Any sensible branch
predictor should be able to find this out by itself.


* Re: [PATCH V2 04/12] nvmet: use consistent type for op_flag
  2020-08-31 22:26 ` [PATCH V2 04/12] nvmet: use consistent type for op_flag Chaitanya Kulkarni
@ 2020-09-03 16:20   ` Christoph Hellwig
  2020-09-03 18:35     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2020-09-03 16:20 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, logang, hch, linux-nvme, sagi

On Mon, Aug 31, 2020 at 03:26:59PM -0700, Chaitanya Kulkarni wrote:
> In nvmet_passthru_map_sg() we set bio->bi_opf to req_op OR'd with
> op_flags derived from the NVMe cmd. The variable bio->bi_opf is
> declared as unsigned int, so use the same type for op_flag.
> 
> Also, adjust the declaration order according to the new length.
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> Reviewed By: Logan Gunthorpe <logang@deltatee.com>

Actually all the op_flags used here are bogus and have no meaning for
passthrough commands.  So the right fix is to remove op_flags entirely.


* Re: [PATCH V2 05/12] nvmet: use unlikely for uncommon commands
  2020-08-31 22:27 ` [PATCH V2 05/12] nvmet: use unlikely for uncommon commands Chaitanya Kulkarni
@ 2020-09-03 16:20   ` Christoph Hellwig
  2020-09-03 18:37     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2020-09-03 16:20 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, logang, hch, linux-nvme, sagi

On Mon, Aug 31, 2020 at 03:27:00PM -0700, Chaitanya Kulkarni wrote:
> I/O commands (nvme_cmd_read, nvme_cmd_write) are the most common
> commands when accessing the passthru controller, and most controllers
> should not set command effects for r/w I/O commands (at least none
> that I know of at the moment). Also, the req->p.use_workqueue check is
> true only for two admin commands, which are low-frequency commands.
> 
> Annotate the use_workqueue and command effects checks with unlikely in
> the fast path.

What difference does this make?


* Re: [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release
  2020-09-01 17:18     ` Sagi Grimberg
  2020-09-02  0:31       ` Chaitanya Kulkarni
@ 2020-09-03 16:21       ` Christoph Hellwig
  2020-09-03 18:22         ` Chaitanya Kulkarni
  1 sibling, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2020-09-03 16:21 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: kbusch, Logan Gunthorpe, linux-nvme, Chaitanya Kulkarni, hch

On Tue, Sep 01, 2020 at 10:18:23AM -0700, Sagi Grimberg wrote:
> Lets overall split bug fixes from any enhancement. Bug fixes
> should go directly before we handle enhancements.

Yes.  I'd really like to see a minimal bug fix series first, then the
real cleanups, and maybe then we can get to the likely/unlikely
micro-optimizations.


* Re: [PATCH V2 11/12] nvmet: use minimized version of blk_rq_append_bio
  2020-08-31 22:27 ` [PATCH V2 11/12] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
@ 2020-09-03 16:23   ` Christoph Hellwig
  2020-09-03 18:23     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 44+ messages in thread
From: Christoph Hellwig @ 2020-09-03 16:23 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, logang, hch, linux-nvme, sagi

On Mon, Aug 31, 2020 at 03:27:06PM -0700, Chaitanya Kulkarni wrote:
> The function blk_rq_append_bio() is a generic API written for all
> types of drivers (including those needing bounce buffers) and for
> different contexts (where the request already has a bio, i.e.
> rq->bio != NULL).
> 
> It mainly does three things: calculate the segments, handle the bounce
> queue, and either call blk_rq_bio_prep() if req->bio == NULL or handle
> the low-level merge case.
> 
> The NVMe PCIe driver does not use the queue bounce mechanism. Finding
> this out for each request processed by the passthru code means
> blk_rq_append_bio() does extra work in the fast path for every request.

We need to be able to deal with all transport drivers.  None of them
uses bounce buffering, but the changelog needs to be correct.


* Re: [PATCH V2 02/12] nvmet: for pt I/O commands use likely for ns check
  2020-09-03 16:18   ` Christoph Hellwig
@ 2020-09-03 18:21     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-03 18:21 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: kbusch, logang, sagi, linux-nvme

On 9/3/20 09:18, Christoph Hellwig wrote:
> This really seems like weird over-optimization.  Any sensible branch
> predictor should be able to find this out by itself.

Okay, I'll drop it; we can rely on that.

Looking at the code, it seemed logical.


* Re: [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release
  2020-09-03 16:21       ` Christoph Hellwig
@ 2020-09-03 18:22         ` Chaitanya Kulkarni
  0 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-03 18:22 UTC (permalink / raw)
  To: Christoph Hellwig, Sagi Grimberg; +Cc: kbusch, Logan Gunthorpe, linux-nvme

On 9/3/20 09:22, Christoph Hellwig wrote:
> On Tue, Sep 01, 2020 at 10:18:23AM -0700, Sagi Grimberg wrote:
>> Lets overall split bug fixes from any enhancement. Bug fixes
>> should go directly before we handle enhancements.
> Yes.  I'd really like to see a minimal bug fix series, and for
> real cleanups and maybe then we can get to likely/unlikely
> micro-optimizations.
> 

Okay, I'll split the bug fixes from the rest of the cleanups.


* Re: [PATCH V2 11/12] nvmet: use minimized version of blk_rq_append_bio
  2020-09-03 16:23   ` Christoph Hellwig
@ 2020-09-03 18:23     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-03 18:23 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: kbusch, logang, sagi, linux-nvme

On 9/3/20 09:23, Christoph Hellwig wrote:
>> The NVMe PCIe driver does not use the queue bounce mechanism. In order
>> to find this out for each request processing in the passthru
>> blk_rq_append_bio() does extra work in the fast path for each request.
> We need to be able to deal with all transport drivers.  None of them
> uses bounce buffering, but the changelog needs to be correct.
> 

Okay, I'll mention those transports in the next changelog.


* Re: [PATCH V2 04/12] nvmet: use consistent type for op_flag
  2020-09-03 16:20   ` Christoph Hellwig
@ 2020-09-03 18:35     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-03 18:35 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: kbusch, logang, sagi, linux-nvme

On 9/3/20 09:20, Christoph Hellwig wrote:
> Actually all the op_flags used here are bogus and have no meaning for
> passthrough commands.  So the right fix is to remove op_flags entirely.
> 
Okay.


* Re: [PATCH V2 05/12] nvmet: use unlikely for uncommon commands
  2020-09-03 16:20   ` Christoph Hellwig
@ 2020-09-03 18:37     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-03 18:37 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: kbusch, logang, sagi, linux-nvme

On 9/3/20 09:20, Christoph Hellwig wrote:
> What difference does this make?
> 

Unlike the first patch (nvme_alloc_request()) I've not measured
performance numbers for this one.

I'll measure performance and inspect the generated assembly with this
patch; if it shows any improvement I'll keep the patch in the next
version along with the perf-number delta.
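The patch in question is a branch-prediction hint: marking the rarely
taken command paths unlikely() so the compiler lays them out off the
hot path. A minimal userspace sketch (hypothetical opcode split, not
the actual nvmet dispatcher):

```c
#include <assert.h>

/* Userspace stand-in for the kernel's branch-prediction hint. */
#define unlikely(x) __builtin_expect(!!(x), 0)

enum {
	nvme_admin_identify	= 0x06,	/* common */
	nvme_admin_vendor_start	= 0xC0,	/* vendor-specific, uncommon */
};

/* Hypothetical dispatcher: common opcodes take the straight-line fast
 * path; vendor-specific opcodes are hinted as unlikely so their
 * handling is emitted out of line. */
static int handle_admin_cmd(unsigned char opcode)
{
	if (unlikely(opcode >= nvme_admin_vendor_start))
		return -1;	/* slow path: uncommon command handling */
	return 0;		/* fast path */
}
```

Whether the hint actually changes the generated code is exactly what
the assembly inspection promised above would show.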

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-09-03 16:17   ` Christoph Hellwig
@ 2020-09-03 19:40     ` Chaitanya Kulkarni
  2020-09-08  8:57       ` Christoph Hellwig
  0 siblings, 1 reply; 44+ messages in thread
From: Chaitanya Kulkarni @ 2020-09-03 19:40 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: kbusch, logang, sagi, linux-nvme

On 9/3/20 09:17, Christoph Hellwig wrote:
> Before we start micro-optimizing this stuff I suspect we might want
> to just split the submit_cmd handlers so that we have a fast path
> and the rest.  We've grown a few too many arguments by now..
> 

Are you referring to either of these:

1. Split nvmet_passthru_execute_cmd() into
    nvmet_passthru_execute_io_cmd() and
    nvmet_passthru_execute_admin_cmd() handlers, just like bdev-ns and
    file-ns. (I don't think that is what you mean.)

OR

2. Split nvme_alloc_request() into nvme_alloc_request_qid_any() and
    nvme_alloc_request_qid(), then call nvme_alloc_request_qid_any() from
    the passthru fast path? (Or a variation of the same.)

OR

Can you please elaborate on how you would like the split done?
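For option 2, the split could look roughly like the following. These
are toy stand-in structs and signatures to show the shape of the idea,
not the real nvme_alloc_request() (which goes through
blk_mq_alloc_request() and returns a struct request *):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins so the split can be shown end to end. */
struct nvme_cmd { int opcode; };
struct request { struct nvme_cmd *cmd; int qid; };

#define NVME_QID_ANY	(-1)

/* Fast path: no queue-selection logic at all, so the passthru path
 * carries no extra arguments or branches. */
static void nvme_alloc_request_qid_any(struct request *rq,
				       struct nvme_cmd *cmd)
{
	rq->cmd = cmd;
	rq->qid = NVME_QID_ANY;
}

/* Queue-directed path: the caller pins the request to a specific
 * queue; only callers that need this pay for it. */
static void nvme_alloc_request_qid(struct request *rq,
				   struct nvme_cmd *cmd, int qid)
{
	rq->cmd = cmd;
	rq->qid = qid;
}
```

The point of the split is that the common caller never passes (or
branches on) a qid it doesn't care about.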

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request()
  2020-09-03 19:40     ` Chaitanya Kulkarni
@ 2020-09-08  8:57       ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2020-09-08  8:57 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, logang, Christoph Hellwig, linux-nvme, sagi

On Thu, Sep 03, 2020 at 07:40:14PM +0000, Chaitanya Kulkarni wrote:
> On 9/3/20 09:17, Christoph Hellwig wrote:
> > Before we start micro-optimizing this stuff I suspect we might want
> > to just split the submit_cmd handlers so that we have a fast path
> > and the rest.  We've grown a few too many arguments by now..
> > 
> 
> Are you referring to either of these:
> 
> 1. Split nvmet_passthru_execute_cmd() into
>     nvmet_passthru_execute_io_cmd() and
>     nvmet_passthru_execute_admin_cmd() handlers, just like bdev-ns and
>     file-ns. (I don't think that is what you mean.)
> 
> OR
> 
> 2. Split nvme_alloc_request() into nvme_alloc_request_qid_any() and
>     nvme_alloc_request_qid(), then call nvme_alloc_request_qid_any() from
>     the passthru fast path? (Or a variation of the same.)
> 
> OR
> 
> Can you please elaborate on how you would like the split done?

I'm not really sure yet, I just noticed that we created a convoluted
mess.  I'll see what I can do.

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2020-09-08  8:57 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-31 22:26 [PATCH V2 00/12] nvmet: passthru fixes and improvements Chaitanya Kulkarni
2020-08-31 22:26 ` [PATCH V2 01/12] nvme-core: annotate nvme_alloc_request() Chaitanya Kulkarni
2020-09-01  1:56   ` Baolin Wang
2020-09-01 17:21     ` Sagi Grimberg
2020-09-01 21:16       ` Chaitanya Kulkarni
2020-09-02  3:28       ` Chaitanya Kulkarni
2020-09-02 22:02       ` Chaitanya Kulkarni
2020-09-03 16:17   ` Christoph Hellwig
2020-09-03 19:40     ` Chaitanya Kulkarni
2020-09-08  8:57       ` Christoph Hellwig
2020-08-31 22:26 ` [PATCH V2 02/12] nvmet: for pt I/O commands use likely for ns check Chaitanya Kulkarni
2020-09-03 16:18   ` Christoph Hellwig
2020-09-03 18:21     ` Chaitanya Kulkarni
2020-08-31 22:26 ` [PATCH V2 03/12] nvmet: use consistent type with id->nlbaf Chaitanya Kulkarni
2020-09-01 15:55   ` Keith Busch
2020-09-01 21:17     ` Chaitanya Kulkarni
2020-09-01 21:31       ` Logan Gunthorpe
2020-09-01 22:03       ` Keith Busch
2020-08-31 22:26 ` [PATCH V2 04/12] nvmet: use consistent type for op_flag Chaitanya Kulkarni
2020-09-03 16:20   ` Christoph Hellwig
2020-09-03 18:35     ` Chaitanya Kulkarni
2020-08-31 22:27 ` [PATCH V2 05/12] nvmet: use unlikely for uncommon commands Chaitanya Kulkarni
2020-09-03 16:20   ` Christoph Hellwig
2020-09-03 18:37     ` Chaitanya Kulkarni
2020-08-31 22:27 ` [PATCH V2 06/12] nvmet: remove op_flags for write commands Chaitanya Kulkarni
2020-08-31 22:27 ` [PATCH V2 07/12] nvmet: decouple nvme_ctrl_get_by_path() Chaitanya Kulkarni
2020-09-01 17:00   ` Logan Gunthorpe
2020-08-31 22:27 ` [PATCH V2 08/12] nvmet: move get/put ctrl into dev open/release Chaitanya Kulkarni
2020-09-01 17:02   ` Logan Gunthorpe
2020-09-01 17:18     ` Sagi Grimberg
2020-09-02  0:31       ` Chaitanya Kulkarni
2020-09-03 16:21       ` Christoph Hellwig
2020-09-03 18:22         ` Chaitanya Kulkarni
2020-08-31 22:27 ` [PATCH V2 09/12] nvmet: fix nvme module ref count Oops Chaitanya Kulkarni
2020-09-01 17:23   ` Logan Gunthorpe
2020-08-31 22:27 ` [PATCH V2 10/12] block: move blk_rq_bio_prep() to linux/blk-mq.h Chaitanya Kulkarni
2020-08-31 22:27 ` [PATCH V2 11/12] nvmet: use minimized version of blk_rq_append_bio Chaitanya Kulkarni
2020-09-03 16:23   ` Christoph Hellwig
2020-09-03 18:23     ` Chaitanya Kulkarni
2020-08-31 22:27 ` [PATCH V2 12/12] nvmet: use inline bio for passthru fast path Chaitanya Kulkarni
2020-09-01 17:29   ` Logan Gunthorpe
2020-09-02  0:39     ` Chaitanya Kulkarni
2020-09-02 15:41       ` Logan Gunthorpe
2020-09-02 21:04         ` Chaitanya Kulkarni
