linux-nvme.lists.infradead.org archive mirror
* [PATCH V6 0/5] nvme-core: timeout related fixes and cleanup
@ 2020-11-04 20:37 Chaitanya Kulkarni
  2020-11-04 20:37 ` [PATCH V6 1/5] nvme: centralize setting req timeout Chaitanya Kulkarni
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-04 20:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, sagi, Chaitanya Kulkarni, hch

Hi,

This patch series uses NVME_IO_TIMEOUT for sync request submission and
user request submission when the timeout is not specified by the
caller, falling back to the admin timeout when the request queue's
queuedata is NULL, which is true for the admin queue.

Also in this version, I've added timeout value settings for the NVMeOF
passthru controller, given that passthru VUCs (both admin and I/O VUCs)
can differ in execution time from the general-purpose admin and I/O
command sets. The detailed scenario is as follows :-

Consider NVMeOF target is setup in the passthru mode and both target
and host modules are loaded with the default timeout values present
in the nvme-core.

1. User issues an nvme cmd from the host to the target via nvme-cli,
   where the timeout is taken from the controller log page, e.g. :-
   # nvme io-passthru --timeout=XXX ...
   OR
   # nvme admin-passthru --timeout=XXX ...
   The timeout value in the above example is greater than the default
   timeout value present in the host core on both the host machine and
   the target machine.
2. The host core on the host side sets the timeout for the host
   request and issues the NVMe cmd over the transport.
3. In the absence of [1], the target will use the default timeout
   value, which is smaller than the timeout set on the host side.
4. Since the timeout value is not passed over the transport, the
   request's timeout on the target side is smaller than on the host
   side, so the target-side command will time out.
   
   This breaks the scenario where the same command, with a timeout
   value from nvme-cli, executes successfully when issued to an NVMe
   PCIe ctrl but fails when executed on the NVMeOF passthru ctrl.

The last patch centralizes the setting of the request end_io_data
parameter into nvme_alloc_request() similar to what we have done for
setting up the request timeout value in this version.

Regards,
Chaitanya

[1] nvmet: add passthru admin timeout value attr
    nvmet: add passthru io timeout value attr

* Changes from V5:-

1. Rebase on the latest nvme-5.10.
2. Centralize setting up the request->timeout value into
   nvme_alloc_request().
3. Centralize setting up the request->end_io_data value into
   nvme_alloc_request().
4. Update the NVMeOF passthru nvmet_passthru_execute_cmd() such that
   it reflects the nvme_alloc_request() changes from the first patch
   when handling I/O and admin timeouts.
5. Move nvme_req_set_default_timeout() to host/core.c since there is
   only one caller now.

* Changes from V4:-

1. Rebase and retest on nvme-5.10.

* Changes from V3:-

1. Rebase and retest on nvme-5.10.
2. Update the cover letter with a detailed scenario explaining why the
   NVMeOF target passthru timeout is needed.
3. Wrap lines under 80 characters for the last patch.
4. Change nvme_default_timeout() -> nvme_set_req_default_timeout()
   such that it does not take a timeout argument.

* Changes from V2:-

1. Introduce the nvme_default_timeout() helper and use it in
   host/core.c.
2. Use nvme_default_timeout() in lightnvm.c.

* Changes from V1:-

1. Instead of using the qid to decide between I/O and admin timeouts,
   use the request queue's queuedata, which we only set for non-admin
   queues in __nvme_submit_sync_cmd().
2. Add a second patch to set the I/O timeout for nvme_submit_user_cmd().
3. Set the NVMeOF passthru ctrl timeout values with the default values
   from the nvme-core module.
4. Add admin and I/O timeout configfs attributes for the NVMeOF
   passthru controller.

Chaitanya Kulkarni (5):
  nvme: centralize setting req timeout
  nvmet: add passthru admin timeout value attr
  nvmet: add passthru io timeout value attr
  nvme: use consistent macro name for timeout
  nvme: centralize setting req end_io_data

 drivers/nvme/host/core.c       | 35 ++++++++++++-------
 drivers/nvme/host/fc.c         |  2 +-
 drivers/nvme/host/lightnvm.c   |  9 +++--
 drivers/nvme/host/nvme.h       |  5 +--
 drivers/nvme/host/pci.c        | 15 ++++----
 drivers/nvme/host/rdma.c       |  2 +-
 drivers/nvme/host/tcp.c        |  2 +-
 drivers/nvme/target/configfs.c | 64 ++++++++++++++++++++++++++++++++++
 drivers/nvme/target/loop.c     |  2 +-
 drivers/nvme/target/nvmet.h    |  2 ++
 drivers/nvme/target/passthru.c | 12 ++++++-
 11 files changed, 117 insertions(+), 33 deletions(-)

-- 
2.22.1


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH V6 1/5] nvme: centralize setting req timeout
  2020-11-04 20:37 [PATCH V6 0/5] nvme-core: timeout related fixes and cleanup Chaitanya Kulkarni
@ 2020-11-04 20:37 ` Chaitanya Kulkarni
  2020-11-04 22:10   ` Sagi Grimberg
  2020-11-05  7:31   ` Christoph Hellwig
  2020-11-04 20:37 ` [PATCH V6 2/5] nvmet: add passthru admin timeout value attr Chaitanya Kulkarni
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-04 20:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, sagi, Chaitanya Kulkarni, hch

The function nvme_alloc_request() is called from different contexts
(I/O and admin queue), and callers do not account for the I/O timeout
when calling from the I/O queue context.

Introduce the nvme_req_set_default_timeout() function, which sets the
default I/O or admin timeout value based on whether queuedata is set.
Call this function from nvme_alloc_request() when the newly added
timeout parameter is not set by the callers of nvme_alloc_request().

Update the respective callers of nvme_alloc_request() that set the
req->timeout value after allocating the request, so that the value is
now passed as the last parameter of nvme_alloc_request().

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c       | 28 +++++++++++++++++++---------
 drivers/nvme/host/lightnvm.c   |  9 ++++-----
 drivers/nvme/host/nvme.h       |  3 ++-
 drivers/nvme/host/pci.c        |  7 +++----
 drivers/nvme/target/passthru.c |  2 +-
 5 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 40ca71b29bb9..045ac445aee7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -518,8 +518,17 @@ static inline void nvme_clear_nvme_request(struct request *req)
 	}
 }
 
+static inline void nvme_req_set_default_timeout(struct request *req)
+{
+	if (req->q->queuedata)
+		req->timeout = NVME_IO_TIMEOUT;
+	else /* no queuedata implies admin queue */
+		req->timeout = ADMIN_TIMEOUT;
+}
+
 struct request *nvme_alloc_request(struct request_queue *q,
-		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
+		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid,
+		unsigned int timeout)
 {
 	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
 	struct request *req;
@@ -533,6 +542,11 @@ struct request *nvme_alloc_request(struct request_queue *q,
 	if (IS_ERR(req))
 		return req;
 
+	if (timeout)
+		req->timeout = timeout;
+	else
+		nvme_req_set_default_timeout(req);
+
 	req->cmd_flags |= REQ_FAILFAST_DRIVER;
 	nvme_clear_nvme_request(req);
 	nvme_req(req)->cmd = cmd;
@@ -897,12 +911,10 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 	struct request *req;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, flags, qid);
+	req = nvme_alloc_request(q, cmd, flags, qid, timeout);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
-	req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
-
 	if (buffer && bufflen) {
 		ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
 		if (ret)
@@ -1067,11 +1079,10 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	void *meta = NULL;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
+	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY, timeout);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
-	req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
 	nvme_req(req)->flags |= NVME_REQ_USERCMD;
 
 	if (ubuffer && bufflen) {
@@ -1141,12 +1152,11 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl)
 {
 	struct request *rq;
 
-	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, BLK_MQ_REQ_RESERVED,
-			NVME_QID_ANY);
+	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd,
+			BLK_MQ_REQ_RESERVED, NVME_QID_ANY, ctrl->kato * HZ);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
-	rq->timeout = ctrl->kato * HZ;
 	rq->end_io_data = ctrl;
 
 	blk_execute_rq_nowait(rq->q, NULL, rq, 0, nvme_keep_alive_end_io);
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 8e562d0f2c30..eabcbdd3a556 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -653,7 +653,8 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
 
 	nvme_nvm_rqtocmd(rqd, ns, cmd);
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY);
+	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY,
+			0);
 	if (IS_ERR(rq))
 		return rq;
 
@@ -767,15 +768,13 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q,
 	DECLARE_COMPLETION_ONSTACK(wait);
 	int ret = 0;
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0,
-			NVME_QID_ANY);
+	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0, NVME_QID_ANY,
+			timeout);
 	if (IS_ERR(rq)) {
 		ret = -ENOMEM;
 		goto err_cmd;
 	}
 
-	rq->timeout = timeout ? timeout : ADMIN_TIMEOUT;
-
 	if (ppa_buf && ppa_len) {
 		ppa_list = dma_pool_alloc(dev->dma_pool, GFP_KERNEL, &ppa_dma);
 		if (!ppa_list) {
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bc330bf0d3bd..f2b1a18d8bd7 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -610,7 +610,8 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
 
 #define NVME_QID_ANY -1
 struct request *nvme_alloc_request(struct request_queue *q,
-		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid);
+		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid,
+		unsigned int timeout);
 void nvme_cleanup_cmd(struct request *req);
 blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmd);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 0578ff253c47..923ca1d47031 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1304,13 +1304,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		 req->tag, nvmeq->qid);
 
 	abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
-			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY, ADMIN_TIMEOUT);
 	if (IS_ERR(abort_req)) {
 		atomic_inc(&dev->ctrl.abort_limit);
 		return BLK_EH_RESET_TIMER;
 	}
 
-	abort_req->timeout = ADMIN_TIMEOUT;
 	abort_req->end_io_data = NULL;
 	blk_execute_rq_nowait(abort_req->q, NULL, abort_req, 0, abort_endio);
 
@@ -2219,11 +2218,11 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
 	cmd.delete_queue.opcode = opcode;
 	cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
 
-	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY,
+				 ADMIN_TIMEOUT);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
-	req->timeout = ADMIN_TIMEOUT;
 	req->end_io_data = nvmeq;
 
 	init_completion(&nvmeq->delete_done);
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 8ee94f056898..2905c039b479 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -244,7 +244,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 		q = ns->queue;
 	}
 
-	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY);
+	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY, 0);
 	if (IS_ERR(rq)) {
 		status = NVME_SC_INTERNAL;
 		goto out_put_ns;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V6 2/5] nvmet: add passthru admin timeout value attr
  2020-11-04 20:37 [PATCH V6 0/5] nvme-core: timeout related fixes and cleanup Chaitanya Kulkarni
  2020-11-04 20:37 ` [PATCH V6 1/5] nvme: centralize setting req timeout Chaitanya Kulkarni
@ 2020-11-04 20:37 ` Chaitanya Kulkarni
  2020-11-04 22:13   ` Sagi Grimberg
  2020-11-05  7:33   ` Christoph Hellwig
  2020-11-04 20:37 ` [PATCH V6 3/5] nvmet: add passthru io " Chaitanya Kulkarni
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-04 20:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, sagi, Chaitanya Kulkarni, hch

An NVMeOF controller in passthru mode is capable of handling a wide
set of admin commands, including vendor-unique passthru admin commands
(VUACs).

The VUACs are used to read large driver logs and can take longer than
default NVMe commands; that is, for passthru requests the timeout value
may differ from the passthru controller's default timeout values
(nvme-core: admin_timeout).

Add a configfs attribute so that the user can set the admin timeout
value. If this configfs value is not set, nvme_alloc_request() will use
the ADMIN_TIMEOUT value.

Setting this attribute is only allowed while the ctrl is disabled, to
avoid RCU calls in the fast path; in the future, when needed, we can
always make it fast-path friendly using RCU.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/configfs.c | 32 ++++++++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c | 11 ++++++++++-
 3 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 37e1d7784e17..7625f8892fb5 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -736,9 +736,41 @@ static ssize_t nvmet_passthru_enable_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_passthru_, enable);
 
+static ssize_t nvmet_passthru_admin_timeout_show(struct config_item *item,
+		char *page)
+{
+	return sprintf(page, "%u\n", to_subsys(item->ci_parent)->admin_timeout);
+}
+
+static ssize_t nvmet_passthru_admin_timeout_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_subsys *subsys = to_subsys(item->ci_parent);
+	unsigned int admin_timeout;
+	int ret = 0;
+
+	mutex_lock(&subsys->lock);
+	if (subsys->passthru_ctrl) {
+		pr_err("disable passthru ctrl before setting admin_timeout\n");
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (kstrtouint(page, 0, &admin_timeout)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	subsys->admin_timeout = admin_timeout;
+out:
+	mutex_unlock(&subsys->lock);
+	return ret ? ret : count;
+}
+CONFIGFS_ATTR(nvmet_passthru_, admin_timeout);
+
 static struct configfs_attribute *nvmet_passthru_attrs[] = {
 	&nvmet_passthru_attr_device_path,
 	&nvmet_passthru_attr_enable,
+	&nvmet_passthru_attr_admin_timeout,
 	NULL,
 };
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 559a15ccc322..a0c80e5179a2 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -249,6 +249,7 @@ struct nvmet_subsys {
 	struct nvme_ctrl	*passthru_ctrl;
 	char			*passthru_ctrl_path;
 	struct config_group	passthru_group;
+	unsigned int		admin_timeout;
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
 };
 
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 2905c039b479..93d48b562f6d 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -221,6 +221,14 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	return 0;
 }
 
+static inline unsigned int nvmet_passthru_req_timeout(struct nvmet_req *req,
+						      struct request_queue *q)
+{
+	unsigned int admin_timeout = req->sq->ctrl->subsys->admin_timeout;
+
+	return q->queuedata == NULL ? admin_timeout : 0;
+}
+
 static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 {
 	struct nvme_ctrl *ctrl = nvmet_req_passthru_ctrl(req);
@@ -244,7 +252,8 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 		q = ns->queue;
 	}
 
-	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY, 0);
+	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY,
+				nvmet_passthru_req_timeout(req, q));
 	if (IS_ERR(rq)) {
 		status = NVME_SC_INTERNAL;
 		goto out_put_ns;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V6 3/5] nvmet: add passthru io timeout value attr
  2020-11-04 20:37 [PATCH V6 0/5] nvme-core: timeout related fixes and cleanup Chaitanya Kulkarni
  2020-11-04 20:37 ` [PATCH V6 1/5] nvme: centralize setting req timeout Chaitanya Kulkarni
  2020-11-04 20:37 ` [PATCH V6 2/5] nvmet: add passthru admin timeout value attr Chaitanya Kulkarni
@ 2020-11-04 20:37 ` Chaitanya Kulkarni
  2020-11-04 22:13   ` Sagi Grimberg
  2020-11-04 20:37 ` [PATCH V6 4/5] nvme: use consistent macro name for timeout Chaitanya Kulkarni
  2020-11-04 20:37 ` [PATCH V6 5/5] nvme: centralize setting req end_io_data Chaitanya Kulkarni
  4 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-04 20:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, sagi, Chaitanya Kulkarni, hch

An NVMe controller in passthru mode is capable of handling a wide set
of I/O commands, including vendor-unique passthru I/O commands (VUICs).

The VUICs can take longer than default NVMe commands; that is, for
passthru requests the timeout value may differ from the passthru
controller's default timeout values (nvme-core: io_timeout).

Add a configfs attribute so that the user can set the I/O timeout
value. If this configfs value is not set, nvme_alloc_request() will use
the NVME_IO_TIMEOUT value.

Setting this attribute is only allowed while the ctrl is disabled, to
avoid RCU calls in the fast path; in the future, when needed, we can
always make it fast-path friendly using RCU.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/configfs.c | 32 ++++++++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c |  3 ++-
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 7625f8892fb5..5beae49ba6ad 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -767,10 +767,42 @@ static ssize_t nvmet_passthru_admin_timeout_store(struct config_item *item,
 }
 CONFIGFS_ATTR(nvmet_passthru_, admin_timeout);
 
+static ssize_t nvmet_passthru_io_timeout_show(struct config_item *item,
+		char *page)
+{
+	return sprintf(page, "%u\n", to_subsys(item->ci_parent)->io_timeout);
+}
+
+static ssize_t nvmet_passthru_io_timeout_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_subsys *subsys = to_subsys(item->ci_parent);
+	unsigned int io_timeout;
+	int ret = 0;
+
+	mutex_lock(&subsys->lock);
+	if (subsys->passthru_ctrl) {
+		pr_err("disable passthru ctrl before setting io_timeout\n");
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (kstrtouint(page, 0, &io_timeout)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	subsys->io_timeout = io_timeout;
+out:
+	mutex_unlock(&subsys->lock);
+	return ret ? ret : count;
+}
+CONFIGFS_ATTR(nvmet_passthru_, io_timeout);
+
 static struct configfs_attribute *nvmet_passthru_attrs[] = {
 	&nvmet_passthru_attr_device_path,
 	&nvmet_passthru_attr_enable,
 	&nvmet_passthru_attr_admin_timeout,
+	&nvmet_passthru_attr_io_timeout,
 	NULL,
 };
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index a0c80e5179a2..2f9635273629 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -250,6 +250,7 @@ struct nvmet_subsys {
 	char			*passthru_ctrl_path;
 	struct config_group	passthru_group;
 	unsigned int		admin_timeout;
+	unsigned int		io_timeout;
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
 };
 
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 93d48b562f6d..d099f288d296 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -225,8 +225,9 @@ static inline unsigned int nvmet_passthru_req_timeout(struct nvmet_req *req,
 						      struct request_queue *q)
 {
 	unsigned int admin_timeout = req->sq->ctrl->subsys->admin_timeout;
+	unsigned int io_timeout = req->sq->ctrl->subsys->io_timeout;
 
-	return q->queuedata == NULL ? admin_timeout : 0;
+	return q->queuedata == NULL ? admin_timeout : io_timeout;
 }
 
 static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V6 4/5] nvme: use consistent macro name for timeout
  2020-11-04 20:37 [PATCH V6 0/5] nvme-core: timeout related fixes and cleanup Chaitanya Kulkarni
                   ` (2 preceding siblings ...)
  2020-11-04 20:37 ` [PATCH V6 3/5] nvmet: add passthru io " Chaitanya Kulkarni
@ 2020-11-04 20:37 ` Chaitanya Kulkarni
  2020-11-04 22:14   ` Sagi Grimberg
  2020-11-04 20:37 ` [PATCH V6 5/5] nvme: centralize setting req end_io_data Chaitanya Kulkarni
  4 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-04 20:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, sagi, Chaitanya Kulkarni, hch

This is purely a cleanup patch: add the NVME_ prefix to ADMIN_TIMEOUT
to make it consistent with NVME_IO_TIMEOUT.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c   | 5 +++--
 drivers/nvme/host/fc.c     | 2 +-
 drivers/nvme/host/nvme.h   | 2 +-
 drivers/nvme/host/pci.c    | 8 ++++----
 drivers/nvme/host/rdma.c   | 2 +-
 drivers/nvme/host/tcp.c    | 2 +-
 drivers/nvme/target/loop.c | 2 +-
 7 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 045ac445aee7..29d9b5bb29b2 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -523,7 +523,7 @@ static inline void nvme_req_set_default_timeout(struct request *req)
 	if (req->q->queuedata)
 		req->timeout = NVME_IO_TIMEOUT;
 	else /* no queuedata implies admin queue */
-		req->timeout = ADMIN_TIMEOUT;
+		req->timeout = NVME_ADMIN_TIMEOUT;
 }
 
 struct request *nvme_alloc_request(struct request_queue *q,
@@ -2274,7 +2274,8 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
 	cmd.common.cdw11 = cpu_to_le32(len);
 
 	return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
-				      ADMIN_TIMEOUT, NVME_QID_ANY, 1, 0, false);
+				      NVME_ADMIN_TIMEOUT, NVME_QID_ANY, 1, 0,
+				      false);
 }
 EXPORT_SYMBOL_GPL(nvme_sec_submit);
 #endif /* CONFIG_BLK_SED_OPAL */
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index f4c246462658..38373a0e86ef 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3479,7 +3479,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 			    ctrl->lport->ops->fcprqst_priv_sz);
 	ctrl->admin_tag_set.driver_data = ctrl;
 	ctrl->admin_tag_set.nr_hw_queues = 1;
-	ctrl->admin_tag_set.timeout = ADMIN_TIMEOUT;
+	ctrl->admin_tag_set.timeout = NVME_ADMIN_TIMEOUT;
 	ctrl->admin_tag_set.flags = BLK_MQ_F_NO_SCHED;
 
 	ret = blk_mq_alloc_tag_set(&ctrl->admin_tag_set);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f2b1a18d8bd7..78f0ac0fb39b 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -24,7 +24,7 @@ extern unsigned int nvme_io_timeout;
 #define NVME_IO_TIMEOUT	(nvme_io_timeout * HZ)
 
 extern unsigned int admin_timeout;
-#define ADMIN_TIMEOUT	(admin_timeout * HZ)
+#define NVME_ADMIN_TIMEOUT	(admin_timeout * HZ)
 
 #define NVME_DEFAULT_KATO	5
 #define NVME_KATO_GRACE		10
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 923ca1d47031..6089660b2a24 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1304,7 +1304,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		 req->tag, nvmeq->qid);
 
 	abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
-			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY, ADMIN_TIMEOUT);
+			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY, NVME_ADMIN_TIMEOUT);
 	if (IS_ERR(abort_req)) {
 		atomic_inc(&dev->ctrl.abort_limit);
 		return BLK_EH_RESET_TIMER;
@@ -1606,7 +1606,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
 		dev->admin_tagset.nr_hw_queues = 1;
 
 		dev->admin_tagset.queue_depth = NVME_AQ_MQ_TAG_DEPTH;
-		dev->admin_tagset.timeout = ADMIN_TIMEOUT;
+		dev->admin_tagset.timeout = NVME_ADMIN_TIMEOUT;
 		dev->admin_tagset.numa_node = dev->ctrl.numa_node;
 		dev->admin_tagset.cmd_size = sizeof(struct nvme_iod);
 		dev->admin_tagset.flags = BLK_MQ_F_NO_SCHED;
@@ -2219,7 +2219,7 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
 	cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
 
 	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY,
-				 ADMIN_TIMEOUT);
+				 NVME_ADMIN_TIMEOUT);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -2238,7 +2238,7 @@ static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)
 	unsigned long timeout;
 
  retry:
-	timeout = ADMIN_TIMEOUT;
+	timeout = NVME_ADMIN_TIMEOUT;
 	while (nr_queues > 0) {
 		if (nvme_delete_queue(&dev->queues[nr_queues], opcode))
 			break;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 75d071d34319..d3a4e8629485 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -797,7 +797,7 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
 				NVME_RDMA_DATA_SGL_SIZE;
 		set->driver_data = ctrl;
 		set->nr_hw_queues = 1;
-		set->timeout = ADMIN_TIMEOUT;
+		set->timeout = NVME_ADMIN_TIMEOUT;
 		set->flags = BLK_MQ_F_NO_SCHED;
 	} else {
 		set = &ctrl->tag_set;
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index c0c33320fe65..1ba659927442 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1568,7 +1568,7 @@ static struct blk_mq_tag_set *nvme_tcp_alloc_tagset(struct nvme_ctrl *nctrl,
 		set->cmd_size = sizeof(struct nvme_tcp_request);
 		set->driver_data = ctrl;
 		set->nr_hw_queues = 1;
-		set->timeout = ADMIN_TIMEOUT;
+		set->timeout = NVME_ADMIN_TIMEOUT;
 	} else {
 		set = &ctrl->tag_set;
 		memset(set, 0, sizeof(*set));
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 5ab46d3130e7..979dfeb39d9b 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -345,7 +345,7 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
 		NVME_INLINE_SG_CNT * sizeof(struct scatterlist);
 	ctrl->admin_tag_set.driver_data = ctrl;
 	ctrl->admin_tag_set.nr_hw_queues = 1;
-	ctrl->admin_tag_set.timeout = ADMIN_TIMEOUT;
+	ctrl->admin_tag_set.timeout = NVME_ADMIN_TIMEOUT;
 	ctrl->admin_tag_set.flags = BLK_MQ_F_NO_SCHED;
 
 	ctrl->queues[0].ctrl = ctrl;
-- 
2.22.1



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V6 5/5] nvme: centralize setting req end_io_data
  2020-11-04 20:37 [PATCH V6 0/5] nvme-core: timeout related fixes and cleanup Chaitanya Kulkarni
                   ` (3 preceding siblings ...)
  2020-11-04 20:37 ` [PATCH V6 4/5] nvme: use consistent macro name for timeout Chaitanya Kulkarni
@ 2020-11-04 20:37 ` Chaitanya Kulkarni
  2020-11-04 22:17   ` Sagi Grimberg
  4 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2020-11-04 20:37 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, sagi, Chaitanya Kulkarni, hch

The host side code allocates requests in several places using
nvme_alloc_request() and then initializes the allocated request's
end_io_data, which repeats code and would accumulate more duplication
in the future.

Add an end_io_data request parameter that initializes the block layer
request in nvme_alloc_request(), update the respective callers, and
remove the duplicate code.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/core.c       | 12 ++++++------
 drivers/nvme/host/lightnvm.c   |  4 ++--
 drivers/nvme/host/nvme.h       |  2 +-
 drivers/nvme/host/pci.c        |  8 +++-----
 drivers/nvme/target/passthru.c |  2 +-
 5 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 29d9b5bb29b2..8bebfd64a69b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -528,7 +528,7 @@ static inline void nvme_req_set_default_timeout(struct request *req)
 
 struct request *nvme_alloc_request(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid,
-		unsigned int timeout)
+		unsigned int timeout, void *end_io_data)
 {
 	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
 	struct request *req;
@@ -547,6 +547,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
 	else
 		nvme_req_set_default_timeout(req);
 
+	req->end_io_data = end_io_data;
 	req->cmd_flags |= REQ_FAILFAST_DRIVER;
 	nvme_clear_nvme_request(req);
 	nvme_req(req)->cmd = cmd;
@@ -911,7 +912,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 	struct request *req;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, flags, qid, timeout);
+	req = nvme_alloc_request(q, cmd, flags, qid, timeout, NULL);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1079,7 +1080,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	void *meta = NULL;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY, timeout);
+	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY, timeout, NULL);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1153,12 +1154,11 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl)
 	struct request *rq;
 
 	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd,
-			BLK_MQ_REQ_RESERVED, NVME_QID_ANY, ctrl->kato * HZ);
+			BLK_MQ_REQ_RESERVED, NVME_QID_ANY, ctrl->kato * HZ,
+			ctrl);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
-	rq->end_io_data = ctrl;
-
 	blk_execute_rq_nowait(rq->q, NULL, rq, 0, nvme_keep_alive_end_io);
 
 	return 0;
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index eabcbdd3a556..837c5e5ce3b3 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -654,7 +654,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
 	nvme_nvm_rqtocmd(rqd, ns, cmd);
 
 	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY,
-			0);
+			0, NULL);
 	if (IS_ERR(rq))
 		return rq;
 
@@ -769,7 +769,7 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q,
 	int ret = 0;
 
 	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0, NVME_QID_ANY,
-			timeout);
+			timeout, NULL);
 	if (IS_ERR(rq)) {
 		ret = -ENOMEM;
 		goto err_cmd;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 78f0ac0fb39b..8076b5147f3e 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -611,7 +611,7 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
 #define NVME_QID_ANY -1
 struct request *nvme_alloc_request(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid,
-		unsigned int timeout);
+		unsigned int timeout, void *end_io_data);
 void nvme_cleanup_cmd(struct request *req);
 blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmd);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6089660b2a24..33a920b93601 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1304,13 +1304,13 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		 req->tag, nvmeq->qid);
 
 	abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
-			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY, NVME_ADMIN_TIMEOUT);
+			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY, NVME_ADMIN_TIMEOUT,
+			NULL);
 	if (IS_ERR(abort_req)) {
 		atomic_inc(&dev->ctrl.abort_limit);
 		return BLK_EH_RESET_TIMER;
 	}
 
-	abort_req->end_io_data = NULL;
 	blk_execute_rq_nowait(abort_req->q, NULL, abort_req, 0, abort_endio);
 
 	/*
@@ -2219,12 +2219,10 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
 	cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
 
 	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY,
-				 NVME_ADMIN_TIMEOUT);
+				 NVME_ADMIN_TIMEOUT, nvmeq);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
-	req->end_io_data = nvmeq;
-
 	init_completion(&nvmeq->delete_done);
 	blk_execute_rq_nowait(q, NULL, req, false,
 			opcode == nvme_admin_delete_cq ?
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index d099f288d296..6b76087ca421 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -254,7 +254,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 	}
 
 	rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY,
-				nvmet_passthru_req_timeout(req, q));
+				nvmet_passthru_req_timeout(req, q), NULL);
 	if (IS_ERR(rq)) {
 		status = NVME_SC_INTERNAL;
 		goto out_put_ns;
-- 
2.22.1


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH V6 1/5] nvme: centralize setting req timeout
  2020-11-04 20:37 ` [PATCH V6 1/5] nvme: centralize setting req timeout Chaitanya Kulkarni
@ 2020-11-04 22:10   ` Sagi Grimberg
  2020-11-05  7:31   ` Christoph Hellwig
  1 sibling, 0 replies; 15+ messages in thread
From: Sagi Grimberg @ 2020-11-04 22:10 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH V6 2/5] nvmet: add passthru admin timeout value attr
  2020-11-04 20:37 ` [PATCH V6 2/5] nvmet: add passthru admin timeout value attr Chaitanya Kulkarni
@ 2020-11-04 22:13   ` Sagi Grimberg
       [not found]     ` <BYAPR04MB49655CAAD9809FFC21D5418686EE0@BYAPR04MB4965.namprd04.prod.outlook.com>
  2020-11-05  7:33   ` Christoph Hellwig
  1 sibling, 1 reply; 15+ messages in thread
From: Sagi Grimberg @ 2020-11-04 22:13 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch



On 11/4/20 12:37 PM, Chaitanya Kulkarni wrote:
> An NVMeOF controller in passthru mode is capable of handling a wide set
> of admin commands, including vendor-unique passthru admin commands
> (VUACs).
> 
> The VUACs are used to read large driver logs and can take longer than
> default NVMe commands; that is, for passthru requests the timeout value
> may differ from the passthru controller's default timeout values
> (nvme-core:admin_timeout).
> 
> Add a configfs attribute so that the user can set the admin timeout
> value. If this configfs value is not set, nvme_alloc_request() will use
> the ADMIN_TIMEOUT value.
> 
> This attribute setting is only allowed when the ctrl is disabled to
> avoid RCU calls in the fast path; in the future, when needed, we can
> always make it fast-path friendly using RCU.

What's not safe here? a command will get a single timeout, either the
old one or the new one, not sure what can go wrong here...


* Re: [PATCH V6 3/5] nvmet: add passthru io timeout value attr
  2020-11-04 20:37 ` [PATCH V6 3/5] nvmet: add passthru io " Chaitanya Kulkarni
@ 2020-11-04 22:13   ` Sagi Grimberg
  0 siblings, 0 replies; 15+ messages in thread
From: Sagi Grimberg @ 2020-11-04 22:13 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch



On 11/4/20 12:37 PM, Chaitanya Kulkarni wrote:
> An NVMe controller in passthru mode is capable of handling a wide set
> of commands, including vendor-unique passthru I/O commands (VUICs).
> 
> The VUICs can take longer than default NVMe commands; that is, for
> passthru requests the timeout value may differ from the passthru
> controller's default timeout values (nvme-core:io_timeout).
> 
> Add a configfs attribute so that the user can set the I/O timeout
> value. If this configfs value is not set, nvme_alloc_request() will use
> the NVME_IO_TIMEOUT value.
> 
> This attribute setting is only allowed when the ctrl is disabled to
> avoid RCU calls in the fast path; in the future, when needed, we can
> always make it fast-path friendly using RCU.

Same question


* Re: [PATCH V6 4/5] nvme: use consistent macro name for timeout
  2020-11-04 20:37 ` [PATCH V6 4/5] nvme: use consistent macro name for timeout Chaitanya Kulkarni
@ 2020-11-04 22:14   ` Sagi Grimberg
  0 siblings, 0 replies; 15+ messages in thread
From: Sagi Grimberg @ 2020-11-04 22:14 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch

OK...

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH V6 5/5] nvme: centralize setting req end_io_data
  2020-11-04 20:37 ` [PATCH V6 5/5] nvme: centralize setting req end_io_data Chaitanya Kulkarni
@ 2020-11-04 22:17   ` Sagi Grimberg
  2020-11-05  7:28     ` Christoph Hellwig
  0 siblings, 1 reply; 15+ messages in thread
From: Sagi Grimberg @ 2020-11-04 22:17 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch

> The host-side code allocates the request in several places using
> nvme_alloc_request() and then initialises the allocated
> request->end_io_data, which repeats the same pattern and will lead to
> more duplicate code in the future.
> 
> Add an end_io_data parameter so that nvme_alloc_request() initializes
> the block layer request itself, update the respective callers, and
> remove the duplicated assignments.

There are few callers that actually put something there, so I'm not
sure this is an improvement.. The alloc signature is getting kinda
bloated now...


* Re: [PATCH V6 5/5] nvme: centralize setting req end_io_data
  2020-11-04 22:17   ` Sagi Grimberg
@ 2020-11-05  7:28     ` Christoph Hellwig
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-11-05  7:28 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: kbusch, linux-nvme, Chaitanya Kulkarni, hch

On Wed, Nov 04, 2020 at 02:17:05PM -0800, Sagi Grimberg wrote:
>> The host-side code allocates the request in several places using
>> nvme_alloc_request() and then initialises the allocated
>> request->end_io_data, which repeats the same pattern and will lead to
>> more duplicate code in the future.
>>
>> Add an end_io_data parameter so that nvme_alloc_request() initializes
>> the block layer request itself, update the respective callers, and
>> remove the duplicated assignments.
>
> There are few callers that actually put something there, so I'm not
> sure this is an improvement.. The alloc signature is getting kinda
> bloated now...

Yes, I don't really see the point - the request is available to the
caller, so they can trivially assign it.


* Re: [PATCH V6 1/5] nvme: centralize setting req timeout
  2020-11-04 20:37 ` [PATCH V6 1/5] nvme: centralize setting req timeout Chaitanya Kulkarni
  2020-11-04 22:10   ` Sagi Grimberg
@ 2020-11-05  7:31   ` Christoph Hellwig
  1 sibling, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-11-05  7:31 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, sagi, linux-nvme, hch

On Wed, Nov 04, 2020 at 12:37:32PM -0800, Chaitanya Kulkarni wrote:
>  
> +static inline void nvme_req_set_default_timeout(struct request *req)
> +{
> +	if (req->q->queuedata)
> +		req->timeout = NVME_IO_TIMEOUT;
> +	else /* no queuedata implies admin queue */
> +		req->timeout = ADMIN_TIMEOUT;
> +}

This just has a single caller now, so I think we can open code it.

> +
>  struct request *nvme_alloc_request(struct request_queue *q,
> -		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
> +		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid,
> +		unsigned int timeout)
>  {
>  	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
>  	struct request *req;
> @@ -533,6 +542,11 @@ struct request *nvme_alloc_request(struct request_queue *q,
>  	if (IS_ERR(req))
>  		return req;
>  
> +	if (timeout)
> +		req->timeout = timeout;
> +	else
> +		nvme_req_set_default_timeout(req);

No need to pass the timeout argument here.  The callers that care
can set it themselves instead of adding another parameter.


* Re: [PATCH V6 2/5] nvmet: add passthru admin timeout value attr
       [not found]     ` <BYAPR04MB49655CAAD9809FFC21D5418686EE0@BYAPR04MB4965.namprd04.prod.outlook.com>
@ 2020-11-05  7:32       ` hch
  0 siblings, 0 replies; 15+ messages in thread
From: hch @ 2020-11-05  7:32 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, Sagi Grimberg, linux-nvme, hch

On Thu, Nov 05, 2020 at 12:24:41AM +0000, Chaitanya Kulkarni wrote:
> Are you saying that since it is a scaler we can get away without
> 
> locking ? If so I'll send V7 without locking in the configfs.
---end quoted text---

Yes.  Strictly speaking we should probably use WRITE_ONCE and READ_ONCE
for it these days to make the memory model happy, but it has always worked.

Btw, your mailer hasn't properly quoted replies for the past few weeks.


* Re: [PATCH V6 2/5] nvmet: add passthru admin timeout value attr
  2020-11-04 20:37 ` [PATCH V6 2/5] nvmet: add passthru admin timeout value attr Chaitanya Kulkarni
  2020-11-04 22:13   ` Sagi Grimberg
@ 2020-11-05  7:33   ` Christoph Hellwig
  1 sibling, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-11-05  7:33 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: kbusch, sagi, linux-nvme, hch

On Wed, Nov 04, 2020 at 12:37:33PM -0800, Chaitanya Kulkarni wrote:
> An NVMeOF controller in passthru mode is capable of handling a wide set
> of admin commands, including vendor-unique passthru admin commands
> (VUACs).

That is not a term that exists in NVMe.  NVMe always speaks of
vendor-specific commands, fields, etc.


Thread overview: 15+ messages
2020-11-04 20:37 [PATCH V6 0/5] nvme-core: timeout related fixes and cleanup Chaitanya Kulkarni
2020-11-04 20:37 ` [PATCH V6 1/5] nvme: centralize setting req timeout Chaitanya Kulkarni
2020-11-04 22:10   ` Sagi Grimberg
2020-11-05  7:31   ` Christoph Hellwig
2020-11-04 20:37 ` [PATCH V6 2/5] nvmet: add passthru admin timeout value attr Chaitanya Kulkarni
2020-11-04 22:13   ` Sagi Grimberg
     [not found]     ` <BYAPR04MB49655CAAD9809FFC21D5418686EE0@BYAPR04MB4965.namprd04.prod.outlook.com>
2020-11-05  7:32       ` hch
2020-11-05  7:33   ` Christoph Hellwig
2020-11-04 20:37 ` [PATCH V6 3/5] nvmet: add passthru io " Chaitanya Kulkarni
2020-11-04 22:13   ` Sagi Grimberg
2020-11-04 20:37 ` [PATCH V6 4/5] nvme: use consistent macro name for timeout Chaitanya Kulkarni
2020-11-04 22:14   ` Sagi Grimberg
2020-11-04 20:37 ` [PATCH V6 5/5] nvme: centralize setting req end_io_data Chaitanya Kulkarni
2020-11-04 22:17   ` Sagi Grimberg
2020-11-05  7:28     ` Christoph Hellwig
