* [PATCH 0/9] nvme-core: trivial cleanups
@ 2023-03-23  3:36 Chaitanya Kulkarni
  2023-03-23  3:36 ` [PATCH 1/9] nvme-core: use uint type for shutdown timeout Chaitanya Kulkarni
                   ` (8 more replies)
  0 siblings, 9 replies; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Hi,

Small code cleanups for nvme-core; the last patch fixes the really
long argument list of __nvme_submit_sync_cmd().

Please have a closer look at patches 4 and 9.

There are no functional changes in any of these patches.

-ck

Chaitanya Kulkarni (9):
  nvme-core: use uint type for shutdown timeout
  nvme-core: remove unnecessary else
  nvme-core: remove extra line at end of function
  nvme-core: code cleanup for __nvme_check_ready()
  nvme-core: use normal pattern
  nvme-core: open code nvme_delete_ctrl_sync()
  nvme-core: cleanup for nvme_set_latency_tolerance
  nvme-core: remove unnecessary else
  nvme-core: fix nvme_submit_sync_cmd() args

 drivers/nvme/host/auth.c    |  13 ++-
 drivers/nvme/host/core.c    | 158 +++++++++++++++++++++---------------
 drivers/nvme/host/fabrics.c |  60 +++++++++++---
 drivers/nvme/host/nvme.h    |  16 +++-
 4 files changed, 163 insertions(+), 84 deletions(-)

This series passes blktests for nvme-loop and nvme-tcp:

blktests (master) # ./check nvme
nvme/002 (create many subsystems and test discovery)         [passed]
    runtime  19.504s  ...  19.508s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.087s  ...  10.091s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  1.440s  ...  1.476s
nvme/005 (reset local loopback target)                       [passed]
    runtime  1.798s  ...  1.814s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.061s  ...  0.053s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.032s  ...  0.031s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  1.458s  ...  1.455s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  1.440s  ...  1.433s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  71.351s  ...  82.214s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  73.525s  ...  70.006s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  77.108s  ...  77.670s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  67.936s  ...  67.830s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  4.152s  ...  4.324s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  3.710s  ...  3.724s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
    runtime  13.086s  ...  13.120s
nvme/017 (create/delete many file-ns and test discovery)     [passed]
    runtime  13.287s  ...  13.261s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  1.449s  ...  1.456s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  1.435s  ...  1.455s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  1.422s  ...  1.419s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  1.430s  ...  1.445s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  1.758s  ...  1.754s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  1.456s  ...  1.457s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  1.416s  ...  1.432s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  1.426s  ...  1.422s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  1.447s  ...  1.430s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  1.423s  ...  1.418s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  1.434s  ...  1.416s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  1.557s  ...  1.548s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.204s  ...  0.199s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  3.906s  ...  4.045s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
    runtime  0.011s  ...  0.012s
nvme/040 (test nvme fabrics controller reset/disconnect operation during I/O) [passed]
    runtime  7.869s  ...  7.982s
nvme/041 (Create authenticated connections)                  [passed]
    runtime  0.711s  ...  0.746s
nvme/042 (Test dhchap key types for authenticated connections) [passed]
    runtime  4.567s  ...  4.589s
nvme/043 (Test hash and DH group variations for authenticated connections) [passed]
    runtime  3.162s  ...  6.974s
nvme/044 (Test bi-directional authentication)                [passed]
    runtime  1.724s  ...  1.704s
nvme/045 (Test re-authentication)                            [passed]
    runtime  3.920s  ...  3.947s
blktests (master) # nvme_trtype=tcp ./check nvme
nvme/002 (create many subsystems and test discovery)         [not run]
    runtime  19.508s  ...  
    nvme_trtype=tcp is not supported in this test
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.091s  ...  10.089s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  1.476s  ...  1.124s
nvme/005 (reset local loopback target)                       [passed]
    runtime  1.814s  ...  1.192s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.053s  ...  0.054s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.031s  ...  0.035s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  1.455s  ...  1.133s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  1.433s  ...  1.130s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  82.214s  ...  89.073s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  70.006s  ...  78.218s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  77.670s  ...  75.592s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  67.830s  ...  73.434s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  4.324s  ...  3.913s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  3.724s  ...  3.564s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [not run]
    runtime  13.120s  ...  
    nvme_trtype=tcp is not supported in this test
nvme/017 (create/delete many file-ns and test discovery)     [not run]
    runtime  13.261s  ...  
    nvme_trtype=tcp is not supported in this test
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  1.456s  ...  1.128s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  1.455s  ...  1.148s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  1.419s  ...  1.113s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  1.445s  ...  1.104s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  1.754s  ...  1.151s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  1.457s  ...  1.132s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  1.432s  ...  1.113s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  1.422s  ...  1.105s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  1.430s  ...  1.116s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  1.418s  ...  1.127s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  1.416s  ...  1.109s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  1.548s  ...  1.239s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.199s  ...  0.126s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  4.045s  ...  0.791s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
    runtime  0.012s  ...  0.014s
nvme/040 (test nvme fabrics controller reset/disconnect operation during I/O) [passed]
    runtime  7.982s  ...  7.153s
nvme/041 (Create authenticated connections)                  [passed]
    runtime  0.746s  ...  0.402s
nvme/042 (Test dhchap key types for authenticated connections) [passed]
    runtime  4.589s  ...  2.546s
nvme/043 (Test hash and DH group variations for authenticated connections) [passed]
    runtime  6.974s  ...  0.691s
nvme/044 (Test bi-directional authentication)                [passed]
    runtime  1.704s  ...  1.122s
nvme/045 (Test re-authentication)                            [passed]
    runtime  3.947s  ...  3.782s
blktests (master) # 

-- 
2.29.0




* [PATCH 1/9] nvme-core: use uint type for shutdown timeout
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  7:56   ` Sagi Grimberg
  2023-03-23  3:36 ` [PATCH 2/9] nvme-core: remove unnecessary else Chaitanya Kulkarni
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Use the consistent type unsigned int for shutdown_timeout to match the
other timeout variables. Also, use this opportunity to print the default
timeout values in the module parameter descriptions, which saves a trip
to the source code.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d4be525f8100..c9636be68d74 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -43,17 +43,17 @@ struct nvme_ns_info {
 
 unsigned int admin_timeout = 60;
 module_param(admin_timeout, uint, 0644);
-MODULE_PARM_DESC(admin_timeout, "timeout in seconds for admin commands");
+MODULE_PARM_DESC(admin_timeout, "timeout in seconds for admin commands, default:60");
 EXPORT_SYMBOL_GPL(admin_timeout);
 
 unsigned int nvme_io_timeout = 30;
 module_param_named(io_timeout, nvme_io_timeout, uint, 0644);
-MODULE_PARM_DESC(io_timeout, "timeout in seconds for I/O");
+MODULE_PARM_DESC(io_timeout, "timeout in seconds for I/O, default:30");
 EXPORT_SYMBOL_GPL(nvme_io_timeout);
 
-static unsigned char shutdown_timeout = 5;
-module_param(shutdown_timeout, byte, 0644);
-MODULE_PARM_DESC(shutdown_timeout, "timeout in seconds for controller shutdown");
+static unsigned int shutdown_timeout = 5;
+module_param(shutdown_timeout, uint, 0644);
+MODULE_PARM_DESC(shutdown_timeout, "timeout in seconds for controller shutdown default:5");
 
 static u8 nvme_max_retries = 5;
 module_param_named(max_retries, nvme_max_retries, byte, 0644);
-- 
2.29.0




* [PATCH 2/9] nvme-core: remove unnecessary else
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
  2023-03-23  3:36 ` [PATCH 1/9] nvme-core: use uint type for shutdown timeout Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  7:57   ` Sagi Grimberg
  2023-03-23  3:36 ` [PATCH 3/9] nvme-core: remove extra line at end of function Chaitanya Kulkarni
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

By returning early we can safely remove the unnecessary else branch.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c9636be68d74..ce516553b517 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4475,9 +4475,10 @@ static void nvme_scan_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	if (ns) {
 		nvme_validate_ns(ns, &info);
 		nvme_put_ns(ns);
-	} else {
-		nvme_alloc_ns(ctrl, &info);
+		return;
 	}
+
+	nvme_alloc_ns(ctrl, &info);
 }
 
 static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
-- 
2.29.0




* [PATCH 3/9] nvme-core: remove extra line at end of function
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
  2023-03-23  3:36 ` [PATCH 1/9] nvme-core: use uint type for shutdown timeout Chaitanya Kulkarni
  2023-03-23  3:36 ` [PATCH 2/9] nvme-core: remove unnecessary else Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  7:57   ` Sagi Grimberg
  2023-03-23  3:36 ` [PATCH 4/9] nvme-core: code cleanup for __nvme_check_ready() Chaitanya Kulkarni
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Remove the extra line at the end of the function.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ce516553b517..c2fb7a0eba3d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4496,7 +4496,6 @@ static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
 
 	list_for_each_entry_safe(ns, next, &rm_list, list)
 		nvme_ns_remove(ns);
-
 }
 
 static int nvme_scan_ns_list(struct nvme_ctrl *ctrl)
-- 
2.29.0




* [PATCH 4/9] nvme-core: code cleanup for __nvme_check_ready()
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (2 preceding siblings ...)
  2023-03-23  3:36 ` [PATCH 3/9] nvme-core: remove extra line at end of function Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  8:02   ` Sagi Grimberg
  2023-03-23  3:36 ` [PATCH 5/9] nvme-core: use normal pattern Chaitanya Kulkarni
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

We can avoid nested indentation by returning early if the controller is
not a fabrics controller. Additionally, instead of chained comparisons,
we can use a switch statement to determine whether the command is
connect/auth_send/auth_receive. This also shortens the lines and makes
the code easier to read, since we no longer have to verify three
separate comparisons. Lastly, fix the spelling of "appropriate" in the
second comment.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 41 +++++++++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c2fb7a0eba3d..a380938f5a06 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -721,25 +721,32 @@ bool __nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
 	if (rq->q == ctrl->admin_q && (req->flags & NVME_REQ_USERCMD))
 		return false;
 
-	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		/*
-		 * Only allow commands on a live queue, except for the connect
-		 * command, which is require to set the queue live in the
-		 * appropinquate states.
-		 */
-		switch (ctrl->state) {
-		case NVME_CTRL_CONNECTING:
-			if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
-			    (req->cmd->fabrics.fctype == nvme_fabrics_type_connect ||
-			     req->cmd->fabrics.fctype == nvme_fabrics_type_auth_send ||
-			     req->cmd->fabrics.fctype == nvme_fabrics_type_auth_receive))
+	if (!(ctrl->ops->flags & NVME_F_FABRICS))
+		return queue_live;
+
+	/*
+	 * Only allow commands on a live queue, except for connect/auth_send/
+	 * auth_recv commands, which are required to set the queue live in the
+	 * appropriate states.
+	 */
+	switch (ctrl->state) {
+	case NVME_CTRL_CONNECTING:
+		if (blk_rq_is_passthrough(rq) &&
+		    nvme_is_fabrics(req->cmd)) {
+			switch (req->cmd->fabrics.fctype) {
+			case nvme_fabrics_type_connect:
+			case nvme_fabrics_type_auth_send:
+			case nvme_fabrics_type_auth_receive:
 				return true;
-			break;
-		default:
-			break;
-		case NVME_CTRL_DEAD:
-			return false;
+			default:
+				break;
+			}
 		}
+		break;
+	default:
+		break;
+	case NVME_CTRL_DEAD:
+		return false;
 	}
 
 	return queue_live;
-- 
2.29.0




* [PATCH 5/9] nvme-core: use normal pattern
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (3 preceding siblings ...)
  2023-03-23  3:36 ` [PATCH 4/9] nvme-core: code cleanup for __nvme_check_ready() Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  8:04   ` Sagi Grimberg
  2023-03-23 17:50   ` Keith Busch
  2023-03-23  3:36 ` [PATCH 6/9] nvme-core: open code nvme_delete_ctrl_sync() Chaitanya Kulkarni
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Although return is allowed in a switch ... case when the function
returns void, it creates confusion about which pattern future code
should follow. Replace return in the switch with break, which is the
standard pattern for a switch ... case.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index a380938f5a06..3eb2299a81fc 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -403,13 +403,13 @@ void nvme_complete_rq(struct request *req)
 	switch (nvme_decide_disposition(req)) {
 	case COMPLETE:
 		nvme_end_req(req);
-		return;
+		break;
 	case RETRY:
 		nvme_retry_req(req);
-		return;
+		break;
 	case FAILOVER:
 		nvme_failover_req(req);
-		return;
+		break;
 	case AUTHENTICATE:
 #ifdef CONFIG_NVME_AUTH
 		queue_work(nvme_wq, &ctrl->dhchap_auth_work);
@@ -417,7 +417,7 @@ void nvme_complete_rq(struct request *req)
 #else
 		nvme_end_req(req);
 #endif
-		return;
+		break;
 	}
 }
 EXPORT_SYMBOL_GPL(nvme_complete_rq);
-- 
2.29.0




* [PATCH 6/9] nvme-core: open code nvme_delete_ctrl_sync()
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (4 preceding siblings ...)
  2023-03-23  3:36 ` [PATCH 5/9] nvme-core: use normal pattern Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  8:05   ` Sagi Grimberg
  2023-03-23  3:36 ` [PATCH 7/9] nvme-core: cleanup for nvme_set_latency_tolerance Chaitanya Kulkarni
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

There is only one caller of nvme_delete_ctrl_sync(), i.e.
nvme_sysfs_delete(). Just open code the function in the caller.

Also, add a meaningful comment, since we no longer have the function
name to indicate synchronous deletion of the controller.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 3eb2299a81fc..06dac97c5b0f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -237,18 +237,6 @@ int nvme_delete_ctrl(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvme_delete_ctrl);
 
-static void nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl)
-{
-	/*
-	 * Keep a reference until nvme_do_delete_ctrl() complete,
-	 * since ->delete_ctrl can free the controller.
-	 */
-	nvme_get_ctrl(ctrl);
-	if (nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING))
-		nvme_do_delete_ctrl(ctrl);
-	nvme_put_ctrl(ctrl);
-}
-
 static blk_status_t nvme_error_status(u16 status)
 {
 	switch (status & 0x7ff) {
@@ -3591,8 +3579,18 @@ static ssize_t nvme_sysfs_delete(struct device *dev,
 {
 	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
 
-	if (device_remove_file_self(dev, attr))
-		nvme_delete_ctrl_sync(ctrl);
+	if (!device_remove_file_self(dev, attr))
+		return count;
+
+	/*
+	 * Delete the controller synchronously but keep a reference until
+	 * nvme_do_delete_ctrl() completes, since ->delete_ctrl can free the
+	 * controller.
+	 */
+	nvme_get_ctrl(ctrl);
+	if (nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING))
+		nvme_do_delete_ctrl(ctrl);
+	nvme_put_ctrl(ctrl);
 	return count;
 }
 static DEVICE_ATTR(delete_controller, S_IWUSR, NULL, nvme_sysfs_delete);
-- 
2.29.0




* [PATCH 7/9] nvme-core: cleanup for nvme_set_latency_tolerance
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (5 preceding siblings ...)
  2023-03-23  3:36 ` [PATCH 6/9] nvme-core: open code nvme_delete_ctrl_sync() Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  8:06   ` Sagi Grimberg
  2023-03-23  3:36 ` [PATCH 8/9] nvme-core: remove unnecessary else Chaitanya Kulkarni
  2023-03-23  3:36 ` [PATCH 9/9] nvme-core: fix nvme_submit_sync_cmd() args Chaitanya Kulkarni
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Remove the extra blank line before default in the switch and add a break
at the end of the default case, which is standard practice.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 06dac97c5b0f..fdcba1dc583d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2594,9 +2594,9 @@ static void nvme_set_latency_tolerance(struct device *dev, s32 val)
 	case PM_QOS_LATENCY_ANY:
 		latency = U64_MAX;
 		break;
-
 	default:
 		latency = val;
+		break;
 	}
 
 	if (ctrl->ps_max_latency_us != latency) {
-- 
2.29.0




* [PATCH 8/9] nvme-core: remove unnecessary else
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (6 preceding siblings ...)
  2023-03-23  3:36 ` [PATCH 7/9] nvme-core: cleanup for nvme_set_latency_tolerance Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  8:06   ` Sagi Grimberg
  2023-03-23  3:36 ` [PATCH 9/9] nvme-core: fix nvme_submit_sync_cmd() args Chaitanya Kulkarni
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

There is no need for the else when a direct return is used at the
end of the function.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fdcba1dc583d..481a326e80f7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3418,8 +3418,8 @@ static inline struct nvme_ns_head *dev_to_ns_head(struct device *dev)
 
 	if (disk->fops == &nvme_bdev_ops)
 		return nvme_get_ns_from_dev(dev)->head;
-	else
-		return disk->private_data;
+
+	return disk->private_data;
 }
 
 static ssize_t wwid_show(struct device *dev, struct device_attribute *attr,
-- 
2.29.0




* [PATCH 9/9] nvme-core: fix nvme_submit_sync_cmd() args
  2023-03-23  3:36 [PATCH 0/9] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (7 preceding siblings ...)
  2023-03-23  3:36 ` [PATCH 8/9] nvme-core: remove unnecessary else Chaitanya Kulkarni
@ 2023-03-23  3:36 ` Chaitanya Kulkarni
  2023-03-23  8:16   ` Sagi Grimberg
  8 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  3:36 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

The __nvme_submit_sync_cmd() function currently has a long list of
eight arguments, which can make it difficult to read and debug,
particularly when the function is commonly used from different parts
of the codebase. This complexity can also increase when extending the
functionality of the function, resulting in more arguments being added.

To address this issue, we can create a new structure called
nvme_submit_cmd_data and move all the existing arguments into this
structure. By doing this, we can pass the structure as a single argument
to the __nvme_submit_sync_cmd() function. This approach simplifies the
code and makes it more readable, and also provides a way to add future
arguments to the structure without increasing the number of function
arguments.

This pattern of using a structure for arguments is commonly used in the
block layer code:

blk_mq_rq_cache_fill()
	struct blk_mq_alloc_data data = { ... }
	__blk_mq_alloc_request(data)
		blk_mq_rq_ctx_init(data)

blk_mq_alloc_and_init_hctx()
	blk_mq_alloc_request_hctx()
		struct blk_mq_alloc_data data = { ... }
		blk_mq_rq_ctx_init(data)

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/auth.c    | 13 ++++++--
 drivers/nvme/host/core.c    | 61 +++++++++++++++++++++++++------------
 drivers/nvme/host/fabrics.c | 60 ++++++++++++++++++++++++++++--------
 drivers/nvme/host/nvme.h    | 16 +++++++---
 4 files changed, 112 insertions(+), 38 deletions(-)

diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index ea16a0aba679..48257cc4fc24 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -64,6 +64,15 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
 	struct nvme_command cmd = {};
 	blk_mq_req_flags_t flags = nvme_auth_flags_from_qid(qid);
 	struct request_queue *q = nvme_auth_queue_from_qid(ctrl, qid);
+	struct nvme_submit_cmd_data d = {
+		.q = q,
+		.cmd = &cmd,
+		.buffer = data,
+		.bufflen = data_len,
+		.qid = qid == 0 ? NVME_QID_ANY : qid,
+		.at_head = false,
+		.flags = flags,
+	};
 	int ret;
 
 	cmd.auth_common.opcode = nvme_fabrics_command;
@@ -78,9 +87,7 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
 		cmd.auth_receive.al = cpu_to_le32(data_len);
 	}
 
-	ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, data_len,
-				     qid == 0 ? NVME_QID_ANY : qid,
-				     0, flags);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret > 0)
 		dev_warn(ctrl->device,
 			"qid %d auth_send failed with status %d\n", qid, ret);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 481a326e80f7..9d4fdaa12063 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1025,32 +1025,30 @@ EXPORT_SYMBOL_NS_GPL(nvme_execute_rq, NVME_TARGET_PASSTHRU);
  * Returns 0 on success.  If the result is negative, it's a Linux error code;
  * if the result is positive, it's an NVM Express status code
  */
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
-		union nvme_result *result, void *buffer, unsigned bufflen,
-		int qid, int at_head, blk_mq_req_flags_t flags)
+int __nvme_submit_sync_cmd(struct nvme_submit_cmd_data *d)
 {
 	struct request *req;
 	int ret;
 
-	if (qid == NVME_QID_ANY)
-		req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
+	if (d->qid == NVME_QID_ANY)
+		req = blk_mq_alloc_request(d->q, nvme_req_op(d->cmd), d->flags);
 	else
-		req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
-						qid - 1);
+		req = blk_mq_alloc_request_hctx(d->q, nvme_req_op(d->cmd), d->flags,
+						d->qid - 1);
 
 	if (IS_ERR(req))
 		return PTR_ERR(req);
-	nvme_init_request(req, cmd);
+	nvme_init_request(req, d->cmd);
 
-	if (buffer && bufflen) {
-		ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
+	if (d->buffer && d->bufflen) {
+		ret = blk_rq_map_kern(d->q, req, d->buffer, d->bufflen, GFP_KERNEL);
 		if (ret)
 			goto out;
 	}
 
-	ret = nvme_execute_rq(req, at_head);
-	if (result && ret >= 0)
-		*result = nvme_req(req)->result;
+	ret = nvme_execute_rq(req, d->at_head);
+	if (d->result && ret >= 0)
+		*(d->result) = nvme_req(req)->result;
  out:
 	blk_mq_free_request(req);
 	return ret;
@@ -1060,8 +1058,16 @@ EXPORT_SYMBOL_GPL(__nvme_submit_sync_cmd);
 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void *buffer, unsigned bufflen)
 {
-	return __nvme_submit_sync_cmd(q, cmd, NULL, buffer, bufflen,
-			NVME_QID_ANY, 0, 0);
+	struct nvme_submit_cmd_data d = {
+		.q = q,
+		.cmd = cmd,
+		.buffer = buffer,
+		.bufflen = bufflen,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
+	return __nvme_submit_sync_cmd(&d);
 }
 EXPORT_SYMBOL_GPL(nvme_submit_sync_cmd);
 
@@ -1480,14 +1486,23 @@ static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
 {
 	union nvme_result res = { 0 };
 	struct nvme_command c = { };
+	struct nvme_submit_cmd_data d = {
+		.q = dev->admin_q,
+		.cmd = &c,
+		.result = &res,
+		.buffer = buffer,
+		.bufflen = buflen,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	c.features.opcode = op;
 	c.features.fid = cpu_to_le32(fid);
 	c.features.dword11 = cpu_to_le32(dword11);
 
-	ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res,
-			buffer, buflen, NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret >= 0 && result)
 		*result = le32_to_cpu(res.u32);
 	return ret;
@@ -2209,6 +2224,15 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
 {
 	struct nvme_ctrl *ctrl = data;
 	struct nvme_command cmd = { };
+	struct nvme_submit_cmd_data d = {
+		.q = ctrl->admin_q,
+		.cmd = &cmd,
+		.buffer = buffer,
+		.bufflen = len,
+		.qid = NVME_QID_ANY,
+		.at_head = true,
+		.flags = 0,
+	};
 
 	if (send)
 		cmd.common.opcode = nvme_admin_security_send;
@@ -2218,8 +2242,7 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
 	cmd.common.cdw10 = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8);
 	cmd.common.cdw11 = cpu_to_le32(len);
 
-	return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
-			NVME_QID_ANY, 1, 0);
+	return __nvme_submit_sync_cmd(&d);
 }
 
 static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index bbaa04a0c502..56d0c41b659b 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -146,14 +146,21 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
 {
 	struct nvme_command cmd = { };
 	union nvme_result res;
+	struct nvme_submit_cmd_data d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.result = &res,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	cmd.prop_get.opcode = nvme_fabrics_command;
 	cmd.prop_get.fctype = nvme_fabrics_type_property_get;
 	cmd.prop_get.offset = cpu_to_le32(off);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
-			NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 
 	if (ret >= 0)
 		*val = le64_to_cpu(res.u64);
@@ -191,6 +198,14 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
 {
 	struct nvme_command cmd = { };
 	union nvme_result res;
+	struct nvme_submit_cmd_data d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.result = &res,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	cmd.prop_get.opcode = nvme_fabrics_command;
@@ -198,8 +213,7 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
 	cmd.prop_get.attrib = 1;
 	cmd.prop_get.offset = cpu_to_le32(off);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
-			NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 
 	if (ret >= 0)
 		*val = le64_to_cpu(res.u64);
@@ -235,6 +249,13 @@ EXPORT_SYMBOL_GPL(nvmf_reg_read64);
 int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
 {
 	struct nvme_command cmd = { };
+	struct nvme_submit_cmd_data d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	cmd.prop_set.opcode = nvme_fabrics_command;
@@ -243,8 +264,7 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
 	cmd.prop_set.offset = cpu_to_le32(off);
 	cmd.prop_set.value = cpu_to_le64(val);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, NULL, NULL, 0,
-			NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (unlikely(ret))
 		dev_err(ctrl->device,
 			"Property Set error: %d, offset %#x\n",
@@ -374,6 +394,15 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 	struct nvme_command cmd = { };
 	union nvme_result res;
 	struct nvmf_connect_data *data;
+	struct nvme_submit_cmd_data d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.result = &res,
+		.bufflen = sizeof(*data),
+		.qid = NVME_QID_ANY,
+		.at_head = true,
+		.flags = BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT,
+	};
 	int ret;
 	u32 result;
 
@@ -394,14 +423,13 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 	if (!data)
 		return -ENOMEM;
 
+	d.buffer = data;
 	uuid_copy(&data->hostid, &ctrl->opts->host->id);
 	data->cntlid = cpu_to_le16(0xffff);
 	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
 	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res,
-			data, sizeof(*data), NVME_QID_ANY, 1,
-			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret) {
 		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
 				       &cmd, data);
@@ -465,6 +493,15 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
 	struct nvme_command cmd = { };
 	struct nvmf_connect_data *data;
 	union nvme_result res;
+	struct nvme_submit_cmd_data d = {
+		.q = ctrl->connect_q,
+		.cmd = &cmd,
+		.result = &res,
+		.bufflen = sizeof(*data),
+		.qid = qid,
+		.at_head = true,
+		.flags = BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT,
+	};
 	int ret;
 	u32 result;
 
@@ -480,14 +517,13 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
 	if (!data)
 		return -ENOMEM;
 
+	d.buffer = data;
 	uuid_copy(&data->hostid, &ctrl->opts->host->id);
 	data->cntlid = cpu_to_le16(ctrl->cntlid);
 	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
 	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
 
-	ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res,
-			data, sizeof(*data), qid, 1,
-			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret) {
 		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
 				       &cmd, data);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bf46f122e9e1..4982cfda7e85 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -811,12 +811,20 @@ static inline bool nvme_is_unique_nsid(struct nvme_ctrl *ctrl,
 		(ctrl->ctratt & NVME_CTRL_CTRATT_NVM_SETS);
 }
 
+struct nvme_submit_cmd_data {
+	struct request_queue *q;
+	struct nvme_command *cmd;
+	union nvme_result *result;
+	void *buffer;
+	unsigned int bufflen;
+	int qid;
+	bool at_head;
+	blk_mq_req_flags_t flags;
+};
+
 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void *buf, unsigned bufflen);
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
-		union nvme_result *result, void *buffer, unsigned bufflen,
-		int qid, int at_head,
-		blk_mq_req_flags_t flags);
+int __nvme_submit_sync_cmd(struct nvme_submit_cmd_data *d);
 int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
 		      unsigned int dword11, void *buffer, size_t buflen,
 		      u32 *result);
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/9] nvme-core: use uint type for shutdown timeout
  2023-03-23  3:36 ` [PATCH 1/9] nvme-core: use uint type for shutdown timeout Chaitanya Kulkarni
@ 2023-03-23  7:56   ` Sagi Grimberg
  2023-03-23 22:21     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  7:56 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

> Use consistent type unsigned int for shutdown_timeout that matches
> other timeout types.

Does it make sense to allow shutdown timeout to be that long?

> Also, use this opportunity to print default
> timeout values in the module parameter description that saves a trip
> to source code.

Or simply reading from sys modparams, which one may do anyway.

> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>   drivers/nvme/host/core.c | 10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index d4be525f8100..c9636be68d74 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -43,17 +43,17 @@ struct nvme_ns_info {
>   
>   unsigned int admin_timeout = 60;
>   module_param(admin_timeout, uint, 0644);
> -MODULE_PARM_DESC(admin_timeout, "timeout in seconds for admin commands");
> +MODULE_PARM_DESC(admin_timeout, "timeout in seconds for admin commands, default:60");
>   EXPORT_SYMBOL_GPL(admin_timeout);
>   
>   unsigned int nvme_io_timeout = 30;
>   module_param_named(io_timeout, nvme_io_timeout, uint, 0644);
> -MODULE_PARM_DESC(io_timeout, "timeout in seconds for I/O");
> +MODULE_PARM_DESC(io_timeout, "timeout in seconds for I/O, default:30");
>   EXPORT_SYMBOL_GPL(nvme_io_timeout);
>   
> -static unsigned char shutdown_timeout = 5;
> -module_param(shutdown_timeout, byte, 0644);
> -MODULE_PARM_DESC(shutdown_timeout, "timeout in seconds for controller shutdown");
> +static unsigned int shutdown_timeout = 5;
> +module_param(shutdown_timeout, uint, 0644);
> +MODULE_PARM_DESC(shutdown_timeout, "timeout in seconds for controller shutdown, default:5");
>   
>   static u8 nvme_max_retries = 5;
>   module_param_named(max_retries, nvme_max_retries, byte, 0644);



* Re: [PATCH 2/9] nvme-core: remove unnecessary else
  2023-03-23  3:36 ` [PATCH 2/9] nvme-core: remove unnecessary else Chaitanya Kulkarni
@ 2023-03-23  7:57   ` Sagi Grimberg
  0 siblings, 0 replies; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  7:57 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 3/9] nvme-core: remvoe extra line at end of function
  2023-03-23  3:36 ` [PATCH 3/9] nvme-core: remvoe extra line at end of function Chaitanya Kulkarni
@ 2023-03-23  7:57   ` Sagi Grimberg
  0 siblings, 0 replies; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  7:57 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 4/9] nvme-core: code cleanup for __nvme_check_ready()
  2023-03-23  3:36 ` [PATCH 4/9] nvme-core: code cleanup for __nvme_check_ready() Chaitanya Kulkarni
@ 2023-03-23  8:02   ` Sagi Grimberg
  2023-03-23 22:24     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  8:02 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare



On 3/23/23 05:36, Chaitanya Kulkarni wrote:
> We can avoid using nested indentation by returning early if the
> controller is not fabrics. Additionally, instead of using comparisons,
> we can use a switch statement to determine if the required command is
> connect/auth_send/auth_recv. This will also help to reduce the length of
> the lines in the code and makes the code easier to read since we don't have to
> verify three different comparisons. Lastly, fix the spelling of
> "appropriate" in the second comment.
> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>   drivers/nvme/host/core.c | 41 +++++++++++++++++++++++-----------------
>   1 file changed, 24 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index c2fb7a0eba3d..a380938f5a06 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -721,25 +721,32 @@ bool __nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>   	if (rq->q == ctrl->admin_q && (req->flags & NVME_REQ_USERCMD))
>   		return false;
>   
> -	if (ctrl->ops->flags & NVME_F_FABRICS) {
> -		/*
> -		 * Only allow commands on a live queue, except for the connect
> -		 * command, which is require to set the queue live in the
> -		 * appropinquate states.
> -		 */
> -		switch (ctrl->state) {
> -		case NVME_CTRL_CONNECTING:
> -			if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
> -			    (req->cmd->fabrics.fctype == nvme_fabrics_type_connect ||
> -			     req->cmd->fabrics.fctype == nvme_fabrics_type_auth_send ||
> -			     req->cmd->fabrics.fctype == nvme_fabrics_type_auth_receive))
> +	if (!(ctrl->ops->flags & NVME_F_FABRICS))
> +		return queue_live;
> +
> +	/*
> +	 * Only allow commands on a live queue, except for connect/auth_send/
> +	 * auth_recv commands which are required to set the queue live in the
> +	 * appropriate states.
> +	 */
> +	switch (ctrl->state) {
> +	case NVME_CTRL_CONNECTING:
> +		if (blk_rq_is_passthrough(rq) &&
> +		    nvme_is_fabrics(req->cmd)) {
> +			switch (req->cmd->fabrics.fctype) {
> +			case nvme_fabrics_type_connect:
> +			case nvme_fabrics_type_auth_send:
> +			case nvme_fabrics_type_auth_receive:
>   				return true;
> -			break;
> -		default:
> -			break;
> -		case NVME_CTRL_DEAD:
> -			return false;
> +			default:
> +				break;

I think we can safely remove the default case here.

> +			}
>   		}
> +		break;
> +	default:
> +		break;

While we're at it, same here.

> +	case NVME_CTRL_DEAD:
> +		return false;


>   	}
>   
>   	return queue_live;



* Re: [PATCH 5/9] nvme-core: use normal pattern
  2023-03-23  3:36 ` [PATCH 5/9] nvme-core: use normal pattern Chaitanya Kulkarni
@ 2023-03-23  8:04   ` Sagi Grimberg
  2023-03-23 17:50   ` Keith Busch
  1 sibling, 0 replies; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  8:04 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare


> Although return is allowed in a switch ... case when the function is
> returning void, it creates confusion for future code about which pattern
> is desirable to use for a switch ... case. Replace return in the switch
> with break, which is the standard pattern for a switch ... case.

Don't see the confusion, but ok,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>   drivers/nvme/host/core.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index a380938f5a06..3eb2299a81fc 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -403,13 +403,13 @@ void nvme_complete_rq(struct request *req)
>   	switch (nvme_decide_disposition(req)) {
>   	case COMPLETE:
>   		nvme_end_req(req);
> -		return;
> +		break;
>   	case RETRY:
>   		nvme_retry_req(req);
> -		return;
> +		break;
>   	case FAILOVER:
>   		nvme_failover_req(req);
> -		return;
> +		break;
>   	case AUTHENTICATE:
>   #ifdef CONFIG_NVME_AUTH
>   		queue_work(nvme_wq, &ctrl->dhchap_auth_work);
> @@ -417,7 +417,7 @@ void nvme_complete_rq(struct request *req)
>   #else
>   		nvme_end_req(req);
>   #endif
> -		return;
> +		break;
>   	}
>   }
>   EXPORT_SYMBOL_GPL(nvme_complete_rq);



* Re: [PATCH 6/9] nvme-core: open code nvme_delete_ctrl_sync()
  2023-03-23  3:36 ` [PATCH 6/9] nvme-core: open code nvme_delete_ctrl_sync() Chaitanya Kulkarni
@ 2023-03-23  8:05   ` Sagi Grimberg
  0 siblings, 0 replies; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  8:05 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 7/9] nvme-core: cleanup for nvme_set_latency_tolerance
  2023-03-23  3:36 ` [PATCH 7/9] nvme-core: cleanup for nvme_set_latency_tolerance Chaitanya Kulkarni
@ 2023-03-23  8:06   ` Sagi Grimberg
  0 siblings, 0 replies; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  8:06 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare


> Remove the extra line before default in the switch and add a break
> at the end of default in the switch, which is standard practice.

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>   drivers/nvme/host/core.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 06dac97c5b0f..fdcba1dc583d 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2594,9 +2594,9 @@ static void nvme_set_latency_tolerance(struct device *dev, s32 val)
>   	case PM_QOS_LATENCY_ANY:
>   		latency = U64_MAX;
>   		break;
> -
>   	default:
>   		latency = val;
> +		break;
>   	}
>   
>   	if (ctrl->ps_max_latency_us != latency) {



* Re: [PATCH 8/9] nvme-core: remove unneacessary else
  2023-03-23  3:36 ` [PATCH 8/9] nvme-core: remove unneacessary else Chaitanya Kulkarni
@ 2023-03-23  8:06   ` Sagi Grimberg
  0 siblings, 0 replies; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  8:06 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 9/9] nvme-core: fix nvme_submit_sync_cmd() args
  2023-03-23  3:36 ` [PATCH 9/9] nvme-core: fix nvme_submit_sync_cmd() args Chaitanya Kulkarni
@ 2023-03-23  8:16   ` Sagi Grimberg
  2023-03-23  8:23     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-23  8:16 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

> The __nvme_submit_sync_cmd() function currently has a long list of
> eight arguments, which can make it difficult to read and debug,
> particularly when the function is commonly used from different parts
> of the codebase. This complexity can also increase when extending the
> functionality of the function, resulting in more arguments being added.

I don't think that complexity/difficulty is the issue here, it's just
ugly.

> 
> To address this issue, we can create a new structure called
> nvme_submit_cmd_data

Maybe call it nvme_sync_cmd_args.

> and move all the existing arguments into this
> structure. By doing this, we can pass the structure as a single argument
> to the __nvme_submit_sync_cmd() function. This approach simplifies the
> code and makes it more readable, and also provides a way to add future
> arguments to the structure without increasing the number of function
> arguments.

The potential gain here is that with on-stack initialized structs you can
implicitly zero out members, so you don't need to set them explicitly at
all the call sites. But that is a subjective gain because it can be
counterintuitive at times. Otherwise there is no real coding gain with this
specifically.

But regardless, this looks fine to me.



* Re: [PATCH 9/9] nvme-core: fix nvme_submit_sync_cmd() args
  2023-03-23  8:16   ` Sagi Grimberg
@ 2023-03-23  8:23     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23  8:23 UTC (permalink / raw)
  To: Sagi Grimberg, Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

On 3/23/23 01:16, Sagi Grimberg wrote:
>> The __nvme_submit_sync_cmd() function currently has a long list of
>> eight arguments, which can make it difficult to read and debug,
>> particularly when the function is commonly used from different parts
>> of the codebase. This complexity can also increase when extending the
>> functionality of the function, resulting in more arguments being added.
>
> I don't think that complexity/difficulty is the issue here, its just
> ugly.
>
>>
>> To address this issue, we can create a new structure called
>> nvme_submit_cmd_data
>
> Maybe call it nvme_sync_cmd_args.

okay.

>
>> and move all the existing arguments into this
>> structure. By doing this, we can pass the structure as a single argument
>> to the __nvme_submit_sync_cmd() function. This approach simplifies the
>> code and makes it more readable, and also provides a way to add future
>> arguments to the structure without increasing the number of function
>> arguments.
>
> The potential gain here is that on-stack initialized structs you can
> implcitly zero-out members so you don't need to explicitly set in all
> the call sites. But that is a subjective gain because it can be counter
> intuitive at times. Otherwise there is no real coding gain with this
> specifically.
>
> But regardless, this looks fine to me.

exactly, the first version I created had only the required parameters
initialized; I found this one much easier to read.

-ck




* Re: [PATCH 5/9] nvme-core: use normal pattern
  2023-03-23  3:36 ` [PATCH 5/9] nvme-core: use normal pattern Chaitanya Kulkarni
  2023-03-23  8:04   ` Sagi Grimberg
@ 2023-03-23 17:50   ` Keith Busch
  2023-03-23 22:01     ` Chaitanya Kulkarni
  1 sibling, 1 reply; 27+ messages in thread
From: Keith Busch @ 2023-03-23 17:50 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi, hare

On Wed, Mar 22, 2023 at 08:36:32PM -0700, Chaitanya Kulkarni wrote:
> Although return is allowed in a switch ... case when the function is
> returning void, it creates confusion for future code about which pattern
> is desirable to use for a switch ... case. Replace return in the switch
> with break, which is the standard pattern for a switch ... case.

The previous patch replaced 'break' with 'return', and this one does the
opposite. What's special about a void return that makes this a standard
pattern?



* Re: [PATCH 5/9] nvme-core: use normal pattern
  2023-03-23 17:50   ` Keith Busch
@ 2023-03-23 22:01     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23 22:01 UTC (permalink / raw)
  To: Keith Busch, Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi, hare

On 3/23/23 10:50, Keith Busch wrote:
> On Wed, Mar 22, 2023 at 08:36:32PM -0700, Chaitanya Kulkarni wrote:
>> Although return is allowed in a switch ... case when the function is
>> returning void, it creates confusion for future code about which pattern
>> is desirable to use for a switch ... case. Replace return in the switch
>> with break, which is the standard pattern for a switch ... case.
> The previous patch replaced 'break' with 'return', and this one does the
> opposite. What's special about a void return that makes this a standard
> pattern?

Regarding the previous patch:
in a switch case, if we are not returning any value then it looks
odd to use return, so can we please avoid such a pattern and
use break?

Regarding this patch:
we are returning the value, so I kept the original code. I can
add break everywhere in this patch if needed, please let
me know..

-ck




* Re: [PATCH 1/9] nvme-core: use uint type for shutdown timeout
  2023-03-23  7:56   ` Sagi Grimberg
@ 2023-03-23 22:21     ` Chaitanya Kulkarni
  2023-03-23 22:37       ` Chaitanya Kulkarni
  0 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23 22:21 UTC (permalink / raw)
  To: Sagi Grimberg, Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

On 3/23/23 00:56, Sagi Grimberg wrote:
>> Use consistent type unsigned int for shutdown_timeout that matches
>> other timeout types.
>
> Does it make sense to allow shutdown timeout to be that long?
>

no, you are right; it just matches the rest of the timeout types..
we can drop this patch if you want to..

>> Also, use this opportunity to print default
>> timeout values in the module parameter description that saves a trip
>> to source code.
>
> Or simply reading from sys modparams, which one may do anyway.
>

that is certainly possible, with null_blk we have it in the
MODULE_PARM_DESC() string and it turned out to be very
helpful ...

-ck




* Re: [PATCH 4/9] nvme-core: code cleanup for __nvme_check_ready()
  2023-03-23  8:02   ` Sagi Grimberg
@ 2023-03-23 22:24     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23 22:24 UTC (permalink / raw)
  To: Sagi Grimberg, Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare


>> +     * Only allow commands on a live queue, except for connect/auth_send/
>> +     * auth_recv commands which are required to set the queue live in the
>> +     * appropriate states.
>> +     */
>> +    switch (ctrl->state) {
>> +    case NVME_CTRL_CONNECTING:
>> +        if (blk_rq_is_passthrough(rq) &&
>> +            nvme_is_fabrics(req->cmd)) {
>> +            switch (req->cmd->fabrics.fctype) {
>> +            case nvme_fabrics_type_connect:
>> +            case nvme_fabrics_type_auth_send:
>> +            case nvme_fabrics_type_auth_receive:
>>                   return true;
>> -            break;
>> -        default:
>> -            break;
>> -        case NVME_CTRL_DEAD:
>> -            return false;
>> +            default:
>> +                break;
>
> I think we can safely remove the default case here.
>
>> +            }
>>           }
>> +        break;
>> +    default:
>> +        break;
>
> While we're at it, same here.
>
>> +    case NVME_CTRL_DEAD:
>> +        return false;
>

my only concern is whether we should keep the default cases to avoid
any stupid warnings from some static code analyzers that I don't have ...

-ck




* Re: [PATCH 1/9] nvme-core: use uint type for shutdown timeout
  2023-03-23 22:21     ` Chaitanya Kulkarni
@ 2023-03-23 22:37       ` Chaitanya Kulkarni
  2023-03-26  7:22         ` Sagi Grimberg
  0 siblings, 1 reply; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-23 22:37 UTC (permalink / raw)
  To: Sagi Grimberg, Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare

On 3/23/23 15:21, Chaitanya Kulkarni wrote:
> On 3/23/23 00:56, Sagi Grimberg wrote:
>>> Use consistent type unsigned int for shutdown_timeout that matches
>>> other timeout types.
>>
>> Does it make sense to allow shutdown timeout to be that long?
>>
>

thinking along the same lines, do we really need the admin and io timeout
values to be unsigned int (sizeof(unsigned int) == 4 bytes on
my machine)? as this value is in seconds, 0xffffffff is way
too long a wait to finish an admin command..

perhaps we should change it to u16 for the io/admin/shutdown timeouts ?

-ck




* Re: [PATCH 1/9] nvme-core: use uint type for shutdown timeout
  2023-03-23 22:37       ` Chaitanya Kulkarni
@ 2023-03-26  7:22         ` Sagi Grimberg
  2023-03-26 23:06           ` Chaitanya Kulkarni
  0 siblings, 1 reply; 27+ messages in thread
From: Sagi Grimberg @ 2023-03-26  7:22 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, hch, hare


>>>> Use consistent type unsigned int for shutdown_timeout that matches
>>>> other timeout types.
>>>
>>> Does it make sense to allow shutdown timeout to be that long?
>>>
>>
> 
> thinking along the same lines, do we really need the admin and io timeout
> values to be unsigned int (sizeof(unsigned int) == 4 bytes on
> my machine)? as this value is in seconds, 0xffffffff is way
> too long a wait to finish an admin command..
> 
> perhaps we should change it to u16 for the io/admin/shutdown timeouts ?

I think we can leave the io/admin and shutdown as they are.



* Re: [PATCH 1/9] nvme-core: use uint type for shutdown timeout
  2023-03-26  7:22         ` Sagi Grimberg
@ 2023-03-26 23:06           ` Chaitanya Kulkarni
  0 siblings, 0 replies; 27+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-26 23:06 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme; +Cc: kbusch, hch, hare

On 3/26/23 00:22, Sagi Grimberg wrote:
>
>>>>> Use consistent type unsigned int for shutdown_timeout that matches
>>>>> other timeout types.
>>>>
>>>> Does it make sense to allow shutdown timeout to be that long?
>>>>
>>>
>>
>> thinking along the same lines, do we really need the admin and io timeout
>> values to be unsigned int (sizeof(unsigned int) == 4 bytes on
>> my machine)? as this value is in seconds, 0xffffffff is way
>> too long a wait to finish an admin command..
>>
>> perhaps we should change it to u16 for the io/admin/shutdown timeouts ?
>
> I think we can leave the io/admin and shutdown as they are.

okay, will send out v2 with comments addressed.

-ck



