* [PATCH 0/6] avoid repeated request completion and IO error
@ 2021-01-05  7:19 ` Chao Leng
  0 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-05  7:19 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi, lengchao, linux-block, axboe

First, avoid repeated request completion for nvmf_fail_nonready_command.
Second, avoid IO errors and repeated request completion for queue_rq.

Chao Leng (6):
  blk-mq: introduce blk_mq_set_request_complete
  nvme-core: introduce complete failed request
  nvme-fabrics: avoid repeated request completion for
    nvmf_fail_nonready_command
  nvme-rdma: avoid IO error and repeated request completion
  nvme-tcp: avoid IO error and repeated request completion
  nvme-fc: avoid IO error and repeated request completion

 drivers/nvme/host/fabrics.c |  4 +---
 drivers/nvme/host/fc.c      |  6 ++++--
 drivers/nvme/host/nvme.h    | 18 ++++++++++++++++++
 drivers/nvme/host/rdma.c    |  2 +-
 drivers/nvme/host/tcp.c     |  2 +-
 include/linux/blk-mq.h      |  5 +++++
 6 files changed, 30 insertions(+), 7 deletions(-)

-- 
2.16.4



* [PATCH 1/6] blk-mq: introduce blk_mq_set_request_complete
  2021-01-05  7:19 ` Chao Leng
@ 2021-01-05  7:19   ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-05  7:19 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi, lengchao, linux-block, axboe

In some scenarios, nvme needs to set the state of a request to
MQ_RQ_COMPLETE. So add an inline function, blk_mq_set_request_complete.
For details, see the subsequent patches.
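
A minimal sketch of the intended caller (this helper is added by the next
patch and is shown here only for context):

	static inline void nvme_complete_failed_req(struct request *req)
	{
		nvme_req(req)->status = NVME_SC_HOST_PATH_ERROR;
		blk_mq_set_request_complete(req);
		nvme_complete_rq(req);
	}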

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 include/linux/blk-mq.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index e7482e6ad3ec..cee72d31054d 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -493,6 +493,11 @@ static inline int blk_mq_request_completed(struct request *rq)
 	return blk_mq_rq_state(rq) == MQ_RQ_COMPLETE;
 }
 
+static inline void blk_mq_set_request_complete(struct request *rq)
+{
+	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
+}
+
 void blk_mq_start_request(struct request *rq);
 void blk_mq_end_request(struct request *rq, blk_status_t error);
 void __blk_mq_end_request(struct request *rq, blk_status_t error);
-- 
2.16.4



* [PATCH 2/6] nvme-core: introduce complete failed request
  2021-01-05  7:19 ` Chao Leng
@ 2021-01-05  7:19   ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-05  7:19 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi, lengchao, linux-block, axboe

When queuing a request fails, if the failure status is not
BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE, or BLK_STS_ZONE_RESOURCE,
the request needs to be completed with nvme_complete_rq in queue_rq.
So introduce nvme_try_complete_failed_req.
The request needs to be completed with NVME_SC_HOST_PATH_ERROR in
nvmf_fail_nonready_command and queue_rq.
So introduce nvme_complete_failed_req.
For details, see the subsequent patches.
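
As a rough sketch of the intended use in a transport's queue_rq error
path (illustrative only; the real conversions follow in the rdma, tcp
and fc patches):

	ret = nvme_setup_cmd(ns, rq, sqe);
	if (unlikely(ret)) {
		/* complete the request here instead of returning the error to blk-mq */
		return nvme_try_complete_failed_req(rq, ret);
	}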

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 drivers/nvme/host/nvme.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bfcedfa4b057..1a0bddb9158f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -649,6 +649,24 @@ void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx);
 extern const struct attribute_group *nvme_ns_id_attr_groups[];
 extern const struct block_device_operations nvme_ns_head_ops;
 
+static inline void nvme_complete_failed_req(struct request *req)
+{
+	nvme_req(req)->status = NVME_SC_HOST_PATH_ERROR;
+	blk_mq_set_request_complete(req);
+	nvme_complete_rq(req);
+}
+
+static inline blk_status_t nvme_try_complete_failed_req(struct request *req,
+							blk_status_t ret)
+{
+	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE ||
+	    ret == BLK_STS_ZONE_RESOURCE)
+		return ret;
+
+	nvme_complete_failed_req(req);
+	return BLK_STS_OK;
+}
+
 #ifdef CONFIG_NVME_MULTIPATH
 static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
 {
-- 
2.16.4



* [PATCH 3/6] nvme-fabrics: avoid repeated request completion for nvmf_fail_nonready_command
  2021-01-05  7:19 ` Chao Leng
@ 2021-01-05  7:19   ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-05  7:19 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi, lengchao, linux-block, axboe

The request may be completed with NVME_SC_HOST_PATH_ERROR in
nvmf_fail_nonready_command. The state of the request is changed to
MQ_RQ_IN_FLIGHT before nvme_complete_rq is called. If the request is
freed asynchronously, such as in nvme_submit_user_cmd, in an extreme
scenario the request will be freed again during teardown.
nvmf_fail_nonready_command does not need to call blk_mq_start_request
before completing the request; it should instead set the state of the
request to MQ_RQ_COMPLETE before completing it.

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 drivers/nvme/host/fabrics.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 72ac00173500..874e4320e214 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -553,9 +553,7 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
 	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
 		return BLK_STS_RESOURCE;
 
-	nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
-	blk_mq_start_request(rq);
-	nvme_complete_rq(rq);
+	nvme_complete_failed_req(rq);
 	return BLK_STS_OK;
 }
 EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
-- 
2.16.4



* [PATCH 4/6] nvme-rdma: avoid IO error and repeated request completion
  2021-01-05  7:19 ` Chao Leng
@ 2021-01-05  7:19   ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-05  7:19 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi, lengchao, linux-block, axboe

When queuing a request fails, blk_status_t is returned directly to
blk-mq. If blk_status_t is not BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE,
or BLK_STS_ZONE_RESOURCE, blk-mq calls blk_mq_end_request to complete
the request with BLK_STS_IOERR.
In two scenarios, the request should be retried and may succeed.
First, with nvme multipath, the request may be retried successfully on
another path, because the error is probably related to the path.
Second, without multipath software, the request may be retried
successfully after error recovery.
If the request is completed with BLK_STS_IOERR in
blk_mq_dispatch_rq_list, the state of the request may be changed to
MQ_RQ_IN_FLIGHT. If the request is freed asynchronously, such as in
nvme_submit_user_cmd, in an extreme scenario the request will be freed
again during teardown.
If a non-resource error occurs in queue_rq, we should directly call
nvme_complete_rq to complete the request and set its state to
MQ_RQ_COMPLETE. nvme_complete_rq will decide to retry, fail over, or
end the request.
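
For reference, nvme_complete_rq already makes that decision roughly as
follows (simplified from the nvme core this series is based on; trace
and cleanup calls omitted):

	void nvme_complete_rq(struct request *req)
	{
		/* ... */
		switch (nvme_decide_disposition(req)) {
		case COMPLETE:
			nvme_end_req(req);
			return;
		case RETRY:
			nvme_retry_req(req);
			return;
		case FAILOVER:
			nvme_failover_req(req);
			return;
		}
	}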

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 drivers/nvme/host/rdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index df9f6f4549f1..4a89bf44ecdc 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2093,7 +2093,7 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 unmap_qe:
 	ib_dma_unmap_single(dev, req->sqe.dma, sizeof(struct nvme_command),
 			    DMA_TO_DEVICE);
-	return ret;
+	return nvme_try_complete_failed_req(rq, ret);
 }
 
 static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx)
-- 
2.16.4



* [PATCH 5/6] nvme-tcp: avoid IO error and repeated request completion
  2021-01-05  7:19 ` Chao Leng
@ 2021-01-05  7:19   ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-05  7:19 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi, lengchao, linux-block, axboe

When queuing a request fails, blk_status_t is returned directly to
blk-mq. If blk_status_t is not BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE,
or BLK_STS_ZONE_RESOURCE, blk-mq calls blk_mq_end_request to complete
the request with BLK_STS_IOERR.
In two scenarios, the request should be retried and may succeed.
First, with nvme multipath, the request may be retried successfully on
another path, because the error is probably related to the path.
Second, without multipath software, the request may be retried
successfully after error recovery.
If the request is completed with BLK_STS_IOERR in
blk_mq_dispatch_rq_list, the state of the request may be changed to
MQ_RQ_IN_FLIGHT. If the request is freed asynchronously, such as in
nvme_submit_user_cmd, in an extreme scenario the request will be freed
again during teardown.
If a non-resource error occurs in queue_rq, we should directly call
nvme_complete_rq to complete the request and set its state to
MQ_RQ_COMPLETE. nvme_complete_rq will decide to retry, fail over, or
end the request.

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 drivers/nvme/host/tcp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 1ba659927442..a81683ce8cff 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2306,7 +2306,7 @@ static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	ret = nvme_tcp_setup_cmd_pdu(ns, rq);
 	if (unlikely(ret))
-		return ret;
+		return nvme_try_complete_failed_req(rq, ret);
 
 	blk_mq_start_request(rq);
 
-- 
2.16.4



* [PATCH 6/6] nvme-fc: avoid IO error and repeated request completion
  2021-01-05  7:19 ` Chao Leng
@ 2021-01-05  7:19   ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-05  7:19 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, sagi, lengchao, linux-block, axboe

When queuing a request fails, blk_status_t is returned directly to
blk-mq. If blk_status_t is not BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE,
or BLK_STS_ZONE_RESOURCE, blk-mq calls blk_mq_end_request to complete
the request with BLK_STS_IOERR.
In two scenarios, the request should be retried and may succeed.
First, with nvme multipath, the request may be retried successfully on
another path, because the error is probably related to the path.
Second, without multipath software, the request may be retried
successfully after error recovery.
If the request is completed with BLK_STS_IOERR in
blk_mq_dispatch_rq_list, the state of the request may be changed to
MQ_RQ_IN_FLIGHT. If the request is freed asynchronously, such as in
nvme_submit_user_cmd, in an extreme scenario the request will be freed
again during teardown.
If a non-resource error occurs in queue_rq, we should directly call
nvme_complete_rq to complete the request and set its state to
MQ_RQ_COMPLETE. nvme_complete_rq will decide to retry, fail over, or
end the request.

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 drivers/nvme/host/fc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 38373a0e86ef..f6a5758ef1ea 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2761,7 +2761,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	ret = nvme_setup_cmd(ns, rq, sqe);
 	if (ret)
-		return ret;
+		goto fail;
 
 	/*
 	 * nvme core doesn't quite treat the rq opaquely. Commands such
@@ -2781,7 +2781,9 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	}
 
 
-	return nvme_fc_start_fcp_op(ctrl, queue, op, data_len, io_dir);
+	ret = nvme_fc_start_fcp_op(ctrl, queue, op, data_len, io_dir);
+fail:
+	return nvme_try_complete_failed_req(rq, ret);
 }
 
 static void
-- 
2.16.4



* Re: [PATCH 2/6] nvme-core: introduce complete failed request
  2021-01-05  7:19   ` Chao Leng
@ 2021-01-05 19:11     ` Minwoo Im
  -1 siblings, 0 replies; 22+ messages in thread
From: Minwoo Im @ 2021-01-05 19:11 UTC (permalink / raw)
  To: Chao Leng; +Cc: linux-nvme, kbusch, axboe, hch, sagi, linux-block, axboe

Hello,

On 21-01-05 15:19:32, Chao Leng wrote:
> When queuing a request fails, if the failure status is not
> BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE, or BLK_STS_ZONE_RESOURCE,
> the request needs to be completed with nvme_complete_rq in queue_rq.
> So introduce nvme_try_complete_failed_req.
> The request needs to be completed with NVME_SC_HOST_PATH_ERROR in
> nvmf_fail_nonready_command and queue_rq.
> So introduce nvme_complete_failed_req.
> For details, see the subsequent patches.
> 
> Signed-off-by: Chao Leng <lengchao@huawei.com>
> ---
>  drivers/nvme/host/nvme.h | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bfcedfa4b057..1a0bddb9158f 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -649,6 +649,24 @@ void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx);
>  extern const struct attribute_group *nvme_ns_id_attr_groups[];
>  extern const struct block_device_operations nvme_ns_head_ops;
>  
> +static inline void nvme_complete_failed_req(struct request *req)
> +{
> +	nvme_req(req)->status = NVME_SC_HOST_PATH_ERROR;
> +	blk_mq_set_request_complete(req);
> +	nvme_complete_rq(req);
> +}
> +
> +static inline blk_status_t nvme_try_complete_failed_req(struct request *req,
> +							blk_status_t ret)
> +{
> +	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE ||
> +	    ret == BLK_STS_ZONE_RESOURCE)
> +		return ret;

If there is no particular reason for the if-chain here, can we turn it
into a switch, just like the other function in the same file does:

	switch (ret) {
	case BLK_STS_RESOURCE:
	case BLK_STS_DEV_RESOURCE:
	case BLK_STS_ZONE_RESOURCE:
		return ret;
	default:
		nvme_complete_failed_req(req);
		return BLK_STS_OK;
	}

> +
> +	nvme_complete_failed_req(req);
> +	return BLK_STS_OK;
> +}
> +

Can we place these two functions alongside nvme_try_complete_req()
by moving the declaration of nvme_complete_rq() up a little?

>  #ifdef CONFIG_NVME_MULTIPATH
>  static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
>  {
> -- 
> 2.16.4
> 

Thanks,


* Re: [PATCH 1/6] blk-mq: introduce blk_mq_set_request_complete
  2021-01-05  7:19   ` Chao Leng
@ 2021-01-05 19:16     ` Minwoo Im
  -1 siblings, 0 replies; 22+ messages in thread
From: Minwoo Im @ 2021-01-05 19:16 UTC (permalink / raw)
  To: Chao Leng; +Cc: linux-nvme, kbusch, axboe, hch, sagi, linux-block, axboe

Hello,

On 21-01-05 15:19:31, Chao Leng wrote:
> In some scenarios, nvme need setting the state of request to
> MQ_RQ_COMPLETE. So add an inline function blk_mq_set_request_complete.
> For details, see the subsequent patches.
> 
> Signed-off-by: Chao Leng <lengchao@huawei.com>
> ---
>  include/linux/blk-mq.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> index e7482e6ad3ec..cee72d31054d 100644
> --- a/include/linux/blk-mq.h
> +++ b/include/linux/blk-mq.h
> @@ -493,6 +493,11 @@ static inline int blk_mq_request_completed(struct request *rq)
>  	return blk_mq_rq_state(rq) == MQ_RQ_COMPLETE;
>  }
>  
> +static inline void blk_mq_set_request_complete(struct request *rq)
> +{
> +	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
> +}
> +

Maybe we can also use this newly added helper by updating the caller
in blk_mq_complete_request_remote()?

Thanks,


* Re: [PATCH 1/6] blk-mq: introduce blk_mq_set_request_complete
  2021-01-05 19:16     ` Minwoo Im
@ 2021-01-06  2:29       ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-06  2:29 UTC (permalink / raw)
  To: Minwoo Im; +Cc: linux-nvme, kbusch, axboe, hch, sagi, linux-block, axboe



On 2021/1/6 3:16, Minwoo Im wrote:
> Hello,
> 
> On 21-01-05 15:19:31, Chao Leng wrote:
>> In some scenarios, nvme needs to set the state of a request to
>> MQ_RQ_COMPLETE. So add an inline function, blk_mq_set_request_complete.
>> For details, see the subsequent patches.
>>
>> Signed-off-by: Chao Leng <lengchao@huawei.com>
>> ---
>>   include/linux/blk-mq.h | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
>> index e7482e6ad3ec..cee72d31054d 100644
>> --- a/include/linux/blk-mq.h
>> +++ b/include/linux/blk-mq.h
>> @@ -493,6 +493,11 @@ static inline int blk_mq_request_completed(struct request *rq)
>>   	return blk_mq_rq_state(rq) == MQ_RQ_COMPLETE;
>>   }
>>   
>> +static inline void blk_mq_set_request_complete(struct request *rq)
>> +{
>> +	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
>> +}
>> +
> 
> Maybe we can also use this newly added helper by updating the caller
> in blk_mq_complete_request_remote()?
There are similar opportunities for blk_mq_request_started and
blk_mq_request_completed. It may be better to do that in independent
patches.
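
For reference, the existing helpers that follow the same pattern look
roughly like this in include/linux/blk-mq.h (quoted from memory of the
base this series applies to, for context only):

	static inline int blk_mq_request_started(struct request *rq)
	{
		return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
	}

	static inline int blk_mq_request_completed(struct request *rq)
	{
		return blk_mq_rq_state(rq) == MQ_RQ_COMPLETE;
	}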
> 
> Thanks,
> .
> 


* Re: [PATCH 2/6] nvme-core: introduce complete failed request
  2021-01-05 19:11     ` Minwoo Im
@ 2021-01-06  2:31       ` Chao Leng
  -1 siblings, 0 replies; 22+ messages in thread
From: Chao Leng @ 2021-01-06  2:31 UTC (permalink / raw)
  To: Minwoo Im; +Cc: linux-nvme, kbusch, axboe, hch, sagi, linux-block, axboe



On 2021/1/6 3:11, Minwoo Im wrote:
> Hello,
> 
> On 21-01-05 15:19:32, Chao Leng wrote:
>> When queuing a request fails, if the failure status is not
>> BLK_STS_RESOURCE, BLK_STS_DEV_RESOURCE, or BLK_STS_ZONE_RESOURCE,
>> the request needs to be completed with nvme_complete_rq in queue_rq.
>> So introduce nvme_try_complete_failed_req.
>> The request needs to be completed with NVME_SC_HOST_PATH_ERROR in
>> nvmf_fail_nonready_command and queue_rq.
>> So introduce nvme_complete_failed_req.
>> For details, see the subsequent patches.
>>
>> Signed-off-by: Chao Leng <lengchao@huawei.com>
>> ---
>>   drivers/nvme/host/nvme.h | 18 ++++++++++++++++++
>>   1 file changed, 18 insertions(+)
>>
>> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
>> index bfcedfa4b057..1a0bddb9158f 100644
>> --- a/drivers/nvme/host/nvme.h
>> +++ b/drivers/nvme/host/nvme.h
>> @@ -649,6 +649,24 @@ void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx);
>>   extern const struct attribute_group *nvme_ns_id_attr_groups[];
>>   extern const struct block_device_operations nvme_ns_head_ops;
>>   
>> +static inline void nvme_complete_failed_req(struct request *req)
>> +{
>> +	nvme_req(req)->status = NVME_SC_HOST_PATH_ERROR;
>> +	blk_mq_set_request_complete(req);
>> +	nvme_complete_rq(req);
>> +}
>> +
>> +static inline blk_status_t nvme_try_complete_failed_req(struct request *req,
>> +							blk_status_t ret)
>> +{
>> +	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE ||
>> +	    ret == BLK_STS_ZONE_RESOURCE)
>> +		return ret;
> 
> If there is no particular reason for the if-chain here, can we turn it
> into a switch, just like the other function in the same file does:
ok.
> 
> 	switch (ret) {
> 	case BLK_STS_RESOURCE:
> 	case BLK_STS_DEV_RESOURCE:
> 	case BLK_STS_ZONE_RESOURCE:
> 		return ret;
> 	default:
> 		nvme_complete_failed_req(req);
> 		return BLK_STS_OK;
> 	}
> 
>> +
>> +	nvme_complete_failed_req(req);
>> +	return BLK_STS_OK;
>> +}
>> +
> 
> Can we place these two functions alongside nvme_try_complete_req()
> by moving the declaration of nvme_complete_rq() up a little?
This may leave the function declarations out of order.
> 
>>   #ifdef CONFIG_NVME_MULTIPATH
>>   static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
>>   {
>> -- 
>> 2.16.4
>>
> 
> Thanks,
> .
> 

