* [PATCH 0/8] small NVMe cleanups/fixes
@ 2019-10-10 13:28 Max Gurtovoy
  2019-10-10 13:28 ` [PATCH 1/8] nvme: Introduce nvme_is_aen_req function Max Gurtovoy
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: Max Gurtovoy, israelr, james.smart, shlomin

Hi Sagi/Christoph/Keith,
This series includes a few fast-path and code readability improvements
from Israel Rukshin, as well as a few memory-leak fixes and a new status
code introduction from myself.

This series applies cleanly on top of the nvme-5.4 branch, commit
3a8ecc935efab ("nvme: retain split access workaround for capability
reads").

Israel Rukshin (4):
  nvme: Introduce nvme_is_aen_req function
  nvmet: Use bio_io_error instead of duplicating it
  nvmet: add unlikely check at nvmet_req_alloc_sgl
  nvmet-rdma: add unlikely check at nvmet_rdma_map_sgl_keyed

Max Gurtovoy (4):
  nvme: introduce "Command Aborted By host" status code
  nvme: move common call to nvme_cleanup_cmd to core layer
  nvmet-loop: fix possible leakage during error flow
  nvme-tcp: fix possible leakage during error flow

 drivers/nvme/host/core.c          |  6 +++++-
 drivers/nvme/host/fc.c            |  3 +--
 drivers/nvme/host/multipath.c     |  1 +
 drivers/nvme/host/nvme.h          |  5 +++++
 drivers/nvme/host/pci.c           |  4 +---
 drivers/nvme/host/rdma.c          | 16 +++++++---------
 drivers/nvme/host/tcp.c           |  5 +++--
 drivers/nvme/target/core.c        |  2 +-
 drivers/nvme/target/io-cmd-bdev.c |  8 +++-----
 drivers/nvme/target/loop.c        |  9 +++++----
 drivers/nvme/target/rdma.c        |  4 ++--
 include/linux/nvme.h              |  1 +
 12 files changed, 35 insertions(+), 29 deletions(-)

-- 
1.8.3.1


* [PATCH 1/8] nvme: Introduce nvme_is_aen_req function
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:51   ` Christoph Hellwig
  2019-10-10 13:28 ` [PATCH 2/8] nvmet: use bio_io_error instead of duplicating it Max Gurtovoy
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: israelr, james.smart, shlomin

From: Israel Rukshin <israelr@mellanox.com>

This function improves code readability and reduces code duplication.

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/host/nvme.h   | 5 +++++
 drivers/nvme/host/pci.c    | 3 +--
 drivers/nvme/host/rdma.c   | 4 ++--
 drivers/nvme/host/tcp.c    | 4 ++--
 drivers/nvme/target/loop.c | 4 ++--
 5 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 38a83ef5..6fa2b6e 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -445,6 +445,11 @@ static inline void nvme_put_ctrl(struct nvme_ctrl *ctrl)
 	put_device(ctrl->device);
 }
 
+static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
+{
+	return (!qid && command_id >= NVME_AQ_BLK_MQ_DEPTH);
+}
+
 void nvme_complete_rq(struct request *req);
 bool nvme_cancel_request(struct request *req, void *data, bool reserved);
 bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 78e4038..f8f76a9 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -967,8 +967,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
 	 * aborts.  We don't even bother to allocate a struct request
 	 * for them but rather special case them here.
 	 */
-	if (unlikely(nvmeq->qid == 0 &&
-			cqe->command_id >= NVME_AQ_BLK_MQ_DEPTH)) {
+	if (unlikely(nvme_is_aen_req(nvmeq->qid, cqe->command_id))) {
 		nvme_complete_async_event(&nvmeq->dev->ctrl,
 				cqe->status, &cqe->result);
 		return;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 4d28016..154fa4e 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1501,8 +1501,8 @@ static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	 * aborts.  We don't even bother to allocate a struct request
 	 * for them but rather special case them here.
 	 */
-	if (unlikely(nvme_rdma_queue_idx(queue) == 0 &&
-			cqe->command_id >= NVME_AQ_BLK_MQ_DEPTH))
+	if (unlikely(nvme_is_aen_req(nvme_rdma_queue_idx(queue),
+				     cqe->command_id)))
 		nvme_complete_async_event(&queue->ctrl->ctrl, cqe->status,
 				&cqe->result);
 	else
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 385a521..124fda6 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -491,8 +491,8 @@ static int nvme_tcp_handle_comp(struct nvme_tcp_queue *queue,
 	 * aborts.  We don't even bother to allocate a struct request
 	 * for them but rather special case them here.
 	 */
-	if (unlikely(nvme_tcp_queue_id(queue) == 0 &&
-	    cqe->command_id >= NVME_AQ_BLK_MQ_DEPTH))
+	if (unlikely(nvme_is_aen_req(nvme_tcp_queue_id(queue),
+				     cqe->command_id)))
 		nvme_complete_async_event(&queue->ctrl->ctrl, cqe->status,
 				&cqe->result);
 	else
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 748a39f..bd1f81f 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -102,8 +102,8 @@ static void nvme_loop_queue_response(struct nvmet_req *req)
 	 * aborts.  We don't even bother to allocate a struct request
 	 * for them but rather special case them here.
 	 */
-	if (unlikely(nvme_loop_queue_idx(queue) == 0 &&
-			cqe->command_id >= NVME_AQ_BLK_MQ_DEPTH)) {
+	if (unlikely(nvme_is_aen_req(nvme_loop_queue_idx(queue),
+				     cqe->command_id))) {
 		nvme_complete_async_event(&queue->ctrl->ctrl, cqe->status,
 				&cqe->result);
 	} else {
-- 
1.8.3.1


* [PATCH 2/8] nvmet: use bio_io_error instead of duplicating it
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
  2019-10-10 13:28 ` [PATCH 1/8] nvme: Introduce nvme_is_aen_req function Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:51   ` Christoph Hellwig
  2019-10-10 13:28 ` [PATCH 3/8] nvmet: add unlikely check at nvmet_req_alloc_sgl Max Gurtovoy
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: israelr, james.smart, shlomin

From: Israel Rukshin <israelr@mellanox.com>

Use the bio_io_error() helper instead of open-coding it. This commit
doesn't change any logic.
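
For reference, bio_io_error() is a small inline helper in
include/linux/bio.h that does exactly what the open-coded version did
(a sketch from the headers of this era, shown for context):

static inline void bio_io_error(struct bio *bio)
{
	bio->bi_status = BLK_STS_IOERR;
	bio_endio(bio);
}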

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/io-cmd-bdev.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 32008d8..f2618dc 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -261,12 +261,10 @@ static void nvmet_bdev_execute_discard(struct nvmet_req *req)
 	if (bio) {
 		bio->bi_private = req;
 		bio->bi_end_io = nvmet_bio_done;
-		if (status) {
-			bio->bi_status = BLK_STS_IOERR;
-			bio_endio(bio);
-		} else {
+		if (status)
+			bio_io_error(bio);
+		else
 			submit_bio(bio);
-		}
 	} else {
 		nvmet_req_complete(req, status);
 	}
-- 
1.8.3.1


* [PATCH 3/8] nvmet: add unlikely check at nvmet_req_alloc_sgl
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
  2019-10-10 13:28 ` [PATCH 1/8] nvme: Introduce nvme_is_aen_req function Max Gurtovoy
  2019-10-10 13:28 ` [PATCH 2/8] nvmet: use bio_io_error instead of duplicating it Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:51   ` Christoph Hellwig
  2019-10-10 13:28 ` [PATCH 4/8] nvmet-rdma: add unlikely check at nvmet_rdma_map_sgl_keyed Max Gurtovoy
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: israelr, james.smart, shlomin

From: Israel Rukshin <israelr@mellanox.com>

The call to sgl_alloc shouldn't normally fail, so add this simple
optimization to the fast path.
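
As a reminder, unlikely() is a branch-prediction hint built on GCC's
__builtin_expect; a simplified sketch of the standard definitions from
include/linux/compiler.h (the branch-profiling variants are omitted):

#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)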

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 3a67e24..6b39cfc 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -966,7 +966,7 @@ int nvmet_req_alloc_sgl(struct nvmet_req *req)
 	}
 
 	req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
-	if (!req->sg)
+	if (unlikely(!req->sg))
 		return -ENOMEM;
 
 	return 0;
-- 
1.8.3.1


* [PATCH 4/8] nvmet-rdma: add unlikely check at nvmet_rdma_map_sgl_keyed
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
                   ` (2 preceding siblings ...)
  2019-10-10 13:28 ` [PATCH 3/8] nvmet: add unlikely check at nvmet_req_alloc_sgl Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:28 ` [PATCH 5/8] nvme: introduce "Command Aborted By host" status code Max Gurtovoy
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: israelr, james.smart, shlomin

From: Israel Rukshin <israelr@mellanox.com>

The calls to nvmet_req_alloc_sgl and rdma_rw_ctx_init should usually
succeed, so add this simple optimization to the fast path.

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/rdma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 36d906a..ccf9821 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -672,13 +672,13 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 		return 0;
 
 	ret = nvmet_req_alloc_sgl(&rsp->req);
-	if (ret < 0)
+	if (unlikely(ret < 0))
 		goto error_out;
 
 	ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
 			rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
 			nvmet_data_dir(&rsp->req));
-	if (ret < 0)
+	if (unlikely(ret < 0))
 		goto error_out;
 	rsp->n_rdma += ret;
 
-- 
1.8.3.1


* [PATCH 5/8] nvme: introduce "Command Aborted By host" status code
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
                   ` (3 preceding siblings ...)
  2019-10-10 13:28 ` [PATCH 4/8] nvmet-rdma: add unlikely check at nvmet_rdma_map_sgl_keyed Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:53   ` Christoph Hellwig
  2019-10-10 13:28 ` [PATCH 6/8] nvme: move common call to nvme_cleanup_cmd to core layer Max Gurtovoy
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: Max Gurtovoy, israelr, james.smart, shlomin

Fix the status code of canceled requests initiated by the host according
to TP4028 (Status Code 0x371):
"Command Aborted By host: The command was aborted as a result of host
action (e.g., the host disconnected the Fabric connection)."

Also, in a multipath environment, unless otherwise specified, errors of
this type (path-related) should be retried using a different path, if
one is available.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/host/core.c      | 2 +-
 drivers/nvme/host/multipath.c | 1 +
 include/linux/nvme.h          | 1 +
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ef1d8f8..2203309 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -298,7 +298,7 @@ bool nvme_cancel_request(struct request *req, void *data, bool reserved)
 	if (blk_mq_request_completed(req))
 		return true;
 
-	nvme_req(req)->status = NVME_SC_HOST_PATH_ERROR;
+	nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
 	blk_mq_complete_request(req);
 	return true;
 }
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 30de7ef..103c04f 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -95,6 +95,7 @@ void nvme_failover_req(struct request *req)
 		}
 		break;
 	case NVME_SC_HOST_PATH_ERROR:
+	case NVME_SC_HOST_ABORTED_CMD:
 		/*
 		 * Temporary transport disruption in talking to the controller.
 		 * Try to send on a new path.
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index f61d690..a260cd7 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -1368,6 +1368,7 @@ enum {
 	NVME_SC_ANA_INACCESSIBLE	= 0x302,
 	NVME_SC_ANA_TRANSITION		= 0x303,
 	NVME_SC_HOST_PATH_ERROR		= 0x370,
+	NVME_SC_HOST_ABORTED_CMD	= 0x371,
 
 	NVME_SC_CRD			= 0x1800,
 	NVME_SC_DNR			= 0x4000,
-- 
1.8.3.1


* [PATCH 6/8] nvme: move common call to nvme_cleanup_cmd to core layer
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
                   ` (4 preceding siblings ...)
  2019-10-10 13:28 ` [PATCH 5/8] nvme: introduce "Command Aborted By host" status code Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:56   ` Christoph Hellwig
  2019-10-10 13:28 ` [PATCH 7/8] nvmet-loop: fix possible leakage during error flow Max Gurtovoy
  2019-10-10 13:28 ` [PATCH 8/8] nvme-tcp: " Max Gurtovoy
  7 siblings, 1 reply; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: Max Gurtovoy, israelr, james.smart, shlomin

nvme_cleanup_cmd should be called for each call to nvme_setup_cmd, since
the two functions are symmetrical. Move the call to nvme_cleanup_cmd into
the common core layer and invoke it from nvme_complete_rq on the good
flow. On error flows, each transport calls nvme_cleanup_cmd independently.
Also take care of the special case of a path failure, where we call
nvme_complete_rq without a prior nvme_setup_cmd.
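
In sketch form, the ownership rule this patch establishes (a hypothetical
transport skeleton, not code from the patch; transport_map_data() is an
invented placeholder and error handling is simplified):

static blk_status_t example_queue_rq(struct nvme_ns *ns, struct request *rq,
				     struct nvme_command *cmd)
{
	blk_status_t ret = nvme_setup_cmd(ns, rq, cmd);

	if (ret)
		return ret;

	if (transport_map_data(rq)) {
		/* error flow: the transport cleans up after itself */
		nvme_cleanup_cmd(rq);
		return BLK_STS_RESOURCE;
	}

	/* good flow: nvme_complete_rq() in the core now does the cleanup */
	return BLK_STS_OK;
}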

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/host/core.c   |  4 ++++
 drivers/nvme/host/fc.c     |  3 +--
 drivers/nvme/host/pci.c    |  1 -
 drivers/nvme/host/rdma.c   | 12 +++++-------
 drivers/nvme/target/loop.c |  1 -
 5 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2203309..46acdba 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -268,6 +268,10 @@ void nvme_complete_rq(struct request *req)
 
 	trace_nvme_complete_rq(req);
 
+	/* In case of NVME_SC_HOST_PATH_ERROR we don't call nvme_setup_cmd */
+	if (likely(nvme_req(req)->status != NVME_SC_HOST_PATH_ERROR))
+		nvme_cleanup_cmd(req);
+
 	if (nvme_req(req)->ctrl->kas)
 		nvme_req(req)->ctrl->comp_seen = true;
 
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 265f89e..1e14dd6 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2173,8 +2173,6 @@ enum {
 	fc_dma_unmap_sg(ctrl->lport->dev, freq->sg_table.sgl, op->nents,
 			rq_dma_dir(rq));
 
-	nvme_cleanup_cmd(rq);
-
 	sg_free_table_chained(&freq->sg_table, SG_CHUNK_SIZE);
 
 	freq->sg_cnt = 0;
@@ -2305,6 +2303,7 @@ enum {
 		if (!(op->flags & FCOP_FLAGS_AEN))
 			nvme_fc_unmap_data(ctrl, op->rq, op);
 
+		nvme_cleanup_cmd(op->rq);
 		nvme_fc_ctrl_put(ctrl);
 
 		if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE &&
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index f8f76a9..55939df 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -924,7 +924,6 @@ static void nvme_pci_complete_rq(struct request *req)
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_dev *dev = iod->nvmeq->dev;
 
-	nvme_cleanup_cmd(req);
 	if (blk_integrity_rq(req))
 		dma_unmap_page(dev->dev, iod->meta_dma,
 			       rq_integrity_vec(req)->bv_len, rq_data_dir(req));
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 154fa4e..05f2dfa 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1160,8 +1160,6 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 	}
 
 	ib_dma_unmap_sg(ibdev, req->sg_table.sgl, req->nents, rq_dma_dir(rq));
-
-	nvme_cleanup_cmd(rq);
 	sg_free_table_chained(&req->sg_table, SG_CHUNK_SIZE);
 }
 
@@ -1760,7 +1758,6 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (unlikely(err < 0)) {
 		dev_err(queue->ctrl->ctrl.device,
 			     "Failed to map data (%d)\n", err);
-		nvme_cleanup_cmd(rq);
 		goto err;
 	}
 
@@ -1771,18 +1768,19 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	err = nvme_rdma_post_send(queue, sqe, req->sge, req->num_sge,
 			req->mr ? &req->reg_wr.wr : NULL);
-	if (unlikely(err)) {
-		nvme_rdma_unmap_data(queue, rq);
-		goto err;
-	}
+	if (unlikely(err))
+		goto err_unmap;
 
 	return BLK_STS_OK;
 
+err_unmap:
+	nvme_rdma_unmap_data(queue, rq);
 err:
 	if (err == -ENOMEM || err == -EAGAIN)
 		ret = BLK_STS_RESOURCE;
 	else
 		ret = BLK_STS_IOERR;
+	nvme_cleanup_cmd(rq);
 unmap_qe:
 	ib_dma_unmap_single(dev, req->sqe.dma, sizeof(struct nvme_command),
 			    DMA_TO_DEVICE);
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index bd1f81f..5b7b197 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -76,7 +76,6 @@ static void nvme_loop_complete_rq(struct request *req)
 {
 	struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req);
 
-	nvme_cleanup_cmd(req);
 	sg_free_table_chained(&iod->sg_table, SG_CHUNK_SIZE);
 	nvme_complete_rq(req);
 }
-- 
1.8.3.1


* [PATCH 7/8] nvmet-loop: fix possible leakage during error flow
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
                   ` (5 preceding siblings ...)
  2019-10-10 13:28 ` [PATCH 6/8] nvme: move common call to nvme_cleanup_cmd to core layer Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:28 ` [PATCH 8/8] nvme-tcp: " Max Gurtovoy
  7 siblings, 0 replies; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: Max Gurtovoy, israelr, james.smart, shlomin

During the nvme_loop_queue_rq error flow, nvme_cleanup_cmd must be called,
since it is symmetric to nvme_setup_cmd.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/target/loop.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 5b7b197..74a61262 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -156,8 +156,10 @@ static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		iod->sg_table.sgl = iod->first_sgl;
 		if (sg_alloc_table_chained(&iod->sg_table,
 				blk_rq_nr_phys_segments(req),
-				iod->sg_table.sgl, SG_CHUNK_SIZE))
+				iod->sg_table.sgl, SG_CHUNK_SIZE)) {
+			nvme_cleanup_cmd(req);
 			return BLK_STS_RESOURCE;
+		}
 
 		iod->req.sg = iod->sg_table.sgl;
 		iod->req.sg_cnt = blk_rq_map_sg(req->q, req, iod->sg_table.sgl);
-- 
1.8.3.1


* [PATCH 8/8] nvme-tcp: fix possible leakage during error flow
  2019-10-10 13:28 [PATCH 0/8] small NVMe cleanups/fixes Max Gurtovoy
                   ` (6 preceding siblings ...)
  2019-10-10 13:28 ` [PATCH 7/8] nvmet-loop: fix possible leakage during error flow Max Gurtovoy
@ 2019-10-10 13:28 ` Max Gurtovoy
  2019-10-10 13:56   ` Christoph Hellwig
  7 siblings, 1 reply; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 13:28 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: Max Gurtovoy, israelr, james.smart, shlomin

During the nvme_tcp_setup_cmd_pdu error flow, nvme_cleanup_cmd must be
called, since it is symmetric to nvme_setup_cmd.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/host/tcp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 124fda6..0156350 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2126,6 +2126,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 
 	ret = nvme_tcp_map_data(queue, rq);
 	if (unlikely(ret)) {
+		nvme_cleanup_cmd(rq);
 		dev_err(queue->ctrl->ctrl.device,
 			"Failed to map data (%d)\n", ret);
 		return ret;
-- 
1.8.3.1


* Re: [PATCH 1/8] nvme: Introduce nvme_is_aen_req function
  2019-10-10 13:28 ` [PATCH 1/8] nvme: Introduce nvme_is_aen_req function Max Gurtovoy
@ 2019-10-10 13:51   ` Christoph Hellwig
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2019-10-10 13:51 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: sagi, israelr, james.smart, linux-nvme, shlomin, kbusch, hch

On Thu, Oct 10, 2019 at 04:28:13PM +0300, Max Gurtovoy wrote:
> From: Israel Rukshin <israelr@mellanox.com>
> 
> This function improves code readability and reduces code duplication.
> 
> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
> Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
> ---
>  drivers/nvme/host/nvme.h   | 5 +++++
>  drivers/nvme/host/pci.c    | 3 +--
>  drivers/nvme/host/rdma.c   | 4 ++--
>  drivers/nvme/host/tcp.c    | 4 ++--
>  drivers/nvme/target/loop.c | 4 ++--
>  5 files changed, 12 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 38a83ef5..6fa2b6e 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -445,6 +445,11 @@ static inline void nvme_put_ctrl(struct nvme_ctrl *ctrl)
>  	put_device(ctrl->device);
>  }
>  
> +static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
> +{
> +	return (!qid && command_id >= NVME_AQ_BLK_MQ_DEPTH);

No need for the braces.
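
Something like this, i.e. a sketch of the suggested change:

static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
{
	return !qid && command_id >= NVME_AQ_BLK_MQ_DEPTH;
}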

Otherwise this looks fine:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 2/8] nvmet: use bio_io_error instead of duplicating it
  2019-10-10 13:28 ` [PATCH 2/8] nvmet: use bio_io_error instead of duplicating it Max Gurtovoy
@ 2019-10-10 13:51   ` Christoph Hellwig
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2019-10-10 13:51 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: sagi, israelr, james.smart, linux-nvme, shlomin, kbusch, hch

On Thu, Oct 10, 2019 at 04:28:14PM +0300, Max Gurtovoy wrote:
> From: Israel Rukshin <israelr@mellanox.com>
> 
> Use the bio_io_error() helper instead of open-coding it. This commit
> doesn't change any logic.
> 
> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
> Reviewed-by: Max Gurtovoy <maxg@mellanox.com>

Looks fine,

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 3/8] nvmet: add unlikely check at nvmet_req_alloc_sgl
  2019-10-10 13:28 ` [PATCH 3/8] nvmet: add unlikely check at nvmet_req_alloc_sgl Max Gurtovoy
@ 2019-10-10 13:51   ` Christoph Hellwig
  2019-10-10 15:27     ` Max Gurtovoy
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2019-10-10 13:51 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: sagi, israelr, james.smart, linux-nvme, shlomin, kbusch, hch

On Thu, Oct 10, 2019 at 04:28:15PM +0300, Max Gurtovoy wrote:
> From: Israel Rukshin <israelr@mellanox.com>
> 
> The call to sgl_alloc shouldn't normally fail, so add this simple
> optimization to the fast path.

Does it really make a difference in practice?

* Re: [PATCH 5/8] nvme: introduce "Command Aborted By host" status code
  2019-10-10 13:28 ` [PATCH 5/8] nvme: introduce "Command Aborted By host" status code Max Gurtovoy
@ 2019-10-10 13:53   ` Christoph Hellwig
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2019-10-10 13:53 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: sagi, israelr, james.smart, linux-nvme, shlomin, kbusch, hch

On Thu, Oct 10, 2019 at 04:28:17PM +0300, Max Gurtovoy wrote:
> Fix the status code of canceled requests initiated by the host according
> to TP4028 (Status Code 0x371):
> "Command Aborted By host: The command was aborted as a result of host
> action (e.g., the host disconnected the Fabric connection)."
> 
> Also, in a multipath environment, unless otherwise specified, errors of
> this type (path-related) should be retried using a different path, if
> one is available.

Looks fine:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 6/8] nvme: move common call to nvme_cleanup_cmd to core layer
  2019-10-10 13:28 ` [PATCH 6/8] nvme: move common call to nvme_cleanup_cmd to core layer Max Gurtovoy
@ 2019-10-10 13:56   ` Christoph Hellwig
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2019-10-10 13:56 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: sagi, israelr, james.smart, linux-nvme, shlomin, kbusch, hch

On Thu, Oct 10, 2019 at 04:28:18PM +0300, Max Gurtovoy wrote:
> nvme_cleanup_cmd should be called for each call to nvme_setup_cmd, since
> the two functions are symmetrical. Move the call to nvme_cleanup_cmd into
> the common core layer and invoke it from nvme_complete_rq on the good
> flow. On error flows, each transport calls nvme_cleanup_cmd independently.
> Also take care of the special case of a path failure, where we call
> nvme_complete_rq without a prior nvme_setup_cmd.
> 
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> ---
>  drivers/nvme/host/core.c   |  4 ++++
>  drivers/nvme/host/fc.c     |  3 +--
>  drivers/nvme/host/pci.c    |  1 -
>  drivers/nvme/host/rdma.c   | 12 +++++-------
>  drivers/nvme/target/loop.c |  1 -
>  5 files changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 2203309..46acdba 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -268,6 +268,10 @@ void nvme_complete_rq(struct request *req)
>  
>  	trace_nvme_complete_rq(req);
>  
> +	/* In case of NVME_SC_HOST_PATH_ERROR we don't call nvme_setup_cmd */
> +	if (likely(nvme_req(req)->status != NVME_SC_HOST_PATH_ERROR))
> +		nvme_cleanup_cmd(req);

Hmm.  I don't really like that special case.  Especially given that
nvme_cleanup_cmd checks for RQF_SPECIAL_PAYLOAD and thus is a no-op if
nvme_setup_cmd wasn't called.  So I'd rather remove this check.
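
For context, nvme_cleanup_cmd() at this point in the tree looks roughly
like the following (a sketch, not verbatim); RQF_SPECIAL_PAYLOAD is only
set from the nvme_setup_cmd() discard path, so calling the cleanup on a
request that never went through nvme_setup_cmd is harmless:

void nvme_cleanup_cmd(struct request *req)
{
	if (req->rq_flags & RQF_SPECIAL_PAYLOAD) {
		struct nvme_ns *ns = req->rq_disk->private_data;
		struct page *page = req->special_vec.bv_page;

		if (page == ns->ctrl->discard_page)
			clear_bit_unlock(0, &ns->ctrl->discard_page_busy);
		else
			kfree(page_address(page) + req->special_vec.bv_offset);
	}
}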

* Re: [PATCH 8/8] nvme-tcp: fix possible leakage during error flow
  2019-10-10 13:28 ` [PATCH 8/8] nvme-tcp: " Max Gurtovoy
@ 2019-10-10 13:56   ` Christoph Hellwig
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Hellwig @ 2019-10-10 13:56 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: sagi, israelr, james.smart, linux-nvme, shlomin, kbusch, hch

On Thu, Oct 10, 2019 at 04:28:20PM +0300, Max Gurtovoy wrote:
> During the nvme_tcp_setup_cmd_pdu error flow, nvme_cleanup_cmd must be
> called, since it is symmetric to nvme_setup_cmd.
> 
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 3/8] nvmet: add unlikely check at nvmet_req_alloc_sgl
  2019-10-10 13:51   ` Christoph Hellwig
@ 2019-10-10 15:27     ` Max Gurtovoy
  0 siblings, 0 replies; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-10 15:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: sagi, israelr, james.smart, linux-nvme, shlomin, kbusch


On 10/10/2019 4:51 PM, Christoph Hellwig wrote:
> On Thu, Oct 10, 2019 at 04:28:15PM +0300, Max Gurtovoy wrote:
>> From: Israel Rukshin <israelr@mellanox.com>
>>
>> The call to sgl_alloc shouldn't normally fail, so add this simple
>> optimization to the fast path.
> Does it really make a difference in practice?

adding one "unlikely" will not make a difference, but 100 of them will :)


* [PATCH 6/8] nvme: move common call to nvme_cleanup_cmd to core layer
  2019-10-13 16:57 [PATCH v2 0/8] small NVMe cleanups/fixes Max Gurtovoy
@ 2019-10-13 16:57 ` Max Gurtovoy
  0 siblings, 0 replies; 17+ messages in thread
From: Max Gurtovoy @ 2019-10-13 16:57 UTC (permalink / raw)
  To: linux-nvme, hch, kbusch, sagi; +Cc: Max Gurtovoy, israelr, james.smart, shlomin

nvme_cleanup_cmd should be called for each call to nvme_setup_cmd, since
the two functions are symmetrical. Move the call to nvme_cleanup_cmd into
the common core layer and invoke it from nvme_complete_rq on the good
flow. On error flows, each transport calls nvme_cleanup_cmd independently.
Also take care of the special case of a path failure, where we call
nvme_complete_rq without a prior nvme_setup_cmd.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/host/core.c   |  2 ++
 drivers/nvme/host/fc.c     |  3 +--
 drivers/nvme/host/pci.c    |  1 -
 drivers/nvme/host/rdma.c   | 12 +++++-------
 drivers/nvme/target/loop.c |  1 -
 5 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 2203309..033b16c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -268,6 +268,8 @@ void nvme_complete_rq(struct request *req)
 
 	trace_nvme_complete_rq(req);
 
+	nvme_cleanup_cmd(req);
+
 	if (nvme_req(req)->ctrl->kas)
 		nvme_req(req)->ctrl->comp_seen = true;
 
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 265f89e..1e14dd6 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2173,8 +2173,6 @@ enum {
 	fc_dma_unmap_sg(ctrl->lport->dev, freq->sg_table.sgl, op->nents,
 			rq_dma_dir(rq));
 
-	nvme_cleanup_cmd(rq);
-
 	sg_free_table_chained(&freq->sg_table, SG_CHUNK_SIZE);
 
 	freq->sg_cnt = 0;
@@ -2305,6 +2303,7 @@ enum {
 		if (!(op->flags & FCOP_FLAGS_AEN))
 			nvme_fc_unmap_data(ctrl, op->rq, op);
 
+		nvme_cleanup_cmd(op->rq);
 		nvme_fc_ctrl_put(ctrl);
 
 		if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE &&
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index f8f76a9..55939df 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -924,7 +924,6 @@ static void nvme_pci_complete_rq(struct request *req)
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_dev *dev = iod->nvmeq->dev;
 
-	nvme_cleanup_cmd(req);
 	if (blk_integrity_rq(req))
 		dma_unmap_page(dev->dev, iod->meta_dma,
 			       rq_integrity_vec(req)->bv_len, rq_data_dir(req));
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 154fa4e..05f2dfa 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1160,8 +1160,6 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 	}
 
 	ib_dma_unmap_sg(ibdev, req->sg_table.sgl, req->nents, rq_dma_dir(rq));
-
-	nvme_cleanup_cmd(rq);
 	sg_free_table_chained(&req->sg_table, SG_CHUNK_SIZE);
 }
 
@@ -1760,7 +1758,6 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (unlikely(err < 0)) {
 		dev_err(queue->ctrl->ctrl.device,
 			     "Failed to map data (%d)\n", err);
-		nvme_cleanup_cmd(rq);
 		goto err;
 	}
 
@@ -1771,18 +1768,19 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	err = nvme_rdma_post_send(queue, sqe, req->sge, req->num_sge,
 			req->mr ? &req->reg_wr.wr : NULL);
-	if (unlikely(err)) {
-		nvme_rdma_unmap_data(queue, rq);
-		goto err;
-	}
+	if (unlikely(err))
+		goto err_unmap;
 
 	return BLK_STS_OK;
 
+err_unmap:
+	nvme_rdma_unmap_data(queue, rq);
 err:
 	if (err == -ENOMEM || err == -EAGAIN)
 		ret = BLK_STS_RESOURCE;
 	else
 		ret = BLK_STS_IOERR;
+	nvme_cleanup_cmd(rq);
 unmap_qe:
 	ib_dma_unmap_single(dev, req->sqe.dma, sizeof(struct nvme_command),
 			    DMA_TO_DEVICE);
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index bd1f81f..5b7b197 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -76,7 +76,6 @@ static void nvme_loop_complete_rq(struct request *req)
 {
 	struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req);
 
-	nvme_cleanup_cmd(req);
 	sg_free_table_chained(&iod->sg_table, SG_CHUNK_SIZE);
 	nvme_complete_rq(req);
 }
-- 
1.8.3.1

