All of lore.kernel.org
* [PATCH 0/4] SGL alloc and free helper functions for requests
@ 2018-03-29 16:07 ` Logan Gunthorpe
  0 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:07 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg, James Smart, Logan Gunthorpe

Hey,

These patches are going to be part of the P2PDMA patchset I'm working
on. The change was suggested by Christoph. I just wanted to get the NVMe
community's initial feedback on them separately from our main patchset.

The exception is patch 0003, which I'm pretty sure is a valid bug fix,
but it needs review.

I also cannot test the Fibre Channel changes, as I do not have the
hardware, so I'd appreciate it if someone could test them.

If you need more context, the commit which modifies these to support
P2P is [1]. This will likely be posted as the next version of the P2PDMA
patchset in the next cycle.

Thanks,

Logan


[1] https://github.com/sbates130272/linux-p2pmem/commit/4e57ff23d5ccc4687ac8eb567df1d4d15bf50254

Logan Gunthorpe (4):
  nvmet: Introduce helper functions to allocate and free request SGLs
  nvmet-rdma: Use new SGL alloc/free helper for requests
  nvmet-fc: Don't use the count returned by the dma_map_sg call
  nvmet-fc: Use new SGL alloc/free helper for requests

 drivers/nvme/target/core.c  | 16 ++++++++++++++++
 drivers/nvme/target/fc.c    | 45 ++++++++++++++++-----------------------------
 drivers/nvme/target/nvmet.h |  2 ++
 drivers/nvme/target/rdma.c  | 20 ++++++++++++--------
 4 files changed, 46 insertions(+), 37 deletions(-)

--
2.11.0

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 1/4] nvmet: Introduce helper functions to allocate and free request SGLs
  2018-03-29 16:07 ` Logan Gunthorpe
@ 2018-03-29 16:07   ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:07 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg, James Smart, Logan Gunthorpe

Add helpers to allocate and free the SGL in a struct nvmet_req:

int nvmet_req_alloc_sgl(struct nvmet_req *req, struct nvmet_sq *sq)
void nvmet_req_free_sgl(struct nvmet_req *req)

This will be expanded in a future patch to implement peer-to-peer
memory and is intended to be common to all target drivers. The presently
unused sq argument in the alloc function will be needed to decide
whether to use peer-to-peer memory and to obtain the correct provider
for the allocation.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/core.c  | 16 ++++++++++++++++
 drivers/nvme/target/nvmet.h |  2 ++
 2 files changed, 18 insertions(+)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index a78029e4e5f4..3770fb812657 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -575,6 +575,22 @@ void nvmet_req_execute(struct nvmet_req *req)
 }
 EXPORT_SYMBOL_GPL(nvmet_req_execute);
 
+int nvmet_req_alloc_sgl(struct nvmet_req *req, struct nvmet_sq *sq)
+{
+	req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
+	if (!req->sg)
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl);
+
+void nvmet_req_free_sgl(struct nvmet_req *req)
+{
+	sgl_free(req->sg);
+}
+EXPORT_SYMBOL_GPL(nvmet_req_free_sgl);
+
 static inline bool nvmet_cc_en(u32 cc)
 {
 	return (cc >> NVME_CC_EN_SHIFT) & 0x1;
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 417f6c0331cc..6f74cb2b7e37 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -271,6 +271,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 void nvmet_req_uninit(struct nvmet_req *req);
 void nvmet_req_execute(struct nvmet_req *req);
 void nvmet_req_complete(struct nvmet_req *req, u16 status);
+int nvmet_req_alloc_sgl(struct nvmet_req *req, struct nvmet_sq *sq);
+void nvmet_req_free_sgl(struct nvmet_req *req);
 
 void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid,
 		u16 size);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 2/4] nvmet-rdma: Use new SGL alloc/free helper for requests
  2018-03-29 16:07 ` Logan Gunthorpe
@ 2018-03-29 16:07   ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:07 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg, James Smart, Logan Gunthorpe

Use the new helpers introduced in the previous patch to allocate
the SGLs for the request.

Since req.transfer_len is now used as the length of the SGL, it is
set earlier and cleared on any error. It also seems unnecessary to
accumulate the length, as the map_sgl functions should only ever be
called once per request.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/rdma.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 978e169c11bf..0dd78229b92f 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -431,7 +431,7 @@ static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
 	}
 
 	if (rsp->req.sg != &rsp->cmd->inline_sg)
-		sgl_free(rsp->req.sg);
+		nvmet_req_free_sgl(&rsp->req);
 
 	if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list)))
 		nvmet_rdma_process_wr_wait_list(queue);
@@ -565,24 +565,24 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 {
 	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
 	u64 addr = le64_to_cpu(sgl->addr);
-	u32 len = get_unaligned_le24(sgl->length);
 	u32 key = get_unaligned_le32(sgl->key);
 	int ret;
 
+	rsp->req.transfer_len = get_unaligned_le24(sgl->length);
+
 	/* no data command? */
-	if (!len)
+	if (!rsp->req.transfer_len)
 		return 0;
 
-	rsp->req.sg = sgl_alloc(len, GFP_KERNEL, &rsp->req.sg_cnt);
-	if (!rsp->req.sg)
-		return NVME_SC_INTERNAL;
+	ret = nvmet_req_alloc_sgl(&rsp->req, &rsp->queue->nvme_sq);
+	if (ret < 0)
+		goto error_out;
 
 	ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
 			rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
 			nvmet_data_dir(&rsp->req));
 	if (ret < 0)
-		return NVME_SC_INTERNAL;
-	rsp->req.transfer_len += len;
+		goto error_out;
 	rsp->n_rdma += ret;
 
 	if (invalidate) {
@@ -591,6 +591,10 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 	}
 
 	return 0;
+
+error_out:
+	rsp->req.transfer_len = 0;
+	return NVME_SC_INTERNAL;
 }
 
 static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:07 ` Logan Gunthorpe
@ 2018-03-29 16:07   ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:07 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg, James Smart, Logan Gunthorpe

When allocating an SGL, the Fibre Channel target uses the number of
entries mapped by the DMA API as the total number of entries in a
given scatterlist. This is incorrect.

The DMA-API-HOWTO document gives this note:

   The 'nents' argument to the dma_unmap_sg call must be
   the _same_ one you passed into the dma_map_sg call,
   it should _NOT_ be the 'count' value _returned_ from the
   dma_map_sg call.

The fc code only stores the count value returned from the dma_map_sg()
call and uses that value in the call to dma_unmap_sg().

The dma_map_sg() call will return a lower count than nents when multiple
SG entries were merged into one. This means there will be fewer DMA
address and length entries than the original number of page entries in
the SGL. So, if this occurs, by the time the SGL reaches
nvmet_execute_rw(), a bio would be created with fewer than the total
number of entries.

As odd as it sounds, and as far as I can tell, the number of SG entries
mapped does not appear to be used anywhere in the fc driver and therefore
there's no current need to store it.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Fixes: c53432030d8642 ("nvme-fabrics: Add target support for FC transport")
---
 drivers/nvme/target/fc.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 9b39a6cb1935..9f2f8ab83158 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1698,6 +1698,7 @@ nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 {
 	struct scatterlist *sg;
 	unsigned int nent;
+	int ret;
 
 	sg = sgl_alloc(fod->req.transfer_len, GFP_KERNEL, &nent);
 	if (!sg)
@@ -1705,10 +1706,12 @@ nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 
 	fod->data_sg = sg;
 	fod->data_sg_cnt = nent;
-	fod->data_sg_cnt = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
-				((fod->io_dir == NVMET_FCP_WRITE) ?
-					DMA_FROM_DEVICE : DMA_TO_DEVICE));
-				/* note: write from initiator perspective */
+	ret = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
+			    ((fod->io_dir == NVMET_FCP_WRITE) ?
+				    DMA_FROM_DEVICE : DMA_TO_DEVICE));
+			    /* note: write from initiator perspective */
+	if (!ret)
+		goto out;
 
 	return 0;
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH 4/4] nvmet-fc: Use new SGL alloc/free helper for requests
  2018-03-29 16:07 ` Logan Gunthorpe
@ 2018-03-29 16:07   ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:07 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg, James Smart, Logan Gunthorpe

Use the new helpers introduced earlier to allocate the SGLs for
the request.

To do this, we drop the apparently redundant data_sg and data_sg_cnt
members, as they are identical to the existing req.sg and req.sg_cnt.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/fc.c | 38 +++++++++++---------------------------
 1 file changed, 11 insertions(+), 27 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 9f2f8ab83158..00135ff7d1c2 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -74,8 +74,6 @@ struct nvmet_fc_fcp_iod {
 	struct nvme_fc_cmd_iu		cmdiubuf;
 	struct nvme_fc_ersp_iu		rspiubuf;
 	dma_addr_t			rspdma;
-	struct scatterlist		*data_sg;
-	int				data_sg_cnt;
 	u32				offset;
 	enum nvmet_fcp_datadir		io_dir;
 	bool				active;
@@ -1696,43 +1694,34 @@ EXPORT_SYMBOL_GPL(nvmet_fc_rcv_ls_req);
 static int
 nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 {
-	struct scatterlist *sg;
-	unsigned int nent;
 	int ret;
 
-	sg = sgl_alloc(fod->req.transfer_len, GFP_KERNEL, &nent);
-	if (!sg)
-		goto out;
+	ret = nvmet_req_alloc_sgl(&fod->req, &fod->queue->nvme_sq);
+	if (ret < 0)
+		return NVME_SC_INTERNAL;
 
-	fod->data_sg = sg;
-	fod->data_sg_cnt = nent;
-	ret = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
+	ret = fc_dma_map_sg(fod->tgtport->dev, fod->req.sg, fod->req.sg_cnt,
 			    ((fod->io_dir == NVMET_FCP_WRITE) ?
 				    DMA_FROM_DEVICE : DMA_TO_DEVICE));
 			    /* note: write from initiator perspective */
 	if (!ret)
-		goto out;
+		return NVME_SC_INTERNAL;
 
 	return 0;
-
-out:
-	return NVME_SC_INTERNAL;
 }
 
 static void
 nvmet_fc_free_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 {
-	if (!fod->data_sg || !fod->data_sg_cnt)
+	if (!fod->req.sg || !fod->req.sg_cnt)
 		return;
 
-	fc_dma_unmap_sg(fod->tgtport->dev, fod->data_sg, fod->data_sg_cnt,
+	fc_dma_unmap_sg(fod->tgtport->dev, fod->req.sg, fod->req.sg_cnt,
 				((fod->io_dir == NVMET_FCP_WRITE) ?
 					DMA_FROM_DEVICE : DMA_TO_DEVICE));
-	sgl_free(fod->data_sg);
-	fod->data_sg = NULL;
-	fod->data_sg_cnt = 0;
-}
 
+	nvmet_req_free_sgl(&fod->req);
+}
 
 static bool
 queue_90percent_full(struct nvmet_fc_tgt_queue *q, u32 sqhd)
@@ -1871,7 +1860,7 @@ nvmet_fc_transfer_fcp_data(struct nvmet_fc_tgtport *tgtport,
 	fcpreq->fcp_error = 0;
 	fcpreq->rsplen = 0;
 
-	fcpreq->sg = &fod->data_sg[fod->offset / PAGE_SIZE];
+	fcpreq->sg = &fod->req.sg[fod->offset / PAGE_SIZE];
 	fcpreq->sg_cnt = DIV_ROUND_UP(tlen, PAGE_SIZE);
 
 	/*
@@ -2083,7 +2072,7 @@ __nvmet_fc_fcp_nvme_cmd_done(struct nvmet_fc_tgtport *tgtport,
 		 * There may be a status where data still was intended to
 		 * be moved
 		 */
-		if ((fod->io_dir == NVMET_FCP_READ) && (fod->data_sg_cnt)) {
+		if ((fod->io_dir == NVMET_FCP_READ) && (fod->req.sg_cnt)) {
 			/* push the data over before sending rsp */
 			nvmet_fc_transfer_fcp_data(tgtport, fod,
 						NVMET_FCOP_READDATA);
@@ -2153,9 +2142,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 	/* clear any response payload */
 	memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
 
-	fod->data_sg = NULL;
-	fod->data_sg_cnt = 0;
-
 	ret = nvmet_req_init(&fod->req,
 				&fod->queue->nvme_cq,
 				&fod->queue->nvme_sq,
@@ -2178,8 +2164,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 			return;
 		}
 	}
-	fod->req.sg = fod->data_sg;
-	fod->req.sg_cnt = fod->data_sg_cnt;
 	fod->offset = 0;
 
 	if (fod->io_dir == NVMET_FCP_WRITE) {
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:07   ` Logan Gunthorpe
@ 2018-03-29 16:14     ` Bart Van Assche
  -1 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2018-03-29 16:14 UTC (permalink / raw)
  To: logang, linux-kernel, linux-nvme; +Cc: james.smart, hch, sagi

On Thu, 2018-03-29 at 10:07 -0600, Logan Gunthorpe wrote:
> +	ret = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
> +			    ((fod->io_dir == NVMET_FCP_WRITE) ?
> +				    DMA_FROM_DEVICE : DMA_TO_DEVICE));
> +			    /* note: write from initiator perspective */

If you have to repost this patch please remove the four superfluous parentheses.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:14     ` Bart Van Assche
@ 2018-03-29 16:15       ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:15 UTC (permalink / raw)
  To: Bart Van Assche, linux-kernel, linux-nvme; +Cc: james.smart, hch, sagi



On 29/03/18 10:14 AM, Bart Van Assche wrote:
> On Thu, 2018-03-29 at 10:07 -0600, Logan Gunthorpe wrote:
>> +	ret = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
>> +			    ((fod->io_dir == NVMET_FCP_WRITE) ?
>> +				    DMA_FROM_DEVICE : DMA_TO_DEVICE));
>> +			    /* note: write from initiator perspective */
> 
> If you have to repost this patch please remove the four superfluous parentheses.

Ok, will do.

Thanks,

Logan

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:07   ` Logan Gunthorpe
@ 2018-03-29 16:24     ` James Smart
  -1 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2018-03-29 16:24 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg

On 3/29/2018 9:07 AM, Logan Gunthorpe wrote:
> When allocating an SGL, the fibre channel target uses the number
> of entities mapped as the number of entities in a given scatter
> gather list. This is incorrect.
>
> The DMA-API-HOWTO document gives this note:
>
>     The 'nents' argument to the dma_unmap_sg call must be
>     the _same_ one you passed into the dma_map_sg call,
>     it should _NOT_ be the 'count' value _returned_ from the
>     dma_map_sg call.
>
> The fc code only stores the count value returned from the dma_map_sg()
> call and uses that value in the call to dma_unmap_sg().
>
> The dma_map_sg() call will return a lower count than nents when multiple
> SG entries were merged into one. This implies that there will be fewer
> DMA address and length entries but the original number of page entries
> in the SGL. So if this occurs, when the SGL reaches nvmet_execute_rw(),
> a bio would be created with fewer than the total number of entries.
>
> As odd as it sounds, and as far as I can tell, the number of SG entries
> mapped does not appear to be used anywhere in the fc driver and therefore
> there's no current need to store it.
>
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> Cc: James Smart <james.smart@broadcom.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Fixes: c53432030d8642 ("nvme-fabrics: Add target support for FC transport")
> ---

Signed-off-by: James Smart <james.smart@broadcom.com>

Patch looks fine.

As for "not used anywhere", be careful as the structure being prepped is 
passed from the nvme-fc transport to an underlying lldd. So the 
references would likely be in the lldd.

-- james

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:24     ` James Smart
@ 2018-03-29 16:30       ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:30 UTC (permalink / raw)
  To: James Smart, linux-kernel, linux-nvme; +Cc: Christoph Hellwig, Sagi Grimberg



On 29/03/18 10:24 AM, James Smart wrote:
> Signed-off-by: James Smart  <james.smart@broadcom.com>

Thanks James.

> As for "not used anywhere", be careful as the structure being prepped is 
> passed from the nvme-fc transport to an underlying lldd. So the 
> references would likely be in the lldd.

Can you elaborate? The 'data_sg_cnt' member was in 'struct
nvmet_fc_fcp_iod' which is declared in fc.c so it doesn't seem sane for
lower driver to access it... In fact the next patch in the series
removes it completely.

Logan

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
@ 2018-03-29 16:30       ` Logan Gunthorpe
  0 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:30 UTC (permalink / raw)




On 29/03/18 10:24 AM, James Smart wrote:
> Signed-off-by: James Smart? <james.smart at broadcom.com>

Thanks James.

> As for "not used anywhere", be careful as the structure being prepped is 
> passed from the nvme-fc transport to an underlying lldd. So the 
> references would likely be in the lldd.

Can you elaborate? The 'data_sg_cnt' member was in 'struct
nvmet_fc_fcp_iod' which is declared in fc.c so it doesn't seem sane for
lower driver to access it... In fact the next patch in the series
removes it completely.

Logan

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:30       ` Logan Gunthorpe
@ 2018-03-29 16:34         ` James Smart
  -1 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2018-03-29 16:34 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg

On 3/29/2018 9:30 AM, Logan Gunthorpe wrote:
> Can you elaborate? The 'data_sg_cnt' member was in 'struct
> nvmet_fc_fcp_iod' which is declared in fc.c so it doesn't seem sane for
> lower driver to access it... In fact the next patch in the series
> removes it completely.
>
> Logan

actually, I do think it's ok. about to post review on the next patch

-- james

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:14     ` Bart Van Assche
@ 2018-03-29 16:38       ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 16:38 UTC (permalink / raw)
  To: Bart Van Assche, linux-kernel, linux-nvme; +Cc: james.smart, hch, sagi



On 29/03/18 10:14 AM, Bart Van Assche wrote:
> On Thu, 2018-03-29 at 10:07 -0600, Logan Gunthorpe wrote:
>> +	ret = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
>> +			    ((fod->io_dir == NVMET_FCP_WRITE) ?
>> +				    DMA_FROM_DEVICE : DMA_TO_DEVICE));
>> +			    /* note: write from initiator perspective */
> 
> If you have to repost this patch please remove the four superfluous parentheses.

Oh, actually, I made this change in patch 4 of this series for the next
time I post it. This allowed me to cleanup both the map and unmap
instances in the same patch.

Also, if the maintainer wants to pick up patch 3 as a bug fix I can
carry the rest in the P2PDMA series.

Thanks,

Logan

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/4] nvmet-fc: Use new SGL alloc/free helper for requests
  2018-03-29 16:07   ` Logan Gunthorpe
@ 2018-03-29 16:52     ` James Smart
  -1 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2018-03-29 16:52 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg

overall - I'm a little concerned about the replacement.

I know the code depends on the null/zero checks in 
nvmet_fc_free_tgt_pgs() as there are paths that can call them twice.  
Your replacement in that routine is fine, but you've fully removed any 
initialization to zero or setting to zero on free. Given the structure 
containing the req structure is reused, I'd rather that that code was 
left in some way... or that nvmet_req_free_sgl() or 
nvmet_fc_free_tgt_pgs() was updated to set req->sg to null and (not sure 
necessary if req->sg is null) set req->sg_cnt to 0.

also - sg_cnt isn't referenced as there is an assumption that: transfer 
length meant there would be (transfer length / PAGESIZE + 1 if partial 
page) pages allocated and sg_cnt would always equal that page cnt  thus 
the fcpreq->sgXXX setting behavior in nvmet_fc_transfer_fcp_data().   Is 
that no longer valid ?   I may not have watched closely enough when the 
sgl_alloc() common routine was introduced as it may have invalidated 
these assumptions that were true before.

-- james


On 3/29/2018 9:07 AM, Logan Gunthorpe wrote:
> Use the new helpers introduced earlier to allocate the SGLs for
> the request.
>
> To do this, we drop the apparently redundant data_sg and data_sg_cnt
> members as they are identical to the existing req.sg and req.sg_cnt.
>
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> Cc: James Smart <james.smart@broadcom.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> ---
>   drivers/nvme/target/fc.c | 38 +++++++++++---------------------------
>   1 file changed, 11 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
> index 9f2f8ab83158..00135ff7d1c2 100644
> --- a/drivers/nvme/target/fc.c
> +++ b/drivers/nvme/target/fc.c
> @@ -74,8 +74,6 @@ struct nvmet_fc_fcp_iod {
>   	struct nvme_fc_cmd_iu		cmdiubuf;
>   	struct nvme_fc_ersp_iu		rspiubuf;
>   	dma_addr_t			rspdma;
> -	struct scatterlist		*data_sg;
> -	int				data_sg_cnt;
>   	u32				offset;
>   	enum nvmet_fcp_datadir		io_dir;
>   	bool				active;
> @@ -1696,43 +1694,34 @@ EXPORT_SYMBOL_GPL(nvmet_fc_rcv_ls_req);
>   static int
>   nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
>   {
> -	struct scatterlist *sg;
> -	unsigned int nent;
>   	int ret;
>   
> -	sg = sgl_alloc(fod->req.transfer_len, GFP_KERNEL, &nent);
> -	if (!sg)
> -		goto out;
> +	ret = nvmet_req_alloc_sgl(&fod->req, &fod->queue->nvme_sq);
> +	if (ret < 0)
> +		return NVME_SC_INTERNAL;
>   
> -	fod->data_sg = sg;
> -	fod->data_sg_cnt = nent;
> -	ret = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
> +	ret = fc_dma_map_sg(fod->tgtport->dev, fod->req.sg, fod->req.sg_cnt,
>   			    ((fod->io_dir == NVMET_FCP_WRITE) ?
>   				    DMA_FROM_DEVICE : DMA_TO_DEVICE));
>   			    /* note: write from initiator perspective */
>   	if (!ret)
> -		goto out;
> +		return NVME_SC_INTERNAL;
>   
>   	return 0;
> -
> -out:
> -	return NVME_SC_INTERNAL;
>   }
>   
>   static void
>   nvmet_fc_free_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
>   {
> -	if (!fod->data_sg || !fod->data_sg_cnt)
> +	if (!fod->req.sg || !fod->req.sg_cnt)
>   		return;
>   
> -	fc_dma_unmap_sg(fod->tgtport->dev, fod->data_sg, fod->data_sg_cnt,
> +	fc_dma_unmap_sg(fod->tgtport->dev, fod->req.sg, fod->req.sg_cnt,
>   				((fod->io_dir == NVMET_FCP_WRITE) ?
>   					DMA_FROM_DEVICE : DMA_TO_DEVICE));
> -	sgl_free(fod->data_sg);
> -	fod->data_sg = NULL;
> -	fod->data_sg_cnt = 0;
> -}
>   
> +	nvmet_req_free_sgl(&fod->req);
> +}
>   
>   static bool
>   queue_90percent_full(struct nvmet_fc_tgt_queue *q, u32 sqhd)
> @@ -1871,7 +1860,7 @@ nvmet_fc_transfer_fcp_data(struct nvmet_fc_tgtport *tgtport,
>   	fcpreq->fcp_error = 0;
>   	fcpreq->rsplen = 0;
>   
> -	fcpreq->sg = &fod->data_sg[fod->offset / PAGE_SIZE];
> +	fcpreq->sg = &fod->req.sg[fod->offset / PAGE_SIZE];
>   	fcpreq->sg_cnt = DIV_ROUND_UP(tlen, PAGE_SIZE);
>   
>   	/*
> @@ -2083,7 +2072,7 @@ __nvmet_fc_fcp_nvme_cmd_done(struct nvmet_fc_tgtport *tgtport,
>   		 * There may be a status where data still was intended to
>   		 * be moved
>   		 */
> -		if ((fod->io_dir == NVMET_FCP_READ) && (fod->data_sg_cnt)) {
> +		if ((fod->io_dir == NVMET_FCP_READ) && (fod->req.sg_cnt)) {
>   			/* push the data over before sending rsp */
>   			nvmet_fc_transfer_fcp_data(tgtport, fod,
>   						NVMET_FCOP_READDATA);
> @@ -2153,9 +2142,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
>   	/* clear any response payload */
>   	memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
>   
> -	fod->data_sg = NULL;
> -	fod->data_sg_cnt = 0;
> -
>   	ret = nvmet_req_init(&fod->req,
>   				&fod->queue->nvme_cq,
>   				&fod->queue->nvme_sq,
> @@ -2178,8 +2164,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
>   			return;
>   		}
>   	}
> -	fod->req.sg = fod->data_sg;
> -	fod->req.sg_cnt = fod->data_sg_cnt;
>   	fod->offset = 0;
>   
>   	if (fod->io_dir == NVMET_FCP_WRITE) {

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/4] nvmet-fc: Use new SGL alloc/free helper for requests
  2018-03-29 16:52     ` James Smart
@ 2018-03-29 17:02       ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-03-29 17:02 UTC (permalink / raw)
  To: James Smart, linux-kernel, linux-nvme; +Cc: Christoph Hellwig, Sagi Grimberg



On 29/03/18 10:52 AM, James Smart wrote:
> overall - I'm a little concerned about the replacement.
> 
> I know the code depends on the null/zero checks in 
> nvmet_fc_free_tgt_pgs() as there are paths that can call them twice.  
> Your replacement in that routine is fine, but you've fully removed any 
> initialization to zero or setting to zero on free. Given the structure 
> containing the req structure is reused, I'd rather that that code was 
> left in some way... or that nvmet_req_free_sgl() or 
> nvmet_fc_free_tgt_pgs() was updated to set req->sg to null and (not sure 
> necessary if req->sg is null) set req->sg_cnt to 0.

Ok, I can ensure we zero and NULL those in nvmet_req_free_sgl().

> also - sg_cnt isn't referenced as there is an assumption that: transfer 
> length meant there would be (transfer length / PAGESIZE + 1 if partial 
> page) pages allocated and sg_cnt would always equal that page cnt  thus 
> the fcpreq->sgXXX setting behavior in nvmet_fc_transfer_fcp_data().   Is 
> that no longer valid ?   I may not have watched closely enough when the 
> sgl_alloc() common routine was introduced as it may have invalidated 
> these assumptions that were true before.

Per the bug in the previous patch, I don't think that was ever a valid
assumption. It doesn't have anything to do with the sgl_alloc change
either. The dma_map interface is allowed to merge SGLs and that's why it
can return fewer nents than it was passed. I'm not sure how many or
which DMA ops actually do this which is why it hasn't actually
manifested itself as a bug; but it is part of how the interface is
specified to work.

I think we need to store the sg_map_cnt separately and use it instead of
the calculation based on the transfer length. But this is really a fix
that should be rolled in with the previous patch. If you can point me to
where this needs to change I can update my patch, or if you want to fix
it yourself go ahead.

Thanks,

Logan

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/4] nvmet-fc: Use new SGL alloc/free helper for requests
  2018-03-29 17:02       ` Logan Gunthorpe
@ 2018-03-29 17:39         ` James Smart
  -1 siblings, 0 replies; 36+ messages in thread
From: James Smart @ 2018-03-29 17:39 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme
  Cc: Christoph Hellwig, Sagi Grimberg

On 3/29/2018 10:02 AM, Logan Gunthorpe wrote:
> Per the bug in the previous patch, I don't think that was ever a valid
> assumption. It doesn't have anything to do with the sgl_alloc change
> either. The dma_map interface is allowed to merge SGLs and that's why it
> can return fewer nents than it was passed. I'm not sure how many or
> which DMA ops actually do this which is why it hasn't actually
> manifested itself as a bug; but it is part of how the interface is
> specified to work.

Argh.. yep. I'll have to correct that assumption. The bug would have 
only shown up on i/o sizes beyond a particular length.

>
> I think we need to store the sg_map_cnt separately and use it instead of
> the calculation based on the transfer length. But this is really a fix
> that should be rolled in with the previous patch. If you can point me to
> where this needs to change I can update my patch, or if you want to fix
> it yourself go ahead.
I'll fix it as it's part of the assumption fix.

With the nulling/zeroing, I'm good with your patch.

-- james.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/4] nvmet-fc: Use new SGL alloc/free helper for requests
  2018-03-29 17:02       ` Logan Gunthorpe
@ 2018-03-29 18:15         ` Christoph Hellwig
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2018-03-29 18:15 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: James Smart, linux-kernel, linux-nvme, Christoph Hellwig, Sagi Grimberg

On Thu, Mar 29, 2018 at 11:02:10AM -0600, Logan Gunthorpe wrote:
> Per the bug in the previous patch, I don't think that was ever a valid
> assumption. It doesn't have anything to do with the sgl_alloc change
> either. The dma_map interface is allowed to merge SGLs and that's why it
> can return fewer nents than it was passed. I'm not sure how many or
> which DMA ops actually do this which is why it hasn't actually
> manifested itself as a bug; but it is part of how the interface is
> specified to work.

Most iommus can, it's just very unlikely to happen in block drivers
given that the block layer already merges segments higher up.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/4] nvmet-rdma: Use new SGL alloc/free helper for requests
  2018-03-29 16:07   ` Logan Gunthorpe
@ 2018-04-04 12:43     ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2018-04-04 12:43 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme; +Cc: Christoph Hellwig, James Smart


> @@ -565,24 +565,24 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
>   {
>   	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
>   	u64 addr = le64_to_cpu(sgl->addr);
> -	u32 len = get_unaligned_le24(sgl->length);
>   	u32 key = get_unaligned_le32(sgl->key);
>   	int ret;
>   
> +	rsp->req.transfer_len = get_unaligned_le24(sgl->length);
> +

IIRC, this might result in nvmet-rdma executing data-transfer even
for failed requests in some error cases. I'm not sure this is the
only case, but have you tested what happens in error cases?

See nvmet_rdma_need_data_in()/nvmet_rdma_need_data_out() which look
at transfer_len.

>   	/* no data command? */
> -	if (!len)
> +	if (!rsp->req.transfer_len)
>   		return 0;
>   
> -	rsp->req.sg = sgl_alloc(len, GFP_KERNEL, &rsp->req.sg_cnt);
> -	if (!rsp->req.sg)
> -		return NVME_SC_INTERNAL;
> +	ret = nvmet_req_alloc_sgl(&rsp->req, &rsp->queue->nvme_sq);
> +	if (ret < 0)
> +		goto error_out;
>   
>   	ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
>   			rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
>   			nvmet_data_dir(&rsp->req));
>   	if (ret < 0)
> -		return NVME_SC_INTERNAL;
> -	rsp->req.transfer_len += len;
> +		goto error_out;
>   	rsp->n_rdma += ret;
>   
>   	if (invalidate) {
> @@ -591,6 +591,10 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
>   	}
>   
>   	return 0;
> +
> +error_out:
> +	rsp->req.transfer_len = 0;
> +	return NVME_SC_INTERNAL;
>   }
>   
>   static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp)
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call
  2018-03-29 16:07   ` Logan Gunthorpe
@ 2018-04-04 12:45     ` Sagi Grimberg
  -1 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2018-04-04 12:45 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme; +Cc: Christoph Hellwig, James Smart

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/4] nvmet-rdma: Use new SGL alloc/free helper for requests
  2018-04-04 12:43     ` Sagi Grimberg
@ 2018-04-04 16:47       ` Logan Gunthorpe
  -1 siblings, 0 replies; 36+ messages in thread
From: Logan Gunthorpe @ 2018-04-04 16:47 UTC (permalink / raw)
  To: Sagi Grimberg, linux-kernel, linux-nvme; +Cc: Christoph Hellwig, James Smart

On 4/4/2018 6:43 AM, Sagi Grimberg wrote:
> IIRC, this might result in nvmet-rdma executing data-transfer even
> for failed requests in some error cases. I'm not sure this is the
> only case, but have you tested what happens in error cases?

That's why I set transfer_len to zero when we exit this function on 
error (see the error_out label):

> +error_out:
> +    rsp->req.transfer_len = 0;
> +    return NVME_SC_INTERNAL; 

I haven't tested the error path, but I can. Is it sufficient to just 
change the code to force an error, or do you know of a better way to 
test this?

Thanks,

Logan


end of thread, other threads:[~2018-04-04 16:47 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-29 16:07 [PATCH 0/4] SGL alloc and free helper functions for requests Logan Gunthorpe
2018-03-29 16:07 ` [PATCH 1/4] nvmet: Introduce helper functions to allocate and free request SGLs Logan Gunthorpe
2018-03-29 16:07 ` [PATCH 2/4] nvmet-rdma: Use new SGL alloc/free helper for requests Logan Gunthorpe
2018-04-04 12:43   ` Sagi Grimberg
2018-04-04 16:47     ` Logan Gunthorpe
2018-03-29 16:07 ` [PATCH 3/4] nvmet-fc: Don't use the count returned by the dma_map_sg call Logan Gunthorpe
2018-03-29 16:14   ` Bart Van Assche
2018-03-29 16:15     ` Logan Gunthorpe
2018-03-29 16:38     ` Logan Gunthorpe
2018-03-29 16:24   ` James Smart
2018-03-29 16:30     ` Logan Gunthorpe
2018-03-29 16:34       ` James Smart
2018-04-04 12:45   ` Sagi Grimberg
2018-03-29 16:07 ` [PATCH 4/4] nvmet-fc: Use new SGL alloc/free helper for requests Logan Gunthorpe
2018-03-29 16:52   ` James Smart
2018-03-29 17:02     ` Logan Gunthorpe
2018-03-29 17:39       ` James Smart
2018-03-29 18:15       ` Christoph Hellwig