* [PATCH 1/1] nvme-rdma: fix error flow during mapping request data
@ 2018-06-10 13:58 Max Gurtovoy
  2018-06-11  6:26 ` Christoph Hellwig
  2018-06-11 14:27 ` Christoph Hellwig
  0 siblings, 2 replies; 3+ messages in thread
From: Max Gurtovoy @ 2018-06-10 13:58 UTC


After dma mapping the sgl, we map it to an nvme sgl descriptor. In case
of a failure during that last mapping, we never dma unmap the sgl.
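
The fix uses the usual goto-ladder idiom: each error label undoes the setup
steps in reverse order. Below is a minimal userspace sketch of that ordering;
alloc_table/dma_map/build_descriptor are hypothetical stand-ins for
sg_alloc_table_chained(), ib_dma_map_sg() and the nvme_rdma_map_sg_*()
helpers, and model only the cleanup ordering, not the real APIs.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins: stage 0 allocates the sg table, stage 1 is the
 * dma mapping, stage 2 builds the descriptor (made to fail here). */
static void *alloc_table(void)      { return malloc(32); }
static void  free_table(void *t)    { free(t); }
static void *dma_map(void)          { return malloc(64); }
static void  dma_unmap(void *m)     { free(m); }
static int   build_descriptor(void) { return -5; /* simulate failure */ }

static int map_data(void)
{
	void *table, *mapping;
	int ret;

	table = alloc_table();
	if (!table)
		return -1;

	mapping = dma_map();
	if (!mapping) {
		ret = -1;
		goto out_free_table;	/* only the table exists so far */
	}

	ret = build_descriptor();
	if (ret)
		goto out_unmap;		/* the unwind step the old code skipped */

	return 0;

out_unmap:
	dma_unmap(mapping);		/* mirrors ib_dma_unmap_sg() */
out_free_table:
	free_table(table);		/* mirrors sg_free_table_chained() */
	return ret;
}

int main(void)
{
	printf("map_data() -> %d\n", map_data());
	return 0;
}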

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/host/rdma.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 2aba038..7cd4199 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1189,21 +1189,38 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 	count = ib_dma_map_sg(ibdev, req->sg_table.sgl, req->nents,
 		    rq_data_dir(rq) == WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 	if (unlikely(count <= 0)) {
-		sg_free_table_chained(&req->sg_table, true);
-		return -EIO;
+		ret = -EIO;
+		goto out_free_table;
 	}
 
 	if (count == 1) {
 		if (rq_data_dir(rq) == WRITE && nvme_rdma_queue_idx(queue) &&
 		    blk_rq_payload_bytes(rq) <=
-				nvme_rdma_inline_data_size(queue))
-			return nvme_rdma_map_sg_inline(queue, req, c);
+				nvme_rdma_inline_data_size(queue)) {
+			ret = nvme_rdma_map_sg_inline(queue, req, c);
+			goto out;
+		}
 
-		if (dev->pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY)
-			return nvme_rdma_map_sg_single(queue, req, c);
+		if (dev->pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY) {
+			ret = nvme_rdma_map_sg_single(queue, req, c);
+			goto out;
+		}
 	}
 
-	return nvme_rdma_map_sg_fr(queue, req, c, count);
+	ret = nvme_rdma_map_sg_fr(queue, req, c, count);
+out:
+	if (unlikely(ret))
+		goto out_unmap_sg;
+
+	return 0;
+
+out_unmap_sg:
+	ib_dma_unmap_sg(ibdev, req->sg_table.sgl,
+			req->nents, rq_data_dir(rq) ==
+			WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
+out_free_table:
+	sg_free_table_chained(&req->sg_table, true);
+	return ret;
 }
 
 static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
-- 
1.8.3.1


* [PATCH 1/1] nvme-rdma: fix error flow during mapping request data
  2018-06-10 13:58 [PATCH 1/1] nvme-rdma: fix error flow during mapping request data Max Gurtovoy
@ 2018-06-11  6:26 ` Christoph Hellwig
  2018-06-11 14:27 ` Christoph Hellwig
  1 sibling, 0 replies; 3+ messages in thread
From: Christoph Hellwig @ 2018-06-11  6:26 UTC


On Sun, Jun 10, 2018 at 04:58:29PM +0300, Max Gurtovoy wrote:
> After dma mapping the sgl, we map it to an nvme sgl descriptor. In case
> of a failure during that last mapping, we never dma unmap the sgl.
> 
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>

Looks fine,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* [PATCH 1/1] nvme-rdma: fix error flow during mapping request data
  2018-06-10 13:58 [PATCH 1/1] nvme-rdma: fix error flow during mapping request data Max Gurtovoy
  2018-06-11  6:26 ` Christoph Hellwig
@ 2018-06-11 14:27 ` Christoph Hellwig
  1 sibling, 0 replies; 3+ messages in thread
From: Christoph Hellwig @ 2018-06-11 14:27 UTC


Thanks,

applied to nvme-4.18.

