linux-nvme.lists.infradead.org archive mirror
* [RFC PATCH 0/5] nvme: reduce repetitive calls in fast path
From: Chaitanya Kulkarni @ 2020-07-06 23:15 UTC
  To: hch, james.smart, kbusch, sagi; +Cc: Chaitanya Kulkarni, linux-nvme

Hi Christoph/James/Keith/Sagi,

While reviewing another patch series I found that there are repetitive
calls to blk_rq_nr_phys_segments() in the fast path of the NVMe
transports (pci, rdma, tcp, fc, loop). We should avoid as many
repetitive checks in the fast path as possible.

This patch series reduces those calls to one per request, minimizing
the repeated special-payload check inside blk_rq_nr_phys_segments().
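
For reference, the helper in question is tiny; quoting it roughly from
include/linux/blkdev.h (from memory, so treat this as a sketch rather
than the exact tree state):

	static inline unsigned short blk_rq_nr_phys_segments(struct request *rq)
	{
		if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
			return 1;
		return rq->nr_phys_segments;
	}

The series simply evaluates this once per request and passes the cached
value down the call chain.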

P.S. I'm tagging this as an RFC since I have not tested the
rdma/tcp/fc parts; if we agree to take this in, I'll put more effort
into testing.

Regards,
Chaitanya

Chaitanya Kulkarni (5):
  nvme-pci: reduce blk_rq_nr_phys_segments calls
  nvme-rdma: reduce blk_rq_nr_phys_segments calls
  nvme-tcp: reduce blk_rq_nr_phys_segments calls
  nvme-fc: reduce blk_rq_nr_phys_segments calls
  nvme-loop: reduce blk_rq_nr_phys_segments calls

 drivers/nvme/host/fc.c     | 20 ++++++++++----------
 drivers/nvme/host/pci.c    | 18 ++++++++++--------
 drivers/nvme/host/rdma.c   |  8 ++++----
 drivers/nvme/host/tcp.c    | 10 +++++-----
 drivers/nvme/target/loop.c |  6 +++---
 5 files changed, 32 insertions(+), 30 deletions(-)

-- 
2.22.0



* [RFC PATCH 1/5] nvme-pci: reduce blk_rq_nr_phys_segments calls
From: Chaitanya Kulkarni @ 2020-07-06 23:15 UTC
  To: hch, james.smart, kbusch, sagi; +Cc: Chaitanya Kulkarni, linux-nvme

In the fast path blk_rq_nr_phys_segments() is called multiple times:
in nvme_queue_rq() (1), nvme_map_data() (2), and nvme_pci_use_sgls()
(1). blk_rq_nr_phys_segments() contains an if check for the special
payload, so the same check is repeated every time we call the function
in the fast path.

To minimize this repetition, this patch reduces the number of calls to
a single one in the parent function nvme_queue_rq(), placed after
nvme_setup_cmd() so that we have the right nseg count (covering
write-zeroes via the discard quirk), and adjusts the submission path
accordingly.
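
After this change the submission path evaluates the segment count once
and hands it down. A rough sketch of the resulting flow (the exact
hunks are below):

	nseg = blk_rq_nr_phys_segments(req);	/* single evaluation */
	if (nseg) {
		/* nseg is reused inside instead of re-calling the helper */
		ret = nvme_map_data(dev, req, &cmnd, nseg);
		...
	}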

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/pci.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c283e8dbfb86..07ac28d7d66c 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -495,10 +495,10 @@ static void **nvme_pci_iod_list(struct request *req)
 	return (void **)(iod->sg + blk_rq_nr_phys_segments(req));
 }
 
-static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req)
+static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req,
+		unsigned short nseg)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-	int nseg = blk_rq_nr_phys_segments(req);
 	unsigned int avg_seg_size;
 
 	if (nseg == 0)
@@ -787,13 +787,13 @@ static blk_status_t nvme_setup_sgl_simple(struct nvme_dev *dev,
 }
 
 static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
-		struct nvme_command *cmnd)
+		struct nvme_command *cmnd, unsigned short nseg)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	blk_status_t ret = BLK_STS_RESOURCE;
 	int nr_mapped;
 
-	if (blk_rq_nr_phys_segments(req) == 1) {
+	if (nseg == 1) {
 		struct bio_vec bv = req_bvec(req);
 
 		if (!is_pci_p2pdma_page(bv.bv_page)) {
@@ -812,7 +812,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
 	iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
 	if (!iod->sg)
 		return BLK_STS_RESOURCE;
-	sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
+	sg_init_table(iod->sg, nseg);
 	iod->nents = blk_rq_map_sg(req->q, req, iod->sg);
 	if (!iod->nents)
 		goto out;
@@ -826,7 +826,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
 	if (!nr_mapped)
 		goto out;
 
-	iod->use_sgl = nvme_pci_use_sgls(dev, req);
+	iod->use_sgl = nvme_pci_use_sgls(dev, req, nseg);
 	if (iod->use_sgl)
 		ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped);
 	else
@@ -860,6 +860,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct nvme_queue *nvmeq = hctx->driver_data;
 	struct nvme_dev *dev = nvmeq->dev;
 	struct request *req = bd->rq;
+	unsigned short nseg;
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_command cmnd;
 	blk_status_t ret;
@@ -879,8 +880,9 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (ret)
 		return ret;
 
-	if (blk_rq_nr_phys_segments(req)) {
-		ret = nvme_map_data(dev, req, &cmnd);
+	nseg = blk_rq_nr_phys_segments(req);
+	if (nseg) {
+		ret = nvme_map_data(dev, req, &cmnd, nseg);
 		if (ret)
 			goto out_free_cmd;
 	}
-- 
2.22.0



* [RFC PATCH 2/5 COMPILE TESTED] nvme-rdma: reduce blk_rq_nr_phys_segments calls
From: Chaitanya Kulkarni @ 2020-07-06 23:15 UTC
  To: hch, james.smart, kbusch, sagi; +Cc: Chaitanya Kulkarni, linux-nvme

In the fast path blk_rq_nr_phys_segments() is called twice for the
RDMA transport. blk_rq_nr_phys_segments() contains an if check for the
special payload, so the same check is repeated every time we call the
function in the fast path.

To minimize this repetition, this patch reduces the number of calls to
one and adjusts the submission path accordingly.
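
For context, the cached nseg is consumed exactly once more after the
NULL-SGL check, in the sg table allocation. Quoting the
sg_alloc_table_chained() prototype from memory (treat as a sketch):

	int sg_alloc_table_chained(struct sg_table *table, int nents,
			struct scatterlist *first_chunk,
			unsigned nents_first_chunk);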

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/rdma.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 13506a87a444..736e5741dbdc 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1444,6 +1444,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 		struct request *rq, struct nvme_command *c)
 {
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	unsigned short nseg = blk_rq_nr_phys_segments(rq);
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
 	int pi_count = 0;
@@ -1454,13 +1455,12 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 
 	c->common.flags |= NVME_CMD_SGL_METABUF;
 
-	if (!blk_rq_nr_phys_segments(rq))
+	if (!nseg)
 		return nvme_rdma_set_sg_null(c);
 
 	req->data_sgl.sg_table.sgl = (struct scatterlist *)(req + 1);
-	ret = sg_alloc_table_chained(&req->data_sgl.sg_table,
-			blk_rq_nr_phys_segments(rq), req->data_sgl.sg_table.sgl,
-			NVME_INLINE_SG_CNT);
+	ret = sg_alloc_table_chained(&req->data_sgl.sg_table, nseg,
+			req->data_sgl.sg_table.sgl, NVME_INLINE_SG_CNT);
 	if (ret)
 		return -ENOMEM;
 
-- 
2.22.0



* [RFC PATCH 3/5 COMPILE TESTED] nvme-tcp: reduce blk_rq_nr_phys_segments calls
From: Chaitanya Kulkarni @ 2020-07-06 23:15 UTC
  To: hch, james.smart, kbusch, sagi; +Cc: Chaitanya Kulkarni, linux-nvme

In the fast path blk_rq_nr_phys_segments() is called twice for the
TCP transport. blk_rq_nr_phys_segments() contains an if check for the
special payload, so the same check is repeated every time we call the
function in the fast path.

To minimize this repetition, this patch reduces the number of calls to
one and adjusts the submission path accordingly.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/tcp.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 860d7ddc2eee..ca0f8f17ef29 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2173,7 +2173,7 @@ nvme_tcp_timeout(struct request *rq, bool reserved)
 }
 
 static blk_status_t nvme_tcp_map_data(struct nvme_tcp_queue *queue,
-			struct request *rq)
+			struct request *rq, unsigned short nseg)
 {
 	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_tcp_cmd_pdu *pdu = req->pdu;
@@ -2181,7 +2181,7 @@ static blk_status_t nvme_tcp_map_data(struct nvme_tcp_queue *queue,
 
 	c->common.flags |= NVME_CMD_SGL_METABUF;
 
-	if (!blk_rq_nr_phys_segments(rq))
+	if (!nseg)
 		nvme_tcp_set_sg_null(c);
 	else if (rq_data_dir(rq) == WRITE &&
 	    req->data_len <= nvme_tcp_inline_data_size(queue))
@@ -2196,6 +2196,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 		struct request *rq)
 {
 	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+	unsigned short nseg = blk_rq_nr_phys_segments(rq);
 	struct nvme_tcp_cmd_pdu *pdu = req->pdu;
 	struct nvme_tcp_queue *queue = req->queue;
 	u8 hdgst = nvme_tcp_hdgst_len(queue), ddgst = 0;
@@ -2210,8 +2211,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 	req->data_sent = 0;
 	req->pdu_len = 0;
 	req->pdu_sent = 0;
-	req->data_len = blk_rq_nr_phys_segments(rq) ?
-				blk_rq_payload_bytes(rq) : 0;
+	req->data_len = nseg ? blk_rq_payload_bytes(rq) : 0;
 	req->curr_bio = rq->bio;
 
 	if (rq_data_dir(rq) == WRITE &&
@@ -2233,7 +2233,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 	pdu->hdr.plen =
 		cpu_to_le32(pdu->hdr.hlen + hdgst + req->pdu_len + ddgst);
 
-	ret = nvme_tcp_map_data(queue, rq);
+	ret = nvme_tcp_map_data(queue, rq, nseg);
 	if (unlikely(ret)) {
 		nvme_cleanup_cmd(rq);
 		dev_err(queue->ctrl->ctrl.device,
-- 
2.22.0



* [RFC PATCH 4/5 COMPILE TESTED] nvme-fc: reduce blk_rq_nr_phys_segments calls
From: Chaitanya Kulkarni @ 2020-07-06 23:15 UTC
  To: hch, james.smart, kbusch, sagi; +Cc: Chaitanya Kulkarni, linux-nvme

In the fast path blk_rq_nr_phys_segments() is called multiple times for
the FC transport: in nvme_fc_queue_rq() (1) and nvme_fc_map_data() (3).
blk_rq_nr_phys_segments() contains an if check for the special payload,
so the same check is repeated every time we call the function in the
fast path.

To minimize this repetition, this patch reduces the number of calls to
one and adjusts the submission path accordingly.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/host/fc.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index e999a8c4b7e8..7f627890e3de 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2462,25 +2462,24 @@ nvme_fc_timeout(struct request *rq, bool reserved)
 
 static int
 nvme_fc_map_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
-		struct nvme_fc_fcp_op *op)
+		struct nvme_fc_fcp_op *op, unsigned short nseg)
 {
 	struct nvmefc_fcp_req *freq = &op->fcp_req;
 	int ret;
 
 	freq->sg_cnt = 0;
 
-	if (!blk_rq_nr_phys_segments(rq))
+	if (!nseg)
 		return 0;
 
 	freq->sg_table.sgl = freq->first_sgl;
-	ret = sg_alloc_table_chained(&freq->sg_table,
-			blk_rq_nr_phys_segments(rq), freq->sg_table.sgl,
+	ret = sg_alloc_table_chained(&freq->sg_table, nseg, freq->sg_table.sgl,
 			NVME_INLINE_SG_CNT);
 	if (ret)
 		return -ENOMEM;
 
 	op->nents = blk_rq_map_sg(rq->q, rq, freq->sg_table.sgl);
-	WARN_ON(op->nents > blk_rq_nr_phys_segments(rq));
+	WARN_ON(op->nents > nseg);
 	freq->sg_cnt = fc_dma_map_sg(ctrl->lport->dev, freq->sg_table.sgl,
 				op->nents, rq_dma_dir(rq));
 	if (unlikely(freq->sg_cnt <= 0)) {
@@ -2538,7 +2537,7 @@ nvme_fc_unmap_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
 static blk_status_t
 nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
 	struct nvme_fc_fcp_op *op, u32 data_len,
-	enum nvmefc_fcp_datadir	io_dir)
+	enum nvmefc_fcp_datadir	io_dir, unsigned short nseg)
 {
 	struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
 	struct nvme_command *sqe = &cmdiu->sqe;
@@ -2595,7 +2594,7 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
 	sqe->rw.dptr.sgl.addr = 0;
 
 	if (!(op->flags & FCOP_FLAGS_AEN)) {
-		ret = nvme_fc_map_data(ctrl, op->rq, op);
+		ret = nvme_fc_map_data(ctrl, op->rq, op, nseg);
 		if (ret < 0) {
 			nvme_cleanup_cmd(op->rq);
 			nvme_fc_ctrl_put(ctrl);
@@ -2659,6 +2658,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct nvme_fc_queue *queue = hctx->driver_data;
 	struct nvme_fc_ctrl *ctrl = queue->ctrl;
 	struct request *rq = bd->rq;
+	unsigned short nseg = blk_rq_nr_phys_segments(rq);
 	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
 	struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
 	struct nvme_command *sqe = &cmdiu->sqe;
@@ -2683,7 +2683,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	 * more physical segments in the sg list. If there is no
 	 * physical segments, there is no payload.
 	 */
-	if (blk_rq_nr_phys_segments(rq)) {
+	if (nseg) {
 		data_len = blk_rq_payload_bytes(rq);
 		io_dir = ((rq_data_dir(rq) == WRITE) ?
 					NVMEFC_FCP_WRITE : NVMEFC_FCP_READ);
@@ -2693,7 +2693,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	}
 
 
-	return nvme_fc_start_fcp_op(ctrl, queue, op, data_len, io_dir);
+	return nvme_fc_start_fcp_op(ctrl, queue, op, data_len, io_dir, nseg);
 }
 
 static void
@@ -2709,7 +2709,7 @@ nvme_fc_submit_async_event(struct nvme_ctrl *arg)
 	aen_op = &ctrl->aen_ops[0];
 
 	ret = nvme_fc_start_fcp_op(ctrl, aen_op->queue, aen_op, 0,
-					NVMEFC_FCP_NODATA);
+					NVMEFC_FCP_NODATA, 0);
 	if (ret)
 		dev_err(ctrl->ctrl.device,
 			"failed async event work\n");
-- 
2.22.0



* [RFC PATCH 5/5] nvme-loop: reduce blk_rq_nr_phys_segments calls
From: Chaitanya Kulkarni @ 2020-07-06 23:15 UTC
  To: hch, james.smart, kbusch, sagi; +Cc: Chaitanya Kulkarni, linux-nvme

In the fast path blk_rq_nr_phys_segments() is called twice for
nvme-loop. blk_rq_nr_phys_segments() contains an if check for the
special payload, so the same check is repeated every time we call the
function in the fast path.

To minimize this repetition, this patch reduces the number of calls to
one and adjusts the submission path accordingly.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
 drivers/nvme/target/loop.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 6e8d14a8227c..b5a93b9db783 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -130,6 +130,7 @@ static void nvme_loop_execute_work(struct work_struct *work)
 static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		const struct blk_mq_queue_data *bd)
 {
+	unsigned short nseg = blk_rq_nr_phys_segments(bd->rq);
 	struct nvme_ns *ns = hctx->queue->queuedata;
 	struct nvme_loop_queue *queue = hctx->driver_data;
 	struct request *req = bd->rq;
@@ -151,10 +152,9 @@ static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 			&queue->nvme_sq, &nvme_loop_ops))
 		return BLK_STS_OK;
 
-	if (blk_rq_nr_phys_segments(req)) {
+	if (nseg) {
 		iod->sg_table.sgl = iod->first_sgl;
-		if (sg_alloc_table_chained(&iod->sg_table,
-				blk_rq_nr_phys_segments(req),
+		if (sg_alloc_table_chained(&iod->sg_table, nseg,
 				iod->sg_table.sgl, NVME_INLINE_SG_CNT)) {
 			nvme_cleanup_cmd(req);
 			return BLK_STS_RESOURCE;
-- 
2.22.0



* Re: [RFC PATCH 0/5] nvme: reduce repetitive calls in fast path
From: Christoph Hellwig @ 2020-07-07  7:11 UTC
  To: Chaitanya Kulkarni; +Cc: kbusch, sagi, hch, linux-nvme, james.smart

On Mon, Jul 06, 2020 at 04:15:19PM -0700, Chaitanya Kulkarni wrote:
> Hi Christoph/James/Keith/Sagi,
> 
> While reviewing another patch series I found that there are repetitive
> calls to blk_rq_nr_phys_segments() in the fast path of the NVMe
> transports (pci, rdma, tcp, fc, loop). We should avoid as many
> repetitive checks in the fast path as possible.
> 
> This patch series reduces those calls to one per request, minimizing
> the repeated special-payload check inside blk_rq_nr_phys_segments().
> 
> P.S. I'm tagging this as an RFC since I have not tested the
> rdma/tcp/fc parts; if we agree to take this in, I'll put more effort
> into testing.

blk_rq_nr_phys_segments is pretty trivial - just a branch and struct
field access.  Does this make any kind of difference?


* Re: [RFC PATCH 0/5] nvme: reduce repetitive calls in fast path
From: Chaitanya Kulkarni @ 2020-07-07 18:10 UTC
  To: Christoph Hellwig; +Cc: kbusch, james.smart, linux-nvme, sagi

On 7/7/20 00:11, Christoph Hellwig wrote:
> blk_rq_nr_phys_segments is pretty trivial - just a branch and struct
> field access.  Does this make any kind of difference?

I've not seen any measurable difference. At the code level, the host
side adds the same check multiple times (fc(3)/rdma(1)/tcp(1)), and if
the device on the target side is an NVMe controller (nvme-pci with SGL:
3 additional checks), the same check gets repeated on both the host and
pci sides of the host-target fast path.

Having the same check run multiple times doesn't bring us anything
either, since the value of blk_rq_nr_phys_segments() does not change
once we set up the command in nvme_setup_cmd(), unless I'm missing
something.
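
A minimal illustration of why caching is safe here, assuming the helper
as quoted in the cover letter: both inputs to its check are fixed by
the time nvme_setup_cmd() returns. For example, write-zeroes with the
discard quirk sets RQF_SPECIAL_PAYLOAD during command setup and never
clears it in the submission path:

	/* after nvme_setup_cmd(ns, rq, &cmd) has succeeded: */
	unsigned short nseg = blk_rq_nr_phys_segments(rq);
	/*
	 * rq->rq_flags and rq->nr_phys_segments are stable from here on,
	 * so any later call would return the same value as nseg.
	 */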

