* [PATCHSET v3 0/4] Add support for list issue
@ 2021-12-15 16:24 Jens Axboe
  2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
                   ` (3 more replies)
  0 siblings, 4 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-15 16:24 UTC (permalink / raw)
  To: io-uring, linux-nvme

Hi,

With the support in 5.16-rc1 for allocating and completing batches of
IO, the one missing piece is passing down a list of requests for issue.
Drivers can take advantage of this by defining an mq_ops->queue_rqs()
hook.

This implements it for NVMe, allowing multiple commands to be copied in
one swoop.
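
For reference, a driver-side hook has roughly the shape below. This is
only a minimal sketch using the existing rq_list helpers (the function
name is made up); the actual NVMe implementation is in patch 4/4:

	static void example_queue_rqs(struct request **rqlist)
	{
		struct request *req;

		while ((req = rq_list_pop(rqlist)) != NULL) {
			/* map and copy the command for 'req' into the hardware queue */
		}
		/* ring the doorbell (or equivalent) once for the whole batch */
	}

Anything the driver leaves on the list is issued one-by-one by the block
layer on return.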

This is good for around a 500K IOPS/core improvement in my testing,
which is around a 5-6% improvement in efficiency.

No changes since v2 outside of a comment addition.

Changes since v2:
- Add comment on why shared tags are currently bypassed
- Add reviewed-by's

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 59+ messages in thread

* [PATCH 1/4] block: add mq_ops->queue_rqs hook
  2021-12-15 16:24 [PATCHSET v3 0/4] Add support for list issue Jens Axboe
@ 2021-12-15 16:24 ` Jens Axboe
  2021-12-16  9:01   ` Christoph Hellwig
  2021-12-20 20:36   ` Keith Busch
  2021-12-15 16:24 ` [PATCH 2/4] nvme: split command copy into a helper Jens Axboe
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-15 16:24 UTC (permalink / raw)
  To: io-uring, linux-nvme; +Cc: Jens Axboe

If we have a list of requests in our plug list, send it to the driver in
one go, if possible. The driver must set mq_ops->queue_rqs() to support
this; if it doesn't, the usual one-by-one path is used.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c         | 26 +++++++++++++++++++++++---
 include/linux/blk-mq.h |  8 ++++++++
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e02e7017db03..f24394cb2004 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2512,6 +2512,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
 	struct blk_mq_hw_ctx *this_hctx;
 	struct blk_mq_ctx *this_ctx;
+	struct request *rq;
 	unsigned int depth;
 	LIST_HEAD(list);
 
@@ -2520,7 +2521,28 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	plug->rq_count = 0;
 
 	if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
-		struct request_queue *q = rq_list_peek(&plug->mq_list)->q;
+		struct request_queue *q;
+
+		rq = rq_list_peek(&plug->mq_list);
+		q = rq->q;
+
+		/*
+		 * Peek first request and see if we have a ->queue_rqs() hook.
+		 * If we do, we can dispatch the whole plug list in one go. We
+		 * already know at this point that all requests belong to the
+		 * same queue, caller must ensure that's the case.
+		 *
+		 * Since we pass off the full list to the driver at this point,
+		 * we do not increment the active request count for the queue.
+		 * Bypass shared tags for now because of that.
+		 */
+		if (q->mq_ops->queue_rqs &&
+		    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
+			blk_mq_run_dispatch_ops(q,
+				q->mq_ops->queue_rqs(&plug->mq_list));
+			if (rq_list_empty(plug->mq_list))
+				return;
+		}
 
 		blk_mq_run_dispatch_ops(q,
 				blk_mq_plug_issue_direct(plug, false));
@@ -2532,8 +2554,6 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	this_ctx = NULL;
 	depth = 0;
 	do {
-		struct request *rq;
-
 		rq = rq_list_pop(&plug->mq_list);
 
 		if (!this_hctx) {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 6f858e05781e..1e1cd9cfbbea 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -493,6 +493,14 @@ struct blk_mq_ops {
 	 */
 	void (*commit_rqs)(struct blk_mq_hw_ctx *);
 
+	/**
+	 * @queue_rqs: Queue a list of new requests. Driver is guaranteed
+	 * that each request belongs to the same queue. If the driver doesn't
+	 * empty the @rqlist completely, then the rest will be queued
+	 * individually by the block layer upon return.
+	 */
+	void (*queue_rqs)(struct request **rqlist);
+
 	/**
 	 * @get_budget: Reserve budget before queue request, once .queue_rq is
 	 * run, it is driver's responsibility to release the
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 2/4] nvme: split command copy into a helper
  2021-12-15 16:24 [PATCHSET v3 0/4] Add support for list issue Jens Axboe
  2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
@ 2021-12-15 16:24 ` Jens Axboe
  2021-12-16  9:01   ` Christoph Hellwig
  2021-12-16 12:17   ` Max Gurtovoy
  2021-12-15 16:24 ` [PATCH 3/4] nvme: separate command prep and issue Jens Axboe
  2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
  3 siblings, 2 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-15 16:24 UTC (permalink / raw)
  To: io-uring, linux-nvme; +Cc: Jens Axboe, Chaitanya Kulkarni, Hannes Reinecke

We'll need it for batched submit as well.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8637538f3fd5..09ea21f75439 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -500,6 +500,15 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
 	nvmeq->last_sq_tail = nvmeq->sq_tail;
 }
 
+static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
+				    struct nvme_command *cmd)
+{
+	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes), cmd,
+		sizeof(*cmd));
+	if (++nvmeq->sq_tail == nvmeq->q_depth)
+		nvmeq->sq_tail = 0;
+}
+
 /**
  * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
  * @nvmeq: The queue to use
@@ -510,10 +519,7 @@ static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
 			    bool write_sq)
 {
 	spin_lock(&nvmeq->sq_lock);
-	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
-	       cmd, sizeof(*cmd));
-	if (++nvmeq->sq_tail == nvmeq->q_depth)
-		nvmeq->sq_tail = 0;
+	nvme_sq_copy_cmd(nvmeq, cmd);
 	nvme_write_sq_db(nvmeq, write_sq);
 	spin_unlock(&nvmeq->sq_lock);
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 3/4] nvme: separate command prep and issue
  2021-12-15 16:24 [PATCHSET v3 0/4] Add support for list issue Jens Axboe
  2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
  2021-12-15 16:24 ` [PATCH 2/4] nvme: split command copy into a helper Jens Axboe
@ 2021-12-15 16:24 ` Jens Axboe
  2021-12-16  9:02   ` Christoph Hellwig
  2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
  3 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-15 16:24 UTC (permalink / raw)
  To: io-uring, linux-nvme; +Cc: Jens Axboe, Hannes Reinecke

Add an nvme_prep_rq() helper to set up a command, and adapt
nvme_queue_rq() to use it.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 57 ++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 09ea21f75439..6be6b1ab4285 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -918,52 +918,32 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
 	return BLK_STS_OK;
 }
 
-/*
- * NOTE: ns is NULL when called on the admin queue.
- */
-static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
-			 const struct blk_mq_queue_data *bd)
+static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
 {
-	struct nvme_ns *ns = hctx->queue->queuedata;
-	struct nvme_queue *nvmeq = hctx->driver_data;
-	struct nvme_dev *dev = nvmeq->dev;
-	struct request *req = bd->rq;
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-	struct nvme_command *cmnd = &iod->cmd;
 	blk_status_t ret;
 
 	iod->aborted = 0;
 	iod->npages = -1;
 	iod->nents = 0;
 
-	/*
-	 * We should not need to do this, but we're still using this to
-	 * ensure we can drain requests on a dying queue.
-	 */
-	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
-		return BLK_STS_IOERR;
-
-	if (!nvme_check_ready(&dev->ctrl, req, true))
-		return nvme_fail_nonready_command(&dev->ctrl, req);
-
-	ret = nvme_setup_cmd(ns, req);
+	ret = nvme_setup_cmd(req->q->queuedata, req);
 	if (ret)
 		return ret;
 
 	if (blk_rq_nr_phys_segments(req)) {
-		ret = nvme_map_data(dev, req, cmnd);
+		ret = nvme_map_data(dev, req, &iod->cmd);
 		if (ret)
 			goto out_free_cmd;
 	}
 
 	if (blk_integrity_rq(req)) {
-		ret = nvme_map_metadata(dev, req, cmnd);
+		ret = nvme_map_metadata(dev, req, &iod->cmd);
 		if (ret)
 			goto out_unmap_data;
 	}
 
 	blk_mq_start_request(req);
-	nvme_submit_cmd(nvmeq, cmnd, bd->last);
 	return BLK_STS_OK;
 out_unmap_data:
 	nvme_unmap_data(dev, req);
@@ -972,6 +952,35 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+/*
+ * NOTE: ns is NULL when called on the admin queue.
+ */
+static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+			 const struct blk_mq_queue_data *bd)
+{
+	struct nvme_queue *nvmeq = hctx->driver_data;
+	struct nvme_dev *dev = nvmeq->dev;
+	struct request *req = bd->rq;
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	blk_status_t ret;
+
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return BLK_STS_IOERR;
+
+	if (unlikely(!nvme_check_ready(&dev->ctrl, req, true)))
+		return nvme_fail_nonready_command(&dev->ctrl, req);
+
+	ret = nvme_prep_rq(dev, req);
+	if (unlikely(ret))
+		return ret;
+	nvme_submit_cmd(nvmeq, &iod->cmd, bd->last);
+	return BLK_STS_OK;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-15 16:24 [PATCHSET v3 0/4] Add support for list issue Jens Axboe
                   ` (2 preceding siblings ...)
  2021-12-15 16:24 ` [PATCH 3/4] nvme: separate command prep and issue Jens Axboe
@ 2021-12-15 16:24 ` Jens Axboe
  2021-12-15 17:29   ` Keith Busch
                     ` (2 more replies)
  3 siblings, 3 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-15 16:24 UTC (permalink / raw)
  To: io-uring, linux-nvme; +Cc: Jens Axboe, Hannes Reinecke

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request.

If errors are encountered, the failing requests are left on the passed-in
list, and the block layer will then handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6be6b1ab4285..197aa45ef7ef 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
+				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
+		if (++nvmeq->sq_tail == nvmeq->q_depth)
+			nvmeq->sq_tail = 0;
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+{
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return false;
+	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+		return false;
+
+	req->mq_hctx->tags->rqs[req->tag] = req;
+	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+		if (!nvme_prep_rq_batch(nvmeq, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1678,6 +1738,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
@ 2021-12-15 17:29   ` Keith Busch
  2021-12-15 20:27     ` Jens Axboe
  2021-12-16  9:08   ` Christoph Hellwig
  2021-12-16 13:02   ` Max Gurtovoy
  2 siblings, 1 reply; 59+ messages in thread
From: Keith Busch @ 2021-12-15 17:29 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-nvme, Hannes Reinecke

On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
> +static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
> +{
> +	/*
> +	 * We should not need to do this, but we're still using this to
> +	 * ensure we can drain requests on a dying queue.
> +	 */
> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
> +		return false;

The patch looks good:

Reviewed-by: Keith Busch <kbusch@kernel.org>

Now a side comment on the above snippet:

I was going to mention in v2 that you shouldn't need to do this for each
request since the queue enabling/disabling only happens while quiesced,
so the state doesn't change once you start a batch. But I realized
multiple hctx's can be in a single batch, so we have to check each of
them instead of just once. :(

I tried to remove this check entirely ("We should not need to do this",
after all), but that's not looking readily possible without just
creating an equivalent check in blk-mq: we can't end a particular
request in failure without draining whatever list it may be linked
within, and we don't know what list it's in when iterating allocated
hctx tags.

Do you happen to have any thoughts on how we could remove this check?
The API I was thinking of is something like "blk_mq_hctx_dead()" in
order to fail pending requests on that hctx without sending them to the
low-level driver so that it wouldn't need these kinds of per-IO checks.
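
To be clear about the shape I mean, roughly (hypothetical only, no such
helper exists today and the name/signature are just illustrative):

	/* hypothetical: fail requests still pending against a dead hctx in the
	 * core, so drivers wouldn't need per-IO NVMEQ_ENABLED-style checks */
	void blk_mq_hctx_dead(struct blk_mq_hw_ctx *hctx)
	{
		/* iterate requests allocated on this hctx and end them with
		 * BLK_STS_IOERR before they ever reach ->queue_rq()/->queue_rqs()
		 */
	}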

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-15 17:29   ` Keith Busch
@ 2021-12-15 20:27     ` Jens Axboe
  0 siblings, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-15 20:27 UTC (permalink / raw)
  To: Keith Busch; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/15/21 10:29 AM, Keith Busch wrote:
> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>> +static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
>> +{
>> +	/*
>> +	 * We should not need to do this, but we're still using this to
>> +	 * ensure we can drain requests on a dying queue.
>> +	 */
>> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
>> +		return false;
> 
> The patch looks good:
> 
> Reviewed-by: Keith Busch <kbusch@kernel.org>

Thanks Keith!

> Now a side comment on the above snippet:
> 
> I was going to mention in v2 that you shouldn't need to do this for each
> request since the queue enabling/disabling only happens while quiesced,
> so the state doesn't change once you start a batch. But I realized
> multiple hctx's can be in a single batch, so we have to check each of
> them instead of just once. :(
> 
> I tried to remove this check entirely ("We should not need to do this",
> after all), but that's not looking readily possible without just
> creating an equivalent check in blk-mq: we can't end a particular
> request in failure without draining whatever list it may be linked
> within, and we don't know what list it's in when iterating allocated
> hctx tags.
> 
> Do you happen to have any thoughts on how we could remove this check?
> The API I was thinking of is something like "blk_mq_hctx_dead()" in
> order to fail pending requests on that hctx without sending them to the
> low-level driver so that it wouldn't need these kinds of per-IO checks.

That's a good question, and something I thought about as well while
doing the change. The req-based test following it is a bit annoying as
well, but probably harder to get rid of. I didn't pursue this one in
particular, as the single test_bit() is pretty cheap.

Care to take a stab at doing a blk_mq_hctx_dead() addition?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 1/4] block: add mq_ops->queue_rqs hook
  2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
@ 2021-12-16  9:01   ` Christoph Hellwig
  2021-12-20 20:36   ` Keith Busch
  1 sibling, 0 replies; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-16  9:01 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-nvme

This whole series seems to miss a linux-block cc..

On Wed, Dec 15, 2021 at 09:24:18AM -0700, Jens Axboe wrote:
> If we have a list of requests in our plug list, send it to the driver in
> one go, if possible. The driver must set mq_ops->queue_rqs() to support
> this, if not the usual one-by-one path is used.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  block/blk-mq.c         | 26 +++++++++++++++++++++++---
>  include/linux/blk-mq.h |  8 ++++++++
>  2 files changed, 31 insertions(+), 3 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e02e7017db03..f24394cb2004 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2512,6 +2512,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>  {
>  	struct blk_mq_hw_ctx *this_hctx;
>  	struct blk_mq_ctx *this_ctx;
> +	struct request *rq;
>  	unsigned int depth;
>  	LIST_HEAD(list);
>  
> @@ -2520,7 +2521,28 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>  	plug->rq_count = 0;
>  
>  	if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
> +		struct request_queue *q;
> +
> +		rq = rq_list_peek(&plug->mq_list);
> +		q = rq->q;

Nit: I'd just drop the q local variable as it is rather pointless.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 2/4] nvme: split command copy into a helper
  2021-12-15 16:24 ` [PATCH 2/4] nvme: split command copy into a helper Jens Axboe
@ 2021-12-16  9:01   ` Christoph Hellwig
  2021-12-16 12:17   ` Max Gurtovoy
  1 sibling, 0 replies; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-16  9:01 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-nvme, Chaitanya Kulkarni, Hannes Reinecke


Just like the two times before: NAK.  Please remove nvme_submit_cmd and
open code it in the two callers.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 3/4] nvme: separate command prep and issue
  2021-12-15 16:24 ` [PATCH 3/4] nvme: separate command prep and issue Jens Axboe
@ 2021-12-16  9:02   ` Christoph Hellwig
  0 siblings, 0 replies; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-16  9:02 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-nvme, Hannes Reinecke

On Wed, Dec 15, 2021 at 09:24:20AM -0700, Jens Axboe wrote:
> Add a nvme_prep_rq() helper to setup a command, and nvme_queue_rq() is
> adapted to use this helper.
> 
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
  2021-12-15 17:29   ` Keith Busch
@ 2021-12-16  9:08   ` Christoph Hellwig
  2021-12-16 13:06     ` Max Gurtovoy
  2021-12-16 15:45     ` Jens Axboe
  2021-12-16 13:02   ` Max Gurtovoy
  2 siblings, 2 replies; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-16  9:08 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-nvme, Hannes Reinecke

On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
> +	spin_lock(&nvmeq->sq_lock);
> +	while (!rq_list_empty(*rqlist)) {
> +		struct request *req = rq_list_pop(rqlist);
> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> +
> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
> +			nvmeq->sq_tail = 0;

So this doesn't even use the new helper added in patch 2?  I think this
should call nvme_sq_copy_cmd().
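
With the helper, the loop body above would reduce to something like the
following (just a sketch of the suggested change, not a final respin):

	while (!rq_list_empty(*rqlist)) {
		struct request *req = rq_list_pop(rqlist);
		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

		nvme_sq_copy_cmd(nvmeq, &iod->cmd);
	}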

The rest looks identical to the incremental patch I posted, so I guess
the performance degradation measured on the first try was a measurement
error?

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 2/4] nvme: split command copy into a helper
  2021-12-15 16:24 ` [PATCH 2/4] nvme: split command copy into a helper Jens Axboe
  2021-12-16  9:01   ` Christoph Hellwig
@ 2021-12-16 12:17   ` Max Gurtovoy
  1 sibling, 0 replies; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 12:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-nvme; +Cc: Chaitanya Kulkarni, Hannes Reinecke


On 12/15/2021 6:24 PM, Jens Axboe wrote:
> We'll need it for batched submit as well.
>
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   drivers/nvme/host/pci.c | 14 ++++++++++----
>   1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 8637538f3fd5..09ea21f75439 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -500,6 +500,15 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
>   	nvmeq->last_sq_tail = nvmeq->sq_tail;
>   }
>   
> +static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
> +				    struct nvme_command *cmd)
> +{
> +	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes), cmd,
> +		sizeof(*cmd));
> +	if (++nvmeq->sq_tail == nvmeq->q_depth)
> +		nvmeq->sq_tail = 0;
> +}
> +
>   /**
>    * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
>    * @nvmeq: The queue to use
> @@ -510,10 +519,7 @@ static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
>   			    bool write_sq)
>   {
>   	spin_lock(&nvmeq->sq_lock);
> -	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
> -	       cmd, sizeof(*cmd));
> -	if (++nvmeq->sq_tail == nvmeq->q_depth)
> -		nvmeq->sq_tail = 0;
> +	nvme_sq_copy_cmd(nvmeq, cmd);
>   	nvme_write_sq_db(nvmeq, write_sq);
>   	spin_unlock(&nvmeq->sq_lock);
>   }

Looks good,

Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
  2021-12-15 17:29   ` Keith Busch
  2021-12-16  9:08   ` Christoph Hellwig
@ 2021-12-16 13:02   ` Max Gurtovoy
  2021-12-16 15:59     ` Jens Axboe
  2 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 13:02 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-nvme; +Cc: Hannes Reinecke


On 12/15/2021 6:24 PM, Jens Axboe wrote:
> This enables the block layer to send us a full plug list of requests
> that need submitting. The block layer guarantees that they all belong
> to the same queue, but we do have to check the hardware queue mapping
> for each request.
>
> If errors are encountered, leave them in the passed in list. Then the
> block layer will handle them individually.
>
> This is good for about a 4% improvement in peak performance, taking us
> from 9.6M to 10M IOPS/core.
>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 6be6b1ab4285..197aa45ef7ef 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>   	return BLK_STS_OK;
>   }
>   
> +static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
> +{
> +	spin_lock(&nvmeq->sq_lock);
> +	while (!rq_list_empty(*rqlist)) {
> +		struct request *req = rq_list_pop(rqlist);
> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> +
> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
> +			nvmeq->sq_tail = 0;
> +	}
> +	nvme_write_sq_db(nvmeq, true);
> +	spin_unlock(&nvmeq->sq_lock);
> +}
> +
> +static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
> +{
> +	/*
> +	 * We should not need to do this, but we're still using this to
> +	 * ensure we can drain requests on a dying queue.
> +	 */
> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
> +		return false;
> +	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
> +		return false;
> +
> +	req->mq_hctx->tags->rqs[req->tag] = req;
> +	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
> +}
> +
> +static void nvme_queue_rqs(struct request **rqlist)
> +{
> +	struct request *req = rq_list_peek(rqlist), *prev = NULL;
> +	struct request *requeue_list = NULL;
> +
> +	do {
> +		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
> +
> +		if (!nvme_prep_rq_batch(nvmeq, req)) {
> +			/* detach 'req' and add to remainder list */
> +			if (prev)
> +				prev->rq_next = req->rq_next;
> +			rq_list_add(&requeue_list, req);
> +		} else {
> +			prev = req;
> +		}
> +
> +		req = rq_list_next(req);
> +		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
> +			/* detach rest of list, and submit */
> +			prev->rq_next = NULL;

if req == NULL and prev == NULL we'll get a NULL deref here.

I think this can happen in the first iteration.

Correct me if I'm wrong..

> +			nvme_submit_cmds(nvmeq, rqlist);
> +			*rqlist = req;
> +		}
> +	} while (req);
> +
> +	*rqlist = requeue_list;
> +}
> +
>   static __always_inline void nvme_pci_unmap_rq(struct request *req)
>   {
>   	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> @@ -1678,6 +1738,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
>   
>   static const struct blk_mq_ops nvme_mq_ops = {
>   	.queue_rq	= nvme_queue_rq,
> +	.queue_rqs	= nvme_queue_rqs,
>   	.complete	= nvme_pci_complete_rq,
>   	.commit_rqs	= nvme_commit_rqs,
>   	.init_hctx	= nvme_init_hctx,

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16  9:08   ` Christoph Hellwig
@ 2021-12-16 13:06     ` Max Gurtovoy
  2021-12-16 15:48       ` Jens Axboe
  2021-12-16 15:45     ` Jens Axboe
  1 sibling, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 13:06 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: io-uring, linux-nvme, Hannes Reinecke


On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>> +	spin_lock(&nvmeq->sq_lock);
>> +	while (!rq_list_empty(*rqlist)) {
>> +		struct request *req = rq_list_pop(rqlist);
>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>> +
>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>> +			nvmeq->sq_tail = 0;
> So this doesn't even use the new helper added in patch 2?  I think this
> should call nvme_sq_copy_cmd().

I also noticed that.

So we need to decide whether to open code it or use the helper function.

An inline helper sounds reasonable if you have 3 places that will use it.

> The rest looks identical to the incremental patch I posted, so I guess
> the performance degration measured on the first try was a measurement
> error?

Giving 1 dbr for a batch of N commands sounds like a good idea. Also for
an RDMA host.

But how do you moderate it? What is the batch_sz <--> time_to_wait
algorithm?


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16  9:08   ` Christoph Hellwig
  2021-12-16 13:06     ` Max Gurtovoy
@ 2021-12-16 15:45     ` Jens Axboe
  2021-12-16 16:15       ` Christoph Hellwig
  1 sibling, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 15:45 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 2:08 AM, Christoph Hellwig wrote:
> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>> +	spin_lock(&nvmeq->sq_lock);
>> +	while (!rq_list_empty(*rqlist)) {
>> +		struct request *req = rq_list_pop(rqlist);
>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>> +
>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>> +			nvmeq->sq_tail = 0;
> 
> So this doesn't even use the new helper added in patch 2?  I think this
> should call nvme_sq_copy_cmd().

But you NAK'ed that one? It definitely should use that helper, so I take it
you are fine with it then if we do it here too? That would make 3 call sites,
and I still do think the helper makes sense...

> The rest looks identical to the incremental patch I posted, so I guess
> the performance degration measured on the first try was a measurement
> error?

It may have been a measurement error, I'm honestly not quite sure. I
reshuffled and modified a few bits here and there, and verified the
end result. Wish I had a better answer, but...

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 13:06     ` Max Gurtovoy
@ 2021-12-16 15:48       ` Jens Axboe
  2021-12-16 16:00         ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 15:48 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 6:06 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>> +	spin_lock(&nvmeq->sq_lock);
>>> +	while (!rq_list_empty(*rqlist)) {
>>> +		struct request *req = rq_list_pop(rqlist);
>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>> +
>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>> +			nvmeq->sq_tail = 0;
>> So this doesn't even use the new helper added in patch 2?  I think this
>> should call nvme_sq_copy_cmd().
> 
> I also noticed that.
> 
> So need to decide if to open code it or use the helper function.
> 
> Inline helper sounds reasonable if you have 3 places that will use it.

Yes agree, that's been my stance too :-)

>> The rest looks identical to the incremental patch I posted, so I guess
>> the performance degration measured on the first try was a measurement
>> error?
> 
> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
> 
> But how do you moderate it ? what is the batch_sz <--> time_to_wait 
> algorithm ?

The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
in total. I do agree that if we ever made it much larger, then we might
want to cap it differently. But 32 seems like a pretty reasonable number
to get enough gain from the batching done in various areas, while still
not making it so large that we have a potential latency issue. That
batch count is already used consistently for other items too (like tag
allocation), so it's not specific to just this one case.
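
To spell out where the batching comes from: it's just the plug built up
on the submitter's stack, roughly like below (a sketch; the constants and
the flush path live in block/blk-mq.c):

	struct blk_plug plug;

	blk_start_plug(&plug);
	/* submit I/O; the plug collects up to BLK_MAX_REQUEST_COUNT requests */
	blk_finish_plug(&plug);
	/* flushing ends up in blk_mq_flush_plug_list(), which is where
	 * ->queue_rqs() gets called with the whole list */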

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 13:02   ` Max Gurtovoy
@ 2021-12-16 15:59     ` Jens Axboe
  2021-12-16 16:06       ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 15:59 UTC (permalink / raw)
  To: Max Gurtovoy, io-uring, linux-nvme; +Cc: Hannes Reinecke

On 12/16/21 6:02 AM, Max Gurtovoy wrote:
> 
> On 12/15/2021 6:24 PM, Jens Axboe wrote:
>> This enables the block layer to send us a full plug list of requests
>> that need submitting. The block layer guarantees that they all belong
>> to the same queue, but we do have to check the hardware queue mapping
>> for each request.
>>
>> If errors are encountered, leave them in the passed in list. Then the
>> block layer will handle them individually.
>>
>> This is good for about a 4% improvement in peak performance, taking us
>> from 9.6M to 10M IOPS/core.
>>
>> Reviewed-by: Hannes Reinecke <hare@suse.de>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>>   drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 61 insertions(+)
>>
>> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>> index 6be6b1ab4285..197aa45ef7ef 100644
>> --- a/drivers/nvme/host/pci.c
>> +++ b/drivers/nvme/host/pci.c
>> @@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>   	return BLK_STS_OK;
>>   }
>>   
>> +static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
>> +{
>> +	spin_lock(&nvmeq->sq_lock);
>> +	while (!rq_list_empty(*rqlist)) {
>> +		struct request *req = rq_list_pop(rqlist);
>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>> +
>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>> +			nvmeq->sq_tail = 0;
>> +	}
>> +	nvme_write_sq_db(nvmeq, true);
>> +	spin_unlock(&nvmeq->sq_lock);
>> +}
>> +
>> +static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
>> +{
>> +	/*
>> +	 * We should not need to do this, but we're still using this to
>> +	 * ensure we can drain requests on a dying queue.
>> +	 */
>> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
>> +		return false;
>> +	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
>> +		return false;
>> +
>> +	req->mq_hctx->tags->rqs[req->tag] = req;
>> +	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
>> +}
>> +
>> +static void nvme_queue_rqs(struct request **rqlist)
>> +{
>> +	struct request *req = rq_list_peek(rqlist), *prev = NULL;
>> +	struct request *requeue_list = NULL;
>> +
>> +	do {
>> +		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
>> +
>> +		if (!nvme_prep_rq_batch(nvmeq, req)) {
>> +			/* detach 'req' and add to remainder list */
>> +			if (prev)
>> +				prev->rq_next = req->rq_next;
>> +			rq_list_add(&requeue_list, req);
>> +		} else {
>> +			prev = req;
>> +		}
>> +
>> +		req = rq_list_next(req);
>> +		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
>> +			/* detach rest of list, and submit */
>> +			prev->rq_next = NULL;
> 
If req == NULL and prev == NULL we'll get a NULL deref here.
> 
> I think this can happen in the first iteration.
> 
> Correct me if I'm wrong..

First iteration we know the list isn't empty, so req can't be NULL
there.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 15:48       ` Jens Axboe
@ 2021-12-16 16:00         ` Max Gurtovoy
  2021-12-16 16:05           ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 16:00 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke


On 12/16/2021 5:48 PM, Jens Axboe wrote:
> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>> +	spin_lock(&nvmeq->sq_lock);
>>>> +	while (!rq_list_empty(*rqlist)) {
>>>> +		struct request *req = rq_list_pop(rqlist);
>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>> +
>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>> +			nvmeq->sq_tail = 0;
>>> So this doesn't even use the new helper added in patch 2?  I think this
>>> should call nvme_sq_copy_cmd().
>> I also noticed that.
>>
>> So need to decide if to open code it or use the helper function.
>>
>> Inline helper sounds reasonable if you have 3 places that will use it.
> Yes agree, that's been my stance too :-)
>
>>> The rest looks identical to the incremental patch I posted, so I guess
>>> the performance degration measured on the first try was a measurement
>>> error?
>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>
>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>> algorithm ?
> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
> in total. I do agree that if we ever made it much larger, then we might
> want to cap it differently. But 32 seems like a pretty reasonable number
> to get enough gain from the batching done in various areas, while still
> not making it so large that we have a potential latency issue. That
> batch count is already used consistently for other items too (like tag
> allocation), so it's not specific to just this one case.

I'm saying that you can end up waiting for batch_max_count for too long,
and that won't be efficient from a latency POV.

So it's better to limit the block layer to wait for whichever comes
first: x usecs or batch_max_count, before issuing queue_rqs.

Also, is this batch per HW queue, per SW queue, or per the entire request
queue?

>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:00         ` Max Gurtovoy
@ 2021-12-16 16:05           ` Jens Axboe
  2021-12-16 16:19             ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:05 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 9:00 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>> +
>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>> +			nvmeq->sq_tail = 0;
>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>> should call nvme_sq_copy_cmd().
>>> I also noticed that.
>>>
>>> So need to decide if to open code it or use the helper function.
>>>
>>> Inline helper sounds reasonable if you have 3 places that will use it.
>> Yes agree, that's been my stance too :-)
>>
>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>> the performance degration measured on the first try was a measurement
>>>> error?
>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>
>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>> algorithm ?
>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>> in total. I do agree that if we ever made it much larger, then we might
>> want to cap it differently. But 32 seems like a pretty reasonable number
>> to get enough gain from the batching done in various areas, while still
>> not making it so large that we have a potential latency issue. That
>> batch count is already used consistently for other items too (like tag
>> allocation), so it's not specific to just this one case.
> 
> I'm saying that the you can wait to the batch_max_count too long and it 
> won't be efficient from latency POV.
> 
> So it's better to limit the block layar to wait for the first to come: x 
> usecs or batch_max_count before issue queue_rqs.

There's no waiting specifically for this, it's just based on the plug.
We just won't do more than 32 in that plug. This is really just an
artifact of the plugging, and if that should be limited based on "max of
32 or xx time", then that should be done there.

But in general I think it's saner and enough to just limit the total
size. If we spend more than xx usec building up the plug list, we're
doing something horribly wrong. That really should not happen with 32
requests, and we'll never eg wait on requests if we're out of tags. That
will result in a plug flush to begin with.

> Also, This batch is per HW queue or SW queue or the entire request queue ?

It's per submitter, so whatever the submitter ends up queueing IO
against. In general it'll be per-queue.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 15:59     ` Jens Axboe
@ 2021-12-16 16:06       ` Max Gurtovoy
  2021-12-16 16:09         ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 16:06 UTC (permalink / raw)
  To: Jens Axboe, io-uring, linux-nvme; +Cc: Hannes Reinecke


On 12/16/2021 5:59 PM, Jens Axboe wrote:
> On 12/16/21 6:02 AM, Max Gurtovoy wrote:
>> On 12/15/2021 6:24 PM, Jens Axboe wrote:
>>> This enables the block layer to send us a full plug list of requests
>>> that need submitting. The block layer guarantees that they all belong
>>> to the same queue, but we do have to check the hardware queue mapping
>>> for each request.
>>>
>>> If errors are encountered, leave them in the passed in list. Then the
>>> block layer will handle them individually.
>>>
>>> This is good for about a 4% improvement in peak performance, taking us
>>> from 9.6M to 10M IOPS/core.
>>>
>>> Reviewed-by: Hannes Reinecke <hare@suse.de>
>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>> ---
>>>    drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
>>>    1 file changed, 61 insertions(+)
>>>
>>> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>>> index 6be6b1ab4285..197aa45ef7ef 100644
>>> --- a/drivers/nvme/host/pci.c
>>> +++ b/drivers/nvme/host/pci.c
>>> @@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>>    	return BLK_STS_OK;
>>>    }
>>>    
>>> +static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
>>> +{
>>> +	spin_lock(&nvmeq->sq_lock);
>>> +	while (!rq_list_empty(*rqlist)) {
>>> +		struct request *req = rq_list_pop(rqlist);
>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>> +
>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>> +			nvmeq->sq_tail = 0;
>>> +	}
>>> +	nvme_write_sq_db(nvmeq, true);
>>> +	spin_unlock(&nvmeq->sq_lock);
>>> +}
>>> +
>>> +static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
>>> +{
>>> +	/*
>>> +	 * We should not need to do this, but we're still using this to
>>> +	 * ensure we can drain requests on a dying queue.
>>> +	 */
>>> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
>>> +		return false;
>>> +	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
>>> +		return false;
>>> +
>>> +	req->mq_hctx->tags->rqs[req->tag] = req;
>>> +	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
>>> +}
>>> +
>>> +static void nvme_queue_rqs(struct request **rqlist)
>>> +{
>>> +	struct request *req = rq_list_peek(rqlist), *prev = NULL;
>>> +	struct request *requeue_list = NULL;
>>> +
>>> +	do {
>>> +		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
>>> +
>>> +		if (!nvme_prep_rq_batch(nvmeq, req)) {
>>> +			/* detach 'req' and add to remainder list */
>>> +			if (prev)
>>> +				prev->rq_next = req->rq_next;
>>> +			rq_list_add(&requeue_list, req);
>>> +		} else {
>>> +			prev = req;
>>> +		}
>>> +
>>> +		req = rq_list_next(req);
>>> +		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
>>> +			/* detach rest of list, and submit */
>>> +			prev->rq_next = NULL;
>> if req == NULL and prev == NULL we'll get a NULL deref here.
>>
>> I think this can happen in the first iteration.
>>
>> Correct me if I'm wrong..
> First iteration we know the list isn't empty, so req can't be NULL
> there.

But you set "req = rq_list_next(req);"

So can't req be NULL after the above line?

>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:06       ` Max Gurtovoy
@ 2021-12-16 16:09         ` Jens Axboe
  0 siblings, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:09 UTC (permalink / raw)
  To: Max Gurtovoy, io-uring, linux-nvme; +Cc: Hannes Reinecke

On 12/16/21 9:06 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 5:59 PM, Jens Axboe wrote:
>> On 12/16/21 6:02 AM, Max Gurtovoy wrote:
>>> On 12/15/2021 6:24 PM, Jens Axboe wrote:
>>>> This enables the block layer to send us a full plug list of requests
>>>> that need submitting. The block layer guarantees that they all belong
>>>> to the same queue, but we do have to check the hardware queue mapping
>>>> for each request.
>>>>
>>>> If errors are encountered, leave them in the passed in list. Then the
>>>> block layer will handle them individually.
>>>>
>>>> This is good for about a 4% improvement in peak performance, taking us
>>>> from 9.6M to 10M IOPS/core.
>>>>
>>>> Reviewed-by: Hannes Reinecke <hare@suse.de>
>>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>>> ---
>>>>    drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
>>>>    1 file changed, 61 insertions(+)
>>>>
>>>> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>>>> index 6be6b1ab4285..197aa45ef7ef 100644
>>>> --- a/drivers/nvme/host/pci.c
>>>> +++ b/drivers/nvme/host/pci.c
>>>> @@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>>>>    	return BLK_STS_OK;
>>>>    }
>>>>    
>>>> +static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
>>>> +{
>>>> +	spin_lock(&nvmeq->sq_lock);
>>>> +	while (!rq_list_empty(*rqlist)) {
>>>> +		struct request *req = rq_list_pop(rqlist);
>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>> +
>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>> +			nvmeq->sq_tail = 0;
>>>> +	}
>>>> +	nvme_write_sq_db(nvmeq, true);
>>>> +	spin_unlock(&nvmeq->sq_lock);
>>>> +}
>>>> +
>>>> +static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
>>>> +{
>>>> +	/*
>>>> +	 * We should not need to do this, but we're still using this to
>>>> +	 * ensure we can drain requests on a dying queue.
>>>> +	 */
>>>> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
>>>> +		return false;
>>>> +	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
>>>> +		return false;
>>>> +
>>>> +	req->mq_hctx->tags->rqs[req->tag] = req;
>>>> +	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
>>>> +}
>>>> +
>>>> +static void nvme_queue_rqs(struct request **rqlist)
>>>> +{
>>>> +	struct request *req = rq_list_peek(rqlist), *prev = NULL;
>>>> +	struct request *requeue_list = NULL;
>>>> +
>>>> +	do {
>>>> +		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
>>>> +
>>>> +		if (!nvme_prep_rq_batch(nvmeq, req)) {
>>>> +			/* detach 'req' and add to remainder list */
>>>> +			if (prev)
>>>> +				prev->rq_next = req->rq_next;
>>>> +			rq_list_add(&requeue_list, req);
>>>> +		} else {
>>>> +			prev = req;
>>>> +		}
>>>> +
>>>> +		req = rq_list_next(req);
>>>> +		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
>>>> +			/* detach rest of list, and submit */
>>>> +			prev->rq_next = NULL;
>>> if req == NULL and prev == NULL we'll get a NULL deref here.
>>>
>>> I think this can happen in the first iteration.
>>>
>>> Correct me if I'm wrong..
>> First iteration we know the list isn't empty, so req can't be NULL
>> there.
> 
> but you set "req = rq_list_next(req);"
> 
> So can't req be NULL ? after the above line ?

I guess if we hit the prep failure path for the first request, that could
be a concern. Probably best to add an if (prev) before that detach,
thanks.
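
I.e. something along these lines (just a sketch of that guard, the actual
fix will be in the next revision and may end up looking different):

		req = rq_list_next(req);
		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
			/* detach rest of list, and submit */
			if (prev) {
				prev->rq_next = NULL;
				nvme_submit_cmds(nvmeq, rqlist);
			}
			*rqlist = req;
		}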

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 15:45     ` Jens Axboe
@ 2021-12-16 16:15       ` Christoph Hellwig
  2021-12-16 16:27         ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-16 16:15 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, io-uring, linux-nvme, Hannes Reinecke

On Thu, Dec 16, 2021 at 08:45:46AM -0700, Jens Axboe wrote:
> On 12/16/21 2:08 AM, Christoph Hellwig wrote:
> > On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
> >> +	spin_lock(&nvmeq->sq_lock);
> >> +	while (!rq_list_empty(*rqlist)) {
> >> +		struct request *req = rq_list_pop(rqlist);
> >> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> >> +
> >> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
> >> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
> >> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
> >> +			nvmeq->sq_tail = 0;
> > 
> > So this doesn't even use the new helper added in patch 2?  I think this
> > should call nvme_sq_copy_cmd().
> 
> But you NAK'ed that one? It definitely should use that helper, so I take it
> you are fine with it then if we do it here too? That would make 3 call sites,
> and I still do think the helper makes sense...

I explained two times that the new helper is fine as long as you open
code nvme_submit_cmd in its two callers, as it now is a trivial wrapper.
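
Concretely, the nvme_queue_rq() side would then read something like this
(a sketch of what is being asked for, the final form is up to the respin):

	spin_lock(&nvmeq->sq_lock);
	nvme_sq_copy_cmd(nvmeq, &iod->cmd);
	nvme_write_sq_db(nvmeq, bd->last);
	spin_unlock(&nvmeq->sq_lock);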

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:05           ` Jens Axboe
@ 2021-12-16 16:19             ` Max Gurtovoy
  2021-12-16 16:25               ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 16:19 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke


On 12/16/2021 6:05 PM, Jens Axboe wrote:
> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>> +
>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>> +			nvmeq->sq_tail = 0;
>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>> should call nvme_sq_copy_cmd().
>>>> I also noticed that.
>>>>
>>>> So need to decide if to open code it or use the helper function.
>>>>
>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>> Yes agree, that's been my stance too :-)
>>>
>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>> the performance degration measured on the first try was a measurement
>>>>> error?
>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>
>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>> algorithm ?
>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>> in total. I do agree that if we ever made it much larger, then we might
>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>> to get enough gain from the batching done in various areas, while still
>>> not making it so large that we have a potential latency issue. That
>>> batch count is already used consistently for other items too (like tag
>>> allocation), so it's not specific to just this one case.
>> I'm saying that the you can wait to the batch_max_count too long and it
>> won't be efficient from latency POV.
>>
>> So it's better to limit the block layar to wait for the first to come: x
>> usecs or batch_max_count before issue queue_rqs.
> There's no waiting specifically for this, it's just based on the plug.
> We just won't do more than 32 in that plug. This is really just an
> artifact of the plugging, and if that should be limited based on "max of
> 32 or xx time", then that should be done there.
>
> But in general I think it's saner and enough to just limit the total
> size. If we spend more than xx usec building up the plug list, we're
> doing something horribly wrong. That really should not happen with 32
> requests, and we'll never eg wait on requests if we're out of tags. That
> will result in a plug flush to begin with.

I'm not aware of the plug. I hope to get to it soon.

My concern is: if the user application submitted only 28 requests, will
you then wait forever? Or for a very long time?

I guess not, but I'm asking how you know how to batch and when to stop
in case 32 commands won't arrive anytime soon.

>
>> Also, This batch is per HW queue or SW queue or the entire request queue ?
> It's per submitter, so whatever the submitter ends up queueing IO
> against. In general it'll be per-queue.

struct request_queue?

I think the best is to batch per struct blk_mq_hw_ctx.

I see that you check this in the nvme_pci driver, but shouldn't it go to
the block layer?

>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:19             ` Max Gurtovoy
@ 2021-12-16 16:25               ` Jens Axboe
  2021-12-16 16:34                 ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:25 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 9:19 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>> +
>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>> should call nvme_sq_copy_cmd().
>>>>> I also noticed that.
>>>>>
>>>>> So need to decide if to open code it or use the helper function.
>>>>>
>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>> Yes agree, that's been my stance too :-)
>>>>
>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>> the performance degration measured on the first try was a measurement
>>>>>> error?
>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>
>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>> algorithm ?
>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>> in total. I do agree that if we ever made it much larger, then we might
>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>> to get enough gain from the batching done in various areas, while still
>>>> not making it so large that we have a potential latency issue. That
>>>> batch count is already used consistently for other items too (like tag
>>>> allocation), so it's not specific to just this one case.
>>> I'm saying that the you can wait to the batch_max_count too long and it
>>> won't be efficient from latency POV.
>>>
>>> So it's better to limit the block layar to wait for the first to come: x
>>> usecs or batch_max_count before issue queue_rqs.
>> There's no waiting specifically for this, it's just based on the plug.
>> We just won't do more than 32 in that plug. This is really just an
>> artifact of the plugging, and if that should be limited based on "max of
>> 32 or xx time", then that should be done there.
>>
>> But in general I think it's saner and enough to just limit the total
>> size. If we spend more than xx usec building up the plug list, we're
>> doing something horribly wrong. That really should not happen with 32
>> requests, and we'll never eg wait on requests if we're out of tags. That
>> will result in a plug flush to begin with.
> 
> I'm not aware of the plug. I hope to get to it soon.
> 
> My concern is if the user application submitted only 28 requests and 
> then you'll wait forever ? or for very long time.
> 
> I guess not, but I'm asking how do you know how to batch and when to 
> stop in case 32 commands won't arrive anytime soon.

The plug lives on the stack of the task, so that condition can never
happen. If the application originally asks for 32 but then only submits
28, then once that last one is submitted the plug is flushed and the
requests are issued.
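
To make that concrete, here is a rough sketch of the pattern
(illustration only, not the actual io_uring submission code;
submit_one_io() is a placeholder):

#include <linux/blkdev.h>

static void submit_one_io(unsigned int idx);	/* placeholder */

/*
 * The plug lives on the submitter's stack, so blk_finish_plug() always
 * runs before the task leaves the submission path. A partial batch
 * (say 28 of 32) is therefore flushed right here, never left waiting.
 */
static void submit_ios(unsigned int nr)
{
	struct blk_plug plug;
	unsigned int i;

	blk_start_plug(&plug);
	for (i = 0; i < nr; i++)
		submit_one_io(i);	/* requests collect on plug->mq_list */
	blk_finish_plug(&plug);		/* flushes anything still plugged */
}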

>>> Also, This batch is per HW queue or SW queue or the entire request queue ?
>> It's per submitter, so whatever the submitter ends up queueing IO
>> against. In general it'll be per-queue.
> 
> struct request_queue ?
> 
> I think the best is to batch per struct blk_mq_hw_ctx.
> 
> I see that you check this in the nvme_pci driver but shouldn't it go to 
> the block layer ?

That's not how plugging works. In general, unless your task bounces
around, it'll be a single queue and a single hw queue as well. Adding
code to specifically check the mappings and flush at that point would
be a net loss compared to just dealing with it if it happens in some
cases.
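
For illustration, a driver-side ->queue_rqs() can cut the plugged list
at hw queue boundaries with something along these lines (a sketch of
the idea only, not the actual NVMe patch; example_submit_batch() is a
placeholder, and prep-failure/requeue handling is left out):

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

static void example_submit_batch(struct blk_mq_hw_ctx *hctx,
				 struct request **rqlist);	/* placeholder */

static void example_queue_rqs(struct request **rqlist)
{
	struct request *req = rq_list_peek(rqlist);

	while (req) {
		struct request *next = rq_list_next(req);

		/* issue a sub-batch whenever the hw queue changes */
		if (!next || next->mq_hctx != req->mq_hctx) {
			req->rq_next = NULL;	/* detach head..req */
			example_submit_batch(req->mq_hctx, rqlist);
			*rqlist = next;		/* head of the next sub-batch */
		}
		req = next;
	}
}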

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:15       ` Christoph Hellwig
@ 2021-12-16 16:27         ` Jens Axboe
  2021-12-16 16:30           ` Christoph Hellwig
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 9:15 AM, Christoph Hellwig wrote:
> On Thu, Dec 16, 2021 at 08:45:46AM -0700, Jens Axboe wrote:
>> On 12/16/21 2:08 AM, Christoph Hellwig wrote:
>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>> +	spin_lock(&nvmeq->sq_lock);
>>>> +	while (!rq_list_empty(*rqlist)) {
>>>> +		struct request *req = rq_list_pop(rqlist);
>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>> +
>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>> +			nvmeq->sq_tail = 0;
>>>
>>> So this doesn't even use the new helper added in patch 2?  I think this
>>> should call nvme_sq_copy_cmd().
>>
>> But you NAK'ed that one? It definitely should use that helper, so I take it
>> you are fine with it then if we do it here too? That would make 3 call sites,
>> and I still do think the helper makes sense...
> 
> I explained two times that the new helpers is fine as long as you open
> code nvme_submit_cmd in its two callers as it now is a trivial wrapper.

OK, I misunderstood which one you were referring to then. So this
incremental below, and I'll send out a new series...


diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 58d97660374a..51a903d91d92 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -509,21 +509,6 @@ static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
 		nvmeq->sq_tail = 0;
 }
 
-/**
- * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
- * @nvmeq: The queue to use
- * @cmd: The command to send
- * @write_sq: whether to write to the SQ doorbell
- */
-static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
-			    bool write_sq)
-{
-	spin_lock(&nvmeq->sq_lock);
-	nvme_sq_copy_cmd(nvmeq, cmd);
-	nvme_write_sq_db(nvmeq, write_sq);
-	spin_unlock(&nvmeq->sq_lock);
-}
-
 static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
 {
 	struct nvme_queue *nvmeq = hctx->driver_data;
@@ -977,7 +962,10 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	ret = nvme_prep_rq(dev, req);
 	if (unlikely(ret))
 		return ret;
-	nvme_submit_cmd(nvmeq, &iod->cmd, bd->last);
+	spin_lock(&nvmeq->sq_lock);
+	nvme_sq_copy_cmd(nvmeq, &iod->cmd);
+	nvme_write_sq_db(nvmeq, bd->last);
+	spin_unlock(&nvmeq->sq_lock);
 	return BLK_STS_OK;
 }
 
@@ -1213,7 +1201,11 @@ static void nvme_pci_submit_async_event(struct nvme_ctrl *ctrl)
 
 	c.common.opcode = nvme_admin_async_event;
 	c.common.command_id = NVME_AQ_BLK_MQ_DEPTH;
-	nvme_submit_cmd(nvmeq, &c, true);
+
+	spin_lock(&nvmeq->sq_lock);
+	nvme_sq_copy_cmd(nvmeq, &c);
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
 }
 
 static int adapter_delete_queue(struct nvme_dev *dev, u8 opcode, u16 id)

-- 
Jens Axboe


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:27         ` Jens Axboe
@ 2021-12-16 16:30           ` Christoph Hellwig
  2021-12-16 16:36             ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-16 16:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, io-uring, linux-nvme, Hannes Reinecke

On Thu, Dec 16, 2021 at 09:27:18AM -0700, Jens Axboe wrote:
> OK, I misunderstood which one you referred to then. So this incremental,

Yes, that's the preferred version.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:25               ` Jens Axboe
@ 2021-12-16 16:34                 ` Max Gurtovoy
  2021-12-16 16:36                   ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 16:34 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke


On 12/16/2021 6:25 PM, Jens Axboe wrote:
> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>> +
>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>> should call nvme_sq_copy_cmd().
>>>>>> I also noticed that.
>>>>>>
>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>
>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>> Yes agree, that's been my stance too :-)
>>>>>
>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>> error?
>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>
>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>> algorithm ?
>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>> to get enough gain from the batching done in various areas, while still
>>>>> not making it so large that we have a potential latency issue. That
>>>>> batch count is already used consistently for other items too (like tag
>>>>> allocation), so it's not specific to just this one case.
>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>> won't be efficient from latency POV.
>>>>
>>>> So it's better to limit the block layar to wait for the first to come: x
>>>> usecs or batch_max_count before issue queue_rqs.
>>> There's no waiting specifically for this, it's just based on the plug.
>>> We just won't do more than 32 in that plug. This is really just an
>>> artifact of the plugging, and if that should be limited based on "max of
>>> 32 or xx time", then that should be done there.
>>>
>>> But in general I think it's saner and enough to just limit the total
>>> size. If we spend more than xx usec building up the plug list, we're
>>> doing something horribly wrong. That really should not happen with 32
>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>> will result in a plug flush to begin with.
>> I'm not aware of the plug. I hope to get to it soon.
>>
>> My concern is if the user application submitted only 28 requests and
>> then you'll wait forever ? or for very long time.
>>
>> I guess not, but I'm asking how do you know how to batch and when to
>> stop in case 32 commands won't arrive anytime soon.
> The plug is in the stack of the task, so that condition can never
> happen. If the application originally asks for 32 but then only submits
> 28, then once that last one is submitted the plug is flushed and
> requests are issued.

So if I'm running fio with --iodepth=28, what will the plug do? Send
batches of 28, or 1 by 1?

>>>> Also, This batch is per HW queue or SW queue or the entire request queue ?
>>> It's per submitter, so whatever the submitter ends up queueing IO
>>> against. In general it'll be per-queue.
>> struct request_queue ?
>>
>> I think the best is to batch per struct blk_mq_hw_ctx.
>>
>> I see that you check this in the nvme_pci driver but shouldn't it go to
>> the block layer ?
> That's not how plugging works. In general, unless your task bounces
> around, then it'll be a single queue and a single hw queue as well.
> Adding code to specifically check the mappings and flush at that point
> would be a net loss, rather than just deal with it if it happens for
> some cases.
>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:34                 ` Max Gurtovoy
@ 2021-12-16 16:36                   ` Jens Axboe
  2021-12-16 16:57                     ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:36 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 9:34 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>> +
>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>> I also noticed that.
>>>>>>>
>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>
>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>> Yes agree, that's been my stance too :-)
>>>>>>
>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>> error?
>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>
>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>> algorithm ?
>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>> not making it so large that we have a potential latency issue. That
>>>>>> batch count is already used consistently for other items too (like tag
>>>>>> allocation), so it's not specific to just this one case.
>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>> won't be efficient from latency POV.
>>>>>
>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>> usecs or batch_max_count before issue queue_rqs.
>>>> There's no waiting specifically for this, it's just based on the plug.
>>>> We just won't do more than 32 in that plug. This is really just an
>>>> artifact of the plugging, and if that should be limited based on "max of
>>>> 32 or xx time", then that should be done there.
>>>>
>>>> But in general I think it's saner and enough to just limit the total
>>>> size. If we spend more than xx usec building up the plug list, we're
>>>> doing something horribly wrong. That really should not happen with 32
>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>> will result in a plug flush to begin with.
>>> I'm not aware of the plug. I hope to get to it soon.
>>>
>>> My concern is if the user application submitted only 28 requests and
>>> then you'll wait forever ? or for very long time.
>>>
>>> I guess not, but I'm asking how do you know how to batch and when to
>>> stop in case 32 commands won't arrive anytime soon.
>> The plug is in the stack of the task, so that condition can never
>> happen. If the application originally asks for 32 but then only submits
>> 28, then once that last one is submitted the plug is flushed and
>> requests are issued.
> 
> So if I'm running fio with --iodepth=28 what will plug do ? send batches 
> of 28 ? or 1 by 1 ?

--iodepth just controls the overall depth; the batch submit count
dictates what happens further down. If you run queue depth 28 and submit
one at a time, then you'll get one at a time further down too. Hence
the batching is directly driven by what the application is already
doing.


-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:30           ` Christoph Hellwig
@ 2021-12-16 16:36             ` Jens Axboe
  0 siblings, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:36 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 9:30 AM, Christoph Hellwig wrote:
> On Thu, Dec 16, 2021 at 09:27:18AM -0700, Jens Axboe wrote:
>> OK, I misunderstood which one you referred to then. So this incremental,
> 
> Yes, that's the preferred version.

Respun it, will send out an updated one.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:36                   ` Jens Axboe
@ 2021-12-16 16:57                     ` Max Gurtovoy
  2021-12-16 17:16                       ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-16 16:57 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke


On 12/16/2021 6:36 PM, Jens Axboe wrote:
> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>> +
>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>> I also noticed that.
>>>>>>>>
>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>
>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>
>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>> error?
>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>
>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>> algorithm ?
>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>> allocation), so it's not specific to just this one case.
>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>> won't be efficient from latency POV.
>>>>>>
>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>> 32 or xx time", then that should be done there.
>>>>>
>>>>> But in general I think it's saner and enough to just limit the total
>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>> doing something horribly wrong. That really should not happen with 32
>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>> will result in a plug flush to begin with.
>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>
>>>> My concern is if the user application submitted only 28 requests and
>>>> then you'll wait forever ? or for very long time.
>>>>
>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>> stop in case 32 commands won't arrive anytime soon.
>>> The plug is in the stack of the task, so that condition can never
>>> happen. If the application originally asks for 32 but then only submits
>>> 28, then once that last one is submitted the plug is flushed and
>>> requests are issued.
>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>> of 28 ? or 1 by 1 ?
> --iodepth just controls the overall depth, the batch submit count
> dictates what happens further down. If you run queue depth 28 and submit
> one at the time, then you'll get one at the time further down too. Hence
> the batching is directly driven by what the application is already
> doing.

I see. Thanks for the explanation.

So it works only for io_uring-based applications?

Don't you think it would be a good idea not to depend on the
application, and instead batch according to some kernel mechanism?

That is, wait until X requests have accumulated or Y usecs have passed
(whichever comes first) before submitting the batch to the LLD.

Like we do with adaptive completion coalescing/moderation for capable
devices.
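
In the abstract, the policy suggested here would look something like
this (hypothetical names throughout; this is not existing block layer
code):

#include <linux/types.h>

struct batch_state {
	unsigned int	count;		/* requests currently held back */
	u64		oldest_ns;	/* arrival time of the oldest one */
	unsigned int	max_count;	/* X: flush at this many requests */
	u64		max_delay_ns;	/* Y: or once the oldest waited this long */
};

static bool batch_should_flush(const struct batch_state *b, u64 now_ns)
{
	if (!b->count)
		return false;
	return b->count >= b->max_count ||
	       now_ns - b->oldest_ns >= b->max_delay_ns;
}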


>
>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:57                     ` Max Gurtovoy
@ 2021-12-16 17:16                       ` Jens Axboe
  2021-12-19 12:14                         ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 17:16 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig; +Cc: io-uring, linux-nvme, Hannes Reinecke

On 12/16/21 9:57 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>> +
>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>> I also noticed that.
>>>>>>>>>
>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>
>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>
>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>> error?
>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>
>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>> algorithm ?
>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>> won't be efficient from latency POV.
>>>>>>>
>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>> 32 or xx time", then that should be done there.
>>>>>>
>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>> will result in a plug flush to begin with.
>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>
>>>>> My concern is if the user application submitted only 28 requests and
>>>>> then you'll wait forever ? or for very long time.
>>>>>
>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>> stop in case 32 commands won't arrive anytime soon.
>>>> The plug is in the stack of the task, so that condition can never
>>>> happen. If the application originally asks for 32 but then only submits
>>>> 28, then once that last one is submitted the plug is flushed and
>>>> requests are issued.
>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>> of 28 ? or 1 by 1 ?
>> --iodepth just controls the overall depth, the batch submit count
>> dictates what happens further down. If you run queue depth 28 and submit
>> one at the time, then you'll get one at the time further down too. Hence
>> the batching is directly driven by what the application is already
>> doing.
> 
> I see. Thanks for the explanation.
> 
> So it works only for io_uring based applications ?

It's only enabled for io_uring right now, but it's generically available
for anyone that wants to use it... It would be trivial to do for aio, and
for other spots that currently use blk_start_plug() and have an idea of
how many IOs will be submitted.
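
For example, a submitter that knows up front how many IOs it is about
to queue could do roughly this (a sketch, assuming the
blk_start_plug_nr_ios() hint; do_submit_one() is a placeholder):

#include <linux/blkdev.h>

static void do_submit_one(void);	/* placeholder for the actual submission */

static void submit_known_batch(unsigned short nr)
{
	struct blk_plug plug;

	/* hint the plug how many IOs are coming, then queue them */
	blk_start_plug_nr_ios(&plug, nr);
	while (nr--)
		do_submit_one();
	blk_finish_plug(&plug);
}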

> Don't you think it will be a good idea to not depend on applications and 
> batch according to some kernel mechanism ?
> 
> Wait till X requests or Y usecs (first condition to be fulfilled) before 
> submitting the batch to LLD.
> 
> Like we do with adaptive completion coalescing/moderation for capable 
> devices.

This is how plugging used to work way back in the day. The problem is
that you then introduce per-device state, which can cause contention.
That's why the plug is a purely stack-based entity now.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 17:16                       ` Jens Axboe
@ 2021-12-19 12:14                         ` Max Gurtovoy
  2021-12-19 14:48                           ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-19 12:14 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer


On 12/16/2021 7:16 PM, Jens Axboe wrote:
> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>> +
>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>> I also noticed that.
>>>>>>>>>>
>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>
>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>
>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>> error?
>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>
>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>> algorithm ?
>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>> won't be efficient from latency POV.
>>>>>>>>
>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>
>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>> will result in a plug flush to begin with.
>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>
>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>> then you'll wait forever ? or for very long time.
>>>>>>
>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>> The plug is in the stack of the task, so that condition can never
>>>>> happen. If the application originally asks for 32 but then only submits
>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>> requests are issued.
>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>> of 28 ? or 1 by 1 ?
>>> --iodepth just controls the overall depth, the batch submit count
>>> dictates what happens further down. If you run queue depth 28 and submit
>>> one at the time, then you'll get one at the time further down too. Hence
>>> the batching is directly driven by what the application is already
>>> doing.
>> I see. Thanks for the explanation.
>>
>> So it works only for io_uring based applications ?
> It's only enabled for io_uring right now, but it's generically available
> for anyone that wants to use it... Would be trivial to do for aio, and
> other spots that currently use blk_start_plug() and has an idea of how
> many IOs will be submitted

Can you please share an example application (or is it fio patches?) that
can submit batches? The same one that was used to test this patchset is
fine too.

I would like to test it with our NVMe SNAP controllers, and also to
develop the NVMe/RDMA queue_rqs code and test the performance with it.

> .
>
>> Don't you think it will be a good idea to not depend on applications and
>> batch according to some kernel mechanism ?
>>
>> Wait till X requests or Y usecs (first condition to be fulfilled) before
>> submitting the batch to LLD.
>>
>> Like we do with adaptive completion coalescing/moderation for capable
>> devices.
> This is how plugging used to work way back in the day. The problem is
> that you then introduce per-device state, which can cause contention.
> That's why the plug is a pure stack based entity now.
>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-19 12:14                         ` Max Gurtovoy
@ 2021-12-19 14:48                           ` Jens Axboe
  2021-12-20 10:11                             ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-19 14:48 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

On 12/19/21 5:14 AM, Max Gurtovoy wrote:
> 
> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>
>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>
>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>
>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>> error?
>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>
>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>> algorithm ?
>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>
>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>
>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>> will result in a plug flush to begin with.
>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>
>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>
>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>> requests are issued.
>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>> of 28 ? or 1 by 1 ?
>>>> --iodepth just controls the overall depth, the batch submit count
>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>> one at the time, then you'll get one at the time further down too. Hence
>>>> the batching is directly driven by what the application is already
>>>> doing.
>>> I see. Thanks for the explanation.
>>>
>>> So it works only for io_uring based applications ?
>> It's only enabled for io_uring right now, but it's generically available
>> for anyone that wants to use it... Would be trivial to do for aio, and
>> other spots that currently use blk_start_plug() and has an idea of how
>> many IOs will be submitted
> 
> Can you please share an example application (or is it fio patches) that 
> can submit batches ? The same that was used to test this patchset is 
> fine too.
> 
> I would like to test it with our NVMe SNAP controllers and also to 
> develop NVMe/RDMA queue_rqs code and test the perf with it.

You should just be able to use iodepth_batch with fio. For my peak
testing, I use t/io_uring from the fio repo. By default, it'll run a QD
of 128 and do batches of 32 for complete and submit. You can just run:

t/io_uring <dev or file>

maybe adding -p0 for IRQ-driven rather than polled IO.
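
For reference, a roughly equivalent fio invocation would be something
like this (device name and job parameters are just examples):

fio --name=batched --filename=/dev/nvme0n1 --direct=1 --rw=randread \
    --bs=512 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete_max=32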

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-19 14:48                           ` Jens Axboe
@ 2021-12-20 10:11                             ` Max Gurtovoy
  2021-12-20 14:19                               ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-20 10:11 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer


On 12/19/2021 4:48 PM, Jens Axboe wrote:
> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>
>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>
>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>
>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>> error?
>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>
>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>> algorithm ?
>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>
>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>
>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>
>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>
>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>> requests are issued.
>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>> of 28 ? or 1 by 1 ?
>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>> the batching is directly driven by what the application is already
>>>>> doing.
>>>> I see. Thanks for the explanation.
>>>>
>>>> So it works only for io_uring based applications ?
>>> It's only enabled for io_uring right now, but it's generically available
>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>> other spots that currently use blk_start_plug() and has an idea of how
>>> many IOs will be submitted
>> Can you please share an example application (or is it fio patches) that
>> can submit batches ? The same that was used to test this patchset is
>> fine too.
>>
>> I would like to test it with our NVMe SNAP controllers and also to
>> develop NVMe/RDMA queue_rqs code and test the perf with it.
> You should just be able to use iodepth_batch with fio. For my peak
> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
> and do batches of 32 for complete and submit. You can just run:
>
> t/io_uring <dev or file>
>
> maybe adding -p0 for IRQ driven rather than polled IO.

I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA,
but it was never called, neither with the t/io_uring test nor with fio
using the iodepth_batch=32 flag and the io_uring engine.

Any idea what might be the issue?

I installed fio from source.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-20 10:11                             ` Max Gurtovoy
@ 2021-12-20 14:19                               ` Jens Axboe
  2021-12-20 14:25                                 ` Jens Axboe
  2021-12-20 15:29                                 ` Max Gurtovoy
  0 siblings, 2 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-20 14:19 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

On 12/20/21 3:11 AM, Max Gurtovoy wrote:
> 
> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>
>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>> error?
>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>
>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>>
>>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>
>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>
>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>
>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>> requests are issued.
>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>> of 28 ? or 1 by 1 ?
>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>> the batching is directly driven by what the application is already
>>>>>> doing.
>>>>> I see. Thanks for the explanation.
>>>>>
>>>>> So it works only for io_uring based applications ?
>>>> It's only enabled for io_uring right now, but it's generically available
>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>> many IOs will be submitted
>>> Can you please share an example application (or is it fio patches) that
>>> can submit batches ? The same that was used to test this patchset is
>>> fine too.
>>>
>>> I would like to test it with our NVMe SNAP controllers and also to
>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>> You should just be able to use iodepth_batch with fio. For my peak
>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>> and do batches of 32 for complete and submit. You can just run:
>>
>> t/io_uring <dev or file>
>>
>> maybe adding -p0 for IRQ driven rather than polled IO.
> 
> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA 
> but it was never called using the t/io_uring test nor fio with 
> iodepth_batch=32 flag with io_uring engine.
> 
> Any idea what might be the issue ?
> 
> I installed fio from sources..

The two main restrictions right now are an I/O scheduler and shared tags;
are you using either of those?
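
A quick way to check the scheduler part is via sysfs (device name is
just an example):

cat /sys/block/nvme0n1/queue/scheduler
echo none > /sys/block/nvme0n1/queue/scheduler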

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-20 14:19                               ` Jens Axboe
@ 2021-12-20 14:25                                 ` Jens Axboe
  2021-12-20 15:29                                 ` Max Gurtovoy
  1 sibling, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-20 14:25 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

On 12/20/21 7:19 AM, Jens Axboe wrote:
>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>> You should just be able to use iodepth_batch with fio. For my peak
>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>> and do batches of 32 for complete and submit. You can just run:
>>>
>>> t/io_uring <dev or file>
>>>
>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>
>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA 
>> but it was never called using the t/io_uring test nor fio with 
>> iodepth_batch=32 flag with io_uring engine.
>>
>> Any idea what might be the issue ?
>>
>> I installed fio from sources..
> 
> The two main restrictions right now are a scheduler and shared tags, are
> you using any of those?

Here's a sample run with 2 threads, each driving 2 devices, using a
batch submit count of 31 (-s31) and 16 batch completions (-c16). Block
size is 512b. Ignore most of the other options, they don't really
matter; the defaults are 128 QD, 32 submit batch, 32 complete batch.

$ sudo taskset -c 10,11 t/io_uring -d256 -b512 -s31 -c16 -p1 -F1 -B1 -n2 /dev/nvme0n1 /dev/nvme3n1 /dev/nvme2n1 /dev/nvme4n1
Added file /dev/nvme0n1 (submitter 0)
Added file /dev/nvme3n1 (submitter 1)
Added file /dev/nvme2n1 (submitter 0)
Added file /dev/nvme4n1 (submitter 1)
polled=1, fixedbufs=1/1, register_files=1, buffered=0, QD=256
Engine=io_uring, sq_ring=256, cq_ring=256
submitter=0, tid=91490
submitter=1, tid=91491
IOPS=13038K, BW=6366MiB/s, IOS/call=30/30, inflight=(128 5 120 128)
IOPS=13042K, BW=6368MiB/s, IOS/call=30/30, inflight=(128 96 128 15)
IOPS=13049K, BW=6371MiB/s, IOS/call=30/30, inflight=(128 122 120 128)
IOPS=13045K, BW=6369MiB/s, IOS/call=30/30, inflight=(110 128 99 128)

That's driving 13M IOPS, using a single CPU core (10/11 are thread
siblings). Top of profile for that:

+    6.41%  io_uring  [kernel.vmlinux]  [k] __blk_mq_alloc_requests
+    5.46%  io_uring  [kernel.vmlinux]  [k] blkdev_direct_IO.part.0
+    5.36%  io_uring  [kernel.vmlinux]  [k] blk_mq_end_request_batch
+    5.24%  io_uring  io_uring          [.] submitter_uring_fn
+    5.18%  io_uring  [kernel.vmlinux]  [k] io_submit_sqes
+    5.12%  io_uring  [kernel.vmlinux]  [k] bio_alloc_kiocb
+    4.75%  io_uring  [nvme]            [k] nvme_poll
+    4.67%  io_uring  [kernel.vmlinux]  [k] __io_import_iovec
+    4.58%  io_uring  [nvme]            [k] nvme_queue_rqs
+    4.49%  io_uring  [kernel.vmlinux]  [k] blk_mq_submit_bio
+    4.32%  io_uring  [nvme]            [k] nvme_map_data
+    3.02%  io_uring  [kernel.vmlinux]  [k] io_issue_sqe
+    2.89%  io_uring  [nvme_core]       [k] nvme_setup_cmd
+    2.60%  io_uring  [kernel.vmlinux]  [k] io_prep_rw
+    2.59%  io_uring  [kernel.vmlinux]  [k] submit_bio_noacct.part.0


-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-20 14:19                               ` Jens Axboe
  2021-12-20 14:25                                 ` Jens Axboe
@ 2021-12-20 15:29                                 ` Max Gurtovoy
  2021-12-20 16:34                                   ` Jens Axboe
  1 sibling, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-20 15:29 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer


On 12/20/2021 4:19 PM, Jens Axboe wrote:
> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>>>
>>>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>
>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>
>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>
>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>> requests are issued.
>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>> the batching is directly driven by what the application is already
>>>>>>> doing.
>>>>>> I see. Thanks for the explanation.
>>>>>>
>>>>>> So it works only for io_uring based applications ?
>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>> many IOs will be submitted
>>>> Can you please share an example application (or is it fio patches) that
>>>> can submit batches ? The same that was used to test this patchset is
>>>> fine too.
>>>>
>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>> You should just be able to use iodepth_batch with fio. For my peak
>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>> and do batches of 32 for complete and submit. You can just run:
>>>
>>> t/io_uring <dev or file>
>>>
>>> maybe adding -p0 for IRQ driven rather than polled IO.
>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>> but it was never called using the t/io_uring test nor fio with
>> iodepth_batch=32 flag with io_uring engine.
>>
>> Any idea what might be the issue ?
>>
>> I installed fio from sources..
> The two main restrictions right now are a scheduler and shared tags, are
> you using any of those?

No.

But maybe I'm missing the .commit_rqs callback. Is it mandatory for this
feature?



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-20 15:29                                 ` Max Gurtovoy
@ 2021-12-20 16:34                                   ` Jens Axboe
  2021-12-20 18:48                                     ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-20 16:34 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

On 12/20/21 8:29 AM, Max Gurtovoy wrote:
> 
> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>
>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>
>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>
>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>> requests are issued.
>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>> doing.
>>>>>>> I see. Thanks for the explanation.
>>>>>>>
>>>>>>> So it works only for io_uring based applications ?
>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>> many IOs will be submitted
>>>>> Can you please share an example application (or is it fio patches) that
>>>>> can submit batches ? The same that was used to test this patchset is
>>>>> fine too.
>>>>>
>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>> and do batches of 32 for complete and submit. You can just run:
>>>>
>>>> t/io_uring <dev or file>
>>>>
>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>> but it was never called using the t/io_uring test nor fio with
>>> iodepth_batch=32 flag with io_uring engine.
>>>
>>> Any idea what might be the issue ?
>>>
>>> I installed fio from sources..
>> The two main restrictions right now are a scheduler and shared tags, are
>> you using any of those?
> 
> No.
> 
> But maybe I'm missing the .commit_rqs callback. is it mandatory for this 
> feature ?

I've only tested with nvme pci which does have it, but I don't think so.
Unless there's some check somewhere that makes it necessary. Can you
share the patch you're currently using on top?
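
For reference, a minimal sketch of what that path looks like (the mydrv_*
names below are made up, not an existing driver): ->queue_rqs() drains the
whole list itself and rings the doorbell once at the end, so nothing in it
depends on ->commit_rqs().

#include <linux/blk-mq.h>

/* Sketch only: the mydrv_* types and helpers are hypothetical. */
static void mydrv_queue_rqs(struct request **rqlist)
{
	struct mydrv_queue *mq = mydrv_rq_to_queue(rq_list_peek(rqlist));

	spin_lock(&mq->sq_lock);
	while (!rq_list_empty(*rqlist)) {
		struct request *rq = rq_list_pop(rqlist);

		/* place the already-prepared command into the SQ */
		mydrv_copy_cmd(mq, rq);
	}
	/* a single doorbell write covers the whole batch */
	mydrv_ring_doorbell(mq);
	spin_unlock(&mq->sq_lock);
}

static const struct blk_mq_ops mydrv_mq_ops = {
	.queue_rq	= mydrv_queue_rq,
	.queue_rqs	= mydrv_queue_rqs,
	/* no .commit_rqs needed on the queue_rqs path */
};

A real implementation also has to split the list per hardware queue, which
this sketch glosses over.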

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-20 16:34                                   ` Jens Axboe
@ 2021-12-20 18:48                                     ` Max Gurtovoy
  2021-12-20 18:58                                       ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-20 18:48 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

[-- Attachment #1: Type: text/plain, Size: 7583 bytes --]


On 12/20/2021 6:34 PM, Jens Axboe wrote:
> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>
>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>>
>>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>>
>>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>> requests are issued.
>>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>> doing.
>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>
>>>>>>>> So it works only for io_uring based applications ?
>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>>> many IOs will be submitted
>>>>>> Can you please share an example application (or is it fio patches) that
>>>>>> can submit batches ? The same that was used to test this patchset is
>>>>>> fine too.
>>>>>>
>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>>> and do batches of 32 for complete and submit. You can just run:
>>>>>
>>>>> t/io_uring <dev or file>
>>>>>
>>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>>> but it was never called using the t/io_uring test nor fio with
>>>> iodepth_batch=32 flag with io_uring engine.
>>>>
>>>> Any idea what might be the issue ?
>>>>
>>>> I installed fio from sources..
>>> The two main restrictions right now are a scheduler and shared tags, are
>>> you using any of those?
>> No.
>>
>> But maybe I'm missing the .commit_rqs callback. is it mandatory for this
>> feature ?
> I've only tested with nvme pci which does have it, but I don't think so.
> Unless there's some check somewhere that makes it necessary. Can you
> share the patch you're currently using on top?

The attached POC patches apply cleanly on the block/for-next branch:

commit 7925bb75e8effa5de85b1cf8425cd5c21f212b1d (block/for-next)
Merge: eb12bde9eba8 3427f2b2c533
Author: Jens Axboe <axboe@kernel.dk>
Date:   Fri Dec 17 09:51:05 2021 -0700

     Merge branch 'for-5.17/drivers' into for-next

     * for-5.17/drivers:
       block: remove the rsxx driver
       rsxx: Drop PCI legacy power management
       mtip32xx: convert to generic power management
       mtip32xx: remove pointless drvdata lookups
       mtip32xx: remove pointless drvdata checking
       drbd: Use struct_group() to zero algs
       loop: make autoclear operation asynchronous
       null_blk: cast command status to integer
       pktdvd: stop using bdi congestion framework.


[-- Attachment #2: 0001-nvme-rdma-prepare-for-queue_rqs-implementation.patch --]
[-- Type: text/plain, Size: 6618 bytes --]

From 0de2836ce21df6801db580e154296544a741b6c4 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy <mgurtovoy@nvidia.com>
Date: Thu, 2 Dec 2021 19:59:00 +0200
Subject: [PATCH 1/2] nvme-rdma: prepare for queue_rqs implementation

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/rdma.c | 127 ++++++++++++++++++++++++++-------------
 1 file changed, 84 insertions(+), 43 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 850f84d204d0..2d608cb48392 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -70,6 +70,7 @@ struct nvme_rdma_request {
 	struct ib_sge		sge[1 + NVME_RDMA_MAX_INLINE_SEGMENTS];
 	u32			num_sge;
 	struct ib_reg_wr	reg_wr;
+	struct ib_send_wr	send_wr;
 	struct ib_cqe		reg_cqe;
 	struct nvme_rdma_queue  *queue;
 	struct nvme_rdma_sgl	data_sgl;
@@ -1635,33 +1636,31 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 		nvme_rdma_end_request(req);
 }
 
-static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
+static void nvme_rdma_prep_send(struct nvme_rdma_queue *queue,
 		struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge,
-		struct ib_send_wr *first)
+		struct ib_send_wr *wr)
 {
-	struct ib_send_wr wr;
-	int ret;
-
-	sge->addr   = qe->dma;
+	sge->addr = qe->dma;
 	sge->length = sizeof(struct nvme_command);
-	sge->lkey   = queue->device->pd->local_dma_lkey;
+	sge->lkey = queue->device->pd->local_dma_lkey;
 
-	wr.next       = NULL;
-	wr.wr_cqe     = &qe->cqe;
-	wr.sg_list    = sge;
-	wr.num_sge    = num_sge;
-	wr.opcode     = IB_WR_SEND;
-	wr.send_flags = IB_SEND_SIGNALED;
+	wr->next = NULL;
+	wr->wr_cqe = &qe->cqe;
+	wr->sg_list = sge;
+	wr->num_sge = num_sge;
+	wr->opcode = IB_WR_SEND;
+	wr->send_flags = IB_SEND_SIGNALED;
+}
 
-	if (first)
-		first->next = &wr;
-	else
-		first = &wr;
+static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
+		struct ib_send_wr *wr)
+{
+	int ret;
 
-	ret = ib_post_send(queue->qp, first, NULL);
+	ret = ib_post_send(queue->qp, wr, NULL);
 	if (unlikely(ret)) {
 		dev_err(queue->ctrl->ctrl.device,
-			     "%s failed with error code %d\n", __func__, ret);
+			"%s failed with error code %d\n", __func__, ret);
 	}
 	return ret;
 }
@@ -1715,6 +1714,7 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg)
 	struct nvme_rdma_qe *sqe = &ctrl->async_event_sqe;
 	struct nvme_command *cmd = sqe->data;
 	struct ib_sge sge;
+	struct ib_send_wr wr;
 	int ret;
 
 	ib_dma_sync_single_for_cpu(dev, sqe->dma, sizeof(*cmd), DMA_TO_DEVICE);
@@ -1730,7 +1730,8 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg)
 	ib_dma_sync_single_for_device(dev, sqe->dma, sizeof(*cmd),
 			DMA_TO_DEVICE);
 
-	ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL);
+	nvme_rdma_prep_send(queue, sqe, &sge, 1, &wr);
+	ret = nvme_rdma_post_send(queue, &wr);
 	WARN_ON_ONCE(ret);
 }
 
@@ -2034,27 +2035,35 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
 	return BLK_EH_RESET_TIMER;
 }
 
-static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
-		const struct blk_mq_queue_data *bd)
+static blk_status_t nvme_rdma_cleanup_rq(struct nvme_rdma_queue *queue,
+		struct request *rq, int err)
 {
-	struct nvme_ns *ns = hctx->queue->queuedata;
-	struct nvme_rdma_queue *queue = hctx->driver_data;
-	struct request *rq = bd->rq;
+	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	blk_status_t ret;
+
+	nvme_rdma_unmap_data(queue, rq);
+	if (err == -EIO)
+		ret = nvme_host_path_error(rq);
+	else if (err == -ENOMEM || err == -EAGAIN)
+		ret = BLK_STS_RESOURCE;
+	else
+		ret = BLK_STS_IOERR;
+	nvme_cleanup_cmd(rq);
+	ib_dma_unmap_single(queue->device->dev, req->sqe.dma,
+			    sizeof(struct nvme_command), DMA_TO_DEVICE);
+	return ret;
+}
+
+static blk_status_t nvme_rdma_prep_rq(struct nvme_rdma_queue *queue,
+		struct request *rq, struct nvme_ns *ns)
+{
+	struct ib_device *dev = queue->device->dev;
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_qe *sqe = &req->sqe;
 	struct nvme_command *c = nvme_req(rq)->cmd;
-	struct ib_device *dev;
-	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
 	blk_status_t ret;
 	int err;
 
-	WARN_ON_ONCE(rq->tag < 0);
-
-	if (!nvme_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
-		return nvme_fail_nonready_command(&queue->ctrl->ctrl, rq);
-
-	dev = queue->device->dev;
-
 	req->sqe.dma = ib_dma_map_single(dev, req->sqe.data,
 					 sizeof(struct nvme_command),
 					 DMA_TO_DEVICE);
@@ -2083,8 +2092,8 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	err = nvme_rdma_map_data(queue, rq, c);
 	if (unlikely(err < 0)) {
 		dev_err(queue->ctrl->ctrl.device,
-			     "Failed to map data (%d)\n", err);
-		goto err;
+			"Failed to map data (%d)\n", err);
+		goto out_err;
 	}
 
 	sqe->cqe.done = nvme_rdma_send_done;
@@ -2092,16 +2101,13 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	ib_dma_sync_single_for_device(dev, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
-	err = nvme_rdma_post_send(queue, sqe, req->sge, req->num_sge,
-			req->mr ? &req->reg_wr.wr : NULL);
-	if (unlikely(err))
-		goto err_unmap;
+	nvme_rdma_prep_send(queue, sqe, req->sge, req->num_sge, &req->send_wr);
+	if (req->mr)
+		req->reg_wr.wr.next = &req->send_wr;
 
 	return BLK_STS_OK;
 
-err_unmap:
-	nvme_rdma_unmap_data(queue, rq);
-err:
+out_err:
 	if (err == -EIO)
 		ret = nvme_host_path_error(rq);
 	else if (err == -ENOMEM || err == -EAGAIN)
@@ -2115,6 +2121,41 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
+		const struct blk_mq_queue_data *bd)
+{
+	struct nvme_ns *ns = hctx->queue->queuedata;
+	struct nvme_rdma_queue *queue = hctx->driver_data;
+	struct request *rq = bd->rq;
+	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
+	struct ib_send_wr *wr;
+	int err;
+
+	WARN_ON_ONCE(rq->tag < 0);
+
+	if (!nvme_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
+		return nvme_fail_nonready_command(&queue->ctrl->ctrl, rq);
+
+	err = nvme_rdma_prep_rq(queue, rq, ns);
+	if (unlikely(err))
+		return err;
+
+	if (req->mr)
+		wr = &req->reg_wr.wr;
+	else
+		wr = &req->send_wr;
+
+	err = nvme_rdma_post_send(queue, wr);
+	if (unlikely(err))
+		goto out_cleanup_rq;
+
+	return BLK_STS_OK;
+
+out_cleanup_rq:
+	return nvme_rdma_cleanup_rq(queue, rq, err);
+}
+
 static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
 {
 	struct nvme_rdma_queue *queue = hctx->driver_data;
-- 
2.18.1


[-- Attachment #3: 0002-nvme-rdma-add-support-for-mq_ops-queue_rqs.patch --]
[-- Type: text/plain, Size: 2952 bytes --]

From 851a1f35420206f7b631d5d12b135e5a7c84b912 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy <mgurtovoy@nvidia.com>
Date: Mon, 20 Dec 2021 20:42:49 +0200
Subject: [PATCH 2/2] nvme-rdma: add support for mq_ops->queue_rqs()

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/rdma.c | 75 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 2d608cb48392..765bb57f0a55 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2121,6 +2121,80 @@ static blk_status_t nvme_rdma_prep_rq(struct nvme_rdma_queue *queue,
 	return ret;
 }
 
+static bool nvme_rdma_prep_rq_batch(struct nvme_rdma_queue *queue,
+		struct request *rq)
+{
+	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
+
+	if (unlikely(!nvme_check_ready(&queue->ctrl->ctrl, rq, queue_ready)))
+		return false;
+
+	rq->mq_hctx->tags->rqs[rq->tag] = rq;
+	return nvme_rdma_prep_rq(queue, rq, rq->q->queuedata) == BLK_STS_OK;
+}
+
+static void nvme_rdma_submit_cmds(struct nvme_rdma_queue *queue,
+		struct request **rqlist)
+{
+	struct request *first_rq = rq_list_peek(rqlist);
+	struct nvme_rdma_request *nreq = blk_mq_rq_to_pdu(first_rq);
+	struct ib_send_wr *first, *last = NULL;
+	int ret;
+
+	if (nreq->mr)
+		first = &nreq->reg_wr.wr;
+	else
+		first = &nreq->send_wr;
+
+	while (!rq_list_empty(*rqlist)) {
+		struct request *rq = rq_list_pop(rqlist);
+		struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+		struct ib_send_wr *tmp;
+
+		tmp = last;
+		last = &req->send_wr;
+		if (tmp) {
+			if (req->mr)
+				tmp->next = &req->reg_wr.wr;
+			else
+				tmp->next = &req->send_wr;
+		}
+	}
+
+	ret = nvme_rdma_post_send(queue, first);
+	WARN_ON_ONCE(ret);
+}
+
+static void nvme_rdma_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_rdma_queue *queue = req->mq_hctx->driver_data;
+
+		if (!nvme_rdma_prep_rq_batch(queue, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			if (prev)
+				prev->rq_next = NULL;
+			nvme_rdma_submit_cmds(queue, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 		const struct blk_mq_queue_data *bd)
 {
@@ -2258,6 +2332,7 @@ static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
 
 static const struct blk_mq_ops nvme_rdma_mq_ops = {
 	.queue_rq	= nvme_rdma_queue_rq,
+	.queue_rqs	= nvme_rdma_queue_rqs,
 	.complete	= nvme_rdma_complete_rq,
 	.init_request	= nvme_rdma_init_request,
 	.exit_request	= nvme_rdma_exit_request,
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-20 18:48                                     ` Max Gurtovoy
@ 2021-12-20 18:58                                       ` Jens Axboe
  2021-12-21 10:20                                         ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-20 18:58 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

On 12/20/21 11:48 AM, Max Gurtovoy wrote:
> 
> On 12/20/2021 6:34 PM, Jens Axboe wrote:
>> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>>>
>>>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>>> requests are issued.
>>>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>>> doing.
>>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>>
>>>>>>>>> So it works only for io_uring based applications ?
>>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>>>> many IOs will be submitted
>>>>>>> Can you please share an example application (or is it fio patches) that
>>>>>>> can submit batches ? The same that was used to test this patchset is
>>>>>>> fine too.
>>>>>>>
>>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>>>> and do batches of 32 for complete and submit. You can just run:
>>>>>>
>>>>>> t/io_uring <dev or file>
>>>>>>
>>>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>>>> but it was never called using the t/io_uring test nor fio with
>>>>> iodepth_batch=32 flag with io_uring engine.
>>>>>
>>>>> Any idea what might be the issue ?
>>>>>
>>>>> I installed fio from sources..
>>>> The two main restrictions right now are a scheduler and shared tags, are
>>>> you using any of those?
>>> No.
>>>
>>> But maybe I'm missing the .commit_rqs callback. is it mandatory for this
>>> feature ?
>> I've only tested with nvme pci which does have it, but I don't think so.
>> Unless there's some check somewhere that makes it necessary. Can you
>> share the patch you're currently using on top?
> 
> The attached POC patches apply cleanly on block/for-next branch

Looks reasonable to me from a quick glance. Not sure why you're not
seeing it hit; maybe try instrumenting
block/blk-mq.c:blk_mq_flush_plug_list() to find out why it isn't being
called. As mentioned, no elevator or shared tags; it should work for
anything else, basically.
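
Something like the throwaway helper below (debugging aid only, not part of
the series, and the name is made up) can be called at the top of
blk_mq_flush_plug_list() to dump the conditions that gate the ->queue_rqs()
fast path:

/* temporary debugging aid only; the helper name is made up */
static void blk_mq_debug_plug_flush(struct blk_plug *plug, bool from_schedule)
{
	struct request *rq = rq_list_peek(&plug->mq_list);

	pr_info("plug flush: multiple_queues=%d has_elevator=%d from_schedule=%d queue_rqs=%d shared=%d\n",
		plug->multiple_queues, plug->has_elevator, from_schedule,
		rq->q->mq_ops->queue_rqs != NULL,
		!!(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED));
}

If shared prints as 1, the BLK_MQ_F_TAG_QUEUE_SHARED bypass is what's
keeping ->queue_rqs() from being called.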

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 1/4] block: add mq_ops->queue_rqs hook
  2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
  2021-12-16  9:01   ` Christoph Hellwig
@ 2021-12-20 20:36   ` Keith Busch
  2021-12-20 20:47     ` Jens Axboe
  1 sibling, 1 reply; 59+ messages in thread
From: Keith Busch @ 2021-12-20 20:36 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, linux-nvme

On Wed, Dec 15, 2021 at 09:24:18AM -0700, Jens Axboe wrote:
> +		/*
> +		 * Peek first request and see if we have a ->queue_rqs() hook.
> +		 * If we do, we can dispatch the whole plug list in one go. We
> +		 * already know at this point that all requests belong to the
> +		 * same queue, caller must ensure that's the case.
> +		 *
> +		 * Since we pass off the full list to the driver at this point,
> +		 * we do not increment the active request count for the queue.
> +		 * Bypass shared tags for now because of that.
> +		 */
> +		if (q->mq_ops->queue_rqs &&
> +		    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
> +			blk_mq_run_dispatch_ops(q,
> +				q->mq_ops->queue_rqs(&plug->mq_list));

I think we still need to verify the queue isn't quiesced within
blk_mq_run_dispatch_ops()'s RCU-protected area, prior to calling
.queue_rqs(). Something like below. Or is this supposed to be the
low-level driver's responsibility now?

---
+void __blk_mq_flush_plug_list(struct request_queue *q, struct blk_plug *plug)
+{
+	if (blk_queue_quiesced(q))
+		return;
+	q->mq_ops->queue_rqs(&plug->mq_list);
+}
+
 void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
 	struct blk_mq_hw_ctx *this_hctx;
@@ -2580,7 +2587,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 		if (q->mq_ops->queue_rqs &&
 		    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
 			blk_mq_run_dispatch_ops(q,
-				q->mq_ops->queue_rqs(&plug->mq_list));
+				__blk_mq_flush_plug_list(q, plug));
 			if (rq_list_empty(plug->mq_list))
 				return;
 		}
--

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 1/4] block: add mq_ops->queue_rqs hook
  2021-12-20 20:36   ` Keith Busch
@ 2021-12-20 20:47     ` Jens Axboe
  0 siblings, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-20 20:47 UTC (permalink / raw)
  To: Keith Busch; +Cc: io-uring, linux-nvme

On 12/20/21 1:36 PM, Keith Busch wrote:
> On Wed, Dec 15, 2021 at 09:24:18AM -0700, Jens Axboe wrote:
>> +		/*
>> +		 * Peek first request and see if we have a ->queue_rqs() hook.
>> +		 * If we do, we can dispatch the whole plug list in one go. We
>> +		 * already know at this point that all requests belong to the
>> +		 * same queue, caller must ensure that's the case.
>> +		 *
>> +		 * Since we pass off the full list to the driver at this point,
>> +		 * we do not increment the active request count for the queue.
>> +		 * Bypass shared tags for now because of that.
>> +		 */
>> +		if (q->mq_ops->queue_rqs &&
>> +		    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
>> +			blk_mq_run_dispatch_ops(q,
>> +				q->mq_ops->queue_rqs(&plug->mq_list));
> 
> I think we still need to verify the queue isn't quiesced within
> blk_mq_run_dispatch_ops()'s rcu protected area, prior to calling
> .queue_rqs(). Something like below. Or is this supposed to be the
> low-level drivers responsibility now?

Yes, that seems very reasonable, and I'd much rather do that than punt it
to the driver. Care to send it as a real patch?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-20 18:58                                       ` Jens Axboe
@ 2021-12-21 10:20                                         ` Max Gurtovoy
  2021-12-21 15:23                                           ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-21 10:20 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer


On 12/20/2021 8:58 PM, Jens Axboe wrote:
> On 12/20/21 11:48 AM, Max Gurtovoy wrote:
>> On 12/20/2021 6:34 PM, Jens Axboe wrote:
>>> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>>>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>>>> requests are issued.
>>>>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>>>> doing.
>>>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>>>
>>>>>>>>>> So it works only for io_uring based applications ?
>>>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>>>>> many IOs will be submitted
>>>>>>>> Can you please share an example application (or is it fio patches) that
>>>>>>>> can submit batches ? The same that was used to test this patchset is
>>>>>>>> fine too.
>>>>>>>>
>>>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>>>>> and do batches of 32 for complete and submit. You can just run:
>>>>>>>
>>>>>>> t/io_uring <dev or file>
>>>>>>>
>>>>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>>>>> but it was never called using the t/io_uring test nor fio with
>>>>>> iodepth_batch=32 flag with io_uring engine.
>>>>>>
>>>>>> Any idea what might be the issue ?
>>>>>>
>>>>>> I installed fio from sources..
>>>>> The two main restrictions right now are a scheduler and shared tags, are
>>>>> you using any of those?
>>>> No.
>>>>
>>>> But maybe I'm missing the .commit_rqs callback. is it mandatory for this
>>>> feature ?
>>> I've only tested with nvme pci which does have it, but I don't think so.
>>> Unless there's some check somewhere that makes it necessary. Can you
>>> share the patch you're currently using on top?
>> The attached POC patches apply cleanly on block/for-next branch
> Looks reasonable to me from a quick glance. Not sure why you're not
> seeing it hit, maybe try and instrument
> block/blk-mq.c:blk_mq_flush_plug_list() and find out why it isn't being
> called? As mentioned, no elevator or shared tags, should work for
> anything else basically.

Yes. I saw that the blk layer converted the original non-shared tagset
of NVMe/RDMA to a shared one because of the nvmf connect request queue,
which uses the same tagset (only the reserved tag).

So I guess this is the reason I couldn't reach the new queue_rqs code.

The question is how we can overcome this?

Should we create a new tagset for the NVMf fabrics connect_q? Or maybe
not mark the tagset as shared for reserved IDs?

Christoph, any suggestion here?

>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-21 10:20                                         ` Max Gurtovoy
@ 2021-12-21 15:23                                           ` Jens Axboe
  2021-12-21 15:29                                             ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-21 15:23 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

On 12/21/21 3:20 AM, Max Gurtovoy wrote:
> 
> On 12/20/2021 8:58 PM, Jens Axboe wrote:
>> On 12/20/21 11:48 AM, Max Gurtovoy wrote:
>>> On 12/20/2021 6:34 PM, Jens Axboe wrote:
>>>> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>>>>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>>>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>>>>> I'm saying that the you can wait to the batch_max_count too long and it
>>>>>>>>>>>>>>>>> won't be efficient from latency POV.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So it's better to limit the block layar to wait for the first to come: x
>>>>>>>>>>>>>>>>> usecs or batch_max_count before issue queue_rqs.
>>>>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>>>>> requests are issued.
>>>>>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>>>>> doing.
>>>>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>>>>
>>>>>>>>>>> So it works only for io_uring based applications ?
>>>>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>>>>>> many IOs will be submitted
>>>>>>>>> Can you please share an example application (or is it fio patches) that
>>>>>>>>> can submit batches ? The same that was used to test this patchset is
>>>>>>>>> fine too.
>>>>>>>>>
>>>>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>>>>>> and do batches of 32 for complete and submit. You can just run:
>>>>>>>>
>>>>>>>> t/io_uring <dev or file>
>>>>>>>>
>>>>>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>>>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>>>>>> but it was never called using the t/io_uring test nor fio with
>>>>>>> iodepth_batch=32 flag with io_uring engine.
>>>>>>>
>>>>>>> Any idea what might be the issue ?
>>>>>>>
>>>>>>> I installed fio from sources..
>>>>>> The two main restrictions right now are a scheduler and shared tags, are
>>>>>> you using any of those?
>>>>> No.
>>>>>
>>>>> But maybe I'm missing the .commit_rqs callback. is it mandatory for this
>>>>> feature ?
>>>> I've only tested with nvme pci which does have it, but I don't think so.
>>>> Unless there's some check somewhere that makes it necessary. Can you
>>>> share the patch you're currently using on top?
>>> The attached POC patches apply cleanly on block/for-next branch
>> Looks reasonable to me from a quick glance. Not sure why you're not
>> seeing it hit, maybe try and instrument
>> block/blk-mq.c:blk_mq_flush_plug_list() and find out why it isn't being
>> called? As mentioned, no elevator or shared tags, should work for
>> anything else basically.
> 
> Yes. I saw that the blk layer converted the original non-shared tagset 
> of NVMe/RDMA to a shared one because of the nvmf connect request queue 
> that is using the same tagset (uses only the reserved tag).
> 
> So I guess this is the reason that I couldn't reach the new code of 
> queue_rqs.
> 
> The question is how we can overcome this ?

Do we need to mark it shared for just the reserved tags? I wouldn't
think so...

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread
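
A minimal illustrative sketch, not code from this series, of the plugging contract described above: any in-kernel submitter that knows it is about to issue several IOs can bracket them with blk_start_plug()/blk_finish_plug(), and a driver that implements ->queue_rqs() then sees the whole batch at plug flush time instead of one request per call. The function name below is made up for illustration; only blk_start_plug(), submit_bio() and blk_finish_plug() are real APIs.

static void submit_bio_batch(struct bio **bios, unsigned int nr)
{
        struct blk_plug plug;
        unsigned int i;

        /*
         * Requests created while the plug is active are collected on the
         * plug's request list; the batch is naturally capped at
         * BLK_MAX_REQUEST_COUNT (32), as discussed above.
         */
        blk_start_plug(&plug);
        for (i = 0; i < nr; i++)
                submit_bio(bios[i]);
        /*
         * Flushing the plug hands the whole list to ->queue_rqs() when the
         * driver provides it; otherwise requests are issued one by one.
         */
        blk_finish_plug(&plug);
}

io_uring already brackets its submission loop this way, which is why it is the first user; wiring up aio or other blk_start_plug() callers would follow the same pattern.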

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-21 15:23                                           ` Jens Axboe
@ 2021-12-21 15:29                                             ` Max Gurtovoy
  2021-12-21 15:33                                               ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-21 15:29 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer


On 12/21/2021 5:23 PM, Jens Axboe wrote:
> On 12/21/21 3:20 AM, Max Gurtovoy wrote:
>> On 12/20/2021 8:58 PM, Jens Axboe wrote:
>>> On 12/20/21 11:48 AM, Max Gurtovoy wrote:
>>>> On 12/20/2021 6:34 PM, Jens Axboe wrote:
>>>>> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>>>>>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>>>>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>>>>>> I'm saying that you can end up waiting too long for batch_max_count, and it
>>>>>>>>>>>>>>>>>> won't be efficient from a latency POV.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> So it's better to limit the block layer to wait for whichever comes first: x
>>>>>>>>>>>>>>>>>> usecs or batch_max_count, before issuing queue_rqs.
>>>>>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>>>>>> requests are issued.
>>>>>>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>>>>>> doing.
>>>>>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>>>>>
>>>>>>>>>>>> So it works only for io_uring based applications ?
>>>>>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>>>>>>> many IOs will be submitted
>>>>>>>>>> Can you please share an example application (or is it fio patches) that
>>>>>>>>>> can submit batches ? The same that was used to test this patchset is
>>>>>>>>>> fine too.
>>>>>>>>>>
>>>>>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>>>>>>> and do batches of 32 for complete and submit. You can just run:
>>>>>>>>>
>>>>>>>>> t/io_uring <dev or file>
>>>>>>>>>
>>>>>>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>>>>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>>>>>>> but it was never called using the t/io_uring test nor fio with
>>>>>>>> iodepth_batch=32 flag with io_uring engine.
>>>>>>>>
>>>>>>>> Any idea what might be the issue ?
>>>>>>>>
>>>>>>>> I installed fio from sources..
>>>>>>> The two main restrictions right now are a scheduler and shared tags, are
>>>>>>> you using any of those?
>>>>>> No.
>>>>>>
>>>>>> But maybe I'm missing the .commit_rqs callback. is it mandatory for this
>>>>>> feature ?
>>>>> I've only tested with nvme pci which does have it, but I don't think so.
>>>>> Unless there's some check somewhere that makes it necessary. Can you
>>>>> share the patch you're currently using on top?
>>>> The attached POC patches apply cleanly on block/for-next branch
>>> Looks reasonable to me from a quick glance. Not sure why you're not
>>> seeing it hit, maybe try and instrument
>>> block/blk-mq.c:blk_mq_flush_plug_list() and find out why it isn't being
>>> called? As mentioned, no elevator or shared tags, should work for
>>> anything else basically.
>> Yes. I saw that the blk layer converted the original non-shared tagset
>> of NVMe/RDMA to a shared one because of the nvmf connect request queue
>> that is using the same tagset (uses only the reserved tag).
>>
>> So I guess this is the reason that I couldn't reach the new code of
>> queue_rqs.
>>
>> The question is how we can overcome this ?
> Do we need to mark it shared for just the reserved tags? I wouldn't
> think so...

We don't mark it. The block layer does it in blk_mq_add_queue_tag_set:

if (!list_empty(&set->tag_list) &&
             !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED))

>

^ permalink raw reply	[flat|nested] 59+ messages in thread
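
One possible shape for the instrumentation suggested above, offered purely as a debugging sketch and not as anything from the series: a helper dropped into block/blk-mq.c next to blk_mq_flush_plug_list() that prints the conditions gating the ->queue_rqs() path. The struct blk_plug and hctx field names assume the 5.16-era code plus this series, so verify them against the tree actually in use.

static void blk_mq_debug_plug_path(struct blk_plug *plug)
{
        struct request *rq = rq_list_peek(&plug->mq_list);

        /* The batched ->queue_rqs() path is skipped if any of these are off. */
        pr_info("plug: multiple_queues=%d has_elevator=%d shared_tags=%d queue_rqs=%d\n",
                plug->multiple_queues, plug->has_elevator,
                !!(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED),
                !!rq->q->mq_ops->queue_rqs);
}

Called at the top of blk_mq_flush_plug_list(), a shared_tags=1 line here would point straight at the tagset conversion described in the message above.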

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-21 15:29                                             ` Max Gurtovoy
@ 2021-12-21 15:33                                               ` Jens Axboe
  2021-12-21 16:08                                                 ` Max Gurtovoy
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-21 15:33 UTC (permalink / raw)
  To: Max Gurtovoy, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer

On 12/21/21 8:29 AM, Max Gurtovoy wrote:
> 
> On 12/21/2021 5:23 PM, Jens Axboe wrote:
>> On 12/21/21 3:20 AM, Max Gurtovoy wrote:
>>> On 12/20/2021 8:58 PM, Jens Axboe wrote:
>>>> On 12/20/21 11:48 AM, Max Gurtovoy wrote:
>>>>> On 12/20/2021 6:34 PM, Jens Axboe wrote:
>>>>>> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>>>>>>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>>>>>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>>>>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>>>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>>>>>>> I'm saying that you can end up waiting too long for batch_max_count, and it
>>>>>>>>>>>>>>>>>>> won't be efficient from a latency POV.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> So it's better to limit the block layer to wait for whichever comes first: x
>>>>>>>>>>>>>>>>>>> usecs or batch_max_count, before issuing queue_rqs.
>>>>>>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>>>>>>> requests are issued.
>>>>>>>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>>>>>>> doing.
>>>>>>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So it works only for io_uring based applications ?
>>>>>>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>>>>>>>> many IOs will be submitted
>>>>>>>>>>> Can you please share an example application (or is it fio patches) that
>>>>>>>>>>> can submit batches ? The same that was used to test this patchset is
>>>>>>>>>>> fine too.
>>>>>>>>>>>
>>>>>>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>>>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>>>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>>>>>>>> and do batches of 32 for complete and submit. You can just run:
>>>>>>>>>>
>>>>>>>>>> t/io_uring <dev or file>
>>>>>>>>>>
>>>>>>>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>>>>>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>>>>>>>> but it was never called using the t/io_uring test nor fio with
>>>>>>>>> iodepth_batch=32 flag with io_uring engine.
>>>>>>>>>
>>>>>>>>> Any idea what might be the issue ?
>>>>>>>>>
>>>>>>>>> I installed fio from sources..
>>>>>>>> The two main restrictions right now are a scheduler and shared tags, are
>>>>>>>> you using any of those?
>>>>>>> No.
>>>>>>>
>>>>>>> But maybe I'm missing the .commit_rqs callback. is it mandatory for this
>>>>>>> feature ?
>>>>>> I've only tested with nvme pci which does have it, but I don't think so.
>>>>>> Unless there's some check somewhere that makes it necessary. Can you
>>>>>> share the patch you're currently using on top?
>>>>> The attached POC patches apply cleanly on block/for-next branch
>>>> Looks reasonable to me from a quick glance. Not sure why you're not
>>>> seeing it hit, maybe try and instrument
>>>> block/blk-mq.c:blk_mq_flush_plug_list() and find out why it isn't being
>>>> called? As mentioned, no elevator or shared tags, should work for
>>>> anything else basically.
>>> Yes. I saw that the blk layer converted the original non-shared tagset
>>> of NVMe/RDMA to a shared one because of the nvmf connect request queue
>>> that is using the same tagset (uses only the reserved tag).
>>>
>>> So I guess this is the reason that I couldn't reach the new code of
>>> queue_rqs.
>>>
>>> The question is how we can overcome this ?
>> Do we need to mark it shared for just the reserved tags? I wouldn't
>> think so...
> 
> We don't mark it. The block layer does it in blk_mq_add_queue_tag_set:
> 
> if (!list_empty(&set->tag_list) &&
>              !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED))

Yes, that's what I meant, do we need to mark it as such for just the
reserved tags?

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-21 15:33                                               ` Jens Axboe
@ 2021-12-21 16:08                                                 ` Max Gurtovoy
  0 siblings, 0 replies; 59+ messages in thread
From: Max Gurtovoy @ 2021-12-21 16:08 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: io-uring, linux-nvme, Hannes Reinecke, Oren Duer


On 12/21/2021 5:33 PM, Jens Axboe wrote:
> On 12/21/21 8:29 AM, Max Gurtovoy wrote:
>> On 12/21/2021 5:23 PM, Jens Axboe wrote:
>>> On 12/21/21 3:20 AM, Max Gurtovoy wrote:
>>>> On 12/20/2021 8:58 PM, Jens Axboe wrote:
>>>>> On 12/20/21 11:48 AM, Max Gurtovoy wrote:
>>>>>> On 12/20/2021 6:34 PM, Jens Axboe wrote:
>>>>>>> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>>>>>>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>>>>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>>>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> So need to decide if to open code it or use the helper function.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>>>>>>>> the performance degration measured on the first try was a measurement
>>>>>>>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>>>>>>>> giving 1 dbr for a batch of N commands sounds good idea. Also for RDMA host.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> But how do you moderate it ? what is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>>>>>>>> algorithm ?
>>>>>>>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>>>>>>>> I'm saying that you can end up waiting too long for batch_max_count, and it
>>>>>>>>>>>>>>>>>>>> won't be efficient from a latency POV.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> So it's better to limit the block layer to wait for whichever comes first: x
>>>>>>>>>>>>>>>>>>>> usecs or batch_max_count, before issuing queue_rqs.
>>>>>>>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>>>>>>>> will result in a plug flush to begin with.
>>>>>>>>>>>>>>>>>> I'm not aware of the plug. I hope to get to it soon.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> My concern is if the user application submitted only 28 requests and
>>>>>>>>>>>>>>>>>> then you'll wait forever ? or for very long time.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I guess not, but I'm asking how do you know how to batch and when to
>>>>>>>>>>>>>>>>>> stop in case 32 commands won't arrive anytime soon.
>>>>>>>>>>>>>>>>> The plug is in the stack of the task, so that condition can never
>>>>>>>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>>>>>>>> requests are issued.
>>>>>>>>>>>>>>>> So if I'm running fio with --iodepth=28 what will plug do ? send batches
>>>>>>>>>>>>>>>> of 28 ? or 1 by 1 ?
>>>>>>>>>>>>>>> --iodepth just controls the overall depth, the batch submit count
>>>>>>>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>>>>>>>> one at the time, then you'll get one at the time further down too. Hence
>>>>>>>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>>>>>>>> doing.
>>>>>>>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So it works only for io_uring based applications ?
>>>>>>>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>>>>>>>> other spots that currently use blk_start_plug() and has an idea of how
>>>>>>>>>>>>> many IOs will be submitted
>>>>>>>>>>>> Can you please share an example application (or is it fio patches) that
>>>>>>>>>>>> can submit batches ? The same that was used to test this patchset is
>>>>>>>>>>>> fine too.
>>>>>>>>>>>>
>>>>>>>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>>>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>>>>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>>>>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run QD of
>>>>>>>>>>> and do batches of 32 for complete and submit. You can just run:
>>>>>>>>>>>
>>>>>>>>>>> t/io_uring <dev or file>
>>>>>>>>>>>
>>>>>>>>>>> maybe adding -p0 for IRQ driven rather than polled IO.
>>>>>>>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA
>>>>>>>>>> but it was never called using the t/io_uring test nor fio with
>>>>>>>>>> iodepth_batch=32 flag with io_uring engine.
>>>>>>>>>>
>>>>>>>>>> Any idea what might be the issue ?
>>>>>>>>>>
>>>>>>>>>> I installed fio from sources..
>>>>>>>>> The two main restrictions right now are a scheduler and shared tags, are
>>>>>>>>> you using any of those?
>>>>>>>> No.
>>>>>>>>
>>>>>>>> But maybe I'm missing the .commit_rqs callback. is it mandatory for this
>>>>>>>> feature ?
>>>>>>> I've only tested with nvme pci which does have it, but I don't think so.
>>>>>>> Unless there's some check somewhere that makes it necessary. Can you
>>>>>>> share the patch you're currently using on top?
>>>>>> The attached POC patches apply cleanly on block/for-next branch
>>>>> Looks reasonable to me from a quick glance. Not sure why you're not
>>>>> seeing it hit, maybe try and instrument
>>>>> block/blk-mq.c:blk_mq_flush_plug_list() and find out why it isn't being
>>>>> called? As mentioned, no elevator or shared tags, should work for
>>>>> anything else basically.
>>>> Yes. I saw that the blk layer converted the original non-shared tagset
>>>> of NVMe/RDMA to a shared one because of the nvmf connect request queue
>>>> that is using the same tagset (uses only the reserved tag).
>>>>
>>>> So I guess this is the reason that I couldn't reach the new code of
>>>> queue_rqs.
>>>>
>>>> The question is how we can overcome this ?
>>> Do we need to mark it shared for just the reserved tags? I wouldn't
>>> think so...
>> We don't mark it. The block layer does it in blk_mq_add_queue_tag_set:
>>
>> if (!list_empty(&set->tag_list) &&
>>               !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
> Yes, that's what I meant, do we need to mark it as such for just the
> reserved tags?

I'm afraid it isn't related only to the reserved tags.

If you have an NVMe device with 2 namespaces, it will hit this code and
mark the set as shared. And then queue_rqs() won't be called for NVMe
PCI either.


>

^ permalink raw reply	[flat|nested] 59+ messages in thread
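
To make the two quoted lines easier to place, here is a paraphrased sketch of that spot in blk_mq_add_queue_tag_set(); the function and helper names below are illustrative stand-ins rather than the exact kernel code. Once a second request_queue is attached to the tag_set (the fabrics connect_q for NVMe/RDMA, or simply a second namespace on an NVMe PCI controller), the whole set is flipped to BLK_MQ_F_TAG_QUEUE_SHARED, and the plug flush then falls back to per-request issue because, as Keith points out elsewhere in the thread, the batched path does not account for active requests on shared tags.

static void add_queue_to_tag_set(struct blk_mq_tag_set *set,
                                 struct request_queue *q)
{
        mutex_lock(&set->tag_list_lock);

        /* A second queue on this set flips everything to shared tags. */
        if (!list_empty(&set->tag_list) &&
            !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
                mark_tag_set_shared(set);       /* illustrative helper name */

        list_add_tail(&q->tag_set_list, &set->tag_list);
        mutex_unlock(&set->tag_list_lock);
}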

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:39 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
@ 2021-12-16 17:53   ` Christoph Hellwig
  0 siblings, 0 replies; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-16 17:53 UTC (permalink / raw)
  To: Jens Axboe
  Cc: io-uring, linux-block, linux-nvme, Hannes Reinecke, Keith Busch

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:38 [PATCHSET v5 0/4] Add support for list issue Jens Axboe
@ 2021-12-16 16:39 ` Jens Axboe
  2021-12-16 17:53   ` Christoph Hellwig
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:39 UTC (permalink / raw)
  To: io-uring, linux-block, linux-nvme
  Cc: Jens Axboe, Hannes Reinecke, Keith Busch

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request.

If errors are encountered, leave them in the passed in list. Then the
block layer will handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 59 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 7062128c8204..51a903d91d92 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -969,6 +969,64 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		nvme_sq_copy_cmd(nvmeq, &iod->cmd);
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+{
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return false;
+	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+		return false;
+
+	req->mq_hctx->tags->rqs[req->tag] = req;
+	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+		if (!nvme_prep_rq_batch(nvmeq, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			if (prev)
+				prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1670,6 +1728,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread
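
As a companion to the commit message above, here is a hypothetical skeleton of what any other blk-mq driver (the NVMe/RDMA experiment discussed earlier in the thread, for instance) has to honour when implementing ->queue_rqs(). It is a sketch of the stated contract, not code from the series: the list is all for one request_queue but may span hardware queues, and anything that cannot be issued must be left in *rqlist so the block layer falls back to per-request ->queue_rq(). All mydrv_* names and types are placeholders.

static void mydrv_queue_rqs(struct request **rqlist)
{
        struct request *req, *requeue_list = NULL;

        while ((req = rq_list_pop(rqlist))) {
                /* May differ per request even though the queue is the same. */
                struct mydrv_hw_queue *hwq = req->mq_hctx->driver_data;

                if (mydrv_prep(hwq, req) != BLK_STS_OK) {
                        /* Hand it back; the block layer reissues it alone. */
                        rq_list_add(&requeue_list, req);
                        continue;
                }
                mydrv_post_cmd(hwq, req);       /* queue the command, no doorbell yet */
        }
        mydrv_ring_doorbells();                 /* one notification for the whole batch */

        *rqlist = requeue_list;
}

A production implementation, like the NVMe PCI one above, additionally cuts the list at hardware-queue boundaries so that each submission queue gets a single doorbell write per batch.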

* [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-16 16:05 [PATCHSET v4 0/4] Add support for list issue Jens Axboe
@ 2021-12-16 16:05 ` Jens Axboe
  0 siblings, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-16 16:05 UTC (permalink / raw)
  To: io-uring, linux-block, linux-nvme
  Cc: Jens Axboe, Hannes Reinecke, Keith Busch

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request.

If errors are encountered, leave them in the passed in list. Then the
block layer will handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 58 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6be6b1ab4285..e34ad67c4c41 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -981,6 +981,63 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		nvme_sq_copy_cmd(nvmeq, &iod->cmd);
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+{
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return false;
+	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+		return false;
+
+	req->mq_hctx->tags->rqs[req->tag] = req;
+	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+		if (!nvme_prep_rq_batch(nvmeq, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1678,6 +1735,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-06  7:40   ` Christoph Hellwig
@ 2021-12-06 16:33     ` Jens Axboe
  0 siblings, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-06 16:33 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-nvme

On 12/6/21 12:40 AM, Christoph Hellwig wrote:
> On Fri, Dec 03, 2021 at 02:45:44PM -0700, Jens Axboe wrote:
>> This enables the block layer to send us a full plug list of requests
>> that need submitting. The block layer guarantees that they all belong
>> to the same queue, but we do have to check the hardware queue mapping
>> for each request.
>>
>> If errors are encountered, leave them in the passed in list. Then the
>> block layer will handle them individually.
>>
>> This is good for about a 4% improvement in peak performance, taking us
>> from 9.6M to 10M IOPS/core.
> 
> This looks pretty similar to my proposed cleanups (which is nice), but
> back then you mentioned the cleaner version was much slower.  Do you
> know what brought the speed back in this version?

Yes, it has that folded in and tweaked on top. Current version seems
to be fine.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-03 21:45 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
  2021-12-04 10:47   ` Hannes Reinecke
@ 2021-12-06  7:40   ` Christoph Hellwig
  2021-12-06 16:33     ` Jens Axboe
  1 sibling, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2021-12-06  7:40 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Fri, Dec 03, 2021 at 02:45:44PM -0700, Jens Axboe wrote:
> This enables the block layer to send us a full plug list of requests
> that need submitting. The block layer guarantees that they all belong
> to the same queue, but we do have to check the hardware queue mapping
> for each request.
> 
> If errors are encountered, leave them in the passed in list. Then the
> block layer will handle them individually.
> 
> This is good for about a 4% improvement in peak performance, taking us
> from 9.6M to 10M IOPS/core.

This looks pretty similar to my proposed cleanups (which is nice), but
back then you mentioned the cleaner version was much slower.  Do you
know what brought the speed back in this version?

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-03 21:45 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
@ 2021-12-04 10:47   ` Hannes Reinecke
  2021-12-06  7:40   ` Christoph Hellwig
  1 sibling, 0 replies; 59+ messages in thread
From: Hannes Reinecke @ 2021-12-04 10:47 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme

On 12/3/21 10:45 PM, Jens Axboe wrote:
> This enables the block layer to send us a full plug list of requests
> that need submitting. The block layer guarantees that they all belong
> to the same queue, but we do have to check the hardware queue mapping
> for each request.
> 
> If errors are encountered, leave them in the passed in list. Then the
> block layer will handle them individually.
> 
> This is good for about a 4% improvement in peak performance, taking us
> from 9.6M to 10M IOPS/core.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 61 insertions(+)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 59+ messages in thread

* [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-12-03 21:45 [PATCHSET v2 0/4] Add support for list issue Jens Axboe
@ 2021-12-03 21:45 ` Jens Axboe
  2021-12-04 10:47   ` Hannes Reinecke
  2021-12-06  7:40   ` Christoph Hellwig
  0 siblings, 2 replies; 59+ messages in thread
From: Jens Axboe @ 2021-12-03 21:45 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request.

If errors are encountered, leave them in the passed in list. Then the
block layer will handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6be6b1ab4285..197aa45ef7ef 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -981,6 +981,66 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
+				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
+		if (++nvmeq->sq_tail == nvmeq->q_depth)
+			nvmeq->sq_tail = 0;
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+{
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return false;
+	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+		return false;
+
+	req->mq_hctx->tags->rqs[req->tag] = req;
+	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+		if (!nvme_prep_rq_batch(nvmeq, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1678,6 +1738,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-11-17  3:38 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
  2021-11-17  8:39   ` Christoph Hellwig
@ 2021-11-17 19:41   ` Keith Busch
  1 sibling, 0 replies; 59+ messages in thread
From: Keith Busch @ 2021-11-17 19:41 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, hch

[adding linux-nvme]

On Tue, Nov 16, 2021 at 08:38:07PM -0700, Jens Axboe wrote:
> This enables the block layer to send us a full plug list of requests
> that need submitting. The block layer guarantees that they all belong
> to the same queue, but we do have to check the hardware queue mapping
> for each request.

So this means that the nvme namespace will always be the same for all
the requests in the list, but the rqlist may contain requests allocated
from different CPUs?
 
> If errors are encountered, leave them in the passed in list. Then the
> block layer will handle them individually.
> 
> This is good for about a 4% improvement in peak performance, taking us
> from 9.6M to 10M IOPS/core.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  drivers/nvme/host/pci.c | 67 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 67 insertions(+)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index d2b654fc3603..2eedd04b1f90 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1004,6 +1004,72 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
>  	return ret;
>  }
>  
> +static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
> +{
> +	spin_lock(&nvmeq->sq_lock);
> +	while (!rq_list_empty(*rqlist)) {
> +		struct request *req = rq_list_pop(rqlist);
> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> +
> +		nvme_copy_cmd(nvmeq, absolute_pointer(&iod->cmd));
> +	}
> +	nvme_write_sq_db(nvmeq, true);
> +	spin_unlock(&nvmeq->sq_lock);
> +}
> +
> +static void nvme_queue_rqs(struct request **rqlist)
> +{
> +	struct request *requeue_list = NULL, *req, *prev = NULL;
> +	struct blk_mq_hw_ctx *hctx;
> +	struct nvme_queue *nvmeq;
> +	struct nvme_ns *ns;
> +
> +restart:
> +	req = rq_list_peek(rqlist);
> +	hctx = req->mq_hctx;
> +	nvmeq = hctx->driver_data;
> +	ns = hctx->queue->queuedata;
> +
> +	/*
> +	 * We should not need to do this, but we're still using this to
> +	 * ensure we can drain requests on a dying queue.
> +	 */
> +	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
> +		return;
> +
> +	rq_list_for_each(rqlist, req) {
> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> +		blk_status_t ret;
> +
> +		if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
> +			goto requeue;
> +
> +		if (req->mq_hctx != hctx) {
> +			/* detach rest of list, and submit */
> +			prev->rq_next = NULL;
> +			nvme_submit_cmds(nvmeq, rqlist);
> +			/* req now start of new list for this hw queue */
> +			*rqlist = req;
> +			goto restart;
> +		}
> +
> +		hctx->tags->rqs[req->tag] = req;

After checking how this is handled previously, it appears this new
.queue_rqs() skips incrementing active requests, and bypasses the
hctx_lock(). Won't that break quiesce?

> +		ret = nvme_prep_rq(nvmeq->dev, ns, req, &iod->cmd);
> +		if (ret == BLK_STS_OK) {
> +			prev = req;
> +			continue;
> +		}
> +requeue:
> +		/* detach 'req' and add to remainder list */
> +		if (prev)
> +			prev->rq_next = req->rq_next;
> +		rq_list_add(&requeue_list, req);
> +	}
> +
> +	nvme_submit_cmds(nvmeq, rqlist);
> +	*rqlist = requeue_list;
> +}
> +
>  static __always_inline void nvme_pci_unmap_rq(struct request *req)
>  {
>  	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> @@ -1741,6 +1807,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
>  
>  static const struct blk_mq_ops nvme_mq_ops = {
>  	.queue_rq	= nvme_queue_rq,
> +	.queue_rqs	= nvme_queue_rqs,
>  	.complete	= nvme_pci_complete_rq,
>  	.commit_rqs	= nvme_commit_rqs,
>  	.init_hctx	= nvme_init_hctx,
> -- 

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-11-17 15:55     ` Jens Axboe
@ 2021-11-17 15:58       ` Jens Axboe
  0 siblings, 0 replies; 59+ messages in thread
From: Jens Axboe @ 2021-11-17 15:58 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/17/21 8:55 AM, Jens Axboe wrote:
> On 11/17/21 1:39 AM, Christoph Hellwig wrote:
>> On Tue, Nov 16, 2021 at 08:38:07PM -0700, Jens Axboe wrote:
>>> This enables the block layer to send us a full plug list of requests
>>> that need submitting. The block layer guarantees that they all belong
>>> to the same queue, but we do have to check the hardware queue mapping
>>> for each request.
>>>
>>> If errors are encountered, leave them in the passed in list. Then the
>>> block layer will handle them individually.
>>>
>>> This is good for about a 4% improvement in peak performance, taking us
>>> from 9.6M to 10M IOPS/core.
>>
>> The concept looks sensible, but the loop in nvme_queue_rqs is a complete
>> mess to follow. What about something like this (untested) on top?
> 
> Let me take a closer look.

Something changed, efficiency is way down:

     2.26%     +4.34%  [nvme]            [k] nvme_queue_rqs


-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-11-17  8:39   ` Christoph Hellwig
@ 2021-11-17 15:55     ` Jens Axboe
  2021-11-17 15:58       ` Jens Axboe
  0 siblings, 1 reply; 59+ messages in thread
From: Jens Axboe @ 2021-11-17 15:55 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/17/21 1:39 AM, Christoph Hellwig wrote:
> On Tue, Nov 16, 2021 at 08:38:07PM -0700, Jens Axboe wrote:
>> This enables the block layer to send us a full plug list of requests
>> that need submitting. The block layer guarantees that they all belong
>> to the same queue, but we do have to check the hardware queue mapping
>> for each request.
>>
>> If errors are encountered, leave them in the passed in list. Then the
>> block layer will handle them individually.
>>
>> This is good for about a 4% improvement in peak performance, taking us
>> from 9.6M to 10M IOPS/core.
> 
> The concept looks sensible, but the loop in nvme_queue_rqs is a complete
> mess to follow. What about something like this (untested) on top?

Let me take a closer look.

> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 13722cc400c2c..555a7609580c7 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -509,21 +509,6 @@ static inline void nvme_copy_cmd(struct nvme_queue *nvmeq,
>  		nvmeq->sq_tail = 0;
>  }
>  
> -/**
> - * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
> - * @nvmeq: The queue to use
> - * @cmd: The command to send
> - * @write_sq: whether to write to the SQ doorbell
> - */
> -static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
> -			    bool write_sq)
> -{
> -	spin_lock(&nvmeq->sq_lock);
> -	nvme_copy_cmd(nvmeq, cmd);
> -	nvme_write_sq_db(nvmeq, write_sq);
> -	spin_unlock(&nvmeq->sq_lock);
> -}

You really don't like helpers? Code generation wise it doesn't matter,
but without this and the copy helper we do end up having some trivial
duplicated code...

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-11-17  3:38 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
@ 2021-11-17  8:39   ` Christoph Hellwig
  2021-11-17 15:55     ` Jens Axboe
  2021-11-17 19:41   ` Keith Busch
  1 sibling, 1 reply; 59+ messages in thread
From: Christoph Hellwig @ 2021-11-17  8:39 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, hch

On Tue, Nov 16, 2021 at 08:38:07PM -0700, Jens Axboe wrote:
> This enables the block layer to send us a full plug list of requests
> that need submitting. The block layer guarantees that they all belong
> to the same queue, but we do have to check the hardware queue mapping
> for each request.
> 
> If errors are encountered, leave them in the passed in list. Then the
> block layer will handle them individually.
> 
> This is good for about a 4% improvement in peak performance, taking us
> from 9.6M to 10M IOPS/core.

The concept looks sensible, but the loop in nvme_queue_rqs is a complete
mess to follow. What about something like this (untested) on top?

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 13722cc400c2c..555a7609580c7 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -509,21 +509,6 @@ static inline void nvme_copy_cmd(struct nvme_queue *nvmeq,
 		nvmeq->sq_tail = 0;
 }
 
-/**
- * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
- * @nvmeq: The queue to use
- * @cmd: The command to send
- * @write_sq: whether to write to the SQ doorbell
- */
-static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
-			    bool write_sq)
-{
-	spin_lock(&nvmeq->sq_lock);
-	nvme_copy_cmd(nvmeq, cmd);
-	nvme_write_sq_db(nvmeq, write_sq);
-	spin_unlock(&nvmeq->sq_lock);
-}
-
 static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
 {
 	struct nvme_queue *nvmeq = hctx->driver_data;
@@ -918,8 +903,7 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
 	return BLK_STS_OK;
 }
 
-static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct nvme_ns *ns,
-				 struct request *req, struct nvme_command *cmnd)
+static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	blk_status_t ret;
@@ -928,18 +912,18 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct nvme_ns *ns,
 	iod->npages = -1;
 	iod->nents = 0;
 
-	ret = nvme_setup_cmd(ns, req);
+	ret = nvme_setup_cmd(req->q->queuedata, req);
 	if (ret)
 		return ret;
 
 	if (blk_rq_nr_phys_segments(req)) {
-		ret = nvme_map_data(dev, req, cmnd);
+		ret = nvme_map_data(dev, req, &iod->cmd);
 		if (ret)
 			goto out_free_cmd;
 	}
 
 	if (blk_integrity_rq(req)) {
-		ret = nvme_map_metadata(dev, req, cmnd);
+		ret = nvme_map_metadata(dev, req, &iod->cmd);
 		if (ret)
 			goto out_unmap_data;
 	}
@@ -959,7 +943,6 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct nvme_ns *ns,
 static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 			 const struct blk_mq_queue_data *bd)
 {
-	struct nvme_ns *ns = hctx->queue->queuedata;
 	struct nvme_queue *nvmeq = hctx->driver_data;
 	struct nvme_dev *dev = nvmeq->dev;
 	struct request *req = bd->rq;
@@ -976,12 +959,15 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (!nvme_check_ready(&dev->ctrl, req, true))
 		return nvme_fail_nonready_command(&dev->ctrl, req);
 
-	ret = nvme_prep_rq(dev, ns, req, &iod->cmd);
-	if (ret == BLK_STS_OK) {
-		nvme_submit_cmd(nvmeq, &iod->cmd, bd->last);
-		return BLK_STS_OK;
-	}
-	return ret;
+	ret = nvme_prep_rq(dev, req);
+	if (ret)
+		return ret;
+
+	spin_lock(&nvmeq->sq_lock);
+	nvme_copy_cmd(nvmeq, &iod->cmd);
+	nvme_write_sq_db(nvmeq, bd->last);
+	spin_unlock(&nvmeq->sq_lock);
+	return BLK_STS_OK;
 }
 
 static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
@@ -997,56 +983,47 @@ static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
 	spin_unlock(&nvmeq->sq_lock);
 }
 
-static void nvme_queue_rqs(struct request **rqlist)
+static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
 {
-	struct request *requeue_list = NULL, *req, *prev = NULL;
-	struct blk_mq_hw_ctx *hctx;
-	struct nvme_queue *nvmeq;
-	struct nvme_ns *ns;
-
-restart:
-	req = rq_list_peek(rqlist);
-	hctx = req->mq_hctx;
-	nvmeq = hctx->driver_data;
-	ns = hctx->queue->queuedata;
-
 	/*
 	 * We should not need to do this, but we're still using this to
 	 * ensure we can drain requests on a dying queue.
 	 */
 	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
-		return;
+		return false;
+	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+		return false;
 
-	rq_list_for_each(rqlist, req) {
-		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-		blk_status_t ret;
+	req->mq_hctx->tags->rqs[req->tag] = req;
+	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+}
 
-		if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
-			goto requeue;
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+		if (!nvme_prep_rq_batch(nvmeq, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
 
-		if (req->mq_hctx != hctx) {
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
 			/* detach rest of list, and submit */
 			prev->rq_next = NULL;
 			nvme_submit_cmds(nvmeq, rqlist);
-			/* req now start of new list for this hw queue */
 			*rqlist = req;
-			goto restart;
-		}
-
-		hctx->tags->rqs[req->tag] = req;
-		ret = nvme_prep_rq(nvmeq->dev, ns, req, &iod->cmd);
-		if (ret == BLK_STS_OK) {
-			prev = req;
-			continue;
 		}
-requeue:
-		/* detach 'req' and add to remainder list */
-		if (prev)
-			prev->rq_next = req->rq_next;
-		rq_list_add(&requeue_list, req);
-	}
+	} while (req);
 
-	nvme_submit_cmds(nvmeq, rqlist);
 	*rqlist = requeue_list;
 }
 
@@ -1224,7 +1201,11 @@ static void nvme_pci_submit_async_event(struct nvme_ctrl *ctrl)
 
 	c.common.opcode = nvme_admin_async_event;
 	c.common.command_id = NVME_AQ_BLK_MQ_DEPTH;
-	nvme_submit_cmd(nvmeq, &c, true);
+
+	spin_lock(&nvmeq->sq_lock);
+	nvme_copy_cmd(nvmeq, &c);
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
 }
 
 static int adapter_delete_queue(struct nvme_dev *dev, u8 opcode, u16 id)

^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
  2021-11-17  3:38 [PATCHSET 0/4] Add support for list issue Jens Axboe
@ 2021-11-17  3:38 ` Jens Axboe
  2021-11-17  8:39   ` Christoph Hellwig
  2021-11-17 19:41   ` Keith Busch
  0 siblings, 2 replies; 59+ messages in thread
From: Jens Axboe @ 2021-11-17  3:38 UTC (permalink / raw)
  To: linux-block; +Cc: hch, Jens Axboe

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same request queue, but we do have to check the hardware queue
mapping for each request.

If errors are encountered, the failing requests are left on the
passed-in list; the block layer will then handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.
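
For illustration only (not part of the patch): a minimal userspace C
model of the requeue-list handling described above. The struct request
here is a stand-in carrying just the rq_next link, and split_failed()
is a hypothetical helper; the real driver uses the block layer's
rq_list_*() helpers and unlinks failing requests while walking the plug
list, but the pointer manipulation is the same idea.

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the kernel's struct request: only the list link matters */
struct request {
	struct request *rq_next;
	int tag;
	bool prep_fails;	/* stand-in for a failed prep step */
};

/*
 * Hypothetical helper: walk the list, unlink every request whose prep
 * "fails" and park it on a separate requeue list, leaving the rest in
 * place for submission. This mirrors the prev->rq_next splice the
 * driver performs while iterating the plug list.
 */
static void split_failed(struct request **rqlist, struct request **requeue_list)
{
	struct request *req = *rqlist, *prev = NULL;

	while (req) {
		struct request *next = req->rq_next;

		if (req->prep_fails) {
			/* detach 'req' and push it onto the requeue list */
			if (prev)
				prev->rq_next = next;
			else
				*rqlist = next;
			req->rq_next = *requeue_list;
			*requeue_list = req;
		} else {
			prev = req;
		}
		req = next;
	}
}

int main(void)
{
	struct request r2 = { NULL, 2, true };
	struct request r1 = { &r2, 1, false };
	struct request *rqlist = &r1, *requeue_list = NULL;

	split_failed(&rqlist, &requeue_list);

	/* r1 stays on rqlist for submission, r2 ends up on requeue_list */
	printf("submit tag %d, requeue tag %d\n",
	       rqlist->tag, requeue_list->tag);
	return 0;
}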

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 67 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d2b654fc3603..2eedd04b1f90 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1004,6 +1004,72 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		nvme_copy_cmd(nvmeq, absolute_pointer(&iod->cmd));
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *requeue_list = NULL, *req, *prev = NULL;
+	struct blk_mq_hw_ctx *hctx;
+	struct nvme_queue *nvmeq;
+	struct nvme_ns *ns;
+
+restart:
+	req = rq_list_peek(rqlist);
+	hctx = req->mq_hctx;
+	nvmeq = hctx->driver_data;
+	ns = hctx->queue->queuedata;
+
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return;
+
+	rq_list_for_each(rqlist, req) {
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+		blk_status_t ret;
+
+		if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+			goto requeue;
+
+		if (req->mq_hctx != hctx) {
+			/* detach rest of list, and submit */
+			prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			/* req now start of new list for this hw queue */
+			*rqlist = req;
+			goto restart;
+		}
+
+		hctx->tags->rqs[req->tag] = req;
+		ret = nvme_prep_rq(nvmeq->dev, ns, req, &iod->cmd);
+		if (ret == BLK_STS_OK) {
+			prev = req;
+			continue;
+		}
+requeue:
+		/* detach 'req' and add to remainder list */
+		if (prev)
+			prev->rq_next = req->rq_next;
+		rq_list_add(&requeue_list, req);
+	}
+
+	nvme_submit_cmds(nvmeq, rqlist);
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1741,6 +1807,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,
-- 
2.33.1


^ permalink raw reply related	[flat|nested] 59+ messages in thread

end of thread, other threads:[~2021-12-21 16:08 UTC | newest]

Thread overview: 59+ messages
2021-12-15 16:24 [PATCHSET v3 0/4] Add support for list issue Jens Axboe
2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
2021-12-16  9:01   ` Christoph Hellwig
2021-12-20 20:36   ` Keith Busch
2021-12-20 20:47     ` Jens Axboe
2021-12-15 16:24 ` [PATCH 2/4] nvme: split command copy into a helper Jens Axboe
2021-12-16  9:01   ` Christoph Hellwig
2021-12-16 12:17   ` Max Gurtovoy
2021-12-15 16:24 ` [PATCH 3/4] nvme: separate command prep and issue Jens Axboe
2021-12-16  9:02   ` Christoph Hellwig
2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-15 17:29   ` Keith Busch
2021-12-15 20:27     ` Jens Axboe
2021-12-16  9:08   ` Christoph Hellwig
2021-12-16 13:06     ` Max Gurtovoy
2021-12-16 15:48       ` Jens Axboe
2021-12-16 16:00         ` Max Gurtovoy
2021-12-16 16:05           ` Jens Axboe
2021-12-16 16:19             ` Max Gurtovoy
2021-12-16 16:25               ` Jens Axboe
2021-12-16 16:34                 ` Max Gurtovoy
2021-12-16 16:36                   ` Jens Axboe
2021-12-16 16:57                     ` Max Gurtovoy
2021-12-16 17:16                       ` Jens Axboe
2021-12-19 12:14                         ` Max Gurtovoy
2021-12-19 14:48                           ` Jens Axboe
2021-12-20 10:11                             ` Max Gurtovoy
2021-12-20 14:19                               ` Jens Axboe
2021-12-20 14:25                                 ` Jens Axboe
2021-12-20 15:29                                 ` Max Gurtovoy
2021-12-20 16:34                                   ` Jens Axboe
2021-12-20 18:48                                     ` Max Gurtovoy
2021-12-20 18:58                                       ` Jens Axboe
2021-12-21 10:20                                         ` Max Gurtovoy
2021-12-21 15:23                                           ` Jens Axboe
2021-12-21 15:29                                             ` Max Gurtovoy
2021-12-21 15:33                                               ` Jens Axboe
2021-12-21 16:08                                                 ` Max Gurtovoy
2021-12-16 15:45     ` Jens Axboe
2021-12-16 16:15       ` Christoph Hellwig
2021-12-16 16:27         ` Jens Axboe
2021-12-16 16:30           ` Christoph Hellwig
2021-12-16 16:36             ` Jens Axboe
2021-12-16 13:02   ` Max Gurtovoy
2021-12-16 15:59     ` Jens Axboe
2021-12-16 16:06       ` Max Gurtovoy
2021-12-16 16:09         ` Jens Axboe
  -- strict thread matches above, loose matches on Subject: below --
2021-12-16 16:38 [PATCHSET v5 0/4] Add support for list issue Jens Axboe
2021-12-16 16:39 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-16 17:53   ` Christoph Hellwig
2021-12-16 16:05 [PATCHSET v4 0/4] Add support for list issue Jens Axboe
2021-12-16 16:05 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-03 21:45 [PATCHSET v2 0/4] Add support for list issue Jens Axboe
2021-12-03 21:45 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-04 10:47   ` Hannes Reinecke
2021-12-06  7:40   ` Christoph Hellwig
2021-12-06 16:33     ` Jens Axboe
2021-11-17  3:38 [PATCHSET 0/4] Add support for list issue Jens Axboe
2021-11-17  3:38 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-11-17  8:39   ` Christoph Hellwig
2021-11-17 15:55     ` Jens Axboe
2021-11-17 15:58       ` Jens Axboe
2021-11-17 19:41   ` Keith Busch
