From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@infradead.org>
Cc: <io-uring@vger.kernel.org>, <linux-nvme@lists.infradead.org>,
	"Hannes Reinecke" <hare@suse.de>, Oren Duer <oren@nvidia.com>
Subject: Re: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
Date: Mon, 20 Dec 2021 20:48:01 +0200	[thread overview]
Message-ID: <5b001e0f-cec0-112d-533a-d71684eb1d2e@nvidia.com> (raw)
In-Reply-To: <92c5065e-dc2a-9e3f-404a-64c6e22624b7@kernel.dk>

[-- Attachment #1: Type: text/plain, Size: 7583 bytes --]


On 12/20/2021 6:34 PM, Jens Axboe wrote:
> On 12/20/21 8:29 AM, Max Gurtovoy wrote:
>> On 12/20/2021 4:19 PM, Jens Axboe wrote:
>>> On 12/20/21 3:11 AM, Max Gurtovoy wrote:
>>>> On 12/19/2021 4:48 PM, Jens Axboe wrote:
>>>>> On 12/19/21 5:14 AM, Max Gurtovoy wrote:
>>>>>> On 12/16/2021 7:16 PM, Jens Axboe wrote:
>>>>>>> On 12/16/21 9:57 AM, Max Gurtovoy wrote:
>>>>>>>> On 12/16/2021 6:36 PM, Jens Axboe wrote:
>>>>>>>>> On 12/16/21 9:34 AM, Max Gurtovoy wrote:
>>>>>>>>>> On 12/16/2021 6:25 PM, Jens Axboe wrote:
>>>>>>>>>>> On 12/16/21 9:19 AM, Max Gurtovoy wrote:
>>>>>>>>>>>> On 12/16/2021 6:05 PM, Jens Axboe wrote:
>>>>>>>>>>>>> On 12/16/21 9:00 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>> On 12/16/2021 5:48 PM, Jens Axboe wrote:
>>>>>>>>>>>>>>> On 12/16/21 6:06 AM, Max Gurtovoy wrote:
>>>>>>>>>>>>>>>> On 12/16/2021 11:08 AM, Christoph Hellwig wrote:
>>>>>>>>>>>>>>>>> On Wed, Dec 15, 2021 at 09:24:21AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>>>> +	spin_lock(&nvmeq->sq_lock);
>>>>>>>>>>>>>>>>>> +	while (!rq_list_empty(*rqlist)) {
>>>>>>>>>>>>>>>>>> +		struct request *req = rq_list_pop(rqlist);
>>>>>>>>>>>>>>>>>> +		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>>> +		memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
>>>>>>>>>>>>>>>>>> +				absolute_pointer(&iod->cmd), sizeof(iod->cmd));
>>>>>>>>>>>>>>>>>> +		if (++nvmeq->sq_tail == nvmeq->q_depth)
>>>>>>>>>>>>>>>>>> +			nvmeq->sq_tail = 0;
>>>>>>>>>>>>>>>>> So this doesn't even use the new helper added in patch 2?  I think this
>>>>>>>>>>>>>>>>> should call nvme_sq_copy_cmd().
>>>>>>>>>>>>>>>> I also noticed that.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So we need to decide whether to open-code it or use the helper function.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> An inline helper sounds reasonable if you have 3 places that will use it.
>>>>>>>>>>>>>>> Yes agree, that's been my stance too :-)
>>>>>>>>>>>>>>>
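For reference, the nvme_sq_copy_cmd() helper from patch 2/4 essentially just
folds the open-coded copy quoted above into one inline; a minimal sketch
(not necessarily the exact upstream version):

static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
				    struct nvme_command *cmd)
{
	/* copy the command into the current SQ slot */
	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
	       absolute_pointer(cmd), sizeof(*cmd));
	/* advance the tail, wrapping at the queue depth */
	if (++nvmeq->sq_tail == nvmeq->q_depth)
		nvmeq->sq_tail = 0;
}
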
>>>>>>>>>>>>>>>>> The rest looks identical to the incremental patch I posted, so I guess
>>>>>>>>>>>>>>>>> the performance degradation measured on the first try was a measurement
>>>>>>>>>>>>>>>>> error?
>>>>>>>>>>>>>>>> Giving 1 dbr for a batch of N commands sounds like a good idea.
>>>>>>>>>>>>>>>> Also for the RDMA host.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But how do you moderate it? What is the batch_sz <--> time_to_wait
>>>>>>>>>>>>>>>> algorithm?
>>>>>>>>>>>>>>> The batching is naturally limited at BLK_MAX_REQUEST_COUNT, which is 32
>>>>>>>>>>>>>>> in total. I do agree that if we ever made it much larger, then we might
>>>>>>>>>>>>>>> want to cap it differently. But 32 seems like a pretty reasonable number
>>>>>>>>>>>>>>> to get enough gain from the batching done in various areas, while still
>>>>>>>>>>>>>>> not making it so large that we have a potential latency issue. That
>>>>>>>>>>>>>>> batch count is already used consistently for other items too (like tag
>>>>>>>>>>>>>>> allocation), so it's not specific to just this one case.
>>>>>>>>>>>>>> I'm saying that you can wait too long for batch_max_count, and that
>>>>>>>>>>>>>> won't be efficient from a latency POV.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So it's better to have the block layer wait for whichever comes first: x
>>>>>>>>>>>>>> usecs or batch_max_count, before issuing queue_rqs.
>>>>>>>>>>>>> There's no waiting specifically for this, it's just based on the plug.
>>>>>>>>>>>>> We just won't do more than 32 in that plug. This is really just an
>>>>>>>>>>>>> artifact of the plugging, and if that should be limited based on "max of
>>>>>>>>>>>>> 32 or xx time", then that should be done there.
>>>>>>>>>>>>>
>>>>>>>>>>>>> But in general I think it's saner and enough to just limit the total
>>>>>>>>>>>>> size. If we spend more than xx usec building up the plug list, we're
>>>>>>>>>>>>> doing something horribly wrong. That really should not happen with 32
>>>>>>>>>>>>> requests, and we'll never eg wait on requests if we're out of tags. That
>>>>>>>>>>>>> will result in a plug flush to begin with.
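
To make the "no waiting" point concrete: the plug flushes as soon as the cap
is hit or the submitting task finishes (blk_finish_plug()); a simplified
sketch of the idea, not the verbatim kernel code:

/* illustrative only -- names simplified from the real blk-mq plug code */
static void add_rq_to_plug(struct blk_plug *plug, struct request *rq)
{
	rq_list_add(&plug->mq_list, rq);
	plug->rq_count++;

	/*
	 * No timers involved: once BLK_MAX_REQUEST_COUNT (32) requests are
	 * plugged, the list is flushed to the driver right away.
	 */
	if (plug->rq_count >= BLK_MAX_REQUEST_COUNT)
		blk_mq_flush_plug_list(plug, false);
}
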
>>>>>>>>>>>> I'm not familiar with the plug yet. I hope to get to it soon.
>>>>>>>>>>>>
>>>>>>>>>>>> My concern is: if the user application submits only 28 requests, will
>>>>>>>>>>>> you wait forever? Or for a very long time?
>>>>>>>>>>>>
>>>>>>>>>>>> I guess not, but I'm asking how you know how to batch, and when to
>>>>>>>>>>>> stop, in case 32 commands won't arrive anytime soon.
>>>>>>>>>>> The plug is on the stack of the task, so that condition can never
>>>>>>>>>>> happen. If the application originally asks for 32 but then only submits
>>>>>>>>>>> 28, then once that last one is submitted the plug is flushed and
>>>>>>>>>>> requests are issued.
>>>>>>>>>> So if I'm running fio with --iodepth=28, what will the plug do? Send
>>>>>>>>>> batches of 28, or 1 by 1?
>>>>>>>>> --iodepth just controls the overall depth; the batch submit count
>>>>>>>>> dictates what happens further down. If you run queue depth 28 and submit
>>>>>>>>> one at a time, then you'll get one at a time further down too. Hence
>>>>>>>>> the batching is directly driven by what the application is already
>>>>>>>>> doing.
>>>>>>>> I see. Thanks for the explanation.
>>>>>>>>
>>>>>>>> So it works only for io_uring-based applications?
>>>>>>> It's only enabled for io_uring right now, but it's generically available
>>>>>>> for anyone that wants to use it... Would be trivial to do for aio, and
>>>>>>> other spots that currently use blk_start_plug() and have an idea of how
>>>>>>> many IOs will be submitted.
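
As a rough illustration of such a caller -- a sketch only, assuming the
blk_start_plug_nr_ios() variant of the plug API; struct submit_ctx and
my_submit_one() are made-up placeholders:

/*
 * Sketch of a blk_start_plug() user that knows its batch size up front;
 * the whole plugged list can then reach ->queue_rqs() in one go.
 */
static void submit_batch(struct submit_ctx *ctx, unsigned short nr_ios)
{
	struct blk_plug plug;
	unsigned short i;

	blk_start_plug_nr_ios(&plug, nr_ios);
	for (i = 0; i < nr_ios; i++)
		my_submit_one(ctx, i);	/* ends up in blk-mq via submit_bio() */
	blk_finish_plug(&plug);		/* hands the plugged list to the driver */
}
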
>>>>>> Can you please share an example application (or is it fio patches?) that
>>>>>> can submit batches? The same one that was used to test this patchset is
>>>>>> fine too.
>>>>>>
>>>>>> I would like to test it with our NVMe SNAP controllers and also to
>>>>>> develop NVMe/RDMA queue_rqs code and test the perf with it.
>>>>> You should just be able to use iodepth_batch with fio. For my peak
>>>>> testing, I use t/io_uring from the fio repo. By default, it'll run a preset
>>>>> QD and do batches of 32 for complete and submit. You can just run:
>>>>>
>>>>> t/io_uring <dev or file>
>>>>>
>>>>> maybe adding -p0 for IRQ-driven rather than polled IO.
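
Something along these lines should also drive the batched path with plain fio;
/dev/nvme0n1 is just a placeholder device, and iodepth_batch_submit is the
long form of the iodepth_batch option mentioned above:

fio --name=batchtest --filename=/dev/nvme0n1 --ioengine=io_uring --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --iodepth_batch_submit=32 \
    --iodepth_batch_complete_min=32 --runtime=30 --time_based
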
>>>> I used your block/for-next branch and implemented queue_rqs in NVMe/RDMA,
>>>> but it was never called, neither with the t/io_uring test nor with fio
>>>> using the iodepth_batch=32 flag and the io_uring engine.
>>>>
>>>> Any idea what might be the issue?
>>>>
>>>> I installed fio from source.
>>> The two main restrictions right now are a scheduler and shared tags; are
>>> you using either of those?
>> No.
>>
>> But maybe I'm missing the .commit_rqs callback. Is it mandatory for this
>> feature?
> I've only tested with nvme-pci, which does have it, but I don't think so.
> Unless there's some check somewhere that makes it necessary. Can you
> share the patch you're currently using on top?
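
Going back to the restrictions above, my understanding is that the gating in
blk_mq_flush_plug_list() is roughly the following (paraphrased, so treat the
exact shape as an approximation rather than the actual upstream code); an
attached I/O scheduler or a shared tagset falls back to the per-request
->queue_rq() path:

	struct request *rq = rq_list_peek(&plug->mq_list);

	/*
	 * The list-issue path is only taken for a single-queue plug with no
	 * I/O scheduler and a non-shared tagset; otherwise requests are
	 * issued one by one via ->queue_rq().
	 */
	if (!plug->multiple_queues && !plug->has_elevator &&
	    rq->q->mq_ops->queue_rqs &&
	    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
		rq->q->mq_ops->queue_rqs(&plug->mq_list);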

The attached POC patches apply cleanly on the block/for-next branch:

commit 7925bb75e8effa5de85b1cf8425cd5c21f212b1d (block/for-next)
Merge: eb12bde9eba8 3427f2b2c533
Author: Jens Axboe <axboe@kernel.dk>
Date:   Fri Dec 17 09:51:05 2021 -0700

     Merge branch 'for-5.17/drivers' into for-next

     * for-5.17/drivers:
       block: remove the rsxx driver
       rsxx: Drop PCI legacy power management
       mtip32xx: convert to generic power management
       mtip32xx: remove pointless drvdata lookups
       mtip32xx: remove pointless drvdata checking
       drbd: Use struct_group() to zero algs
       loop: make autoclear operation asynchronous
       null_blk: cast command status to integer
       pktdvd: stop using bdi congestion framework.


[-- Attachment #2: 0001-nvme-rdma-prepare-for-queue_rqs-implementation.patch --]
[-- Type: text/plain, Size: 6618 bytes --]

From 0de2836ce21df6801db580e154296544a741b6c4 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy <mgurtovoy@nvidia.com>
Date: Thu, 2 Dec 2021 19:59:00 +0200
Subject: [PATCH 1/2] nvme-rdma: prepare for queue_rqs implementation

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/rdma.c | 127 ++++++++++++++++++++++++++-------------
 1 file changed, 84 insertions(+), 43 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 850f84d204d0..2d608cb48392 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -70,6 +70,7 @@ struct nvme_rdma_request {
 	struct ib_sge		sge[1 + NVME_RDMA_MAX_INLINE_SEGMENTS];
 	u32			num_sge;
 	struct ib_reg_wr	reg_wr;
+	struct ib_send_wr	send_wr;
 	struct ib_cqe		reg_cqe;
 	struct nvme_rdma_queue  *queue;
 	struct nvme_rdma_sgl	data_sgl;
@@ -1635,33 +1636,31 @@ static void nvme_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
 		nvme_rdma_end_request(req);
 }
 
-static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
+static void nvme_rdma_prep_send(struct nvme_rdma_queue *queue,
 		struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge,
-		struct ib_send_wr *first)
+		struct ib_send_wr *wr)
 {
-	struct ib_send_wr wr;
-	int ret;
-
-	sge->addr   = qe->dma;
+	sge->addr = qe->dma;
 	sge->length = sizeof(struct nvme_command);
-	sge->lkey   = queue->device->pd->local_dma_lkey;
+	sge->lkey = queue->device->pd->local_dma_lkey;
 
-	wr.next       = NULL;
-	wr.wr_cqe     = &qe->cqe;
-	wr.sg_list    = sge;
-	wr.num_sge    = num_sge;
-	wr.opcode     = IB_WR_SEND;
-	wr.send_flags = IB_SEND_SIGNALED;
+	wr->next = NULL;
+	wr->wr_cqe = &qe->cqe;
+	wr->sg_list = sge;
+	wr->num_sge = num_sge;
+	wr->opcode = IB_WR_SEND;
+	wr->send_flags = IB_SEND_SIGNALED;
+}
 
-	if (first)
-		first->next = &wr;
-	else
-		first = &wr;
+static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
+		struct ib_send_wr *wr)
+{
+	int ret;
 
-	ret = ib_post_send(queue->qp, first, NULL);
+	ret = ib_post_send(queue->qp, wr, NULL);
 	if (unlikely(ret)) {
 		dev_err(queue->ctrl->ctrl.device,
-			     "%s failed with error code %d\n", __func__, ret);
+			"%s failed with error code %d\n", __func__, ret);
 	}
 	return ret;
 }
@@ -1715,6 +1714,7 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg)
 	struct nvme_rdma_qe *sqe = &ctrl->async_event_sqe;
 	struct nvme_command *cmd = sqe->data;
 	struct ib_sge sge;
+	struct ib_send_wr wr;
 	int ret;
 
 	ib_dma_sync_single_for_cpu(dev, sqe->dma, sizeof(*cmd), DMA_TO_DEVICE);
@@ -1730,7 +1730,8 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg)
 	ib_dma_sync_single_for_device(dev, sqe->dma, sizeof(*cmd),
 			DMA_TO_DEVICE);
 
-	ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL);
+	nvme_rdma_prep_send(queue, sqe, &sge, 1, &wr);
+	ret = nvme_rdma_post_send(queue, &wr);
 	WARN_ON_ONCE(ret);
 }
 
@@ -2034,27 +2035,35 @@ nvme_rdma_timeout(struct request *rq, bool reserved)
 	return BLK_EH_RESET_TIMER;
 }
 
-static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
-		const struct blk_mq_queue_data *bd)
+static blk_status_t nvme_rdma_cleanup_rq(struct nvme_rdma_queue *queue,
+		struct request *rq, int err)
 {
-	struct nvme_ns *ns = hctx->queue->queuedata;
-	struct nvme_rdma_queue *queue = hctx->driver_data;
-	struct request *rq = bd->rq;
+	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	blk_status_t ret;
+
+	nvme_rdma_unmap_data(queue, rq);
+	if (err == -EIO)
+		ret = nvme_host_path_error(rq);
+	else if (err == -ENOMEM || err == -EAGAIN)
+		ret = BLK_STS_RESOURCE;
+	else
+		ret = BLK_STS_IOERR;
+	nvme_cleanup_cmd(rq);
+	ib_dma_unmap_single(queue->device->dev, req->sqe.dma,
+			    sizeof(struct nvme_command), DMA_TO_DEVICE);
+	return ret;
+}
+
+static blk_status_t nvme_rdma_prep_rq(struct nvme_rdma_queue *queue,
+		struct request *rq, struct nvme_ns *ns)
+{
+	struct ib_device *dev = queue->device->dev;
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_qe *sqe = &req->sqe;
 	struct nvme_command *c = nvme_req(rq)->cmd;
-	struct ib_device *dev;
-	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
 	blk_status_t ret;
 	int err;
 
-	WARN_ON_ONCE(rq->tag < 0);
-
-	if (!nvme_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
-		return nvme_fail_nonready_command(&queue->ctrl->ctrl, rq);
-
-	dev = queue->device->dev;
-
 	req->sqe.dma = ib_dma_map_single(dev, req->sqe.data,
 					 sizeof(struct nvme_command),
 					 DMA_TO_DEVICE);
@@ -2083,8 +2092,8 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	err = nvme_rdma_map_data(queue, rq, c);
 	if (unlikely(err < 0)) {
 		dev_err(queue->ctrl->ctrl.device,
-			     "Failed to map data (%d)\n", err);
-		goto err;
+			"Failed to map data (%d)\n", err);
+		goto out_err;
 	}
 
 	sqe->cqe.done = nvme_rdma_send_done;
@@ -2092,16 +2101,13 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	ib_dma_sync_single_for_device(dev, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
-	err = nvme_rdma_post_send(queue, sqe, req->sge, req->num_sge,
-			req->mr ? &req->reg_wr.wr : NULL);
-	if (unlikely(err))
-		goto err_unmap;
+	nvme_rdma_prep_send(queue, sqe, req->sge, req->num_sge, &req->send_wr);
+	if (req->mr)
+		req->reg_wr.wr.next = &req->send_wr;
 
 	return BLK_STS_OK;
 
-err_unmap:
-	nvme_rdma_unmap_data(queue, rq);
-err:
+out_err:
 	if (err == -EIO)
 		ret = nvme_host_path_error(rq);
 	else if (err == -ENOMEM || err == -EAGAIN)
@@ -2115,6 +2121,41 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
+		const struct blk_mq_queue_data *bd)
+{
+	struct nvme_ns *ns = hctx->queue->queuedata;
+	struct nvme_rdma_queue *queue = hctx->driver_data;
+	struct request *rq = bd->rq;
+	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
+	struct ib_send_wr *wr;
+	int err;
+
+	WARN_ON_ONCE(rq->tag < 0);
+
+	if (!nvme_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
+		return nvme_fail_nonready_command(&queue->ctrl->ctrl, rq);
+
+	err = nvme_rdma_prep_rq(queue, rq, ns);
+	if (unlikely(err))
+		return err;
+
+	if (req->mr)
+		wr = &req->reg_wr.wr;
+	else
+		wr = &req->send_wr;
+
+	err = nvme_rdma_post_send(queue, wr);
+	if (unlikely(err))
+		goto out_cleanup_rq;
+
+	return BLK_STS_OK;
+
+out_cleanup_rq:
+	return nvme_rdma_cleanup_rq(queue, rq, err);
+}
+
 static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
 {
 	struct nvme_rdma_queue *queue = hctx->driver_data;
-- 
2.18.1


[-- Attachment #3: 0002-nvme-rdma-add-support-for-mq_ops-queue_rqs.patch --]
[-- Type: text/plain, Size: 2952 bytes --]

From 851a1f35420206f7b631d5d12b135e5a7c84b912 Mon Sep 17 00:00:00 2001
From: Max Gurtovoy <mgurtovoy@nvidia.com>
Date: Mon, 20 Dec 2021 20:42:49 +0200
Subject: [PATCH 2/2] nvme-rdma: add support for mq_ops->queue_rqs()

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/rdma.c | 75 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 2d608cb48392..765bb57f0a55 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -2121,6 +2121,80 @@ static blk_status_t nvme_rdma_prep_rq(struct nvme_rdma_queue *queue,
 	return ret;
 }
 
+static bool nvme_rdma_prep_rq_batch(struct nvme_rdma_queue *queue,
+		struct request *rq)
+{
+	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
+
+	if (unlikely(!nvme_check_ready(&queue->ctrl->ctrl, rq, queue_ready)))
+		return false;
+
+	rq->mq_hctx->tags->rqs[rq->tag] = rq;
+	return nvme_rdma_prep_rq(queue, rq, rq->q->queuedata) == BLK_STS_OK;
+}
+
+static void nvme_rdma_submit_cmds(struct nvme_rdma_queue *queue,
+		struct request **rqlist)
+{
+	struct request *first_rq = rq_list_peek(rqlist);
+	struct nvme_rdma_request *nreq = blk_mq_rq_to_pdu(first_rq);
+	struct ib_send_wr *first, *last = NULL;
+	int ret;
+
+	if (nreq->mr)
+		first = &nreq->reg_wr.wr;
+	else
+		first = &nreq->send_wr;
+
+	while (!rq_list_empty(*rqlist)) {
+		struct request *rq = rq_list_pop(rqlist);
+		struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+		struct ib_send_wr *tmp;
+
+		tmp = last;
+		last = &req->send_wr;
+		if (tmp) {
+			if (req->mr)
+				tmp->next = &req->reg_wr.wr;
+			else
+				tmp->next = &req->send_wr;
+		}
+	}
+
+	ret = nvme_rdma_post_send(queue, first);
+	WARN_ON_ONCE(ret);
+}
+
+static void nvme_rdma_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_rdma_queue *queue = req->mq_hctx->driver_data;
+
+		if (!nvme_rdma_prep_rq_batch(queue, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			if (prev)
+				prev->rq_next = NULL;
+			nvme_rdma_submit_cmds(queue, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 		const struct blk_mq_queue_data *bd)
 {
@@ -2258,6 +2332,7 @@ static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
 
 static const struct blk_mq_ops nvme_rdma_mq_ops = {
 	.queue_rq	= nvme_rdma_queue_rq,
+	.queue_rqs	= nvme_rdma_queue_rqs,
 	.complete	= nvme_rdma_complete_rq,
 	.init_request	= nvme_rdma_init_request,
 	.exit_request	= nvme_rdma_exit_request,
-- 
2.18.1


Thread overview: 59+ messages
2021-12-15 16:24 [PATCHSET v3 0/4] Add support for list issue Jens Axboe
2021-12-15 16:24 ` [PATCH 1/4] block: add mq_ops->queue_rqs hook Jens Axboe
2021-12-16  9:01   ` Christoph Hellwig
2021-12-20 20:36   ` Keith Busch
2021-12-20 20:47     ` Jens Axboe
2021-12-15 16:24 ` [PATCH 2/4] nvme: split command copy into a helper Jens Axboe
2021-12-16  9:01   ` Christoph Hellwig
2021-12-16 12:17   ` Max Gurtovoy
2021-12-15 16:24 ` [PATCH 3/4] nvme: separate command prep and issue Jens Axboe
2021-12-16  9:02   ` Christoph Hellwig
2021-12-15 16:24 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-15 17:29   ` Keith Busch
2021-12-15 20:27     ` Jens Axboe
2021-12-16  9:08   ` Christoph Hellwig
2021-12-16 13:06     ` Max Gurtovoy
2021-12-16 15:48       ` Jens Axboe
2021-12-16 16:00         ` Max Gurtovoy
2021-12-16 16:05           ` Jens Axboe
2021-12-16 16:19             ` Max Gurtovoy
2021-12-16 16:25               ` Jens Axboe
2021-12-16 16:34                 ` Max Gurtovoy
2021-12-16 16:36                   ` Jens Axboe
2021-12-16 16:57                     ` Max Gurtovoy
2021-12-16 17:16                       ` Jens Axboe
2021-12-19 12:14                         ` Max Gurtovoy
2021-12-19 14:48                           ` Jens Axboe
2021-12-20 10:11                             ` Max Gurtovoy
2021-12-20 14:19                               ` Jens Axboe
2021-12-20 14:25                                 ` Jens Axboe
2021-12-20 15:29                                 ` Max Gurtovoy
2021-12-20 16:34                                   ` Jens Axboe
2021-12-20 18:48                                     ` Max Gurtovoy [this message]
2021-12-20 18:58                                       ` Jens Axboe
2021-12-21 10:20                                         ` Max Gurtovoy
2021-12-21 15:23                                           ` Jens Axboe
2021-12-21 15:29                                             ` Max Gurtovoy
2021-12-21 15:33                                               ` Jens Axboe
2021-12-21 16:08                                                 ` Max Gurtovoy
2021-12-16 15:45     ` Jens Axboe
2021-12-16 16:15       ` Christoph Hellwig
2021-12-16 16:27         ` Jens Axboe
2021-12-16 16:30           ` Christoph Hellwig
2021-12-16 16:36             ` Jens Axboe
2021-12-16 13:02   ` Max Gurtovoy
2021-12-16 15:59     ` Jens Axboe
2021-12-16 16:06       ` Max Gurtovoy
2021-12-16 16:09         ` Jens Axboe
  -- strict thread matches above, loose matches on Subject: below --
2021-12-16 16:38 [PATCHSET v5 0/4] Add support for list issue Jens Axboe
2021-12-16 16:39 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-16 17:53   ` Christoph Hellwig
2021-12-16 16:05 [PATCHSET v4 0/4] Add support for list issue Jens Axboe
2021-12-16 16:05 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-03 21:45 [PATCHSET v2 0/4] Add support for list issue Jens Axboe
2021-12-03 21:45 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-12-04 10:47   ` Hannes Reinecke
2021-12-06  7:40   ` Christoph Hellwig
2021-12-06 16:33     ` Jens Axboe
2021-11-17  3:38 [PATCHSET 0/4] Add support for list issue Jens Axboe
2021-11-17  3:38 ` [PATCH 4/4] nvme: add support for mq_ops->queue_rqs() Jens Axboe
2021-11-17  8:39   ` Christoph Hellwig
2021-11-17 15:55     ` Jens Axboe
2021-11-17 15:58       ` Jens Axboe
2021-11-17 19:41   ` Keith Busch
