From: Jens Axboe <axboe@kernel.dk>
To: Christoph Hellwig <hch@infradead.org>
Cc: linux-block@vger.kernel.org
Subject: Re: [PATCH 6/9] nvme: add support for batched completion of polled IO
Date: Wed, 13 Oct 2021 09:42:23 -0600
Message-ID: <e3b138c8-49bd-2dba-b7a0-878d5c857167@kernel.dk>
In-Reply-To: <YWb4SqWFQinePqzj@infradead.org>

On 10/13/21 9:16 AM, Christoph Hellwig wrote:
> On Wed, Oct 13, 2021 at 09:10:01AM -0600, Jens Axboe wrote:
>>> Also - can you look into turning this logic into an inline function with
>>> a callback for the driver?  I think in general gcc will avoid the
>>> indirect call for function pointers passed directly.  That way we can
>>> keep a nice code structure but also avoid the indirections.
>>>
>>> Same for nvme_pci_complete_batch.
>>
>> Not sure I follow. It's hard to do a generic callback for this, as the
>> batch can live outside the block layer through the plug. That's why
>> it's passed the way it is in terms of completion hooks.
> 
> Basically turn nvme_pci_complete_batch into a core nvme helper (inline)
> with nvme_pci_unmap_rq passed as a function pointer that gets inlined.

Something like this?


diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0ac7bad405ef..1aff0ca37063 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -346,15 +346,19 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
 	return RETRY;
 }
 
-static inline void nvme_end_req(struct request *req)
+static inline void nvme_end_req_zoned(struct request *req)
 {
-	blk_status_t status = nvme_error_status(nvme_req(req)->status);
-
 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
 	    req_op(req) == REQ_OP_ZONE_APPEND)
 		req->__sector = nvme_lba_to_sect(req->q->queuedata,
 			le64_to_cpu(nvme_req(req)->result.u64));
+}
+
+static inline void nvme_end_req(struct request *req)
+{
+	blk_status_t status = nvme_error_status(nvme_req(req)->status);
 
+	nvme_end_req_zoned(req);
 	nvme_trace_bio_complete(req);
 	blk_mq_end_request(req, status);
 }
@@ -381,6 +385,23 @@ void nvme_complete_rq(struct request *req)
 }
 EXPORT_SYMBOL_GPL(nvme_complete_rq);
 
+void nvme_complete_batch(struct io_batch *iob, void (*fn)(struct request *rq))
+{
+	struct request *req;
+
+	req = rq_list_peek(&iob->req_list);
+	while (req) {
+		fn(req);
+		nvme_cleanup_cmd(req);
+		nvme_end_req_zoned(req);
+		req->status = BLK_STS_OK;
+		req = rq_list_next(req);
+	}
+
+	blk_mq_end_request_batch(iob);
+}
+EXPORT_SYMBOL_GPL(nvme_complete_batch);
+
 /*
  * Called to unwind from ->queue_rq on a failed command submission so that the
  * multipathing code gets called to potentially failover to another path.
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index ed79a6c7e804..b73a573472d9 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -638,6 +638,7 @@ static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
 }
 
 void nvme_complete_rq(struct request *req);
+void nvme_complete_batch(struct io_batch *iob, void (*fn)(struct request *));
 blk_status_t nvme_host_path_error(struct request *req);
 bool nvme_cancel_request(struct request *req, void *data, bool reserved);
 void nvme_cancel_tagset(struct nvme_ctrl *ctrl);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b8dbee47fced..e79c0f0268b3 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -992,22 +992,7 @@ static void nvme_pci_complete_rq(struct request *req)
 
 static void nvme_pci_complete_batch(struct io_batch *iob)
 {
-	struct request *req;
-
-	req = rq_list_peek(&iob->req_list);
-	while (req) {
-		nvme_pci_unmap_rq(req);
-		if (req->rq_flags & RQF_SPECIAL_PAYLOAD)
-			nvme_cleanup_cmd(req);
-		if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
-				req_op(req) == REQ_OP_ZONE_APPEND)
-			req->__sector = nvme_lba_to_sect(req->q->queuedata,
-					le64_to_cpu(nvme_req(req)->result.u64));
-		req->status = BLK_STS_OK;
-		req = rq_list_next(req);
-	}
-
-	blk_mq_end_request_batch(iob);
+	nvme_complete_batch(iob, nvme_pci_unmap_rq);
 }
 
 /* We read the CQE phase first to check if the rest of the entry is valid */
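
For anyone not familiar with the trick being relied on here, below is a
minimal stand-alone sketch of the pattern Christoph describes (all names
in it are illustrative, not from the patch): a static inline helper that
takes a callback, with the callback passed directly at the call site, so
that once the helper is inlined the compiler sees a constant function
pointer and can turn the indirect call into a direct, usually inlined,
call.

/*
 * Illustrative sketch only -- the types and names are made up, not from
 * the kernel tree. Because complete_batch() is static inline and
 * driver_cleanup is passed directly, the compiler inlines the helper
 * into driver_complete_batch(), sees the constant function pointer, and
 * can replace the indirect call with a direct (typically inlined) call.
 */
struct item {
	struct item *next;
	int status;
};

/* shared "core" helper, e.g. in a header */
static inline void complete_batch(struct item *list,
				  void (*fn)(struct item *))
{
	struct item *it;

	for (it = list; it; it = it->next) {
		fn(it);		/* resolved at compile time once inlined */
		it->status = 0;
	}
}

/* per-driver cleanup, known at the call site */
static void driver_cleanup(struct item *it)
{
	/* driver-specific unmap/teardown would go here */
}

static void driver_complete_batch(struct item *list)
{
	complete_batch(list, driver_cleanup);
}

Note that in the diff above nvme_complete_batch() is a regular exported
function in core.c rather than an inline helper in nvme.h, so the fn()
call there will normally remain an indirect call (absent LTO); getting
the devirtualization the sketch shows would need the helper moved into
the header as Christoph suggests.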

-- 
Jens Axboe



Thread overview: 40+ messages
2021-10-12 18:17 [PATCHSET 0/9] Batched completions Jens Axboe
2021-10-12 18:17 ` [PATCH 1/9] block: add a struct io_batch argument to fops->iopoll() Jens Axboe
2021-10-12 18:25   ` Bart Van Assche
2021-10-12 18:28     ` Jens Axboe
2021-10-12 18:17 ` [PATCH 2/9] sbitmap: add helper to clear a batch of tags Jens Axboe
2021-10-12 18:29   ` Bart Van Assche
2021-10-12 18:34     ` Jens Axboe
2021-10-12 18:17 ` [PATCH 3/9] sbitmap: test bit before calling test_and_set_bit() Jens Axboe
2021-10-12 18:17 ` [PATCH 4/9] block: add support for blk_mq_end_request_batch() Jens Axboe
2021-10-12 18:32   ` Bart Van Assche
2021-10-12 18:55     ` Jens Axboe
2021-10-12 18:17 ` [PATCH 5/9] nvme: move the fast path nvme error and disposition helpers Jens Axboe
2021-10-13  6:57   ` Christoph Hellwig
2021-10-13  6:57     ` Christoph Hellwig
2021-10-13 14:41     ` Jens Axboe
2021-10-13 15:11       ` Christoph Hellwig
2021-10-12 18:17 ` [PATCH 6/9] nvme: add support for batched completion of polled IO Jens Axboe
2021-10-13  7:08   ` Christoph Hellwig
2021-10-13 15:10     ` Jens Axboe
2021-10-13 15:16       ` Christoph Hellwig
2021-10-13 15:42         ` Jens Axboe [this message]
2021-10-13 15:49           ` Jens Axboe
2021-10-13 15:50           ` Christoph Hellwig
2021-10-13 16:04             ` Jens Axboe
2021-10-13 16:13               ` Christoph Hellwig
2021-10-13 16:33                 ` Jens Axboe
2021-10-13 16:45                   ` Jens Axboe
2021-10-13  9:09   ` John Garry
2021-10-13 15:07     ` Jens Axboe
2021-10-12 18:17 ` [PATCH 7/9] block: assign batch completion handler in blk_poll() Jens Axboe
2021-10-12 18:17 ` [PATCH 8/9] io_uring: utilize the io_batch infrastructure for more efficient polled IO Jens Axboe
2021-10-12 18:17 ` [PATCH 9/9] nvme: wire up completion batching for the IRQ path Jens Axboe
2021-10-13  7:12   ` Christoph Hellwig
2021-10-13 15:04     ` Jens Axboe
2021-10-13 16:54 [PATCHSET v2 0/9] Batched completions Jens Axboe
2021-10-13 16:54 ` [PATCH 6/9] nvme: add support for batched completion of polled IO Jens Axboe
2021-10-14  7:43   ` Christoph Hellwig
2021-10-14 15:30     ` Jens Axboe
2021-10-14 15:34       ` Jens Axboe
2021-10-14 16:07       ` Christoph Hellwig
2021-10-14 16:11         ` Jens Axboe
