linux-block.vger.kernel.org archive mirror
* [PATCHSET v2 0/7] block plugging improvements
@ 2018-11-28 13:35 Jens Axboe
  2018-11-28 13:35 ` [PATCH 1/7] block: improve logic around when to sort a plug list Jens Axboe
                   ` (6 more replies)
  0 siblings, 7 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme

This series improves plugging for fast devices; there are some fixes in here too.

Patch 1 changes the logic for when we sort the plug list: we now only
sort if the plug holds requests for multiple queues (and enough
requests to warrant an actual sort).

Patches 2-6 add a ->commit_rqs() hook and implement it in drivers that
use (or will use) bd->last to optimize IO submission. If a driver
currently uses bd->last to decide when to kick the hardware into
action, there are cases where we flag bd->last == false but then fail
to submit any further IO due to other resource constraints. We are
probably saved by the fact that this only happens when we have pending
IO, which will eventually guarantee forward progress, but we really
should kick the IO into gear at that point.

Patch 7 improves plugging for blk-mq.

In terms of improvements, for virtualized nvme I've seen a 2-3x IOPS
improvement from proper handling of bd->last with this series.

Changes since v1:

- Fold the plug case for blk-mq: plug if we have 1 hw queue, or
  if the driver implements ->commit_rqs().
- Add disable/enable IRQ to ataflop
- Drop nvme ring index helper
- Rework when to sort patch

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 1/7] block: improve logic around when to sort a plug list
  2018-11-28 13:35 [PATCHSET v2 0/7] block plugging improvements Jens Axboe
@ 2018-11-28 13:35 ` Jens Axboe
  2018-11-29 15:44   ` Christoph Hellwig
  2018-12-04  1:35   ` Sagi Grimberg
  2018-11-28 13:35 ` [PATCH 2/7] blk-mq: add mq_ops->commit_rqs() Jens Axboe
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

Only sort the plug list if we have requests for multiple queues in the
same plug.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c       |  1 +
 block/blk-mq.c         | 25 +++++++++++++++++++------
 include/linux/blkdev.h |  1 +
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index be9233400314..d107d016b92b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1780,6 +1780,7 @@ void blk_start_plug(struct blk_plug *plug)
 	INIT_LIST_HEAD(&plug->mq_list);
 	INIT_LIST_HEAD(&plug->cb_list);
 	plug->rq_count = 0;
+	plug->multiple_queues = false;
 
 	/*
 	 * Store ordering should not be needed here, since a potential
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a82830f39933..0e2663048d95 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1677,7 +1677,8 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	list_splice_init(&plug->mq_list, &list);
-	plug->rq_count = 0;
 
-	list_sort(NULL, &list, plug_rq_cmp);
+	if (plug->rq_count > 2 && plug->multiple_queues)
+		list_sort(NULL, &list, plug_rq_cmp);
+	plug->rq_count = 0;
 
 	this_q = NULL;
 	this_hctx = NULL;
@@ -1866,6 +1867,20 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 	}
 }
 
+static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
+{
+	list_add_tail(&rq->queuelist, &plug->mq_list);
+	plug->rq_count++;
+	if (!plug->multiple_queues && !list_is_singular(&plug->mq_list)) {
+		struct request *tmp;
+
+		tmp = list_first_entry(&plug->mq_list, struct request,
+						queuelist);
+		if (tmp->q != rq->q)
+			plug->multiple_queues = true;
+	}
+}
+
 static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 {
 	const int is_sync = op_is_sync(bio->bi_opf);
@@ -1932,8 +1947,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 			trace_block_plug(q);
 		}
 
-		list_add_tail(&rq->queuelist, &plug->mq_list);
-		plug->rq_count++;
+		blk_add_rq_to_plug(plug, rq);
 	} else if (plug && !blk_queue_nomerges(q)) {
 		blk_mq_bio_to_request(rq, bio);
 
@@ -1950,8 +1964,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 			list_del_init(&same_queue_rq->queuelist);
 			plug->rq_count--;
 		}
-		list_add_tail(&rq->queuelist, &plug->mq_list);
-		plug->rq_count++;
+		blk_add_rq_to_plug(plug, rq);
 
 		blk_mq_put_ctx(data.ctx);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 02732cae6080..08d940f85fa0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1131,6 +1131,7 @@ struct blk_plug {
 	struct list_head mq_list; /* blk-mq requests */
 	struct list_head cb_list; /* md requires an unplug callback */
 	unsigned short rq_count;
+	bool multiple_queues;
 };
 #define BLK_MAX_REQUEST_COUNT 16
 #define BLK_PLUG_FLUSH_SIZE (128 * 1024)
-- 
2.17.1



* [PATCH 2/7] blk-mq: add mq_ops->commit_rqs()
  2018-11-28 13:35 [PATCHSET v2 0/7] block plugging improvements Jens Axboe
  2018-11-28 13:35 ` [PATCH 1/7] block: improve logic around when to sort a plug list Jens Axboe
@ 2018-11-28 13:35 ` Jens Axboe
  2018-11-29 15:45   ` Christoph Hellwig
  2018-12-04  1:35   ` Sagi Grimberg
  2018-11-28 13:35 ` [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook Jens Axboe
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

blk-mq passes information to the hardware about any given request being
the last that we will issue in this sequence. The point is that hardware
can defer costly doorbell type writes to the last request. But if we run
into errors issuing a sequence of requests, we may never send the request
with bd->last == true set. For that case, we need a hook that tells the
hardware that nothing else is coming right now.

For failures returned by the driver's ->queue_rq() hook, the driver
remains responsible for flushing pending requests, if it uses bd->last
to optimize that part. This works as before; no changes there.

Reviewed-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c         | 16 ++++++++++++++++
 include/linux/blk-mq.h | 10 ++++++++++
 2 files changed, 26 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0e2663048d95..18ff47a85ad3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1259,6 +1259,14 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 	if (!list_empty(list)) {
 		bool needs_restart;
 
+		/*
+		 * If we didn't flush the entire list, we could have told
+		 * the driver there was more coming, but that turned out to
+		 * be a lie.
+		 */
+		if (q->mq_ops->commit_rqs)
+			q->mq_ops->commit_rqs(hctx);
+
 		spin_lock(&hctx->lock);
 		list_splice_init(list, &hctx->dispatch);
 		spin_unlock(&hctx->lock);
@@ -1865,6 +1873,14 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 			blk_mq_end_request(rq, ret);
 		}
 	}
+
+	/*
+	 * If we didn't flush the entire list, we could have told
+	 * the driver there was more coming, but that turned out to
+	 * be a lie.
+	 */
+	if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
+		hctx->queue->mq_ops->commit_rqs(hctx);
 }
 
 static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b8de11e0603b..467f1dd21ccf 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -117,6 +117,7 @@ struct blk_mq_queue_data {
 
 typedef blk_status_t (queue_rq_fn)(struct blk_mq_hw_ctx *,
 		const struct blk_mq_queue_data *);
+typedef void (commit_rqs_fn)(struct blk_mq_hw_ctx *);
 /* takes rq->cmd_flags as input, returns a hardware type index */
 typedef int (rq_flags_to_type_fn)(struct request_queue *, unsigned int);
 typedef bool (get_budget_fn)(struct blk_mq_hw_ctx *);
@@ -144,6 +145,15 @@ struct blk_mq_ops {
 	 */
 	queue_rq_fn		*queue_rq;
 
+	/*
+	 * If a driver uses bd->last to judge when to submit requests to
+	 * hardware, it must define this function. In case of errors that
+	 * make us stop issuing further requests, this hook serves the
+	 * purpose of kicking the hardware (which the last request otherwise
+	 * would have done).
+	 */
+	commit_rqs_fn		*commit_rqs;
+
 	/*
 	 * Return a queue map type for the given request/bio flags
 	 */
-- 
2.17.1



* [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 [PATCHSET v2 0/7] block plugging improvements Jens Axboe
  2018-11-28 13:35 ` [PATCH 1/7] block: improve logic around when to sort a plug list Jens Axboe
  2018-11-28 13:35 ` [PATCH 2/7] blk-mq: add mq_ops->commit_rqs() Jens Axboe
@ 2018-11-28 13:35 ` Jens Axboe
  2018-11-29 15:47   ` Christoph Hellwig
  2018-11-28 13:35 ` [PATCH 4/7] virtio_blk: " Jens Axboe
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

Split the command submission and the SQ doorbell ring, and add the
doorbell ring as our ->commit_rqs() hook. This allows a list of
requests to be issued, with nvme only writing the SQ update when
it's necessary. This is more efficient if we have lists of requests
to issue, particularly on virtualized hardware, where writing the
SQ doorbell is more expensive than on real hardware. For those cases,
performance increases of 2-3x have been observed.

The use case for this is plugged IO, where blk-mq flushes a batch of
requests at a time.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/nvme/host/pci.c | 44 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 73effe586e5f..42472bd0cfed 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -203,6 +203,7 @@ struct nvme_queue {
 	u16 q_depth;
 	s16 cq_vector;
 	u16 sq_tail;
+	u16 last_sq_tail;
 	u16 cq_head;
 	u16 last_cq_head;
 	u16 qid;
@@ -522,22 +523,51 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 	return 0;
 }
 
+static inline void nvme_write_sq_db(struct nvme_queue *nvmeq)
+{
+	if (nvme_dbbuf_update_and_check_event(nvmeq->sq_tail,
+			nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei))
+		writel(nvmeq->sq_tail, nvmeq->q_db);
+	nvmeq->last_sq_tail = nvmeq->sq_tail;
+}
+
 /**
  * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
  * @nvmeq: The queue to use
  * @cmd: The command to send
+ * @write_sq: whether to write to the SQ doorbell
  */
-static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd)
+static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
+			    bool write_sq)
 {
+	u16 next_tail;
+
 	spin_lock(&nvmeq->sq_lock);
 
 	memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
 
 	if (++nvmeq->sq_tail == nvmeq->q_depth)
 		nvmeq->sq_tail = 0;
-	if (nvme_dbbuf_update_and_check_event(nvmeq->sq_tail,
-			nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei))
-		writel(nvmeq->sq_tail, nvmeq->q_db);
+
+	next_tail = nvmeq->sq_tail + 1;
+	if (next_tail == nvmeq->q_depth)
+		next_tail = 0;
+
+	/*
+	 * Write sq tail if we have to, OR if the next command would wrap
+	 */
+	if (write_sq || next_tail == nvmeq->last_sq_tail)
+		nvme_write_sq_db(nvmeq);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
+{
+	struct nvme_queue *nvmeq = hctx->driver_data;
+
+	spin_lock(&nvmeq->sq_lock);
+	if (nvmeq->sq_tail != nvmeq->last_sq_tail)
+		nvme_write_sq_db(nvmeq);
 	spin_unlock(&nvmeq->sq_lock);
 }
 
@@ -923,7 +961,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	}
 
 	blk_mq_start_request(req);
-	nvme_submit_cmd(nvmeq, &cmnd);
+	nvme_submit_cmd(nvmeq, &cmnd, bd->last);
 	return BLK_STS_OK;
 out_cleanup_iod:
 	nvme_free_iod(dev, req);
@@ -1108,7 +1146,7 @@ static void nvme_pci_submit_async_event(struct nvme_ctrl *ctrl)
 	memset(&c, 0, sizeof(c));
 	c.common.opcode = nvme_admin_async_event;
 	c.common.command_id = NVME_AQ_BLK_MQ_DEPTH;
-	nvme_submit_cmd(nvmeq, &c);
+	nvme_submit_cmd(nvmeq, &c, true);
 }
 
 static int adapter_delete_queue(struct nvme_dev *dev, u8 opcode, u16 id)
@@ -1531,6 +1569,7 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
 
 	spin_lock_irq(&nvmeq->cq_lock);
 	nvmeq->sq_tail = 0;
+	nvmeq->last_sq_tail = 0;
 	nvmeq->cq_head = 0;
 	nvmeq->cq_phase = 1;
 	nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
@@ -1603,6 +1642,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 #define NVME_SHARED_MQ_OPS					\
 	.queue_rq		= nvme_queue_rq,		\
+	.commit_rqs		= nvme_commit_rqs,		\
 	.rq_flags_to_type	= nvme_rq_flags_to_type,	\
 	.complete		= nvme_pci_complete_rq,		\
 	.init_hctx		= nvme_init_hctx,		\
-- 
2.17.1



* [PATCH 4/7] virtio_blk: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 [PATCHSET v2 0/7] block plugging improvements Jens Axboe
                   ` (2 preceding siblings ...)
  2018-11-28 13:35 ` [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook Jens Axboe
@ 2018-11-28 13:35 ` Jens Axboe
  2018-12-04  1:36   ` Sagi Grimberg
  2018-11-28 13:35 ` [PATCH 5/7] ataflop: " Jens Axboe
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

We need this for blk-mq to kick things into gear, if we told it that
we had more IO coming, but then failed to deliver on that promise.

Reviewed-by: Omar Sandoval <osandov@fb.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/block/virtio_blk.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 6e869d05f91e..912c4265e592 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -214,6 +214,20 @@ static void virtblk_done(struct virtqueue *vq)
 	spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
 }
 
+static void virtio_commit_rqs(struct blk_mq_hw_ctx *hctx)
+{
+	struct virtio_blk *vblk = hctx->queue->queuedata;
+	struct virtio_blk_vq *vq = &vblk->vqs[hctx->queue_num];
+	bool kick;
+
+	spin_lock_irq(&vq->lock);
+	kick = virtqueue_kick_prepare(vq->vq);
+	spin_unlock_irq(&vq->lock);
+
+	if (kick)
+		virtqueue_notify(vq->vq);
+}
+
 static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 			   const struct blk_mq_queue_data *bd)
 {
@@ -638,6 +652,7 @@ static void virtblk_initialize_rq(struct request *req)
 
 static const struct blk_mq_ops virtio_mq_ops = {
 	.queue_rq	= virtio_queue_rq,
+	.commit_rqs	= virtio_commit_rqs,
 	.complete	= virtblk_request_done,
 	.init_request	= virtblk_init_request,
 #ifdef CONFIG_VIRTIO_BLK_SCSI
-- 
2.17.1



* [PATCH 5/7] ataflop: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 [PATCHSET v2 0/7] block plugging improvements Jens Axboe
                   ` (3 preceding siblings ...)
  2018-11-28 13:35 ` [PATCH 4/7] virtio_blk: " Jens Axboe
@ 2018-11-28 13:35 ` Jens Axboe
  2018-11-29  7:03   ` Ming Lei
                     ` (2 more replies)
  2018-11-28 13:35 ` [PATCH 6/7] blk-mq: use bd->last == true for list inserts Jens Axboe
  2018-11-28 13:35 ` [PATCH 7/7] blk-mq: use plug for devices that implement ->commit_rqs() Jens Axboe
  6 siblings, 3 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

We need this for blk-mq to kick things into gear, if we told it that
we had more IO coming, but then failed to deliver on that promise.

Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 drivers/block/ataflop.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index f88b4c26d422..a0c6745b0034 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -1471,6 +1471,15 @@ static void setup_req_params( int drive )
 			ReqTrack, ReqSector, (unsigned long)ReqData ));
 }
 
+static void ataflop_commit_rqs(struct blk_mq_hw_ctx *hctx)
+{
+	spin_lock_irq(&ataflop_lock);
+	atari_disable_irq(IRQ_MFP_FDC);
+	finish_fdc();
+	atari_enable_irq(IRQ_MFP_FDC);
+	spin_unlock_irq(&ataflop_lock);
+}
+
 static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 				     const struct blk_mq_queue_data *bd)
 {
@@ -1947,6 +1956,7 @@ static const struct block_device_operations floppy_fops = {
 
 static const struct blk_mq_ops ataflop_mq_ops = {
 	.queue_rq = ataflop_queue_rq,
+	.commit_rqs = ataflop_commit_rqs,
 };
 
 static struct kobject *floppy_find(dev_t dev, int *part, void *data)
-- 
2.17.1



* [PATCH 6/7] blk-mq: use bd->last == true for list inserts
  2018-11-28 13:35 [PATCHSET v2 0/7] block plugging improvements Jens Axboe
                   ` (4 preceding siblings ...)
  2018-11-28 13:35 ` [PATCH 5/7] ataflop: " Jens Axboe
@ 2018-11-28 13:35 ` Jens Axboe
  2018-11-29  7:01   ` Ming Lei
                     ` (2 more replies)
  2018-11-28 13:35 ` [PATCH 7/7] blk-mq: use plug for devices that implement ->commit_rqs() Jens Axboe
  6 siblings, 3 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

If we are issuing a list of requests, we know if we're at the last one.
If we fail issuing, ensure that we call ->commit_rqs() to flush any
previously deferred requests.

Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c |  2 +-
 block/blk-mq.c   | 16 ++++++++--------
 block/blk-mq.h   |  2 +-
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d107d016b92b..3f6f5e6c2fe4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1334,7 +1334,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 	 * bypass a potential scheduler on the bottom device for
 	 * insert.
 	 */
-	return blk_mq_request_issue_directly(rq);
+	return blk_mq_request_issue_directly(rq, true);
 }
 EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 18ff47a85ad3..9e1045152e45 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1743,12 +1743,12 @@ static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
 
 static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 					    struct request *rq,
-					    blk_qc_t *cookie)
+					    blk_qc_t *cookie, bool last)
 {
 	struct request_queue *q = rq->q;
 	struct blk_mq_queue_data bd = {
 		.rq = rq,
-		.last = true,
+		.last = last,
 	};
 	blk_qc_t new_cookie;
 	blk_status_t ret;
@@ -1783,7 +1783,7 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 						struct request *rq,
 						blk_qc_t *cookie,
-						bool bypass_insert)
+						bool bypass_insert, bool last)
 {
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
@@ -1812,7 +1812,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 		goto insert;
 	}
 
-	return __blk_mq_issue_directly(hctx, rq, cookie);
+	return __blk_mq_issue_directly(hctx, rq, cookie, last);
 insert:
 	if (bypass_insert)
 		return BLK_STS_RESOURCE;
@@ -1831,7 +1831,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 
 	hctx_lock(hctx, &srcu_idx);
 
-	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
+	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
 	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
 		blk_mq_sched_insert_request(rq, false, true, false);
 	else if (ret != BLK_STS_OK)
@@ -1840,7 +1840,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 	hctx_unlock(hctx, srcu_idx);
 }
 
-blk_status_t blk_mq_request_issue_directly(struct request *rq)
+blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
 {
 	blk_status_t ret;
 	int srcu_idx;
@@ -1848,7 +1848,7 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq)
 	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
 	hctx_lock(hctx, &srcu_idx);
-	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true);
+	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
 	hctx_unlock(hctx, srcu_idx);
 
 	return ret;
@@ -1863,7 +1863,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 				queuelist);
 
 		list_del_init(&rq->queuelist);
-		ret = blk_mq_request_issue_directly(rq);
+		ret = blk_mq_request_issue_directly(rq, list_empty(list));
 		if (ret != BLK_STS_OK) {
 			if (ret == BLK_STS_RESOURCE ||
 					ret == BLK_STS_DEV_RESOURCE) {
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 9ae8e9f8f8b1..7291e5379358 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -69,7 +69,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 				struct list_head *list);
 
 /* Used by blk_insert_cloned_request() to issue request directly */
-blk_status_t blk_mq_request_issue_directly(struct request *rq);
+blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 				    struct list_head *list);
 
-- 
2.17.1



* [PATCH 7/7] blk-mq: use plug for devices that implement ->commit_rqs()
  2018-11-28 13:35 [PATCHSET v2 0/7] block plugging improvements Jens Axboe
                   ` (5 preceding siblings ...)
  2018-11-28 13:35 ` [PATCH 6/7] blk-mq: use bd->last == true for list inserts Jens Axboe
@ 2018-11-28 13:35 ` Jens Axboe
  2018-11-29  7:13   ` Ming Lei
                     ` (2 more replies)
  6 siblings, 3 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-28 13:35 UTC (permalink / raw)
  To: linux-block, linux-nvme; +Cc: Jens Axboe

If a driver implements the ->commit_rqs() hook, we know it handles
bd->last == true in a smart fashion. In that case it's a good idea to
flush batches of requests to the device, even for multiple hardware
queues, whenever the submitter hands us batches.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9e1045152e45..091557a43d53 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1945,7 +1945,12 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		/* bypass scheduler for flush rq */
 		blk_insert_flush(rq);
 		blk_mq_run_hw_queue(data.hctx, true);
-	} else if (plug && q->nr_hw_queues == 1) {
+	} else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
+		/*
+		 * Use plugging if we have a ->commit_rqs() hook as well,
+		 * as we know the driver uses bd->last in a smart
+		 * fashion.
+		 */
 		unsigned int request_count = plug->rq_count;
 		struct request *last = NULL;
 
-- 
2.17.1



* Re: [PATCH 6/7] blk-mq: use bd->last == true for list inserts
  2018-11-28 13:35 ` [PATCH 6/7] blk-mq: use bd->last == true for list inserts Jens Axboe
@ 2018-11-29  7:01   ` Ming Lei
  2018-11-29 15:49   ` Christoph Hellwig
  2018-12-04  1:39   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Ming Lei @ 2018-11-29  7:01 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Wed, Nov 28, 2018 at 06:35:37AM -0700, Jens Axboe wrote:
> If we are issuing a list of requests, we know if we're at the last one.
> If we fail issuing, ensure that we call ->commit_rqs() to flush any
> potential previous requests.
> 
> Reviewed-by: Omar Sandoval <osandov@fb.com>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  block/blk-core.c |  2 +-
>  block/blk-mq.c   | 16 ++++++++--------
>  block/blk-mq.h   |  2 +-
>  3 files changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index d107d016b92b..3f6f5e6c2fe4 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1334,7 +1334,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
>  	 * bypass a potential scheduler on the bottom device for
>  	 * insert.
>  	 */
> -	return blk_mq_request_issue_directly(rq);
> +	return blk_mq_request_issue_directly(rq, true);
>  }
>  EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
>  
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 18ff47a85ad3..9e1045152e45 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1743,12 +1743,12 @@ static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
>  
>  static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
>  					    struct request *rq,
> -					    blk_qc_t *cookie)
> +					    blk_qc_t *cookie, bool last)
>  {
>  	struct request_queue *q = rq->q;
>  	struct blk_mq_queue_data bd = {
>  		.rq = rq,
> -		.last = true,
> +		.last = last,
>  	};
>  	blk_qc_t new_cookie;
>  	blk_status_t ret;
> @@ -1783,7 +1783,7 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
>  static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  						struct request *rq,
>  						blk_qc_t *cookie,
> -						bool bypass_insert)
> +						bool bypass_insert, bool last)
>  {
>  	struct request_queue *q = rq->q;
>  	bool run_queue = true;
> @@ -1812,7 +1812,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  		goto insert;
>  	}
>  
> -	return __blk_mq_issue_directly(hctx, rq, cookie);
> +	return __blk_mq_issue_directly(hctx, rq, cookie, last);
>  insert:
>  	if (bypass_insert)
>  		return BLK_STS_RESOURCE;
> @@ -1831,7 +1831,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  
>  	hctx_lock(hctx, &srcu_idx);
>  
> -	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
> +	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
>  	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
>  		blk_mq_sched_insert_request(rq, false, true, false);
>  	else if (ret != BLK_STS_OK)
> @@ -1840,7 +1840,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>  	hctx_unlock(hctx, srcu_idx);
>  }
>  
> -blk_status_t blk_mq_request_issue_directly(struct request *rq)
> +blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
>  {
>  	blk_status_t ret;
>  	int srcu_idx;
> @@ -1848,7 +1848,7 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq)
>  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
>  
>  	hctx_lock(hctx, &srcu_idx);
> -	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true);
> +	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
>  	hctx_unlock(hctx, srcu_idx);
>  
>  	return ret;
> @@ -1863,7 +1863,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
>  				queuelist);
>  
>  		list_del_init(&rq->queuelist);
> -		ret = blk_mq_request_issue_directly(rq);
> +		ret = blk_mq_request_issue_directly(rq, list_empty(list));
>  		if (ret != BLK_STS_OK) {
>  			if (ret == BLK_STS_RESOURCE ||
>  					ret == BLK_STS_DEV_RESOURCE) {
> diff --git a/block/blk-mq.h b/block/blk-mq.h
> index 9ae8e9f8f8b1..7291e5379358 100644
> --- a/block/blk-mq.h
> +++ b/block/blk-mq.h
> @@ -69,7 +69,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
>  				struct list_head *list);
>  
>  /* Used by blk_insert_cloned_request() to issue request directly */
> -blk_status_t blk_mq_request_issue_directly(struct request *rq);
> +blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
>  void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
>  				    struct list_head *list);
>  
> -- 
> 2.17.1
> 

Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thanks,
Ming


* Re: [PATCH 5/7] ataflop: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 ` [PATCH 5/7] ataflop: " Jens Axboe
@ 2018-11-29  7:03   ` Ming Lei
  2018-11-29 15:47   ` Christoph Hellwig
  2018-12-04  1:37   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Ming Lei @ 2018-11-29  7:03 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Wed, Nov 28, 2018 at 06:35:36AM -0700, Jens Axboe wrote:
> We need this for blk-mq to kick things into gear, if we told it that
> we had more IO coming, but then failed to deliver on that promise.
> 
> Reviewed-by: Omar Sandoval <osandov@fb.com>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  drivers/block/ataflop.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
> index f88b4c26d422..a0c6745b0034 100644
> --- a/drivers/block/ataflop.c
> +++ b/drivers/block/ataflop.c
> @@ -1471,6 +1471,15 @@ static void setup_req_params( int drive )
>  			ReqTrack, ReqSector, (unsigned long)ReqData ));
>  }
>  
> +static void ataflop_commit_rqs(struct blk_mq_hw_ctx *hctx)
> +{
> +	spin_lock_irq(&ataflop_lock);
> +	atari_disable_irq(IRQ_MFP_FDC);
> +	finish_fdc();
> +	atari_enable_irq(IRQ_MFP_FDC);
> +	spin_unlock_irq(&ataflop_lock);
> +}
> +
>  static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
>  				     const struct blk_mq_queue_data *bd)
>  {
> @@ -1947,6 +1956,7 @@ static const struct block_device_operations floppy_fops = {
>  
>  static const struct blk_mq_ops ataflop_mq_ops = {
>  	.queue_rq = ataflop_queue_rq,
> +	.commit_rqs = ataflop_commit_rqs,
>  };
>  
>  static struct kobject *floppy_find(dev_t dev, int *part, void *data)
> -- 
> 2.17.1
> 

Reviewed-by: Ming Lei <ming.lei@redhat.com>

thanks,
Ming


* Re: [PATCH 7/7] blk-mq: use plug for devices that implement ->commit_rqs()
  2018-11-28 13:35 ` [PATCH 7/7] blk-mq: use plug for devices that implement ->commit_rqs() Jens Axboe
@ 2018-11-29  7:13   ` Ming Lei
  2018-11-29 15:49   ` Christoph Hellwig
  2018-12-04  1:39   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Ming Lei @ 2018-11-29  7:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Wed, Nov 28, 2018 at 06:35:38AM -0700, Jens Axboe wrote:
> If we have that hook, we know the driver handles bd->last == true in
> a smart fashion. If it does, even for multiple hardware queues, it's
> a good idea to flush batches of requests to the device, if we have
> batches of requests from the submitter.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  block/blk-mq.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 9e1045152e45..091557a43d53 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1945,7 +1945,12 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
>  		/* bypass scheduler for flush rq */
>  		blk_insert_flush(rq);
>  		blk_mq_run_hw_queue(data.hctx, true);
> -	} else if (plug && q->nr_hw_queues == 1) {
> +	} else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
> +		/*
> +		 * Use plugging if we have a ->commit_rqs() hook as well,
> +		 * as we know the driver uses bd->last in a smart
> +		 * fashion.
> +		 */
>  		unsigned int request_count = plug->rq_count;
>  		struct request *last = NULL;
>  
> -- 
> 2.17.1
> 

Reviewed-by: Ming Lei <ming.lei@redhat.com>

thanks,
Ming
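
The batching behaviour the patch enables — collect requests under a plug, then issue them with bd->last set only on the final one so the driver can defer its doorbell write — can be sketched as a standalone userspace C model (all names here are illustrative, not blk-mq internals):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: requests collect under a plug and are issued as one batch,
 * with last set only on the final request so the driver can defer its
 * doorbell write.  Names are illustrative, not blk-mq's. */
struct toy_dev {
	int pending;		/* accepted with last == false, not yet kicked */
	int doorbells;		/* simulated hardware kicks */
};

struct toy_plug {
	int rq_count;		/* requests accumulated under the plug */
};

static void toy_queue_rq(struct toy_dev *dev, bool last)
{
	dev->pending++;
	if (last) {		/* one kick covers the whole batch */
		dev->doorbells++;
		dev->pending = 0;
	}
}

static void toy_add_to_plug(struct toy_plug *plug)
{
	plug->rq_count++;	/* no hardware interaction yet */
}

static void toy_finish_plug(struct toy_plug *plug, struct toy_dev *dev)
{
	int i;

	for (i = 0; i < plug->rq_count; i++)
		toy_queue_rq(dev, i == plug->rq_count - 1);
	plug->rq_count = 0;
}
```

In this model, eight plugged requests cost one doorbell write; issued one at a time with last == true each, they cost eight. That is why plugging pays off whenever the driver understands bd->last — i.e. whenever it has a ->commit_rqs() hook — and not only when nr_hw_queues == 1.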

* Re: [PATCH 1/7] block: improve logic around when to sort a plug list
  2018-11-28 13:35 ` [PATCH 1/7] block: improve logic around when to sort a plug list Jens Axboe
@ 2018-11-29 15:44   ` Christoph Hellwig
  2018-12-04  1:35   ` Sagi Grimberg
  1 sibling, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 15:44 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 2/7] blk-mq: add mq_ops->commit_rqs()
  2018-11-28 13:35 ` [PATCH 2/7] blk-mq: add mq_ops->commit_rqs() Jens Axboe
@ 2018-11-29 15:45   ` Christoph Hellwig
  2018-11-29 16:52     ` Jens Axboe
  2018-12-04  1:35   ` Sagi Grimberg
  1 sibling, 1 reply; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 15:45 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Wed, Nov 28, 2018 at 06:35:33AM -0700, Jens Axboe wrote:
> blk-mq passes information to the hardware about any given request being
> the last that we will issue in this sequence. The point is that hardware
> can defer costly doorbell type writes to the last request. But if we run
> into errors issuing a sequence of requests, we may never send the request
> with bd->last == true set. For that case, we need a hook that tells the
> hardware that nothing else is coming right now.
> 
> For failures returned by the driver's ->queue_rq() hook, the driver is
> responsible for flushing pending requests, if it uses bd->last to
> optimize that part. This works like before, no changes there.
> 
> Reviewed-by: Omar Sandoval <osandov@fb.com>
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

I don't think I actually reviewed it before in this form.

But now that I took a look it does indeed look fine to me:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 ` [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook Jens Axboe
@ 2018-11-29 15:47   ` Christoph Hellwig
  2018-11-29 17:02     ` Jens Axboe
  0 siblings, 1 reply; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 15:47 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

> +static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
> +{
> +	if (++index == nvmeq->q_depth)
> +		return 0;
> +
> +	return index;
> +}

This is unused now.

Also what about this little cleanup on top?

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 42472bd0cfed..527907aa6903 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -523,22 +523,26 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 	return 0;
 }
 
-static inline void nvme_write_sq_db(struct nvme_queue *nvmeq)
+/*
+ * Write sq tail if we are asked to, or if the next command would wrap.
+ */
+static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
 {
+	if (!write_sq) {
+		u16 next_tail = nvmeq->sq_tail + 1;
+
+		if (next_tail == nvmeq->q_depth)
+			next_tail = 0;
+		if (next_tail != nvmeq->last_sq_tail)
+			return;
+	}
+
 	if (nvme_dbbuf_update_and_check_event(nvmeq->sq_tail,
 			nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei))
 		writel(nvmeq->sq_tail, nvmeq->q_db);
 	nvmeq->last_sq_tail = nvmeq->sq_tail;
 }
 
-static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
-{
-	if (++index == nvmeq->q_depth)
-		return 0;
-
-	return index;
-}
-
 /**
  * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
  * @nvmeq: The queue to use
@@ -548,24 +552,11 @@ static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
 static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
 			    bool write_sq)
 {
-	u16 next_tail;
-
 	spin_lock(&nvmeq->sq_lock);
-
 	memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
-
 	if (++nvmeq->sq_tail == nvmeq->q_depth)
 		nvmeq->sq_tail = 0;
-
-	next_tail = nvmeq->sq_tail + 1;
-	if (next_tail == nvmeq->q_depth)
-		next_tail = 0;
-
-	/*
-	 * Write sq tail if we have to, OR if the next command would wrap
-	 */
-	if (write_sq || next_tail == nvmeq->last_sq_tail)
-		nvme_write_sq_db(nvmeq);
+	nvme_write_sq_db(nvmeq, write_sq);
 	spin_unlock(&nvmeq->sq_lock);
 }
 
@@ -575,7 +566,7 @@ static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
 
 	spin_lock(&nvmeq->sq_lock);
 	if (nvmeq->sq_tail != nvmeq->last_sq_tail)
-		nvme_write_sq_db(nvmeq);
+		nvme_write_sq_db(nvmeq, true);
 	spin_unlock(&nvmeq->sq_lock);
 }
 

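The cleanup folds the wrap check into nvme_write_sq_db(): write the doorbell when explicitly asked, or when the next command would wrap onto the last tail value already written. A userspace model of the resulting tail/doorbell logic (fields mirror the nvme queue, but this is a sketch, not the driver code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical userspace model of the nvme submission-queue fields
 * involved in the cleanup above; not the kernel code itself. */
struct toy_queue {
	uint16_t sq_tail;	/* next free slot in the ring */
	uint16_t last_sq_tail;	/* tail value last written to the doorbell */
	uint16_t q_depth;	/* ring size */
	int doorbell_writes;	/* counts simulated writel() calls */
};

/* Write sq tail doorbell if asked to, or if the next command would wrap. */
static void toy_write_sq_db(struct toy_queue *q, bool write_sq)
{
	if (!write_sq) {
		uint16_t next_tail = q->sq_tail + 1;

		if (next_tail == q->q_depth)
			next_tail = 0;
		if (next_tail != q->last_sq_tail)
			return;
	}
	q->doorbell_writes++;		/* stands in for writel() */
	q->last_sq_tail = q->sq_tail;
}

static void toy_submit_cmd(struct toy_queue *q, bool write_sq)
{
	/* copy into sq_cmds[q->sq_tail] elided */
	if (++q->sq_tail == q->q_depth)
		q->sq_tail = 0;
	toy_write_sq_db(q, write_sq);
}

static void toy_commit_rqs(struct toy_queue *q)
{
	if (q->sq_tail != q->last_sq_tail)
		toy_write_sq_db(q, true);
}
```

A batch submitted with write_sq == false touches the doorbell only when forced to (wrap, or an explicit commit), which is the whole point of honouring bd->last.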
* Re: [PATCH 5/7] ataflop: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 ` [PATCH 5/7] ataflop: " Jens Axboe
  2018-11-29  7:03   ` Ming Lei
@ 2018-11-29 15:47   ` Christoph Hellwig
  2018-12-04  1:37   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 15:47 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Wed, Nov 28, 2018 at 06:35:36AM -0700, Jens Axboe wrote:
> We need this for blk-mq to kick things into gear, if we told it that
> we had more IO coming, but then failed to deliver on that promise.
> 
> Reviewed-by: Omar Sandoval <osandov@fb.com>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 6/7] blk-mq: use bd->last == true for list inserts
  2018-11-28 13:35 ` [PATCH 6/7] blk-mq: use bd->last == true for list inserts Jens Axboe
  2018-11-29  7:01   ` Ming Lei
@ 2018-11-29 15:49   ` Christoph Hellwig
  2018-12-04  1:39   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 15:49 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Wed, Nov 28, 2018 at 06:35:37AM -0700, Jens Axboe wrote:
> If we are issuing a list of requests, we know if we're at the last one.
> If we fail issuing, ensure that we call ->commit_rqs() to flush any
> potential previous requests.
> 
> Reviewed-by: Omar Sandoval <osandov@fb.com>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
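
The failure path this patch covers — requests already accepted with last == false, then a submission failure, after which ->commit_rqs() must flush what the driver is holding — can be modelled in plain C (all names here are hypothetical, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the list-dispatch pattern described above. */
struct toy_drv {
	int capacity;	/* requests ->queue_rq() accepts before failing */
	int queued;	/* accepted with last == false, not yet kicked */
	int kicked;	/* requests flushed to "hardware" */
};

static bool toy_queue_rq(struct toy_drv *drv, bool last)
{
	if (drv->capacity == 0)
		return false;		/* out of resources */
	drv->capacity--;
	drv->queued++;
	if (last) {			/* driver kicks hardware on bd->last */
		drv->kicked += drv->queued;
		drv->queued = 0;
	}
	return true;
}

static void toy_commit_rqs(struct toy_drv *drv)
{
	drv->kicked += drv->queued;	/* flush what was queued with last == false */
	drv->queued = 0;
}

/* Issue a list of n requests; last is set only on the final one. */
static int toy_dispatch_list(struct toy_drv *drv, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!toy_queue_rq(drv, i == n - 1)) {
			/*
			 * We promised more IO (last == false) but failed to
			 * deliver: flush what the driver already holds.
			 */
			if (drv->queued)
				toy_commit_rqs(drv);
			return i;
		}
	}
	return n;
}
```

Without the commit on the failure branch, the first two requests in a failed batch would sit in the driver with no doorbell ever rung for them.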

* Re: [PATCH 7/7] blk-mq: use plug for devices that implement ->commits_rqs()
  2018-11-28 13:35 ` [PATCH 7/7] blk-mq: use plug for devices that implement ->commits_rqs() Jens Axboe
  2018-11-29  7:13   ` Ming Lei
@ 2018-11-29 15:49   ` Christoph Hellwig
  2018-11-29 15:50     ` Christoph Hellwig
  2018-11-29 17:05     ` Jens Axboe
  2018-12-04  1:39   ` Sagi Grimberg
  2 siblings, 2 replies; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 15:49 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

> +		/*
> +		 * Use plugging if we have a ->commit_rqs() hook as well,
> +		 * as we know the driver uses bd->last in a smart
> +		 * fashion.
> +		 */

Nitpick: this could flow on just two lines:

		/*
		 * Use plugging if we have a ->commit_rqs() hook as well, as we
		 * know the driver uses bd->last in a smart fashion.
		 */

* Re: [PATCH 7/7] blk-mq: use plug for devices that implement ->commits_rqs()
  2018-11-29 15:49   ` Christoph Hellwig
@ 2018-11-29 15:50     ` Christoph Hellwig
  2018-11-29 17:05     ` Jens Axboe
  1 sibling, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 15:50 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-nvme

On Thu, Nov 29, 2018 at 07:49:59AM -0800, Christoph Hellwig wrote:
> > +		/*
> > +		 * Use plugging if we have a ->commit_rqs() hook as well,
> > +		 * as we know the driver uses bd->last in a smart
> > +		 * fashion.
> > +		 */
> 
> Nitpick: this could flow on just two lines:
> 
> 		/*
> 		 * Use plugging if we have a ->commit_rqs() hook as well, as we
> 		 * know the driver uses bd->last in a smart fashion.
> 		 */

But of course otherwise this looks fine:

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 2/7] blk-mq: add mq_ops->commit_rqs()
  2018-11-29 15:45   ` Christoph Hellwig
@ 2018-11-29 16:52     ` Jens Axboe
  0 siblings, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-29 16:52 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-nvme

On 11/29/18 8:45 AM, Christoph Hellwig wrote:
> On Wed, Nov 28, 2018 at 06:35:33AM -0700, Jens Axboe wrote:
>> blk-mq passes information to the hardware about any given request being
>> the last that we will issue in this sequence. The point is that hardware
>> can defer costly doorbell type writes to the last request. But if we run
>> into errors issuing a sequence of requests, we may never send the request
>> with bd->last == true set. For that case, we need a hook that tells the
>> hardware that nothing else is coming right now.
>>
>> For failures returned by the driver's ->queue_rq() hook, the driver is
>> responsible for flushing pending requests, if it uses bd->last to
>> optimize that part. This works like before, no changes there.
>>
>> Reviewed-by: Omar Sandoval <osandov@fb.com>
>> Reviewed-by: Ming Lei <ming.lei@redhat.com>
>> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> I don't think I actually reviewed it before in this form.
> 
> But now that I took a look it does indeed look fine to me:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Only change is moving those two hunks from the later patch into this
one, so it isn't an empty declaration.

-- 
Jens Axboe


* Re: [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook
  2018-11-29 15:47   ` Christoph Hellwig
@ 2018-11-29 17:02     ` Jens Axboe
  2018-11-29 17:04       ` Christoph Hellwig
  0 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-11-29 17:02 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-nvme

On 11/29/18 8:47 AM, Christoph Hellwig wrote:
>> +static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
>> +{
>> +	if (++index == nvmeq->q_depth)
>> +		return 0;
>> +
>> +	return index;
>> +}
> 
> This is unused now.

Huh, wonder how I missed that. GCC must not have complained.

> Also what about this little cleanup on top?

That looks good, I like it. With that, can I add your reviewed-by? I'll
run a sanity check on it first.

-- 
Jens Axboe


* Re: [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook
  2018-11-29 17:02     ` Jens Axboe
@ 2018-11-29 17:04       ` Christoph Hellwig
  2018-11-29 17:06         ` Jens Axboe
  0 siblings, 1 reply; 31+ messages in thread
From: Christoph Hellwig @ 2018-11-29 17:04 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, linux-block, linux-nvme

On Thu, Nov 29, 2018 at 10:02:25AM -0700, Jens Axboe wrote:
> On 11/29/18 8:47 AM, Christoph Hellwig wrote:
> >> +static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
> >> +{
> >> +	if (++index == nvmeq->q_depth)
> >> +		return 0;
> >> +
> >> +	return index;
> >> +}
> > 
> > This is unused now.
> 
> Huh, wonder how I missed that. GCC must not have complained.

gcc never warns about unused static inline functions.  Which makes a lot
of sense at least for headers..

> 
> > Also what about this little cleanup on top?
> 
> That looks good, I like it. With that, can I add your reviewed-by? I'll
> run a sanity check on it first.

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH 7/7] blk-mq: use plug for devices that implement ->commits_rqs()
  2018-11-29 15:49   ` Christoph Hellwig
  2018-11-29 15:50     ` Christoph Hellwig
@ 2018-11-29 17:05     ` Jens Axboe
  1 sibling, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-29 17:05 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-nvme

On 11/29/18 8:49 AM, Christoph Hellwig wrote:
>> +		/*
>> +		 * Use plugging if we have a ->commit_rqs() hook as well,
>> +		 * as we know the driver uses bd->last in a smart
>> +		 * fashion.
>> +		 */
> 
> Nipick: this could flow on just two lines:
> 
> 		/*
> 		 * Use plugging if we have a ->commit_rqs() hook as well, as we
> 		 * know the driver uses bd->last in a smart fashion.
> 		 */

Fixed up.

-- 
Jens Axboe


* Re: [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook
  2018-11-29 17:04       ` Christoph Hellwig
@ 2018-11-29 17:06         ` Jens Axboe
  2018-11-29 17:38           ` Keith Busch
  0 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2018-11-29 17:06 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, linux-nvme

On 11/29/18 10:04 AM, Christoph Hellwig wrote:
> On Thu, Nov 29, 2018 at 10:02:25AM -0700, Jens Axboe wrote:
>> On 11/29/18 8:47 AM, Christoph Hellwig wrote:
>>>> +static inline int nvme_next_ring_index(struct nvme_queue *nvmeq, u16 index)
>>>> +{
>>>> +	if (++index == nvmeq->q_depth)
>>>> +		return 0;
>>>> +
>>>> +	return index;
>>>> +}
>>>
>>> This is unused now.
>>
>> Huh, wonder how I missed that. GCC must not have complained.
> 
> gcc never warns about unused static inline functions.  Which makes a lot
> of sense at least for headers..

Not so much for non-headers :-)

>>> Also what about this little cleanup on top?
>>
>> That looks good, I like it. With that, can I add your reviewed-by? I'll
>> run a sanity check on it first.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thanks!

-- 
Jens Axboe


* Re: [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook
  2018-11-29 17:06         ` Jens Axboe
@ 2018-11-29 17:38           ` Keith Busch
  2018-11-29 17:42             ` Jens Axboe
  0 siblings, 1 reply; 31+ messages in thread
From: Keith Busch @ 2018-11-29 17:38 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, linux-block, linux-nvme

On Thu, Nov 29, 2018 at 10:06:20AM -0700, Jens Axboe wrote:
> On 11/29/18 10:04 AM, Christoph Hellwig wrote:
> > gcc never warns about unused static inline functions.  Which makes a lot
> > of sense at least for headers..
> 
> Not so much for non-headers :-)

You can #include .c files too! :)

* Re: [PATCH 3/7] nvme: implement mq_ops->commit_rqs() hook
  2018-11-29 17:38           ` Keith Busch
@ 2018-11-29 17:42             ` Jens Axboe
  0 siblings, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2018-11-29 17:42 UTC (permalink / raw)
  To: Keith Busch; +Cc: Christoph Hellwig, linux-block, linux-nvme

On 11/29/18 10:38 AM, Keith Busch wrote:
> On Thu, Nov 29, 2018 at 10:06:20AM -0700, Jens Axboe wrote:
>> On 11/29/18 10:04 AM, Christoph Hellwig wrote:
>>> gcc never warns about unused static inline functions.  Which makes a lot
>>> of sense at least for headers..
>>
>> Not so much for non-headers :-)
> 
> You can #include .c files too! :)

There's always that one guy...

-- 
Jens Axboe


* Re: [PATCH 1/7] block: improve logic around when to sort a plug list
  2018-11-28 13:35 ` [PATCH 1/7] block: improve logic around when to sort a plug list Jens Axboe
  2018-11-29 15:44   ` Christoph Hellwig
@ 2018-12-04  1:35   ` Sagi Grimberg
  1 sibling, 0 replies; 31+ messages in thread
From: Sagi Grimberg @ 2018-12-04  1:35 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

* Re: [PATCH 2/7] blk-mq: add mq_ops->commit_rqs()
  2018-11-28 13:35 ` [PATCH 2/7] blk-mq: add mq_ops->commit_rqs() Jens Axboe
  2018-11-29 15:45   ` Christoph Hellwig
@ 2018-12-04  1:35   ` Sagi Grimberg
  1 sibling, 0 replies; 31+ messages in thread
From: Sagi Grimberg @ 2018-12-04  1:35 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

* Re: [PATCH 4/7] virtio_blk: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 ` [PATCH 4/7] virtio_blk: " Jens Axboe
@ 2018-12-04  1:36   ` Sagi Grimberg
  0 siblings, 0 replies; 31+ messages in thread
From: Sagi Grimberg @ 2018-12-04  1:36 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

* Re: [PATCH 5/7] ataflop: implement mq_ops->commit_rqs() hook
  2018-11-28 13:35 ` [PATCH 5/7] ataflop: " Jens Axboe
  2018-11-29  7:03   ` Ming Lei
  2018-11-29 15:47   ` Christoph Hellwig
@ 2018-12-04  1:37   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Sagi Grimberg @ 2018-12-04  1:37 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

* Re: [PATCH 7/7] blk-mq: use plug for devices that implement ->commits_rqs()
  2018-11-28 13:35 ` [PATCH 7/7] blk-mq: use plug for devices that implement ->commits_rqs() Jens Axboe
  2018-11-29  7:13   ` Ming Lei
  2018-11-29 15:49   ` Christoph Hellwig
@ 2018-12-04  1:39   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Sagi Grimberg @ 2018-12-04  1:39 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

* Re: [PATCH 6/7] blk-mq: use bd->last == true for list inserts
  2018-11-28 13:35 ` [PATCH 6/7] blk-mq: use bd->last == true for list inserts Jens Axboe
  2018-11-29  7:01   ` Ming Lei
  2018-11-29 15:49   ` Christoph Hellwig
@ 2018-12-04  1:39   ` Sagi Grimberg
  2 siblings, 0 replies; 31+ messages in thread
From: Sagi Grimberg @ 2018-12-04  1:39 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
