* [PATCHSET RFC 0/3] Add plug based request allocation batching
@ 2021-10-06 16:35 Jens Axboe
  2021-10-06 16:35 ` [PATCH 1/3] block: bump max plugged deferred size from 16 to 32 Jens Axboe
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Jens Axboe @ 2021-10-06 16:35 UTC (permalink / raw)
  To: linux-block

Hi,

Even if the caller knows that N requests will be submitted, we still
have to go all the way to tag allocation for each one. That's somewhat
inefficient, given that we could just grab as many tags as we need up
front instead.

This small series allows request caching in the blk_plug. We add a new
helper for passing in how many requests we expect to submit,
blk_start_plug_nr_ios(), and then we can use that information to
populate the cache on the first request allocation. Subsequent queue
attempts can then just grab a pre-allocated request out of the cache
rather than go into the tag allocation again.
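
As a rough sketch of the intended usage pattern (illustrative only, not
taken from the patches; blk_start_plug_nr_ios() and blk_finish_plug() are
the real interfaces involved, while submit_bio_batch() is a made-up
caller):

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * A submitter that knows its batch size tells the plug up front, so the
 * first request allocation can grab all the tags in one go and later
 * submissions are served from the plug's request cache.
 */
static void submit_bio_batch(struct bio **bios, unsigned short nr_ios)
{
        struct blk_plug plug;
        unsigned short i;

        blk_start_plug_nr_ios(&plug, nr_ios);
        for (i = 0; i < nr_ios; i++)
                submit_bio(bios[i]);    /* may be served from plug->cached_rq */
        blk_finish_plug(&plug);         /* frees any cached requests left over */
}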

This brings single core performance on my setup from:

sudo taskset -c 0,16  t/io_uring -b512 -d128 -s32 -c32 -p1 -F1 -B1 -n2 /dev/nvme1n1 /dev/nvme2n1
Added file /dev/nvme1n1 (submitter 0)
Added file /dev/nvme2n1 (submitter 1)
polled=1, fixedbufs=1, register_files=1, buffered=0, QD=128
Engine=io_uring, sq_ring=128, cq_ring=256
submitter=0, tid=1206
submitter=1, tid=1207
IOPS=5806400, BW=2835MiB/s, IOS/call=32/32, inflight=(77 32)
IOPS=5860352, BW=2861MiB/s, IOS/call=32/31, inflight=(102 128)
IOPS=5844800, BW=2853MiB/s, IOS/call=32/32, inflight=(102 32)

to:

taskset -c 0,16  t/io_uring -b512 -d128 -s32 -c32 -p1 -F1 -B1 -n2 /dev/nvme1n1 /dev/nvme2n1
Added file /dev/nvme1n1 (submitter 0)
Added file /dev/nvme2n1 (submitter 1)
polled=1, fixedbufs=1, register_files=1, buffered=0, QD=128
Engine=io_uring, sq_ring=128, cq_ring=256
submitter=0, tid=1220
submitter=1, tid=1221
IOPS=6061248, BW=2959MiB/s, IOS/call=32/31, inflight=(114 82)
IOPS=6100672, BW=2978MiB/s, IOS/call=32/31, inflight=(128 128)
IOPS=6082496, BW=2969MiB/s, IOS/call=32/32, inflight=(77 38)

which is about a 4-5% improvement in single core performance. Looking
at profiles, we're spending _less_ time in sbitmap even though we're
doing more work.

-- 
Jens Axboe




* [PATCH 1/3] block: bump max plugged deferred size from 16 to 32
  2021-10-06 16:35 [PATCHSET RFC 0/3] Add plug based request allocation batching Jens Axboe
@ 2021-10-06 16:35 ` Jens Axboe
  2021-10-06 17:55   ` Bart Van Assche
  2021-10-06 16:35 ` [PATCH 2/3] block: pre-allocate requests if plug is started and is a batch Jens Axboe
  2021-10-06 16:35 ` [PATCH 3/3] io_uring: inform block layer of how many requests we are submitting Jens Axboe
  2 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2021-10-06 16:35 UTC (permalink / raw)
  To: linux-block; +Cc: Jens Axboe

Particularly for NVMe with efficient deferred submission for many
requests, there are nice benefits to be seen by bumping the default max
plug count from 16 to 32. This is especially true for virtualized setups,
where the submit part is more expensive, but it can be noticed even on
native hardware.

Reduce the multiple queue factor from 4 to 2, since we're changing the
default size. This keeps the effective cap for multiple queues at 64
requests, while the single-queue cap doubles from 16 to 32.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c         | 4 ++--
 include/linux/blkdev.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a40c94505680..5327abbefbab 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2145,14 +2145,14 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
 }
 
 /*
- * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
+ * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
  * queues. This is important for md arrays to benefit from merging
  * requests.
  */
 static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
 {
 	if (plug->multiple_queues)
-		return BLK_MAX_REQUEST_COUNT * 4;
+		return BLK_MAX_REQUEST_COUNT * 2;
 	return BLK_MAX_REQUEST_COUNT;
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b19172db7eef..534298ac73cc 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -727,7 +727,7 @@ struct blk_plug {
 	bool multiple_queues;
 	bool nowait;
 };
-#define BLK_MAX_REQUEST_COUNT 16
+#define BLK_MAX_REQUEST_COUNT 32
 #define BLK_PLUG_FLUSH_SIZE (128 * 1024)
 
 struct blk_plug_cb;
-- 
2.33.0



* [PATCH 2/3] block: pre-allocate requests if plug is started and is a batch
  2021-10-06 16:35 [PATCHSET RFC 0/3] Add plug based request allocation batching Jens Axboe
  2021-10-06 16:35 ` [PATCH 1/3] block: bump max plugged deferred size from 16 to 32 Jens Axboe
@ 2021-10-06 16:35 ` Jens Axboe
  2021-10-06 18:24   ` Bart Van Assche
  2021-10-06 16:35 ` [PATCH 3/3] io_uring: inform block layer of how many requests we are submitting Jens Axboe
  2 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2021-10-06 16:35 UTC (permalink / raw)
  To: linux-block; +Cc: Jens Axboe

The caller typically has a good (or even exact) idea of how many requests
it needs to submit. We can make the request/tag allocation a lot more
efficient if we just allocate N requests/tags upfront when we queue the
first bio from the batch.

Provide a new plug start helper that allows the caller to specify how many
IOs are expected. This sets plug->nr_ios, and we can use that for smarter
request allocation. The plug provides a holding spot for requests, and
request allocation will check it before calling into the normal request
allocation path.
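
In simplified form, the cache lookup amounts to the following (a sketch
using the field names this patch adds; in the actual diff below the logic
is open-coded in blk_mq_submit_bio(), and blk_plug_get_cached_rq() is just
an illustrative name):

/*
 * Pop one pre-allocated request off the plug's singly-linked cached_rq
 * list, or return NULL so the caller falls back to normal tag allocation.
 */
static struct request *blk_plug_get_cached_rq(struct blk_plug *plug)
{
        struct request *rq = plug ? plug->cached_rq : NULL;

        if (rq)
                plug->cached_rq = rq->rq_next;
        return rq;
}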

When blk_finish_plug() is called, check if there are unused requests and
free them. This should not happen in normal operation; the exception is
if we get merging, in which case we may be left with requests that need
freeing when done.

This raises the per-core performance on my setup from ~5.8M to ~6.1M
IOPS.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c          | 47 +++++++++++++++-----------
 block/blk-mq.c            | 70 ++++++++++++++++++++++++++++++++-------
 block/blk-mq.h            |  5 +++
 include/linux/blk-mq.h    |  5 ++-
 include/linux/blk_types.h |  1 +
 include/linux/blkdev.h    | 15 ++++++++-
 6 files changed, 110 insertions(+), 33 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d83e56b2f64e..9b8c70670190 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1624,6 +1624,31 @@ int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
 }
 EXPORT_SYMBOL(kblockd_mod_delayed_work_on);
 
+void blk_start_plug_nr_ios(struct blk_plug *plug, unsigned short nr_ios)
+{
+	struct task_struct *tsk = current;
+
+	/*
+	 * If this is a nested plug, don't actually assign it.
+	 */
+	if (tsk->plug)
+		return;
+
+	INIT_LIST_HEAD(&plug->mq_list);
+	plug->cached_rq = NULL;
+	plug->nr_ios = min_t(unsigned short, nr_ios, BLK_MAX_REQUEST_COUNT);
+	plug->rq_count = 0;
+	plug->multiple_queues = false;
+	plug->nowait = false;
+	INIT_LIST_HEAD(&plug->cb_list);
+
+	/*
+	 * Store ordering should not be needed here, since a potential
+	 * preempt will imply a full memory barrier
+	 */
+	tsk->plug = plug;
+}
+
 /**
  * blk_start_plug - initialize blk_plug and track it inside the task_struct
  * @plug:	The &struct blk_plug that needs to be initialized
@@ -1649,25 +1674,7 @@ EXPORT_SYMBOL(kblockd_mod_delayed_work_on);
  */
 void blk_start_plug(struct blk_plug *plug)
 {
-	struct task_struct *tsk = current;
-
-	/*
-	 * If this is a nested plug, don't actually assign it.
-	 */
-	if (tsk->plug)
-		return;
-
-	INIT_LIST_HEAD(&plug->mq_list);
-	INIT_LIST_HEAD(&plug->cb_list);
-	plug->rq_count = 0;
-	plug->multiple_queues = false;
-	plug->nowait = false;
-
-	/*
-	 * Store ordering should not be needed here, since a potential
-	 * preempt will imply a full memory barrier
-	 */
-	tsk->plug = plug;
+	blk_start_plug_nr_ios(plug, 1);
 }
 EXPORT_SYMBOL(blk_start_plug);
 
@@ -1719,6 +1726,8 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 
 	if (!list_empty(&plug->mq_list))
 		blk_mq_flush_plug_list(plug, from_schedule);
+	if (unlikely(!from_schedule && plug->cached_rq))
+		blk_mq_free_plug_rqs(plug);
 }
 
 /**
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5327abbefbab..ced94eb8e297 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -352,6 +352,7 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
 	struct request_queue *q = data->q;
 	struct elevator_queue *e = q->elevator;
 	u64 alloc_time_ns = 0;
+	struct request *rq;
 	unsigned int tag;
 
 	/* alloc_time includes depth and tag waits */
@@ -385,10 +386,21 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
 	 * case just retry the hctx assignment and tag allocation as CPU hotplug
 	 * should have migrated us to an online CPU by now.
 	 */
-	tag = blk_mq_get_tag(data);
-	if (tag == BLK_MQ_NO_TAG) {
+	do {
+		tag = blk_mq_get_tag(data);
+		if (tag != BLK_MQ_NO_TAG) {
+			rq = blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
+			if (!--data->nr_tags)
+				return rq;
+			if (e || data->hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
+				return rq;
+			rq->rq_next = *data->cached_rq;
+			*data->cached_rq = rq;
+			data->flags |= BLK_MQ_REQ_NOWAIT;
+			continue;
+		}
 		if (data->flags & BLK_MQ_REQ_NOWAIT)
-			return NULL;
+			break;
 
 		/*
 		 * Give up the CPU and sleep for a random short time to ensure
@@ -397,8 +409,15 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
 		 */
 		msleep(3);
 		goto retry;
+	} while (1);
+
+	if (data->cached_rq) {
+		rq = *data->cached_rq;
+		*data->cached_rq = rq->rq_next;
+		return rq;
 	}
-	return blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
+
+	return NULL;
 }
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
@@ -408,6 +427,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 		.q		= q,
 		.flags		= flags,
 		.cmd_flags	= op,
+		.nr_tags	= 1,
 	};
 	struct request *rq;
 	int ret;
@@ -436,6 +456,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 		.q		= q,
 		.flags		= flags,
 		.cmd_flags	= op,
+		.nr_tags	= 1,
 	};
 	u64 alloc_time_ns = 0;
 	unsigned int cpu;
@@ -537,6 +558,18 @@ void blk_mq_free_request(struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_mq_free_request);
 
+void blk_mq_free_plug_rqs(struct blk_plug *plug)
+{
+	while (plug->cached_rq) {
+		struct request *rq;
+
+		rq = plug->cached_rq;
+		plug->cached_rq = rq->rq_next;
+		percpu_ref_get(&rq->q->q_usage_counter);
+		blk_mq_free_request(rq);
+	}
+}
+
 inline void __blk_mq_end_request(struct request *rq, blk_status_t error)
 {
 	u64 now = 0;
@@ -2178,6 +2211,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 	const int is_flush_fua = op_is_flush(bio->bi_opf);
 	struct blk_mq_alloc_data data = {
 		.q		= q,
+		.nr_tags	= 1,
 	};
 	struct request *rq;
 	struct blk_plug *plug;
@@ -2204,13 +2238,26 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 
 	hipri = bio->bi_opf & REQ_HIPRI;
 
-	data.cmd_flags = bio->bi_opf;
-	rq = __blk_mq_alloc_request(&data);
-	if (unlikely(!rq)) {
-		rq_qos_cleanup(q, bio);
-		if (bio->bi_opf & REQ_NOWAIT)
-			bio_wouldblock_error(bio);
-		goto queue_exit;
+	plug = blk_mq_plug(q, bio);
+	if (plug && plug->cached_rq) {
+		rq = plug->cached_rq;
+		plug->cached_rq = rq->rq_next;
+		INIT_LIST_HEAD(&rq->queuelist);
+		data.hctx = rq->mq_hctx;
+	} else {
+		data.cmd_flags = bio->bi_opf;
+		if (plug) {
+			data.nr_tags = plug->nr_ios;
+			plug->nr_ios = 1;
+			data.cached_rq = &plug->cached_rq;
+		}
+		rq = __blk_mq_alloc_request(&data);
+		if (unlikely(!rq)) {
+			rq_qos_cleanup(q, bio);
+			if (bio->bi_opf & REQ_NOWAIT)
+				bio_wouldblock_error(bio);
+			goto queue_exit;
+		}
 	}
 
 	trace_block_getrq(bio);
@@ -2229,7 +2276,6 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 		return BLK_QC_T_NONE;
 	}
 
-	plug = blk_mq_plug(q, bio);
 	if (unlikely(is_flush_fua)) {
 		/* Bypass scheduler for flush requests */
 		blk_insert_flush(rq);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 171e8cdcff54..5da970bb8865 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -125,6 +125,7 @@ extern int __blk_mq_register_dev(struct device *dev, struct request_queue *q);
 extern int blk_mq_sysfs_register(struct request_queue *q);
 extern void blk_mq_sysfs_unregister(struct request_queue *q);
 extern void blk_mq_hctx_kobj_init(struct blk_mq_hw_ctx *hctx);
+void blk_mq_free_plug_rqs(struct blk_plug *plug);
 
 void blk_mq_release(struct request_queue *q);
 
@@ -152,6 +153,10 @@ struct blk_mq_alloc_data {
 	unsigned int shallow_depth;
 	unsigned int cmd_flags;
 
+	/* allocate multiple requests/tags in one go */
+	unsigned int nr_tags;
+	struct request **cached_rq;
+
 	/* input & output parameter */
 	struct blk_mq_ctx *ctx;
 	struct blk_mq_hw_ctx *hctx;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 75d75657df21..0e941f217578 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -90,7 +90,10 @@ struct request {
 	struct bio *bio;
 	struct bio *biotail;
 
-	struct list_head queuelist;
+	union {
+		struct list_head queuelist;
+		struct request *rq_next;
+	};
 
 	/*
 	 * The hash is used inside the scheduler, and killed once the
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 3b967053e9f5..4b2006ec8bf2 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -22,6 +22,7 @@ struct bio_crypt_ctx;
 
 struct block_device {
 	sector_t		bd_start_sect;
+	sector_t		bd_nr_sectors;
 	struct disk_stats __percpu *bd_stats;
 	unsigned long		bd_stamp;
 	bool			bd_read_only;	/* read-only policy */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 534298ac73cc..39b58b223f84 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -722,10 +722,17 @@ extern void blk_set_queue_dying(struct request_queue *);
  */
 struct blk_plug {
 	struct list_head mq_list; /* blk-mq requests */
-	struct list_head cb_list; /* md requires an unplug callback */
+
+	/* if ios_left is > 1, we can batch tag/rq allocations */
+	struct request *cached_rq;
+	unsigned short nr_ios;
+
 	unsigned short rq_count;
+
 	bool multiple_queues;
 	bool nowait;
+
+	struct list_head cb_list; /* md requires an unplug callback */
 };
 #define BLK_MAX_REQUEST_COUNT 32
 #define BLK_PLUG_FLUSH_SIZE (128 * 1024)
@@ -740,6 +747,7 @@ struct blk_plug_cb {
 extern struct blk_plug_cb *blk_check_plugged(blk_plug_cb_fn unplug,
 					     void *data, int size);
 extern void blk_start_plug(struct blk_plug *);
+extern void blk_start_plug_nr_ios(struct blk_plug *, unsigned short);
 extern void blk_finish_plug(struct blk_plug *);
 extern void blk_flush_plug_list(struct blk_plug *, bool);
 
@@ -774,6 +782,11 @@ long nr_blockdev_pages(void);
 struct blk_plug {
 };
 
+static inline void blk_start_plug_nr_ios(struct blk_plug *plug,
+					 unsigned short nr_ios)
+{
+}
+
 static inline void blk_start_plug(struct blk_plug *plug)
 {
 }
-- 
2.33.0



* [PATCH 3/3] io_uring: inform block layer of how many requests we are submitting
  2021-10-06 16:35 [PATCHSET RFC 0/3] Add plug based request allocation batching Jens Axboe
  2021-10-06 16:35 ` [PATCH 1/3] block: bump max plugged deferred size from 16 to 32 Jens Axboe
  2021-10-06 16:35 ` [PATCH 2/3] block: pre-allocate requests if plug is started and is a batch Jens Axboe
@ 2021-10-06 16:35 ` Jens Axboe
  2 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2021-10-06 16:35 UTC (permalink / raw)
  To: linux-block; +Cc: Jens Axboe

The block layer can use this knowledge to make smarter decisions on
how to handle requests if it knows that N more may be coming. Switch
to using blk_start_plug_nr_ios() to pass in that information.
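
For context, this is the kind of userspace submission pattern that ends
up exercising the batching: queueing several SQEs and submitting them with
a single io_uring_submit() call is what makes max_ios large enough for the
plug to be started with the whole batch size. A liburing sketch for
illustration only (submit_read_batch() is a made-up name, not part of this
patch):

#include <liburing.h>

static int submit_read_batch(struct io_uring *ring, int fd, void **bufs,
                             unsigned int nr, unsigned int blksz)
{
        unsigned int i;

        for (i = 0; i < nr; i++) {
                struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

                if (!sqe)
                        break;
                /* one read per SQE, laid out back to back on disk */
                io_uring_prep_read(sqe, fd, bufs[i], blksz,
                                   (__u64)i * blksz);
        }
        /* one syscall submits everything queued above */
        return io_uring_submit(ring);
}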

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 62dc128e9b6b..c35b0f230be8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -314,6 +314,8 @@ struct io_submit_state {
 	bool			plug_started;
 	bool			need_plug;
 
+	unsigned short		submit_nr;
+
 	/*
 	 * Batch completion logic
 	 */
@@ -7035,7 +7037,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	 * is potentially a read/write to block based storage.
 	 */
 	if (state->need_plug && io_op_defs[req->opcode].plug) {
-		blk_start_plug(&state->plug);
+		blk_start_plug_nr_ios(&state->plug, state->submit_nr);
 		state->plug_started = true;
 		state->need_plug = false;
 	}
@@ -7150,6 +7152,7 @@ static void io_submit_state_start(struct io_submit_state *state,
 {
 	state->plug_started = false;
 	state->need_plug = max_ios > 2;
+	state->submit_nr = max_ios;
 	/* set only head, no need to init link_last in advance */
 	state->link.head = NULL;
 }
-- 
2.33.0



* Re: [PATCH 1/3] block: bump max plugged deferred size from 16 to 32
  2021-10-06 16:35 ` [PATCH 1/3] block: bump max plugged deferred size from 16 to 32 Jens Axboe
@ 2021-10-06 17:55   ` Bart Van Assche
  2021-10-06 18:05     ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: Bart Van Assche @ 2021-10-06 17:55 UTC (permalink / raw)
  To: Jens Axboe, linux-block

On 10/6/21 9:35 AM, Jens Axboe wrote:
> Particularly for NVMe with efficient deferred submission for many
> requests, there are nice benefits to be seen by bumping the default max
> plug count from 16 to 32. This is especially true for virtualized setups,
> where the submit part is more expensive. But can be noticed even on
> native hardware.
> 
> Reduce the multiple queue factor from 4 to 2, since we're changing the
> default size.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   block/blk-mq.c         | 4 ++--
>   include/linux/blkdev.h | 2 +-
>   2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index a40c94505680..5327abbefbab 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2145,14 +2145,14 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
>   }
>   
>   /*
> - * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
> + * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
>    * queues. This is important for md arrays to benefit from merging
>    * requests.
>    */
>   static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
>   {
>   	if (plug->multiple_queues)
> -		return BLK_MAX_REQUEST_COUNT * 4;
> +		return BLK_MAX_REQUEST_COUNT * 2;
>   	return BLK_MAX_REQUEST_COUNT;
>   }
>   
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index b19172db7eef..534298ac73cc 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -727,7 +727,7 @@ struct blk_plug {
>   	bool multiple_queues;
>   	bool nowait;
>   };
> -#define BLK_MAX_REQUEST_COUNT 16
> +#define BLK_MAX_REQUEST_COUNT 32
>   #define BLK_PLUG_FLUSH_SIZE (128 * 1024)
>   
>   struct blk_plug_cb;

Since BLK_MAX_REQUEST_COUNT is only used inside the block layer core but 
not by any block driver, can it be moved from include/linux/blkdev.h 
into block/blk-mq.h?

Thanks,

Bart.




* Re: [PATCH 1/3] block: bump max plugged deferred size from 16 to 32
  2021-10-06 17:55   ` Bart Van Assche
@ 2021-10-06 18:05     ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2021-10-06 18:05 UTC (permalink / raw)
  To: Bart Van Assche, linux-block

On 10/6/21 11:55 AM, Bart Van Assche wrote:
> On 10/6/21 9:35 AM, Jens Axboe wrote:
>> Particularly for NVMe with efficient deferred submission for many
>> requests, there are nice benefits to be seen by bumping the default max
>> plug count from 16 to 32. This is especially true for virtualized setups,
>> where the submit part is more expensive. But can be noticed even on
>> native hardware.
>>
>> Reduce the multiple queue factor from 4 to 2, since we're changing the
>> default size.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>>   block/blk-mq.c         | 4 ++--
>>   include/linux/blkdev.h | 2 +-
>>   2 files changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index a40c94505680..5327abbefbab 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -2145,14 +2145,14 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
>>   }
>>   
>>   /*
>> - * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
>> + * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
>>    * queues. This is important for md arrays to benefit from merging
>>    * requests.
>>    */
>>   static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
>>   {
>>   	if (plug->multiple_queues)
>> -		return BLK_MAX_REQUEST_COUNT * 4;
>> +		return BLK_MAX_REQUEST_COUNT * 2;
>>   	return BLK_MAX_REQUEST_COUNT;
>>   }
>>   
>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>> index b19172db7eef..534298ac73cc 100644
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -727,7 +727,7 @@ struct blk_plug {
>>   	bool multiple_queues;
>>   	bool nowait;
>>   };
>> -#define BLK_MAX_REQUEST_COUNT 16
>> +#define BLK_MAX_REQUEST_COUNT 32
>>   #define BLK_PLUG_FLUSH_SIZE (128 * 1024)
>>   
>>   struct blk_plug_cb;
> 
> Since BLK_MAX_REQUEST_COUNT is only used inside the block layer core but 
> not by any block driver, can it be moved from include/linux/blkdev.h 
> into block/blk-mq.h?

Good point, I'll move it in there as part of the patch.

-- 
Jens Axboe



* Re: [PATCH 2/3] block: pre-allocate requests if plug is started and is a batch
  2021-10-06 16:35 ` [PATCH 2/3] block: pre-allocate requests if plug is started and is a batch Jens Axboe
@ 2021-10-06 18:24   ` Bart Van Assche
  2021-10-06 18:27     ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: Bart Van Assche @ 2021-10-06 18:24 UTC (permalink / raw)
  To: Jens Axboe, linux-block

On 10/6/21 9:35 AM, Jens Axboe wrote:
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index 3b967053e9f5..4b2006ec8bf2 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -22,6 +22,7 @@ struct bio_crypt_ctx;
>   
>   struct block_device {
>   	sector_t		bd_start_sect;
> +	sector_t		bd_nr_sectors;
>   	struct disk_stats __percpu *bd_stats;
>   	unsigned long		bd_stamp;
>   	bool			bd_read_only;	/* read-only policy */

Hmm ... I can't find any code in this patch that sets bd_nr_sectors? Did 
I perhaps overlook something?

Thanks,

Bart.


* Re: [PATCH 2/3] block: pre-allocate requests if plug is started and is a batch
  2021-10-06 18:24   ` Bart Van Assche
@ 2021-10-06 18:27     ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2021-10-06 18:27 UTC (permalink / raw)
  To: Bart Van Assche, linux-block

On 10/6/21 12:24 PM, Bart Van Assche wrote:
> On 10/6/21 9:35 AM, Jens Axboe wrote:
>> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
>> index 3b967053e9f5..4b2006ec8bf2 100644
>> --- a/include/linux/blk_types.h
>> +++ b/include/linux/blk_types.h
>> @@ -22,6 +22,7 @@ struct bio_crypt_ctx;
>>   
>>   struct block_device {
>>   	sector_t		bd_start_sect;
>> +	sector_t		bd_nr_sectors;
>>   	struct disk_stats __percpu *bd_stats;
>>   	unsigned long		bd_stamp;
>>   	bool			bd_read_only;	/* read-only policy */
> 
> Hmm ... I can't find any code in this patch that sets bd_nr_sectors? Did 
> I perhaps overlook something?

Good eye - nope, that's from a different patch! Looks like it got folded
in by mistake... I'll fix it, thanks Bart.

-- 
Jens Axboe


