* [PATCHSET 0/4] Alloc batch fixes
@ 2021-11-03 18:32 Jens Axboe
  2021-11-03 18:32 ` [PATCH 1/4] block: have plug stored requests hold references to the queue Jens Axboe
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Jens Axboe @ 2021-11-03 18:32 UTC (permalink / raw)
  To: linux-block

Hi,

A few fixes related to the batched allocations:

- Have the requests hold a queue reference, and flush them on schedule
  unplug as well.

- Make sure the queue matches; there could be a mismatch if we're
  driving multiple devices.

-- 
Jens Axboe




* [PATCH 1/4] block: have plug stored requests hold references to the queue
  2021-11-03 18:32 [PATCHSET 0/4] Alloc batch fixes Jens Axboe
@ 2021-11-03 18:32 ` Jens Axboe
  2021-11-04  9:01   ` Christoph Hellwig
  2021-11-03 18:32 ` [PATCH 2/4] block: make blk_try_enter_queue() available for blk-mq Jens Axboe
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2021-11-03 18:32 UTC (permalink / raw)
  To: linux-block; +Cc: Jens Axboe

Requests that were stored in the cache deliberately didn't hold an enter
reference to the queue, instead we grabbed one every time we pulled a
request out of there. That made for awkward logic on freeing the remainder
of the cached list, if needed, where we had to artificially raise the
queue usage count before each free.

Grab references up front for cached plug requests. That's safer, and also
more efficient.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c | 2 +-
 block/blk-mq.c   | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd389a16013c..c2d267b6f910 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1643,7 +1643,7 @@ void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
 		flush_plug_callbacks(plug, from_schedule);
 	if (!rq_list_empty(plug->mq_list))
 		blk_mq_flush_plug_list(plug, from_schedule);
-	if (unlikely(!from_schedule && plug->cached_rq))
+	if (unlikely(!rq_list_empty(plug->cached_rq)))
 		blk_mq_free_plug_rqs(plug);
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c68aa0a332e1..5498454c2164 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -410,7 +410,10 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data,
 		tag_mask &= ~(1UL << i);
 		rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns);
 		rq_list_add(data->cached_rq, rq);
+		nr++;
 	}
+	/* caller already holds a reference, add for remainder */
+	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
 	data->nr_tags -= nr;
 
 	return rq_list_pop(data->cached_rq);
@@ -630,10 +633,8 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
 {
 	struct request *rq;
 
-	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL) {
-		percpu_ref_get(&rq->q->q_usage_counter);
+	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL)
 		blk_mq_free_request(rq);
-	}
 }
 
 static void req_bio_endio(struct request *rq, struct bio *bio,
-- 
2.33.1



* [PATCH 2/4] block: make blk_try_enter_queue() available for blk-mq
  2021-11-03 18:32 [PATCHSET 0/4] Alloc batch fixes Jens Axboe
  2021-11-03 18:32 ` [PATCH 1/4] block: have plug stored requests hold references to the queue Jens Axboe
@ 2021-11-03 18:32 ` Jens Axboe
  2021-11-03 18:32 ` [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio() Jens Axboe
  2021-11-03 18:32 ` [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match Jens Axboe
  3 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2021-11-03 18:32 UTC (permalink / raw)
  To: linux-block; +Cc: Jens Axboe

Just a prep patch for shifting the queue enter logic.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c | 26 +-------------------------
 block/blk.h      | 25 +++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c2d267b6f910..e00f5a2287cc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -386,30 +386,6 @@ void blk_cleanup_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
-static bool blk_try_enter_queue(struct request_queue *q, bool pm)
-{
-	rcu_read_lock();
-	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
-		goto fail;
-
-	/*
-	 * The code that increments the pm_only counter must ensure that the
-	 * counter is globally visible before the queue is unfrozen.
-	 */
-	if (blk_queue_pm_only(q) &&
-	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
-		goto fail_put;
-
-	rcu_read_unlock();
-	return true;
-
-fail_put:
-	blk_queue_exit(q);
-fail:
-	rcu_read_unlock();
-	return false;
-}
-
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
@@ -442,7 +418,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 	return 0;
 }
 
-static inline int bio_queue_enter(struct bio *bio)
+int bio_queue_enter(struct bio *bio)
 {
 	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 
diff --git a/block/blk.h b/block/blk.h
index 7afffd548daf..f7371d3b1522 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -55,6 +55,31 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
 void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
+int bio_queue_enter(struct bio *bio);
+
+static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
+{
+	rcu_read_lock();
+	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
+		goto fail;
+
+	/*
+	 * The code that increments the pm_only counter must ensure that the
+	 * counter is globally visible before the queue is unfrozen.
+	 */
+	if (blk_queue_pm_only(q) &&
+	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
+		goto fail_put;
+
+	rcu_read_unlock();
+	return true;
+
+fail_put:
+	blk_queue_exit(q);
+fail:
+	rcu_read_unlock();
+	return false;
+}
 
 #define BIO_INLINE_VECS 4
 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
-- 
2.33.1



* [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-03 18:32 [PATCHSET 0/4] Alloc batch fixes Jens Axboe
  2021-11-03 18:32 ` [PATCH 1/4] block: have plug stored requests hold references to the queue Jens Axboe
  2021-11-03 18:32 ` [PATCH 2/4] block: make blk_try_enter_queue() available for blk-mq Jens Axboe
@ 2021-11-03 18:32 ` Jens Axboe
  2021-11-04  9:10   ` Christoph Hellwig
  2021-11-03 18:32 ` [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match Jens Axboe
  3 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2021-11-03 18:32 UTC (permalink / raw)
  To: linux-block; +Cc: Jens Axboe

Retain the old logic for the fops based submit, but for our internal
blk_mq_submit_bio(), move the queue entering logic into the core
function itself.

We need to be a bit careful if going into the scheduler, as the scheduler
or queue mappings can arbitrarily change before we have entered the queue.
Have the bio scheduler mapping do that separately; it's a very cheap
operation compared to actually doing the merging, locking and lookups.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-core.c     | 14 ++++++--------
 block/blk-mq-sched.c | 13 ++++++++++---
 block/blk-mq.c       | 28 ++++++++++++++++++----------
 3 files changed, 34 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index e00f5a2287cc..2b12a427ffa6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -868,18 +868,16 @@ static void __submit_bio(struct bio *bio)
 {
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 
-	if (unlikely(bio_queue_enter(bio) != 0))
-		return;
-
 	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
-		goto queue_exit;
+		return;
 	if (!disk->fops->submit_bio) {
 		blk_mq_submit_bio(bio);
-		return;
+	} else {
+		if (unlikely(bio_queue_enter(bio) != 0))
+			return;
+		disk->fops->submit_bio(bio);
+		blk_queue_exit(disk->queue);
 	}
-	disk->fops->submit_bio(bio);
-queue_exit:
-	blk_queue_exit(disk->queue);
 }
 
 /*
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 4a6789e4398b..4be652fa38e7 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -370,15 +370,20 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 	bool ret = false;
 	enum hctx_type type;
 
-	if (e && e->type->ops.bio_merge)
-		return e->type->ops.bio_merge(q, bio, nr_segs);
+	if (bio_queue_enter(bio))
+		return false;
+
+	if (e && e->type->ops.bio_merge) {
+		ret = e->type->ops.bio_merge(q, bio, nr_segs);
+		goto out_put;
+	}
 
 	ctx = blk_mq_get_ctx(q);
 	hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
 	type = hctx->type;
 	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
 	    list_empty_careful(&ctx->rq_lists[type]))
-		return false;
+		goto out_put;
 
 	/* default per sw-queue merge */
 	spin_lock(&ctx->lock);
@@ -391,6 +396,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 		ret = true;
 
 	spin_unlock(&ctx->lock);
+out_put:
+	blk_queue_exit(q);
 	return ret;
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5498454c2164..4bc98c7264fa 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2478,6 +2478,13 @@ static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
 	return BLK_MAX_REQUEST_COUNT;
 }
 
+static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
+{
+	if (!blk_try_enter_queue(q, false) && bio_queue_enter(bio))
+		return false;
+	return true;
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2506,21 +2513,20 @@ void blk_mq_submit_bio(struct bio *bio)
 		__blk_queue_split(q, &bio, &nr_segs);
 
 	if (!bio_integrity_prep(bio))
-		goto queue_exit;
+		return;
 
 	if (!blk_queue_nomerges(q) && bio_mergeable(bio)) {
 		if (blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
-			goto queue_exit;
+			return;
 		if (blk_mq_sched_bio_merge(q, bio, nr_segs))
-			goto queue_exit;
+			return;
 	}
 
-	rq_qos_throttle(q, bio);
-
 	plug = blk_mq_plug(q, bio);
 	if (plug && plug->cached_rq) {
 		rq = rq_list_pop(&plug->cached_rq);
 		INIT_LIST_HEAD(&rq->queuelist);
+		rq_qos_throttle(q, bio);
 	} else {
 		struct blk_mq_alloc_data data = {
 			.q		= q,
@@ -2528,6 +2534,11 @@ void blk_mq_submit_bio(struct bio *bio)
 			.cmd_flags	= bio->bi_opf,
 		};
 
+		if (unlikely(!blk_mq_queue_enter(q, bio)))
+			return;
+
+		rq_qos_throttle(q, bio);
+
 		if (plug) {
 			data.nr_tags = plug->nr_ios;
 			plug->nr_ios = 1;
@@ -2538,7 +2549,8 @@ void blk_mq_submit_bio(struct bio *bio)
 			rq_qos_cleanup(q, bio);
 			if (bio->bi_opf & REQ_NOWAIT)
 				bio_wouldblock_error(bio);
-			goto queue_exit;
+			blk_queue_exit(q);
+			return;
 		}
 	}
 
@@ -2621,10 +2633,6 @@ void blk_mq_submit_bio(struct bio *bio)
 		/* Default case. */
 		blk_mq_sched_insert_request(rq, false, true, true);
 	}
-
-	return;
-queue_exit:
-	blk_queue_exit(q);
 }
 
 static size_t order_to_size(unsigned int order)
-- 
2.33.1



* [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match
  2021-11-03 18:32 [PATCHSET 0/4] Alloc batch fixes Jens Axboe
                   ` (2 preceding siblings ...)
  2021-11-03 18:32 ` [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio() Jens Axboe
@ 2021-11-03 18:32 ` Jens Axboe
  2021-11-04  9:17   ` Christoph Hellwig
  3 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2021-11-03 18:32 UTC (permalink / raw)
  To: linux-block; +Cc: Jens Axboe

We need to improve the logic here a bit, most importantly ensuring that
the request matches the current queue. If it doesn't, we cannot use it
and must fall back to normal request allocation.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4bc98c7264fa..e92c36f2326a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2485,6 +2485,24 @@ static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
 	return true;
 }
 
+static inline struct request *blk_get_plug_request(struct request_queue *q,
+						   struct blk_plug *plug,
+						   struct bio *bio)
+{
+	struct request *rq;
+
+	if (plug && !rq_list_empty(plug->cached_rq)) {
+		rq = rq_list_peek(&plug->cached_rq);
+		if (rq->q == q) {
+			rq_qos_throttle(q, bio);
+			plug->cached_rq = rq_list_next(rq);
+			INIT_LIST_HEAD(&rq->queuelist);
+			return rq;
+		}
+	}
+	return NULL;
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2523,11 +2541,8 @@ void blk_mq_submit_bio(struct bio *bio)
 	}
 
 	plug = blk_mq_plug(q, bio);
-	if (plug && plug->cached_rq) {
-		rq = rq_list_pop(&plug->cached_rq);
-		INIT_LIST_HEAD(&rq->queuelist);
-		rq_qos_throttle(q, bio);
-	} else {
+	rq = blk_get_plug_request(q, plug, bio);
+	if (!rq) {
 		struct blk_mq_alloc_data data = {
 			.q		= q,
 			.nr_tags	= 1,
-- 
2.33.1



* Re: [PATCH 1/4] block: have plug stored requests hold references to the queue
  2021-11-03 18:32 ` [PATCH 1/4] block: have plug stored requests hold references to the queue Jens Axboe
@ 2021-11-04  9:01   ` Christoph Hellwig
  2021-11-04 11:33     ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2021-11-04  9:01 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

> @@ -1643,7 +1643,7 @@ void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
>  		flush_plug_callbacks(plug, from_schedule);
>  	if (!rq_list_empty(plug->mq_list))
>  		blk_mq_flush_plug_list(plug, from_schedule);
> -	if (unlikely(!from_schedule && plug->cached_rq))
> +	if (unlikely(!rq_list_empty(plug->cached_rq)))

How is this related to the rest of the patch?


* Re: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-03 18:32 ` [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio() Jens Axboe
@ 2021-11-04  9:10   ` Christoph Hellwig
  2021-11-04 11:41     ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2021-11-04  9:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

On Wed, Nov 03, 2021 at 12:32:21PM -0600, Jens Axboe wrote:
> Retain the old logic for the fops based submit, but for our internal
> blk_mq_submit_bio(), move the queue entering logic into the core
> function itself.

Can you explain the why?  I guess you want to skip the extra reference
for the cached requests now that they already have one.  But please
state that, and explain why it is a fix, as to me it just seems like
another little optimization.

> We need to be a bit careful if going into the scheduler, as a scheduler
> or queue mappings can arbitrarily change before we have entered the queue.
> Have the bio scheduler mapping do that separately, it's a very cheap
> operation compared to actually doing merging locking and lookups.

So just don't do the merges for cached requests and side step this
extra bio_queue_enter for that case?

> -	if (unlikely(bio_queue_enter(bio) != 0))
> -		return;
> -
>  	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
> -		goto queue_exit;
> +		return;

This is broken, we really want the submit checks under freeze
protection to make sure the parameters can't be changed underneath
us.

> +static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
> +{
> +	if (!blk_try_enter_queue(q, false) && bio_queue_enter(bio))
> +		return false;
> +	return true;
> +}

This looks weird, as blk_try_enter_queue is already called by
bio_queue_enter.

>  	} else {
>  		struct blk_mq_alloc_data data = {
>  			.q		= q,
> @@ -2528,6 +2534,11 @@ void blk_mq_submit_bio(struct bio *bio)
>  			.cmd_flags	= bio->bi_opf,
>  		};
>  
> +		if (unlikely(!blk_mq_queue_enter(q, bio)))
> +			return;
> +
> +		rq_qos_throttle(q, bio);
> +

At some point the code in this !cached branch really needs to move
into a helper..


* Re: [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match
  2021-11-03 18:32 ` [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match Jens Axboe
@ 2021-11-04  9:17   ` Christoph Hellwig
  2021-11-04 11:35     ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2021-11-04  9:17 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block

On Wed, Nov 03, 2021 at 12:32:22PM -0600, Jens Axboe wrote:
> +	if (plug && !rq_list_empty(plug->cached_rq)) {
> +		rq = rq_list_peek(&plug->cached_rq);

No need for the empty check plus peek.  This could be simplified
down to:

	if (!plug)
		return NULL;
	rq = rq_list_peek(&plug->cached_rq);
	if (!rq || rq->q != q)
		return NULL;

	rq_qos_throttle(q, bio);
	plug->cached_rq = rq_list_next(rq);
	INIT_LIST_HEAD(&rq->queuelist);
	return rq;


* Re: [PATCH 1/4] block: have plug stored requests hold references to the queue
  2021-11-04  9:01   ` Christoph Hellwig
@ 2021-11-04 11:33     ` Jens Axboe
  0 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2021-11-04 11:33 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/4/21 3:01 AM, Christoph Hellwig wrote:
>> @@ -1643,7 +1643,7 @@ void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
>>  		flush_plug_callbacks(plug, from_schedule);
>>  	if (!rq_list_empty(plug->mq_list))
>>  		blk_mq_flush_plug_list(plug, from_schedule);
>> -	if (unlikely(!from_schedule && plug->cached_rq))
>> +	if (unlikely(!rq_list_empty(plug->cached_rq)))
> 
> How is this related to the rest of the patch?

With references to the requests, flushing them even from a schedule
unplug condition is a lot saner in case someone is waiting on the
queue to quiesce.

-- 
Jens Axboe



* Re: [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match
  2021-11-04  9:17   ` Christoph Hellwig
@ 2021-11-04 11:35     ` Jens Axboe
  2021-11-04 17:33       ` Christoph Hellwig
  0 siblings, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2021-11-04 11:35 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/4/21 3:17 AM, Christoph Hellwig wrote:
> On Wed, Nov 03, 2021 at 12:32:22PM -0600, Jens Axboe wrote:
>> +	if (plug && !rq_list_empty(plug->cached_rq)) {
>> +		rq = rq_list_peek(&plug->cached_rq);
> 
> No need for the empty check plus peek.  This could be simplified
> down to:
> 
> 	if (!plug)
> 		return NULL;
> 	rq = rq_list_peek(&plug->cached_rq);
> 	if (!rq || rq->q != q)
> 		return NULL;
> 
> 	rq_qos_throttle(q, bio);
> 	plug->cached_rq = rq_list_next(rq);
> 	INIT_LIST_HEAD(&rq->queuelist);
> 	return rq;

I tend to prefer having logic flow from the expected conditions. And
since we need to check if the plug is valid anyway, I prefer the current
logic.

-- 
Jens Axboe



* Re: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-04  9:10   ` Christoph Hellwig
@ 2021-11-04 11:41     ` Jens Axboe
  2021-11-04 12:14       ` Jens Axboe
  2021-11-04 17:30       ` Christoph Hellwig
  0 siblings, 2 replies; 18+ messages in thread
From: Jens Axboe @ 2021-11-04 11:41 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/4/21 3:10 AM, Christoph Hellwig wrote:
> On Wed, Nov 03, 2021 at 12:32:21PM -0600, Jens Axboe wrote:
>> Retain the old logic for the fops based submit, but for our internal
>> blk_mq_submit_bio(), move the queue entering logic into the core
>> function itself.
> 
> Can you explain the why?  I guess you want to skip the extra reference
> for the cached requests now that they already have one.  But please
> state that, and explain why it is a fix, as to me it just seems like
> another little optimization.

It's just pointless to grab double references, and counterproductive
too.

>> We need to be a bit careful if going into the scheduler, as the scheduler
>> or queue mappings can arbitrarily change before we have entered the queue.
>> Have the bio scheduler mapping do that separately; it's a very cheap
>> operation compared to actually doing the merging, locking and lookups.
> 
> So just don't do the merges for cached requests and side step this
> extra bio_queue_enter for that case?

I'd be fine with that, but it's a bit of a chicken and egg situation as
we don't know. I guess we could move the plugged request check earlier,
and just bypass merging there. Though that makes it a special case
thing, and it's generally useful now. Not sure that would be a good
idea.

>> -	if (unlikely(bio_queue_enter(bio) != 0))
>> -		return;
>> -
>>  	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
>> -		goto queue_exit;
>> +		return;
> 
> This is broken, we really want the submit checks under freeze
> protection to make sure the parameters can't be changed underneath
> us.

Which parameters are you worried about in submit_bio_checks()? I don't
immediately see anything that would make me worry about it.

>> +static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
>> +{
>> +	if (!blk_try_enter_queue(q, false) && bio_queue_enter(bio))
>> +		return false;
>> +	return true;
>> +}
> 
> This looks weird, as blk_try_enter_queue is already called by
> bio_queue_enter.

It's just for avoiding a pointless call into bio_queue_enter(), which
isn't needed if blk_try_enter_queue() is successful. The latter is short
and small and can be inlined, while bio_queue_enter() is a lot bigger.

>>  	} else {
>>  		struct blk_mq_alloc_data data = {
>>  			.q		= q,
>> @@ -2528,6 +2534,11 @@ void blk_mq_submit_bio(struct bio *bio)
>>  			.cmd_flags	= bio->bi_opf,
>>  		};
>>  
>> +		if (unlikely(!blk_mq_queue_enter(q, bio)))
>> +			return;
>> +
>> +		rq_qos_throttle(q, bio);
>> +
> 
> At some point the code in this !cached branch really needs to move
> into a helper..

Like in the next patch?

-- 
Jens Axboe



* Re: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-04 11:41     ` Jens Axboe
@ 2021-11-04 12:14       ` Jens Axboe
  2021-11-04 17:32         ` Christoph Hellwig
  2021-11-04 17:30       ` Christoph Hellwig
  1 sibling, 1 reply; 18+ messages in thread
From: Jens Axboe @ 2021-11-04 12:14 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/4/21 5:41 AM, Jens Axboe wrote:
>> This is broken, we really want the submit checks under freeze
>> protection to make sure the parameters can't be changed underneath
>> us.
> 
> Which parameters are you worried about in submit_bio_checks()? I don't
> immediately see anything that would make me worry about it.

To play it safer, I would suggest we fold in something like the
below. That keeps submit_bio_checks() under the queue enter.


diff --git a/block/blk-core.c b/block/blk-core.c
index 2b12a427ffa6..18aab7f8469a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -746,7 +746,7 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
 	return BLK_STS_OK;
 }
 
-static noinline_for_stack bool submit_bio_checks(struct bio *bio)
+noinline_for_stack bool submit_bio_checks(struct bio *bio)
 {
 	struct block_device *bdev = bio->bi_bdev;
 	struct request_queue *q = bdev_get_queue(bdev);
@@ -868,14 +868,13 @@ static void __submit_bio(struct bio *bio)
 {
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 
-	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
-		return;
 	if (!disk->fops->submit_bio) {
 		blk_mq_submit_bio(bio);
 	} else {
 		if (unlikely(bio_queue_enter(bio) != 0))
 			return;
-		disk->fops->submit_bio(bio);
+		if (submit_bio_checks(bio) && blk_crypto_bio_prep(&bio))
+			disk->fops->submit_bio(bio);
 		blk_queue_exit(disk->queue);
 	}
 }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e92c36f2326a..2dab9bdcc51a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2526,6 +2526,9 @@ void blk_mq_submit_bio(struct bio *bio)
 	unsigned int nr_segs = 1;
 	blk_status_t ret;
 
+	if (unlikely(!blk_crypto_bio_prep(&bio)))
+		return;
+
 	blk_queue_bounce(q, &bio);
 	if (blk_may_split(q, bio))
 		__blk_queue_split(q, &bio, &nr_segs);
@@ -2551,6 +2554,8 @@ void blk_mq_submit_bio(struct bio *bio)
 
 		if (unlikely(!blk_mq_queue_enter(q, bio)))
 			return;
+		if (unlikely(!submit_bio_checks(bio)))
+			goto put_exit;
 
 		rq_qos_throttle(q, bio);
 
diff --git a/block/blk.h b/block/blk.h
index f7371d3b1522..79c98ced59c8 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -56,6 +56,7 @@ void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
 int bio_queue_enter(struct bio *bio);
+bool submit_bio_checks(struct bio *bio);
 
 static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {

-- 
Jens Axboe



* Re: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-04 11:41     ` Jens Axboe
  2021-11-04 12:14       ` Jens Axboe
@ 2021-11-04 17:30       ` Christoph Hellwig
  2021-11-04 17:36         ` Jens Axboe
  1 sibling, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2021-11-04 17:30 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, linux-block

On Thu, Nov 04, 2021 at 05:41:35AM -0600, Jens Axboe wrote:
> >>  	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
> >> -		goto queue_exit;
> >> +		return;
> > 
> > This is broken, we really want the submit checks under freeze
> > protection to make sure the parameters can't be changed underneath
> > us.
> 
> Which parameters are you worried about in submit_bio_checks()? I don't
> immediately see anything that would make me worry about it.

Mostly checks if certain operations are supported or not, as
revalidation could clear those.

> > This looks weird, as blk_try_enter_queue is already called by
> > bio_queue_enter.
> 
> It's just for avoiding a pointless call into bio_queue_enter(), which
> isn't needed if blk_try_enter_queue() is successful. The latter is short
> and small and can be inlined, while bio_queue_enter() is a lot bigger.

If this is so important let's operate with an inlined bio_queue_enter
that calls out of line into the slow path instead of open coding it
like this.
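
Roughly something like this (untested sketch, __bio_queue_enter is a
made up name for the out of line slow path):

static inline int bio_queue_enter(struct bio *bio)
{
	struct request_queue *q = bdev_get_queue(bio->bi_bdev);

	/* fast path stays inline, slow path goes out of line */
	if (blk_try_enter_queue(q, false))
		return 0;
	return __bio_queue_enter(q, bio);
}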

> >>  	} else {
> >>  		struct blk_mq_alloc_data data = {
> >>  			.q		= q,
> >> @@ -2528,6 +2534,11 @@ void blk_mq_submit_bio(struct bio *bio)
> >>  			.cmd_flags	= bio->bi_opf,
> >>  		};
> >>  
> >> +		if (unlikely(!blk_mq_queue_enter(q, bio)))
> >> +			return;
> >> +
> >> +		rq_qos_throttle(q, bio);
> >> +
> > 
> > At some point the code in this !cached branch really needs to move
> > into a helper..
> 
> Like in the next patch?

No, I mean the !cached case which is a lot more convoluted.


* Re: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-04 12:14       ` Jens Axboe
@ 2021-11-04 17:32         ` Christoph Hellwig
  2021-11-04 17:35           ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2021-11-04 17:32 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, linux-block

On Thu, Nov 04, 2021 at 06:14:30AM -0600, Jens Axboe wrote:
> To play it safer, I would suggest we fold in something like the
> below. That keeps submit_bio_checks() under the queue enter.

Yes, this looks much better.  Nit below:

>  	struct block_device *bdev = bio->bi_bdev;
>  	struct request_queue *q = bdev_get_queue(bdev);
> @@ -868,14 +868,13 @@ static void __submit_bio(struct bio *bio)
>  {
>  	struct gendisk *disk = bio->bi_bdev->bd_disk;
>  
> -	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
> -		return;
>  	if (!disk->fops->submit_bio) {
>  		blk_mq_submit_bio(bio);
>  	} else {
>  		if (unlikely(bio_queue_enter(bio) != 0))
>  			return;
> -		disk->fops->submit_bio(bio);
> +		if (submit_bio_checks(bio) && blk_crypto_bio_prep(&bio))
> +			disk->fops->submit_bio(bio);
>  		blk_queue_exit(disk->queue);
>  	}

At this point moving the whole ->submit_bio based branch into a
helper probably makes sense as well.
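
Something like this, maybe (untested sketch, the helper name is made
up):

static void __submit_bio_fops(struct gendisk *disk, struct bio *bio)
{
	/* fops based submit still enters the queue around the checks */
	if (unlikely(bio_queue_enter(bio) != 0))
		return;
	if (submit_bio_checks(bio) && blk_crypto_bio_prep(&bio))
		disk->fops->submit_bio(bio);
	blk_queue_exit(disk->queue);
}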

> +	if (unlikely(!blk_crypto_bio_prep(&bio)))
> +		return;
> +
>  	blk_queue_bounce(q, &bio);
>  	if (blk_may_split(q, bio))
>  		__blk_queue_split(q, &bio, &nr_segs);
> @@ -2551,6 +2554,8 @@ void blk_mq_submit_bio(struct bio *bio)
>  
>  		if (unlikely(!blk_mq_queue_enter(q, bio)))
>  			return;
> +		if (unlikely(!submit_bio_checks(bio)))
> +			goto put_exit;

This now skips the checks for the cached request case, doesn't it?


* Re: [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match
  2021-11-04 11:35     ` Jens Axboe
@ 2021-11-04 17:33       ` Christoph Hellwig
  2021-11-04 17:37         ` Jens Axboe
  0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2021-11-04 17:33 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Christoph Hellwig, linux-block

On Thu, Nov 04, 2021 at 05:35:10AM -0600, Jens Axboe wrote:
> I tend to prefer having logic flow from the expected conditions. And
> since we need to check if the plug is valid anyway, I prefer the current
> logic.

Well, for 99% of the I/O not using a cached request is the expected
condition.


* Re: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-04 17:32         ` Christoph Hellwig
@ 2021-11-04 17:35           ` Jens Axboe
  0 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2021-11-04 17:35 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/4/21 11:32 AM, Christoph Hellwig wrote:
> On Thu, Nov 04, 2021 at 06:14:30AM -0600, Jens Axboe wrote:
>> To play it safer, I would suggest we fold in something like the
>> below. That keeps submit_bio_checks() under the queue enter.
> 
> Yes, this looks much better.  Nit below:
> 
>>  	struct block_device *bdev = bio->bi_bdev;
>>  	struct request_queue *q = bdev_get_queue(bdev);
>> @@ -868,14 +868,13 @@ static void __submit_bio(struct bio *bio)
>>  {
>>  	struct gendisk *disk = bio->bi_bdev->bd_disk;
>>  
>> -	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
>> -		return;
>>  	if (!disk->fops->submit_bio) {
>>  		blk_mq_submit_bio(bio);
>>  	} else {
>>  		if (unlikely(bio_queue_enter(bio) != 0))
>>  			return;
>> -		disk->fops->submit_bio(bio);
>> +		if (submit_bio_checks(bio) && blk_crypto_bio_prep(&bio))
>> +			disk->fops->submit_bio(bio);
>>  		blk_queue_exit(disk->queue);
>>  	}
> 
> At this point moving the whole ->submit_bio based branch into a
> helper probably makes sense as well.

Sure, I can do that.

>> +	if (unlikely(!blk_crypto_bio_prep(&bio)))
>> +		return;
>> +
>>  	blk_queue_bounce(q, &bio);
>>  	if (blk_may_split(q, bio))
>>  		__blk_queue_split(q, &bio, &nr_segs);
>> @@ -2551,6 +2554,8 @@ void blk_mq_submit_bio(struct bio *bio)
>>  
>>  		if (unlikely(!blk_mq_queue_enter(q, bio)))
>>  			return;
>> +		if (unlikely(!submit_bio_checks(bio)))
>> +			goto put_exit;
> 
> This now skips the checks for the cached request case, doesn't it?

It did, I added that when folding it in though.
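
Roughly like the below in the cached path (untested sketch against this
series, not the actual fold-in):

	rq = blk_get_plug_request(q, plug, bio);
	if (rq && unlikely(!submit_bio_checks(bio))) {
		/* cached request already holds a queue reference */
		blk_mq_free_request(rq);
		return;
	}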

-- 
Jens Axboe



* Re: [PATCH 3/4] block: move queue enter logic into blk_mq_submit_bio()
  2021-11-04 17:30       ` Christoph Hellwig
@ 2021-11-04 17:36         ` Jens Axboe
  0 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2021-11-04 17:36 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/4/21 11:30 AM, Christoph Hellwig wrote:
> On Thu, Nov 04, 2021 at 05:41:35AM -0600, Jens Axboe wrote:
>>>>  	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
>>>> -		goto queue_exit;
>>>> +		return;
>>>
>>> This is broken, we really want the submit checks under freeze
>>> protection to make sure the parameters can't be changed underneath
>>> us.
>>
>> Which parameters are you worried about in submit_bio_checks()? I don't
>> immediately see anything that would make me worry about it.
> 
> Mostly checks if certain operations are supported or not, as
> revalidation could clear those.
> 
>>> This looks weird, as blk_try_enter_queue is already called by
>>> bio_queue_enter.
>>
>> It's just for avoiding a pointless call into bio_queue_enter(), which
>> isn't needed if blk_try_enter_queue() is successful. The latter is short
>> and small and can be inlined, while bio_queue_enter() is a lot bigger.
> 
> If this is so important let's operate with an inlined bio_queue_enter
> that calls out of line into the slow path instead of open coding it
> like this.

Sure, I can do that instead.

>>>>  	} else {
>>>>  		struct blk_mq_alloc_data data = {
>>>>  			.q		= q,
>>>> @@ -2528,6 +2534,11 @@ void blk_mq_submit_bio(struct bio *bio)
>>>>  			.cmd_flags	= bio->bi_opf,
>>>>  		};
>>>>  
>>>> +		if (unlikely(!blk_mq_queue_enter(q, bio)))
>>>> +			return;
>>>> +
>>>> +		rq_qos_throttle(q, bio);
>>>> +
>>>
>>> At some point the code in this !cached branch really needs to move
>>> into a helper..
>>
>> Like in the next patch?
> 
> No, I mean the !cached case which is a lot more convoluted.

Yeah, a helper there might be appropriate. I'll write it up.
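
Probably something like this (rough sketch on top of this series, name
made up):

static struct request *blk_mq_get_new_request(struct request_queue *q,
					      struct blk_plug *plug,
					      struct bio *bio)
{
	struct blk_mq_alloc_data data = {
		.q		= q,
		.nr_tags	= 1,
		.cmd_flags	= bio->bi_opf,
	};
	struct request *rq;

	if (unlikely(!blk_mq_queue_enter(q, bio)))
		return NULL;
	if (unlikely(!submit_bio_checks(bio)))
		goto put_exit;

	rq_qos_throttle(q, bio);

	if (plug) {
		data.nr_tags = plug->nr_ios;
		plug->nr_ios = 1;
		data.cached_rq = &plug->cached_rq;
	}
	rq = __blk_mq_alloc_requests(&data);
	if (rq)
		return rq;
	rq_qos_cleanup(q, bio);
	if (bio->bi_opf & REQ_NOWAIT)
		bio_wouldblock_error(bio);
put_exit:
	blk_queue_exit(q);
	return NULL;
}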

-- 
Jens Axboe



* Re: [PATCH 4/4] block: move plug rq alloc into helper and ensure queue match
  2021-11-04 17:33       ` Christoph Hellwig
@ 2021-11-04 17:37         ` Jens Axboe
  0 siblings, 0 replies; 18+ messages in thread
From: Jens Axboe @ 2021-11-04 17:37 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 11/4/21 11:33 AM, Christoph Hellwig wrote:
> On Thu, Nov 04, 2021 at 05:35:10AM -0600, Jens Axboe wrote:
>> I tend to prefer having logic flow from the expected conditions. And
>> since we need to check if the plug is valid anyway, I prefer the current
>> logic.
> 
> Well, for 99% of the I/O not using a cached request is the expected
> condition.

That may very well be the case right now, but as more things plug into
propagating the expected number of requests, that's likely to change.

That said, I did reflow it for you in the updated version.
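
i.e. presumably along these lines (sketch, not the actual updated
patch):

static inline struct request *blk_get_plug_request(struct request_queue *q,
						   struct blk_plug *plug,
						   struct bio *bio)
{
	struct request *rq;

	if (!plug)
		return NULL;
	/* peek first, only consume the request if it matches this queue */
	rq = rq_list_peek(&plug->cached_rq);
	if (!rq || rq->q != q)
		return NULL;

	rq_qos_throttle(q, bio);
	plug->cached_rq = rq_list_next(rq);
	INIT_LIST_HEAD(&rq->queuelist);
	return rq;
}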

-- 
Jens Axboe

