linux-block.vger.kernel.org archive mirror
* avoid a few q_usage_counter roundtrips v3
@ 2020-05-16 18:27 Christoph Hellwig
  2020-05-16 18:27 ` [PATCH 1/4] blk-mq: move the call to blk_queue_enter_live out of blk_mq_get_request Christoph Hellwig
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-05-16 18:27 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

Hi Jens,

the way we track references on q_usage_counter is a little weird at the
moment, in that we often have to grab another reference in addition to
the current one.  This small series reshuffles that to avoid the extra
references in the normal I/O path.

Changes since v2:
 - increase the q_usage_counter critical section a bit in
   blk_mq_alloc_request_hctx

Changes since v1:
 - rebased to the latest for-5.8/block tree with the blk-crypto addition

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/4] blk-mq: move the call to blk_queue_enter_live out of blk_mq_get_request
  2020-05-16 18:27 avoid a few q_usage_counter roundtrips v3 Christoph Hellwig
@ 2020-05-16 18:27 ` Christoph Hellwig
  2020-05-16 18:27 ` [PATCH 2/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request Christoph Hellwig
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-05-16 18:27 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, Bart Van Assche

Move the blk_queue_enter_live calls into the callers, where they can
subsequently be cleaned up.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4f8adef7fd0d9..b7e06673a30ba 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -342,8 +342,6 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 	bool clear_ctx_on_error = false;
 	u64 alloc_time_ns = 0;
 
-	blk_queue_enter_live(q);
-
 	/* alloc_time includes depth and tag waits */
 	if (blk_queue_rq_alloc_time(q))
 		alloc_time_ns = ktime_get_ns();
@@ -379,7 +377,6 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 	if (tag == BLK_MQ_TAG_FAIL) {
 		if (clear_ctx_on_error)
 			data->ctx = NULL;
-		blk_queue_exit(q);
 		return NULL;
 	}
 
@@ -409,16 +406,19 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 	if (ret)
 		return ERR_PTR(ret);
 
+	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
 	blk_queue_exit(q);
 
 	if (!rq)
-		return ERR_PTR(-EWOULDBLOCK);
-
+		goto out_queue_exit;
 	rq->__data_len = 0;
 	rq->__sector = (sector_t) -1;
 	rq->bio = rq->biotail = NULL;
 	return rq;
+out_queue_exit:
+	blk_queue_exit(q);
+	return ERR_PTR(-EWOULDBLOCK);
 }
 EXPORT_SYMBOL(blk_mq_alloc_request);
 
@@ -450,21 +450,24 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	 * Check if the hardware context is actually mapped to anything.
 	 * If not tell the caller that it should skip this queue.
 	 */
+	ret = -EXDEV;
 	alloc_data.hctx = q->queue_hw_ctx[hctx_idx];
-	if (!blk_mq_hw_queue_mapped(alloc_data.hctx)) {
-		blk_queue_exit(q);
-		return ERR_PTR(-EXDEV);
-	}
+	if (!blk_mq_hw_queue_mapped(alloc_data.hctx))
+		goto out_queue_exit;
 	cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
 	alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
 
+	ret = -EWOULDBLOCK;
+	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
 	blk_queue_exit(q);
 
 	if (!rq)
-		return ERR_PTR(-EWOULDBLOCK);
-
+		goto out_queue_exit;
 	return rq;
+out_queue_exit:
+	blk_queue_exit(q);
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(blk_mq_alloc_request_hctx);
 
@@ -2043,8 +2046,10 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	rq_qos_throttle(q, bio);
 
 	data.cmd_flags = bio->bi_opf;
+	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, bio, &data);
 	if (unlikely(!rq)) {
+		blk_queue_exit(q);
 		rq_qos_cleanup(q, bio);
 		if (bio->bi_opf & REQ_NOWAIT)
 			bio_wouldblock_error(bio);
-- 
2.26.2



* [PATCH 2/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request
  2020-05-16 18:27 avoid a few q_usage_counter roundtrips v3 Christoph Hellwig
  2020-05-16 18:27 ` [PATCH 1/4] blk-mq: move the call to blk_queue_enter_live out of blk_mq_get_request Christoph Hellwig
@ 2020-05-16 18:27 ` Christoph Hellwig
  2020-05-16 18:28 ` [PATCH 3/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request_hctx Christoph Hellwig
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-05-16 18:27 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, Bart Van Assche

No need for two queue references.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b7e06673a30ba..4e62b97dceb48 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -406,10 +406,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 	if (ret)
 		return ERR_PTR(ret);
 
-	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
-	blk_queue_exit(q);
-
 	if (!rq)
 		goto out_queue_exit;
 	rq->__data_len = 0;
-- 
2.26.2



* [PATCH 3/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request_hctx
  2020-05-16 18:27 avoid a few q_usage_counter roundtrips v3 Christoph Hellwig
  2020-05-16 18:27 ` [PATCH 1/4] blk-mq: move the call to blk_queue_enter_live out of blk_mq_get_request Christoph Hellwig
  2020-05-16 18:27 ` [PATCH 2/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request Christoph Hellwig
@ 2020-05-16 18:28 ` Christoph Hellwig
  2020-05-16 18:44   ` Bart Van Assche
  2020-05-16 18:28 ` [PATCH 4/4] blk-mq: allow blk_mq_make_request to consume the q_usage_counter reference Christoph Hellwig
  2020-05-19 15:43 ` avoid a few q_usage_counter roundtrips v3 Jens Axboe
  4 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2020-05-16 18:28 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

No need for two queue references.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4e62b97dceb48..b1c12de8926e3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -455,10 +455,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
 
 	ret = -EWOULDBLOCK;
-	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
-	blk_queue_exit(q);
-
 	if (!rq)
 		goto out_queue_exit;
 	return rq;
-- 
2.26.2



* [PATCH 4/4] blk-mq: allow blk_mq_make_request to consume the q_usage_counter reference
  2020-05-16 18:27 avoid a few q_usage_counter roundtrips v3 Christoph Hellwig
                   ` (2 preceding siblings ...)
  2020-05-16 18:28 ` [PATCH 3/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request_hctx Christoph Hellwig
@ 2020-05-16 18:28 ` Christoph Hellwig
  2020-05-19 15:43 ` avoid a few q_usage_counter roundtrips v3 Jens Axboe
  4 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2020-05-16 18:28 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

blk_mq_make_request currently needs to grab a q_usage_counter
reference when allocating a request.  This is because the block layer
grabs one before calling blk_mq_make_request, but also releases it as
soon as blk_mq_make_request returns.  Remove the blk_queue_exit call
after blk_mq_make_request returns, and instead let it consume the
reference.  This works fine for the block layer caller; only device
mapper needs an extra reference, as the old problem still persists
there.  Open-code blk_queue_enter_live in device mapper,
as there should be no other callers and this allows better documenting
why we do a non-try get.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c | 33 ++++++++++++++++++++-------------
 block/blk-mq.c   | 13 +++++++------
 block/blk.h      | 11 -----------
 drivers/md/dm.c  | 11 ++++++++++-
 4 files changed, 37 insertions(+), 31 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 1e97f99735232..78683ea61c939 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1066,6 +1066,20 @@ generic_make_request_checks(struct bio *bio)
 	return false;
 }
 
+static blk_qc_t do_make_request(struct bio *bio)
+{
+	struct request_queue *q = bio->bi_disk->queue;
+	blk_qc_t ret = BLK_QC_T_NONE;
+
+	if (blk_crypto_bio_prep(&bio)) {
+		if (!q->make_request_fn)
+			return blk_mq_make_request(q, bio);
+		ret = q->make_request_fn(q, bio);
+	}
+	blk_queue_exit(q);
+	return ret;
+}
+
 /**
  * generic_make_request - re-submit a bio to the block device layer for I/O
  * @bio:  The bio describing the location in memory and on the device.
@@ -1131,14 +1145,7 @@ blk_qc_t generic_make_request(struct bio *bio)
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
 			bio_list_init(&bio_list_on_stack[0]);
-			if (blk_crypto_bio_prep(&bio)) {
-				if (q->make_request_fn)
-					ret = q->make_request_fn(q, bio);
-				else
-					ret = blk_mq_make_request(q, bio);
-			}
-
-			blk_queue_exit(q);
+			ret = do_make_request(bio);
 
 			/* sort new bios into those for a lower level
 			 * and those for the same level
@@ -1175,7 +1182,6 @@ EXPORT_SYMBOL(generic_make_request);
 blk_qc_t direct_make_request(struct bio *bio)
 {
 	struct request_queue *q = bio->bi_disk->queue;
-	blk_qc_t ret = BLK_QC_T_NONE;
 
 	if (WARN_ON_ONCE(q->make_request_fn)) {
 		bio_io_error(bio);
@@ -1185,10 +1191,11 @@ blk_qc_t direct_make_request(struct bio *bio)
 		return BLK_QC_T_NONE;
 	if (unlikely(bio_queue_enter(bio)))
 		return BLK_QC_T_NONE;
-	if (blk_crypto_bio_prep(&bio))
-		ret = blk_mq_make_request(q, bio);
-	blk_queue_exit(q);
-	return ret;
+	if (!blk_crypto_bio_prep(&bio)) {
+		blk_queue_exit(q);
+		return BLK_QC_T_NONE;
+	}
+	return blk_mq_make_request(q, bio);
 }
 EXPORT_SYMBOL_GPL(direct_make_request);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b1c12de8926e3..cac11945f6023 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2028,26 +2028,24 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	__blk_queue_split(q, &bio, &nr_segs);
 
 	if (!bio_integrity_prep(bio))
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 
 	if (!is_flush_fua && !blk_queue_nomerges(q) &&
 	    blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 
 	if (blk_mq_sched_bio_merge(q, bio, nr_segs))
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 
 	rq_qos_throttle(q, bio);
 
 	data.cmd_flags = bio->bi_opf;
-	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, bio, &data);
 	if (unlikely(!rq)) {
-		blk_queue_exit(q);
 		rq_qos_cleanup(q, bio);
 		if (bio->bi_opf & REQ_NOWAIT)
 			bio_wouldblock_error(bio);
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 	}
 
 	trace_block_getrq(q, bio, bio->bi_opf);
@@ -2134,6 +2132,9 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	}
 
 	return cookie;
+queue_exit:
+	blk_queue_exit(q);
+	return BLK_QC_T_NONE;
 }
 EXPORT_SYMBOL_GPL(blk_mq_make_request); /* only for request based dm */
 
diff --git a/block/blk.h b/block/blk.h
index fc00537026a04..9e6ed5f118239 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -64,17 +64,6 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
 
 void blk_freeze_queue(struct request_queue *q);
 
-static inline void blk_queue_enter_live(struct request_queue *q)
-{
-	/*
-	 * Given that running in generic_make_request() context
-	 * guarantees that a live reference against q_usage_counter has
-	 * been established, further references under that same context
-	 * need not check that the queue has been frozen (marked dead).
-	 */
-	percpu_ref_get(&q->q_usage_counter);
-}
-
 static inline bool biovec_phys_mergeable(struct request_queue *q,
 		struct bio_vec *vec1, struct bio_vec *vec2)
 {
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 8921cd79422c6..f215b86664484 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1791,8 +1791,17 @@ static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio)
 	int srcu_idx;
 	struct dm_table *map;
 
-	if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED)
+	if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) {
+		/*
+		 * We are called with a live reference on q_usage_counter, but
+		 * that one will be released as soon as we return.  Grab an
+		 * extra one as blk_mq_make_request expects to be able to
+		 * consume a reference (which lives until the request is freed
+		 * in case a request is allocated).
+		 */
+		percpu_ref_get(&q->q_usage_counter);
 		return blk_mq_make_request(q, bio);
+	}
 
 	map = dm_get_live_table(md, &srcu_idx);
 
-- 
2.26.2



* Re: [PATCH 3/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request_hctx
  2020-05-16 18:28 ` [PATCH 3/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request_hctx Christoph Hellwig
@ 2020-05-16 18:44   ` Bart Van Assche
  0 siblings, 0 replies; 9+ messages in thread
From: Bart Van Assche @ 2020-05-16 18:44 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: linux-block

On 2020-05-16 11:28, Christoph Hellwig wrote:
> No need for two queue references.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


* Re: avoid a few q_usage_counter roundtrips v3
  2020-05-16 18:27 avoid a few q_usage_counter roundtrips v3 Christoph Hellwig
                   ` (3 preceding siblings ...)
  2020-05-16 18:28 ` [PATCH 4/4] blk-mq: allow blk_mq_make_request to consume the q_usage_counter reference Christoph Hellwig
@ 2020-05-19 15:43 ` Jens Axboe
  4 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2020-05-19 15:43 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block

On 5/16/20 12:27 PM, Christoph Hellwig wrote:
> Hi Jens,
> 
> the way we track references on q_usage_counter is a little weird at the
> moment, in that we often have to grab another reference in addition to
> the current one.  This small series reshuffles that to avoid the extra
> references in the normal I/O path.
> 
> Changes since v2:
>  - increase the q_usage_counter critical section a bit in
>    blk_mq_alloc_request_hctx
> 
> Changes since v1:
>  - rebased to the latest for-5.8/block tree with the blk-crypto addition

Applied, thanks.

-- 
Jens Axboe



* Re: [PATCH 2/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request
  2020-05-16 15:34 ` [PATCH 2/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request Christoph Hellwig
@ 2020-05-16 16:25   ` Bart Van Assche
  0 siblings, 0 replies; 9+ messages in thread
From: Bart Van Assche @ 2020-05-16 16:25 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: linux-block

On 2020-05-16 08:34, Christoph Hellwig wrote:
> No need for two queue references.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


* [PATCH 2/4] blk-mq: remove a pointless queue enter pair in blk_mq_alloc_request
  2020-05-16 15:34 avoid a few q_usage_counter roundtrips v2 Christoph Hellwig
@ 2020-05-16 15:34 ` Christoph Hellwig
  2020-05-16 16:25   ` Bart Van Assche
  0 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2020-05-16 15:34 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

No need for two queue references.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d2962863e629f..d96d3931f33e6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -406,10 +406,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 	if (ret)
 		return ERR_PTR(ret);
 
-	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
-	blk_queue_exit(q);
-
 	if (!rq) {
 		blk_queue_exit(q);
 		return ERR_PTR(-EWOULDBLOCK);
-- 
2.26.2


