linux-block.vger.kernel.org archive mirror
* avoid a few q_usage_counter roundtrips
@ 2020-05-13 11:04 Christoph Hellwig
  2020-05-13 11:04 ` [PATCH 1/4] block: move the call to blk_queue_enter_live out of blk_mq_get_request Christoph Hellwig
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Christoph Hellwig @ 2020-05-13 11:04 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

Hi Jens,

the way we track references on q_usage_counter is a little weird at the
moment, in that we often have to grab another reference in addition to
the one we already hold.  This small series reshuffles that to avoid the
extra references in the normal I/O path.
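
For reference, this is roughly what the pattern looks like before the
series.  A heavily simplified sketch, not the literal kernel code:
allocate_tag_and_request() below is a made-up stand-in for the real tag
and request setup:

static struct request *blk_mq_get_request(struct request_queue *q,
                struct bio *bio, struct blk_mq_alloc_data *data)
{
        struct request *rq;

        blk_queue_enter_live(q);                /* extra reference */
        rq = allocate_tag_and_request(q, bio, data);    /* stand-in */
        if (!rq) {
                blk_queue_exit(q);              /* dropped again on failure */
                return NULL;
        }
        return rq;                              /* kept with the request */
}

struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
                blk_mq_req_flags_t flags)
{
        struct blk_mq_alloc_data alloc_data = { .flags = flags, .cmd_flags = op };
        struct request *rq;
        int ret;

        ret = blk_queue_enter(q, flags);        /* the caller's own reference */
        if (ret)
                return ERR_PTR(ret);

        rq = blk_mq_get_request(q, NULL, &alloc_data);
        blk_queue_exit(q);                      /* dropped again right away */
        if (!rq)
                return ERR_PTR(-EWOULDBLOCK);
        return rq;
}

Patch 1 moves the inner get/put pair out into the callers, so the
following patches can drop the now redundant pairs and let the single
reference taken by the caller travel with the request instead.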


* [PATCH 1/4] block: move the call to blk_queue_enter_live out of blk_mq_get_request
  2020-05-13 11:04 avoid a few q_usage_counter roundtrips Christoph Hellwig
@ 2020-05-13 11:04 ` Christoph Hellwig
  2020-05-13 11:04 ` [PATCH 2/4] block: remove a pointless queue enter pair in blk_mq_alloc_request Christoph Hellwig
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2020-05-13 11:04 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

Move the blk_queue_enter_live calls into the callers, where they can
successively be cleaned up.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9ee695bdf8739..9865a36e6a6e3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -340,8 +340,6 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 	bool clear_ctx_on_error = false;
 	u64 alloc_time_ns = 0;
 
-	blk_queue_enter_live(q);
-
 	/* alloc_time includes depth and tag waits */
 	if (blk_queue_rq_alloc_time(q))
 		alloc_time_ns = ktime_get_ns();
@@ -377,7 +375,6 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 	if (tag == BLK_MQ_TAG_FAIL) {
 		if (clear_ctx_on_error)
 			data->ctx = NULL;
-		blk_queue_exit(q);
 		return NULL;
 	}
 
@@ -407,11 +404,14 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 	if (ret)
 		return ERR_PTR(ret);
 
+	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
 	blk_queue_exit(q);
 
-	if (!rq)
+	if (!rq) {
+		blk_queue_exit(q);
 		return ERR_PTR(-EWOULDBLOCK);
+	}
 
 	rq->__data_len = 0;
 	rq->__sector = (sector_t) -1;
@@ -456,11 +456,14 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
 	alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
 
+	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
 	blk_queue_exit(q);
 
-	if (!rq)
+	if (!rq) {
+		blk_queue_exit(q);
 		return ERR_PTR(-EWOULDBLOCK);
+	}
 
 	return rq;
 }
@@ -2038,8 +2041,10 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	rq_qos_throttle(q, bio);
 
 	data.cmd_flags = bio->bi_opf;
+	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, bio, &data);
 	if (unlikely(!rq)) {
+		blk_queue_exit(q);
 		rq_qos_cleanup(q, bio);
 		if (bio->bi_opf & REQ_NOWAIT)
 			bio_wouldblock_error(bio);
-- 
2.26.2



* [PATCH 2/4] block: remove a pointless queue enter pair in blk_mq_alloc_request
  2020-05-13 11:04 avoid a few q_usage_counter roundtrips Christoph Hellwig
  2020-05-13 11:04 ` [PATCH 1/4] block: move the call to blk_queue_enter_live out of blk_mq_get_request Christoph Hellwig
@ 2020-05-13 11:04 ` Christoph Hellwig
  2020-05-13 11:04 ` [PATCH 3/4] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx Christoph Hellwig
  2020-05-13 11:04 ` [PATCH 4/4] block: allow blk_mq_make_request to consume the q_usage_counter reference Christoph Hellwig
  3 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2020-05-13 11:04 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

No need for two queue references.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9865a36e6a6e3..a3491c1397501 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -404,10 +404,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 	if (ret)
 		return ERR_PTR(ret);
 
-	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
-	blk_queue_exit(q);
-
 	if (!rq) {
 		blk_queue_exit(q);
 		return ERR_PTR(-EWOULDBLOCK);
-- 
2.26.2



* [PATCH 3/4] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx
  2020-05-13 11:04 avoid a few q_usage_counter roundtrips Christoph Hellwig
  2020-05-13 11:04 ` [PATCH 1/4] block: move the call to blk_queue_enter_live out of blk_mq_get_request Christoph Hellwig
  2020-05-13 11:04 ` [PATCH 2/4] block: remove a pointless queue enter pair in blk_mq_alloc_request Christoph Hellwig
@ 2020-05-13 11:04 ` Christoph Hellwig
  2020-05-13 12:02   ` Johannes Thumshirn
  2020-05-13 11:04 ` [PATCH 4/4] block: allow blk_mq_make_request to consume the q_usage_counter reference Christoph Hellwig
  3 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2020-05-13 11:04 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

No need for two queue references.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a3491c1397501..81b7af7be70b5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -445,24 +445,22 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	 * Check if the hardware context is actually mapped to anything.
 	 * If not tell the caller that it should skip this queue.
 	 */
+	ret = -EXDEV;
 	alloc_data.hctx = q->queue_hw_ctx[hctx_idx];
-	if (!blk_mq_hw_queue_mapped(alloc_data.hctx)) {
-		blk_queue_exit(q);
-		return ERR_PTR(-EXDEV);
-	}
+	if (!blk_mq_hw_queue_mapped(alloc_data.hctx))
+		goto out_queue_exit;
 	cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
 	alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
 
-	blk_queue_enter_live(q);
+	ret = -EWOULDBLOCK;
 	rq = blk_mq_get_request(q, NULL, &alloc_data);
-	blk_queue_exit(q);
-
-	if (!rq) {
-		blk_queue_exit(q);
-		return ERR_PTR(-EWOULDBLOCK);
-	}
+	if (!rq)
+		goto out_queue_exit;
 
 	return rq;
+out_queue_exit:
+	blk_queue_exit(q);
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(blk_mq_alloc_request_hctx);
 
-- 
2.26.2



* [PATCH 4/4] block: allow blk_mq_make_request to consume the q_usage_counter reference
  2020-05-13 11:04 avoid a few q_usage_counter roundtrips Christoph Hellwig
                   ` (2 preceding siblings ...)
  2020-05-13 11:04 ` [PATCH 3/4] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx Christoph Hellwig
@ 2020-05-13 11:04 ` Christoph Hellwig
  3 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2020-05-13 11:04 UTC (permalink / raw)
  To: axboe; +Cc: linux-block

blk_mq_make_request currently needs to grab a q_usage_counter
reference when allocating a request.  This is because the block layer
grabs one before calling blk_mq_make_request, but also releases it as
soon as blk_mq_make_request returns.  Remove the blk_queue_exit call
after blk_mq_make_request returns, and instead let it consume the
reference.  This works perfectly fine for the block layer callers;
only device mapper needs an extra reference, as the old problem still
persists there.  Open code blk_queue_enter_live in device mapper, as
there should be no other callers left, and this allows us to better
document why we do a non-try get.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
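The resulting calling convention as a minimal sketch; the hypothetical
submit_bio_to_blk_mq() just mirrors direct_make_request() from the diff
below and is not new kernel code:

static blk_qc_t submit_bio_to_blk_mq(struct request_queue *q, struct bio *bio)
{
        /* take the q_usage_counter reference up front */
        if (blk_queue_enter(q, bio->bi_opf & REQ_NOWAIT ? BLK_MQ_REQ_NOWAIT : 0))
                return BLK_QC_T_NONE;

        /*
         * blk_mq_make_request now owns that reference: every early return
         * drops it through blk_queue_exit, while on success it stays with
         * the allocated request and is only dropped from
         * __blk_mq_free_request once the request is freed.
         */
        return blk_mq_make_request(q, bio);
}
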
 block/blk-core.c | 13 +++++--------
 block/blk-mq.c   | 13 +++++++------
 block/blk.h      | 11 -----------
 drivers/md/dm.c  | 11 ++++++++++-
 4 files changed, 22 insertions(+), 26 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index cf5b2163edfef..e303687811bb3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1124,12 +1124,12 @@ blk_qc_t generic_make_request(struct bio *bio)
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
 			bio_list_init(&bio_list_on_stack[0]);
-			if (q->make_request_fn)
+			if (q->make_request_fn) {
 				ret = q->make_request_fn(q, bio);
-			else
+				blk_queue_exit(q);
+			} else {
 				ret = blk_mq_make_request(q, bio);
-
-			blk_queue_exit(q);
+			}
 
 			/* sort new bios into those for a lower level
 			 * and those for the same level
@@ -1166,7 +1166,6 @@ EXPORT_SYMBOL(generic_make_request);
 blk_qc_t direct_make_request(struct bio *bio)
 {
 	struct request_queue *q = bio->bi_disk->queue;
-	blk_qc_t ret;
 
 	if (WARN_ON_ONCE(q->make_request_fn)) {
 		bio_io_error(bio);
@@ -1176,9 +1175,7 @@ blk_qc_t direct_make_request(struct bio *bio)
 		return BLK_QC_T_NONE;
 	if (unlikely(bio_queue_enter(bio)))
 		return BLK_QC_T_NONE;
-	ret = blk_mq_make_request(q, bio);
-	blk_queue_exit(q);
-	return ret;
+	return blk_mq_make_request(q, bio);
 }
 EXPORT_SYMBOL_GPL(direct_make_request);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 81b7af7be70b5..33538cce22fa6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2024,26 +2024,24 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	__blk_queue_split(q, &bio, &nr_segs);
 
 	if (!bio_integrity_prep(bio))
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 
 	if (!is_flush_fua && !blk_queue_nomerges(q) &&
 	    blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 
 	if (blk_mq_sched_bio_merge(q, bio, nr_segs))
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 
 	rq_qos_throttle(q, bio);
 
 	data.cmd_flags = bio->bi_opf;
-	blk_queue_enter_live(q);
 	rq = blk_mq_get_request(q, bio, &data);
 	if (unlikely(!rq)) {
-		blk_queue_exit(q);
 		rq_qos_cleanup(q, bio);
 		if (bio->bi_opf & REQ_NOWAIT)
 			bio_wouldblock_error(bio);
-		return BLK_QC_T_NONE;
+		goto queue_exit;
 	}
 
 	trace_block_getrq(q, bio, bio->bi_opf);
@@ -2122,6 +2120,9 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	}
 
 	return cookie;
+queue_exit:
+	blk_queue_exit(q);
+	return BLK_QC_T_NONE;
 }
 EXPORT_SYMBOL_GPL(blk_mq_make_request); /* only for request based dm */
 
diff --git a/block/blk.h b/block/blk.h
index e5cd350ca3798..cd1b516bf20b5 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -62,17 +62,6 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
 
 void blk_freeze_queue(struct request_queue *q);
 
-static inline void blk_queue_enter_live(struct request_queue *q)
-{
-	/*
-	 * Given that running in generic_make_request() context
-	 * guarantees that a live reference against q_usage_counter has
-	 * been established, further references under that same context
-	 * need not check that the queue has been frozen (marked dead).
-	 */
-	percpu_ref_get(&q->q_usage_counter);
-}
-
 static inline bool biovec_phys_mergeable(struct request_queue *q,
 		struct bio_vec *vec1, struct bio_vec *vec2)
 {
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 0eb93da44ea2a..88d3fb3d876d8 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1788,8 +1788,17 @@ static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio)
 	int srcu_idx;
 	struct dm_table *map;
 
-	if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED)
+	if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) {
+		/*
+		 * We are called with a live reference on q_usage_counter, but
+		 * that one will be released as soon as we return.  Grab an
+		 * extra one as blk_mq_make_request expects to be able to
+		 * consume a reference (which lives until the request is freed
+		 * in case a request is allocated).
+		 */
+		percpu_ref_get(&q->q_usage_counter);
 		return blk_mq_make_request(q, bio);
+	}
 
 	map = dm_get_live_table(md, &srcu_idx);
 
-- 
2.26.2



* Re: [PATCH 3/4] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx
  2020-05-13 11:04 ` [PATCH 3/4] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx Christoph Hellwig
@ 2020-05-13 12:02   ` Johannes Thumshirn
  2020-05-13 12:23     ` Christoph Hellwig
  0 siblings, 1 reply; 7+ messages in thread
From: Johannes Thumshirn @ 2020-05-13 12:02 UTC (permalink / raw)
  To: Christoph Hellwig, axboe; +Cc: linux-block

On 13/05/2020 13:04, Christoph Hellwig wrote:
>  	rq = blk_mq_get_request(q, NULL, &alloc_data);
> -	blk_queue_exit(q);
> -
> -	if (!rq) {
> -		blk_queue_exit(q);
> -		return ERR_PTR(-EWOULDBLOCK);
> -	}
> +	if (!rq)
> +		goto out_queue_exit;
>  
>  	return rq;
> +out_queue_exit:
> +	blk_queue_exit(q);
> +	return ERR_PTR(ret);

I know success handling is discouraged in the kernel, but I think 

	rq = blk_mq_get_request(q, NULL, &alloc_data);
	if (rq)
		return rq;

out_queue_exit:
	blk_queue_exit(q);
	return ERR_PTR(ret);

looks nicer, doesn't it?	


* Re: [PATCH 3/4] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx
  2020-05-13 12:02   ` Johannes Thumshirn
@ 2020-05-13 12:23     ` Christoph Hellwig
  0 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2020-05-13 12:23 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: Christoph Hellwig, axboe, linux-block

On Wed, May 13, 2020 at 12:02:10PM +0000, Johannes Thumshirn wrote:
> I know success handling is discouraged in the kernel, but I think 
> 
> 	rq = blk_mq_get_request(q, NULL, &alloc_data);
> 	if (rq)
> 		return rq;
> 
> out_queue_exit:
> 	blk_queue_exit(q);
> 	return ERR_PTR(ret);
> 
> looks nicer, doesn't it?	

Not at all.  A flow where the most common case stays in line and
everything else jumps out really helps with reading.
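
The shape being argued for, as a generic sketch with made-up helper
names (nothing from the patch itself):

int setup_thing(struct thing *t)
{
        int ret;

        ret = acquire_resource(t);      /* e.g. a queue reference */
        if (ret)
                return ret;

        ret = -EINVAL;
        if (!configuration_valid(t))
                goto out_release;       /* errors jump out ... */

        ret = -ENOMEM;
        if (!allocate_parts(t))
                goto out_release;

        return 0;                       /* ... the common case reads straight down */
out_release:
        release_resource(t);
        return ret;
}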


