linux-block.vger.kernel.org archive mirror
* [PATCH] blk-mq: always allow reserved allocation in hctx_may_queue
@ 2020-09-11  9:44 Ming Lei
  2020-09-11  9:52 ` Christoph Hellwig
  2020-09-11 10:01 ` Hannes Reinecke
  0 siblings, 2 replies; 3+ messages in thread
From: Ming Lei @ 2020-09-11  9:44 UTC (permalink / raw)
  To: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig
  Cc: Ming Lei, David Milburn, Ewan D. Milne

NVMe shares a tagset between the fabric queue and the admin queue, and
between connect_q and the NS queue, so hctx_may_queue() may be called
when allocating requests for any of these queues.

Tags can be reserved in these tagsets. Before error recovery starts there
are often many in-flight requests that cannot be completed, while new
reserved requests may be needed in the error recovery path. However,
hctx_may_queue() can keep returning false because of those uncompletable
in-flight requests, so nothing can make progress.
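
For reference, here is a simplified, self-contained sketch of the
fair-share check that hctx_may_queue() applies when a tagset is shared.
The struct, the field names and the minimum share of 4 are illustrative
stand-ins, not the exact kernel code:

	#include <stdbool.h>

	/* Illustrative model only; the real check lives in block/blk-mq.h. */
	struct model_hctx {
		bool		shared_tags;	/* tagset shared with other queues */
		unsigned int	active_queues;	/* queues currently issuing I/O */
		unsigned int	nr_active;	/* in-flight requests on this hctx */
	};

	/*
	 * Roughly what the fair-share check does: when the tagset is shared,
	 * each active queue is capped at its share of the tag depth.  A queue
	 * whose nr_active is stuck at or above that share keeps being refused.
	 */
	bool model_hctx_may_queue(const struct model_hctx *hctx, unsigned int depth)
	{
		unsigned int users = hctx->active_queues;
		unsigned int fair_share;

		if (!hctx->shared_tags || !users)
			return true;

		fair_share = (depth + users - 1) / users;	/* round up */
		if (fair_share < 4)				/* allow a few tags */
			fair_share = 4;

		return hctx->nr_active < fair_share;
	}

During error handling nr_active stays pinned at or above that fair share,
so even reserved allocations are refused and recovery cannot start.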

Fix this issue by skipping the hctx_may_queue() check for reserved tag
allocations. This is reasonable because reserved tags are supposed to be
available at any time.

Cc: David Milburn <dmilburn@redhat.com>
Cc: Ewan D. Milne <emilne@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-tag.c | 3 ++-
 block/blk-mq.c     | 6 ++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index c31c4a0478a5..aacf10decdbd 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -76,7 +76,8 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
 			    struct sbitmap_queue *bt)
 {
-	if (!data->q->elevator && !hctx_may_queue(data->hctx, bt))
+	if (!data->q->elevator && !(data->flags & BLK_MQ_REQ_RESERVED) &&
+			!hctx_may_queue(data->hctx, bt))
 		return BLK_MQ_NO_TAG;
 
 	if (data->shallow_depth)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ccb500e38008..91cff275451d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1147,15 +1147,17 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
 	struct sbitmap_queue *bt = rq->mq_hctx->tags->bitmap_tags;
 	unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags;
 	int tag;
+	bool reserved = blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags,
+			rq->internal_tag);
 
 	blk_mq_tag_busy(rq->mq_hctx);
 
-	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
+	if (reserved) {
 		bt = rq->mq_hctx->tags->breserved_tags;
 		tag_offset = 0;
 	}
 
-	if (!hctx_may_queue(rq->mq_hctx, bt))
+	if (!reserved && !hctx_may_queue(rq->mq_hctx, bt))
 		return false;
 	tag = __sbitmap_queue_get(bt);
 	if (tag == BLK_MQ_NO_TAG)
-- 
2.25.2



* Re: [PATCH] blk-mq: always allow reserved allocation in hctx_may_queue
  2020-09-11  9:44 [PATCH] blk-mq: always allow reserved allocation in hctx_may_queue Ming Lei
@ 2020-09-11  9:52 ` Christoph Hellwig
  2020-09-11 10:01 ` Hannes Reinecke
  1 sibling, 0 replies; 3+ messages in thread
From: Christoph Hellwig @ 2020-09-11  9:52 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	David Milburn, Ewan D. Milne

On Fri, Sep 11, 2020 at 05:44:53PM +0800, Ming Lei wrote:
>  	unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags;
>  	int tag;
> +	bool reserved = blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags,
> +			rq->internal_tag);
>  
>  	blk_mq_tag_busy(rq->mq_hctx);
>  
> -	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
> +	if (reserved) {
>  		bt = rq->mq_hctx->tags->breserved_tags;
>  		tag_offset = 0;
>  	}
>  
> -	if (!hctx_may_queue(rq->mq_hctx, bt))
> +	if (!reserved && !hctx_may_queue(rq->mq_hctx, bt))

What about:

	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
  		bt = rq->mq_hctx->tags->breserved_tags;
  		tag_offset = 0;
	} else {
		if (!hctx_may_queue(rq->mq_hctx, bt))
	 		return false;
	}

which seems a little easier to follow?

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH] blk-mq: always allow reserved allocation in hctx_may_queue
  2020-09-11  9:44 [PATCH] blk-mq: always allow reserved allocation in hctx_may_queue Ming Lei
  2020-09-11  9:52 ` Christoph Hellwig
@ 2020-09-11 10:01 ` Hannes Reinecke
  1 sibling, 0 replies; 3+ messages in thread
From: Hannes Reinecke @ 2020-09-11 10:01 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, linux-block, linux-nvme, Christoph Hellwig
  Cc: David Milburn, Ewan D. Milne

On 9/11/20 11:44 AM, Ming Lei wrote:
> NVMe shares a tagset between the fabric queue and the admin queue, and
> between connect_q and the NS queue, so hctx_may_queue() may be called
> when allocating requests for any of these queues.
> 
> Tags can be reserved in these tagsets. Before error recovery starts there
> are often many in-flight requests that cannot be completed, while new
> reserved requests may be needed in the error recovery path. However,
> hctx_may_queue() can keep returning false because of those uncompletable
> in-flight requests, so nothing can make progress.
> 
> Fix this issue by skipping the hctx_may_queue() check for reserved tag
> allocations. This is reasonable because reserved tags are supposed to be
> available at any time.
> 
> Cc: David Milburn <dmilburn@redhat.com>
> Cc: Ewan D. Milne <emilne@redhat.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq-tag.c | 3 ++-
>  block/blk-mq.c     | 6 ++++--
>  2 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index c31c4a0478a5..aacf10decdbd 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -76,7 +76,8 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
>  static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
>  			    struct sbitmap_queue *bt)
>  {
> -	if (!data->q->elevator && !hctx_may_queue(data->hctx, bt))
> +	if (!data->q->elevator && !(data->flags & BLK_MQ_REQ_RESERVED) &&
> +			!hctx_may_queue(data->hctx, bt))
>  		return BLK_MQ_NO_TAG;
>  
>  	if (data->shallow_depth)
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ccb500e38008..91cff275451d 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1147,15 +1147,17 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
>  	struct sbitmap_queue *bt = rq->mq_hctx->tags->bitmap_tags;
>  	unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags;
>  	int tag;
> +	bool reserved = blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags,
> +			rq->internal_tag);
>  
>  	blk_mq_tag_busy(rq->mq_hctx);
>  
> -	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
> +	if (reserved) {
>  		bt = rq->mq_hctx->tags->breserved_tags;
>  		tag_offset = 0;
>  	}
>  
> -	if (!hctx_may_queue(rq->mq_hctx, bt))
> +	if (!reserved && !hctx_may_queue(rq->mq_hctx, bt))
>  		return false;
>  	tag = __sbitmap_queue_get(bt);
>  	if (tag == BLK_MQ_NO_TAG)
> 
Very good point.

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer


