linux-block.vger.kernel.org archive mirror
* Re: [PATCH 1/7] blk-mq: sync wake_batch update and users number change
  2023-02-09 20:11 ` [PATCH 1/7] blk-mq: sync wake_batch update and users number change Kemeng Shi
@ 2023-02-09 13:43   ` Jan Kara
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2023-02-09 13:43 UTC (permalink / raw)
  To: Kemeng Shi
  Cc: axboe, hch, jack, andriy.shevchenko, qiulaibin, linux-block,
	linux-kernel

On Fri 10-02-23 04:11:10, Kemeng Shi wrote:
> Commit 180dccb0dba4f ("blk-mq: fix tag_get wait task can't be awakened")
> added recalculation of wake_batch when active_queues changes to avoid io
> hung.
Function blk_mq_tag_idle and blk_mq_tag_busy can be called concurrently,
> thread1  			thread2
> atomic_inc_return
> 				atomic_inc_return
> 				blk_mq_update_wake_batch
> blk_mq_update_wake_batch
> 
> 1. Thread1 increases active_queues from zero to one.
> 2. Thread2 increases active_queues from one to two.
> 3. Thread2 calculates wake_batch with the latest active_queues count of two.
> 4. Thread1 calculates wake_batch with the stale active_queues count of one.
> Then wake_batch is inconsistent with the actual active_queues. If wake_batch
> is calculated with an active_queues count smaller than the actual one,
> wake_batch will be greater than it is supposed to be and may cause an IO
> hang.
> 
> Synchronize the wake_batch update with the user count change to keep
> wake_batch consistent with active_queues, fixing this.
> 
> Fixes: 180dccb0dba4 ("blk-mq: fix tag_get wait task can't be awakened")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

OK, luckily this extra spin_lock happens only when adding or removing a
busy queue, which should be reasonably rare. So looks good to me. Feel free
to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/blk-mq-tag.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index 9eb968e14d31..1d3135acfc98 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -39,7 +39,9 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
>   */
>  void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
>  {
> +	struct blk_mq_tags *tags = hctx->tags;
>  	unsigned int users;
> +	unsigned long flags;
>  
>  	if (blk_mq_is_shared_tags(hctx->flags)) {
>  		struct request_queue *q = hctx->queue;
> @@ -53,9 +55,11 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
>  		set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state);
>  	}
>  
> -	users = atomic_inc_return(&hctx->tags->active_queues);
> +	spin_lock_irqsave(&tags->lock, flags);
> +	users = atomic_inc_return(&tags->active_queues);
>  
> -	blk_mq_update_wake_batch(hctx->tags, users);
> +	blk_mq_update_wake_batch(tags, users);
> +	spin_unlock_irqrestore(&tags->lock, flags);
>  }
>  
>  /*
> @@ -76,6 +80,7 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
>  {
>  	struct blk_mq_tags *tags = hctx->tags;
>  	unsigned int users;
> +	unsigned long flags;
>  
>  	if (blk_mq_is_shared_tags(hctx->flags)) {
>  		struct request_queue *q = hctx->queue;
> @@ -88,9 +93,11 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
>  			return;
>  	}
>  
> +	spin_lock_irqsave(&tags->lock, flags);
>  	users = atomic_dec_return(&tags->active_queues);
>  
>  	blk_mq_update_wake_batch(tags, users);
> +	spin_unlock_irqrestore(&tags->lock, flags);
>  
>  	blk_mq_tag_wakeup_all(tags, false);
>  }
> -- 
> 2.30.0
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH 2/7] blk-mq: count changed hctx as active in blk_mq_get_tag
  2023-02-09 20:11 ` [PATCH 2/7] blk-mq: count changed hctx as active in blk_mq_get_tag Kemeng Shi
@ 2023-02-09 13:55   ` Jan Kara
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2023-02-09 13:55 UTC (permalink / raw)
  To: Kemeng Shi
  Cc: axboe, hch, jack, andriy.shevchenko, qiulaibin, linux-block,
	linux-kernel

On Fri 10-02-23 04:11:11, Kemeng Shi wrote:
> Commit d263ed9926823 ("blk-mq: count the hctx as active before allocating
> tag") active hctx before blk_mq_get_tag to avoid petential starvation.
> However, the hctx to alloc tag may change in blk_mq_get_tag if
> BLK_MQ_REQ_NOWAIT is not set, then there are two problems:
> 1. An hctx that never performs a real allocation is marked active.
> 2. The starvation problem mentioned in commit d263ed9926823 ("blk-mq:
> count the hctx as active before allocating tag") still exists on the
> changed hctx, as it may not be marked active before tag allocation.
> 
> Problem 1 is tolerable: the hctx marked active will probably get IO
> soon, or will be marked inactive by the lazy tag-idle detection.
> Mark the changed hctx active to fix problem 2.
> 
> Fixes: d263ed992682 ("blk-mq: count the hctx as active before allocating tag")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

Yeah, makes sense to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/blk-mq-tag.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index 1d3135acfc98..e566fd96dc26 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -191,6 +191,9 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
>  		data->ctx = blk_mq_get_ctx(data->q);
>  		data->hctx = blk_mq_map_queue(data->q, data->cmd_flags,
>  						data->ctx);
> +		if (!(data->rq_flags & RQF_ELV))
> +			blk_mq_tag_busy(data->hctx);
> +
>  		tags = blk_mq_tags_from_data(data);
>  		if (data->flags & BLK_MQ_REQ_RESERVED)
>  			bt = &tags->breserved_tags;
> -- 
> 2.30.0
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* [PATCH 0/7] A few bugfix and cleanup patches to blk-mq
@ 2023-02-09 20:11 Kemeng Shi
  2023-02-09 20:11 ` [PATCH 1/7] blk-mq: sync wake_batch update and users number change Kemeng Shi
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

Hi, this patchset contains a few bugfix patches to avoid a recalculation
race and to mark the hctx active before allocating a tag in blk_mq_get_tag,
plus a few cleanup patches.

Kemeng Shi (7):
  blk-mq: sync wake_batch update and users number change
  blk-mq: count changed hctx as active in blk_mq_get_tag
  blk-mq: remove wake_batch recalculation for reserved tags
  blk-mq: remove unnecessary bit clear in __blk_mq_alloc_requests_batch
  blk-mq: remove unnecessary "set->queue_depth == 0" check in
    blk_mq_alloc_set_map_and_rqs
  blk-mq: Remove unnecessary hctx check in function
    blk_mq_alloc_and_init_hctx
  blk-mq: remove stale comment of function called for iterated request

 block/blk-mq-tag.c | 49 +++++++++++++++++++++++++---------------------
 block/blk-mq.c     |  8 +++-----
 2 files changed, 30 insertions(+), 27 deletions(-)

-- 
2.30.0



* [PATCH 1/7] blk-mq: sync wake_batch update and users number change
  2023-02-09 20:11 [PATCH 0/7] A few bugfix and cleanup patches to blk-mq Kemeng Shi
@ 2023-02-09 20:11 ` Kemeng Shi
  2023-02-09 13:43   ` Jan Kara
  2023-02-09 20:11 ` [PATCH 2/7] blk-mq: count changed hctx as active in blk_mq_get_tag Kemeng Shi
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

Commit 180dccb0dba4f ("blk-mq: fix tag_get wait task can't be awakened")
added recalculation of wake_batch when active_queues changes to avoid IO
hangs.
Functions blk_mq_tag_idle and blk_mq_tag_busy can be called concurrently,
so wake_batch may be updated with a stale user count. For example, if
tag allocations for two shared queues happen concurrently, blk_mq_tag_busy
may be executed as follows:
thread1  			thread2
atomic_inc_return
				atomic_inc_return
				blk_mq_update_wake_batch
blk_mq_update_wake_batch

1. Thread1 increases active_queues from zero to one.
2. Thread2 increases active_queues from one to two.
3. Thread2 calculates wake_batch with the latest active_queues count of two.
4. Thread1 calculates wake_batch with the stale active_queues count of one.
Then wake_batch is inconsistent with the actual active_queues. If wake_batch
is calculated with an active_queues count smaller than the actual one,
wake_batch will be greater than it is supposed to be and may cause an IO
hang.

Synchronize the wake_batch update with the user count change to keep
wake_batch consistent with active_queues, fixing this.
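
For illustration, here is a minimal userspace model of the fixed pattern
(a sketch only: a pthread mutex stands in for tags->lock, and all names
and the wake_batch formula are illustrative, not kernel API):

#include <pthread.h>
#include <stdio.h>

/* The counter bump and the recalculation of the derived wake_batch
 * happen under one lock, so wake_batch can never be computed from a
 * stale user count. */
struct tags_model {
        pthread_mutex_t lock;
        int active_queues;
        int wake_batch;
};

static void update_wake_batch(struct tags_model *t, int users)
{
        /* stand-in for blk_mq_update_wake_batch(): depth shared by users */
        t->wake_batch = 64 / users;
}

static void *tag_busy(void *arg)
{
        struct tags_model *t = arg;

        pthread_mutex_lock(&t->lock);
        int users = ++t->active_queues; /* atomic_inc_return() in the kernel */
        update_wake_batch(t, users);    /* sees the count it just produced */
        pthread_mutex_unlock(&t->lock);
        return NULL;
}

int main(void)
{
        struct tags_model t = { PTHREAD_MUTEX_INITIALIZER, 0, 64 };
        pthread_t t1, t2;

        pthread_create(&t1, NULL, tag_busy, &t);
        pthread_create(&t2, NULL, tag_busy, &t);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* always prints users=2 wake_batch=32, never a stale combination */
        printf("users=%d wake_batch=%d\n", t.active_queues, t.wake_batch);
        return 0;
}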

Fixes: 180dccb0dba4 ("blk-mq: fix tag_get wait task can't be awakened")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq-tag.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 9eb968e14d31..1d3135acfc98 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -39,7 +39,9 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
  */
 void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 {
+	struct blk_mq_tags *tags = hctx->tags;
 	unsigned int users;
+	unsigned long flags;
 
 	if (blk_mq_is_shared_tags(hctx->flags)) {
 		struct request_queue *q = hctx->queue;
@@ -53,9 +55,11 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 		set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state);
 	}
 
-	users = atomic_inc_return(&hctx->tags->active_queues);
+	spin_lock_irqsave(&tags->lock, flags);
+	users = atomic_inc_return(&tags->active_queues);
 
-	blk_mq_update_wake_batch(hctx->tags, users);
+	blk_mq_update_wake_batch(tags, users);
+	spin_unlock_irqrestore(&tags->lock, flags);
 }
 
 /*
@@ -76,6 +80,7 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 {
 	struct blk_mq_tags *tags = hctx->tags;
 	unsigned int users;
+	unsigned long flags;
 
 	if (blk_mq_is_shared_tags(hctx->flags)) {
 		struct request_queue *q = hctx->queue;
@@ -88,9 +93,11 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 			return;
 	}
 
+	spin_lock_irqsave(&tags->lock, flags);
 	users = atomic_dec_return(&tags->active_queues);
 
 	blk_mq_update_wake_batch(tags, users);
+	spin_unlock_irqrestore(&tags->lock, flags);
 
 	blk_mq_tag_wakeup_all(tags, false);
 }
-- 
2.30.0



* [PATCH 2/7] blk-mq: count changed hctx as active in blk_mq_get_tag
  2023-02-09 20:11 [PATCH 0/7] A few bugfix and cleanup patches to blk-mq Kemeng Shi
  2023-02-09 20:11 ` [PATCH 1/7] blk-mq: sync wake_batch update and users number change Kemeng Shi
@ 2023-02-09 20:11 ` Kemeng Shi
  2023-02-09 13:55   ` Jan Kara
  2023-02-09 20:11 ` [PATCH 3/7] blk-mq: remove wake_batch recalculation for reserved tags Kemeng Shi
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

Commit d263ed9926823 ("blk-mq: count the hctx as active before allocating
tag") active hctx before blk_mq_get_tag to avoid petential starvation.
However, the hctx to alloc tag may change in blk_mq_get_tag if
BLK_MQ_REQ_NOWAIT is not set, then there are two problems:
1. An hctx that never performs a real allocation is marked active.
2. The starvation problem mentioned in commit d263ed9926823 ("blk-mq:
count the hctx as active before allocating tag") still exists on the
changed hctx, as it may not be marked active before tag allocation.

Problem 1 is tolerable: the hctx marked active will probably get IO
soon, or will be marked inactive by the lazy tag-idle detection.
Mark the changed hctx active to fix problem 2.
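
To make the flow concrete, here is a condensed userspace model of the
retry path (a sketch only: names and behavior are illustrative, not the
kernel API; queue 1 stands for the hctx the task is remapped to):

#include <stdbool.h>
#include <stdio.h>

struct hw_queue {
        int id;
        bool active;
};

static struct hw_queue queues[2] = { { 0, false }, { 1, false } };

static int try_get_tag(struct hw_queue *q)
{
        return q->id == 1 ? 42 : -1;    /* pretend only queue 1 has tags */
}

static int get_tag(struct hw_queue *q)
{
        int tag;

        while ((tag = try_get_tag(q)) < 0) {
                /* after sleeping for a tag the task may have migrated,
                 * so the ctx/hctx mapping is re-resolved */
                q = &queues[1];
                /* the fix: count the remapped queue as active before the
                 * next attempt so it is not starved by other shared users */
                q->active = true;
        }
        return tag;
}

int main(void)
{
        queues[0].active = true;        /* caller marks the initial queue */
        /* prints tag=42 q0.active=1 q1.active=1: q0 shows problem 1
         * (active without a real allocation, tolerable), q1 shows the
         * fix for problem 2 */
        printf("tag=%d q0.active=%d q1.active=%d\n",
               get_tag(&queues[0]), queues[0].active, queues[1].active);
        return 0;
}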

Fixes: d263ed992682 ("blk-mq: count the hctx as active before allocating tag")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq-tag.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 1d3135acfc98..e566fd96dc26 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -191,6 +191,9 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		data->ctx = blk_mq_get_ctx(data->q);
 		data->hctx = blk_mq_map_queue(data->q, data->cmd_flags,
 						data->ctx);
+		if (!(data->rq_flags & RQF_ELV))
+			blk_mq_tag_busy(data->hctx);
+
 		tags = blk_mq_tags_from_data(data);
 		if (data->flags & BLK_MQ_REQ_RESERVED)
 			bt = &tags->breserved_tags;
-- 
2.30.0



* [PATCH 3/7] blk-mq: remove wake_batch recalculation for reserved tags
  2023-02-09 20:11 [PATCH 0/7] A few bugfix and cleanup patches to blk-mq Kemeng Shi
  2023-02-09 20:11 ` [PATCH 1/7] blk-mq: sync wake_batch update and users number change Kemeng Shi
  2023-02-09 20:11 ` [PATCH 2/7] blk-mq: count changed hctx as active in blk_mq_get_tag Kemeng Shi
@ 2023-02-09 20:11 ` Kemeng Shi
  2023-02-09 20:11 ` [PATCH 4/7] blk-mq: remove unnecessary bit clear in __blk_mq_alloc_requests_batch Kemeng Shi
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

Commit 180dccb0dba4f ("blk-mq: fix tag_get wait task can't be
awakened") added wake_batch recalculation when the number of users of
shared tags changes, to avoid a hang: as the user count increases, the
hctx_max_depth limit for a single user decreases, and a hang may be
triggered once wake_batch > hctx_max_depth (e.g. wake_batch staying at 8
while a single user's depth limit drops below 8, so enough wakeups may
never accumulate).
Commit 285008501c65a ("blk-mq: always allow reserved allocation in
hctx_may_queue") removed the hctx_max_depth limit for allocating reserved
tags, so that limit no longer exists for reserved tags and the
recalculation for them can be removed.

Fixes: 285008501c65 ("blk-mq: always allow reserved allocation in hctx_may_queue")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq-tag.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index e566fd96dc26..7f1777dc11e5 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -27,8 +27,6 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
 
 	sbitmap_queue_recalculate_wake_batch(&tags->bitmap_tags,
 			users);
-	sbitmap_queue_recalculate_wake_batch(&tags->breserved_tags,
-			users);
 }
 
 /*
-- 
2.30.0



* [PATCH 4/7] blk-mq: remove unnecessary bit clear in __blk_mq_alloc_requests_batch
  2023-02-09 20:11 [PATCH 0/7] A few bugfix and cleanup patches to blk-mq Kemeng Shi
                   ` (2 preceding siblings ...)
  2023-02-09 20:11 ` [PATCH 3/7] blk-mq: remove wake_batch recalculation for reserved tags Kemeng Shi
@ 2023-02-09 20:11 ` Kemeng Shi
  2023-02-09 20:11 ` [PATCH 5/7] blk-mq: remove unnecessary "set->queue_depth == 0" check in blk_mq_alloc_set_map_and_rqs Kemeng Shi
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

Bits in tag_mask are not accessed after being cleared, so we can remove
this unnecessary clear.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 89b4dd81ae17..6014d9b5e296 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -456,7 +456,6 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data,
 			continue;
 		tag = tag_offset + i;
 		prefetch(tags->static_rqs[tag]);
-		tag_mask &= ~(1UL << i);
 		rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns);
 		rq_list_add(data->cached_rq, rq);
 		nr++;
-- 
2.30.0



* [PATCH 5/7] blk-mq: remove unnecessary "set->queue_depth == 0" check in blk_mq_alloc_set_map_and_rqs
  2023-02-09 20:11 [PATCH 0/7] A few bugfix and cleanup patches to blk-mq Kemeng Shi
                   ` (3 preceding siblings ...)
  2023-02-09 20:11 ` [PATCH 4/7] blk-mq: remove unnecessary bit clear in __blk_mq_alloc_requests_batch Kemeng Shi
@ 2023-02-09 20:11 ` Kemeng Shi
  2023-02-09 20:11 ` [PATCH 6/7] blk-mq: Remove unnecessary hctx check in function blk_mq_alloc_and_init_hctx Kemeng Shi
  2023-02-09 20:11 ` [PATCH 7/7] blk-mq: remove stale comment of function called for iterated request Kemeng Shi
  6 siblings, 0 replies; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

We break the loop and set err to -ENOMEM if "set->queue_depth" is less
than "set->reserved_tags + BLK_MQ_TAG_MIN". set->reserved_tags is an
unsigned int, so it is never negative, and BLK_MQ_TAG_MIN is 1, so
"set->reserved_tags + BLK_MQ_TAG_MIN" is always greater than 0. In the
case where set->queue_depth would be halved down to zero, the
"set->queue_depth < set->reserved_tags + BLK_MQ_TAG_MIN" check is always
met first, so that branch sets err to -ENOMEM and breaks the loop. The
loop exit condition "while (set->queue_depth)" is therefore never the
path out of the loop, and the set->queue_depth check in
"if (!set->queue_depth || err)" is redundant. Just remove these checks.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6014d9b5e296..4d2ab01549cd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4324,9 +4324,9 @@ static int blk_mq_alloc_set_map_and_rqs(struct blk_mq_tag_set *set)
 			err = -ENOMEM;
 			break;
 		}
-	} while (set->queue_depth);
+	} while (true);
 
-	if (!set->queue_depth || err) {
+	if (err) {
 		pr_err("blk-mq: failed to allocate request map\n");
 		return -ENOMEM;
 	}
-- 
2.30.0



* [PATCH 6/7] blk-mq: Remove unnecessary hctx check in function blk_mq_alloc_and_init_hctx
  2023-02-09 20:11 [PATCH 0/7] A few bugfix and cleanup patches to blk-mq Kemeng Shi
                   ` (4 preceding siblings ...)
  2023-02-09 20:11 ` [PATCH 5/7] blk-mq: remove unnecessary "set->queue_depth == 0" check in blk_mq_alloc_set_map_and_rqs Kemeng Shi
@ 2023-02-09 20:11 ` Kemeng Shi
  2023-02-09 20:11 ` [PATCH 7/7] blk-mq: remove stale comment of function called for iterated request Kemeng Shi
  6 siblings, 0 replies; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

We can remove the hctx from the list as soon as a valid hctx is found,
which avoids the extra validity check after the loop.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4d2ab01549cd..1aa3cdc55c4e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4121,11 +4121,10 @@ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
 	list_for_each_entry(tmp, &q->unused_hctx_list, hctx_list) {
 		if (tmp->numa_node == node) {
 			hctx = tmp;
+			list_del_init(&hctx->hctx_list);
 			break;
 		}
 	}
-	if (hctx)
-		list_del_init(&hctx->hctx_list);
 	spin_unlock(&q->unused_hctx_lock);
 
 	if (!hctx)
-- 
2.30.0



* [PATCH 7/7] blk-mq: remove stale comment of function called for iterated request
  2023-02-09 20:11 [PATCH 0/7] A few bugfix and cleanup patches to blk-mq Kemeng Shi
                   ` (5 preceding siblings ...)
  2023-02-09 20:11 ` [PATCH 6/7] blk-mq: Remove unnecessary hctx check in function blk_mq_alloc_and_init_hctx Kemeng Shi
@ 2023-02-09 20:11 ` Kemeng Shi
  6 siblings, 0 replies; 10+ messages in thread
From: Kemeng Shi @ 2023-02-09 20:11 UTC (permalink / raw)
  To: axboe, hch, jack; +Cc: andriy.shevchenko, qiulaibin, linux-block, linux-kernel

We now call a function of type "bool (busy_tag_iter_fn)(struct request *,
void *)" for each iterated request. Remove the stale @hctx and @reserved
arguments from the comments, as they are no longer passed to the
iteration function.
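
For reference, a callback matching the current signature could look like
this (a sketch; the callback name and the counting use are illustrative,
not taken from the kernel tree):

/* passed as @fn to e.g. blk_mq_tagset_busy_iter(), with @data pointing
 * to an unsigned int counter */
static bool count_started(struct request *rq, void *data)
{
        unsigned int *count = data;

        if (blk_mq_request_started(rq))
                (*count)++;
        return true;    /* continue iterating tags */
}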

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 block/blk-mq-tag.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 7f1777dc11e5..f1187c901019 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -303,9 +303,9 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
  *		or the bitmap_tags member of struct blk_mq_tags.
  * @fn:		Pointer to the function that will be called for each request
  *		associated with @hctx that has been assigned a driver tag.
- *		@fn will be called as follows: @fn(@hctx, rq, @data, @reserved)
- *		where rq is a pointer to a request. Return true to continue
- *		iterating tags, false to stop.
+ *		@fn will be called as follows: @fn(@rq, @data) where rq is a
+ *		pointer to a request. Return true to continue iterating tags,
+ *		false to stop.
  * @data:	Will be passed as third argument to @fn.
  * @reserved:	Indicates whether @bt is the breserved_tags member or the
  *		bitmap_tags member of struct blk_mq_tags.
@@ -372,9 +372,9 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
  * @bt:		sbitmap to examine. This is either the breserved_tags member
  *		or the bitmap_tags member of struct blk_mq_tags.
  * @fn:		Pointer to the function that will be called for each started
- *		request. @fn will be called as follows: @fn(rq, @data,
- *		@reserved) where rq is a pointer to a request. Return true
- *		to continue iterating tags, false to stop.
+ *		request. @fn will be called as follows: @fn(rq, @data) where
+ *		rq is a pointer to a request. Return true to continue
+ *		iterating tags, false to stop.
  * @data:	Will be passed as second argument to @fn.
  * @flags:	BT_TAG_ITER_*
  */
@@ -407,10 +407,9 @@ static void __blk_mq_all_tag_iter(struct blk_mq_tags *tags,
  * blk_mq_all_tag_iter - iterate over all requests in a tag map
  * @tags:	Tag map to iterate over.
  * @fn:		Pointer to the function that will be called for each
- *		request. @fn will be called as follows: @fn(rq, @priv,
- *		reserved) where rq is a pointer to a request. 'reserved'
- *		indicates whether or not @rq is a reserved request. Return
- *		true to continue iterating tags, false to stop.
+ *		request. @fn will be called as follows: @fn(rq, @priv)
+ *		where rq is a pointer to a request. Return true to
+ *		continue iterating tags, false to stop.
  * @priv:	Will be passed as second argument to @fn.
  *
  * Caller has to pass the tag map from which requests are allocated.
@@ -425,10 +424,9 @@ void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
  * blk_mq_tagset_busy_iter - iterate over all started requests in a tag set
  * @tagset:	Tag set to iterate over.
  * @fn:		Pointer to the function that will be called for each started
- *		request. @fn will be called as follows: @fn(rq, @priv,
- *		reserved) where rq is a pointer to a request. 'reserved'
- *		indicates whether or not @rq is a reserved request. Return
- *		true to continue iterating tags, false to stop.
+ *		request. @fn will be called as follows: @fn(rq, @priv) where
+ *		rq is a pointer to a request. Return true to continue
+ *		iterating tags, false to stop.
  * @priv:	Will be passed as second argument to @fn.
  *
  * We grab one request reference before calling @fn and release it after
@@ -484,10 +482,9 @@ EXPORT_SYMBOL(blk_mq_tagset_wait_completed_request);
  * blk_mq_queue_tag_busy_iter - iterate over all requests with a driver tag
  * @q:		Request queue to examine.
  * @fn:		Pointer to the function that will be called for each request
- *		on @q. @fn will be called as follows: @fn(hctx, rq, @priv,
- *		reserved) where rq is a pointer to a request and hctx points
- *		to the hardware queue associated with the request. 'reserved'
- *		indicates whether or not @rq is a reserved request.
+ *		on @q. @fn will be called as follows: @fn(rq, @priv) where
+ *		rq is a pointer to a request. Return true to continue
+ *		iterating tags, false to stop.
  * @priv:	Will be passed as third argument to @fn.
  *
  * Note: if @q->tag_set is shared with other request queues then @fn will be
-- 
2.30.0

