linux-kernel.vger.kernel.org archive mirror
* [PATCH] block/mq-deadline: use correct way to throttling write requests
@ 2023-08-03 11:12 Zhiguo Niu
  2023-08-03 17:01 ` Bart Van Assche
  2023-08-08 21:46 ` Jens Axboe
  0 siblings, 2 replies; 3+ messages in thread
From: Zhiguo Niu @ 2023-08-03 11:12 UTC (permalink / raw)
  To: axboe, bvanassche
  Cc: linux-block, linux-kernel, niuzhiguo84, zhiguo.niu, hongyu.jin,
	yunlong.xing

The original formula was inaccurate:
dd->async_depth = max(1UL, 3 * q->nr_requests / 4);

For write requests, when we allocate a tag from sched_tags,
data->shallow_depth is passed to sbitmap_find_bit(), see the
following code:

nr = sbitmap_find_bit_in_word(&sb->map[index],
			      min_t(unsigned int,
				    __map_depth(sb, index),
				    depth),
			      alloc_hint, wrap);

The smaller of data->shallow_depth and __map_depth(sb, index)
will be used as the maximum range when allocating bits.

For an mmc device (one hw queue, deadline I/O scheduler):
q->nr_requests = sched_tags = 128, so according to the previous
calculation, dd->async_depth = data->shallow_depth = 96. On a
64-bit platform with 8 CPUs, sched_tags.bitmap_tags.sb.shift = 5,
so sb.maps[] = 32/32/32/32. Since 32 is smaller than 96, both read
and write I/O can allocate tags from the full range of each word
every time, so there is no throttling effect.

In addition, following the approach of the bfq/kyber I/O schedulers,
the limit ratio is calculated based on sched_tags.bitmap_tags.sb.shift.

With this patch, write requests are actually throttled.

Fixes: 07757588e507 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")

Signed-off-by: Zhiguo Niu <zhiguo.niu@unisoc.com>

---
 block/mq-deadline.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 5839a027e0f0..7e043d4a78f8 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -620,8 +620,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
+	unsigned int shift = tags->bitmap_tags.sb.shift;
 
-	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
+	dd->async_depth = max(1U, 3 * (1U << shift)  / 4);
 
 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }
-- 
2.37.3



* Re: [PATCH] block/mq-deadline: use correct way to throttling write requests
  2023-08-03 11:12 [PATCH] block/mq-deadline: use correct way to throttling write requests Zhiguo Niu
@ 2023-08-03 17:01 ` Bart Van Assche
  2023-08-08 21:46 ` Jens Axboe
  1 sibling, 0 replies; 3+ messages in thread
From: Bart Van Assche @ 2023-08-03 17:01 UTC (permalink / raw)
  To: Zhiguo Niu, axboe
  Cc: linux-block, linux-kernel, niuzhiguo84, hongyu.jin, yunlong.xing

On 8/3/23 04:12, Zhiguo Niu wrote:
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index 5839a027e0f0..7e043d4a78f8 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -620,8 +620,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
>   	struct request_queue *q = hctx->queue;
>   	struct deadline_data *dd = q->elevator->elevator_data;
>   	struct blk_mq_tags *tags = hctx->sched_tags;
> +	unsigned int shift = tags->bitmap_tags.sb.shift;
>   
> -	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
> +	dd->async_depth = max(1U, 3 * (1U << shift)  / 4);
>   
>   	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
>   }

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


* Re: [PATCH] block/mq-deadline: use correct way to throttling write requests
  2023-08-03 11:12 [PATCH] block/mq-deadline: use correct way to throttling write requests Zhiguo Niu
  2023-08-03 17:01 ` Bart Van Assche
@ 2023-08-08 21:46 ` Jens Axboe
  1 sibling, 0 replies; 3+ messages in thread
From: Jens Axboe @ 2023-08-08 21:46 UTC (permalink / raw)
  To: bvanassche, Zhiguo Niu
  Cc: linux-block, linux-kernel, niuzhiguo84, hongyu.jin, yunlong.xing


On Thu, 03 Aug 2023 19:12:42 +0800, Zhiguo Niu wrote:
> The original formula was inaccurate:
> dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
> 
> For write requests, when we assign a tags from sched_tags,
> data->shallow_depth will be passed to sbitmap_find_bit,
> see the following code:
> 
> [...]

Applied, thanks!

[1/1] block/mq-deadline: use correct way to throttling write requests
      commit: d47f9717e5cfd0dd8c0ba2ecfa47c38d140f1bb6

Best regards,
-- 
Jens Axboe




