From: "牛志国 (Zhiguo Niu)" <Zhiguo.Niu@unisoc.com>
To: Bart Van Assche <bvanassche@acm.org>, Jens Axboe <axboe@kernel.dk>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"Christoph Hellwig" <hch@lst.de>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Damien Le Moal" <dlemoal@kernel.org>,
	"Harshit Mogalapalli" <harshit.m.mogalapalli@oracle.com>,
	"金红宇 (Hongyu Jin)" <hongyu.jin@unisoc.com>
Subject: Re: [PATCH] Revert "block/mq-deadline: use correct way to throttling write requests"
Date: Thu, 14 Mar 2024 01:03:17 +0000	[thread overview]
Message-ID: <cf8127b0fa594169a71f3257326e5bec@BJMBX02.spreadtrum.com> (raw)
In-Reply-To: <20240313214218.1736147-1-bvanassche@acm.org>

Hi Bart,

As mentioned regarding the original patch, "dd->async_depth = max(1UL, 3 * q->nr_requests / 4);" does not appear to have any throttling effect, because tags are allocated word by word from the sbitmap, not against the whole nr_requests.
Right?
Thanks!

For write requests, when we allocate a tag from sched_tags,
data->shallow_depth is passed to sbitmap_find_bit();
see the following code:

nr = sbitmap_find_bit_in_word(&sb->map[index],
			      min_t(unsigned int,
				    __map_depth(sb, index),
				    depth),
			      alloc_hint, wrap);

The smaller of data->shallow_depth and __map_depth(sb, index)
will be used as the maximum range when allocating bits.
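
To make that concrete, here is a minimal user-space sketch of the clamping
(effective_word_depth() is a simplified stand-in of my own, not the sbitmap API):

/* Simplified model of the per-word clamp described above: shallow_depth
 * stands in for data->shallow_depth, word_depth for __map_depth(sb, index). */
#include <stdio.h>

static unsigned int effective_word_depth(unsigned int shallow_depth,
					 unsigned int word_depth)
{
	/* sbitmap_find_bit() searches at most this many bits of each word,
	 * so any shallow_depth >= word_depth changes nothing. */
	return shallow_depth < word_depth ? shallow_depth : word_depth;
}

int main(void)
{
	printf("%u\n", effective_word_depth(96, 32));	/* 32 -> no limit */
	printf("%u\n", effective_word_depth(24, 32));	/* 24 -> limited */
	return 0;
}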

For an mmc device (one hw queue, deadline I/O scheduler):
q->nr_requests = sched_tags = 128, so with that calculation
dd->async_depth = data->shallow_depth = 96. The platform is 64-bit
with 8 cpus, so sched_tags.bitmap_tags.sb.shift = 5 and
sb.map[] = 32/32/32/32. Since 32 is smaller than 96, whether it is a
read or a write I/O, tags can be allocated over the maximum range of
each word every time, which has no throttling effect.
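
A rough sketch of where those numbers come from (simplified stand-ins for
the sbitmap fields, not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned int nr_requests = 128;			/* q->nr_requests */
	unsigned int shift = 5;				/* sb.shift */
	unsigned int word_bits = 1U << shift;		/* 32 bits per word */
	unsigned int nr_words = (nr_requests + word_bits - 1) / word_bits;
	unsigned int async_depth = 3 * nr_requests / 4;	/* 96 */

	/* 4 words of 32 bits each; async_depth (96) exceeds every word's
	 * depth (32), so min(96, 32) == 32 and writes are never limited. */
	printf("%u words of %u bits, async_depth = %u\n",
	       nr_words, word_bits, async_depth);
	return 0;
}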

-----Original Message-----
From: Bart Van Assche <bvanassche@acm.org>
Sent: March 14, 2024 5:42
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org; Christoph Hellwig <hch@lst.de>; Bart Van Assche <bvanassche@acm.org>; stable@vger.kernel.org; Damien Le Moal <dlemoal@kernel.org>; Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>; 牛志国 (Zhiguo Niu) <Zhiguo.Niu@unisoc.com>
Subject: [PATCH] Revert "block/mq-deadline: use correct way to throttling write requests"


The code "max(1U, 3 * (1U << shift)  / 4)" comes from the Kyber I/O scheduler. The Kyber I/O scheduler maintains one internal queue per hwq and hence derives its async_depth from the number of hwq tags. Using this approach for the mq-deadline scheduler is wrong since the mq-deadline scheduler maintains one internal queue for all hwqs combined. Hence this revert.

Cc: stable@vger.kernel.org
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Cc: Zhiguo Niu <Zhiguo.Niu@unisoc.com>
Fixes: d47f9717e5cf ("block/mq-deadline: use correct way to throttling write requests")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/mq-deadline.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index f958e79277b8..02a916ba62ee 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -646,9 +646,8 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
        struct request_queue *q = hctx->queue;
        struct deadline_data *dd = q->elevator->elevator_data;
        struct blk_mq_tags *tags = hctx->sched_tags;
-       unsigned int shift = tags->bitmap_tags.sb.shift;

-       dd->async_depth = max(1U, 3 * (1U << shift)  / 4);
+       dd->async_depth = max(1UL, 3 * q->nr_requests / 4);

        sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }
