From: Damien Le Moal <Damien.LeMoal@wdc.com>
To: Mike Snitzer <snitzer@redhat.com>
Cc: Ming Lei <ming.lei@redhat.com>,
Vijayendra Suman <vijayendra.suman@oracle.com>,
"dm-devel@redhat.com" <dm-devel@redhat.com>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: Re: [PATCH 1/3] block: fix blk_rq_get_max_sectors() to flow more carefully
Date: Tue, 15 Sep 2020 04:21:54 +0000 [thread overview]
Message-ID: <CY4PR04MB3751822DB93B9E155A0BE462E7200@CY4PR04MB3751.namprd04.prod.outlook.com> (raw)
In-Reply-To: <CY4PR04MB37510A739D28F993250E2B66E7200@CY4PR04MB3751.namprd04.prod.outlook.com>
On 2020/09/15 10:10, Damien Le Moal wrote:
> On 2020/09/15 0:04, Mike Snitzer wrote:
>> On Sun, Sep 13 2020 at 8:46pm -0400,
>> Damien Le Moal <Damien.LeMoal@wdc.com> wrote:
>>
>>> On 2020/09/12 6:53, Mike Snitzer wrote:
>>>> blk_queue_get_max_sectors() has been trained for REQ_OP_WRITE_SAME and
>>>> REQ_OP_WRITE_ZEROES, yet blk_rq_get_max_sectors() didn't call it for
>>>> those operations.
>>>>
>>>> Also, there is no need to avoid blk_max_size_offset() if
>>>> 'chunk_sectors' isn't set because it falls back to 'max_sectors'.
>>>>
>>>> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
>>>> ---
>>>> include/linux/blkdev.h | 19 +++++++++++++------
>>>> 1 file changed, 13 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>>>> index bb5636cc17b9..453a3d735d66 100644
>>>> --- a/include/linux/blkdev.h
>>>> +++ b/include/linux/blkdev.h
>>>> @@ -1070,17 +1070,24 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
>>>> sector_t offset)
>>>> {
>>>> struct request_queue *q = rq->q;
>>>> + int op;
>>>> + unsigned int max_sectors;
>>>>
>>>> if (blk_rq_is_passthrough(rq))
>>>> return q->limits.max_hw_sectors;
>>>>
>>>> - if (!q->limits.chunk_sectors ||
>>>> - req_op(rq) == REQ_OP_DISCARD ||
>>>> - req_op(rq) == REQ_OP_SECURE_ERASE)
>>>> - return blk_queue_get_max_sectors(q, req_op(rq));
>>>> + op = req_op(rq);
>>>> + max_sectors = blk_queue_get_max_sectors(q, op);
>>>>
>>>> - return min(blk_max_size_offset(q, offset),
>>>> - blk_queue_get_max_sectors(q, req_op(rq)));
>>>> + switch (op) {
>>>> + case REQ_OP_DISCARD:
>>>> + case REQ_OP_SECURE_ERASE:
>>>> + case REQ_OP_WRITE_SAME:
>>>> + case REQ_OP_WRITE_ZEROES:
>>>> + return max_sectors;
>>>> + }
>>>
>>> Doesn't this break md devices? (I think md does use chunk_sectors for the
>>> stride size, no?)
>>>
>>> As mentioned in my reply to Ming's email, this will allow these commands to
>>> potentially cross over zone boundaries on zoned block devices, which would be an
>>> immediate command failure.
>>
>> Depending on the implementation, it is beneficial to get a large
>> discard (one not constrained by chunk_sectors, e.g. dm-stripe.c's
>> optimization of handling a large discard by issuing N discards, one per
>> stripe). The same could apply to other commands.
>>
>> Like all devices, zoned devices should impose command-specific limits in
>> the queue_limits (and not lean on chunk_sectors as a one-size-fits-all).
>
> Yes, understood. But I think that in the case of md, chunk_sectors is used to
> indicate the boundary between drives of a raid volume. So it does indeed make
> sense to limit the IO size at submission, since otherwise the md driver
> itself would have to split that bio again anyway.
>
>> But that aside, yes, I agree I didn't pay close enough attention to the
>> implications of deferring the splitting of these commands until they
>> were issued to the underlying storage. This chunk_sectors early-splitting
>> override is a bit of a mess... I'm not quite following the logic, given we
>> were supposed to be waiting to split bios as late as possible.
>
> My view is that multipage bvecs (BIOs almost as large as we want) and late
> splitting are beneficial for sending larger effective BIOs to the device, as
> having more pages on hand allows bigger segments in the bio instead of always
> having at most PAGE_SIZE per segment. The effect is very visible with
> blktrace: a lot of requests end up being much larger than the device
> max_segments * page_size.
>
> However, if there is already a known limit on the BIO size when the BIO is
> being built, it does not make much sense to try to grow a bio beyond that
> limit, since it will have to be split by the driver anyway. chunk_sectors is
> one such limit, used by md (I think) to indicate boundaries between drives of
> a raid volume. And we reuse it (abuse it?) for zoned block devices to ensure
> that no command crosses a zone boundary, since that triggers errors for
> writes within sequential zones, or for reads/writes crossing over zones of
> different types (a conventional->sequential zone boundary).
>
> I may not have the entire picture correctly here, but so far, this is my
> understanding.
And I was wrong :) In light of Ming's comment plus a little refresher reading
of the code, indeed, chunk_sectors will split BIOs so that *requests* do not
exceed that limit, but the initial BIO submission may be much larger,
regardless of chunk_sectors.
Ming, I think the point here is that building a large BIO first and splitting
it later (as opposed to limiting the bio size by stopping bio_add_page()) is
more efficient, as there is only one bio submission instead of many, right?
--
Damien Le Moal
Western Digital Research
Thread overview: 28+ messages
[not found] <529c2394-1b58-b9d8-d462-1f3de1b78ac8@oracle.com>
2020-09-10 14:24 ` Revert "dm: always call blk_queue_split() in dm_process_bio()" Mike Snitzer
2020-09-10 19:29 ` Vijayendra Suman
2020-09-15 1:33 ` Mike Snitzer
2020-09-15 17:03 ` Mike Snitzer
2020-09-16 14:56 ` Vijayendra Suman
2020-09-11 12:20 ` Ming Lei
2020-09-11 16:13 ` Mike Snitzer
2020-09-11 21:53 ` [PATCH 0/3] block: a few chunk_sectors fixes/improvements Mike Snitzer
2020-09-11 21:53 ` [PATCH 1/3] block: fix blk_rq_get_max_sectors() to flow more carefully Mike Snitzer
2020-09-12 13:52 ` Ming Lei
2020-09-14 0:43 ` Damien Le Moal
2020-09-14 14:52 ` Mike Snitzer
2020-09-14 23:28 ` Damien Le Moal
2020-09-15 2:03 ` Ming Lei
2020-09-15 2:15 ` Damien Le Moal
2020-09-14 14:49 ` Mike Snitzer
2020-09-15 1:50 ` Ming Lei
2020-09-14 0:46 ` Damien Le Moal
2020-09-14 15:03 ` Mike Snitzer
2020-09-15 1:09 ` Damien Le Moal
2020-09-15 4:21 ` Damien Le Moal [this message]
2020-09-15 8:01 ` Ming Lei
2020-09-11 21:53 ` [PATCH 2/3] block: use lcm_not_zero() when stacking chunk_sectors Mike Snitzer
2020-09-12 13:58 ` Ming Lei
2020-09-11 21:53 ` [PATCH 3/3] block: allow 'chunk_sectors' to be non-power-of-2 Mike Snitzer
2020-09-12 14:06 ` Ming Lei
2020-09-14 2:43 ` Keith Busch
2020-09-14 0:55 ` Damien Le Moal