From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: "François-Xavier Thomas" <fx.thomas@gmail.com>,
	"Filipe Manana" <fdmanana@kernel.org>
Cc: linux-btrfs <linux-btrfs@vger.kernel.org>, Qu Wenruo <wqu@suse.com>
Subject: Re: Massive I/O usage from btrfs-cleaner after upgrading to 5.16
Date: Sat, 22 Jan 2022 07:34:00 +0800
Message-ID: <7802ff58-d08b-76d4-fcc7-c5d15d798b3b@gmx.com>
In-Reply-To: <CAEwRaO7cA3bbYMSCoYQ2gqaeJBSes5EBok5Oon-YOm7EQ8JOhw@mail.gmail.com>



On 2022/1/22 03:39, François-Xavier Thomas wrote:
> Thanks, will add that to the list and test. FYI the 6 patches didn't
> seem to have much additional effect today compared to my previous
> stack of 4.

Good and bad news.

Good news is, I found a way to reproduce (at least part of) the problem.

With fsstress, a way to trigger autodefrag at will, and IO accounting
for data/metadata read/write, it's clear that the newer kernel is indeed
causing more IO.

With v5.15 (or with the defrag code changes reverted), autodefrag
accounts for around 8.7% of total data IO.

With v5.16, even with the 6 patches applied, autodefrag accounts for
about 18% of total data IO.
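
For reference, the measurement boils down to something like the sketch
below (an illustration only, not the exact harness behind the numbers
above; the device, mount point and fsstress parameters are assumptions,
and plain /proc/diskstats only shows whole-device IO, so splitting data
vs metadata IO still needs tracing inside btrfs):

#!/usr/bin/env python3
# Rough reproducer sketch: run fsstress on an autodefrag mount and
# compare whole-device write counters before and after.
import os
import subprocess
import time

DEV = "/dev/vdb"        # assumed scratch device
MNT = "/mnt/scratch"    # assumed mount point
NAME = "vdb"            # device name as it appears in /proc/diskstats

def sectors_written(name):
    # Field 10 of /proc/diskstats is the number of sectors written.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == name:
                return int(fields[9])
    raise RuntimeError(f"{name} not found in /proc/diskstats")

subprocess.run(["mkfs.btrfs", "-f", DEV], check=True)
subprocess.run(["mount", "-o", "autodefrag", DEV, MNT], check=True)
os.makedirs(f"{MNT}/work", exist_ok=True)

before = sectors_written(NAME)
# fsstress is from fstests: -d working dir, -n ops per process, -p processes.
subprocess.run(["fsstress", "-d", f"{MNT}/work", "-n", "20000", "-p", "4"],
               check=True)
subprocess.run(["sync"], check=True)
time.sleep(30)          # let the cleaner/autodefrag do its writes
after = sectors_written(NAME)

print(f"sectors written (workload + autodefrag): {after - before}")
subprocess.run(["umount", MNT], check=True)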


Now the bad news.

I have seen cases where v5.15 doesn't defrag ranges that are completely
sane to defrag.

Something like this:

         item 59 key (287 EXTENT_DATA 118784) itemoff 6211 itemsize 53
                 generation 85 type 1 (regular)
                 extent data disk byte 339296256 nr 8192
                 extent data offset 0 nr 8192 ram 8192
                 extent compression 0 (none)
         item 60 key (287 EXTENT_DATA 126976) itemoff 6158 itemsize 53
                 generation 85 type 1 (regular)
                 extent data disk byte 300445696 nr 4096
                 extent data offset 0 nr 4096 ram 4096
                 extent compression 0 (none)
         item 61 key (287 EXTENT_DATA 131072) itemoff 6105 itemsize 53
                 generation 85 type 1 (regular)
                 extent data disk byte 339304448 nr 4096
                 extent data offset 0 nr 4096 ram 4096
                 extent compression 0 (none)
         item 62 key (287 EXTENT_DATA 135168) itemoff 6052 itemsize 53
                 generation 85 type 1 (regular)
                 extent data disk byte 301170688 nr 4096
                 extent data offset 0 nr 4096 ram 4096
                 extent compression 0 (none)
         item 63 key (287 EXTENT_DATA 139264) itemoff 5999 itemsize 53
                 generation 85 type 1 (regular)
                 extent data disk byte 339308544 nr 106496
                 extent data offset 0 nr 106496 ram 106496
                 extent compression 0 (none)

This 124K range is definitely sane to defrag (the newer_than parameter
is only 35, so all of these generation-85 extents are good candidates).
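
To spell out the arithmetic (a small illustrative snippet using the
file offsets and lengths from the dump above):

# The five extent items above, as (file offset, length) pairs.
extents = [
    (118784, 8192),
    (126976, 4096),
    (131072, 4096),
    (135168, 4096),
    (139264, 106496),
]

# Each extent starts exactly where the previous one ends, so the items
# describe one contiguous 124K file range.
for (off, length), (next_off, _) in zip(extents, extents[1:]):
    assert off + length == next_off

total = sum(length for _, length in extents)
print(total, total // 1024)     # 126976 bytes, i.e. 124K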

But for some reason (still under investigation) the older kernel
doesn't defrag it at all, while the newer kernel is happy to defrag it.

There are also cases where the newer kernel does defrags that are too
small to make sense, but even with such cases fixed, autodefrag still
accounts for about 15% of total IO.

I'm afraid there may be some bugs or questionable behaviors in the old
defrag code that prevent it from defragging all good candidates.
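
Conceptually, the per-extent check amounts to something like the sketch
below (an illustration only, not the kernel code; the 256K target
extent size is assumed to be the usual defrag default), and every
extent in the dump above passes it:

# Simplified per-extent candidate check -- an illustration, not the
# kernel code.  The real code has many more conditions (holes,
# preallocated extents, mergeability with neighbours, ...).
EXTENT_THRESH = 256 * 1024      # assumed default target extent size

def looks_like_candidate(generation, length, newer_than):
    return generation >= newer_than and length < EXTENT_THRESH

# For the dump above: generation 85, newer_than 35, lengths 4K..104K.
extents = [8192, 4096, 4096, 4096, 106496]
print(all(looks_like_candidate(85, length, 35) for length in extents))
# -> True: every extent passes this naive check, yet v5.15 skips the range.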

So even with more fixes, we may still end up with more IO for autodefrag,
purely because the old code was not defragging as hard.

Thanks,
Qu
>
> On Fri, Jan 21, 2022 at 11:49 AM Filipe Manana <fdmanana@kernel.org> wrote:
>>
>> On Thu, Jan 20, 2022 at 6:21 PM François-Xavier Thomas
>> <fx.thomas@gmail.com> wrote:
>>>
>>>> Ok, so new patches to try
>>>
>>> Nice, thanks, I'll let you know how that goes tomorrow!
>>
>> You can also get one more on top of those 6:
>>
>> https://pastebin.com/raw/p87HX6AF
>>
>> Thanks.


Thread overview: 20+ messages
2022-01-17 10:06 Massive I/O usage from btrfs-cleaner after upgrading to 5.16 François-Xavier Thomas
2022-01-17 12:02 ` Filipe Manana
2022-01-17 16:59   ` Filipe Manana
2022-01-17 21:37     ` François-Xavier Thomas
2022-01-19  9:44       ` François-Xavier Thomas
2022-01-19 10:13         ` Filipe Manana
2022-01-20 11:37           ` François-Xavier Thomas
2022-01-20 11:44             ` Filipe Manana
2022-01-20 12:02               ` François-Xavier Thomas
2022-01-20 12:45                 ` Qu Wenruo
2022-01-20 12:55                   ` Filipe Manana
2022-01-20 17:46                 ` Filipe Manana
2022-01-20 18:21                   ` François-Xavier Thomas
2022-01-21 10:49                     ` Filipe Manana
2022-01-21 19:39                       ` François-Xavier Thomas
2022-01-21 23:34                         ` Qu Wenruo [this message]
2022-01-22 18:20                           ` François-Xavier Thomas
2022-01-24  7:00                             ` Qu Wenruo
2022-01-25 20:00                               ` François-Xavier Thomas
2022-01-25 23:29                                 ` Qu Wenruo
