From: Qu Wenruo <>
To: "François-Xavier Thomas" <>
Cc: Filipe Manana <>,
	linux-btrfs <>,
	Qu Wenruo <>
Subject: Re: Massive I/O usage from btrfs-cleaner after upgrading to 5.16
Date: Mon, 24 Jan 2022 15:00:24 +0800	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On 2022/1/23 02:20, François-Xavier Thomas wrote:
> The 7th patch doesn't seem to be having a noticeable improvement so far.

Would you mind testing the latest two patches, which still need the first 6 patches:

The last one would greatly reduce IO, almost disabling autodefrag, as it
will only defrag a fully 256K-aligned range containing no hole/preallocated
extent.
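
To make that concrete, the qualification rule could be modeled like this
(a Python sketch of the policy as described above, not the kernel code;
the names and extent layout are made up for illustration):

```python
CLUSTER = 256 * 1024  # 256K defrag cluster size

def should_defrag_cluster(extents, cluster_start):
    """Model of the strictest policy: only defrag a cluster that is
    fully covered by regular extents (no hole/preallocated range).
    Each extent is a (start, length, kind) tuple with byte offsets."""
    covered = 0
    cluster_end = cluster_start + CLUSTER
    for start, length, kind in extents:
        end = start + length
        if end <= cluster_start or start >= cluster_end:
            continue  # extent lies outside this cluster
        if kind != "regular":
            return False  # any hole/prealloc disqualifies the cluster
        covered += min(end, cluster_end) - max(start, cluster_start)
    return covered == CLUSTER

# A cluster fully covered by small regular extents qualifies...
full = [(i * 4096, 4096, "regular") for i in range(64)]
# ...but a single hole anywhere in it rejects the whole cluster.
holey = full[:32] + [(32 * 4096, 4096, "hole")] + full[33:]
print(should_defrag_cluster(full, 0))   # True
print(should_defrag_cluster(holey, 0))  # False
```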

>> So even with more fixes, we may just end up with more IO for autodefrag,
>> purely because old code is not defragging as hard.
> That's unfortunate, but thanks for having looked into it, at least
> there's a known reason for the IO increase.

And as just mentioned in the long commit message of the last RFC patch,
the defrag behavior in fact changed first in v5.11, which reduced the IO
(if the up-to-256K cluster has any hole in it, the whole cluster will be
rejected).

Meanwhile the even older (v5.10-) behavior would try to defrag holes,
which is even less acceptable.
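
Roughly, the two old behaviors can be contrasted like this (a hypothetical
Python model; the real logic lives in the kernel's defrag code and is far
more involved):

```python
# (start, length, kind) extent tuples; the layout is made up for illustration.
cluster = [(0, 4096, "regular"), (4096, 4096, "hole"), (8192, 4096, "regular")]

def targets_v5_10(extents):
    """v5.10-: defrag the whole cluster, holes included (wasted IO)."""
    return list(extents)

def targets_v5_11(extents):
    """v5.11~v5.15: any hole in the up-to-256K cluster rejects the
    whole cluster, skipping even the valid regular extents."""
    if any(kind == "hole" for _start, _length, kind in extents):
        return []
    return list(extents)

print(len(targets_v5_10(cluster)))  # 3 ranges touched, one of them a hole
print(len(targets_v5_11(cluster)))  # 0: the cluster is rejected outright
```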

My guess is that, sorted by the IO caused by autodefrag, the whole
picture would look like this:

v5.10 > v5.16 vanilla > v5.16 + 7 patches > v5.11~v5.15 > v5.16 + 8 patches

v5.10 should be the worst: it generates the most IO, but wastes a lot of
it on holes/preallocated ranges.

v5.11~v5.15 reduced IO by rejecting a lot of valid cases, and still has
a small bug related to preallocated extents.
But overall, the rejected defrags cause less IO.

v5.16 vanilla is slightly better than v5.10: it skips holes properly,
but, just like v5.10, doesn't handle preallocated ranges, and it defrags
harder, adding extra IO.
v5.16 + 7 patches should be the most balanced one (leaning a little
towards defrag though).
It can skip all hole/preallocated ranges properly, while still trying
its best to defrag small extents.
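
That middle-ground policy could be sketched like this (again a Python
model, not the kernel code; the small-extent threshold is an arbitrary
value chosen for illustration):

```python
def defrag_targets(extents, small=128 * 1024):
    """Model of the "v5.16 + 7 patches" policy: skip hole/preallocated
    ranges, but still pick every small regular extent as a target."""
    return [(start, length, kind) for start, length, kind in extents
            if kind == "regular" and length < small]

# (start, length, kind) extent tuples; layout made up for illustration.
cluster = [(0, 4096, "regular"), (4096, 4096, "hole"),
           (8192, 4096, "prealloc"), (12288, 4096, "regular")]
print(len(defrag_targets(cluster)))  # 2: only the small regular extents
```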

v5.16 + 8 patches has the worst efficiency for defrag, thus the least
amount of IO.

From the beginning, the defrag code has not been that well documented,
thus causing such "hidden" behavior changes.

I hope that, with the pain felt in v5.16, we can catch up on test
coverage and end up with better-defined/documented defrag behavior.


> François-Xavier


Thread overview: 20+ messages
2022-01-17 10:06 François-Xavier Thomas
2022-01-17 12:02 ` Filipe Manana
2022-01-17 16:59   ` Filipe Manana
2022-01-17 21:37     ` François-Xavier Thomas
2022-01-19  9:44       ` François-Xavier Thomas
2022-01-19 10:13         ` Filipe Manana
2022-01-20 11:37           ` François-Xavier Thomas
2022-01-20 11:44             ` Filipe Manana
2022-01-20 12:02               ` François-Xavier Thomas
2022-01-20 12:45                 ` Qu Wenruo
2022-01-20 12:55                   ` Filipe Manana
2022-01-20 17:46                 ` Filipe Manana
2022-01-20 18:21                   ` François-Xavier Thomas
2022-01-21 10:49                     ` Filipe Manana
2022-01-21 19:39                       ` François-Xavier Thomas
2022-01-21 23:34                         ` Qu Wenruo
2022-01-22 18:20                           ` François-Xavier Thomas
2022-01-24  7:00                             ` Qu Wenruo [this message]
2022-01-25 20:00                               ` François-Xavier Thomas
2022-01-25 23:29                                 ` Qu Wenruo
