From: "François-Xavier Thomas" <fx.thomas@gmail.com>
To: Filipe Manana <fdmanana@kernel.org>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Massive I/O usage from btrfs-cleaner after upgrading to 5.16
Date: Mon, 17 Jan 2022 22:37:58 +0100	[thread overview]
Message-ID: <CAEwRaO5JcuHkuKs_hx9SJQ6jDr79TSorEPVEkt7BPRLfK2Rp-g@mail.gmail.com> (raw)
In-Reply-To: <YeWgdQ2ZvceLTIej@debian9.Home>

Hi Filipe,

Thank you so much for the hints!

I compiled 5.16 with the 1-byte file patch and have been running it
for a couple of hours now. I/O seems to be gradually increasing
compared to 5.15, but I will wait until tomorrow for a clearer view
of the graphs; then I'll try both patches applied together.
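
For reference, here is roughly how I'm sampling the cleaner's write
volume (a sketch using the standard /proc/<pid>/io interface; the
fallback PID is only so the snippet runs on machines where the
btrfs-cleaner kernel thread isn't present):

```shell
# Sketch: report cumulative bytes written by btrfs-cleaner via /proc/<pid>/io.
# pgrep -o picks the oldest matching process; fall back to this shell's own
# PID so the snippet still runs on systems without a btrfs mount.
pid=$(pgrep -o btrfs-cleaner || echo $$)
awk '/^write_bytes/ {print "write_bytes:", $2}' "/proc/$pid/io"
```

Sampling that value periodically and diffing gives a write rate
comparable to the ops/s graphs above.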

François-Xavier

On Mon, Jan 17, 2022 at 5:59 PM Filipe Manana <fdmanana@kernel.org> wrote:
>
> On Mon, Jan 17, 2022 at 12:02:08PM +0000, Filipe Manana wrote:
> > On Mon, Jan 17, 2022 at 11:06:42AM +0100, François-Xavier Thomas wrote:
> > > Hello all,
> > >
> > > Just in case someone is having the same issue: Btrfs (in the
> > > btrfs-cleaner process) is taking a large amount of disk IO after
> > > upgrading to 5.16 on one of my volumes, and multiple other people seem
> > > to be having the same issue, see discussion in [0].
> > >
> > > [1] is a close-up screenshot of disk I/O history (blue line is write
> > > ops, going from a baseline of some 10 ops/s to around 1k ops/s). I
> > > downgraded from 5.16 to 5.15 in the middle, which immediately restored
> > > previous performance.
> > >
> > > Common options between affected people are: ssd, autodefrag. No error
> > > in the logs, and no other issue aside from performance (the volume
> > > works just fine for accessing data).
> > >
> > > One person reports that SMART stats show a massive amount of blocks
> > > being written; unfortunately I do not have historical data for that so
> > > I cannot confirm, but this sounds likely given what I see on what
> > > should be a relatively new SSD.
> > >
> > > Any idea of what it could be related to?
> >
> > There was a big refactor of the defrag code that landed in 5.16.
> >
> > On a quick glance, when using autodefrag it seems we can now end up in an
> > infinite loop by marking the same range for defrag (IO) over and over.
> >
> > Can you try the following patch? (also at https://pastebin.com/raw/QR27Jv6n)
>
> Actually try this one instead:
>
> https://pastebin.com/raw/EbEfk1tF
>
> Also, there's a bug with defrag running into an (almost) infinite loop when
> attempting to defrag a 1 byte file. Someone ran into this and I've just sent
> a fix for it:
>
> https://patchwork.kernel.org/project/linux-btrfs/patch/bcbfce0ff7e21bbfed2484b1457e560edf78020d.1642436805.git.fdmanana@suse.com/
>
> Maybe that is what you are running into when using autodefrag.
> First try that fix for the 1 byte file case, and if after that you still run
> into problems, then try with the other patch above as well (both patches
> applied).
>
> Thanks.
>
>
>
> >
> > diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> > index a5bd6926f7ff..0a9f6125a566 100644
> > --- a/fs/btrfs/ioctl.c
> > +++ b/fs/btrfs/ioctl.c
> > @@ -1213,6 +1213,13 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
> >                 if (em->generation < newer_than)
> >                         goto next;
> >
> > +               /*
> > +                * Skip extents already under IO, otherwise we can end up in an
> > +                * infinite loop when using auto defrag.
> > +                */
> > +               if (em->generation == (u64)-1)
> > +                       goto next;
> > +
> >                 /*
> >                  * For do_compress case, we want to compress all valid file
> >                  * extents, thus no @extent_thresh or mergeable check.
> >
> >
> > >
> > > François-Xavier
> > >
> > > [0] https://www.reddit.com/r/btrfs/comments/s4nrzb/massive_performance_degradation_after_upgrading/
> > > [1] https://imgur.com/oYhYat1


Thread overview: 20+ messages
2022-01-17 10:06 Massive I/O usage from btrfs-cleaner after upgrading to 5.16 François-Xavier Thomas
2022-01-17 12:02 ` Filipe Manana
2022-01-17 16:59   ` Filipe Manana
2022-01-17 21:37     ` François-Xavier Thomas [this message]
2022-01-19  9:44       ` François-Xavier Thomas
2022-01-19 10:13         ` Filipe Manana
2022-01-20 11:37           ` François-Xavier Thomas
2022-01-20 11:44             ` Filipe Manana
2022-01-20 12:02               ` François-Xavier Thomas
2022-01-20 12:45                 ` Qu Wenruo
2022-01-20 12:55                   ` Filipe Manana
2022-01-20 17:46                 ` Filipe Manana
2022-01-20 18:21                   ` François-Xavier Thomas
2022-01-21 10:49                     ` Filipe Manana
2022-01-21 19:39                       ` François-Xavier Thomas
2022-01-21 23:34                         ` Qu Wenruo
2022-01-22 18:20                           ` François-Xavier Thomas
2022-01-24  7:00                             ` Qu Wenruo
2022-01-25 20:00                               ` François-Xavier Thomas
2022-01-25 23:29                                 ` Qu Wenruo
