From: David Sterba <dsterba@suse.cz>
To: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: David Sterba <dsterba@suse.cz>,
	Naohiro Aota <Naohiro.Aota@wdc.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Pankaj Raghav <p.raghav@samsung.com>,
	"linux-btrfs @ vger . kernel . org" <linux-btrfs@vger.kernel.org>
Subject: Re: [PATCH v2 0/4] btrfs: rework background block group relocation
Date: Mon, 4 Apr 2022 17:50:45 +0200	[thread overview]
Message-ID: <20220404155045.GQ15609@twin.jikos.cz> (raw)
In-Reply-To: <cover.1648543951.git.johannes.thumshirn@wdc.com>

On Tue, Mar 29, 2022 at 01:56:05AM -0700, Johannes Thumshirn wrote:
> This is a combination of Josef's series titled "btrfs: rework background
> block group relocation" and my patch titled "btrfs: zoned: make auto-reclaim
> less aggressive" plus another preparation patch to address Josef's comments.
> 
> I've opted to rebase my patch onto Josef's series to avoid and fix
> conflicts, as we're both touching the same code.
> 
> Here's the original cover letter from Josef:
> 
> Currently the background block group relocation code only works for zoned
> devices, as it prevents the file system from becoming unusable because of block
> group fragmentation.
> 
> However, inside Facebook our common workload is to download tens of gigabytes
> worth of send files or package files, which is done by fallocate()'ing the
> entire package, writing into it, and then freeing it up afterwards.
> Unfortunately this leads to a problem similar to zoned: we get fragmented data
> block groups, and this trends towards filling the entire disk up with partly
> used data block groups, which then leads to ENOSPC because of the lack of
> metadata space.
> 
> Because of this we have been running balance internally forever, but this was
> triggered based on different size usage heuristics and still gave us a high
> enough failure rate that it was annoying (figure 10-20 machines needing to be
> reprovisioned per week).
> 
> So I modified the existing bg_reclaim_threshold code to also apply in the !zoned
> case, and I also made it only apply to DATA block groups.  This has completely
> eliminated these random failure cases, and we're no longer reprovisioning
> machines that get stuck with 0 metadata space.
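
A minimal sketch of the check described above, using hypothetical names and a
heavily simplified structure (this is not the actual btrfs code, only an
illustration of the behaviour): a block group becomes a reclaim candidate once
its used space drops below the configured percentage of its size, and on a
regular (non-zoned) filesystem only DATA block groups are considered.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for the kernel structure. */
#define BLOCK_GROUP_DATA (1ULL << 0)

struct block_group {
	uint64_t flags;   /* DATA / METADATA / SYSTEM */
	uint64_t length;  /* total size of the block group */
	uint64_t used;    /* bytes currently allocated in it */
};

/*
 * Reclaim candidate: used space fell below reclaim_thresh percent of
 * the block group size.  A threshold of 0 disables auto-reclaim; on a
 * non-zoned filesystem only DATA block groups qualify.
 */
static bool should_reclaim_bg(const struct block_group *bg, bool zoned,
			      unsigned int reclaim_thresh)
{
	if (reclaim_thresh == 0)
		return false;
	if (!zoned && !(bg->flags & BLOCK_GROUP_DATA))
		return false;

	return bg->used < bg->length / 100 * reclaim_thresh;
}
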
> 
> However my internal patch is kind of janky as it hard codes the DATA check.
> What I've done here is made the bg_reclaim_threshold per-space_info, this way
> a user can target all block group types or just the ones they care about.  This
> won't break any current users because this only applied in the zoned case
> before.
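
The per-space_info variant could then look roughly like the sketch below
(again hypothetical names, not the actual patch): the threshold moves into the
space_info a block group belongs to, so DATA, METADATA and SYSTEM can be tuned
independently, presumably through a per-space_info sysfs knob along the lines
of /sys/fs/btrfs/<UUID>/allocation/data/bg_reclaim_threshold.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct space_info {
	unsigned int bg_reclaim_threshold;	/* 0..100, 0 = disabled */
};

struct block_group {
	struct space_info *space_info;
	uint64_t length;
	uint64_t used;
};

/* The hard-coded DATA check goes away: each space_info decides for itself. */
static bool should_reclaim_bg(const struct block_group *bg)
{
	unsigned int thresh = bg->space_info->bg_reclaim_threshold;

	return thresh && bg->used < bg->length / 100 * thresh;
}
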
> 
> Additionally I've added the code to allow this to work in the !zoned case, and
> loosened the restriction on the threshold from 50-100 to 0-100.
> 
> I tested this on my vm by writing 500m files and then removing half of them and
> validating that the block groups were automatically reclaimed.
> 
> https://lore.kernel.org/linux-btrfs/cover.1646934721.git.josef@toxicpanda.com/
> 
> Changes since v1:
> * Fix zoned threshold calculation (Pankaj)
> * Drop unneeded patch
> 
> Johannes Thumshirn (1):
>   btrfs: zoned: make auto-reclaim less aggressive
> 
> Josef Bacik (3):
>   btrfs: make the bg_reclaim_threshold per-space info
>   btrfs: allow block group background reclaim for !zoned fs'es
>   btrfs: change the bg_reclaim_threshold valid region from 0 to 100

Added to misc-next, thanks.


Thread overview: 8+ messages
2022-03-29  8:56 [PATCH v2 0/4] btrfs: rework background block group relocation Johannes Thumshirn
2022-03-29  8:56 ` [PATCH v2 1/4] btrfs: make the bg_reclaim_threshold per-space info Johannes Thumshirn
2022-03-29  8:56 ` [PATCH v2 2/4] btrfs: allow block group background reclaim for !zoned fs'es Johannes Thumshirn
2022-03-29  8:56 ` [PATCH v2 3/4] btrfs: change the bg_reclaim_threshold valid region from 0 to 100 Johannes Thumshirn
2022-03-29  8:56 ` [PATCH v2 4/4] btrfs: zoned: make auto-reclaim less aggressive Johannes Thumshirn
2022-03-30 15:22   ` Pankaj Raghav
2022-04-04 15:48   ` David Sterba
2022-04-04 15:50 ` David Sterba [this message]
