From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: Eric Wheeler <btrfs@lists.ewheeler.net>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Global reserve ran out of space at 512MB, fails to rebalance
Date: Wed, 9 Dec 2020 22:12:51 -0500	[thread overview]
Message-ID: <20201210031251.GJ31381@hungrycats.org> (raw)
In-Reply-To: <alpine.LRH.2.21.2012100149160.15698@pop.dreamhost.com>

On Thu, Dec 10, 2020 at 01:52:19AM +0000, Eric Wheeler wrote:
> Hello all,
> 
> We have a 30TB volume with lots of snapshots that is low on space and we 
> are trying to rebalance.  Even if we don't rebalance, the space cleaner 
> still fills up the Global reserve:
> 
>     Device size:                  30.00TiB
>     Device allocated:             30.00TiB
>     Device unallocated:            1.00GiB
>     Device missing:                  0.00B
>     Used:                         29.27TiB
>     Free (estimated):            705.21GiB	(min: 704.71GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   2.00
> >>> Global reserve:              512.00MiB	(used: 512.00MiB) <<<<<<<

It would be nice to have the rest of the btrfs fi usage output; without
it we have to guess how your drives are populated with data and metadata,
and which profiles are in use.

You probably need to be running small data balances (btrfs balance start
-dlimit=9, about once a day) to ensure there is always at least 1GiB of
unallocated space on every drive.
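Something like this in a daily cron job would do it -- a rough sketch only,
with the mount point made up and a dry-run default so it just prints the
command until you flip the switch:

```shell
#!/bin/sh
# Sketch of a daily data-balance job (the mount point is an assumption).
# -dlimit=9 relocates at most 9 data block groups per run -- enough to
# keep some unallocated space on each drive without the heavy I/O of a
# full balance.  Metadata is deliberately left alone.

daily_balance() {
    mnt="$1"
    if [ "${DRY_RUN:-1}" = "1" ]; then
        # Dry run (the default): only print what would be executed.
        echo "btrfs balance start -dlimit=9 $mnt"
    else
        btrfs balance start -dlimit=9 "$mnt"
    fi
}

daily_balance /mnt/data
```

Set DRY_RUN=0 once you've checked the mount point, and drop the script
into /etc/cron.daily or a systemd timer.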

Never balance metadata, especially not from a scheduled job.  Metadata
balances lead directly to this situation.

> This was on a Linux 5.6 kernel.  I'm trying a Linux 5.9.13 kernel with a 
> hacked in SZ_4G in place of the SZ_512MB and will report back when I learn 
> more.
> 
> In the meantime, do you have any suggestions to work through the issue?

I've had similar problems with snapshot deletes hitting ENOSPC when free
metadata space is low.  In this case, the upgrade from 5.6 to 5.9 will
include a fix for that (the fix is in 5.8, and also in 5.4 and earlier
LTS kernels).

Increasing the global reserve may seem to help, but so will just rebooting
over and over, so a positive result from an experimental kernel does not
necessarily mean anything.  Pending snapshot deletes will be making small
amounts of progress just before hitting ENOSPC, so it will eventually
succeed if you repeat the mount enough times even with an old stock
kernel.
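If you want to script that retry on a stock kernel, it could look roughly
like this -- untested sketch, device name, mount point, and timings are
all made up; the useful observation is that a filesystem forced read-only
by the failing cleaner shows up as "ro" in /proc/mounts:

```shell
#!/bin/sh
# Sketch: remount until pending snapshot deletes finish (names assumed).
# Each mount lets the cleaner make a little more progress before ENOSPC
# forces the filesystem read-only; we detect that flip and retry.

is_readonly() {
    # True if mount point $1 is read-only in the mounts table $2
    # (defaults to /proc/mounts; the rw/ro flag is the first option).
    awk -v m="$1" '$2 == m { split($4, o, ","); if (o[1] == "ro") found = 1 }
                   END { exit !found }' "${2:-/proc/mounts}"
}

retry_cleaner() {
    dev="$1"; mnt="$2"
    for i in $(seq 1 50); do
        mount "$dev" "$mnt" || return 1
        sleep 600                  # give the cleaner some time to run
        if is_readonly "$mnt"; then
            umount "$mnt"          # forced read-only: unmount and retry
        else
            return 0               # cleaner survived; deletes completed
        fi
    done
    return 1
}
```

Invoke as e.g. "retry_cleaner /dev/sdb1 /mnt/data"; adjust the sleep and
the retry count to how long each mount survives on your filesystem.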

> Thank you for your help!
> 
> 
> --
> Eric Wheeler

Thread overview: 8+ messages
2020-12-10  1:52 Global reserve ran out of space at 512MB, fails to rebalance Eric Wheeler
2020-12-10  2:38 ` Qu Wenruo
2020-12-10  3:12 ` Zygo Blaxell [this message]
2020-12-10 19:02   ` Eric Wheeler
2020-12-10 19:50   ` Eric Wheeler
2020-12-11  3:49     ` Zygo Blaxell
2020-12-11 19:08       ` Eric Wheeler
2020-12-11 21:05         ` Zygo Blaxell
