From: Chris Murphy <lists@colorremedies.com>
To: Daniel Martinez <danielsmartinez@gmail.com>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>,
	Qu Wenruo <quwenruo.btrfs@gmx.com>
Subject: Re: btrfs reported used space doesn't correspond with space occupied by the files themselves
Date: Mon, 9 Sep 2019 22:41:08 -0600
Message-ID: <CAJCQCtQ_QuE2dRLwrMKHQ6nFdNGeZghFizHdug5pbQWZqKewyw@mail.gmail.com>
In-Reply-To: <CAMmfObZuWx0HR48VNnN2M1jguBsfUmyXTQ-KN5J9iCySxRapHw@mail.gmail.com>

On Mon, Sep 9, 2019 at 10:16 PM Daniel Martinez
<danielsmartinez@gmail.com> wrote:
>
> Hello,
>
> I've recently converted my root 32GB ext4 partition to btrfs (using
> btrfs-progs 5.2). After that was done, I made a snapshot and tried to
> update the system. Unfortunately I didn't have enough free space to
> fit the whole update on that small partition, so it failed. I then
> realized my mistake and deleted not only that newly made snapshot, but
> also ext2_saved and some random files on the filesystem, totaling
> about 5GB. To my surprise, the update still failed due to ENOSPC.
>
> At this point, I tried running a balance, but it also failed with
> ENOSPC. I tried the balance -dusage X with X increasing from zero, but
> to my surprise again, it also failed.
>
> Data, single: total=28.54GiB, used=28.34GiB
> System, single: total=32.00MiB, used=16.00KiB
> Metadata, single: total=1.00GiB, used=807.45MiB
> GlobalReserve, single: total=41.44MiB, used=0.00B
>
> Looking at btrfs filesystem df, it looks like those 5GB of data I
> deleted are still occupying space. In fact, ncdu claims all the files
> on that drive sum up to only 19GB.
>
> I tried adding a second 2GB drive but that still wasn't enough to run
> a full data balance (metadata runs fine).
>
> This is what filesystem usage looks like:
>
> Overall:
>     Device size:                  31.59GiB
>     Device allocated:             29.57GiB
>     Device unallocated:            2.03GiB
>     Device missing:                  0.00B
>     Used:                         29.13GiB
>     Free (estimated):              2.22GiB      (min: 2.22GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   1.00
>     Global reserve:               41.44MiB      (used: 0.00B)
>
> Data,single: Size:28.54GiB, Used:28.34GiB
>    /dev/sda7     768.00MiB
>    /dev/sdb1      27.79GiB
>
> Metadata,single: Size:1.00GiB, Used:807.45MiB
>    /dev/sdb1       1.00GiB
>
> System,single: Size:32.00MiB, Used:16.00KiB
>    /dev/sdb1      32.00MiB
>
> Unallocated:
>    /dev/sda7       1.03GiB
>    /dev/sdb1       1.00GiB
>
>
> I then made a read-only snapshot of the root filesystem and used btrfs
> send/receive to transfer it to another btrfs filesystem, and when it
> got there its also only occupying 19GB.
>
> So it seems about 10GB got lost somewhere in the process and I can't
> find a way to get it back (other than mkfs'ing and restoring a
> backup). That's about 30% of the available disk space.
>
> What may be causing this?


Since the 4.6 convert rewrite, I'm not sure offhand whether a
defragment is still suggested after the conversion. Qu can answer that.

There is an edge case where extents can get pinned when modified after
a snapshot, and not released even after the snapshot is deleted. But
what you're describing would be a really extreme version of this, and
isn't one I've come across before. It could be an unintended artifact
of conversion from ext4. Hard to say.

I suggest capturing 'btrfs-image -c9 -t4 -ss /dev/ /path/to/file' and
keeping the result handy in case a developer asks for it. Metadata is
only ~800MiB so it should compress down to less than 400MiB. Also
report back what kernel version is being used.
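
A rough sketch of what that capture might look like (the device node
and output filename here are placeholders, substitute your own):

  # kernel version, for the report
  uname -r

  # metadata-only image: -c9 = max compression, -t4 = 4 threads,
  # -ss = sanitize file names; no file data is copied
  btrfs-image -c9 -t4 -ss /dev/sdXN /path/to/fs-metadata.img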

In the meantime, I suggest deleting all snapshots to give Btrfs a
chance to clean up unused extents. After that you could try to force a
cleanup of unused extents with a recursive defragment. The file system
is so full right now that this will likely also fail with ENOSPC: COW
requires a completely successful write to a new location before the
old extents can be freed, so whether you delete or defragment, space
is consumed before it can later be freed up. But you might have some
luck selectively defragmenting directories that you know do not
contain big files. Start out with /etc and /usr - maybe you have VM
images in /var? If not, /var can be next. Maybe big files in /home?
Do that last, or do it in a way that leaves the big files until the
end.
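
A rough sketch of that sequence, assuming the filesystem is mounted at
/ (the snapshot path below is a placeholder; adjust to your layout):

  # list remaining snapshots, then delete any you no longer need
  btrfs subvolume list -s /
  btrfs subvolume delete /path/to/old-snapshot

  # defragment small directories first, big-file locations last
  btrfs filesystem defragment -r -v /etc
  btrfs filesystem defragment -r -v /usr
  btrfs filesystem defragment -r -v /var
  btrfs filesystem defragment -r -v /home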


-- 
Chris Murphy
