Linux-BTRFS Archive on lore.kernel.org
* btrfs reported used space doesn't correspond with space occupied by the files themselves
@ 2019-09-10  4:15 Daniel Martinez
  2019-09-10  4:41 ` Chris Murphy
  0 siblings, 1 reply; 3+ messages in thread
From: Daniel Martinez @ 2019-09-10  4:15 UTC (permalink / raw)
  To: linux-btrfs

Hello,

I've recently converted my 32GB ext4 root partition to btrfs (using
btrfs-progs 5.2). After that was done, I made a snapshot and tried to
update the system. Unfortunately I didn't have enough free space to
fit the whole update on that small partition, so it failed. I then
realized my mistake and deleted not only that newly made snapshot, but
also ext2_saved and some random files on the filesystem, totaling
about 5GB. To my surprise, the update still failed due to ENOSPC.

At this point, I tried running a balance, but it also failed with
ENOSPC. I tried balance -dusage=X with X increasing from zero, but,
to my surprise again, it also failed.
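For concreteness, the escalating attempts looked roughly like this (the mount point and exact percentages here are illustrative, not a record of the actual commands):

```shell
# Retry balance with an increasing usage filter; each pass only touches
# data block groups that are at most pct% full. Every attempt hit ENOSPC.
for pct in 0 5 10 25 50; do
    btrfs balance start -dusage=$pct / || break
done
```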

Data, single: total=28.54GiB, used=28.34GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.00GiB, used=807.45MiB
GlobalReserve, single: total=41.44MiB, used=0.00B

Looking at btrfs filesystem df, it looks like those 5GB of data I
deleted are still occupying space. In fact, ncdu claims all the files
on that drive sum up to only 19GB.
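For completeness, the two numbers come from comparing btrfs's own chunk-level accounting against a file-level sum (mount point assumed to be /):

```shell
# Chunk-level accounting, as reported by the filesystem itself (~28.3GiB used):
btrfs filesystem df /

# File-level sum of the actual file sizes (~19GiB; ncdu agrees with this):
du -sh --one-file-system /
```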

I tried adding a second 2GB drive but that still wasn't enough to run
a full data balance (a metadata balance runs fine).

This is what filesystem usage looks like:

Overall:
    Device size:                  31.59GiB
    Device allocated:             29.57GiB
    Device unallocated:            2.03GiB
    Device missing:                  0.00B
    Used:                         29.13GiB
    Free (estimated):              2.22GiB      (min: 2.22GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:               41.44MiB      (used: 0.00B)

Data,single: Size:28.54GiB, Used:28.34GiB
   /dev/sda7     768.00MiB
   /dev/sdb1      27.79GiB

Metadata,single: Size:1.00GiB, Used:807.45MiB
   /dev/sdb1       1.00GiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/sdb1      32.00MiB

Unallocated:
   /dev/sda7       1.03GiB
   /dev/sdb1       1.00GiB


I then made a read-only snapshot of the root filesystem and used btrfs
send/receive to transfer it to another btrfs filesystem, and when it
got there it was also occupying only 19GB.

So it seems about 10GB got lost somewhere in the process, and I can't
find a way to get it back (other than mkfs'ing and restoring a backup),
which in this case is about 30% of the available disk space.

What may be causing this?

Thanks,
Daniel.


* Re: btrfs reported used space doesn't correspond with space occupied by the files themselves
  2019-09-10  4:15 btrfs reported used space doesn't correspond with space occupied by the files themselves Daniel Martinez
@ 2019-09-10  4:41 ` Chris Murphy
  2019-09-10  7:06   ` Qu Wenruo
  0 siblings, 1 reply; 3+ messages in thread
From: Chris Murphy @ 2019-09-10  4:41 UTC (permalink / raw)
  To: Daniel Martinez; +Cc: Btrfs BTRFS, Qu Wenruo

On Mon, Sep 9, 2019 at 10:16 PM Daniel Martinez
<danielsmartinez@gmail.com> wrote:
>
> Hello,
>
> I've recently converted my 32GB ext4 root partition to btrfs (using
> btrfs-progs 5.2). After that was done, I made a snapshot and tried to
> update the system. Unfortunately I didn't have enough free space to
> fit the whole update on that small partition, so it failed. I then
> realized my mistake and deleted not only that newly made snapshot, but
> also ext2_saved and some random files on the filesystem, totaling
> about 5GB. To my surprise, the update still failed due to ENOSPC.
>
> At this point, I tried running a balance, but it also failed with
> ENOSPC. I tried balance -dusage=X with X increasing from zero, but,
> to my surprise again, it also failed.
>
> Data, single: total=28.54GiB, used=28.34GiB
> System, single: total=32.00MiB, used=16.00KiB
> Metadata, single: total=1.00GiB, used=807.45MiB
> GlobalReserve, single: total=41.44MiB, used=0.00B
>
> Looking at btrfs filesystem df, it looks like those 5GB of data I
> deleted are still occupying space. In fact, ncdu claims all the files
> on that drive sum up to only 19GB.
>
> I tried adding a second 2GB drive but that still wasn't enough to run
> a full data balance (a metadata balance runs fine).
>
> This is what filesystem usage looks like:
>
> Overall:
>     Device size:                  31.59GiB
>     Device allocated:             29.57GiB
>     Device unallocated:            2.03GiB
>     Device missing:                  0.00B
>     Used:                         29.13GiB
>     Free (estimated):              2.22GiB      (min: 2.22GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   1.00
>     Global reserve:               41.44MiB      (used: 0.00B)
>
> Data,single: Size:28.54GiB, Used:28.34GiB
>    /dev/sda7     768.00MiB
>    /dev/sdb1      27.79GiB
>
> Metadata,single: Size:1.00GiB, Used:807.45MiB
>    /dev/sdb1       1.00GiB
>
> System,single: Size:32.00MiB, Used:16.00KiB
>    /dev/sdb1      32.00MiB
>
> Unallocated:
>    /dev/sda7       1.03GiB
>    /dev/sdb1       1.00GiB
>
>
> I then made a read-only snapshot of the root filesystem and used btrfs
> send/receive to transfer it to another btrfs filesystem, and when it
> got there it was also occupying only 19GB.
>
> So it seems about 10GB got lost somewhere in the process, and I can't
> find a way to get it back (other than mkfs'ing and restoring a backup),
> which in this case is about 30% of the available disk space.
>
> What may be causing this?


Since the 4.6 convert rewrite, I'm not sure offhand whether a defragment
is still suggested after the conversion. Qu can answer that.

There is an edge case where extents can get pinned when modified after
a snapshot, and not released even after the snapshot is deleted. But
what you're describing would be a really extreme version of this, and
isn't one I've come across before. It could be an unintended artifact
of conversion from ext4. Hard to say.

I suggest 'btrfs-image -c9 -t4 -ss /dev/ /path/to/file' and keep it
handy in case a developer asks for it. Metadata is only 800MiB so it
should compress down to less than 400 MiB. Also report back what
kernel version is being used.
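Spelled out with concrete paths (the device and output file below are examples, not your actual layout):

```shell
# Capture a compressed, sanitized metadata-only image for developers:
#   -c9  highest compression level
#   -t4  use 4 threads
#   -ss  sanitize file names in the dump
btrfs-image -c9 -t4 -ss /dev/sdb1 /root/btrfs-metadata.img

# Include the running kernel version in the report:
uname -r
```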

In the meantime, I suggest deleting all snapshots to give Btrfs a
chance to clean up unused extents. Then you could try to force a
cleanup of unused extents with a recursive defragment. The system is so
full right now that this will likely also fail with ENOSPC: COW
requires a completely successful write to a new location before old
extents can be freed, so whether you delete or defragment, space is
consumed before it can later be freed up. But you might have some luck
selectively defragmenting directories that you know do not contain big
files. Start with /etc and /usr; maybe you have VM images in /var? If
not, /var can be next. Maybe big files in /home? Do that last, or in a
way that leaves the big files until last.
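As a sketch, that order might look like the following (the snapshot path is hypothetical; adjust paths to your layout):

```shell
# Delete any remaining snapshots first so their extents can be reclaimed:
btrfs subvolume list /
btrfs subvolume delete /path/to/snapshot   # hypothetical snapshot path

# Then defragment directory trees, small files first:
btrfs filesystem defragment -r /etc
btrfs filesystem defragment -r /usr
btrfs filesystem defragment -r /var    # postpone if it holds VM images
btrfs filesystem defragment -r /home   # big files last
```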


-- 
Chris Murphy


* Re: btrfs reported used space doesn't correspond with space occupied by the files themselves
  2019-09-10  4:41 ` Chris Murphy
@ 2019-09-10  7:06   ` Qu Wenruo
  0 siblings, 0 replies; 3+ messages in thread
From: Qu Wenruo @ 2019-09-10  7:06 UTC (permalink / raw)
  To: Chris Murphy, Daniel Martinez; +Cc: Btrfs BTRFS




On 2019/9/10 12:41 PM, Chris Murphy wrote:
> On Mon, Sep 9, 2019 at 10:16 PM Daniel Martinez
> <danielsmartinez@gmail.com> wrote:
>>
>> Hello,
>>
>> I've recently converted my 32GB ext4 root partition to btrfs (using
>> btrfs-progs 5.2). After that was done, I made a snapshot and tried to
>> update the system. Unfortunately I didn't have enough free space to
>> fit the whole update on that small partition, so it failed. I then
>> realized my mistake and deleted not only that newly made snapshot, but
>> also ext2_saved and some random files on the filesystem, totaling
>> about 5GB. To my surprise, the update still failed due to ENOSPC.
>>
>> At this point, I tried running a balance, but it also failed with
>> ENOSPC. I tried balance -dusage=X with X increasing from zero, but,
>> to my surprise again, it also failed.
>>
>> Data, single: total=28.54GiB, used=28.34GiB
>> System, single: total=32.00MiB, used=16.00KiB
>> Metadata, single: total=1.00GiB, used=807.45MiB
>> GlobalReserve, single: total=41.44MiB, used=0.00B
>>
>> Looking at btrfs filesystem df, it looks like those 5GB of data I
>> deleted are still occupying space. In fact, ncdu claims all the files
>> on that drive sum up to only 19GB.

That's not uncommon.

Convert creates the ext2 image first, then reflinks files so they use
part of the extents of that image.

So just deleting the image subvolume won't necessarily free all the
space, as part of it is still referenced by the converted data.

You need to delete some files to free up some space first, make sure
there is no snapshot of your current subvolume, then do a full defrag.

Balance won't really help much here; you need to defrag to reclaim the
space wasted by the ext*->btrfs conversion.
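A minimal sketch of that sequence, assuming the filesystem is mounted at / (the file removed in step 1 is purely hypothetical):

```shell
# 1. Free a little space first so COW writes have somewhere to land:
rm /var/cache/some-large-unneeded-file   # hypothetical file

# 2. Confirm no snapshots still reference the current subvolume:
btrfs subvolume list /

# 3. Full recursive defrag, rewriting extents still shared with the old
#    ext4 image layout so the wasted space can be released:
btrfs filesystem defragment -r /

# 4. Re-check allocated vs used afterwards:
btrfs filesystem df /
```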

Thanks,
Qu

>>
>> I tried adding a second 2GB drive but that still wasn't enough to run
>> a full data balance (a metadata balance runs fine).
>>
>> This is what filesystem usage looks like:
>>
>> Overall:
>>     Device size:                  31.59GiB
>>     Device allocated:             29.57GiB
>>     Device unallocated:            2.03GiB
>>     Device missing:                  0.00B
>>     Used:                         29.13GiB
>>     Free (estimated):              2.22GiB      (min: 2.22GiB)
>>     Data ratio:                       1.00
>>     Metadata ratio:                   1.00
>>     Global reserve:               41.44MiB      (used: 0.00B)
>>
>> Data,single: Size:28.54GiB, Used:28.34GiB
>>    /dev/sda7     768.00MiB
>>    /dev/sdb1      27.79GiB
>>
>> Metadata,single: Size:1.00GiB, Used:807.45MiB
>>    /dev/sdb1       1.00GiB
>>
>> System,single: Size:32.00MiB, Used:16.00KiB
>>    /dev/sdb1      32.00MiB
>>
>> Unallocated:
>>    /dev/sda7       1.03GiB
>>    /dev/sdb1       1.00GiB
>>
>>
>> I then made a read-only snapshot of the root filesystem and used btrfs
>> send/receive to transfer it to another btrfs filesystem, and when it
>> got there it was also occupying only 19GB.
>>
>> So it seems about 10GB got lost somewhere in the process, and I can't
>> find a way to get it back (other than mkfs'ing and restoring a backup),
>> which in this case is about 30% of the available disk space.
>>
>> What may be causing this?
> 
> 
> Since the 4.6 convert rewrite, I'm not sure offhand whether a defragment
> is still suggested after the conversion. Qu can answer that.
> 
> There is an edge case where extents can get pinned when modified after
> a snapshot, and not released even after the snapshot is deleted. But
> what you're describing would be a really extreme version of this, and
> isn't one I've come across before. It could be an unintended artifact
> of conversion from ext4. Hard to say.
> 
> I suggest 'btrfs-image -c9 -t4 -ss /dev/ /path/to/file' and keep it
> handy in case a developer asks for it. Metadata is only 800MiB so it
> should compress down to less than 400 MiB. Also report back what
> kernel version is being used.
> 
> In the meantime, I suggest deleting all snapshots to give Btrfs a
> chance to clean up unused extents. Then you could try to force a
> cleanup of unused extents with a recursive defragment. The system is so
> full right now that this will likely also fail with ENOSPC: COW
> requires a completely successful write to a new location before old
> extents can be freed, so whether you delete or defragment, space is
> consumed before it can later be freed up. But you might have some luck
> selectively defragmenting directories that you know do not contain big
> files. Start with /etc and /usr; maybe you have VM images in /var? If
> not, /var can be next. Maybe big files in /home? Do that last, or in a
> way that leaves the big files until last.
> 
> 



