* understanding disk space usage
@ 2017-02-07 16:44 Vasco Visser
2017-02-08 3:48 ` Qu Wenruo
0 siblings, 1 reply; 12+ messages in thread
From: Vasco Visser @ 2017-02-07 16:44 UTC (permalink / raw)
To: linux-btrfs
Hello,
My system is, or seems to be, running out of disk space, but I can't
find out how or why. It might be a BTRFS peculiarity, hence posting on
this list. Most indicators suggest I'm filling up, but I can't trace
the disk usage to files on the FS.
The issue is on my root filesystem on a 28GiB ssd partition (commands
below issued when booted into single user mode):
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 28G 26G 2.1G 93% /
$ btrfs --version
btrfs-progs v4.4
$ btrfs fi usage /
Overall:
Device size: 27.94GiB
Device allocated: 27.94GiB
Device unallocated: 1.00MiB
Device missing: 0.00B
Used: 25.03GiB
Free (estimated): 2.37GiB (min: 2.37GiB)
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 256.00MiB (used: 0.00B)
Data,single: Size:26.69GiB, Used:24.32GiB
/dev/sda3 26.69GiB
Metadata,single: Size:1.22GiB, Used:731.45MiB
/dev/sda3 1.22GiB
System,single: Size:32.00MiB, Used:16.00KiB
/dev/sda3 32.00MiB
Unallocated:
/dev/sda3 1.00MiB
$ btrfs fi df /
Data, single: total=26.69GiB, used=24.32GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.22GiB, used=731.48MiB
GlobalReserve, single: total=256.00MiB, used=0.00B
However:
$ mount -o bind / /mnt
$ sudo du -hs /mnt
9.3G /mnt
Try to balance:
$ btrfs balance start /
ERROR: error during balancing '/': No space left on device
Am I really filling up? What can explain the huge discrepancy between
the output of du and the FS stats? (Open file descriptors on deleted
files cannot explain it in single user mode.)
Any advice on possible causes and how to proceed?
--
Vasco
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: understanding disk space usage
2017-02-07 16:44 understanding disk space usage Vasco Visser
@ 2017-02-08 3:48 ` Qu Wenruo
2017-02-08 9:55 ` Vasco Visser
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: Qu Wenruo @ 2017-02-08 3:48 UTC (permalink / raw)
To: Vasco Visser, linux-btrfs
At 02/08/2017 12:44 AM, Vasco Visser wrote:
> Hello,
>
> My system is or seems to be running out of disk space but I can't find
> out how or why. Might be a BTRFS peculiarity, hence posting on this
> list. Most indicators seem to suggest I'm filling up, but I can't
> trace the disk usage to files on the FS.
>
> The issue is on my root filesystem on a 28GiB ssd partition (commands
> below issued when booted into single user mode):
>
>
> $ df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda3 28G 26G 2.1G 93% /
>
>
> $ btrfs --version
> btrfs-progs v4.4
>
>
> $ btrfs fi usage /
> Overall:
> Device size: 27.94GiB
> Device allocated: 27.94GiB
> Device unallocated: 1.00MiB
So at the chunk level, your fs is already full.
And balance won't succeed, since there is no unallocated space at all.
The first 1MiB of a btrfs device is always reserved and never allocated,
and 1MiB is too small for btrfs to allocate a chunk anyway.
> Device missing: 0.00B
> Used: 25.03GiB
> Free (estimated): 2.37GiB (min: 2.37GiB)
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 256.00MiB (used: 0.00B)
> Data,single: Size:26.69GiB, Used:24.32GiB
You still have about 2GiB of free data space, so you can still write data.
> /dev/sda3 26.69GiB
> Metadata,single: Size:1.22GiB, Used:731.45MiB
Metadata has less free space than it appears once the "Global reserve"
is taken into account; in effect about 987MiB is used.
But it's still OK for normal writes.
> /dev/sda3 1.22GiB
> System,single: Size:32.00MiB, Used:16.00KiB
> /dev/sda3 32.00MiB
The system chunk is very unlikely to ever be used up.
> Unallocated:
> /dev/sda3 1.00MiB
>
>
> $ btrfs fi df /
> Data, single: total=26.69GiB, used=24.32GiB
> System, single: total=32.00MiB, used=16.00KiB
> Metadata, single: total=1.22GiB, used=731.48MiB
> GlobalReserve, single: total=256.00MiB, used=0.00B
>
>
> However:
> $ mount -o bind / /mnt
> $ sudo du -hs /mnt
> 9.3G /mnt
>
>
> Try to balance:
> $ btrfs balance start /
> ERROR: error during balancing '/': No space left on device
>
>
> Am I really filling up? What can explain the huge discrepancy with the
> output of du (no open file descriptors on deleted files can explain
> this in single user mode) and the FS stats?
Just don't trust the vanilla df output for btrfs.
Unlike other filesystems such as ext4/xfs, btrfs allocates chunks
dynamically and can use different profiles for metadata and data, so you
only get a clear view of the fs by looking at both the chunk level
(allocated/unallocated) and the extent level (total/used).
In your case, your fs doesn't have any unallocated space, which makes
balance unable to work at all.
And your data/metadata usage is quite high; although both still have a
little space available, the fs will stay writable for some time, but
not long.
To proceed, add a larger device to the current fs and run a balance, or
just delete the 28G partition from the fs afterwards; btrfs will handle
the rest well.
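The procedure Qu suggests might look roughly like this (a sketch only;
the device name /dev/sdb1 and the usage thresholds are made up, the
exact steps depend on your setup, and for a root fs this is best done
from a rescue environment):

```shell
# Attach a temporary second device so btrfs has unallocated space
# to work with (assumed here to be /dev/sdb1).
btrfs device add /dev/sdb1 /

# Now balance can run; usage filters keep it cheap by only compacting
# chunks that are under 50% full.
btrfs balance start -dusage=50 -musage=50 /

# Once /dev/sda3 has unallocated space again, remove the temporary
# device; btrfs migrates any chunks that live on it.
btrfs device delete /dev/sdb1 /

# Verify that some space is unallocated again.
btrfs filesystem usage /
```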
Thanks,
Qu
>
> Any advice on possible causes and how to proceed?
>
>
> --
> Vasco
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
* Re: understanding disk space usage
2017-02-08 3:48 ` Qu Wenruo
@ 2017-02-08 9:55 ` Vasco Visser
2017-02-09 2:53 ` Qu Wenruo
2017-02-08 14:46 ` Peter Grandi
2017-02-09 13:25 ` Adam Borowski
2 siblings, 1 reply; 12+ messages in thread
From: Vasco Visser @ 2017-02-08 9:55 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
Thank you for the explanation. What I would still like to know is how
to relate the chunk-level abstraction to the file-level abstraction.
According to the btrfs output, 2G of data space is available and 24G of
data space is in use. Does this mean 24G of data is used in files? How
do I know which files take up the most space? du seems pretty useless,
as it reports only 9G of files on the volume.
--
Vasco
On Wed, Feb 8, 2017 at 4:48 AM, Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>
>
> At 02/08/2017 12:44 AM, Vasco Visser wrote:
>>
>> Hello,
>>
>> My system is or seems to be running out of disk space but I can't find
>> out how or why. Might be a BTRFS peculiarity, hence posting on this
>> list. Most indicators seem to suggest I'm filling up, but I can't
>> trace the disk usage to files on the FS.
>>
>> The issue is on my root filesystem on a 28GiB ssd partition (commands
>> below issued when booted into single user mode):
>>
>>
>> $ df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda3 28G 26G 2.1G 93% /
>>
>>
>> $ btrfs --version
>> btrfs-progs v4.4
>>
>>
>> $ btrfs fi usage /
>> Overall:
>> Device size: 27.94GiB
>> Device allocated: 27.94GiB
>> Device unallocated: 1.00MiB
>
>
> So from chunk level, your fs is already full.
>
> And balance won't success since there is no unallocated space at all.
> The first 1M of btrfs is always reserved and won't be allocated, and 1M is
> too small for btrfs to allocate a chunk.
>
>> Device missing: 0.00B
>> Used: 25.03GiB
>> Free (estimated): 2.37GiB (min: 2.37GiB)
>> Data ratio: 1.00
>> Metadata ratio: 1.00
>> Global reserve: 256.00MiB (used: 0.00B)
>> Data,single: Size:26.69GiB, Used:24.32GiB
>
>
> You still have 2G data space, so you can still write things.
>
>> /dev/sda3 26.69GiB
>> Metadata,single: Size:1.22GiB, Used:731.45MiB
>
>
> Metadata has has less space when considering "Global reserve".
> In fact the used space would be 987M.
>
> But it's still OK for normal write.
>
>> /dev/sda3 1.22GiB
>> System,single: Size:32.00MiB, Used:16.00KiB
>> /dev/sda3 32.00MiB
>
>
> System chunk can hardly be used up.
>
>> Unallocated:
>> /dev/sda3 1.00MiB
>>
>>
>> $ btrfs fi df /
>> Data, single: total=26.69GiB, used=24.32GiB
>> System, single: total=32.00MiB, used=16.00KiB
>> Metadata, single: total=1.22GiB, used=731.48MiB
>> GlobalReserve, single: total=256.00MiB, used=0.00B
>>
>>
>> However:
>> $ mount -o bind / /mnt
>> $ sudo du -hs /mnt
>> 9.3G /mnt
>>
>>
>> Try to balance:
>> $ btrfs balance start /
>> ERROR: error during balancing '/': No space left on device
>>
>>
>> Am I really filling up? What can explain the huge discrepancy with the
>> output of du (no open file descriptors on deleted files can explain
>> this in single user mode) and the FS stats?
>
>
> Just don't believe the vanilla df output for btrfs.
>
> For btrfs, unlike other fs like ext4/xfs, which allocates chunk dynamically
> and has different metadata/data profile, we can only get a clear view of the
> fs from both chunk level(allocated/unallocated) and extent
> level(total/used).
>
> In your case, your fs doesn't have any unallocated space, this make balance
> unable to work at all.
>
> And your data/metadata usage is quite high, although both has small
> available space left, the fs should be writable for some time, but not long.
>
> To proceed, add a larger device to current fs, and do a balance or just
> delete the 28G partition then btrfs will handle the rest well.
>
> Thanks,
> Qu
>
>>
>> Any advice on possible causes and how to proceed?
>>
>>
>> --
>> Vasco
>>
>>
>
>
* Re: understanding disk space usage
2017-02-08 3:48 ` Qu Wenruo
2017-02-08 9:55 ` Vasco Visser
@ 2017-02-08 14:46 ` Peter Grandi
2017-02-08 17:50 ` Austin S. Hemmelgarn
2017-02-08 18:03 ` Hugo Mills
2017-02-09 13:25 ` Adam Borowski
2 siblings, 2 replies; 12+ messages in thread
From: Peter Grandi @ 2017-02-08 14:46 UTC (permalink / raw)
To: linux-btrfs
>> My system is or seems to be running out of disk space but I
>> can't find out how or why. [ ... ]
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda3 28G 26G 2.1G 93% /
[ ... ]
> So from chunk level, your fs is already full. And balance
> won't success since there is no unallocated space at all.
To add to this, 28GiB is a bit too small for Btrfs, because at that
size the chunk size is 1GiB. I have the habit of sizing partitions to
an exact number of GiB, and that means most of 1GiB will never be used
by Btrfs: there is always some small allocation smaller than 1GiB, so
eventually just under 1GiB will be left unallocated.
Unfortunately the chunk size is not manually settable.
Example here from 'btrfs fi usage':
Overall:
Device size: 88.00GiB
Device allocated: 86.06GiB
Device unallocated: 1.94GiB
Device missing: 0.00B
Used: 80.11GiB
Free (estimated): 6.26GiB (min: 5.30GiB)
That means I should run 'btrfs balance' now: of the 1.94GiB
"unallocated", 0.94GiB will never be allocated, which leaves just 1GiB
"unallocated", the minimum needed for running 'btrfs balance'. I have
just done so, and this is the result:
Overall:
Device size: 88.00GiB
Device allocated: 82.03GiB
Device unallocated: 5.97GiB
Device missing: 0.00B
Used: 80.11GiB
Free (estimated): 6.26GiB (min: 3.28GiB)
At some point I had decided to use 'mixed-bg' allocation to reduce this
problem and hopefully improve locality, but that means metadata and
data must have the same profile, and I really want metadata to be 'dup'
because of checksumming, while I don't want data to be 'dup' too.
> [ ... ] To proceed, add a larger device to current fs, and do
> a balance or just delete the 28G partition then btrfs will
> handle the rest well.
Usually for this I use a USB stick with a 1-3GiB partition, plus a bit
extra to cover the space that can never be allocated.
https://btrfs.wiki.kernel.org/index.php/FAQ#How_much_free_space_do_I_have.3F
https://btrfs.wiki.kernel.org/index.php/FAQ#Help.21_Btrfs_claims_I.27m_out_of_space.2C_but_it_looks_like_I_should_have_lots_left.21
marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html
Unfortunately, if it is a single-device volume with 'dup' metadata,
then to remove the extra temporary device one first has to convert the
metadata to 'single', and then back to 'dup' after the removal.
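The conversion dance described above would look something like this (a
sketch, assuming /dev/sdb1 is the temporary device; whether the
downgrade step is actually still required depends on kernel and
btrfs-progs version):

```shell
# Temporarily downgrade metadata from 'dup' to 'single' so the
# temporary device can be removed.
btrfs balance start -mconvert=single /

# Remove the temporary device.
btrfs device delete /dev/sdb1 /

# Restore 'dup' metadata for checksummed redundancy.
btrfs balance start -mconvert=dup /
```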
There are also some additional reasons why space used (rather
than allocated) may be larger than expected, in special but not
wholly infrequent cases. My impression is that the Btrfs design
trades space for performance and reliability.
* Re: understanding disk space usage
2017-02-08 14:46 ` Peter Grandi
@ 2017-02-08 17:50 ` Austin S. Hemmelgarn
2017-02-08 21:45 ` Peter Grandi
2017-02-08 18:03 ` Hugo Mills
1 sibling, 1 reply; 12+ messages in thread
From: Austin S. Hemmelgarn @ 2017-02-08 17:50 UTC (permalink / raw)
To: linux-btrfs
On 2017-02-08 09:46, Peter Grandi wrote:
>>> My system is or seems to be running out of disk space but I
>>> can't find out how or why. [ ... ]
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sda3 28G 26G 2.1G 93% /
> [ ... ]
>> So from chunk level, your fs is already full. And balance
>> won't success since there is no unallocated space at all.
>
> To add to this, 28GiB is a bit too small for Btrfs, because at
> that point chunk size is 1GiB. I have the habit of sizing
> partitions to an exact number of GiB, and that means that most
> of 1GiB will never be used by Btrfs because there is a small
> amount of space allocated that is smaller than 1GiB and thus
> there will be eventually just less than 1GiB unallocated.
> Unfortunately the chunk size is not manually settable.
28GB is a perfectly reasonable (if a bit odd) size for a non-mixed-mode
volume. The issue isn't total size, it's the difference between total
size and the amount of data you want to store on it, and how well you
manage chunk usage. If you're balancing regularly to compact chunks
that are less than 50% full, you can get away with as little as 4GB of
extra space beyond your regular data set with absolutely zero issues.
I've run full Linux installations in VMs with BTRFS on 16GB disk images
before without trouble, and I have a handful of fairly active 8GB BTRFS
volumes on both of my primary systems that never have any free-space
issues despite averaging 5GB of usage.
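The regular compaction described above can be done with balance usage
filters, e.g. (a sketch; the 50% threshold here just mirrors the figure
in the paragraph above, it is not a universal recommendation):

```shell
# Compact data and metadata chunks that are less than 50% full,
# returning their space to the unallocated pool. Cheap enough to run
# periodically, e.g. from a weekly cron job or systemd timer.
btrfs balance start -dusage=50 -musage=50 /
```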
>
> Example here from 'btrfs fi usage':
>
> Overall:
> Device size: 88.00GiB
> Device allocated: 86.06GiB
> Device unallocated: 1.94GiB
> Device missing: 0.00B
> Used: 80.11GiB
> Free (estimated): 6.26GiB (min: 5.30GiB)
>
> That means that I should 'btrfs balance' now, because of the
> 1.94GiB "unallocated", 0.94GiB will never be allocated, and that
> leaves just 1GiB "unallocated" which is the minimum for running
> 'btrfs balance'. I have just done so and this is the result:
Actually, that 0.94GB would be used. BTRFS will create smaller chunks
if it has to, so if you allocated two data chunks from that 1.94GB of
space, you would get one 1GB chunk and one 0.94GB chunk.
>
> Overall:
> Device size: 88.00GiB
> Device allocated: 82.03GiB
> Device unallocated: 5.97GiB
> Device missing: 0.00B
> Used: 80.11GiB
> Free (estimated): 6.26GiB (min: 3.28GiB)
>
> At some point I had decided to use 'mixedbg' allocation to
> reduce this problem and hopefully improve locality, but that
> means that metadata and data need to have the same profile, and
> I really want metadata to be 'dup' because of checksumming,
> and I don't want data to be 'dup' too.
You could also use larger partitions and keep a better handle on free space.
>
>> [ ... ] To proceed, add a larger device to current fs, and do
>> a balance or just delete the 28G partition then btrfs will
>> handle the rest well.
>
> Usually for this I use a USB stick, with a 1-3GiB partition plus
> a bit extra because of that extra bit of space.
If you have a lot of RAM and can guarantee that things won't crash (or
don't care about the filesystem too much and are just trying to avoid
having to restore a backup), a ramdisk works well for this too.
>
> https://btrfs.wiki.kernel.org/index.php/FAQ#How_much_free_space_do_I_have.3F
> https://btrfs.wiki.kernel.org/index.php/FAQ#Help.21_Btrfs_claims_I.27m_out_of_space.2C_but_it_looks_like_I_should_have_lots_left.21
> marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html
>
> Unfortunately if it is a single device volume and metadata is
> 'dup' to remove the extra temporary device one has first to
> convert the metadata to 'single' and then back to 'dup' after
> removal.
This shouldn't be needed; if it is, then it's a bug that should be
reported and ideally fixed. (There was such a bug when converting from
multi-device raid profiles to a single device, but that got fixed quite
a few kernel versions ago; I distinctly remember because I wrote the
fix.)
>
> There are also some additional reasons why space used (rather
> than allocated) may be larger than expected, in special but not
> wholly infrequent cases. My impression is that the Btrfs design
> trades space for performance and reliability.
In general, yes, but a more accurate statement would be that it offers a
trade-off between space and convenience. If you're not going to take
the time to maintain the filesystem properly, then you will need more
excess space for it.
* Re: understanding disk space usage
2017-02-08 14:46 ` Peter Grandi
2017-02-08 17:50 ` Austin S. Hemmelgarn
@ 2017-02-08 18:03 ` Hugo Mills
1 sibling, 0 replies; 12+ messages in thread
From: Hugo Mills @ 2017-02-08 18:03 UTC (permalink / raw)
To: Peter Grandi; +Cc: linux-btrfs
On Wed, Feb 08, 2017 at 02:46:32PM +0000, Peter Grandi wrote:
> >> My system is or seems to be running out of disk space but I
> >> can't find out how or why. [ ... ]
> >> Filesystem Size Used Avail Use% Mounted on
> >> /dev/sda3 28G 26G 2.1G 93% /
> [ ... ]
> > So from chunk level, your fs is already full. And balance
> > won't success since there is no unallocated space at all.
>
> To add to this, 28GiB is a bit too small for Btrfs, because at
> that point chunk size is 1GiB. I have the habit of sizing
> partitions to an exact number of GiB, and that means that most
> of 1GiB will never be used by Btrfs because there is a small
> amount of space allocated that is smaller than 1GiB and thus
> there will be eventually just less than 1GiB unallocated.
Not true -- the last chunk can be smaller than 1 GiB, to use the
available space completely.
Hugo.
> Unfortunately the chunk size is not manually settable.
>
> Example here from 'btrfs fi usage':
>
> Overall:
> Device size: 88.00GiB
> Device allocated: 86.06GiB
> Device unallocated: 1.94GiB
> Device missing: 0.00B
> Used: 80.11GiB
> Free (estimated): 6.26GiB (min: 5.30GiB)
>
> That means that I should 'btrfs balance' now, because of the
> 1.94GiB "unallocated", 0.94GiB will never be allocated, and that
> leaves just 1GiB "unallocated" which is the minimum for running
> 'btrfs balance'. I have just done so and this is the result:
>
> Overall:
> Device size: 88.00GiB
> Device allocated: 82.03GiB
> Device unallocated: 5.97GiB
> Device missing: 0.00B
> Used: 80.11GiB
> Free (estimated): 6.26GiB (min: 3.28GiB)
>
> At some point I had decided to use 'mixedbg' allocation to
> reduce this problem and hopefully improve locality, but that
> means that metadata and data need to have the same profile, and
> I really want metadata to be 'dup' because of checksumming,
> and I don't want data to be 'dup' too.
>
> > [ ... ] To proceed, add a larger device to current fs, and do
> > a balance or just delete the 28G partition then btrfs will
> > handle the rest well.
>
> Usually for this I use a USB stick, with a 1-3GiB partition plus
> a bit extra because of that extra bit of space.
>
> https://btrfs.wiki.kernel.org/index.php/FAQ#How_much_free_space_do_I_have.3F
> https://btrfs.wiki.kernel.org/index.php/FAQ#Help.21_Btrfs_claims_I.27m_out_of_space.2C_but_it_looks_like_I_should_have_lots_left.21
> marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html
>
> Unfortunately if it is a single device volume and metadata is
> 'dup' to remove the extra temporary device one has first to
> convert the metadata to 'single' and then back to 'dup' after
> removal.
>
> There are also some additional reasons why space used (rather
> than allocated) may be larger than expected, in special but not
> wholly infrequent cases. My impression is that the Btrfs design
> trades space for performance and reliability.
--
Hugo Mills | Alert status chocolate viridian: Authorised
hugo@... carfax.org.uk | personnel only. Dogs must be carried on escalator.
http://carfax.org.uk/ |
PGP: E2AB1DE4 |
* Re: understanding disk space usage
2017-02-08 17:50 ` Austin S. Hemmelgarn
@ 2017-02-08 21:45 ` Peter Grandi
2017-02-09 12:47 ` Austin S. Hemmelgarn
0 siblings, 1 reply; 12+ messages in thread
From: Peter Grandi @ 2017-02-08 21:45 UTC (permalink / raw)
To: Linux Btrfs
[ ... ]
> The issue isn't total size, it's the difference between total
> size and the amount of data you want to store on it. and how
> well you manage chunk usage. If you're balancing regularly to
> compact chunks that are less than 50% full, [ ... ] BTRFS on
> 16GB disk images before with absolutely zero issues, and have
> a handful of fairly active 8GB BTRFS volumes [ ... ]
Unfortunately, balance operations are quite expensive, especially from
inside VMs. On the other hand, if the system is not very
disk-constrained, relatively frequent balances are indeed a good idea.
It is a bit like the advice in the other thread on OLTP to run frequent
data defrags, which are also quite expensive. Both combined are like
running the compactor/cleaner on log-structured (another variant of
"COW") filesystems like NILFS2: running it frequently means tighter
space use and better locality, but it is quite expensive too.
>> [ ... ] My impression is that the Btrfs design trades space
>> for performance and reliability.
> In general, yes, but a more accurate statement would be that
> it offers a trade-off between space and convenience. [ ... ]
It is not quite "convenience", it is overhead: whole-volume operations
like compacting, defragmenting (or fscking) tend to cost significantly
in IOPS and in transfer rate, and on flash SSDs they also consume
lifetime.
Therefore I personally prefer to keep quite a bit of unused space in
Btrfs or NILFS2: at a minimum around double, 10-20%, rather than the
5-10% that I think is the minimum advisable with conventional designs.
* Re: understanding disk space usage
2017-02-08 9:55 ` Vasco Visser
@ 2017-02-09 2:53 ` Qu Wenruo
2017-02-09 12:01 ` Vasco Visser
0 siblings, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2017-02-09 2:53 UTC (permalink / raw)
To: Vasco Visser; +Cc: linux-btrfs
At 02/08/2017 05:55 PM, Vasco Visser wrote:
> Thank you for the explanation. What I would still like to know is how
> to relate the chunk level abstraction to the file level abstraction.
> According to the btrfs output there is 2G of data space is available
> and 24G of data space is being used. Does this mean 24G of data used
> in files?
Yes, 24G is used to store file data (plus the space cache, which is
relatively small: less than 1MiB per chunk).
> How do I know which files take up most space? du seems
> pretty useless as it reports only 9G of files on the volume.
Are you using snapshots?
If you are using only one subvolume (counting snapshots), then it seems
that btrfs data CoW is wasting quite a lot of space.
With btrfs data CoW, if for example you have a 128M file (one extent)
and you rewrite 64M of it, your data space usage becomes 128M + 64M,
because the original 128M extent is only freed once *all* of its users
are freed.
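Qu's 128M + 64M arithmetic can be sketched with a toy extent-refcount
model (illustrative Python only, not real btrfs code; all names here
are made up): an extent stays fully allocated while any file range
still references part of it.

```python
# Toy model of btrfs data CoW accounting: files reference byte ranges
# of immutable extents; an extent is freed only when no file maps any
# part of it. This ignores compression, checksums, on-disk layout, etc.

class Extent:
    def __init__(self, size):
        self.size = size

class File:
    def __init__(self, size):
        # list of (extent, offset_in_extent, length) mappings
        self.maps = [(Extent(size), 0, size)]

    def rewrite(self, start, length):
        """CoW-rewrite [start, start+length): a fresh extent replaces
        that range, but the old extent stays pinned by its other parts."""
        new = Extent(length)
        out = []
        pos = 0
        for ext, off, ln in self.maps:
            # keep the parts of this mapping outside the rewritten range
            left = max(start - pos, 0)            # bytes kept before range
            right = max(pos + ln - (start + length), 0)  # bytes kept after
            if left:
                out.append((ext, off, min(left, ln)))
            if right:
                out.append((ext, off + ln - right, right))
            pos += ln
        out.append((new, 0, length))  # simplification: order not preserved
        self.maps = out

    def space_used(self):
        # every distinct extent still referenced stays fully allocated
        return sum(e.size for e in {ext for ext, _, _ in self.maps})

M = 1024 * 1024
f = File(128 * M)           # one 128M extent
f.rewrite(0, 64 * M)        # rewrite the first 64M
print(f.space_used() // M)  # prints 192: old extent pinned by its tail
```

Rewriting the whole file instead drops the old extent, and usage falls
back to 128M; that is why defrag (which rewrites extents) can free space.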
For the single-subvolume, little-to-no-reflink case, "btrfs fi defrag"
should help free some space.
If you have multiple snapshots or a lot of reflinked files, then I'm
afraid you will have to delete some files (including reflink copies or
snapshots) to free some data.
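To check whether snapshots are the culprit, something like the
following may help (a sketch; `btrfs filesystem du` requires a newer
btrfs-progs than the v4.4 shown earlier in the thread):

```shell
# List all subvolumes and snapshots on the filesystem; distro update
# tools often create snapshots silently.
btrfs subvolume list /

# On newer btrfs-progs, summarize usage with exclusive vs. shared
# (reflinked/snapshotted) data accounted separately.
btrfs filesystem du -s /
```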
Thanks,
Qu
>
> --
> Vasco
>
>
> On Wed, Feb 8, 2017 at 4:48 AM, Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>>
>>
>> At 02/08/2017 12:44 AM, Vasco Visser wrote:
>>>
>>> Hello,
>>>
>>> My system is or seems to be running out of disk space but I can't find
>>> out how or why. Might be a BTRFS peculiarity, hence posting on this
>>> list. Most indicators seem to suggest I'm filling up, but I can't
>>> trace the disk usage to files on the FS.
>>>
>>> The issue is on my root filesystem on a 28GiB ssd partition (commands
>>> below issued when booted into single user mode):
>>>
>>>
>>> $ df -h
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/sda3 28G 26G 2.1G 93% /
>>>
>>>
>>> $ btrfs --version
>>> btrfs-progs v4.4
>>>
>>>
>>> $ btrfs fi usage /
>>> Overall:
>>> Device size: 27.94GiB
>>> Device allocated: 27.94GiB
>>> Device unallocated: 1.00MiB
>>
>>
>> So from chunk level, your fs is already full.
>>
>> And balance won't success since there is no unallocated space at all.
>> The first 1M of btrfs is always reserved and won't be allocated, and 1M is
>> too small for btrfs to allocate a chunk.
>>
>>> Device missing: 0.00B
>>> Used: 25.03GiB
>>> Free (estimated): 2.37GiB (min: 2.37GiB)
>>> Data ratio: 1.00
>>> Metadata ratio: 1.00
>>> Global reserve: 256.00MiB (used: 0.00B)
>>> Data,single: Size:26.69GiB, Used:24.32GiB
>>
>>
>> You still have 2G data space, so you can still write things.
>>
>>> /dev/sda3 26.69GiB
>>> Metadata,single: Size:1.22GiB, Used:731.45MiB
>>
>>
>> Metadata has has less space when considering "Global reserve".
>> In fact the used space would be 987M.
>>
>> But it's still OK for normal write.
>>
>>> /dev/sda3 1.22GiB
>>> System,single: Size:32.00MiB, Used:16.00KiB
>>> /dev/sda3 32.00MiB
>>
>>
>> System chunk can hardly be used up.
>>
>>> Unallocated:
>>> /dev/sda3 1.00MiB
>>>
>>>
>>> $ btrfs fi df /
>>> Data, single: total=26.69GiB, used=24.32GiB
>>> System, single: total=32.00MiB, used=16.00KiB
>>> Metadata, single: total=1.22GiB, used=731.48MiB
>>> GlobalReserve, single: total=256.00MiB, used=0.00B
>>>
>>>
>>> However:
>>> $ mount -o bind / /mnt
>>> $ sudo du -hs /mnt
>>> 9.3G /mnt
>>>
>>>
>>> Try to balance:
>>> $ btrfs balance start /
>>> ERROR: error during balancing '/': No space left on device
>>>
>>>
>>> Am I really filling up? What can explain the huge discrepancy with the
>>> output of du (no open file descriptors on deleted files can explain
>>> this in single user mode) and the FS stats?
>>
>>
>> Just don't believe the vanilla df output for btrfs.
>>
>> For btrfs, unlike other fs like ext4/xfs, which allocates chunk dynamically
>> and has different metadata/data profile, we can only get a clear view of the
>> fs from both chunk level(allocated/unallocated) and extent
>> level(total/used).
>>
>> In your case, your fs doesn't have any unallocated space, this make balance
>> unable to work at all.
>>
>> And your data/metadata usage is quite high, although both has small
>> available space left, the fs should be writable for some time, but not long.
>>
>> To proceed, add a larger device to current fs, and do a balance or just
>> delete the 28G partition then btrfs will handle the rest well.
>>
>> Thanks,
>> Qu
>>
>>>
>>> Any advice on possible causes and how to proceed?
>>>
>>>
>>> --
>>> Vasco
>>>
>>>
>>
>>
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: understanding disk space usage
2017-02-09 2:53 ` Qu Wenruo
@ 2017-02-09 12:01 ` Vasco Visser
0 siblings, 0 replies; 12+ messages in thread
From: Vasco Visser @ 2017-02-09 12:01 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
I did have snapshots. Apparently my distro's update utility made these
snapshots without clearly telling me... Everything makes sense now.
Thanks again for the help.
--
Vasco
On Thu, Feb 9, 2017 at 3:53 AM, Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>
>
> At 02/08/2017 05:55 PM, Vasco Visser wrote:
>>
>> Thank you for the explanation. What I would still like to know is how
>> to relate the chunk level abstraction to the file level abstraction.
>> According to the btrfs output there is 2G of data space is available
>> and 24G of data space is being used. Does this mean 24G of data used
>> in files?
>
>
> Yes, 24G data is used to store data.
> (And space cache, while space cache is relatively small, less than 1M for
> each chunk)
>
>> How do I know which files take up most space? du seems
>> pretty useless as it reports only 9G of files on the volume.
>
>
> Are you using snapshots?
>
> If you are only using 1 subvolume(including snapshots), then it seems that
> btrfs data CoW waste quite a lot of space.
>
> In case of btrfs data CoW, for example you have a 128M file(one extent),
> then you rewrite 64M of it, your data space usage will be 128M + 64M, as the
> first 128M will only be freed after *all* its user get freed.
>
> For single subvolume and little to none reflink usage case, "btrfs fi
> defrag" should help to free some space.
>
> If you have multiple snapshots or a lot of reflinked files, then I'm afraid
> you have to delete some file (including reflink copy or snapshot) to free
> some data.
>
> Thanks,
> Qu
>
>
>>
>> --
>> Vasco
>>
>>
>> On Wed, Feb 8, 2017 at 4:48 AM, Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>>>
>>>
>>>
>>> At 02/08/2017 12:44 AM, Vasco Visser wrote:
>>>>
>>>>
>>>> Hello,
>>>>
>>>> My system is or seems to be running out of disk space but I can't find
>>>> out how or why. Might be a BTRFS peculiarity, hence posting on this
>>>> list. Most indicators seem to suggest I'm filling up, but I can't
>>>> trace the disk usage to files on the FS.
>>>>
>>>> The issue is on my root filesystem on a 28GiB ssd partition (commands
>>>> below issued when booted into single user mode):
>>>>
>>>>
>>>> $ df -h
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sda3 28G 26G 2.1G 93% /
>>>>
>>>>
>>>> $ btrfs --version
>>>> btrfs-progs v4.4
>>>>
>>>>
>>>> $ btrfs fi usage /
>>>> Overall:
>>>> Device size: 27.94GiB
>>>> Device allocated: 27.94GiB
>>>> Device unallocated: 1.00MiB
>>>
>>>
>>>
>>> So from chunk level, your fs is already full.
>>>
>>> And balance won't success since there is no unallocated space at all.
>>> The first 1M of btrfs is always reserved and won't be allocated, and 1M
>>> is
>>> too small for btrfs to allocate a chunk.
>>>
>>>> Device missing: 0.00B
>>>> Used: 25.03GiB
>>>> Free (estimated): 2.37GiB (min: 2.37GiB)
>>>> Data ratio: 1.00
>>>> Metadata ratio: 1.00
>>>> Global reserve: 256.00MiB (used: 0.00B)
>>>> Data,single: Size:26.69GiB, Used:24.32GiB
>>>
>>>
>>>
>>> You still have 2G data space, so you can still write things.
>>>
>>>> /dev/sda3 26.69GiB
>>>> Metadata,single: Size:1.22GiB, Used:731.45MiB
>>>
>>>
>>>
>>> Metadata has has less space when considering "Global reserve".
>>> In fact the used space would be 987M.
>>>
>>> But it's still OK for normal write.
>>>
>>>> /dev/sda3 1.22GiB
>>>> System,single: Size:32.00MiB, Used:16.00KiB
>>>> /dev/sda3 32.00MiB
>>>
>>>
>>>
>>> System chunk can hardly be used up.
>>>
>>>> Unallocated:
>>>> /dev/sda3 1.00MiB
>>>>
>>>>
>>>> $ btrfs fi df /
>>>> Data, single: total=26.69GiB, used=24.32GiB
>>>> System, single: total=32.00MiB, used=16.00KiB
>>>> Metadata, single: total=1.22GiB, used=731.48MiB
>>>> GlobalReserve, single: total=256.00MiB, used=0.00B
>>>>
>>>>
>>>> However:
>>>> $ mount -o bind / /mnt
>>>> $ sudo du -hs /mnt
>>>> 9.3G /mnt
>>>>
>>>>
>>>> Try to balance:
>>>> $ btrfs balance start /
>>>> ERROR: error during balancing '/': No space left on device
>>>>
>>>>
>>>> Am I really filling up? What can explain the huge discrepancy with the
>>>> output of du (no open file descriptors on deleted files can explain
>>>> this in single user mode) and the FS stats?
>>>
>>>
>>>
>>> Just don't believe the vanilla df output for btrfs.
>>>
>>> For btrfs, which unlike other filesystems such as ext4/xfs allocates
>>> chunks dynamically and can have different metadata/data profiles, we
>>> can only get a clear view of the fs from both the chunk level
>>> (allocated/unallocated) and the extent level (total/used).
>>>
>>> In your case, your fs doesn't have any unallocated space, which makes
>>> balance unable to work at all.
>>>
>>> And your data/metadata usage is quite high. Although both still have
>>> a little space available, the fs should remain writable for some
>>> time, but not long.
>>>
>>> To proceed, add a larger device to the current fs, then either run a
>>> balance or just delete the 28G device; btrfs will handle the rest
>>> well.
>>>
>>> Thanks,
>>> Qu
>>>
>>>>
>>>> Any advice on possible causes and how to proceed?
>>>>
>>>>
>>>> --
>>>> Vasco
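[Editorial note: Qu's suggested recovery, as a transcript sketch. The new device /dev/sdb is hypothetical, and the final step only works once the remaining devices can hold all the data:

```
$ btrfs device add /dev/sdb /
$ btrfs balance start /
$ btrfs device delete /dev/sda3 /
```

The balance after "device add" spreads chunks onto the new device, which is what frees up unallocated space on the original one.]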
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: understanding disk space usage
2017-02-08 21:45 ` Peter Grandi
@ 2017-02-09 12:47 ` Austin S. Hemmelgarn
0 siblings, 0 replies; 12+ messages in thread
From: Austin S. Hemmelgarn @ 2017-02-09 12:47 UTC (permalink / raw)
To: Linux Btrfs
On 2017-02-08 16:45, Peter Grandi wrote:
> [ ... ]
>> The issue isn't total size, it's the difference between total
>> size and the amount of data you want to store on it. and how
>> well you manage chunk usage. If you're balancing regularly to
>> compact chunks that are less than 50% full, [ ... ] BTRFS on
>> 16GB disk images before with absolutely zero issues, and have
>> a handful of fairly active 8GB BTRFS volumes [ ... ]
>
> Unfortunately balance operations are quite expensive, especially
> from inside VMs. On the other hand, if the system is not very
> disk-constrained, relatively frequent balances are a good idea
> indeed. It is a bit like the advice in the other thread on OLTP
> to run frequent data defrags, which are also quite expensive.
That depends on how and when you do them. A full balance isn't part of
regular maintenance, and should never be. Regular partial balances
done to clean up mostly empty chunks absolutely should be part of
regular maintenance, and are pretty inexpensive in terms of both time
and resource usage. A balance with -dusage=20 -musage=20 should run in
at most a few seconds on most reasonably sized filesystems, even on
low-end systems like a Raspberry Pi, and running it at least weekly
will significantly improve the chances that you don't encounter a
situation like this.
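As a concrete transcript of that partial balance (mountpoint / as an
example; needs root):

```
$ btrfs balance start -dusage=20 -musage=20 /
```

The usage filters only relocate chunks that are at most 20% full, which
is why it finishes quickly compared to a full balance.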
>
> Both combined are like running the compactor/cleaner on log
> structured (another variant of "COW") filesystems like NILFS2:
> running that frequently means tighter space use and better
> locality, but is quite expensive too.
If you run with autodefrag, then you should rarely if ever need to
actually run a full defrag operation unless you're storing lots of
database files, VM disk images, or similar stuff. This goes double on
an SSD.
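autodefrag is a mount option, so one way to enable it persistently is an
/etc/fstab entry; a sketch (the UUID is a placeholder, the other fields
are typical defaults):

```
UUID=<your-fs-uuid>  /  btrfs  defaults,autodefrag  0  1
```

It can also be turned on for a live system with
`mount -o remount,autodefrag /`.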
>
>>> [ ... ] My impression is that the Btrfs design trades space
>>> for performance and reliability.
>
>> In general, yes, but a more accurate statement would be that
>> it offers a trade-off between space and convenience. [ ... ]
>
> It is not quite "convenience", it is overhead: whole-volume
> operations like compacting, defragmenting (or fscking) tend to
> cost significantly in IOPS and also in transfer rate, and on
> flash SSDs they also consume lifetime.
Overhead is the inverse of convenience. By over-provisioning to a
greater degree, you're reducing the need to worry about those
'expensive' operations, which reduces both resource overhead and
management overhead.
>
> Therefore personally I prefer to have quite a bit of unused
> space in Btrfs or NILFS2: at a minimum around double, 10-20%,
> rather than the 5-10% that I think is the minimum advisable with
> conventional designs.
I can agree on this point, over-provisioning is mandatory to a much
greater degree on COW filesystems.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: understanding disk space usage
2017-02-08 3:48 ` Qu Wenruo
2017-02-08 9:55 ` Vasco Visser
2017-02-08 14:46 ` Peter Grandi
@ 2017-02-09 13:25 ` Adam Borowski
2017-02-09 17:53 ` Austin S. Hemmelgarn
2 siblings, 1 reply; 12+ messages in thread
From: Adam Borowski @ 2017-02-09 13:25 UTC (permalink / raw)
To: linux-btrfs
On Wed, Feb 08, 2017 at 11:48:04AM +0800, Qu Wenruo wrote:
> Just don't believe the vanilla df output for btrfs.
>
> For btrfs, which unlike other filesystems such as ext4/xfs allocates
> chunks dynamically and can have different metadata/data profiles, we can
> only get a clear view of the fs from both the chunk level
> (allocated/unallocated) and the extent level (total/used).
Actually, df lies on ext4 too. sysvfs-derived filesystems have statically
allocated inodes, which means that if you store lots of small files, at
some point you'll run out of inodes despite df claiming you have plenty
of space left.
Meow!
--
Autotools hint: to do a zx-spectrum build on a pdp11 host, type:
./configure --host=zx-spectrum --build=pdp11
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: understanding disk space usage
2017-02-09 13:25 ` Adam Borowski
@ 2017-02-09 17:53 ` Austin S. Hemmelgarn
0 siblings, 0 replies; 12+ messages in thread
From: Austin S. Hemmelgarn @ 2017-02-09 17:53 UTC (permalink / raw)
To: linux-btrfs
On 2017-02-09 08:25, Adam Borowski wrote:
> On Wed, Feb 08, 2017 at 11:48:04AM +0800, Qu Wenruo wrote:
>> Just don't believe the vanilla df output for btrfs.
>>
>> For btrfs, which unlike other filesystems such as ext4/xfs allocates
>> chunks dynamically and can have different metadata/data profiles, we can
>> only get a clear view of the fs from both the chunk level
>> (allocated/unallocated) and the extent level (total/used).
>
> Actually, df lies on ext4 too. sysvfs-derived filesystems have statically
> allocated inodes, which means that if you store lots of small files, at
> some point you'll run out of inodes despite df claiming you have plenty of
> space left.
Which is why the `-i` option for df exists.
That said, running out of space is not the same as not being able to
create more files, and that distinction has existed since long before
Linux did. Most people don't think about this, though, because ext4 and
most other filesystems that use static inode tables allocate huge
numbers of inodes at mkfs time, so it (usually) never becomes an issue.
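As a quick illustration of the two views (mountpoint / as an example),
df's default block view and its inode view can be compared directly:

```shell
# Default view: block (space) usage.
df -h /
# Inode view: a filesystem can exhaust inodes while blocks remain free.
df -i /
```

On a filesystem with a static inode table, the IUse% column here can hit
100% while the block view still shows free space.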
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2017-02-09 17:54 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-07 16:44 understanding disk space usage Vasco Visser
2017-02-08 3:48 ` Qu Wenruo
2017-02-08 9:55 ` Vasco Visser
2017-02-09 2:53 ` Qu Wenruo
2017-02-09 12:01 ` Vasco Visser
2017-02-08 14:46 ` Peter Grandi
2017-02-08 17:50 ` Austin S. Hemmelgarn
2017-02-08 21:45 ` Peter Grandi
2017-02-09 12:47 ` Austin S. Hemmelgarn
2017-02-08 18:03 ` Hugo Mills
2017-02-09 13:25 ` Adam Borowski
2017-02-09 17:53 ` Austin S. Hemmelgarn