* unable to use all spaces
@ 2021-12-15 14:31 Jingyun He
  2021-12-15 15:50 ` David Sterba
  0 siblings, 1 reply; 11+ messages in thread
From: Jingyun He @ 2021-12-15 14:31 UTC (permalink / raw)
  To: linux-btrfs

Hello,
I have a 14TB WDC HM-SMR disk formatted with btrfs, and it looks like
I'm unable to fill the disk completely.
E.g. 'btrfs fi usage /disk1/' reports:
Free (estimated): 128.95GiB (min: 128.95GiB)

It still shows 100+GiB available, but I'm unable to put more files on it.

Do you know if there are any mkfs/mount options I can use to reach
maximum capacity, like 'mkfs.ext4 -m 0 -O ^has_journal -T largefile4'?

Thank you very much.


* Re: unable to use all spaces
  2021-12-15 14:31 unable to use all spaces Jingyun He
@ 2021-12-15 15:50 ` David Sterba
  2021-12-15 23:58   ` Qu Wenruo
                     ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: David Sterba @ 2021-12-15 15:50 UTC (permalink / raw)
  To: Jingyun He; +Cc: linux-btrfs

On Wed, Dec 15, 2021 at 10:31:13PM +0800, Jingyun He wrote:
> Hello,
> I have a 14TB WDC HM-SMR disk formatted with btrfs, and it looks like
> I'm unable to fill the disk completely.
> E.g. 'btrfs fi usage /disk1/' reports:
> Free (estimated): 128.95GiB (min: 128.95GiB)
>
> It still shows 100+GiB available, but I'm unable to put more files on it.

We'd need more information to diagnose that, e.g. the output of 'btrfs
fi df', to see whether the metadata space is exhausted or whether the
remaining 128G accounts for the unusable space in the zones (this is
also shown in the 'fi df' output).
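
For reference, the relevant reports can be gathered like this (a
minimal sketch; the mount point is taken from the original mail):

  # per-profile totals, including the "zone unusable" counter on zoned drives
  btrfs filesystem df /disk1
  # allocation overview: overall totals plus per-device unallocated space
  btrfs filesystem usage -T /disk1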

> Do you know if there are any mkfs/mount options I can use to reach
> maximum capacity, like 'mkfs.ext4 -m 0 -O ^has_journal -T largefile4'?

There are no such options; the space should be usable over the full
range, subject to the chunk allocations.


* Re: unable to use all spaces
  2021-12-15 15:50 ` David Sterba
@ 2021-12-15 23:58   ` Qu Wenruo
  2021-12-21  3:38     ` Zygo Blaxell
  2021-12-17  7:50   ` Johannes Thumshirn
  2021-12-17  7:51   ` Johannes Thumshirn
  2 siblings, 1 reply; 11+ messages in thread
From: Qu Wenruo @ 2021-12-15 23:58 UTC (permalink / raw)
  To: dsterba, Jingyun He, linux-btrfs



On 2021/12/15 23:50, David Sterba wrote:
> On Wed, Dec 15, 2021 at 10:31:13PM +0800, Jingyun He wrote:
>> Hello,
>> I have a 14TB WDC HM-SMR disk formatted with btrfs, and it looks like
>> I'm unable to fill the disk completely.
>> E.g. 'btrfs fi usage /disk1/' reports:
>> Free (estimated): 128.95GiB (min: 128.95GiB)
>>
>> It still shows 100+GiB available, but I'm unable to put more files on it.
>
> We'd need more information to diagnose that, e.g. the output of 'btrfs
> fi df', to see whether the metadata space is exhausted or whether the
> remaining 128G accounts for the unusable space in the zones (this is
> also shown in the 'fi df' output).

A little off-topic, but IMHO we should really make our 'fi usage' and
vanilla 'df' take metadata and unallocated space into consideration.

The vanilla 'df' command reporting more space than we can really use is
already causing new btrfs users problems.

We can keep teaching users, but there are still tools that rely
completely on vanilla 'df' output to plan their space usage.

Thus it's not really something that can be solved purely by education.

My goal is to make vanilla 'df' output take metadata/unallocated space
into consideration.

Unfortunately I don't have a solid plan yet, but maybe we can start by
returning 0 available space when there is no more unallocated space.
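
As a rough userspace approximation of that check (a sketch only; it
assumes the byte-exact output of 'btrfs filesystem usage -b' and a
mount point of /mnt):

  # if no unallocated space is left, treat df's 'available' as 0
  unalloc=$(btrfs filesystem usage -b /mnt | awk '/Device unallocated:/ {print $3}')
  if [ "$unalloc" -eq 0 ]; then
      echo "no unallocated space left: report 0 available"
  fi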

Maybe later we can have a more comprehensive available-space calculation.

(Other filesystems like ext4/xfs already do something similar by
under-reporting available space.)

Thanks,
Qu


* Re: unable to use all spaces
  2021-12-15 15:50 ` David Sterba
  2021-12-15 23:58   ` Qu Wenruo
@ 2021-12-17  7:50   ` Johannes Thumshirn
  2021-12-17  7:51   ` Johannes Thumshirn
  2 siblings, 0 replies; 11+ messages in thread
From: Johannes Thumshirn @ 2021-12-17  7:50 UTC (permalink / raw)
  To: dsterba, Jingyun He; +Cc: linux-btrfs

On 15/12/2021 16:51, David Sterba wrote:
> On Wed, Dec 15, 2021 at 10:31:13PM +0800, Jingyun He wrote:
>> Hello,
>> I have a 14TB WDC HM-SMR disk formatted with btrfs, and it looks like
>> I'm unable to fill the disk completely.
>> E.g. 'btrfs fi usage /disk1/' reports:
>> Free (estimated): 128.95GiB (min: 128.95GiB)
>>
>> It still shows 100+GiB available, but I'm unable to put more files on it.
> 
> We'd need more information to diagnose that, e.g. the output of 'btrfs
> fi df', to see whether the metadata space is exhausted or whether the
> remaining 128G accounts for the unusable space in the zones (this is
> also shown in the 'fi df' output).

Can you also share the output of 'blkzone report /dev/sdX'?


* Re: unable to use all spaces
  2021-12-15 15:50 ` David Sterba
  2021-12-15 23:58   ` Qu Wenruo
  2021-12-17  7:50   ` Johannes Thumshirn
@ 2021-12-17  7:51   ` Johannes Thumshirn
  2021-12-17 17:57     ` Jingyun He
  2 siblings, 1 reply; 11+ messages in thread
From: Johannes Thumshirn @ 2021-12-17  7:51 UTC (permalink / raw)
  To: dsterba, Jingyun He; +Cc: linux-btrfs

On 15/12/2021 16:51, David Sterba wrote:
> On Wed, Dec 15, 2021 at 10:31:13PM +0800, Jingyun He wrote:
>> Hello,
>> I have a 14TB WDC HM-SMR disk formatted with btrfs, and it looks like
>> I'm unable to fill the disk completely.
>> E.g. 'btrfs fi usage /disk1/' reports:
>> Free (estimated): 128.95GiB (min: 128.95GiB)
>>
>> It still shows 100+GiB available, but I'm unable to put more files on it.
> 
> We'd need more information to diagnose that, e.g. the output of 'btrfs
> fi df', to see whether the metadata space is exhausted or whether the
> remaining 128G accounts for the unusable space in the zones (this is
> also shown in the 'fi df' output).

Can you also share the output of 'blkzone report /dev/sdX'? 


Thanks a lot,
	Johannes


* Re: unable to use all spaces
  2021-12-17  7:51   ` Johannes Thumshirn
@ 2021-12-17 17:57     ` Jingyun He
  2021-12-17 18:00       ` Jingyun He
  0 siblings, 1 reply; 11+ messages in thread
From: Jingyun He @ 2021-12-17 17:57 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: dsterba, linux-btrfs

Hello,
Here is the output of 'btrfs fi usage':

Overall:
    Device size:   14.55TiB
    Device allocated:   14.55TiB
    Device unallocated:   1.75GiB
    Device missing:     0.00B
    Device zone unusable:     0.00B
    Device zone size:     0.00B
    Used:   14.42TiB
    Free (estimated): 129.49GiB (min: 129.49GiB)
    Free (statfs, df): 129.49GiB
    Data ratio:       1.00
    Metadata ratio:       1.00
    Global reserve: 512.00MiB (used: 112.00KiB)
    Multiple profiles:         no
Data,single: Size:14.53TiB, Used:14.41TiB (99.14%)
   /dev/sds   14.53TiB
Metadata,single: Size:18.00GiB, Used:14.75GiB (81.95%)
   /dev/sds   18.00GiB
System,single: Size:256.00MiB, Used:6.05MiB (2.36%)
   /dev/sds 256.00MiB
Unallocated:
   /dev/sds   1.75GiB

And I'm unable to add another 30GiB file.
Do you have any advice?

BTW, 'blkzone report /dev/sds' returns:
"blkzone: /dev/sds: unable to determine zone size"

Thank you.

On Fri, Dec 17, 2021 at 3:51 PM Johannes Thumshirn
<Johannes.Thumshirn@wdc.com> wrote:
> Can you also share the output of 'blkzone report /dev/sdX'?
>
>
> Thanks a lot,
>         Johannes


* Re: unable to use all spaces
  2021-12-17 17:57     ` Jingyun He
@ 2021-12-17 18:00       ` Jingyun He
  2021-12-19 18:04         ` Jingyun He
  0 siblings, 1 reply; 11+ messages in thread
From: Jingyun He @ 2021-12-17 18:00 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: dsterba, linux-btrfs

Sorry, my mistake, this device is not an HM-SMR device.
But I still cannot fill up the device.

If I mount it with the nodatacow option, will it save some space?

Thank you.



* Re: unable to use all spaces
  2021-12-17 18:00       ` Jingyun He
@ 2021-12-19 18:04         ` Jingyun He
  2021-12-20  3:06           ` Jingyun He
  0 siblings, 1 reply; 11+ messages in thread
From: Jingyun He @ 2021-12-19 18:04 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: dsterba, linux-btrfs

Hello,
Here is the output of 'btrfs fi usage /mnt/' for the 14TB HM-SMR device.

Overall:
    Device size:   12.73TiB
    Device allocated:   12.73TiB
    Device unallocated:   1.75GiB
    Device missing:     0.00B
    Device zone unusable:     0.00B
    Device zone size: 256.00MiB
    Used:   12.66TiB
    Free (estimated):   72.29GiB (min: 72.29GiB)
    Free (statfs, df):   72.29GiB
    Data ratio:       1.00
    Metadata ratio:       1.00
    Global reserve: 512.00MiB (used: 640.00KiB)
    Multiple profiles:         no
Data,single: Size:12.71TiB, Used:12.64TiB (99.46%)
   /dev/sda   12.71TiB
Metadata,single: Size:20.00GiB, Used:15.91GiB (79.55%)
   /dev/sda   20.00GiB
System,single: Size:256.00MiB, Used:5.28MiB (2.06%)
   /dev/sda 256.00MiB
Unallocated:
   /dev/sda   1.75GiB

And I'm unable to add more files to this device:
[root@server1 mnt]# > a
-bash: a: No space left on device

Can anyone help me?

Thanks.

On Sat, Dec 18, 2021 at 2:00 AM Jingyun He <jingyun.ho@gmail.com> wrote:
>
> Sorry, my mistake, this device is not an HM-SMR device.
> But I still cannot fill up the device.
>
> If I mount it with the nodatacow option, will it save some space?
>
> Thank you.


* Re: unable to use all spaces
  2021-12-19 18:04         ` Jingyun He
@ 2021-12-20  3:06           ` Jingyun He
  0 siblings, 0 replies; 11+ messages in thread
From: Jingyun He @ 2021-12-20  3:06 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: dsterba, linux-btrfs

Hi,
Running "btrfs balance start -v -dusage=5 /mnt/" resolved this issue.

On Mon, Dec 20, 2021 at 2:04 AM Jingyun He <jingyun.ho@gmail.com> wrote:
>
> Hello,
> Here is the output of 'btrfs fi usage /mnt/' for the 14TB HM-SMR device.
>
> Overall:
>     Device size:   12.73TiB
>     Device allocated:   12.73TiB
>     Device unallocated:   1.75GiB
>     Device missing:     0.00B
>     Device zone unusable:     0.00B
>     Device zone size: 256.00MiB
>     Used:   12.66TiB
>     Free (estimated):   72.29GiB (min: 72.29GiB)
>     Free (statfs, df):   72.29GiB
>     Data ratio:       1.00
>     Metadata ratio:       1.00
>     Global reserve: 512.00MiB (used: 640.00KiB)
>     Multiple profiles:         no
> Data,single: Size:12.71TiB, Used:12.64TiB (99.46%)
>    /dev/sda   12.71TiB
> Metadata,single: Size:20.00GiB, Used:15.91GiB (79.55%)
>    /dev/sda   20.00GiB
> System,single: Size:256.00MiB, Used:5.28MiB (2.06%)
>    /dev/sda 256.00MiB
> Unallocated:
>    /dev/sda   1.75GiB
>
> And I'm unable to add more files to this device:
> [root@server1 mnt]# > a
> -bash: a: No space left on device
>
> Can anyone help me?
>
> Thanks.


* Re: unable to use all spaces
  2021-12-15 23:58   ` Qu Wenruo
@ 2021-12-21  3:38     ` Zygo Blaxell
  2021-12-21  4:45       ` Qu Wenruo
  0 siblings, 1 reply; 11+ messages in thread
From: Zygo Blaxell @ 2021-12-21  3:38 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: dsterba, Jingyun He, linux-btrfs

On Thu, Dec 16, 2021 at 07:58:39AM +0800, Qu Wenruo wrote:
> A little off-topic, but IMHO we should really make our 'fi usage' and
> vanilla 'df' take metadata and unallocated space into consideration.
> 
> The vanilla 'df' command reporting more space than we can really use is
> already causing new btrfs users problems.
> 
> We can keep teaching users, but there are still tools that rely
> completely on vanilla 'df' output to plan their space usage.
> 
> Thus it's not really something that can be solved purely by education.
> 
> My goal is to make vanilla 'df' output take metadata/unallocated space
> into consideration.
> 
> Unfortunately I don't have a solid plan yet, but maybe we can start by
> returning 0 available space when there is no more unallocated space.

It's a really bad idea to abruptly flip to zero free space.  If df reports
"3.7TB free", and then you write 4K, and then df reports "zero free",
it's no better than if the write had just returned ENOSPC with 3.7TB free.
If the number provides no predictive information about how the filesystem
will respond to near-future allocations, then it's useless.

Worse, automated responses to a reported low free space number might
get triggered, and wipe out significant amounts of data in a futile
attempt to get some usable free space (it's futile because the required
response is data balance not file deletion, and if there are snapshots
the response will make the metadata problem worse for a long time before
making it better).

If we have good information about the ratio of data to metadata on
the filesystem, we could gradually reduce the reported free space,
always reporting a number between zero and the true number of free data
blocks (i.e. the lower of "true free data blocks" and "estimated data
blocks that could be allocated if all the remaining metadata space was
completely consumed at the current data:metadata ratio").  That would
mean that instead of 3.7TB free, we might report 3.5TB free if we have
0.8MB of free space for metadata, and it would drop to 1.7TB as we
drop to 0.4MB of free metadata space (after deducting global reserve).
In this situation, writing 4K to the filesystem might decrease the free
space reported by 4MB, but it would happen while 4MB is 0.2% of the free
space on the filesystem, far enough in advance of reaching zero that an
attentive sysadmin or robot could avoid a disaster before it happens.
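
As an illustration with the numbers reported earlier in this thread
(single profile, so both ratios are 1.0; a sketch of the proposed
clamp, not an exact kernel formula):

  data:metadata ratio   = 12.64TiB used / 15.91GiB used ~ 813:1
  free metadata         = 20.00GiB - 15.91GiB           = 4.09GiB
  metadata-limited data = 4.09GiB * 813                 ~ 3.25TiB
  reported free         = min(72.29GiB, 3.25TiB)        = 72.29GiB

i.e. with that much metadata headroom the clamp would not yet change
the reported number; it only starts pulling it down once free metadata
becomes the scarcer resource.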

On the other hand, if we are tracking those statistics and they're
accurate, we could use them to preallocate more metadata and prevent
surprising shortages of metadata space.  We'd also have to stop metadata
balance from wiping out preallocations.  We'd have correct df free space
numbers based on that--if we need 1% of the filesystem for metadata,
we'd actually allocate 1% of the filesystem for metadata, and df would
report 1% less free space.

We already had the abrupt zeroing behavior and removed it in 5.4 for
that reason (and also the fact that the zero trigger calculation had a
long-standing bug).

The entire discussion is moot as long as df is as wildly inaccurate as
it is now.  Step 0 of this plan would be to give df a working algorithm
to figure out how much space the filesystem has.

> Maybe later we can have more comprehensive available space calculation.
> 
> (Other fses like ext4/xfs already does similar behavior by
> under-reporting available space)

ext4 has "superuser reserved" space which isn't really underreporting,
it's just taking the real free space number (which ext4 can compute
accurately) and subtracting a user-configured constant value.  root can
still fill the filesystem all the way to zero, less a few blocks for
directories and indirect block lists.

ext4 does preallocate space for metadata and the filesystem does give
ENOSPC with non-zero df free when metadata runs out.  There's a data
block to inode ratio that is fixed at mkfs time, and you get up to (size
of filesystem / that number) of inodes and not a single file more.
The other ext4 "metadata" (indirect block lists and directories) is
stored in its data blocks, so it shows up in df's available space number
and behaves the way users expect.  ext4 has no snapshots or reflinks
so the other btrfs special metadata cases can't happen on ext4.
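
The ext4 ratio can be seen (and chosen) at mkfs time; a sketch, with
the device name as a placeholder:

  # one inode per 64KiB of data blocks; the ratio cannot be changed later
  mkfs.ext4 -i 65536 /dev/sdX
  # inspect the resulting fixed inode budget
  tune2fs -l /dev/sdX | grep -i 'inode count'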



* Re: unable to use all spaces
  2021-12-21  3:38     ` Zygo Blaxell
@ 2021-12-21  4:45       ` Qu Wenruo
  0 siblings, 0 replies; 11+ messages in thread
From: Qu Wenruo @ 2021-12-21  4:45 UTC (permalink / raw)
  To: Zygo Blaxell, Qu Wenruo; +Cc: dsterba, Jingyun He, linux-btrfs



On 2021/12/21 11:38, Zygo Blaxell wrote:
> On Thu, Dec 16, 2021 at 07:58:39AM +0800, Qu Wenruo wrote:
>> Unfortunately I don't have a solid plan yet, but maybe we can start by
>> returning 0 available space when there is no more unallocated space.
> 
> It's a really bad idea to abruptly flip to zero free space.

Indeed, so my plan is to ensure a smooth transition between metadata
being exhausted and the last unallocated space being allocated.

>  If df reports
> "3.7TB free", and then you write 4K, and then df reports "zero free",
> it's no better than if the write had just returned ENOSPC with 3.7TB free.
> If the number provides no predictive information about how the filesystem
> will respond to near-future allocations, then it's useless.
> 
> Worse, automated responses to a reported low free space number might
> get triggered, and wipe out significant amounts of data in a futile
> attempt to get some usable free space (it's futile because the required
> response is data balance not file deletion, and if there are snapshots
> the response will make the metadata problem worse for a long time before
> making it better).
> 
> If we have good information about the ratio of data to metadata on
> the filesystem, we could gradually reduce the reported free space,
> always reporting a number between zero and the true number of free data
> blocks (i.e. the lower of "true free data blocks" and "estimated data
> blocks that could be allocated if all the remaining metadata space was
> completely consumed at the current data:metadata ratio").  That would
> mean that instead of 3.7TB free, we might report 3.5TB free if we have
> 0.8MB of free space for metadata, and it would drop to 1.7TB as we
> drop to 0.4MB of free metadata space (after deducting global reserve).
> In this situation, writing 4K to the filesystem might decrease the free
> space reported by 4MB, but it would happen while 4MB is 0.2% of the free
> space on the filesystem, far enough in advance of reaching zero that an
> attentive sysadmin or robot could avoid a disaster before it happens.

Exactly what I'm thinking.

But I don't yet have a good formula for that to implement in the kernel.

> 
> On the other hand, if we are tracking those statistics and they're
> accurate, we could use them to preallocate more metadata and prevent
> surprising shortages of metadata space.

That conflicts with the metadata over-commit behavior.

For now, we don't allocate extra new metadata space as long as it can be
covered by the unallocated space.

One problem with preallocation is that it can be reclaimed if we don't
really use that space immediately.

And during the reclaim window, if we really need that space because it's
the last free metadata block group, we can hit -ENOSPC prematurely.

Thanks,
Qu

>  We'd also have to stop metadata
> balance from wiping out preallocations.  We'd have correct df free space
> numbers based on that--if we need 1% of the filesystem for metadata,
> we'd actually allocate 1% of the filesystem for metadata, and df would
> report 1% less free space.
> 
> We already had the abrupt zeroing behavior and removed it in 5.4 for
> that reason (and also the fact that the zero trigger calculation had a
> long-standing bug).
> 
> The entire discussion is moot as long as df is as wildly inaccurate as
> it is now.  Step 0 of this plan would be to give df a working algorithm
> to figure out how much space the filesystem has.
> 
>> Maybe later we can have a more comprehensive available-space calculation.
>>
>> (Other filesystems like ext4/xfs already do something similar by
>> under-reporting available space.)
> 
> ext4 has "superuser reserved" space which isn't really underreporting,
> it's just taking the real free space number (which ext4 can compute
> accurately) and subtracting a user-configured constant value.  root can
> still fill the filesystem all the way to zero, less a few blocks for
> directories and indirect block lists.
> 
> ext4 does preallocate space for metadata and the filesystem does give
> ENOSPC with non-zero df free when metadata runs out.  There's a data
> block to inode ratio that is fixed at mkfs time, and you get up to (size
> of filesystem / that number) of inodes and not a single file more.
> The other ext4 "metadata" (indirect block lists and directories) is
> stored in its data blocks, so it shows up in df's available space number
> and behaves the way users expect.  ext4 has no snapshots or reflinks
> so the other btrfs special metadata cases can't happen on ext4.



Thread overview: 11+ messages
2021-12-15 14:31 unable to use all spaces Jingyun He
2021-12-15 15:50 ` David Sterba
2021-12-15 23:58   ` Qu Wenruo
2021-12-21  3:38     ` Zygo Blaxell
2021-12-21  4:45       ` Qu Wenruo
2021-12-17  7:50   ` Johannes Thumshirn
2021-12-17  7:51   ` Johannes Thumshirn
2021-12-17 17:57     ` Jingyun He
2021-12-17 18:00       ` Jingyun He
2021-12-19 18:04         ` Jingyun He
2021-12-20  3:06           ` Jingyun He
