* df free space not correct with raid1 pools with an odd number of devices
@ 2020-07-23 10:24 Jorge Bastos
  2020-07-24  4:40 ` Chris Murphy
  0 siblings, 1 reply; 10+ messages in thread
From: Jorge Bastos @ 2020-07-23 10:24 UTC (permalink / raw)
  To: Btrfs BTRFS

Hi there,

Kernel: 5.7.8
btrfs-progs 5.7

I noticed that df reports the wrong free space when used on a raid1
btrfs pool with an odd number of devices, e.g.:

2 x 500GB (correct)

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       466G  3.4M  465G   1% /mnt/cache

3 x 500GB (not correct)

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       699G  3.4M  466G   1% /mnt/cache

btrfs fi usage -T /mnt/cache
Overall:
    Device size:                   1.36TiB
    Device allocated:              4.06GiB
    Device unallocated:            1.36TiB
    Device missing:                  0.00B
    Used:                        288.00KiB
    Free (estimated):            697.61GiB      (min: 697.61GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:                3.25MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data    Metadata  System
Id Path      RAID1   RAID1     RAID1    Unallocated
-- --------- ------- --------- -------- -----------
 1 /dev/sdd1       -   1.00GiB 32.00MiB   464.73GiB
 2 /dev/sdg1 1.00GiB         -        -   464.76GiB
 3 /dev/sdb1 1.00GiB   1.00GiB 32.00MiB   463.73GiB
-- --------- ------- --------- -------- -----------
   Total     1.00GiB   1.00GiB 32.00MiB     1.36TiB
   Used        0.00B 128.00KiB 16.00KiB





Same for 5 devices, and I assume for any other odd number of devices:

5 x 500GB

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       1.2T  3.4M  931G   1% /mnt/cache

btrfs fi usage -T /mnt/cache
Overall:
    Device size:                   2.27TiB
    Device allocated:              4.06GiB
    Device unallocated:            2.27TiB
    Device missing:                  0.00B
    Used:                        288.00KiB
    Free (estimated):              1.14TiB      (min: 1.14TiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:                3.25MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data    Metadata  System
Id Path      RAID1   RAID1     RAID1    Unallocated
-- --------- ------- --------- -------- -----------
 1 /dev/sdd1       -         - 32.00MiB   465.73GiB
 2 /dev/sdg1       -   1.00GiB        -   464.76GiB
 3 /dev/sdb1       -         - 32.00MiB   465.73GiB
 4 /dev/sde1 1.00GiB   1.00GiB        -   463.76GiB
 5 /dev/sdf1 1.00GiB         -        -   464.76GiB
-- --------- ------- --------- -------- -----------
   Total     1.00GiB   1.00GiB 32.00MiB     2.27TiB
   Used        0.00B 128.00KiB 16.00KiB



Is this a known issue, and if not would it be a btrfs or df problem?

Thanks,
Jorge

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-23 10:24 df free space not correct with raid1 pools with an odd number of devices Jorge Bastos
@ 2020-07-24  4:40 ` Chris Murphy
  2020-07-24  6:53   ` Rolf Wald
  2020-07-24  8:16   ` Jorge Bastos
  0 siblings, 2 replies; 10+ messages in thread
From: Chris Murphy @ 2020-07-24  4:40 UTC (permalink / raw)
  To: Jorge Bastos; +Cc: Btrfs BTRFS

On Thu, Jul 23, 2020 at 4:24 AM Jorge Bastos <jorge.mrbastos@gmail.com> wrote:

> 3 x 500GB (not correct)
>
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sdd1       699G  3.4M  466G   1% /mnt/cache

>
> btrfs fi usage -T /mnt/cache
> Overall:
>     Device size:                   1.36TiB
>     Device allocated:              4.06GiB
>     Device unallocated:            1.36TiB
>     Device missing:                  0.00B
>     Used:                        288.00KiB
>     Free (estimated):            697.61GiB      (min: 697.61GiB)


Looks about correct? 1.36TiB*1024/2=696.32GiB

The discrepancy, with Btrfs Free showing ~1.3GiB more than device/2,
might be cleared up by using --raw and computing from bytes. But Free
rounded up becomes 698GiB, which is 1GiB less than df's reported
699GiB. Again, it might be useful to look at the bytes to see what's
going on, because the two tools round up differently.
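For what it's worth, the rounding can be reproduced from raw bytes. A quick sketch, assuming three nominal 500GB partitions of 500,107,862,016 bytes each (a typical 500GB drive size; the exact partition sizes in the report are unknown):

```python
import math

GiB, TiB = 2**30, 2**40

# Assumed raw size of one ~500GB partition (hypothetical; actual sizes unknown)
dev = 500_107_862_016
total = 3 * dev

print(round(total / TiB, 2))          # 1.36 -- matches "Device size: 1.36TiB"

# raid1 stores two copies of everything, so the naive ceiling is total/2;
# df -h rounds the displayed value *up* to the unit shown:
print(math.ceil(total / 2 / GiB))     # 699 -- matches df's 699G
```

The remaining ~1GiB gap down to the 697.61GiB "Free (estimated)" would come from already-allocated chunks and per-device slack, which this sketch ignores.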




> Same for 5 devices and I assume any other odd number of devices:
>
> 5 x 500GB
>
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sdd1       1.2T  3.4M  931G   1% /mnt/cache
>
> btrfs fi usage -T /mnt/cache
> Overall:
>     Device size:                   2.27TiB
>     Device allocated:              4.06GiB
>     Device unallocated:            2.27TiB
>     Device missing:                  0.00B
>     Used:                        288.00KiB
>     Free (estimated):              1.14TiB      (min: 1.14TiB)

2.27/2 = 1.135, so that's pretty spot on for Free. And yes, df rounds
this up yet again to 1.2TiB because it always rounds up.



-- 
Chris Murphy


* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-24  4:40 ` Chris Murphy
@ 2020-07-24  6:53   ` Rolf Wald
  2020-07-24  8:16   ` Jorge Bastos
  1 sibling, 0 replies; 10+ messages in thread
From: Rolf Wald @ 2020-07-24  6:53 UTC (permalink / raw)
  To: Chris Murphy, Btrfs BTRFS, Jorge Bastos

On 24.07.20 at 06:40, Chris Murphy wrote:
> On Thu, Jul 23, 2020 at 4:24 AM Jorge Bastos <jorge.mrbastos@gmail.com> wrote:
> 
>> 3 x 500GB (not correct)
>>
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/sdd1       699G  3.4M  466G   1% /mnt/cache
> 
>>
>> btrfs fi usage -T /mnt/cache
>> Overall:
>>     Device size:                   1.36TiB
>>     Device allocated:              4.06GiB
>>     Device unallocated:            1.36TiB
>>     Device missing:                  0.00B
>>     Used:                        288.00KiB
>>     Free (estimated):            697.61GiB      (min: 697.61GiB)
> 
> 
> Looks about correct? 1.36TiB*1024/2=696.32GiB
> 
> The discrepancy with Btrfs free showing ~1.3GiB more than device/2,
> might be cleared up by using --raw and computing from bytes. But Free
> rounded up becomes 698GiB which is 1GiB less than df's reported
> 699GiB. Again, it might be useful to look at bytes to see what's going
> on because they're each using different rounding up.
> 


Agreed, but the number of available bytes is definitely wrong on
btrfs raid1 with an odd number of devices. It corresponds neither to
Free in btrfs fi usage nor to the unallocated bytes divided by 2.

e.g. my 3-device btrfs raid1 with three 2T disks:

df -h / -> /dev/sdb2       2.8T    2.1T  521G   80% /

btrfs fi us / ->
Overall:
     Device size:                   5.46TiB
     Device allocated:              4.28TiB
     Device unallocated:            1.18TiB
     Device missing:                  0.00B
     Used:                          4.04TiB
     Free (estimated):            722.24GiB      (min: 722.24GiB)
     Data ratio:                       2.00
     Metadata ratio:                   2.00
     Global reserve:              512.00MiB      (used: 0.00B)
...

I hope this problem can be solved; applications currently show false
information about free space.

Thanks, Rolf

> 
> 
> 
>> Same for 5 devices and I assume any other odd number of devices:
>>
>> 5 x 500GB
>>
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/sdd1       1.2T  3.4M  931G   1% /mnt/cache
>>
>> btrfs fi usage -T /mnt/cache
>> Overall:
>>     Device size:                   2.27TiB
>>     Device allocated:              4.06GiB
>>     Device unallocated:            2.27TiB
>>     Device missing:                  0.00B
>>     Used:                        288.00KiB
>>     Free (estimated):              1.14TiB      (min: 1.14TiB)
> 
> 2.27/2=1.135 So that's pretty spot on for Free. And yes, df will round
> this up yet again to 1.2TiB because it always rounds up.
> 
> 
> 

-- 
Mit freundlichen Grüßen (kind regards) Rolf Wald
LUG-Balista Hamburg e.V., Germany
c/o Bürgerhaus Barmbek
Lorichsstr. 28a
22307 Hamburg
http://www.lug-hamburg.de
No HTML please
S/MIME signed email preferred, encryption wanted


* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-24  4:40 ` Chris Murphy
  2020-07-24  6:53   ` Rolf Wald
@ 2020-07-24  8:16   ` Jorge Bastos
  2020-07-24 20:46     ` Chris Murphy
  1 sibling, 1 reply; 10+ messages in thread
From: Jorge Bastos @ 2020-07-24  8:16 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Btrfs BTRFS

On Fri, Jul 24, 2020 at 5:40 AM Chris Murphy <lists@colorremedies.com> wrote:
>
> Looks about correct? 1.36TiB*1024/2=696.32GiB
>
> The discrepancy with Btrfs free showing ~1.3GiB more than device/2,
> might be cleared up by using --raw and computing from bytes. But Free
> rounded up becomes 698GiB which is 1GiB less than df's reported
> 699GiB. Again, it might be useful to look at bytes to see what's going
> on because they're each using different rounding up.
>
>
>
>
> > Same for 5 devices and I assume any other odd number of devices:
> >
> > 5 x 500GB
> >
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/sdd1       1.2T  3.4M  931G   1% /mnt/cache
> >
> > btrfs fi usage -T /mnt/cache
> > Overall:
> >     Device size:                   2.27TiB
> >     Device allocated:              4.06GiB
> >     Device unallocated:            2.27TiB
> >     Device missing:                  0.00B
> >     Used:                        288.00KiB
> >     Free (estimated):              1.14TiB      (min: 1.14TiB)
>
> 2.27/2=1.135 So that's pretty spot on for Free. And yes, df will round
> this up yet again to 1.2TiB because it always rounds up.
>
>
>
> --
> Chris Murphy

Thanks for the reply. I was referring to the available space as
reported by df; the total capacity is correct, but please note that df
reports about the same available space for both the 2-disk and 3-disk
pools, 465G and 466G respectively:

2 x 500GB

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       466G  3.4M  465G   1% /mnt/cache

3 x 500GB

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       699G  3.4M  466G   1% /mnt/cache

Jorge


* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-24  8:16   ` Jorge Bastos
@ 2020-07-24 20:46     ` Chris Murphy
  2020-07-25  2:19       ` Chris Murphy
  2020-07-25  7:30       ` Andrei Borzenkov
  0 siblings, 2 replies; 10+ messages in thread
From: Chris Murphy @ 2020-07-24 20:46 UTC (permalink / raw)
  To: Jorge Bastos; +Cc: Chris Murphy, Btrfs BTRFS

On Fri, Jul 24, 2020 at 2:16 AM Jorge Bastos <jorge.mrbastos@gmail.com> wrote:
>
> > > Filesystem      Size  Used Avail Use% Mounted on
> > > /dev/sdd1       1.2T  3.4M  931G   1% /mnt/cache

Oh yeah Avail is clearly goofy.


> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sdd1       699G  3.4M  466G   1% /mnt/cache


Anybody know what's up?


-- 
Chris Murphy


* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-24 20:46     ` Chris Murphy
@ 2020-07-25  2:19       ` Chris Murphy
  2020-07-25  7:30       ` Andrei Borzenkov
  1 sibling, 0 replies; 10+ messages in thread
From: Chris Murphy @ 2020-07-25  2:19 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Jorge Bastos, Btrfs BTRFS

https://github.com/kdave/btrfs-progs/issues/277


* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-24 20:46     ` Chris Murphy
  2020-07-25  2:19       ` Chris Murphy
@ 2020-07-25  7:30       ` Andrei Borzenkov
  2020-07-25  7:43         ` Andrei Borzenkov
  1 sibling, 1 reply; 10+ messages in thread
From: Andrei Borzenkov @ 2020-07-25  7:30 UTC (permalink / raw)
  To: Chris Murphy, Jorge Bastos; +Cc: Btrfs BTRFS

24.07.2020 23:46, Chris Murphy wrote:
> On Fri, Jul 24, 2020 at 2:16 AM Jorge Bastos <jorge.mrbastos@gmail.com> wrote:
>>
>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>> /dev/sdd1       1.2T  3.4M  931G   1% /mnt/cache
> 
> Oh yeah Avail is clearly goofy.
> 
> 
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/sdd1       699G  3.4M  466G   1% /mnt/cache
> 
> 
> Anybody know what's up?
> 
> 

df's "Used" and "Avail" are totally independent values.

"Used" is computed as (total - free), both of which are reported by
statfs. By default df does not show "Free"; you need the --output=
option (at least with coreutils df).

"Avail" is computed by the filesystem. Originally the difference came
from "available to root" vs. "available to user".
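In Python terms, a minimal sketch using os.statvfs (which wraps statfs); "/" below is just a stand-in for the /mnt/cache mount from the report:

```python
import os

st = os.statvfs("/")            # stand-in for /mnt/cache in the report
bs = st.f_frsize                # fundamental block size

total = st.f_blocks * bs        # df "Size"
free  = st.f_bfree  * bs        # hidden unless requested via df --output
avail = st.f_bavail * bs        # df "Avail": the filesystem's own estimate

# df derives "Used" from the first two; "Avail" is independent of them:
print(f"Used  = {(total - free) / 2**30:.2f} GiB")
print(f"Avail = {avail / 2**30:.2f} GiB")
```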

btrfs computes "Avail" by simulating chunk allocations on devices. See
super.c:btrfs_calc_avail_data_space(), the final chunk:

        btrfs_descending_sort_devices(devices_info, nr_devices);

        i = nr_devices - 1;
        avail_space = 0;
        while (nr_devices >= rattr->devs_min) {
                num_stripes = min(num_stripes, nr_devices);

                if (devices_info[i].max_avail >= min_stripe_size) {
                        int j;
                        u64 alloc_size;

                        avail_space += devices_info[i].max_avail * num_stripes;
                        alloc_size = devices_info[i].max_avail;
                        for (j = i + 1 - num_stripes; j <= i; j++)
                                devices_info[j].max_avail -= alloc_size;
                }
                i--;
                nr_devices--;
        }

        kfree(devices_info);
        *free_bytes = avail_space;

devices_info holds the device list sorted by unallocated space. We
start with the device with the smallest available space and add its
full available space (adjusted by the allocation profile), then move
to the previous device, which has more free space.

The problem is that if we have three equal-sized devices and the RAID1
profile, the first iteration consumes two full devices, so the third
device cannot be used anymore (raid1 needs two of them). The real
allocator re-evaluates free space on every allocation and so
alternates between all three devices.
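The effect is easy to reproduce outside the kernel. Below is a rough Python translation of that loop next to a per-chunk greedy allocator; the function names, the unit-free sizes, and the chunk granularity are my own simplifications, not kernel code:

```python
def statfs_estimate(unalloc, devs_min=2, num_stripes=2, min_stripe=1):
    """Mirror of the btrfs_calc_avail_data_space() loop above (simplified)."""
    info = sorted(unalloc, reverse=True)      # descending, largest first
    nr, i, avail = len(info), len(info) - 1, 0
    while nr >= devs_min:
        ns = min(num_stripes, nr)
        if info[i] >= min_stripe:
            avail += info[i] * ns             # counts the whole device at once
            alloc = info[i]
            for j in range(i + 1 - ns, i + 1):
                info[j] -= alloc
        i -= 1
        nr -= 1
    return avail // 2                         # raid1 data ratio

def chunkwise_estimate(unalloc, chunk=1, num_stripes=2):
    """What the real allocator does: one chunk at a time, most-free first."""
    devs, total = list(unalloc), 0
    while True:
        devs.sort(reverse=True)
        if devs[num_stripes - 1] < chunk:     # fewer than 2 devices with room
            return total
        for j in range(num_stripes):
            devs[j] -= chunk
        total += chunk

print(statfs_estimate([500, 500, 500]))       # 500 -- one device's worth
print(chunkwise_estimate([500, 500, 500]))    # 750 -- what's actually usable
```

With three equal devices the statfs simulation burns two full devices in its first pass and never counts the third, while chunk-by-chunk allocation reaches the true raw/2 capacity.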




* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-25  7:30       ` Andrei Borzenkov
@ 2020-07-25  7:43         ` Andrei Borzenkov
  2020-07-25 10:04           ` Jorge Bastos
  0 siblings, 1 reply; 10+ messages in thread
From: Andrei Borzenkov @ 2020-07-25  7:43 UTC (permalink / raw)
  To: Chris Murphy, Jorge Bastos; +Cc: Btrfs BTRFS

25.07.2020 10:30, Andrei Borzenkov wrote:
> 24.07.2020 23:46, Chris Murphy wrote:
>> On Fri, Jul 24, 2020 at 2:16 AM Jorge Bastos <jorge.mrbastos@gmail.com> wrote:
>>>
>>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>>> /dev/sdd1       1.2T  3.4M  931G   1% /mnt/cache
>>
>> Oh yeah Avail is clearly goofy.
>>
>>
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/sdd1       699G  3.4M  466G   1% /mnt/cache
>>
>>
>> Anybody know what's up?
>>
>>
> 
> df "Used" and "Avail" are totally independent values.
> 
> "Used" is computed as (total - free), both of which are reported by
> statfs. By default df does not show "Free", you need to use --output=
> option (at least using coreutils df).
> 
> "Avail" is computed by filesystem. Originally the difference comes from
> "available to root" and "available to user" .
> 
> btrfs computes "Avail" by simulating chunk allocations on devices. See
> super.c:btrfs_calc_avail_data_space(), the final chunk:
> 
>         btrfs_descending_sort_devices(devices_info, nr_devices);
> 
>         i = nr_devices - 1;
>         avail_space = 0;
>         while (nr_devices >= rattr->devs_min) {
>                 num_stripes = min(num_stripes, nr_devices);
> 
>                 if (devices_info[i].max_avail >= min_stripe_size) {
>                         int j;
>                         u64 alloc_size;
> 
>                         avail_space += devices_info[i].max_avail * num_stripes;
>                         alloc_size = devices_info[i].max_avail;
>                         for (j = i + 1 - num_stripes; j <= i; j++)
>                                 devices_info[j].max_avail -= alloc_size;
>                 }
>                 i--;
>                 nr_devices--;
>         }
> 
>         kfree(devices_info);
>         *free_bytes = avail_space;
> 
> devices_info holds device list sorted by unallocated space. We start
> with device with smallest available space and add its full available
> space (adjusted by allocation profile), then move to the previous device
> with more free space.
> 
> The problem is that if we have three equal sized devices and RAID1
> profile, the first iteration consumes two full devices, thus third
> device cannot be used anymore (we need two of them for raid1). Real
> allocator will evaluate free space every time and so alternate between
> all three devices.
> 
> 

OTOH, this can be read as the most pessimistic estimation. If you have
three 250G devices in RAID1 and you allocate 250G of data in one file,
you consume two full devices and won't be able to allocate any new
data at all (or, for that matter, any new metadata either).

So whatever value btrfs returns will be wrong for some allocation
pattern.


* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-25  7:43         ` Andrei Borzenkov
@ 2020-07-25 10:04           ` Jorge Bastos
  2020-07-25 10:21             ` Andrei Borzenkov
  0 siblings, 1 reply; 10+ messages in thread
From: Jorge Bastos @ 2020-07-25 10:04 UTC (permalink / raw)
  To: Andrei Borzenkov; +Cc: Chris Murphy, Btrfs BTRFS

On Sat, Jul 25, 2020 at 8:43 AM Andrei Borzenkov <arvidjaar@gmail.com> wrote:
>

>
> OTOH, this is the correct if the most pessimistic estimation either. If
> you have three 250G RAID1 devices and you allocate 250G data in one file
> you consume two full devices and won't be able to allocate new data at
> all (or for that matter no new metadata either).
>
>
> So whatever value btrfs returns will be wrong for some allocation pattern.

I considered that, but wouldn't a single file still be striped, with
blocks allocated across all devices on a most-free-space basis?

E.g.: 3 x 500GB RAID1 pool:

$ btrfs fi usage -T /mnt/cache
Overall:
    Device size:                   1.36TiB
    Device allocated:              2.13GiB
    Device unallocated:            1.36TiB
    Device missing:                  0.00B
    Used:                        288.00KiB
    Free (estimated):            697.61GiB      (min: 697.61GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:                3.25MiB      (used: 32.00KiB)
    Multiple profiles:                  no

             Data     Metadata  System
Id Path      RAID1    RAID1     RAID1    Unallocated
-- --------- -------- --------- -------- -----------
 1 /dev/sdb1 36.00MiB         - 32.00MiB   465.69GiB
 2 /dev/sde1 36.00MiB   1.00GiB 32.00MiB   464.69GiB
 3 /dev/sdf1        -   1.00GiB        -   464.76GiB
-- --------- -------- --------- -------- -----------
   Total     36.00MiB   1.00GiB 32.00MiB     1.36TiB
   Used         0.00B 128.00KiB 16.00KiB



$ fallocate -l 690G /mnt/cache/file
$ btrfs fi usage -T /mnt/cache
Overall:
    Device size:                   1.36TiB
    Device allocated:              1.35TiB
    Device unallocated:           13.15GiB
    Device missing:                  0.00B
    Used:                          1.35TiB
    Free (estimated):              7.61GiB      (min: 7.61GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:                3.25MiB      (used: 0.00B)
    Multiple profiles:                  no

             Data      Metadata  System
Id Path      RAID1     RAID1     RAID1     Unallocated
-- --------- --------- --------- --------- -----------
 1 /dev/sdb1 461.04GiB         -  32.00MiB     4.69GiB
 2 /dev/sde1 460.04GiB   1.00GiB  32.00MiB     4.69GiB
 3 /dev/sdf1 461.00GiB   1.00GiB         -     3.76GiB
-- --------- --------- --------- --------- -----------
   Total     691.04GiB   1.00GiB  32.00MiB    13.15GiB
   Used      690.00GiB 976.00KiB 112.00KiB

Jorge


* Re: df free space not correct with raid1 pools with an odd number of devices
  2020-07-25 10:04           ` Jorge Bastos
@ 2020-07-25 10:21             ` Andrei Borzenkov
  0 siblings, 0 replies; 10+ messages in thread
From: Andrei Borzenkov @ 2020-07-25 10:21 UTC (permalink / raw)
  To: Jorge Bastos; +Cc: Chris Murphy, Btrfs BTRFS

25.07.2020 13:04, Jorge Bastos wrote:
> On Sat, Jul 25, 2020 at 8:43 AM Andrei Borzenkov <arvidjaar@gmail.com> wrote:
>>
> 
>>
>> OTOH, this is the correct if the most pessimistic estimation either. If
>> you have three 250G RAID1 devices and you allocate 250G data in one file
>> you consume two full devices and won't be able to allocate new data at
>> all (or for that matter no new metadata either).
>>
>>
>> So whatever value btrfs returns will be wrong for some allocation pattern.
> 
> I considered that but wouldn't a single file still be stripped and
> blocks allocated to all devices, on a most free space basis?
> 

Yes, I was unsure and stand corrected. It seems real allocation
happens per chunk and so gets distributed across all devices.

Sorry.
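For completeness, per-chunk placement on the two most-free devices also explains the roughly even spread in the tables quoted below. A sketch with 1GiB chunks and idealized 465GiB devices (real partition sizes differ slightly):

```python
def place_chunks(sizes_gib, chunk=1, num_stripes=2):
    """Greedy per-chunk raid1 placement: always pick the most-free devices."""
    free = list(sizes_gib)
    placed = 0
    while True:
        order = sorted(range(len(free)), key=lambda d: -free[d])
        if free[order[num_stripes - 1]] < chunk:
            break
        for d in order[:num_stripes]:
            free[d] -= chunk
        placed += chunk
    return placed, free

placed, leftover = place_chunks([465, 465, 465])
print(placed)      # 697 -- close to the 697.61GiB "Free (estimated)"
print(leftover)    # devices end within one chunk of each other
```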

> E.g.: 3 x 500GB RAID1 pool:
> 
> $ btrfs fi usage -T /mnt/cache
> Overall:
>     Device size:                   1.36TiB
>     Device allocated:              2.13GiB
>     Device unallocated:            1.36TiB
>     Device missing:                  0.00B
>     Used:                        288.00KiB
>     Free (estimated):            697.61GiB      (min: 697.61GiB)
>     Data ratio:                       2.00
>     Metadata ratio:                   2.00
>     Global reserve:                3.25MiB      (used: 32.00KiB)
>     Multiple profiles:                  no
> 
>              Data     Metadata  System
> Id Path      RAID1    RAID1     RAID1    Unallocated
> -- --------- -------- --------- -------- -----------
>  1 /dev/sdb1 36.00MiB         - 32.00MiB   465.69GiB
>  2 /dev/sde1 36.00MiB   1.00GiB 32.00MiB   464.69GiB
>  3 /dev/sdf1        -   1.00GiB        -   464.76GiB
> -- --------- -------- --------- -------- -----------
>    Total     36.00MiB   1.00GiB 32.00MiB     1.36TiB
>    Used         0.00B 128.00KiB 16.00KiB
> 
> 
> 
> $ fallocate -l 690G /mnt/cache/file
> $ btrfs fi usage -T /mnt/cache
> Overall:
>     Device size:                   1.36TiB
>     Device allocated:              1.35TiB
>     Device unallocated:           13.15GiB
>     Device missing:                  0.00B
>     Used:                          1.35TiB
>     Free (estimated):              7.61GiB      (min: 7.61GiB)
>     Data ratio:                       2.00
>     Metadata ratio:                   2.00
>     Global reserve:                3.25MiB      (used: 0.00B)
>     Multiple profiles:                  no
> 
>              Data      Metadata  System
> Id Path      RAID1     RAID1     RAID1     Unallocated
> -- --------- --------- --------- --------- -----------
>  1 /dev/sdb1 461.04GiB         -  32.00MiB     4.69GiB
>  2 /dev/sde1 460.04GiB   1.00GiB  32.00MiB     4.69GiB
>  3 /dev/sdf1 461.00GiB   1.00GiB         -     3.76GiB
> -- --------- --------- --------- --------- -----------
>    Total     691.04GiB   1.00GiB  32.00MiB    13.15GiB
>    Used      690.00GiB 976.00KiB 112.00KiB
> 
> Jorge
> 


