* Lost about 3TB
       [not found] <134025801.432834337.1507024250294.JavaMail.root@zimbra65-e11.priv.proxad.net>
@ 2017-10-03 10:44 ` btrfs.fredo
  2017-10-03 10:54   ` Hugo Mills
  0 siblings, 1 reply; 7+ messages in thread
From: btrfs.fredo @ 2017-10-03 10:44 UTC (permalink / raw)
  To: linux-btrfs

Hi,

I can't figure out where 3 TB of a 36 TB BTRFS volume (on LVM) have gone!

I know BTRFS can be tricky about space usage when many physical drives are combined in a RAID setup, but my configuration is a very simple BTRFS volume without RAID (single data profile) using the whole disk (perhaps I did something wrong with the LVM setup?).

My BTRFS volume is mounted on /RAID01/.

There's only one folder in /RAID01/, shared with Samba; Windows also sees a total of 28 TB used.

It contains only 443 files (big backup files created by Veeam); most are larger than 1 GB, and some are up to 5 TB.

######> du -hs /RAID01/
28T     /RAID01/

If I sum up the sizes reported by ######> find . -printf '%s\n'
I also get 28 TB.
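
For reference, the summation can be done with a pipeline along these lines (the awk part is illustrative):

######> find /RAID01/ -type f -printf '%s\n' | awk '{ s += $1 } END { printf "%.2f TiB\n", s / 1024^4 }'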

I extracted the btrfs binary from the v4.9.1 RPM and ran ######> btrfs fi du
on each file; the result is also 28 TB.



OS : CentOS Linux release 7.3.1611 (Core)
btrfs-progs v4.4.1


######> ssm list

-------------------------------------------------------------------------
Device        Free      Used      Total  Pool                 Mount point
-------------------------------------------------------------------------
/dev/sda                       36.39 TB                       PARTITIONED
/dev/sda1                     200.00 MB                       /boot/efi
/dev/sda2                       1.00 GB                       /boot
/dev/sda3  0.00 KB  36.32 TB   36.32 TB  lvm_pool
/dev/sda4  0.00 KB  54.00 GB   54.00 GB  cl_xxx-xxxamrepo-01
-------------------------------------------------------------------------
-------------------------------------------------------------------
Pool                    Type   Devices     Free      Used     Total
-------------------------------------------------------------------
cl_xxx-xxxamrepo-01     lvm    1        0.00 KB  54.00 GB  54.00 GB
lvm_pool                lvm    1        0.00 KB  36.32 TB  36.32 TB
btrfs_lvm_pool-lvol001  btrfs  1        4.84 TB  36.32 TB  36.32 TB
-------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------
Volume                         Pool                    Volume size  FS        FS size       Free  Type    Mount point
---------------------------------------------------------------------------------------------------------------------
/dev/cl_xxx-xxxamrepo-01/root  cl_xxx-xxxamrepo-01        50.00 GB  xfs      49.97 GB   48.50 GB  linear  /
/dev/cl_xxx-xxxamrepo-01/swap  cl_xxx-xxxamrepo-01         4.00 GB                                linear
/dev/lvm_pool/lvol001          lvm_pool                   36.32 TB                                linear  /RAID01
btrfs_lvm_pool-lvol001         btrfs_lvm_pool-lvol001     36.32 TB  btrfs    36.32 TB    4.84 TB  btrfs   /RAID01
/dev/sda1                                                200.00 MB  vfat                          part    /boot/efi
/dev/sda2                                                  1.00 GB  xfs    1015.00 MB  882.54 MB  part    /boot
---------------------------------------------------------------------------------------------------------------------


######> btrfs fi sh

Label: none  uuid: df7ce232-056a-4c27-bde4-6f785d5d9f68
        Total devices 1 FS bytes used 31.48TiB
        devid    1 size 36.32TiB used 31.66TiB path /dev/mapper/lvm_pool-lvol001



######> btrfs fi df /RAID01/

Data, single: total=31.58TiB, used=31.44TiB
System, DUP: total=8.00MiB, used=3.67MiB
Metadata, DUP: total=38.00GiB, used=35.37GiB
GlobalReserve, single: total=512.00MiB, used=0.00B



I tried to repair it:


######> btrfs check --repair -p /dev/mapper/lvm_pool-lvol001

enabling repair mode
Checking filesystem on /dev/mapper/lvm_pool-lvol001
UUID: df7ce232-056a-4c27-bde4-6f785d5d9f68
checking extents
Fixed 0 roots.
cache and super generation don't match, space cache will be invalidated
checking fs roots
checking csums
checking root refs
found 34600611349019 bytes used err is 0
total csum bytes: 33752513152
total tree bytes: 38037848064
total fs tree bytes: 583942144
total extent tree bytes: 653754368
btree space waste bytes: 2197658704
file data blocks allocated: 183716661284864 ?? what's this ??
 referenced 30095956975616 = 27.3 TB !!



Tried the "new usage" display but the problem is the same : 31 TB used but total file size is 28TB

Overall:
    Device size:                  36.32TiB
    Device allocated:             31.65TiB
    Device unallocated:            4.67TiB
    Device missing:                  0.00B
    Used:                         31.52TiB
    Free (estimated):              4.80TiB      (min: 2.46TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:31.58TiB, Used:31.45TiB
   /dev/mapper/lvm_pool-lvol001   31.58TiB

Metadata,DUP: Size:38.00GiB, Used:35.37GiB
   /dev/mapper/lvm_pool-lvol001   76.00GiB

System,DUP: Size:8.00MiB, Used:3.69MiB
   /dev/mapper/lvm_pool-lvol001   16.00MiB

Unallocated:
   /dev/mapper/lvm_pool-lvol001    4.67TiB
The only btrfs tool that mentions something close to 28 TB is btrfs check (though I'm not sure the figure is in bytes, since it talks about "referenced" blocks, and I don't understand the meaning of "file data blocks allocated"):

file data blocks allocated: 183716661284864 ?? what's this ??
 referenced 30095956975616 = 27.3 TB !!



I also used the verbose option of https://github.com/knorrie/btrfs-heatmap/ to sum up the total size of all data extents, and found 32 TB.

I ran scrub, and balance up to -dusage=90 (and also -dusage=0), and still ended up with 32 TB used.
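
For reference, the balance passes were of the usual form (the exact invocations here are illustrative):

######> btrfs balance start -dusage=0 /RAID01/
######> btrfs balance start -dusage=90 /RAID01/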
There are no snapshots, no subvolumes, and no terabytes hidden under the mount point after unmounting the BTRFS volume.


What did I do wrong, or what am I missing?

Thanks in advance.
Frederic Larive.



* Re: Lost about 3TB
  2017-10-03 10:44 ` Lost about 3TB btrfs.fredo
@ 2017-10-03 10:54   ` Hugo Mills
  2017-10-03 11:08     ` Timofey Titovets
                       ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Hugo Mills @ 2017-10-03 10:54 UTC (permalink / raw)
  To: btrfs.fredo; +Cc: linux-btrfs


On Tue, Oct 03, 2017 at 12:44:29PM +0200, btrfs.fredo@xoxy.net wrote:
> Hi,
> 
> I can't figure out where 3 TB of a 36 TB BTRFS volume (on LVM) have gone!
> 
> I know BTRFS can be tricky about space usage when many physical drives are combined in a RAID setup, but my configuration is a very simple BTRFS volume without RAID (single data profile) using the whole disk (perhaps I did something wrong with the LVM setup?).
> 
> My BTRFS volume is mounted on /RAID01/.
> 
> There's only one folder in /RAID01/, shared with Samba; Windows also sees a total of 28 TB used.
> 
> It contains only 443 files (big backup files created by Veeam); most are larger than 1 GB, and some are up to 5 TB.
> 
> ######> du -hs /RAID01/
> 28T     /RAID01/
> 
> If I sum up the sizes reported by ######> find . -printf '%s\n'
> I also get 28 TB.
> 
> I extracted the btrfs binary from the v4.9.1 RPM and ran ######> btrfs fi du
> on each file; the result is also 28 TB.

   The conclusion here is that there are things that aren't being
found by these processes. This is usually in the form of dot-files
(but I think you've covered that case in what you did above) or
snapshots/subvolumes outside the subvol you've mounted.

   What does "btrfs sub list -a /RAID01/" say?
   Also "grep /RAID01/ /proc/self/mountinfo"?

   There are other possibilities for missing space, but let's cover
the obvious ones first.

   Hugo.

-- 
Hugo Mills             | Beware geeks bearing GIFs
hugo@... carfax.org.uk |
http://carfax.org.uk/  |
PGP: E2AB1DE4          |



* Re: Lost about 3TB
  2017-10-03 10:54   ` Hugo Mills
@ 2017-10-03 11:08     ` Timofey Titovets
  2017-10-03 12:44     ` Roman Mamedov
  2017-10-03 15:45     ` fred.larive
  2 siblings, 0 replies; 7+ messages in thread
From: Timofey Titovets @ 2017-10-03 11:08 UTC (permalink / raw)
  To: Hugo Mills, btrfs.fredo, linux-btrfs

2017-10-03 13:54 GMT+03:00 Hugo Mills <hugo@carfax.org.uk>:
> On Tue, Oct 03, 2017 at 12:44:29PM +0200, btrfs.fredo@xoxy.net wrote:
> [...]
>
>    There are other possibilities for missing space, but let's cover
> the obvious ones first.
>
>    Hugo.

If your storage is not write-once, then this can simply be extent bookkeeping.

As a demonstration, I created an empty FS and a 128 KiB file,
then overwrote two 4 KiB sectors at random offsets, and got this:
~ filefrag -v /mnt/128KiB
Filesystem type is: 9123683e
File size of /mnt/128KiB is 131072 (32 blocks of 4096 bytes)
ext:     logical_offset:        physical_offset: length:   expected: flags:
  0:        0..       2:       3104..      3106:      3:
  1:        3..       3:       3088..      3088:      1:       3107:
  2:        4..      18:       3108..      3122:     15:       3089:
  3:       19..      19:       3072..      3072:      1:       3123:
  4:       20..      31:       3124..      3135:     12:       3073: last,eof
/mnt/128KiB: 5 extents found

~ btrfs fi df /mnt
Data, single: total=8.00MiB, used=200.00KiB
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=120.00MiB, used=112.00KiB
GlobalReserve, single: total=16.00MiB, used=0.00B

On an empty FS, 64 KiB is already used by something (I'm not sure what uses it).

200 - 64 = 136 KiB
136 - 128 = 8 KiB
8 / 4 = 2 writes

In other words, the two overwritten 4 KiB blocks inside the original
128 KiB extent are not freed -- the old extent stays allocated in full
as long as any part of it is still referenced. So on your FS, that can
simply be space held by partially overwritten extents.
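
For reference, the demo above can be reproduced roughly like this
(loop file, sizes, and overwrite offsets are illustrative -- the
offsets chosen here match the filefrag output above):

~ truncate -s 1G /tmp/btrfs.img
~ mkfs.btrfs -f /tmp/btrfs.img
~ mount -o loop /tmp/btrfs.img /mnt
~ dd if=/dev/urandom of=/mnt/128KiB bs=128K count=1 && sync
~ dd if=/dev/urandom of=/mnt/128KiB bs=4K count=1 seek=3 conv=notrunc
~ dd if=/dev/urandom of=/mnt/128KiB bs=4K count=1 seek=19 conv=notrunc
~ sync && filefrag -v /mnt/128KiB && btrfs fi df /mnt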
-- 
Have a nice day,
Timofey.


* Re: Lost about 3TB
  2017-10-03 10:54   ` Hugo Mills
  2017-10-03 11:08     ` Timofey Titovets
@ 2017-10-03 12:44     ` Roman Mamedov
  2017-10-03 15:45     ` fred.larive
  2 siblings, 0 replies; 7+ messages in thread
From: Roman Mamedov @ 2017-10-03 12:44 UTC (permalink / raw)
  To: Hugo Mills; +Cc: btrfs.fredo, linux-btrfs

On Tue, 3 Oct 2017 10:54:05 +0000
Hugo Mills <hugo@carfax.org.uk> wrote:

>    There are other possibilities for missing space, but let's cover
> the obvious ones first.

One more obvious thing would be files that have been deleted but are still
held open by some application (possibly even over the network, via NFS or
SMB!). @Frederic, did you try rebooting the system?
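
Such files can usually be spotted without a reboot, e.g.:

    lsof +L1 /RAID01

(+L1 limits the listing to open files with a link count below 1, i.e.
deleted ones; giving the mount point limits it to that filesystem.)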

-- 
With respect,
Roman


* Re: Lost about 3TB
  2017-10-03 10:54   ` Hugo Mills
  2017-10-03 11:08     ` Timofey Titovets
  2017-10-03 12:44     ` Roman Mamedov
@ 2017-10-03 15:45     ` fred.larive
  2017-10-03 16:00       ` Hugo Mills
  2 siblings, 1 reply; 7+ messages in thread
From: fred.larive @ 2017-10-03 15:45 UTC (permalink / raw)
  To: Hugo Mills - hugo@carfax.org.uk; +Cc: linux-btrfs, btrfs fredo

Hi,


>   What does "btrfs sub list -a /RAID01/" say?
Nothing (no lines displayed)

>   Also "grep /RAID01/ /proc/self/mountinfo"?
Nothing (no lines displayed)


Also, the server has been rebooted many times, and no process has left deleted-but-still-open files on the volume (checked with lsof...).


Fred.




* Re: Lost about 3TB
  2017-10-03 15:45     ` fred.larive
@ 2017-10-03 16:00       ` Hugo Mills
  2017-10-04 12:43         ` fred.larive
  0 siblings, 1 reply; 7+ messages in thread
From: Hugo Mills @ 2017-10-03 16:00 UTC (permalink / raw)
  To: fred.larive; +Cc: Hugo Mills - hugo@carfax.org.uk, linux-btrfs, btrfs fredo


On Tue, Oct 03, 2017 at 05:45:54PM +0200, fred.larive@free.fr wrote:
> Hi,
> 
> 
> >   What does "btrfs sub list -a /RAID01/" say?
> Nothing (no lines displayed)
> 
> >   Also "grep /RAID01/ /proc/self/mountinfo"?
> Nothing (no lines displayed)
> 
> 
> Also, the server has been rebooted many times, and no process has left deleted-but-still-open files on the volume (checked with lsof...).

   OK. The second command (the grep) was incorrect -- I should have
omitted the slashes. However, it doesn't matter too much, because the
first command indicates that you don't have any subvolumes or
snapshots anyway.

   This means that you're probably looking at the kind of issue
Timofey mentioned in his mail, where writes into the middle of an
existing extent don't free up the overwritten data. This is most
likely to happen on database or VM files, but could happen on others,
depending on the application and how it uses files.

   Since you don't seem to have any snapshots, I _think_ you can deal
with the issue most easily by defragmenting the affected files. It's
worth just getting a second opinion on this one before you try it for
the whole FS. I'm not 100% sure about what defrag will do in this
case, and there are some people round here who have investigated the
behaviour of partially-overwritten extents in more detail than I have.
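
   If you do try it, the usual invocation is along these lines (the
target extent size here is illustrative):

    btrfs filesystem defragment -r -t 128M /RAID01/

   Note that on a filesystem with snapshots, defragmentation breaks
the shared extents and can actually increase space usage; since you
have no snapshots, that isn't a concern here.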

   Hugo.

-- 
Hugo Mills             | Well, sir, the floor is yours. But remember, the
hugo@... carfax.org.uk | roof is ours!
http://carfax.org.uk/  |
PGP: E2AB1DE4          |                                             The Goons



* Re: Lost about 3TB
  2017-10-03 16:00       ` Hugo Mills
@ 2017-10-04 12:43         ` fred.larive
  0 siblings, 0 replies; 7+ messages in thread
From: fred.larive @ 2017-10-04 12:43 UTC (permalink / raw)
  To: Hugo Mills; +Cc: Hugo Mills - hugo@carfax.org.uk, linux-btrfs, btrfs fredo

Hi,

thanks to all of you for the fast support.

I did some statistics on fragmentation (I hadn't done it before because I wrongly thought it was nearly the same thing as chunk balance).
As Timofey suspected, I have some big files (5 TB) that are heavily fragmented.

The software writing these files is a backup product called Veeam, which works at the block level of virtual machines: it creates an image of the virtual disk, and incremental backups only deal with newly changed blocks.
We also use it each month to "synthetically" create a monthly backup restore point.

So yes, most writes are sparse writes of small blocks. This server is on the far side of a WAN link, and Veeam is incredibly robust: it lets us interrupt a backup job and continue it later (taking newly changed source blocks into account) without resending the whole file, so the result ends up fragmented. (I traced it during a restore job and saw that it can find the blocks it needs across all the received disk images, even broken ones!)

To sum up:
The average extent size per file was 15 MB, and is now at least 30 MB (up to 60 MB) since I balanced the volume.
But I have four 5 TB files whose average extent size is between 3 and 7 MB, split over about 1 to 2 million extents each.
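
For reference, per-file stats like these can be gathered with filefrag; a sweep along these lines (path illustrative) lists the extent count for every file:

######> find /RAID01/ -type f -exec filefrag {} + | sort -t: -k2 -rn

The average extent size is then simply the file size divided by the extent count.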

I tried defrag on some files with around 30,000 extents; it halved the extent count and freed some used space.

I'm now defragmenting the whole volume with "-t 128M", which has freed 200 GB in one hour.

I think you've hit on my problem.
I'll keep you posted, probably tomorrow.
Thanks again.

Fred.


