* Removal of Device and Free Space
@ 2021-05-14 8:54 Christian Völker
2021-05-14 16:44 ` Andrei Borzenkov
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Christian Völker @ 2021-05-14 8:54 UTC (permalink / raw)
To: linux-btrfs
Hi all,
I am running a three device DRBD setup (non-RAID!).
When I do "df -h" I see loads of free space:
root@backuppc:~# df -h
[...]
/dev/mapper/crypt_drbd2 2,8T 1,7T 1,1T 63% /var/lib/backuppc
As written, the fs consists of three devices:
root@backuppc:~# btrfs fi sh /var/lib/backuppc/
Label: 'backuppc' uuid: 73b98c7b-832a-437a-a15b-6cb00734e5db
Total devices 3 FS bytes used 1.70TiB
devid 3 size 799.96GiB used 799.96GiB path dm-5
devid 4 size 1.07TiB used 1.07TiB path dm-4
devid 7 size 899.96GiB used 327.00GiB path dm-6
root@backuppc:~# btrfs fi usage /var/lib/backuppc/
Overall:
Device size: 2.73TiB
Device allocated: 2.61TiB
Device unallocated: 128.00GiB
Device missing: 0.00B
Used: 1.70TiB
Free (estimated): 1.03TiB (min: 1.03TiB)
Free (statfs, df): 1.03TiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:2.60TiB, Used:1.69TiB (65.17%)
/dev/mapper/crypt_drbd2 790.93GiB
/dev/mapper/crypt_drbd1 1.07TiB
/dev/mapper/crypt_drbd3 774.96GiB
Metadata,single: Size:9.00GiB, Used:3.95GiB (43.91%)
/dev/mapper/crypt_drbd2 6.00GiB
/dev/mapper/crypt_drbd1 3.00GiB
System,single: Size:32.00MiB, Used:320.00KiB (0.98%)
/dev/mapper/crypt_drbd2 32.00MiB
Unallocated:
/dev/mapper/crypt_drbd2 3.00GiB
/dev/mapper/crypt_drbd1 1.03MiB
/dev/mapper/crypt_drbd3 125.00GiB
So it tells me there is an estimated ~1TiB free. As the crypt_drbd3
device has a size of 899GiB, I wanted to remove that device. I expected no
issue, as "Free" shows 1.03TiB; there should still be about 200GB of free
space afterwards.
But the removal failed after two hours:
root@backuppc:~# btrfs dev remove /dev/mapper/crypt_drbd3 /var/lib/backuppc/
ERROR: error removing device '/dev/mapper/crypt_drbd3': No space left on
device
Now it looks like this:
root@backuppc:~# btrfs fi usage /var/lib/backuppc/
Overall:
Device size: 2.73TiB
Device allocated: 2.17TiB
Device unallocated: 572.96GiB
Device missing: 0.00B
Used: 1.70TiB
Free (estimated): 1.03TiB (min: 1.03TiB)
Free (statfs, df): 1.03TiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:2.17TiB, Used:1.69TiB (78.24%)
/dev/mapper/crypt_drbd2 793.93GiB
/dev/mapper/crypt_drbd1 1.07TiB
/dev/mapper/crypt_drbd3 327.00GiB
Metadata,single: Size:9.00GiB, Used:3.89GiB (43.23%)
/dev/mapper/crypt_drbd2 6.00GiB
/dev/mapper/crypt_drbd1 3.00GiB
System,single: Size:32.00MiB, Used:288.00KiB (0.88%)
/dev/mapper/crypt_drbd2 32.00MiB
Unallocated:
/dev/mapper/crypt_drbd2 1.03MiB
/dev/mapper/crypt_drbd1 1.03MiB
/dev/mapper/crypt_drbd3 572.96GiB
So some questions arise:
Why can "btrfs device remove" not check in advance whether there is
enough free space available, instead of working for hours and then failing?
Do I have to balance my fs after the failed removal now?
Why is it not possible to remove the device when all the information
tells me there is enough free space available?
What is occupying so much disk space, given that the data is only 1.7TB,
which should fit on the two remaining devices (1.8TB together)? (No
snapshots, nothing special configured on btrfs.) It looks like ~400GB
are allocated which are not data.
Just for completeness:
Debian Buster
root@backuppc:~# btrfs --version
btrfs-progs v5.10.1
Thanks for letting me know.
Greetings
/CV
* Re: Removal of Device and Free Space
2021-05-14 8:54 Removal of Device and Free Space Christian Völker
@ 2021-05-14 16:44 ` Andrei Borzenkov
2021-05-14 17:06 ` Roman Mamedov
2021-05-15 1:39 ` Zygo Blaxell
2 siblings, 0 replies; 8+ messages in thread
From: Andrei Borzenkov @ 2021-05-14 16:44 UTC (permalink / raw)
To: Christian Völker, linux-btrfs
On 14.05.2021 11:54, Christian Völker wrote:
>
> Hi all,
>
> I am running a three device DRBD setup (non-RAID!).
> When I do "df -h" I see loads of free space:
>
> root@backuppc:~# df -h
> [...]
> /dev/mapper/crypt_drbd2 2,8T 1,7T 1,1T 63% /var/lib/backuppc
>
> As written, the fs consists of three devices:
>
> root@backuppc:~# btrfs fi sh /var/lib/backuppc/
> Label: 'backuppc' uuid: 73b98c7b-832a-437a-a15b-6cb00734e5db
> Total devices 3 FS bytes used 1.70TiB
> devid 3 size 799.96GiB used 799.96GiB path dm-5
> devid 4 size 1.07TiB used 1.07TiB path dm-4
> devid 7 size 899.96GiB used 327.00GiB path dm-6
>
> root@backuppc:~# btrfs fi usage /var/lib/backuppc/
> Overall:
> Device size: 2.73TiB
> Device allocated: 2.61TiB
> Device unallocated: 128.00GiB
> Device missing: 0.00B
> Used: 1.70TiB
> Free (estimated): 1.03TiB (min: 1.03TiB)
> Free (statfs, df): 1.03TiB
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 512.00MiB (used: 0.00B)
> Multiple profiles: no
>
> Data,single: Size:2.60TiB, Used:1.69TiB (65.17%)
> /dev/mapper/crypt_drbd2 790.93GiB
> /dev/mapper/crypt_drbd1 1.07TiB
> /dev/mapper/crypt_drbd3 774.96GiB
>
That does not match the output above. "btrfs fi sh" claims drbd3 has
327GiB allocated, while "btrfs fi us" claims it is 774GiB. Not sure how
that is possible.
> Metadata,single: Size:9.00GiB, Used:3.95GiB (43.91%)
> /dev/mapper/crypt_drbd2 6.00GiB
> /dev/mapper/crypt_drbd1 3.00GiB
>
> System,single: Size:32.00MiB, Used:320.00KiB (0.98%)
> /dev/mapper/crypt_drbd2 32.00MiB
>
> Unallocated:
> /dev/mapper/crypt_drbd2 3.00GiB
> /dev/mapper/crypt_drbd1 1.03MiB
> /dev/mapper/crypt_drbd3 125.00GiB
>
> So it tells me there is an estimated of ~1TB free. As the crypt_drbd3
> device has a size of 899G I wanted to remove the device. I expected no
> issue as "Free" shows 1.03TiB. There should still be 200GB of free space
> afterwards.
> But the removal failed after two hours:
>
> root@backuppc:~# btrfs dev remove /dev/mapper/crypt_drbd3
> /var/lib/backuppc/
> ERROR: error removing device '/dev/mapper/crypt_drbd3': No space left on
> device
>
> Now it looks like this:
> root@backuppc:~# btrfs fi usage /var/lib/backuppc/
> Overall:
> Device size: 2.73TiB
> Device allocated: 2.17TiB
> Device unallocated: 572.96GiB
> Device missing: 0.00B
> Used: 1.70TiB
> Free (estimated): 1.03TiB (min: 1.03TiB)
> Free (statfs, df): 1.03TiB
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 512.00MiB (used: 0.00B)
> Multiple profiles: no
>
> Data,single: Size:2.17TiB, Used:1.69TiB (78.24%)
> /dev/mapper/crypt_drbd2 793.93GiB
> /dev/mapper/crypt_drbd1 1.07TiB
> /dev/mapper/crypt_drbd3 327.00GiB
>
And now it actually matches.
> Metadata,single: Size:9.00GiB, Used:3.89GiB (43.23%)
> /dev/mapper/crypt_drbd2 6.00GiB
> /dev/mapper/crypt_drbd1 3.00GiB
>
> System,single: Size:32.00MiB, Used:288.00KiB (0.88%)
> /dev/mapper/crypt_drbd2 32.00MiB
>
> Unallocated:
> /dev/mapper/crypt_drbd2 1.03MiB
> /dev/mapper/crypt_drbd1 1.03MiB
> /dev/mapper/crypt_drbd3 572.96GiB
>
>
> So some questions arise:
>
> Why can btrfs device remove not check in advance if there is enough
> free space available? Instead of working for hours and then failing...
>
Because nobody has implemented it? I suspect it may not be entirely trivial
in the general case, and any estimate may become obsolete very quickly
(your filesystem is in use, so space that was free may suddenly
become allocated by the time you need it).
> Do I have to balance my fs after the failed removal now?
>
Yes. Device removal moves whole chunks from the device being removed to
the other devices. You need as much *unallocated* space on the two remaining
devices as is allocated on the device you are trying to remove.
btrfs balance start -dusage=0 /var/lib/backuppc/
would be a good starting point.
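One way to sketch that idea (the escalating thresholds are an illustrative assumption, not required values; usage=0 alone only reclaims completely empty chunks):

```shell
# Hedged sketch: rewrite progressively fuller data chunks until enough
# unallocated space appears for the device removal. Raising the usage
# filter step by step keeps each balance pass cheap.
MNT=/var/lib/backuppc
reclaim() {
  for pct in 0 10 25 50; do
    # rewrite only data chunks that are at most $pct percent full
    btrfs balance start -dusage="$pct" "$MNT"
  done
}
```

After each pass, "btrfs fi usage" will show how much unallocated space has been recovered.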
> Why is it not possible to remove the device when all information
> tell me there is enough free space available?
>
btrfs uses a two-stage allocator: free space includes both unused
allocated space and unallocated space, but only the latter can be used
for device removal.
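A back-of-the-envelope check with rounded GiB figures from the first "btrfs fi usage" output in this thread (a sketch, not what btrfs itself computes) shows why the removal was doomed:

```shell
# The chunks allocated on drbd3 must fit into the *unallocated* space
# of the two remaining devices; unused space inside allocated chunks
# does not count. Figures rounded from the output above, in GiB.
alloc_drbd3=775        # allocated chunks on the device being removed
unalloc_drbd1=0        # ~1.03 MiB, effectively nothing
unalloc_drbd2=3
room=$((unalloc_drbd1 + unalloc_drbd2))
if [ "$alloc_drbd3" -gt "$room" ]; then
  echo "cannot fit: need ${alloc_drbd3}GiB unallocated, only ${room}GiB available"
fi
```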
> What is occupying so much disk space as the data only has 1.7TB
> which should fit in 1.8TB (two) devices? (no snapshot, nothing special
> configured on btrfs). Looks like there are ~400GB allocated which are
> not from data.
>
It is your data; how would we know?
> Just for completeness:
>
> Debian Buster
>
> root@backuppc:~# btrfs --version
> btrfs-progs v5.10.1
>
> Thanks for letting me know.
>
>
> Greetings
>
> /CV
>
>
>
* Re: Removal of Device and Free Space
2021-05-14 8:54 Removal of Device and Free Space Christian Völker
2021-05-14 16:44 ` Andrei Borzenkov
@ 2021-05-14 17:06 ` Roman Mamedov
2021-05-14 19:54 ` Christian Völker
2021-05-15 1:39 ` Zygo Blaxell
2 siblings, 1 reply; 8+ messages in thread
From: Roman Mamedov @ 2021-05-14 17:06 UTC (permalink / raw)
To: Christian Völker; +Cc: linux-btrfs
On Fri, 14 May 2021 10:54:16 +0200
Christian Völker <cvoelker@knebb.de> wrote:
> What is occupying so much disk space as the data only has 1.7TB
> which should fit in 1.8TB (two) devices? (no snapshot, nothing special
> configured on btrfs). Looks like there are ~400GB allocated which are
> not from data.
Check if there are really no stray snapshots left over, which keep around old
versions of some of your data.
If not, it could be the infamous Btrfs "extent booking" inefficiency, where the
whole extent (up to 128 MB) is kept around as long as some part of it is still
referenced.
Discussed a bit here: https://www.spinics.net/lists/linux-btrfs/msg90352.html
Since then I have found that it is not only VMs: for example, it is really
inefficient space-wise to download torrents onto Btrfs (without nocow).
Anything where you overwrite small pieces within a large file will waste
space.
In your case, if it's a backup server and you use rsync or the like in an
incremental mode updating only the changed blocks, switching that to
whole-file updates (-W for rsync) could alleviate the issue. Another way is to
force compression on the filesystem, which clamps the extent size limit down
from 128 MB to 128 KB and mitigates the problem in another way.
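The numbers make the difference concrete (a sketch using the extent-size limits mentioned above; the single 4 KiB overwrite is an assumed worst case):

```shell
# Worst-case sketch: one small overwrite can pin an entire old extent,
# since the whole extent stays referenced until every part is rewritten.
max_extent_plain=$((128 * 1024 * 1024))   # 128 MiB uncompressed extent limit
max_extent_compressed=$((128 * 1024))     # 128 KiB with forced compression
overwrite=4096                            # a single 4 KiB rewrite
waste_plain=$((max_extent_plain - overwrite))
waste_compressed=$((max_extent_compressed - overwrite))
echo "pinned waste: ${waste_plain} bytes vs ${waste_compressed} bytes"
```

So forcing compression bounds the waste per overwrite to under 128 KiB instead of up to 128 MiB.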
--
With respect,
Roman
* Re: Removal of Device and Free Space
2021-05-14 17:06 ` Roman Mamedov
@ 2021-05-14 19:54 ` Christian Völker
2021-05-14 21:55 ` Remi Gauvin
0 siblings, 1 reply; 8+ messages in thread
From: Christian Völker @ 2021-05-14 19:54 UTC (permalink / raw)
To: Roman Mamedov; +Cc: linux-btrfs
Hi Roman,
thanks for your ideas. Unfortunately, neither applies here: I definitely
have no snapshots, and I have mounted the filesystem with compression enabled:
root@backuppc42:~# btrfs subvolume list /var/lib/backuppc
root@backuppc42:~# mount| grep backup
/dev/mapper/crypt_drbd2 on /var/lib/backuppc type btrfs
(rw,noatime,compress=zstd:3,space_cache,subvolid=5,subvol=/)
I do not like the idea of switching to whole-file transfers for my rsync
backups, as bandwidth is my concern here. And it would not help anyway,
since I already have compression enabled, right?
Any further ideas?
Greetings
/CV
Am 14.05.2021 um 19:06 schrieb Roman Mamedov:
> On Fri, 14 May 2021 10:54:16 +0200
> Christian Völker <cvoelker@knebb.de> wrote:
>
>> What is occupying so much disk space as the data only has 1.7TB
>> which should fit in 1.8TB (two) devices? (no snapshot, nothing special
>> configured on btrfs). Looks like there are ~400GB allocated which are
>> not from data.
>
> Check if there are really no stray snapshots left over, which keep around old
> versions of some of your data.
>
> If not, it could be the infamous Btrfs "extent booking" inefficiency, where the
> whole extent (up to 128 MB) is kept around as long as some part of it is still
> referenced.
>
> Discussed a bit here: https://www.spinics.net/lists/linux-btrfs/msg90352.html
>
> Since then I found that not only VMs, but for example it's really
> inefficient space-wise to download torrents onto a Btrfs (without nocow).
>
> Anything where you overwrite small pieces within a large file, will waste
> space.
>
> In your case, if it's a backup server and you use rsync or the like in an
> incremental mode updating only the changed blocks, switching that to
> whole-file updates (-W for rsync) could alleviate the issue. Another way is to
> force compression on the filesystem, which clamps the extent size limit down
> from 128 MB to 128 KB and mitigates the problem in another way.
>
* Re: Removal of Device and Free Space
2021-05-14 19:54 ` Christian Völker
@ 2021-05-14 21:55 ` Remi Gauvin
0 siblings, 0 replies; 8+ messages in thread
From: Remi Gauvin @ 2021-05-14 21:55 UTC (permalink / raw)
To: Christian Völker, linux-btrfs
On 2021-05-14 3:54 p.m., Christian Völker wrote:
>
> I do not like the idea of going to full file sync for my rsync backups
> as the bandwidth is my concern here. And it does not help either as I
> have compression enabled, right?
>
> Any further ideas?
>
You have compression enabled, but are you using compress-force? If not,
many files will not be compressed at all, potentially including those
responsible for your current situation.
You can use "btrfs fi defrag -r -t 100M /path/to/subvolume" to reclaim
most of the space lost by fragmentation (assuming there are no
snapshots or reflink copies; otherwise this will be counterproductive).
Note that defrag does not cross subvolumes, so you have to run the
command for each affected subvolume.
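Since defrag stops at subvolume boundaries, a small wrapper can cover them all (a hedged sketch; it assumes the mount point from this thread and that "btrfs subvolume list" prints the subvolume path as its ninth field):

```shell
# Defragment the top level, then every listed subvolume, because
# "btrfs fi defrag" does not descend into child subvolumes.
MNT=/var/lib/backuppc
defrag_all() {
  btrfs fi defrag -r -t 100M "$MNT"
  # lines look like: "ID 256 gen 10 top level 5 path <subvol>"
  btrfs subvolume list "$MNT" | while read -r _ _ _ _ _ _ _ _ path; do
    btrfs fi defrag -r -t 100M "$MNT/$path"
  done
}
```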
* Re: Removal of Device and Free Space
2021-05-14 8:54 Removal of Device and Free Space Christian Völker
2021-05-14 16:44 ` Andrei Borzenkov
2021-05-14 17:06 ` Roman Mamedov
@ 2021-05-15 1:39 ` Zygo Blaxell
2021-05-15 7:48 ` Roman Mamedov
2 siblings, 1 reply; 8+ messages in thread
From: Zygo Blaxell @ 2021-05-15 1:39 UTC (permalink / raw)
To: Christian Völker; +Cc: linux-btrfs
On Fri, May 14, 2021 at 10:54:16AM +0200, Christian Völker wrote:
>
> Hi all,
>
> I am running a three device DRBD setup (non-RAID!).
> When I do "df -h" I see loads of free space:
>
> root@backuppc:~# df -h
> [...]
> /dev/mapper/crypt_drbd2 2,8T 1,7T 1,1T 63% /var/lib/backuppc
>
> As written, the fs consists of three devices:
>
> root@backuppc:~# btrfs fi sh /var/lib/backuppc/
> Label: 'backuppc' uuid: 73b98c7b-832a-437a-a15b-6cb00734e5db
> Total devices 3 FS bytes used 1.70TiB
> devid 3 size 799.96GiB used 799.96GiB path dm-5
> devid 4 size 1.07TiB used 1.07TiB path dm-4
> devid 7 size 899.96GiB used 327.00GiB path dm-6
>
> root@backuppc:~# btrfs fi usage /var/lib/backuppc/
> Overall:
> Device size: 2.73TiB
> Device allocated: 2.61TiB
> Device unallocated: 128.00GiB
> Device missing: 0.00B
> Used: 1.70TiB
> Free (estimated): 1.03TiB (min: 1.03TiB)
> Free (statfs, df): 1.03TiB
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 512.00MiB (used: 0.00B)
> Multiple profiles: no
>
> Data,single: Size:2.60TiB, Used:1.69TiB (65.17%)
> /dev/mapper/crypt_drbd2 790.93GiB
> /dev/mapper/crypt_drbd1 1.07TiB
> /dev/mapper/crypt_drbd3 774.96GiB
>
> Metadata,single: Size:9.00GiB, Used:3.95GiB (43.91%)
> /dev/mapper/crypt_drbd2 6.00GiB
> /dev/mapper/crypt_drbd1 3.00GiB
>
> System,single: Size:32.00MiB, Used:320.00KiB (0.98%)
> /dev/mapper/crypt_drbd2 32.00MiB
>
> Unallocated:
> /dev/mapper/crypt_drbd2 3.00GiB
> /dev/mapper/crypt_drbd1 1.03MiB
> /dev/mapper/crypt_drbd3 125.00GiB
>
> So it tells me there is an estimated of ~1TB free. As the crypt_drbd3 device
> has a size of 899G I wanted to remove the device. I expected no issue as
> "Free" shows 1.03TiB. There should still be 200GB of free space afterwards.
> But the removal failed after two hours:
>
> root@backuppc:~# btrfs dev remove /dev/mapper/crypt_drbd3 /var/lib/backuppc/
> ERROR: error removing device '/dev/mapper/crypt_drbd3': No space left on
> device
>
> Now it looks like this:
> root@backuppc:~# btrfs fi usage /var/lib/backuppc/
> Overall:
> Device size: 2.73TiB
> Device allocated: 2.17TiB
> Device unallocated: 572.96GiB
> Device missing: 0.00B
> Used: 1.70TiB
> Free (estimated): 1.03TiB (min: 1.03TiB)
> Free (statfs, df): 1.03TiB
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 512.00MiB (used: 0.00B)
> Multiple profiles: no
>
> Data,single: Size:2.17TiB, Used:1.69TiB (78.24%)
> /dev/mapper/crypt_drbd2 793.93GiB
> /dev/mapper/crypt_drbd1 1.07TiB
> /dev/mapper/crypt_drbd3 327.00GiB
>
> Metadata,single: Size:9.00GiB, Used:3.89GiB (43.23%)
> /dev/mapper/crypt_drbd2 6.00GiB
> /dev/mapper/crypt_drbd1 3.00GiB
>
> System,single: Size:32.00MiB, Used:288.00KiB (0.88%)
> /dev/mapper/crypt_drbd2 32.00MiB
>
> Unallocated:
> /dev/mapper/crypt_drbd2 1.03MiB
> /dev/mapper/crypt_drbd1 1.03MiB
> /dev/mapper/crypt_drbd3 572.96GiB
>
>
> So some questions arise:
>
> Why can btrfs device remove not check in advance if there is enough free
> space available? Instead of working for hours and then failing...
Relocation (balance and device remove) cannot change the size of any
extent, so they cannot use free space regions that are too small. It is
not enough to have raw free space--it has to be contiguous free space.
This is not computed in advance because it is not trivial to compute--it
is a significant chunk of the cost of device removal.
> Do I have to balance my fs after the failed removal now?
Balancing before removal would coalesce the free space and make device remove
more likely to succeed. Something like:
btrfs balance start -ddevid=3,limit=160 /var/lib/backuppc/
btrfs balance start -ddevid=4,limit=160 /var/lib/backuppc/
or use btrfs-balance-least-used from the python-btrfs package, which will
start with chunks that have the most free space.
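Put together as one script, that suggestion looks roughly like this (a sketch assuming the devids and mount point from this thread; limit=160 is an illustrative cap on relocated chunks, not a magic number, and the final remove is the retry step, not part of the balance itself):

```shell
# Free up contiguous unallocated space on the fully-allocated devices
# first, then retry the device removal.
MNT=/var/lib/backuppc
prep_and_remove() {
  for devid in 3 4; do
    # relocate up to 160 data chunks away from this device
    btrfs balance start -ddevid="$devid",limit=160 "$MNT"
  done
  btrfs device remove /dev/mapper/crypt_drbd3 "$MNT"
}
```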
> Why is it not possible to remove the device when all information tell me
> there is enough free space available?
>
> What is occupying so much disk space as the data only has 1.7TB which
> should fit in 1.8TB (two) devices? (no snapshot, nothing special configured
> on btrfs). Looks like there are ~400GB allocated which are not from data.
Chunks are deallocated only when completely empty. If you recently
deleted a large number of files, then you'll have chunks with low data
density and high free space fragmentation. Normally this does not matter,
as the free spaces in chunks would be filled in by new data allocations,
and those allocations would split writes into smaller extents that exactly
fit the free spaces. Relocation can't do this--it can only occupy free
spaces that are equal or larger, and will make free space fragmentation
even worse.
> Just for completeness:
>
> Debian Buster
>
> root@backuppc:~# btrfs --version
> btrfs-progs v5.10.1
>
> Thanks for letting me know.
>
>
> Greetings
>
> /CV
>
>
>
* Re: Removal of Device and Free Space
2021-05-15 1:39 ` Zygo Blaxell
@ 2021-05-15 7:48 ` Roman Mamedov
2021-05-16 18:41 ` Christian Völker
0 siblings, 1 reply; 8+ messages in thread
From: Roman Mamedov @ 2021-05-15 7:48 UTC (permalink / raw)
To: Zygo Blaxell; +Cc: Christian Völker, linux-btrfs
On Fri, 14 May 2021 21:39:04 -0400
Zygo Blaxell <ce3g8jdj@umail.furryterror.org> wrote:
> > What is occupying so much disk space as the data only has 1.7TB which
> > should fit in 1.8TB (two) devices? (no snapshot, nothing special configured
> > on btrfs). Looks like there are ~400GB allocated which are not from data.
>
> Chunks are deallocated only when completely empty. If you recently
> deleted a large number of files, then you'll have chunks with low data
> density and high free space fragmentation. Normally this does not matter,
> as the free spaces in chunks would be filled in by new data allocations,
> and those allocations would split writes into smaller extents that exactly
> fit the free spaces. Relocation can't do this--it can only occupy free
> spaces that are equal or larger, and will make free space fragmentation
> even worse.
Oh yeah, if you were just talking about the discrepancy between "allocated"
and "used", then it is what Zygo said, and my other reply regarding extent
booking doesn't apply here. That other issue is relevant only if you observe
a difference between "du [all files]" and "df [filesystem]".
--
With respect,
Roman
* Re: Removal of Device and Free Space
2021-05-15 7:48 ` Roman Mamedov
@ 2021-05-16 18:41 ` Christian Völker
0 siblings, 0 replies; 8+ messages in thread
From: Christian Völker @ 2021-05-16 18:41 UTC (permalink / raw)
To: Roman Mamedov, Zygo Blaxell; +Cc: linux-btrfs
Hi all,
thanks for your suggestions, even though I do not understand every
implication. Taken together, they allowed me to remove the device.
root@backuppc42:/var/lib/backuppc# btrfs fi sh /var/lib/backuppc/
Label: 'backuppc' uuid: 73b98c7b-832a-437a-a15b-6cb00734e5db
Total devices 3 FS bytes used 1.68TiB
devid 3 size 799.96GiB used 798.96GiB path dm-5
devid 4 size 1.07TiB used 1.07TiB path dm-4
devid 7 size 899.96GiB used 327.00GiB path dm-6
root@backuppc42:/var/lib/backuppc# btrfs balance start -ddevid=3
/var/lib/backuppc/;btrfs balance start -ddevid=4 /var/lib/backuppc/
Done, had to relocate 793 out of 2224 chunks
Done, had to relocate 1094 out of 1726 chunks
root@backuppc42:/var/lib/backuppc# btrfs fi df /var/lib/backuppc/
Data, single: total=1.68TiB, used=1.67TiB
System, single: total=32.00MiB, used=208.00KiB
Metadata, single: total=9.00GiB, used=3.49GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
root@backuppc42:/var/lib/backuppc# btrfs fi us /var/lib/backuppc/
Overall:
Device size: 2.73TiB
Device allocated: 1.68TiB
Device unallocated: 1.05TiB
Device missing: 0.00B
Used: 1.68TiB
Free (estimated): 1.05TiB (min: 1.05TiB)
Free (statfs, df): 1.05TiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:1.68TiB, Used:1.67TiB (99.96%)
/dev/mapper/crypt_drbd2 436.00GiB
/dev/mapper/crypt_drbd1 737.00GiB
/dev/mapper/crypt_drbd3 543.00GiB
Metadata,single: Size:9.00GiB, Used:3.49GiB (38.79%)
/dev/mapper/crypt_drbd2 6.00GiB
/dev/mapper/crypt_drbd1 3.00GiB
System,single: Size:32.00MiB, Used:208.00KiB (0.63%)
/dev/mapper/crypt_drbd2 32.00MiB
Unallocated:
/dev/mapper/crypt_drbd2 357.93GiB
/dev/mapper/crypt_drbd1 359.95GiB
/dev/mapper/crypt_drbd3 356.96GiB
root@backuppc42:/var/lib/backuppc# btrfs dev remove
/dev/mapper/crypt_drbd3 /var/lib/backuppc/
root@backuppc42:/var/lib/backuppc# btrfs fi us /var/lib/backuppc/
Overall:
Device size: 1.85TiB
Device allocated: 1.68TiB
Device unallocated: 174.88GiB
Device missing: 0.00B
Used: 1.68TiB
Free (estimated): 175.62GiB (min: 175.62GiB)
Free (statfs, df): 175.62GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:1.68TiB, Used:1.67TiB (99.96%)
/dev/mapper/crypt_drbd2 706.00GiB
/dev/mapper/crypt_drbd1 1010.00GiB
Metadata,single: Size:9.00GiB, Used:3.48GiB (38.62%)
/dev/mapper/crypt_drbd2 6.00GiB
/dev/mapper/crypt_drbd1 3.00GiB
System,single: Size:32.00MiB, Used:224.00KiB (0.68%)
/dev/mapper/crypt_drbd2 32.00MiB
Unallocated:
/dev/mapper/crypt_drbd2 87.93GiB
/dev/mapper/crypt_drbd1 86.95GiB
root@backuppc42:/var/lib/backuppc# btrfs fi df /var/lib/backuppc/
Data, single: total=1.68TiB, used=1.67TiB
System, single: total=32.00MiB, used=224.00KiB
Metadata, single: total=9.00GiB, used=3.48GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
I am still unsure why this did not work initially; I would expect a
filesystem to do whatever is needed (reallocate, balance, and so on) to
get the job (removing a device) done.
However, device is removed now.
Thanks @all!
Greetings
/CV