All of lore.kernel.org
* BTRFS Raid 5 Space missing - ideas ?
@ 2019-04-20 10:46 Juergen Sauer
  2019-04-20 20:19 ` Adam Borowski
  0 siblings, 1 reply; 5+ messages in thread
From: Juergen Sauer @ 2019-04-20 10:46 UTC (permalink / raw)
  To: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1223 bytes --]

Hi!
I wish you happy Easter days ahead :)

During my tests with btrfs in a RAID5 setup, I found a curious little
"problem".

Test environment:
Experimental Server:
Arch Linux, v5.0.x Kernel, almost up-to-date

in /srv I have a BTRFS/Raid 5 mounted:
/dev/sdb1        21T     10T  5,1T   67% /srv

[root@pc6 ~]# btrfs fi show /srv
Label: 'Archiv'  uuid: 662c9f40-56b0-4e4e-aa64-55039ff8f4f8
        Total devices 3 FS bytes used 9.98TiB
        devid    1 size 9.09TiB used 4.99TiB path /dev/sdb1
        devid    2 size 5.46TiB used 4.99TiB path /dev/sdc1
        devid    3 size 5.46TiB used 4.99TiB path /dev/sde1

[root@pc6 ~]# btrfs fi df /srv
Data, RAID5: total=9.97TiB, used=9.97TiB
System, RAID5: total=64.00MiB, used=896.00KiB
Metadata, RAID5: total=15.00GiB, used=13.38GiB
GlobalReserve, single: total=512.00MiB, used=0.00B


I had a problem with a defective hard drive (6 TB) and replaced all
drives with 10 TB ones using the btrfs tools.
This task worked fine.

All partitions sdb1, sdc1, and sde1 are the same size: 9.0 TiB. But btrfs is
not using the bigger space on sdc1 and sde1; only 5.46 TiB is in use there,
even though 9.0 TiB is available, so about 3.6 TiB per drive is unused.


Any Ideas?

Kind regards
Jürgen Sauer

[-- Attachment #2: juergen_sauer.vcf --]
[-- Type: text/x-vcard, Size: 389 bytes --]

begin:vcard
fn;quoted-printable:J=C3=BCrgen Sauer
n;quoted-printable:Sauer;J=C3=BCrgen
org:automatiX GmbH
adr:;;Neue Str. 11;Schwanewede;Niedersachsen;28790;Deutschland
email;internet:juergen.sauer@automatix.de
tel;work:+49 4209 4699
tel;fax:+49 4209 4644
tel;home:+49 4209 4653
tel;cell:+49 162 9699 259
x-mozilla-html:FALSE
url:https://automatix.de
version:2.1
end:vcard


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: BTRFS Raid 5 Space missing - ideas ?
  2019-04-20 10:46 BTRFS Raid 5 Space missing - ideas ? Juergen Sauer
@ 2019-04-20 20:19 ` Adam Borowski
  2019-04-21  4:39   ` Andrei Borzenkov
  0 siblings, 1 reply; 5+ messages in thread
From: Adam Borowski @ 2019-04-20 20:19 UTC (permalink / raw)
  To: Juergen Sauer; +Cc: linux-btrfs

On Sat, Apr 20, 2019 at 12:46:16PM +0200, Juergen Sauer wrote:
> I wish you happy Easter days ahead :)

Same to you!

> During my tests with btrfs in a RAID5 setup, I found a curious little
> "problem".

>         Total devices 3 FS bytes used 9.98TiB
>         devid    1 size 9.09TiB used 4.99TiB path /dev/sdb1
>         devid    2 size 5.46TiB used 4.99TiB path /dev/sdc1
>         devid    3 size 5.46TiB used 4.99TiB path /dev/sde1

> All partitions sdb1, sdc1, and sde1 are the same size: 9.0 TiB. But btrfs
> is not using the bigger space on sdc1 and sde1; only 5.46 TiB is in use
> there, even though 9.0 TiB is available, so about 3.6 TiB per drive is unused.

It's working as expected: while btrfs does RAID per block group rather than
per whole block device, there's no way to place a raid5 block group in a way
that doesn't require at least 3 devices.  This means with a 3-disk setup the
space utilized will be only as big as the smallest one.

This is also the case for raid1 on 2-disk, and for raid10 on 4-disk.
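To make the constraint concrete, here is a minimal Python sketch of greedy
raid5 chunk allocation -- an illustration only, not the actual kernel
allocator: each block group takes one strip from each of the three devices
with the most free space, so allocation stops once fewer than three devices
have room left.

```python
def raid5_usable(sizes, unit=1):
    """Greedy sketch of raid5 chunk allocation over 3 devices: each
    block group takes one strip on each of the 3 devices with the most
    free space; 2 of the 3 strips hold data, 1 holds parity."""
    free = sorted(sizes, reverse=True)
    data = 0
    while free[2] >= unit:       # need 3 devices with free space
        for i in range(3):
            free[i] -= unit
        data += 2 * unit
        free.sort(reverse=True)
    return data, free

# Roughly the sizes from the report above, in whole TiB:
print(raid5_usable([9, 5, 5]))   # (10, [4, 0, 0]): 4 TiB stranded on sdb1
```

With one 9 TiB device and two 5 TiB devices, the smallest pair caps the
allocation, leaving 4 TiB on the big disk unusable -- exactly the situation
in the report.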

Btrfs can make use of uneven disks only when it has some freedom in how to
place the data.

There's a tool that lets you visualize space utilization:
    http://carfax.org.uk/btrfs-usage/
or a command-line implementation:
    btrfs-space-calculator (package python[3]-btrfs)


By the way, you can greatly improve performance and safety by converting the
metadata profile to RAID1: "btrfs balance start -mconvert=raid1 /srv".  RAID5
is very slow for random writes, which is nearly all metadata write access;
RAID1 doesn't suffer from this problem -- and metadata tends to be only
around 1-2% of the space, so having it take a bit more doesn't hurt.

It would also help with your utilization problem, were it not that metadata
uses so little space.  Mixing in RAID1 block groups means the space not taken
by RAID5 can be recovered, by taking twice as much from sdb1 as from each of
sdc1 and sde1:

sdb1 *********************
sdc1 * * * * * * * * * * *
sde1  * * * * * * * * * *
(each RAID1 block group is either sdb1+sdc1 or sdb1+sde1)
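The same greedy simulation adapted to RAID1 pairing (again just an
illustration, not the real allocator) shows why uneven disks stop mattering
once each block group needs only two devices:

```python
def raid1_usable(sizes, unit=1):
    """Greedy sketch of RAID1 allocation: each block group mirrors one
    chunk on the 2 devices with the most free space."""
    free = sorted(sizes, reverse=True)
    data = 0
    while free[1] >= unit:       # need only 2 devices with free space
        free[0] -= unit
        free[1] -= unit
        data += unit
        free.sort(reverse=True)
    return data, free

print(raid1_usable([9, 5, 5]))   # (9, [1, 0, 0]): almost everything used
```

Because the allocator can always pick the big disk plus whichever small disk
has more room, only 1 TiB of the 19 TiB raw space goes unused, versus 4 TiB
stranded under 3-device RAID5.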


Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Did ya know that typing "test -j8" instead of "ctest -j8"
⢿⡄⠘⠷⠚⠋⠀ will make your testsuite pass much faster, and fix bugs?
⠈⠳⣄⠀⠀⠀⠀

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: BTRFS Raid 5 Space missing - ideas ?
  2019-04-20 20:19 ` Adam Borowski
@ 2019-04-21  4:39   ` Andrei Borzenkov
  2019-04-21  6:50     ` [solved] " Juergen Sauer
  2019-04-22  0:05     ` Zygo Blaxell
  0 siblings, 2 replies; 5+ messages in thread
From: Andrei Borzenkov @ 2019-04-21  4:39 UTC (permalink / raw)
  To: Adam Borowski, Juergen Sauer; +Cc: linux-btrfs

On 20.04.2019 23:19, Adam Borowski wrote:
> On Sat, Apr 20, 2019 at 12:46:16PM +0200, Juergen Sauer wrote:
>> I wish you happy Easter days ahead :)
> 
> Same to you!
> 
>> During my tests with btrfs in a RAID5 setup, I found a curious little
>> "problem".
> 
>>         Total devices 3 FS bytes used 9.98TiB
>>         devid    1 size 9.09TiB used 4.99TiB path /dev/sdb1
>>         devid    2 size 5.46TiB used 4.99TiB path /dev/sdc1
>>         devid    3 size 5.46TiB used 4.99TiB path /dev/sde1
> 
>> All partitions sdb1, sdc1, and sde1 are the same size: 9.0 TiB. But btrfs
>> is not using the bigger space on sdc1 and sde1; only 5.46 TiB is in use
>> there, even though 9.0 TiB is available, so about 3.6 TiB per drive is unused.
> 
> It's working as expected: while btrfs does RAID per block group rather than
> per whole block device, there's no way to place a raid5 block group in a way
> that doesn't require at least 3 devices.  This means with a 3-disk setup the
> space utilized will be only as big as the smallest one.
> 

But as reported, all drives were replaced with larger ones, yet only one
drive shows the increased size: "All partitions sdb1 sdc1 sde1 are the same
size".

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [solved] Re: BTRFS Raid 5 Space missing - ideas ?
  2019-04-21  4:39   ` Andrei Borzenkov
@ 2019-04-21  6:50     ` Juergen Sauer
  2019-04-22  0:05     ` Zygo Blaxell
  1 sibling, 0 replies; 5+ messages in thread
From: Juergen Sauer @ 2019-04-21  6:50 UTC (permalink / raw)
  Cc: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1465 bytes --]

On 21.04.19 at 06:39, Andrei Borzenkov wrote:
> On 20.04.2019 23:19, Adam Borowski wrote:
>> On Sat, Apr 20, 2019 at 12:46:16PM +0200, Juergen Sauer wrote:
>>> I wish you happy Easter days ahead :)
>>
>> Same to you!
>>
>>> During my tests with btrfs in a RAID5 setup, I found a curious little
>>> "problem".
>>
>>>         Total devices 3 FS bytes used 9.98TiB
>>>         devid    1 size 9.09TiB used 4.99TiB path /dev/sdb1
>>>         devid    2 size 5.46TiB used 4.99TiB path /dev/sdc1
>>>         devid    3 size 5.46TiB used 4.99TiB path /dev/sde1

Mr. Bethencourt sent me the helpful hint last night:
I had overlooked the updated syntax of the resize command.

By default, "btrfs filesystem resize" silently operates on the first
device; no hint is issued.

On multi-device btrfs volumes it is mandatory to specify the devid that
"resize" should operate on.

btrfs filesystem resize 2:max /srv
btrfs filesystem resize 3:max /srv

This did the job as expected.
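For volumes with many devices, the devids can be pulled out of the "btrfs
filesystem show" output. A sketch (it assumes the output format shown above
and only builds the command strings; nothing is executed):

```python
import re

# Sample "btrfs filesystem show" output, as printed earlier in this thread.
show_output = """\
Label: 'Archiv'  uuid: 662c9f40-56b0-4e4e-aa64-55039ff8f4f8
        Total devices 3 FS bytes used 9.98TiB
        devid    1 size 9.09TiB used 4.99TiB path /dev/sdb1
        devid    2 size 5.46TiB used 4.99TiB path /dev/sdc1
        devid    3 size 5.46TiB used 4.99TiB path /dev/sde1
"""

# Extract every devid and build one resize command per device.
devids = re.findall(r"devid\s+(\d+)\b", show_output)
commands = [f"btrfs filesystem resize {d}:max /srv" for d in devids]
print("\n".join(commands))
```

Running the printed commands (or piping them to a shell) grows every device
to its maximum, instead of only devid 1.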

This is the result:
[root@pc6 ~]# btrfs filesystem show /srv
Label: 'Archiv'  uuid: 662c9f40-56b0-4e4e-aa64-55039ff8f4f8
        Total devices 3 FS bytes used 9.98TiB
        devid    1 size 9.09TiB used 4.99TiB path /dev/sdb1
        devid    2 size 9.09TiB used 4.99TiB path /dev/sdc1
        devid    3 size 9.09TiB used 4.99TiB path /dev/sde1

[root@pc6 ~]# df -h /srv
/dev/sdb1        28T     10T   13T   45% /srv

Thank you, Mr. Bethencourt and @all

Kind regards
Jürgen Sauer


[-- Attachment #2: juergen_sauer.vcf --]
[-- Type: text/x-vcard, Size: 389 bytes --]

begin:vcard
fn;quoted-printable:J=C3=BCrgen Sauer
n;quoted-printable:Sauer;J=C3=BCrgen
org:automatiX GmbH
adr:;;Neue Str. 11;Schwanewede;Niedersachsen;28790;Deutschland
email;internet:juergen.sauer@automatix.de
tel;work:+49 4209 4699
tel;fax:+49 4209 4644
tel;home:+49 4209 4653
tel;cell:+49 162 9699 259
x-mozilla-html:FALSE
url:https://automatix.de
version:2.1
end:vcard


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: BTRFS Raid 5 Space missing - ideas ?
  2019-04-21  4:39   ` Andrei Borzenkov
  2019-04-21  6:50     ` [solved] " Juergen Sauer
@ 2019-04-22  0:05     ` Zygo Blaxell
  1 sibling, 0 replies; 5+ messages in thread
From: Zygo Blaxell @ 2019-04-22  0:05 UTC (permalink / raw)
  To: Andrei Borzenkov; +Cc: Adam Borowski, Juergen Sauer, linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1463 bytes --]

On Sun, Apr 21, 2019 at 07:39:59AM +0300, Andrei Borzenkov wrote:
> On 20.04.2019 23:19, Adam Borowski wrote:
> > On Sat, Apr 20, 2019 at 12:46:16PM +0200, Juergen Sauer wrote:
> >> I wish you happy Easter days ahead :)
> > 
> > Same to you!
> > 
> >> During my tests with btrfs in a RAID5 setup, I found a curious little
> >> "problem".
> > 
> >>         Total devices 3 FS bytes used 9.98TiB
> >>         devid    1 size 9.09TiB used 4.99TiB path /dev/sdb1
> >>         devid    2 size 5.46TiB used 4.99TiB path /dev/sdc1
> >>         devid    3 size 5.46TiB used 4.99TiB path /dev/sde1
> > 
> >> All partitions sdb1, sdc1, and sde1 are the same size: 9.0 TiB. But btrfs
> >> is not using the bigger space on sdc1 and sde1; only 5.46 TiB is in use
> >> there, even though 9.0 TiB is available, so about 3.6 TiB per drive is unused.
> > 
> > It's working as expected: while btrfs does RAID per block group rather than
> > per whole block device, there's no way to place a raid5 block group in a way
> > that doesn't require at least 3 devices.  This means with a 3-disk setup the
> > space utilized will be only as big as the smallest one.
> > 
> 
> But as reported, all drives were replaced with larger ones, yet only one
> drive shows the increased size: "All partitions sdb1 sdc1 sde1 are the same
> size".

Did you run:

	btrfs fi resize 2:max /path/to/fs
	btrfs fi resize 3:max /path/to/fs

It looks like you only ran:

	btrfs fi resize 1:max /path/to/fs


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 195 bytes --]

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2019-04-22  0:09 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-20 10:46 BTRFS Raid 5 Space missing - ideas ? Juergen Sauer
2019-04-20 20:19 ` Adam Borowski
2019-04-21  4:39   ` Andrei Borzenkov
2019-04-21  6:50     ` [solved] " Juergen Sauer
2019-04-22  0:05     ` Zygo Blaxell
