From: Joshua Houghton <joshua.houghton@yandex.ru>
To: linux-btrfs@vger.kernel.org
Cc: Goffredo Baroncelli <kreijack@libero.it>,
	DanglingPointer <danglingpointerexception@gmail.com>,
	Torstein Eide <torsteine@gmail.com>
Subject: Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
Date: Mon, 13 Apr 2020 10:08:50 +0000
Message-ID: <4521727.GXAFRqVoOG@arch>
In-Reply-To: <20200318211157.11090-1-kreijack@libero.it>

On Wednesday, 18 March 2020 21:11:56 UTC Goffredo Baroncelli wrote:
> Hi all,
> 
> this patch adds support for the raid5/6 profiles in the command
> 'btrfs filesystem usage'.
> 
> Until now the problem was that the values r_{data,metadata}_used are not
> easy to get for RAID5/6, because they depend on the number of disks.
> And in a single filesystem it is possible to have several raid5/6 chunks,
> each with a different number of disks.
> 
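> To make the dependency concrete (rough figures; the names below are only
> for illustration and are not taken from the code): a RAID5 chunk striped
> over n disks stores n-1 data strips for every n raw strips, and a RAID6
> chunk stores n-2, i.e.
> 
> 	raw_bytes = logical_bytes * n / (n - 1);	/* raid5, n disks */
> 	raw_bytes = logical_bytes * n / (n - 2);	/* raid6, n disks */
> 
> A 3-disk RAID5 chunk therefore costs 1.5 raw bytes per logical byte while
> a 4-disk one costs ~1.33, so no single filesystem-wide ratio can convert
> l_data_used into r_data_used exactly.
> 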
> In order to bypass this issue, I reworked the code to get rid of these
> values where possible and to use the l_{data,metadata}_used ones.
> Notably the biggest difference is in how the free space estimation
> is computed. Before it was:
> 
> 	free_estimated = (r_data_chunks - r_data_used) / data_ratio;
> 
> After it is:
> 
> 	free_estimated = l_data_chunks - l_data_used;
> 
> which gives the same result when only one raid level is in use, but a
> better result when raid levels are mixed. I have to point out that the
> code previously contained a comment claiming the opposite.
> 
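> As a rough worked example of the difference (illustrative numbers only,
> ignoring unallocated space and taking data_ratio as the aggregate
> r_data_chunks / l_data_chunks): suppose a full 1GiB RAID1 data chunk
> next to an empty 1GiB SINGLE data chunk. Then
> 
> 	l_data_chunks = 2GiB, l_data_used = 1GiB
> 	r_data_chunks = 3GiB, r_data_used = 2GiB, data_ratio = 1.5
> 
> 	before: (3GiB - 2GiB) / 1.5 = 0.67GiB
> 	after:   2GiB - 1GiB        = 1GiB
> 
> and the second value matches the 1GiB that can actually still be written
> into the empty SINGLE chunk; with a single profile the two formulas give
> the same number.
> 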
> The other place where the r_{data,metadata}_used values are used is the
> "Used:" field. For this case I estimated these values using the
> following formula (only for raid5/6 profiles):
> 
> 	r_data_used += (double)r_data_chunks * l_data_used /
>                                l_data_chunks;
> 
> Note that this is not fully accurate. E.g. suppose there are two raid5
> chunks, the first one spanning 3 disks, the second one spanning 4 disks,
> and that each chunk is 1GB.
> r_data_chunks_r56, l_data_used_r56 and l_data_chunks_r56 are completely
> defined, but the real r_data_used differs between these two cases:
> - the first chunk is full and the second one is empty
> - the first chunk is empty and the second one is full
> However, this error now affects only the "Used:" field.
> 
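> With the same scenario and rough figures (reading "1GB" as 1GiB of
> logical chunk space): the 3-disk chunk occupies 1.5GiB raw, the 4-disk
> chunk ~1.33GiB raw, so r_data_chunks ~= 2.83GiB, l_data_chunks = 2GiB
> and, with one chunk full and one empty, l_data_used = 1GiB. The formula
> then gives
> 
> 	r_data_used ~= 2.83GiB * 1GiB / 2GiB ~= 1.42GiB
> 
> while the real value is 1.5GiB if the 3-disk chunk is the full one and
> ~1.33GiB if it is the 4-disk one, so the estimate simply sits in between.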
> 
> So now if you run 'btrfs fi us' on a raid6 filesystem you get:
> 
> $ sudo btrfs fi us /
> Overall:
>     Device size:		  40.00GiB
>     Device allocated:		   8.28GiB
>     Device unallocated:		  31.72GiB
>     Device missing:		     0.00B
>     Used:			   5.00GiB
>     Free (estimated):		  17.36GiB	(min: 17.36GiB)
>     Data ratio:			      2.00
>     Metadata ratio:		      0.00
>     Global reserve:		   3.25MiB	(used: 0.00B)
> 
> Data,RAID6: Size:4.00GiB, Used:2.50GiB (62.53%)
> [...]
> 
> Whereas before you got:
> 
> $ sudo btrfs fi us /
> WARNING: RAID56 detected, not implemented
> WARNING: RAID56 detected, not implemented
> WARNING: RAID56 detected, not implemented
> Overall:
>     Device size:		  40.00GiB
>     Device allocated:		     0.00B
>     Device unallocated:		  40.00GiB
>     Device missing:		     0.00B
>     Used:			     0.00B
>     Free (estimated):		     0.00B	(min: 8.00EiB)
>     Data ratio:			      0.00
>     Metadata ratio:		      0.00
>     Global reserve:		   3.25MiB	(used: 0.00B)
> 
> Data,RAID6: Size:4.00GiB, Used:2.50GiB (62.53%)
> [...]
> 
> 
> I want to point out that this patch should be compatible with my
> previous patch set (the one related to the new ioctl
> BTRFS_IOC_GET_CHUNK_INFO). If both are merged we will have a 'btrfs fi us'
> command with full support for raid5/6 filesystems without needing root
> capabilities.
> 
> Comments are welcome.
> BR
> G.Baroncelli

Hi Goffredo

Thank you for this. It's something I've been wanting for a while. Do you
know why I get significantly different results in the overall summary when
I do not run it as root? I'm not sure if this is a bug or a limitation.

When I run it as root it looks to be showing the correct values.

joshua@r2400g:~/development/btrfs-progs$ colordiff -u <(./btrfs fi us /mnt/raid/) <(sudo ./btrfs fi us /mnt/raid/)
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
--- /dev/fd/63  2020-04-13 10:54:26.833747190 +0100
+++ /dev/fd/62  2020-04-13 10:54:26.843746984 +0100
@@ -1,17 +1,32 @@
 Overall:
     Device size:                 29.11TiB
-    Device allocated:           284.06GiB
-    Device unallocated:                  28.83TiB
-    Device missing:              29.11TiB
-    Used:                       280.99GiB
-    Free (estimated):               0.00B      (min: 14.95TiB)
-    Data ratio:                              0.00
+    Device allocated:            19.39TiB
+    Device unallocated:                   9.72TiB
+    Device missing:                 0.00B
+    Used:                        18.67TiB
+    Free (estimated):             7.82TiB      (min: 5.39TiB)
+    Data ratio:                              1.33
     Metadata ratio:                  2.00
     Global reserve:             512.00MiB      (used: 0.00B)
 
 Data,RAID5: Size:14.33TiB, Used:13.80TiB (96.27%)
+   /dev/mapper/traid3     4.78TiB
+   /dev/mapper/traid1     4.78TiB
+   /dev/mapper/traid2     4.78TiB
+   /dev/mapper/traid4     4.78TiB
 
 Metadata,RAID1: Size:142.00GiB, Used:140.49GiB (98.94%)
+   /dev/mapper/traid3    63.00GiB
+   /dev/mapper/traid1    64.00GiB
+   /dev/mapper/traid2    63.00GiB
+   /dev/mapper/traid4    94.00GiB
 
 System,RAID1: Size:32.00MiB, Used:1.00MiB (3.12%)
+   /dev/mapper/traid1    32.00MiB
+   /dev/mapper/traid4    32.00MiB
 
+Unallocated:
+   /dev/mapper/traid3     2.44TiB
+   /dev/mapper/traid1     2.44TiB
+   /dev/mapper/traid2     2.44TiB
+   /dev/mapper/traid4     2.41TiB


This is in contrast to raid1, which seems to be mostly correct irrespective
of which user I run as.


joshua@arch:/var/joshua$ colordiff -u <(btrfs fi us raid/) <(sudo btrfs fi us raid/)
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
--- /dev/fd/63  2020-04-13 09:52:54.630750079 +0000
+++ /dev/fd/62  2020-04-13 09:52:54.637416835 +0000
@@ -2,7 +2,7 @@
     Device size:                  8.00GiB
     Device allocated:             1.32GiB
     Device unallocated:                   6.68GiB
-    Device missing:               8.00GiB
+    Device missing:                 0.00B
     Used:                       383.40MiB
     Free (estimated):             3.55GiB      (min: 3.55GiB)
     Data ratio:                              2.00
@@ -10,8 +10,17 @@
     Global reserve:               3.25MiB      (used: 0.00B)
 
 Data,RAID1: Size:409.56MiB, Used:191.28MiB (46.70%)
+   /dev/loop0   409.56MiB
+   /dev/loop1   409.56MiB
 
 Metadata,RAID1: Size:256.00MiB, Used:416.00KiB (0.16%)
+   /dev/loop0   256.00MiB
+   /dev/loop1   256.00MiB
 
 System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
+   /dev/loop0     8.00MiB
+   /dev/loop1     8.00MiB
 
+Unallocated:
+   /dev/loop0     3.34GiB
+   /dev/loop1     3.34GiB

Does anyone know if this is something we can fix? I'm happy to take a look.
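
In case it helps with that, here is a rough standalone sketch of what an
unprivileged caller can query through BTRFS_IOC_SPACE_INFO (which, as far
as I know, does not need root); the file name and layout below are mine,
not btrfs-progs code. My guess is that the unprivileged path has to work
from this kind of per-profile summary while the root path can read the
detailed chunk info, which would explain the gap above; I haven't verified
that in the code yet.

/* space_info.c - minimal sketch: list what BTRFS_IOC_SPACE_INFO reports
 * for a mounted btrfs filesystem, without requiring root.
 * Build: gcc -Wall -o space_info space_info.c
 * Run:   ./space_info /mnt/raid
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* First call with space_slots == 0 only reports how many
	 * (block group type, profile) entries the kernel has. */
	struct btrfs_ioctl_space_args probe = { 0 };
	if (ioctl(fd, BTRFS_IOC_SPACE_INFO, &probe) < 0) {
		perror("BTRFS_IOC_SPACE_INFO (probe)");
		return 1;
	}

	size_t sz = sizeof(probe) +
		probe.total_spaces * sizeof(struct btrfs_ioctl_space_info);
	struct btrfs_ioctl_space_args *args = calloc(1, sz);
	if (!args)
		return 1;
	args->space_slots = probe.total_spaces;

	/* Second call fills in one entry per (type, profile) pair. */
	if (ioctl(fd, BTRFS_IOC_SPACE_INFO, args) < 0) {
		perror("BTRFS_IOC_SPACE_INFO");
		return 1;
	}

	/* flags encode the block group type and raid profile; the sizes
	 * appear to be the logical values (what 'fi usage' prints as
	 * Size/Used), not raw device bytes. */
	for (__u64 i = 0; i < args->total_spaces; i++)
		printf("flags=0x%llx total=%llu used=%llu\n",
		       (unsigned long long)args->spaces[i].flags,
		       (unsigned long long)args->spaces[i].total_bytes,
		       (unsigned long long)args->spaces[i].used_bytes);

	free(args);
	close(fd);
	return 0;
}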

Joshua Houghton


