* [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: Goffredo Baroncelli @ 2020-03-18 21:11 UTC
  To: linux-btrfs


Hi all,

this patch adds support for the raid5/6 profiles in the command
'btrfs filesystem usage'.

Until now the problem was that the values r_{data,metadata}_used are not
easy to get for RAID5/6, because they depend on the number of disks.
And in a filesystem it is possible to have several raid5/6 chunks, each
with a different number of disks.

To work around this issue, I reworked the code to get rid of these
values where possible and to use the l_{data,metadata}_used ones.
Notably, the biggest difference is in how the free space estimation
is computed. Before it was:

	free_estimated = (r_data_chunks - r_data_used) / data_ratio;

After it is:

	free_estimated = l_data_chunks - l_data_used;

which gives the same result when no raid levels are mixed, and a
better result otherwise. I have to point out that the code previously
contained a comment claiming the opposite.
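
To see why the second formula behaves better when raid levels are
mixed, here is a minimal standalone sketch with made-up numbers: one
full RAID1 data chunk plus one empty SINGLE data chunk (these values
are purely hypothetical, not taken from a real filesystem):

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical mixed-profile filesystem, sizes in GiB */
		double l_data_chunks = 2.0; /* 1GiB RAID1 + 1GiB SINGLE, logical */
		double l_data_used   = 1.0; /* the RAID1 chunk is full */
		double r_data_chunks = 3.0; /* 2GiB raw (RAID1) + 1GiB raw (SINGLE) */
		double r_data_used   = 2.0; /* raw usage of the full RAID1 chunk */
		double data_ratio = r_data_chunks / l_data_chunks; /* 1.5 */

		/* before: raw free space divided by the average data ratio */
		printf("old: %.2f GiB\n",
		       (r_data_chunks - r_data_used) / data_ratio); /* 0.67 */
		/* after: logical free space, exact here (the SINGLE chunk) */
		printf("new: %.2f GiB\n",
		       l_data_chunks - l_data_used); /* 1.00 */
		return 0;
	}

The old formula underestimates here because it divides the raw free
space by a ratio averaged over both profiles.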

The other place where the r_{data,metadata}_used values are used is the
"Used:" field. For this case I estimated these values using the
following formula (only for raid5/6 profiles):

	r_data_used += (double)r_data_chunks_r56 * l_data_used_r56 /
                               l_data_chunks_r56;

Note that this is not fully accurate. E.g. suppose there are two raid5
chunks, the first one with 3 disks, the second one with 4 disks, and that
each chunk is 1GiB.
r_data_chunks_r56, l_data_used_r56 and l_data_chunks_r56 are completely
determined, but the real r_data_used is quite different in these two cases:
- the first chunk is full and the second one is empty
- the first chunk is empty and the second one is full
However, this error now affects only the "Used:" field.
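
For the example above, here is a small standalone sketch of the numbers
involved (assuming 1GiB logical chunks; the 3/2 and 4/3 ratios follow
from the raid5 parity overhead with 3 and 4 disks; again, the numbers
are purely hypothetical):

	#include <stdio.h>

	int main(void)
	{
		/* two hypothetical RAID5 data chunks, 1GiB of logical space each */
		double r_chunk3 = 1.0 * 3 / 2; /* raw size with 3 disks */
		double r_chunk4 = 1.0 * 4 / 3; /* raw size with 4 disks */

		double r_data_chunks_r56 = r_chunk3 + r_chunk4; /* ~2.83GiB raw */
		double l_data_chunks_r56 = 2.0; /* logical */
		double l_data_used_r56   = 1.0; /* one chunk's worth of data */

		/* the linear approximation used for the "Used:" field */
		double r_data_used = r_data_chunks_r56 * l_data_used_r56 /
				     l_data_chunks_r56;

		printf("approximation:     %.2f GiB\n", r_data_used); /* 1.42 */
		printf("3-disk chunk full: %.2f GiB\n", r_chunk3);    /* 1.50 */
		printf("4-disk chunk full: %.2f GiB\n", r_chunk4);    /* 1.33 */
		return 0;
	}

The approximation (1.42GiB) falls between the two real extremes (1.50GiB
and 1.33GiB), which is the best we can do with only the cumulative values.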


So now, if you run 'btrfs fi us' on a raid6 filesystem, you get:

$ sudo btrfs fi us / 
Overall:
    Device size:		  40.00GiB
    Device allocated:		   8.28GiB
    Device unallocated:		  31.72GiB
    Device missing:		     0.00B
    Used:			   5.00GiB
    Free (estimated):		  17.36GiB	(min: 17.36GiB)
    Data ratio:			      2.00
    Metadata ratio:		      0.00
    Global reserve:		   3.25MiB	(used: 0.00B)

Data,RAID6: Size:4.00GiB, Used:2.50GiB (62.53%)
[...]

Whereas before:

$ sudo btrfs fi us /
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
Overall:
    Device size:		  40.00GiB
    Device allocated:		     0.00B
    Device unallocated:		  40.00GiB
    Device missing:		     0.00B
    Used:			     0.00B
    Free (estimated):		     0.00B	(min: 8.00EiB)
    Data ratio:			      0.00
    Metadata ratio:		      0.00
    Global reserve:		   3.25MiB	(used: 0.00B)

Data,RAID6: Size:4.00GiB, Used:2.50GiB (62.53%)
[...]


I want to point out that this patch should be compatible with my
previous patch set (the one related to the new ioctl
BTRFS_IOC_GET_CHUNK_INFO). If both are merged, we will have a 'btrfs fi us'
command with full support for raid5/6 filesystems without needing root
capability.

Comments are welcome.
BR
G.Baroncelli

-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5



* [PATCH] Add support for the raid5/6 profiles in the btrfs fi us command.
From: Goffredo Baroncelli @ 2020-03-18 21:11 UTC
  To: linux-btrfs; +Cc: Goffredo Baroncelli

From: Goffredo Baroncelli <kreijack@inwind.it>

Signed-off-by: Goffredo Baroncelli <kreijack@inwind.it>

---
 cmds/filesystem-usage.c | 140 +++++++++++++++++++++++++++++-----------
 1 file changed, 101 insertions(+), 39 deletions(-)

diff --git a/cmds/filesystem-usage.c b/cmds/filesystem-usage.c
index aa7065d5..a85a209b 100644
--- a/cmds/filesystem-usage.c
+++ b/cmds/filesystem-usage.c
@@ -282,24 +282,44 @@ static struct btrfs_ioctl_space_args *load_space_info(int fd, const char *path)
 }
 
 /*
- * This function computes the space occupied by a *single* RAID5/RAID6 chunk.
- * The computation is performed on the basis of the number of stripes
- * which compose the chunk, which could be different from the number of devices
- * if a disk is added later.
+ * This function computes the size of the RAID5/RAID6 chunks
+ * and the maximum raw_size/logical_size ratio.
  */
-static void get_raid56_used(struct chunk_info *chunks, int chunkcount,
-		u64 *raid5_used, u64 *raid6_used)
+static void raid56_bgs_size(struct chunk_info *chunks, int chunkcount,
+		u64 *r_data_raid56,
+		u64 *r_metadata_raid56,
+		u64 *r_system_raid56,
+		double *data_ratio)
 {
 	struct chunk_info *info_ptr = chunks;
-	*raid5_used = 0;
-	*raid6_used = 0;
-
-	while (chunkcount-- > 0) {
-		if (info_ptr->type & BTRFS_BLOCK_GROUP_RAID5)
-			(*raid5_used) += info_ptr->size / (info_ptr->num_stripes - 1);
-		if (info_ptr->type & BTRFS_BLOCK_GROUP_RAID6)
-			(*raid6_used) += info_ptr->size / (info_ptr->num_stripes - 2);
+
+	while (chunkcount > 0) {
+		u64 size;
+		if (info_ptr->type & BTRFS_BLOCK_GROUP_RAID5) {
+			size = info_ptr->size /	(info_ptr->num_stripes - 1);
+			*data_ratio = max(*data_ratio,
+				1.0*info_ptr->num_stripes /
+					(info_ptr->num_stripes - 1));
+		} else if (info_ptr->type & BTRFS_BLOCK_GROUP_RAID6) {
+			size = info_ptr->size / (info_ptr->num_stripes - 2);
+			*data_ratio = max(*data_ratio,
+				1.0*info_ptr->num_stripes /
+					(info_ptr->num_stripes - 2));
+		} else {
+			/* other raid profiles... */
+			info_ptr++;
+			chunkcount--;
+			continue;
+		}
+		if (info_ptr->type & BTRFS_BLOCK_GROUP_DATA)
+			*r_data_raid56 += size;
+		else if (info_ptr->type & BTRFS_BLOCK_GROUP_METADATA)
+			*r_metadata_raid56 += size;
+		else if (info_ptr->type & BTRFS_BLOCK_GROUP_SYSTEM)
+			*r_system_raid56 += size;
+
 		info_ptr++;
+		chunkcount--;
 	}
 }
 
@@ -315,6 +335,7 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 	/*
 	 * r_* prefix is for raw data
 	 * l_* is for logical
+	 * *_r56 is for RAID5/RAID6
 	 */
 	u64 r_total_size = 0;	/* filesystem size, sum of device sizes */
 	u64 r_total_chunks = 0;	/* sum of chunks sizes on disk(s) */
@@ -322,23 +343,29 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 	u64 r_total_unused = 0;
 	u64 r_total_missing = 0;	/* sum of missing devices size */
 	u64 r_data_used = 0;
+	u64 l_data_used = 0;
+	u64 l_data_used_r56 = 0;
 	u64 r_data_chunks = 0;
+	u64 r_data_chunks_r56 = 0;
 	u64 l_data_chunks = 0;
+	u64 l_data_chunks_r56 = 0;
 	u64 r_metadata_used = 0;
+	u64 l_metadata_used = 0;
+	u64 l_metadata_used_r56 = 0;
 	u64 r_metadata_chunks = 0;
+	u64 r_metadata_chunks_r56 = 0;
 	u64 l_metadata_chunks = 0;
+	u64 l_metadata_chunks_r56 = 0;
 	u64 r_system_used = 0;
 	u64 r_system_chunks = 0;
 	double data_ratio;
 	double metadata_ratio;
 	/* logical */
-	u64 raid5_used = 0;
-	u64 raid6_used = 0;
 	u64 l_global_reserve = 0;
 	u64 l_global_reserve_used = 0;
 	u64 free_estimated = 0;
 	u64 free_min = 0;
-	int max_data_ratio = 1;
+	double max_data_ratio = 1;
 	int mixed = 0;
 
 	sargs = load_space_info(fd, path);
@@ -360,7 +387,14 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 		ret = 1;
 		goto exit;
 	}
-	get_raid56_used(chunkinfo, chunkcount, &raid5_used, &raid6_used);
+	/*
+	 * The data coming from the space info is not sufficient to compute
+	 * the r_{data,metadata,system}_chunks values, so we have to analyze
+	 * the chunk info.
+	 */
+	raid56_bgs_size(chunkinfo, chunkcount,
+		&r_data_chunks_r56, &r_metadata_chunks_r56, &r_system_chunks,
+		&max_data_ratio);
 
 	for (i = 0; i < sargs->total_spaces; i++) {
 		int ratio;
@@ -389,9 +423,6 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 		else
 			ratio = 1;
 
-		if (!ratio)
-			warning("RAID56 detected, not implemented");
-
 		if (ratio > max_data_ratio)
 			max_data_ratio = ratio;
 
@@ -404,14 +435,26 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 			mixed = 1;
 		}
 		if (flags & BTRFS_BLOCK_GROUP_DATA) {
-			r_data_used += sargs->spaces[i].used_bytes * ratio;
-			r_data_chunks += sargs->spaces[i].total_bytes * ratio;
+			l_data_used += sargs->spaces[i].used_bytes;
 			l_data_chunks += sargs->spaces[i].total_bytes;
+			if (!ratio) {
+				l_data_used_r56 += sargs->spaces[i].used_bytes;
+				l_data_chunks_r56 += sargs->spaces[i].total_bytes;
+			} else {
+				r_data_used += sargs->spaces[i].used_bytes * ratio;
+				r_data_chunks += sargs->spaces[i].total_bytes * ratio;
+			}
 		}
 		if (flags & BTRFS_BLOCK_GROUP_METADATA) {
-			r_metadata_used += sargs->spaces[i].used_bytes * ratio;
-			r_metadata_chunks += sargs->spaces[i].total_bytes * ratio;
+			l_metadata_used += sargs->spaces[i].used_bytes;
 			l_metadata_chunks += sargs->spaces[i].total_bytes;
+			if (!ratio) {
+				l_metadata_used_r56 += sargs->spaces[i].used_bytes;
+				l_metadata_chunks_r56 += sargs->spaces[i].total_bytes;
+			} else {
+				r_metadata_used += sargs->spaces[i].used_bytes * ratio;
+				r_metadata_chunks += sargs->spaces[i].total_bytes * ratio;
+			}
 		}
 		if (flags & BTRFS_BLOCK_GROUP_SYSTEM) {
 			r_system_used += sargs->spaces[i].used_bytes * ratio;
@@ -419,6 +462,29 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 		}
 	}
 
+	/*
+	 * Add the size of RAID5/6 chunks to the total
+	 */
+	r_data_chunks += r_data_chunks_r56;
+	r_metadata_chunks += r_metadata_chunks_r56;
+
+	/*
+	 * Add the used size of the RAID5/6 chunks to *_used.
+	 * This computation is not exact; however it is the best approximation
+	 * possible with the data currently available.
+	 * E.g. in case of two RAID5 block groups with a different number
+	 * of disks (unusual but not impossible), the real r_*_used is
+	 * different depending on which chunk is filled first. Because we
+	 * only have cumulative info per profile, we make a linear
+	 * approximation.
+	 */
+	if (l_data_chunks_r56 > 1024)
+		r_data_used += (double)r_data_chunks_r56 * l_data_used_r56 /
+				l_data_chunks_r56;
+	if (l_metadata_chunks_r56 > 1024)
+		r_metadata_used += (double)r_metadata_chunks_r56 * l_metadata_used_r56 /
+				l_metadata_chunks_r56;
+
 	r_total_chunks = r_data_chunks + r_system_chunks;
 	r_total_used = r_data_used + r_system_used;
 	if (!mixed) {
@@ -434,21 +500,11 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 	else
 		metadata_ratio = (double)r_metadata_chunks / l_metadata_chunks;
 
-#if 0
-	/* add the raid5/6 allocated space */
-	total_chunks += raid5_used + raid6_used;
-#endif
-
 	/*
-	 * We're able to fill at least DATA for the unused space
-	 *
-	 * With mixed raid levels, this gives a rough estimate but more
-	 * accurate than just counting the logical free space
-	 * (l_data_chunks - l_data_used)
-	 *
-	 * In non-mixed case there's no difference.
+	 * We're able to fill at least DATA for the unused space in the
+	 * already allocated chunks.
 	 */
-	free_estimated = (r_data_chunks - r_data_used) / data_ratio;
+	free_estimated = l_data_chunks - l_data_used;
 	/*
 	 * For mixed-bg the metadata are left out in calculations thus global
 	 * reserve would be lost. Part of it could be permanently allocated,
@@ -459,7 +515,13 @@ static int print_filesystem_usage_overall(int fd, struct chunk_info *chunkinfo,
 	free_min = free_estimated;
 
 	/* Chop unallocatable space */
-	/* FIXME: must be applied per device */
+	/*
+	 * FIXME: must be applied per device
+	 * FIXME: we should use the info returned by the kernel function
+	 * 	get_alloc_profile() to compute the data_ratio
+	 * FIXME: we should use 4 (as in RAID1C4) to compute the
+	 * 	max_data_ratio
+	 */
 	if (r_total_unused >= MIN_UNALOCATED_THRESH) {
 		free_estimated += r_total_unused / data_ratio;
 		/* Match the calculation of 'df', use the highest raid ratio */
-- 
2.26.0.rc2



* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: Goffredo Baroncelli @ 2020-03-25 20:12 UTC
  To: linux-btrfs

PING,

is anyone interested in this kind of patch?

BR
G.Baroncelli

On 3/18/20 10:11 PM, Goffredo Baroncelli wrote:
> 
> Hi all,
> 
> this patch adds support for the raid5/6 profiles in the command
> 'btrfs filesystem usage'.
[...]


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: DanglingPointer @ 2020-03-31 21:55 UTC
  To: Goffredo Baroncelli, linux-btrfs

Yes I do!

Please push/pressure to get this patch reviewed!

BR,
D.Pointer


On 26/3/20 7:12 am, Goffredo Baroncelli wrote:
> PING,
>
> is anyone interested in this kind of patch?
[...]


* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: Joshua Houghton @ 2020-04-13 10:08 UTC
  To: linux-btrfs; +Cc: Goffredo Baroncelli, DanglingPointer, Torstein Eide

On Wednesday, 18 March 2020 21:11:56 UTC Goffredo Baroncelli wrote:
> Hi all,
> 
> this patch adds support for the raid5/6 profiles in the command
> 'btrfs filesystem usage'.
[...]

Hi Goffredo

Thank you for this. It's something I've been wanting for a while. Do
you know why I get significantly different results in the overall summary
when I do not run it as root? I'm not sure if this is a bug or a limitation.

When I run it as root it looks to be showing the correct values.

joshua@r2400g:~/development/btrfs-progs$ colordiff -u <(./btrfs fi us /mnt/raid/) <(sudo ./btrfs fi us /mnt/raid/)
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
--- /dev/fd/63  2020-04-13 10:54:26.833747190 +0100
+++ /dev/fd/62  2020-04-13 10:54:26.843746984 +0100
@@ -1,17 +1,32 @@
 Overall:
     Device size:                 29.11TiB
-    Device allocated:           284.06GiB
-    Device unallocated:                  28.83TiB
-    Device missing:              29.11TiB
-    Used:                       280.99GiB
-    Free (estimated):               0.00B      (min: 14.95TiB)
-    Data ratio:                              0.00
+    Device allocated:            19.39TiB
+    Device unallocated:                   9.72TiB
+    Device missing:                 0.00B
+    Used:                        18.67TiB
+    Free (estimated):             7.82TiB      (min: 5.39TiB)
+    Data ratio:                              1.33
     Metadata ratio:                  2.00
     Global reserve:             512.00MiB      (used: 0.00B)
 
 Data,RAID5: Size:14.33TiB, Used:13.80TiB (96.27%)
+   /dev/mapper/traid3     4.78TiB
+   /dev/mapper/traid1     4.78TiB
+   /dev/mapper/traid2     4.78TiB
+   /dev/mapper/traid4     4.78TiB
 
 Metadata,RAID1: Size:142.00GiB, Used:140.49GiB (98.94%)
+   /dev/mapper/traid3    63.00GiB
+   /dev/mapper/traid1    64.00GiB
+   /dev/mapper/traid2    63.00GiB
+   /dev/mapper/traid4    94.00GiB
 
 System,RAID1: Size:32.00MiB, Used:1.00MiB (3.12%)
+   /dev/mapper/traid1    32.00MiB
+   /dev/mapper/traid4    32.00MiB
 
+Unallocated:
+   /dev/mapper/traid3     2.44TiB
+   /dev/mapper/traid1     2.44TiB
+   /dev/mapper/traid2     2.44TiB
+   /dev/mapper/traid4     2.41TiB


This is in contrast to raid1, which seems to be mostly correct,
irrespective of which user I run as.


joshua@arch:/var/joshua$ colordiff -u <(btrfs fi us raid/) <(sudo btrfs fi us raid/)
WARNING: cannot read detailed chunk info, per-device usage will not be shown, run as root
--- /dev/fd/63  2020-04-13 09:52:54.630750079 +0000
+++ /dev/fd/62  2020-04-13 09:52:54.637416835 +0000
@@ -2,7 +2,7 @@
     Device size:                  8.00GiB
     Device allocated:             1.32GiB
     Device unallocated:                   6.68GiB
-    Device missing:               8.00GiB
+    Device missing:                 0.00B
     Used:                       383.40MiB
     Free (estimated):             3.55GiB      (min: 3.55GiB)
     Data ratio:                              2.00
@@ -10,8 +10,17 @@
     Global reserve:               3.25MiB      (used: 0.00B)
 
 Data,RAID1: Size:409.56MiB, Used:191.28MiB (46.70%)
+   /dev/loop0   409.56MiB
+   /dev/loop1   409.56MiB
 
 Metadata,RAID1: Size:256.00MiB, Used:416.00KiB (0.16%)
+   /dev/loop0   256.00MiB
+   /dev/loop1   256.00MiB
 
 System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
+   /dev/loop0     8.00MiB
+   /dev/loop1     8.00MiB
 
+Unallocated:
+   /dev/loop0     3.34GiB
+   /dev/loop1     3.34GiB

Does anyone know if this is something we can fix? I'm happy to take a look.

Joshua Houghton




* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: Joshua Houghton @ 2020-04-13 10:28 UTC
  To: linux-btrfs; +Cc: Goffredo Baroncelli, DanglingPointer, Torstein Eide

On Monday, 13 April 2020 10:08:50 UTC Joshua Houghton wrote:
> On Wednesday, 18 March 2020 21:11:56 UTC Goffredo Baroncelli wrote:
[...]

Sorry, I missed this last bit, never mind:

> If both are merged, we will have a 'btrfs fi us'
> command with full support for raid5/6 filesystems without needing root
> capability.





* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: Goffredo Baroncelli @ 2020-04-13 17:05 UTC
  To: Joshua Houghton, linux-btrfs; +Cc: DanglingPointer, Torstein Eide

On 4/13/20 12:28 PM, Joshua Houghton wrote:
> On Monday, 13 April 2020 10:08:50 UTC Joshua Houghton wrote:
>> Thank you for this. It's something I've been wanting for a while. Do
>> you know why I get significantly different results in the overall summary
>> when I do not run it as root? I'm not sure if this is a bug or a limitation.
[...]
>> Does anyone know if this is something we can fix? I'm happy to take a look.
[...]

Unfortunately we need root to access the chunk information.
Thanks for taking a look at this. I will "ping" about the status of this patch.

BR
G.Baroncelli



-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: David Sterba @ 2020-05-25 13:27 UTC
  To: Goffredo Baroncelli; +Cc: linux-btrfs

On Wed, Mar 18, 2020 at 10:11:56PM +0100, Goffredo Baroncelli wrote:
> this patch adds support for the raid5/6 profiles in the command
> 'btrfs filesystem usage'.
> 
> Until now the problem was that the values r_{data,metadata}_used are not
> easy to get for RAID5/6, because they depend on the number of disks.
> And in a filesystem it is possible to have several raid5/6 chunks, each
> with a different number of disks.

I'd like to get the raid56 'fi us' output fixed, but the way you implement
it seems to be too big a leap. I've tried to review this patch several
times but always got the impression that reworking the calculations to
make them work for some profiles will most likely break something else. It
has happened in the past.

So, let's start with the case where the filesystem does not have
multiple profiles per block group type, eg. just raid5 for data and
calculate that.

If this also covers the raid56 case with different stripe counts, then
good but as this is special case I won't mind addressing it separately.

The general case of multiple profiles per type is probably an
intermediate state of profile conversion; we can return something sane
if possible, or warn as we do now.

I'm fine if you say you're not going to implement that.


* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: Goffredo Baroncelli @ 2020-05-25 20:40 UTC
  To: dsterba, linux-btrfs

On 5/25/20 3:27 PM, David Sterba wrote:
> On Wed, Mar 18, 2020 at 10:11:56PM +0100, Goffredo Baroncelli wrote:
>> this patch adds support for the raid5/6 profiles in the command
>> 'btrfs filesystem usage'.
>>
>> Until now the problem was that the value r_{data,metadata}_used is not
>> easy to get for a RAID5/6, because it depends by the number of disks.
>> And in a filesystem it is possible to have several raid5/6 chunks with a
>> different number of disks.
> 
> I'd like to get the raid56 'fi du' fixed but the way you implement it
> seems to be a too big leap. I've tried to review this patch several
> times but always got the impression that reworking the calculations to
> make it work for some profiles will most likely break something else. It
> has happened in the past.

I understand your concern. Frankly speaking, this code is quite complex;
more than it should be (even without the raid56 support).

I am looking for a less intrusive solution. Give me a few days and I will
update the patch.

Then we can discuss its validity.

  
> So, let's start with the case where the filesystem does not have
> multiple profiles per block group type, eg. just raid5 for data and
> calculate that.
> 
> If this also covers the raid56 case with different stripe counts, then
> good but as this is special case I won't mind addressing it separately.
> 
> The general case of multiple profiles per type is probably an
> intermediate state of profile conversion; we can return something sane
> if possible, or warn as we do now.

Another possibility is when a drive is added and a balance is not performed.

However, this should be "safe", because it would underestimate the free space.

> 
> I'm fine if you say you're not going to implement that.
> 

I want to work on that.


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


* Re: [PATCH] btrfs-progs: add RAID5/6 support to btrfs fi us
From: Torstein Eide @ 2020-04-04 19:29 UTC
  To: danglingpointerexception, kreijack, linux-btrfs

I would like to see all improvements get pushed upstream. As a user of
raid5, I would like to see BTRFS become the best software raid.

This looks like a good improvement to the unfriendly output of 'btrfs fi us'.

I approve this patch.


-- 
Torstein Eide
Torsteine@gmail.com

