* [PATCH 1/4] fsx: allow zero range operations to cross eof
@ 2020-04-20 17:07 fdmanana
  2020-04-21 14:22 ` Brian Foster
From: fdmanana @ 2020-04-20 17:07 UTC
  To: fstests; +Cc: linux-btrfs, Filipe Manana

From: Filipe Manana <fdmanana@suse.com>

Currently we limit the range for zero range operations to stay within the
i_size boundary. This is not ideal because it loses coverage of the
filesystem's zero range implementation, since zero range operations are
allowed to cross the i_size boundary. Fix this by limiting the range to
'maxfilelen' instead of 'file_size', and by updating 'file_size' after each
zero range operation when needed.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
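As a quick illustration of the case fsx can now exercise (not part of the
change itself), a zero range that crosses EOF can be reproduced by hand with
xfs_io: 'fzero' issues fallocate(FALLOC_FL_ZERO_RANGE), and without -k the
operation is allowed to extend i_size. The file path below is only an example:

  # write 8k of data, then zero a 16k range starting at offset 4k; the range
  # crosses EOF, so the file size should grow from 8k to 20k
  $ xfs_io -f -c "pwrite -S 0x61 0 8k" -c "fzero 4k 16k" -c "stat" /mnt/test/file
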
 ltp/fsx.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/ltp/fsx.c b/ltp/fsx.c
index 9d598a4f..56479eda 100644
--- a/ltp/fsx.c
+++ b/ltp/fsx.c
@@ -1244,6 +1244,17 @@ do_zero_range(unsigned offset, unsigned length, int keep_size)
 	}
 
 	memset(good_buf + offset, '\0', length);
+
+	if (!keep_size && end_offset > file_size) {
+		/*
+		 * If there's a gap between the old file size and the offset of
+		 * the zero range operation, fill the gap with zeroes.
+		 */
+		if (offset > file_size)
+			memset(good_buf + file_size, '\0', offset - file_size);
+
+		file_size = end_offset;
+	}
 }
 
 #else
@@ -2141,7 +2152,7 @@ have_op:
 		do_punch_hole(offset, size);
 		break;
 	case OP_ZERO_RANGE:
-		TRIM_OFF_LEN(offset, size, file_size);
+		TRIM_OFF_LEN(offset, size, maxfilelen);
 		do_zero_range(offset, size, keep_size);
 		break;
 	case OP_COLLAPSE_RANGE:
-- 
2.11.0



* Re: [PATCH 1/4] fsx: allow zero range operations to cross eof
  2020-04-20 17:07 [PATCH 1/4] fsx: allow zero range operations to cross eof fdmanana
@ 2020-04-21 14:22 ` Brian Foster
  2020-04-21 14:27   ` btrfs device missing issues Alexandru Dordea
From: Brian Foster @ 2020-04-21 14:22 UTC
  To: fdmanana; +Cc: fstests, linux-btrfs, Filipe Manana

On Mon, Apr 20, 2020 at 06:07:38PM +0100, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
> 
> Currently we limit the range for zero range operations to stay within the
> i_size boundary. This is not ideal because it loses coverage of the
> filesystem's zero range implementation, since zero range operations are
> allowed to cross the i_size boundary. Fix this by limiting the range to
> 'maxfilelen' instead of 'file_size', and by updating 'file_size' after each
> zero range operation when needed.
> 
> Signed-off-by: Filipe Manana <fdmanana@suse.com>
> ---

Thanks for the fixup. Looks good to me now:

Reviewed-by: Brian Foster <bfoster@redhat.com>

>  ltp/fsx.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/ltp/fsx.c b/ltp/fsx.c
> index 9d598a4f..56479eda 100644
> --- a/ltp/fsx.c
> +++ b/ltp/fsx.c
> @@ -1244,6 +1244,17 @@ do_zero_range(unsigned offset, unsigned length, int keep_size)
>  	}
>  
>  	memset(good_buf + offset, '\0', length);
> +
> +	if (!keep_size && end_offset > file_size) {
> +		/*
> +		 * If there's a gap between the old file size and the offset of
> +		 * the zero range operation, fill the gap with zeroes.
> +		 */
> +		if (offset > file_size)
> +			memset(good_buf + file_size, '\0', offset - file_size);
> +
> +		file_size = end_offset;
> +	}
>  }
>  
>  #else
> @@ -2141,7 +2152,7 @@ have_op:
>  		do_punch_hole(offset, size);
>  		break;
>  	case OP_ZERO_RANGE:
> -		TRIM_OFF_LEN(offset, size, file_size);
> +		TRIM_OFF_LEN(offset, size, maxfilelen);
>  		do_zero_range(offset, size, keep_size);
>  		break;
>  	case OP_COLLAPSE_RANGE:
> -- 
> 2.11.0
> 



* btrfs device missing issues
  2020-04-21 14:22 ` Brian Foster
@ 2020-04-21 14:27   ` Alexandru Dordea
From: Alexandru Dordea @ 2020-04-21 14:27 UTC
  To: linux-btrfs

Hello,
  I’m encountering issues and slowness while using device delete missing, and I’m wondering whether I’m doing something wrong or if someone can point me in the right direction:

Some background:
I have a pool with 15 x 8TB HDDs and 5 x 14TB HDDs, and one of the 8TB HDDs failed. When I run the delete missing operation the system runs extremely slowly, most of the HDDs stay at 100% utilization, and from the current rate I estimate it will take 3-4 weeks to complete the task.
Running a balance is slow as well, and the system load increases quickly. The process seems to run on only one CPU and keeps it at 100%. Is balance not multi-threaded?

As of now, my FS is mounted with space_cache=v2 and quotas are disabled.
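
(For reference, that setup corresponds to roughly the following commands; the
device name here is a placeholder for whichever member device is used to mount
the pool:)

# mount -o space_cache=v2 /dev/sdg /mount
# btrfs quota disable /mount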

# btrfs filesystem usage /mount
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
Overall:
Device size: 172.83TiB
Device allocated: 0.00B
Device unallocated: 172.83TiB
Device missing: 7.28TiB
Used: 0.00B
Free (estimated): 0.00B (min: 8.00EiB)
Data ratio: 0.00
Metadata ratio: 0.00
Global reserve: 512.00MiB (used: 0.00B)


Data,RAID6: Size:134.54TiB, Used:131.18TiB (97.51%)
/dev/sdg 7.09TiB
/dev/sdh 7.09TiB
/dev/sdt 7.09TiB
missing 6.08TiB
/dev/sds 7.09TiB
/dev/sdr 7.09TiB
/dev/sdq 7.09TiB
/dev/sdp 7.09TiB
/dev/sdo 7.09TiB
/dev/sdn 7.09TiB
/dev/sdm 7.09TiB
/dev/sdl 7.09TiB
/dev/sdk 7.09TiB
/dev/sdj 7.09TiB
/dev/sdi 7.09TiB
/dev/sdc 11.86TiB
/dev/sdf 11.86TiB
/dev/sde 11.86TiB
/dev/sdb 11.86TiB
/dev/sdd 11.86TiB


Metadata,RAID6: Size:149.90GiB, Used:144.38GiB (96.31%)
/dev/sdg 8.99GiB
/dev/sdh 8.99GiB
/dev/sdt 8.99GiB
missing 7.86GiB
/dev/sds 8.99GiB
/dev/sdr 8.99GiB
/dev/sdq 8.99GiB
/dev/sdp 8.99GiB
/dev/sdo 8.99GiB
/dev/sdn 8.99GiB
/dev/sdm 8.99GiB
/dev/sdl 8.99GiB
/dev/sdk 8.99GiB
/dev/sdj 8.99GiB
/dev/sdi 8.99GiB
/dev/sdc 10.28GiB
/dev/sdf 10.28GiB
/dev/sde 10.28GiB
/dev/sdb 10.28GiB
/dev/sdd 10.28GiB


System,RAID6: Size:13.81MiB, Used:10.72MiB (77.60%)
/dev/sdg 1.06MiB
/dev/sdh 1.06MiB
/dev/sdt 1.06MiB
missing 1.06MiB
/dev/sds 1.06MiB
/dev/sdr 1.06MiB
/dev/sdq 1.06MiB
/dev/sdp 1.06MiB
/dev/sdo 1.06MiB
/dev/sdn 1.06MiB
/dev/sdm 1.06MiB
/dev/sdl 1.06MiB
/dev/sdk 1.06MiB
/dev/sdj 1.06MiB
/dev/sdi 1.06MiB


Unallocated:
/dev/sdg 184.11GiB
/dev/sdh 184.11GiB
/dev/sdt 184.11GiB
missing 1.19TiB
/dev/sds 184.11GiB
/dev/sdr 184.11GiB
/dev/sdq 184.11GiB
/dev/sdp 184.11GiB
/dev/sdo 184.11GiB
/dev/sdn 184.11GiB
/dev/sdm 184.11GiB
/dev/sdl 184.11GiB
/dev/sdk 184.11GiB
/dev/sdj 184.11GiB
/dev/sdi 184.11GiB
/dev/sdc 886.48GiB
/dev/sdf 886.48GiB
/dev/sde 886.48GiB
/dev/sdb 886.48GiB
/dev/sdd 886.48GiB


# btrfs dev stats /mount
[/dev/sdg].write_io_errs 0
[/dev/sdg].read_io_errs 0
[/dev/sdg].flush_io_errs 0
[/dev/sdg].corruption_errs 0
[/dev/sdg].generation_errs 0
[/dev/sdh].write_io_errs 0
[/dev/sdh].read_io_errs 0
[/dev/sdh].flush_io_errs 0
[/dev/sdh].corruption_errs 0
[/dev/sdh].generation_errs 0
[/dev/sdt].write_io_errs 0
[/dev/sdt].read_io_errs 0
[/dev/sdt].flush_io_errs 0
[/dev/sdt].corruption_errs 0
[/dev/sdt].generation_errs 0
[devid:4].write_io_errs 0
[devid:4].read_io_errs 0
[devid:4].flush_io_errs 0
[devid:4].corruption_errs 0
[devid:4].generation_errs 0
[/dev/sds].write_io_errs 0
[/dev/sds].read_io_errs 0
[/dev/sds].flush_io_errs 0
[/dev/sds].corruption_errs 0
[/dev/sds].generation_errs 0
[/dev/sdr].write_io_errs 0
[/dev/sdr].read_io_errs 0
[/dev/sdr].flush_io_errs 0
[/dev/sdr].corruption_errs 0
[/dev/sdr].generation_errs 0
[/dev/sdq].write_io_errs 0
[/dev/sdq].read_io_errs 0
[/dev/sdq].flush_io_errs 0
[/dev/sdq].corruption_errs 0
[/dev/sdq].generation_errs 0
[/dev/sdp].write_io_errs 0
[/dev/sdp].read_io_errs 0
[/dev/sdp].flush_io_errs 0
[/dev/sdp].corruption_errs 0
[/dev/sdp].generation_errs 0
[/dev/sdo].write_io_errs 0
[/dev/sdo].read_io_errs 0
[/dev/sdo].flush_io_errs 0
[/dev/sdo].corruption_errs 0
[/dev/sdo].generation_errs 0
[/dev/sdn].write_io_errs 0
[/dev/sdn].read_io_errs 0
[/dev/sdn].flush_io_errs 0
[/dev/sdn].corruption_errs 0
[/dev/sdn].generation_errs 0
[/dev/sdm].write_io_errs 0
[/dev/sdm].read_io_errs 0
[/dev/sdm].flush_io_errs 0
[/dev/sdm].corruption_errs 0
[/dev/sdm].generation_errs 0
[/dev/sdl].write_io_errs 0
[/dev/sdl].read_io_errs 0
[/dev/sdl].flush_io_errs 0
[/dev/sdl].corruption_errs 0
[/dev/sdl].generation_errs 0
[/dev/sdk].write_io_errs 0
[/dev/sdk].read_io_errs 0
[/dev/sdk].flush_io_errs 0
[/dev/sdk].corruption_errs 0
[/dev/sdk].generation_errs 0
[/dev/sdj].write_io_errs 0
[/dev/sdj].read_io_errs 0
[/dev/sdj].flush_io_errs 0
[/dev/sdj].corruption_errs 0
[/dev/sdj].generation_errs 0
[/dev/sdi].write_io_errs 0
[/dev/sdi].read_io_errs 0
[/dev/sdi].flush_io_errs 0
[/dev/sdi].corruption_errs 0
[/dev/sdi].generation_errs 0
[/dev/sdc].write_io_errs 0
[/dev/sdc].read_io_errs 0
[/dev/sdc].flush_io_errs 0
[/dev/sdc].corruption_errs 0
[/dev/sdc].generation_errs 0
[/dev/sdf].write_io_errs 0
[/dev/sdf].read_io_errs 0
[/dev/sdf].flush_io_errs 0
[/dev/sdf].corruption_errs 0
[/dev/sdf].generation_errs 0
[/dev/sde].write_io_errs 0
[/dev/sde].read_io_errs 0
[/dev/sde].flush_io_errs 0
[/dev/sde].corruption_errs 0
[/dev/sde].generation_errs 0
[/dev/sdb].write_io_errs 0
[/dev/sdb].read_io_errs 0
[/dev/sdb].flush_io_errs 0
[/dev/sdb].corruption_errs 0
[/dev/sdb].generation_errs 0
[/dev/sdd].write_io_errs 0
[/dev/sdd].read_io_errs 0
[/dev/sdd].flush_io_errs 0
[/dev/sdd].corruption_errs 0
[/dev/sdd].generation_errs 0



# btrfs fi show /mount
Label: 'mount' uuid: 30ab3069-93bc-4952-a77c-61acdc364563
Total devices 20 FS bytes used 131.32TiB
devid 1 size 7.28TiB used 7.10TiB path /dev/sdg
devid 2 size 7.28TiB used 7.10TiB path /dev/sdh
devid 3 size 7.28TiB used 7.10TiB path /dev/sdt
devid 5 size 7.28TiB used 7.10TiB path /dev/sds
devid 6 size 7.28TiB used 7.10TiB path /dev/sdr
devid 7 size 7.28TiB used 7.10TiB path /dev/sdq
devid 8 size 7.28TiB used 7.10TiB path /dev/sdp
devid 9 size 7.28TiB used 7.10TiB path /dev/sdo
devid 10 size 7.28TiB used 7.10TiB path /dev/sdn
devid 11 size 7.28TiB used 7.10TiB path /dev/sdm
devid 12 size 7.28TiB used 7.10TiB path /dev/sdl
devid 13 size 7.28TiB used 7.10TiB path /dev/sdk
devid 14 size 7.28TiB used 7.10TiB path /dev/sdj
devid 15 size 7.28TiB used 7.10TiB path /dev/sdi
devid 16 size 12.73TiB used 11.87TiB path /dev/sdc
devid 17 size 12.73TiB used 11.87TiB path /dev/sdf
devid 18 size 12.73TiB used 11.87TiB path /dev/sde
devid 19 size 12.73TiB used 11.87TiB path /dev/sdb
devid 20 size 12.73TiB used 11.87TiB path /dev/sdd
*** Some devices missing

# cat /sys/block/sdh/queue/scheduler
mq-deadline kyber [bfq] none



Running:
5.6.4-1-default
btrfs-progs v5.6 
RAM: 64GB DDR4
HDDs are on an LSI HBA (no R/W performance issues).
2 x E5-2698 v3 CPUs
HDDs are set to APM 254 with no idle.

Does anyone have experience or best practices for recovering the FS?
Also, is there a way to limit the I/O intensity on the HDDs during the delete? I don't mind running it for months, as long as reads and writes on the system are not ~90% degraded.
Am I doing something wrong?

Thanks!

