* [PATCH 0/2] fstest changes for LBS
@ 2024-01-22 11:17 Pankaj Raghav (Samsung)
  2024-01-22 11:17 ` [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz Pankaj Raghav (Samsung)
                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Pankaj Raghav (Samsung) @ 2024-01-22 11:17 UTC (permalink / raw)
  To: zlang, fstests; +Cc: p.raghav, djwong, mcgrof, gost.dev, linux-xfs

From: Pankaj Raghav <p.raghav@samsung.com>

Some tests need to be adapted for LBS[1] based on the filesystem
blocksize. These are generic changes that use the filesystem
blocksize instead of assuming a fixed value.

There are some more generic test cases that are failing due to the logdev
size requirement, which changes with the filesystem blocksize. I will
address them in a separate series.

[1] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/

Pankaj Raghav (2):
  xfs/558: scale blk IO size based on the filesystem blksz
  xfs/161: adapt the test case for LBS filesystem

 tests/xfs/161 | 9 +++++++--
 tests/xfs/558 | 7 ++++++-
 2 files changed, 13 insertions(+), 3 deletions(-)


base-commit: c46ca4d1f6c0c45f9a3ea18bc31ba5ae89e02c70
-- 
2.43.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz
  2024-01-22 11:17 [PATCH 0/2] fstest changes for LBS Pankaj Raghav (Samsung)
@ 2024-01-22 11:17 ` Pankaj Raghav (Samsung)
  2024-01-22 16:53   ` Darrick J. Wong
  2024-01-22 11:17 ` [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem Pankaj Raghav (Samsung)
  2024-01-23  0:25 ` [PATCH 0/2] fstest changes for LBS Dave Chinner
  2 siblings, 1 reply; 20+ messages in thread
From: Pankaj Raghav (Samsung) @ 2024-01-22 11:17 UTC (permalink / raw)
  To: zlang, fstests; +Cc: p.raghav, djwong, mcgrof, gost.dev, linux-xfs

From: Pankaj Raghav <p.raghav@samsung.com>

This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
system (see the LBS effort[1]). Scale `blksz` based on the filesystem
block size instead of hardcoding it to 64k so that we still get some
iomap invalidations while doing concurrent writes.

Keep blksz at a minimum of 64k to retain the same behaviour as before
for smaller filesystem block sizes.

[1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/

Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 tests/xfs/558 | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tests/xfs/558 b/tests/xfs/558
index 9e9b3be8..270f458c 100755
--- a/tests/xfs/558
+++ b/tests/xfs/558
@@ -127,7 +127,12 @@ _scratch_mount >> $seqres.full
 $XFS_IO_PROG -c 'chattr -x' $SCRATCH_MNT &> $seqres.full
 _require_pagecache_access $SCRATCH_MNT
 
-blksz=65536
+min_blksz=65536
+file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
+blksz=$(( 8 * $file_blksz ))
+
+blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
+
 _require_congruent_file_oplen $SCRATCH_MNT $blksz
 
 # Make sure we have sufficient extent size to create speculative CoW
-- 
2.43.0



* [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem
  2024-01-22 11:17 [PATCH 0/2] fstest changes for LBS Pankaj Raghav (Samsung)
  2024-01-22 11:17 ` [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz Pankaj Raghav (Samsung)
@ 2024-01-22 11:17 ` Pankaj Raghav (Samsung)
  2024-01-22 16:57   ` Darrick J. Wong
  2024-01-23  0:25 ` [PATCH 0/2] fstest changes for LBS Dave Chinner
  2 siblings, 1 reply; 20+ messages in thread
From: Pankaj Raghav (Samsung) @ 2024-01-22 11:17 UTC (permalink / raw)
  To: zlang, fstests; +Cc: p.raghav, djwong, mcgrof, gost.dev, linux-xfs

From: Pankaj Raghav <p.raghav@samsung.com>

This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
system (see the LBS effort[1]). Adapt blksz so that we create more than
one block for the test case.

Keep blksz at a minimum of 64k to retain the same behaviour as before
for smaller filesystem block sizes.

[1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/

Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 tests/xfs/161 | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/xfs/161 b/tests/xfs/161
index 486fa6ca..f7b03f0e 100755
--- a/tests/xfs/161
+++ b/tests/xfs/161
@@ -38,9 +38,14 @@ _qmount_option "usrquota"
 _scratch_xfs_db -c 'version' -c 'sb 0' -c 'p' >> $seqres.full
 _scratch_mount >> $seqres.full
 
+min_blksz=65536
+file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
+blksz=$(( 2 * $file_blksz))
+
+blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
 # Force the block counters for uid 1 and 2 above zero
-_pwrite_byte 0x61 0 64k $SCRATCH_MNT/a >> $seqres.full
-_pwrite_byte 0x61 0 64k $SCRATCH_MNT/b >> $seqres.full
+_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/a >> $seqres.full
+_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/b >> $seqres.full
 sync
 chown 1 $SCRATCH_MNT/a
 chown 2 $SCRATCH_MNT/b
-- 
2.43.0



* Re: [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz
  2024-01-22 11:17 ` [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz Pankaj Raghav (Samsung)
@ 2024-01-22 16:53   ` Darrick J. Wong
  2024-01-22 17:23     ` Pankaj Raghav
  0 siblings, 1 reply; 20+ messages in thread
From: Darrick J. Wong @ 2024-01-22 16:53 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung)
  Cc: zlang, fstests, p.raghav, mcgrof, gost.dev, linux-xfs

On Mon, Jan 22, 2024 at 12:17:50PM +0100, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
> 
> This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
> system(see LBS efforts[1]). Scale the `blksz` based on the filesystem

Fails how, specifically?

--D

> block size instead of fixing it as 64k so that we do get some iomap
> invalidations while doing concurrent writes.
> 
> Cap the blksz to be at least 64k to retain the same behaviour as before
> for smaller filesystem blocksizes.
> 
> [1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
> 
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
>  tests/xfs/558 | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/xfs/558 b/tests/xfs/558
> index 9e9b3be8..270f458c 100755
> --- a/tests/xfs/558
> +++ b/tests/xfs/558
> @@ -127,7 +127,12 @@ _scratch_mount >> $seqres.full
>  $XFS_IO_PROG -c 'chattr -x' $SCRATCH_MNT &> $seqres.full
>  _require_pagecache_access $SCRATCH_MNT
>  
> -blksz=65536
> +min_blksz=65536
> +file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
> +blksz=$(( 8 * $file_blksz ))
> +
> +blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
> +
>  _require_congruent_file_oplen $SCRATCH_MNT $blksz
>  
>  # Make sure we have sufficient extent size to create speculative CoW
> -- 
> 2.43.0
> 


* Re: [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem
  2024-01-22 11:17 ` [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem Pankaj Raghav (Samsung)
@ 2024-01-22 16:57   ` Darrick J. Wong
  2024-01-22 17:32     ` Pankaj Raghav
  2024-01-25 16:06     ` Pankaj Raghav
  0 siblings, 2 replies; 20+ messages in thread
From: Darrick J. Wong @ 2024-01-22 16:57 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung)
  Cc: zlang, fstests, p.raghav, mcgrof, gost.dev, linux-xfs

On Mon, Jan 22, 2024 at 12:17:51PM +0100, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
> 
> This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
> system(see LBS efforts[1]). Adapt the blksz so that we create more than
> one block for the testcase.

How does this fail, specifically?  And, uh, what block sizes > 64k were
tested?

--D

> Cap the blksz to be at least 64k to retain the same behaviour as before
> for smaller filesystem blocksizes.
> 
> [1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
> 
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> ---
>  tests/xfs/161 | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/tests/xfs/161 b/tests/xfs/161
> index 486fa6ca..f7b03f0e 100755
> --- a/tests/xfs/161
> +++ b/tests/xfs/161
> @@ -38,9 +38,14 @@ _qmount_option "usrquota"
>  _scratch_xfs_db -c 'version' -c 'sb 0' -c 'p' >> $seqres.full
>  _scratch_mount >> $seqres.full
>  
> +min_blksz=65536
> +file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
> +blksz=$(( 2 * $file_blksz))
> +
> +blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
>  # Force the block counters for uid 1 and 2 above zero
> -_pwrite_byte 0x61 0 64k $SCRATCH_MNT/a >> $seqres.full
> -_pwrite_byte 0x61 0 64k $SCRATCH_MNT/b >> $seqres.full
> +_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/a >> $seqres.full
> +_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/b >> $seqres.full
>  sync
>  chown 1 $SCRATCH_MNT/a
>  chown 2 $SCRATCH_MNT/b
> -- 
> 2.43.0
> 
> 


* Re: [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz
  2024-01-22 16:53   ` Darrick J. Wong
@ 2024-01-22 17:23     ` Pankaj Raghav
  2024-03-13 20:08       ` Darrick J. Wong
  0 siblings, 1 reply; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-22 17:23 UTC (permalink / raw)
  To: Darrick J. Wong, Pankaj Raghav (Samsung)
  Cc: zlang, fstests, mcgrof, gost.dev, linux-xfs

On 22/01/2024 17:53, Darrick J. Wong wrote:
> On Mon, Jan 22, 2024 at 12:17:50PM +0100, Pankaj Raghav (Samsung) wrote:
>> From: Pankaj Raghav <p.raghav@samsung.com>
>>
>> This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
>> system(see LBS efforts[1]). Scale the `blksz` based on the filesystem
> > Fails how, specifically?

I basically get this in 558.out.bad when I set the filesystem block size to 64k:
QA output created by 558
Expected to hear about writeback iomap invalidations?
Silence is golden

But I do see that iomap invalidations are happening for 16k and 32k, which makes it pass
the test for those block sizes.

My suspicion was that we don't see any invalidations because blksz is fixed
at 64k in the test, which spans only one FSB in the case of a 64k block size.

Let me know if I am missing something.
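The arithmetic behind that suspicion can be sketched in shell (the FSB values
are illustrative, not taken from the test):

```shell
# How many filesystem blocks the hardcoded 64k write spans at each FSB
# size: with a 64k FSB the whole write fits in a single block, leaving
# little room for racing writeback to invalidate an iomap.
blksz=65536
for fsb in 16384 32768 65536; do
	echo "fsb=$fsb spans $(( blksz / fsb )) block(s)"
done
```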

> 
> --D
> 
>> block size instead of fixing it as 64k so that we do get some iomap
>> invalidations while doing concurrent writes.
>>
>> Cap the blksz to be at least 64k to retain the same behaviour as before
>> for smaller filesystem blocksizes.
>>
>> [1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
>>
>> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
>> ---
>>  tests/xfs/558 | 7 ++++++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/xfs/558 b/tests/xfs/558
>> index 9e9b3be8..270f458c 100755
>> --- a/tests/xfs/558
>> +++ b/tests/xfs/558
>> @@ -127,7 +127,12 @@ _scratch_mount >> $seqres.full
>>  $XFS_IO_PROG -c 'chattr -x' $SCRATCH_MNT &> $seqres.full
>>  _require_pagecache_access $SCRATCH_MNT
>>  
>> -blksz=65536
>> +min_blksz=65536
>> +file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
>> +blksz=$(( 8 * $file_blksz ))
>> +
>> +blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
>> +
>>  _require_congruent_file_oplen $SCRATCH_MNT $blksz
>>  
>>  # Make sure we have sufficient extent size to create speculative CoW
>> -- 
>> 2.43.0
>>


* Re: [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem
  2024-01-22 16:57   ` Darrick J. Wong
@ 2024-01-22 17:32     ` Pankaj Raghav
  2024-01-25 16:06     ` Pankaj Raghav
  1 sibling, 0 replies; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-22 17:32 UTC (permalink / raw)
  To: Darrick J. Wong, Pankaj Raghav (Samsung)
  Cc: zlang, fstests, mcgrof, gost.dev, linux-xfs

On 22/01/2024 17:57, Darrick J. Wong wrote:
> On Mon, Jan 22, 2024 at 12:17:51PM +0100, Pankaj Raghav (Samsung) wrote:
>> From: Pankaj Raghav <p.raghav@samsung.com>
>>
>> This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
>> system(see LBS efforts[1]). Adapt the blksz so that we create more than
>> one block for the testcase.
> And, uh, what block sizes > 64k were tested?

I thought I removed >= 64k and put just 64k before I sent the patches, as we
don't allow FSB > 64k for now. Hypothetically, due to the hardcoded 64k blksz, we might
face the same issue for > 64k FSB as well.
> How does this fail, specifically?

This is the output I get when I set the block size to be 64k:

QA output created by 161
Expected timer expiry (0) to be after now (1705944360).
Running xfs_repair to upgrade filesystem.
Adding large timestamp support to filesystem.
FEATURES: BIGTIME:YES
Expected uid 1 expiry (0) to be after now (1705944361).
Expected uid 2 expiry (0) to be after uid 1 (0).
Expected uid 2 expiry (0) to be after 2038.
Expected uid 1 expiry (0) to be after now (1705944361).
Expected uid 2 expiry (0) to be after uid 1 (0).
Expected uid 2 expiry (0) to be after 2038.
grace2 expiry has value of 0
grace2 expiry is NOT in range 7956915737 .. 7956915747
grace2 expiry after remount has value of 0
grace2 expiry after remount is NOT in range 7956915737 .. 7956915747

Based on the comment "Force the block counters for uid 1 and 2 above zero",
I made these changes, which fixed the failure for 64k FSB.

> --D
> 
>> Cap the blksz to be at least 64k to retain the same behaviour as before
>> for smaller filesystem blocksizes.
>>
>> [1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
>>
>> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
>> ---
>>  tests/xfs/161 | 9 +++++++--
>>  1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/tests/xfs/161 b/tests/xfs/161
>> index 486fa6ca..f7b03f0e 100755
>> --- a/tests/xfs/161
>> +++ b/tests/xfs/161
>> @@ -38,9 +38,14 @@ _qmount_option "usrquota"
>>  _scratch_xfs_db -c 'version' -c 'sb 0' -c 'p' >> $seqres.full
>>  _scratch_mount >> $seqres.full
>>  
>> +min_blksz=65536
>> +file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
>> +blksz=$(( 2 * $file_blksz))
>> +
>> +blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
>>  # Force the block counters for uid 1 and 2 above zero
>> -_pwrite_byte 0x61 0 64k $SCRATCH_MNT/a >> $seqres.full
>> -_pwrite_byte 0x61 0 64k $SCRATCH_MNT/b >> $seqres.full
>> +_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/a >> $seqres.full
>> +_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/b >> $seqres.full
>>  sync
>>  chown 1 $SCRATCH_MNT/a
>>  chown 2 $SCRATCH_MNT/b
>> -- 
>> 2.43.0
>>
>>


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-22 11:17 [PATCH 0/2] fstest changes for LBS Pankaj Raghav (Samsung)
  2024-01-22 11:17 ` [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz Pankaj Raghav (Samsung)
  2024-01-22 11:17 ` [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem Pankaj Raghav (Samsung)
@ 2024-01-23  0:25 ` Dave Chinner
  2024-01-23  8:52   ` Pankaj Raghav
  2 siblings, 1 reply; 20+ messages in thread
From: Dave Chinner @ 2024-01-23  0:25 UTC (permalink / raw)
  To: Pankaj Raghav (Samsung)
  Cc: zlang, fstests, p.raghav, djwong, mcgrof, gost.dev, linux-xfs

On Mon, Jan 22, 2024 at 12:17:49PM +0100, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
> 
> Some tests need to be adapted to for LBS[1] based on the filesystem
> blocksize. These are generic changes where it uses the filesystem
> blocksize instead of assuming it.
> 
> There are some more generic test cases that are failing due to logdev
> size requirement that changes with filesystem blocksize. I will address
> them in a separate series.
> 
> [1] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
> 
> Pankaj Raghav (2):
>   xfs/558: scale blk IO size based on the filesystem blksz
>   xfs/161: adapt the test case for LBS filesystem

Do either of these fail and require fixing for a 64k page size
system running 64kB block size?

i.e. are these actual 64kB block size issues, or just issues with
the LBS patchset?

-Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23  0:25 ` [PATCH 0/2] fstest changes for LBS Dave Chinner
@ 2024-01-23  8:52   ` Pankaj Raghav
  2024-01-23 13:43     ` Zorro Lang
  2024-01-23 15:35     ` Ritesh Harjani
  0 siblings, 2 replies; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-23  8:52 UTC (permalink / raw)
  To: Dave Chinner, Pankaj Raghav (Samsung)
  Cc: zlang, fstests, djwong, mcgrof, gost.dev, linux-xfs,
	Ritesh Harjani (IBM)

On 23/01/2024 01:25, Dave Chinner wrote:
> On Mon, Jan 22, 2024 at 12:17:49PM +0100, Pankaj Raghav (Samsung) wrote:
>> From: Pankaj Raghav <p.raghav@samsung.com>
>>
>> Some tests need to be adapted to for LBS[1] based on the filesystem
>> blocksize. These are generic changes where it uses the filesystem
>> blocksize instead of assuming it.
>>
>> There are some more generic test cases that are failing due to logdev
>> size requirement that changes with filesystem blocksize. I will address
>> them in a separate series.
>>
>> [1] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
>>
>> Pankaj Raghav (2):
>>   xfs/558: scale blk IO size based on the filesystem blksz
>>   xfs/161: adapt the test case for LBS filesystem
> 
> Do either of these fail and require fixing for a 64k page size
> system running 64kB block size?
> 
> i.e. are these actual 64kB block size issues, or just issues with
> the LBS patchset?
> 

I had the same question in mind. Unfortunately, I don't have access to any 64k page size
machine at the moment. I will ask around to see if I can get access to one.

@Zorro I saw you posted a test report for 64k blocksize. Is it possible for you to
check whether these test cases (xfs/161, xfs/558) pass in your setup with 64k block size?

CCing Ritesh as I saw him post a patch to fix a testcase for 64k block size.


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23  8:52   ` Pankaj Raghav
@ 2024-01-23 13:43     ` Zorro Lang
  2024-01-23 15:39       ` Ritesh Harjani
  2024-01-23 16:33       ` Pankaj Raghav
  2024-01-23 15:35     ` Ritesh Harjani
  1 sibling, 2 replies; 20+ messages in thread
From: Zorro Lang @ 2024-01-23 13:43 UTC (permalink / raw)
  To: Pankaj Raghav
  Cc: Dave Chinner, Pankaj Raghav (Samsung),
	fstests, djwong, mcgrof, gost.dev, linux-xfs,
	Ritesh Harjani (IBM)

On Tue, Jan 23, 2024 at 09:52:39AM +0100, Pankaj Raghav wrote:
> On 23/01/2024 01:25, Dave Chinner wrote:
> > On Mon, Jan 22, 2024 at 12:17:49PM +0100, Pankaj Raghav (Samsung) wrote:
> >> From: Pankaj Raghav <p.raghav@samsung.com>
> >>
> >> Some tests need to be adapted to for LBS[1] based on the filesystem
> >> blocksize. These are generic changes where it uses the filesystem
> >> blocksize instead of assuming it.
> >>
> >> There are some more generic test cases that are failing due to logdev
> >> size requirement that changes with filesystem blocksize. I will address
> >> them in a separate series.
> >>
> >> [1] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
> >>
> >> Pankaj Raghav (2):
> >>   xfs/558: scale blk IO size based on the filesystem blksz
> >>   xfs/161: adapt the test case for LBS filesystem
> > 
> > Do either of these fail and require fixing for a 64k page size
> > system running 64kB block size?
> > 
> > i.e. are these actual 64kB block size issues, or just issues with
> > the LBS patchset?
> > 
> 
> I had the same question in mind. Unfortunately, I don't have access to any 64k Page size
> machine at the moment. I will ask around if I can get access to it.
> 
> @Zorro I saw you posted a test report for 64k blocksize. Is it possible for you to
> see if these test cases(xfs/161, xfs/558) work in your setup with 64k block size?

Sure, I'll reserve one ppc64le and give it a try. But I remember there are more failing
cases on 64k blocksize xfs.

Thanks,
Zorro

> 
> CCing Ritesh as I saw him post a patch to fix a testcase for 64k block size.
> 



* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23  8:52   ` Pankaj Raghav
  2024-01-23 13:43     ` Zorro Lang
@ 2024-01-23 15:35     ` Ritesh Harjani
  2024-01-23 16:40       ` Pankaj Raghav
  1 sibling, 1 reply; 20+ messages in thread
From: Ritesh Harjani @ 2024-01-23 15:35 UTC (permalink / raw)
  To: Pankaj Raghav, Dave Chinner, Pankaj Raghav (Samsung)
  Cc: zlang, fstests, djwong, mcgrof, gost.dev, linux-xfs

Pankaj Raghav <p.raghav@samsung.com> writes:

> On 23/01/2024 01:25, Dave Chinner wrote:
>> On Mon, Jan 22, 2024 at 12:17:49PM +0100, Pankaj Raghav (Samsung) wrote:
>>> From: Pankaj Raghav <p.raghav@samsung.com>
>>>
>>> Some tests need to be adapted to for LBS[1] based on the filesystem
>>> blocksize. These are generic changes where it uses the filesystem
>>> blocksize instead of assuming it.
>>>
>>> There are some more generic test cases that are failing due to logdev
>>> size requirement that changes with filesystem blocksize. I will address
>>> them in a separate series.
>>>
>>> [1] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
>>>
>>> Pankaj Raghav (2):
>>>   xfs/558: scale blk IO size based on the filesystem blksz
>>>   xfs/161: adapt the test case for LBS filesystem
>> 
>> Do either of these fail and require fixing for a 64k page size
>> system running 64kB block size?
>> 
>> i.e. are these actual 64kB block size issues, or just issues with
>> the LBS patchset?
>> 
>
> I had the same question in mind. Unfortunately, I don't have access to any 64k Page size
> machine at the moment. I will ask around if I can get access to it.
>
> @Zorro I saw you posted a test report for 64k blocksize. Is it possible for you to
> see if these test cases(xfs/161, xfs/558) work in your setup with 64k block size?
>
> CCing Ritesh as I saw him post a patch to fix a testcase for 64k block size.

Hi Pankaj,

So I tested this on Linux 6.6 on a Power8 qemu (which I had handy).
xfs/558 passed with both 64k and 4k blocksize on a 64k
pagesize system.
However, since quota on this system was v4.05, which does not support
the bigtime feature, I could not run xfs/161.

xfs/161       [not run] quota: bigtime support not detected
xfs/558 7s ...  21s

I will collect this info on a different system with the latest kernel
and will update for xfs/161 too.

-ritesh


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23 13:43     ` Zorro Lang
@ 2024-01-23 15:39       ` Ritesh Harjani
  2024-01-23 16:33       ` Pankaj Raghav
  1 sibling, 0 replies; 20+ messages in thread
From: Ritesh Harjani @ 2024-01-23 15:39 UTC (permalink / raw)
  To: Zorro Lang, Pankaj Raghav
  Cc: Dave Chinner, Pankaj Raghav (Samsung),
	fstests, djwong, mcgrof, gost.dev, linux-xfs

Zorro Lang <zlang@redhat.com> writes:

> On Tue, Jan 23, 2024 at 09:52:39AM +0100, Pankaj Raghav wrote:
>> On 23/01/2024 01:25, Dave Chinner wrote:
>> > On Mon, Jan 22, 2024 at 12:17:49PM +0100, Pankaj Raghav (Samsung) wrote:
>> >> From: Pankaj Raghav <p.raghav@samsung.com>
>> >>
>> >> Some tests need to be adapted to for LBS[1] based on the filesystem
>> >> blocksize. These are generic changes where it uses the filesystem
>> >> blocksize instead of assuming it.
>> >>
>> >> There are some more generic test cases that are failing due to logdev
>> >> size requirement that changes with filesystem blocksize. I will address
>> >> them in a separate series.
>> >>
>> >> [1] https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
>> >>
>> >> Pankaj Raghav (2):
>> >>   xfs/558: scale blk IO size based on the filesystem blksz
>> >>   xfs/161: adapt the test case for LBS filesystem
>> > 
>> > Do either of these fail and require fixing for a 64k page size
>> > system running 64kB block size?
>> > 
>> > i.e. are these actual 64kB block size issues, or just issues with
>> > the LBS patchset?
>> > 
>> 
>> I had the same question in mind. Unfortunately, I don't have access to any 64k Page size
>> machine at the moment. I will ask around if I can get access to it.
>> 
>> @Zorro I saw you posted a test report for 64k blocksize. Is it possible for you to
>> see if these test cases(xfs/161, xfs/558) work in your setup with 64k block size?
>
> Sure, I'll reserve one ppc64le and give it a try. But I remember there're more failed
> cases on 64k blocksize xfs.
>

Please share the list of failing test cases with 64k bs xfs (if you have it handy).
IIRC, many of them could be due to 64k bs itself, but yes, I can take a look and work on those.

Thanks!
-ritesh


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23 13:43     ` Zorro Lang
  2024-01-23 15:39       ` Ritesh Harjani
@ 2024-01-23 16:33       ` Pankaj Raghav
  1 sibling, 0 replies; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-23 16:33 UTC (permalink / raw)
  To: Zorro Lang
  Cc: Dave Chinner, Pankaj Raghav (Samsung),
	fstests, djwong, mcgrof, gost.dev, linux-xfs,
	Ritesh Harjani (IBM)

>> @Zorro I saw you posted a test report for 64k blocksize. Is it possible for you to
>> see if these test cases(xfs/161, xfs/558) work in your setup with 64k block size?
> 
> Sure, I'll reserve one ppc64le and give it a try. But I remember there're more failed
> cases on 64k blocksize xfs.
> 

Thanks a lot, Zorro. I am also having issues with xfs/166 with LBS. I am not sure if this failure
exists on a 64k base page size system.

FYI, there are a lot of generic tests that are failing because the filesystem size is too small
to fit the log with a 64k block size. At least with LBS (I am not sure about a 64k base page
system), these are the failures due to the filesystem size:

generic/042, generic/081, generic/108, generic/455, generic/457, generic/482, generic/704,
generic/730, generic/731, shared/298.

For example in generic/042 with 64k block size:

max log size 388 smaller than min log size 2028, filesystem is too small
Usage: mkfs.xfs
/* blocksize */         [-b size=num]
/* config file */       [-c options=xxx]
/* metadata */          [-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1,
                            inobtcount=0|1,bigtime=0|1]
/* data subvol */       [-d agcount=n,agsize=n,file,name=xxx,size=num,
                            (sunit=value,swidth=value|su=num,sw=num|noalign),
                            sectsize=num
/* force overwrite */   [-f]
/* inode size */        [-i perblock=n|size=num,maxpct=n,attr=0|1|2,
                            projid32bit=0|1,sparse=0|1,nrext64=0|1]
/* no discard */        [-K]
/* log subvol */        [-l agnum=n,internal,size=num,logdev=xxx,version=n
                            sunit=value|su=num,sectsize=num,lazy-count=0|1]
/* label */             [-L label (maximum 12 characters)]
/* naming */            [-n size=num,version=2|ci,ftype=0|1]
/* no-op info only */   [-N]
/* prototype file */    [-p fname]
/* quiet */             [-q]
/* realtime subvol */   [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */        [-s size=num]
/* version */           [-V]
                        devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).

--
Pankaj


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23 15:35     ` Ritesh Harjani
@ 2024-01-23 16:40       ` Pankaj Raghav
  2024-01-23 19:42         ` Ritesh Harjani
  0 siblings, 1 reply; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-23 16:40 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), Dave Chinner, Pankaj Raghav (Samsung)
  Cc: zlang, fstests, djwong, mcgrof, gost.dev, linux-xfs

>> CCing Ritesh as I saw him post a patch to fix a testcase for 64k block size.
> 
> Hi Pankaj,
> 
> So I tested this on Linux 6.6 on Power8 qemu (which I had it handy).
> xfs/558 passed with both 64k blocksize & with 4k blocksize on a 64k
> pagesize system.

Thanks for testing it out. I will investigate this further and see why
I hit this failure with LBS for 64k but not for 32k and 16k block sizes.

As this test also expects some invalidation during page cache writeback,
this might be an issue just with LBS and not with 64k page size machines.

Probably I will also spend some time setting up a Power8 qemu to test these failures.

> However, since on this system the quota was v4.05, it does not support
> bigtime feature hence could not run xfs/161. 
> 
> xfs/161       [not run] quota: bigtime support not detected
> xfs/558 7s ...  21s
> 
> I will collect this info on a different system with latest kernel and
> will update for xfs/161 too.
> 

Sounds good! Thanks!

> -ritesh


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23 16:40       ` Pankaj Raghav
@ 2024-01-23 19:42         ` Ritesh Harjani
  2024-01-23 20:21           ` Pankaj Raghav
  0 siblings, 1 reply; 20+ messages in thread
From: Ritesh Harjani @ 2024-01-23 19:42 UTC (permalink / raw)
  To: Pankaj Raghav, Dave Chinner, Pankaj Raghav (Samsung)
  Cc: zlang, fstests, djwong, mcgrof, gost.dev, linux-xfs

Pankaj Raghav <p.raghav@samsung.com> writes:

>>> CCing Ritesh as I saw him post a patch to fix a testcase for 64k block size.
>> 
>> Hi Pankaj,
>> 
>> So I tested this on Linux 6.6 on Power8 qemu (which I had it handy).
>> xfs/558 passed with both 64k blocksize & with 4k blocksize on a 64k
>> pagesize system.

Ok, so it looks like the test case xfs/558 is failing on linux-next with
64k blocksize but passing with 4k blocksize.
I thought it was passing on my previous Linux 6.6 run, but I guess
those too were just some lucky runs. Here is the report -

linux-next: xfs/558 aggregate results across 11 runs: pass=2 (18.2%), fail=9 (81.8%)
v6.6: xfs/558 aggregate results across 11 runs: pass=5 (45.5%), fail=6 (54.5%)

So I guess I will spend some time analyzing the failure.

Failure log
================
xfs/558 36s ... - output mismatch (see /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad)
    --- tests/xfs/558.out       2023-06-29 12:06:13.824276289 +0000
    +++ /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad       2024-01-23 18:54:56.613116520 +0000
    @@ -1,2 +1,3 @@
     QA output created by 558
    +Expected to hear about writeback iomap invalidations?
     Silence is golden
    ...
    (Run 'diff -u /root/xfstests-dev/tests/xfs/558.out /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad'  to see the entire diff)

HINT: You _MAY_ be missing kernel fix:
      5c665e5b5af6 xfs: remove xfs_map_cow

-ritesh

>
> Thanks for testing it out. I will investigate this further, and see why
> I have this failure in LBS for 64k and not for 32k and 16k block sizes.
>
> As this test also expects some invalidation during the page cache writeback,
> this might an issue just with LBS and not for 64k page size machines.
>
> Probably I will also spend some time to set up a Power8 qemu to test these failures.
>
>> However, since on this system the quota was v4.05, it does not support
>> bigtime feature hence could not run xfs/161. 
>> 
>> xfs/161       [not run] quota: bigtime support not detected
>> xfs/558 7s ...  21s
>> 
>> I will collect this info on a different system with latest kernel and
>> will update for xfs/161 too.
>> 
>
> Sounds good! Thanks!
>
>> -ritesh


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23 19:42         ` Ritesh Harjani
@ 2024-01-23 20:21           ` Pankaj Raghav
  2024-01-24 16:58             ` Darrick J. Wong
  0 siblings, 1 reply; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-23 20:21 UTC (permalink / raw)
  To: Ritesh Harjani (IBM), Dave Chinner, Pankaj Raghav (Samsung)
  Cc: zlang, fstests, djwong, mcgrof, gost.dev, linux-xfs

On 23/01/2024 20:42, Ritesh Harjani (IBM) wrote:
> Pankaj Raghav <p.raghav@samsung.com> writes:
> 
>>>> CCing Ritesh as I saw him post a patch to fix a testcase for 64k block size.
>>>
>>> Hi Pankaj,
>>>
>>> So I tested this on Linux 6.6 on Power8 qemu (which I had it handy).
>>> xfs/558 passed with both 64k blocksize & with 4k blocksize on a 64k
>>> pagesize system.
> 
> Ok, so it looks like the testcase xfs/558 is failing on linux-next with
> 64k blocksize but passing with 4k blocksize.
> I thought it was passing on my previous Linux 6.6 release, but I guess
> those too were just some lucky runs. Here is the report -
> 
> linux-next: xfs/558 aggregate results across 11 runs: pass=2 (18.2%), fail=9 (81.8%)
> v6.6: xfs/558 aggregate results across 11 runs: pass=5 (45.5%), fail=6 (54.5%)
> 

Oh, thanks for reporting back!

I can confirm that it happens 100% of the time with my LBS patch enabled for 64k bs.

Let's see what Zorro reports back on real 64k hardware.

> So I guess I will spend some time analyzing why it fails.
> 

Could you try the patch I sent for xfs/558 and see if it works all the time?

The issue is 'xfs_wb*iomap_invalid' not getting triggered when we have larger
bs. I basically increased the blksz in the test based on the underlying bs.
Maybe there is a better solution than what I proposed, but it fixes the test.
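
That scaling can be sketched standalone as follows. `_get_file_block_size` is an fstests-internal helper, so a 16k stand-in value is hardcoded here; the 8x factor mirrors the xfs/558 patch later in this thread:

```shell
# Sketch of the blksz scaling described above. file_blksz would normally come
# from the fstests helper _get_file_block_size; a 16k stand-in is used here.
min_blksz=65536
file_blksz=16384                  # stand-in for $(_get_file_block_size "$SCRATCH_MNT")
blksz=$(( 8 * file_blksz ))       # several FSBs per IO so writeback can race write
blksz=$(( blksz > min_blksz ? blksz : min_blksz ))  # never below the old fixed 64k
echo "blksz=$blksz"
```

With a 16k FSB this yields a 128k IO size, while any FSB of 8k or smaller falls back to the original 64k.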


> Failure log
> ================
> xfs/558 36s ... - output mismatch (see /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad)
>     --- tests/xfs/558.out       2023-06-29 12:06:13.824276289 +0000
>     +++ /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad       2024-01-23 18:54:56.613116520 +0000
>     @@ -1,2 +1,3 @@
>      QA output created by 558
>     +Expected to hear about writeback iomap invalidations?
>      Silence is golden
>     ...
>     (Run 'diff -u /root/xfstests-dev/tests/xfs/558.out /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad'  to see the entire diff)
> 
> HINT: You _MAY_ be missing kernel fix:
>       5c665e5b5af6 xfs: remove xfs_map_cow
> 
> -ritesh
> 
>>
>> Thanks for testing it out. I will investigate this further, and see why
>> I have this failure in LBS for 64k and not for 32k and 16k block sizes.
>>
>> As this test also expects some invalidation during the page cache writeback,
>> this might be an issue just with LBS and not for 64k page size machines.
>>
>> Probably I will also spend some time to set up a Power8 qemu to test these failures.
>>
>>> However, since on this system the quota was v4.05, it does not support
>>> bigtime feature hence could not run xfs/161. 
>>>
>>> xfs/161       [not run] quota: bigtime support not detected
>>> xfs/558 7s ...  21s
>>>
>>> I will collect this info on a different system with latest kernel and
>>> will update for xfs/161 too.
>>>
>>
>> Sounds good! Thanks!
>>
>>> -ritesh


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-23 20:21           ` Pankaj Raghav
@ 2024-01-24 16:58             ` Darrick J. Wong
  2024-01-24 21:06               ` Pankaj Raghav
  0 siblings, 1 reply; 20+ messages in thread
From: Darrick J. Wong @ 2024-01-24 16:58 UTC (permalink / raw)
  To: Pankaj Raghav
  Cc: Ritesh Harjani (IBM), Dave Chinner, Pankaj Raghav (Samsung),
	zlang, fstests, mcgrof, gost.dev, linux-xfs

On Tue, Jan 23, 2024 at 09:21:50PM +0100, Pankaj Raghav wrote:
> On 23/01/2024 20:42, Ritesh Harjani (IBM) wrote:
> > Pankaj Raghav <p.raghav@samsung.com> writes:
> > 
> >>>> CCing Ritesh as I saw him post a patch to fix a testcase for 64k block size.
> >>>
> >>> Hi Pankaj,
> >>>
> >>> So I tested this on Linux 6.6 on Power8 qemu (which I had it handy).
> >>> xfs/558 passed with both 64k blocksize & with 4k blocksize on a 64k
> >>> pagesize system.
> > 
> > Ok, so it looks like the testcase xfs/558 is failing on linux-next with
> > 64k blocksize but passing with 4k blocksize.
> > I thought it was passing on my previous Linux 6.6 release, but I guess
> > those too were just some lucky runs. Here is the report -
> > 
> > linux-next: xfs/558 aggregate results across 11 runs: pass=2 (18.2%), fail=9 (81.8%)
> > v6.6: xfs/558 aggregate results across 11 runs: pass=5 (45.5%), fail=6 (54.5%)
> > 
> 
> Oh, thanks for reporting back!
> 
> I can confirm that it happens 100% of the time with my LBS patch enabled for 64k bs.
> 
> Let's see what Zorro reports back on real 64k hardware.
> 
> > So I guess I will spend some time analyzing why it fails.
> > 
> 
> Could you try the patch I sent for xfs/558 and see if it works all the time?
> 
> The issue is 'xfs_wb*iomap_invalid' not getting triggered when we have larger
> bs. I basically increased the blksz in the test based on the underlying bs.
> Maybe there is a better solution than what I proposed, but it fixes the test.

The only improvement I can think of would be to force-disable large
folios on the file being tested.  Large folios mess with testing because
the race depends on write and writeback needing to walk multiple pages.
Right now the pagecache only institutes large folios if the IO patterns
are large IOs, but in theory that could change some day.

I suspect that the iomap tracepoint data and possibly
trace_mm_filemap_add_to_page_cache might help figure out what size
folios are actually in use during the invalidation test.
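
As a rough starting point for that analysis, one might first check which of those tracepoints the running kernel exposes. This is a sketch that assumes tracefs is mounted at /sys/kernel/tracing; the `xfs_wb_*_iomap_invalid` event names are assumptions inferred from the xfs/558 discussion, not confirmed here:

```shell
# Sketch: report availability of the tracepoints mentioned above before
# attempting a trace capture. Assumes tracefs at /sys/kernel/tracing;
# the xfs event names are assumptions based on what xfs/558 greps for.
T=/sys/kernel/tracing/events
events="filemap/mm_filemap_add_to_page_cache \
xfs/xfs_wb_data_iomap_invalid \
xfs/xfs_wb_cow_iomap_invalid"
report=""
for ev in $events; do
    if [ -d "$T/$ev" ]; then
        state=available
    else
        state=missing
    fi
    report="${report}${ev}: ${state}
"
done
printf '%s' "$report"
```

Events reported as available can then be enabled by writing 1 to their `enable` file before the test run, and the `order=` field of each `mm_filemap_add_to_page_cache` event (on kernels that print it) shows the folio sizes actually instantiated.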

(Perhaps it's time for me to add a 64k bs VM to the test fleet.)

--D

> > Failure log
> > ================
> > xfs/558 36s ... - output mismatch (see /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad)
> >     --- tests/xfs/558.out       2023-06-29 12:06:13.824276289 +0000
> >     +++ /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad       2024-01-23 18:54:56.613116520 +0000
> >     @@ -1,2 +1,3 @@
> >      QA output created by 558
> >     +Expected to hear about writeback iomap invalidations?
> >      Silence is golden
> >     ...
> >     (Run 'diff -u /root/xfstests-dev/tests/xfs/558.out /root/xfstests-dev/results//xfs_64k_iomap/xfs/558.out.bad'  to see the entire diff)
> > 
> > HINT: You _MAY_ be missing kernel fix:
> >       5c665e5b5af6 xfs: remove xfs_map_cow
> > 
> > -ritesh
> > 
> >>
> >> Thanks for testing it out. I will investigate this further, and see why
> >> I have this failure in LBS for 64k and not for 32k and 16k block sizes.
> >>
> >> As this test also expects some invalidation during the page cache writeback,
> >> this might be an issue just with LBS and not for 64k page size machines.
> >>
> >> Probably I will also spend some time to set up a Power8 qemu to test these failures.
> >>
> >>> However, since on this system the quota was v4.05, it does not support
> >>> bigtime feature hence could not run xfs/161. 
> >>>
> >>> xfs/161       [not run] quota: bigtime support not detected
> >>> xfs/558 7s ...  21s
> >>>
> >>> I will collect this info on a different system with latest kernel and
> >>> will update for xfs/161 too.
> >>>
> >>
> >> Sounds good! Thanks!
> >>
> >>> -ritesh


* Re: [PATCH 0/2] fstest changes for LBS
  2024-01-24 16:58             ` Darrick J. Wong
@ 2024-01-24 21:06               ` Pankaj Raghav
  0 siblings, 0 replies; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-24 21:06 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Ritesh Harjani (IBM), Dave Chinner, Pankaj Raghav (Samsung),
	zlang, fstests, mcgrof, gost.dev, linux-xfs

>> The issue is 'xfs_wb*iomap_invalid' not getting triggered when we have larger
>> bs. I basically increased the blksz in the test based on the underlying bs.
>> Maybe there is a better solution than what I proposed, but it fixes the test.
> 
> The only improvement I can think of would be to force-disable large
> folios on the file being tested.  Large folios mess with testing because
> the race depends on write and writeback needing to walk multiple pages.
> Right now the pagecache only institutes large folios if the IO patterns
> are large IOs, but in theory that could change some day.
> 

Hmm, so we would create something like a debug parameter to disable large folios
while the file is being tested?

The only issue is that the LBS work needs large folios to be enabled.

So I think the solution then is to add a debug parameter to disable large folios
for normal blocksizes (bs <= ps) while running the test, but disable this test
altogether for LBS (bs > ps)?


> I suspect that the iomap tracepoint data and possibly
> trace_mm_filemap_add_to_page_cache might help figure out what size
> folios are actually in use during the invalidation test.
> 

Cool! I will see if I can do some analysis by adding trace_mm_filemap_add_to_page_cache
while running the test.

> (Perhaps it's time for me to add a 64k bs VM to the test fleet.)
> 

I confirmed with Chandan that Oracle OCI with Ampere supports 64kb page sizes. We (Luis and I)
are also looking into running kdevops on XFS with 64kb page size and block size, as it might
be useful for the LBS work to cross-verify the failures.


* Re: [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem
  2024-01-22 16:57   ` Darrick J. Wong
  2024-01-22 17:32     ` Pankaj Raghav
@ 2024-01-25 16:06     ` Pankaj Raghav
  1 sibling, 0 replies; 20+ messages in thread
From: Pankaj Raghav @ 2024-01-25 16:06 UTC (permalink / raw)
  To: Darrick J. Wong, Dave Chinner
  Cc: zlang, fstests, mcgrof, gost.dev, linux-xfs,
	Pankaj Raghav (Samsung),
	chandan.babu

On 22/01/2024 17:57, Darrick J. Wong wrote:
> On Mon, Jan 22, 2024 at 12:17:51PM +0100, Pankaj Raghav (Samsung) wrote:
>> From: Pankaj Raghav <p.raghav@samsung.com>
>>
>> This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
>> system(see LBS efforts[1]). Adapt the blksz so that we create more than
>> one block for the testcase.
> 
> How does this fail, specifically?  And, uh, what block sizes > 64k were
> tested?
> 
> --D
> 
>> Cap the blksz to be at least 64k to retain the same behaviour as before
>> for smaller filesystem blocksizes.
>>
>> [1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
>>

I tried the same test on a machine with 64k page size and block size, and I got the same
error; this patch fixes it!

Kernel version: 6.7.1
xfstest version: for-next
PAGE_SIZE: 64k

# Without this patch

ubuntu@xfstest:/mnt/linux/xfstests$ getconf PAGE_SIZE
65536

ubuntu@xfstest:/mnt/linux/xfstests$ sudo ./check -s 64k xfs/161
SECTION       -- 64k
RECREATING    -- xfs on /dev/sdb2
FSTYP         -- xfs (non-debug)
PLATFORM      -- Linux/aarch64 xfstest 6.7.1-64k #8 SMP Thu Jan 25 13:38:41 UTC 2024
MKFS_OPTIONS  -- -f -f -m reflink=1,rmapbt=1, -i sparse=1, -b size=64k, /dev/sdb3
MOUNT_OPTIONS -- /dev/sdb3 /mnt/scratch

xfs/161 6s ... - output mismatch (see /mnt/linux/xfstests/results/xfstest/6.7.1-64k/64k/xfs/161.out.bad)
    --- tests/xfs/161.out	2024-01-25 15:36:48.869401419 +0000
    +++ /mnt/linux/xfstests/results/xfstest/6.7.1-64k/64k/xfs/161.out.bad	2024-01-25
15:59:47.702340351 +0000
    @@ -1,6 +1,15 @@
     QA output created by 161
    +Expected timer expiry (0) to be after now (1706198386).
     Running xfs_repair to upgrade filesystem.
     Adding large timestamp support to filesystem.
     FEATURES: BIGTIME:YES
    -grace2 expiry is in range
    -grace2 expiry after remount is in range
    ...
    (Run 'diff -u /mnt/linux/xfstests/tests/xfs/161.out
/mnt/linux/xfstests/results/xfstest/6.7.1-64k/64k/xfs/161.out.bad'  to see the entire diff)
Ran: xfs/161
Failures: xfs/161
Failed 1 of 1 tests

SECTION       -- 64k
=========================
Ran: xfs/161
Failures: xfs/161
Failed 1 of 1 tests


# With this patch:

ubuntu@xfstest:/mnt/linux/xfstests$ sudo ./check -s 64k xfs/161
SECTION       -- 64k
RECREATING    -- xfs on /dev/sdb2
FSTYP         -- xfs (non-debug)
PLATFORM      -- Linux/aarch64 xfstest 6.7.1-64k #8 SMP Thu Jan 25 13:38:41 UTC 2024
MKFS_OPTIONS  -- -f -f -m reflink=1,rmapbt=1, -i sparse=1, -b size=64k, /dev/sdb3
MOUNT_OPTIONS -- /dev/sdb3 /mnt/scratch

xfs/161 6s ...  6s
Ran: xfs/161
Passed all 1 tests

SECTION       -- 64k
=========================
Ran: xfs/161
Passed all 1 tests

>> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
>> ---
>>  tests/xfs/161 | 9 +++++++--
>>  1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/tests/xfs/161 b/tests/xfs/161
>> index 486fa6ca..f7b03f0e 100755
>> --- a/tests/xfs/161
>> +++ b/tests/xfs/161
>> @@ -38,9 +38,14 @@ _qmount_option "usrquota"
>>  _scratch_xfs_db -c 'version' -c 'sb 0' -c 'p' >> $seqres.full
>>  _scratch_mount >> $seqres.full
>>  
>> +min_blksz=65536
>> +file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
>> +blksz=$(( 2 * $file_blksz))
>> +
>> +blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
>>  # Force the block counters for uid 1 and 2 above zero
>> -_pwrite_byte 0x61 0 64k $SCRATCH_MNT/a >> $seqres.full
>> -_pwrite_byte 0x61 0 64k $SCRATCH_MNT/b >> $seqres.full
>> +_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/a >> $seqres.full
>> +_pwrite_byte 0x61 0 $blksz $SCRATCH_MNT/b >> $seqres.full
>>  sync
>>  chown 1 $SCRATCH_MNT/a
>>  chown 2 $SCRATCH_MNT/b
>> -- 
>> 2.43.0
>>
>>


* Re: [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz
  2024-01-22 17:23     ` Pankaj Raghav
@ 2024-03-13 20:08       ` Darrick J. Wong
  0 siblings, 0 replies; 20+ messages in thread
From: Darrick J. Wong @ 2024-03-13 20:08 UTC (permalink / raw)
  To: Pankaj Raghav
  Cc: Pankaj Raghav (Samsung), zlang, fstests, mcgrof, gost.dev, linux-xfs

On Mon, Jan 22, 2024 at 06:23:16PM +0100, Pankaj Raghav wrote:
> On 22/01/2024 17:53, Darrick J. Wong wrote:
> > On Mon, Jan 22, 2024 at 12:17:50PM +0100, Pankaj Raghav (Samsung) wrote:
> >> From: Pankaj Raghav <p.raghav@samsung.com>
> >>
> >> This test fails for >= 64k filesystem block size on a 4k PAGE_SIZE
> >> system(see LBS efforts[1]). Scale the `blksz` based on the filesystem
> > > Fails how, specifically?
> 
> I basically get this in 558.out.bad when I set the filesystem block size to 64k:
> QA output created by 558
> Expected to hear about writeback iomap invalidations?
> Silence is golden
> 
> But I do see that iomap invalidations are happening for 16k and 32k, which makes it pass
> the test for those block sizes.
> 
> My suspicion was that we don't see any invalidations because the blksz is fixed
> at 64k in the test, which will span only one FSB in the case of a 64k block size.
> 
> Let me know if I am missing something.
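
The arithmetic behind the suspicion quoted above can be illustrated directly: with the test's IO size fixed at 64k, the number of filesystem blocks each write spans shrinks as the block size grows and collapses to one exactly at a 64k FSB, leaving no multi-block window for the invalidation race:

```shell
# Illustrates the point above: a fixed 64k write covers progressively fewer
# filesystem blocks as the block size grows, reaching a single FSB at 64k.
io_size=65536
for fsblksz in 4096 16384 32768 65536; do
    echo "fsb=${fsblksz}: blocks per 64k write = $(( io_size / fsblksz ))"
done
```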

Nope, that sounds good and fixes the problems I saw.  So:
Tested-by: Darrick J. Wong <djwong@kernel.org>

And if you add to the commit message that this patch specifically fixes
the "Expected to hear about writeback iomap invalidations?" message for
64k filesystems, then:
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> > 
> > --D
> > 
> >> block size instead of fixing it as 64k so that we do get some iomap
> >> invalidations while doing concurrent writes.
> >>
> >> Cap the blksz to be at least 64k to retain the same behaviour as before
> >> for smaller filesystem blocksizes.
> >>
> >> [1] LBS effort: https://lore.kernel.org/lkml/20230915183848.1018717-1-kernel@pankajraghav.com/
> >>
> >> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
> >> ---
> >>  tests/xfs/558 | 7 ++++++-
> >>  1 file changed, 6 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/tests/xfs/558 b/tests/xfs/558
> >> index 9e9b3be8..270f458c 100755
> >> --- a/tests/xfs/558
> >> +++ b/tests/xfs/558
> >> @@ -127,7 +127,12 @@ _scratch_mount >> $seqres.full
> >>  $XFS_IO_PROG -c 'chattr -x' $SCRATCH_MNT &> $seqres.full
> >>  _require_pagecache_access $SCRATCH_MNT
> >>  
> >> -blksz=65536
> >> +min_blksz=65536
> >> +file_blksz=$(_get_file_block_size "$SCRATCH_MNT")
> >> +blksz=$(( 8 * $file_blksz ))
> >> +
> >> +blksz=$(( blksz > min_blksz ? blksz : min_blksz ))
> >> +
> >>  _require_congruent_file_oplen $SCRATCH_MNT $blksz
> >>  
> >>  # Make sure we have sufficient extent size to create speculative CoW
> >> -- 
> >> 2.43.0
> >>
> 


end of thread, other threads:[~2024-03-13 20:08 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-22 11:17 [PATCH 0/2] fstest changes for LBS Pankaj Raghav (Samsung)
2024-01-22 11:17 ` [PATCH 1/2] xfs/558: scale blk IO size based on the filesystem blksz Pankaj Raghav (Samsung)
2024-01-22 16:53   ` Darrick J. Wong
2024-01-22 17:23     ` Pankaj Raghav
2024-03-13 20:08       ` Darrick J. Wong
2024-01-22 11:17 ` [PATCH 2/2] xfs/161: adapt the test case for LBS filesystem Pankaj Raghav (Samsung)
2024-01-22 16:57   ` Darrick J. Wong
2024-01-22 17:32     ` Pankaj Raghav
2024-01-25 16:06     ` Pankaj Raghav
2024-01-23  0:25 ` [PATCH 0/2] fstest changes for LBS Dave Chinner
2024-01-23  8:52   ` Pankaj Raghav
2024-01-23 13:43     ` Zorro Lang
2024-01-23 15:39       ` Ritesh Harjani
2024-01-23 16:33       ` Pankaj Raghav
2024-01-23 15:35     ` Ritesh Harjani
2024-01-23 16:40       ` Pankaj Raghav
2024-01-23 19:42         ` Ritesh Harjani
2024-01-23 20:21           ` Pankaj Raghav
2024-01-24 16:58             ` Darrick J. Wong
2024-01-24 21:06               ` Pankaj Raghav
