* xfstests seems broken on btrfs with multi-dev TEST_DEV
@ 2021-02-24 16:12 Eric Sandeen
  2021-02-24 21:39 ` Eric Sandeen
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2021-02-24 16:12 UTC (permalink / raw)
  To: linux-btrfs; +Cc: linux-fsdevel

Last week I was curious to just see how btrfs is faring with RAID5 in
xfstests, so I set it up for a quick run with devices configured as:

TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"

and fired off ./check -g auto

Every test after btrfs/124 fails, because btrfs/124 does this:

# un-scan the btrfs devices
_btrfs_forget_or_module_reload

and nothing re-scans devices after that, so every attempt to mount TEST_DEV
will fail:

> devid 2 uuid e42cd5b8-2de6-4c85-ae51-74b61172051e is missing

Other btrfs tests seem to have the same problem.

If xfstest coverage on multi-device btrfs volumes is desired, it might be
a good idea for someone who understands the btrfs framework in xfstests
to fix this.
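As a stopgap, a manual re-scan should get TEST_DEV mounting again - a
sketch, assuming btrfs-progs is available; with no arguments this
re-registers every btrfs device it finds:

# btrfs device scan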

Thanks,
-Eric

* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-24 16:12 xfstests seems broken on btrfs with multi-dev TEST_DEV Eric Sandeen
@ 2021-02-24 21:39 ` Eric Sandeen
  2021-02-25  1:16   ` Anand Jain
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2021-02-24 21:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: fstests

On 2/24/21 10:12 AM, Eric Sandeen wrote:
> Last week I was curious to just see how btrfs is faring with RAID5 in
> xfstests, so I set it up for a quick run with devices configured as:

Whoops this was supposed to cc: fstests, not fsdevel, sorry.

-Eric

> TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
> SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
> 
> and fired off ./check -g auto
> 
> Every test after btrfs/124 fails, because btrfs/124 does this:
> 
> # un-scan the btrfs devices
> _btrfs_forget_or_module_reload
> 
> and nothing re-scans devices after that, so every attempt to mount TEST_DEV
> will fail:
> 
>> devid 2 uuid e42cd5b8-2de6-4c85-ae51-74b61172051e is missing
>
> Other btrfs tests seem to have the same problem.
> 
> If xfstest coverage on multi-device btrfs volumes is desired, it might be
> a good idea for someone who understands the btrfs framework in xfstests
> to fix this.
> 
> Thanks,
> -Eric
> 

* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-24 21:39 ` Eric Sandeen
@ 2021-02-25  1:16   ` Anand Jain
  2021-02-25  1:46     ` Eric Sandeen
  0 siblings, 1 reply; 10+ messages in thread
From: Anand Jain @ 2021-02-25  1:16 UTC (permalink / raw)
  To: Eric Sandeen, linux-btrfs; +Cc: fstests

On 25/02/2021 05:39, Eric Sandeen wrote:
> On 2/24/21 10:12 AM, Eric Sandeen wrote:
>> Last week I was curious to just see how btrfs is faring with RAID5 in
>> xfstests, so I set it up for a quick run with devices configured as:
> 
> Whoops this was supposed to cc: fstests, not fsdevel, sorry.
> 
> -Eric
> 
>> TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
>> SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
>>
>> and fired off ./check -g auto
>>
>> Every test after btrfs/124 fails, because btrfs/124 does this:
>>
>> # un-scan the btrfs devices
>> _btrfs_forget_or_module_reload
>>
>> and nothing re-scans devices after that, so every attempt to mount TEST_DEV
>> will fail:
>>
>>> devid 2 uuid e42cd5b8-2de6-4c85-ae51-74b61172051e is missing
>>
>> Other btrfs tests seem to have the same problem.
>>
>> If xfstest coverage on multi-device btrfs volumes is desired, it might be
>> a good idea for someone who understands the btrfs framework in xfstests
>> to fix this.

Eric,

  All our multi-device test cases under tests/btrfs use
  SCRATCH_DEV_POOL. Unless I am missing something, could
  TEST_DEV be made optional for test cases that don't need TEST_DEV?
  Or maybe I don't understand how TEST_DEV is useful in some of these
  test cases under tests/btrfs.

Thanks, Anand

>>
>> Thanks,
>> -Eric
>>


* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-25  1:16   ` Anand Jain
@ 2021-02-25  1:46     ` Eric Sandeen
  2021-02-25  1:56       ` Qu Wenruo
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2021-02-25  1:46 UTC (permalink / raw)
  To: Anand Jain, linux-btrfs; +Cc: fstests

On 2/24/21 7:16 PM, Anand Jain wrote:
> On 25/02/2021 05:39, Eric Sandeen wrote:
>> On 2/24/21 10:12 AM, Eric Sandeen wrote:
>>> Last week I was curious to just see how btrfs is faring with RAID5 in
>>> xfstests, so I set it up for a quick run with devices configured as:
>>
>> Whoops this was supposed to cc: fstests, not fsdevel, sorry.
>>
>> -Eric
>>
>>> TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
>>> SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
>>>
>>> and fired off ./check -g auto
>>>
>>> Every test after btrfs/124 fails, because btrfs/124 does this:
>>>
>>> # un-scan the btrfs devices
>>> _btrfs_forget_or_module_reload
>>>
>>> and nothing re-scans devices after that, so every attempt to mount TEST_DEV
>>> will fail:
>>>
>>>> devid 2 uuid e42cd5b8-2de6-4c85-ae51-74b61172051e is missing
>>>
>>> Other btrfs tests seem to have the same problem.
>>>
>>> If xfstest coverage on multi-device btrfs volumes is desired, it might be
>>> a good idea for someone who understands the btrfs framework in xfstests
>>> to fix this.
> 
> Eric,
> 
>  All our multi-device test cases under tests/btrfs use
>  SCRATCH_DEV_POOL. Unless I am missing something, could
>  TEST_DEV be made optional for test cases that don't need TEST_DEV?
>  Or maybe I don't understand how TEST_DEV is useful in some of these
>  test cases under tests/btrfs.

Those are the tests specifically designed to poke at multi-dev btrfs, right.

TEST_DEV is designed more to "age" - it is used for the non-destructive tests.

The point is that many tests /do/ run using TEST_DEV, and if a multi-dev TEST_DEV
can't be used, you are getting no coverage from those tests on that type of
btrfs configuration. And if a multi-dev TEST_DEV breaks the test run, nobody's
going to test that way.

There are ~300 tests that run on TEST_DEV, and restricting its functionality
to a single-device btrfs filesystem misses coverage.

# grep require_test tests/generic/??? | wc -l
299

tl;dr: a btrfs test which renders a legitimate btrfs TEST_DEV inoperable is
a flaw in that btrfs test, IMHO.

-Eric

> Thanks, Anand
> 
>>>
>>> Thanks,
>>> -Eric
>>>
> 

* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-25  1:46     ` Eric Sandeen
@ 2021-02-25  1:56       ` Qu Wenruo
  2021-02-25  2:45         ` Eric Sandeen
  0 siblings, 1 reply; 10+ messages in thread
From: Qu Wenruo @ 2021-02-25  1:56 UTC (permalink / raw)
  To: Eric Sandeen, Anand Jain, linux-btrfs; +Cc: fstests



On 2021/2/25 9:46 AM, Eric Sandeen wrote:
> On 2/24/21 7:16 PM, Anand Jain wrote:
>> On 25/02/2021 05:39, Eric Sandeen wrote:
>>> On 2/24/21 10:12 AM, Eric Sandeen wrote:
>>>> Last week I was curious to just see how btrfs is faring with RAID5 in
>>>> xfstests, so I set it up for a quick run with devices configured as:
>>>
>>> Whoops this was supposed to cc: fstests, not fsdevel, sorry.
>>>
>>> -Eric
>>>
>>>> TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
>>>> SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
>>>>
>>>> and fired off ./check -g auto
>>>>
>>>> Every test after btrfs/124 fails, because btrfs/124 does this:
>>>>
>>>> # un-scan the btrfs devices
>>>> _btrfs_forget_or_module_reload
>>>>
>>>> and nothing re-scans devices after that, so every attempt to mount TEST_DEV
>>>> will fail:
>>>>
>>>>> devid 2 uuid e42cd5b8-2de6-4c85-ae51-74b61172051e is missing
>>>>
>>>> Other btrfs tests seem to have the same problem.
>>>>
>>>> If xfstest coverage on multi-device btrfs volumes is desired, it might be
>>>> a good idea for someone who understands the btrfs framework in xfstests
>>>> to fix this.
>>
>> Eric,
>>
>>   All our multi-device test cases under tests/btrfs use
>>   SCRATCH_DEV_POOL. Unless I am missing something, could
>>   TEST_DEV be made optional for test cases that don't need TEST_DEV?
>>   Or maybe I don't understand how TEST_DEV is useful in some of these
>>   test cases under tests/btrfs.
>
> Those are the tests specifically designed to poke at multi-dev btrfs, right.
>
> TEST_DEV is designed more to "age" - it is used for the non-destructive tests.
>
> The point is that many tests /do/ run using TEST_DEV, and if a multi-dev TEST_DEV
> can't be used, you are getting no coverage from those tests on that type of
> btrfs configuration. And if a multi-dev TEST_DEV breaks the test run, nobody's
> going to test that way.

The problem is, TEST_DEV should not be included in SCRATCH_DEV_POOL.

Try assigning TEST_DEV and SCRATCH_DEV to the same device - that's exactly
what you're doing.

If you want to test aging on multi-device btrfs, it's not a problem at all:
just run mkfs.btrfs on the array, put one of its devices into TEST_DEV, and
call it a day.
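For example, a sketch with hypothetical device names:

# mkfs.btrfs -f -d raid5 /dev/sdx1 /dev/sdx2 /dev/sdx3

and then in local.config just point TEST_DEV at one member of the array:

export TEST_DEV=/dev/sdx1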

>
> There are ~300 tests that run on TEST_DEV, and restricting its functionality
> to a single-device btrfs filesystem misses coverage.
>
> # grep require_test tests/generic/??? | wc -l
> 299

Nope, it's the tester's responsibility to set up TEST_DEV.

As mentioned, just set up TEST_DEV *properly* and you can test without any
problem, even with multi-device btrfs.


>
> tl;dr: a btrfs test which renders a legitimate btrfs TEST_DEV inoperable is
> a flaw in that btrfs test, IMHO.

The root problem is, TEST_DEV should never be included in
SCRATCH_DEV_POOL nor SCRATCH_DEV.

And obviously, putting the same device into both TEST_DEV and the
SCRATCH_DEV pool is the problem.

Thanks,
Qu

>
> -Eric
>
>> Thanks, Anand
>>
>>>>
>>>> Thanks,
>>>> -Eric
>>>>
>>

* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-25  1:56       ` Qu Wenruo
@ 2021-02-25  2:45         ` Eric Sandeen
  2021-02-25  3:13           ` Qu Wenruo
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2021-02-25  2:45 UTC (permalink / raw)
  To: Qu Wenruo, Anand Jain, linux-btrfs; +Cc: fstests

On 2/24/21 7:56 PM, Qu Wenruo wrote:
> 
> 
> On 2021/2/25 9:46 AM, Eric Sandeen wrote:
>> On 2/24/21 7:16 PM, Anand Jain wrote:
>>> On 25/02/2021 05:39, Eric Sandeen wrote:
>>>> On 2/24/21 10:12 AM, Eric Sandeen wrote:
>>>>> Last week I was curious to just see how btrfs is faring with RAID5 in
>>>>> xfstests, so I set it up for a quick run with devices configured as:
>>>>
>>>> Whoops this was supposed to cc: fstests, not fsdevel, sorry.
>>>>
>>>> -Eric
>>>>
>>>>> TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
>>>>> SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
>>>>>
>>>>> and fired off ./check -g auto
>>>>>
>>>>> Every test after btrfs/124 fails, because btrfs/124 does this:
>>>>>
>>>>> # un-scan the btrfs devices
>>>>> _btrfs_forget_or_module_reload
>>>>>
>>>>> and nothing re-scans devices after that, so every attempt to mount TEST_DEV
>>>>> will fail:
>>>>>
>>>>>> devid 2 uuid e42cd5b8-2de6-4c85-ae51-74b61172051e is missing
>>>>>
>>>>> Other btrfs tests seem to have the same problem.
>>>>>
>>>>> If xfstest coverage on multi-device btrfs volumes is desired, it might be
>>>>> a good idea for someone who understands the btrfs framework in xfstests
>>>>> to fix this.
>>>
>>> Eric,
>>>
>>>   All our multi-device test cases under tests/btrfs use
>>>   SCRATCH_DEV_POOL. Unless I am missing something, could
>>>   TEST_DEV be made optional for test cases that don't need TEST_DEV?
>>>   Or maybe I don't understand how TEST_DEV is useful in some of these
>>>   test cases under tests/btrfs.
>>
>> Those are the tests specifically designed to poke at multi-dev btrfs, right.
>>
>> TEST_DEV is designed more to "age" - it is used for the non-destructive tests.
>>
>> The point is that many tests /do/ run using TEST_DEV, and if a multi-dev TEST_DEV
>> can't be used, you are getting no coverage from those tests on that type of
>> btrfs configuration. And if a multi-dev TEST_DEV breaks the test run, nobody's
>> going to test that way.
> 
> The problem is, TEST_DEV should not be included in SCRATCH_DEV_POOL.

Sorry, I typed out the config from memory and made an error; apologies for
the confusion.

Let me try again to demonstrate. I have 10 completely different block devices
(loop devices, for this demo)

# cat local.config
export TEST_DEV=/dev/loop1 
export TEST_DIR=/mnt/test
export SCRATCH_DEV_POOL="/dev/loop5 /dev/loop6 /dev/loop7 /dev/loop8 /dev/loop9"
export SCRATCH_MNT=/mnt/scratch

TEST_DEV is a 3-device filesystem:

# mkfs.btrfs -f -d raid5 /dev/loop1 /dev/loop2 /dev/loop3

so: 3-dev TEST_DEV, 5 /different/ devices in the SCRATCH_DEV_POOL

Run btrfs/124:

# ./check btrfs/124
FSTYP         -- btrfs
PLATFORM      -- Linux/x86_64 intel-lizardhead-04 5.11.0-rc7+ #128 SMP Fri Feb 12 16:15:39 EST 2021
MKFS_OPTIONS  -- /dev/loop5
MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 /dev/loop5 /mnt/scratch

btrfs/124	- output mismatch (see /root/xfstests-dev/results//btrfs/124.out.bad)

<ok it failed but ... beside the point>

Now, no other test can be run:

# dmesg -c > /dev/null

# ./check generic/001
mount: wrong fs type, bad option, bad superblock on /dev/loop1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
common/rc: retrying test device mount with external set
mount: wrong fs type, bad option, bad superblock on /dev/loop1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
common/rc: could not mount /dev/loop1 on /mnt/test

# dmesg
[544731.787311] BTRFS info (device loop1): disk space caching is enabled
[544731.794514] BTRFS info (device loop1): has skinny extents
[544731.801050] BTRFS error (device loop1): devid 2 uuid 2cb73b87-b5c9-46ec-a457-594455cfb7e3 is missing
[544731.811343] BTRFS error (device loop1): failed to read the system array: -2
[544731.826098] BTRFS error (device loop1): open_ctree failed
[544731.863343] BTRFS info (device loop1): disk space caching is enabled
[544731.870530] BTRFS info (device loop1): has skinny extents
[544731.877022] BTRFS error (device loop1): devid 2 uuid 2cb73b87-b5c9-46ec-a457-594455cfb7e3 is missing
[544731.887335] BTRFS error (device loop1): failed to read the system array: -2
[544731.903094] BTRFS error (device loop1): open_ctree failed


* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-25  2:45         ` Eric Sandeen
@ 2021-02-25  3:13           ` Qu Wenruo
  2021-02-25  3:15             ` Eric Sandeen
  0 siblings, 1 reply; 10+ messages in thread
From: Qu Wenruo @ 2021-02-25  3:13 UTC (permalink / raw)
  To: Eric Sandeen, Anand Jain, linux-btrfs; +Cc: fstests



On 2021/2/25 10:45 AM, Eric Sandeen wrote:
> On 2/24/21 7:56 PM, Qu Wenruo wrote:
>>
>>
>> On 2021/2/25 9:46 AM, Eric Sandeen wrote:
>>> On 2/24/21 7:16 PM, Anand Jain wrote:
>>>> On 25/02/2021 05:39, Eric Sandeen wrote:
>>>>> On 2/24/21 10:12 AM, Eric Sandeen wrote:
>>>>>> Last week I was curious to just see how btrfs is faring with RAID5 in
>>>>>> xfstests, so I set it up for a quick run with devices configured as:
>>>>>
>>>>> Whoops this was supposed to cc: fstests, not fsdevel, sorry.
>>>>>
>>>>> -Eric
>>>>>
>>>>>> TEST_DEV=/dev/sdb1 # <--- this was a 3-disk "-d raid5" filesystem
>>>>>> SCRATCH_DEV_POOL="/dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5 /dev/sdb6"
>>>>>>
>>>>>> and fired off ./check -g auto
>>>>>>
>>>>>> Every test after btrfs/124 fails, because btrfs/124 does this:
>>>>>>
>>>>>> # un-scan the btrfs devices
>>>>>> _btrfs_forget_or_module_reload
>>>>>>
>>>>>> and nothing re-scans devices after that, so every attempt to mount TEST_DEV
>>>>>> will fail:
>>>>>>
>>>>>>> devid 2 uuid e42cd5b8-2de6-4c85-ae51-74b61172051e is missing
>>>>>>
>>>>>> Other btrfs tests seem to have the same problem.
>>>>>>
>>>>>> If xfstest coverage on multi-device btrfs volumes is desired, it might be
>>>>>> a good idea for someone who understands the btrfs framework in xfstests
>>>>>> to fix this.
>>>>
>>>> Eric,
>>>>
>>>>    All our multi-device test cases under tests/btrfs use
>>>>    SCRATCH_DEV_POOL. Unless I am missing something, could
>>>>    TEST_DEV be made optional for test cases that don't need TEST_DEV?
>>>>    Or maybe I don't understand how TEST_DEV is useful in some of these
>>>>    test cases under tests/btrfs.
>>>
>>> Those are the tests specifically designed to poke at multi-dev btrfs, right.
>>>
>>> TEST_DEV is designed more to "age" - it is used for the non-destructive tests.
>>>
>>> The point is that many tests /do/ run using TEST_DEV, and if a multi-dev TEST_DEV
>>> can't be used, you are getting no coverage from those tests on that type of
>>> btrfs configuration. And if a multi-dev TEST_DEV breaks the test run, nobody's
>>> going to test that way.
>>
>> The problem is, TEST_DEV should not be included in SCRATCH_DEV_POOL.
>
> Sorry, I typed out the config from memory and made an error; apologies for
> the confusion.
>
> Let me try again to demonstrate. I have 10 completely different block devices
> (loop devices, for this demo)
>
> # cat local.config
> export TEST_DEV=/dev/loop1
> export TEST_DIR=/mnt/test
> export SCRATCH_DEV_POOL="/dev/loop5 /dev/loop6 /dev/loop7 /dev/loop8 /dev/loop9"
> export SCRATCH_MNT=/mnt/scratch
>
> TEST_DEV is a 3-device filesystem:
>
> # mkfs.btrfs -f -d raid5 /dev/loop1 /dev/loop2 /dev/loop3
>
> so: 3-dev TEST_DEV, 5 /different/ devices in the SCRATCH_DEV_POOL
>
> Run btrfs/124:
>
> # ./check btrfs/124
> FSTYP         -- btrfs
> PLATFORM      -- Linux/x86_64 intel-lizardhead-04 5.11.0-rc7+ #128 SMP Fri Feb 12 16:15:39 EST 2021
> MKFS_OPTIONS  -- /dev/loop5
> MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 /dev/loop5 /mnt/scratch
>
> btrfs/124	- output mismatch (see /root/xfstests-dev/results//btrfs/124.out.bad)
>
> <ok it failed but ... beside the point>
>
> Now, no other test can be run:
>
> # dmesg -c > /dev/null
>
> # ./check generic/001
> mount: wrong fs type, bad option, bad superblock on /dev/loop1,
>         missing codepage or helper program, or other error
>
>         In some cases useful info is found in syslog - try
>         dmesg | tail or so.
> common/rc: retrying test device mount with external set
> mount: wrong fs type, bad option, bad superblock on /dev/loop1,
>         missing codepage or helper program, or other error
>
>         In some cases useful info is found in syslog - try
>         dmesg | tail or so.
> common/rc: could not mount /dev/loop1 on /mnt/test
>
> # dmesg
> [544731.787311] BTRFS info (device loop1): disk space caching is enabled
> [544731.794514] BTRFS info (device loop1): has skinny extents
> [544731.801050] BTRFS error (device loop1): devid 2 uuid 2cb73b87-b5c9-46ec-a457-594455cfb7e3 is missing
> [544731.811343] BTRFS error (device loop1): failed to read the system array: -2
> [544731.826098] BTRFS error (device loop1): open_ctree failed
> [544731.863343] BTRFS info (device loop1): disk space caching is enabled
> [544731.870530] BTRFS info (device loop1): has skinny extents
> [544731.877022] BTRFS error (device loop1): devid 2 uuid 2cb73b87-b5c9-46ec-a457-594455cfb7e3 is missing
> [544731.887335] BTRFS error (device loop1): failed to read the system array: -2
> [544731.903094] BTRFS error (device loop1): open_ctree failed
>
Now this makes way more sense, as your previous comment on
_btrfs_forget_or_module_reload is completely correct.

_btrfs_forget_or_module_reload will really forget all devices, while
what we really want is to exclude just certain devices, without affecting
the others.

The proper fix is to introduce a _btrfs_forget helper that unregisters
only the involved devices, not all of them.
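
Something like this, perhaps (an untested sketch; it assumes a
btrfs-progs new enough that `btrfs device scan --forget <dev>` accepts
specific devices):

_btrfs_forget()
{
	# unregister only the devices we were given, leaving the
	# TEST_DEV members registered
	local dev
	for dev in "$@"; do
		$BTRFS_UTIL_PROG device scan --forget $dev
	done
}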

I'll take a look at the fix, but I'm afraid that systems which don't
support forget will have to skip those tests, reducing coverage on older
kernels/progs.

Thanks,
Qu

* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-25  3:13           ` Qu Wenruo
@ 2021-02-25  3:15             ` Eric Sandeen
  2021-02-25  3:18               ` Qu Wenruo
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2021-02-25  3:15 UTC (permalink / raw)
  To: Qu Wenruo, Anand Jain, linux-btrfs; +Cc: fstests

On 2/24/21 9:13 PM, Qu Wenruo wrote:

> Now this makes way more sense,

Sorry for the earlier mistake.

> as your previous comment on
> _btrfs_forget_or_module_reload is completely correct.
> 
> _btrfs_forget_or_module_reload will really forget all devices, while
> what we really want is to exclude just certain devices, without affecting
> the others.
> 
> The proper fix is to introduce a _btrfs_forget helper that unregisters
> only the involved devices, not all of them.
> 
> I'll take a look at the fix, but I'm afraid that systems which don't
> support forget will have to skip those tests, reducing coverage on older
> kernels/progs.

Can't you just rescan when the test is done?
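
Something like a plain "$BTRFS_UTIL_PROG device scan" (no --forget) in
the test's cleanup, I'd guess - the plain scan should work on older
kernels too.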

> Thanks,
> Qu
> 

* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-25  3:15             ` Eric Sandeen
@ 2021-02-25  3:18               ` Qu Wenruo
  2021-02-26  3:10                 ` Anand Jain
  0 siblings, 1 reply; 10+ messages in thread
From: Qu Wenruo @ 2021-02-25  3:18 UTC (permalink / raw)
  To: Eric Sandeen, Anand Jain, linux-btrfs; +Cc: fstests



On 2021/2/25 11:15 AM, Eric Sandeen wrote:
> On 2/24/21 9:13 PM, Qu Wenruo wrote:
>
>> Now this makes way more sense,
>
> Sorry for the earlier mistake.
>
>> as your previous comment on
>> _btrfs_forget_or_module_reload is completely correct.
>>
>> _btrfs_forget_or_module_reload will really forget all devices, while
>> what we really want is to exclude just certain devices, without affecting
>> the others.
>>
>> The proper fix is to introduce a _btrfs_forget helper that unregisters
>> only the involved devices, not all of them.
>>
>> I'll take a look at the fix, but I'm afraid that systems which don't
>> support forget will have to skip those tests, reducing coverage on older
>> kernels/progs.
>
> Can't you just rescan when the test is done?

Oh, that's way simpler.

Thanks for the tip, I just over-engineered it...

Thanks,
Qu
>
>> Thanks,
>> Qu
>>

* Re: xfstests seems broken on btrfs with multi-dev TEST_DEV
  2021-02-25  3:18               ` Qu Wenruo
@ 2021-02-26  3:10                 ` Anand Jain
  0 siblings, 0 replies; 10+ messages in thread
From: Anand Jain @ 2021-02-26  3:10 UTC (permalink / raw)
  To: Qu Wenruo, Eric Sandeen, linux-btrfs; +Cc: fstests



On 25/02/2021 11:18, Qu Wenruo wrote:
> 
> 
>> On 2021/2/25 11:15 AM, Eric Sandeen wrote:
>> On 2/24/21 9:13 PM, Qu Wenruo wrote:
>>
>>> Now this makes way more sense,
>>
>> Sorry for the earlier mistake.
>>
>>> as your previous comment on
>>> _btrfs_forget_or_module_reload is completely correct.
>>>
>>> _btrfs_forget_or_module_reload will really forget all devices, while
>>> what we really want is to exclude just certain devices, without affecting
>>> the others.
>>>
>>> The proper fix is to introduce a _btrfs_forget helper that unregisters
>>> only the involved devices, not all of them.
>>>
>>> I'll take a look at the fix, but I'm afraid that systems which don't
>>> support forget will have to skip those tests, reducing coverage on older
>>> kernels/progs.
>>
>> Can't you just rescan when the test is done?
> 
> Oh, that's way simpler.
> 
> Thanks for the tip, I just over-engineered it...
> 

Yep agreed.

On kernels with the forget ioctl,
   btrfs dev scan --forget $SCRATCH_DEV_POOL
shall suffice.

However, if we could define TEST_DEV_POOL="dev1 dev2 dev3", it would help
both old and new kernels, IMO:
_btrfs_forget_or_module_reload()
{
	::
	btrfs dev scan $TEST_DEV_POOL
}

Thanks, Anand


> Thanks,
> Qu
>>
>>> Thanks,
>>> Qu
>>>
