* Re: [PATCH blktests 3/3] Add NVMeOF dm-mpath tests
       [not found] ` <20180815203728.19521-4-bart.vanassche@wdc.com>
@ 2018-08-17 14:24   ` Mike Snitzer
  2018-08-17 15:46     ` Bart Van Assche
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Snitzer @ 2018-08-17 14:24 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Omar Sandoval, Johannes Thumshirn, linux-block

On Wed, Aug 15 2018 at 4:37pm -0400,
Bart Van Assche <bart.vanassche@wdc.com> wrote:

> Add a series of tests for the NVMeOF drivers on top of the dm-mpath
> driver. These tests are similar to the tests under tests/srp. Both
> tests use the dm-mpath driver for multipath and the loopback
> functionality of the rdma_rxe driver. The only difference is that the
> nvmeof-mp tests use the NVMeOF initiator and target drivers instead
> of the SRP initiator and target drivers.
>
> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>

I like the prospect of keeping NVMe honest by testing it with DM
multipath.

But will you grow dependent on ANA in the future? If so then you're out
of luck given the way the NVMe driver's ANA support was maliciously
added without care for making it work without the NVMe driver's native
multipath support. Seems very few people care about making NVMe
multipath _not_ so tightly coupled to native NVMe multipath. And sadly I
don't have time to work on untangling the "all-in" nature of NVMe ANA
and native NVMe multipath.

Put differently: until Jens stops taking hch's pull requests despite
concerns being raised against hch's approach (that hch completely
ignores because he rules with an iron fist from the top of a mountain in
the Alps) we're pretty much screwed.

Mike

^ permalink raw reply	[flat|nested] 13+ messages in thread
* Re: [PATCH blktests 3/3] Add NVMeOF dm-mpath tests
  2018-08-17 14:24 ` [PATCH blktests 3/3] Add NVMeOF dm-mpath tests Mike Snitzer
@ 2018-08-17 15:46   ` Bart Van Assche
  2018-08-17 16:04     ` Mike Snitzer
  0 siblings, 1 reply; 13+ messages in thread
From: Bart Van Assche @ 2018-08-17 15:46 UTC (permalink / raw)
  To: snitzer; +Cc: osandov, jthumshirn, linux-block

On Fri, 2018-08-17 at 10:24 -0400, Mike Snitzer wrote:
> Put differently: until Jens stops taking hch's pull requests despite
> concerns being raised against hch's approach (that hch completely
> ignores because he rules with an iron fist from the top of a mountain
> in the Alps) we're pretty much screwed.

If everything works out as expected I will attend that mountain summit a
few weeks from now. Unless I missed something it's still possible to
join the summit as an attendee. I think you would be welcome.

Bart.
* Re: [PATCH blktests 3/3] Add NVMeOF dm-mpath tests
  2018-08-17 15:46 ` Bart Van Assche
@ 2018-08-17 16:04   ` Mike Snitzer
  0 siblings, 0 replies; 13+ messages in thread
From: Mike Snitzer @ 2018-08-17 16:04 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: osandov, jthumshirn, linux-block

On Fri, Aug 17 2018 at 11:46am -0400,
Bart Van Assche <Bart.VanAssche@wdc.com> wrote:

> On Fri, 2018-08-17 at 10:24 -0400, Mike Snitzer wrote:
> > Put differently: until Jens stops taking hch's pull requests despite
> > concerns being raised against hch's approach (that hch completely
> > ignores because he rules with an iron fist from the top of a mountain
> > in the Alps) we're pretty much screwed.
>
> If everything works out as expected I will attend that mountain summit
> a few weeks from now. Unless I missed something it's still possible to
> join the summit as an attendee. I think you would be welcome.

I cannot make it. But even if I did attend, I've seen quite a few
instances where hch says one thing in person, later changes his mind,
and then silently implements something that only serves his needs. Not
the most productive.
[parent not found: <20180820073059.quvg3bh4ngv5ka4x@linux-x5ow.site>]
[parent not found: <ee851ae7b88a93b9b5c9f1b0c6e206944c0b3c64.camel@wdc.com>]
[parent not found: <20180821064619.7tyhlst74qkrw4fi@linux-x5ow.site>]
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
       [not found] ` <20180821064619.7tyhlst74qkrw4fi@linux-x5ow.site>
@ 2018-08-23  1:53   ` Bart Van Assche
  2018-08-24  0:21     ` Omar Sandoval
  0 siblings, 1 reply; 13+ messages in thread
From: Bart Van Assche @ 2018-08-23  1:53 UTC (permalink / raw)
  To: jthumshirn; +Cc: osandov, msnitzer, linux-block

On Tue, 2018-08-21 at 08:46 +0200, Johannes Thumshirn wrote:
> On Mon, Aug 20, 2018 at 03:46:45PM +0000, Bart Van Assche wrote:
> > Moving these tests into the nvme directory is possible but will make
> > it harder to run the NVMeOF multipath tests separately. Are you fine
> > with this?
>
> Both ways have their ups and downsides, I agree.
>
> Having two distinct groups requires running './check nvme nvmeof-mp'
> to get full coverage with nvme.
>
> Having it all in one group would require running './check nvme 18 19
> 20 21 22 23 24 ...' to get only the dm-mpath ones.
>
> Honestly I hate both, but yours (the two distinct groups) is probably
> easier to handle in the end, I have to admit.

Omar, do you have a preference for one of the two aforementioned
approaches?

Thanks,

Bart.
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-08-23  1:53 ` [PATCH blktests 0/3] Add NVMeOF multipath tests Bart Van Assche
@ 2018-08-24  0:21   ` Omar Sandoval
  2018-09-13  1:06     ` Omar Sandoval
  2018-09-18 21:20     ` Bart Van Assche
  0 siblings, 2 replies; 13+ messages in thread
From: Omar Sandoval @ 2018-08-24  0:21 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: jthumshirn, osandov, msnitzer, linux-block

On Thu, Aug 23, 2018 at 01:53:33AM +0000, Bart Van Assche wrote:
> On Tue, 2018-08-21 at 08:46 +0200, Johannes Thumshirn wrote:
> > On Mon, Aug 20, 2018 at 03:46:45PM +0000, Bart Van Assche wrote:
> > > Moving these tests into the nvme directory is possible but will
> > > make it harder to run the NVMeOF multipath tests separately. Are
> > > you fine with this?
> >
> > Both ways have their ups and downsides, I agree.
> >
> > Having two distinct groups requires running './check nvme nvmeof-mp'
> > to get full coverage with nvme.
> >
> > Having it all in one group would require running './check nvme 18 19
> > 20 21 22 23 24 ...' to get only the dm-mpath ones.
> >
> > Honestly I hate both, but yours (the two distinct groups) is probably
> > easier to handle in the end, I have to admit.
>
> Omar, do you have a preference for one of the two aforementioned
> approaches?
>
> Thanks,
>
> Bart.

Let's keep it in a separate category, since lots of people running nvme
tests probably aren't interested in testing multipath.

A bunch of the tests failed with

    modprobe: FATAL: Module nvme is in use.

Maybe related to my test VM having an nvme device?
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-08-24  0:21 ` Omar Sandoval
@ 2018-09-13  1:06   ` Omar Sandoval
  2018-09-16  4:28     ` Bart Van Assche
  0 siblings, 1 reply; 13+ messages in thread
From: Omar Sandoval @ 2018-09-13  1:06 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: jthumshirn, osandov, msnitzer, linux-block

On Thu, Aug 23, 2018 at 05:21:33PM -0700, Omar Sandoval wrote:
> Let's keep it in a separate category, since lots of people running nvme
> tests probably aren't interested in testing multipath.
>
> A bunch of the tests failed with
>
>     modprobe: FATAL: Module nvme is in use.
>
> Maybe related to my test VM having an nvme device?

Ping, Bart, can you look into this? It'd be nice to get this in.
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-09-13  1:06 ` Omar Sandoval
@ 2018-09-16  4:28   ` Bart Van Assche
  0 siblings, 0 replies; 13+ messages in thread
From: Bart Van Assche @ 2018-09-16  4:28 UTC (permalink / raw)
  To: Omar Sandoval; +Cc: jthumshirn, osandov, msnitzer, linux-block

On 09/12/18 18:06, Omar Sandoval wrote:
> Ping, Bart, can you look into this? It'd be nice to get this in.

Hello Omar,

I will repost this patch series as soon as I have the time. I have been
more busy than usual during the past four weeks.

Best regards,

Bart.
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-08-24  0:21 ` Omar Sandoval
  2018-09-13  1:06   ` Omar Sandoval
@ 2018-09-18 21:20   ` Bart Van Assche
  2018-09-18 23:24     ` Omar Sandoval
  1 sibling, 1 reply; 13+ messages in thread
From: Bart Van Assche @ 2018-09-18 21:20 UTC (permalink / raw)
  To: Omar Sandoval; +Cc: jthumshirn, osandov, msnitzer, linux-block

On 8/23/18 5:21 PM, Omar Sandoval wrote:
> Let's keep it in a separate category, since lots of people running nvme
> tests probably aren't interested in testing multipath.
>
> A bunch of the tests failed with
>
>     modprobe: FATAL: Module nvme is in use.
>
> Maybe related to my test VM having an nvme device?

Hello Omar,

Can you have a look at the updated master branch of
https://github.com/bvanassche/blktests? That code should no longer fail
if unloading the nvme kernel module fails. Please note that you will
need kernel v4.18 to test these scripts - a KASAN complaint appears if I
run these tests against kernel v4.19-rc4.

Thanks,

Bart.
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-09-18 21:20 ` Bart Van Assche
@ 2018-09-18 23:24   ` Omar Sandoval
  2018-09-19  0:02     ` Bart Van Assche
  0 siblings, 1 reply; 13+ messages in thread
From: Omar Sandoval @ 2018-09-18 23:24 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: jthumshirn, osandov, msnitzer, linux-block

On Tue, Sep 18, 2018 at 02:20:59PM -0700, Bart Van Assche wrote:
> Can you have a look at the updated master branch of
> https://github.com/bvanassche/blktests? That code should no longer fail
> if unloading the nvme kernel module fails. Please note that you will
> need kernel v4.18 to test these scripts - a KASAN complaint appears if
> I run these tests against kernel v4.19-rc4.

Thanks, these pass now. Is it expected that my nvme device gets a
multipath device configured after running these tests?

$ lsblk
NAME      MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
vda       254:0    0  16G  0 disk
└─vda1    254:1    0  16G  0 part  /
vdb       254:16   0   8G  0 disk
vdc       254:32   0   8G  0 disk
vdd       254:48   0   8G  0 disk
nvme0n1   259:0    0   8G  0 disk
└─mpatha  253:0    0   8G  0 mpath

Also, can you please fix:

    _have_kernel_option NVME_MULTIPATH && exit 1

to not exit on failure? It should use SKIP_REASON and return 1. You
might need to add something like _dont_have_kernel_option to properly
handle the case where the config is not found.

Side note which isn't a blocker for merging is that there's a lot of
duplicated code between these helpers and the srp helpers. How hard
would it be to refactor that?
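The helper Omar asks for could look something like the sketch below. `_have_kernel_option` and the `SKIP_REASON` convention exist in blktests; the simplified config-file lookup (overridable through a hypothetical `TEST_KERNEL_CONFIG` variable so the sketch is self-contained) is an assumption, not blktests' actual implementation.

```shell
# Hedged sketch of a negated kernel-config helper.  The real blktests
# helpers read the running kernel's configuration; here the config file
# location is simplified and overridable for illustration.

_config_file() {
	echo "${TEST_KERNEL_CONFIG:-/boot/config-$(uname -r)}"
}

_dont_have_kernel_option() {
	if [ ! -e "$(_config_file)" ]; then
		# Config not found at all: skip rather than guess.
		SKIP_REASON="kernel config not found"
		return 1
	fi
	if grep -q "^CONFIG_$1=[ym]\$" "$(_config_file)"; then
		SKIP_REASON="kernel option CONFIG_$1 is enabled"
		return 1
	fi
	return 0
}
```

A test's requires() hook could then call `_dont_have_kernel_option NVME_MULTIPATH`, and the harness would report a skip with a reason instead of a failure.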
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-09-18 23:24 ` Omar Sandoval
@ 2018-09-19  0:02   ` Bart Van Assche
  2018-09-19  0:18     ` Omar Sandoval
  0 siblings, 1 reply; 13+ messages in thread
From: Bart Van Assche @ 2018-09-19  0:02 UTC (permalink / raw)
  To: Omar Sandoval; +Cc: jthumshirn, osandov, msnitzer, linux-block

On 9/18/18 4:24 PM, Omar Sandoval wrote:
> Thanks, these pass now. Is it expected that my nvme device gets a
> multipath device configured after running these tests?
>
> $ lsblk
> NAME      MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
> vda       254:0    0  16G  0 disk
> └─vda1    254:1    0  16G  0 part  /
> vdb       254:16   0   8G  0 disk
> vdc       254:32   0   8G  0 disk
> vdd       254:48   0   8G  0 disk
> nvme0n1   259:0    0   8G  0 disk
> └─mpatha  253:0    0   8G  0 mpath

No, all multipath devices that were created during a test should be
removed before that test finishes. I will look into this.

> Also, can you please fix:
>
>     _have_kernel_option NVME_MULTIPATH && exit 1
>
> to not exit on failure? It should use SKIP_REASON and return 1. You
> might need to add something like _dont_have_kernel_option to properly
> handle the case where the config is not found.

OK, I will change this.

> Side note which isn't a blocker for merging is that there's a lot of
> duplicated code between these helpers and the srp helpers. How hard
> would it be to refactor that?

Are you perhaps referring to the code that is shared between the
tests/srp/rc and tests/nvmeof-mp/rc shell scripts? The hardest part is
probably to choose a location where to store these functions. Should I
create a file with common code under common/, under tests/srp/, under
tests/nvmeof-mp/, or perhaps somewhere else?

Thanks,

Bart.
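A teardown step that hunts down a leftover dm-mpath map like the `mpatha` in the lsblk output above could be sketched as follows. This is a hedged illustration, not the fix Bart ended up making; the `dmsetup ls` output parsing and the helper name are assumptions.

```shell
# Hedged sketch: remove any dm-mpath map a test left behind.
# "dmsetup ls --target multipath" prints one map name per line, or
# "No devices found" when there is nothing with a multipath table.
_remove_leftover_mpath_devs() {
	local dm
	for dm in $(dmsetup ls --target multipath 2>/dev/null |
			awk '$1 != "No" { print $1 }'); do
		dmsetup remove "$dm"
	done
}
```

Calling something like this from the test's cleanup path would keep a stray map from surviving into the next test run.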
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-09-19  0:02 ` Bart Van Assche
@ 2018-09-19  0:18   ` Omar Sandoval
  2018-09-27 23:26     ` Bart Van Assche
  0 siblings, 1 reply; 13+ messages in thread
From: Omar Sandoval @ 2018-09-19  0:18 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: jthumshirn, osandov, msnitzer, linux-block

On Tue, Sep 18, 2018 at 05:02:47PM -0700, Bart Van Assche wrote:
> On 9/18/18 4:24 PM, Omar Sandoval wrote:
> > Side note which isn't a blocker for merging is that there's a lot of
> > duplicated code between these helpers and the srp helpers. How hard
> > would it be to refactor that?
>
> Are you perhaps referring to the code that is shared between the
> tests/srp/rc and tests/nvmeof-mp/rc shell scripts?

Yes, those.

> The hardest part is probably to choose a location where to store these
> functions. Should I create a file with common code under common/,
> under tests/srp/, under tests/nvmeof-mp/, or perhaps somewhere else?

Just put it under common. Thanks!
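The refactor Omar suggests could be laid out along these lines: the functions shared by the two rc scripts move into one file under common/ that both groups source. The file name and helper below are illustrative assumptions, not the final layout.

```shell
# Hedged sketch of the common/ layout, built in a scratch directory.
top=$(mktemp -d)
mkdir -p "$top/common" "$top/tests/srp" "$top/tests/nvmeof-mp"

# Hypothetical shared helper file under common/:
cat > "$top/common/multipath-over-rdma" <<'EOF'
# Helpers shared by tests/srp/rc and tests/nvmeof-mp/rc.
_have_multipathd() {
	command -v multipathd >/dev/null
}
EOF

# Each group's rc script sources the shared file (blktests runs tests
# from the repository root) and adds transport-specific helpers on top:
cat > "$top/tests/srp/rc" <<'EOF'
. common/multipath-over-rdma
# srp-specific helpers follow ...
EOF

cat > "$top/tests/nvmeof-mp/rc" <<'EOF'
. common/multipath-over-rdma
# nvmeof-mp-specific helpers follow ...
EOF
```

The design point is that the shared file stays transport-neutral, so a fix to a common helper lands in both test groups at once instead of drifting between two copies.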
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-09-19  0:18 ` Omar Sandoval
@ 2018-09-27 23:26   ` Bart Van Assche
  2018-10-05 18:50     ` Omar Sandoval
  0 siblings, 1 reply; 13+ messages in thread
From: Bart Van Assche @ 2018-09-27 23:26 UTC (permalink / raw)
  To: Omar Sandoval; +Cc: jthumshirn, osandov, msnitzer, linux-block

On Tue, 2018-09-18 at 17:18 -0700, Omar Sandoval wrote:
> On Tue, Sep 18, 2018 at 05:02:47PM -0700, Bart Van Assche wrote:
> > The hardest part is probably to choose a location where to store
> > these functions. Should I create a file with common code under
> > common/, under tests/srp/, under tests/nvmeof-mp/, or perhaps
> > somewhere else?
>
> Just put it under common.

Hi Omar,

All feedback mentioned above has been addressed. The following pull
request has been updated: https://github.com/osandov/blktests/pull/33.
Please let me know if you want me to post these patches on the
linux-block mailing list.

Note: neither upstream kernel v4.18 nor v4.19-rc4 is stable enough to
pass all nvmeof-mp tests if kernel debugging options like KASAN are
enabled. Additionally, the NVMe device_add_disk() race condition often
causes multipathd to refuse to consider /dev/nvme... devices. The output
on my test setup is as follows (all tests pass):

# ./check -q nvmeof-mp
nvmeof-mp/001 (Log in and log out) [passed]
    runtime 1.528s  ...  1.909s
nvmeof-mp/002 (File I/O on top of multipath concurrently with logout and login (mq)) [passed]
    runtime 38.968s  ...  38.571s
nvmeof-mp/004 (File I/O on top of multipath concurrently with logout and login (sq-on-mq)) [passed]
    runtime 38.632s  ...  37.529s
nvmeof-mp/005 (Direct I/O with large transfer sizes and bs=4M) [passed]
    runtime 13.382s  ...  13.684s
nvmeof-mp/006 (Direct I/O with large transfer sizes and bs=8M) [passed]
    runtime 13.511s  ...  13.480s
nvmeof-mp/009 (Buffered I/O with large transfer sizes and bs=4M) [passed]
    runtime 13.665s  ...  13.763s
nvmeof-mp/010 (Buffered I/O with large transfer sizes and bs=8M) [passed]
    runtime 13.442s  ...  13.900s
nvmeof-mp/011 (Block I/O on top of multipath concurrently with logout and login) [passed]
    runtime 37.988s  ...  37.945s
nvmeof-mp/012 (dm-mpath on top of multiple I/O schedulers) [passed]
    runtime 21.659s  ...  21.733s

Thanks,

Bart.
* Re: [PATCH blktests 0/3] Add NVMeOF multipath tests
  2018-09-27 23:26 ` Bart Van Assche
@ 2018-10-05 18:50   ` Omar Sandoval
  0 siblings, 0 replies; 13+ messages in thread
From: Omar Sandoval @ 2018-10-05 18:50 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: jthumshirn, osandov, msnitzer, linux-block

On Thu, Sep 27, 2018 at 04:26:42PM -0700, Bart Van Assche wrote:
> All feedback mentioned above has been addressed. The following pull
> request has been updated: https://github.com/osandov/blktests/pull/33.
> Please let me know if you want me to post these patches on the
> linux-block mailing list.
>
> Note: neither upstream kernel v4.18 nor v4.19-rc4 is stable enough to
> pass all nvmeof-mp tests if kernel debugging options like KASAN are
> enabled. Additionally, the NVMe device_add_disk() race condition often
> causes multipathd to refuse to consider /dev/nvme... devices. The
> output on my test setup is as follows (all tests pass):
>
> # ./check -q nvmeof-mp
> nvmeof-mp/001 (Log in and log out) [passed]
>     runtime 1.528s  ...  1.909s
> [...]
> nvmeof-mp/012 (dm-mpath on top of multiple I/O schedulers) [passed]
>     runtime 21.659s  ...  21.733s

Thanks, Bart, merged.
end of thread, other threads: [~2018-10-06  1:50 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20180815203728.19521-1-bart.vanassche@wdc.com>
     [not found] ` <20180815203728.19521-4-bart.vanassche@wdc.com>
2018-08-17 14:24   ` [PATCH blktests 3/3] Add NVMeOF dm-mpath tests Mike Snitzer
2018-08-17 15:46     ` Bart Van Assche
2018-08-17 16:04       ` Mike Snitzer
     [not found] ` <20180820073059.quvg3bh4ngv5ka4x@linux-x5ow.site>
     [not found]   ` <ee851ae7b88a93b9b5c9f1b0c6e206944c0b3c64.camel@wdc.com>
     [not found]     ` <20180821064619.7tyhlst74qkrw4fi@linux-x5ow.site>
2018-08-23  1:53       ` [PATCH blktests 0/3] Add NVMeOF multipath tests Bart Van Assche
2018-08-24  0:21         ` Omar Sandoval
2018-09-13  1:06           ` Omar Sandoval
2018-09-16  4:28             ` Bart Van Assche
2018-09-18 21:20           ` Bart Van Assche
2018-09-18 23:24             ` Omar Sandoval
2018-09-19  0:02               ` Bart Van Assche
2018-09-19  0:18                 ` Omar Sandoval
2018-09-27 23:26                   ` Bart Van Assche
2018-10-05 18:50                     ` Omar Sandoval