* Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
2017-01-10 22:40 ` [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology Chaitanya Kulkarni
@ 2017-01-11 7:42 ` Hannes Reinecke
2017-01-11 9:19 ` Johannes Thumshirn
2017-03-10 19:37 ` Bart Van Assche
2 siblings, 0 replies; 7+ messages in thread
From: Hannes Reinecke @ 2017-01-11 7:42 UTC (permalink / raw)
To: Chaitanya Kulkarni, lsf-pc
Cc: linux-fsdevel, linux-block, linux-nvme, linux-scsi, linux-ide
On 01/10/2017 11:40 PM, Chaitanya Kulkarni wrote:
> Resending it as plain text.
>
> From: Chaitanya Kulkarni
> Sent: Tuesday, January 10, 2017 2:37 PM
> To: lsf-pc@lists.linux-foundation.org
> Cc: linux-fsdevel@vger.kernel.org; linux-block@vger.kernel.org; linux-nvme@lists.infradead.org; linux-scsi@vger.kernel.org; linux-ide@vger.kernel.org
> Subject: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
>
>
> Hi Folks,
>
> I would like to propose a general discussion on Storage stack and device driver testing.
>
> Purpose:-
> -------------
> The main objective of this discussion is to address the need for
> a Unified Test Automation Framework which can be used by different subsystems
> in the kernel in order to improve the overall development and stability
> of the storage stack.
>
> For Example:-
> From my previous experience: I worked on NVMe driver testing last year, and we
> developed a simple unit test framework
> (https://github.com/linux-nvme/nvme-cli/tree/master/tests).
> In the current implementation, the upstream NVMe driver supports the following subsystems:-
> 1. PCI Host.
> 2. RDMA Target.
> 3. Fibre Channel Target (in progress).
> Today, due to the lack of a centralized automated test framework, NVMe driver testing is
> scattered and performed using a combination of various utilities like nvme-cli/tests,
> nvmet-cli, shell scripts (git://git.infradead.org/nvme-fabrics.git nvmf-selftests), etc.
>
> In order to improve overall driver stability with various subsystems, it will be beneficial
> to have a Unified Test Automation Framework (UTAF) which will centralize overall
> testing.
>
> This topic will allow developers from various subsystems to engage in a discussion about
> how to collaborate efficiently instead of having discussions on lengthy email threads.
>
> Participants:-
> ------------------
> I'd like to invite developers from different subsystems to discuss an approach towards
> a unified testing methodology for the storage stack and device drivers belonging to
> different subsystems.
>
> Topics for Discussion:-
> ------------------------------
> As a part of the discussion, the following are some of the key points we can focus on:-
> 1. What are the common components of the kernel used by the various subsystems?
> 2. What are the potential target drivers which can benefit from this approach?
> (e.g. NVMe, NVMe Over Fabric, Open Channel Solid State Drives etc.)
> 3. What are the desired features that can be implemented in this Framework?
> (code coverage, unit tests, stress testing, regression, generating Coccinelle reports, etc.)
> 4. Desirable Report generation mechanism?
> 5. Basic performance validation?
> 6. Whether QEMU can be used to emulate some of the H/W functionality to create a test
> platform? (Optional, subsystem specific)
>
> Some background about myself: I'm Chaitanya Kulkarni. I worked as a team lead
> responsible for delivering a scalable, multi-platform Automated Test
> Framework for device driver testing at HGST. It has been used successfully for more than
> a year on Linux/Windows for unit testing/regression/performance validation of the NVMe
> Linux and Windows drivers. I've also recently started contributing to the
>
> NVMe Host and NVMe over Fabrics Target driver.
>
Oh, yes, please.
That's a discussion I'd like to have, too.
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare@suse.de +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
* Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
2017-01-10 22:40 ` [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology Chaitanya Kulkarni
2017-01-11 7:42 ` Hannes Reinecke
@ 2017-01-11 9:19 ` Johannes Thumshirn
2017-01-11 9:24 ` Christoph Hellwig
2017-03-10 19:37 ` Bart Van Assche
2 siblings, 1 reply; 7+ messages in thread
From: Johannes Thumshirn @ 2017-01-11 9:19 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: lsf-pc, linux-fsdevel, linux-block, linux-nvme, linux-scsi, linux-ide
On Tue, Jan 10, 2017 at 10:40:53PM +0000, Chaitanya Kulkarni wrote:
> Resending it as plain text.
>
> From: Chaitanya Kulkarni
> Sent: Tuesday, January 10, 2017 2:37 PM
> To: lsf-pc@lists.linux-foundation.org
> Cc: linux-fsdevel@vger.kernel.org; linux-block@vger.kernel.org; linux-nvme@lists.infradead.org; linux-scsi@vger.kernel.org; linux-ide@vger.kernel.org
> Subject: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
>
>
> Hi Folks,
>
> I would like to propose a general discussion on Storage stack and device driver testing.
>
> Purpose:-
> -------------
> The main objective of this discussion is to address the need for
> a Unified Test Automation Framework which can be used by different subsystems
> in the kernel in order to improve the overall development and stability
> of the storage stack.
>
> For Example:-
> From my previous experience: I worked on NVMe driver testing last year, and we
> developed a simple unit test framework
> (https://github.com/linux-nvme/nvme-cli/tree/master/tests).
> In the current implementation, the upstream NVMe driver supports the following subsystems:-
> 1. PCI Host.
> 2. RDMA Target.
> 3. Fibre Channel Target (in progress).
> Today, due to the lack of a centralized automated test framework, NVMe driver testing is
> scattered and performed using a combination of various utilities like nvme-cli/tests,
> nvmet-cli, shell scripts (git://git.infradead.org/nvme-fabrics.git nvmf-selftests), etc.
>
> In order to improve overall driver stability with various subsystems, it will be beneficial
> to have a Unified Test Automation Framework (UTAF) which will centralize overall
> testing.
>
> This topic will allow developers from various subsystems to engage in a discussion about
> how to collaborate efficiently instead of having discussions on lengthy email threads.
>
> Participants:-
> ------------------
> I'd like to invite developers from different subsystems to discuss an approach towards
> a unified testing methodology for the storage stack and device drivers belonging to
> different subsystems.
>
> Topics for Discussion:-
> ------------------------------
> As a part of the discussion, the following are some of the key points we can focus on:-
> 1. What are the common components of the kernel used by the various subsystems?
> 2. What are the potential target drivers which can benefit from this approach?
> (e.g. NVMe, NVMe Over Fabric, Open Channel Solid State Drives etc.)
> 3. What are the desired features that can be implemented in this Framework?
> (code coverage, unit tests, stress testing, regression, generating Coccinelle reports, etc.)
> 4. Desirable Report generation mechanism?
> 5. Basic performance validation?
> 6. Whether QEMU can be used to emulate some of the H/W functionality to create a test
> platform? (Optional, subsystem specific)
Well, something I was thinking about but didn't find enough time to actually
implement is making an xfstests-like test suite written using sg3_utils for
SCSI. This idea could very well be extended to NVMe, AHCI, blk, etc...
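To make that a bit more concrete, here is a rough, purely illustrative sketch of what a
single check in such a suite could look like (the device path, the unittest structure and
the strings matched are my assumptions, not existing code), driving sg3_utils from Python
much like nvme-cli's tests drive nvme-cli:

import subprocess
import unittest

DEVICE = "/dev/sg0"   # hypothetical device under test; adjust to the test machine

def run_cmd(cmd):
    # Run an sg3_utils command, return (exit status, stdout) for the assertions below.
    proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True)
    return proc.returncode, proc.stdout

class ScsiSanity(unittest.TestCase):
    def test_unit_ready(self):
        # TEST UNIT READY should succeed on a healthy, ready device.
        rc, _ = run_cmd(["sg_turs", DEVICE])
        self.assertEqual(rc, 0)

    def test_standard_inquiry(self):
        # A standard INQUIRY should succeed and report vendor/product identification.
        rc, out = run_cmd(["sg_inq", DEVICE])
        self.assertEqual(rc, 0)
        self.assertIn("identification", out)

    def test_read_capacity(self):
        # READ CAPACITY should succeed and produce output for further parsing.
        rc, out = run_cmd(["sg_readcap", DEVICE])
        self.assertEqual(rc, 0)
        self.assertTrue(out.strip())

if __name__ == "__main__":
    unittest.main()

An xfstests-like harness would then mostly be about enumerating such checks, picking the
device(s) to run them against and collecting the results.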
Byte,
Johannes
--
Johannes Thumshirn Storage
jthumshirn@suse.de +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
* Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
2017-01-11 9:19 ` Johannes Thumshirn
@ 2017-01-11 9:24 ` Christoph Hellwig
2017-01-11 9:40 ` Hannes Reinecke
0 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2017-01-11 9:24 UTC (permalink / raw)
To: Johannes Thumshirn
Cc: Chaitanya Kulkarni, linux-scsi, linux-nvme, linux-block,
linux-ide, linux-fsdevel, lsf-pc
On Wed, Jan 11, 2017 at 10:19:45AM +0100, Johannes Thumshirn wrote:
> Well, something I was thinking about but didn't find enough time to actually
> implement is making a xfstestes like test suite written using sg3_utils for
> SCSI.
Ronnie's libiscsi test suite has been able to use SG_IO for a few years now:
https://github.com/sahlberg/libiscsi/tree/master/test-tool
and has been very useful for finding bugs in various protocol
implementations.
> This idea could very well be extended to NVMe
Chaitanya's suite is doing something similar for NVMe, although the
coverage is still much more limited so far.
* Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
2017-01-11 9:24 ` Christoph Hellwig
@ 2017-01-11 9:40 ` Hannes Reinecke
0 siblings, 0 replies; 7+ messages in thread
From: Hannes Reinecke @ 2017-01-11 9:40 UTC (permalink / raw)
To: Christoph Hellwig, Johannes Thumshirn
Cc: Chaitanya Kulkarni, linux-scsi, linux-nvme, linux-block,
linux-ide, linux-fsdevel, lsf-pc
On 01/11/2017 10:24 AM, Christoph Hellwig wrote:
> On Wed, Jan 11, 2017 at 10:19:45AM +0100, Johannes Thumshirn wrote:
>> Well, something I was thinking about but didn't find enough time to actually
>> implement is making an xfstests-like test suite written using sg3_utils for
>> SCSI.
>
> Ronnie's libiscsi test suite has been able to use SG_IO for a few years now:
>
> https://github.com/sahlberg/libiscsi/tree/master/test-tool
>
> and has been very useful for finding bugs in various protocol
> implementations.
>
>> This idea could very well be extended to NVMe
>
> Chaitanya's suite is doing something similar for NVMe, although the
> coverage is still much more limited so far.
>
One of the discussion points here indeed would be whether we want to go in
the direction of protocol-specific test suites (of which we already
have several) or whether it makes sense to move to functional testing.
And whether we can have a common interface / documentation on how these
things should run.
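Purely as a sketch of one possible convention (the class and method names below are made
up for illustration; nothing like this exists today), the protocol-specific suites could
expose a small common interface that a generic runner discovers and drives:

from abc import ABC, abstractmethod

class StorageTestSuite(ABC):
    """Hypothetical common interface every per-protocol suite would implement."""

    name = "unnamed"

    @abstractmethod
    def setup(self):
        """Prepare the device under test (load modules, create targets, ...)."""

    @abstractmethod
    def run(self):
        """Run the protocol-specific checks; return True on success."""

    @abstractmethod
    def teardown(self):
        """Undo whatever setup() did."""

def run_all(suites):
    # Generic runner: same lifecycle and reporting for every protocol.
    results = {}
    for suite in suites:
        suite.setup()
        try:
            results[suite.name] = suite.run()
        finally:
            suite.teardown()
    return results

Whether the common layer should be code at that level or rather a documented set of
conventions (device discovery, result format, how destructive a test may be) is exactly
the kind of thing worth settling face to face.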
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare@suse.de +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
* Re: [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.
2017-01-10 22:40 ` [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology Chaitanya Kulkarni
2017-01-11 7:42 ` Hannes Reinecke
2017-01-11 9:19 ` Johannes Thumshirn
@ 2017-03-10 19:37 ` Bart Van Assche
2 siblings, 0 replies; 7+ messages in thread
From: Bart Van Assche @ 2017-03-10 19:37 UTC (permalink / raw)
To: Chaitanya Kulkarni, lsf-pc
Cc: linux-scsi, linux-block, linux-nvme, linux-fsdevel, linux-ide
On Tue, 2017-01-10 at 22:40 +0000, Chaitanya Kulkarni wrote:
> Participants:-
> ------------------
> I'd like to invite developers from different subsystems to discuss an approach towards
> a unified testing methodology for the storage stack and device drivers belonging to
> different subsystems.
>
> Topics for Discussion:-
> ------------------------------
> As a part of the discussion, the following are some of the key points we can focus on:-
> 1. What are the common components of the kernel used by the various subsystems?
> 2. What are the potential target drivers which can benefit from this approach?
> (e.g. NVMe, NVMe Over Fabric, Open Channel Solid State Drives etc.)
> 3. What are the desired features that can be implemented in this Framework?
> (code coverage, unit tests, stress testing, regression, generating Coccinelle reports, etc.)
> 4. Desirable Report generation mechanism?
> 5. Basic performance validation?
> 6. Whether QEMU can be used to emulate some of the H/W functionality to create a test
> platform? (Optional, subsystem specific)
Regarding existing test software: the SRP test software is a thorough test of
the Linux block layer, SCSI core, dm-mpath driver, dm core, SRP initiator and
target drivers, and also of the asynchronous I/O subsystem. This test suite
includes experimental support for the NVMeOF drivers. It supports the
rdma_rxe driver, which means that an Ethernet adapter is sufficient to run
these tests.
Note: the focus of this test suite is the regular I/O path and device removal.
It replaces neither the libiscsi tests nor xfstests.
See also https://github.com/bvanassche/srp-test.
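To illustrate the "an Ethernet adapter is sufficient" point, a rough sketch of the kind
of setup step a test run performs before the suite itself starts; the commands below are
only illustrative (see the repository README for the exact steps), and the runner script
name at the end is an assumption:

import subprocess

ETH_PORT = "eth0"   # any plain Ethernet port on the test machine

def sh(cmd):
    # Small helper: run a shell command and raise if it fails.
    subprocess.check_call(cmd, shell=True)

# Bring up a soft-RoCE (rdma_rxe) device on top of the Ethernet port so the
# RDMA-based SRP (and, experimentally, NVMeOF) tests can run without an HCA.
sh("modprobe rdma_rxe")
sh("rxe_cfg start")
sh("rxe_cfg add " + ETH_PORT)

# ... then start the suite's top-level runner, e.g. something like:
# sh("./run_tests")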
Bart.