linux-block.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [LSF/MM TOPIC] improving storage testing
@ 2019-02-13 18:07 Theodore Y. Ts'o
  2019-02-14  7:37 ` Chaitanya Kulkarni
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Theodore Y. Ts'o @ 2019-02-13 18:07 UTC (permalink / raw)
  To: lsf-pc; +Cc: linux-block, linux-fsdevel

This should probably be folded into other testing proposals but I'd
like to discuss ways that we can improve storage and file systems
testing.  Specifically,

1) Adding some kind of "smoke test" group.  The "quick" group in
xfstests is no longer terribly quick.  Using gce-xfstests, the time to
run the quick group on f2fs, ext4, btrfs, and xfs is 17 minutes, 18
minutes, 25 minutes, and 31 minutes, respectively.  It probably won't
be too contentious to come up with some kind of criteria --- stress
tests plus maybe a few tests added to maximize code coverage, with the
goal of the smoke test to run in 5-10 minutes for all major file
systems.

Perhaps more controversial might be some way of ordering the tests so
that the ones which are most likely to fail if a bug has been
introduced are run first, so that we can have a "fail fast" sort of
system.

2) Documenting what are known failures should be for various tests on
different file systems and kernel versions.  I think we all have our
own way of excluding tests which are known to fail.  One extreme case
is where the test case was added to xfstests (generic/484), but the
patch to fix it got hung up because it was somewhat controversial, so
it was failing on all file systems.

Other cases might be when fixing a particular test failure is too
complex to backport to stable (maybe because it would drag in all
sorts of other changes in other subsystems), so that test is Just
Going To Fail for a particular stable kernel series.

It probably doesn't make sense to do this in xfstests, which is why we
all have our own individual test runners that are layered on top of
xfstests.  But if we want to automate running xfstests for stable
kernel series, some way of annotating fixes for different kernel
versions would be useful, perhaps some kind of centralized clearing
house of this information would be useful.

3) Making blktests more stable/useful.  For someone who is not a block
layer specialist, it can be hard to determine whether the problem is a
kernel bug, a kernel misconfiguration, some userspace component (such
as nvme-cli) being out of date, or just a test bug.  (For example, all
srp/* tests are currently failing in blktests upstream; I had to pull
some not-yet-merged commits from Bart's tree in order to fix bugs that
caused all of srp to fail.)

Some of the things that we could do include documenting what kernel
CONFIG options are needed to successfully run blktests, perhaps using
a defconfig list.

Also, there are expectations about minimum versions of bash that can
be supported; but there aren't necessarily for other components such
as nvme-cli, and I suspect that it is due to the use of a overly new
version of nvme-cli from its git tree.  Is that supposed to work, or
should I constrain myself to whatever version is being shipped in
Fedora or some other reference distribution?  More generally, what is
the overall expectations that should be expected?  xfstests has some
extremely expansive set of sed scripts to normalize shell script
output to make xfstests extremely portable; will patches along similar
lines something that we should be doing for blktests?

Cheers,

					- Ted

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-13 18:07 [LSF/MM TOPIC] improving storage testing Theodore Y. Ts'o
@ 2019-02-14  7:37 ` Chaitanya Kulkarni
  2019-02-14 10:55 ` Johannes Thumshirn
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: Chaitanya Kulkarni @ 2019-02-14  7:37 UTC (permalink / raw)
  To: Theodore Y. Ts'o, lsf-pc; +Cc: linux-block, linux-fsdevel

Thanks for suggesting this topic, we can definitely fold this into
the one which is posted earlier.

On 2/13/19 10:08 AM, Theodore Y. Ts'o wrote:
> This should probably be folded into other testing proposals but I'd
> like to discuss ways that we can improve storage and file systems
> testing.  Specifically,
> 
> 1) Adding some kind of "smoke test" group.  The "quick" group in
> xfstests is no longer terribly quick.  Using gce-xfstests, the time to
> run the quick group on f2fs, ext4, btrfs, and xfs is 17 minutes, 18
> minutes, 25 minutes, and 31 minutes, respectively.  It probably won't
> be too contentious to come up with some kind of criteria --- stress
> tests plus maybe a few tests added to maximize code coverage, with the
> goal of the smoke test to run in 5-10 minutes for all major file
> systems.
> 
> Perhaps more controversial might be some way of ordering the tests so
> that the ones which are most likely to fail if a bug has been
> introduced are run first, so that we can have a "fail fast" sort of
> system.
> 
> 2) Documenting what are known failures should be for various tests on
> different file systems and kernel versions.  I think we all have our
> own way of excluding tests which are known to fail.  One extreme case
> is where the test case was added to xfstests (generic/484), but the
> patch to fix it got hung up because it was somewhat controversial, so
> it was failing on all file systems.
> 
> Other cases might be when fixing a particular test failure is too
> complex to backport to stable (maybe because it would drag in all
> sorts of other changes in other subsystems), so that test is Just
> Going To Fail for a particular stable kernel series.
> 
> It probably doesn't make sense to do this in xfstests, which is why we
> all have our own individual test runners that are layered on top of
> xfstests.  But if we want to automate running xfstests for stable
> kernel series, some way of annotating fixes for different kernel
> versions would be useful, perhaps some kind of centralized clearing
> house of this information would be useful.
> 
> 3) Making blktests more stable/useful.  For someone who is not a block
> layer specialist, it can be hard to determine whether the problem is a
> kernel bug, a kernel misconfiguration, some userspace component (such
> as nvme-cli) being out of date, or just a test bug.  (For example, all
> srp/* tests are currently failing in blktests upstream; I had to pull
> some not-yet-merged commits from Bart's tree in order to fix bugs that
> caused all of srp to fail.)

This is exactly what I want to discuss in the topic I suggested.
> 
> Some of the things that we could do include documenting what kernel
> CONFIG options are needed to successfully run blktests, perhaps using
> a defconfig list.

Good idea, we should have this per test/category.
> 
> Also, there are expectations about minimum versions of bash that can
> be supported; but there aren't necessarily for other components such
> as nvme-cli, and I suspect that it is due to the use of a overly new
> version of nvme-cli from its git tree.  Is that supposed to work, or
> should I constrain myself to whatever version is being shipped in
> Fedora or some other reference distribution? 
Most of the test assumes that you have nvme-cli from Keith's repo:-
https://github.com/linux-nvme/nvme-cli.git and latest code should
always work, if it breaks then we need to either fix the cli or test.
In this way we are also making sure tools are also working along with 
the kernel code. May be I should document that.

  More generally, what is
> the overall expectations that should be expected?  xfstests has some
> extremely expansive set of sed scripts to normalize shell script
> output to make xfstests extremely portable; will patches along similar
> lines something that we should be doing for blktests?
> 
I think this is a good topic for general discussion.
> Cheers,
> 
> 					- Ted
> 


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-13 18:07 [LSF/MM TOPIC] improving storage testing Theodore Y. Ts'o
  2019-02-14  7:37 ` Chaitanya Kulkarni
@ 2019-02-14 10:55 ` Johannes Thumshirn
  2019-02-14 16:21   ` David Sterba
  2019-02-14 23:26   ` Bart Van Assche
  2019-02-14 12:10 ` Lukas Czerner
  2019-02-14 21:56 ` Omar Sandoval
  3 siblings, 2 replies; 13+ messages in thread
From: Johannes Thumshirn @ 2019-02-14 10:55 UTC (permalink / raw)
  To: Theodore Y. Ts'o; +Cc: lsf-pc, linux-block, linux-fsdevel

On Wed, Feb 13, 2019 at 01:07:54PM -0500, Theodore Y. Ts'o wrote:
> 2) Documenting what are known failures should be for various tests on
> different file systems and kernel versions.  I think we all have our
> own way of excluding tests which are known to fail.  One extreme case
> is where the test case was added to xfstests (generic/484), but the
> patch to fix it got hung up because it was somewhat controversial, so
> it was failing on all file systems.

How about having a wiki page, either in the respective filesystems wiki or a
common wiki, that show's the list of test that are expected to fail for kernel
version X?

This is something I'm desperately looking for for brtfs for example.

[...]

> 3) Making blktests more stable/useful.  For someone who is not a block
> layer specialist, it can be hard to determine whether the problem is a
> kernel bug, a kernel misconfiguration, some userspace component (such
> as nvme-cli) being out of date, or just a test bug.  (For example, all
> srp/* tests are currently failing in blktests upstream; I had to pull
> some not-yet-merged commits from Bart's tree in order to fix bugs that
> caused all of srp to fail.)
> 
> Some of the things that we could do include documenting what kernel
> CONFIG options are needed to successfully run blktests, perhaps using
> a defconfig list.

Or checking for specific CONFIG_* settings in a test's requires() via
/proc/config.gz. This obviously won't work with kernels that don't have it.
 
> Also, there are expectations about minimum versions of bash that can
> be supported; but there aren't necessarily for other components such
> as nvme-cli, and I suspect that it is due to the use of a overly new
> version of nvme-cli from its git tree.  Is that supposed to work, or
> should I constrain myself to whatever version is being shipped in
> Fedora or some other reference distribution?  More generally, what is
> the overall expectations that should be expected?  xfstests has some
> extremely expansive set of sed scripts to normalize shell script
> output to make xfstests extremely portable; will patches along similar
> lines something that we should be doing for blktests?

I think this is the root cause of the problems you've sent out mails for this
week. A lot of blktests test need filtering. See [1] as an example.

[1] https://github.com/osandov/blktests/pull/34

Byte,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-13 18:07 [LSF/MM TOPIC] improving storage testing Theodore Y. Ts'o
  2019-02-14  7:37 ` Chaitanya Kulkarni
  2019-02-14 10:55 ` Johannes Thumshirn
@ 2019-02-14 12:10 ` Lukas Czerner
  2019-02-14 21:28   ` Omar Sandoval
  2019-02-14 21:56 ` Omar Sandoval
  3 siblings, 1 reply; 13+ messages in thread
From: Lukas Czerner @ 2019-02-14 12:10 UTC (permalink / raw)
  To: Theodore Y. Ts'o; +Cc: lsf-pc, linux-block, linux-fsdevel

On Wed, Feb 13, 2019 at 01:07:54PM -0500, Theodore Y. Ts'o wrote:
> 
> 2) Documenting what are known failures should be for various tests on
> different file systems and kernel versions.  I think we all have our
> own way of excluding tests which are known to fail.  One extreme case
> is where the test case was added to xfstests (generic/484), but the
> patch to fix it got hung up because it was somewhat controversial, so
> it was failing on all file systems.
> 
> Other cases might be when fixing a particular test failure is too
> complex to backport to stable (maybe because it would drag in all
> sorts of other changes in other subsystems), so that test is Just
> Going To Fail for a particular stable kernel series.
> 
> It probably doesn't make sense to do this in xfstests, which is why we
> all have our own individual test runners that are layered on top of
> xfstests.  But if we want to automate running xfstests for stable
> kernel series, some way of annotating fixes for different kernel
> versions would be useful, perhaps some kind of centralized clearing
> house of this information would be useful.

I think that the first step can be to require the new test to go in
"after" the respective kernel fix. And related to that, require the test
to include a well-defined tag (preferably both in the test itself and
commit description) saying which commit fixed this particular problem.

It does not solve all the problems, but would be a huge help. We could
also update old tests regularly with new tags as problems are introduced
and fixed, but that's a bit more involved. One thing that would help
with this would be to tag a kernel commit that fixes a problem for which
we already have a tast with the repeoctive test number.


Another think I was planning to do since forever was to create a
standard machine readble output, the ability to construct a database of
the results and present it in the easily browsable format like a set of
html with help of js. I never got around to it, but it would be nice to
be able to compare historical data, kernel versions, options, or even
file systems and identify tests that often fail, or never fail and even
how the run time differs. That might also help one to construct fast,
quick fail set of tests from ones own historical data. It would open some
interesting possibilities.

-Lukas

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-14 10:55 ` Johannes Thumshirn
@ 2019-02-14 16:21   ` David Sterba
  2019-02-14 23:26   ` Bart Van Assche
  1 sibling, 0 replies; 13+ messages in thread
From: David Sterba @ 2019-02-14 16:21 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Theodore Y. Ts'o, lsf-pc, linux-block, linux-fsdevel

On Thu, Feb 14, 2019 at 11:55:07AM +0100, Johannes Thumshirn wrote:
> On Wed, Feb 13, 2019 at 01:07:54PM -0500, Theodore Y. Ts'o wrote:
> > 2) Documenting what are known failures should be for various tests on
> > different file systems and kernel versions.  I think we all have our
> > own way of excluding tests which are known to fail.  One extreme case
> > is where the test case was added to xfstests (generic/484), but the
> > patch to fix it got hung up because it was somewhat controversial, so
> > it was failing on all file systems.
> 
> How about having a wiki page, either in the respective filesystems wiki or a
> common wiki, that show's the list of test that are expected to fail for kernel
> version X?
> 
> This is something I'm desperately looking for for brtfs for example.

https://btrfs.wiki.kernel.org/index.php/Development_notes#.28x.29fstests

Feel free to add what you're missing to that page. Though I'm not sure
wiki is the best way to track such information, but it can be a start.
Without people regularly checking that the information is accurate, it
will be obsolete and fallback to own scripts and exclusion lists would
happen.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-14 12:10 ` Lukas Czerner
@ 2019-02-14 21:28   ` Omar Sandoval
  0 siblings, 0 replies; 13+ messages in thread
From: Omar Sandoval @ 2019-02-14 21:28 UTC (permalink / raw)
  To: Lukas Czerner; +Cc: Theodore Y. Ts'o, lsf-pc, linux-block, linux-fsdevel

On Thu, Feb 14, 2019 at 01:10:40PM +0100, Lukas Czerner wrote:
> On Wed, Feb 13, 2019 at 01:07:54PM -0500, Theodore Y. Ts'o wrote:
> > 
> > 2) Documenting what are known failures should be for various tests on
> > different file systems and kernel versions.  I think we all have our
> > own way of excluding tests which are known to fail.  One extreme case
> > is where the test case was added to xfstests (generic/484), but the
> > patch to fix it got hung up because it was somewhat controversial, so
> > it was failing on all file systems.
> > 
> > Other cases might be when fixing a particular test failure is too
> > complex to backport to stable (maybe because it would drag in all
> > sorts of other changes in other subsystems), so that test is Just
> > Going To Fail for a particular stable kernel series.
> > 
> > It probably doesn't make sense to do this in xfstests, which is why we
> > all have our own individual test runners that are layered on top of
> > xfstests.  But if we want to automate running xfstests for stable
> > kernel series, some way of annotating fixes for different kernel
> > versions would be useful, perhaps some kind of centralized clearing
> > house of this information would be useful.
> 
> I think that the first step can be to require the new test to go in
> "after" the respective kernel fix. And related to that, require the test
> to include a well-defined tag (preferably both in the test itself and
> commit description) saying which commit fixed this particular problem.

For blktests, I require that regression tests include what commit they
are testing in the test comment. For xfstests, sometimes the test
mentions it, sometimes the commit mentions it, but more often you have
to search for keywords in kernel commit messages... It'd be great if
xfstests also required that the test file mentioned the commit/patch it
tests.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-13 18:07 [LSF/MM TOPIC] improving storage testing Theodore Y. Ts'o
                   ` (2 preceding siblings ...)
  2019-02-14 12:10 ` Lukas Czerner
@ 2019-02-14 21:56 ` Omar Sandoval
  2019-02-15  3:02   ` Theodore Y. Ts'o
  3 siblings, 1 reply; 13+ messages in thread
From: Omar Sandoval @ 2019-02-14 21:56 UTC (permalink / raw)
  To: Theodore Y. Ts'o; +Cc: lsf-pc, linux-block, linux-fsdevel

On Wed, Feb 13, 2019 at 01:07:54PM -0500, Theodore Y. Ts'o wrote:
> This should probably be folded into other testing proposals but I'd
> like to discuss ways that we can improve storage and file systems
> testing.  Specifically,
> 
> 1) Adding some kind of "smoke test" group.  The "quick" group in
> xfstests is no longer terribly quick.  Using gce-xfstests, the time to
> run the quick group on f2fs, ext4, btrfs, and xfs is 17 minutes, 18
> minutes, 25 minutes, and 31 minutes, respectively.  It probably won't
> be too contentious to come up with some kind of criteria --- stress
> tests plus maybe a few tests added to maximize code coverage, with the
> goal of the smoke test to run in 5-10 minutes for all major file
> systems.
> 
> Perhaps more controversial might be some way of ordering the tests so
> that the ones which are most likely to fail if a bug has been
> introduced are run first, so that we can have a "fail fast" sort of
> system.
> 
> 2) Documenting what are known failures should be for various tests on
> different file systems and kernel versions.  I think we all have our
> own way of excluding tests which are known to fail.  One extreme case
> is where the test case was added to xfstests (generic/484), but the
> patch to fix it got hung up because it was somewhat controversial, so
> it was failing on all file systems.
> 
> Other cases might be when fixing a particular test failure is too
> complex to backport to stable (maybe because it would drag in all
> sorts of other changes in other subsystems), so that test is Just
> Going To Fail for a particular stable kernel series.
> 
> It probably doesn't make sense to do this in xfstests, which is why we
> all have our own individual test runners that are layered on top of
> xfstests.  But if we want to automate running xfstests for stable
> kernel series, some way of annotating fixes for different kernel
> versions would be useful, perhaps some kind of centralized clearing
> house of this information would be useful.
> 
> 3) Making blktests more stable/useful.  For someone who is not a block
> layer specialist, it can be hard to determine whether the problem is a
> kernel bug,

From my experience with running xfstests at Facebook, the same thing
goes for xfstests :) The filesystem developers on the team are the only
ones that can make sense of any test failures.

> a kernel misconfiguration

In theory, every test should verify that the kernel is configured
correctly and skip the test if not, just like xfstests.

> some userspace component (such as nvme-cli) being out of date or just
> a test bug.  (For example, all srp/* tests are currently failing in
> blktests upstream; I had to pull some not-yet-merged commits from
> Bart's tree in order to fix bugs that caused all of srp to fail.)
> 
> Some of the things that we could do include documenting what kernel
> CONFIG options are needed to successfully run blktests, perhaps using
> a defconfig list.

Have you encountered issues where missing config options have caused
test failures? Or you want the config options for maximum coverage? If
you have examples of the former, I'll fix them up. For the latter, I
have a list somewhere that I can add to the blktests repository.

> Also, there are expectations about minimum versions of bash that can
> be supported; but there aren't necessarily for other components such
> as nvme-cli, and I suspect that it is due to the use of a overly new
> version of nvme-cli from its git tree.  Is that supposed to work, or
> should I constrain myself to whatever version is being shipped in
> Fedora or some other reference distribution?  More generally, what is
> the overall expectations that should be expected?

My (undocumented) rule of thumb has been that blktests shouldn't assume
anything newer than whatever ships on Debian oldstable. I can document
that requirement.

For specific tests that require a newer feature, the test _should_ check
that the feature is available. Please report any tests where that isn't
the case, although I'll likely defer to the contributors for
nvme/srp/zbd issues.

> xfstests has some
> extremely expansive set of sed scripts to normalize shell script
> output to make xfstests extremely portable; will patches along similar
> lines something that we should be doing for blktests?

Yup, we've added a couple of these. We should add more as needed.

blktests is new, so we have some rough edges, but I'd like to think that
we're trying to do the right things. Please report the cases where we're
not and we'll get them fixed up.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-14 10:55 ` Johannes Thumshirn
  2019-02-14 16:21   ` David Sterba
@ 2019-02-14 23:26   ` Bart Van Assche
  2019-02-15  2:52     ` Chaitanya Kulkarni
  1 sibling, 1 reply; 13+ messages in thread
From: Bart Van Assche @ 2019-02-14 23:26 UTC (permalink / raw)
  To: Johannes Thumshirn, Theodore Y. Ts'o
  Cc: lsf-pc, linux-block, linux-fsdevel

On Thu, 2019-02-14 at 11:55 +0100, Johannes Thumshirn wrote:
> On Wed, Feb 13, 2019 at 01:07:54PM -0500, Theodore Y. Ts'o wrote:
> > Also, there are expectations about minimum versions of bash that can
> > be supported; but there aren't necessarily for other components such
> > as nvme-cli, and I suspect that it is due to the use of a overly new
> > version of nvme-cli from its git tree.  Is that supposed to work, or
> > should I constrain myself to whatever version is being shipped in
> > Fedora or some other reference distribution?  More generally, what is
> > the overall expectations that should be expected?  xfstests has some
> > extremely expansive set of sed scripts to normalize shell script
> > output to make xfstests extremely portable; will patches along similar
> > lines something that we should be doing for blktests?
> 
> I think this is the root cause of the problems you've sent out mails for this
> week. A lot of blktests test need filtering. See [1] as an example.
> 
> [1] https://github.com/osandov/blktests/pull/34

Hi Johannes,

Output of tools like nvme-cli is not an ABI although an ABI is what is
required to make blktests work reliably. One possible approach is to modify
nvme-cli such that it has two output formats: one output format that is
intended for humans and another that is easy to parse by software. I think
we should consider that approach and compare it to using sed scripts.

Bart.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-14 23:26   ` Bart Van Assche
@ 2019-02-15  2:52     ` Chaitanya Kulkarni
  2019-02-15  7:52       ` Johannes Thumshirn
  0 siblings, 1 reply; 13+ messages in thread
From: Chaitanya Kulkarni @ 2019-02-15  2:52 UTC (permalink / raw)
  To: Bart Van Assche, Johannes Thumshirn, Theodore Y. Ts'o
  Cc: lsf-pc, linux-block, linux-fsdevel

Hi, Bart

In this way, we may end up modifying probably most of the common tools,
which in long run can create a bunch of the code for the tests. If everyone
(test contributors and tools maintainers) agrees to have such a "test" mode
to all the tools, we can go for this approach.

-Chaitanya

From: linux-block-owner@vger.kernel.org <linux-block-owner@vger.kernel.org> on behalf of Bart Van Assche <bvanassche@acm.org>
Sent: Thursday, February 14, 2019 3:26 PM
To: Johannes Thumshirn; Theodore Y. Ts'o
Cc: lsf-pc@lists.linux-foundation.org; linux-block@vger.kernel.org; linux-fsdevel@vger.kernel.org
Subject: Re: [LSF/MM TOPIC] improving storage testing
  
 
On Thu, 2019-02-14 at 11:55 +0100, Johannes Thumshirn wrote:
> On Wed, Feb 13, 2019 at 01:07:54PM -0500, Theodore Y. Ts'o wrote:
> > Also, there are expectations about minimum versions of bash that can
> > be supported; but there aren't necessarily for other components such
> > as nvme-cli, and I suspect that it is due to the use of a overly new
> > version of nvme-cli from its git tree.  Is that supposed to work, or
> > should I constrain myself to whatever version is being shipped in
> > Fedora or some other reference distribution?  More generally, what is
> > the overall expectations that should be expected?  xfstests has some
> > extremely expansive set of sed scripts to normalize shell script
> > output to make xfstests extremely portable; will patches along similar
> > lines something that we should be doing for blktests?
> 
> I think this is the root cause of the problems you've sent out mails for this
> week. A lot of blktests test need filtering. See [1] as an example.
> 
> [1] https://github.com/osandov/blktests/pull/34

 https://avatars0.githubusercontent.com/u/5319409?s=400&v=4 

Add filter function for nvme discover by frankenmichl · Pull Request #34 · osandov/blktests
github.com
Several NVMe tests (002, 016, 017) used a pipe to a sed call filtering the output. This call is moved to a new filter function nvme/rc and the calls to sed are replaced by this function. Additional...


Hi Johannes,

Output of tools like nvme-cli is not an ABI although an ABI is what is
required to make blktests work reliably. One possible approach is to modify
nvme-cli such that it has two output formats: one output format that is
intended for humans and another that is easy to parse by software. I think
we should consider that approach and compare it to using sed scripts.

Bart.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-14 21:56 ` Omar Sandoval
@ 2019-02-15  3:02   ` Theodore Y. Ts'o
  2019-02-15 17:32     ` Keith Busch
  0 siblings, 1 reply; 13+ messages in thread
From: Theodore Y. Ts'o @ 2019-02-15  3:02 UTC (permalink / raw)
  To: Omar Sandoval; +Cc: lsf-pc, linux-block, linux-fsdevel

[-- Attachment #1: Type: text/plain, Size: 5791 bytes --]

On Thu, Feb 14, 2019 at 01:56:34PM -0800, Omar Sandoval wrote:
> > 3) Making blktests more stable/useful.  For someone who is not a block
> > layer specialist, it can be hard to determine whether the problem is a
> > kernel bug,
> 
> From my experience with running xfstests at Facebook, the same thing
> goes for xfstests :) The filesystem developers on the team are the only
> ones that can make sense of any test failures.

What I've done for xfstests is to make it so easy that even a
University Professor (or Graduate Student) can run it.  That's why I
created the {kvm,gce}-xfstests test appliance:

   https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md

I've been trying to integrate blktests into the test appliance, to try
to make it really easy to run.

> Have you encountered issues where missing config options have caused
> test failures? Or you want the config options for maximum coverage? If
> you have examples of the former, I'll fix them up. For the latter, I
> have a list somewhere that I can add to the blktests repository.

There were a few cases a missing config caused test failures; most of
the time, it simply causes tests to get skipped.  But figuring out how
to enable the nvme or srp tests required turning on a *large* number
of modules.  Figuring that out was painful, and required multipel tries.

One of the things that I've done is create kernel defconfigs to make
it really easy for somone to build a kernel for testing purposes.  The
defconfigs for suitable for running xfstests under KVM or GCE can be
found here:

   https://github.com/tytso/xfstests-bld/blob/master/kernel-configs/x86_64-config-4.14

I've attached the defconfig I've been developing suitable for xfstests
and blktests below.  I try to create minimal defconfigs so I can more
quickly build kernels, especially if I am needing to do bisection
search.

> My (undocumented) rule of thumb has been that blktests shouldn't assume
> anything newer than whatever ships on Debian oldstable. I can document
> that requirement.

That's definitely not true for the nvme tests; the nvme-cli from
Debian stable is *not* sufficient.  This is why I've started building
nvme-cli as part of the test appliance in xfstests-bld.  I'm now
somewhat suspicious that there are problems because using the latest
HEAD of the nvme-cli git tree may have had messages printed to
standard out that is subtly different from the version of nvme-cli
that was used to develop some of the nvme tests.

> blktests is new, so we have some rough edges, but I'd like to think that
> we're trying to do the right things. Please report the cases where we're
> not and we'll get them fixed up.

I have been graduately reporting them to linux-block@.  Here are the
full set of test failures I've been working through.

My goal is that eventually, someone will be able to run "gce-xfstests
--blktests" in their kernel development tree, and in less than 45
minutes, they would get an e-mail that would look like the following,
except there wouldn't be any failures reported.  :-)

						- Ted

CMDLINE: --blktests
FSTESTIMG: gce-xfstests/xfstests-201902111955
FSTESTPRJ: gce-xfstests
FSTESTVER: blktests	5f1e24c (Mon, 11 Feb 2019 10:08:14 -0800)
FSTESTVER: fio		fio-3.2 (Fri, 3 Nov 2017 15:23:49 -0600)
FSTESTVER: fsverity	bdebc45 (Wed, 5 Sep 2018 21:32:22 -0700)
FSTESTVER: ima-evm-utils	0267fa1 (Mon, 3 Dec 2018 06:11:35 -0500)
FSTESTVER: nvme-cli	v1.7-22-gf716974 (Wed, 6 Feb 2019 16:03:58 -0700)
FSTESTVER: quota		59b280e (Mon, 5 Feb 2018 16:48:22 +0100)
FSTESTVER: stress-ng	7d0353cf (Sun, 20 Jan 2019 03:30:03 +0000)
FSTESTVER: syzkaller	2103a236 (Fri, 18 Jan 2019 13:20:33 +0100)
FSTESTVER: xfsprogs	v4.19.0 (Fri, 9 Nov 2018 14:31:04 -0600)
FSTESTVER: xfstests-bld	11be69c (Mon, 11 Feb 2019 18:57:39 -0500)
FSTESTVER: xfstests	linux-v3.8-2293-g6f7f9398 (Mon, 11 Feb 2019 19:42:24 -0500)
FSTESTSET: ""
FSTESTEXC: ""
FSTESTOPT: "blktests aex"
CPUS: "2"
MEM: "7680"
BEGIN BLKTESTS Tue Feb 12 00:41:13 EST 2019
block/024 (do I/O faster than a jiffy and check iostats times) [failed]
loop/002 (try various loop device block sizes)               [failed]
nvme/002 (create many subsystems and test discovery)         [failed]
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [failed]
    [ 1857.726308] WARNING: possible recursive locking detected
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [failed]
nvme/015 (unit test for NVMe flush for file backed ns)       [failed]
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [failed]
nvme/017 (create/delete many file-ns and test discovery)     [failed]
srp/002 (File I/O on top of multipath concurrently with logout and login (mq)) [failed]
srp/011 (Block I/O on top of multipath concurrently with logout and login) [failed]
Run: block/001 block/002 block/003 block/004 block/005 block/006 block/009 block/010 block/012 block/013 block/014 block/015 block/016 block/017 block/018 block/020 block/021 block/023 block/024 block/025 block/028 loop/001 loop/002 loop/003 loop/004 loop/005 loop/006 loop/007 nvme/002 nvme/003 nvme/004 nvme/005 nvme/006 nvme/007 nvme/008 nvme/009 nvme/010 nvme/011 nvme/012 nvme/013 nvme/014 nvme/015 nvme/016 nvme/017 nvme/019 nvme/020 nvme/021 nvme/022 nvme/023 nvme/024 nvme/025 nvme/026 nvme/027 nvme/028 scsi/001 scsi/002 scsi/003 scsi/004 scsi/005 scsi/006 srp/001 srp/002 srp/005 srp/006 srp/007 srp/008 srp/009 srp/010 srp/011 srp/012 srp/013
Failures: block/024 loop/002 nvme/002 nvme/012 nvme/013 nvme/015 nvme/016 nvme/017 srp/002 srp/011
Failed 10 of 71 tests
END BLKTESTS Tue Feb 12 01:18:37 EST 2019
Feb 12 01:11:34 xfstests-tytso-20190212003935 kernel: [ 1857.726308] WARNING: possible recursive locking detected


[-- Attachment #2: defconfig --]
[-- Type: text/plain, Size: 5921 bytes --]

CONFIG_LOCALVERSION="-xfstests"
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_CGROUPS=y
CONFIG_USER_NS=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
CONFIG_SMP=y
CONFIG_X86_X2APIC=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_MCORE2=y
CONFIG_NR_CPUS=48
# CONFIG_X86_MCE_AMD is not set
# CONFIG_MICROCODE is not set
CONFIG_NUMA=y
# CONFIG_AMD_NUMA is not set
CONFIG_X86_PMEM_LEGACY=y
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_HZ_300=y
CONFIG_KEXEC=y
# CONFIG_SUSPEND is not set
# CONFIG_ACPI_REV_OVERRIDE_POSSIBLE is not set
# CONFIG_ACPI_TABLE_UPGRADE is not set
# CONFIG_PCI_MMCONFIG is not set
CONFIG_IA32_EMULATION=y
# CONFIG_DMIID is not set
CONFIG_JUMP_LABEL=y
CONFIG_REFCOUNT_FULL=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MQ_IOSCHED_KYBER=m
CONFIG_IOSCHED_BFQ=m
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_ZONE_DEVICE=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=y
CONFIG_UNIX=y
CONFIG_UNIX_DIAG=y
CONFIG_INET=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_UDP_DIAG=y
# CONFIG_INET6_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET6_XFRM_MODE_TUNNEL is not set
# CONFIG_INET6_XFRM_MODE_BEET is not set
CONFIG_NETLINK_DIAG=y
# CONFIG_WIRELESS is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_PCI=y
CONFIG_PCI_MSI=y
CONFIG_DEVTMPFS=y
CONFIG_MTD=y
CONFIG_MTD_BLOCK2MTD=y
CONFIG_MTD_UBI=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_NBD=m
CONFIG_VIRTIO_BLK=y
CONFIG_BLK_DEV_NVME=m
CONFIG_NVME_MULTIPATH=y
CONFIG_NVME_RDMA=m
CONFIG_NVME_TARGET=m
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_RDMA=m
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_SCSI_DEBUG=m
CONFIG_SCSI_VIRTIO=y
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=m
CONFIG_SCSI_DH_EMC=m
CONFIG_SCSI_DH_ALUA=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_BLK_DEV_DM=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_THIN_PROVISIONING=y
CONFIG_DM_ZERO=y
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=y
CONFIG_DM_LOG_WRITES=y
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_NETDEVICES=y
CONFIG_VIRTIO_NET=y
# CONFIG_ETHERNET is not set
# CONFIG_WLAN is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_SERIO_SERPORT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=32
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
# CONFIG_HW_RANDOM_VIA is not set
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_RANDOM_TRUST_CPU=y
# CONFIG_HWMON is not set
# CONFIG_X86_PKG_TEMP_THERMAL is not set
# CONFIG_HID is not set
# CONFIG_USB_SUPPORT is not set
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_MAD=m
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_INFINIBAND_IPOIB=m
CONFIG_INFINIBAND_IPOIB_CM=y
CONFIG_INFINIBAND_SRP=m
CONFIG_INFINIBAND_SRPT=m
CONFIG_INFINIBAND_ISER=m
CONFIG_RDMA_RXE=m
CONFIG_RTC_CLASS=y
# CONFIG_RTC_DRV_CMOS is not set
CONFIG_VIRT_DRIVERS=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BALLOON=y
# CONFIG_X86_PLATFORM_DEVICES is not set
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
CONFIG_EXT4_ENCRYPTION=y
CONFIG_EXT4_DEBUG=y
CONFIG_JBD2_DEBUG=y
CONFIG_XFS_FS=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
CONFIG_BTRFS_FS=y
CONFIG_BTRFS_FS_POSIX_ACL=y
CONFIG_BTRFS_DEBUG=y
CONFIG_BTRFS_ASSERT=y
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_F2FS_CHECK_FS=y
CONFIG_F2FS_FS_ENCRYPTION=y
CONFIG_FS_DAX=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
CONFIG_QFMT_V2=y
CONFIG_AUTOFS4_FS=y
CONFIG_OVERLAY_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_CHILDREN=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_CONFIGFS_FS=y
CONFIG_UBIFS_FS=y
CONFIG_UBIFS_FS_ENCRYPTION=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
CONFIG_NFSD=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_9P_FS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_ASCII=y
CONFIG_NLS_UTF8=y
CONFIG_SECURITY=y
CONFIG_FORTIFY_SOURCE=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_IMA=y
CONFIG_IMA_WRITE_POLICY=y
CONFIG_IMA_APPRAISE=y
CONFIG_EVM=y
CONFIG_CRYPTO_RSA=y
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_ECHAINIV=y
CONFIG_CRYPTO_ADIANTUM=y
CONFIG_CRYPTO_CRC32C_INTEL=y
CONFIG_CRYPTO_CRC32_PCLMUL=y
CONFIG_CRYPTO_AES_NI_INTEL=y
# CONFIG_CRYPTO_HW is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
CONFIG_X509_CERTIFICATE_PARSER=y
CONFIG_PKCS7_MESSAGE_PARSER=y
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS="certs/cert.pem"
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_PAGEALLOC=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=3000
CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_WQ_WATCHDOG=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_PROVE_LOCKING=y
CONFIG_LOCK_STAT=y
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DEBUG_LIST=y
CONFIG_DEBUG_SG=y
CONFIG_RCU_EQS_DEBUG=y
CONFIG_FAULT_INJECTION=y
CONFIG_FAIL_MAKE_REQUEST=y
CONFIG_FAULT_INJECTION_DEBUG_FS=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_FUNCTION_PROFILER=y
# CONFIG_RUNTIME_TESTING_MENU is not set
CONFIG_DEBUG_WX=y

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-15  2:52     ` Chaitanya Kulkarni
@ 2019-02-15  7:52       ` Johannes Thumshirn
  0 siblings, 0 replies; 13+ messages in thread
From: Johannes Thumshirn @ 2019-02-15  7:52 UTC (permalink / raw)
  To: Chaitanya Kulkarni, Bart Van Assche, Theodore Y. Ts'o
  Cc: lsf-pc, linux-block, linux-fsdevel

On 15/02/2019 03:52, Chaitanya Kulkarni wrote:
> Hi, Bart
> 
> In this way, we may end up modifying probably most of the common tools,
> which in long run can create a bunch of the code for the tests. If everyone
> (test contributors and tools maintainers) agrees to have such a "test" mode
> to all the tools, we can go for this approach.

I think going the path similar to xfstests and adding filter functions
to normalize the output in blktests is way less of a hassle than
changing every possible tool we might want to use to have a "test" mode.

Just my two cents,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-15  3:02   ` Theodore Y. Ts'o
@ 2019-02-15 17:32     ` Keith Busch
  2019-02-20  1:33       ` Chaitanya Kulkarni
  0 siblings, 1 reply; 13+ messages in thread
From: Keith Busch @ 2019-02-15 17:32 UTC (permalink / raw)
  To: Theodore Y. Ts'o; +Cc: Omar Sandoval, lsf-pc, linux-block, linux-fsdevel

On Thu, Feb 14, 2019 at 10:02:02PM -0500, Theodore Y. Ts'o wrote:
> > My (undocumented) rule of thumb has been that blktests shouldn't assume
> > anything newer than whatever ships on Debian oldstable. I can document
> > that requirement.
> 
> That's definitely not true for the nvme tests; the nvme-cli from
> Debian stable is *not* sufficient.  This is why I've started building
> nvme-cli as part of the test appliance in xfstests-bld.  I'm now
> somewhat suspicious that there are problems because using the latest
> HEAD of the nvme-cli git tree may have had messages printed to
> standard out that is subtly different from the version of nvme-cli
> that was used to develop some of the nvme tests.

It does appear some expected output has hard coded values that are not
fixed. Some of the failures are assuming an auto-incrementing generation
number will always be 1, but that should just be a wildcard match.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM TOPIC] improving storage testing
  2019-02-15 17:32     ` Keith Busch
@ 2019-02-20  1:33       ` Chaitanya Kulkarni
  0 siblings, 0 replies; 13+ messages in thread
From: Chaitanya Kulkarni @ 2019-02-20  1:33 UTC (permalink / raw)
  To: Keith Busch, Theodore Y. Ts'o
  Cc: Omar Sandoval, lsf-pc, linux-block, linux-fsdevel

We deliberately check the generation counter to make sure discovery
code path is working as expected. The other text match is hardcoded
as per the cli, I sent out the tentative fixes so we can move forward.


I'll send another series to get rid of the all possible text based
comparisons so that we can avoid this scenario.

On 2/15/19 9:32 AM, Keith Busch wrote:
> On Thu, Feb 14, 2019 at 10:02:02PM -0500, Theodore Y. Ts'o wrote:
>>> My (undocumented) rule of thumb has been that blktests shouldn't assume
>>> anything newer than whatever ships on Debian oldstable. I can document
>>> that requirement.
>>
>> That's definitely not true for the nvme tests; the nvme-cli from
>> Debian stable is *not* sufficient.  This is why I've started building
>> nvme-cli as part of the test appliance in xfstests-bld.  I'm now
>> somewhat suspicious that there are problems because using the latest
>> HEAD of the nvme-cli git tree may have had messages printed to
>> standard out that is subtly different from the version of nvme-cli
>> that was used to develop some of the nvme tests.
> 
> It does appear some expected output has hard coded values that are not
> fixed. Some of the failures are assuming an auto-incrementing generation
> number will always be 1, but that should just be a wildcard match.
> 


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2019-02-20  1:33 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-13 18:07 [LSF/MM TOPIC] improving storage testing Theodore Y. Ts'o
2019-02-14  7:37 ` Chaitanya Kulkarni
2019-02-14 10:55 ` Johannes Thumshirn
2019-02-14 16:21   ` David Sterba
2019-02-14 23:26   ` Bart Van Assche
2019-02-15  2:52     ` Chaitanya Kulkarni
2019-02-15  7:52       ` Johannes Thumshirn
2019-02-14 12:10 ` Lukas Czerner
2019-02-14 21:28   ` Omar Sandoval
2019-02-14 21:56 ` Omar Sandoval
2019-02-15  3:02   ` Theodore Y. Ts'o
2019-02-15 17:32     ` Keith Busch
2019-02-20  1:33       ` Chaitanya Kulkarni

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).