All of lore.kernel.org
 help / color / mirror / Atom feed
* [Ksummit-discuss] [CORE TOPIC] kernel testing standard
@ 2014-05-23 11:47 Masami Hiramatsu
  2014-05-23 13:32 ` Jason Cooper
                   ` (2 more replies)
  0 siblings, 3 replies; 36+ messages in thread
From: Masami Hiramatsu @ 2014-05-23 11:47 UTC (permalink / raw)
  To: ksummit-discuss

Hi,

As I discussed with Greg K.H. at LinuxCon Japan yesterday,
I'd like to propose kernel testing standard as a separated topic.

Issue:
There are many ways to test the kernel but it's neither well documented
nor standardized/organized.

As you may know, testing kernel is important on each phase of kernel
life-cycle. For example, even at the designing phase, actual test-case
shows us what the new feature/design does, how is will work, and how
to use it. This can improve the quality of the discussion.

Through the previous discussion I realized there are many different methods/
tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
in-kernel selftest etc. Each has good points and bad points.

So, I'd like to discuss how we can standardize them for each subsystem
at this kernel summit.

My suggestion are,
- Organizing existing in-tree kernel test frameworks (as "make test")
- Documenting the standard testing method, including how to run,
  how to add test-cases, and how to report.
- Commenting standard testing for each subsystem, maybe by adding
  UT: or TS: tags to MAINTAINERS, which describes the URL of
  out-of-tree tests or the directory of the selftest.

Note that I don't tend to change the ways to test for subsystems which
already have own tests, but organize it for who wants to get involved in
and/or to evaluate it. :-)

I think we can strongly request developers to add test-cases for new features
if we standardize the testing method.

Suggested participants: greg k.h., Li Zefan, test-tool maintainers and
                       subsystem maintainers.


Thank you,

-- 
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 11:47 [Ksummit-discuss] [CORE TOPIC] kernel testing standard Masami Hiramatsu
@ 2014-05-23 13:32 ` Jason Cooper
  2014-05-23 16:24   ` Olof Johansson
  2014-05-23 18:06   ` Masami Hiramatsu
  2014-05-23 14:05 ` Justin M. Forbes
  2014-05-28 15:37 ` Mel Gorman
  2 siblings, 2 replies; 36+ messages in thread
From: Jason Cooper @ 2014-05-23 13:32 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

Masami,

On Fri, May 23, 2014 at 08:47:29PM +0900, Masami Hiramatsu wrote:
> Issue:
> There are many ways to test the kernel but it's neither well documented
> nor standardized/organized.
> 
> As you may know, testing kernel is important on each phase of kernel
> life-cycle. For example, even at the designing phase, actual test-case
> shows us what the new feature/design does, how is will work, and how
> to use it. This can improve the quality of the discussion.
> 
> Through the previous discussion I realized there are many different methods/
> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
> in-kernel selftest etc. Each has good points and bad points.

* automated boot testing (embedded platforms)
* runtime testing

A lot of development that we see is embedded platforms using
cross-compilers.  That makes a whole lot of tests impossible to run on
the host.  Especially when it deals with hardware interaction.  So
run-time testing definitely needs to be a part of the discussion.

The boot farms that Kevin and Olof run currently tests booting to a
command prompt.  We're catching a lot of regressions before they hit
mainline, which is great.  But I'd like to see how we can extend that.
And yes, I know those farms are saturated, and we need to bring
something else on line to do more functional testing,  Perhaps break up
the testing load:  boot-test linux-next, and runtime tests of the -rcX
tags and stable tags.

> So, I'd like to discuss how we can standardize them for each subsystem
> at this kernel summit.
> 
> My suggestion are,
> - Organizing existing in-tree kernel test frameworks (as "make test")
> - Documenting the standard testing method, including how to run,
>   how to add test-cases, and how to report.
> - Commenting standard testing for each subsystem, maybe by adding
>   UT: or TS: tags to MAINTAINERS, which describes the URL of
>   out-of-tree tests or the directory of the selftest.

 - classify testing into functional, performance, or stress
    - possibly security/fuzzing

> Note that I don't tend to change the ways to test for subsystems which
> already have own tests, but organize it for who wants to get involved in
> and/or to evaluate it. :-)

And make it clear what type of testing it is.  "Well, I ran make test"
on a patch affecting performance is no good if the test for that area is
purely functional.

On the stress-testing front, there's a great paper [1] on how to
stress-test software destined for deep space.  Definitely worth the
read.  And directly applicable to more than deep space satellites.

> I think we can strongly request developers to add test-cases for new features
> if we standardize the testing method.
> 
> Suggested participants: greg k.h., Li Zefan, test-tool maintainers and
>                        subsystem maintainers.

+ Fenguang Wu, Kevin Hilman, Olof Johansson

thx,

Jason.

[1] http://messenger.jhuapl.edu/the_mission/publications/Hill.2007.pdf

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 11:47 [Ksummit-discuss] [CORE TOPIC] kernel testing standard Masami Hiramatsu
  2014-05-23 13:32 ` Jason Cooper
@ 2014-05-23 14:05 ` Justin M. Forbes
  2014-05-23 16:04   ` Mark Brown
  2014-05-24  0:30   ` Theodore Ts'o
  2014-05-28 15:37 ` Mel Gorman
  2 siblings, 2 replies; 36+ messages in thread
From: Justin M. Forbes @ 2014-05-23 14:05 UTC (permalink / raw)
  To: ksummit-discuss

On Fri, 2014-05-23 at 20:47 +0900, Masami Hiramatsu wrote:
> Hi,
> 
> As I discussed with Greg K.H. at LinuxCon Japan yesterday,
> I'd like to propose kernel testing standard as a separated topic.
> 
> Issue:
> There are many ways to test the kernel but it's neither well documented
> nor standardized/organized.
> 
> As you may know, testing kernel is important on each phase of kernel
> life-cycle. For example, even at the designing phase, actual test-case
> shows us what the new feature/design does, how is will work, and how
> to use it. This can improve the quality of the discussion.
> 
> Through the previous discussion I realized there are many different methods/
> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
> in-kernel selftest etc. Each has good points and bad points.
> 
> So, I'd like to discuss how we can standardize them for each subsystem
> at this kernel summit.
> 
This would be a great discussion to have. Trying to build a regression
testing framework for kernel builds has been one of my big projects
recently.  Getting decent test coverage is kind of a nightmare at the
moment.

> My suggestion are,
> - Organizing existing in-tree kernel test frameworks (as "make test")
> - Documenting the standard testing method, including how to run,
>   how to add test-cases, and how to report.
> - Commenting standard testing for each subsystem, maybe by adding
>   UT: or TS: tags to MAINTAINERS, which describes the URL of
>   out-of-tree tests or the directory of the selftest.
> 
> Note that I don't tend to change the ways to test for subsystems which
> already have own tests, but organize it for who wants to get involved in
> and/or to evaluate it. :-)
> 
All good suggestions. As nice as it would be if tests were in tree, this
might be unmanageable. But even out of tree tests could be automatically
brought in provided they are listed somewhere in tree.  Ideally you
would be able to "make tests" and get all in tree tests run, or "make
alltests" and have it grab/build/run out of tree tests with git urls as
well. 

> I think we can strongly request developers to add test-cases for new features
> if we standardize the testing method.

Not just new features, bug fixes as well. Though writing a test can
sometimes be more difficult than the actual bug fix.  Without some sort
of framework, it is harder to ask for developer participation. We need a
framework that makes it easy.

> Suggested participants: greg k.h., Li Zefan, test-tool maintainers and
>                        subsystem maintainers.
I would love to be a part of this discussion, and creating a working
solution.

Justin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 14:05 ` Justin M. Forbes
@ 2014-05-23 16:04   ` Mark Brown
  2014-05-24  0:30   ` Theodore Ts'o
  1 sibling, 0 replies; 36+ messages in thread
From: Mark Brown @ 2014-05-23 16:04 UTC (permalink / raw)
  To: Justin M. Forbes; +Cc: ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 501 bytes --]

On Fri, May 23, 2014 at 09:05:02AM -0500, Justin M. Forbes wrote:
> On Fri, 2014-05-23 at 20:47 +0900, Masami Hiramatsu wrote:

> > Suggested participants: greg k.h., Li Zefan, test-tool maintainers and
> >                        subsystem maintainers.

> I would love to be a part of this discussion, and creating a working
> solution.

Probably distro and stable kernel maintainers too (I'll raise my hand
for the Linaro one) - they're likely to want to be using any testsuites
they reasonably can.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 13:32 ` Jason Cooper
@ 2014-05-23 16:24   ` Olof Johansson
  2014-05-23 16:35     ` Guenter Roeck
                       ` (2 more replies)
  2014-05-23 18:06   ` Masami Hiramatsu
  1 sibling, 3 replies; 36+ messages in thread
From: Olof Johansson @ 2014-05-23 16:24 UTC (permalink / raw)
  To: Jason Cooper; +Cc: ksummit-discuss

On Fri, May 23, 2014 at 6:32 AM, Jason Cooper <jason@lakedaemon.net> wrote:
> Masami,
>
> On Fri, May 23, 2014 at 08:47:29PM +0900, Masami Hiramatsu wrote:
>> Issue:
>> There are many ways to test the kernel but it's neither well documented
>> nor standardized/organized.
>>
>> As you may know, testing kernel is important on each phase of kernel
>> life-cycle. For example, even at the designing phase, actual test-case
>> shows us what the new feature/design does, how is will work, and how
>> to use it. This can improve the quality of the discussion.
>>
>> Through the previous discussion I realized there are many different methods/
>> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
>> in-kernel selftest etc. Each has good points and bad points.
>
> * automated boot testing (embedded platforms)
> * runtime testing
>
> A lot of development that we see is embedded platforms using
> cross-compilers.  That makes a whole lot of tests impossible to run on
> the host.  Especially when it deals with hardware interaction.  So
> run-time testing definitely needs to be a part of the discussion.
>
> The boot farms that Kevin and Olof run currently tests booting to a
> command prompt.  We're catching a lot of regressions before they hit
> mainline, which is great.  But I'd like to see how we can extend that.
> And yes, I know those farms are saturated, and we need to bring
> something else on line to do more functional testing,  Perhaps break up
> the testing load:  boot-test linux-next, and runtime tests of the -rcX
> tags and stable tags.

I wouldn't call them saturated, but neither of us will be able to
scale to 10x the current size. 2-3x should be doable.

>> So, I'd like to discuss how we can standardize them for each subsystem
>> at this kernel summit.
>>
>> My suggestion are,
>> - Organizing existing in-tree kernel test frameworks (as "make test")

For my type of setup, I'd prefer a "make install_tests" target,
similar to modules/firmware that I can give a prefix to, and then
something in that directory to actually run them.

That's for runtime testing. For build time unit testing, "make test"
makes sense, of course.



-Olof

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 16:24   ` Olof Johansson
@ 2014-05-23 16:35     ` Guenter Roeck
  2014-05-23 16:36     ` Jason Cooper
  2014-05-23 18:10     ` Masami Hiramatsu
  2 siblings, 0 replies; 36+ messages in thread
From: Guenter Roeck @ 2014-05-23 16:35 UTC (permalink / raw)
  To: Olof Johansson, Jason Cooper; +Cc: ksummit-discuss

On 05/23/2014 09:24 AM, Olof Johansson wrote:
> On Fri, May 23, 2014 at 6:32 AM, Jason Cooper <jason@lakedaemon.net> wrote:
>> Masami,
>>
>> On Fri, May 23, 2014 at 08:47:29PM +0900, Masami Hiramatsu wrote:
>>> Issue:
>>> There are many ways to test the kernel but it's neither well documented
>>> nor standardized/organized.
>>>
>>> As you may know, testing kernel is important on each phase of kernel
>>> life-cycle. For example, even at the designing phase, actual test-case
>>> shows us what the new feature/design does, how is will work, and how
>>> to use it. This can improve the quality of the discussion.
>>>
>>> Through the previous discussion I realized there are many different methods/
>>> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
>>> in-kernel selftest etc. Each has good points and bad points.
>>
>> * automated boot testing (embedded platforms)
>> * runtime testing
>>
>> A lot of development that we see is embedded platforms using
>> cross-compilers.  That makes a whole lot of tests impossible to run on
>> the host.  Especially when it deals with hardware interaction.  So
>> run-time testing definitely needs to be a part of the discussion.
>>
>> The boot farms that Kevin and Olof run currently tests booting to a
>> command prompt.  We're catching a lot of regressions before they hit
>> mainline, which is great.  But I'd like to see how we can extend that.
>> And yes, I know those farms are saturated, and we need to bring
>> something else on line to do more functional testing,  Perhaps break up
>> the testing load:  boot-test linux-next, and runtime tests of the -rcX
>> tags and stable tags.
>
> I wouldn't call them saturated, but neither of us will be able to
> scale to 10x the current size. 2-3x should be doable.
>

A lot can be done with qemu. My tests run to boot prompt for several architectures,
but are very basic (no networking, for example). I would love to add more tests,
but time is a problem.

Guenter

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 16:24   ` Olof Johansson
  2014-05-23 16:35     ` Guenter Roeck
@ 2014-05-23 16:36     ` Jason Cooper
  2014-05-23 18:10     ` Masami Hiramatsu
  2 siblings, 0 replies; 36+ messages in thread
From: Jason Cooper @ 2014-05-23 16:36 UTC (permalink / raw)
  To: Olof Johansson; +Cc: ksummit-discuss

On Fri, May 23, 2014 at 09:24:42AM -0700, Olof Johansson wrote:
> On Fri, May 23, 2014 at 6:32 AM, Jason Cooper <jason@lakedaemon.net> wrote:
> > Masami,
> >
> > On Fri, May 23, 2014 at 08:47:29PM +0900, Masami Hiramatsu wrote:
> >> Issue:
> >> There are many ways to test the kernel but it's neither well documented
> >> nor standardized/organized.
> >>
> >> As you may know, testing kernel is important on each phase of kernel
> >> life-cycle. For example, even at the designing phase, actual test-case
> >> shows us what the new feature/design does, how is will work, and how
> >> to use it. This can improve the quality of the discussion.
> >>
> >> Through the previous discussion I realized there are many different methods/
> >> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
> >> in-kernel selftest etc. Each has good points and bad points.
> >
> > * automated boot testing (embedded platforms)
> > * runtime testing
> >
> > A lot of development that we see is embedded platforms using
> > cross-compilers.  That makes a whole lot of tests impossible to run on
> > the host.  Especially when it deals with hardware interaction.  So
> > run-time testing definitely needs to be a part of the discussion.
> >
> > The boot farms that Kevin and Olof run currently tests booting to a
> > command prompt.  We're catching a lot of regressions before they hit
> > mainline, which is great.  But I'd like to see how we can extend that.
> > And yes, I know those farms are saturated, and we need to bring
> > something else on line to do more functional testing,  Perhaps break up
> > the testing load:  boot-test linux-next, and runtime tests of the -rcX
> > tags and stable tags.
> 
> I wouldn't call them saturated, but neither of us will be able to
> scale to 10x the current size. 2-3x should be doable.

That's purely equipment scaling, correct?

thx,

Jason.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 13:32 ` Jason Cooper
  2014-05-23 16:24   ` Olof Johansson
@ 2014-05-23 18:06   ` Masami Hiramatsu
  2014-05-23 18:32     ` Jason Cooper
  1 sibling, 1 reply; 36+ messages in thread
From: Masami Hiramatsu @ 2014-05-23 18:06 UTC (permalink / raw)
  To: Jason Cooper; +Cc: ksummit-discuss

(2014/05/23 22:32), Jason Cooper wrote:
> Masami,
> 
> On Fri, May 23, 2014 at 08:47:29PM +0900, Masami Hiramatsu wrote:
>> Issue:
>> There are many ways to test the kernel but it's neither well documented
>> nor standardized/organized.
>>
>> As you may know, testing kernel is important on each phase of kernel
>> life-cycle. For example, even at the designing phase, actual test-case
>> shows us what the new feature/design does, how is will work, and how
>> to use it. This can improve the quality of the discussion.
>>
>> Through the previous discussion I realized there are many different methods/
>> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
>> in-kernel selftest etc. Each has good points and bad points.
> 
> * automated boot testing (embedded platforms)
> * runtime testing
> 
> A lot of development that we see is embedded platforms using
> cross-compilers.  That makes a whole lot of tests impossible to run on
> the host.  Especially when it deals with hardware interaction.  So
> run-time testing definitely needs to be a part of the discussion.

Yeah, standardizing how we do run time/boot time testing is good
to be discussed :) And I'd like to focus on standardization process
at this point, since for each implementation there are many hardware
specific reasons why we do/can't do something I guess.

> The boot farms that Kevin and Olof run currently tests booting to a
> command prompt.  We're catching a lot of regressions before they hit
> mainline, which is great.  But I'd like to see how we can extend that.
> And yes, I know those farms are saturated, and we need to bring
> something else on line to do more functional testing,  Perhaps break up
> the testing load:  boot-test linux-next, and runtime tests of the -rcX
> tags and stable tags.

Yeah, it's worth to share the such testing methods. For boot-time
testing, I think we can have a script which packs tests and builds a
special initrd which runs the tests and makes reports :)

>> So, I'd like to discuss how we can standardize them for each subsystem
>> at this kernel summit.
>>
>> My suggestion are,
>> - Organizing existing in-tree kernel test frameworks (as "make test")
>> - Documenting the standard testing method, including how to run,
>>   how to add test-cases, and how to report.
>> - Commenting standard testing for each subsystem, maybe by adding
>>   UT: or TS: tags to MAINTAINERS, which describes the URL of
>>   out-of-tree tests or the directory of the selftest.
> 
>  - classify testing into functional, performance, or stress
>     - possibly security/fuzzing

Good point!

> 
>> Note that I don't tend to change the ways to test for subsystems which
>> already have own tests, but organize it for who wants to get involved in
>> and/or to evaluate it. :-)
> 
> And make it clear what type of testing it is.  "Well, I ran make test"
> on a patch affecting performance is no good if the test for that area is
> purely functional.

Agreed, the test for each phase (design, development, pull-request and
release) should be different. To scale out the test process, we'd better
describe what the subsystem (and sub-subsystem) maintainers run, and
what the release managers run.

> 
> On the stress-testing front, there's a great paper [1] on how to
> stress-test software destined for deep space.  Definitely worth the
> read.  And directly applicable to more than deep space satellites.

Thanks to share such good document :)

> 
>> I think we can strongly request developers to add test-cases for new features
>> if we standardize the testing method.
>>
>> Suggested participants: greg k.h., Li Zefan, test-tool maintainers and
>>                        subsystem maintainers.
> 
> + Fenguang Wu, Kevin Hilman, Olof Johansson
> 
> thx,
> 
> Jason.
> 
> [1] http://messenger.jhuapl.edu/the_mission/publications/Hill.2007.pdf
> 
> 

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 16:24   ` Olof Johansson
  2014-05-23 16:35     ` Guenter Roeck
  2014-05-23 16:36     ` Jason Cooper
@ 2014-05-23 18:10     ` Masami Hiramatsu
  2014-05-23 18:36       ` Jason Cooper
  2 siblings, 1 reply; 36+ messages in thread
From: Masami Hiramatsu @ 2014-05-23 18:10 UTC (permalink / raw)
  To: Olof Johansson; +Cc: Jason Cooper, ksummit-discuss

(2014/05/24 1:24), Olof Johansson wrote:
>> The boot farms that Kevin and Olof run currently tests booting to a
>> command prompt.  We're catching a lot of regressions before they hit
>> mainline, which is great.  But I'd like to see how we can extend that.
>> And yes, I know those farms are saturated, and we need to bring
>> something else on line to do more functional testing,  Perhaps break up
>> the testing load:  boot-test linux-next, and runtime tests of the -rcX
>> tags and stable tags.
> 
> I wouldn't call them saturated, but neither of us will be able to
> scale to 10x the current size. 2-3x should be doable.

Right, the size of test should be considered. If the number of tests
are too big and testing takes too long time, no one executes it.

>>> So, I'd like to discuss how we can standardize them for each subsystem
>>> at this kernel summit.
>>>
>>> My suggestion are,
>>> - Organizing existing in-tree kernel test frameworks (as "make test")
> 
> For my type of setup, I'd prefer a "make install_tests" target,
> similar to modules/firmware that I can give a prefix to, and then
> something in that directory to actually run them.

So it installs tests to /lib/testing/, similar to modules :) ?
Yeah, that's also good, perhaps we can add "make testconfig" too.

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 18:06   ` Masami Hiramatsu
@ 2014-05-23 18:32     ` Jason Cooper
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Cooper @ 2014-05-23 18:32 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Sat, May 24, 2014 at 03:06:35AM +0900, Masami Hiramatsu wrote:
> (2014/05/23 22:32), Jason Cooper wrote:
> > On Fri, May 23, 2014 at 08:47:29PM +0900, Masami Hiramatsu wrote:
> >> Issue:
> >> There are many ways to test the kernel but it's neither well documented
> >> nor standardized/organized.
> >>
> >> As you may know, testing kernel is important on each phase of kernel
> >> life-cycle. For example, even at the designing phase, actual test-case
> >> shows us what the new feature/design does, how is will work, and how
> >> to use it. This can improve the quality of the discussion.
> >>
> >> Through the previous discussion I realized there are many different methods/
> >> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
> >> in-kernel selftest etc. Each has good points and bad points.
> > 
> > * automated boot testing (embedded platforms)
> > * runtime testing
> > 
> > A lot of development that we see is embedded platforms using
> > cross-compilers.  That makes a whole lot of tests impossible to run on
> > the host.  Especially when it deals with hardware interaction.  So
> > run-time testing definitely needs to be a part of the discussion.
> 
> Yeah, standardizing how we do run time/boot time testing is good
> to be discussed :) And I'd like to focus on standardization process
> at this point, since for each implementation there are many hardware
> specific reasons why we do/can't do something I guess.

I'm not sure standardization is the correct word.  It may be better to
implement generic test harnesses for each scenario.  Then, use the
harness for things you care about and submit bugfixes mentioning the use
of the harness.  iow, this is something best socialized, not mandated.

Developers tend to gravitate to things that are made easy for them.  eg,
no one ever told me to use get_maintainers.pl, but I use it all the
time.  Why?  Because it's useful, not because I was told to.

The same can be said of the Link: tag, the Cc: stable tag, and
checkpatch.pl.  The last is a great example of the double-edged sword of
'thinks for you'. ;-)

> > The boot farms that Kevin and Olof run currently tests booting to a
> > command prompt.  We're catching a lot of regressions before they hit
> > mainline, which is great.  But I'd like to see how we can extend that.
> > And yes, I know those farms are saturated, and we need to bring
> > something else on line to do more functional testing,  Perhaps break up
> > the testing load:  boot-test linux-next, and runtime tests of the -rcX
> > tags and stable tags.
> 
> Yeah, it's worth to share the such testing methods. For boot-time
> testing, I think we can have a script which packs tests and builds a
> special initrd which runs the tests and makes reports :)

Like Olof alluded to, I could see a 'make tests_install' which would put
the Kconfig-selected runtime tests into a directory.  Said directory
could then be built into an initrd.  Or, some IT shops may want to have
the tests run after each upgrade before resuming normal operations.  In
either case, there's benefit to have those tests kept in the kernel
repository.

Classifying tests as functional/performance/stress would help end users
run only the type of tests they feel they need to be comfortable with an
upgrade.

It also doesn't make sense to run performance tests before determining
if there are any functional regressions. ;-)

...
> >> I think we can strongly request developers to add test-cases for new features
> >> if we standardize the testing method.
> >>
> >> Suggested participants: greg k.h., Li Zefan, test-tool maintainers and
> >>                        subsystem maintainers.
> > 
> > + Fenguang Wu, Kevin Hilman, Olof Johansson

I guess I should also explicitly mention:

 + Jason Cooper

thx,

Jason.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 18:10     ` Masami Hiramatsu
@ 2014-05-23 18:36       ` Jason Cooper
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Cooper @ 2014-05-23 18:36 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Sat, May 24, 2014 at 03:10:51AM +0900, Masami Hiramatsu wrote:
> (2014/05/24 1:24), Olof Johansson wrote:
> >> The boot farms that Kevin and Olof run currently tests booting to a
> >> command prompt.  We're catching a lot of regressions before they hit
> >> mainline, which is great.  But I'd like to see how we can extend that.
> >> And yes, I know those farms are saturated, and we need to bring
> >> something else on line to do more functional testing,  Perhaps break up
> >> the testing load:  boot-test linux-next, and runtime tests of the -rcX
> >> tags and stable tags.
> > 
> > I wouldn't call them saturated, but neither of us will be able to
> > scale to 10x the current size. 2-3x should be doable.
> 
> Right, the size of test should be considered. If the number of tests
> are too big and testing takes too long time, no one executes it.
> 
> >>> So, I'd like to discuss how we can standardize them for each subsystem
> >>> at this kernel summit.
> >>>
> >>> My suggestion are,
> >>> - Organizing existing in-tree kernel test frameworks (as "make test")
> > 
> > For my type of setup, I'd prefer a "make install_tests" target,
> > similar to modules/firmware that I can give a prefix to, and then
> > something in that directory to actually run them.
> 
> So it installs tests to /lib/testing/, similar to modules :) ?
> Yeah, that's also good, perhaps we can add "make testconfig" too.

Ideally, the kernel's .config could be used to determine which tests are
relevant.  I can also see the installed runtime tests being tied to the
kernel version they came from.

thx,

Jason.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 14:05 ` Justin M. Forbes
  2014-05-23 16:04   ` Mark Brown
@ 2014-05-24  0:30   ` Theodore Ts'o
  2014-05-24  1:15     ` Guenter Roeck
                       ` (2 more replies)
  1 sibling, 3 replies; 36+ messages in thread
From: Theodore Ts'o @ 2014-05-24  0:30 UTC (permalink / raw)
  To: Justin M. Forbes; +Cc: ksummit-discuss

On Fri, May 23, 2014 at 09:05:02AM -0500, Justin M. Forbes wrote:
> All good suggestions. As nice as it would be if tests were in tree, this
> might be unmanageable. But even out of tree tests could be automatically
> brought in provided they are listed somewhere in tree.  Ideally you
> would be able to "make tests" and get all in tree tests run, or "make
> alltests" and have it grab/build/run out of tree tests with git urls as
> well. 

Um.... how long do you expect "make alltests" to run?

And how do you deal with tests that require specific hardware?

For ext4, just doing a light smoke test takes about 30 minutes.  For
me to run the full set of tests using multiple file system
configurations, it takes about 12 to 16 hours.  And that's just for
one file system.  (I do the tests using KVM, with a 90 megabyte
compressed root file system, and 55 gigabytes worth of scratch
partitions.)

						- Ted

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-24  0:30   ` Theodore Ts'o
@ 2014-05-24  1:15     ` Guenter Roeck
  2014-05-26 11:33     ` Masami Hiramatsu
  2014-05-26 17:08     ` Daniel Vetter
  2 siblings, 0 replies; 36+ messages in thread
From: Guenter Roeck @ 2014-05-24  1:15 UTC (permalink / raw)
  To: Theodore Ts'o, Justin M. Forbes; +Cc: ksummit-discuss

On 05/23/2014 05:30 PM, Theodore Ts'o wrote:
> On Fri, May 23, 2014 at 09:05:02AM -0500, Justin M. Forbes wrote:
>> All good suggestions. As nice as it would be if tests were in tree, this
>> might be unmanageable. But even out of tree tests could be automatically
>> brought in provided they are listed somewhere in tree.  Ideally you
>> would be able to "make tests" and get all in tree tests run, or "make
>> alltests" and have it grab/build/run out of tree tests with git urls as
>> well.
>
> Um.... how long do you expect "make alltests" to run?
>
> And how do you deal with tests that require specific hardware?
>
> For ext4, just doing a light smoke test takes about 30 minutes.  For
> me to run the full set of tests using multiple file system
> configurations, it takes about 12 to 16 hours.  And that's just for
> one file system.  (I do the tests using KVM, with a 90 megabyte
> compressed root file system, and 55 gigabytes worth of scratch
> partitions.)
>

Guess that all depends on the scope of tests needed or wanted.
For my part (basic sanity testing of upcoming -stable releases),
where I boot images for various architectures in qemu, no more
than 10-15 minutes per image with a quad-core cpu.

Guenter

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-24  0:30   ` Theodore Ts'o
  2014-05-24  1:15     ` Guenter Roeck
@ 2014-05-26 11:33     ` Masami Hiramatsu
  2014-05-30 18:35       ` Steven Rostedt
  2014-05-26 17:08     ` Daniel Vetter
  2 siblings, 1 reply; 36+ messages in thread
From: Masami Hiramatsu @ 2014-05-26 11:33 UTC (permalink / raw)
  To: ksummit-discuss

(2014/05/24 9:30), Theodore Ts'o wrote:
> On Fri, May 23, 2014 at 09:05:02AM -0500, Justin M. Forbes wrote:
>> All good suggestions. As nice as it would be if tests were in tree, this
>> might be unmanageable. But even out of tree tests could be automatically
>> brought in provided they are listed somewhere in tree.  Ideally you
>> would be able to "make tests" and get all in tree tests run, or "make
>> alltests" and have it grab/build/run out of tree tests with git urls as
>> well. 
> 
> Um.... how long do you expect "make alltests" to run?
> 
> And how do you deal with tests that require specific hardware?
> 
> For ext4, just doing a light smoke test takes about 30 minutes.  For
> me to run the full set of tests using multiple file system
> configurations, it takes about 12 to 16 hours.  And that's just for
> one file system.  (I do the tests using KVM, with a 90 megabyte
> compressed root file system, and 55 gigabytes worth of scratch
> partitions.)

Agreed. That is why I didn't say uniforming test-harness, but
making a standard way to test. I think we'd better organize maintainers
to test their tree; clarify what they will run and what will not do,
when each branch is tested (e.g. before/after merge, or nightly) and how.

If we force to unify the test frameworks, it will neither be maintained
nor used. Instead, if maintainers state what test they will run and how
to maintain it, we are sure that each test will be done on the each
subsystem branch, and before release, we can avoid to run all tests which
requires many hardware and long time but just run a small number of tests
(e.g. LTP, trinity, etc.)

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-24  0:30   ` Theodore Ts'o
  2014-05-24  1:15     ` Guenter Roeck
  2014-05-26 11:33     ` Masami Hiramatsu
@ 2014-05-26 17:08     ` Daniel Vetter
  2014-05-26 18:21       ` Mark Brown
  2 siblings, 1 reply; 36+ messages in thread
From: Daniel Vetter @ 2014-05-26 17:08 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: ksummit-discuss

On Sat, May 24, 2014 at 2:30 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Fri, May 23, 2014 at 09:05:02AM -0500, Justin M. Forbes wrote:
>> All good suggestions. As nice as it would be if tests were in tree, this
>> might be unmanageable. But even out of tree tests could be automatically
>> brought in provided they are listed somewhere in tree.  Ideally you
>> would be able to "make tests" and get all in tree tests run, or "make
>> alltests" and have it grab/build/run out of tree tests with git urls as
>> well.
>
> Um.... how long do you expect "make alltests" to run?
>
> And how do you deal with tests that require specific hardware?
>
> For ext4, just doing a light smoke test takes about 30 minutes.  For
> me to run the full set of tests using multiple file system
> configurations, it takes about 12 to 16 hours.  And that's just for
> one file system.  (I do the tests using KVM, with a 90 megabyte
> compressed root file system, and 55 gigabytes worth of scratch
> partitions.)

Full drm/i915 regression testing takes about equally long, multiplied
by the need to run this on different physical hw platforms to cover
all relevant code paths. tbh I really don't see much point in having a
fully integrated testsuite for everything. At least for developers.

Otoh if distros/stable trees and other consumers of upstream want to
run this I think it would make sense to have something unified. I've
tried to haggle the drm/i915 testsuite to various people, but thus far
very little success. So I'm not sure whether it's just a lack of
awareness about the tests or whether it is a more fundamental lack of
interest (usually called "we don't have time").
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-26 17:08     ` Daniel Vetter
@ 2014-05-26 18:21       ` Mark Brown
  0 siblings, 0 replies; 36+ messages in thread
From: Mark Brown @ 2014-05-26 18:21 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 699 bytes --]

On Mon, May 26, 2014 at 07:08:45PM +0200, Daniel Vetter wrote:

> Otoh if distros/stable trees and other consumers of upstream want to
> run this I think it would make sense to have something unified. I've
> tried to haggle the drm/i915 testsuite to various people, but thus far
> very little success. So I'm not sure whether it's just a lack of
> awareness about the tests or whether it is a more fundamental lack of
> interest (usually called "we don't have time").

Having some sort of reasonably standard packaging for the various
testsuites might help if it is a lack of time/interest, it'd reduce
the incremental effort to pick them up and put them into whatever
infrastructure is being used.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-23 11:47 [Ksummit-discuss] [CORE TOPIC] kernel testing standard Masami Hiramatsu
  2014-05-23 13:32 ` Jason Cooper
  2014-05-23 14:05 ` Justin M. Forbes
@ 2014-05-28 15:37 ` Mel Gorman
  2014-05-28 18:57   ` Greg KH
  2 siblings, 1 reply; 36+ messages in thread
From: Mel Gorman @ 2014-05-28 15:37 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Fri, May 23, 2014 at 08:47:29PM +0900, Masami Hiramatsu wrote:
> Hi,
> 
> As I discussed with Greg K.H. at LinuxCon Japan yesterday,
> I'd like to propose kernel testing standard as a separated topic.
> 
> Issue:
> There are many ways to test the kernel but it's neither well documented
> nor standardized/organized.
> 
> As you may know, testing kernel is important on each phase of kernel
> life-cycle. For example, even at the designing phase, actual test-case
> shows us what the new feature/design does, how is will work, and how
> to use it. This can improve the quality of the discussion.
> 
> Through the previous discussion I realized there are many different methods/
> tools/functions for testing kernel, LTP, trinity, tools/testing/selftest,
> in-kernel selftest etc. Each has good points and bad points.
> 
> So, I'd like to discuss how we can standardize them for each subsystem
> at this kernel summit.
> 
> My suggestion are,
> - Organizing existing in-tree kernel test frameworks (as "make test")
> - Documenting the standard testing method, including how to run,
>   how to add test-cases, and how to report.
> - Commenting standard testing for each subsystem, maybe by adding
>   UT: or TS: tags to MAINTAINERS, which describes the URL of
>   out-of-tree tests or the directory of the selftest.
> 

I'm not sure we can ever standardise all forms of kernel testing. Even
a simple "make test" is going to run into problems and it will be
hamstrung. It'll either be too short-lived with poor coverage in which case
it catches nothing useful or too long-lived in which case no one will run it.

For example, I have infrastructure that conducts automated performance
tests which I periodically dig through looking for problems. IMO, it is
only testing the basics of the areas I tend to work in and even then it
takes about 4-5 days to test a single kernel. Something like that will
never fit in "make test".

make test will be fine for feature verification and some functional
verification that does not depend on hardware. So new APIs should have test
cases that demonstrate the feature works and make test would be great for
that which is something that is not enforced today. As LTP is reported to
be sane these days for some tests, it could conceivably be wrapped by "make
test" to avoid duplicating effort there. I think that would be worthwhile
if someone had the time to push it because it would be an unconditional win.

However, beware of attempting to put all testing under its banner as
performance testing is never going to fully fit under its umbrella.
I'd even be wary of attempting to mandate a "standard testing method"
because it's situational. I'd even be wary of specifying particular
benchmarks as the same benchmark in different configurations may test
completely different things. fsmark with the most basic tuning options can
test metadata update performance, in-memory page cache performance or IO
performance depending on the parameters given. Similarly, attempting to
define tests on a per-subsystem basis will also be hazardous because any
interesting test is going to cross multiple subsystems.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-28 15:37 ` Mel Gorman
@ 2014-05-28 18:57   ` Greg KH
  2014-05-30 12:07     ` Linus Walleij
  0 siblings, 1 reply; 36+ messages in thread
From: Greg KH @ 2014-05-28 18:57 UTC (permalink / raw)
  To: Mel Gorman; +Cc: ksummit-discuss

On Wed, May 28, 2014 at 04:37:02PM +0100, Mel Gorman wrote:
> make test will be fine for feature verification and some functional
> verification that does not depend on hardware. So new APIs should have test
> cases that demonstrate the feature works and make test would be great for
> that which is something that is not enforced today. As LTP is reported to
> be sane these days for some tests, it could conceivably be wrapped by "make
> test" to avoid duplicating effort there. I think that would be worthwhile
> if someone had the time to push it because it would be an unconditional win.

That is what I have been asking for for _years_.  Hopefully someday
someone does this...

> However, beware of attempting to put all testing under its banner as
> performance testing is never going to fully fit under its umbrella.

No one is saying this is going to be true, but at the least, we want a
way to document _how_ one can run the performance tests if you want to.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-28 18:57   ` Greg KH
@ 2014-05-30 12:07     ` Linus Walleij
  2014-06-05  0:23       ` Greg KH
  0 siblings, 1 reply; 36+ messages in thread
From: Linus Walleij @ 2014-05-30 12:07 UTC (permalink / raw)
  To: Greg KH; +Cc: ksummit-discuss

On Wed, May 28, 2014 at 8:57 PM, Greg KH <greg@kroah.com> wrote:
> On Wed, May 28, 2014 at 04:37:02PM +0100, Mel Gorman wrote:
>> As LTP is reported to
>> be sane these days for some tests, it could conceivably be wrapped by "make
>> test" to avoid duplicating effort there. I think that would be worthwhile
>> if someone had the time to push it because it would be an unconditional win.
>
> That is what I have been asking for for _years_.  Hopefully someday
> someone does this...

Does this read "let's pull LTP into the kernel source tree"?

I have a hard time seing what else it could mean.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-26 11:33     ` Masami Hiramatsu
@ 2014-05-30 18:35       ` Steven Rostedt
  2014-05-30 20:59         ` Kees Cook
  2014-05-30 22:53         ` Theodore Ts'o
  0 siblings, 2 replies; 36+ messages in thread
From: Steven Rostedt @ 2014-05-30 18:35 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Mon, 26 May 2014 20:33:38 +0900
Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:

 
> If we force to unify the test frameworks, it will neither be maintained
> nor used. Instead, if maintainers state what test they will run and how
> to maintain it, we are sure that each test will be done on the each
> subsystem branch, and before release, we can avoid to run all tests which
> requires many hardware and long time but just run a small number of tests
> (e.g. LTP, trinity, etc.)

I agree with this. Just having a single place to put tests or tell
people where the tests are would be a huge improvement. If I wanted to
run my own tests on ext file systems, I should be able to set up the
same environment that Ted uses. If someone wants to run my ftrace
tests, then they should be able to as well (which I need to make
available to the general public). Better yet, this can open up a door
for people to contribute to new tests for a particular subsystem. I
would love it if people added tests for me to run on ftrace. I have a
bunch of hacks to test various functionality (as they are hacky, that's
the reason I haven't posted them yet).

This shouldn't be about "make tests" which I think is silly. But a way
to standardize tests, or at least have a single repository to show how
different parts of the kernel is tested. My tests require running as
root, other tests should not require that. This is just an example of
how different tests have different requirements and no one size fits
all.

-- Steve

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-30 18:35       ` Steven Rostedt
@ 2014-05-30 20:59         ` Kees Cook
  2014-05-30 22:53         ` Theodore Ts'o
  1 sibling, 0 replies; 36+ messages in thread
From: Kees Cook @ 2014-05-30 20:59 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: ksummit-discuss

On Fri, May 30, 2014 at 11:35 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Mon, 26 May 2014 20:33:38 +0900
> Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:
>
>
>> If we force to unify the test frameworks, it will neither be maintained
>> nor used. Instead, if maintainers state what test they will run and how
>> to maintain it, we are sure that each test will be done on the each
>> subsystem branch, and before release, we can avoid to run all tests which
>> requires many hardware and long time but just run a small number of tests
>> (e.g. LTP, trinity, etc.)
>
> I agree with this. Just having a single place to put tests or tell
> people where the tests are would be a huge improvement. If I wanted to
> run my own tests on ext file systems, I should be able to set up the
> same environment that Ted uses. If someone wants to run my ftrace
> tests, then they should be able to as well (which I need to make
> available to the general public). Better yet, this can open up a door
> for people to contribute to new tests for a particular subsystem. I
> would love it if people added tests for me to run on ftrace. I have a
> bunch of hacks to test various functionality (as they are hacky, that's
> the reason I haven't posted them yet).
>
> This shouldn't be about "make tests" which I think is silly. But a way
> to standardize tests, or at least have a single repository to show how
> different parts of the kernel is tested. My tests require running as
> root, other tests should not require that. This is just an example of
> how different tests have different requirements and no one size fits
> all.

I've been adding stuff in tools/testing/selftests when ever I can.
Based on the stuff living in there, I think there several things
needed for being able to hook stuff up to a framework:

common reporting/exit code handling: right now "make run_tests" mostly
requires a human to read everything and decide if something failed.
That's no good.

common "starting state" documentation: the various tests require
changes to CONFIG items, sysctls, uid, etc. Having a common format for
describing these states mean a framework would be able to, say, build
randconfig and then pick the correct subset of tests to run. Or
changing sysctls before running a test.

(And as an example of code work to be done: on my TODO list is taking
the seccomp-bpf test suite, currently living on github, and getting it
added to the selftests tree.)

-Kees

-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-30 18:35       ` Steven Rostedt
  2014-05-30 20:59         ` Kees Cook
@ 2014-05-30 22:53         ` Theodore Ts'o
  2014-06-04 13:51           ` Masami Hiramatsu
  1 sibling, 1 reply; 36+ messages in thread
From: Theodore Ts'o @ 2014-05-30 22:53 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: ksummit-discuss

On Fri, May 30, 2014 at 02:35:30PM -0400, Steven Rostedt wrote:
> 
> I agree with this. Just having a single place to put tests or tell
> people where the tests are would be a huge improvement. If I wanted to
> run my own tests on ext file systems, I should be able to set up the
> same environment that Ted uses.

And you can!

	https://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/README

My test environment has been designed so it's fully reproducible, and
I've been pushing ext4 developers to use it before submitting me
patches.  And if they don't, and I detect a problem which bisects back
to one of their patches, I let them know gently that if they had used
kvm-xfstests, they would have detected earlier.

Maybe just adding a field in the MAINTAINERS file which contains a
pointer to the tests is the simplest solution?

> If someone wants to run my ftrace tests, then they should be able to
> as well (which I need to make available to the general
> public). Better yet, this can open up a door for people to
> contribute to new tests for a particular subsystem. I would love it
> if people added tests for me to run on ftrace. I have a bunch of
> hacks to test various functionality (as they are hacky, that's the
> reason I haven't posted them yet).

Yeah, it took a while before I had done enough cleanup that I was
willing to make thme fully public.  Now that I have, it's saved me a
lot of time, most definitely.

					- Ted

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-30 22:53         ` Theodore Ts'o
@ 2014-06-04 13:51           ` Masami Hiramatsu
  0 siblings, 0 replies; 36+ messages in thread
From: Masami Hiramatsu @ 2014-06-04 13:51 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: ksummit-discuss

(2014/05/31 7:53), Theodore Ts'o wrote:
> On Fri, May 30, 2014 at 02:35:30PM -0400, Steven Rostedt wrote:
>>
>> I agree with this. Just having a single place to put tests or tell
>> people where the tests are would be a huge improvement. If I wanted to
>> run my own tests on ext file systems, I should be able to set up the
>> same environment that Ted uses.
> 
> And you can!
> 
> 	https://git.kernel.org/cgit/fs/ext2/xfstests-bld.git/tree/README
> 
> My test environment has been designed so it's fully reproducible, and
> I've been pushing ext4 developers to use it before submitting me
> patches.  And if they don't, and I detect a problem which bisects back
> to one of their patches, I let them know gently that if they had used
> kvm-xfstests, they would have detected earlier.
> 
> Maybe just adding a field in the MAINTAINERS file which contains a
> pointer to the tests is the simplest solution?

Yes, I think at least that should be done. It is also good to know
what hardware is required to run the test for some subsystems.

And then, if we can share them among kernel developers, we can
step things forward to ensuring testing process in release/development
cycle. e.g. all subsystem maintainers run own tests before sending pull
request and right after rebasing kernel. This can ensures no regression
on that subsystem.

Thank you,

> 
>> If someone wants to run my ftrace tests, then they should be able to
>> as well (which I need to make available to the general
>> public). Better yet, this can open up a door for people to
>> contribute to new tests for a particular subsystem. I would love it
>> if people added tests for me to run on ftrace. I have a bunch of
>> hacks to test various functionality (as they are hacky, that's the
>> reason I haven't posted them yet).
> 
> Yeah, it took a while before I had done enough cleanup that I was
> willing to make thme fully public.  Now that I have, it's saved me a
> lot of time, most definitely.
> 
> 					- Ted
> 


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-05-30 12:07     ` Linus Walleij
@ 2014-06-05  0:23       ` Greg KH
  2014-06-05  6:54         ` Mel Gorman
  0 siblings, 1 reply; 36+ messages in thread
From: Greg KH @ 2014-06-05  0:23 UTC (permalink / raw)
  To: Linus Walleij; +Cc: ksummit-discuss

On Fri, May 30, 2014 at 02:07:36PM +0200, Linus Walleij wrote:
> On Wed, May 28, 2014 at 8:57 PM, Greg KH <greg@kroah.com> wrote:
> > On Wed, May 28, 2014 at 04:37:02PM +0100, Mel Gorman wrote:
> >> As LTP is reported to
> >> be sane these days for some tests, it could conceivably be wrapped by "make
> >> test" to avoid duplicating effort there. I think that would be worthwhile
> >> if someone had the time to push it because it would be an unconditional win.
> >
> > That is what I have been asking for for _years_.  Hopefully someday
> > someone does this...
> 
> Does this read "let's pull LTP into the kernel source tree"?

Everyone always asks this, and I say, "Sure, why not?"

Actually, if people do complain about "why", then let's take the
"useful" subset of LTP for kernel developers.  It's a great place to
start, don't you agree?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05  0:23       ` Greg KH
@ 2014-06-05  6:54         ` Mel Gorman
  2014-06-05  8:30           ` Geert Uytterhoeven
  2014-06-05  8:39           ` chrubis
  0 siblings, 2 replies; 36+ messages in thread
From: Mel Gorman @ 2014-06-05  6:54 UTC (permalink / raw)
  To: Greg KH, chrubis; +Cc: ksummit-discuss

On Wed, Jun 04, 2014 at 05:23:31PM -0700, Greg KH wrote:
> On Fri, May 30, 2014 at 02:07:36PM +0200, Linus Walleij wrote:
> > On Wed, May 28, 2014 at 8:57 PM, Greg KH <greg@kroah.com> wrote:
> > > On Wed, May 28, 2014 at 04:37:02PM +0100, Mel Gorman wrote:
> > >> As LTP is reported to
> > >> be sane these days for some tests, it could conceivably be wrapped by "make
> > >> test" to avoid duplicating effort there. I think that would be worthwhile
> > >> if someone had the time to push it because it would be an unconditional win.
> > >
> > > That is what I have been asking for for _years_.  Hopefully someday
> > > someone does this...
> > 
> > Does this read "let's pull LTP into the kernel source tree"?
> 
> Everyone always asks this, and I say, "Sure, why not?"
> 
> Actually, if people do complain about "why", then let's take the
> "useful" subset of LTP for kernel developers.  It's a great place to
> start, don't you agree?
> 

Cyril Hrubis is an LTP expert who has spend a considerable amount of time
cleaning it up but is not often seen in kernel development circles so I added
him to the cc. He's stated before that there is a large subset of LTP that
is considerably cleaner than it was a few years ago. Cyril, you are probably
not subscribed but the list archives for the "kernel testing standard" thread
can be seen at http://lists.linuxfoundation.org/pipermail/ksummit-discuss/
if you dig around a bit.

There is a hazard that someone bisecting the tree would need to be careful
to not bisect LTP instead. Otherwise, in your opinion how feasible
would it be to import the parts of LTP you trust into the kernel tree
under tools/testing/ ?

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05  6:54         ` Mel Gorman
@ 2014-06-05  8:30           ` Geert Uytterhoeven
  2014-06-05  8:44             ` chrubis
                               ` (2 more replies)
  2014-06-05  8:39           ` chrubis
  1 sibling, 3 replies; 36+ messages in thread
From: Geert Uytterhoeven @ 2014-06-05  8:30 UTC (permalink / raw)
  To: Mel Gorman; +Cc: ksummit-discuss, chrubis

On Thu, Jun 5, 2014 at 8:54 AM, Mel Gorman <mgorman@suse.de> wrote:
> There is a hazard that someone bisecting the tree would need to be careful
> to not bisect LTP instead.

That may actually be a good reason not to import LTP...
I'd imagine you usually want to bisect the kernel to find when a regression
was introduced in the syscall API.

Is there a reason not to run the latest version of LTP (unless bisecting
LTP ;-)? The syscall API is supposed to be stable.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05  6:54         ` Mel Gorman
  2014-06-05  8:30           ` Geert Uytterhoeven
@ 2014-06-05  8:39           ` chrubis
  1 sibling, 0 replies; 36+ messages in thread
From: chrubis @ 2014-06-05  8:39 UTC (permalink / raw)
  To: Mel Gorman; +Cc: ksummit-discuss

Hi!
> > > >> As LTP is reported to
> > > >> be sane these days for some tests, it could conceivably be wrapped by "make
> > > >> test" to avoid duplicating effort there. I think that would be worthwhile
> > > >> if someone had the time to push it because it would be an unconditional win.
> > > >
> > > > That is what I have been asking for for _years_.  Hopefully someday
> > > > someone does this...
> > > 
> > > Does this read "let's pull LTP into the kernel source tree"?
> > 
> > Everyone always asks this, and I say, "Sure, why not?"
> > 
> > Actually, if people do complain about "why", then let's take the
> > "useful" subset of LTP for kernel developers.  It's a great place to
> > start, don't you agree?

This sure seems to be recurring topic. My opinion is that this wouldn't
be huge win.

There are a few downsides to this approach as well:

LTP is backward compatible and it makes sense to use the latest version
even for a few years old kernel/distro, because that brings you more
coverage and fixes. If it was part of the kernel tree there would be a
danger of people using outdated version packed with the kernel sources.

Splitting LTP it into two parts is in my opinion wrong thing to do,
because you will end up with two places that have to share some files
which will impose overhead in maintenance and confusion. Actually I've
looked over the testcases LTP has and it seems that there is a very
little testcases that are not kernel related (there are a few tests for
userspace tools (such as tar, unzip, su, at...) and some network tests).

> Cyril Hrubis is an LTP expert who has spend a considerable amount of time
> cleaning it up but is not often seen in kernel development circles so I added
> him to the cc. He's stated before that there is a large subset of LTP that
> is considerably cleaner than it was a few years ago. Cyril, you are probably
> not subscribed but the list archives for the "kernel testing standard" thread
> can be seen at http://lists.linuxfoundation.org/pipermail/ksummit-discuss/
> if you dig around a bit.

I've read this thread and what it's largely about is integration and
documentation both can be easily done even without pulling LTP into
kernel git tree.

Integrating LTP into 'make test' would be reasonably easy task. You just
need to download latest sources, build them, install and run it, then
look into single file for list of failures.

The more complicated part, as Ted said, is to figure out which tests to
run and on what occasion. For that you have to keep a list of testcases
per a group of interest which is time consuming and prone to error.

> There is a hazard that someone bisecting the tree would need to be careful
> to not bisect LTP instead. Otherwise, in your opinion how feasible
> would it be to import the parts of LTP you trust into the kernel tree
> under tools/testing/ ?

That depends on how closely you want to integrate with the rest of the
source tree. Copying all needed pieces from LTP source there would be
fairly easy.

However as I wrote earlier I want to avoid splitting the LTP source tree
in half. That would overly complicate the way LTP community works now.
What we do, for quite some time now, is to fix/cleanup testcase by
testcase and enable the fixed ones in default testrun.

-- 
Cyril Hrubis
chrubis@suse.cz

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05  8:30           ` Geert Uytterhoeven
@ 2014-06-05  8:44             ` chrubis
  2014-06-05  8:53             ` Daniel Vetter
  2014-06-05 14:10             ` James Bottomley
  2 siblings, 0 replies; 36+ messages in thread
From: chrubis @ 2014-06-05  8:44 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: ksummit-discuss

Hi!
> That may actually be a good reason not to import LTP...
> I'd imagine you usually want to bisect the kernel to find when a regression
> was introduced in the syscall API.
> 
> Is there a reason not to run the latest version of LTP (unless bisecting
> LTP ;-)? The syscall API is supposed to be stable.

They mostly are (there are some errno changes from time to time, etc.).

But with each release LTP gets more test coverage and considerable
amount of bugfixes.

As we are still in a phase where we are cleaing and reviewing legacy
code, the amount of bugfixes is quite high. I guess that the amount of
bugfixes will drop in a fortcoming years but for now running latest LTP
is very important.

-- 
Cyril Hrubis
chrubis@suse.cz

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05  8:30           ` Geert Uytterhoeven
  2014-06-05  8:44             ` chrubis
@ 2014-06-05  8:53             ` Daniel Vetter
  2014-06-05 11:17               ` Masami Hiramatsu
  2014-06-05 14:10             ` James Bottomley
  2 siblings, 1 reply; 36+ messages in thread
From: Daniel Vetter @ 2014-06-05  8:53 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: chrubis, ksummit-discuss

On Thu, Jun 5, 2014 at 10:30 AM, Geert Uytterhoeven
<geert@linux-m68k.org> wrote:
> On Thu, Jun 5, 2014 at 8:54 AM, Mel Gorman <mgorman@suse.de> wrote:
>> There is a hazard that someone bisecting the tree would need to be careful
>> to not bisect LTP instead.
>
> That may actually be a good reason not to import LTP...
> I'd imagine you usually want to bisect the kernel to find when a regression
> was introduced in the syscall API.
>
> Is there a reason not to run the latest version of LTP (unless bisecting
> LTP ;-)? The syscall API is supposed to be stable.

Same for validating backports - you want the latest testsuite to make
sure you don't miss important fixes. Downside is that the testsuite
needs to be compatible over a much wider range of kernels to be
useful, which is a pain for e.g. checking that garbage in reserved
fields (like reserved flags) are properly rejected on each kernel
version.

For drm/i915 testing we try to have limited compat for a few kernel
releases (mostly for our own release testing), but still end up with a
few wontfix bugs each release because we've fumbled it (or figured
it's not worth the effort to make the tests more complicated).
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05  8:53             ` Daniel Vetter
@ 2014-06-05 11:17               ` Masami Hiramatsu
  2014-06-05 11:58                 ` Daniel Vetter
  0 siblings, 1 reply; 36+ messages in thread
From: Masami Hiramatsu @ 2014-06-05 11:17 UTC (permalink / raw)
  To: ksummit-discuss

(2014/06/05 17:53), Daniel Vetter wrote:
> On Thu, Jun 5, 2014 at 10:30 AM, Geert Uytterhoeven
> <geert@linux-m68k.org> wrote:
>> On Thu, Jun 5, 2014 at 8:54 AM, Mel Gorman <mgorman@suse.de> wrote:
>>> There is a hazard that someone bisecting the tree would need to be careful
>>> to not bisect LTP instead.
>>
>> That may actually be a good reason not to import LTP...
>> I'd imagine you usually want to bisect the kernel to find when a regression
>> was introduced in the syscall API.
>>
>> Is there a reason not to run the latest version of LTP (unless bisecting
>> LTP ;-)? The syscall API is supposed to be stable.
> 
> Same for validating backports - you want the latest testsuite to make
> sure you don't miss important fixes. Downside is that the testsuite
> needs to be compatible over a much wider range of kernels to be
> useful, which is a pain for e.g. checking that garbage in reserved
> fields (like reserved flags) are properly rejected on each kernel
> version.

Perhaps, the testsuite can recognize which patch is merged or not
if it can access the git repository by "git log | grep <commit-id>"
or something like that. Then, it can self-configure to reject
non-supported test. :)

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05 11:17               ` Masami Hiramatsu
@ 2014-06-05 11:58                 ` Daniel Vetter
  2014-06-06  9:10                   ` Masami Hiramatsu
  0 siblings, 1 reply; 36+ messages in thread
From: Daniel Vetter @ 2014-06-05 11:58 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Thu, Jun 5, 2014 at 1:17 PM, Masami Hiramatsu
<masami.hiramatsu.pt@hitachi.com> wrote:
> (2014/06/05 17:53), Daniel Vetter wrote:
>> On Thu, Jun 5, 2014 at 10:30 AM, Geert Uytterhoeven
>> <geert@linux-m68k.org> wrote:
>>> On Thu, Jun 5, 2014 at 8:54 AM, Mel Gorman <mgorman@suse.de> wrote:
>>>> There is a hazard that someone bisecting the tree would need to be careful
>>>> to not bisect LTP instead.
>>>
>>> That may actually be a good reason not to import LTP...
>>> I'd imagine you usually want to bisect the kernel to find when a regression
>>> was introduced in the syscall API.
>>>
>>> Is there a reason not to run the latest version of LTP (unless bisecting
>>> LTP ;-)? The syscall API is supposed to be stable.
>>
>> Same for validating backports - you want the latest testsuite to make
>> sure you don't miss important fixes. Downside is that the testsuite
>> needs to be compatible over a much wider range of kernels to be
>> useful, which is a pain for e.g. checking that garbage in reserved
>> fields (like reserved flags) are properly rejected on each kernel
>> version.
>
> Perhaps, the testsuite can recognize which patch is merged or not
> if it can access the git repository by "git log | grep <commit-id>"
> or something like that. Then, it can self-configure to reject
> non-supported test. :)

Well we have feature flags and stuff (need them anyway) and magic
macros to skip tests if something isn't there/supported. So (at least
for us) not a technical issue at all. The problem is in getting these
tests right in all cases and maintaining them while updating tests for
new features.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05  8:30           ` Geert Uytterhoeven
  2014-06-05  8:44             ` chrubis
  2014-06-05  8:53             ` Daniel Vetter
@ 2014-06-05 14:10             ` James Bottomley
  2014-06-06  9:17               ` Masami Hiramatsu
  2014-06-09 14:44               ` chrubis
  2 siblings, 2 replies; 36+ messages in thread
From: James Bottomley @ 2014-06-05 14:10 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: chrubis, ksummit-discuss

On Thu, 2014-06-05 at 10:30 +0200, Geert Uytterhoeven wrote:
> On Thu, Jun 5, 2014 at 8:54 AM, Mel Gorman <mgorman@suse.de> wrote:
> > There is a hazard that someone bisecting the tree would need to be careful
> > to not bisect LTP instead.
> 
> That may actually be a good reason not to import LTP...
> I'd imagine you usually want to bisect the kernel to find when a regression
> was introduced in the syscall API.

I agree with this.  One of the things we might like to ask to be fixed
about bisect is the ability to exclude paths.  You can do a git bisect
with every top level directory except test, but it's a bit cumbersome.

> Is there a reason not to run the latest version of LTP (unless bisecting
> LTP ;-)? The syscall API is supposed to be stable.

I think not, and we have strong reasons for wanting to run the latest
LTP against every kernel (including stable ones), not just the version
in the test directory, so in practise, it looks like this doesn't meet
the changes with the kernel test for inclusion.  On the other hand,
having the tests available is also useful.   Perhaps we just need a
tests repo which pulls from all our other disparate tests so there's one
location everyone knows to go for the latest?

James

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05 11:58                 ` Daniel Vetter
@ 2014-06-06  9:10                   ` Masami Hiramatsu
  0 siblings, 0 replies; 36+ messages in thread
From: Masami Hiramatsu @ 2014-06-06  9:10 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: ksummit-discuss

(2014/06/05 20:58), Daniel Vetter wrote:
> On Thu, Jun 5, 2014 at 1:17 PM, Masami Hiramatsu
> <masami.hiramatsu.pt@hitachi.com> wrote:
>> (2014/06/05 17:53), Daniel Vetter wrote:
>>> On Thu, Jun 5, 2014 at 10:30 AM, Geert Uytterhoeven
>>> <geert@linux-m68k.org> wrote:
>>>> On Thu, Jun 5, 2014 at 8:54 AM, Mel Gorman <mgorman@suse.de> wrote:
>>>>> There is a hazard that someone bisecting the tree would need to be careful
>>>>> to not bisect LTP instead.
>>>>
>>>> That may actually be a good reason not to import LTP...
>>>> I'd imagine you usually want to bisect the kernel to find when a regression
>>>> was introduced in the syscall API.
>>>>
>>>> Is there a reason not to run the latest version of LTP (unless bisecting
>>>> LTP ;-)? The syscall API is supposed to be stable.
>>>
>>> Same for validating backports - you want the latest testsuite to make
>>> sure you don't miss important fixes. Downside is that the testsuite
>>> needs to be compatible over a much wider range of kernels to be
>>> useful, which is a pain for e.g. checking that garbage in reserved
>>> fields (like reserved flags) are properly rejected on each kernel
>>> version.
>>
>> Perhaps, the testsuite can recognize which patch is merged or not
>> if it can access the git repository by "git log | grep <commit-id>"
>> or something like that. Then, it can self-configure to reject
>> non-supported test. :)
> 
> Well we have feature flags and stuff (need them anyway) and magic
> macros to skip tests if something isn't there/supported. So (at least
> for us) not a technical issue at all.

Nice :) , and I think we can use git commit-id as feature flags in other
subsystems. Since the patches in stable tree have [ Upstream commit <commit-id> ]
tag in the description, we can use them for checking which fixes are already
merged.

> The problem is in getting these
> tests right in all cases and maintaining them while updating tests for
> new features.

Is that a limitation comes from feature flags? or version synchronizing
problem?

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05 14:10             ` James Bottomley
@ 2014-06-06  9:17               ` Masami Hiramatsu
  2014-06-09 14:44               ` chrubis
  1 sibling, 0 replies; 36+ messages in thread
From: Masami Hiramatsu @ 2014-06-06  9:17 UTC (permalink / raw)
  To: ksummit-discuss

(2014/06/05 23:10), James Bottomley wrote:
> On Thu, 2014-06-05 at 10:30 +0200, Geert Uytterhoeven wrote:
>> On Thu, Jun 5, 2014 at 8:54 AM, Mel Gorman <mgorman@suse.de> wrote:
>>> There is a hazard that someone bisecting the tree would need to be careful
>>> to not bisect LTP instead.
>>
>> That may actually be a good reason not to import LTP...
>> I'd imagine you usually want to bisect the kernel to find when a regression
>> was introduced in the syscall API.
> 
> I agree with this.  One of the things we might like to ask to be fixed
> about bisect is the ability to exclude paths.  You can do a git bisect
> with every top level directory except test, but it's a bit cumbersome.
> 
>> Is there a reason not to run the latest version of LTP (unless bisecting
>> LTP ;-)? The syscall API is supposed to be stable.
> 
> I think not, and we have strong reasons for wanting to run the latest
> LTP against every kernel (including stable ones), not just the version
> in the test directory, so in practise, it looks like this doesn't meet
> the changes with the kernel test for inclusion.  On the other hand,
> having the tests available is also useful.   Perhaps we just need a
> tests repo which pulls from all our other disparate tests so there's one
> location everyone knows to go for the latest?

I agree. I guess that may be a script of chef or something like, which
pulls tests from other repos and build it. But in that case, why can't we
put it in the kernel tree itself?

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-05 14:10             ` James Bottomley
  2014-06-06  9:17               ` Masami Hiramatsu
@ 2014-06-09 14:44               ` chrubis
  2014-06-09 17:54                 ` Michael Kerrisk (man-pages)
  1 sibling, 1 reply; 36+ messages in thread
From: chrubis @ 2014-06-09 14:44 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss

Hi!
> > Is there a reason not to run the latest version of LTP (unless bisecting
> > LTP ;-)? The syscall API is supposed to be stable.
> 
> I think not, and we have strong reasons for wanting to run the latest
> LTP against every kernel (including stable ones), not just the version
> in the test directory, so in practise, it looks like this doesn't meet
> the changes with the kernel test for inclusion.  On the other hand,
> having the tests available is also useful.   Perhaps we just need a
> tests repo which pulls from all our other disparate tests so there's one
> location everyone knows to go for the latest?

That sounds good to me. But as allready said, creating some
scripts/repos that pulls and runs all the tests is relatively easy.
Creating configurations and figuring out who needs to run which parts is
not.

I think that the main problem here is the communication and information
sharing. Maybe we can start with a wiki page or a similar document that
summarizes maintained testsuites, their purpose and structure. Because
just now, if there is any information about kernel testing, it is
scattered around the web, forums, etc.

Also I would like to see more communication between the Kernel and QA.

It's getting a bit better as we have linux-api mailing list and (thanks
to Michaal Kerrisk) commits that change kernel API are starting to CC
it. Which I consider as a great improvement because now we at least know
what we need to write tests for. However I still think that there is
some work lost in the process, particulary because the kernel devs who
wrote the userspace API have surely implemented some kind of tests for
it and these may have been adapted and included into LTP which would be
far less work than writing testcases from scratch.

-- 
Cyril Hrubis
chrubis@suse.cz

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel testing standard
  2014-06-09 14:44               ` chrubis
@ 2014-06-09 17:54                 ` Michael Kerrisk (man-pages)
  0 siblings, 0 replies; 36+ messages in thread
From: Michael Kerrisk (man-pages) @ 2014-06-09 17:54 UTC (permalink / raw)
  To: chrubis, James Bottomley; +Cc: ksummit-discuss

On 06/09/2014 04:44 PM, chrubis@suse.cz wrote:
> Hi!
>>> Is there a reason not to run the latest version of LTP (unless bisecting
>>> LTP ;-)? The syscall API is supposed to be stable.
>>
>> I think not, and we have strong reasons for wanting to run the latest
>> LTP against every kernel (including stable ones), not just the version
>> in the test directory, so in practise, it looks like this doesn't meet
>> the changes with the kernel test for inclusion.  On the other hand,
>> having the tests available is also useful.   Perhaps we just need a
>> tests repo which pulls from all our other disparate tests so there's one
>> location everyone knows to go for the latest?
> 
> That sounds good to me. But as allready said, creating some
> scripts/repos that pulls and runs all the tests is relatively easy.
> Creating configurations and figuring out who needs to run which parts is
> not.
> 
> I think that the main problem here is the communication and information
> sharing. Maybe we can start with a wiki page or a similar document that
> summarizes maintained testsuites, their purpose and structure. Because
> just now, if there is any information about kernel testing, it is
> scattered around the web, forums, etc.
> 
> Also I would like to see more communication between the Kernel and QA.
> 
> It's getting a bit better as we have linux-api mailing list and (thanks
> to Michaal Kerrisk) commits that change kernel API are starting to CC
> it. Which I consider as a great improvement because now we at least know
> what we need to write tests for. However I still think that there is
> some work lost in the process, particulary because the kernel devs who
> wrote the userspace API have surely implemented some kind of tests for
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Actually, I can point to numerous examples that show that this sadly
could not have been the case. Very many times I've written tests for
an API, or some API feature, only to discover that the most basic of
tests fails--in other words, clearly no-on--including the kernel
dev--did any testing of that particular feature before me.

Cheers,

Michael


-- 
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Linux/UNIX System Programming Training: http://man7.org/training/

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2014-06-09 17:54 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-05-23 11:47 [Ksummit-discuss] [CORE TOPIC] kernel testing standard Masami Hiramatsu
2014-05-23 13:32 ` Jason Cooper
2014-05-23 16:24   ` Olof Johansson
2014-05-23 16:35     ` Guenter Roeck
2014-05-23 16:36     ` Jason Cooper
2014-05-23 18:10     ` Masami Hiramatsu
2014-05-23 18:36       ` Jason Cooper
2014-05-23 18:06   ` Masami Hiramatsu
2014-05-23 18:32     ` Jason Cooper
2014-05-23 14:05 ` Justin M. Forbes
2014-05-23 16:04   ` Mark Brown
2014-05-24  0:30   ` Theodore Ts'o
2014-05-24  1:15     ` Guenter Roeck
2014-05-26 11:33     ` Masami Hiramatsu
2014-05-30 18:35       ` Steven Rostedt
2014-05-30 20:59         ` Kees Cook
2014-05-30 22:53         ` Theodore Ts'o
2014-06-04 13:51           ` Masami Hiramatsu
2014-05-26 17:08     ` Daniel Vetter
2014-05-26 18:21       ` Mark Brown
2014-05-28 15:37 ` Mel Gorman
2014-05-28 18:57   ` Greg KH
2014-05-30 12:07     ` Linus Walleij
2014-06-05  0:23       ` Greg KH
2014-06-05  6:54         ` Mel Gorman
2014-06-05  8:30           ` Geert Uytterhoeven
2014-06-05  8:44             ` chrubis
2014-06-05  8:53             ` Daniel Vetter
2014-06-05 11:17               ` Masami Hiramatsu
2014-06-05 11:58                 ` Daniel Vetter
2014-06-06  9:10                   ` Masami Hiramatsu
2014-06-05 14:10             ` James Bottomley
2014-06-06  9:17               ` Masami Hiramatsu
2014-06-09 14:44               ` chrubis
2014-06-09 17:54                 ` Michael Kerrisk (man-pages)
2014-06-05  8:39           ` chrubis

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.