* [LSF/MM TOPIC] Test cases to choose for demonstrating mm features or fixing mm bugs
From: Balbir Singh @ 2019-01-28 11:20 UTC
  To: lsf-pc; +Cc: linux-mm

Sending a patch to linux-mm today has become a complex task. One of the
reasons for the complexity is the lack of a clear expectation of which
tests to run.

Mel Gorman maintains a set of tests [1], but there is no easy way to
select which ones to run. Some of them are proprietary (spec*), and
others have widely varying run times. A single-line change may require
hours or days of testing, on top of the complexity of configuration. It
takes a lot of tweaking and repeated test runs to settle on what to
run, which configuration to choose, and what benefit to show.
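
For illustration, a single benchmark run with mmtests typically looks
something like the following (the config file name is only a
placeholder; the available configs live under configs/ and vary
between mmtests versions):

  $ ./run-mmtests.sh --config configs/config-workload-example baseline
  $ # apply the patch, rebuild and reboot, then repeat:
  $ ./run-mmtests.sh --config configs/config-workload-example patched
  $ cd work/log && ../../compare-kernels.sh

Picking which of the many configs is actually relevant to a given
change is exactly the hard part.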

The proposal is to have a discussion on how to design a good sanity
test suite for the mm subsystem, which could potentially include OOM
test cases and known problem patterns seen with proposed changes.
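
To make the OOM part concrete, here is one hypothetical shape such a
test case could take (just a sketch, not an existing test; it should
be run inside a memory-limited cgroup so a failure does not take down
the whole machine):

  /* Touch anonymous memory in 128MB chunks until mmap() fails or the
   * OOM killer intervenes. Build with: gcc -O2 -o oomtest oomtest.c */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  #define CHUNK (128UL << 20)   /* 128MB per allocation */

  int main(void)
  {
          unsigned long total = 0;

          for (;;) {
                  void *p = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                  if (p == MAP_FAILED) {
                          perror("mmap");
                          break;
                  }
                  /* Fault every page in so memory is really consumed. */
                  memset(p, 0xaa, CHUNK);
                  total += CHUNK;
                  printf("touched %lu MB\n", total >> 20);
          }
          return 0;
  }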

It would be great if we could discuss this at the summit this time;
all attendees are welcome and encouraged to participate.


References:

[1] https://github.com/gormanm/mmtests

* Re: [LSF/MM TOPIC] Test cases to choose for demonstrating mm features or fixing mm bugs
From: Michal Hocko @ 2019-01-28 11:34 UTC
  To: Balbir Singh; +Cc: lsf-pc, linux-mm

On Mon 28-01-19 22:20:33, Balbir Singh wrote:
> Sending a patch to linux-mm today has become a complex task. One of the
> reasons for the complexity is the lack of a clear expectation of which
> tests to run.
> 
> Mel Gorman maintains a set of tests [1], but there is no easy way to
> select which ones to run. Some of them are proprietary (spec*), and
> others have widely varying run times. A single-line change may require
> hours or days of testing, on top of the complexity of configuration. It
> takes a lot of tweaking and repeated test runs to settle on what to
> run, which configuration to choose, and what benefit to show.
> 
> The proposal is to have a discussion on how to design a good sanity
> test suite for the mm subsystem, which could potentially include OOM
> test cases and known problem patterns seen with proposed changes.

I am not sure I follow. What is the problem you would like to solve?
If tests are taking too long, then most probably there is a good reason
for that. Are you thinking of any specific tests which should be run,
or even included in mmtests or similar?
-- 
Michal Hocko
SUSE Labs

* Re: [LSF/MM TOPIC] Test cases to choose for demonstrating mm features or fixing mm bugs
From: Balbir Singh @ 2019-01-29 10:43 UTC
  To: Michal Hocko; +Cc: lsf-pc, linux-mm

On Mon, Jan 28, 2019 at 12:34:42PM +0100, Michal Hocko wrote:
> On Mon 28-01-19 22:20:33, Balbir Singh wrote:
> > Sending a patch to linux-mm today has become a complex task. One of the
> > reasons for the complexity is the lack of a clear expectation of which
> > tests to run.
> > 
> > Mel Gorman maintains a set of tests [1], but there is no easy way to
> > select which ones to run. Some of them are proprietary (spec*), and
> > others have widely varying run times. A single-line change may require
> > hours or days of testing, on top of the complexity of configuration. It
> > takes a lot of tweaking and repeated test runs to settle on what to
> > run, which configuration to choose, and what benefit to show.
> > 
> > The proposal is to have a discussion on how to design a good sanity
> > test suite for the mm subsystem, which could potentially include OOM
> > test cases and known problem patterns seen with proposed changes.
> 
> I am not sure I follow. What is the problem you would like to solve?
> If tests are taking too long, then most probably there is a good reason
> for that. Are you thinking of any specific tests which should be run,
> or even included in mmtests or similar?

Let me elaborate. Every time I find something interesting to develop
or fix, I also have to think about how to test the changes. For
well-established code (such as reclaim), or even for other features, it
is hard to find good test cases to run as a baseline to ensure that

1. There is good coverage of tests against the changes
2. The right test cases have been run from a performance perspective

The reason I brought up time was not the run time of a single test,
but the cumulative time of all the tests one ends up running in the
absence of good guidance for (1) and (2) above.

IOW, what guidance can we provide to patch writers and bug fixers in terms
of what testing to carry out? How do we avoid biases in results and
ensure consistency?
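
As a minimal baseline (a suggestion only, not an established
convention), one could at least expect the kernel's own vm selftests
to pass:

  $ make -C tools/testing/selftests TARGETS=vm run_tests

but that says little about reclaim behaviour or performance, which is
where the guidance is really missing.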

Balbir Singh.


* Re: [LSF/MM TOPIC] Test cases to choose for demonstrating mm features or fixing mm bugs
From: Michal Hocko @ 2019-01-29 11:26 UTC
  To: Balbir Singh; +Cc: lsf-pc, linux-mm

On Tue 29-01-19 21:43:28, Balbir Singh wrote:
> On Mon, Jan 28, 2019 at 12:34:42PM +0100, Michal Hocko wrote:
> > On Mon 28-01-19 22:20:33, Balbir Singh wrote:
> > > Sending a patch to linux-mm today has become a complex task. One of the
> > > reasons for the complexity is the lack of a clear expectation of which
> > > tests to run.
> > > 
> > > Mel Gorman maintains a set of tests [1], but there is no easy way to
> > > select which ones to run. Some of them are proprietary (spec*), and
> > > others have widely varying run times. A single-line change may require
> > > hours or days of testing, on top of the complexity of configuration. It
> > > takes a lot of tweaking and repeated test runs to settle on what to
> > > run, which configuration to choose, and what benefit to show.
> > > 
> > > The proposal is to have a discussion on how to design a good sanity
> > > test suite for the mm subsystem, which could potentially include OOM
> > > test cases and known problem patterns seen with proposed changes.
> > 
> > I am not sure I follow. What is the problem you would like to solve?
> > If tests are taking too long, then most probably there is a good reason
> > for that. Are you thinking of any specific tests which should be run,
> > or even included in mmtests or similar?
> 
> Let me elaborate. Every time I find something interesting to develop
> or fix, I also have to think about how to test the changes. For
> well-established code (such as reclaim), or even for other features, it
> is hard to find good test cases to run as a baseline to ensure that
> 
> 1. There is good coverage of tests against the changes
> 2. The right test cases have been run from a performance perspective
> 
> The reason I brought up time was not the run time of a single test,
> but the cumulative time of all the tests one ends up running in the
> absence of good guidance for (1) and (2) above.
> 
> IOW, what guidance can we provide to patch writers and bug fixers in terms
> of what testing to carry out? How do we avoid biases in results and
> ensure consistency?

Well, I am afraid there is no reference workload for reclaim behavior
or for many of the other heuristics MM uses. This will always be
workload dependent. Mel's mmtests have a wide variety of workloads, and
there might well be more. The most important part is how well those
represent real workloads people care about.

Abstracting workloads which are not in the test suite yet is definitely
a step in the right direction.
-- 
Michal Hocko
SUSE Labs

