All of lore.kernel.org
 help / color / mirror / Atom feed
* [Ksummit-discuss] [CORE TOPIC] Testing
@ 2015-07-07  9:24 Mark Brown
  2015-07-07 13:02 ` Alexey Dobriyan
  2015-07-07 15:25 ` Guenter Roeck
  0 siblings, 2 replies; 45+ messages in thread
From: Mark Brown @ 2015-07-07  9:24 UTC (permalink / raw)
  To: ksummit-discuss; +Cc: Shuah Khan, Kevin Hilman, Tyler Baker, Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 1456 bytes --]

One thing we typically cover at Kernel Summit is some of the activity
that's going on around testing upstream.  I think it'd be useful to have
some more of those discussions, both in terms of making people aware of
what's available and in terms of helping the people doing testing figure
out what would be useful.  A lot of this is probably well suited to a
workshop session between the interested people but I do think some
element in the core day beyond just a readout will be useful.

The main things I'm aware of that are happening at the minute are
kselftest development, the 0day tester, plus kernelci.org and the other
build and boot/test bots that are running against various trees.

In terms of discussion topics some of the issues I'm seeing are:

 - Can we pool resources to share the workload of running things and
   interpreting results, ideally also providing some central way for
   people to discover what results are out there for them to look at
   for a given kernel in the different systems?

 - Should we start carrying config fragments upstream designed to
   support testing, things like the distro config fragments that keep
   getting discussed are one example here but there's other things like
   collections of debug options we could be looking at.  Should we be
   more generally slimming defconfigs and moving things into fragments?

and there's always the the perennial ones about what people would like
to see testing for.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07  9:24 [Ksummit-discuss] [CORE TOPIC] Testing Mark Brown
@ 2015-07-07 13:02 ` Alexey Dobriyan
  2015-07-07 13:14   ` Mark Brown
  2015-07-07 15:25 ` Guenter Roeck
  1 sibling, 1 reply; 45+ messages in thread
From: Alexey Dobriyan @ 2015-07-07 13:02 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, Dan Carpenter, ksummit-discuss

On Tue, Jul 7, 2015 at 12:24 PM, Mark Brown <broonie@kernel.org> wrote:

>  - Should we start carrying config fragments upstream designed to
>    support testing, things like the distro config fragments that keep
>    getting discussed are one example here but there's other things like
>    collections of debug options we could be looking at.

This will gravitate everyone to running the same config which is the opposite
of what you want.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 13:02 ` Alexey Dobriyan
@ 2015-07-07 13:14   ` Mark Brown
  2015-07-07 18:47     ` Steven Rostedt
  0 siblings, 1 reply; 45+ messages in thread
From: Mark Brown @ 2015-07-07 13:14 UTC (permalink / raw)
  To: Alexey Dobriyan
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, Dan Carpenter, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 887 bytes --]

On Tue, Jul 07, 2015 at 04:02:13PM +0300, Alexey Dobriyan wrote:
> On Tue, Jul 7, 2015 at 12:24 PM, Mark Brown <broonie@kernel.org> wrote:

> >  - Should we start carrying config fragments upstream designed to
> >    support testing, things like the distro config fragments that keep
> >    getting discussed are one example here but there's other things like
> >    collections of debug options we could be looking at.

> This will gravitate everyone to running the same config which is the opposite
> of what you want.

Perhaps, perhaps not - it's not an unequivocal thing either way.  The
more barriers there are to enabling things the more likely it is that
people just won't bother in the first place (or that they'll run into
somme problem and give up before they get things working) and it's not
clear that having to figure these things out is always a good use of
people's time.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07  9:24 [Ksummit-discuss] [CORE TOPIC] Testing Mark Brown
  2015-07-07 13:02 ` Alexey Dobriyan
@ 2015-07-07 15:25 ` Guenter Roeck
  2015-07-07 17:18   ` Mark Brown
                     ` (2 more replies)
  1 sibling, 3 replies; 45+ messages in thread
From: Guenter Roeck @ 2015-07-07 15:25 UTC (permalink / raw)
  To: Mark Brown, ksummit-discuss
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, Dan Carpenter

On 07/07/2015 02:24 AM, Mark Brown wrote:
> One thing we typically cover at Kernel Summit is some of the activity
> that's going on around testing upstream.  I think it'd be useful to have
> some more of those discussions, both in terms of making people aware of
> what's available and in terms of helping the people doing testing figure
> out what would be useful.  A lot of this is probably well suited to a
> workshop session between the interested people but I do think some
> element in the core day beyond just a readout will be useful.
>
> The main things I'm aware of that are happening at the minute are
> kselftest development, the 0day tester, plus kernelci.org and the other
> build and boot/test bots that are running against various trees.
>
Maybe list all known ones as a start ?

> In terms of discussion topics some of the issues I'm seeing are:
>
>   - Can we pool resources to share the workload of running things and
>     interpreting results, ideally also providing some central way for
>     people to discover what results are out there for them to look at
>     for a given kernel in the different systems?
>
That might be quite useful. However, I have seen that it doesn't really
help to just provide the test results. kissb test results have been
available for ages, and people just don't look at it. Even the regular
"Build regression" e-mails sent out by Geert seem to be widely ignored.

What I really found to help is to bisect new problems and send an e-mail
to the responsible maintainer and to the submitter of the patch which
introduced it. I'd like to automate that with my test system, but
unfortunately I just don't have the time to do it.

>   - Should we start carrying config fragments upstream designed to
>     support testing, things like the distro config fragments that keep
>     getting discussed are one example here but there's other things like
>     collections of debug options we could be looking at.  Should we be
>     more generally slimming defconfigs and moving things into fragments?
>
> and there's always the the perennial ones about what people would like
> to see testing for.
>

Sharing as many test bot configuration scripts and relevant configurations
as possible would be quite helpful. For example, I am building various
configurations for all architectures, but I don't really know if they
are relevant. Also, I would like to run more qemu configurations,
but it is really hard to find working ones.

Guenter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 15:25 ` Guenter Roeck
@ 2015-07-07 17:18   ` Mark Brown
  2015-07-07 17:23     ` Julia Lawall
                       ` (3 more replies)
  2015-07-07 19:21   ` Geert Uytterhoeven
  2015-07-08  9:27   ` Michael Ellerman
  2 siblings, 4 replies; 45+ messages in thread
From: Mark Brown @ 2015-07-07 17:18 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 3283 bytes --]

On Tue, Jul 07, 2015 at 08:25:21AM -0700, Guenter Roeck wrote:
> On 07/07/2015 02:24 AM, Mark Brown wrote:

> >The main things I'm aware of that are happening at the minute are
> >kselftest development, the 0day tester, plus kernelci.org and the other
> >build and boot/test bots that are running against various trees.

> Maybe list all known ones as a start ?

Off the top of my head the automated ones I'm aware of are Olof's build
& boot test, Dan running smatch and I think some other static analysis
stuff, someone (not sure who?) running some coccinelle stuff, Coverity
and I've got a builder too.

> >In terms of discussion topics some of the issues I'm seeing are:

> >  - Can we pool resources to share the workload of running things and
> >    interpreting results, ideally also providing some central way for
> >    people to discover what results are out there for them to look at
> >    for a given kernel in the different systems?

> That might be quite useful. However, I have seen that it doesn't really
> help to just provide the test results. kissb test results have been
> available for ages, and people just don't look at it. Even the regular
> "Build regression" e-mails sent out by Geert seem to be widely ignored.

> What I really found to help is to bisect new problems and send an e-mail
> to the responsible maintainer and to the submitter of the patch which
> introduced it. I'd like to automate that with my test system, but
> unfortunately I just don't have the time to do it.

Yes, that's the "and interpreting" bit in the above - this only really
works with people actively pushing.  You do start to get people checking
themselves once things are perceived as something people care about but
it does take active work to establish and maintain that.  

It also really helps if things are delivered promptly, and against trees
people are actively developing for.  But even with clear reports and
sometimes patches not everyone shows an interest.  As we get more and
more actual testing running that's going to start to become more
serious, breaking the build or boot will also mean that automated tests
don't get to run.

This is one of the things 0day gets really right, when it kicks in it'll
e-mail people directly and promptly.

> >  - Should we start carrying config fragments upstream designed to
> >    support testing, things like the distro config fragments that keep
> >    getting discussed are one example here but there's other things like
> >    collections of debug options we could be looking at.  Should we be
> >    more generally slimming defconfigs and moving things into fragments?

> >and there's always the the perennial ones about what people would like
> >to see testing for.

> Sharing as many test bot configuration scripts and relevant configurations
> as possible would be quite helpful. For example, I am building various
> configurations for all architectures, but I don't really know if they
> are relevant. Also, I would like to run more qemu configurations,
> but it is really hard to find working ones.

Grant (just CCed) was working intermittently on the qemu bit.  I think
the last plan was to enhance the scripts Kevin has for driving his build
farm.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 17:18   ` Mark Brown
@ 2015-07-07 17:23     ` Julia Lawall
  2015-07-07 17:24     ` Shuah Khan
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 45+ messages in thread
From: Julia Lawall @ 2015-07-07 17:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Dan Carpenter

On Tue, 7 Jul 2015, Mark Brown wrote:

> On Tue, Jul 07, 2015 at 08:25:21AM -0700, Guenter Roeck wrote:
> > On 07/07/2015 02:24 AM, Mark Brown wrote:
>
> > >The main things I'm aware of that are happening at the minute are
> > >kselftest development, the 0day tester, plus kernelci.org and the other
> > >build and boot/test bots that are running against various trees.
>
> > Maybe list all known ones as a start ?
>
> Off the top of my head the automated ones I'm aware of are Olof's build
> & boot test, Dan running smatch and I think some other static analysis
> stuff, someone (not sure who?) running some coccinelle stuff, Coverity
> and I've got a builder too.

The 0day service runs Coccinelle.

Coccinelle does not need the build to succeed.

julia

> > >In terms of discussion topics some of the issues I'm seeing are:
>
> > >  - Can we pool resources to share the workload of running things and
> > >    interpreting results, ideally also providing some central way for
> > >    people to discover what results are out there for them to look at
> > >    for a given kernel in the different systems?
>
> > That might be quite useful. However, I have seen that it doesn't really
> > help to just provide the test results. kissb test results have been
> > available for ages, and people just don't look at it. Even the regular
> > "Build regression" e-mails sent out by Geert seem to be widely ignored.
>
> > What I really found to help is to bisect new problems and send an e-mail
> > to the responsible maintainer and to the submitter of the patch which
> > introduced it. I'd like to automate that with my test system, but
> > unfortunately I just don't have the time to do it.
>
> Yes, that's the "and interpreting" bit in the above - this only really
> works with people actively pushing.  You do start to get people checking
> themselves once things are perceived as something people care about but
> it does take active work to establish and maintain that.
>
> It also really helps if things are delivered promptly, and against trees
> people are actively developing for.  But even with clear reports and
> sometimes patches not everyone shows an interest.  As we get more and
> more actual testing running that's going to start to become more
> serious, breaking the build or boot will also mean that automated tests
> don't get to run.
>
> This is one of the things 0day gets really right, when it kicks in it'll
> e-mail people directly and promptly.
>
> > >  - Should we start carrying config fragments upstream designed to
> > >    support testing, things like the distro config fragments that keep
> > >    getting discussed are one example here but there's other things like
> > >    collections of debug options we could be looking at.  Should we be
> > >    more generally slimming defconfigs and moving things into fragments?
>
> > >and there's always the the perennial ones about what people would like
> > >to see testing for.
>
> > Sharing as many test bot configuration scripts and relevant configurations
> > as possible would be quite helpful. For example, I am building various
> > configurations for all architectures, but I don't really know if they
> > are relevant. Also, I would like to run more qemu configurations,
> > but it is really hard to find working ones.
>
> Grant (just CCed) was working intermittently on the qemu bit.  I think
> the last plan was to enhance the scripts Kevin has for driving his build
> farm.
>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 17:18   ` Mark Brown
  2015-07-07 17:23     ` Julia Lawall
@ 2015-07-07 17:24     ` Shuah Khan
  2015-07-07 17:37       ` Guenter Roeck
  2015-07-07 17:52     ` Guenter Roeck
  2015-07-20 15:53     ` Mel Gorman
  3 siblings, 1 reply; 45+ messages in thread
From: Shuah Khan @ 2015-07-07 17:24 UTC (permalink / raw)
  To: Mark Brown, Guenter Roeck
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Shuah Khan, Dan Carpenter

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 07/07/2015 11:18 AM, Mark Brown wrote:
> On Tue, Jul 07, 2015 at 08:25:21AM -0700, Guenter Roeck wrote:
>> On 07/07/2015 02:24 AM, Mark Brown wrote:
> 
>>> The main things I'm aware of that are happening at the minute
>>> are kselftest development, the 0day tester, plus kernelci.org
>>> and the other build and boot/test bots that are running against
>>> various trees.
> 
>> Maybe list all known ones as a start ?
> 
> Off the top of my head the automated ones I'm aware of are Olof's
> build & boot test, Dan running smatch and I think some other static
> analysis stuff, someone (not sure who?) running some coccinelle
> stuff, Coverity and I've got a builder too.
> 
>>> In terms of discussion topics some of the issues I'm seeing
>>> are:
> 
>>> - Can we pool resources to share the workload of running things
>>> and interpreting results, ideally also providing some central
>>> way for people to discover what results are out there for them
>>> to look at for a given kernel in the different systems?
> 
>> That might be quite useful. However, I have seen that it doesn't
>> really help to just provide the test results. kissb test results
>> have been available for ages, and people just don't look at it.
>> Even the regular "Build regression" e-mails sent out by Geert
>> seem to be widely ignored.
> 
>> What I really found to help is to bisect new problems and send an
>> e-mail to the responsible maintainer and to the submitter of the
>> patch which introduced it. I'd like to automate that with my test
>> system, but unfortunately I just don't have the time to do it.
> 
> Yes, that's the "and interpreting" bit in the above - this only
> really works with people actively pushing.  You do start to get
> people checking themselves once things are perceived as something
> people care about but it does take active work to establish and
> maintain that.
> 
> It also really helps if things are delivered promptly, and against
> trees people are actively developing for.  But even with clear
> reports and sometimes patches not everyone shows an interest.  As
> we get more and more actual testing running that's going to start
> to become more serious, breaking the build or boot will also mean
> that automated tests don't get to run.
> 
> This is one of the things 0day gets really right, when it kicks in
> it'll e-mail people directly and promptly.
> 
>>> - Should we start carrying config fragments upstream designed
>>> to support testing, things like the distro config fragments
>>> that keep getting discussed are one example here but there's
>>> other things like collections of debug options we could be
>>> looking at.  Should we be more generally slimming defconfigs
>>> and moving things into fragments?
> 
>>> and there's always the the perennial ones about what people
>>> would like to see testing for.
> 
>> Sharing as many test bot configuration scripts and relevant
>> configurations as possible would be quite helpful. For example, I
>> am building various configurations for all architectures, but I
>> don't really know if they are relevant. Also, I would like to run
>> more qemu configurations, but it is really hard to find working
>> ones.
> 
> Grant (just CCed) was working intermittently on the qemu bit.  I
> think the last plan was to enhance the scripts Kevin has for
> driving his build farm.
> 

Thanks for starting this discussion. Now that Kselftest install is
in place and several cross-compile problems are fixed, I would like
to gauge interest in being able to include kselftest in qemu boot
tests. I added quicktest option in 4.2 to meet qemu environment time
constraints.

thanks,
- -- Shuah

- -- 
Shuah Khan
Sr. Linux Kernel Developer
Open Source Innovation Group
Samsung Research America (Silicon Valley)
shuahkh@osg.samsung.com | (970) 217-8978
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAEBAgAGBQJVnAtBAAoJEAsCRMQNDUMca5wP/jrsBO++J4IsOnaJYwChWRp1
XHNEtrVwnvuR9zGo295lQUZL0cihug8FiTkfHX3P6M+9kEJQaUKlMyJymh9y4dzU
cmDLhh8kXwdtPq1qM8jzaTqBb6sqXvzXxRoMi3X7AYPd2yA4uLMluAgj+dg8n1x/
W55fIi4ghypNQvtLcC92zyp5MKtBd5zr75HUZPxsqPwo7AHtqy0KoV+1fx+qQ/Lt
GnZxIlWQMmA0yubiAGZHl+nECfiwcx8iS302y30d3+8/YxAztUy9l/7VfgBBgyOp
9Ouw6xC8Xmyb8GfFUUBy4MIAvS0QrBcYQp/z9Qlmj5MioOBydPaS3+Y4AIKLEaeI
Vu+/OVqZO9At+HML6TeDBvM44PWcjfC0plm8n8PLvFtruVnLEnxKeeaXCUrj9AG7
8GUTscjkOA5p8r3qqtKny3OftKvUINXZUL+zbEnV7kUlhlDuOBSN5SDZk6VkpTSP
lTJOTew+At0JH0x8YEA5k6L94JNT4zfksS/4N7z6bGcIdBIITrn3MGYkoi1fCqh2
GLHqKV8fLMv5aEK4tt9phDgmJSxITsMbfwuwoPYK0KRY4hUfri73uuBB1Igg9noI
d3H7yVYpL7Td3b98g7VuzIfOEdMbv9jFGgPJb8A0MiGYeu87iu56jJEXaj2JIAEo
SxNLh6SDkNkuH+6l2FAJ
=jD4c
-----END PGP SIGNATURE-----

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 17:24     ` Shuah Khan
@ 2015-07-07 17:37       ` Guenter Roeck
  0 siblings, 0 replies; 45+ messages in thread
From: Guenter Roeck @ 2015-07-07 17:37 UTC (permalink / raw)
  To: Shuah Khan, Mark Brown
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Dan Carpenter

On 07/07/2015 10:24 AM, Shuah Khan wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
[ ... ]
>
> Thanks for starting this discussion. Now that Kselftest install is
> in place and several cross-compile problems are fixed, I would like
> to gauge interest in being able to include kselftest in qemu boot
> tests. I added quicktest option in 4.2 to meet qemu environment time
> constraints.
>

Me too, and very much so. Only reason for not doing it yet is lack
of time.

Guenter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 17:18   ` Mark Brown
  2015-07-07 17:23     ` Julia Lawall
  2015-07-07 17:24     ` Shuah Khan
@ 2015-07-07 17:52     ` Guenter Roeck
  2015-07-07 18:28       ` Mark Brown
  2015-07-07 22:51       ` Peter Hüwe
  2015-07-20 15:53     ` Mel Gorman
  3 siblings, 2 replies; 45+ messages in thread
From: Guenter Roeck @ 2015-07-07 17:52 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Dan Carpenter

On 07/07/2015 10:18 AM, Mark Brown wrote:
> On Tue, Jul 07, 2015 at 08:25:21AM -0700, Guenter Roeck wrote:
>> On 07/07/2015 02:24 AM, Mark Brown wrote:
>
>>> The main things I'm aware of that are happening at the minute are
>>> kselftest development, the 0day tester, plus kernelci.org and the other
>>> build and boot/test bots that are running against various trees.
>
>> Maybe list all known ones as a start ?
>
> Off the top of my head the automated ones I'm aware of are Olof's build
> & boot test, Dan running smatch and I think some other static analysis
> stuff, someone (not sure who?) running some coccinelle stuff, Coverity
> and I've got a builder too.
>
Plus mine, of course. Only part missing is automated bisect and e-mail
if something starts failing.

Which reminds me - do you use buildbot ? I think you are sending automated
e-mail on failures. It would help me a lot if someone had automated bisect
and the ability to e-mail results using buildbot to get me started.

>>> In terms of discussion topics some of the issues I'm seeing are:
>
>>>   - Can we pool resources to share the workload of running things and
>>>     interpreting results, ideally also providing some central way for
>>>     people to discover what results are out there for them to look at
>>>     for a given kernel in the different systems?
>
>> That might be quite useful. However, I have seen that it doesn't really
>> help to just provide the test results. kissb test results have been
>> available for ages, and people just don't look at it. Even the regular
>> "Build regression" e-mails sent out by Geert seem to be widely ignored.
>
>> What I really found to help is to bisect new problems and send an e-mail
>> to the responsible maintainer and to the submitter of the patch which
>> introduced it. I'd like to automate that with my test system, but
>> unfortunately I just don't have the time to do it.
>
> Yes, that's the "and interpreting" bit in the above - this only really
> works with people actively pushing.  You do start to get people checking
> themselves once things are perceived as something people care about but
> it does take active work to establish and maintain that.
>
> It also really helps if things are delivered promptly, and against trees
> people are actively developing for.  But even with clear reports and
> sometimes patches not everyone shows an interest.  As we get more and
> more actual testing running that's going to start to become more
> serious, breaking the build or boot will also mean that automated tests
> don't get to run.
>
Yes, I have seen that too. Especially 4.1 was pretty bad in this regard.
4.2 seems to be a bit better, though, so I hope that 4.1 was an exception.

Not really sure what to do about it. What turned out to help in the last
two companies I worked for was automatic revert of broken patches. That
sounds radical and I dislike it myself, but it helped.

> This is one of the things 0day gets really right, when it kicks in it'll
> e-mail people directly and promptly.
>
Agreed.

>>>   - Should we start carrying config fragments upstream designed to
>>>     support testing, things like the distro config fragments that keep
>>>     getting discussed are one example here but there's other things like
>>>     collections of debug options we could be looking at.  Should we be
>>>     more generally slimming defconfigs and moving things into fragments?
>
>>> and there's always the the perennial ones about what people would like
>>> to see testing for.
>
>> Sharing as many test bot configuration scripts and relevant configurations
>> as possible would be quite helpful. For example, I am building various
>> configurations for all architectures, but I don't really know if they
>> are relevant. Also, I would like to run more qemu configurations,
>> but it is really hard to find working ones.
>
> Grant (just CCed) was working intermittently on the qemu bit.  I think
> the last plan was to enhance the scripts Kevin has for driving his build
> farm.
>
Also of interest here (at least for me) would be to explore means to get
more hardware (both architectures and platforms) supported in qemu, but
I guess that may be a bit off topic.

Thanks,
Guenter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 17:52     ` Guenter Roeck
@ 2015-07-07 18:28       ` Mark Brown
  2015-07-07 22:51       ` Peter Hüwe
  1 sibling, 0 replies; 45+ messages in thread
From: Mark Brown @ 2015-07-07 18:28 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 1341 bytes --]

On Tue, Jul 07, 2015 at 10:52:04AM -0700, Guenter Roeck wrote:
> On 07/07/2015 10:18 AM, Mark Brown wrote:

> >Off the top of my head the automated ones I'm aware of are Olof's build
> >& boot test, Dan running smatch and I think some other static analysis
> >stuff, someone (not sure who?) running some coccinelle stuff, Coverity
> >and I've got a builder too.

> Plus mine, of course. Only part missing is automated bisect and e-mail
> if something starts failing.

> Which reminds me - do you use buildbot ? I think you are sending automated
> e-mail on failures. It would help me a lot if someone had automated bisect
> and the ability to e-mail results using buildbot to get me started.

No, not me - all my failure reports are lovingly hand crafted using
traditional artisan techniques.  Kevin, Tyler and Fengguang have things
but apart from 0day I think everything is still manually triggered.

> Not really sure what to do about it. What turned out to help in the last
> two companies I worked for was automatic revert of broken patches. That
> sounds radical and I dislike it myself, but it helped.

Perhaps that's something we should be discussing?  It may be something
that we just evolve a solution for as we proceed though - right now it's
largely theoretical.  For -next Stephen will often carry extra patches
that make sense.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 13:14   ` Mark Brown
@ 2015-07-07 18:47     ` Steven Rostedt
  2015-07-07 20:46       ` Kees Cook
                         ` (2 more replies)
  0 siblings, 3 replies; 45+ messages in thread
From: Steven Rostedt @ 2015-07-07 18:47 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

On Tue, 7 Jul 2015 14:14:11 +0100
Mark Brown <broonie@sirena.org.uk> wrote:

> On Tue, Jul 07, 2015 at 04:02:13PM +0300, Alexey Dobriyan wrote:
> > On Tue, Jul 7, 2015 at 12:24 PM, Mark Brown <broonie@kernel.org> wrote:
> 
> > >  - Should we start carrying config fragments upstream designed to
> > >    support testing, things like the distro config fragments that keep
> > >    getting discussed are one example here but there's other things like
> > >    collections of debug options we could be looking at.
> 
> > This will gravitate everyone to running the same config which is the opposite
> > of what you want.
> 
> Perhaps, perhaps not - it's not an unequivocal thing either way.  The
> more barriers there are to enabling things the more likely it is that
> people just won't bother in the first place (or that they'll run into
> somme problem and give up before they get things working) and it's not
> clear that having to figure these things out is always a good use of
> people's time.

The testing/selftests tests should have three results: PASS, FAIL,
UNSUPPORTED. The UNSUPPORTED is what should be returned if the kernel
configuration doesn't have the needed features configured. For example,
if you run the ftrace selftests without function tracing enabled, all
the tests that test the function tracer return UNSUPPORTED.

Perhaps we should have a central location that each test needs to add
the required configuration for it to be properly tested. Then if users
want to test various subsystems, they would look in this location for
the proper configs (be it a directory that has files of the tests they
represent, and contain the configs needed). Then there should be no
real barrier for people to run these tests.

Of course if the test requires certain hardware, or a file system, then
that should be properly documented.

-- Steve

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 15:25 ` Guenter Roeck
  2015-07-07 17:18   ` Mark Brown
@ 2015-07-07 19:21   ` Geert Uytterhoeven
  2015-07-08  7:54     ` Dan Carpenter
  2015-07-08  9:27   ` Michael Ellerman
  2 siblings, 1 reply; 45+ messages in thread
From: Geert Uytterhoeven @ 2015-07-07 19:21 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

On Tue, Jul 7, 2015 at 5:25 PM, Guenter Roeck <linux@roeck-us.net> wrote:
>> In terms of discussion topics some of the issues I'm seeing are:
>>
>>   - Can we pool resources to share the workload of running things and
>>     interpreting results, ideally also providing some central way for
>>     people to discover what results are out there for them to look at
>>     for a given kernel in the different systems?
>>
> That might be quite useful. However, I have seen that it doesn't really
> help to just provide the test results. kissb test results have been
> available for ages, and people just don't look at it. Even the regular
> "Build regression" e-mails sent out by Geert seem to be widely ignored.

There's still a manual step involved, so I only download the logs and
generate the regression emails for every rc release.
My original plan was to automate it more, so I could run it fully automated
on the linux-next builds, too.

> What I really found to help is to bisect new problems and send an e-mail
> to the responsible maintainer and to the submitter of the patch which
> introduced it. I'd like to automate that with my test system, but
> unfortunately I just don't have the time to do it.

IIRC, kisskb used to bisect failures, but stopped doing that when it started
consuming too much time. Perhaps it can be restarted, given the system
got an upgrade a few months ago and became much much faster?
Stiil, there's the "analyzing" part.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 18:47     ` Steven Rostedt
@ 2015-07-07 20:46       ` Kees Cook
  2015-07-07 22:02         ` Andy Lutomirski
  2015-07-08 10:43       ` Mark Brown
  2015-07-09 10:24       ` Masami Hiramatsu
  2 siblings, 1 reply; 45+ messages in thread
From: Kees Cook @ 2015-07-07 20:46 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Mark Brown, Dan Carpenter

On Tue, Jul 7, 2015 at 11:47 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Tue, 7 Jul 2015 14:14:11 +0100
> Mark Brown <broonie@sirena.org.uk> wrote:
>
>> On Tue, Jul 07, 2015 at 04:02:13PM +0300, Alexey Dobriyan wrote:
>> > On Tue, Jul 7, 2015 at 12:24 PM, Mark Brown <broonie@kernel.org> wrote:
>>
>> > >  - Should we start carrying config fragments upstream designed to
>> > >    support testing, things like the distro config fragments that keep
>> > >    getting discussed are one example here but there's other things like
>> > >    collections of debug options we could be looking at.
>>
>> > This will gravitate everyone to running the same config which is the opposite
>> > of what you want.
>>
>> Perhaps, perhaps not - it's not an unequivocal thing either way.  The
>> more barriers there are to enabling things the more likely it is that
>> people just won't bother in the first place (or that they'll run into
>> somme problem and give up before they get things working) and it's not
>> clear that having to figure these things out is always a good use of
>> people's time.
>
> The testing/selftests tests should have three results: PASS, FAIL,
> UNSUPPORTED. The UNSUPPORTED is what should be returned if the kernel
> configuration doesn't have the needed features configured. For example,
> if you run the ftrace selftests without function tracing enabled, all
> the tests that test the function tracer return UNSUPPORTED.
>
> Perhaps we should have a central location that each test needs to add
> the required configuration for it to be properly tested. Then if users
> want to test various subsystems, they would look in this location for
> the proper configs (be it a directory that has files of the tests they
> represent, and contain the configs needed). Then there should be no
> real barrier for people to run these tests.
>
> Of course if the test requires certain hardware, or a file system, then
> that should be properly documented.

There's also sysctl settings, and we also want privilege level as part
of that configuration. (i.e. some tests should run as root, some show
not, etc). It'd be nice to have the tests report these in some
machine-readable form so that a test harness could check, prepare the
environment, and then run the tests.

-Kees

-- 
Kees Cook
Chrome OS Security

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 20:46       ` Kees Cook
@ 2015-07-07 22:02         ` Andy Lutomirski
  2015-07-08 17:37           ` Mark Brown
  0 siblings, 1 reply; 45+ messages in thread
From: Andy Lutomirski @ 2015-07-07 22:02 UTC (permalink / raw)
  To: Kees Cook
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Mark Brown, Dan Carpenter

On Tue, Jul 7, 2015 at 1:46 PM, Kees Cook <keescook@chromium.org> wrote:
> There's also sysctl settings, and we also want privilege level as part
> of that configuration. (i.e. some tests should run as root, some show
> not, etc). It'd be nice to have the tests report these in some
> machine-readable form so that a test harness could check, prepare the
> environment, and then run the tests.
>

There was a fair amount of discussion about machine-readable output at
the last KS.  I don't know whether it ever got implemented.

--Andy

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 17:52     ` Guenter Roeck
  2015-07-07 18:28       ` Mark Brown
@ 2015-07-07 22:51       ` Peter Hüwe
  1 sibling, 0 replies; 45+ messages in thread
From: Peter Hüwe @ 2015-07-07 22:51 UTC (permalink / raw)
  To: ksummit-discuss
  Cc: Shuah Khan, Kevin Hilman, grant, Tyler Baker, Dan Carpenter

Hi,

I definitely think discussing the next steps in terms of automated regression 
testing is important - so thanks for raising this issue!!

> >> Maybe list all known ones as a start ?
> > 
> > Off the top of my head the automated ones I'm aware of are Olof's build
> > & boot test, Dan running smatch and I think some other static analysis
> > stuff, someone (not sure who?) running some coccinelle stuff, Coverity
> > and I've got a builder too.

For the TPM subsystem I have several things in place:
- applying patches automatically triggers a build on travis-ci.org
-- it triggers some basic style checkers
-- the build runs a qemu (with the tpm simulator) running another qemu (which 
now sees a real /dev/tpm0) with the new kernel.
--- the qemu-qemu-linux runs my tpm driver testsuite.

The reason behind this is that I can hook up besides the TPM1.2 simulator also 
the binary only windows based TPM2.0 simulator (with wine) without modifying 
qemu source -- but still the qemu-qemu-kernel sees 'real' hardware.

- some scripts for deploying new kernels to real hw machines and running tests 
with real hw tpms

- a mocked i2c tpm which I can hook up via the I2C_Slave interface, which also 
passes my driver testsuite -- so I can test the drivers within UML :)

- and of course wolfram's ninja-check scripts for style checkers

 
> Plus mine, of course. Only part missing is automated bisect and e-mail
> if something starts failing.


Yeah - getting the reports reasonably fast is probably the most important 
stuff -- the 0day testing really does a great job here.


Thanks,
Peter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 19:21   ` Geert Uytterhoeven
@ 2015-07-08  7:54     ` Dan Carpenter
  2015-07-08  8:37       ` Geert Uytterhoeven
  2015-07-08  9:52       ` Mark Brown
  0 siblings, 2 replies; 45+ messages in thread
From: Dan Carpenter @ 2015-07-08  7:54 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss

Doesn't the 0day system obsolete the "Build regression" emails?  It
should be sending emails to everyone introducing new build warnings.
Are people ignoring them?

regards,
dan carpenter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08  7:54     ` Dan Carpenter
@ 2015-07-08  8:37       ` Geert Uytterhoeven
  2015-07-08 12:10         ` Jiri Kosina
  2015-07-12 10:21         ` Fengguang Wu
  2015-07-08  9:52       ` Mark Brown
  1 sibling, 2 replies; 45+ messages in thread
From: Geert Uytterhoeven @ 2015-07-08  8:37 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss

Hi Dan,

On Wed, Jul 8, 2015 at 9:54 AM, Dan Carpenter <dan.carpenter@oracle.com> wrote:
> Doesn't the 0day system obsolete the "Build regression" emails?  It

I think kisskb builds more/different configs.

Is there are list of configs and trees covered by the 0day system available?

> should be sending emails to everyone introducing new build warnings.
> Are people ignoring them?

Of course people ignore them ;-)

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 15:25 ` Guenter Roeck
  2015-07-07 17:18   ` Mark Brown
  2015-07-07 19:21   ` Geert Uytterhoeven
@ 2015-07-08  9:27   ` Michael Ellerman
  2015-07-08 13:52     ` Guenter Roeck
  2 siblings, 1 reply; 45+ messages in thread
From: Michael Ellerman @ 2015-07-08  9:27 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

On Tue, 2015-07-07 at 08:25 -0700, Guenter Roeck wrote:
> On 07/07/2015 02:24 AM, Mark Brown wrote:
> > One thing we typically cover at Kernel Summit is some of the activity
> > that's going on around testing upstream.  I think it'd be useful to have
> > some more of those discussions, both in terms of making people aware of
> > what's available and in terms of helping the people doing testing figure
> > out what would be useful.  A lot of this is probably well suited to a
> > workshop session between the interested people but I do think some
> > element in the core day beyond just a readout will be useful.
> >
> > In terms of discussion topics some of the issues I'm seeing are:
> >
> >   - Can we pool resources to share the workload of running things and
> >     interpreting results, ideally also providing some central way for
> >     people to discover what results are out there for them to look at
> >     for a given kernel in the different systems?
>
> That might be quite useful. However, I have seen that it doesn't really
> help to just provide the test results. kissb test results have been
> available for ages, and people just don't look at it.

My concern with kisskb sending emails was always that I didn't want it to
become a spam bot. So it can send emails, but it's opt-in.

The 0-day bot takes the opposite approach, ie. mails everyone without asking,
and in hindsight that is clearly the better option in terms of getting people
to act on the results.


> Sharing as many test bot configuration scripts and relevant configurations
> as possible would be quite helpful. For example, I am building various
> configurations for all architectures, but I don't really know if they
> are relevant.

Agreed. Your buildbot is epic. I'd love to see the config for that. My local
buildbot is running only ~40 builders, which I thought was a lot until I saw
yours :)

The kernelci.org stuff is also really interesting, that's the closest thing
anyone has at the moment to a "proper" kernel CI setup AFAIK.

cheers

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08  7:54     ` Dan Carpenter
  2015-07-08  8:37       ` Geert Uytterhoeven
@ 2015-07-08  9:52       ` Mark Brown
  2015-07-12 11:15         ` Fengguang Wu
  1 sibling, 1 reply; 45+ messages in thread
From: Mark Brown @ 2015-07-08  9:52 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: Shuah Khan, Tyler Baker, Kevin Hilman, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 655 bytes --]

On Wed, Jul 08, 2015 at 10:54:09AM +0300, Dan Carpenter wrote:

> Doesn't the 0day system obsolete the "Build regression" emails?  It
> should be sending emails to everyone introducing new build warnings.

0day only covers some configurations and doesn't always manage to kick
in (I know I've had stuff come in via other channels sometimes, it seems
to depend on loading).

> Are people ignoring them?

They're not reliably followed through on, no, and one of the things with
0day is that it just generates a one time report so if things don't get
followed up on then that's that.  A regular "these are all the issues"
mail helps chase down those issues.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 18:47     ` Steven Rostedt
  2015-07-07 20:46       ` Kees Cook
@ 2015-07-08 10:43       ` Mark Brown
  2015-07-09 10:24       ` Masami Hiramatsu
  2 siblings, 0 replies; 45+ messages in thread
From: Mark Brown @ 2015-07-08 10:43 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 1489 bytes --]

On Tue, Jul 07, 2015 at 02:47:25PM -0400, Steven Rostedt wrote:
> Mark Brown <broonie@sirena.org.uk> wrote:

> > Perhaps, perhaps not - it's not an unequivocal thing either way.  The
> > more barriers there are to enabling things the more likely it is that
> > people just won't bother in the first place (or that they'll run into
> > somme problem and give up before they get things working) and it's not
> > clear that having to figure these things out is always a good use of
> > people's time.

> The testing/selftests tests should have three results: PASS, FAIL,
> UNSUPPORTED. The UNSUPPORTED is what should be returned if the kernel
> configuration doesn't have the needed features configured. For example,
> if you run the ftrace selftests without function tracing enabled, all
> the tests that test the function tracer return UNSUPPORTED.

That's roughly what they're supposed to be doing now (I'd need to go
check exactly what happens in the unsupported case).

> Perhaps we should have a central location that each test needs to add
> the required configuration for it to be properly tested. Then if users
> want to test various subsystems, they would look in this location for
> the proper configs (be it a directory that has files of the tests they
> represent, and contain the configs needed). Then there should be no
> real barrier for people to run these tests.

Right, this is what I'm suggesting roughly - make the configurations
required to run tests easier to pick up.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08  8:37       ` Geert Uytterhoeven
@ 2015-07-08 12:10         ` Jiri Kosina
  2015-07-08 12:37           ` Josh Boyer
  2015-07-08 17:32           ` Mark Brown
  2015-07-12 10:21         ` Fengguang Wu
  1 sibling, 2 replies; 45+ messages in thread
From: Jiri Kosina @ 2015-07-08 12:10 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss, Dan Carpenter

On Wed, 8 Jul 2015, Geert Uytterhoeven wrote:

> > should be sending emails to everyone introducing new build warnings.
> > Are people ignoring them?
> 
> Of course people ignore them ;-)

I think the biggest problems with these are:

- they are all squashed together into one report, totally unrelated things 
  together at one place. Noone is ever going to be actively looking into 
  it to see whether something he's responsible for hasn't popped up

- they are not addressed to anybody explicitly. Sending them just to LKML 
  is a direct ticket to the "be ignored" land

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08 12:10         ` Jiri Kosina
@ 2015-07-08 12:37           ` Josh Boyer
  2015-07-08 17:32           ` Mark Brown
  1 sibling, 0 replies; 45+ messages in thread
From: Josh Boyer @ 2015-07-08 12:37 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

On Wed, Jul 8, 2015 at 8:10 AM, Jiri Kosina <jkosina@suse.com> wrote:
> On Wed, 8 Jul 2015, Geert Uytterhoeven wrote:
>
>> > should be sending emails to everyone introducing new build warnings.
>> > Are people ignoring them?
>>
>> Of course people ignore them ;-)
>
> I think the biggest problems with these are:
>
> - they are all squashed together into one report, totally unrelated things
>   together at one place. Noone is ever going to be actively looking into
>   it to see whether something he's responsible for hasn't popped up
>
> - they are not addressed to anybody explicitly. Sending them just to LKML
>   is a direct ticket to the "be ignored" land

Yep.  The same goes for bug reports.  LKML is a great archiving tool
for digging through history, but it is useless as a form of general
communication.

josh

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08  9:27   ` Michael Ellerman
@ 2015-07-08 13:52     ` Guenter Roeck
  2015-07-08 16:40       ` Kevin Hilman
  0 siblings, 1 reply; 45+ messages in thread
From: Guenter Roeck @ 2015-07-08 13:52 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

Hi Michael,

On 07/08/2015 02:27 AM, Michael Ellerman wrote:
> On Tue, 2015-07-07 at 08:25 -0700, Guenter Roeck wrote:
>> On 07/07/2015 02:24 AM, Mark Brown wrote:
>>> One thing we typically cover at Kernel Summit is some of the activity
>>> that's going on around testing upstream.  I think it'd be useful to have
>>> some more of those discussions, both in terms of making people aware of
>>> what's available and in terms of helping the people doing testing figure
>>> out what would be useful.  A lot of this is probably well suited to a
>>> workshop session between the interested people but I do think some
>>> element in the core day beyond just a readout will be useful.
>>>
>>> In terms of discussion topics some of the issues I'm seeing are:
>>>
>>>    - Can we pool resources to share the workload of running things and
>>>      interpreting results, ideally also providing some central way for
>>>      people to discover what results are out there for them to look at
>>>      for a given kernel in the different systems?
>>
>> That might be quite useful. However, I have seen that it doesn't really
>> help to just provide the test results. kissb test results have been
>> available for ages, and people just don't look at it.
>
> My concern with kisskb sending emails was always that I didn't want it to
> become a spam bot. So it can send emails, but it's opt-in.
>
> The 0-day bot takes the opposite approach, ie. mails everyone without asking,
> and in hindsight that is clearly the better option in terms of getting people
> to act on the results.
>
>
>> Sharing as many test bot configuration scripts and relevant configurations
>> as possible would be quite helpful. For example, I am building various
>> configurations for all architectures, but I don't really know if they
>> are relevant.
>
> Agreed. Your buildbot is epic. I'd love to see the config for that. My local
> buildbot is running only ~40 builders, which I thought was a lot until I saw
> yours :)
>
It is on github: https://github.com/groeck/linux-build-test.

If I count correctly, it runs more than 900 builders. Major hiccup is with
caching - it collects around 10GB of logging data per month, and under
some circumstances keeps it all in memory, so after about two months it
consumes the entire 32GB of RAM on the server it is running on, and I have
to do some manual cleanup. Other than that, it runs surprisingly well.

> The kernelci.org stuff is also really interesting, that's the closest thing
> anyone has at the moment to a "proper" kernel CI setup AFAIK.
>
Agreed.

Guenter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08 13:52     ` Guenter Roeck
@ 2015-07-08 16:40       ` Kevin Hilman
  2015-07-08 17:24         ` Guenter Roeck
  2015-07-09  4:23         ` Michael Ellerman
  0 siblings, 2 replies; 45+ messages in thread
From: Kevin Hilman @ 2015-07-08 16:40 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: Tyler Baker, Dan Carpenter, ksummit-discuss, Shuah Khan

Guenter Roeck <linux@roeck-us.net> writes:

> Hi Michael,
>
> On 07/08/2015 02:27 AM, Michael Ellerman wrote:
>> On Tue, 2015-07-07 at 08:25 -0700, Guenter Roeck wrote:
>>> On 07/07/2015 02:24 AM, Mark Brown wrote:
>>>> One thing we typically cover at Kernel Summit is some of the activity
>>>> that's going on around testing upstream.  I think it'd be useful to have
>>>> some more of those discussions, both in terms of making people aware of
>>>> what's available and in terms of helping the people doing testing figure
>>>> out what would be useful.  A lot of this is probably well suited to a
>>>> workshop session between the interested people but I do think some
>>>> element in the core day beyond just a readout will be useful.
>>>>
>>>> In terms of discussion topics some of the issues I'm seeing are:
>>>>
>>>>    - Can we pool resources to share the workload of running things and
>>>>      interpreting results, ideally also providing some central way for
>>>>      people to discover what results are out there for them to look at
>>>>      for a given kernel in the different systems?
>>>
>>> That might be quite useful. However, I have seen that it doesn't really
>>> help to just provide the test results. kissb test results have been
>>> available for ages, and people just don't look at it.
>>
>> My concern with kisskb sending emails was always that I didn't want it to
>> become a spam bot. So it can send emails, but it's opt-in.
>>
>> The 0-day bot takes the opposite approach, ie. mails everyone without asking,
>> and in hindsight that is clearly the better option in terms of getting people
>> to act on the results.
>>
>>
>>> Sharing as many test bot configuration scripts and relevant configurations
>>> as possible would be quite helpful. For example, I am building various
>>> configurations for all architectures, but I don't really know if they
>>> are relevant.
>>
>> Agreed. Your buildbot is epic. I'd love to see the config for that. My local
>> buildbot is running only ~40 builders, which I thought was a lot until I saw
>> yours :)
>>
> It is on github: https://github.com/groeck/linux-build-test.
>
> If I count correctly, it runs more than 900 builders. Major hiccup is with
> caching - it collects around 10GB of logging data per month, and under
> some circumstances keeps it all in memory, so after about two months it
> consumes the entire 32GB of RAM on the server it is running on, and I have
> to do some manual cleanup. Other than that, it runs surprisingly well.

How soon after a branch is pushed are the build results available?

Are the build artfiacts ([bz]Image, System.map, modules, etc.) made
available someplace?

>> The kernelci.org stuff is also really interesting, that's the closest thing
>> anyone has at the moment to a "proper" kernel CI setup AFAIK.
>
> Agreed.

Glad you find it useful.

Speaking for kernelci.org... with limited time/resources, we'd like to
focus less on building (others are doing this faster/better) and more on
boot testing across a wide variety of hardware (e.g for the latest -next
build, we did ~430 boots on 88 unique boards covering 23 different
SoCs[1]).  We're also in the process of automating kselftest runs on our
boards.

If we could consume the output of other builders, we'd happily do that
instead of doing our own builds.  Ideally, the builders should produce
some sort of machine readable data with the build artifacts..  Currently
our builders produce a JSON file[2] which can be submitted to
kernelci.org using a RESTful API[3].

AFAICT, 0day doesn't have publically available build artfiacts, and
I haven't had a chance to look closely at Gunter's stuff.

Kevin

[1] http://kernelci.org/boot/all/job/next/kernel/next-20150708/
[2] example for an ARM multi_v7_defconfib build:
    http://storage.kernelci.org/next/next-20150708/arm-multi_v7_defconfig/build.json
[3] http://api.kernelci.org/

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08 16:40       ` Kevin Hilman
@ 2015-07-08 17:24         ` Guenter Roeck
  2015-07-08 18:42           ` Kevin Hilman
  2015-07-09  4:23         ` Michael Ellerman
  1 sibling, 1 reply; 45+ messages in thread
From: Guenter Roeck @ 2015-07-08 17:24 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: Tyler Baker, Dan Carpenter, ksummit-discuss, Shuah Khan

On 07/08/2015 09:40 AM, Kevin Hilman wrote:
> Guenter Roeck <linux@roeck-us.net> writes:
>
>> Hi Michael,
>>
>> On 07/08/2015 02:27 AM, Michael Ellerman wrote:
>>> On Tue, 2015-07-07 at 08:25 -0700, Guenter Roeck wrote:
>>>> On 07/07/2015 02:24 AM, Mark Brown wrote:
>>>>> One thing we typically cover at Kernel Summit is some of the activity
>>>>> that's going on around testing upstream.  I think it'd be useful to have
>>>>> some more of those discussions, both in terms of making people aware of
>>>>> what's available and in terms of helping the people doing testing figure
>>>>> out what would be useful.  A lot of this is probably well suited to a
>>>>> workshop session between the interested people but I do think some
>>>>> element in the core day beyond just a readout will be useful.
>>>>>
>>>>> In terms of discussion topics some of the issues I'm seeing are:
>>>>>
>>>>>     - Can we pool resources to share the workload of running things and
>>>>>       interpreting results, ideally also providing some central way for
>>>>>       people to discover what results are out there for them to look at
>>>>>       for a given kernel in the different systems?
>>>>
>>>> That might be quite useful. However, I have seen that it doesn't really
>>>> help to just provide the test results. kissb test results have been
>>>> available for ages, and people just don't look at it.
>>>
>>> My concern with kisskb sending emails was always that I didn't want it to
>>> become a spam bot. So it can send emails, but it's opt-in.
>>>
>>> The 0-day bot takes the opposite approach, ie. mails everyone without asking,
>>> and in hindsight that is clearly the better option in terms of getting people
>>> to act on the results.
>>>
>>>
>>>> Sharing as many test bot configuration scripts and relevant configurations
>>>> as possible would be quite helpful. For example, I am building various
>>>> configurations for all architectures, but I don't really know if they
>>>> are relevant.
>>>
>>> Agreed. Your buildbot is epic. I'd love to see the config for that. My local
>>> buildbot is running only ~40 builders, which I thought was a lot until I saw
>>> yours :)
>>>
>> It is on github: https://github.com/groeck/linux-build-test.
>>
>> If I count correctly, it runs more than 900 builders. Major hiccup is with
>> caching - it collects around 10GB of logging data per month, and under
>> some circumstances keeps it all in memory, so after about two months it
>> consumes the entire 32GB of RAM on the server it is running on, and I have
>> to do some manual cleanup. Other than that, it runs surprisingly well.
>
> How soon after a branch is pushed are the build results available?
>

Depends on the branch and the load on the system. Building a single branch takes
maybe - I am guessing here - 90 minutes. If several branches are changed at the
same time (like when Greg pushes changes into the -stable branches he manages),
the overall build time is multiplied with the number of branches to build.

In addition to that, each branch has a different set of rules. I only pull mainline
a couple of times per week, because it changes too much and by itself would keep the
system busy all the time. -next is pulled once per day, as is -mmotm. Plus, I have
a number of different methods on how to handle changes. For example, Greg tends
to push many changes into -stable queues within a short period of time, resulting
in a lot of build churn if I would start builds immediately after a change is detected.
So if a change on a -stable branch maintained by Greg is detected, the system waits
for a couple of hours before actually starting a build to reduce churn. Other branches
see less churn; -mmotm and -next are built almost immediately after a change is detected,
for example.

In a nutshell, it can take anytime between about two hours and about three days
for results to be available, depending on the branch and the load on the system.

> Are the build artfiacts ([bz]Image, System.map, modules, etc.) made
> available someplace?
>
Not currently. I could make it available if there is interest, though I'd
probably have to find some place out in the web to keep the data (we are talking
16 branches times ~150 builds per branch times the number of builds kept in the log).
If I assume that five builds are kept per branch, and each build takes only 10MB
of space, this already adds up to more than 100 GB of space needed, plus of course
all the associated net bandwidth. There are a couple of sites pulling the complete
build history from my servers on a daily basis (happily ignoring robots.txt),
so that might end up consuming a lot of bandwidth.

>>> The kernelci.org stuff is also really interesting, that's the closest thing
>>> anyone has at the moment to a "proper" kernel CI setup AFAIK.
>>
>> Agreed.
>
> Glad you find it useful.
>
> Speaking for kernelci.org... with limited time/resources, we'd like to
> focus less on building (others are doing this faster/better) and more on
> boot testing across a wide variety of hardware (e.g for the latest -next
> build, we did ~430 boots on 88 unique boards covering 23 different
> SoCs[1]).  We're also in the process of automating kselftest runs on our
> boards.
>
> If we could consume the output of other builders, we'd happily do that
> instead of doing our own builds.  Ideally, the builders should produce
> some sort of machine readable data with the build artifacts..  Currently
> our builders produce a JSON file[2] which can be submitted to
> kernelci.org using a RESTful API[3].
>
Unfortunately, I had to disable JSON on my buildbot installation.
Reason is again that there are sites pulling the entire build history
on a regular basis, and my server ended up getting stuck doing nothing
but serving JSON requests (those take a lot of CPU).

Guenter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08 12:10         ` Jiri Kosina
  2015-07-08 12:37           ` Josh Boyer
@ 2015-07-08 17:32           ` Mark Brown
  1 sibling, 0 replies; 45+ messages in thread
From: Mark Brown @ 2015-07-08 17:32 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 834 bytes --]

On Wed, Jul 08, 2015 at 02:10:36PM +0200, Jiri Kosina wrote:
> On Wed, 8 Jul 2015, Geert Uytterhoeven wrote:

[0day]
> > > should be sending emails to everyone introducing new build warnings.
> > > Are people ignoring them?

> > Of course people ignore them ;-)

> I think the biggest problems with these are:

> - they are all squashed together into one report, totally unrelated things 
>   together at one place. Noone is ever going to be actively looking into 
>   it to see whether something he's responsible for hasn't popped up

> - they are not addressed to anybody explicitly. Sending them just to LKML 
>   is a direct ticket to the "be ignored" land

That's not the case for the 0day warning reports, they're sent directly
to the people mentioned in the commit on a per-commit basis but as a one
shot.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 22:02         ` Andy Lutomirski
@ 2015-07-08 17:37           ` Mark Brown
  0 siblings, 0 replies; 45+ messages in thread
From: Mark Brown @ 2015-07-08 17:37 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker, Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 349 bytes --]

On Tue, Jul 07, 2015 at 03:02:11PM -0700, Andy Lutomirski wrote:

> There was a fair amount of discussion about machine-readable output at
> the last KS.  I don't know whether it ever got implemented.

Yes, but that's very much at the most basic stage of reporting pass and
fail - there's probably room for looking at if we're ready to extend
that.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08 17:24         ` Guenter Roeck
@ 2015-07-08 18:42           ` Kevin Hilman
  0 siblings, 0 replies; 45+ messages in thread
From: Kevin Hilman @ 2015-07-08 18:42 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: Tyler Baker, Dan Carpenter, ksummit-discuss, Shuah Khan

Guenter Roeck <linux@roeck-us.net> writes:

>> Are the build artfiacts ([bz]Image, System.map, modules, etc.) made
>> available someplace?
>>
> Not currently. I could make it available if there is interest, though I'd
> probably have to find some place out in the web to keep the data (we are talking
> 16 branches times ~150 builds per branch times the number of builds kept in the log).
> If I assume that five builds are kept per branch, and each build takes only 10MB
> of space, this already adds up to more than 100 GB of space needed, 

Understood. storage.kernelci.org is currently using ~400G for 45 days
worth of builds from the 55+ branches x ~150 builds/branch we do, and
includes boot logs/results too.

Kevin

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08 16:40       ` Kevin Hilman
  2015-07-08 17:24         ` Guenter Roeck
@ 2015-07-09  4:23         ` Michael Ellerman
  2015-07-09 18:08           ` Guenter Roeck
  1 sibling, 1 reply; 45+ messages in thread
From: Michael Ellerman @ 2015-07-09  4:23 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: Shuah Khan, Dan Carpenter, Tyler Baker, ksummit-discuss

On Wed, 2015-07-08 at 09:40 -0700, Kevin Hilman wrote:
>
> Speaking for kernelci.org... with limited time/resources, we'd like to
> focus less on building (others are doing this faster/better) and more on
> boot testing across a wide variety of hardware (e.g for the latest -next
> build, we did ~430 boots on 88 unique boards covering 23 different
> SoCs[1]).  We're also in the process of automating kselftest runs on our
> boards.
> 
> If we could consume the output of other builders, we'd happily do that
> instead of doing our own builds.  Ideally, the builders should produce
> some sort of machine readable data with the build artifacts.. 

Unfortunately providing build artifacts means you're distributing binaries, and
so life just got complicated. At least for folks like me who work at a company
with too many lawyers.

cheers

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 18:47     ` Steven Rostedt
  2015-07-07 20:46       ` Kees Cook
  2015-07-08 10:43       ` Mark Brown
@ 2015-07-09 10:24       ` Masami Hiramatsu
  2015-07-09 12:00         ` Steven Rostedt
  2015-07-10 10:39         ` Alexey Dobriyan
  2 siblings, 2 replies; 45+ messages in thread
From: Masami Hiramatsu @ 2015-07-09 10:24 UTC (permalink / raw)
  To: Steven Rostedt, Mark Brown
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, Dan Carpenter, ksummit-discuss

On 2015/07/08 3:47, Steven Rostedt wrote:
> On Tue, 7 Jul 2015 14:14:11 +0100
> Mark Brown <broonie@sirena.org.uk> wrote:
> 
>> On Tue, Jul 07, 2015 at 04:02:13PM +0300, Alexey Dobriyan wrote:
>>> On Tue, Jul 7, 2015 at 12:24 PM, Mark Brown <broonie@kernel.org> wrote:
>>
>>>>  - Should we start carrying config fragments upstream designed to
>>>>    support testing, things like the distro config fragments that keep
>>>>    getting discussed are one example here but there's other things like
>>>>    collections of debug options we could be looking at.
>>
>>> This will gravitate everyone to running the same config which is the opposite
>>> of what you want.
>>
>> Perhaps, perhaps not - it's not an unequivocal thing either way.  The
>> more barriers there are to enabling things the more likely it is that
>> people just won't bother in the first place (or that they'll run into
>> somme problem and give up before they get things working) and it's not
>> clear that having to figure these things out is always a good use of
>> people's time.
> 
> The testing/selftests tests should have three results: PASS, FAIL,
> UNSUPPORTED. The UNSUPPORTED is what should be returned if the kernel
> configuration doesn't have the needed features configured. For example,
> if you run the ftrace selftests without function tracing enabled, all
> the tests that test the function tracer return UNSUPPORTED.

This may be an off-topic, but I'd like to ask the selftest for tools.
Currently tools/testing/selftests tests the kernel itself, but
there are many tools under tools/, like perf too.

Those are not configured by the kconfig, but selftests are also needed
for tools. I have a runtests script which is just a bit modified
ftracetest for perf-probe. I'd like to integrate it to selftests
but I'm not sure that is a scope of kselftests.

> Perhaps we should have a central location that each test needs to add
> the required configuration for it to be properly tested. Then if users
> want to test various subsystems, they would look in this location for
> the proper configs (be it a directory that has files of the tests they
> represent, and contain the configs needed). Then there should be no
> real barrier for people to run these tests.

/proc/kconfig[.gz]? I think we can add a list of required kconfigs
for each testcase and indicate it. Moreover, we can include it as
a part of kconfig and introduce CONFIG_KSELFTEST to enable those
configs :)

Thank you,

> 
> Of course if the test requires certain hardware, or a file system, then
> that should be properly documented.
> 
> -- Steve
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
> 


-- 
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-09 10:24       ` Masami Hiramatsu
@ 2015-07-09 12:00         ` Steven Rostedt
  2015-07-10 10:39         ` Alexey Dobriyan
  1 sibling, 0 replies; 45+ messages in thread
From: Steven Rostedt @ 2015-07-09 12:00 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Mark Brown, Dan Carpenter

On Thu, 09 Jul 2015 19:24:55 +0900
Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:

> /proc/kconfig[.gz]? I think we can add a list of required kconfigs
> for each testcase and indicate it. Moreover, we can include it as
> a part of kconfig and introduce CONFIG_KSELFTEST to enable those
> configs :)
> 
> 

I like the idea of having each test a separate config. We could have a
config called "ALL TESTS" and selects all of them, for those that want
to test all tests. But for each test, it would select the needed config
options to test everything for that specific test.

-- Steve

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-09  4:23         ` Michael Ellerman
@ 2015-07-09 18:08           ` Guenter Roeck
  0 siblings, 0 replies; 45+ messages in thread
From: Guenter Roeck @ 2015-07-09 18:08 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: Shuah Khan, Tyler Baker, Dan Carpenter, ksummit-discuss

On Thu, Jul 09, 2015 at 02:23:53PM +1000, Michael Ellerman wrote:
> On Wed, 2015-07-08 at 09:40 -0700, Kevin Hilman wrote:
> >
> > Speaking for kernelci.org... with limited time/resources, we'd like to
> > focus less on building (others are doing this faster/better) and more on
> > boot testing across a wide variety of hardware (e.g for the latest -next
> > build, we did ~430 boots on 88 unique boards covering 23 different
> > SoCs[1]).  We're also in the process of automating kselftest runs on our
> > boards.
> > 
> > If we could consume the output of other builders, we'd happily do that
> > instead of doing our own builds.  Ideally, the builders should produce
> > some sort of machine readable data with the build artifacts.. 
> 
> Unfortunately providing build artifacts means you're distributing binaries, and
> so life just got complicated. At least for folks like me who work at a company
> with too many lawyers.
> 
Never thought about that aspect, but you are right, there would be some hoops
to go through if the test infrastructure runs in a corporate environment.
Should not be too difficult, though, at least if the lawyers are reasonable.
And if not, maybe they just need a different set of guildelines.

Guenter

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-09 10:24       ` Masami Hiramatsu
  2015-07-09 12:00         ` Steven Rostedt
@ 2015-07-10 10:39         ` Alexey Dobriyan
  2015-07-10 14:02           ` Shuah Khan
  1 sibling, 1 reply; 45+ messages in thread
From: Alexey Dobriyan @ 2015-07-10 10:39 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Mark Brown, Dan Carpenter

On Thu, Jul 9, 2015 at 1:24 PM, Masami Hiramatsu
<masami.hiramatsu.pt@hitachi.com> wrote:

> This may be an off-topic, but I'd like to ask the selftest for tools.
> Currently tools/testing/selftests tests the kernel itself, but
> there are many tools under tools/, like perf too.
>
> Those are not configured by the kconfig, but selftests are also needed
> for tools. I have a runtests script which is just a bit modified
> ftracetest for perf-probe. I'd like to integrate it to selftests
> but I'm not sure that is a scope of kselftests.

This confusion is partially created by peculiar place where people
who wrote testsuite put it.

Gentlemen,
testsuite should be first class citizen in toplevel test/ directory,
command to run it should be "make test" not "make kselftest".
Only placing it in very visible place and using names which are intuitive
and familiar from userspace (git's t/ directory, glibc "make test") will give
hope that other developers will notice it and start using and improving it.
Excuse me, but tools/testing/selftests is hopeless.

>> Perhaps we should have a central location that each test needs to add
>> the required configuration for it to be properly tested. Then if users
>> want to test various subsystems, they would look in this location for
>> the proper configs (be it a directory that has files of the tests they
>> represent, and contain the configs needed). Then there should be no
>> real barrier for people to run these tests.
>
> /proc/kconfig[.gz]? I think we can add a list of required kconfigs
> for each testcase and indicate it. Moreover, we can include it as
> a part of kconfig and introduce CONFIG_KSELFTEST to enable those
> configs :)

I think primary use case is this:
* user builds and reboots into kernel with his custom config,
* user runs "make test" from fresh build directory,
* test harness runs everything runnable and maybe reports necessary
  config options to run more

/proc/kconfig.gz should be kept strictly for runtime config.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-10 10:39         ` Alexey Dobriyan
@ 2015-07-10 14:02           ` Shuah Khan
  2015-07-10 14:28             ` Alexey Dobriyan
  2015-07-10 15:05             ` Steven Rostedt
  0 siblings, 2 replies; 45+ messages in thread
From: Shuah Khan @ 2015-07-10 14:02 UTC (permalink / raw)
  To: Alexey Dobriyan, Masami Hiramatsu, Steven Rostedt
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Shuah Khan, Mark Brown, Dan Carpenter

On 07/10/2015 04:39 AM, Alexey Dobriyan wrote:
> On Thu, Jul 9, 2015 at 1:24 PM, Masami Hiramatsu
> <masami.hiramatsu.pt@hitachi.com> wrote:
> 
>> This may be an off-topic, but I'd like to ask the selftest for tools.
>> Currently tools/testing/selftests tests the kernel itself, but
>> there are many tools under tools/, like perf too.
>>
>> Those are not configured by the kconfig, but selftests are also needed
>> for tools. I have a runtests script which is just a bit modified
>> ftracetest for perf-probe. I'd like to integrate it to selftests
>> but I'm not sure that is a scope of kselftests.
> 
> This confusion is partially created by peculiar place where people
> who wrote testsuite put it.
> 
> Gentlemen,
> testsuite should be first class citizen in toplevel test/ directory,
> command to run it should be "make test" not "make kselftest".
> Only placing it in very visible place and using names which are intuitive
> and familiar from userspace (git's t/ directory, glibc "make test") will give
> hope that other developers will notice it and start using and improving it.
> Excuse me, but tools/testing/selftests is hopeless.

selftests are intended for kernel developers primarily. If developers
and users don't want to figure out what are the ways to test, then it
doesn't matter what the option is named. I would like to hear some
concrete data on why naming test would make it lot more usable.

> 
>>> Perhaps we should have a central location that each test needs to add
>>> the required configuration for it to be properly tested. Then if users
>>> want to test various subsystems, they would look in this location for
>>> the proper configs (be it a directory that has files of the tests they
>>> represent, and contain the configs needed). Then there should be no
>>> real barrier for people to run these tests.
>>
>> /proc/kconfig[.gz]? I think we can add a list of required kconfigs
>> for each testcase and indicate it. Moreover, we can include it as
>> a part of kconfig and introduce CONFIG_KSELFTEST to enable those
>> configs :)

I don't believe adding yet another kernel config option especially
for KSELFTEST is a good idea.

> 
> I think primary use case is this:
> * user builds and reboots into kernel with his custom config,
> * user runs "make test" from fresh build directory,
> * test harness runs everything runnable and maybe reports necessary
>   config options to run more
> 
> /proc/kconfig.gz should be kept strictly for runtime config.
> 

We do have ktest for that. Maybe ktest could include kselftest
run in its default boot test.

thanks,
-- Shuah

-- 
Shuah Khan
Sr. Linux Kernel Developer
Open Source Innovation Group
Samsung Research America (Silicon Valley)
shuahkh@osg.samsung.com | (970) 217-8978

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-10 14:02           ` Shuah Khan
@ 2015-07-10 14:28             ` Alexey Dobriyan
  2015-07-10 15:05             ` Steven Rostedt
  1 sibling, 0 replies; 45+ messages in thread
From: Alexey Dobriyan @ 2015-07-10 14:28 UTC (permalink / raw)
  To: Shuah Khan
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Mark Brown, Dan Carpenter

On Fri, Jul 10, 2015 at 5:02 PM, Shuah Khan <shuahkh@osg.samsung.com> wrote:
> On 07/10/2015 04:39 AM, Alexey Dobriyan wrote:
>> On Thu, Jul 9, 2015 at 1:24 PM, Masami Hiramatsu
>> <masami.hiramatsu.pt@hitachi.com> wrote:
>>
>>> This may be an off-topic, but I'd like to ask the selftest for tools.
>>> Currently tools/testing/selftests tests the kernel itself, but
>>> there are many tools under tools/, like perf too.
>>>
>>> Those are not configured by the kconfig, but selftests are also needed
>>> for tools. I have a runtests script which is just a bit modified
>>> ftracetest for perf-probe. I'd like to integrate it to selftests
>>> but I'm not sure that is a scope of kselftests.
>>
>> This confusion is partially created by peculiar place where people
>> who wrote testsuite put it.
>>
>> Gentlemen,
>> testsuite should be first class citizen in toplevel test/ directory,
>> command to run it should be "make test" not "make kselftest".
>> Only placing it in very visible place and using names which are intuitive
>> and familiar from userspace (git's t/ directory, glibc "make test") will give
>> hope that other developers will notice it and start using and improving it.
>> Excuse me, but tools/testing/selftests is hopeless.
>
> selftests are intended for kernel developers primarily. If developers
> and users don't want to figure out what are the ways to test, then it
> doesn't matter what the option is named. I would like to hear some
> concrete data on why naming test would make it lot more usable.

Not usable, but visible. Top level implies "important", 3 directories deep
implies some obscure driver you didn't even hear about.

And it's easier to type.

git gets it right:

test: all
        $(MAKE) -C t/ all

    Alexey

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-10 14:02           ` Shuah Khan
  2015-07-10 14:28             ` Alexey Dobriyan
@ 2015-07-10 15:05             ` Steven Rostedt
  2015-07-10 15:54               ` Shuah Khan
  1 sibling, 1 reply; 45+ messages in thread
From: Steven Rostedt @ 2015-07-10 15:05 UTC (permalink / raw)
  To: Shuah Khan
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Mark Brown, Dan Carpenter

On Fri, 10 Jul 2015 08:02:14 -0600
Shuah Khan <shuahkh@osg.samsung.com> wrote:

> > 
> > I think primary use case is this:
> > * user builds and reboots into kernel with his custom config,
> > * user runs "make test" from fresh build directory,
> > * test harness runs everything runnable and maybe reports necessary
> >   config options to run more
> > 
> > /proc/kconfig.gz should be kept strictly for runtime config.
> > 
> 
> We do have ktest for that. Maybe ktest could include kselftest
> run in its default boot test.

I could have an option to have ktest run the kselftests, but the user
would still need to be able to update configs to tell ktest where the
tests are. Ktest doesn't assume to be run from the repo (I never do
that).

-- Steve

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-10 15:05             ` Steven Rostedt
@ 2015-07-10 15:54               ` Shuah Khan
  0 siblings, 0 replies; 45+ messages in thread
From: Shuah Khan @ 2015-07-10 15:54 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, Tyler Baker,
	Mark Brown, Dan Carpenter

On 07/10/2015 09:05 AM, Steven Rostedt wrote:
> On Fri, 10 Jul 2015 08:02:14 -0600
> Shuah Khan <shuahkh@osg.samsung.com> wrote:
> 
>>>
>>> I think primary use case is this:
>>> * user builds and reboots into kernel with his custom config,
>>> * user runs "make test" from fresh build directory,
>>> * test harness runs everything runnable and maybe reports necessary
>>>   config options to run more
>>>
>>> /proc/kconfig.gz should be kept strictly for runtime config.
>>>
>>
>> We do have ktest for that. Maybe ktest could include kselftest
>> run in its default boot test.
> 
> I could have an option to have ktest run the kselftests, but the user
> would still need to be able to update configs to tell ktest where the
> tests are. Ktest doesn't assume to be run from the repo (I never do
> that).
> 

Right. Thanks for clarifying it. We have a small number of tests that
depend on a specific kernel config option. Those tests are supposed to
exit gracefully and let the rest of the tests run when the config option
they depend on, isn't enabled. User needs to make a conscious choice on
which tests they want to run. In other words, it is users responsibility
to ensure that the kernel is built correctly to run a desired test. Just
like ktest, the responsibility to pick the right config falls outside
the kselftest framework. Kselftest run will test the parts of the kernel
it can test based on the current configuration.

thanks,
-- Shuah

-- 
Shuah Khan
Sr. Linux Kernel Developer
Open Source Innovation Group
Samsung Research America (Silicon Valley)
shuahkh@osg.samsung.com | (970) 217-8978

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08  8:37       ` Geert Uytterhoeven
  2015-07-08 12:10         ` Jiri Kosina
@ 2015-07-12 10:21         ` Fengguang Wu
  1 sibling, 0 replies; 45+ messages in thread
From: Fengguang Wu @ 2015-07-12 10:21 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss, Dan Carpenter

Hi Geert,

On Wed, Jul 08, 2015 at 10:37:50AM +0200, Geert Uytterhoeven wrote:
> Hi Dan,
> 
> On Wed, Jul 8, 2015 at 9:54 AM, Dan Carpenter <dan.carpenter@oracle.com> wrote:
> > Doesn't the 0day system obsolete the "Build regression" emails?  It
> 
> I think kisskb builds more/different configs.
> 
> Is there are list of configs and trees covered by the 0day system available?

Apart from randconfig and all(yes|no|def|mod) configs, the
comprehensive list of configs can be found in arch/*/configs

        $ find arch/*/configs -type f | wc -l
        461

0-day tests them all. Since there are so many of them, only 10% are
run immediately after git pushes aiming for 1-hour respond time. The
others will be tested in machine idle time and see longer delay.

The code coverage is now 543 git trees. Here is the comprehensive list:

https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/tree/repo/linux

$ cat repo/linux/ext4
url: git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.git

I'm always seeking to add more trees. Patches to add new URLs and
tests are highly appreciated!

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-08  9:52       ` Mark Brown
@ 2015-07-12 11:15         ` Fengguang Wu
  2015-07-13 18:34           ` Mark Brown
  0 siblings, 1 reply; 45+ messages in thread
From: Fengguang Wu @ 2015-07-12 11:15 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss, Dan Carpenter

On Wed, Jul 08, 2015 at 10:52:07AM +0100, Mark Brown wrote:
> On Wed, Jul 08, 2015 at 10:54:09AM +0300, Dan Carpenter wrote:
> 
> > Doesn't the 0day system obsolete the "Build regression" emails?  It
> > should be sending emails to everyone introducing new build warnings.
> 
> 0day only covers some configurations

Last year it has extended kconfig coverage to all arch/*/configs/*.
Adding defconfigs there will auto add them to 0day. :)

> and doesn't always manage to kick
> in (I know I've had stuff come in via other channels sometimes, it seems
> to depend on loading).

If so, it should be considered a bug. Feel free to report such cases
to me.

If 0day failed to catch the error, it might be due to

- regressions in 0day itself -- it's still in active development,
  in the past year I did 1k patches to 0day build/boot scripts and
  2k patches to the follow up LKP (Linux Kernel Performance) tests.

- the "report once" logic, it's tricky and can possibly hide issues

- failed bisect, rare but possible

- machine hang, network/disk fails, etc. maintenance incidents

"Loading" may add latency, however it's not the cause to miss errors.

> > Are people ignoring them?
> 
> They're not reliably followed through on, no, and one of the things with
> 0day is that it just generates a one time report so if things don't get
> followed up on then that's that.  A regular "these are all the issues"
> mail helps chase down those issues.

0day has such report type. It will be sent after each git push (unless
you push too quickly) and it looks like this. Just drop me a note and
list the git trees/branches you wish to receive such notice emails.

 Subject: [amirv:for-upstream] b132dcd12e0ab2a49ae5b02b5549cb65408a96ef BUILD DONE

git://flatbed.openfabrics.org/~amirv/linux.git  for-upstream
b132dcd12e0ab2a49ae5b02b5549cb65408a96ef  IB/mlx4: Use correct device for XRC [ib-next:issue_511091]
                                                                                                       
drivers/infiniband/core/cache.c:558:9: sparse: too many arguments for function __builtin_expect
drivers/infiniband/core/device.c:176:9: sparse: too many arguments for function __builtin_expect
drivers/infiniband/core/device.c:767:4-21: code aligned with following code on line 768

Error ids grouped by kconfigs:

recent_errors
├── i386-allmodconfig
│   └── drivers-infiniband-core-device.c:code-aligned-with-following-code-on-line
└── x86_64-allmodconfig
    ├── drivers-infiniband-core-cache.c:sparse:too-many-arguments-for-function-__builtin_expect
    └── drivers-infiniband-core-device.c:sparse:too-many-arguments-for-function-__builtin_expect

elapsed time: 53m

configs tested: 83

powerpc                     tqm8548_defconfig
powerpc                     tqm8555_defconfig
um                               alldefconfig
i386                     randconfig-a0-201527
x86_64                            allnoconfig
sh                            titan_defconfig
sh                          rsk7269_defconfig
sh                  sh7785lcr_32bit_defconfig
sh                                allnoconfig
i386                             allyesconfig
arm                                    sa1100
arm                         at91_dt_defconfig
arm                               allnoconfig
arm                                   samsung
...

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-12 11:15         ` Fengguang Wu
@ 2015-07-13 18:34           ` Mark Brown
  2015-07-14 14:22             ` Fengguang Wu
  0 siblings, 1 reply; 45+ messages in thread
From: Mark Brown @ 2015-07-13 18:34 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss, Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 1685 bytes --]

On Sun, Jul 12, 2015 at 07:15:47PM +0800, Fengguang Wu wrote:

> If 0day failed to catch the error, it might be due to

> - regressions in 0day itself -- it's still in active development,
>   in the past year I did 1k patches to 0day build/boot scripts and
>   2k patches to the follow up LKP (Linux Kernel Performance) tests.

> - the "report once" logic, it's tricky and can possibly hide issues

> - failed bisect, rare but possible

I think some of the issues have been due to bisection getting confused
by issues appearing in merge commits but ICBW.

> - machine hang, network/disk fails, etc. maintenance incidents

> "Loading" may add latency, however it's not the cause to miss errors.

Latency seems like it might be an issue here, when I say I'm not seeing
things that's issues coming up in the -next build before they're
reported by 0day so they might well get fixed before 0day catches up.

> > > Are people ignoring them?

> > They're not reliably followed through on, no, and one of the things with
> > 0day is that it just generates a one time report so if things don't get
> > followed up on then that's that.  A regular "these are all the issues"
> > mail helps chase down those issues.

> 0day has such report type. It will be sent after each git push (unless
> you push too quickly) and it looks like this. Just drop me a note and
> list the git trees/branches you wish to receive such notice emails.

For me personally it'd be more interesting to be able to get them on
demand (eg, from a web page) than e-mailed, or e-mailed by a human
(possibly with fixes!).  The kernelci.org reporting does a lot of this
but doesn't cover anything except raw compiler warnings.

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-13 18:34           ` Mark Brown
@ 2015-07-14 14:22             ` Fengguang Wu
  2015-07-14 15:38               ` Mark Brown
  0 siblings, 1 reply; 45+ messages in thread
From: Fengguang Wu @ 2015-07-14 14:22 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss, Dan Carpenter

On Mon, Jul 13, 2015 at 07:34:41PM +0100, Mark Brown wrote:
> On Sun, Jul 12, 2015 at 07:15:47PM +0800, Fengguang Wu wrote:
> 
> > If 0day failed to catch the error, it might be due to
> 
> > - regressions in 0day itself -- it's still in active development,
> >   in the past year I did 1k patches to 0day build/boot scripts and
> >   2k patches to the follow up LKP (Linux Kernel Performance) tests.
> 
> > - the "report once" logic, it's tricky and can possibly hide issues
> 
> > - failed bisect, rare but possible
> 
> I think some of the issues have been due to bisection getting confused
> by issues appearing in merge commits but ICBW.
> 
> > - machine hang, network/disk fails, etc. maintenance incidents
> 
> > "Loading" may add latency, however it's not the cause to miss errors.
> 
> Latency seems like it might be an issue here, when I say I'm not seeing
> things that's issues coming up in the -next build before they're
> reported by 0day so they might well get fixed before 0day catches up.

Ah linux-next happen to have lower priority and can take many hours to
finish. I'll increase its priority due to its importance.

> > > > Are people ignoring them?
> 
> > > They're not reliably followed through on, no, and one of the things with
> > > 0day is that it just generates a one time report so if things don't get

The real policy is in fact a bit more smart than "report-once".

Build errors will be auto re-reported if it's still not fixed
after 10 days when the branch that introduced the error is updated.

Warnings won't be auto re-reported.

> > > followed up on then that's that.  A regular "these are all the issues"
> > > mail helps chase down those issues.
> 
> > 0day has such report type. It will be sent after each git push (unless
> > you push too quickly) and it looks like this. Just drop me a note and
> > list the git trees/branches you wish to receive such notice emails.
> 
> For me personally it'd be more interesting to be able to get them on
> demand (eg, from a web page) than e-mailed, or e-mailed by a human
> (possibly with fixes!).  The kernelci.org reporting does a lot of this
> but doesn't cover anything except raw compiler warnings.

It should be mostly equivalent if you direct such emails to a local
mbox and check it on demand. :)

For example, setup .procmailrc rule like this:

:0:
* ^Subject: \[.*\] ........................................ BUILD
build-complete-notification

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-14 14:22             ` Fengguang Wu
@ 2015-07-14 15:38               ` Mark Brown
  2015-07-15 14:21                 ` Fengguang Wu
  0 siblings, 1 reply; 45+ messages in thread
From: Mark Brown @ 2015-07-14 15:38 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss, Dan Carpenter

[-- Attachment #1: Type: text/plain, Size: 1090 bytes --]

On Tue, Jul 14, 2015 at 10:22:41PM +0800, Fengguang Wu wrote:
> On Mon, Jul 13, 2015 at 07:34:41PM +0100, Mark Brown wrote:

> > Latency seems like it might be an issue here, when I say I'm not seeing
> > things that's issues coming up in the -next build before they're
> > reported by 0day so they might well get fixed before 0day catches up.

> Ah linux-next happen to have lower priority and can take many hours to
> finish. I'll increase its priority due to its importance.

This was actually things in my trees that get merged into -next (or
which Stephen tries to merge into -next and finds problems with) prior
to being reported by 0day.

> > For me personally it'd be more interesting to be able to get them on
> > demand (eg, from a web page) than e-mailed, or e-mailed by a human
> > (possibly with fixes!).  The kernelci.org reporting does a lot of this
> > but doesn't cover anything except raw compiler warnings.

> It should be mostly equivalent if you direct such emails to a local
> mbox and check it on demand. :)

Well, they don't go away automatically when fixed then...

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-14 15:38               ` Mark Brown
@ 2015-07-15 14:21                 ` Fengguang Wu
  0 siblings, 0 replies; 45+ messages in thread
From: Fengguang Wu @ 2015-07-15 14:21 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, Tyler Baker, ksummit-discuss, Dan Carpenter

On Tue, Jul 14, 2015 at 04:38:31PM +0100, Mark Brown wrote:
> On Tue, Jul 14, 2015 at 10:22:41PM +0800, Fengguang Wu wrote:
> > On Mon, Jul 13, 2015 at 07:34:41PM +0100, Mark Brown wrote:
> 
> > > Latency seems like it might be an issue here, when I say I'm not seeing
> > > things that's issues coming up in the -next build before they're
> > > reported by 0day so they might well get fixed before 0day catches up.
> 
> > Ah linux-next happen to have lower priority and can take many hours to
> > finish. I'll increase its priority due to its importance.
> 
> This was actually things in my trees that get merged into -next (or
> which Stephen tries to merge into -next and finds problems with) prior
> to being reported by 0day.

I feel sorry about that and would like to improve. When it happen in
future, feel free to forward me of the missed/delayed error.  Then I
can try to root cause case-by-case.

> > > For me personally it'd be more interesting to be able to get them on
> > > demand (eg, from a web page) than e-mailed, or e-mailed by a human
> > > (possibly with fixes!).  The kernelci.org reporting does a lot of this
> > > but doesn't cover anything except raw compiler warnings.
> 
> > It should be mostly equivalent if you direct such emails to a local
> > mbox and check it on demand. :)
> 
> Well, they don't go away automatically when fixed then...

You may just check the latest build compete notification email.
It'll contain the latest information.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-07 17:18   ` Mark Brown
                       ` (2 preceding siblings ...)
  2015-07-07 17:52     ` Guenter Roeck
@ 2015-07-20 15:53     ` Mel Gorman
  2015-07-20 16:39       ` Shuah Khan
  3 siblings, 1 reply; 45+ messages in thread
From: Mel Gorman @ 2015-07-20 15:53 UTC (permalink / raw)
  To: Mark Brown
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Dan Carpenter

On Tue, Jul 07, 2015 at 06:18:20PM +0100, Mark Brown wrote:
> On Tue, Jul 07, 2015 at 08:25:21AM -0700, Guenter Roeck wrote:
> > On 07/07/2015 02:24 AM, Mark Brown wrote:
> 
> > >The main things I'm aware of that are happening at the minute are
> > >kselftest development, the 0day tester, plus kernelci.org and the other
> > >build and boot/test bots that are running against various trees.
> 
> > Maybe list all known ones as a start ?
> 
> Off the top of my head the automated ones I'm aware of are Olof's build
> & boot test, Dan running smatch and I think some other static analysis
> stuff, someone (not sure who?) running some coccinelle stuff, Coverity
> and I've got a builder too.
> 

There is also Marvin which has existed in some shape or form since November
2013. It checks once a month for new releases and executes a battery of
tests on them across a range of machines. The primary purpose of it is
performance verification and the tests are typically more complex than
what I'd expect to see in kselftest. There are small amounts of overlap
with 0-day but generally it's expected that Marvin runs tests that are more
long-lived. Unlike 0-day, it also does not automatically notify people about
regressions as some verification work is often required and I did not want
it generating noise in any inbox.  Technically, it does support automatic
bisection but it's something I trigger manually when I confirm a problem
is a performance regression and cannot quickly identify the root cause.

It actually has been publishing reports for several months
now but I never mentioned it on the lists. I wrote up
some details after reading this thread and posted it at
http://www.csn.ul.ie/~mel/blog/index.php?/archives/23-Continual-testing-of-mainline-kernels.html

If there was a workshop on testing then I'd be interested in attending and
discussing what Marvin does if there was interest. Right now, performance
tends to my area of interest so I'd be interesting is discussing if there
are areas we are continually getting worse at that are slipping through
the cracks. Chris proposed a topic in this general area that I think would
be useful. I've only started looking at mainline kernel performance again
recently and right now, I'm not aware of a single area where we are getting
consistently worse. More commonly I see cases where we create problems
and then later cover them up by fixing something else in the general
area. Any time I find problems, it's a simple matter of programming and
time to fix them but it'd be useful to hear what other peoples recent
experiences have been.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] Testing
  2015-07-20 15:53     ` Mel Gorman
@ 2015-07-20 16:39       ` Shuah Khan
  0 siblings, 0 replies; 45+ messages in thread
From: Shuah Khan @ 2015-07-20 16:39 UTC (permalink / raw)
  To: Mel Gorman, Mark Brown
  Cc: Shuah Khan, Kevin Hilman, ksummit-discuss, grant, Tyler Baker,
	Shuah Khan, Dan Carpenter

On 07/20/2015 09:53 AM, Mel Gorman wrote:
> On Tue, Jul 07, 2015 at 06:18:20PM +0100, Mark Brown wrote:
>> On Tue, Jul 07, 2015 at 08:25:21AM -0700, Guenter Roeck wrote:
>>> On 07/07/2015 02:24 AM, Mark Brown wrote:
>>
>>>> The main things I'm aware of that are happening at the minute are
>>>> kselftest development, the 0day tester, plus kernelci.org and the other
>>>> build and boot/test bots that are running against various trees.
>>
>>> Maybe list all known ones as a start ?
>>
>> Off the top of my head the automated ones I'm aware of are Olof's build
>> & boot test, Dan running smatch and I think some other static analysis
>> stuff, someone (not sure who?) running some coccinelle stuff, Coverity
>> and I've got a builder too.
>>
> 
> There is also Marvin which has existed in some shape or form since November
> 2013. It checks once a month for new releases and executes a battery of
> tests on them across a range of machines. The primary purpose of it is
> performance verification and the tests are typically more complex than
> what I'd expect to see in kselftest. There are small amounts of overlap
> with 0-day but generally it's expected that Marvin runs tests that are more
> long-lived. Unlike 0-day, it also does not automatically notify people about
> regressions as some verification work is often required and I did not want
> it generating noise in any inbox.  Technically, it does support automatic
> bisection but it's something I trigger manually when I confirm a problem
> is a performance regression and cannot quickly identify the root cause.
> 
> It actually has been publishing reports for several months
> now but I never mentioned it on the lists. I wrote up
> some details after reading this thread and posted it at
> http://www.csn.ul.ie/~mel/blog/index.php?/archives/23-Continual-testing-of-mainline-kernels.html
> 
> If there was a workshop on testing then I'd be interested in attending and
> discussing what Marvin does if there was interest. Right now, performance
> tends to my area of interest so I'd be interesting is discussing if there
> are areas we are continually getting worse at that are slipping through
> the cracks. Chris proposed a topic in this general area that I think would
> be useful. I've only started looking at mainline kernel performance again
> recently and right now, I'm not aware of a single area where we are getting
> consistently worse. More commonly I see cases where we create problems
> and then later cover them up by fixing something else in the general
> area. Any time I find problems, it's a simple matter of programming and
> time to fix them but it'd be useful to hear what other peoples recent
> experiences have been.
> 

If there is enough interest, I can pull together a session on testing
at the kernel summit. We can discuss kselftest future plans as well as
adding performance tests into the kselftest, and discuss kselftest use
in auotomated test environments.

thanks,
-- Shuahs

-- 
Shuah Khan
Sr. Linux Kernel Developer
Open Source Innovation Group
Samsung Research America (Silicon Valley)
shuahkh@osg.samsung.com | (970) 217-8978

^ permalink raw reply	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2015-07-20 16:39 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-07-07  9:24 [Ksummit-discuss] [CORE TOPIC] Testing Mark Brown
2015-07-07 13:02 ` Alexey Dobriyan
2015-07-07 13:14   ` Mark Brown
2015-07-07 18:47     ` Steven Rostedt
2015-07-07 20:46       ` Kees Cook
2015-07-07 22:02         ` Andy Lutomirski
2015-07-08 17:37           ` Mark Brown
2015-07-08 10:43       ` Mark Brown
2015-07-09 10:24       ` Masami Hiramatsu
2015-07-09 12:00         ` Steven Rostedt
2015-07-10 10:39         ` Alexey Dobriyan
2015-07-10 14:02           ` Shuah Khan
2015-07-10 14:28             ` Alexey Dobriyan
2015-07-10 15:05             ` Steven Rostedt
2015-07-10 15:54               ` Shuah Khan
2015-07-07 15:25 ` Guenter Roeck
2015-07-07 17:18   ` Mark Brown
2015-07-07 17:23     ` Julia Lawall
2015-07-07 17:24     ` Shuah Khan
2015-07-07 17:37       ` Guenter Roeck
2015-07-07 17:52     ` Guenter Roeck
2015-07-07 18:28       ` Mark Brown
2015-07-07 22:51       ` Peter Hüwe
2015-07-20 15:53     ` Mel Gorman
2015-07-20 16:39       ` Shuah Khan
2015-07-07 19:21   ` Geert Uytterhoeven
2015-07-08  7:54     ` Dan Carpenter
2015-07-08  8:37       ` Geert Uytterhoeven
2015-07-08 12:10         ` Jiri Kosina
2015-07-08 12:37           ` Josh Boyer
2015-07-08 17:32           ` Mark Brown
2015-07-12 10:21         ` Fengguang Wu
2015-07-08  9:52       ` Mark Brown
2015-07-12 11:15         ` Fengguang Wu
2015-07-13 18:34           ` Mark Brown
2015-07-14 14:22             ` Fengguang Wu
2015-07-14 15:38               ` Mark Brown
2015-07-15 14:21                 ` Fengguang Wu
2015-07-08  9:27   ` Michael Ellerman
2015-07-08 13:52     ` Guenter Roeck
2015-07-08 16:40       ` Kevin Hilman
2015-07-08 17:24         ` Guenter Roeck
2015-07-08 18:42           ` Kevin Hilman
2015-07-09  4:23         ` Michael Ellerman
2015-07-09 18:08           ` Guenter Roeck

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.