* [Ksummit-discuss] [CORE TOPIC] stable issues
@ 2014-05-04 11:19 Li Zefan
  2014-05-04 12:04 ` Guenter Roeck
                   ` (4 more replies)
  0 siblings, 5 replies; 51+ messages in thread
From: Li Zefan @ 2014-05-04 11:19 UTC (permalink / raw)
  To: ksummit-discuss; +Cc: lizf.kern

I've been dealing with stable kernels. There are some issues that I noticed
and may be worth discussing.

- Too many LTS kernels?

2.6.32  Willy Tarreau
3.2     Ben Hutchings
3.4     Greg
3.10    Greg
3.12    Jiri Slaby

Too many or not? Is it good or bad? One of the problems is the maintenance
burden. For example, DaveM has to prepare stable patches for 5 stable
kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.

- Equip Greg with a sub-maintainer?

I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
because Ben has been manually backporting patches which don't apply
cleanly, while Greg just doesn't have the time budget. Is it possible
to find a sub-maintainer to do this work?

- Are there still subsystems/maintainers not doing very well with stable stuff?

Once I looked into "git log --no-merges v3.4.. kernel/sched/rt.c": out of
22 commits, only 2 fixes had a stable tag, and in the end I backported 4 commits
to 3.4.x.
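
(For illustration, roughly the kind of commands behind such an audit; the
version range and path are just the ones from the example above:

    # everything that touched the file since v3.4, excluding merges
    git log --no-merges --oneline v3.4.. -- kernel/sched/rt.c

    # the subset already tagged for stable
    git log --no-merges --oneline --grep='stable@vger.kernel.org' \
        v3.4.. -- kernel/sched/rt.c

Anything in the first list but not in the second has to be reviewed by hand
to decide whether it is a fix worth backporting.)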

- Add a known_issues.txt?

There are stable rules for which patches are acceptable, and besides, a maintainer
may decide not to send a fix to stable for some reason, or an issue may simply
not be taken care of by anyone.

So how about adding a known_issues.txt? Anyone who needs to build their
own kernel based on an LTS may find it useful.

- Testing stable kernels

The testing of stable kernels when a new version is under review seems
quite limited. We have Dave's Trinity and Fengguang's 0day, but they
are run on mainline/for-next only. Would it be useful to also have them
run on stable kernels?


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 11:19 [Ksummit-discuss] [CORE TOPIC] stable issues Li Zefan
@ 2014-05-04 12:04 ` Guenter Roeck
  2014-05-04 12:54 ` Josh Boyer
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 51+ messages in thread
From: Guenter Roeck @ 2014-05-04 12:04 UTC (permalink / raw)
  To: Li Zefan, ksummit-discuss; +Cc: lizf.kern

On 05/04/2014 04:19 AM, Li Zefan wrote:
> I've been dealing with stable kernels. There are some issues that I noticed
> and may be worth discussing.
>
> - Too many LTS kernels?
>
> 2.6.32  Willy Tarreau
> 3.2     Ben Huchings
> 3.4     Greg
> 3.10    Greg
> 3.12    Jiry Slaby
>
> Too many or not? Is it good or bad? One of the problem is the maintenance
> burden. For example, DaveM has to prepare stable patches for 5 stable
> kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.
>
> - Equip Greg with a sub-maintainer?
>
> I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
> because Ben has been manually backporting patches which don't apply
> cleanly, while Greg just doesn't have the time budget. Is it possible
> that we find a sub-maintainer to do this work?
>
> - Are there still sub-systems/maintainers not doing very good in stable stuff?
>
> Once I looked into "git log --no-merges v3.4.. kernel/sched/rt.c", out of
> 22 commits, only 2 fixes have stable tag, and finally I backported 4 commits
> to 3.4.x.
>
> - Add a known_issues.txt?
>
> There are stable rules to what patch is acceptable, and besides a maintainer
> may decide not send a fix for stable for some reason, or an issue is taken
> care by no one.
>
> So how about add a known_issues.txt, then anyone who needs to bulid his
> own kernel based on LTS may find it useful.
>
> - Testing stable kernels
>
> The testing of stable kernels when a new version is under review seems
> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> are run on mainline/for-next only. Would be useful to also have them
> run on stable kernels?

For my part I would love to do that, I just don't have the time to set it up.

Guenter


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 11:19 [Ksummit-discuss] [CORE TOPIC] stable issues Li Zefan
  2014-05-04 12:04 ` Guenter Roeck
@ 2014-05-04 12:54 ` Josh Boyer
  2014-05-04 14:26   ` Guenter Roeck
                     ` (2 more replies)
  2014-05-04 15:35 ` Ben Hutchings
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 51+ messages in thread
From: Josh Boyer @ 2014-05-04 12:54 UTC (permalink / raw)
  To: Li Zefan; +Cc: lizf.kern, ksummit-discuss

On Sun, May 4, 2014 at 7:19 AM, Li Zefan <lizefan@huawei.com> wrote:
> I've been dealing with stable kernels. There are some issues that I noticed
> and may be worth discussing.
>
> - Too many LTS kernels?
>
> 2.6.32  Willy Tarreau
> 3.2     Ben Huchings
> 3.4     Greg
> 3.10    Greg
> 3.12    Jiry Slaby
>
> Too many or not? Is it good or bad? One of the problem is the maintenance
> burden. For example, DaveM has to prepare stable patches for 5 stable
> kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.

To be fair, he doesn't have to.  He chooses to, and it's great.

> - Equip Greg with a sub-maintainer?
>
> I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
> because Ben has been manually backporting patches which don't apply
> cleanly, while Greg just doesn't have the time budget. Is it possible
> that we find a sub-maintainer to do this work?

I think you've already shown exactly how we can handle that.  It just
takes someone willing to do the work to dig in.  Greg seemed very
pleased with the patches for 3.4 being sent to him, and I know he's
thanked me each time I send a report of what Fedora is carrying on top
of a stable release.  Do we need something more formal than what
either of us have already done (or continue to do)?

> - Are there still sub-systems/maintainers not doing very good in stable stuff?
>
> Once I looked into "git log --no-merges v3.4.. kernel/sched/rt.c", out of
> 22 commits, only 2 fixes have stable tag, and finally I backported 4 commits
> to 3.4.x.

This one is a problem.  I actually think your "sub-maintainer" idea
applies more here than it does to a particular stable release.  If
people were working through each subsystem and finding patches that
should go back to stable, even if they aren't marked as such
initially, then we'd be better off overall.

> - Add a known_issues.txt?
>
> There are stable rules to what patch is acceptable, and besides a maintainer
> may decide not send a fix for stable for some reason, or an issue is taken
> care by no one.
>
> So how about add a known_issues.txt, then anyone who needs to bulid his
> own kernel based on LTS may find it useful.

One per subsystem, or one per stable kernel?  I'm not sure which you mean.

> - Testing stable kernels
>
> The testing of stable kernels when a new version is under review seems
> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> are run on mainline/for-next only. Would be useful to also have them
> run on stable kernels?

Yes, but I don't think that's the main problem.  The regressions we
see in stable releases tend to come from patches that trinity and 0day
don't cover.  Things like backlights not working, or specific devices
acting strangely, etc.

Put another way, if trinity and 0day are running on mainline and
linux-next already, and we still see those issues introduced into a
stable kernel later, then trinity and 0day didn't find the original
problem to begin with.

josh


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 12:54 ` Josh Boyer
@ 2014-05-04 14:26   ` Guenter Roeck
  2014-05-05  0:37     ` Josh Boyer
  2014-05-05  2:47   ` Li Zefan
  2014-05-05  3:22   ` Greg KH
  2 siblings, 1 reply; 51+ messages in thread
From: Guenter Roeck @ 2014-05-04 14:26 UTC (permalink / raw)
  To: Josh Boyer, Li Zefan; +Cc: lizf.kern, ksummit-discuss

On 05/04/2014 05:54 AM, Josh Boyer wrote:
> On Sun, May 4, 2014 at 7:19 AM, Li Zefan <lizefan@huawei.com> wrote:
>> I've been dealing with stable kernels. There are some issues that I noticed
>> and may be worth discussing.
>>
>> - Too many LTS kernels?
>>
>> 2.6.32  Willy Tarreau
>> 3.2     Ben Huchings
>> 3.4     Greg
>> 3.10    Greg
>> 3.12    Jiry Slaby
>>
>> Too many or not? Is it good or bad? One of the problem is the maintenance
>> burden. For example, DaveM has to prepare stable patches for 5 stable
>> kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.
>
> To be fair, he doesn't have to.  He chooses to, and it's great.
>
>> - Equip Greg with a sub-maintainer?
>>
>> I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
>> because Ben has been manually backporting patches which don't apply
>> cleanly, while Greg just doesn't have the time budget. Is it possible
>> that we find a sub-maintainer to do this work?
>
> I think you've already shown exactly how we can handle that.  It just
> takes someone willing to do the work to dig in.  Greg seemed very
> pleased with the patches for 3.4 being sent to him, and I know he's
> thanked me each time I send a report of what Fedora is carrying on top
> of a stable release.  Do we need something more formal that what
> either of us have already done (or continue to do)?
>
>> - Are there still sub-systems/maintainers not doing very good in stable stuff?
>>
>> Once I looked into "git log --no-merges v3.4.. kernel/sched/rt.c", out of
>> 22 commits, only 2 fixes have stable tag, and finally I backported 4 commits
>> to 3.4.x.
>
> This one is a problem.  I actually think your "sub-maintainer" idea
> applies more here than it does to a particular stable release.  If
> people were working through each subsystem and finding patches that
> should go back to stable, even if they aren't marked as such
> initially, then we'd be better off overall.
>
>> - Add a known_issues.txt?
>>
>> There are stable rules to what patch is acceptable, and besides a maintainer
>> may decide not send a fix for stable for some reason, or an issue is taken
>> care by no one.
>>
>> So how about add a known_issues.txt, then anyone who needs to bulid his
>> own kernel based on LTS may find it useful.
>
> One per subsystem, or one per stable kernel?  I'm not sure which you mean.
>
>> - Testing stable kernels
>>
>> The testing of stable kernels when a new version is under review seems
>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>> are run on mainline/for-next only. Would be useful to also have them
>> run on stable kernels?
>
> Yes, but I don't think that's the main problem.  The regressions we
> see in stable releases tend to come from patches that trinity and 0day
> don't cover.  Things like backlights not working, or specific devices
> acting strangely, etc.
>
> Put another way, if trinity and 0day are running on mainline and
> linux-next already, and we still see those issues introduced into a
> stable kernel later, then trinity and 0day didn't find the original
> problem to being with.
>

Not necessarily. Sometimes bugs are introduced by missing patches or
bad/incomplete backports. Sure, I catch the compile errors, and others
run basic real-system testing, at least with x86, but we could use more
run-time testing, especially on non-x86 architectures.

Guenter


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 11:19 [Ksummit-discuss] [CORE TOPIC] stable issues Li Zefan
  2014-05-04 12:04 ` Guenter Roeck
  2014-05-04 12:54 ` Josh Boyer
@ 2014-05-04 15:35 ` Ben Hutchings
  2014-05-04 15:45   ` Guenter Roeck
  2014-05-05  3:00   ` Li Zefan
  2014-05-05  1:03 ` Olof Johansson
  2014-05-07  2:49 ` Masami Hiramatsu
  4 siblings, 2 replies; 51+ messages in thread
From: Ben Hutchings @ 2014-05-04 15:35 UTC (permalink / raw)
  To: Li Zefan; +Cc: lizf.kern, ksummit-discuss


On Sun, 2014-05-04 at 19:19 +0800, Li Zefan wrote:
> I've been dealing with stable kernels. There are some issues that I noticed
> and may be worth discussing.
> 
> - Too many LTS kernels?

Or in another sense, maybe too few?  Less than 5 years' support is
hardly long-term, though I would not volunteer to backport for that long.

> 2.6.32  Willy Tarreau
> 3.2     Ben Huchings
> 3.4     Greg
> 3.10    Greg
> 3.12    Jiry Slaby
> 
> Too many or not? Is it good or bad? One of the problem is the maintenance
> burden. For example, DaveM has to prepare stable patches for 5 stable
> kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.
> 
> - Equip Greg with a sub-maintainer?
> 
> I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
> because Ben has been manually backporting patches which don't apply
> cleanly, while Greg just doesn't have the time budget.
>
> Is it possible that we find a sub-maintainer to do this work?

This is being addressed by others.

[...]
> - Testing stable kernels
> 
> The testing of stable kernels when a new version is under review seems
> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> are run on mainline/for-next only. Would be useful to also have them
> run on stable kernels?

According to my notes from Fengguang's talk, his robot excludes any
branch with a very old commit.  If that meant checking *commit* date,
not author date, then stable branches would already get tested as soon
as they are pushed to git.kernel.org.  As that doesn't seem to be
happening, it seems like the test must be based on author date and
should be changed to commit date.  But also, we would need to commit
each rc patch series to a git branch.
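
(To illustrate the distinction, one can compare the two dates on the tip
commit of a stable branch; the branch name here is only an example:

    git log -1 --format='author date:    %ad%ncommitter date: %cd' linux-3.10.y

On a freshly committed stable queue the committer date is recent even though
the author dates of the backported patches may be old.)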

Ben.

-- 
Ben Hutchings
Everything should be made as simple as possible, but not simpler.
                                                           - Albert Einstein



* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 15:35 ` Ben Hutchings
@ 2014-05-04 15:45   ` Guenter Roeck
  2014-05-05  3:00   ` Li Zefan
  1 sibling, 0 replies; 51+ messages in thread
From: Guenter Roeck @ 2014-05-04 15:45 UTC (permalink / raw)
  To: Ben Hutchings, Li Zefan; +Cc: lizf.kern, ksummit-discuss

On 05/04/2014 08:35 AM, Ben Hutchings wrote:
> On Sun, 2014-05-04 at 19:19 +0800, Li Zefan wrote:
>> I've been dealing with stable kernels. There are some issues that I noticed
>> and may be worth discussing.
>>
>> - Too many LTS kernels?
>
> Or in another sense, maybe too few?  Less than 5 years' support is
> hardly long-term, though I would not volunteer for backporting so far.
>
>> 2.6.32  Willy Tarreau
>> 3.2     Ben Huchings
>> 3.4     Greg
>> 3.10    Greg
>> 3.12    Jiry Slaby
>>
>> Too many or not? Is it good or bad? One of the problem is the maintenance
>> burden. For example, DaveM has to prepare stable patches for 5 stable
>> kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.
>>
>> - Equip Greg with a sub-maintainer?
>>
>> I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
>> because Ben has been manually backporting patches which don't apply
>> cleanly, while Greg just doesn't have the time budget.
>>
>> Is it possible that we find a sub-maintainer to do this work?
>
> This is being addressed by others.
>
> [...]
>> - Testing stable kernels
>>
>> The testing of stable kernels when a new version is under review seems
>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>> are run on mainline/for-next only. Would be useful to also have them
>> run on stable kernels?
>
> According to my notes from Fengguang's talk, his robot excludes any
> branch with a very old commit.  If that meant checking *commit* date,
> not author date, then stable branches would already get tested as soon
> as they are pushed to git.kernel.org.  As that doesn't seem to be
> happening, it seems like the test must be based on author date and
> should be changed to commit date.  But also, we would need to commit
> each rc patch series to a git branch.
>

Since I create those branches already for my testing, I could publish them
on kernel.org (pick a name for the repository) if there is interest.
That would imply repeated rebases, though, since the branches reflect
the quilt history, which may change if commits are added or removed.
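
(For anyone who wants to build such a branch locally, something along these
lines should work; the tag, branch name and path are only examples, assuming
a checkout of the stable tree plus a copy of Greg's stable-queue:

    git checkout -b stable-3.14-queue v3.14.2
    git quiltimport --patches /path/to/stable-queue/queue-3.14

As noted above, the result has to be rebased whenever the quilt series
changes.)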

Guenter


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 14:26   ` Guenter Roeck
@ 2014-05-05  0:37     ` Josh Boyer
  2014-05-05  3:09       ` Li Zefan
                         ` (2 more replies)
  0 siblings, 3 replies; 51+ messages in thread
From: Josh Boyer @ 2014-05-05  0:37 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: lizf.kern, ksummit-discuss

On Sun, May 4, 2014 at 10:26 AM, Guenter Roeck <linux@roeck-us.net> wrote:
> On 05/04/2014 05:54 AM, Josh Boyer wrote:>
>>> - Testing stable kernels
>>>
>>> The testing of stable kernels when a new version is under review seems
>>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>>> are run on mainline/for-next only. Would be useful to also have them
>>> run on stable kernels?
>>
>>
>> Yes, but I don't think that's the main problem.  The regressions we
>> see in stable releases tend to come from patches that trinity and 0day
>> don't cover.  Things like backlights not working, or specific devices
>> acting strangely, etc.
>>
>> Put another way, if trinity and 0day are running on mainline and
>> linux-next already, and we still see those issues introduced into a
>> stable kernel later, then trinity and 0day didn't find the original
>> problem to being with.
>>
>
> Not necessarily. Sometimes bugs are introduced by missing patches or
> bad/incoomplete backports. Sure, I catch the compile errors, and others
> run basic real-system testing, at least with x86, but we could use more
> run-time testing, especially on non-x86 architectures.

Right, I agree we should run more testing on stable.  I just don't
think it will result in a massive number of issues being found.  Trinity and
0day aren't going to have the same impact on stable kernels that they
do upstream.  I'm simply setting expectations.

josh


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 11:19 [Ksummit-discuss] [CORE TOPIC] stable issues Li Zefan
                   ` (2 preceding siblings ...)
  2014-05-04 15:35 ` Ben Hutchings
@ 2014-05-05  1:03 ` Olof Johansson
  2014-05-07  2:49 ` Masami Hiramatsu
  4 siblings, 0 replies; 51+ messages in thread
From: Olof Johansson @ 2014-05-05  1:03 UTC (permalink / raw)
  To: Li Zefan; +Cc: lizf.kern, ksummit-discuss

On Sun, May 4, 2014 at 4:19 AM, Li Zefan <lizefan@huawei.com> wrote:

> - Testing stable kernels
>
> The testing of stable kernels when a new version is under review seems
> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> are run on mainline/for-next only. Would be useful to also have them
> run on stable kernels?

I run Greg's queue through my builder for ARM configs, and they boot
on my and Kevin Hilman's ARM board farms. Of course, we don't have
extensive test coverage on those but at least they see the very
basics.


-Olof


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 12:54 ` Josh Boyer
  2014-05-04 14:26   ` Guenter Roeck
@ 2014-05-05  2:47   ` Li Zefan
  2014-05-05 13:41     ` Theodore Ts'o
  2014-05-05  3:22   ` Greg KH
  2 siblings, 1 reply; 51+ messages in thread
From: Li Zefan @ 2014-05-05  2:47 UTC (permalink / raw)
  To: Josh Boyer; +Cc: lizf.kern, ksummit-discuss

On 2014/5/4 20:54, Josh Boyer wrote:
> On Sun, May 4, 2014 at 7:19 AM, Li Zefan <lizefan@huawei.com> wrote:
>> I've been dealing with stable kernels. There are some issues that I noticed
>> and may be worth discussing.
>>
>> - Too many LTS kernels?
>>
>> 2.6.32  Willy Tarreau
>> 3.2     Ben Huchings
>> 3.4     Greg
>> 3.10    Greg
>> 3.12    Jiry Slaby
>>
>> Too many or not? Is it good or bad? One of the problem is the maintenance
>> burden. For example, DaveM has to prepare stable patches for 5 stable
>> kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.
> 
> To be fair, he doesn't have to.  He chooses to, and it's great.
> 

Yeah, but we can't expect other maintainers to do this. As Greg has been
emphasizing, we'd want to add as little burden as possible for subsystem
maintainers. With this in mind, focusing on fewer LTS kernels might make
sense?

>> - Equip Greg with a sub-maintainer?
>>
>> I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
>> because Ben has been manually backporting patches which don't apply
>> cleanly, while Greg just doesn't have the time budget. Is it possible
>> that we find a sub-maintainer to do this work?
> 
> I think you've already shown exactly how we can handle that.  It just
> takes someone willing to do the work to dig in.  Greg seemed very
> pleased with the patches for 3.4 being sent to him, and I know he's
> thanked me each time I send a report of what Fedora is carrying on top
> of a stable release.

Yeah, but it still ended up with hundreds of fixes missing, so I was
wondering whether we can do better.

>  Do we need something more formal that what
> either of us have already done (or continue to do)?
> 

If someone is dedicated to doing this work, Greg could work with them in this
way: whenever Greg's script finds a patch that can't be applied to some stable
kernel, a notice would be sent to the sub-maintainer, who would then do
the manual check and backport.
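
(A minimal sketch of what that hand-off might look like, purely hypothetical,
with the address and file name made up:

    for sha in $(cat stable-candidates.txt); do
        git cherry-pick -x -s "$sha" || {
            git cherry-pick --abort
            echo "$sha does not apply cleanly to 3.4.y" |
                mail -s "backport needed: $sha" sub-maintainer@example.org
        }
    done

The clean cherry-picks go in as usual; only the failures end up in the
sub-maintainer's queue.)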

>> - Are there still sub-systems/maintainers not doing very good in stable stuff?
>>
>> Once I looked into "git log --no-merges v3.4.. kernel/sched/rt.c", out of
>> 22 commits, only 2 fixes have stable tag, and finally I backported 4 commits
>> to 3.4.x.
> 
> This one is a problem.  I actually think your "sub-maintainer" idea
> applies more here than it does to a particular stable release.  If
> people were working through each subsystem and finding patches that
> should go back to stable, even if they aren't marked as such
> initially, then we'd be better off overall.
> 

I believe what's more important is that all subsystem maintainers keep stable
in mind and tag fixes for stable properly.

Digging into git-log to find fixes for stable trees isn't fun or productive
most of the time.

>> - Add a known_issues.txt?
>>
>> There are stable rules to what patch is acceptable, and besides a maintainer
>> may decide not send a fix for stable for some reason, or an issue is taken
>> care by no one.
>>
>> So how about add a known_issues.txt, then anyone who needs to bulid his
>> own kernel based on LTS may find it useful.
> 
> One per subsystem, or one per stable kernel?  I'm not sure which you mean.
> 

For example, we found that using nfs in containers can lead to an oops in 3.4.x,
and fixing that required real effort; it took us months to finally get it fixed
(actually we're still waiting for Greg to pick up the patchset).

What if we had chosen to ignore it because we don't use nfs that way? Then some
day someone else might run into this issue.

If there were a known_issues list, people could try to fix some issues for LTS
kernels, or fix them in their own kernels if the fixes aren't suitable for LTS.

We might even document performance issues which are already addressed in newer
kernels.

>> - Testing stable kernels
>>
>> The testing of stable kernels when a new version is under review seems
>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>> are run on mainline/for-next only. Would be useful to also have them
>> run on stable kernels?
> 
> Yes, but I don't think that's the main problem.  The regressions we
> see in stable releases tend to come from patches that trinity and 0day
> don't cover.  Things like backlights not working, or specific devices
> acting strangely, etc.
> 
> Put another way, if trinity and 0day are running on mainline and
> linux-next already, and we still see those issues introduced into a
> stable kernel later, then trinity and 0day didn't find the original
> problem to being with.
> 

Yeah, most of the fixes going into stable trees are not changes to the core kernel,
but backports can be missing or incomplete, and those bugs can be disastrous,
so running trinity and 0day can be useful.


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 15:35 ` Ben Hutchings
  2014-05-04 15:45   ` Guenter Roeck
@ 2014-05-05  3:00   ` Li Zefan
  1 sibling, 0 replies; 51+ messages in thread
From: Li Zefan @ 2014-05-05  3:00 UTC (permalink / raw)
  To: Ben Hutchings; +Cc: lizf.kern, ksummit-discuss

On 2014/5/4 23:35, Ben Hutchings wrote:
> On Sun, 2014-05-04 at 19:19 +0800, Li Zefan wrote:
>> I've been dealing with stable kernels. There are some issues that I noticed
>> and may be worth discussing.
>>
>> - Too many LTS kernels?
> 
> Or in another sense, maybe too few?  Less than 5 years' support is
> hardly long-term, though I would not volunteer for backporting so far.
> 

Hah, good point. I don't know whether anyone has complained when an LTS was
declared EOL by Greg.

>> 2.6.32  Willy Tarreau
>> 3.2     Ben Huchings
>> 3.4     Greg
>> 3.10    Greg
>> 3.12    Jiry Slaby
>>
>> Too many or not? Is it good or bad? One of the problem is the maintenance
>> burden. For example, DaveM has to prepare stable patches for 5 stable
>> kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.
>>
>> - Equip Greg with a sub-maintainer?
>>
>> I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
>> because Ben has been manually backporting patches which don't apply
>> cleanly, while Greg just doesn't have the time budget.
>>
>> Is it possible that we find a sub-maintainer to do this work?
> 
> This is being addressed by others.
> 

As I said, this still ended up with hundreds of fixes missing from 3.4.x,
which we have addressed, but we don't have time to do the same
for 3.10.x.

> [...]
>> - Testing stable kernels
>>
>> The testing of stable kernels when a new version is under review seems
>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>> are run on mainline/for-next only. Would be useful to also have them
>> run on stable kernels?
> 
> According to my notes from Fengguang's talk, his robot excludes any
> branch with a very old commit.  If that meant checking *commit* date,
> not author date, then stable branches would already get tested as soon
> as they are pushed to git.kernel.org.  As that doesn't seem to be
> happening, it seems like the test must be based on author date and
> should be changed to commit date.  But also, we would need to commit
> each rc patch series to a git branch.
> 

I think it should be quite easy for Fengguang to extend his test framework
to test each new stable release, and then we might also detect performance
regressions in LTS kernels via 0day.


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05  0:37     ` Josh Boyer
@ 2014-05-05  3:09       ` Li Zefan
  2014-05-05  3:47       ` Guenter Roeck
  2014-05-05  6:10       ` Michal Simek
  2 siblings, 0 replies; 51+ messages in thread
From: Li Zefan @ 2014-05-05  3:09 UTC (permalink / raw)
  To: Josh Boyer; +Cc: ksummit-discuss, lizf.kern

On 2014/5/5 8:37, Josh Boyer wrote:
> On Sun, May 4, 2014 at 10:26 AM, Guenter Roeck <linux@roeck-us.net> wrote:
>> On 05/04/2014 05:54 AM, Josh Boyer wrote:>
>>>> - Testing stable kernels
>>>>
>>>> The testing of stable kernels when a new version is under review seems
>>>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>>>> are run on mainline/for-next only. Would be useful to also have them
>>>> run on stable kernels?
>>>
>>>
>>> Yes, but I don't think that's the main problem.  The regressions we
>>> see in stable releases tend to come from patches that trinity and 0day
>>> don't cover.  Things like backlights not working, or specific devices
>>> acting strangely, etc.
>>>
>>> Put another way, if trinity and 0day are running on mainline and
>>> linux-next already, and we still see those issues introduced into a
>>> stable kernel later, then trinity and 0day didn't find the original
>>> problem to being with.
>>>
>>
>> Not necessarily. Sometimes bugs are introduced by missing patches or
>> bad/incoomplete backports. Sure, I catch the compile errors, and others
>> run basic real-system testing, at least with x86, but we could use more
>> run-time testing, especially on non-x86 architectures.
> 
> Right, I agreed we should run more testing on stable.  I just don't
> think it will result in a massive amount of issues found.

Of course, otherwise our stable trees can't really be called stable. ;)

> Trinity and
> 0day aren't going to have the same impact on stable kernels that they
> do upstream.  Simply setting expectations.
> 


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 12:54 ` Josh Boyer
  2014-05-04 14:26   ` Guenter Roeck
  2014-05-05  2:47   ` Li Zefan
@ 2014-05-05  3:22   ` Greg KH
  2 siblings, 0 replies; 51+ messages in thread
From: Greg KH @ 2014-05-05  3:22 UTC (permalink / raw)
  To: Josh Boyer; +Cc: lizf.kern, ksummit-discuss

On Sun, May 04, 2014 at 08:54:31AM -0400, Josh Boyer wrote:
> On Sun, May 4, 2014 at 7:19 AM, Li Zefan <lizefan@huawei.com> wrote:
> > I've been dealing with stable kernels. There are some issues that I noticed
> > and may be worth discussing.
> >
> > - Too many LTS kernels?
> >
> > 2.6.32  Willy Tarreau
> > 3.2     Ben Huchings
> > 3.4     Greg
> > 3.10    Greg
> > 3.12    Jiry Slaby
> >
> > Too many or not? Is it good or bad? One of the problem is the maintenance
> > burden. For example, DaveM has to prepare stable patches for 5 stable
> > kernels: 3.2, 3.4, 3.10, 3.12 and 3.14.
> 
> To be fair, he doesn't have to.  He chooses to, and it's great.
> 
> > - Equip Greg with a sub-maintainer?
> >
> > I found 3.4.x lacked hundreds of fixes compared to 3.2.x. It's mainly
> > because Ben has been manually backporting patches which don't apply
> > cleanly, while Greg just doesn't have the time budget. Is it possible
> > that we find a sub-maintainer to do this work?
> 
> I think you've already shown exactly how we can handle that.  It just
> takes someone willing to do the work to dig in.  Greg seemed very
> pleased with the patches for 3.4 being sent to him, and I know he's
> thanked me each time I send a report of what Fedora is carrying on top
> of a stable release.  Do we need something more formal that what
> either of us have already done (or continue to do)?

I am really happy with people helping me out, and have asked for help in
the past.

The 3.4 patch work that has been going on is a great example of that,
and one that I am behind on catching up with due to travel, sorry.

If people want to come up with other ways of helping me, I'm all for it :)

thanks,

greg k-h


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05  0:37     ` Josh Boyer
  2014-05-05  3:09       ` Li Zefan
@ 2014-05-05  3:47       ` Guenter Roeck
  2014-05-05 11:31         ` Jason Cooper
  2014-05-05  6:10       ` Michal Simek
  2 siblings, 1 reply; 51+ messages in thread
From: Guenter Roeck @ 2014-05-05  3:47 UTC (permalink / raw)
  To: Josh Boyer; +Cc: lizf.kern, ksummit-discuss

On 05/04/2014 05:37 PM, Josh Boyer wrote:
> On Sun, May 4, 2014 at 10:26 AM, Guenter Roeck <linux@roeck-us.net> wrote:
>> On 05/04/2014 05:54 AM, Josh Boyer wrote:>
>>>> - Testing stable kernels
>>>>
>>>> The testing of stable kernels when a new version is under review seems
>>>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>>>> are run on mainline/for-next only. Would be useful to also have them
>>>> run on stable kernels?
>>>
>>>
>>> Yes, but I don't think that's the main problem.  The regressions we
>>> see in stable releases tend to come from patches that trinity and 0day
>>> don't cover.  Things like backlights not working, or specific devices
>>> acting strangely, etc.
>>>
>>> Put another way, if trinity and 0day are running on mainline and
>>> linux-next already, and we still see those issues introduced into a
>>> stable kernel later, then trinity and 0day didn't find the original
>>> problem to being with.
>>>
>>
>> Not necessarily. Sometimes bugs are introduced by missing patches or
>> bad/incoomplete backports. Sure, I catch the compile errors, and others
>> run basic real-system testing, at least with x86, but we could use more
>> run-time testing, especially on non-x86 architectures.
>
> Right, I agreed we should run more testing on stable.  I just don't
> think it will result in a massive amount of issues found.  Trinity and
> 0day aren't going to have the same impact on stable kernels that they
> do upstream.  Simply setting expectations.
>

Correct, it depends on expectations, and my expectations for stable releases
are substantially higher than those for baseline releases. I do find quite a
number of compile errors, most of the time even before a stable release is
sent out for review. I would consider each of those critical. I don't find
many runtime errors, simply because my qemu tests are just along the lines
of "it boots". But each runtime error found would, in my opinion, be critical
by definition; stable releases simply should not introduce new bugs, period.

This may be seen as somewhat strong definition of the term "severe",
but in my work environment the attitude is to never update the kernel under
any circumstances. Or, in other words, it is quite hostile to someone who
advocates following upstream kernel releases. Each new bug, as minor as it
may be in a practical sense, is seen as argument (or ammunition) against
kernel updates. Note that this specifically includes performance regressions,
as minor as they may be. Given that, I would love to see Fengguang's
performance tests run on stable releases, simply because that would give me
confidence (and proof) that no performance regressions were introduced.

Guenter


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05  0:37     ` Josh Boyer
  2014-05-05  3:09       ` Li Zefan
  2014-05-05  3:47       ` Guenter Roeck
@ 2014-05-05  6:10       ` Michal Simek
  2 siblings, 0 replies; 51+ messages in thread
From: Michal Simek @ 2014-05-05  6:10 UTC (permalink / raw)
  To: Josh Boyer; +Cc: ksummit-discuss, lizf.kern


On 05/05/2014 02:37 AM, Josh Boyer wrote:
> On Sun, May 4, 2014 at 10:26 AM, Guenter Roeck <linux@roeck-us.net> wrote:
>> On 05/04/2014 05:54 AM, Josh Boyer wrote:>
>>>> - Testing stable kernels
>>>>
>>>> The testing of stable kernels when a new version is under review seems
>>>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>>>> are run on mainline/for-next only. Would be useful to also have them
>>>> run on stable kernels?
>>>
>>>
>>> Yes, but I don't think that's the main problem.  The regressions we
>>> see in stable releases tend to come from patches that trinity and 0day
>>> don't cover.  Things like backlights not working, or specific devices
>>> acting strangely, etc.
>>>
>>> Put another way, if trinity and 0day are running on mainline and
>>> linux-next already, and we still see those issues introduced into a
>>> stable kernel later, then trinity and 0day didn't find the original
>>> problem to being with.
>>>
>>
>> Not necessarily. Sometimes bugs are introduced by missing patches or
>> bad/incoomplete backports. Sure, I catch the compile errors, and others
>> run basic real-system testing, at least with x86, but we could use more
>> run-time testing, especially on non-x86 architectures.
> 
> Right, I agreed we should run more testing on stable.  I just don't
> think it will result in a massive amount of issues found.  Trinity and
> 0day aren't going to have the same impact on stable kernels that they
> do upstream.  Simply setting expectations.

We should do more testing on linux-next, or on individual branches before they
reach linux-next and the mainline kernel, to ensure that new bugs are not
introduced in mainline.
When we have fewer bugs in the baseline kernel, there will be fewer patches
for stable.

A lot of architectures/SoCs can run under Qemu and other simulators.
The zero-day testing system is doing good build coverage, which I believe
is very useful for everybody.
Doing the same, or extending it with testing on Qemu/simulators, would be
the next step. Then every developer is able to get a message that
their patch is breaking something for someone else.

In general, doing more automated testing via one unified framework, so that
all new patches are properly tested, seems reasonable to me.

Thanks,
Michal

-- 
Michal Simek, Ing. (M.Eng), OpenPGP -> KeyID: FE3D1F91
w: www.monstr.eu p: +42-0-721842854
Maintainer of Linux kernel - Microblaze cpu - http://www.monstr.eu/fdt/
Maintainer of Linux kernel - Xilinx Zynq ARM architecture
Microblaze U-BOOT custodian and responsible for u-boot arm zynq platform





* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05  3:47       ` Guenter Roeck
@ 2014-05-05 11:31         ` Jason Cooper
  2014-05-05 13:40           ` Guenter Roeck
  0 siblings, 1 reply; 51+ messages in thread
From: Jason Cooper @ 2014-05-05 11:31 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On Sun, May 04, 2014 at 08:47:06PM -0700, Guenter Roeck wrote:
> This may be seen as somewhat strong definition of the term "severe",
> but in my work environment the attitude is to never update the kernel under
> any circumstances. Or, in other words, it is quite hostile to someone who
> advocates following upstream kernel releases. Each new bug, as minor as it
> may be in a practical sense, is seen as argument (or ammunition) against
> kernel updates. Note that this specifically includes performance regressions,
> as minor as they may be. Given that, I would love to see Fengguang's
> performance tests run on stable releases, simply because that would give me
> confidence (and proof) that no performance regressions were introduced.

Along this line, I keep coming back to an idea that I really need to
implement.  Say your shop is running v3.12.3, and you'd like to migrate
to v3.12.7 because of a bugfix for your subsystem.

I imagine it would make the argument easier if you could quantify the
changes from v3.12.3 to v3.12.7 relevant to your kernel config.  eg:

$ git diff v3.12.3..v3.12.7 | ./scripts/diff-filter mydefconfig

(no, diff-filter doesn't exist, yet)

I could also see using ./scripts/objdiff for this as well.  Anything
that would help the engineer quantify the differences between the two
releases so he could ask the question, "Show me *which* change you're
uncomfortable with."

That's a much better position to be in than, "I swear, the -stable
process is legit.  You can trust a bunch of people you've never met who
won't suffer any repercussions if our product fails."

This assumes a fairly minimal config, of course.  ;-)
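
(Until something like diff-filter exists, a crude approximation is possible
with patchutils' filterdiff, restricting the diff to the directories your
config actually builds; the paths below are just an example:

    # what changed at all
    git diff --stat v3.12.3..v3.12.7

    # only the hunks that can end up in an image built from mydefconfig,
    # say one that only cares about net/ and drivers/net/
    git diff v3.12.3..v3.12.7 | filterdiff -p1 -i 'net/*' -i 'drivers/net/*'

It misses Kconfig/Makefile interactions, but it gives a first cut.)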

thx,

Jason.


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 11:31         ` Jason Cooper
@ 2014-05-05 13:40           ` Guenter Roeck
  0 siblings, 0 replies; 51+ messages in thread
From: Guenter Roeck @ 2014-05-05 13:40 UTC (permalink / raw)
  To: Jason Cooper; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On 05/05/2014 04:31 AM, Jason Cooper wrote:
> On Sun, May 04, 2014 at 08:47:06PM -0700, Guenter Roeck wrote:
>> This may be seen as somewhat strong definition of the term "severe",
>> but in my work environment the attitude is to never update the kernel under
>> any circumstances. Or, in other words, it is quite hostile to someone who
>> advocates following upstream kernel releases. Each new bug, as minor as it
>> may be in a practical sense, is seen as argument (or ammunition) against
>> kernel updates. Note that this specifically includes performance regressions,
>> as minor as they may be. Given that, I would love to see Fengguang's
>> performance tests run on stable releases, simply because that would give me
>> confidence (and proof) that no performance regressions were introduced.
>
> Along this line, I keep coming back to an idea that I really need to
> implement.  Say your shop is running v3.12.3, and you'd like to migrate
> to v3.12.7 because of a bugfix for your subsystem.
>
> I imagine it would make the argument easier if you could quantify the
> changes from v3.12.3 to v3.12.7 relevant to your kernel config.  eg:
>
> $ git diff v3.12.3..v3.12.7 | ./scripts/diff-filter mydefconfig
>
> (no, diff-filter doesn't exist, yet)
>
> I could also see using ./scripts/objdiff for this as well.  Anything
> that would help the engineer quantify the differences between the two
> releases so he could ask the question, "Show me *which* change you're
> uncomfortable with."
>
> That's a much better position to be in than, "I swear, the -stable
> process is legit.  You can trust a bunch of people you've never met who
> won't suffer any repercussions if our product fails."
>

The idea is good, but it would not help in my case.

One of the arguments is that only patches which are relevant and can
be proven to exist in the build/image should be applied. Therefore,
such a script could and would be used as an argument to only apply such
patches. This would leave me with a baseline image which wasn't tested
by anyone and would deviate more and more from the stable release.

Guenter


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05  2:47   ` Li Zefan
@ 2014-05-05 13:41     ` Theodore Ts'o
  2014-05-05 15:23       ` Takashi Iwai
  2014-05-05 22:33       ` Greg KH
  0 siblings, 2 replies; 51+ messages in thread
From: Theodore Ts'o @ 2014-05-05 13:41 UTC (permalink / raw)
  To: Li Zefan; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On Mon, May 05, 2014 at 10:47:55AM +0800, Li Zefan wrote:
> 
> Yeah, but we can't expect other maintainers to do this. As Greg has been
> emphasizing, we'd want to add as little burden as possible for subsystem
> maintainers. With this in mind, focusing on fewer LTS kernels might make
> sense?

An LTS kernel becomes important when distributions or manufacturers
need to depend on one for their stable/enterprise distribution or for
some product release.  The problem comes when a stable kernel such as
3.10 gets declared, but some badly needed feature doesn't make it in
until 3.11, say, or at the time when 3.10 gets declared, some
internal team had already decided to use 3.11.

So what might help is if companies or distributions who need an LTS
kernel were willing to disclose that fact ahead of time, and see if
they can find like-minded associates who might also need an LTS kernel
at around the same time.  Obviously, if a company is willing to
dedicate resources to maintaining the LTS kernel, they should have a
bit more say about which LTS kernel they would be willing to support.

I am aware of companies or distributions which are using 3.10, 3.11,
and 3.12 (yes, all three!) for different long-term product/production
kernels.  The company that used 3.11 didn't talk to anyone externally
before selecting 3.11, and so it's only right that this company live
with the consequences of that particular engineering decision.  But
yeah, with a bit of communication, I suspect it could have resulted in
a bit less work all around.

The challenge is that companies generally need to be able to make that
decision at least 3-6 months ahead of time for planning purposes, and
this requires that companies be willing to actually communicate their
stabilization plans externally ahead of time.  Which, unfortunately,
may or may not always be practical.

And of course, depending on how many patches get integrated into said
"enterprise" kernel, it might end up being very far from the official
upstream stable kernel, so it might or might not matter in any case.

Cheers,

					- Ted


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 13:41     ` Theodore Ts'o
@ 2014-05-05 15:23       ` Takashi Iwai
  2014-05-05 15:39         ` Jan Kara
  2014-05-05 22:33       ` Greg KH
  1 sibling, 1 reply; 51+ messages in thread
From: Takashi Iwai @ 2014-05-05 15:23 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Josh Boyer, ksummit-discuss, lizf.kern

At Mon, 5 May 2014 09:41:26 -0400,
Theodore Ts'o wrote:
>
> On Mon, May 05, 2014 at 10:47:55AM +0800, Li Zefan wrote:
> > 
> > Yeah, but we can't expect other maintainers to do this. As Greg has been
> > emphasizing, we'd want to add as little burden as possible for subsystem
> > maintainers. With this in mind, focusing on fewer LTS kernels might make
> > sense?
> 
> An LTS kernel becomes important when distributions or manufacturers
> need to depend on one for their stable/enterprise distribution or for
> some product release.  The problem comes when a stable kernel such as
> 3.10 gets declared, but some feature which is badly needed doesn't
> make it into 3.11, say, or at the time when 3.10 gets declared, some
> internal team had already decided to use 3.11.
> 
> So what might help is if companies or distributions who need a LTS
> kernel were willing to disclose that fact ahead of time, and see if
> they can find like-minded associates who also might need a LTS kernel
> around about the same time.  Obviously if a company is willing to
> dedicate resources to maintaining the LTS kernel they should have a
> bit more say about which LTS kernel they would be willing to support.
> 
> I am aware of companies or distributions which are using 3.10, 3.11,
> and 3.12 (yes, all three!) for different long-term product/production
> kernels.  The company that used 3.11 didn't talk to anyone externally
> before selecting 3.11, and so it's only right that this company live
> with the consequences of that particular engineering decision.  But
> yeah, with a bit of communication, I suspect it could have resulted in
> a bit less work all around.
> 
> The challenge is that companies generally need to be able to make that
> decision at least 3-6 months ahead of time for planning purposes, and
> this requires that companies be willing to actually communicate their
> stablization plans externally ahead of time.  Which, unfortuantely,
> may or may not always be practical.
> 
> And of course, depending on how many patches get integrated into said
> "enterprise" kernel, it might end up being very far from the official
> upstream stable kernel, so it might or might not matter in any case.

Or, the other way round: could the upstream LTS kernel be defined earlier?
Then distros could align with it when it's known beforehand.
It would also help subsystem maintainers decide whether some
big infrastructure change should be applied or postponed.


Takashi


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 15:23       ` Takashi Iwai
@ 2014-05-05 15:39         ` Jan Kara
  2014-05-05 16:02           ` Takashi Iwai
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Kara @ 2014-05-05 15:39 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On Mon 05-05-14 17:23:18, Takashi Iwai wrote:
> At Mon, 5 May 2014 09:41:26 -0400,
> Theodore Ts'o wrote:
> > The challenge is that companies generally need to be able to make that
> > decision at least 3-6 months ahead of time for planning purposes, and
> > this requires that companies be willing to actually communicate their
> > stablization plans externally ahead of time.  Which, unfortuantely,
> > may or may not always be practical.
> > 
> > And of course, depending on how many patches get integrated into said
> > "enterprise" kernel, it might end up being very far from the official
> > upstream stable kernel, so it might or might not matter in any case.
> 
> Or, other way round: can the upstream LTS kernel be defined earlier?
> Then distros may align to it when it's known beforehand.
> It'd be even helpful for subsystem maintainers to decide whether some
> big infrastructure change should be applied or postponed.
  Well, but Greg doesn't want to declare a kernel LTS before it is released,
exactly so that people don't cram in lots of immature stuff which needs to
be fixed up later. And I agree with him that this is what would happen if he
were to declare LTS kernels in advance. So I don't think this is a good
alternative.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 15:39         ` Jan Kara
@ 2014-05-05 16:02           ` Takashi Iwai
  2014-05-05 16:07             ` Jason Cooper
  0 siblings, 1 reply; 51+ messages in thread
From: Takashi Iwai @ 2014-05-05 16:02 UTC (permalink / raw)
  To: Jan Kara; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

At Mon, 5 May 2014 17:39:28 +0200,
Jan Kara wrote:
> 
> On Mon 05-05-14 17:23:18, Takashi Iwai wrote:
> > At Mon, 5 May 2014 09:41:26 -0400,
> > Theodore Ts'o wrote:
> > > The challenge is that companies generally need to be able to make that
> > > decision at least 3-6 months ahead of time for planning purposes, and
> > > this requires that companies be willing to actually communicate their
> > > stablization plans externally ahead of time.  Which, unfortuantely,
> > > may or may not always be practical.
> > > 
> > > And of course, depending on how many patches get integrated into said
> > > "enterprise" kernel, it might end up being very far from the official
> > > upstream stable kernel, so it might or might not matter in any case.
> > 
> > Or, other way round: can the upstream LTS kernel be defined earlier?
> > Then distros may align to it when it's known beforehand.
> > It'd be even helpful for subsystem maintainers to decide whether some
> > big infrastructure change should be applied or postponed.
>   Well, but Greg doesn't want to declare a kernel LTS before it is released
> exactly so that people don't cram in lots of imature stuff which needs to
> be fixed up later. And I agree with him that this is going to happen if he
> would declare LTS kernels in advance. So I don't think this is a good
> alternative.

I agree that is a possible risk.  OTOH, if a big change (or a file
rename) happens just after an LTS kernel, it may make it impossible to
carry even a small trivial fix back to the LTS kernel.  So it can be
a drawback, too.


thanks,

Takashi


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 16:02           ` Takashi Iwai
@ 2014-05-05 16:07             ` Jason Cooper
  2014-05-05 16:17               ` Takashi Iwai
  0 siblings, 1 reply; 51+ messages in thread
From: Jason Cooper @ 2014-05-05 16:07 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On Mon, May 05, 2014 at 06:02:21PM +0200, Takashi Iwai wrote:
> At Mon, 5 May 2014 17:39:28 +0200,
> Jan Kara wrote:
> > 
> > On Mon 05-05-14 17:23:18, Takashi Iwai wrote:
> > > At Mon, 5 May 2014 09:41:26 -0400,
> > > Theodore Ts'o wrote:
> > > > The challenge is that companies generally need to be able to make that
> > > > decision at least 3-6 months ahead of time for planning purposes, and
> > > > this requires that companies be willing to actually communicate their
> > > > stablization plans externally ahead of time.  Which, unfortuantely,
> > > > may or may not always be practical.
> > > > 
> > > > And of course, depending on how many patches get integrated into said
> > > > "enterprise" kernel, it might end up being very far from the official
> > > > upstream stable kernel, so it might or might not matter in any case.
> > > 
> > > Or, other way round: can the upstream LTS kernel be defined earlier?
> > > Then distros may align to it when it's known beforehand.
> > > It'd be even helpful for subsystem maintainers to decide whether some
> > > big infrastructure change should be applied or postponed.
> >   Well, but Greg doesn't want to declare a kernel LTS before it is released
> > exactly so that people don't cram in lots of imature stuff which needs to
> > be fixed up later. And I agree with him that this is going to happen if he
> > would declare LTS kernels in advance. So I don't think this is a good
> > alternative.
> 
> I agree with such a possible risk.  OTOH, if a big change (or file
> renames) happens just after LTS kernel, it may make impossible to
> carry even a small trivial fix back to LTS kernel.  So, it can be also
> a demerit, too.

Do you have an example of this?  git is pretty darn good about tracking
renames.  If you've had a patch you couldn't backport, I'd like to see
what caused the failure.

In the worst case scenario, the maintainer of the code (or anyone
intimately familiar with it) should be able to hand-apply it.  But to
me, that would highlight a shortcoming of the system.
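
(For what it's worth, when a plain rename is the only obstacle, a three-way
apply usually copes; the branch and commit id below are placeholders:

    git checkout linux-3.10.y
    git cherry-pick -x <mainline commit id>   # rename detection in the merge
                                               # usually finds the moved file
    # or, for a patch received by mail:
    git am -3 fix.patch

Code splits and follow-on restructuring are a different story, of course.)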

thx,

Jason.


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 16:07             ` Jason Cooper
@ 2014-05-05 16:17               ` Takashi Iwai
  0 siblings, 0 replies; 51+ messages in thread
From: Takashi Iwai @ 2014-05-05 16:17 UTC (permalink / raw)
  To: Jason Cooper; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

At Mon, 5 May 2014 12:07:52 -0400,
Jason Cooper wrote:
> 
> On Mon, May 05, 2014 at 06:02:21PM +0200, Takashi Iwai wrote:
> > At Mon, 5 May 2014 17:39:28 +0200,
> > Jan Kara wrote:
> > > 
> > > On Mon 05-05-14 17:23:18, Takashi Iwai wrote:
> > > > At Mon, 5 May 2014 09:41:26 -0400,
> > > > Theodore Ts'o wrote:
> > > > > The challenge is that companies generally need to be able to make that
> > > > > decision at least 3-6 months ahead of time for planning purposes, and
> > > > > this requires that companies be willing to actually communicate their
> > > > > stablization plans externally ahead of time.  Which, unfortuantely,
> > > > > may or may not always be practical.
> > > > > 
> > > > > And of course, depending on how many patches get integrated into said
> > > > > "enterprise" kernel, it might end up being very far from the official
> > > > > upstream stable kernel, so it might or might not matter in any case.
> > > > 
> > > > Or, other way round: can the upstream LTS kernel be defined earlier?
> > > > Then distros may align to it when it's known beforehand.
> > > > It'd be even helpful for subsystem maintainers to decide whether some
> > > > big infrastructure change should be applied or postponed.
> > >   Well, but Greg doesn't want to declare a kernel LTS before it is released
> > > exactly so that people don't cram in lots of imature stuff which needs to
> > > be fixed up later. And I agree with him that this is going to happen if he
> > > would declare LTS kernels in advance. So I don't think this is a good
> > > alternative.
> > 
> > I agree with such a possible risk.  OTOH, if a big change (or file
> > renames) happens just after LTS kernel, it may make impossible to
> > carry even a small trivial fix back to LTS kernel.  So, it can be also
> > a demerit, too.
> 
> Do you have an example of this?  git is pretty darn good about tracking
> renames.  If you've had a patch you couldn't backport, I'd like to see
> what caused the failure.

An example I hit in the past was mostly when code was split up.  Then a fix
can't be backported without manual work.  A pure file rename
might work, but often more incompatible changes follow after that
(which is the purpose of cleanup by renames, after all).

> In the worst case scenario, the maintainer of the code (or anyone
> intimately familiar with it) should be able to hand-apply it.  But to
> me, that would highlight a shortcoming of the system.

Sure, a hand-made backport is always possible.  But it
would result in more burden on maintainers, just because they didn't
do it at the right time.


thanks,

Takashi


* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 13:41     ` Theodore Ts'o
  2014-05-05 15:23       ` Takashi Iwai
@ 2014-05-05 22:33       ` Greg KH
  2014-05-06  3:20         ` Steven Rostedt
  1 sibling, 1 reply; 51+ messages in thread
From: Greg KH @ 2014-05-05 22:33 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Josh Boyer, ksummit-discuss, lizf.kern

On Mon, May 05, 2014 at 09:41:26AM -0400, Theodore Ts'o wrote:
> On Mon, May 05, 2014 at 10:47:55AM +0800, Li Zefan wrote:
> > 
> > Yeah, but we can't expect other maintainers to do this. As Greg has been
> > emphasizing, we'd want to add as little burden as possible for subsystem
> > maintainers. With this in mind, focusing on fewer LTS kernels might make
> > sense?
> 
> An LTS kernel becomes important when distributions or manufacturers
> need to depend on one for their stable/enterprise distribution or for
> some product release.  The problem comes when a stable kernel such as
> 3.10 gets declared, but some feature which is badly needed doesn't
> make it into 3.11, say, or at the time when 3.10 gets declared, some
> internal team had already decided to use 3.11.
> 
> So what might help is if companies or distributions who need a LTS
> kernel were willing to disclose that fact ahead of time, and see if
> they can find like-minded associates who also might need a LTS kernel
> around about the same time.  Obviously if a company is willing to
> dedicate resources to maintaining the LTS kernel they should have a
> bit more say about which LTS kernel they would be willing to support.

I spend a lot of time talking to a lot of different companies about what
the next LTS kernel will be.  And almost all of them are willing to give
me this information, so this isn't an issue.

The problem is, I can't please all of the people all of the time.  When
picking just one kernel a year, someone's schedule is not going to
align, and so they have to "go it alone".  Which is just part of the
"game" in doing releases, everyone knows this.

And as for announcing it ahead of time, I'm never going to do that
again; the aftermath of people putting in stuff that shouldn't be there
was horrid.  Heck, when people know what the enterprise kernels are
going to be, they throw stuff into upstream "early", so it's a
well-known pattern and issue.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-05 22:33       ` Greg KH
@ 2014-05-06  3:20         ` Steven Rostedt
  2014-05-06  4:04           ` Guenter Roeck
  0 siblings, 1 reply; 51+ messages in thread
From: Steven Rostedt @ 2014-05-06  3:20 UTC (permalink / raw)
  To: Greg KH; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On Mon, 5 May 2014 15:33:24 -0700
Greg KH <greg@kroah.com> wrote:


> And as for announcing it ahead of time, I'm never going to do that
> again, the aftermath was horrid of people putting stuff that shouldn't
> be there.  Heck, when people know about what the enterprise kernels are
> going to be, they throw stuff into upstream "early", so it's a
> well-known pattern and issue.

Perhaps you can announce that 3.X "might" be the LTS tree. And just as
the 3.X merge window opens, announce that 3.(X-1) is the new LTS tree.
It would then be too late for people to push "unfinished" code into the
LTS.

They would still push unfinished code into 3.X, but that is why you
announce it just as (or before) the merge window, so people can say
"oh crap" and then delay their half-baked projects until they are
really ready.

-- Steve

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-06  3:20         ` Steven Rostedt
@ 2014-05-06  4:04           ` Guenter Roeck
  2014-05-06 10:49             ` Steven Rostedt
  0 siblings, 1 reply; 51+ messages in thread
From: Guenter Roeck @ 2014-05-06  4:04 UTC (permalink / raw)
  To: Steven Rostedt, Greg KH; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On 05/05/2014 08:20 PM, Steven Rostedt wrote:
> On Mon, 5 May 2014 15:33:24 -0700
> Greg KH <greg@kroah.com> wrote:
>
>
>> And as for announcing it ahead of time, I'm never going to do that
>> again, the aftermath was horrid of people putting stuff that shouldn't
>> be there.  Heck, when people know about what the enterprise kernels are
>> going to be, they throw stuff into upstream "early", so it's a
>> well-known pattern and issue.
>
> Perhaps you can announce that 3.X "might" be the LTS tree. And just as
> 3.X merge window opens, announce that 3.(X-1) is the new LTS tree. It
> would be too late to have people pushing "unfinished" code into the LTS.
>

After the first time this happens, people would know or assume that
it is going to be 3.(X-1) and push their unfinished stuff into that
release. Then 3.(X-1) would be in such bad shape that using it as a
stable release would be a bad idea. And if 3.(X-1) doesn't make it,
you would have even more people pushing unfinished crap into 3.X.
Ultimately you would end up with two bad releases instead of one.

In other words, I don't think this kind of reverse psychology would work.

Guenter

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-06  4:04           ` Guenter Roeck
@ 2014-05-06 10:49             ` Steven Rostedt
  0 siblings, 0 replies; 51+ messages in thread
From: Steven Rostedt @ 2014-05-06 10:49 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: Josh Boyer, lizf.kern, ksummit-discuss

On Mon, 05 May 2014 21:04:20 -0700
Guenter Roeck <linux@roeck-us.net> wrote:

> On 05/05/2014 08:20 PM, Steven Rostedt wrote:

> In other words, I don't think this kind of reverse psychology would work.

I don't know. It's kind of like Linus closing the merge window early.

But yeah, people will always play the system.

-- Steve

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-04 11:19 [Ksummit-discuss] [CORE TOPIC] stable issues Li Zefan
                   ` (3 preceding siblings ...)
  2014-05-05  1:03 ` Olof Johansson
@ 2014-05-07  2:49 ` Masami Hiramatsu
  2014-05-07  2:58   ` Davidlohr Bueso
  2014-05-07  3:05   ` Li Zefan
  4 siblings, 2 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-07  2:49 UTC (permalink / raw)
  To: ksummit-discuss

(2014/05/04 20:19), Li Zefan wrote:
> - Testing stable kernels
> 
> The testing of stable kernels when a new version is under review seems
> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> are run on mainline/for-next only. Would be useful to also have them
> run on stable kernels?

This might be a bit off-topic, but I'm interested in testing of the
Linux kernel, especially a standard framework of unit tests for each
feature.

I see that Trinity and Fengguang's 0day tests are useful. But for newly
introduced features/bugfixes, would we have standard tests?
(Some subsystems have their own selftests, but they are not unified.)

I guess tools/testing/selftests could be an answer. If so, shouldn't we
send bugfixes together with a test case that checks the bug is fixed
(and ensures no regression in the future)?

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  2:49 ` Masami Hiramatsu
@ 2014-05-07  2:58   ` Davidlohr Bueso
  2014-05-07  8:27     ` Masami Hiramatsu
  2014-05-07  9:06     ` Dan Carpenter
  2014-05-07  3:05   ` Li Zefan
  1 sibling, 2 replies; 51+ messages in thread
From: Davidlohr Bueso @ 2014-05-07  2:58 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Wed, 2014-05-07 at 11:49 +0900, Masami Hiramatsu wrote:
> (2014/05/04 20:19), Li Zefan wrote:
> > - Testing stable kernels
> > 
> > The testing of stable kernels when a new version is under review seems
> > quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> > are run on mainline/for-next only. Would be useful to also have them
> > run on stable kernels?
> 
> This might be a kind of off-topic, but I'm interested in the testing
> on the linux kernel, especially standard framework of unit-tests
> for each feature.

I tend to think of LTP as a nice way of doing unit tests for the uapi.
Fengguang's scripts do include it, iirc, but I'm referring more to
unit-level tests here. It serves well for changes in ipc, and it should
work for other subsystems as well.

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  2:49 ` Masami Hiramatsu
  2014-05-07  2:58   ` Davidlohr Bueso
@ 2014-05-07  3:05   ` Li Zefan
  2014-05-07  3:31     ` Masami Hiramatsu
                       ` (2 more replies)
  1 sibling, 3 replies; 51+ messages in thread
From: Li Zefan @ 2014-05-07  3:05 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On 2014/5/7 10:49, Masami Hiramatsu wrote:
> (2014/05/04 20:19), Li Zefan wrote:
>> - Testing stable kernels
>>
>> The testing of stable kernels when a new version is under review seems
>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>> are run on mainline/for-next only. Would be useful to also have them
>> run on stable kernels?
> 
> This might be a kind of off-topic, but I'm interested in the testing
> on the linux kernel, especially standard framework of unit-tests
> for each feature.
> 
> I see the Trinity and Fengguang's 0day test are useful. But for newer
> introduced features/bugfixes, would we have a standard tests?
> (for some subsystems have own selftests, but not unified.)
> 

I kind of remember Andrew once suggested a new feature can't be accepted
unless it comes with test cases?

> I guess tools/testing/selftest will be an answer. If so, I think
> we'd better send bugfixes with a test-case to check the bug is fixed
> (and ensure no regression in future), wouldn't it?
> 
> Thank you,
> 

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  3:05   ` Li Zefan
@ 2014-05-07  3:31     ` Masami Hiramatsu
  2014-05-07  7:20     ` Laurent Pinchart
  2014-05-13 20:46     ` Steven Rostedt
  2 siblings, 0 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-07  3:31 UTC (permalink / raw)
  To: Li Zefan; +Cc: ksummit-discuss

(2014/05/07 12:05), Li Zefan wrote:
> On 2014/5/7 10:49, Masami Hiramatsu wrote:
>> (2014/05/04 20:19), Li Zefan wrote:
>>> - Testing stable kernels
>>>
>>> The testing of stable kernels when a new version is under review seems
>>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>>> are run on mainline/for-next only. Would be useful to also have them
>>> run on stable kernels?
>>
>> This might be a kind of off-topic, but I'm interested in the testing
>> on the linux kernel, especially standard framework of unit-tests
>> for each feature.
>>
>> I see the Trinity and Fengguang's 0day test are useful. But for newer
>> introduced features/bugfixes, would we have a standard tests?
>> (for some subsystems have own selftests, but not unified.)
>>
> 
> I kind of remember Andrew once suggested a new feature can't be accepted
> unless it comes with test cases?

Yeah, that's a good suggestion :)

And I think we should have a unified test framework which can drive
such test cases, and documentation on how to add a test case within
a series of patches. It would be great if we could point out which
feature/commit is broken based on a test-case failure. :)

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  3:05   ` Li Zefan
  2014-05-07  3:31     ` Masami Hiramatsu
@ 2014-05-07  7:20     ` Laurent Pinchart
  2014-05-13 20:46     ` Steven Rostedt
  2 siblings, 0 replies; 51+ messages in thread
From: Laurent Pinchart @ 2014-05-07  7:20 UTC (permalink / raw)
  To: ksummit-discuss

On Wednesday 07 May 2014 11:05:53 Li Zefan wrote:
> On 2014/5/7 10:49, Masami Hiramatsu wrote:
> > (2014/05/04 20:19), Li Zefan wrote:
> >> - Testing stable kernels
> >> 
> >> The testing of stable kernels when a new version is under review seems
> >> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> >> are run on mainline/for-next only. Would be useful to also have them
> >> run on stable kernels?
> > 
> > This might be a kind of off-topic, but I'm interested in the testing
> > on the linux kernel, especially standard framework of unit-tests
> > for each feature.
> > 
> > I see the Trinity and Fengguang's 0day test are useful. But for newer
> > introduced features/bugfixes, would we have a standard tests?
> > (for some subsystems have own selftests, but not unified.)
> 
> I kind of remember Andrew once suggested a new feature can't be accepted
> unless it comes with test cases?

I'd like to add documentation to that. The amount of documentation for
kernel APIs varies from good to non-existent. As (close to) nobody likes
writing documentation, one solution that scales would be to spread the
burden of documenting features among developers. Some subsystems (namely
V4L2) already require this: no patch touching an API can go in without a
corresponding documentation patch. Developers got used to it and I
haven't noticed any slowdown in the development pace.

> > I guess tools/testing/selftest will be an answer. If so, I think
> > we'd better send bugfixes with a test-case to check the bug is fixed
> > (and ensure no regression in future), wouldn't it?

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  2:58   ` Davidlohr Bueso
@ 2014-05-07  8:27     ` Masami Hiramatsu
  2014-05-07  8:39       ` Matt Fleming
  2014-05-07 18:40       ` Davidlohr Bueso
  2014-05-07  9:06     ` Dan Carpenter
  1 sibling, 2 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-07  8:27 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: ksummit-discuss

(2014/05/07 11:58), Davidlohr Bueso wrote:
> On Wed, 2014-05-07 at 11:49 +0900, Masami Hiramatsu wrote:
>> (2014/05/04 20:19), Li Zefan wrote:
>>> - Testing stable kernels
>>>
>>> The testing of stable kernels when a new version is under review seems
>>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
>>> are run on mainline/for-next only. Would be useful to also have them
>>> run on stable kernels?
>>
>> This might be a kind of off-topic, but I'm interested in the testing
>> on the linux kernel, especially standard framework of unit-tests
>> for each feature.
> 
> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
> Fengguang's scripts do include it, iirc, but I'm referring more to unit
> level tests. It serves well for changes in ipc, and should also for
> other subsystems.

Hm, yes, uapi tests can be done in LTP. However, I have some concerns:
- What does uapi mean here? Syscalls and ioctls are OK, but what about
  procfs, sysfs, kernfs, etc.?
- There can also be non-uapi features/bugfixes in the kernel, e.g. the
  kmodule interface. How does LTP handle those?
- I'm not sure how LTP synchronizes its test cases with the target
  kernel version. Is it possible to update the test cases at patch
  level? Also, for stable trees, we'd need different test sets
  (branches) for each tree.

IOW, would the test cases be better off out-of-tree or in-tree? If
out-of-tree (like LTP), how do we keep the test cases and the upstream
kernel in sync? What infrastructure should we have (e.g. a bugzilla
that maps bug# to test-case)?
Those are my interests :)

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  8:27     ` Masami Hiramatsu
@ 2014-05-07  8:39       ` Matt Fleming
  2014-05-07 11:45         ` Masami Hiramatsu
  2014-05-07 18:40       ` Davidlohr Bueso
  1 sibling, 1 reply; 51+ messages in thread
From: Matt Fleming @ 2014-05-07  8:39 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Wed, 07 May, at 05:27:05PM, Masami Hiramatsu wrote:
> 
> IOW, would the test cases be better to be out-of-tree or in-tree? If it is
> out-of-tree(like LTP), how can we maintain both test-cases and upstream kernels?
> What infrastructure should we have (e.g. bugzilla which provides a database for
> relationship between bug# and test-case) ?
> Those are my interests :)

There's definitely huge merit in having in-tree tests like the current
selftests stuff because it allows you to roll up fixes and regression
tests into a single commit, see commit 123abd76edf5 ("efivars:
efivarfs_valid_name() should handle pstore syntax").
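For reference, a test in that directory is typically just a small
script or binary whose exit status tells the harness pass or fail; a
rough, made-up sketch:

	#!/bin/sh
	# hypothetical tools/testing/selftests/<subsys>/example.sh
	# 0 = pass, non-zero = fail; that's all the harness expects
	grep -q expected_value /sys/kernel/some_knob || exit 1   # made-up check
	exit 0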

-- 
Matt Fleming, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  2:58   ` Davidlohr Bueso
  2014-05-07  8:27     ` Masami Hiramatsu
@ 2014-05-07  9:06     ` Dan Carpenter
  2014-05-07 14:15       ` Jan Kara
  1 sibling, 1 reply; 51+ messages in thread
From: Dan Carpenter @ 2014-05-07  9:06 UTC (permalink / raw)
  To: Davidlohr Bueso; +Cc: ksummit-discuss

On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
> Fengguang's scripts do include it, iirc, but I'm referring more to unit
> level tests. It serves well for changes in ipc, and should also for
> other subsystems.

LTP is too complicated and enterprisey.  With trinity you can just
type:

	./configure.sh && make && ./trinity

With LTP you have to read the install documents.  You can't run it
from your home directory, so you end up building a virtual machine you
don't care about just to install it.
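For comparison, getting LTP going is roughly this (from memory, the
details may have changed):

	git clone https://github.com/linux-test-project/ltp.git
	cd ltp
	make autotools && ./configure && make && make install
	# installs under /opt/ltp by default; then, as root:
	cd /opt/ltp && ./runltp -f syscalls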

regards,
dan carpenter

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  8:39       ` Matt Fleming
@ 2014-05-07 11:45         ` Masami Hiramatsu
  2014-05-07 12:45           ` Daniel Vetter
  0 siblings, 1 reply; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-07 11:45 UTC (permalink / raw)
  To: Matt Fleming; +Cc: ksummit-discuss

(2014/05/07 17:39), Matt Fleming wrote:
> On Wed, 07 May, at 05:27:05PM, Masami Hiramatsu wrote:
>>
>> IOW, would the test cases be better to be out-of-tree or in-tree? If it is
>> out-of-tree(like LTP), how can we maintain both test-cases and upstream kernels?
>> What infrastructure should we have (e.g. bugzilla which provides a database for
>> relationship between bug# and test-case) ?
>> Those are my interests :)
> 
> There's definitely huge merit in having in-tree tests like the current
> selftests stuff because it allows you to roll up fixes and regression
> tests into a single commit, see commit 123abd76edf5 ("efivars:
> efivarfs_valid_name() should handle pstore syntax").
> 

Ah, that's a good example of adding a new feature/bugfix together with
its test case! :)
I think this kind of combined patch works well when running tests under
git-bisect.  At the very least, an out-of-tree test should also work
with git-bisect.

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07 11:45         ` Masami Hiramatsu
@ 2014-05-07 12:45           ` Daniel Vetter
  2014-05-08  3:20             ` Masami Hiramatsu
  0 siblings, 1 reply; 51+ messages in thread
From: Daniel Vetter @ 2014-05-07 12:45 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Wed, May 7, 2014 at 1:45 PM, Masami Hiramatsu
<masami.hiramatsu.pt@hitachi.com> wrote:
> (2014/05/07 17:39), Matt Fleming wrote:
>> On Wed, 07 May, at 05:27:05PM, Masami Hiramatsu wrote:
>>>
>>> IOW, would the test cases be better to be out-of-tree or in-tree? If it is
>>> out-of-tree(like LTP), how can we maintain both test-cases and upstream kernels?
>>> What infrastructure should we have (e.g. bugzilla which provides a database for
>>> relationship between bug# and test-case) ?
>>> Those are my interests :)
>>
>> There's definitely huge merit in having in-tree tests like the current
>> selftests stuff because it allows you to roll up fixes and regression
>> tests into a single commit, see commit 123abd76edf5 ("efivars:
>> efivarfs_valid_name() should handle pstore syntax").
>>
>
> Ah, that's a good example for adding new feature/bugfix with test case! :)
> I think this type of combined patch will be good to run tests with git-bisect.
> At least out-of-tree test should work with git-bisect.

At least for drm/i915 I don't think merging the tests into the kernel
would be beneficial, at least not right now:
- Our tests are integrated into the regression test framework used by
graphics people in general (piglit), and that most certainly won't
move into the kernel.
- We have lots of debug tools in the same repo (with shared code), and
it tends to be less scary for bug reporters to grab
intel-gpu-tools.git to run one of them instead of the entire kernel.
- Documentation tooling in userspace sucks a lot less than kerneldoc.
Which is important since we use testcases and tooling as getting
started tasks for newcomers.
- Also I want much stricter review requirements on kernel patches than
testcase patches, separate git trees helps with that.

Hence, thus far we just link the kernel patch to its testcase with a
Testcase: tag.
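Purely as a made-up illustration, a patch then carries something like:

	drm/i915: Fix some hypothetical ordering issue

	<changelog elided>

	Testcase: igt/gem_hypothetical_test
	Signed-off-by: A. Developer <dev@example.org>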
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  9:06     ` Dan Carpenter
@ 2014-05-07 14:15       ` Jan Kara
  2014-05-08  3:38         ` Li Zefan
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Kara @ 2014-05-07 14:15 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: ksummit-discuss

On Wed 07-05-14 12:06:28, Dan Carpenter wrote:
> On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
> > I tend to think of LTP as a nice way of doing unit-tests for the uapi.
> > Fengguang's scripts do include it, iirc, but I'm referring more to unit
> > level tests. It serves well for changes in ipc, and should also for
> > other subsystems.
> 
> LTP is too complicated and enterprisey.  With trinity you don't can just
> type:
> 
> 	./configure.sh && make && ./trinity
> 
> With LTP you have to read the install documents.  You can't run it
> from your home directory so you have to build a virtual machine which
> you don't care about before you install it.
  Actually, I'm occasionally using LTP and it doesn't seem too bad to me.
And it seems LTP is improving over time so I'm mostly happy about it.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  8:27     ` Masami Hiramatsu
  2014-05-07  8:39       ` Matt Fleming
@ 2014-05-07 18:40       ` Davidlohr Bueso
  1 sibling, 0 replies; 51+ messages in thread
From: Davidlohr Bueso @ 2014-05-07 18:40 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Wed, 2014-05-07 at 17:27 +0900, Masami Hiramatsu wrote:
> (2014/05/07 11:58), Davidlohr Bueso wrote:
> > On Wed, 2014-05-07 at 11:49 +0900, Masami Hiramatsu wrote:
> >> (2014/05/04 20:19), Li Zefan wrote:
> >>> - Testing stable kernels
> >>>
> >>> The testing of stable kernels when a new version is under review seems
> >>> quite limited. We have Dave's Trinity and Fengguang's 0day, but they
> >>> are run on mainline/for-next only. Would be useful to also have them
> >>> run on stable kernels?
> >>
> >> This might be a kind of off-topic, but I'm interested in the testing
> >> on the linux kernel, especially standard framework of unit-tests
> >> for each feature.
> > 
> > I tend to think of LTP as a nice way of doing unit-tests for the uapi.
> > Fengguang's scripts do include it, iirc, but I'm referring more to unit
> > level tests. It serves well for changes in ipc, and should also for
> > other subsystems.
> 
> Hm, yes, uapi tests can be done in LTP. However, I have some considerations;
> - What uapi means? syscall, ioctl are OK, but what about procfs, sysfs, kernfs,
>   etc?

Yeah, I'm mostly referring to syscalls and ioctls here. I believe LTP
also covers procfs in some cases, but it's not the norm.

> - There could be some non-uapi features/bugfixes, in kernel. e.g. kmodule
>   interface. How LTP handles it?

That's kind of beyond the idea of LTP, afaik.

> - I'm not sure how LTP synchronize the version of test cases with target
>   kernel version. 

Well, again this is uapi, which doesn't/shouldn't change from version to
version. That's the whole point: making sure we don't break userspace.

> Is that possible to update the test cases as patch-level?
>   And also, for stable trees, we'll need different test-sets (branches) for
>   each tree.
> 
> IOW, would the test cases be better to be out-of-tree or in-tree? If it is
> out-of-tree(like LTP), how can we maintain both test-cases and upstream kernels?

Out of tree projects have their place, such as LTP, which has proven
itself in the past.

> Those are my interests :)

In general I'm very interested in this topic and would like to
participate in the discussion. In addition, there are areas within
futexes that could use some serious unit testing... perhaps in
selftests, dunno, would have to think about that. Right now we've got
some tests in perf-bench, but that's more performance than correctness.
The rest relies on Darren's out of tree futextests suite. However, this,
unfortunately, isn't at a unit granularity.

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07 12:45           ` Daniel Vetter
@ 2014-05-08  3:20             ` Masami Hiramatsu
  2014-05-09 12:32               ` Daniel Vetter
  0 siblings, 1 reply; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-08  3:20 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: ksummit-discuss

(2014/05/07 21:45), Daniel Vetter wrote:
> On Wed, May 7, 2014 at 1:45 PM, Masami Hiramatsu
> <masami.hiramatsu.pt@hitachi.com> wrote:
>> (2014/05/07 17:39), Matt Fleming wrote:
>>> On Wed, 07 May, at 05:27:05PM, Masami Hiramatsu wrote:
>>>>
>>>> IOW, would the test cases be better to be out-of-tree or in-tree? If it is
>>>> out-of-tree(like LTP), how can we maintain both test-cases and upstream kernels?
>>>> What infrastructure should we have (e.g. bugzilla which provides a database for
>>>> relationship between bug# and test-case) ?
>>>> Those are my interests :)
>>>
>>> There's definitely huge merit in having in-tree tests like the current
>>> selftests stuff because it allows you to roll up fixes and regression
>>> tests into a single commit, see commit 123abd76edf5 ("efivars:
>>> efivarfs_valid_name() should handle pstore syntax").
>>>
>>
>> Ah, that's a good example for adding new feature/bugfix with test case! :)
>> I think this type of combined patch will be good to run tests with git-bisect.
>> At least out-of-tree test should work with git-bisect.
> 
> At least for drm/i915 I don't think merging the tests into the kernel
> would be beneficial, at least now:

Hm, it seems some other subsystems have their own testsuites.  I think
we'd better clarify the testing policy for each subsystem: whether it
uses dedicated testing tools or the in-kernel selftests.

> - Our tests are integrated into the regression test framework used by
> graphics people in general (piglit), and that most certainly won't
> move into the kernel.
> - We have lots of debug tools in the same repo (with shared code), and
> it tends to be less scary for bug reporters to grab
> intel-gpu-tools.git to run one of them instead of the entire kernel.
> - Documentation tooling in userspace sucks a lot less than kerneldoc.
> Which is important since we use testcases and tooling as getting
> started tasks for newcomers.
> - Also I want much stricter review requirements on kernel patches than
> testcase patches, separate git trees helps with that.
> 
> Hence why we thus far just link the kernel patch to its testcase with
> an Testcase: tag.

Ah, that's also a nice way to find the appropriate testcases. I think
adding a link (or git hash) to the testcase would let us automate test
configuration when git-bisecting, even if the test is out-of-tree. :)

Thank you,


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07 14:15       ` Jan Kara
@ 2014-05-08  3:38         ` Li Zefan
  2014-05-08  9:41           ` Jan Kara
  0 siblings, 1 reply; 51+ messages in thread
From: Li Zefan @ 2014-05-08  3:38 UTC (permalink / raw)
  To: Jan Kara; +Cc: ksummit-discuss, Dan Carpenter

On 2014/5/7 22:15, Jan Kara wrote:
> On Wed 07-05-14 12:06:28, Dan Carpenter wrote:
>> On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
>>> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
>>> Fengguang's scripts do include it, iirc, but I'm referring more to unit
>>> level tests. It serves well for changes in ipc, and should also for
>>> other subsystems.
>>
>> LTP is too complicated and enterprisey.  With trinity you don't can just
>> type:
>>
>> 	./configure.sh && make && ./trinity
>>
>> With LTP you have to read the install documents.  You can't run it
>> from your home directory so you have to build a virtual machine which
>> you don't care about before you install it.
>   Actually, I'm occasionally using LTP and it doesn't seem too bad to me.
> And it seems LTP is improving over time so I'm mostly happy about it.

But how useful is LTP in finding kernel bugs? It seems to me we seldom
see bug reports which say the bug was found by LTP.

That said, recently I tried to run the cgroup test suites in LTP, one of
which I wrote myself many years ago, and it turned out there was a
kernel bug in cgroup. :)

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-08  3:38         ` Li Zefan
@ 2014-05-08  9:41           ` Jan Kara
  2014-05-08 20:35             ` Andy Lutomirski
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Kara @ 2014-05-08  9:41 UTC (permalink / raw)
  To: Li Zefan; +Cc: ksummit-discuss, Dan Carpenter

On Thu 08-05-14 11:38:14, Li Zefan wrote:
> On 2014/5/7 22:15, Jan Kara wrote:
> > On Wed 07-05-14 12:06:28, Dan Carpenter wrote:
> >> On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
> >>> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
> >>> Fengguang's scripts do include it, iirc, but I'm referring more to unit
> >>> level tests. It serves well for changes in ipc, and should also for
> >>> other subsystems.
> >>
> >> LTP is too complicated and enterprisey.  With trinity you don't can just
> >> type:
> >>
> >> 	./configure.sh && make && ./trinity
> >>
> >> With LTP you have to read the install documents.  You can't run it
> >> from your home directory so you have to build a virtual machine which
> >> you don't care about before you install it.
> >   Actually, I'm occasionally using LTP and it doesn't seem too bad to me.
> > And it seems LTP is improving over time so I'm mostly happy about it.
> 
> But how useful LTP is in finding kernel bugs? It seems to me we seldom
> see bug reports which say the bug was found by LTP?
  I'm handling a few (3-5) per year. I'm also extending the coverage (e.g.
recently I've added fanotify interface coverage) when doing more involved
changes to some code so that LTP can be reasonably used for regression
checking.
 
								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-08  9:41           ` Jan Kara
@ 2014-05-08 20:35             ` Andy Lutomirski
  2014-05-09  4:11               ` Greg KH
  0 siblings, 1 reply; 51+ messages in thread
From: Andy Lutomirski @ 2014-05-08 20:35 UTC (permalink / raw)
  To: Jan Kara; +Cc: Dan Carpenter, ksummit-discuss

On Thu, May 8, 2014 at 2:41 AM, Jan Kara <jack@suse.cz> wrote:
> On Thu 08-05-14 11:38:14, Li Zefan wrote:
>> On 2014/5/7 22:15, Jan Kara wrote:
>> > On Wed 07-05-14 12:06:28, Dan Carpenter wrote:
>> >> On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
>> >>> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
>> >>> Fengguang's scripts do include it, iirc, but I'm referring more to unit
>> >>> level tests. It serves well for changes in ipc, and should also for
>> >>> other subsystems.
>> >>
>> >> LTP is too complicated and enterprisey.  With trinity you don't can just
>> >> type:
>> >>
>> >>    ./configure.sh && make && ./trinity
>> >>
>> >> With LTP you have to read the install documents.  You can't run it
>> >> from your home directory so you have to build a virtual machine which
>> >> you don't care about before you install it.
>> >   Actually, I'm occasionally using LTP and it doesn't seem too bad to me.
>> > And it seems LTP is improving over time so I'm mostly happy about it.
>>
>> But how useful LTP is in finding kernel bugs? It seems to me we seldom
>> see bug reports which say the bug was found by LTP?
>   I'm handling a few (3-5) per year. I'm also extending the coverage (e.g.
> recently I've added fanotify interface coverage) when doing more involved
> changes to some code so that LTP can be reasonably used for regression
> checking.

There was some talk about having some kind of 'make test' that you can
type in a kernel tree.  I'm not sure what the plan is, if any.

I've been working on a tool called virtme that might be a useful thing
to build on.

--Andy

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-08 20:35             ` Andy Lutomirski
@ 2014-05-09  4:11               ` Greg KH
  2014-05-09  5:33                 ` Masami Hiramatsu
  0 siblings, 1 reply; 51+ messages in thread
From: Greg KH @ 2014-05-09  4:11 UTC (permalink / raw)
  To: Andy Lutomirski; +Cc: ksummit-discuss, Dan Carpenter

On Thu, May 08, 2014 at 01:35:45PM -0700, Andy Lutomirski wrote:
> On Thu, May 8, 2014 at 2:41 AM, Jan Kara <jack@suse.cz> wrote:
> > On Thu 08-05-14 11:38:14, Li Zefan wrote:
> >> On 2014/5/7 22:15, Jan Kara wrote:
> >> > On Wed 07-05-14 12:06:28, Dan Carpenter wrote:
> >> >> On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
> >> >>> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
> >> >>> Fengguang's scripts do include it, iirc, but I'm referring more to unit
> >> >>> level tests. It serves well for changes in ipc, and should also for
> >> >>> other subsystems.
> >> >>
> >> >> LTP is too complicated and enterprisey.  With trinity you don't can just
> >> >> type:
> >> >>
> >> >>    ./configure.sh && make && ./trinity
> >> >>
> >> >> With LTP you have to read the install documents.  You can't run it
> >> >> from your home directory so you have to build a virtual machine which
> >> >> you don't care about before you install it.
> >> >   Actually, I'm occasionally using LTP and it doesn't seem too bad to me.
> >> > And it seems LTP is improving over time so I'm mostly happy about it.
> >>
> >> But how useful LTP is in finding kernel bugs? It seems to me we seldom
> >> see bug reports which say the bug was found by LTP?
> >   I'm handling a few (3-5) per year. I'm also extending the coverage (e.g.
> > recently I've added fanotify interface coverage) when doing more involved
> > changes to some code so that LTP can be reasonably used for regression
> > checking.
> 
> There was some talk about having some kind of 'make test' that you can
> type in a kernel tree.  I'm not sure what the plan is, if any.

The plan is to fix it; we already have it in the tree today, but it is
broken.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-09  4:11               ` Greg KH
@ 2014-05-09  5:33                 ` Masami Hiramatsu
  2014-05-09  5:41                   ` Greg KH
  0 siblings, 1 reply; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-09  5:33 UTC (permalink / raw)
  To: ksummit-discuss

(2014/05/09 13:11), Greg KH wrote:
> On Thu, May 08, 2014 at 01:35:45PM -0700, Andy Lutomirski wrote:
>> On Thu, May 8, 2014 at 2:41 AM, Jan Kara <jack@suse.cz> wrote:
>>> On Thu 08-05-14 11:38:14, Li Zefan wrote:
>>>> On 2014/5/7 22:15, Jan Kara wrote:
>>>>> On Wed 07-05-14 12:06:28, Dan Carpenter wrote:
>>>>>> On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
>>>>>>> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
>>>>>>> Fengguang's scripts do include it, iirc, but I'm referring more to unit
>>>>>>> level tests. It serves well for changes in ipc, and should also for
>>>>>>> other subsystems.
>>>>>>
>>>>>> LTP is too complicated and enterprisey.  With trinity you don't can just
>>>>>> type:
>>>>>>
>>>>>>    ./configure.sh && make && ./trinity
>>>>>>
>>>>>> With LTP you have to read the install documents.  You can't run it
>>>>>> from your home directory so you have to build a virtual machine which
>>>>>> you don't care about before you install it.
>>>>>   Actually, I'm occasionally using LTP and it doesn't seem too bad to me.
>>>>> And it seems LTP is improving over time so I'm mostly happy about it.
>>>>
>>>> But how useful LTP is in finding kernel bugs? It seems to me we seldom
>>>> see bug reports which say the bug was found by LTP?
>>>   I'm handling a few (3-5) per year. I'm also extending the coverage (e.g.
>>> recently I've added fanotify interface coverage) when doing more involved
>>> changes to some code so that LTP can be reasonably used for regression
>>> checking.
>>
>> There was some talk about having some kind of 'make test' that you can
>> type in a kernel tree.  I'm not sure what the plan is, if any.
> 
> The plan is to fix it, we already have it in the tree today, but it is
> broken.

So will "make test" run tools/testing/selftests, or other tests?

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-09  5:33                 ` Masami Hiramatsu
@ 2014-05-09  5:41                   ` Greg KH
  0 siblings, 0 replies; 51+ messages in thread
From: Greg KH @ 2014-05-09  5:41 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Fri, May 09, 2014 at 02:33:55PM +0900, Masami Hiramatsu wrote:
> (2014/05/09 13:11), Greg KH wrote:
> > On Thu, May 08, 2014 at 01:35:45PM -0700, Andy Lutomirski wrote:
> >> On Thu, May 8, 2014 at 2:41 AM, Jan Kara <jack@suse.cz> wrote:
> >>> On Thu 08-05-14 11:38:14, Li Zefan wrote:
> >>>> On 2014/5/7 22:15, Jan Kara wrote:
> >>>>> On Wed 07-05-14 12:06:28, Dan Carpenter wrote:
> >>>>>> On Tue, May 06, 2014 at 07:58:58PM -0700, Davidlohr Bueso wrote:
> >>>>>>> I tend to think of LTP as a nice way of doing unit-tests for the uapi.
> >>>>>>> Fengguang's scripts do include it, iirc, but I'm referring more to unit
> >>>>>>> level tests. It serves well for changes in ipc, and should also for
> >>>>>>> other subsystems.
> >>>>>>
> >>>>>> LTP is too complicated and enterprisey.  With trinity you don't can just
> >>>>>> type:
> >>>>>>
> >>>>>>    ./configure.sh && make && ./trinity
> >>>>>>
> >>>>>> With LTP you have to read the install documents.  You can't run it
> >>>>>> from your home directory so you have to build a virtual machine which
> >>>>>> you don't care about before you install it.
> >>>>>   Actually, I'm occasionally using LTP and it doesn't seem too bad to me.
> >>>>> And it seems LTP is improving over time so I'm mostly happy about it.
> >>>>
> >>>> But how useful LTP is in finding kernel bugs? It seems to me we seldom
> >>>> see bug reports which say the bug was found by LTP?
> >>>   I'm handling a few (3-5) per year. I'm also extending the coverage (e.g.
> >>> recently I've added fanotify interface coverage) when doing more involved
> >>> changes to some code so that LTP can be reasonably used for regression
> >>> checking.
> >>
> >> There was some talk about having some kind of 'make test' that you can
> >> type in a kernel tree.  I'm not sure what the plan is, if any.
> > 
> > The plan is to fix it, we already have it in the tree today, but it is
> > broken.
> 
> So will the "make test" run tools/testing/selftest? or other tests?

To start with, it runs the tests we have in the kernel today.  Expanding
that to fix those tests is a good start, and we can go from there.
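For reference, the in-tree tests live under tools/testing/selftests/
and, if memory serves, can already be driven roughly like this (several
of them are what's currently broken):

	# run everything that's wired up; some tests need root
	make -C tools/testing/selftests run_tests

	# or restrict it to one subsystem's tests
	make -C tools/testing/selftests TARGETS=memory-hotplug run_tests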

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-08  3:20             ` Masami Hiramatsu
@ 2014-05-09 12:32               ` Daniel Vetter
  2014-05-12  6:55                 ` Masami Hiramatsu
  0 siblings, 1 reply; 51+ messages in thread
From: Daniel Vetter @ 2014-05-09 12:32 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Thu, May 8, 2014 at 5:20 AM, Masami Hiramatsu
<masami.hiramatsu.pt@hitachi.com> wrote:
>>> Ah, that's a good example for adding new feature/bugfix with test case! :)
>>> I think this type of combined patch will be good to run tests with git-bisect.
>>> At least out-of-tree test should work with git-bisect.
>>
>> At least for drm/i915 I don't think merging the tests into the kernel
>> would be beneficial, at least now:
>
> Hm, it seems some other subsystems have their own testsuites, I think we'd
> better clarify the testing policy for each subsystem,
> using dedicated testing tools or in-kernel selftest.

What I'd like to discuss for the overall kernel is minimal standards
that apply everywhere. And process issues like where external test
suites should be documented (they're useless if people can't find
them) and how much pressure we should apply to get them into the
kernel git tree. And whether that makes sense in all cases or not.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-09 12:32               ` Daniel Vetter
@ 2014-05-12  6:55                 ` Masami Hiramatsu
  2014-05-13 20:36                   ` Steven Rostedt
  0 siblings, 1 reply; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-12  6:55 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: ksummit-discuss

(2014/05/09 21:32), Daniel Vetter wrote:
> On Thu, May 8, 2014 at 5:20 AM, Masami Hiramatsu
> <masami.hiramatsu.pt@hitachi.com> wrote:
>>>> Ah, that's a good example for adding new feature/bugfix with test case! :)
>>>> I think this type of combined patch will be good to run tests with git-bisect.
>>>> At least out-of-tree test should work with git-bisect.
>>>
>>> At least for drm/i915 I don't think merging the tests into the kernel
>>> would be beneficial, at least now:
>>
>> Hm, it seems some other subsystems have their own testsuites, I think we'd
>> better clarify the testing policy for each subsystem,
>> using dedicated testing tools or in-kernel selftest.
> 
> What I'd like to discuss for the overall kernel is minimal standards
> that apply everywhere. And process issues like where external test
> suites should be documented (they're useless if people can't find
> them) and how much pressure we should apply to get them into the
> kernel git tree. And whether that makes sense in all cases or not.

Agreed. I'd like to suggest adding a UT: (unit test) or TS: (testsuite)
tag to the MAINTAINERS file, which would make it clear how to test a
subsystem when submitting patches. :)
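Something like this, purely as a hypothetical sketch (the UT:/TS: tags
don't exist today and the entry is made up; the other tags are the
existing MAINTAINERS ones):

	EXAMPLE SUBSYSTEM
	M:	Some Maintainer <maintainer@example.org>
	L:	example-list@vger.kernel.org
	S:	Maintained
	F:	drivers/example/
	UT:	tools/testing/selftests/example/
	TS:	git://git.example.org/example-tests.git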

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-12  6:55                 ` Masami Hiramatsu
@ 2014-05-13 20:36                   ` Steven Rostedt
  2014-05-13 20:40                     ` Davidlohr Bueso
  2014-05-14  1:30                     ` Masami Hiramatsu
  0 siblings, 2 replies; 51+ messages in thread
From: Steven Rostedt @ 2014-05-13 20:36 UTC (permalink / raw)
  To: Masami Hiramatsu; +Cc: ksummit-discuss

On Mon, 12 May 2014 15:55:42 +0900
Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:


> Agreed. I'd like to suggest adding UT: tag or TS: tag for unit test
> for testsuite to MAINTAINERS file, which will make clear how to test
> that subsystem when submitting patches. :)

That's a good idea.

I still need to post my ftrace tests somewhere. I could add them to
tools/testing/selftests, but I'd break rule #1:

 *  Do as much as you can if you're not root

because ftrace only works with root.

Some of my tests take a bit of time to run as well. Maybe it's OK to
just add a tools/testing/tracing directory to place my tests (after
they get cleaned up to be "public"). Or perhaps just send them to the
LTP project.
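As a rough sketch of the kind of root-only check involved (not one of
the actual tests, just an illustration using the standard ftrace
debugfs files):

	#!/bin/sh
	# ftrace lives under debugfs, so this only works as root
	[ "$(id -u)" -eq 0 ] || { echo "skipping: not root"; exit 1; }
	cd /sys/kernel/debug/tracing || exit 1
	echo function > current_tracer        # enable the function tracer
	grep -q '^function$' current_tracer || exit 1
	echo nop > current_tracer             # reset
	exit 0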

-- Steve

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-13 20:36                   ` Steven Rostedt
@ 2014-05-13 20:40                     ` Davidlohr Bueso
  2014-05-14  1:30                     ` Masami Hiramatsu
  1 sibling, 0 replies; 51+ messages in thread
From: Davidlohr Bueso @ 2014-05-13 20:40 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: ksummit-discuss

On Tue, 2014-05-13 at 16:36 -0400, Steven Rostedt wrote:
> On Mon, 12 May 2014 15:55:42 +0900
> Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:
> 
> 
> > Agreed. I'd like to suggest adding UT: tag or TS: tag for unit test
> > for testsuite to MAINTAINERS file, which will make clear how to test
> > that subsystem when submitting patches. :)
> 
> That's a good idea.
> 
> I still need to post my ftrace tests somewhere. I could add it to
> tools/testing/selftests, but I break rule #1:
> 
>  *  Do as much as you can if you're not root

Sorry, but *why*? I would have thought that selftests are crash and
burn, run at your own risk... heck, they're only used by kernel
developers, not sysadmins!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-07  3:05   ` Li Zefan
  2014-05-07  3:31     ` Masami Hiramatsu
  2014-05-07  7:20     ` Laurent Pinchart
@ 2014-05-13 20:46     ` Steven Rostedt
  2 siblings, 0 replies; 51+ messages in thread
From: Steven Rostedt @ 2014-05-13 20:46 UTC (permalink / raw)
  To: Li Zefan; +Cc: ksummit-discuss

On Wed, 7 May 2014 11:05:53 +0800
Li Zefan <lizefan@huawei.com> wrote:

> I kind of remember Andrew once suggested a new feature can't be accepted
> unless it comes with test cases?

This is a good idea. But where should we place it? I'm not sure I care
for tools/testing/selftests/* running all the tests. Hmm, does a test
in this directory need to be run from the top level? That is, if I add
tracing here, I'd expect it to be ignored by the top-level makefile.

Perhaps "run all" shouldn't be the default but should have to be
requested explicitly.

Do people like to test everything, or just what they modify? To me,
this should be something for developers to use. If you modify the
memory subsystem, you should run the memory tests before submitting.
Network? Then run the network tests. And so on. I don't see the normal
user of the selftests wanting to run tests for every subsystem.

And things like networking may require more work to get the tests
running (another machine, perhaps). But having a place to put tests
that subsystem maintainers use to verify their work would be useful. I
believe it should be more for those working on the code than another
LTP.

-- Steve

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable issues
  2014-05-13 20:36                   ` Steven Rostedt
  2014-05-13 20:40                     ` Davidlohr Bueso
@ 2014-05-14  1:30                     ` Masami Hiramatsu
  1 sibling, 0 replies; 51+ messages in thread
From: Masami Hiramatsu @ 2014-05-14  1:30 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: ksummit-discuss

(2014/05/14 5:36), Steven Rostedt wrote:
> On Mon, 12 May 2014 15:55:42 +0900
> Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> wrote:
> 
> 
>> Agreed. I'd like to suggest adding UT: tag or TS: tag for unit test
>> for testsuite to MAINTAINERS file, which will make clear how to test
>> that subsystem when submitting patches. :)
> 
> That's a good idea.
> 
> I still need to post my ftrace tests somewhere. I could add it to
> tools/testing/selftests, but I break rule #1:
> 
>  *  Do as much as you can if you're not root
> 
> because ftrace only works with root.
> 
> Some of my tests take a bit of time to run as well.

I see. I'd like to have an option to run the tests in a short mode or
in full (which takes a very long time). For example, probing all kernel
functions with kprobes may take a very long time, but I'd like to add
it to the kernel tree as a stability check. Or should we separate such
time-consuming tests from the selftests?

> Maybe it's OK to
> just add a tools/testing/tracing directory to place my tests (after
> they get cleaned up to be "public"). Or perhaps just send them to the
> LTP project.

IMHO the tests for debugfs things would be better off in the kernel
tree, since their uapi is unstable (not guaranteed to be stable).

Thank you,

-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2014-05-14  1:30 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-05-04 11:19 [Ksummit-discuss] [CORE TOPIC] stable issues Li Zefan
2014-05-04 12:04 ` Guenter Roeck
2014-05-04 12:54 ` Josh Boyer
2014-05-04 14:26   ` Guenter Roeck
2014-05-05  0:37     ` Josh Boyer
2014-05-05  3:09       ` Li Zefan
2014-05-05  3:47       ` Guenter Roeck
2014-05-05 11:31         ` Jason Cooper
2014-05-05 13:40           ` Guenter Roeck
2014-05-05  6:10       ` Michal Simek
2014-05-05  2:47   ` Li Zefan
2014-05-05 13:41     ` Theodore Ts'o
2014-05-05 15:23       ` Takashi Iwai
2014-05-05 15:39         ` Jan Kara
2014-05-05 16:02           ` Takashi Iwai
2014-05-05 16:07             ` Jason Cooper
2014-05-05 16:17               ` Takashi Iwai
2014-05-05 22:33       ` Greg KH
2014-05-06  3:20         ` Steven Rostedt
2014-05-06  4:04           ` Guenter Roeck
2014-05-06 10:49             ` Steven Rostedt
2014-05-05  3:22   ` Greg KH
2014-05-04 15:35 ` Ben Hutchings
2014-05-04 15:45   ` Guenter Roeck
2014-05-05  3:00   ` Li Zefan
2014-05-05  1:03 ` Olof Johansson
2014-05-07  2:49 ` Masami Hiramatsu
2014-05-07  2:58   ` Davidlohr Bueso
2014-05-07  8:27     ` Masami Hiramatsu
2014-05-07  8:39       ` Matt Fleming
2014-05-07 11:45         ` Masami Hiramatsu
2014-05-07 12:45           ` Daniel Vetter
2014-05-08  3:20             ` Masami Hiramatsu
2014-05-09 12:32               ` Daniel Vetter
2014-05-12  6:55                 ` Masami Hiramatsu
2014-05-13 20:36                   ` Steven Rostedt
2014-05-13 20:40                     ` Davidlohr Bueso
2014-05-14  1:30                     ` Masami Hiramatsu
2014-05-07 18:40       ` Davidlohr Bueso
2014-05-07  9:06     ` Dan Carpenter
2014-05-07 14:15       ` Jan Kara
2014-05-08  3:38         ` Li Zefan
2014-05-08  9:41           ` Jan Kara
2014-05-08 20:35             ` Andy Lutomirski
2014-05-09  4:11               ` Greg KH
2014-05-09  5:33                 ` Masami Hiramatsu
2014-05-09  5:41                   ` Greg KH
2014-05-07  3:05   ` Li Zefan
2014-05-07  3:31     ` Masami Hiramatsu
2014-05-07  7:20     ` Laurent Pinchart
2014-05-13 20:46     ` Steven Rostedt
