* [Ksummit-discuss] [CORE TOPIC] stable workflow
@ 2016-07-08 22:35 Jiri Kosina
  2016-07-08 23:12 ` Guenter Roeck
                   ` (4 more replies)
  0 siblings, 5 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-07-08 22:35 UTC (permalink / raw)
  To: ksummit-discuss

Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it, 
wouldn't it? :)

As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the 
crucial elements I rely on (and I also try to make sure that SUSE 
contributes back as much as possible).

Hence any planned changes in the workflow / releases are rather essential 
for me, and I'd like to participate, should any such discussion take 
place.

In addition to that, I'd again (like during the past 5+ years, but it 
never really happened) like to propose a stable tree discussion topic: I'd 
like to see an attempt to make the stable workflow more oriented towards 
"maintainers sending pull requests" rather than "random people pointing to 
patches that should go to stable". This has been much of an issue in the 
past, when we were seeing many stable tree regressions; that's not the 
case any more, but it is still an area where I see room for improvement.

Thanks,

-- 
Jiri Kosina
SUSE Labs

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 22:35 [Ksummit-discuss] [CORE TOPIC] stable workflow Jiri Kosina
@ 2016-07-08 23:12 ` Guenter Roeck
  2016-07-08 23:38   ` Luck, Tony
                     ` (2 more replies)
  2016-07-09  0:06 ` Jason Cooper
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-08 23:12 UTC (permalink / raw)
  To: Jiri Kosina, ksummit-discuss

On 07/08/2016 03:35 PM, Jiri Kosina wrote:
> Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
> wouldn't it? :)
>
> As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
> crucial elements I rely on (and I also try to make sure that SUSE
> contributes back as much as possible).
>
> Hence any planned changes in the workflow / releases are rather essential
> for me, and I'd like to participate, should any such discussion take
> place.
>

Same here. New employer, lots of unhappiness with stable releases, to the point
where stable trees are not used as basis for shipping releases.
That kind of defeats the purpose. So, instead of "let's ignore stable",
maybe we can get to a point where people feel comfortable with the quality
of stable releases, and where stable can actually be used as basis for production
releases.

> In addition to that, I'd again (like during the past 5+ years, but it
> never really happened) like to propose a stable tree discussion topic: I'd
> like to see an attempt to make the stable workflow more oriented towards
> "maintainers sending pull requests" rather than "random people pointing to
> patches that should go to stable". This has been much of an issue in the

Sounds like an excellent idea.

> past, when we were seeing many stable tree regressions; that's not the
> case any more, but it is still an area where I see room for improvement.
>

We have a lot more testing, which I am sure helps, but from what I hear
the lack of test coverage and the risk of inheriting regressions are still
big concerns.

Guenter

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 23:12 ` Guenter Roeck
@ 2016-07-08 23:38   ` Luck, Tony
  2016-07-09  8:34     ` Jiri Kosina
  2016-07-08 23:52   ` Rafael J. Wysocki
  2016-07-09  0:10   ` Dmitry Torokhov
  2 siblings, 1 reply; 244+ messages in thread
From: Luck, Tony @ 2016-07-08 23:38 UTC (permalink / raw)
  To: Guenter Roeck, Jiri Kosina, ksummit-discuss

> In addition to that, I'd again (like during the past 5+ years, but it
> never really happened) like to propose a stable tree discussion topic: I'd
> like to see an attempt to make the stable workflow more oriented towards
> "maintainers sending pull requests" rather than "random people pointing to
> patches that should go to stable". This has been much of an issue in the

Shouldn't the common case be "Maintainer sends list of commit IDs to
be cherry-picked" rather than a pull request?

It's only when things get complicated that you fall back to:
1) commit to cherry-pick + fixup patch
or in extreme cases
2) Patch that does equivalent fix, but without having to pull in a ton
    of other things from mainline.
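
For the common case described above, the stable-side flow is roughly the
following sketch (the branch name and commit IDs are placeholders, not real
commits):

    # start a queue branch from the stable branch being targeted
    git checkout -b queue-4.4 linux-4.4.y
    # -x records "(cherry picked from commit ...)" in the new changelog
    git cherry-pick -x 0123456789ab
    git cherry-pick -x fedcba987654
    # case 1 above: resolve the conflict / apply the fixup, then
    #   git cherry-pick --continue
    # case 2 above: apply an equivalent backport patch instead, e.g.
    #   git am 0001-equivalent-fix-for-4.4.patch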

-Tony

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 23:12 ` Guenter Roeck
  2016-07-08 23:38   ` Luck, Tony
@ 2016-07-08 23:52   ` Rafael J. Wysocki
  2016-07-09  0:06     ` Dmitry Torokhov
  2016-07-09  0:10   ` Dmitry Torokhov
  2 siblings, 1 reply; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-07-08 23:52 UTC (permalink / raw)
  To: Guenter Roeck, Jiri Kosina
  Cc: ksummit-discuss, Greg Kroah-Hartman, ksummit-discuss

On Friday, July 08, 2016 04:12:14 PM Guenter Roeck wrote:
> On 07/08/2016 03:35 PM, Jiri Kosina wrote:
> > Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
> > wouldn't it? :)
> >
> > As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
> > crucial elements I rely on (and I also try to make sure that SUSE
> > contributes back as much as possible).
> >
> > Hence any planned changes in the workflow / releases are rather essential
> > for me, and I'd like to participate, should any such discussion take
> > place.
> >
> 
> Same here. New employer, lots of unhappiness with stable releases, to the point
> where stable trees are not used as basis for shipping releases.
> That kind of defeats the purpose. So, instead of "let's ignore stable",
> maybe we can get to a point where people feel comfortable with the quality
> of stable releases, and where stable can actually be used as basis for production
> releases.
> 
> > In addition to that, I'd again (like during the past 5+ years, but it
> > never really happened) like to propose a stable tree discussion topic: I'd
> > like to see an attempt to make the stable workflow more oriented towards
> > "maintainers sending pull requests" rather than "random people pointing to
> > patches that should go to stable". This has been much of an issue in the
> 
> Sounds like an excellent idea.

I'm wondering if anyone can tell what fraction of stable regressions come
from commits marked with the "Cc: stable" tag by the maintainers in the first
place.

If that fraction is significant, then I'm afraid it won't help to ask
maintainers to send pull requests to stable and it will affect their
bandwidth (sort of limited already).

To me, the source of the problem is that sometimes it really is hard to see
the "regression potential" upfront, so to speak, and when the commit gets into
stable, it's already too late.

And honestly, the "we don't revert things from stable if the mainline hasn't
reverted them yet" policy doesn't really help, because the mainline may choose
to fix the problem instead of reverting and that may take time and while the
mainline fix is happily being worked on, the users of stable are sort of left
in the cold with code that doesn't work.  And it may go like that for weeks
in extreme cases.

Thanks,
Rafael

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 22:35 [Ksummit-discuss] [CORE TOPIC] stable workflow Jiri Kosina
  2016-07-08 23:12 ` Guenter Roeck
@ 2016-07-09  0:06 ` Jason Cooper
  2016-07-09  0:42   ` James Bottomley
  2016-07-10  7:21 ` Takashi Iwai
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 244+ messages in thread
From: Jason Cooper @ 2016-07-09  0:06 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: ksummit-discuss

Hi Jiri,

On Sat, Jul 09, 2016 at 12:35:09AM +0200, Jiri Kosina wrote:
> I'd like to see an attempt to make the stable workflow more oriented
> towards "maintainers sending pull requests" rather than "random people
> pointing to patches that should go to stable".

How does that differ from "Cc: stable.." ?  In my experience, it's
mostly the maintainers adding that tag after looking at the commit it
"Fixes", if the commit id was provided.  Admittedly, my exposure is
limited to ARM mvebu and irqchip for the most part.
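
(For reference, those tags look like this in a mainline commit message; the
commit ID and version annotation below are made up:

    Fixes: 0123456789ab ("subsys: original buggy commit subject")
    Cc: stable@vger.kernel.org # v4.4+

The optional "# v4.4+" part tells the stable maintainers which stable series
the fix is meant for.)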

Do you want pull requests in order to limit patches to only from
maintainers?  Or to include a series of patches that have had more
testing against specific kernel versions?

Do you have a sense of the specific regressions that cause people to
give up on -stable?

thx,

Jason.

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 23:52   ` Rafael J. Wysocki
@ 2016-07-09  0:06     ` Dmitry Torokhov
  2016-07-09  8:37       ` Jiri Kosina
  0 siblings, 1 reply; 244+ messages in thread
From: Dmitry Torokhov @ 2016-07-09  0:06 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: ksummit-discuss, Greg Kroah-Hartman, ksummit-discuss

On Sat, Jul 09, 2016 at 01:52:19AM +0200, Rafael J. Wysocki wrote:
> On Friday, July 08, 2016 04:12:14 PM Guenter Roeck wrote:
> > On 07/08/2016 03:35 PM, Jiri Kosina wrote:
> > > Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
> > > wouldn't it? :)
> > >
> > > As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
> > > crucial elements I rely on (and I also try to make sure that SUSE
> > > contributes back as much as possible).
> > >
> > > Hence any planned changes in the workflow / releases are rather essential
> > > for me, and I'd like to participate, should any such discussion take
> > > place.
> > >
> > 
> > Same here. New employer, lots of unhappiness with stable releases, to the point
> > where stable trees are not used as basis for shipping releases.
> > That kind of defeats the purpose. So, instead of "let's ignore stable",
> > maybe we can get to a point where people feel comfortable with the quality
> > of stable releases, and where stable can actually be used as basis for production
> > releases.
> > 
> > > In addition to that, I'd again (like during the past 5+ years, but it
> > > never really happened) like to propose a stable tree discussion topic: I'd
> > > like to see an attempt to make the stable workflow more oriented towards
> > > "maintainers sending pull requests" rather than "random people pointing to
> > > patches that should go to stable". This has been much of an issue in the
> > 
> > Sounds like an excellent idea.

I am not sure that will work well for everyone. I cannot keep track of,
and prepare pull requests for, all the stable releases out there. Everyone
and their dog has a stable release nowadays. If they copy me on a patch
they want to include and I see an issue with it, I'll drop them a note,
but that is it.

> 
> I'm wondering if anyone can tell what fraction of stable regressions come
> from commits marked with the "Cc: stable" tag by the maintainers in the first
> place.

Also, do we have many changes that go to stable without maintainers
actually adding cc: stable annotation in mainline?

> 
> If that fraction is significant, then I'm afraid it won't help to ask
> maintainers to send pull requests to stable and it will affect their
> bandwidth (sort of limited already).
> 
> To me, the source of the problem is that sometimes it really is hard to see
> the "regression potential" upfront, so to speak, and when the commit gets into
> stable, it's already too late.
> 
> And honestly, the "we don't revert things from stable if the mainline hasn't
> reverted them yet" policy doesn't really help, because the mainline may choose
> to fix the problem instead of reverting and that may take time and while the
> mainline fix is happily being worked on, the users of stable are sort of left
> in the cold with code that doesn't work.  And it may go like that for weeks
> in extreme cases.

This is especially true if such a regression, introduced in the latest
mainline, stays in an older stable tree. It would be better to revert such
regressions right away and then, if mainline adopts a fix rather than a
revert, re-cherry-pick the change along with the fix.

Thanks.

-- 
Dmitry

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 23:12 ` Guenter Roeck
  2016-07-08 23:38   ` Luck, Tony
  2016-07-08 23:52   ` Rafael J. Wysocki
@ 2016-07-09  0:10   ` Dmitry Torokhov
  2016-07-09  0:37     ` Rafael J. Wysocki
  2016-07-10  7:37     ` Takashi Iwai
  2 siblings, 2 replies; 244+ messages in thread
From: Dmitry Torokhov @ 2016-07-09  0:10 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: ksummit-discuss

On Fri, Jul 08, 2016 at 04:12:14PM -0700, Guenter Roeck wrote:
> On 07/08/2016 03:35 PM, Jiri Kosina wrote:
> >Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
> >wouldn't it? :)
> >
> >As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
> >crucial elements I rely on (and I also try to make sure that SUSE
> >contributes back as much as possible).
> >
> >Hence any planned changes in the workflow / releases are rather essential
> >for me, and I'd like to participate, should any such discussion take
> >place.
> >
> 
> Same here. New employer, lots of unhappiness with stable releases, to the point
> where stable trees are not used as basis for shipping releases.
> That kind of defeats the purpose. So, instead of "let's ignore stable",
> maybe we can get to a point where people feel comfortable with the quality
> of stable releases, and where stable can actually be used as basis for production
> releases.

I wonder if it would not be a good idea to split stable into several
flavors: security, fixes to core (really fixes), and fixes to device
drivers + new hardware support. I feel that with current single stable
tree (per stable release) we are too liberal with what we direct towards
stable, with many changes not being strictly necessary, but rather "nice
to have".

Thanks.

-- 
Dmitry

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:10   ` Dmitry Torokhov
@ 2016-07-09  0:37     ` Rafael J. Wysocki
  2016-07-09  0:43       ` Dmitry Torokhov
  2016-07-10  7:37     ` Takashi Iwai
  1 sibling, 1 reply; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-07-09  0:37 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: ksummit-discuss, ksummit-discuss

On Friday, July 08, 2016 05:10:46 PM Dmitry Torokhov wrote:
> On Fri, Jul 08, 2016 at 04:12:14PM -0700, Guenter Roeck wrote:
> > On 07/08/2016 03:35 PM, Jiri Kosina wrote:
> > >Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
> > >wouldn't it? :)
> > >
> > >As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
> > >crucial elements I rely on (and I also try to make sure that SUSE
> > >contributes back as much as possible).
> > >
> > >Hence any planned changes in the workflow / releases are rather essential
> > >for me, and I'd like to participate, should any such discussion take
> > >place.
> > >
> > 
> > Same here. New employer, lots of unhappiness with stable releases, to the point
> > where stable trees are not used as basis for shipping releases.
> > That kind of defeats the purpose. So, instead of "let's ignore stable",
> > maybe we can get to a point where people feel comfortable with the quality
> > of stable releases, and where stable can actually be used as basis for production
> > releases.
> 
> I wonder if it would not be a good idea to split stable into several
> flavors: security, fixes to core (really fixes), and fixes to device
> drivers + new hardware support.

That would be sort of confusing IMO.

The "one stable series per release" model is good, because it is really
straightforward and it is rather hard to get things wrong within it.

> I feel that with current single stable
> tree (per stable release) we are too liberal with what we direct towards
> stable, with many changes not being strictly necessary, but rather "nice
> to have".

To me, as long as they all are fixes, that's fine.

I tend to think that all known bugs should be fixed, at least because once
they have been fixed, no one needs to remember about them any more. :-)

Moreover, minor fixes don't really introduce regressions that often (from my
experience), because they tend to be simple.  Significant fixes, on the other
hand, tend to be more complicated or more subtle and the risk of regressions
from them is so much greater.

Thanks,
Rafael

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:06 ` Jason Cooper
@ 2016-07-09  0:42   ` James Bottomley
  2016-07-09  8:43     ` Jiri Kosina
  2016-07-09 14:57     ` Jason Cooper
  0 siblings, 2 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-09  0:42 UTC (permalink / raw)
  To: Jason Cooper, Jiri Kosina; +Cc: ksummit-discuss

On Sat, 2016-07-09 at 00:06 +0000, Jason Cooper wrote:
> Hi Jiri,
> 
> On Sat, Jul 09, 2016 at 12:35:09AM +0200, Jiri Kosina wrote:
> > I'd like to see an attempt to make the stable workflow more 
> > oriented towards "maintainers sending pull requests" rather than 
> > "random people pointing to patches that should go to stable".
> 
> How does that differ from "Cc: stable.." ?  In my experience, it's
> mostly the maintainers adding that tag after looking at the commit it
> "Fixes", if the commit id was provided.  Admittedly, my exposure is
> limited to ARM mvebu and irqchip for the most part.

Actually, we do have maintainers who curate their own stable tree. 
 David Miller for networking is an example.  Perhaps we should ask him
and others who do this to describe the advantages they see in their
trees over the "tag it for stable and forget about it" mentality that
the rest of us have.  Perhaps maintainers should be running their own
stable trees ... perhaps what they're doing is OK.  Debating it will at
least flush out the issues.

> Do you want pull requests in order to limit patches to only from
> maintainers?  Or to include a series of patches that have had more
> testing against specific kernel versions?

The former is how the net stable tree works.

> Do you have a sense of the specific regressions that cause people to
> give up on -stable?

Every added patch has potential consequences.  In theory the
maintainers are best placed to understand what they are, so a
maintainer to stable flow might be the best way of controlling
regressions in stable.  On the other hand, running stable trees is
something Greg was supposed to be offloading from Maintainers, so I
suspect a lot of them don't want the added burden of having to care.

I'm not saying there's a right answer.  I am saying I think it's worth
the discussion.

James


> thx,
> 
> Jason.

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:37     ` Rafael J. Wysocki
@ 2016-07-09  0:43       ` Dmitry Torokhov
  2016-07-09  1:53         ` Guenter Roeck
  2016-07-09 10:05         ` James Bottomley
  0 siblings, 2 replies; 244+ messages in thread
From: Dmitry Torokhov @ 2016-07-09  0:43 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: ksummit-discuss, ksummit-discuss

On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki wrote:
> On Friday, July 08, 2016 05:10:46 PM Dmitry Torokhov wrote:
> > On Fri, Jul 08, 2016 at 04:12:14PM -0700, Guenter Roeck wrote:
> > > On 07/08/2016 03:35 PM, Jiri Kosina wrote:
> > > >Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
> > > >wouldn't it? :)
> > > >
> > > >As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
> > > >crucial elements I rely on (and I also try to make sure that SUSE
> > > >contributes back as much as possible).
> > > >
> > > >Hence any planned changes in the workflow / releases are rather essential
> > > >for me, and I'd like to participate, should any such discussion take
> > > >place.
> > > >
> > > 
> > > Same here. New employer, lots of unhappiness with stable releases, to the point
> > > where stable trees are not used as basis for shipping releases.
> > > That kind of defeats the purpose. So, instead of "let's ignore stable",
> > > maybe we can get to a point where people feel comfortable with the quality
> > > of stable releases, and where stable can actually be used as basis for production
> > > releases.
> > 
> > I wonder if it would not be a good idea to split stable into several
> > flavors: security, fixes to core (really fixes), and fixes to device
> > drivers + new hardware support.
> 
> That would be sort of confusing IMO.
> 
> The "one stable series per release" model is good, because it is really
> straightforward and it is rather hard to get things wrong within it.

It really depends on what you intend to do with it. For a general-purpose
distribution - yes, you take everything and cross your fingers that we
fixed more bugs than we introduced. When you are building a kernel for a
specific set of devices you might want to be more selective, because you
know the subset of features and hardware you are using.

> 
> > I feel that with current single stable
> > tree (per stable release) we are too liberal with what we direct towards
> > stable, with many changes not being strictly necessary, but rather "nice
> > to have".
> 
> To me, as long as they all are fixes, that's fine.

It depends on how much testing was done with them. Given that nobody has
all the hardware permutations that are in the wild, and depending on the
severity of the bug, it sometimes makes sense to skip stable and let it be
till the next release.

> 
> I tend to think that all known bugs should be fixed, at least because once
> they have been fixed, no one needs to remember about them any more. :-)
> 
> Moreover, minor fixes don't really introduce regressions that often 

Famous last words :)

> (from my
> experience), because they tend to be simple.  Significant fixes, on the other
> hand, tend to be more complicated or more subtle and the risk of regressions
> from them is so much greater.
> 

Thanks.

-- 
Dmitry

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:43       ` Dmitry Torokhov
@ 2016-07-09  1:53         ` Guenter Roeck
  2016-07-09 10:05         ` James Bottomley
  1 sibling, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-09  1:53 UTC (permalink / raw)
  To: Dmitry Torokhov, Rafael J. Wysocki; +Cc: ksummit-discuss, ksummit-discuss

On 07/08/2016 05:43 PM, Dmitry Torokhov wrote:
> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki wrote:
>> On Friday, July 08, 2016 05:10:46 PM Dmitry Torokhov wrote:
>>> On Fri, Jul 08, 2016 at 04:12:14PM -0700, Guenter Roeck wrote:
>>>> On 07/08/2016 03:35 PM, Jiri Kosina wrote:
>>>>> Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
>>>>> wouldn't it? :)
>>>>>
>>>>> As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
>>>>> crucial elements I rely on (and I also try to make sure that SUSE
>>>>> contributes back as much as possible).
>>>>>
>>>>> Hence any planned changes in the workflow / releases are rather essential
>>>>> for me, and I'd like to participate, should any such discussion take
>>>>> place.
>>>>>
>>>>
>>>> Same here. New employer, lots of unhappiness with stable releases, to the point
>>>> where stable trees are not used as basis for shipping releases.
>>>> That kind of defeats the purpose. So, instead of "let's ignore stable",
>>>> maybe we can get to a point where people feel comfortable with the quality
>>>> of stable releases, and where stable can actually be used as basis for production
>>>> releases.
>>>
>>> I wonder if it would not be a good idea to split stable into several
>>> flavors: security, fixes to core (really fixes), and fixes to device
>>> drivers + new hardware support.
>>
>> That would be sort of confusing IMO.
>>
>> The "one stable series per release" model is good, because it is really
>> straightforward and it is rather hard to get things wrong within it.
>
> It really depends on what you intend to do with it. For a general-purpose
> distribution - yes, you take everything and cross your fingers that we
> fixed more bugs than we introduced. When you are building a kernel for a
> specific set of devices you might want to be more selective, because you
> know the subset of features and hardware you are using.
>

Ideally, -stable patches should not introduce any new bugs (yes, I know,
I am dreaming here).

Unfortunately, we cannot avoid introducing new bugs. The question for me is
how to catch them. Being more restrictive might help; improved testing
might help.

Testing has improved a lot over the last couple of years. It would be
interesting to know how much the improved test coverage has resulted in
objective stability improvements. No idea if there is a way to measure that.
Is there?

Either way, even though testing has improved a lot, I think it still has a
long way to go. How can we get there? How can we convince companies that it
would be worth their money to invest in testing upstream releases - not just
mainline, but -stable releases as well? Trying to find answers to those
questions might be worthwhile discussion points for the KS.

>>
>>> I feel that with current single stable
>>> tree (per stable release) we are too liberal with what we direct towards
>>> stable, with many changes not being strictly necessary, but rather "nice
>>> to have".
>>
>> To me, as long as they all are fixes, that's fine.
>

Reminds me of that seemingly straightforward fix for a build warning which
required several follow-up patches to fix the problems it introduced.
In 4.4, this ended up as:

5a58f809d731 regulator: core: Fix nested locking of supplies
29c9f634cb13 regulator: core: Ensure we lock all regulators
f500da32a166 regulator: core: fix regulator_lock_supply regression
34af67eb941a Revert "regulator: core: Fix nested locking of supplies"
b1999fa6e814 regulator: core: Fix nested locking of supplies
4c8fe4f52755 regulator: core: avoid unused variable warning

One could argue that 4c8fe4f52755 wasn't really worth the trouble.

Guenter

> It depends on how much testing was done with them. Given that nobody has
> all the hardware permutations that are in the wild, and depending on the
> severity of the bug, it sometimes makes sense to skip stable and let it be
> till the next release.
>
>>
>> I tend to think that all known bugs should be fixed, at least because once
>> they have been fixed, no one needs to remember about them any more. :-)
>>
>> Moreover, minor fixes don't really introduce regressions that often
>
> Famous last words :)
>
>> (from my
>> experience), because they tend to be simple.  Significant fixes, on the other
>> hand, tend to be more complicated or more subtle and the risk of regressions
>> from them is so much greater.
>>
>
> Thanks.
>

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 23:38   ` Luck, Tony
@ 2016-07-09  8:34     ` Jiri Kosina
  2016-07-09  8:58       ` Guenter Roeck
                         ` (3 more replies)
  0 siblings, 4 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-07-09  8:34 UTC (permalink / raw)
  To: Luck, Tony; +Cc: ksummit-discuss

On Fri, 8 Jul 2016, Luck, Tony wrote:

> > In addition to that, I'd again (like during the past 5+ years, but it
> > never really happened) like to propose a stable tree discussion topic: I'd
> > like to see an attempt to make the stable workflow more oriented towards
> > "maintainers sending pull requests" rather than "random people pointing to
> > patches that should go to stable". This has been much of an issue in the
> 
> Shouldn't the common case be "Maintainer sends list of commit IDs to
> be cherry-picked" rather than a pull request?

Yeah, explicitly using the term "pull request" was probably way too specific.

The model I'd really love to see is "a person/group of people 
(maintainers) are identified and appointed responsible for what ends up in 
-stable for a particular subsystem", i.e. the same model we use for mainline 
development.

Whether it's an actual git pull request, a list of commit IDs, etc. is
really just a technicality.
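
(For the pull request variant, a minimal sketch, with the tree URL and
branch name being placeholders:

    # summarize and diffstat the fixes queued on a stable-bound branch
    git request-pull v4.4.14 git://git.example.org/linux.git stable-4.4-fixes

For the commit-ID variant, a plain list of mainline SHA-1s in a mail to
stable@vger.kernel.org serves the same purpose.)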

Basically: currently the model is that everybody is free to pick up a
random commit and bounce it to -stable. What I'd like to see is that this is
routed through the maintainers instead, who then push things upstream
(where upstream means stable).

I know that there are exceptions where this is working properly (netdev), 
I personally am doing that also informally (when people tell me "hey, this 
should go to stable", I do whatever is necessary), but still the general 
process as such is not there.

The usual counter-argument I've always received from the stable team to 
that was "Maintainers are busy enough already, if we start enforcing this, 
we'd have much less patches in -stable". I personally don't see that as a 
bad thing. "Less is more" might apply here. If someone is really unhappy 
about state of particular subsystem in -stable, it'd mean that group of 
maintainers will have to be extended for that particular subsystem.

Thanks,

-- 
Jiri Kosina
SUSE Labs

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:06     ` Dmitry Torokhov
@ 2016-07-09  8:37       ` Jiri Kosina
  2016-07-09  9:12         ` Mark Brown
  0 siblings, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-07-09  8:37 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: ksummit-discuss, Greg Kroah-Hartman, ksummit-discuss

On Fri, 8 Jul 2016, Dmitry Torokhov wrote:

> Also, do we have many changes that go to stable without maintainers 
> actually adding cc: stable annotation in mainline?

I have a feeling (but it's not backed up by any proper statistics) that we 
do have quite a high number of such patches.

Yes, sure, the maintainer is later CCed on the patch in the -stable queue 
review patchbomb, but that's unfortunately in practice not the same thing 
as if he actually actively considered the patch for inclusion himself.

-- 
Jiri Kosina
SUSE Labs

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:42   ` James Bottomley
@ 2016-07-09  8:43     ` Jiri Kosina
  2016-07-09  9:36       ` Mark Brown
  2016-07-09 14:57     ` Jason Cooper
  1 sibling, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-07-09  8:43 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss, Jason Cooper

On Fri, 8 Jul 2016, James Bottomley wrote:

> In theory the maintainers are best placed to understand what they are, 
> so a maintainer to stable flow might be the best way of controlling 
> regressions in stable.  

That exactly has been my reasoning, yes.

> On the other hand, running stable trees is something Greg was supposed 
> to be offloading from Maintainers, so I suspect a lot of them don't want 
> the added burden of having to care.

I do understand that, but let's face it; our ultimate and primary goal 
here should be 'making -stable reliably stable', not so much offloading 
work from maintainers.

If maintainers are overwhelmed by extra work needed for stable, 
"offloading to Greg" doesn't sound like a proper solution to me at all. 
"Fixing a maintainer workflow for that particular subsystem" (such as 
extending the group of maintainers) does.

> I'm not saying there's a right answer.  I am saying I think it's worth
> the discussion.

Thanks,

-- 
Jiri Kosina
SUSE Labs

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  8:34     ` Jiri Kosina
@ 2016-07-09  8:58       ` Guenter Roeck
  2016-07-09  9:29       ` Johannes Berg
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-09  8:58 UTC (permalink / raw)
  To: Jiri Kosina, Luck, Tony; +Cc: ksummit-discuss

On 07/09/2016 01:34 AM, Jiri Kosina wrote:
[ ... ]
>
> The model I'd really love to see is "a person/group of people
> (maintainers) are identified and appointed responsible for what ends up in
> -stable for a particular subsystem", i.e. the same model we use for mainline
> development.
>
[ ... ]
>
> The usual counter-argument I've always received from the stable team to
> that was "Maintainers are busy enough already, if we start enforcing this,
> we'd have much less patches in -stable". I personally don't see that as a
> bad thing. "Less is more" might apply here. If someone is really unhappy
> about state of particular subsystem in -stable, it'd mean that group of
> maintainers will have to be extended for that particular subsystem.
>

Agreed, especially in the context of concerns that too many patches find
their way into stable releases.

Guenter

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  8:37       ` Jiri Kosina
@ 2016-07-09  9:12         ` Mark Brown
  0 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-09  9:12 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: ksummit-discuss, Greg Kroah-Hartman, ksummit-discuss

On Sat, Jul 09, 2016 at 10:37:12AM +0200, Jiri Kosina wrote:
> On Fri, 8 Jul 2016, Dmitry Torokhov wrote:

> > Also, do we have many changes that go to stable without maintainers 
> > actually adding cc: stable annotation in mainline?

> I have a feeling (but it's not backed up by any proper statistics) that we 
> do have quite a high number of such patches.

> Yes, sure, the maintainer is later CCed on the patch in the -stable queue 
> review patchbomb, but that's unfortunately in practice not the same thing 
> as if he actually actively considered the patch for inclusion himself.

I'd expect to be copied in on any requests to add stable patches if
they're not coming from a commit I made, not just on the patchbomb.  I
do agree that the patchbombs are sufficiently noisy to not be useful.

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  8:34     ` Jiri Kosina
  2016-07-09  8:58       ` Guenter Roeck
@ 2016-07-09  9:29       ` Johannes Berg
  2016-07-09 15:19         ` Jason Cooper
  2016-07-09 19:15         ` Vlastimil Babka
  2016-07-09 18:39       ` Andrew Lunn
  2016-07-10  1:22       ` Rafael J. Wysocki
  3 siblings, 2 replies; 244+ messages in thread
From: Johannes Berg @ 2016-07-09  9:29 UTC (permalink / raw)
  To: Jiri Kosina, Luck, Tony; +Cc: ksummit-discuss

On Sat, 2016-07-09 at 10:34 +0200, Jiri Kosina wrote:
> 
> Basically: currently the model is that everybody is free to pick up a
> random commit and bounce it to -stable. What I'd like to see is that this
> is routed through the maintainers instead, who then push things upstream
> (where upstream means stable).
> 
> I know that there are exceptions where this is working properly
> (netdev), 

Note that for subsets of the networking tree, particularly wireless, we
*do* add Cc: stable tags, but usually that's me adding the tag.

> The usual counter-argument I've always received from the stable team
> to that was "Maintainers are busy enough already, if we start
> enforcing this, we'd have much less patches in -stable". I personally
> don't see that as a bad thing. "Less is more" might apply here. If
> someone is really unhappy about state of particular subsystem in
> -stable, it'd mean that group of maintainers will have to be extended
> for that particular subsystem.
> 

I'm not convinced by that line of reasoning, especially since in my
experience the unhappy people are often the least qualified to actually
determine the impact of a patch.


Perhaps a hybrid model, close to what we have today, would work? If a
patch is proposed for stable, instead of including it by default, ask
the maintainer(s) to separately acknowledge the patch for stable? IOW,
rather than sending a patchbomb that requires an explicit NACK (with
the previously discussed signal/noise problem), just send a list of
commits and ask maintainers to edit it? They could remove and add
commits then.

Yes, maintainers would still have some more work with that than the
current "fire & forget" approach, but it's more "fire & get a reminder"
rather than what you're proposing which would require me to track it
all, across various stable releases managed by various people, which
frankly I can't imagine being able to do (or even find qualified people
willing to help me do that, right now.)

johannes

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  8:43     ` Jiri Kosina
@ 2016-07-09  9:36       ` Mark Brown
  2016-07-09 15:13         ` Guenter Roeck
  2016-07-10 16:22         ` Vinod Koul
  0 siblings, 2 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-09  9:36 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Sat, Jul 09, 2016 at 10:43:05AM +0200, Jiri Kosina wrote:

> If maintainers are overwhelmed by extra work needed for stable, 
> "offloading to Greg" doesn't sound like a proper solution to me at all. 
> "Fixing a maintainer workflow for that particular subsystem" (such as 
> extending the group of maintainers) does.

I think one of the big things we're missing here is QA.  I don't
personally have the hardware that would allow me to test a huge chunk of
the code in my subsystems, I'm relying on things like kernelci.org for
the bulk of it.  There's some work going on on getting Greg's stable
queue tested more which will hopefully make things better but it's not
100% there yet.

There's also the volume of stable trees to consider here - we've got a
large number of stable trees which seem to be maintained in different
ways with different tooling.  One big advantage from my point of view
as a maintainer with the current model is that I don't have to figure
out which I care about or anything like that.

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:43       ` Dmitry Torokhov
  2016-07-09  1:53         ` Guenter Roeck
@ 2016-07-09 10:05         ` James Bottomley
  2016-07-09 15:49           ` Trond Myklebust
                             ` (2 more replies)
  1 sibling, 3 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-09 10:05 UTC (permalink / raw)
  To: Dmitry Torokhov, Rafael J. Wysocki; +Cc: ksummit-discuss, ksummit-discuss

On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki wrote:
> > I tend to think that all known bugs should be fixed, at least 
> > because once they have been fixed, no one needs to remember about 
> > them any more. :-)
> > 
> > Moreover, minor fixes don't really introduce regressions that often
> 
> Famous last words :)

Actually, beyond the humour, the idea that small fixes don't introduce
regressions must be our most annoying anti-pattern.  The reality is
that a lot of so called fixes do introduce bugs.  The way this happens
is that a lot of these "obvious" fixes go through without any deep
review (because they're obvious, right?) and the bugs noisily turn up
slightly later.  The way this works is usually that some code
rearrangement is sold as a "fix" and later turns out not to be
equivalent to the prior code ... sometimes in incredibly subtle ways. I
think we should all be paying much more than lip service to the old
adage "If it ain't broke don't fix it".

James

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:42   ` James Bottomley
  2016-07-09  8:43     ` Jiri Kosina
@ 2016-07-09 14:57     ` Jason Cooper
  2016-07-09 22:51       ` Jonathan Corbet
  1 sibling, 1 reply; 244+ messages in thread
From: Jason Cooper @ 2016-07-09 14:57 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss

Hi James,

On Fri, Jul 08, 2016 at 05:42:26PM -0700, James Bottomley wrote:
> On Sat, 2016-07-09 at 00:06 +0000, Jason Cooper wrote:
> > On Sat, Jul 09, 2016 at 12:35:09AM +0200, Jiri Kosina wrote:
> > > I'd like to see an attempt to make the stable workflow more 
> > > oriented towards "maintainers sending pull requests" rather than 
> > > "random people pointing to patches that should go to stable".
> > 
> > How does that differ from "Cc: stable.." ?  In my experience, it's
> > mostly the maintainers adding that tag after looking at the commit it
> > "Fixes", if the commit id was provided.  Admittedly, my exposure is
> > limited to ARM mvebu and irqchip for the most part.
> 
> Actually, we do have maintainers who curate their own stable tree. 
>  David Miller for networking is an example.  Perhaps we should ask him
> and others who do this to describe the advantages they see in their
> trees over the "tag it for stable and forget about it" mentality that
> the rest of us have.  Perhaps maintainers should be running their own
> stable trees ... perhaps what they're doing is OK.  Debating it will at
> least flush out the issues.

It would be helpful if we could get a set of examples of regressions
which have occurred in the past.  It may be that we need to point
testing infra at the pending stable releases.  Or, change policy to
allow reverts in the stable tree while waiting for the proper fix from
mainline.  Without hard data, we're just guessing.

> > Do you want pull requests in order to limit patches to only from
> > maintainers?  Or to include a series of patches that have had more
> > testing against specific kernel versions?
> 
> The former is how the net stable tree works.

fwiw, I'm branch-happy anyway.  Creating a few extra branches and
pushing them up for a stable-next merging and testing process
doesn't seem to be that much more effort than what we do now (Cc:
stable... # v3.8+, Fixes:, for-next).

I am concerned with how /many/ of those branches there would be.  If
maintainers only had to make one branch against the oldest relevant
version, that would be manageable.  We probably just need to indicate
the mainline commit in the stable commit.  Automation could handle
rebasing, merging, compile-testing, etc.
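
That indication is already common practice: the changelog of a backported
commit typically starts with a reference to the mainline commit, e.g. (the
commit ID here is made up):

    subsys: fix frobnication race

    commit 0123456789abcdef0123456789abcdef01234567 upstream.

    <original mainline changelog follows>

(netdev backports use an equivalent "[ Upstream commit ... ]" line instead.)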

> > Do you have a sense of the specific regressions that cause people to
> > give up on -stable?
> 
> Every added patch has potential consequences.  In theory the
> maintainers are best placed to understand what they are, so a
> maintainer to stable flow might be the best way of controlling
> regressions in stable.  On the other hand, running stable trees is
> something Greg was supposed to be offloading from Maintainers, so I
> suspect a lot of them don't want the added burden of having to care.

Well, we've had this discussion in the past, and iirc, Greg was adamant
about not increasing the burden on maintainers.  I get that, but I think
if we make stable branches an optional path to -stable, it's a win for
everybody.  A maintainer either keeps doing Cc: stable, *or* posts a
branch based on the oldest applicable version.

> I'm not saying there's a right answer.  I am saying I think it's worth
> the discussion.

Agreed, but I'd love to hear from distros and possibly target companies
who have experience with -stable.  Some that are still using -stable,
and some that have stopped.  And why, in both cases. :)

I think it would also be good to hear from Stephen Rothwell and Mark
Brown wrt daily merging of hundreds of branches.

For myself, I've been happily using stable branches on most of my
embedded boxes that run my home network for years.  But that's a
completely different animal from delivering a product to a consumer.
Well, except when Netflix goes away :-P

thx,

Jason.

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  9:36       ` Mark Brown
@ 2016-07-09 15:13         ` Guenter Roeck
  2016-07-09 19:40           ` Sudip Mukherjee
                             ` (3 more replies)
  2016-07-10 16:22         ` Vinod Koul
  1 sibling, 4 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-09 15:13 UTC (permalink / raw)
  To: Mark Brown, Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On 07/09/2016 02:36 AM, Mark Brown wrote:
> On Sat, Jul 09, 2016 at 10:43:05AM +0200, Jiri Kosina wrote:
>
>> If maintainers are overwhelmed by extra work needed for stable,
>> "offloading to Greg" doesn't sound like a proper solution to me at all.
>> "Fixing a maintainer workflow for that particular subsystem" (such as
>> extending the group of maintainers) does.
>
> I think one of the big things we're missing here is QA.  I don't
> personally have the hardware that would allow me to test a huge chunk of
> the code in my subsystems, I'm relying on things like kernelci.org for
> the bulk of it.  There's some work going on on getting Greg's stable
> queue tested more which will hopefully make things better but it's not
> 100% there yet.
>
Improving QA is very much part of it. Yes, there is kernelci.org, there is
kerneltest.org, there are the 0day builders, and there are various individuals
testing the trees. This all helped a lot in stabilizing both mainline and
the stable trees, but is not enough. We are pretty well covered with build
tests, but runtime tests are for the most part limited to "it boots, therefore
it works". We still have a long way to go to get real QA testing. As I
suggested earlier, we'll have to find a way to convince companies to actively
invest in QA.

> There's also the volume of stable trees to consider here - we've got a
> large number of stable trees which seem to be maintained in different
> ways with different tooling.  One big advantage from my point of view
> as a maintainer with the current model is that I don't have to figure
> out which I care about or anything like that.
>
The proliferation of stable trees (or rather, how to avoid it) might be
one of the parts of the puzzle. Yes, there are way too many right now.

Guenter

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  9:29       ` Johannes Berg
@ 2016-07-09 15:19         ` Jason Cooper
  2016-07-09 16:04           ` Guenter Roeck
  2016-07-09 19:15         ` Vlastimil Babka
  1 sibling, 1 reply; 244+ messages in thread
From: Jason Cooper @ 2016-07-09 15:19 UTC (permalink / raw)
  To: Johannes Berg; +Cc: ksummit-discuss

Hi Johannes,

On Sat, Jul 09, 2016 at 11:29:01AM +0200, Johannes Berg wrote:
> On Sat, 2016-07-09 at 10:34 +0200, Jiri Kosina wrote:
> > 
> > Basically: currently the model is that everybody is free to pick up a
> > random commit and bounce it to -stable. What I'd like to see is that this
> > is routed through the maintainers instead, who then push things upstream
> > (where upstream means stable).
> > 
> > I know that there are exceptions where this is working properly
> > (netdev), 
> 
> Note that for subsets of the networking tree, particularly wireless, we
> *do* add Cc: stable tags, but usually that's me adding the tag.

Same here.  Usually, the best we get from a patch submitter is the commit
introducing the bug.  Some extra work is already built into our workflow
for anything destined for -stable.

> > The usual counter-argument I've always received from the stable team
> > to that was "Maintainers are busy enough already, if we start
> > enforcing this, we'd have much less patches in -stable". I personally
> > don't see that as a bad thing. "Less is more" might apply here. If
> > someone is really unhappy about state of particular subsystem in
> > -stable, it'd mean that group of maintainers will have to be extended
> > for that particular subsystem.
> 
> I'm not convinced by that line of reasoning, especially since in my
> experience the unhappy people are often the least qualified to actually
> determine the impact of a patch.

Ack.

> Perhaps a hybrid model, close to what we have today, would work? If a
> patch is proposed for stable, instead of including it by default, ask
> the maintainer(s) to separately acknowledge the patch for stable? IOW,
> rather than sending a patchbomb that requires an explicit NACK (with
> the previously discussed signal/noise problem), just send a list of
> commits and ask maintainers to edit it? They could remove and add
> commits then.

I dunno.  I agree we need to increase feedback, but I think relying on
active dialog wouldn't last.  Posting a branch based on the oldest
relevant version, and getting automated/semi-automated feedback on it
*before* it's accepted into -stable would be a huge help.

But that's assuming I'm reading the nature of the regressions correctly.
Namely that they're compile failures or boot up failures.  Which are
both things we have automated testing for.

> Yes, maintainers would still have some more work with that than the
> current "fire & forget" approach, but it's more "fire & get a reminder"
> rather than what you're proposing which would require me to track it
> all, across various stable releases managed by various people, which
> frankly I can't imagine being able to do (or even find qualified people
> willing to help me do that, right now.)

Agreed.

thx,

Jason.

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 10:05         ` James Bottomley
@ 2016-07-09 15:49           ` Trond Myklebust
  2016-07-09 22:41             ` Dan Williams
  2016-07-10  1:34             ` James Bottomley
  2016-07-10  7:29           ` Takashi Iwai
  2016-07-26 13:08           ` David Woodhouse
  2 siblings, 2 replies; 244+ messages in thread
From: Trond Myklebust @ 2016-07-09 15:49 UTC (permalink / raw)
  To: Bottomley James; +Cc: ksummit-discuss, ksummit-discuss


> On Jul 9, 2016, at 06:05, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> 
> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
>> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki wrote:
>>> I tend to think that all known bugs should be fixed, at least 
>>> because once they have been fixed, no one needs to remember about 
>>> them any more. :-)
>>> 
>>> Moreover, minor fixes don't really introduce regressions that often
>> 
>> Famous last words :)
> 
> Actually, beyond the humour, the idea that small fixes don't introduce
> regressions must be our most annoying anti-pattern.  The reality is
> that a lot of so called fixes do introduce bugs.  The way this happens
> is that a lot of these "obvious" fixes go through without any deep
> review (because they're obvious, right?) and the bugs noisily turn up
> slightly later.  The way this works is usually that some code
> rearrangement is sold as a "fix" and later turns out not to be
> equivalent to the prior code ... sometimes in incredibly subtle ways. I
> think we should all be paying much more than lip service to the old
> adage "If it ain't broke don't fix it”.

The main problem with the stable kernel model right now is that we have no
set of regression tests to apply. Unless someone goes in and actually tests
each and every stable kernel affected by that “Cc: stable” line, regressions
will eventually happen.

So do we want to have another round of “how do we regression test the
kernel” talks?

Cheers
  Trond

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 15:19         ` Jason Cooper
@ 2016-07-09 16:04           ` Guenter Roeck
  0 siblings, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-09 16:04 UTC (permalink / raw)
  To: Jason Cooper, Johannes Berg; +Cc: ksummit-discuss

Hi Jason,

On 07/09/2016 08:19 AM, Jason Cooper wrote:

>
> I dunno.  I agree we need to increase feedback, but I think relying on
> active dialog wouldn't last.  Posting a branch based on the oldest
> relevant version, and getting automated/semi-automated feedback on it
> *before* it's accepted into -stable would be a huge help.
>

Very much so. However, running such regression tests would be technically
impossible with the current test infrastructure, with the possible exception
of the 0day build tests. Even the 0day build tests, given the number of
"incomplete test" results I am seeing lately, may have reached their limits.

> But that's assuming I'm reading the nature of the regressions correctly.
> Namely that they're compile failures or boot up failures.  Which are
> both things we have automated testing for.
>
Such regressions are discovered because there is now automated testing.
A couple of years ago, stable trees would not even build for anything
but major architectures. However, that only means that all the other
regressions (the ones more subtle than "it builds and boots, therefore
it works") are not discovered. It does not mean that there are no such
regressions.

Guenter

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  8:34     ` Jiri Kosina
  2016-07-09  8:58       ` Guenter Roeck
  2016-07-09  9:29       ` Johannes Berg
@ 2016-07-09 18:39       ` Andrew Lunn
  2016-07-10  1:22       ` Rafael J. Wysocki
  3 siblings, 0 replies; 244+ messages in thread
From: Andrew Lunn @ 2016-07-09 18:39 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: ksummit-discuss

> Basically: currently the model is that everybody is free to pick up a
> random commit and bounce it to -stable. What I'd like to see is that this is
> routed through the maintainers instead, who then push things upstream
> (where upstream means stable).
> 
> I know that there are exceptions where this is working properly (netdev), 
> I personally am doing that also informally (when people tell me "hey, this 
> should go to stable", I do whatever is necessary), but still the general 
> process as such is not there.

Jonathan Corbet did an LWN article estimating how many stable patches
introduced regressions. Has anybody broken the numbers down per
subsystem? Can we get some numerical evidence that suggests
maintainer-driven stable submissions are better than average?
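
(One rough way to get the volume side of that breakdown, sketched against a
hypothetical stable range, is to bucket the stable-only commits by the
top-level directories they touch:

    git log --no-merges --pretty=format: --name-only v4.4..v4.4.14 | \
            grep . | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head

Identifying which of those commits later needed a follow-up fix or revert
would still require the kind of manual analysis the LWN article did.)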

	   Andrew

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  9:29       ` Johannes Berg
  2016-07-09 15:19         ` Jason Cooper
@ 2016-07-09 19:15         ` Vlastimil Babka
  2016-08-01  9:32           ` Johannes Berg
  1 sibling, 1 reply; 244+ messages in thread
From: Vlastimil Babka @ 2016-07-09 19:15 UTC (permalink / raw)
  To: Johannes Berg, Jiri Kosina, Luck, Tony; +Cc: ksummit-discuss

On 07/09/2016 11:29 AM, Johannes Berg wrote:
> On Sat, 2016-07-09 at 10:34 +0200, Jiri Kosina wrote:
>
> Perhaps a hybrid model, close to what we have today, would work? If a
> patch is proposed for stable, instead of including it by default, ask
> the maintainer(s) to separately acknowledge the patch for stable? IOW,
> rather than sending a patchbomb that requires an explicit NACK (with
> the previously discussed signal/noise problem), just send a list of
> commits and ask maintainers to edit it? They could remove and add
> commits then.

Does it have to be strictly maintainers? What if we just require that
*somebody* (maintainer or otherwise) tags the patch with something like
a "Stable-Acked-By", which would mean taking more responsibility for it
than just forwarding a patch to stable without consequences? It should
imply that the acker has checked the patch in the context of the
particular kernel version, and it should be clearly separated from the
acks/reviews of the mainline commit. It would of course be better if the
stable tree maintainer also checked whether the acking person is a
regular contributor to the subsystem (I guess get_maintainer.pl with its
git checking can help here).
This could be required initially at least for patches where the Cc:
stable wasn't already present at the time of the maintainer's
Signed-off-by.
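
To make that concrete, the idea is roughly the following; the tag name,
the address and the thresholds are made up for illustration, nothing of
this exists today:

  Stable-acked-by: Jane Developer <jane@example.org>  # checked on 4.4.y

and, on the stable side, a crude check whether the acker shows up among
the recent contributors to the files the patch touches, using the git
statistics that get_maintainer.pl already has:

  ./scripts/get_maintainer.pl --git --git-min-percent=5 0001-some-fix.patch \
          | grep -i jane@example.org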

> Yes, maintainers would still have some more work with that than the
> current "fire & forget" approach, but it's more "fire & get a reminder"
> rather than what you're proposing which would require me to track it
> all, across various stable releases managed by various people, which
> frankly I can't imagine being able to do (or even find qualified people
> willing to help me do that, right now.)
> 
> johannes
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
> 

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 15:13         ` Guenter Roeck
@ 2016-07-09 19:40           ` Sudip Mukherjee
  2016-07-11  8:14             ` Jiri Kosina
  2016-07-09 21:21           ` Theodore Ts'o
                             ` (2 subsequent siblings)
  3 siblings, 1 reply; 244+ messages in thread
From: Sudip Mukherjee @ 2016-07-09 19:40 UTC (permalink / raw)
  To: Guenter Roeck, Mark Brown, Jiri Kosina
  Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Saturday 09 July 2016 04:13 PM, Guenter Roeck wrote:
> On 07/09/2016 02:36 AM, Mark Brown wrote:
>> On Sat, Jul 09, 2016 at 10:43:05AM +0200, Jiri Kosina wrote:
>>
>>> If maintainers are overwhelmed by extra work needed for stable,
>>> "offloading to Greg" doesn't sound like a proper solution to me at all.
>>> "Fixing a maintainer workflow for that particular subsystem" (such as
>>> extending the group of maintainers) does.
>>
>> I think one of the big things we're missing here is QA.  I don't
>> personally have the hardware that would allow me to test a huge chunk of
>> the code in my subsystems, I'm relying on things like kernelci.org for
>> the bulk of it.  There's some work going on on getting Greg's stable
>> queue tested more which will hopefully make things better but it's not
>> 100% there yet.
>>
> Improving QA is very much part of it. Yes, there is kernelci.org, there is
> kerneltest.org, there are the 0day builders, and there are various
> individuals
> testing the trees. This all helped a lot in stabilizing both mainline and
> the stable trees, but is not enough. We are pretty well covered with build
> tests, but runtime tests are for the most part limited to "it boots,
> therefore
> it works". We still have a long way to go to get real QA testing. As I
> suggested earlier, we'll have to find a way to convince companies to
> actively
> invest in QA.

Individual testing will depend mostly on the available time. Personally, 
I used to test before, but with my job change I rarely get time to check 
stable anymore.
Just a thought: why don't we have a stable-next tree, the way we have 
linux-next? That way it might get more testing than it gets now. I know 
it will be more work, but it's at least worth a try.
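
Most of the ingredients seem to exist already: Greg's stable-queue
repository carries the pending patches with a series file per branch,
so a stable-next could in principle be generated mechanically. A rough,
untested sketch (paths and the branch are just examples):

  git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
  cd linux-stable && git checkout linux-4.4.y
  # apply whatever is currently queued for 4.4 on top of the last release
  git quiltimport --patches ../stable-queue/queue-4.4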

Regards
Sudip

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 15:13         ` Guenter Roeck
  2016-07-09 19:40           ` Sudip Mukherjee
@ 2016-07-09 21:21           ` Theodore Ts'o
  2016-07-11 15:13             ` Mark Brown
  2016-07-11  8:18           ` Jiri Kosina
  2016-07-11 14:22           ` Mark Brown
  3 siblings, 1 reply; 244+ messages in thread
From: Theodore Ts'o @ 2016-07-09 21:21 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Sat, Jul 09, 2016 at 08:13:19AM -0700, Guenter Roeck wrote:
> The proliferation of stable trees (or rather, how to avoid it) might be
> one of the parts of the puzzle. Yes, there are way too many right now.

Something that would really help is if there were some way to know who
is actually *using* each of the various stable trees.  It would
certainly help prioritize my work.  I started paying more attention to
the 3.10 and 3.18 kernels because I was directly involved with projects
which had product kernels based on 3.10 and 3.18.  I do perform "fire
and forget" gce-xfstests runs on 3.10, 3.14, 3.18, 4.1, and 4.4
because it doesn't take much effort.
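
The sweep itself is trivial to script; in plain xfstests terms it looks
roughly like this (not my actual gce-xfstests setup, just the shape of
it, and it assumes a test VM to boot each build in):

  for b in linux-3.10.y linux-3.14.y linux-3.18.y linux-4.1.y linux-4.4.y; do
      git -C ~/linux-stable checkout "$b"
      make -C ~/linux-stable olddefconfig
      make -C ~/linux-stable -j"$(nproc)" bzImage modules
      # boot the result in the test appliance, then inside the guest:
      #     cd xfstests-dev && ./check -g auto
  done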

But actually going through and figuring out why we have a lot of test
failures on a particular test kernel (generally because there were
patches that were too dangerous or too complicated to backport via the
cc:stable route), and then trying to get the fixes to a specific
stable kernel takes a lot more time, and that I don't have.  So as a
result, 3.10.102 looks like this:

BEGIN TEST 4k: Ext4 4k block Wed Jul  6 12:33:11 EDT 2016
Failures: ext4/308 generic/067 generic/092 generic/135 generic/323 generic/324

... and 3.18.36 looks like this:

BEGIN TEST 4k: Ext4 4k block Tue Jul  5 16:17:29 EDT 2016
Failures: ext4/001 generic/313

... and 3.14.73 looks like *this*:

BEGIN TEST 4k: Ext4 4k block Tue Jul  5 15:52:48 EDT 2016
Failures: ext4/308 generic/034 generic/039 generic/040 generic/041 generic/056 generic/057 generic/059
generic/065 generic/066 generic/073 generic/090 generic/101 generic/104 generic/106 generic/107
generic/135 generic/177 generic/313 generic/321 generic/322 generic/324 generic/325 generic/335
generic/336 generic/341 generic/342 generic/343 generic/348


The other thing that I'll note, which is very discouraging as an
upstream maintainer trying to get backports and fixes into the stable
kernels, is that I don't have any proof that it actually helps.  I've
lost count of the number of times when someone has asked me about a
bug or a test failure with a particular device kernel based on 3.10 or
3.18, and it will turn out that device kernels generally don't take
updates from the stable kernels, and it's not obvious to me whether or
not SOC vendors update their BSP kernels to take into account fixes from
the latest stable kernel.  (But even if they do, apparently many
device vendors aren't bothering to merge in changes from the SOC's BSP
kernel, even if the BSP kernel is getting -stable updates.)

So if I'm going to invest more time into getting fixes into the many,
MANY stable kernels, and/or try to invest time in recruiting
volunteers and training them to do this task, can someone please tell
me how much difference it actually makes?

Thanks,

						- Ted

P.S.  For the record, the newer stable kernels are in much better
shape.  For example, 4.4.14 looks like this:

BEGIN TEST 4k: Ext4 4k block Tue Jul  5 23:02:09 EDT 2016
Passed all 223 tests

Of course, as far as I know there are **no** devices based on 4.4
yet....  for devices shipping for this Christmas season, I suspect
we'll be *lucky* if they are using 3.18....

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 15:49           ` Trond Myklebust
@ 2016-07-09 22:41             ` Dan Williams
  2016-07-10  1:34             ` James Bottomley
  1 sibling, 0 replies; 244+ messages in thread
From: Dan Williams @ 2016-07-09 22:41 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: Bottomley James, ksummit-discuss, ksummit-discuss

On Sat, Jul 9, 2016 at 8:49 AM, Trond Myklebust <trondmy@primarydata.com> wrote:
>
>> On Jul 9, 2016, at 06:05, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
>>
>> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
>>> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki wrote:
>>>> I tend to think that all known bugs should be fixed, at least
>>>> because once they have been fixed, no one needs to remember about
>>>> them any more. :-)
>>>>
>>>> Moreover, minor fixes don't really introduce regressions that often
>>>
>>> Famous last words :)
>>
>> Actually, beyond the humour, the idea that small fixes don't introduce
>> regressions must be our most annoying anti-pattern.  The reality is
>> that a lot of so called fixes do introduce bugs.  The way this happens
>> is that a lot of these "obvious" fixes go through without any deep
>> review (because they're obvious, right?) and the bugs noisily turn up
>> slightly later.  The way this works is usually that some code
>> rearrangement is sold as a "fix" and later turns out not to be
>> equivalent to the prior code ... sometimes in incredibly subtle ways. I
>> think we should all be paying much more than lip service to the old
>> adage "If it ain't broke don't fix it”.
>
> The main problem with the stable kernel model right now is that we have no set of regression tests to apply. Unless someone goes in and actually tests each and every stable kernel affected by that “Cc: stable” line, then regressions will eventually happen.
>
> So do we want to have another round of “how do we regression test the kernel” talks?

I'd be interested in this discussion.  tools/testing/nvdimm/ has saved
me from shipping bugs on several occasions and allowed testing of
libnvdimm driver paths without needing the device(s).  However, it's
yet another test environment that takes unique effort to set up versus
something like "make test M=drivers/nvdimm/".

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 14:57     ` Jason Cooper
@ 2016-07-09 22:51       ` Jonathan Corbet
  0 siblings, 0 replies; 244+ messages in thread
From: Jonathan Corbet @ 2016-07-09 22:51 UTC (permalink / raw)
  To: Jason Cooper; +Cc: James Bottomley, ksummit-discuss

On Sat, 9 Jul 2016 14:57:48 +0000
Jason Cooper <jason@lakedaemon.net> wrote:

> It would be helpful if we could get a set of examples of regressions
> which have occurred in the past.

FWIW, I recently looked at this, using the Fixes: tags from -stable
patches:

	https://lwn.net/Articles/692866/

Of course, less than half of the -stable changesets carry Fixes: tags, so
the results are necessarily approximate.
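
For anyone who wants to reproduce or extend the numbers, the raw counts
are easy to get; this is not the exact script used for the article, just
a starting point, and the release range is only an example:

	# stable commits in a range that carry a Fixes: tag at all ...
	git log --no-merges --oneline --grep='^Fixes:' v4.4..v4.4.14 | wc -l
	# ... versus the total number of commits in that range
	git log --no-merges --oneline v4.4..v4.4.14 | wc -l

The fiddlier part is mapping the commit named in each Fixes: tag back to
the stable release (if any) that shipped it.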

jon

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  8:34     ` Jiri Kosina
                         ` (2 preceding siblings ...)
  2016-07-09 18:39       ` Andrew Lunn
@ 2016-07-10  1:22       ` Rafael J. Wysocki
  3 siblings, 0 replies; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-07-10  1:22 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: ksummit-discuss, ksummit-discuss

On Saturday, July 09, 2016 10:34:35 AM Jiri Kosina wrote:
> On Fri, 8 Jul 2016, Luck, Tony wrote:
> 
> > > In addition to that, I'd again (like during the past 5+ years, but it
> > > never really happened) like to propose a stable tree discussion topic: I'd
> > > like to see an attempt to make the stable workflow more oriented towards
> > > "maintainers sending pull requests" rather than "random people pointing to
> > > patches that should go to stable". This has been much of an issue in the
> > 
> > Shouldn't the common case be "Maintainer sends list of commit IDs to
> > be cherry-picked" rather than a pull request?
> 
> Yeah, explicitly using term "pull request" was probably way too specific. 
> 
> The model I'd really love to see is "a person/group of people 
> (maintainers) are identified and appointed responsible for what end up in 
> -stable for particular subsystem", i.e. the same model we use for mainline 
> development.
> 
> Whether it's actual git pull request, list of commit IDs, etc. is really 
> just a technicality.
> 
> Basically: currently the model is that everybody is free to pick up a 
> random commit and bounce it to -stable. What I'd like see is that this is 
> routed through the maintainers instead, who then push thing upstream 
> (where upstream means stable).
> 
> I know that there are exceptions where this is working properly (netdev), 
> I personally am doing that also informally (when people tell me "hey, this 
> should go to stable", I do whatever is necessary), but still the general 
> process as such is not there.
> 
> The usual counter-argument I've always received from the stable team to 
> that was "Maintainers are busy enough already, if we start enforcing this, 
> we'd have much less patches in -stable". I personally don't see that as a 
> bad thing. "Less is more" might apply here. If someone is really unhappy 
> about state of particular subsystem in -stable, it'd mean that group of 
> maintainers will have to be extended for that particular subsystem.

You still need to demonstrate that the "random people" bouncing commits
to -stable introduce more regressions in it than maintainers adding the
"Cc: stable" tag to their commits.

If more regressions are introduced by the latter, I'm afraid that the whole
reasoning falls apart.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 15:49           ` Trond Myklebust
  2016-07-09 22:41             ` Dan Williams
@ 2016-07-10  1:34             ` James Bottomley
  2016-07-10  1:43               ` Trond Myklebust
  2016-07-10  6:19               ` Olof Johansson
  1 sibling, 2 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-10  1:34 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: ksummit-discuss

[duplicate ksummit-discuss@ cc removed]
On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
> > On Jul 9, 2016, at 06:05, James Bottomley <
> > James.Bottomley@HansenPartnership.com> wrote:
> > 
> > On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> > > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
> > > wrote:
> > > > I tend to think that all known bugs should be fixed, at least 
> > > > because once they have been fixed, no one needs to remember 
> > > > about them any more. :-)
> > > > 
> > > > Moreover, minor fixes don't really introduce regressions that
> > > > often
> > > 
> > > Famous last words :)
> > 
> > Actually, beyond the humour, the idea that small fixes don't 
> > introduce regressions must be our most annoying anti-pattern.  The 
> > reality is that a lot of so called fixes do introduce bugs.  The 
> > way this happens is that a lot of these "obvious" fixes go through 
> > without any deep review (because they're obvious, right?) and the 
> > bugs noisily turn up slightly later.  The way this works is usually 
> > that some code rearrangement is sold as a "fix" and later turns out 
> > not to be equivalent to the prior code ... sometimes in incredibly 
> > subtle ways. I think we should all be paying much more than lip 
> > service to the old adage "If it ain't broke don't fix it”.
> 
> The main problem with the stable kernel model right now is that we
> have no set of regression tests to apply. Unless someone goes in and
> actually tests each and every stable kernel affected by that “Cc:
> stable” line, then regressions will eventually happen.
> 
> So do we want to have another round of “how do we regression test the
> kernel” talks?

If I look back on our problems, they were all in device drivers, so
generic regression testing wouldn't have picked them up; in fact, most
would need specific testing on the actual problem device.  So I don't
really think testing is the issue; I think it's that we commit way too
many "obvious" patches.  In SCSI we try to gate it by having a
mandatory Reviewed-by: tag before something gets in, but really perhaps
we should insist on Tested-by: as well ... that way there's some
guarantee that the actual device being modified has been tested.

James

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  1:34             ` James Bottomley
@ 2016-07-10  1:43               ` Trond Myklebust
  2016-07-10  1:56                 ` James Bottomley
  2016-07-10  2:07                 ` [Ksummit-discuss] [CORE TOPIC] stable workflow Rafael J. Wysocki
  2016-07-10  6:19               ` Olof Johansson
  1 sibling, 2 replies; 244+ messages in thread
From: Trond Myklebust @ 2016-07-10  1:43 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss


> On Jul 9, 2016, at 21:34, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> 
> [duplicate ksummit-discuss@ cc removed]
> On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
>>> On Jul 9, 2016, at 06:05, James Bottomley <
>>> James.Bottomley@HansenPartnership.com> wrote:
>>> 
>>> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
>>>> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
>>>> wrote:
>>>>> I tend to think that all known bugs should be fixed, at least 
>>>>> because once they have been fixed, no one needs to remember 
>>>>> about them any more. :-)
>>>>> 
>>>>> Moreover, minor fixes don't really introduce regressions that
>>>>> often
>>>> 
>>>> Famous last words :)
>>> 
>>> Actually, beyond the humour, the idea that small fixes don't 
>>> introduce regressions must be our most annoying anti-pattern.  The 
>>> reality is that a lot of so called fixes do introduce bugs.  The 
>>> way this happens is that a lot of these "obvious" fixes go through 
>>> without any deep review (because they're obvious, right?) and the 
>>> bugs noisily turn up slightly later.  The way this works is usually 
>>> that some code rearrangement is sold as a "fix" and later turns out 
>>> not to be equivalent to the prior code ... sometimes in incredibly 
>>> subtle ways. I think we should all be paying much more than lip 
>>> service to the old adage "If it ain't broke don't fix it”.
>> 
>> The main problem with the stable kernel model right now is that we
>> have no set of regression tests to apply. Unless someone goes in and
>> actually tests each and every stable kernel affected by that “Cc:
>> stable” line, then regressions will eventually happen.
>> 
>> So do we want to have another round of “how do we regression test the
>> kernel” talks?
> 
> If I look back on our problems, they were all in device drivers, so
> generic regression testing wouldn't have picked them up, in fact most
> would need specific testing on the actual problem device.  So, I don't
> really think testing is the issue, I think it's that we commit way too
> many "obvious" patches.  In SCSI we try to gate it by having a
> mandatory Reviewed-by: tag before something gets in, but really perhaps
> we should insist on Tested-by: as well ... that way there's some
> guarantee that the actual device being modified has been tested.

That guarantees that it has been tested on the head of the kernel tree, but it doesn’t really tell you much about the behaviour when it hits the stable trees. What I’m saying is that we really want some form of unit testing that can be run to perform a minimal validation of the patch when it hits the older tree.

Even device drivers have expected outputs for a given input that can be validated through unit testing.

Cheers
  Trond

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  1:43               ` Trond Myklebust
@ 2016-07-10  1:56                 ` James Bottomley
  2016-07-10  2:12                   ` Trond Myklebust
                                     ` (2 more replies)
  2016-07-10  2:07                 ` [Ksummit-discuss] [CORE TOPIC] stable workflow Rafael J. Wysocki
  1 sibling, 3 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-10  1:56 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: ksummit-discuss

On Sun, 2016-07-10 at 01:43 +0000, Trond Myklebust wrote:
> > On Jul 9, 2016, at 21:34, James Bottomley <
> > James.Bottomley@HansenPartnership.com> wrote:
> > 
> > [duplicate ksummit-discuss@ cc removed]
> > On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
> > > > On Jul 9, 2016, at 06:05, James Bottomley <
> > > > James.Bottomley@HansenPartnership.com> wrote:
> > > > 
> > > > On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> > > > > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
> > > > > wrote:
> > > > > > I tend to think that all known bugs should be fixed, at
> > > > > > least 
> > > > > > because once they have been fixed, no one needs to remember
> > > > > > about them any more. :-)
> > > > > > 
> > > > > > Moreover, minor fixes don't really introduce regressions
> > > > > > that
> > > > > > often
> > > > > 
> > > > > Famous last words :)
> > > > 
> > > > Actually, beyond the humour, the idea that small fixes don't 
> > > > introduce regressions must be our most annoying anti-pattern. 
> > > >  The 
> > > > reality is that a lot of so called fixes do introduce bugs. 
> > > >  The 
> > > > way this happens is that a lot of these "obvious" fixes go
> > > > through 
> > > > without any deep review (because they're obvious, right?) and
> > > > the 
> > > > bugs noisily turn up slightly later.  The way this works is
> > > > usually 
> > > > that some code rearrangement is sold as a "fix" and later turns
> > > > out 
> > > > not to be equivalent to the prior code ... sometimes in
> > > > incredibly 
> > > > subtle ways. I think we should all be paying much more than lip
> > > > service to the old adage "If it ain't broke don't fix it”.
> > > 
> > > The main problem with the stable kernel model right now is that
> > > we
> > > have no set of regression tests to apply. Unless someone goes in
> > > and
> > > actually tests each and every stable kernel affected by that “Cc:
> > > stable” line, then regressions will eventually happen.
> > > 
> > > So do we want to have another round of “how do we regression test
> > > the
> > > kernel” talks?
> > 
> > If I look back on our problems, they were all in device drivers, so
> > generic regression testing wouldn't have picked them up, in fact
> > most
> > would need specific testing on the actual problem device.  So, I
> > don't
> > really think testing is the issue, I think it's that we commit way
> > too
> > many "obvious" patches.  In SCSI we try to gate it by having a
> > mandatory Reviewed-by: tag before something gets in, but really
> > perhaps
> > we should insist on Tested-by: as well ... that way there's some
> > guarantee that the actual device being modified has been tested.
> 
> That guarantees that it has been tested on the head of the kernel
> tree, but it doesn’t really tell you much about the behaviour when it
> hits the stable trees.

The majority of stable regressions are actually patches with subtle
failures even in the head, so testing on the head properly would have
eliminated them.  I grant there are some problems where the backport
itself is flawed even though the head works (usually because of missing
intermediate commits), but perhaps by insisting on a Tested-by: before
backporting, we can at least eliminate a significant fraction of
regressions.

>  What I’m saying is that we really want some form of unit testing
> that can be run to perform a minimal validation of the patch when it
> hits the older tree.
> 
> Even device drivers have expected outputs for a given input that can
> be validated through unit testing.

Without the actual hardware, this is difficult ...

James

> Cheers
>   Trond
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  1:43               ` Trond Myklebust
  2016-07-10  1:56                 ` James Bottomley
@ 2016-07-10  2:07                 ` Rafael J. Wysocki
  1 sibling, 0 replies; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-07-10  2:07 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: James Bottomley, ksummit-discuss

On Sunday, July 10, 2016 01:43:48 AM Trond Myklebust wrote:
> 
> > On Jul 9, 2016, at 21:34, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > 
> > [duplicate ksummit-discuss@ cc removed]
> > On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
> >>> On Jul 9, 2016, at 06:05, James Bottomley <
> >>> James.Bottomley@HansenPartnership.com> wrote:
> >>> 
> >>> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> >>>> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
> >>>> wrote:
> >>>>> I tend to think that all known bugs should be fixed, at least 
> >>>>> because once they have been fixed, no one needs to remember 
> >>>>> about them any more. :-)
> >>>>> 
> >>>>> Moreover, minor fixes don't really introduce regressions that
> >>>>> often
> >>>> 
> >>>> Famous last words :)
> >>> 
> >>> Actually, beyond the humour, the idea that small fixes don't 
> >>> introduce regressions must be our most annoying anti-pattern.  The 
> >>> reality is that a lot of so called fixes do introduce bugs.  The 
> >>> way this happens is that a lot of these "obvious" fixes go through 
> >>> without any deep review (because they're obvious, right?) and the 
> >>> bugs noisily turn up slightly later.  The way this works is usually 
> >>> that some code rearrangement is sold as a "fix" and later turns out 
> >>> not to be equivalent to the prior code ... sometimes in incredibly 
> >>> subtle ways. I think we should all be paying much more than lip 
> >>> service to the old adage "If it ain't broke don't fix it”.
> >> 
> >> The main problem with the stable kernel model right now is that we
> >> have no set of regression tests to apply. Unless someone goes in and
> >> actually tests each and every stable kernel affected by that “Cc:
> >> stable” line, then regressions will eventually happen.
> >> 
> >> So do we want to have another round of “how do we regression test the
> >> kernel” talks?
> > 
> > If I look back on our problems, they were all in device drivers, so
> > generic regression testing wouldn't have picked them up, in fact most
> > would need specific testing on the actual problem device.  So, I don't
> > really think testing is the issue, I think it's that we commit way too
> > many "obvious" patches.  In SCSI we try to gate it by having a
> > mandatory Reviewed-by: tag before something gets in, but really perhaps
> > we should insist on Tested-by: as well ... that way there's some
> > guarantee that the actual device being modified has been tested.
> 
> That guarantees that it has been tested on the head of the kernel tree,
> but it doesn’t really tell you much about the behaviour when it hits the
> stable trees. What I’m saying is that we really want some form of unit
> testing that can be run to perform a minimal validation of the patch when
> it hits the older tree.
> 
> Even device drivers have expected outputs for a given input that can be
> validated through unit testing.

One thing is to be able to catch problems before commits go into -stable (and
I'm all for more QA, regression testing and such where we can arrange it), but
also note that this has to happen within a specific time frame.  It just can't
take too much time, or the commit may miss the release it should go into if
it turns out to be valid after all.

But even if all that is in place and works like a charm, some bugs will not be
caught, so the next question is what to do about them.

And I'm still thinking that problematic commits should be reverted from -stable
right away regardless of what the mainline is going to do with them.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  1:56                 ` James Bottomley
@ 2016-07-10  2:12                   ` Trond Myklebust
  2016-07-10  2:15                   ` Rafael J. Wysocki
  2016-07-10  2:27                   ` Dan Williams
  2 siblings, 0 replies; 244+ messages in thread
From: Trond Myklebust @ 2016-07-10  2:12 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss


> On Jul 9, 2016, at 21:56, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> 
> On Sun, 2016-07-10 at 01:43 +0000, Trond Myklebust wrote:
>>> On Jul 9, 2016, at 21:34, James Bottomley <
>>> James.Bottomley@HansenPartnership.com> wrote:
>>> 
>>> [duplicate ksummit-discuss@ cc removed]
>>> On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
>>>>> On Jul 9, 2016, at 06:05, James Bottomley <
>>>>> James.Bottomley@HansenPartnership.com> wrote:
>>>>> 
>>>>> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
>>>>>> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
>>>>>> wrote:
>>>>>>> I tend to think that all known bugs should be fixed, at
>>>>>>> least 
>>>>>>> because once they have been fixed, no one needs to remember
>>>>>>> about them any more. :-)
>>>>>>> 
>>>>>>> Moreover, minor fixes don't really introduce regressions
>>>>>>> that
>>>>>>> often
>>>>>> 
>>>>>> Famous last words :)
>>>>> 
>>>>> Actually, beyond the humour, the idea that small fixes don't 
>>>>> introduce regressions must be our most annoying anti-pattern. 
>>>>> The 
>>>>> reality is that a lot of so called fixes do introduce bugs. 
>>>>> The 
>>>>> way this happens is that a lot of these "obvious" fixes go
>>>>> through 
>>>>> without any deep review (because they're obvious, right?) and
>>>>> the 
>>>>> bugs noisily turn up slightly later.  The way this works is
>>>>> usually 
>>>>> that some code rearrangement is sold as a "fix" and later turns
>>>>> out 
>>>>> not to be equivalent to the prior code ... sometimes in
>>>>> incredibly 
>>>>> subtle ways. I think we should all be paying much more than lip
>>>>> service to the old adage "If it ain't broke don't fix it”.
>>>> 
>>>> The main problem with the stable kernel model right now is that
>>>> we
>>>> have no set of regression tests to apply. Unless someone goes in
>>>> and
>>>> actually tests each and every stable kernel affected by that “Cc:
>>>> stable” line, then regressions will eventually happen.
>>>> 
>>>> So do we want to have another round of “how do we regression test
>>>> the
>>>> kernel” talks?
>>> 
>>> If I look back on our problems, they were all in device drivers, so
>>> generic regression testing wouldn't have picked them up, in fact
>>> most
>>> would need specific testing on the actual problem device.  So, I
>>> don't
>>> really think testing is the issue, I think it's that we commit way
>>> too
>>> many "obvious" patches.  In SCSI we try to gate it by having a
>>> mandatory Reviewed-by: tag before something gets in, but really
>>> perhaps
>>> we should insist on Tested-by: as well ... that way there's some
>>> guarantee that the actual device being modified has been tested.
>> 
>> That guarantees that it has been tested on the head of the kernel
>> tree, but it doesn’t really tell you much about the behaviour when it
>> hits the stable trees.
> 
> The majority of stable regressions are actually patches with subtle
> failures even in the head, so testing on the head properly would have
> eliminated them.  I grant there are some problems where the backport
> itself is flawed but the head works (usually because of missing
> intermediate stuff) but perhaps by insisting on a Tested-by: before
> backporting, we can at least eliminate a significant fraction of
> regressions.

I don’t disagree that testing the head thoroughly is goodness. :-)

>> What I’m saying is that we really want some form of unit testing
>> that can be run to perform a minimal validation of the patch when it
>> hits the older tree.
>> 
>> Even device drivers have expected outputs for a given input that can
>> be validated through unit testing.
> 
> Without the actual hardware, this is difficult …

The premise of unit testing is that you are operating on key functions which are expected to give a certain output given a known input. You’re essentially just testing the software APIs and making sure that the contracts are obeyed. It’s not a catch-all for bugs; it’s not a test for end-to-end behaviour. However it is a way to ensure that localised changes don’t break local assumptions.
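
There is at least a small in-tree starting point for that in kselftests,
which can be pointed at a single subsystem; for example (illustrative,
any available target works):

  make -C tools/testing/selftests TARGETS=net run_tests

It is nowhere near the coverage we would need, but it is the kind of
harness a per-patch stable validation could grow out of.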

Cheers
  Trond


^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  1:56                 ` James Bottomley
  2016-07-10  2:12                   ` Trond Myklebust
@ 2016-07-10  2:15                   ` Rafael J. Wysocki
  2016-07-10  3:00                     ` James Bottomley
  2016-07-10  2:27                   ` Dan Williams
  2 siblings, 1 reply; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-07-10  2:15 UTC (permalink / raw)
  To: James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

On Sunday, July 10, 2016 10:56:10 AM James Bottomley wrote:
> On Sun, 2016-07-10 at 01:43 +0000, Trond Myklebust wrote:
> > > On Jul 9, 2016, at 21:34, James Bottomley <
> > > James.Bottomley@HansenPartnership.com> wrote:
> > > 
> > > [duplicate ksummit-discuss@ cc removed]
> > > On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
> > > > > On Jul 9, 2016, at 06:05, James Bottomley <
> > > > > James.Bottomley@HansenPartnership.com> wrote:
> > > > > 
> > > > > On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> > > > > > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
> > > > > > wrote:
> > > > > > > I tend to think that all known bugs should be fixed, at
> > > > > > > least 
> > > > > > > because once they have been fixed, no one needs to remember
> > > > > > > about them any more. :-)
> > > > > > > 
> > > > > > > Moreover, minor fixes don't really introduce regressions
> > > > > > > that
> > > > > > > often
> > > > > > 
> > > > > > Famous last words :)
> > > > > 
> > > > > Actually, beyond the humour, the idea that small fixes don't 
> > > > > introduce regressions must be our most annoying anti-pattern. 
> > > > >  The 
> > > > > reality is that a lot of so called fixes do introduce bugs. 
> > > > >  The 
> > > > > way this happens is that a lot of these "obvious" fixes go
> > > > > through 
> > > > > without any deep review (because they're obvious, right?) and
> > > > > the 
> > > > > bugs noisily turn up slightly later.  The way this works is
> > > > > usually 
> > > > > that some code rearrangement is sold as a "fix" and later turns
> > > > > out 
> > > > > not to be equivalent to the prior code ... sometimes in
> > > > > incredibly 
> > > > > subtle ways. I think we should all be paying much more than lip
> > > > > service to the old adage "If it ain't broke don't fix it”.
> > > > 
> > > > The main problem with the stable kernel model right now is that
> > > > we
> > > > have no set of regression tests to apply. Unless someone goes in
> > > > and
> > > > actually tests each and every stable kernel affected by that “Cc:
> > > > stable” line, then regressions will eventually happen.
> > > > 
> > > > So do we want to have another round of “how do we regression test
> > > > the
> > > > kernel” talks?
> > > 
> > > If I look back on our problems, they were all in device drivers, so
> > > generic regression testing wouldn't have picked them up, in fact
> > > most
> > > would need specific testing on the actual problem device.  So, I
> > > don't
> > > really think testing is the issue, I think it's that we commit way
> > > too
> > > many "obvious" patches.  In SCSI we try to gate it by having a
> > > mandatory Reviewed-by: tag before something gets in, but really
> > > perhaps
> > > we should insist on Tested-by: as well ... that way there's some
> > > guarantee that the actual device being modified has been tested.
> > 
> > That guarantees that it has been tested on the head of the kernel
> > tree, but it doesn’t really tell you much about the behaviour when it
> > hits the stable trees.
> 
> The majority of stable regressions are actually patches with subtle
> failures even in the head, so testing on the head properly would have
> eliminated them.

You really sound like you had some statistics on -stable regressions handy,
but is that the case?

The above is my impression too, but then I'm not sure how accurate it is.

> I grant there are some problems where the backport
> itself is flawed but the head works (usually because of missing
> intermediate stuff) but perhaps by insisting on a Tested-by: before
> backporting, we can at least eliminate a significant fraction of
> regressions.

It also depends on how much time it takes for the bug to show up.

For example, if you fixed a bug that's 100% reproducible, but you introduced
another one that happens once in a blue moon in the same commit, it may not
be frequent enough to be caught before the commit goes into -stable.

> >  What I’m saying is that we really want some form of unit testing
> > that can be run to perform a minimal validation of the patch when it
> > hits the older tree.
> > 
> > Even device drivers have expected outputs for a given input that can
> > be validated through unit testing.
> 
> Without the actual hardware, this is difficult ...

Right.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  1:56                 ` James Bottomley
  2016-07-10  2:12                   ` Trond Myklebust
  2016-07-10  2:15                   ` Rafael J. Wysocki
@ 2016-07-10  2:27                   ` Dan Williams
  2016-07-10  6:10                     ` Guenter Roeck
  2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
  2 siblings, 2 replies; 244+ messages in thread
From: Dan Williams @ 2016-07-10  2:27 UTC (permalink / raw)
  To: James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

On Sat, Jul 9, 2016 at 6:56 PM, James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Sun, 2016-07-10 at 01:43 +0000, Trond Myklebust wrote:
>> > On Jul 9, 2016, at 21:34, James Bottomley <
>> > James.Bottomley@HansenPartnership.com> wrote:
>> >
>> > [duplicate ksummit-discuss@ cc removed]
>> > On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
>> > > > On Jul 9, 2016, at 06:05, James Bottomley <
>> > > > James.Bottomley@HansenPartnership.com> wrote:
>> > > >
>> > > > On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
>> > > > > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
>> > > > > wrote:
>> > > > > > I tend to think that all known bugs should be fixed, at
>> > > > > > least
>> > > > > > because once they have been fixed, no one needs to remember
>> > > > > > about them any more. :-)
>> > > > > >
>> > > > > > Moreover, minor fixes don't really introduce regressions
>> > > > > > that
>> > > > > > often
>> > > > >
>> > > > > Famous last words :)
>> > > >
>> > > > Actually, beyond the humour, the idea that small fixes don't
>> > > > introduce regressions must be our most annoying anti-pattern.
>> > > >  The
>> > > > reality is that a lot of so called fixes do introduce bugs.
>> > > >  The
>> > > > way this happens is that a lot of these "obvious" fixes go
>> > > > through
>> > > > without any deep review (because they're obvious, right?) and
>> > > > the
>> > > > bugs noisily turn up slightly later.  The way this works is
>> > > > usually
>> > > > that some code rearrangement is sold as a "fix" and later turns
>> > > > out
>> > > > not to be equivalent to the prior code ... sometimes in
>> > > > incredibly
>> > > > subtle ways. I think we should all be paying much more than lip
>> > > > service to the old adage "If it ain't broke don't fix it”.
>> > >
>> > > The main problem with the stable kernel model right now is that
>> > > we
>> > > have no set of regression tests to apply. Unless someone goes in
>> > > and
>> > > actually tests each and every stable kernel affected by that “Cc:
>> > > stable” line, then regressions will eventually happen.
>> > >
>> > > So do we want to have another round of “how do we regression test
>> > > the
>> > > kernel” talks?
>> >
>> > If I look back on our problems, they were all in device drivers, so
>> > generic regression testing wouldn't have picked them up, in fact
>> > most
>> > would need specific testing on the actual problem device.  So, I
>> > don't
>> > really think testing is the issue, I think it's that we commit way
>> > too
>> > many "obvious" patches.  In SCSI we try to gate it by having a
>> > mandatory Reviewed-by: tag before something gets in, but really
>> > perhaps
>> > we should insist on Tested-by: as well ... that way there's some
>> > guarantee that the actual device being modified has been tested.
>>
>> That guarantees that it has been tested on the head of the kernel
>> tree, but it doesn’t really tell you much about the behaviour when it
>> hits the stable trees.
>
> The majority of stable regressions are actually patches with subtle
> failures even in the head, so testing on the head properly would have
> eliminated them.  I grant there are some problems where the backport
> itself is flawed but the head works (usually because of missing
> intermediate stuff) but perhaps by insisting on a Tested-by: before
> backporting, we can at least eliminate a significant fraction of
> regressions.
>
>>  What I’m saying is that we really want some form of unit testing
>> that can be run to perform a minimal validation of the patch when it
>> hits the older tree.
>>
>> Even device drivers have expected outputs for a given input that can
>> be validated through unit testing.
>
> Without the actual hardware, this is difficult ...

...but not impossible; there's certainly an opportunity to test more code
paths than we do today with unit testing approaches.  For example,
tools/testing/nvdimm/ simulates "interesting" values in an ACPI NFIT
table, and does not need a physical platform.  Yes, there will always
be a class of bugs that can only be reproduced with hardware.
However, I've tested USB host controller TRB handling code with unit
tests for conditions that are difficult to reproduce with actual
hardware.  I think there is room for improvement in device driver
unit testing.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  2:15                   ` Rafael J. Wysocki
@ 2016-07-10  3:00                     ` James Bottomley
  2016-07-10  3:07                       ` Trond Myklebust
                                         ` (2 more replies)
  0 siblings, 3 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-10  3:00 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Trond Myklebust, ksummit-discuss

On Sun, 2016-07-10 at 04:15 +0200, Rafael J. Wysocki wrote:
> On Sunday, July 10, 2016 10:56:10 AM James Bottomley wrote:
> > On Sun, 2016-07-10 at 01:43 +0000, Trond Myklebust wrote:
> > > > On Jul 9, 2016, at 21:34, James Bottomley <
> > > > James.Bottomley@HansenPartnership.com> wrote:
> > > > 
> > > > [duplicate ksummit-discuss@ cc removed]
> > > > On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
> > > > > > On Jul 9, 2016, at 06:05, James Bottomley <
> > > > > > James.Bottomley@HansenPartnership.com> wrote:
> > > > > > 
> > > > > > On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> > > > > > > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J.
> > > > > > > Wysocki
> > > > > > > wrote:
> > > > > > > > I tend to think that all known bugs should be fixed, at
> > > > > > > > least because once they have been fixed, no one needs 
> > > > > > > > to remember about them any more. :-)
> > > > > > > > 
> > > > > > > > Moreover, minor fixes don't really introduce 
> > > > > > > > regressions that often
> > > > > > > 
> > > > > > > Famous last words :)
> > > > > > 
> > > > > > Actually, beyond the humour, the idea that small fixes 
> > > > > > don't introduce regressions must be our most annoying anti
> > > > > > -pattern.  The reality is that a lot of so called fixes do 
> > > > > > introduce bugs.  The way this happens is that a lot of 
> > > > > > these "obvious" fixes go through without any deep review 
> > > > > > (because they're obvious, right?) and the bugs noisily turn 
> > > > > > up slightly later.  The way this works is usually
> > > > > > that some code rearrangement is sold as a "fix" and later 
> > > > > > turns out not to be equivalent to the prior code ... 
> > > > > > sometimes in incredibly subtle ways. I think we should all 
> > > > > > be paying much more than lip service to the old adage "If
> > > > > > it ain't broke don't fix it”.
> > > > > 
> > > > > The main problem with the stable kernel model right now is 
> > > > > that we have no set of regression tests to apply. Unless 
> > > > > someone goes in and actually tests each and every stable 
> > > > > kernel affected by that “Cc: stable” line, then regressions
> > > > > will eventually happen.
> > > > > 
> > > > > So do we want to have another round of “how do we regression 
> > > > > test the kernel” talks?
> > > > 
> > > > If I look back on our problems, they were all in device 
> > > > drivers, so generic regression testing wouldn't have picked 
> > > > them up, in fact most would need specific testing on the actual 
> > > > problem device.  So, I don't really think testing is the issue, 
> > > > I think it's that we commit way too many "obvious" patches.  In 
> > > > SCSI we try to gate it by having a mandatory Reviewed-by: tag 
> > > > before something gets in, but really perhaps we should insist 
> > > > on Tested-by: as well ... that way there's some guarantee that 
> > > > the actual device being modified has been tested.
> > > 
> > > That guarantees that it has been tested on the head of the kernel
> > > tree, but it doesn’t really tell you much about the behaviour 
> > > when it hits the stable trees.
> > 
> > The majority of stable regressions are actually patches with subtle
> > failures even in the head, so testing on the head properly would 
> > have eliminated them.
> 
> You really sound like you had some statistics on -stable regressions 
> handy, but is it the case?

No, it's purely based on what went wrong (at least what I found out
about) with SCSI cc's to stable.

> The above is my impression too, but then I'm not sure how accurate it
> is.
> 
> > I grant there are some problems where the backport
> > itself is flawed but the head works (usually because of missing
> > intermediate stuff) but perhaps by insisting on a Tested-by: before
> > backporting, we can at least eliminate a significant fraction of
> > regressions.
> 
> It also depends on how much time it takes for the bug to show up.
> 
> For example, if you fixed a bug that's 100% reproducible, but you 
> introduced another one that happens once in a blue moon in the same 
> commit, it may not be frequent enough to be caught before the commit
> goes into -stable.

If I'm suspicious of something, I usually mark it not to be backported
until we've got some testing:

cc: stable@vger.kernel.org # delay until 4.8-rc1

Greg seems to be able to cope with this.
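
The stable-kernel-rules document also describes a form for spelling out
prerequisites, along the lines of:

  Cc: <stable@vger.kernel.org> # 3.3.x: a1f84a3: sched: Check for idle

so the named dependency gets picked up before the fix itself.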

Note: I'm not saying don't do testing, or even that testing isn't a
suitable discussion topic for KS.  What I am saying is that I think we
should discuss our stable practices separately from testing.

James


> > >  What I’m saying is that we really want some form of unit testing
> > > that can be run to perform a minimal validation of the patch when 
> > > it hits the older tree.
> > > 
> > > Even device drivers have expected outputs for a given input that 
> > > can be validated through unit testing.
> > 
> > Without the actual hardware, this is difficult ...
> 
> Right.
> 
> Thanks,
> Rafael
> 
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  3:00                     ` James Bottomley
@ 2016-07-10  3:07                       ` Trond Myklebust
  2016-07-26 13:35                       ` David Woodhouse
  2016-08-02 14:12                       ` Jani Nikula
  2 siblings, 0 replies; 244+ messages in thread
From: Trond Myklebust @ 2016-07-10  3:07 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss


> On Jul 9, 2016, at 23:00, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> 
> Note: I'm not saying don't do testing, or even that testing isn't a
> suitable discussion topic for KS.  What I am saying is that I think we
> should discuss our stable practices separately from testing.
> 

They are not entirely non-intersecting… I’d be fine with seeing them discussed separately, though.

Cheers
  Trond

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  2:27                   ` Dan Williams
@ 2016-07-10  6:10                     ` Guenter Roeck
  2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
  1 sibling, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-10  6:10 UTC (permalink / raw)
  To: Dan Williams, James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

On 07/09/2016 07:27 PM, Dan Williams wrote:
> On Sat, Jul 9, 2016 at 6:56 PM, James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
>> On Sun, 2016-07-10 at 01:43 +0000, Trond Myklebust wrote:
>>>> On Jul 9, 2016, at 21:34, James Bottomley <
>>>> James.Bottomley@HansenPartnership.com> wrote:
>>>>
>>>> [duplicate ksummit-discuss@ cc removed]
>>>> On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
>>>>>> On Jul 9, 2016, at 06:05, James Bottomley <
>>>>>> James.Bottomley@HansenPartnership.com> wrote:
>>>>>>
>>>>>> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
>>>>>>> On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
>>>>>>> wrote:
>>>>>>>> I tend to think that all known bugs should be fixed, at
>>>>>>>> least
>>>>>>>> because once they have been fixed, no one needs to remember
>>>>>>>> about them any more. :-)
>>>>>>>>
>>>>>>>> Moreover, minor fixes don't really introduce regressions
>>>>>>>> that
>>>>>>>> often
>>>>>>>
>>>>>>> Famous last words :)
>>>>>>
>>>>>> Actually, beyond the humour, the idea that small fixes don't
>>>>>> introduce regressions must be our most annoying anti-pattern.
>>>>>>   The
>>>>>> reality is that a lot of so called fixes do introduce bugs.
>>>>>>   The
>>>>>> way this happens is that a lot of these "obvious" fixes go
>>>>>> through
>>>>>> without any deep review (because they're obvious, right?) and
>>>>>> the
>>>>>> bugs noisily turn up slightly later.  The way this works is
>>>>>> usually
>>>>>> that some code rearrangement is sold as a "fix" and later turns
>>>>>> out
>>>>>> not to be equivalent to the prior code ... sometimes in
>>>>>> incredibly
>>>>>> subtle ways. I think we should all be paying much more than lip
>>>>>> service to the old adage "If it ain't broke don't fix it”.
>>>>>
>>>>> The main problem with the stable kernel model right now is that
>>>>> we
>>>>> have no set of regression tests to apply. Unless someone goes in
>>>>> and
>>>>> actually tests each and every stable kernel affected by that “Cc:
>>>>> stable” line, then regressions will eventually happen.
>>>>>
>>>>> So do we want to have another round of “how do we regression test
>>>>> the
>>>>> kernel” talks?
>>>>
>>>> If I look back on our problems, they were all in device drivers, so
>>>> generic regression testing wouldn't have picked them up, in fact
>>>> most
>>>> would need specific testing on the actual problem device.  So, I
>>>> don't
>>>> really think testing is the issue, I think it's that we commit way
>>>> too
>>>> many "obvious" patches.  In SCSI we try to gate it by having a
>>>> mandatory Reviewed-by: tag before something gets in, but really
>>>> perhaps
>>>> we should insist on Tested-by: as well ... that way there's some
>>>> guarantee that the actual device being modified has been tested.
>>>
>>> That guarantees that it has been tested on the head of the kernel
>>> tree, but it doesn’t really tell you much about the behaviour when it
>>> hits the stable trees.
>>
>> The majority of stable regressions are actually patches with subtle
>> failures even in the head, so testing on the head properly would have
>> eliminated them.  I grant there are some problems where the backport
>> itself is flawed but the head works (usually because of missing
>> intermediate stuff) but perhaps by insisting on a Tested-by: before
>> backporting, we can at least eliminate a significant fraction of
>> regressions.
>>
>>>   What I’m saying is that we really want some form of unit testing
>>> that can be run to perform a minimal validation of the patch when it
>>> hits the older tree.
>>>
>>> Even device drivers have expected outputs for a given input that can
>>> be validated through unit testing.
>>
>> Without the actual hardware, this is difficult ...
>
> ...but not impossible, certainly there's opportunity to test more code
> paths than we do today with unit testing approaches.  For example
> tools/testing/nvdimm/ simulates "interesting" values in an ACPI NFIT
> table, and does not need a physical platform.  Yes, there will always
> be a class of bugs that can only be reproduced with hardware.
> However, I've tested USB host controller TRB handling code with unit
> tests for conditions that are difficult to reproduce with actual
> hardware.  I think there is room for improvement for device driver
> unit testing.

Also, testing may well include real hardware. kernelci.org _does_ test
with real hardware, just not extensively. Plus, there is always qemu.
Sure, that is not _real_ real hardware, but it can be seen as a tool
to come as close as possible to real hardware without requiring an
expensive lab infrastructure. The question, just as with testing in
general, is more whether anyone is willing to invest in it than whether
it is possible at all.
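
Even a minimal qemu smoke boot catches a fair number of problems and
costs almost nothing. A sketch (x86_64 as an example; the kernel image
and root filesystem paths are placeholders):

  qemu-system-x86_64 -m 1G -nographic \
          -kernel arch/x86/boot/bzImage \
          -append "root=/dev/sda rw console=ttyS0" \
          -drive file=rootfs.img,format=raw,if=ide

Scaling that across all the stable branches and a reasonable set of
configs is, again, mostly a question of someone investing the time.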

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  1:34             ` James Bottomley
  2016-07-10  1:43               ` Trond Myklebust
@ 2016-07-10  6:19               ` Olof Johansson
  2016-07-10 14:42                 ` Theodore Ts'o
  1 sibling, 1 reply; 244+ messages in thread
From: Olof Johansson @ 2016-07-10  6:19 UTC (permalink / raw)
  To: James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

On Sat, Jul 9, 2016 at 6:34 PM, James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> [duplicate ksummit-discuss@ cc removed]
> On Sat, 2016-07-09 at 15:49 +0000, Trond Myklebust wrote:
>> > On Jul 9, 2016, at 06:05, James Bottomley <
>> > James.Bottomley@HansenPartnership.com> wrote:
>> >
>> > On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
>> > > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki
>> > > wrote:
>> > > > I tend to think that all known bugs should be fixed, at least
>> > > > because once they have been fixed, no one needs to remember
>> > > > about them any more. :-)
>> > > >
>> > > > Moreover, minor fixes don't really introduce regressions that
>> > > > often
>> > >
>> > > Famous last words :)
>> >
>> > Actually, beyond the humour, the idea that small fixes don't
>> > introduce regressions must be our most annoying anti-pattern.  The
>> > reality is that a lot of so called fixes do introduce bugs.  The
>> > way this happens is that a lot of these "obvious" fixes go through
>> > without any deep review (because they're obvious, right?) and the
>> > bugs noisily turn up slightly later.  The way this works is usually
>> > that some code rearrangement is sold as a "fix" and later turns out
>> > not to be equivalent to the prior code ... sometimes in incredibly
>> > subtle ways. I think we should all be paying much more than lip
>> > service to the old adage "If it ain't broke don't fix it”.
>>
>> The main problem with the stable kernel model right now is that we
>> have no set of regression tests to apply. Unless someone goes in and
>> actually tests each and every stable kernel affected by that “Cc:
>> stable” line, then regressions will eventually happen.
>>
>> So do we want to have another round of “how do we regression test the
>> kernel” talks?
>
> If I look back on our problems, they were all in device drivers, so
> generic regression testing wouldn't have picked them up, in fact most
> would need specific testing on the actual problem device.  So, I don't
> really think testing is the issue, I think it's that we commit way too
> many "obvious" patches.  In SCSI we try to gate it by having a
> mandatory Reviewed-by: tag before something gets in, but really perhaps
> we should insist on Tested-by: as well ... that way there's some
> guarantee that the actual device being modified has been tested.


Having worked on one of the projects that was trying to track stable
but got internal pushback against it, it came down to this:

The in-house developers on a certain subsystem didn't trust the
upstream maintainers to not regress their drivers -- in particular
they had seen some painful regressions on older chipsets when newer
hardware support was picked up. Esoteric bugs that had been fixed with
the help of the support team weren't folded in properly in the
upstream sources, or when they were they looked sufficiently different
that when -stable came around they didn't want to revert back to that
version, or they weren't yet picked up for upstream and now other
fixes were touching the same code and that seemed risky. They had a
code base that worked for the use cases they cared about (with the fix
applied that the support team had provided), and very little interest
in risking a regression from switching to the upstream version.

In hindsight, I think the specific problems seen had later been solved
through other means, but the reluctance to keep uprevving to -stable
was hard to get rid of once someone had gotten burnt by it, and it
didn't seem worth it at the time.

Instead, what the team started doing was using -stable as a source for
fixes -- when looking at a bug, the first thing you looked for was to see
if someone had touched that code/subsystem in -stable. It's not ideal
in the sense that you have to hit the bug and someone has to look at
it, but it was the state we ended up in on that project. It means
-stable still has substantial value even though it's not merged
directly.


-Olof

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 22:35 [Ksummit-discuss] [CORE TOPIC] stable workflow Jiri Kosina
  2016-07-08 23:12 ` Guenter Roeck
  2016-07-09  0:06 ` Jason Cooper
@ 2016-07-10  7:21 ` Takashi Iwai
  2016-07-11  7:44 ` Christian Borntraeger
  2016-08-02 13:49 ` Jani Nikula
  4 siblings, 0 replies; 244+ messages in thread
From: Takashi Iwai @ 2016-07-10  7:21 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: ksummit-discuss

On Sat, 09 Jul 2016 00:35:09 +0200,
Jiri Kosina wrote:
> 
> In addition to that, I'd again (like during the past 5+ years, but it 
> never really happened) like to propose a stable tree discussion topic: I'd 
> like to see an attempt to make the stable workflow more oriented towards 
> "maintainers sending pull requests" rather than "random people pointing to 
> patches that should go to stable". This has been much of an issue in the 
> past, when we've been seeing many stable tree regressions; that's not the 
> case any more, but still something where I sense a room for improvement.

I guess the stable workflow couldn't be unified, since the "pull" model
puts more load on maintainers.  So the discussion should rather be about
the pattern: what model would fit which kind of person.

One big obstacle to switching to the pull model for me is the large number
of stable branches.  Should we focus on only Greg's branches?  And,
how much QA test is required (or supposed) for *each* branch?

Also, it's not clear how subtree maintainership fits in.  Are stable
patches in the net tree currently all managed only by Dave?  Or does each
subtree maintainer manage the stable patches and push them up to the top
maintainer?


So, more questions than suggestions from my side: it already implies
that the topic is definitely interesting to me.


thanks,

Takashi

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 10:05         ` James Bottomley
  2016-07-09 15:49           ` Trond Myklebust
@ 2016-07-10  7:29           ` Takashi Iwai
  2016-07-10 10:20             ` Jiri Kosina
  2016-07-26 13:08           ` David Woodhouse
  2 siblings, 1 reply; 244+ messages in thread
From: Takashi Iwai @ 2016-07-10  7:29 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss

On Sat, 09 Jul 2016 12:05:21 +0200,
James Bottomley wrote:
> 
> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki wrote:
> > > I tend to think that all known bugs should be fixed, at least 
> > > because once they have been fixed, no one needs to remember about 
> > > them any more. :-)
> > > 
> > > Moreover, minor fixes don't really introduce regressions that often
> > 
> > Famous last words :)
> 
> Actually, beyond the humour, the idea that small fixes don't introduce
> regressions must be our most annoying anti-pattern.  The reality is
> that a lot of so called fixes do introduce bugs.  The way this happens
> is that a lot of these "obvious" fixes go through without any deep
> review (because they're obvious, right?) and the bugs noisily turn up
> slightly later.

And there have been quite a few cases where the fix introduces a bug
only in the older kernels while the fix itself is correct for the
latest kernel.  And catching such a case by patch review alone is difficult,
partly because the patch shows only a small context around the changes
(so it looks OK at first glance), and partly because the stable trees are
old and the maintainer's brain storage has too short a refresh time, so
he often forgets about the relevant change in the past.

IMO, we need much better QA before releasing stable trees.  They
are all fixes, yes, but they aren't always fixes for stable trees, in
reality.


thanks,

Takashi

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  0:10   ` Dmitry Torokhov
  2016-07-09  0:37     ` Rafael J. Wysocki
@ 2016-07-10  7:37     ` Takashi Iwai
  1 sibling, 0 replies; 244+ messages in thread
From: Takashi Iwai @ 2016-07-10  7:37 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: ksummit-discuss

On Sat, 09 Jul 2016 02:10:46 +0200,
Dmitry Torokhov wrote:
> 
> On Fri, Jul 08, 2016 at 04:12:14PM -0700, Guenter Roeck wrote:
> > On 07/08/2016 03:35 PM, Jiri Kosina wrote:
> > >Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it,
> > >wouldn't it? :)
> > >
> > >As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the
> > >crucial elements I rely on (and I also try to make sure that SUSE
> > >contributes back as much as possible).
> > >
> > >Hence any planned changes in the workflow / releases are rather essential
> > >for me, and I'd like to participate, should any such discussion take
> > >place.
> > >
> > 
> > Same here. New employer, lots of unhappiness with stable releases, to the point
> > where stable trees are not used as basis for shipping releases.
> > That kind of defeats the purpose. So, instead of "let's ignore stable",
> > maybe we can get to a point where people feel comfortable with the quality
> > of stable releases, and where stable can actually be used as basis for production
> > releases.
> 
> I wonder if it would not be a good idea to split stable into several
> flavors: security, fixes to core (really fixes), and fixes to device
> drivers + new hardware support. I feel that with current single stable
> tree (per stable release) we are too liberal with what we direct towards
> stable, with many changes not being strictly necessary, but rather "nice
> to have".

Well, I'm not sure whether splitting into these categories would
improve the regression rate.  In general, a patch for new hardware
just adds something (like ID entries), and even if it's buggy, it
shouldn't affect other older devices.  Of course, there are
exceptions, but judging only from my own experience, they are likely a
really small fraction.


thanks,

Takashi

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  7:29           ` Takashi Iwai
@ 2016-07-10 10:20             ` Jiri Kosina
  2016-07-10 13:33               ` Guenter Roeck
  0 siblings, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-07-10 10:20 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: James Bottomley, ksummit-discuss

On Sun, 10 Jul 2016, Takashi Iwai wrote:

> IMO, we need much better QA before releasing stable trees.  They are 
> all fixes, yes, but they aren't always fixes for stable trees, in 
> reality.

I agree.

BTW, how much coverage does -stable get from Fengguang's 0day robot? I 
think that as most of the stable trees don't really use the git workflow, 
the trees are being pushed out to git.kernel.org only shortly before 
actual release, so the 0day bot doesn't have enough time to catch up; but 
I have to admit I don't really know how exactly the timing and flow of 
patches works here.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 10:20             ` Jiri Kosina
@ 2016-07-10 13:33               ` Guenter Roeck
  2016-07-15  9:27                 ` Zefan Li
  0 siblings, 1 reply; 244+ messages in thread
From: Guenter Roeck @ 2016-07-10 13:33 UTC (permalink / raw)
  To: Jiri Kosina, Takashi Iwai; +Cc: James Bottomley, ksummit-discuss

On 07/10/2016 03:20 AM, Jiri Kosina wrote:
> On Sun, 10 Jul 2016, Takashi Iwai wrote:
>
>> IMO, we need much better QA before releasing stable trees.  They are
>> all fixes, yes, but they aren't always fixes for stable trees, in
>> reality.
>
> I agree.
>
> BTW, how much coverage does -stable get from Fengguang's 0day robot? I
> think that as most of the stable trees don't really use the git workflow,
> the trees are being pushed out to git.kernel.org only shortly before
> actual release, so the 0day bot doesn't have enough time to catch up; but
> I have to admit I don't really know how exactly the timing and flow of
> patches works here.
>

Greg tends to update his trees on a quite regular basis, as he applies patches.
I don't really know for sure about the others, but overall my impression is
that there tends to be a flurry of patches applied in the day before a stable
release candidate is announced.

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  6:19               ` Olof Johansson
@ 2016-07-10 14:42                 ` Theodore Ts'o
  2016-07-11  1:18                   ` Olof Johansson
  0 siblings, 1 reply; 244+ messages in thread
From: Theodore Ts'o @ 2016-07-10 14:42 UTC (permalink / raw)
  To: Olof Johansson; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Sat, Jul 09, 2016 at 11:19:39PM -0700, Olof Johansson wrote:
> 
> The in-house developers on a certain subsystem didn't trust the
> upstream maintainers to not regress their drivers -- in particular
> they had seen some painful regressions on older chipsets when newer
> hardware support was picked up. Esoteric bugs that had been fixed with
> the help of the support team weren't folded in properly in the
> upstream sources, or when they were they looked sufficiently different
> that when -stable came around they didn't want to revert back to that
> version, or they weren't yet picked up for upstream and now other
> fixes were touching the same code and that seemed risky. They had a
> code base that worked for the use cases they cared about (with the fix
> applied that the support team had provided), and very little interest
> in risking a regression from switching to the upstream version.

Hrm.  That's interesting color commentary, thanks.  This won't help
for those devices that aren't using BSP kernels from SOC vendors, but
for those platforms where kernels from vendors are available, do you
know off-hand if they are tracking -stable?  Because if they are,
presumably at least the SOC vendors would have the capability of doing
the necessary testing.

OTOH, the problem with that is once the SOC vendors have stopped
selling a particular chip version, they probably don't have any
interest in continuing to do QA for stable kernels for that particular
SOC set.  So I'm guessing the answer is "no", it won't help, but I'd
love to be pleasantly surprised to the contrary.

> Instead, what the team started doing was using -stable as a source for
> fixes -- when looking at a bug, the first thing you looked for was to see
> if someone had touched that code/subsystem in -stable. It's not ideal
> in the sense that you have to hit the bug and someone has to look at
> it, but it was the state we ended up in on that project. It means
> -stable still has substantial value even though it's not merged
> directly.

The concern with this approach is that it won't necessarily get security
fixes, since that implies that the product team is only looking at
-stable once a bug has been reported.

I could tell interested product teams that there are patches that will
prevent a maliciously crafted SD card from hanging a system or
causing a memory bounds overrun possibly leading to a privilege
escalation attack (for example), but that really doesn't scale, and
unless the maintainer uses out-of-band notification methods, how would
the product team know to look in -stable?

					- Ted

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09  9:36       ` Mark Brown
  2016-07-09 15:13         ` Guenter Roeck
@ 2016-07-10 16:22         ` Vinod Koul
  2016-07-10 17:01           ` Theodore Ts'o
  1 sibling, 1 reply; 244+ messages in thread
From: Vinod Koul @ 2016-07-10 16:22 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 1448 bytes --]

On Sat, Jul 09, 2016 at 11:36:26AM +0200, Mark Brown wrote:
> On Sat, Jul 09, 2016 at 10:43:05AM +0200, Jiri Kosina wrote:
> 
> > If maintainers are overwhelmed by extra work needed for stable, 
> > "offloading to Greg" doesn't sound like a proper solution to me at all. 
> > "Fixing a maintainer workflow for that particular subsystem" (such as 
> > extending the group of maintainers) does.
> 
> I think one of the big things we're missing here is QA.  I don't
> personally have the hardware that would allow me to test a huge chunk of
> the code in my subsystems, I'm relying on things like kernelci.org for
> the bulk of it.  There's some work going on on getting Greg's stable
> queue tested more which will hopefully make things better but it's not
> 100% there yet.

For patch merge, the expectation is that it is tested against upstream.
For stable, should we also mandate that it be verified against the stable
tree(s) as well, or, if the maintainer feels it is stable material, then we
can ask submitters to test before CCing stable...

> There's also the volume of stable trees to consider here - we've got a
> large number of stable trees which seem to be maintained in different
> ways with different tooling.  One big advantage from my point of view
> as a maintainer with the current model is that I don't have to figure
> out which I care about or anything like that.

Yeah, that's also an issue...

-- 
~Vinod

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 16:22         ` Vinod Koul
@ 2016-07-10 17:01           ` Theodore Ts'o
  2016-07-10 18:28             ` Guenter Roeck
  2016-07-11  5:00             ` Vinod Koul
  0 siblings, 2 replies; 244+ messages in thread
From: Theodore Ts'o @ 2016-07-10 17:01 UTC (permalink / raw)
  To: Vinod Koul; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Sun, Jul 10, 2016 at 09:52:04PM +0530, Vinod Koul wrote:
> 
> For patch merge, the expectation is that it is tested against upstream.
> For stable, should we also mandate that it be verified against the stable
> tree(s) as well, or if Maintainer feels it is stable material then we
> can ask Submitters to test before CCing stable...

This is simply not realistic.

There are **eleven** stable or longterm trees listed on kernel.org.
If you are going to ask patch submitters to test on all of the stable
trees, that pretty much guarantees that nothing at all will be cc'ed
to stable.

And this doesn't take into account patches that don't apply cleanly on
stable, so someone has to bash the patches until they apply.  The real
problem here is that there is a significant tax which needs to be
imposed by each stable tree.  You can either force maintainers to pay
the tax, or force the patch submitters to pay the tax, or put that
burden on the stable tree maintainers.  It's not clear any of this is
viable.
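
Even the cheapest slice of that tax, just checking whether a commit
applies cleanly to each stable branch before anyone talks about actually
testing it, already looks something like the rough Python sketch below
(the branch list is a made-up subset, and the repository is assumed to
already have the stable branches fetched):

import subprocess

# Hypothetical subset; the real list would be all eleven branches.
STABLE_BRANCHES = ["linux-4.6.y", "linux-4.4.y", "linux-3.18.y"]

def applies_cleanly(repo, sha, branch):
    run = lambda *args: subprocess.run(args, cwd=repo, capture_output=True)
    run("git", "checkout", "-q", branch)
    ok = run("git", "cherry-pick", "--no-commit", sha).returncode == 0
    # Clean up whatever the cherry-pick left behind, whether it succeeded
    # (staged changes) or conflicted (CHERRY_PICK_HEAD).
    run("git", "cherry-pick", "--abort")
    run("git", "reset", "--hard", "-q")
    return ok

def report(repo, sha):
    for branch in STABLE_BRANCHES:
        print(branch, "ok" if applies_cleanly(repo, sha, branch)
              else "NEEDS BACKPORT")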

And if device kernels or BSP kernels aren't bothering to track
-stable, it becomes even more unfair to force that work on the
maintainers or patch submitters.  If they are just going to be cherry
picking random patches out of the -stable kernel when they notice a
problem, does it make sense to do invest in doing full QA's for every
single commit before it goes into -stable?

						- Ted

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 17:01           ` Theodore Ts'o
@ 2016-07-10 18:28             ` Guenter Roeck
  2016-07-10 22:38               ` Rafael J. Wysocki
  2016-07-10 22:39               ` Theodore Ts'o
  2016-07-11  5:00             ` Vinod Koul
  1 sibling, 2 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-10 18:28 UTC (permalink / raw)
  To: Theodore Ts'o, Vinod Koul
  Cc: James Bottomley, ksummit-discuss, Jason Cooper

On 07/10/2016 10:01 AM, Theodore Ts'o wrote:
> On Sun, Jul 10, 2016 at 09:52:04PM +0530, Vinod Koul wrote:
>>
>> For patch merge, the expectation is that it is tested against upstream.
>> For stable, should we also mandate that it be verified against the stable
>> tree(s) as well, or if Maintainer feels it is stable material then we
>> can ask Submitters to test before CCing stable...
>
> This is simply not realistic.
>
Agreed. Testing has to happen on the back-end side.

> There are **eleven** stable or longterm trees listed on kernel.org.

I think this is one of the problems we are having: There are way too many
stable / longterm trees.

> If you are going to ask patch submitters to test on all of the stable
> trees, that pretty much guarantees that nothing at all will be cc'ed
> to stable.
>
> And this doesn't take into account patches that don't apply cleanly on
> stable, so someone has to bash the patches until they apply.  The real
> problem here is that there is a significant tax which needs to be
> imposed by each stable tree.  You can either force maintainers to pay
> the tax, or pay the patch submitters to pay the tax, or put that
> burden on the stable tree maintainers.  It's not clear any of this is
> viable.
>
> And if device kernels or BSP kernels aren't bothering to track
> -stable, it becomes even more unfair to force that work on the
> maintainers or patch submitters.  If they are just going to be cherry
> picking random patches out of the -stable kernel when they notice a
> problem, does it make sense to do invest in doing full QA's for every
> single commit before it goes into -stable?
>

I think we are having kind of a circular problem: Device/BSP kernels
don't track stable because stable branches are considered to be not stable
enough, and stable branches are not tested well enough because they are not
picked up anyway. The only means to break that circle is to improve
stable testing to the point where people do feel comfortable picking it up.

The key to solving that problem might be automation. There are lots of tools
available nowadays which could be used for that purpose (gerrit, buildbot, ...).
Patch submissions to stable releases could be run through an automated test
system and only be applied to stable release candidates after all tests passed.
This is widely done with vendor kernels today, and should be possible for
stable kernels as well. Such a system could even pick up patches tagged
with Fixes: or with Cc: stable from mainline automatically.

That system could start with a single kernel release, with more releases added
as its capacity and capabilities are improved. Test coverage could be increased
over time, starting with build tests and adding qemu boot tests and runtime
tests as they are made available. The only limitations of such a system would
be money to build and run it, and time for people to set up, maintain,
and enhance it.

Sure, that would not be perfect, but it would be a vast improvement over
what is available today, and its automation would ensure that maintainers
only have to get involved when there is a problem.
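
The front end of such a system, the part that picks candidate commits
out of mainline, could be quite small. A rough Python sketch follows;
the stable and Fixes: tag patterns are the existing conventions, while
the revision range and the run_build_and_boot_tests() hook are
placeholders:

import re
import subprocess

TAG_RE = re.compile(r"^\s*(Cc:\s*.*stable@vger\.kernel\.org|Fixes:)",
                    re.IGNORECASE | re.MULTILINE)

def stable_candidates(repo, since="v4.6"):
    # One record per commit: <sha>\0<full commit message>\1
    log = subprocess.run(
        ["git", "log", "--pretty=format:%H%x00%B%x01",
         since + "..origin/master"],
        cwd=repo, capture_output=True, text=True, check=True).stdout
    for record in log.split("\x01"):
        sha, _, body = record.strip().partition("\x00")
        if sha and TAG_RE.search(body):
            yield sha

# The back end would then build and boot-test each candidate and only
# queue it for a stable release candidate once everything passes:
#
#   for sha in stable_candidates("/path/to/linux"):
#       run_build_and_boot_tests(sha)   # hypothetical hook into the CI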

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 18:28             ` Guenter Roeck
@ 2016-07-10 22:38               ` Rafael J. Wysocki
  2016-07-11  8:47                 ` Jiri Kosina
  2016-07-27  3:19                 ` Steven Rostedt
  2016-07-10 22:39               ` Theodore Ts'o
  1 sibling, 2 replies; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-07-10 22:38 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Jason Cooper, ksummit-discuss, James Bottomley, ksummit-discuss

On Sunday, July 10, 2016 11:28:21 AM Guenter Roeck wrote:
> On 07/10/2016 10:01 AM, Theodore Ts'o wrote:
> > On Sun, Jul 10, 2016 at 09:52:04PM +0530, Vinod Koul wrote:
> >>
> >> For patch merge, the expectation is that it is tested against upstream.
> >> For stable, should we also mandate that it be verified against the stable
> >> tree(s) as well, or if Maintainer feels it is stable material then we
> >> can ask Submitters to test before CCing stable...
> >
> > This is simply not realistic.
> >
> Agreed. Testing has to happen on the back-end side.
> 
> > There are **eleven** stable or longterm trees listed on kernel.org.
> 
> I think this is one of the problems we are having: There are way too many
> stable / longterm trees.

So going back to the origins of -stable, the problem it was invented to address
at that time, IIRC, was that people started to perceive switching over to the
kernels released by Linus as risky, because it was hard to get fixes for bugs
found in them.  The idea at that time was to collect the fixes (and fixes only)
in a "stable" tree, so that whoever decided to use the latest kernel released
by Linus could get them readily, but without burdening maintainers with having
their own "stable" branches and similar.  And that was going to last until the
next kernel release from Linus, at which point a new "stable" tree was to be
started.

That's what the 4.6.y "stable" series is today.

To me, that particular part has been very successful and it actually works
well enough, so I wouldn't change anything in it.

However, "long-term stable" trees started to appear at one point and those are
quite different and serve a different purpose.  I'm not quite sure if handling
them in the same way as 4.6.y is really the best approach.  At least it seems
to lead to some mismatch between the expectations and what is really delivered.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 18:28             ` Guenter Roeck
  2016-07-10 22:38               ` Rafael J. Wysocki
@ 2016-07-10 22:39               ` Theodore Ts'o
  2016-07-11  1:12                 ` Olof Johansson
  1 sibling, 1 reply; 244+ messages in thread
From: Theodore Ts'o @ 2016-07-10 22:39 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Sun, Jul 10, 2016 at 11:28:21AM -0700, Guenter Roeck wrote:
> > There are **eleven** stable or longterm trees listed on kernel.org.
> 
> I think this is one of the problems we are having: There are way too many
> stable / longterm trees.

Part of this is because it's too easy for someone to say, "I want to
support [34].XX as a stable kernel".  Maybe it will only be for one
architecture and only used for one platform (e.g. Yocto, or some other
random distribution), but it's not immediately obvious (a) who is
going to be using the stable kernel, and (b) what sort of testing it is
actually getting.

This is fine if stable kernels are advertised as being "best efforts
only; whatever an individual stable kernel maintainer feels like
putting into the project".  Which is fine, but then it's also no
surprise if device kernel maintainers and BSP kernel maintainers
aren't taking the -stable kernel series.  And it's also not
surprising if other people are expecting that stable trees are
supposed to be more stable than that, and then get indignant when
there are regressions, bug fixes that aren't backported, bug fixes
that work fine on the tip but which break after getting backported,
etc.

To be clear, though: That's the way things are right now, and someone
who wants to change it is going to have to propose a procedure which
ends up requiring less work from maintainers and individual patch
submitters, and/or volunteers to do the extra work, or realistically,
it's not going to happen.

> I think we are having kind of a circular problem: Device/BSP kernels
> don't track stable because stable branches are considered to be not stable
> enough, and stable branches are not tested well enough because they are not
> picked up anyway. The only means to break that circle is to improve
> stable testing to the point where people do feel comfortable picking it up.
> 
> The key to solving that problem might be automation. There are lots of tools
> available nowadays which could be used for that purpose (gerrit, buildbot, ...).
> Patch submissions to stable releases could be run through an automated test
> system and only be applied to stable release candidates after all tests passed.
> This is widely done with vendor kernels today, and should be possible for
> stable kernels as well. Such a system could even pick up patches tagged
> with Fixes: or with Cc: stable from mainline automatically.

Testing works fine for core kernel features and for things like file
systems.  But it really doesn't work with real hardware, and Olof
described a couple of scenarios where fixes to device drivers broke
older hardware supported by the same driver.  If what we are most
worried about is "no regressions", one really extreme approach would
be for a particular stable kernel series, to have a branch which
*only* has patches for which reliable and comprehensive tests exist.
This branch would at least get all of the security fixes and other bug
fixes which are applicable to the core kernel, but it would filter
out, at least initially, all or most device driver patches.
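
A first cut at that filtering could even be purely mechanical, keyed
off the paths a commit touches. A rough Python sketch (the path
prefixes are placeholders and would obviously need arguing over):

import subprocess

# Hypothetical prefixes; anything touching only these would be routed to
# the "device driver" branch, everything else to the "tested core" one.
DRIVER_PREFIXES = ("drivers/", "sound/")

def touched_paths(repo, sha):
    out = subprocess.run(
        ["git", "show", "--pretty=format:", "--name-only", sha],
        cwd=repo, capture_output=True, text=True, check=True).stdout
    return [p for p in out.splitlines() if p]

def driver_only(repo, sha):
    paths = touched_paths(repo, sha)
    return bool(paths) and all(p.startswith(DRIVER_PREFIXES) for p in paths)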

We could have another branch which includes the device driver fixes,
and perhaps over time we could figure out some scheme by which, if the
significant device kernel and BSP kernel users could be convinced to
contribute hardware and some test engineer resources, maybe some of
the device driver fixes could go into the "tested" stable branch as
well.

Or maybe we just leave a clean separation between "core" and "device
driver" stable branches, since in practice the answer seems to be that
once an embedded device kernel maintainer gets things working, they
**really** don't want to touch the device drivers ever again, since if
there are any hardware or software issues, they want users buying an
upgraded device every 12-18 months anyway.  :-)    At least that way
maybe the users will get the core security and stability fixes....

Or maybe we have a different policy for x86-specific device drivers
than we do for the embedded architectures, since in practice we have
more end users testing the x86 stable kernels, whereas the embedded
architectures tend to get things like OTA updates, and so it's not
surprising that those maintainers are much more paranoid about driver
changes which might brick their devices.

(Yes, I know that some drivers are shared between x86 and ARM; and I
suspect that's one of the places where we could easily have a problem
where a bugfix that fixes things for a device on an x86 base might
accidentally cause a regression for the same device hanging off of a
different bus in a SOC configuration....  and no amount of test
automation has any *hope* of catching those sorts of problems.)

      	  	     	     	  	       - Ted

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 22:39               ` Theodore Ts'o
@ 2016-07-11  1:12                 ` Olof Johansson
  0 siblings, 0 replies; 244+ messages in thread
From: Olof Johansson @ 2016-07-11  1:12 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 5329 bytes --]

On Sun, Jul 10, 2016 at 3:39 PM, Theodore Ts'o <tytso@mit.edu> wrote:

> On Sun, Jul 10, 2016 at 11:28:21AM -0700, Guenter Roeck wrote:
> > > There are **eleven** stable or longterm trees listed on kernel.org.
> >
> > I think this is one of the problems we are having: There are way too many
> > stable / longterm trees.
>
> Part of this is because it's too easy for someone to say, "I want to
> support [34].XX as a stable kernel".  Maybe it will only be for one
> architecture and only used for one platform (e.g. Yocto, or some other
> random distribution), but it's not immediately obvious (a) who is
> going to be using the stable kernel, and (b) what sort of testing it is
> actually getting.
>
> This is fine if stable kernels are advertised as being "best efforts
> only; whatever an individual stable kernel maintainer feels like
> putting into the project".  Which is fine, but then it's also no
> surprise if device kernel maintainers and BSP kernel maintainers
> aren't taking the -stable kernel series.  And it also becomes
> surprising if other people are expecting that stable trees are
> supposed to be more stable than that, and then get indignant when
> there are regressions, bug fixes that aren't backported, bug fixes
> that work fine on the tip but which break after getting backported,
> etc.
>
> To be clear, though: That's the way things are right now, and someone
> who wants to change it is going to have to propose a procedure which
> ends up taking less work on maintainers and individual patch
> submitters, and/or volunteers to do the extra work, or realistically,
> it's not going to happen.
>
> > I think we are having kind of a circular problem: Device/BSP kernels
> > don't track stable because stable branches are considered to be not
> stable
> > enough, and stable branches are not tested well enough because they are
> not
> > picked up anyway. The only means to break that circle is to improve
> > stable testing to the point where people do feel comfortable picking it
> up.
> >
> > The key to solving that problem might be automation. There are lots of
> tools
> > available nowadays which could be used for that purpose (gerrit,
> buildbot, ...).
> > Patch submissions to stable releases could be run through an automated
> test
> > system and only be applied to stable release candidates after all tests
> passed.
> > This is widely done with vendor kernels today, and should be possible for
> > stable kernels as well. Such a system could even pick up patches tagged
> > with Fixes: or with Cc: stable from mainline automatically.
>
> Testing works fine for core kernel features and for things like file
> systems.  But it really doesn't work with real hardware, and Olof
> described a couple of scenarios where fixes to device drivers broke
> older hardware supported by the same driver.  If what we are most
> worried about is "no regressions", one really extreme approach would
> be for a particular stable kernel series, to have a branch which
> *only* has patches for which reliable and comprehensive tests exist.
> This branch would at least get all of the security fixes and other bug
> fixes which are applicable to the core kernel, but it would filter
> out, at least initially, all or most device driver patches.
>
> We could have another branch which includes the device driver fixes,
> and perhaps over time we could figure out some scheme by which if the
> significant device kernel and BSP kernel users could be convinced to
> contribute hardware and some test engineer resources, maybe some of
> the device driver fixes could go into the "tested" stable branch as
> well.
>
> Or maybe we just leave a clean separation between "core" and "device
> driver" stable branches, since in practice the answer seems to be that
> once an embedded device kernel maintainer gets things working, they
> **really** don't want to touch the device drivers ever again, since if
> there are any hardware or software issues, they want users buying an
> upgraded device every 12-18 months anyway.  :-)    At least that way
> maybe the users will get the core security and stability fixes....
>
> Or maybe we have a different policy for x86-specific device drivers
> than we do for the embedded architectures, since in practice we have
> more end users testing the x86 stable kernels, whereas the embedded
> architectures tend to get things like OTA updates, and so it's not
> surprising that those maintainers are much more paranoid about driver
> changes which might brick their devices.
>
> (Yes, I know that some drivers are shared between x86 and ARM; and I
> suspect that's one of the places where we could easily have a problem
> where a bugfix that fixes things for a device on an x86 base might
> accidentally cause a regression for the same device hanging off of a
> different bus in a SOC configuration....  and no amount of test
> automation has any *hope* of catching those sorts of problems.)
>

Just to clarify, my commentary was NOT for ARM SoC support. It was for
drivers frequently used on x86 laptops. So it's not an "embedded only"
problem.

That being said, this was several years ago, and it's not necessarily worth
focusing all that much on -- I just wanted to give an example of a case
where using -stable in a product tree got pushback and why.


-Olof

[-- Attachment #2: Type: text/html, Size: 6266 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 14:42                 ` Theodore Ts'o
@ 2016-07-11  1:18                   ` Olof Johansson
  0 siblings, 0 replies; 244+ messages in thread
From: Olof Johansson @ 2016-07-11  1:18 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 3764 bytes --]

On Sun, Jul 10, 2016 at 7:42 AM, Theodore Ts'o <tytso@mit.edu> wrote:

> On Sat, Jul 09, 2016 at 11:19:39PM -0700, Olof Johansson wrote:
> >
> > The in-house developers on a certain subsystem didn't trust the
> > upstream maintainers to not regress their drivers -- in particular
> > they had seen some painful regressions on older chipsets when newer
> > hardware support was picked up. Esoteric bugs that had been fixed with
> > the help of the support team weren't folded in properly in the
> > upstream sources, or when they were they looked sufficiently different
> > that when -stable came around they didn't want to revert back to that
> > version, or they weren't yet picked up for upstream and now other
> > fixes were touching the same code and that seemed risky. They had a
> > code base that worked for the use cases they cared about (with the fix
> > applied that the support team had provided), and very little interest
> > in risking a regression from switching to the upstream version.
>
> Hrm.  That's interesting color commentary, thanks.


As mentioned downthread already: This wasn't actually for a BSP-based
embedded chipset/driver. This was for common hardware found in many laptops
(at the time).


> This won't help
> for those devices that aren't using BSP kernels from SOC vendors, but
> for those platforms where kernels from vendors are available, do you
> know off-hand if they are tracking -stable?  Because if they are,
> presumably at least the SOC vendors would have the capability of doing
> the necessary testing.
>
> OTOH, the problem with that is once the SOC vendors have stopped
> selling a particular chip version, they probably don't have any
> interest in continuing to do QA for stable kernels for that particular
> SOC set.  So I'm guessing the answer is "no", it won't help, but I'd
> love to be pleasantly surprised to the contrary.
>

Optimizing our workflow for what some random SoC manufacturer does with
their BSP is probably not a useful exercise. As you say, once they're done
with the product they usually move on to the next generation.

As arm-soc maintainer, it's very rare that we see fixes targeted to
-stable, often because I don't think there are many downstream users of the
upstream tree for embedded platforms, so including fixes there doesn't mean
they show up in product trees. As platforms do get more and more support, it
will get better over time, but it's not there yet.


> > Instead, what the team started doing was using -stable as a source for
> > fixes -- when looking at a bug, the first thing you looked for was to see
> > if someone had touched that code/subsystem in -stable. It's not ideal
> > in the sense that you have to hit the bug and someone has to look at
> > it, but it was the state we ended up in on that project. It means
> > -stable still has substantial value even though it's not merged
> > directly.
>
> The concern with this approach is that it won't necessarily get security
> fixes, since that implies that the product team is only looking at
> -stable once a bug has been reported.
>

That's true. For fixes that get CVE labels there's sometimes tracking that
happens and things get picked up, but for "silent" security fixes there's
not.


> I could tell interested product teams that there are patches that will
> prevent a maliciously crafted SD card from hanging a system or
> causing a memory bounds overrun possibly leading to a privilege
> escalation attack (for example), but that really doesn't scale, and
> unless the maintainer uses out-of-band notification methods, how would
> the product team know to look in -stable?
>

By tracking CVEs or having representation on the security lists. Only large
projects tend to have resources to do so, unfortunately.


-Olof

[-- Attachment #2: Type: text/html, Size: 4949 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-10  2:27                   ` Dan Williams
  2016-07-10  6:10                     ` Guenter Roeck
@ 2016-07-11  4:03                     ` Trond Myklebust
  2016-07-11  4:22                       ` James Bottomley
                                         ` (4 more replies)
  1 sibling, 5 replies; 244+ messages in thread
From: Trond Myklebust @ 2016-07-11  4:03 UTC (permalink / raw)
  To: Dan Williams, ksummit-discuss; +Cc: James Bottomley

So, we might as well make this a formal proposal.

I’d like to propose that we have a discussion around how to make it easier to implement kernel unit tests. I’ve co-opted Dan as he has expressed both an interest and hands-on experience. :-)

Cheers
  Trond

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
@ 2016-07-11  4:22                       ` James Bottomley
  2016-07-11  4:30                         ` Trond Myklebust
  2016-07-11  5:23                       ` Guenter Roeck
                                         ` (3 subsequent siblings)
  4 siblings, 1 reply; 244+ messages in thread
From: James Bottomley @ 2016-07-11  4:22 UTC (permalink / raw)
  To: Trond Myklebust, Dan Williams, ksummit-discuss

On Mon, 2016-07-11 at 04:03 +0000, Trond Myklebust wrote:
> So, we might as well make this a formal proposal.
> 
> I’d like to propose that we have a discussion around how to make it 
> easier to implement kernel unit tests. I’ve co-opted Dan as he has 
> expressed both an interest and hands-on experience. :-)

OK, if you want to be formal, I'll propose we do a separate topic on
stable workflow.  Probably beginning with one of the maintainers who
does their own stable tree (I'm not organised enough to have co-opted
someone yet, but I'll try) to explain why they do this instead of just
adding a stable tag like the rest of us and how much extra effort it
costs them and whether more of us should be adopting it.  Then moving
on to discuss extra steps for preventing stable regressions, like
should we insist that patches tagged for stable be tested by someone
with the hardware on current head.  Finally I think we should debate
whether we have too many stable trees and perhaps we should sort them
into "official" (we care more about this tree) and "unofficial" meaning
it's run fully at the risk of the maintainer.

James

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  4:22                       ` James Bottomley
@ 2016-07-11  4:30                         ` Trond Myklebust
  0 siblings, 0 replies; 244+ messages in thread
From: Trond Myklebust @ 2016-07-11  4:30 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss


> On Jul 11, 2016, at 00:22, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> 
> On Mon, 2016-07-11 at 04:03 +0000, Trond Myklebust wrote:
>> So, we might as well make this a formal proposal.
>> 
>> I’d like to propose that we have a discussion around how to make it 
>> easier to implement kernel unit tests. I’ve co-opted Dan as he has 
>> expressed both an interest and hands-on experience. :-)
> 
> OK, if you want to be formal, I'll propose we do a separate topic on
> stable workflow.  Probably beginning with one of the maintainers who
> does their own stable tree (I'm not organised enough to have co-opted
> someone yet, but I'll try) to explain why they do this instead of just
> adding a stable tag like the rest of us and how much extra effort it
> costs them and whether more of us should be adopting it.  Then moving
> on to discuss extra steps for preventing stable regressions, like
> should we insist that patches tagged for stable be tested by someone
> with the hardware on current head.  Finally I think we should debate
> whether we have too many stable trees and perhaps we should sort them
> into "official" (we care more about this tree) and "unofficial" meaning
> it's run fully at the risk of the maintainer.
> 

Just to clarify: I didn’t mean “formal” in any pejorative sense… As I understand it, the kernel summit PC wants to see clear proposals with a “CORE TOPIC” etc., and so that was all I intended.
I’m quite happy to see a discussion on stable workflow under the premises you describe.

Cheers
  Trond

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 17:01           ` Theodore Ts'o
  2016-07-10 18:28             ` Guenter Roeck
@ 2016-07-11  5:00             ` Vinod Koul
  2016-07-11  5:13               ` Theodore Ts'o
  1 sibling, 1 reply; 244+ messages in thread
From: Vinod Koul @ 2016-07-11  5:00 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Sun, Jul 10, 2016 at 01:01:17PM -0400, Theodore Ts'o wrote:
> On Sun, Jul 10, 2016 at 09:52:04PM +0530, Vinod Koul wrote:
> > 
> > For patch merge, the expectation is that it is tested against upstream.
> > For stable, should we also mandate that it be verified against the stable
> > tree(s) as well, or if Maintainer feels it is stable material then we
> > can ask Submitters to test before CCing stable...
> 
> This is simply not realistic.
> 
> There are **eleven** stable or longterm trees listed on kernel.org.
> If you are going to ask patch submitters to test on all of the stable
> trees, that pretty much guarantees that nothing at all will be cc'ed
> to stable.

Isn't that a part of the problem as well?  If I am submitting a fix,
shouldn't I be able to backport and validate the fix on stable kernels?

> And this doesn't take into account patches that don't apply cleanly on
> stable, so someone has to bash the patches until they apply.  The real
> problem here is that there is a significant tax which needs to be
> imposed by each stable tree.  You can either force maintainers to pay
> the tax, or pay the patch submitters to pay the tax, or put that
> burden on the stable tree maintainers.  It's not clear any of this is
> viable.

The fix submitter is the best person to do that.  Anyway, when a patch
doesn't apply cleanly, Greg does ask the submitter for a backported patch.

> And if device kernels or BSP kernels aren't bothering to track
> -stable, it becomes even more unfair to force that work on the
> maintainers or patch submitters.  If they are just going to be cherry
> picking random patches out of the -stable kernel when they notice a
> problem, does it make sense to do invest in doing full QA's for every
> single commit before it goes into -stable?

And IMO, since the submitter knows the target and has the hardware to
test, it would be easier for that person to verify...

Thanks
-- 
~Vinod

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11  5:00             ` Vinod Koul
@ 2016-07-11  5:13               ` Theodore Ts'o
  2016-07-11 10:57                 ` Luis de Bethencourt
  2016-07-11 14:18                 ` Vinod Koul
  0 siblings, 2 replies; 244+ messages in thread
From: Theodore Ts'o @ 2016-07-11  5:13 UTC (permalink / raw)
  To: Vinod Koul; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 10:30:00AM +0530, Vinod Koul wrote:
> > There are **eleven** stable or longterm trees listed on kernel.org.
> > If you are going to ask patch submitters to test on all of the stable
> > trees, that pretty much guarantees that nothing at all will be cc'ed
> > to stable.
> 
> Isn't that a part of the problem as well? If I am submitting a fix,
> shouldn't I be able to backport and validate the fix on stable kernels?

You might be *able* to pay me $1000 (e.g., you have that
much in your savings account).  That doesn't mean that you *will*, or
that you *should*, or have any moral *obligation* to pay me $1000.
(But if you do want to write me a check, feel free....  :-)

If a developer at Facebook finds a bug, and they fix it for upstream,
they do so partially out of the goodness of their hearts, and
partially because that way they don't have to do extra work to forward
port a private patch when they move to a newer upstream kernel.  But
the GPL doesn't require that they share bug fixes with upstream ---
it's just in their economic interest to do so.  (Even if it does help
their competitors who might now not have to do the same bug report.
And since developers at Google are also doing the same thing, it all
works well.)

Ok, now let's grant that the same Facebook developer is *able* to
backport the same patch to 11 different stable kernels, and do a full
QA validation on all of these different stable trees.  Now the extra 15-30
minutes that it might take to prepare the patch for upstream might now
take a day or two.  What benefit does the Facebook developer get from
doing that?  Almost none.  By what moral or legal right do you have to
demand that the Facebook developer do all of that extra work?  Exactly
zero.

Now, suppose *you* are under a tight deadline to get work done for
your company's shipping product.  How do you think your manager would
react if you tell her, I'm sorry, but our competitors at Qualcomm are
demanding that I take my upstream patch contribution and backport and
QA it on a dozen different stable kernels so they can more easily put
out BSP kernels for products that directly compete with Intel's?  Let
me guess that the answer might very well be, "not well".

> > And if device kernels or BSP kernels aren't bothering to track
> > -stable, it becomes even more unfair to force that work on the
> > maintainers or patch submitters.  If they are just going to be cherry
> > picking random patches out of the -stable kernel when they notice a
> > problem, does it make sense to do invest in doing full QA's for every
> > single commit before it goes into -stable?
> 
> And IMO since submitter know the target and has the hardware for test,
> it would be more easy for that person to verify..

The submitter is not necessarily going to have all of the hardware to
test.  Heck, Intel has shipped i915 drivers that have broken my
Thinkpad dock (in fact the video out on my dock has been mostly
useless for the past year), multiple times in the past and so I'm
pretty sure Intel isn't testing their i915 driver on all of the
different hardware connected to the i915 chipset --- and these are
regressions on the *HEAD* of the Linux tree, never mind backports into
stable....

       	      	    	  	     	       - Ted

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
  2016-07-11  4:22                       ` James Bottomley
@ 2016-07-11  5:23                       ` Guenter Roeck
  2016-07-11  8:56                         ` Hannes Reinecke
  2016-07-11 16:20                         ` Mark Brown
  2016-07-11 19:58                       ` Dan Williams
                                         ` (2 subsequent siblings)
  4 siblings, 2 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-11  5:23 UTC (permalink / raw)
  To: Trond Myklebust, Dan Williams, ksummit-discuss; +Cc: James Bottomley

On 07/10/2016 09:03 PM, Trond Myklebust wrote:
> So, we might as well make this a formal proposal.
>
> I’d like to propose that we have a discussion around how to make it easier to implement kernel unit tests. I’ve co-opted Dan as he has expressed both an interest and hands-on experience. :-)
>

Making it easier to implement such tests won't get such tests executed.
I think we should also discuss how to implement more formal testing
of release candidates, and/or how to improve test coverage.

Thanks,
Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 22:35 [Ksummit-discuss] [CORE TOPIC] stable workflow Jiri Kosina
                   ` (2 preceding siblings ...)
  2016-07-10  7:21 ` Takashi Iwai
@ 2016-07-11  7:44 ` Christian Borntraeger
  2016-08-02 13:49 ` Jani Nikula
  4 siblings, 0 replies; 244+ messages in thread
From: Christian Borntraeger @ 2016-07-11  7:44 UTC (permalink / raw)
  To: Jiri Kosina, ksummit-discuss

On 07/09/2016 12:35 AM, Jiri Kosina wrote:
> Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it, 
> wouldn't it? :)
> 
> As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the 
> crucial elements I rely on (and I also try to make sure that SUSE 
> contributes back as much as possible).
> 
> Hence any planned changes in the workflow / releases are rather essential 
> for me, and I'd like to participate, should any such discussion take 
> place.
> 
> In addition to that, I'd again (like during the past 5+ years, but it 
> never really happened) like to propose a stable tree discussion topic: I'd 
> like to see an attempt to make the stable workflow more oriented towards 
> "maintainers sending pull requests" rather than "random people pointing to 
> patches that should go to stable". This has been much of an issue in the 
> past, when we've been seeing many stable tree regressions; that's not the 
> case any more, but still something where I sense a room for improvement.
> 
> Thanks,

I think the model ("cc stable", vs "commit ids" vs "pull request") does not
matter that much. 

Some ideas:
a: what about some CI environment/infrastructure that goes beyond what Guenter
and others provide (we have build testing and bootup testing in qemu)?  Maybe
some kind of subset of "make test" that has to work everywhere and can be
executed by anyone (a rough sketch of that follows below).
b: maybe provide an -rc for stable to trigger the testing from (a)
c: quicker reverts, even without an upstream revert
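
For (a), the closest existing thing is probably kselftest.  A wrapper
around a minimal, agreed-on subset could be as small as the Python
sketch below; the TARGETS list is a placeholder, and agreeing on the
actual subset would be the real work.

import subprocess

# Placeholder list; the point of (a) would be agreeing on targets that
# are expected to pass on every architecture and every stable tree.
MINIMAL_TARGETS = "timers ipc"

def run_minimal_selftests(kernel_tree):
    # Build and run the selected selftests in the given kernel tree.
    result = subprocess.run(
        ["make", "-C", "tools/testing/selftests",
         "TARGETS=" + MINIMAL_TARGETS, "run_tests"],
        cwd=kernel_tree)
    return result.returncode == 0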

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 19:40           ` Sudip Mukherjee
@ 2016-07-11  8:14             ` Jiri Kosina
  0 siblings, 0 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-07-11  8:14 UTC (permalink / raw)
  To: Sudip Mukherjee; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Sat, 9 Jul 2016, Sudip Mukherjee wrote:

> Just a thought, why don't we have a stable-next tree like the way we have 
> linux-next? In that way it might get more testing than it gets now. I 
> know it will be more work but at least worth a try.

What do you envision this stable-next to contain though?

linux-next is a merge of gazillions of trees that are heading upstream. 
A stable tree, though, is not composed of merged trees; it's a 
linear stream of commits in one queue.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 15:13         ` Guenter Roeck
  2016-07-09 19:40           ` Sudip Mukherjee
  2016-07-09 21:21           ` Theodore Ts'o
@ 2016-07-11  8:18           ` Jiri Kosina
  2016-07-11 23:32             ` Guenter Roeck
  2016-07-11 14:22           ` Mark Brown
  3 siblings, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-07-11  8:18 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Sat, 9 Jul 2016, Guenter Roeck wrote:

> As I suggested earlier, we'll have to find a way to convince companies 
> to actively invest in QA.

The potential issue I see here is coverage.

Linus' tree greatly benefits from the "crowdsourcing" effect, in a sense 
that everybody is, at the end of the day, interested in that tree to work 
(because most of the stakeholders are going to include that codebase in 
their product sooner or later).

That's not the case with stable codestreams; the interest there is much 
more scattered.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 22:38               ` Rafael J. Wysocki
@ 2016-07-11  8:47                 ` Jiri Kosina
  2016-07-27  3:19                 ` Steven Rostedt
  1 sibling, 0 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-07-11  8:47 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Jason Cooper, ksummit-discuss, James Bottomley, ksummit-discuss

On Mon, 11 Jul 2016, Rafael J. Wysocki wrote:

> So going back to the origins of -stable, the problem it was invented to 
> address at that time, IIRC, was that people started to perceive 
> switching over to the kernels released by Linus as risky, because it was 
> hard to get fixes for bugs found in them.  The idea at that time was to 
> collect the fixes (and fixes only) in a "stable" tree, so that whoever 
> decided to use the latest kernel released by Linus could get them 
> readily, but without burdening maintainers with having their own 
> "stable" branches and similar.  And that was going to last until the 
> next kernel release from Linus, at which point a new "stable" tree was 
> to be started.
> 
> That's what the 4.6.y "stable" series is today.
> 
> To me, that particular part has been very successful and it actually 
> works well enough, so I wouldn't change anything in it.
> 
> However, "long-term stable" trees started to appear at one point and 
> those are quite different and serve a different purpose.  I'm not quite 
> sure if handling them in the same way as 4.6.y is really the best 
> approach.  At least it seems to lead to some mismatch between the 
> expectations and what is really delivered.

That's a very good point, and I fully agree with it. Actually, all my 
previous proposals applied mostly to the long-term releases. The "last 
release + fixes" tree indeed seems to flow rather smoothly.
OTOH, I am not really sure how many consumers that tree has.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  5:23                       ` Guenter Roeck
@ 2016-07-11  8:56                         ` Hannes Reinecke
  2016-07-11 16:20                         ` Mark Brown
  1 sibling, 0 replies; 244+ messages in thread
From: Hannes Reinecke @ 2016-07-11  8:56 UTC (permalink / raw)
  To: ksummit-discuss

On 07/11/2016 07:23 AM, Guenter Roeck wrote:
> On 07/10/2016 09:03 PM, Trond Myklebust wrote:
>> So, we might as well make this a formal proposal.
>>
>> I’d like to propose that we have a discussion around how to make it
>> easier to implement kernel unit tests. I’ve co-opted Dan as he has
>> expressed both an interest and hands-on experience. :-)
>>
> 
> Making it easier to implement such tests won't get such tests executed.
> I think we should also discuss how to implement more formal testing
> of release candidates, and/or how to improve test coverage.
> 
Count me in.
I'm working on preparing a multipath testbed from within Qemu (FCoE over
virtio, quite fun), with the aim of developing an automated
kernel-ci testbed.
So I'd be very interested in this one, too.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		               zSeries & Storage
hare@suse.com			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11  5:13               ` Theodore Ts'o
@ 2016-07-11 10:57                 ` Luis de Bethencourt
  2016-07-11 14:18                 ` Vinod Koul
  1 sibling, 0 replies; 244+ messages in thread
From: Luis de Bethencourt @ 2016-07-11 10:57 UTC (permalink / raw)
  To: Theodore Ts'o, Vinod Koul
  Cc: James Bottomley, ksummit-discuss, Jason Cooper

On 11/07/16 06:13, Theodore Ts'o wrote:
> On Mon, Jul 11, 2016 at 10:30:00AM +0530, Vinod Koul wrote:
>>> There are **eleven** stable or longterm trees listed on kernel.org.
>>> If you are going to ask patch submitters to test on all of the stable
>>> trees, that pretty much guarantees that nothing at all will be cc'ed
>>> to stable.
>>
>> Isn't that a part of problem as well. If I am submitting a fix,
>> shouldn't I be able to backport and validate the fix on stable kernels?
> 
> You might be *able* to pay me a $1000 dollars (e.g., you have that
> much in your savings account).  That doesn't mean that you *will*, or
> that you *should*, or have any moral *obligation* to pay me $1000
> dollars.  (But if you do want to write me a check, feel free....  :-)
> 
> If a developer at Facebook finds a bug, and they fix it for upstream,
> they do so partially out of the goodness of their hearts, and
> partially because that way they don't have to do extra work to forward
> port a private patch when they move to a newer upstream kernel.  But
> the GPL doesn't require that they share bug fixes with upstream ---
> it's just in their economic incentive to do so.  (Even if it does help
> their competitors who might now not have to do the same bug report.
> And since developers at Google are also doing the same thing, it all
> works well.)
> 
> Ok, now let's grant that the same Facebook developer is *able* to
> backport the same patch to 11 different stable kernels, and do a full
> QA validation on all of these different stable.  Now the extra 15-30
> minutes that it might take to prepare the patch for upstream might now
> take a day or two.  What benefit does the Facebook developer have
> doing that?  Almost none.  By what moral or legal right do you have to
> demand that the Facebook developer do all of that extra work?  Exactly
> zero.
> 
> Now, suppose *you* are under a tight deadline to get work done for
> your company's shipping product.  How do you think your manager would
> react if you tell her, I'm sorry, but our competitors at Qualcomm are
> demanding that I take my upstream patch contribution and backport and
> QA it on a dozen different stable kernels so they can more easily put
> out BSP kernels for products that directly complete with Intel's?  Let
> me guess that the answer might very well be, "not well".
>

Increasing the barrier to entry for patches in mainline would demotivate
many submitters. Linux has done a great job of keeping it low, and IMHO
that is one of the reasons for its great success.

Could the work of creating and running a unified testing, QA, and
continuous integration system for stable branches be taken on by other
people, who are not the original submitters or the subsystem maintainers?

Would stable branch maintainers sharing more information and/or
infrastructure help reduce work duplication?

Luis
 
>>> And if device kernels or BSP kernels aren't bothering to track
>>> -stable, it becomes even more unfair to force that work on the
>>> maintainers or patch submitters.  If they are just going to be cherry
>>> picking random patches out of the -stable kernel when they notice a
>>> problem, does it make sense to do invest in doing full QA's for every
>>> single commit before it goes into -stable?
>>
>> And IMO since submitter know the target and has the hardware for test,
>> it would be more easy for that person to verify..
> 
> The submitter is not necessarily going to have all of the hardware to
> test.  Heck, Intel has shipped i915 drivers that have broken my
> Thinkpad dock (in fact the video out on my dock has been mostly
> useless for the past year), multiple times in the past and so I'm
> pretty sure Intel isn't testing their i915 driver on all of the
> different hardware connected to the i915 chipset --- and this is
> regressions on the *HEAD* of the Linux tree, never mind backports into
> stable....
> 
>        	      	    	  	     	       - Ted
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
> 

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11  5:13               ` Theodore Ts'o
  2016-07-11 10:57                 ` Luis de Bethencourt
@ 2016-07-11 14:18                 ` Vinod Koul
  2016-07-11 17:34                   ` Guenter Roeck
  2016-07-27  3:12                   ` Steven Rostedt
  1 sibling, 2 replies; 244+ messages in thread
From: Vinod Koul @ 2016-07-11 14:18 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 01:13:35AM -0400, Theodore Ts'o wrote:
> On Mon, Jul 11, 2016 at 10:30:00AM +0530, Vinod Koul wrote:
> > > There are **eleven** stable or longterm trees listed on kernel.org.
> > > If you are going to ask patch submitters to test on all of the stable
> > > trees, that pretty much guarantees that nothing at all will be cc'ed
> > > to stable.
> > 
> > Isn't that a part of problem as well. If I am submitting a fix,
> > shouldn't I be able to backport and validate the fix on stable kernels?
> 
> You might be *able* to pay me a $1000 dollars (e.g., you have that
> much in your savings account).  That doesn't mean that you *will*, or
> that you *should*, or have any moral *obligation* to pay me $1000
> dollars.  (But if you do want to write me a check, feel free....  :-)
> 
> If a developer at Facebook finds a bug, and they fix it for upstream,
> they do so partially out of the goodness of their hearts, and
> partially because that way they don't have to do extra work to forward
> port a private patch when they move to a newer upstream kernel.  But
> the GPL doesn't require that they share bug fixes with upstream ---
> it's just in their economic incentive to do so.  (Even if it does help
> their competitors who might now not have to do the same bug report.
> And since developers at Google are also doing the same thing, it all
> works well.)
> 
> Ok, now let's grant that the same Facebook developer is *able* to
> backport the same patch to 11 different stable kernels, and do a full
> QA validation on all of these different stable.  Now the extra 15-30
> minutes that it might take to prepare the patch for upstream might now
> take a day or two.  What benefit does the Facebook developer have
> doing that?  Almost none.  By what moral or legal right do you have to
> demand that the Facebook developer do all of that extra work?  Exactly
> zero.
> 
> Now, suppose *you* are under a tight deadline to get work done for
> your company's shipping product.  How do you think your manager would
> react if you tell her, I'm sorry, but our competitors at Qualcomm are
> demanding that I take my upstream patch contribution and backport and
> QA it on a dozen different stable kernels so they can more easily put
> out BSP kernels for products that directly complete with Intel's?  Let
> me guess that the answer might very well be, "not well".

Ted,

I wholeheartedly agree with your arguments, and yes, that is a big
issue. *BUT* what is the solution then? Maintainers do not even have
the hardware to test on.

> > > And if device kernels or BSP kernels aren't bothering to track
> > > -stable, it becomes even more unfair to force that work on the
> > > maintainers or patch submitters.  If they are just going to be cherry
> > > picking random patches out of the -stable kernel when they notice a
> > > problem, does it make sense to do invest in doing full QA's for every
> > > single commit before it goes into -stable?
> > 
> > And IMO since submitter know the target and has the hardware for test,
> > it would be more easy for that person to verify..
> 
> The submitter is not necessarily going to have all of the hardware to
> test.  Heck, Intel has shipped i915 drivers that have broken my
> Thinkpad dock (in fact the video out on my dock has been mostly
> useless for the past year), multiple times in the past and so I'm
> pretty sure Intel isn't testing their i915 driver on all of the
> different hardware connected to the i915 chipset --- and this is
> regressions on the *HEAD* of the Linux tree, never mind backports into
> stable....

But the person might be slightly better off than you or me :-)

-- 
~Vinod

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 15:13         ` Guenter Roeck
                             ` (2 preceding siblings ...)
  2016-07-11  8:18           ` Jiri Kosina
@ 2016-07-11 14:22           ` Mark Brown
  3 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-11 14:22 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 1345 bytes --]

On Sat, Jul 09, 2016 at 08:13:19AM -0700, Guenter Roeck wrote:

> it works". We still have a long way to go to get real QA testing. As I
> suggested earlier, we'll have to find a way to convince companies to actively
> invest in QA.

There *is* some stuff going on there (slowly) with kernelci.org,
including some more active work, but definitely more investment is
needed.  I am somewhat hopeful that it'll be like a lot of the other
testing efforts: once we start to see some results becoming available
there will be a bit of a snowball effect and we'll start to see more
people getting involved (I know I wouldn't have been running a build
bot if I hadn't wanted things other build bots weren't offering at the
time).

> > There's also the volume of stable trees to consider here - we've got a
> > large number of stable trees which seem to be maintained in different
> > ways with different tooling.  One big advantage from my point of view
> > as a maintainer with the current model is that I don't have to figure
> > out which I care about or anything like that.

> The proliferation of stable trees (or rather, how to avoid it) might be
> one of the parts of the puzzle. Yes, there are way too many right now.

OTOH if people want to run a given kernel version it's nice for them to
have a place to collaborate and share fixes.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 21:21           ` Theodore Ts'o
@ 2016-07-11 15:13             ` Mark Brown
  2016-07-11 17:03               ` Theodore Ts'o
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-11 15:13 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 541 bytes --]

On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:

> the latest stable kernel.  (But even if they do, apparently many
> device vendors aren't bothering to merge in changes from the SOC's BSP
> kernel, even if the BSP kernel is getting -stable updates.)

It would be pretty irresponsible for device vendors to be merging BSP
trees; they're generally development things with ongoing feature updates
that might interact badly with things the system integrator has done,
rather than something stable enough to just merge constantly.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  5:23                       ` Guenter Roeck
  2016-07-11  8:56                         ` Hannes Reinecke
@ 2016-07-11 16:20                         ` Mark Brown
  1 sibling, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-11 16:20 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1160 bytes --]

On Sun, Jul 10, 2016 at 10:23:06PM -0700, Guenter Roeck wrote:
> On 07/10/2016 09:03 PM, Trond Myklebust wrote:
> > So, we might as well make this a formal proposal.

> > I’d like to propose that we have a discussion around how to make it
> > easier to implement kernel unit tests. I’ve co-opted Dan as he has
> > expressed both an interest and hands-on experience. :-)

I'm definitely interested in this discussion, it's something I've been
actively pushing on at work.

> Making it easier to implement such tests won't get such tests executed.
> I think we should also discuss how to implement more formal testing
> of release candidates, and/or how to improve test coverage.

Indeed, I think both are very important for making progress on testing.
It's much easier to get people to write tests and fix the issues they
identify if there's evidence that the tests are stable and that people
care about the results, but equally it's much easier to convince people
to pay attention to tests when the tests are actually there.
Bootstrapping is going to need to push on all these things
simultaneously until we've got something sufficiently established.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 15:13             ` Mark Brown
@ 2016-07-11 17:03               ` Theodore Ts'o
  2016-07-11 17:07                 ` Justin Forbes
                                   ` (5 more replies)
  0 siblings, 6 replies; 244+ messages in thread
From: Theodore Ts'o @ 2016-07-11 17:03 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 04:13:00PM +0100, Mark Brown wrote:
> On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:
> 
> > the latest stable kernel.  (But even if they do, apparently many
> > device vendors aren't bothering to merge in changes from the SOC's BSP
> > kernel, even if the BSP kernel is getting -stable updates.)
> 
> It would be pretty irresponsible for device vendors to be merging BSP
> trees, they're generally development things with ongoing feature updates
> that might interact badly with things the system integrator has done
> rather than something stable enough to just merge constantly.

So the question is who actually uses -stable kernels, and whether it
even makes sense for them to be managed in a git tree.

Very few people will actually be merging them, and maybe a patch queue
which is checked into git would actually work better, since it sounds
like most people are just cherry-picking specific patches.

						- Ted

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:03               ` Theodore Ts'o
@ 2016-07-11 17:07                 ` Justin Forbes
  2016-07-11 17:11                 ` Mark Brown
                                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 244+ messages in thread
From: Justin Forbes @ 2016-07-11 17:07 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 1261 bytes --]

On Mon, Jul 11, 2016 at 12:03 PM, Theodore Ts'o <tytso@mit.edu> wrote:

> On Mon, Jul 11, 2016 at 04:13:00PM +0100, Mark Brown wrote:
> > On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:
> >
> > > the latest stable kernel.  (But even if they do, apparently many
> > > device vendors aren't bothering to merge in changes from the SOC's BSP
> > > kernel, even if the BSP kernel is getting -stable updates.)
> >
> > It would be pretty irresponsible for device vendors to be merging BSP
> > trees, they're generally development things with ongoing feature updates
> > that might interact badly with things the system integrator has done
> > rather than something stable enough to just merge constantly.
>
> So the question is who actually uses -stable kernels, and does it make
> sense for it even to be managed in a git tree?
>
> Very few people will actually be merging them, and in fact maybe
> having a patch queue which is checked into git might actually work
> better, since it sounds like most people are just cherry-picking
> specific patches.
>
>
This is exactly what stable-queue.git is. It has been around for a long
time, and it is fairly helpful for cherry-picking specific patches or for
testing queued patches before an rc is announced.
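
For anyone who hasn't looked at it, applying the queue on top of the
matching stable release boils down to something like the rough, untested
sketch below (the paths are made up, and it assumes the queue-<x.y>/series
layout):

#!/usr/bin/env python3
# Rough, untested sketch: apply the patches queued in a stable-queue
# checkout on top of a stable tree.  Assumes queue-<x.y>/ holds the patch
# files plus a "series" file listing them in order; paths and the branch
# name are made up, and a real script would want proper error handling.
import subprocess, sys
from pathlib import Path

def apply_queue(stable_tree, queue_dir, base_tag):
    queue = Path(queue_dir)
    series = (queue / "series").read_text().split()
    def git(*args):
        subprocess.run(("git", "-C", stable_tree) + args, check=True)
    git("checkout", "-b", "queue-test", base_tag)   # throwaway branch
    for patch in series:
        git("am", str(queue / patch))               # stops on first failure

if __name__ == "__main__":
    apply_queue(sys.argv[1], sys.argv[2], sys.argv[3])

What you end up with is more or less what the next -rc will contain,
which makes it handy for early testing.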

Justin

[-- Attachment #2: Type: text/html, Size: 1765 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:03               ` Theodore Ts'o
  2016-07-11 17:07                 ` Justin Forbes
@ 2016-07-11 17:11                 ` Mark Brown
  2016-07-11 17:13                   ` Olof Johansson
  2016-07-13  1:08                   ` Geert Uytterhoeven
  2016-07-11 17:15                 ` Dmitry Torokhov
                                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-11 17:11 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 689 bytes --]

On Mon, Jul 11, 2016 at 01:03:33PM -0400, Theodore Ts'o wrote:

> Very few people will actually be merging them, and in fact maybe
> having a patch queue which is checked into git might actually work
> better, since it sounds like most people are just cherry-picking
> specific patches.

I think at this point, even if people are cherry picking patches, it's
probably still going to be easier for them to work with a git tree than
anything else - the workflow for git cherry-pick, looking for dependent
patches and so on is pretty clear, the upstream commit IDs are there if
you prefer to go direct to them, and if you really do want a raw patch
stack then it's easy to translate the tree into one.
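
Just to make that concrete, the basic flow is something like the purely
illustrative sketch below - the SHAs, branch names and output directory
are all made up:

#!/usr/bin/env python3
# Purely illustrative: cherry-pick a few upstream commits onto a product
# branch, recording the upstream SHA with -x, then export the result as a
# plain patch stack for anyone who wants one.  SHAs and refs are made up.
import subprocess

UPSTREAM_SHAS = ["0123456789ab", "89abcdef0123"]    # placeholders

def run(*args):
    subprocess.run(args, check=True)

run("git", "checkout", "-b", "product/fixes", "product/base")
for sha in UPSTREAM_SHAS:
    # -x appends "(cherry picked from commit ...)" so the upstream ID
    # stays visible in the backported commit message.
    run("git", "cherry-pick", "-x", sha)

# ... and if someone really does want a raw patch stack:
run("git", "format-patch", "-o", "patch-stack/",
    "product/base..product/fixes")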

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:11                 ` Mark Brown
@ 2016-07-11 17:13                   ` Olof Johansson
  2016-07-11 17:17                     ` Mark Brown
  2016-07-13  1:08                   ` Geert Uytterhoeven
  1 sibling, 1 reply; 244+ messages in thread
From: Olof Johansson @ 2016-07-11 17:13 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 914 bytes --]

On Mon, Jul 11, 2016 at 10:11 AM, Mark Brown <broonie@kernel.org> wrote:

> On Mon, Jul 11, 2016 at 01:03:33PM -0400, Theodore Ts'o wrote:
>
> > Very few people will actually be merging them, and in fact maybe
> > having a patch queue which is checked into git might actually work
> > better, since it sounds like most people are just cherry-picking
> > specific patches.
>
> I think at this point even if people are cherry picking patches it's
> probably still going to be easier for people to work with a git tree
> than anything else - the workflow for git cherry-pick, looking for
> dependent patches and so on is pretty clear, the upstream commit IDs are
> there if you prefer to go direct to them and if you really do want a raw
> patch stack then it's easy to translate into one.
>

Yeah, git-backed is much preferred -- you can easily do git log on a
subdirectory, git annotate file contents, etc.


-Olof

[-- Attachment #2: Type: text/html, Size: 1373 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:03               ` Theodore Ts'o
  2016-07-11 17:07                 ` Justin Forbes
  2016-07-11 17:11                 ` Mark Brown
@ 2016-07-11 17:15                 ` Dmitry Torokhov
  2016-07-11 17:20                   ` Theodore Ts'o
  2016-07-11 23:13                   ` Guenter Roeck
  2016-07-11 17:17                 ` Josh Boyer
                                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 244+ messages in thread
From: Dmitry Torokhov @ 2016-07-11 17:15 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 01:03:33PM -0400, Theodore Ts'o wrote:
> On Mon, Jul 11, 2016 at 04:13:00PM +0100, Mark Brown wrote:
> > On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:
> > 
> > > the latest stable kernel.  (But even if they do, apparently many
> > > device vendors aren't bothering to merge in changes from the SOC's BSP
> > > kernel, even if the BSP kernel is getting -stable updates.)
> > 
> > It would be pretty irresponsible for device vendors to be merging BSP
> > trees, they're generally development things with ongoing feature updates
> > that might interact badly with things the system integrator has done
> > rather than something stable enough to just merge constantly.
> 
> So the question is who actually uses -stable kernels,

Community-based distros definitely use stable, but that might be the
latest stable, not the older stables that are out there.

> and does it make
> sense for it even to be managed in a git tree?
> 
> Very few people will actually be merging them, and in fact maybe
> having a patch queue which is checked into git might actually work
> better, since it sounds like most people are just cherry-picking
> specific patches.

Cherry picking a commit from a branch/remote is much nicer than fetching
part of a [quilt based?] patch queue from git, unless I misunderstand
what you are proposing.

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:13                   ` Olof Johansson
@ 2016-07-11 17:17                     ` Mark Brown
  2016-07-11 17:24                       ` Guenter Roeck
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-11 17:17 UTC (permalink / raw)
  To: Olof Johansson; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 1168 bytes --]

On Mon, Jul 11, 2016 at 10:13:56AM -0700, Olof Johansson wrote:
> On Mon, Jul 11, 2016 at 10:11 AM, Mark Brown <broonie@kernel.org> wrote:
> > On Mon, Jul 11, 2016 at 01:03:33PM -0400, Theodore Ts'o wrote:

> > > Very few people will actually be merging them, and in fact maybe
> > > having a patch queue which is checked into git might actually work
> > > better, since it sounds like most people are just cherry-picking
> > > specific patches.

> > I think at this point even if people are cherry picking patches it's
> > probably still going to be easier for people to work with a git tree
> > than anything else - the workflow for git cherry-pick, looking for
> > dependent patches and so on is pretty clear, the upstream commit IDs are
> > there if you prefer to go direct to them and if you really do want a raw
> > patch stack then it's easy to translate into one.

> Yeah, git-backed is much preferred -- you can easily do git log on a
> subdirectory, git annotate file contents, etc.

Probably also worth mentioning that this was one of the blockers for
getting kernelci.org testing Greg's queue for quite a while - it only
knows how to consume git branches.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:03               ` Theodore Ts'o
                                   ` (2 preceding siblings ...)
  2016-07-11 17:15                 ` Dmitry Torokhov
@ 2016-07-11 17:17                 ` Josh Boyer
  2016-07-11 22:42                 ` James Bottomley
  2016-07-20 17:50                 ` Stephen Hemminger
  5 siblings, 0 replies; 244+ messages in thread
From: Josh Boyer @ 2016-07-11 17:17 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 1:03 PM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Mon, Jul 11, 2016 at 04:13:00PM +0100, Mark Brown wrote:
>> On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:
>>
>> > the latest stable kernel.  (But even if they do, apparently many
>> > device vendors aren't bothering to merge in changes from the SOC's BSP
>> > kernel, even if the BSP kernel is getting -stable updates.)
>>
>> It would be pretty irresponsible for device vendors to be merging BSP
>> trees, they're generally development things with ongoing feature updates
>> that might interact badly with things the system integrator has done
>> rather than something stable enough to just merge constantly.
>
> So the question is who actually uses -stable kernels, and does it make
> sense for it even to be managed in a git tree?

A number of distributions use stable kernels, Fedora being one of
them.  Having the releases helps from both the "what upstream are we
using" and the "convenient patch (patch-4.6.4.xz) to apply" standpoints.

> Very few people will actually be merging them, and in fact maybe
> having a patch queue which is checked into git might actually work
> better, since it sounds like most people are just cherry-picking
> specific patches.

I think you need to be careful with generalities this early in the
discussion.  Thus far it's mostly people doing heavy development work
on the kernel or for a specific platform who have chimed in.  I would
suggest that they aren't really the target of stable kernels, which are
likely much more useful for distributions and end users.

josh

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:15                 ` Dmitry Torokhov
@ 2016-07-11 17:20                   ` Theodore Ts'o
  2016-07-11 17:26                     ` Dmitry Torokhov
  2016-07-11 17:27                     ` Olof Johansson
  2016-07-11 23:13                   ` Guenter Roeck
  1 sibling, 2 replies; 244+ messages in thread
From: Theodore Ts'o @ 2016-07-11 17:20 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 10:15:06AM -0700, Dmitry Torokhov wrote:
> 
> Cherry picking a commit from a branch/remote is much nicer than fetching
> a part of [quilt based?] patch queue from git, unless I misunderstand
> what you are proposing.

The reason I'm wondering whether or not quilt-based might be better is
that if it turns out that a patch introduces a regression, with a
quilt-based system the patch can be *dropped*, or revised in place.

When you do a cherry-pick, you depend on the BSP kernel maintainer
figuring out that a commit was later reverted, or was fixed by a later
commit that might have a completely different subject line and which
doesn't make it obvious that if you take commit 'A', you REALLY want to
take commits 'B' and 'C' as well....
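
In practice the best a cherry-picking maintainer can do is something
like the untested sketch below - grep the upstream history for Fixes:
tags and reverts that mention the commit they took - and that only
helps when everyone wrote good tags:

#!/usr/bin/env python3
# Untested sketch: given an upstream commit you cherry-picked, look for
# later upstream commits that claim to fix or revert it.  Only as good as
# people's Fixes:/revert discipline, and the 12-character abbreviation in
# Fixes: tags is just the common convention.
import subprocess, sys

def follow_ups(tree, sha):
    out = subprocess.run(
        ["git", "-C", tree, "log", "--oneline",
         "--grep", "Fixes: %s" % sha[:12],
         "--grep", "This reverts commit %s" % sha,
         "%s.." % sha],
        check=True, capture_output=True, text=True)
    return out.stdout.splitlines()

if __name__ == "__main__":
    for line in follow_ups(sys.argv[1], sys.argv[2]):
        print(line)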

					- Ted

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:17                     ` Mark Brown
@ 2016-07-11 17:24                       ` Guenter Roeck
  2016-07-11 17:44                         ` Mark Brown
  0 siblings, 1 reply; 244+ messages in thread
From: Guenter Roeck @ 2016-07-11 17:24 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 06:17:18PM +0100, Mark Brown wrote:
> On Mon, Jul 11, 2016 at 10:13:56AM -0700, Olof Johansson wrote:
> > On Mon, Jul 11, 2016 at 10:11 AM, Mark Brown <broonie@kernel.org> wrote:
> > > On Mon, Jul 11, 2016 at 01:03:33PM -0400, Theodore Ts'o wrote:
> 
> > > > Very few people will actually be merging them, and in fact maybe
> > > > having a patch queue which is checked into git might actually work
> > > > better, since it sounds like most people are just cherry-picking
> > > > specific patches.
> 
> > > I think at this point even if people are cherry picking patches it's
> > > probably still going to be easier for people to work with a git tree
> > > than anything else - the workflow for git cherry-pick, looking for
> > > dependent patches and so on is pretty clear, the upstream commit IDs are
> > > there if you prefer to go direct to them and if you really do want a raw
> > > patch stack then it's easy to translate into one.
> 
> > Yeah, git-backed is much preferred -- you can easily do git log on a
> > subdirectory, git annotate file contents, etc.
> 
> Probably also worth mentioning that this was one of the blockers for
> getting kernelci.org testing Greg's queue for quite a while - it only
> knows how to consume git branches.

Kevin ended up pulling git branches from my repository, so that wasn't an
absolute blocker. One key advantage of having the queue in git is that it is
always consistent - the quilt queue was not always in sync with the -stable
baseline, especially right after a new stable release. Another advantage
is that the git repository can be tested by 0day.

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:20                   ` Theodore Ts'o
@ 2016-07-11 17:26                     ` Dmitry Torokhov
  2016-07-11 17:27                     ` Olof Johansson
  1 sibling, 0 replies; 244+ messages in thread
From: Dmitry Torokhov @ 2016-07-11 17:26 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 10:20 AM, Theodore Ts'o <tytso@mit.edu> wrote:
> On Mon, Jul 11, 2016 at 10:15:06AM -0700, Dmitry Torokhov wrote:
>>
>> Cherry picking a commit from a branch/remote is much nicer than fetching
>> a part of [quilt based?] patch queue from git, unless I misunderstand
>> what you are proposing.
>
> The reason why I'm wondering whether or not quilt based might be
> better is because if it turns out that a patch introduces a
> regression, with a quilt based system the patch can be *dropped*, or
> revised in place.
>
> When you do a cherry-pick, you depend on the BSP kernel maintainer
> figuring out that a commit was later reverted, or fixed by a later
> commit that might have a different name, and which doesn't make it
> obvious that if you take commit 'A', you REALLY want to take commits
> 'B' and 'C' as well....

You _never_ use pure stable unless you are talking about a bleeding
edge community distro; there is always work on top of stable, so you
can't do "quilt rebases".

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:20                   ` Theodore Ts'o
  2016-07-11 17:26                     ` Dmitry Torokhov
@ 2016-07-11 17:27                     ` Olof Johansson
  1 sibling, 0 replies; 244+ messages in thread
From: Olof Johansson @ 2016-07-11 17:27 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 1735 bytes --]

On Mon, Jul 11, 2016 at 10:20 AM, Theodore Ts'o <tytso@mit.edu> wrote:

> On Mon, Jul 11, 2016 at 10:15:06AM -0700, Dmitry Torokhov wrote:
> >
> > Cherry picking a commit from a branch/remote is much nicer than fetching
> > a part of [quilt based?] patch queue from git, unless I misunderstand
> > what you are proposing.
>
> The reason why I'm wondering whether or not quilt based might be
> better is because if it turns out that a patch introduces a
> regression, with a quilt based system the patch can be *dropped*, or
> revised in place.
>
> When you do a cherry-pick, you depend on the BSP kernel maintainer
> figuring out that a commit was later reverted, or fixed by a later
> commit that might have a different name, and which doesn't make it
> obvious that if you take commit 'A', you REALLY want to take commits
> 'B' and 'C' as well....
>

Ted, you keep using the term "BSP kernel maintainer". Mind clarifying what
you mean by it? I don't want to be picky; I just want to avoid
miscommunication.

BSP kernels are a small subset of downstream kernels. A BSP is often the
"evil vendor tree" that we keep seeing people complain about. They rarely
see updates to their base versions (e.g. many are still on 3.10 or 3.18).

Some downstream _product_ kernels are based on BSP kernels, some are not.

Typical examples of kernels that _are_ based on BSPs tend to be for $random
Android device out there, where an OEM takes what the SoC manufacturer
gives them, makes minimal changes, and ships it.

Examples that are _not_ based on BSPs tend to be Android's own kernels
(Nexus devices), Chrome OS, and many in-house kernel trees out there.

I think most people engaged in this discussion are working on non-BSP
downstream kernel trees.


-Olof

[-- Attachment #2: Type: text/html, Size: 2348 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 14:18                 ` Vinod Koul
@ 2016-07-11 17:34                   ` Guenter Roeck
  2016-07-27  3:12                   ` Steven Rostedt
  1 sibling, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-11 17:34 UTC (permalink / raw)
  To: Vinod Koul; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 07:48:34PM +0530, Vinod Koul wrote:
> 
> I do whole heartedly agree to your arguments and yes that is a big
> issue. *BUT* what is the solution then, Maintainers do not even have
> hardware to test.
> 

One solution would be to have test beds available for use by everyone
to run tests on real hardware. kernelci.org tries to do that. My concern
with that approach is that hardware breaks down and needs constant
maintenance.

My personal favorite is to use qemu. That is not a perfect replacement for
real hardware (and it will never be possible to use it for, say, graphics
tests), but it could be used much more than today. Of course, its downsides
are that it doesn't really reflect the behavior of real hardware, and that
it also needs maintenance. On the plus side, it scales much better than real
hardware (all you need is more servers), and it is (relatively) easy to add
support for new hardware to it.
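
To give an idea of what I mean, a qemu boot test can be as trivial as the
toy sketch below (nothing like the real scripts; the kernel/initrd paths,
the command line and the success marker are all assumptions):

#!/usr/bin/env python3
# Toy boot test: boot a kernel under qemu, watch the serial console for a
# marker, treat a hang as failure.  The paths, the kernel command line and
# the marker string are assumptions; real test scripts do a lot more.
import subprocess, sys

def boot_test(kernel, initrd, marker="Boot successful", timeout=120):
    cmd = ["qemu-system-x86_64", "-nographic", "-no-reboot",
           "-kernel", kernel, "-initrd", initrd,
           "-append", "console=ttyS0 panic=-1 rdinit=/sbin/init"]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=timeout).stdout
    except subprocess.TimeoutExpired:
        return False                # hung, or never printed the marker
    return marker in out

if __name__ == "__main__":
    ok = boot_test(sys.argv[1], sys.argv[2])
    print("PASS" if ok else "FAIL")
    sys.exit(0 if ok else 1)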

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:24                       ` Guenter Roeck
@ 2016-07-11 17:44                         ` Mark Brown
  0 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-11 17:44 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

[-- Attachment #1: Type: text/plain, Size: 976 bytes --]

On Mon, Jul 11, 2016 at 10:24:55AM -0700, Guenter Roeck wrote:
> On Mon, Jul 11, 2016 at 06:17:18PM +0100, Mark Brown wrote:

> > Probably also worth mentioning that this was one of the blockers for
> > getting kernelci.org testing Greg's queue for quite a while - it only
> > knows how to consume git branches.

> Kevin ended up pulling git branches from my repository, so that wasn't an
> absolute blocker. One key advantage of having the queue in git is that it is

I'm aware of that - obviously it's possible to go in both directions, but
it does mean that someone needs to do the quilt-to-git translation, which
is a bit less obvious than the translation in the other direction.

> always consistent - the quilt queue was not always in sync with the -stable
> baseline, especially right after a new stable release. Another advantage
> is that the git repository can be tested by 0day.

Right, it's just generally more likely that tooling will be able to
consume git than anything else.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
  2016-07-11  4:22                       ` James Bottomley
  2016-07-11  5:23                       ` Guenter Roeck
@ 2016-07-11 19:58                       ` Dan Williams
  2016-07-12  9:35                         ` Jan Kara
  2016-07-11 20:24                       ` Kevin Hilman
  2016-07-13  4:48                       ` Alex Shi
  4 siblings, 1 reply; 244+ messages in thread
From: Dan Williams @ 2016-07-11 19:58 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: James Bottomley, ksummit-discuss

On Sun, Jul 10, 2016 at 9:03 PM, Trond Myklebust
<trondmy@primarydata.com> wrote:
> So, we might as well make this a formal proposal.
>
> I’d like to propose that we have a discussion around how to make it easier to implement kernel unit tests. I’ve co-opted Dan as he has expressed both an interest and hands-on experience. :-)

Yes, I think there is some benefit to talk about unit test
implementation details.  Also, there are some wider questions, beyond
what I wrote up for LWN [1], that could be productive to discuss in
person:

* Are unit tests strictly out of tree (where tools/testing/ is "out-of-tree")?
There are maintenance overhead and code readability concerns if unit
test infrastructure is included in-line.  While in-line infrastructure may
enable testing of deep internals, significant coverage can be obtained
by mocking interfaces at the level of exported symbols.

* What does "we never regress userspace" mean when we have unit tests
that are tightly coupled to the kernel?
For example, there have been occasions where I have "regressed" a test
case to improve kernel behavior with the knowledge that no real
(non-test) application was dependent on the old behavior.

* Are tests only for developers, or should an end consumer of the
kernel expect to be able to run the tests and baseline their kernel?
The unit tests for libnvdimm [2] are meant to run against latest
mainline. There's some support for checking the kernel version, but I
don't promise that tests for new kernel functionality will be skipped
on older kernels without that specific enabling. I'd revisit that
stance if -stable tree maintainers or distros were looking to run the
tests.

[1]: https://lwn.net/Articles/654071/
[2]: https://github.com/pmem/ndctl/tree/master/test

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
                                         ` (2 preceding siblings ...)
  2016-07-11 19:58                       ` Dan Williams
@ 2016-07-11 20:24                       ` Kevin Hilman
  2016-07-11 23:03                         ` Guenter Roeck
  2016-07-28 21:09                         ` Laurent Pinchart
  2016-07-13  4:48                       ` Alex Shi
  4 siblings, 2 replies; 244+ messages in thread
From: Kevin Hilman @ 2016-07-11 20:24 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: James Bottomley, ksummit-discuss

Trond Myklebust <trondmy@primarydata.com> writes:

> So, we might as well make this a formal proposal.
>
> I’d like to propose that we have a discussion around how to make it
> easier to implement kernel unit tests. I’ve co-opted Dan as he has
> expressed both an interest and hands-on experience. :-)

Count me in.

I'm working on the kernelci.org project, where we're testing
mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
across ~30 unique SoC families: arm, arm64, x86.)

Right now, we're mainly doing basic boot tests, but are starting to run
kselftests on all these platforms as well.

Kevin

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:03               ` Theodore Ts'o
                                   ` (3 preceding siblings ...)
  2016-07-11 17:17                 ` Josh Boyer
@ 2016-07-11 22:42                 ` James Bottomley
  2016-07-20 17:50                 ` Stephen Hemminger
  5 siblings, 0 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-11 22:42 UTC (permalink / raw)
  To: Theodore Ts'o, Mark Brown; +Cc: ksummit-discuss, Jason Cooper

On Mon, 2016-07-11 at 13:03 -0400, Theodore Ts'o wrote:
> On Mon, Jul 11, 2016 at 04:13:00PM +0100, Mark Brown wrote:
> > On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:
> > 
> > > the latest stable kernel.  (But even if they do, apparently many
> > > device vendors aren't bothering to merge in changes from the 
> > > SOC's BSP kernel, even if the BSP kernel is getting -stable
> > > updates.)
> > 
> > It would be pretty irresponsible for device vendors to be merging 
> > BSP trees, they're generally development things with ongoing 
> > feature updates that might interact badly with things the system 
> > integrator has done rather than something stable enough to just
> > merge constantly.
> 
> So the question is who actually uses -stable kernels, and does it 
> make sense for it even to be managed in a git tree?
> 
> Very few people will actually be merging them, and in fact maybe
> having a patch queue which is checked into git might actually work
> better, since it sounds like most people are just cherry-picking
> specific patches.

Cherry picking from git is easy provided the descriptions are useful,
so I don't think maintaining patch queues would work.  I suspect that
even for people whose workflow is cherry-picking, a git tree is the best
input.

Conversely, I run both linux head and latest stable on my laptop, so if
you make stable not a git tree, you make it harder for me.  I've got to
confess I only boot latest stable if there's a problem with git head,
which has been really rare lately, so I'm not really a very good stable
tester ... (that's good, though, I feel).

James

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11 20:24                       ` Kevin Hilman
@ 2016-07-11 23:03                         ` Guenter Roeck
  2016-07-18  7:44                           ` Christian Borntraeger
  2016-07-28 21:09                         ` Laurent Pinchart
  1 sibling, 1 reply; 244+ messages in thread
From: Guenter Roeck @ 2016-07-11 23:03 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Mon, Jul 11, 2016 at 01:24:25PM -0700, Kevin Hilman wrote:
> Trond Myklebust <trondmy@primarydata.com> writes:
> 
> > So, we might as well make this a formal proposal.
> >
> > I’d like to propose that we have a discussion around how to make it
> > easier to implement kernel unit tests. I’ve co-opted Dan as he has
> > expressed both an interest and hands-on experience. :-)
> 
> Count me in.
> 
> I'm working on the kernelci.org project, where we're testing
> mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
> across ~30 unique SoC families: arm, arm64, x86.)
> 
> Right now, we're mainly doing basic boot tests, but are starting to run
> kselftests on all these platforms as well.
> 

Augmenting that: for my part, the interest would be to improve qemu-based
testing along the same lines (and maybe figure out if/how we can merge
kerneltests.org into kernelci.org).

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:15                 ` Dmitry Torokhov
  2016-07-11 17:20                   ` Theodore Ts'o
@ 2016-07-11 23:13                   ` Guenter Roeck
  1 sibling, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-11 23:13 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 10:15:06AM -0700, Dmitry Torokhov wrote:
> On Mon, Jul 11, 2016 at 01:03:33PM -0400, Theodore Ts'o wrote:
> > On Mon, Jul 11, 2016 at 04:13:00PM +0100, Mark Brown wrote:
> > > On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:
> > > 
> > > > the latest stable kernel.  (But even if they do, apparently many
> > > > device vendors aren't bothering to merge in changes from the SOC's BSP
> > > > kernel, even if the BSP kernel is getting -stable updates.)
> > > 
> > > It would be pretty irresponsible for device vendors to be merging BSP
> > > trees, they're generally development things with ongoing feature updates
> > > that might interact badly with things the system integrator has done
> > > rather than something stable enough to just merge constantly.
> > 
> > So the question is who actually uses -stable kernels,
> 
> Community-based distros definitely use stable, but that might be the
> latest stable, not the older stables that are out there.
> 
Also smaller companies that don't want to pay distribution vendors for
kernel maintenance (especially since some of those vendors won't let you
change the kernel). Those companies also tend to use the older stable
releases.

> > and does it make
> > sense for it even to be managed in a git tree?
> > 
> > Very few people will actually be merging them, and in fact maybe
> > having a patch queue which is checked into git might actually work
> > better, since it sounds like most people are just cherry-picking
> > specific patches.
> 
> Cherry picking a commit from a branch/remote is much nicer than fetching
> a part of [quilt based?] patch queue from git, unless I misunderstand
> what you are proposing.
> 
Yes, but it isn't really feasible for small companies, where the entire
kernel maintenance may be handled by one engineer. Using -stable is really
the only feasible option there.

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11  8:18           ` Jiri Kosina
@ 2016-07-11 23:32             ` Guenter Roeck
  0 siblings, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-11 23:32 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 10:18:01AM +0200, Jiri Kosina wrote:
> On Sat, 9 Jul 2016, Guenter Roeck wrote:
> 
> > As I suggested earlier, we'll have to find a way to convince companies 
> > to actively invest in QA.
> 
> The potential issue I see here is coverage.
> 
> Linus' tree greatly benefits from the "crowdsourcing" effect, in a sense 
> that everybody is, at the end of the day, interested in that tree to work 
> (because most of the stakeholders are going to include that codebase in 
> their product sooner or later).
> 
> That's not the case with stable codestreams; the interest there is much 
> more scattered.
> 
Scattered, but still existing. Either way, I don't think the crowdsourcing
model works well with stable kernels, because most of the stakeholders are
only interested in a very limited subset of stable branches. The challenge
will be to find a consistent way to test _all_ stable kernels, and to test
those branches much more thoroughly than is done today.

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11 19:58                       ` Dan Williams
@ 2016-07-12  9:35                         ` Jan Kara
  2016-07-13  4:56                           ` Dan Williams
  0 siblings, 1 reply; 244+ messages in thread
From: Jan Kara @ 2016-07-12  9:35 UTC (permalink / raw)
  To: Dan Williams; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Mon 11-07-16 12:58:50, Dan Williams wrote:
> * What does "we never regress userspace" mean when we have unit tests
> that are tightly coupled to the kernel?
> For example, there have been occasions where I have "regressed" a test
> case to improve kernel behavior with the knowledge that no real
> (non-test) application was dependent on the old behavior.

It's always a judgement call. Linus has always stated that the rule only
holds for real users using real applications. So if you are confident that
it is only the test that regresses, then that is fine.

> * Are tests only for developers, or should an end consumer of the
> kernel expect to be able to run the tests and baseline their kernel?
> The unit tests for libnvdimm [2] are meant to run against latest
> mainline. There's some support for checking the kernel version, but I
> don't promise that tests for new kernel functionality will be skipped
> on older kernels without that specific enabling. I'd revisit that
> stance if -stable tree maintainers or distros were looking to run the
> tests.

Well, in the end it is your call, but my experience with xfstests shows
that it is good to detect the case where the functionality is not supported
by the kernel and fail gracefully. Especially with enterprise
distributions you get a strange mix of kernel and userspace, and it is *very*
useful to be able to run testsuites there when testing backports etc. It is
a pita to figure out whether the test failed because of missing
functionality or because of a bug in your backport...

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:11                 ` Mark Brown
  2016-07-11 17:13                   ` Olof Johansson
@ 2016-07-13  1:08                   ` Geert Uytterhoeven
  1 sibling, 0 replies; 244+ messages in thread
From: Geert Uytterhoeven @ 2016-07-13  1:08 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, Jul 11, 2016 at 7:11 PM, Mark Brown <broonie@kernel.org> wrote:
> On Mon, Jul 11, 2016 at 01:03:33PM -0400, Theodore Ts'o wrote:
>> Very few people will actually be merging them, and in fact maybe
>> having a patch queue which is checked into git might actually work
>> better, since it sounds like most people are just cherry-picking
>> specific patches.
>
> I think at this point even if people are cherry picking patches it's
> probably still going to be easier for people to work with a git tree
> than anything else - the workflow for git cherry-pick, looking for
> dependent patches and so on is pretty clear, the upstream commit IDs are
> there if you prefer to go direct to them and if you really do want a raw
> patch stack then it's easy to translate into one.

+1

Personally, I hate that the LTSI tree is available only as patches.
Even ltsi-kernel.git is... a collection of patches, although I've just noticed
that the README there does explain how to (re)create a git tree from it.

Hence every time I want to test it, and build on top of it, I have to
import the patches into git myself...
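
(For reference, the dance is roughly the sketch below, built around
git quiltimport; the paths, tag and branch name are made up, and it
assumes the usual patches/ + series layout.)

#!/usr/bin/env python3
# Rough sketch: turn a quilt-style patch set (patches/ + series) into a
# git branch on top of a base tag, via git quiltimport.  The tree path,
# patches directory, tag and branch name are all made up.
import subprocess, sys

def import_series(tree, patches_dir, base_tag, branch="ltsi-test"):
    def git(*args):
        subprocess.run(("git", "-C", tree) + args, check=True)
    git("checkout", "-b", branch, base_tag)
    # Applies every patch listed in <patches_dir>/series, one commit each.
    git("quiltimport", "--patches", patches_dir)

if __name__ == "__main__":
    import_series(sys.argv[1], sys.argv[2], sys.argv[3])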

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
                                         ` (3 preceding siblings ...)
  2016-07-11 20:24                       ` Kevin Hilman
@ 2016-07-13  4:48                       ` Alex Shi
  2016-07-13  9:07                         ` Greg KH
  4 siblings, 1 reply; 244+ messages in thread
From: Alex Shi @ 2016-07-13  4:48 UTC (permalink / raw)
  To: Trond Myklebust, Dan Williams, ksummit-discuss; +Cc: James Bottomley

Count me in, please.

I have been working as the Linaro stable kernel maintainer for 3 years,
and ran the Intel Linux kernel performance project before this job.

I am wondering whether it would be possible to share a common base tree
that includes some widely wanted backported features. That would let us
share the testing and review, and so reduce bugs much more.

Thanks
Alex

On 07/11/2016 01:03 PM, Trond Myklebust wrote:
> So, we might as well make this a formal proposal.
>
> I’d like to propose that we have a discussion around how to make it easier to implement kernel unit tests. I’ve co-opted Dan as he has expressed both an interest and hands-on experience. :-)
>
> Cheers
>    Trond
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-12  9:35                         ` Jan Kara
@ 2016-07-13  4:56                           ` Dan Williams
  2016-07-13  9:04                             ` Jan Kara
  0 siblings, 1 reply; 244+ messages in thread
From: Dan Williams @ 2016-07-13  4:56 UTC (permalink / raw)
  To: Jan Kara; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Tue, Jul 12, 2016 at 2:35 AM, Jan Kara <jack@suse.cz> wrote:
> Well, in the end it is your call but my experience with xfstests shows that
> it is good to detect the case that the functionality is not supported by
> the kernel and fail gracefully. Because especially with enterprise
> distributions you get a strange mix of kernel and userspace and it is *very*
> useful to be able to run testsuites there when testing backports etc. It is
> a pita to figure out whether the test failed because of missing
> functionality of because of a bug in your backport...

I'm assuming you can gate tests based on filesystem feature flags?
For unit tests that go deeper, down to the individual kernel symbol level,
it's hard to predict the behavior of that symbol without knowing the exact
kernel version.  Unless we ship test-interface version / feature-flags
data in the base kernel?  That direction relates to the question about
whether test infrastructure should remain out-of-tree.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13  4:56                           ` Dan Williams
@ 2016-07-13  9:04                             ` Jan Kara
  0 siblings, 0 replies; 244+ messages in thread
From: Jan Kara @ 2016-07-13  9:04 UTC (permalink / raw)
  To: Dan Williams; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Tue 12-07-16 21:56:02, Dan Williams wrote:
> On Tue, Jul 12, 2016 at 2:35 AM, Jan Kara <jack@suse.cz> wrote:
> > Well, in the end it is your call but my experience with xfstests shows that
> > it is good to detect the case that the functionality is not supported by
> > the kernel and fail gracefully. Because especially with enterprise
> > distributions you get a strange mix of kernel and userspace and it is *very*
> > useful to be able to run testsuites there when testing backports etc. It is
> > a pita to figure out whether the test failed because of missing
> > functionality of because of a bug in your backport...
> 
> I'm assuming you can gate tests based on filesystem feature flags?

Yes, each test starts with 'require' checks - these check whether
the functionality the test needs from the kernel is available, and if any
of them fails, the test reports that it cannot run because this or that
functionality is missing.
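
(The pattern itself is easy to replicate outside of xfstests; the snippet
below is just a made-up illustration in Python, not our actual shell
helpers.)

#!/usr/bin/env python3
# Made-up illustration of the "require" gating pattern (not how xfstests
# itself is written - its checks are shell helpers): probe for what the
# test needs and report "not run" instead of "fail" when it is missing.
import os, sys

def require_fs(name):
    with open("/proc/filesystems") as f:
        if not any(line.split()[-1] == name for line in f):
            print("not run: kernel lacks %s support" % name)
            sys.exit(77)        # 77 is the automake "skipped" convention

def require_path(path, why):
    if not os.path.exists(path):
        print("not run: %s (%s is missing)" % (why, path))
        sys.exit(77)

require_fs("btrfs")
require_path("/sys/fs/btrfs/features", "no btrfs feature directory")
print("requirements met, running the real test...")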

> For unit tests that are deeper, at the individual kernel symbol level,
> it's hard to predict the behavior of that symbol outside of the kernel
> version.  Unless, we ship a test-interface version / feature-flags
> data in the base kernel?  That direction relates to the question about
> whether test infrastructure remain out-of-tree.

Yeah, so xfstests do not test the kernel at that deep level (I assume they
are not unit tests in the sense you use the term). They just use syscalls
to exercise the kernel, so checking whether the functionality is available
is much simpler.

So the tests you are speaking about are compiled into the kernel (possibly
as a module) so that they can exercise kernel functions directly, right?
Then I guess it doesn't make much sense to consider them out-of-tree - the
test module is IMHO just another module that uses the exported API. But I
don't really have any experience with tests at that deep a level in the
kernel.
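
As a concrete illustration of running such an in-tree test module - the
module name and config option here (CONFIG_TEST_PRINTF=m) are just one
example of the existing lib/test_*.ko modules:

  modprobe test_printf      # runs its checks at module load time
  dmesg | tail -n 20        # results (pass/fail counts) end up in the log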

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13  4:48                       ` Alex Shi
@ 2016-07-13  9:07                         ` Greg KH
  2016-07-13 12:37                           ` Alex Shi
  2016-07-13 14:34                           ` Mark Brown
  0 siblings, 2 replies; 244+ messages in thread
From: Greg KH @ 2016-07-13  9:07 UTC (permalink / raw)
  To: Alex Shi; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
> I am thinking if it's possible to share an basic tree which include some
> widely wanted backporting features. That could share the testing and review,
> then will reduce bugs much more.

Like LTSI already does today?  :)

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13  9:07                         ` Greg KH
@ 2016-07-13 12:37                           ` Alex Shi
  2016-07-13 19:59                             ` Olof Johansson
  2016-07-14  1:19                             ` Greg KH
  2016-07-13 14:34                           ` Mark Brown
  1 sibling, 2 replies; 244+ messages in thread
From: Alex Shi @ 2016-07-13 12:37 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss



On 07/13/2016 06:07 PM, Greg KH wrote:
> On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
>> I am thinking if it's possible to share an basic tree which include some
>> widely wanted backporting features. That could share the testing and review,
>> then will reduce bugs much more.
> Like LTSI already does today?  :)

It looks like we share some basic ideas on the backporting part. But
industry needs many more backported features, and new features that are
not yet upstream aren't started from there either, since such a tree
doesn't get upstream-level quality review from more eyes in the
community.

> greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13  9:07                         ` Greg KH
  2016-07-13 12:37                           ` Alex Shi
@ 2016-07-13 14:34                           ` Mark Brown
  2016-07-14  3:17                             ` Greg KH
  1 sibling, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-13 14:34 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 704 bytes --]

On Wed, Jul 13, 2016 at 06:07:39PM +0900, Greg KH wrote:
> On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:

> > I am thinking if it's possible to share an basic tree which include some
> > widely wanted backporting features. That could share the testing and review,
> > then will reduce bugs much more.

> Like LTSI already does today?  :)

There was a lot of pushback against LTSI; the most concrete bit I could
see was the inclusion of board support and vendor-specific drivers -
people doing products won't care so much, but people releasing source
weren't thrilled with the idea of it ending up either conflicting with
their internal work or showing up in the diffstat of what they release.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13 12:37                           ` Alex Shi
@ 2016-07-13 19:59                             ` Olof Johansson
  2016-07-13 22:23                               ` Alex Shi
  2016-07-14  1:19                             ` Greg KH
  1 sibling, 1 reply; 244+ messages in thread
From: Olof Johansson @ 2016-07-13 19:59 UTC (permalink / raw)
  To: Alex Shi; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 1246 bytes --]

On Wed, Jul 13, 2016 at 5:37 AM, Alex Shi <alex.shi@linaro.org> wrote:

>
>
> On 07/13/2016 06:07 PM, Greg KH wrote:
>
>> On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
>>
>>> I am thinking if it's possible to share an basic tree which include some
>>> widely wanted backporting features. That could share the testing and
>>> review,
>>> then will reduce bugs much more.
>>>
>> Like LTSI already does today?  :)
>>
>
> It looks we share some basic ideas on backporting part. But industry need
> much more backporting features. and new features which out of upstream
> aren't started from here, since it's a upstream quality without more eyes
> in community.


If you want more eyes AND more backporting, how about moving ahead to the
newer version instead? In the end, if you backport most of the code, you
end up with close to the same code base.

Doing security and other minimal fixes on -stable is a very different
endeavor from creating downstream trees full of feature backports.

(We only care about a few features, you might say -- but once you join up
with others, who care about a few but different features, you'll eventually
end up approximating the kernel from which you're backporting all these
features.)


-Olof

[-- Attachment #2: Type: text/html, Size: 2104 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13 19:59                             ` Olof Johansson
@ 2016-07-13 22:23                               ` Alex Shi
  0 siblings, 0 replies; 244+ messages in thread
From: Alex Shi @ 2016-07-13 22:23 UTC (permalink / raw)
  To: Olof Johansson; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 1584 bytes --]



On 07/14/2016 04:59 AM, Olof Johansson wrote:
>
>
> On Wed, Jul 13, 2016 at 5:37 AM, Alex Shi <alex.shi@linaro.org 
> <mailto:alex.shi@linaro.org>> wrote:
>
>
>
>     On 07/13/2016 06:07 PM, Greg KH wrote:
>
>         On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
>
>             I am thinking if it's possible to share an basic tree
>             which include some
>             widely wanted backporting features. That could share the
>             testing and review,
>             then will reduce bugs much more.
>
>         Like LTSI already does today?  :)
>
>
>     It looks we share some basic ideas on backporting part. But
>     industry need much more backporting features. and new features
>     which out of upstream aren't started from here, since it's a
>     upstream quality without more eyes in community.
>
>
> If you want more eyes AND more backporting, how about you move ahead 
> to the newer version instead? In the end, if you'll backport most of 
> the code you end up with close to the same code base.
>
> Doing security and minimal security fixes on -stable is a very 
> different endeavor than creating downstream trees full of feature 
> backports.
>
> (We only care about a few features, you might say -- but once you join 
> up with others, who care about a few but different features, you'll 
> eventually end up approximating the kernel from which you're 
> backporting all these features).
>

It highly depends on the criteria for backporting new features; in the
current LTSI, for example, there are not as many feature backports as
claimed.

Alex

[-- Attachment #2: Type: text/html, Size: 3317 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13 12:37                           ` Alex Shi
  2016-07-13 19:59                             ` Olof Johansson
@ 2016-07-14  1:19                             ` Greg KH
  2016-07-14  9:48                               ` Alex Shi
  1 sibling, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-07-14  1:19 UTC (permalink / raw)
  To: Alex Shi; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Jul 13, 2016 at 09:37:10PM +0900, Alex Shi wrote:
> 
> 
> On 07/13/2016 06:07 PM, Greg KH wrote:
> > On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
> > > I am thinking if it's possible to share an basic tree which include some
> > > widely wanted backporting features. That could share the testing and review,
> > > then will reduce bugs much more.
> > Like LTSI already does today?  :)
> 
> It looks we share some basic ideas on backporting part. But industry need
> much more backporting features. and new features which out of upstream
> aren't started from here, since it's a upstream quality without more eyes in
> community.

I have no idea what you mean by this.  Please give specific examples of
what you have problems with.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-13 14:34                           ` Mark Brown
@ 2016-07-14  3:17                             ` Greg KH
  2016-07-14 10:06                               ` Mark Brown
  0 siblings, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-07-14  3:17 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Jul 13, 2016 at 03:34:47PM +0100, Mark Brown wrote:
> On Wed, Jul 13, 2016 at 06:07:39PM +0900, Greg KH wrote:
> > On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
> 
> > > I am thinking if it's possible to share an basic tree which include some
> > > widely wanted backporting features. That could share the testing and review,
> > > then will reduce bugs much more.
> 
> > Like LTSI already does today?  :)
> 
> There was a lot of pushback against LTSI,

pushback from whom?

> the most concrete bit I could
> see was the inclusion of board support and vendor specific drivers -

That's exactly the goal of LTSI.

> people doing products won't care so much but people releasing source
> weren't thrilled with the idea of it ending up either conflicting with
> their internal work or showing up in the diffstat of what they release.

What is conflicting?  BSP and drivers for hardware that you don't use?

confused,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-14  1:19                             ` Greg KH
@ 2016-07-14  9:48                               ` Alex Shi
  2016-07-14  9:54                                 ` Ard Biesheuvel
  0 siblings, 1 reply; 244+ messages in thread
From: Alex Shi @ 2016-07-14  9:48 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss



On 07/14/2016 10:19 AM, Greg KH wrote:
> On Wed, Jul 13, 2016 at 09:37:10PM +0900, Alex Shi wrote:
>>
>> On 07/13/2016 06:07 PM, Greg KH wrote:
>>> On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
>>>> I am thinking if it's possible to share an basic tree which include some
>>>> widely wanted backporting features. That could share the testing and review,
>>>> then will reduce bugs much more.
>>> Like LTSI already does today?  :)
>> It looks we share some basic ideas on backporting part. But industry need
>> much more backporting features. and new features which out of upstream
>> aren't started from here, since it's a upstream quality without more eyes in
>> community.
> I have no idea what you mean by this.  Please give specific examples of
> what you have problems with.

The industry needs many more features on an LTS kernel for their products.
On the Linaro stable kernel 4.1, for example, we backported arm64 PCIe,
OPP v2, the writeback cgroup... 11 features in total. All of them came
from ARM, HiSilicon, QC, ZTE etc.

And in fact, hosting new features that target the upstream kernel there
isn't a good idea, since not many upstream developers like to look into
this tree or do testing on it.

So it looks like sharing more backports, rather than features targeted
at upstream, would fit industry needs better.
> thanks,
>
> greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-14  9:48                               ` Alex Shi
@ 2016-07-14  9:54                                 ` Ard Biesheuvel
  2016-07-14 14:13                                   ` Alex Shi
  0 siblings, 1 reply; 244+ messages in thread
From: Ard Biesheuvel @ 2016-07-14  9:54 UTC (permalink / raw)
  To: Alex Shi; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On 14 July 2016 at 11:48, Alex Shi <alex.shi@linaro.org> wrote:
>
>
> On 07/14/2016 10:19 AM, Greg KH wrote:
>>
>> On Wed, Jul 13, 2016 at 09:37:10PM +0900, Alex Shi wrote:
>>>
>>>
>>> On 07/13/2016 06:07 PM, Greg KH wrote:
>>>>
>>>> On Wed, Jul 13, 2016 at 01:48:15PM +0900, Alex Shi wrote:
>>>>>
>>>>> I am thinking if it's possible to share an basic tree which include
>>>>> some
>>>>> widely wanted backporting features. That could share the testing and
>>>>> review,
>>>>> then will reduce bugs much more.
>>>>
>>>> Like LTSI already does today?  :)
>>>
>>> It looks we share some basic ideas on backporting part. But industry need
>>> much more backporting features. and new features which out of upstream
>>> aren't started from here, since it's a upstream quality without more eyes
>>> in
>>> community.
>>
>> I have no idea what you mean by this.  Please give specific examples of
>> what you have problems with.
>
>
> The industry need much more features on LTS kernel for their product.
> Like on linaro stable kernel 4.1, we backported PCIe of arm64, opp v2,
> writeback cgroup... 11 features on that. All of them are come from arm,
> hisilicon, QC, zte etc.
>
> And in fact, hosting new features which target on upstream kernel isn't a
> good idea, since no much upstream guys like to look into this tree or do
> testing on this tree.
>
> So looks like to share more backporting instead of upstream target feature
> could fit more industry needs.

Alex,

I think Linaro's interpretation of a stable kernel is not very
relevant for this discussion. arm64 support in the Linux kernel is not
nearly as mature as support for the x86 architecture and other
features and/or subsystems, and this is why we have the LSK, which
consists of an otherwise stable kernel tree combined with more recent
changes specific to the arm64 architecture and various SoCs and
platforms that implement it.

I think this discussion is more about regressions in production
systems running stable kernels.

-- 
Ard.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-14  3:17                             ` Greg KH
@ 2016-07-14 10:06                               ` Mark Brown
  2016-07-15  0:22                                 ` Greg KH
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-14 10:06 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 985 bytes --]

On Thu, Jul 14, 2016 at 12:17:53PM +0900, Greg KH wrote:
> On Wed, Jul 13, 2016 at 03:34:47PM +0100, Mark Brown wrote:

> > There was a lot of pushback against LTSI,

> pushback from whom?

Linaro members who wanted the LSK.

> > the most concrete bit I could
> > see was the inclusion of board support and vendor specific drivers -

> That's exactly the goal of LTSI.

Right, which is a problem for some people.

> > people doing products won't care so much but people releasing source
> > weren't thrilled with the idea of it ending up either conflicting with
> > their internal work or showing up in the diffstat of what they release.

> What is conflicting?  BSP and drivers for hardware that you don't use?

No, hardware that you do use.  If LTSI is including changes for a driver
that's also being worked on in the vendor's own trees, then at some point
the two sets of changes are going to have to get merged, which makes the
workflow more stressful.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-14  9:54                                 ` Ard Biesheuvel
@ 2016-07-14 14:13                                   ` Alex Shi
  0 siblings, 0 replies; 244+ messages in thread
From: Alex Shi @ 2016-07-14 14:13 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust



On 07/14/2016 06:54 PM, Ard Biesheuvel wrote:
>
> Alex,
>
> I think Linaro's interpretation of a stable kernel is not very
> relevant for this discussion. arm64 support in the Linux kernel is not
> nearly as mature as support for the x86 architecture and other
> features and/or subsystems, and this is why we have the LSK, which
> consists of an otherwise stable kernel tree combined with more recent
> changes specific to the arm64 architecture and various SoCs and
> platforms that implement it.
>
> I think this discussion is more about regressions in production
> systems running stable kernels.

Yes, that's what I am talking about.

I didn't propose LSK as a candidate for a public industry stable kernel.
LSK is an example that shows how many more features are needed by
industry. Beyond the arm architecture specific features, there are still
plenty of architecture-independent features required by our members,
from QC, Huawei, etc.

Yes, enabling every required feature will make a stable kernel very big;
LSK has this problem too. The solution in LSK is to isolate each feature
on a separate branch and do the testing on a branch with everything
merged. That gives users a flexible choice -- pick up just the features
they need, while keeping as much testing coverage as possible. If the
feature requests are reasonably common, the collaboration saves everyone
time.
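
A rough sketch of the branch layout just described, with made-up branch
and feature names:

  # one topic branch per backported feature, all based on the same LTS tag
  git checkout -b topic/arm64-pcie v4.1
  git checkout -b topic/opp-v2     v4.1
  # ... backport the relevant upstream commits onto each topic branch ...

  # the everything-merged branch is what gets the full testing;
  # users can instead merge only the topic branches they actually need
  git checkout -b lsk-integration v4.1
  git merge topic/arm64-pcie topic/opp-v2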

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-14 10:06                               ` Mark Brown
@ 2016-07-15  0:22                                 ` Greg KH
  2016-07-15  0:51                                   ` Guenter Roeck
  0 siblings, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-07-15  0:22 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Thu, Jul 14, 2016 at 11:06:03AM +0100, Mark Brown wrote:
> On Thu, Jul 14, 2016 at 12:17:53PM +0900, Greg KH wrote:
> > On Wed, Jul 13, 2016 at 03:34:47PM +0100, Mark Brown wrote:
> 
> > > There was a lot of pushback against LTSI,
> 
> > pushback from whom?
> 
> Linaro members who wanted the LSK.

Ok, there's no need for everyone to use the same messy tree, but perhaps
Linaro could participate with LTSI to help make something that more
people can all use?  No need to keep duplicating the same work...

But this is way off-topic here, sorry.

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  0:22                                 ` Greg KH
@ 2016-07-15  0:51                                   ` Guenter Roeck
  2016-07-15  1:41                                     ` Greg KH
  2016-07-15 11:10                                     ` Mark Brown
  0 siblings, 2 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-15  0:51 UTC (permalink / raw)
  To: Greg KH, Mark Brown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 07/14/2016 05:22 PM, Greg KH wrote:
> On Thu, Jul 14, 2016 at 11:06:03AM +0100, Mark Brown wrote:
>> On Thu, Jul 14, 2016 at 12:17:53PM +0900, Greg KH wrote:
>>> On Wed, Jul 13, 2016 at 03:34:47PM +0100, Mark Brown wrote:
>>
>>>> There was a lot of pushback against LTSI,
>>
>>> pushback from whom?
>>
>> Linaro members who wanted the LSK.
>
> Ok, there's no need for everyone to use the same messy tree, but perhaps
> Linaro could participate with LTSI to help make something that more
> people can all use?  No need to keep duplicating the same work...
>
> But this is way off-topic here, sorry.
>

Maybe a separate topic, and not entirely feasible for the kernel summit,
but it might be worthwhile figuring out why companies are or are not
using LTSI. My major problem with it was always that it is just a collection
of patches, not a kernel tree, meaning merges or cherry-picks are non-trivial.
Sure, one can create a kernel tree from it, but that is not the same.

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  0:51                                   ` Guenter Roeck
@ 2016-07-15  1:41                                     ` Greg KH
  2016-07-15  2:56                                       ` Guenter Roeck
  2016-07-15 11:10                                     ` Mark Brown
  1 sibling, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-07-15  1:41 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, Jul 14, 2016 at 05:51:11PM -0700, Guenter Roeck wrote:
> On 07/14/2016 05:22 PM, Greg KH wrote:
> > On Thu, Jul 14, 2016 at 11:06:03AM +0100, Mark Brown wrote:
> > > On Thu, Jul 14, 2016 at 12:17:53PM +0900, Greg KH wrote:
> > > > On Wed, Jul 13, 2016 at 03:34:47PM +0100, Mark Brown wrote:
> > > 
> > > > > There was a lot of pushback against LTSI,
> > > 
> > > > pushback from whom?
> > > 
> > > Linaro members who wanted the LSK.
> > 
> > Ok, there's no need for everyone to use the same messy tree, but perhaps
> > Linaro could participate with LTSI to help make something that more
> > people can all use?  No need to keep duplicating the same work...
> > 
> > But this is way off-topic here, sorry.
> > 
> 
> Maybe a separate topic, and not entirely feasible for the kernel summit,
> but it might be worthwhile figuring out why companies are or are not
> using LTSI. My major problem with it was always that it is just a collection
> of patches, not a kernel tree, meaning merges or cherry-picks are non-trivial.
> Sure, one can create a kernel tree from it, but that is not the same.

It's maintained like most other "distro" kernels are, so I find your
annoyance about a quilt tree of patches odd.  But it is trivial to turn
it into a git tree; the scripts are included in the LTSI repo, which is
what some companies do with it today, while others take the built Yocto
packages and just use them.  At the LTSI meeting today in Tokyo, a
number of companies said they are relying on it and using it in shipping
products, so it seems to be useful to them :)

Personally, keeping a kernel tree as external patches is a much more
sane way to manage a kernel, in that it makes it easier to update the
base, and it gives people a huge hint just how far off of "mainline"
they really are.  Keeping everything in one branch, in one git tree,
hides all of that, and seems to cause lots of perception problems at
times (look at the 2.5 million line addition monstrosity that QCOM
publishes for older 3.10-based SoCs, and which people consume unknowingly,
as proof of that...)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  1:41                                     ` Greg KH
@ 2016-07-15  2:56                                       ` Guenter Roeck
  2016-07-15  4:29                                         ` Greg KH
  0 siblings, 1 reply; 244+ messages in thread
From: Guenter Roeck @ 2016-07-15  2:56 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 07/14/2016 06:41 PM, Greg KH wrote:
> On Thu, Jul 14, 2016 at 05:51:11PM -0700, Guenter Roeck wrote:
>> On 07/14/2016 05:22 PM, Greg KH wrote:
>>> On Thu, Jul 14, 2016 at 11:06:03AM +0100, Mark Brown wrote:
>>>> On Thu, Jul 14, 2016 at 12:17:53PM +0900, Greg KH wrote:
>>>>> On Wed, Jul 13, 2016 at 03:34:47PM +0100, Mark Brown wrote:
>>>>
>>>>>> There was a lot of pushback against LTSI,
>>>>
>>>>> pushback from whom?
>>>>
>>>> Linaro members who wanted the LSK.
>>>
>>> Ok, there's no need for everyone to use the same messy tree, but perhaps
>>> Linaro could participate with LTSI to help make something that more
>>> people can all use?  No need to keep duplicating the same work...
>>>
>>> But this is way off-topic here, sorry.
>>>
>>
>> Maybe a separate topic, and not entirely feasible for the kernel summit,
>> but it might be worthwhile figuring out why companies are or are not
>> using LTSI. My major problem with it was always that it is just a collection
>> of patches, not a kernel tree, meaning merges or cherry-picks are non-trivial.
>> Sure, one can create a kernel tree from it, but that is not the same.
>
> It's maintained like most other "distro" kernels are, so I find your
> annoyance about a quilt tree of patches odd.  But it is trivial to turn

Not annoyance. It just didn't seem practical. We tried using a patch series
in Yocto at my previous job, and it didn't work out well. We gave it up
pretty quickly and started to maintain the kernel in git instead.

> it into a git tree, the scripts are included in the LTSI repo, which is
> what some companies do with it today, others take the built Yocto
> packages and just use them.  At the LTSI meeting today in Tokyo, a
> number of companies said they are relying on it and using it in shipping
> products, so it seems to be useful to them :)
>

Good to hear that. I do wonder, though, if those companies use the tree
entirely or mostly unmodified, or if they have active kernel development.

> Personally, keeping a kernel tree as external patches is a much more
> sane way to manage a kernel, in that it makes it easy to update the base
> easier, and it gives people a huge hint just how far off of "mainline"
> they really are.  Keeping everything in one branch, in one git tree,
> hides all of that, and seems to cause lots of perception problems at
> times (look at the 2.5 million line addition monstrocity that QCOM
> publishes for older 3.10-based SoCs and people consume unknowingly as
> proof of that...)
>

At my previous job, we maintained ("only") a few hundred patches on top of the
mainline kernel. In that case, we _could_ have used LTSI instead of a stable
kernel as the basis. However, we did track the stable kernel, and "git merge"
was quite straightforward and painless to use. Even a rebase to new kernel
releases was easy and typically took just a few hours. This was only possible
by using a well-defined git tree as the basis. Using LTSI would just have
added complexity.
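
Roughly the flow I mean, with the product branch name made up (the
stable remote is the usual linux-stable tree):

  git remote add stable \
      git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  git fetch stable
  git checkout product-4.4
  git merge v4.4.14         # pull in the latest stable release, resolve
                            # the (usually few) conflicts, retest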

A different team at my previous employer used a distro which for all practical
purposes mandated using a series of patches maintained in Yocto to be applied
on top of the vendor kernel. The development model this team used was to check
out the kernel in Yocto (with existing patches applied), to make local changes,
and to apply those changes as an additional patch or patches to the Yocto tree.
To me that seems to be a pretty complicated development model. It makes it hard
to just look at the kernel source, and it makes active development quite
difficult - even more so if a patch has to be applied to multiple branches.

Overall, I cannot imagine that it is even possible to use quilt trees as the
basis for development in a company with active kernel development, even more
so if a large number of engineers and/or a number of branches are involved.
Sure, the QCOM example may be extreme, but do you really think that writing
those 2.5M LOC would have been possible if QCOM had used quilt trees instead
of git? Using quilt would for sure have prevented them from writing those
2.5M LOC, but then there would be nothing. That doesn't sound like a feasible
alternative either.

Not that I like that chromeos-4.4 already has some 4,400+ patches on top of 4.4.14;
I would rather see those in the upstream kernel (to be fair, more than half of
those patches are backports). Maintaining such a kernel in a quilt tree? Not really.

Of course, if the goal is to provide a set of patches to companies who don't
actively modify the kernel, using quilt trees might be an acceptable or even
a better option. However, again, I don't think it is a feasible alternative
to git if active development is involved.

Thanks,
Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  2:56                                       ` Guenter Roeck
@ 2016-07-15  4:29                                         ` Greg KH
  2016-07-15  5:52                                           ` NeilBrown
  2016-07-21  7:13                                           ` Daniel Vetter
  0 siblings, 2 replies; 244+ messages in thread
From: Greg KH @ 2016-07-15  4:29 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, Jul 14, 2016 at 07:56:43PM -0700, Guenter Roeck wrote:
> Overall, I can not imagine that it is even possible to use quilt trees as basis
> for development in a company with if active kernel development, even more so
> if a large number of engineers and/or a number of branches are involved.
> Sure, the QCOM example may be extreme, but do you really think that writing
> those 2.5M LOC would have been possible if QCOM had used Quilt trees instead
> of git ? Using Quilt would for sure have prevented them from writing those
> 2.5M LOC, but then there would be nothing. That doesn't sound like a feasible
> alternative either.

It is possible; look at the Red Hat and SuSE kernel development teams.
Yes, in the end, most of the patches are backports from upstream, but
during new releases they use quilt for all of their work, adding and
removing and updating patches all the time.

There are the usual merge issues with doing that, but for an SoC, I
don't think that would be all that hard given that almost all patches
are driver/subsystem-specific and don't touch other places.

It does take a better calibre of developer to do this type of thing,
which might be a harder thing to deal with at some SoC vendors :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  4:29                                         ` Greg KH
@ 2016-07-15  5:52                                           ` NeilBrown
  2016-07-15  6:14                                             ` Greg KH
                                                               ` (3 more replies)
  2016-07-21  7:13                                           ` Daniel Vetter
  1 sibling, 4 replies; 244+ messages in thread
From: NeilBrown @ 2016-07-15  5:52 UTC (permalink / raw)
  To: Greg KH, Guenter Roeck; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 2938 bytes --]

On Fri, Jul 15 2016, Greg KH wrote:

> On Thu, Jul 14, 2016 at 07:56:43PM -0700, Guenter Roeck wrote:
>> Overall, I can not imagine that it is even possible to use quilt trees as basis
>> for development in a company with if active kernel development, even more so
>> if a large number of engineers and/or a number of branches are involved.
>> Sure, the QCOM example may be extreme, but do you really think that writing
>> those 2.5M LOC would have been possible if QCOM had used Quilt trees instead
>> of git ? Using Quilt would for sure have prevented them from writing those
>> 2.5M LOC, but then there would be nothing. That doesn't sound like a feasible
>> alternative either.
>
> It is possible, look at the Red Hat and SuSE kernel development teams.
> Yes, in the end, most of the patches are backports from upstream, but

You are glossing over a key point.  We (or at least I as a SUSE kernel
developer) don't use quilt for development because, like Guenter says,
it would be too clumsy.  I do development upstream in git.  Upstream first.
And I have scripts to help turn the result into something suitable for
quilt, making the use of quilt a pain rather than a nightmare.

I do find quilt useful when backporting a series of patches so that I
can resolve the conflicts on each patch individually and move backwards
and forwards through the list of patches.  I don't think git has an easy
way to store a branch of patches-that-I-need-to-apply and to then give
me one at a time, removing them from the branch.  I could use 'stgit'
for that if necessary, though it is very tempting to write something
that is better integrated with git.
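
For reference, the quilt round-trip I have in mind is roughly:

  quilt push           # apply the next patch in the series
  quilt push -f        # or force it on, leaving .rej files behind
  # ... resolve the rejects by hand ...
  quilt refresh        # fold the fixups back into that patch
  quilt pop            # step backwards through the series again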

> during new releases they use quilt for all of their work, adding and
> removing and updating patches all the time.

So you are saying quilt is good for release management, and Guenter is
saying it is bad for development.  Maybe you are in agreement.

It probably is quite useful when pulling in a new -stable base.  We
typically have a bunch of patches that we applied before they came out
in -stable, and just removing them has some value ... though
occasionally I do wonder "what happened to that patch..... oh, stable!".

Personally, I would rather do all my kernel development (and non-kernel
development) using git and git only.  Other engineers might have
different opinions.  But we work with what we have.

NeilBrown

>
> There are the usual merge issues with doing that, but for an SoC, I
> don't think that would be all that hard given that almost all patches
> are driver/subsystem-specific and don't touch other places.
>
> It does take a better calibre of developer to do this type of thing,
> that might be a harder thing to deal with at some SoC vendors :)
>
> thanks,
>
> greg k-h
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 818 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  5:52                                           ` NeilBrown
@ 2016-07-15  6:14                                             ` Greg KH
  2016-07-15  7:02                                               ` Jiri Kosina
  2016-07-15  6:19                                             ` Rik van Riel
                                                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-07-15  6:14 UTC (permalink / raw)
  To: NeilBrown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, Jul 15, 2016 at 03:52:39PM +1000, NeilBrown wrote:
> > during new releases they use quilt for all of their work, adding and
> > removing and updating patches all the time.
> 
> So you are saying quilt is good for release management, and Guenter is
> saying it is bad for development.  Maybe you are in agreement.

Heh, yes, I think we are in agreement.  The "fun" thing is, people take
the thing you release and do development on it.  So the developers want
the output of your release in a format that they can work with.

I talked with Geert today about this, and he gave me some ideas for how
to turn the output of the LTSI tree into a git tree that people can work
off of.  Much like I've started to do now with the stable trees, and the
-rc git tree of patches built from my quilt series.  I'll work on this
over time and see how that goes.
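
Not the LTSI scripts themselves, but as a sketch of the generic approach
(base tag and patches directory are illustrative), stock git can already
do this conversion:

  # start a branch at the base the quilt series applies to
  git checkout -b ltsi-4.4 v4.4
  # apply the whole series as individual commits, taking the author and
  # changelog from each patch header
  git quiltimport --patches ../patches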

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  5:52                                           ` NeilBrown
  2016-07-15  6:14                                             ` Greg KH
@ 2016-07-15  6:19                                             ` Rik van Riel
  2016-07-15 12:17                                               ` Mark Brown
  2016-07-15  6:32                                             ` James Bottomley
  2016-07-15 11:24                                             ` Vlastimil Babka
  3 siblings, 1 reply; 244+ messages in thread
From: Rik van Riel @ 2016-07-15  6:19 UTC (permalink / raw)
  To: NeilBrown, Greg KH, Guenter Roeck
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1780 bytes --]

On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
> On Fri, Jul 15 2016, Greg KH wrote:
> > On Thu, Jul 14, 2016 at 07:56:43PM -0700, Guenter Roeck wrote:
> > > Overall, I can not imagine that it is even possible to use quilt
> > > trees as basis
> > > for development in a company with if active kernel development,
> > > even more so
> > > if a large number of engineers and/or a number of branches are
> > > involved.
> > > Sure, the QCOM example may be extreme, but do you really think
> > > that writing
> > > those 2.5M LOC would have been possible if QCOM had used Quilt
> > > trees instead
> > > of git ? Using Quilt would for sure have prevented them from
> > > writing those
> > > 2.5M LOC, but then there would be nothing. That doesn't sound
> > > like a feasible
> > > alternative either.
> > 
> > It is possible, look at the Red Hat and SuSE kernel development
> > teams.
> > Yes, in the end, most of the patches are backports from upstream,
> > but
> 
> You are glossing over a key point.  We (or at least I as a SUSE
> kernel
> developer) don't use quilt for development because, like Guenter
> says,
> it would be too clumsy.  I do development upstream if git.  Upstream
> first.

The same is true for Red Hat. We have had an "upstream first"
policy in place for over a decade now.

RHEL is just not where development happens, because development
happens upstream.

RHEL is also developed on a git tree nowadays, because there is
no need to extract patches from RHEL, since the code came from
upstream to begin with.

It sounds like the embedded people are causing themselves a lot
of pain. Pain the distro people got all too familiar with a decade
ago, and decided to leave behind.

-- 

All Rights Reversed.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  5:52                                           ` NeilBrown
  2016-07-15  6:14                                             ` Greg KH
  2016-07-15  6:19                                             ` Rik van Riel
@ 2016-07-15  6:32                                             ` James Bottomley
  2016-07-15  7:01                                               ` NeilBrown
  2016-07-15 11:05                                               ` Geert Uytterhoeven
  2016-07-15 11:24                                             ` Vlastimil Babka
  3 siblings, 2 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-15  6:32 UTC (permalink / raw)
  To: NeilBrown, Greg KH, Guenter Roeck; +Cc: Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1047 bytes --]

On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
> I do find quilt useful when backporting a series of patches so that I
> can resolve the conflicts on each patch individually and move 
> backwards and forwards through the list of patches.  I don't think 
> git has an easy way to store a branch of patches-that-I-need-to-apply 
> and to then give me one at a time, removing them from the branch.  I 
> could use 'stgit' for that if necessary, though it is very tempting 
> to write something that is better integrated with git.

Git cherry and git cherry-pick can do this.  Git cherry-pick can take a
range of patches to apply, so you can select a bunch of patches to
backport or otherwise move all at once.  Git cherry can tell you (to
within an approximation, since it uses matching) what patches are
common between two branches even if they have differing commit ids.

The format is a bit frightening if you're not used to it, which is why
stgit may be a better user experience, but you can do it with basic
git.
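
Roughly, with placeholder branch names:

  # list commits on 'backports' that are not yet in 'product',
  # matched by patch content rather than commit id
  git cherry -v product backports
  #   '-' lines look already present, '+' lines are still to pick

  # pick a whole range of commits in one go, recording the origin with -x
  git cherry-pick -x A..B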

James

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  6:32                                             ` James Bottomley
@ 2016-07-15  7:01                                               ` NeilBrown
  2016-07-15  7:28                                                 ` James Bottomley
  2016-07-15  7:36                                                 ` Dmitry Torokhov
  2016-07-15 11:05                                               ` Geert Uytterhoeven
  1 sibling, 2 replies; 244+ messages in thread
From: NeilBrown @ 2016-07-15  7:01 UTC (permalink / raw)
  To: James Bottomley, Greg KH, Guenter Roeck; +Cc: Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 2006 bytes --]

On Fri, Jul 15 2016, James Bottomley wrote:

> On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
>> I do find quilt useful when backporting a series of patches so that I
>> can resolve the conflicts on each patch individually and move 
>> backwards and forwards through the list of patches.  I don't think 
>> git has an easy way to store a branch of patches-that-I-need-to-apply 
>> and to then give me one at a time, removing them from the branch.  I 
>> could use 'stgit' for that if necessary, though it is very tempting 
>> to write something that is better integrated with git.
>
> Git cherry and git cherry-pick can do this.  Git cherry-pick can take a
> range of patches to apply, so you can select a bunch of patches to
> backport or otherwise move all at once.  Git cherry can tell you (to
> within an approximation, since it uses matching) what patches are
> common between two branches even if they have differing commit ids.
>
> The format is a bit frightening if you're not used to it, which is why
> stgit may be a better user experience, but you can do it with basic
> git.

I wasn't aware of "git cherry".  It certainly could be useful, but based
on the man page it would get confused by modifications made to resolve
conflicts.
If "git cherry-pick" auto-added an "upstream HASHID" line to the comment, and
if "git cherry" used that to understand that two commits were "the
same", then it would be a lot closer.

Then a command, maybe "git cherry-pick" with no args, which did the
equivalent of: 
  git cherry-pick `git cherry | head -n1`

would almost work for "quilt push", and the "git rerere" thing (which I
almost understand) would mean that "git reset --hard HEAD^" would work
for "git pop" (or "git cherry-pop").

I'd probably want some way to record the upstream and limit commits for
a particular session.  e.g.
   git cherry start XX YY

then "git cherry-pick" and "git cherry-pop" would DoTheRightThing.
Maybe.
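
Something like the following is roughly what I'd want to emulate with
plain git today, assuming "git rerere" is enabled and with XX..YY
standing in for the range being backported:

  git config rerere.enabled true

  # "push": apply the oldest commit from XX..YY not yet in HEAD
  git cherry-pick -x $(git cherry HEAD YY XX | awk '/^\+/ {print $2; exit}')

  # "pop": drop the top commit again; rerere remembers the conflict
  # resolution for the next time the same patch is applied
  git reset --hard HEAD^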

Thanks,
NeilBrown

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 818 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  6:14                                             ` Greg KH
@ 2016-07-15  7:02                                               ` Jiri Kosina
  2016-07-15 11:42                                                 ` Greg KH
  0 siblings, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-07-15  7:02 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, 15 Jul 2016, Greg KH wrote:

> > > during new releases they use quilt for all of their work, adding and
> > > removing and updating patches all the time.
> > 
> > So you are saying quilt is good for release management, and Guenter is
> > saying it is bad for development.  Maybe you are in agreement.
> 
> Heh, yes, I think we are in agreement.  The "fun" thing is, people take
> the thing you release and do development on it.  So the developers want
> the output of your release in a format that they can do work with.
> 
> I talked with Geert today about this, and he gave me some ideas for how
> to make the output of the LTSI tree in a git tree that people can work
> off of.  Much like I've started to do now with the stable trees, and the
> -rc git tree of patches built from my quilt series.  I'll work on this
> over time and see how that goes.

FWIW what we do in SUSE is that we actually have our kernels maintained as 
a quilt series (in git), but at the same time we are automatically 
generating a proper non-rebasing git tree from that series, so that our 
partners can work on a proper git tree. I think it's a rather 
successful model.

If you'd like to see this in practice, then this is how the "primary" tree 
(to which developers are actually pushing patches backported from 
upstream) looks like:

	http://kernel.suse.com/cgit/kernel-source/tree/?h=SLE12-SP2
	http://kernel.suse.com/cgit/kernel-source/log/?h=SLE12-SP2

and this is how the auto-generated proper git tree looks like

	http://kernel.suse.com/cgit/kernel/tree/?h=SLE12-SP2
	http://kernel.suse.com/cgit/kernel/tree/?h=SLE12-SP2

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  7:01                                               ` NeilBrown
@ 2016-07-15  7:28                                                 ` James Bottomley
  2016-07-15  7:36                                                 ` Dmitry Torokhov
  1 sibling, 0 replies; 244+ messages in thread
From: James Bottomley @ 2016-07-15  7:28 UTC (permalink / raw)
  To: NeilBrown, Greg KH, Guenter Roeck; +Cc: Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 2845 bytes --]

On Fri, 2016-07-15 at 17:01 +1000, NeilBrown wrote:
> On Fri, Jul 15 2016, James Bottomley wrote:
> 
> > On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
> > > I do find quilt useful when backporting a series of patches so
> > > that I
> > > can resolve the conflicts on each patch individually and move 
> > > backwards and forwards through the list of patches.  I don't
> > > think 
> > > git has an easy way to store a branch of patches-that-I-need-to
> > > -apply 
> > > and to then give me one at a time, removing them from the branch.
> > >   I 
> > > could use 'stgit' for that if necessary, though it is very
> > > tempting 
> > > to write something that is better integrated with git.
> > 
> > Git cherry and git cherry-pick can do this.  Git cherry-pick can
> > take a
> > range of patches to apply, so you can select a bunch of patches to
> > backport or otherwise move all at once.  Git cherry can tell you
> > (to
> > within an approximation, since it uses matching) what patches are
> > common between two branches even if they have differing commit ids.
> > 
> > The format is a bit frightening if you're not used to it, which is
> > why
> > stgit may be a better user experience, but you can do it with basic
> > git.
> 
> I wasn't aware of "git cherry".  It certainly could be useful, but 
> based non the man page it would get confused by modifications made to
> resolve conflicts.

I know, currently the matching is based on the sha1 sums of the patch,
less certain things which might change, like line numbers.  However, we
use similarity matching in git merges to detect file moves; we could do
the same thing in git cherry to detect patches which look about the same
(probably we'd need some sort of flag to specify the degree of
similarity).

> If "get cherry-pick" auto-added an "upstream HASHID" line to the 
> comment, and if "git cherry" used that to understand that two commits 
> where "the same", then it would be a lot closer.

it does, just not by default, see -x option.

James


> Then a command, maybe "git cherry-pick" with no args, which did the
> equivalent of: 
>   git cherry-pick `git cherry | head -n1`
> 
> would almost work for "quilt push", and the "git rerere" thing (which
> I
> almost understand) would mean that "git reset --hard HEAD^" would
> work
> for "git pop" (or "git cherry-pop").
> 
> I'd probably want some way to record the upstream and limit commits
> for
> a particular session.  e.g.
>    git cherry start XX YY
> 
> then "git cherry-pick" and "git cherry-pop" would DoTheRightThing.
> Maybe.
> 
> Thanks,
> NeilBrown
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  7:01                                               ` NeilBrown
  2016-07-15  7:28                                                 ` James Bottomley
@ 2016-07-15  7:36                                                 ` Dmitry Torokhov
  2016-07-15  9:29                                                   ` NeilBrown
  1 sibling, 1 reply; 244+ messages in thread
From: Dmitry Torokhov @ 2016-07-15  7:36 UTC (permalink / raw)
  To: NeilBrown; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Fri, Jul 15, 2016 at 05:01:33PM +1000, NeilBrown wrote:
> On Fri, Jul 15 2016, James Bottomley wrote:
> 
> > On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
> >> I do find quilt useful when backporting a series of patches so that I
> >> can resolve the conflicts on each patch individually and move 
> >> backwards and forwards through the list of patches.  I don't think 
> >> git has an easy way to store a branch of patches-that-I-need-to-apply 
> >> and to then give me one at a time, removing them from the branch.  I 
> >> could use 'stgit' for that if necessary, though it is very tempting 
> >> to write something that is better integrated with git.
> >
> > Git cherry and git cherry-pick can do this.  Git cherry-pick can take a
> > range of patches to apply, so you can select a bunch of patches to
> > backport or otherwise move all at once.  Git cherry can tell you (to
> > within an approximation, since it uses matching) what patches are
> > common between two branches even if they have differing commit ids.
> >
> > The format is a bit frightening if you're not used to it, which is why
> > stgit may be a better user experience, but you can do it with basic
> > git.
> 
> I wasn't aware of "git cherry".  It certainly could be useful, but based
> on the man page it would get confused by modifications made to resolve
> conflicts.
> If "get cherry-pick" auto-added an "upstream HASHID" line to the comment, and

"git cherry-pick -x <commit>" does this.

> if "git cherry" used that to understand that two commits where "the
> same", then it would be a lot closer.

That would be nice.

> 
> Then a command, maybe "git cherry-pick" with no args, which did the
> equivalent of: 
>   git cherry-pick `git cherry | head -n1`
> 
> would almost work for "quilt push", and the "git rerere" thing (which I
> almost understand) would mean that "git reset --hard HEAD^" would work
> for "git pop" (or "git cherry-pop").
> 
> I'd probably want some way to record the upstream and limit commits for
> a particular session.  e.g.
>    git cherry start XX YY
> 
> then "git cherry-pick" and "git cherry-pop" would DoTheRightThing.
> Maybe.
> 

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 13:33               ` Guenter Roeck
@ 2016-07-15  9:27                 ` Zefan Li
  2016-07-15 13:52                   ` Guenter Roeck
  0 siblings, 1 reply; 244+ messages in thread
From: Zefan Li @ 2016-07-15  9:27 UTC (permalink / raw)
  To: Guenter Roeck, Jiri Kosina, Takashi Iwai; +Cc: James Bottomley, ksummit-discuss

On 2016/7/10 21:33, Guenter Roeck wrote:
> On 07/10/2016 03:20 AM, Jiri Kosina wrote:
>> On Sun, 10 Jul 2016, Takashi Iwai wrote:
>>
>>> IMO, we need a really better QA before releasing stable trees.  They are
>>> all fixes, yes, but they aren't always fixes for stable trees, in
>>> reality.
>>
>> I agree.
>>
>> BTW, how much coverage does -stable get from Fengguang's 0day robot? I
>> think that as most of the stable tress don't really use the git workflow,
>> the trees are being pushed out to git.kernel.org only shortly before
>> actual release, so the 0day bot doesn't have enough time to catch up; but
>> I have to admit I don't really know how exactly the timing and flow of
>> patches works here.
>>
> 
> Greg tends to update his trees on a quite regular basis, as he applies patches.
> I don't really know for sure about the others, but overall my impression is
> that there tends to be a flurry of patches applied in the day before a stable
> release candidate is announced.
> 

I also set up a testing branch for 3.4.y so that Fengguang's 0day can test it
before I send out stable-rc1.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  7:36                                                 ` Dmitry Torokhov
@ 2016-07-15  9:29                                                   ` NeilBrown
  2016-07-15 16:08                                                     ` Dmitry Torokhov
  0 siblings, 1 reply; 244+ messages in thread
From: NeilBrown @ 2016-07-15  9:29 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 2498 bytes --]

On Fri, Jul 15 2016, Dmitry Torokhov wrote:

> On Fri, Jul 15, 2016 at 05:01:33PM +1000, NeilBrown wrote:
>> On Fri, Jul 15 2016, James Bottomley wrote:
>> 
>> > On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
>> >> I do find quilt useful when backporting a series of patches so that I
>> >> can resolve the conflicts on each patch individually and move 
>> >> backwards and forwards through the list of patches.  I don't think 
>> >> git has an easy way to store a branch of patches-that-I-need-to-apply 
>> >> and to then give me one at a time, removing them from the branch.  I 
>> >> could use 'stgit' for that if necessary, though it is very tempting 
>> >> to write something that is better integrated with git.
>> >
>> > Git cherry and git cherry-pick can do this.  Git cherry-pick can take a
>> > range of patches to apply, so you can select a bunch of patches to
>> > backport or otherwise move all at once.  Git cherry can tell you (to
>> > within an approximation, since it uses matching) what patches are
>> > common between two branches even if they have differing commit ids.
>> >
>> > The format is a bit frightening if you're not used to it, which is why
>> > stgit may be a better user experience, but you can do it with basic
>> > git.
>> 
>> I wasn't aware of "git cherry".  It certainly could be useful, but based
>> on the man page it would get confused by modifications made to resolve
>> conflicts.
>> If "get cherry-pick" auto-added an "upstream HASHID" line to the comment, and
>
> "git cherry-pick -x <commit>" does this.

 From the man page
     This is done only for cherry picks without conflicts.
 making it fairly useless for my use-case.

 Thanks anyway,
 NeilBrown


>
>> if "git cherry" used that to understand that two commits where "the
>> same", then it would be a lot closer.
>
> That would be nice.
>
>> 
>> Then a command, maybe "git cherry-pick" with no args, which did the
>> equivalent of: 
>>   git cherry-pick `git cherry | head -n1`
>> 
>> would almost work for "quilt push", and the "git rerere" thing (which I
>> almost understand) would mean that "git reset --hard HEAD^" would work
>> for "git pop" (or "git cherry-pop").
>> 
>> I'd probably want some way to record the upstream and limit commits for
>> a particular session.  e.g.
>>    git cherry start XX YY
>> 
>> then "git cherry-pick" and "git cherry-pop" would DoTheRightThing.
>> Maybe.
>> 
>
> Thanks.
>
> -- 
> Dmitry

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 818 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  6:32                                             ` James Bottomley
  2016-07-15  7:01                                               ` NeilBrown
@ 2016-07-15 11:05                                               ` Geert Uytterhoeven
  2016-07-15 12:35                                                 ` James Bottomley
  1 sibling, 1 reply; 244+ messages in thread
From: Geert Uytterhoeven @ 2016-07-15 11:05 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss, Trond Myklebust

On Fri, Jul 15, 2016 at 8:32 AM, James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
>> I do find quilt useful when backporting a series of patches so that I
>> can resolve the conflicts on each patch individually and move
>> backwards and forwards through the list of patches.  I don't think
>> git has an easy way to store a branch of patches-that-I-need-to-apply
>> and to then give me one at a time, removing them from the branch.  I
>> could use 'stgit' for that if necessary, though it is very tempting
>> to write something that is better integrated with git.
>
> Git cherry and git cherry-pick can do this.  Git cherry-pick can take a
> range of patches to apply, so you can select a bunch of patches to
> backport or otherwise move all at once.  Git cherry can tell you (to
> within an approximation, since it uses matching) what patches are
> common between two branches even if they have differing commit ids.

... which is basically the same as creating a new branch matching your old
private tree, and rebasing that --onto the new upstream.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  0:51                                   ` Guenter Roeck
  2016-07-15  1:41                                     ` Greg KH
@ 2016-07-15 11:10                                     ` Mark Brown
  2016-07-15 11:40                                       ` Greg KH
  1 sibling, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-15 11:10 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 1328 bytes --]

On Thu, Jul 14, 2016 at 05:51:11PM -0700, Guenter Roeck wrote:
> On 07/14/2016 05:22 PM, Greg KH wrote:
> > On Thu, Jul 14, 2016 at 11:06:03AM +0100, Mark Brown wrote:

> > Ok, there's no need for everyone to use the same messy tree, but perhaps
> > Linaro could participate with LTSI to help make something that more
> > people can all use?  No need to keep duplicating the same work...

> > But this is way off-topic here, sorry.

> Maybe a separate topic, and not entirely feasible for the kernel summit,
> but it might be worthwhile figuring out why companies are or are not
> using LTSI. My major problem with it was always that it is just a collection

I do think that could be a useful topic to cover in stable discussions
at KS, we've always focused on the stable trees but there's a much
broader spectrum of work going on there.

> of patches, not a kernel tree, meaning merges or cherry-picks are non-trivial.
> Sure, one can create a kernel tree from it, but that is not the same.

This is actually the main reason why I've never got around to pushing
things back into LTSI (it has been a little while since I last did that
admittedly).  The effort involved in figuring out the tooling for LTSI
always got in the way before anything productive came of it, having a
directly usable git tree would be *so* much easier.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  5:52                                           ` NeilBrown
                                                               ` (2 preceding siblings ...)
  2016-07-15  6:32                                             ` James Bottomley
@ 2016-07-15 11:24                                             ` Vlastimil Babka
  2016-07-28 22:07                                               ` Laurent Pinchart
  3 siblings, 1 reply; 244+ messages in thread
From: Vlastimil Babka @ 2016-07-15 11:24 UTC (permalink / raw)
  To: NeilBrown, Greg KH, Guenter Roeck
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 07/15/2016 07:52 AM, NeilBrown wrote:
> On Fri, Jul 15 2016, Greg KH wrote:
> 
>> On Thu, Jul 14, 2016 at 07:56:43PM -0700, Guenter Roeck wrote:
>>> Overall, I can not imagine that it is even possible to use quilt trees as a basis
>>> for development in a company with active kernel development, even more so
>>> if a large number of engineers and/or a number of branches are involved.
>>> Sure, the QCOM example may be extreme, but do you really think that writing
>>> those 2.5M LOC would have been possible if QCOM had used Quilt trees instead
>>> of git ? Using Quilt would for sure have prevented them from writing those
>>> 2.5M LOC, but then there would be nothing. That doesn't sound like a feasible
>>> alternative either.
>>
>> It is possible, look at the Red Hat and SuSE kernel development teams.
>> Yes, in the end, most of the patches are backports from upstream, but
> 
> You are glossing over a key point.  We (or at least I as a SUSE kernel
> developer) don't use quilt for development because, like Guenter says,
> it would be too clumsy.  I do development upstream in git.  Upstream first.
> And I have scripts to help turn the result into something suitable for
> quilt, making the use of quilt a pain rather than a nightmare.
> 
> I do find quilt useful when backporting a series of patches so that I
> can resolve the conflicts on each patch individually and move backwards
> and forwards through the list of patches.  I don't think git has an easy
> way to store a branch of patches-that-I-need-to-apply and to then give
> me one at a time, removing them from the branch.  I could use 'stgit'
> for that if necessary, though it is very tempting to write something
> that is better integrated with git.

I think (but have never actually tried it yet) this should be somehow
possible with git rebase --interactive and git rebase --edit-todo (where
the editing of the todo would be automatic via some wrappers). You
should be able to put any commits in the todo to "pick" (not just those
you are rebasing), effectively cherry-picking them. The "quilt push"
equivalent becomes "git commit" and/or "git rebase --continue". (Which
reminds me: I can't count how many times I've used git commit --amend
instead of rebase --continue after resolving a conflict, which amended
the *previous* commit, grrr. An intelligent wrapper over that would be
great, so you don't have to remember whether there was a conflict and
which command is thus appropriate.) "quilt pop" would take the HEAD, put
it back as the first thing to "pick" in the rebase todo, and do git
reset --hard HEAD^.
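
A rough sketch of what I mean, completely untested, with the commit ids
of course being placeholders:

# start an "empty" interactive rebase; the todo comes up as "noop",
# replace it with the patches you want queued, one per line, e.g.:
#     edit <first-commit-to-backport>
#     edit <second-commit-to-backport>
# ("edit" applies one commit and then stops, i.e. one "quilt push" each)
git rebase -i HEAD
# apply the next queued patch (resolving conflicts first if needed)
git rebase --continue
# reorder what is left in the queue, or add more patches, at any stop
git rebase --edit-todo
# and roughly "quilt pop" for the patch that was just applied
git reset --hard HEAD^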

Anyway sorry for adding to the OT :)

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 11:10                                     ` Mark Brown
@ 2016-07-15 11:40                                       ` Greg KH
  2016-07-15 12:38                                         ` Mark Brown
  0 siblings, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-07-15 11:40 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, Jul 15, 2016 at 12:10:34PM +0100, Mark Brown wrote:
> On Thu, Jul 14, 2016 at 05:51:11PM -0700, Guenter Roeck wrote:
> > On 07/14/2016 05:22 PM, Greg KH wrote:
> > > On Thu, Jul 14, 2016 at 11:06:03AM +0100, Mark Brown wrote:
> 
> > > Ok, there's no need for everyone to use the same messy tree, but perhaps
> > > Linaro could participate with LTSI to help make something that more
> > > people can all use?  No need to keep duplicating the same work...
> 
> > > But this is way off-topic here, sorry.
> 
> > Maybe a separate topic, and not entirely feasible for the kernel summit,
> > but it might be worthwhile figuring out why companies are or are not
> > using LTSI. My major problem with it was always that it is just a collection
> 
> I do think that could be a useful topic to cover in stable discussions
> at KS, we've always focused on the stable trees but there's a much
> broader spectrum of work going on there.

I agree it would be fun to talk about it, but the relevance of it to 90%
of the people in the room whose day-job doesn't have to deal with that
type of thing is probably very low.

Let's stick to the stable workflow issues here, not the "why aren't
companies getting their code upstream and have to keep these big trees"
issue.  Which might be a fine separate topic to bring up, but usually we
all know the reasons there, and no one who is invited to KS can resolve
them...

> > of patches, not a kernel tree, meaning merges or cherry-picks are non-trivial.
> > Sure, one can create a kernel tree from it, but that is not the same.
> 
> This is actually the main reason why I've never got around to pushing
> things back into LTSI (it has been a little while since I last did that
> admittedly).  The effort involved in figuring out the tooling for LTSI
> always got in the way before anything productive came of it, having a
> directly usable git tree would be *so* much easier.

Ok fine, I'll work on that, but if I do so, I will expect to see patches
from you for it :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  7:02                                               ` Jiri Kosina
@ 2016-07-15 11:42                                                 ` Greg KH
  2016-07-15 11:47                                                   ` Jiri Kosina
  2016-07-15 12:17                                                   ` Geert Uytterhoeven
  0 siblings, 2 replies; 244+ messages in thread
From: Greg KH @ 2016-07-15 11:42 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, Jul 15, 2016 at 09:02:19AM +0200, Jiri Kosina wrote:
> On Fri, 15 Jul 2016, Greg KH wrote:
> 
> > > > during new releases they use quilt for all of their work, adding and
> > > > removing and updating patches all the time.
> > > 
> > > So you are saying quilt is good for release management, and Guenter is
> > > saying it is bad for development.  Maybe you are in agreement.
> > 
> > Heh, yes, I think we are in agreement.  The "fun" thing is, people take
> > the thing you release and do development on it.  So the developers want
> > the output of your release in a format that they can do work with.
> > 
> > I talked with Geert today about this, and he gave me some ideas for how
> > to make the output of the LTSI tree in a git tree that people can work
> > off of.  Much like I've started to do now with the stable trees, and the
> > -rc git tree of patches built from my quilt series.  I'll work on this
> > over time and see how that goes.
> 
> FWIW what we do in SUSE is that we actually have our kernels maintained as 
> a quilt series (in git), but at the same time we are actually 
> automatically generating a proper non-rebasing git tree from that series, so 
> that our partners can work on a proper git tree. I think it's a rather 
> successful model.
> 
> If you'd like to see this in practice, then this is how the "primary" tree 
> (to which developers are actually pushing patches backported from 
> upstream) looks like:
> 
> 	http://kernel.suse.com/cgit/kernel-source/tree/?h=SLE12-SP2
> 	http://kernel.suse.com/cgit/kernel-source/log/?h=SLE12-SP2
> 
> and this is how the auto-generated proper git tree looks like
> 
> 	http://kernel.suse.com/cgit/kernel/tree/?h=SLE12-SP2
> 	http://kernel.suse.com/cgit/kernel/tree/?h=SLE12-SP2

Nice, I didn't know about these.

But how do you deal with patches in the middle of the series that get
changed, when you rebuild the git branch?  Or do you have a lot of
different branches?  Any pointer to some workflow documentation that you
might have for how this all works?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 11:42                                                 ` Greg KH
@ 2016-07-15 11:47                                                   ` Jiri Kosina
  2016-07-15 12:17                                                   ` Geert Uytterhoeven
  1 sibling, 0 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-07-15 11:47 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, 15 Jul 2016, Greg KH wrote:

> But how do you deal with patches in the middle of the series that get
> changed, 

For each change (even a refresh of an existing patch in the middle of the 
series), a corresponding git commit is created.

As an example of how this looks, here is a "refresh existing patch in 
the middle of the quilt series" commit:

	http://kernel.suse.com/cgit/kernel-source/commit/?h=SLE12-SP2&id=c8ee9b3eab8f58ad16dcb132a2e1e360b5b8cc9a

and this is the resulting autogenerated commit to the "proper" git tree:

	http://kernel.suse.com/cgit/kernel/commit/?h=SLE12-SP2&id=5d9e4d4df7211ac59578aa4cd009c0d2fbafd1eb

> when you rebuild the git branch?  

I am not sure I properly understand what you mean by "rebuild" here, could 
you please elaborate?

> Or do you have a lot of different branches?  

Every supported release (codestream) has its own branch.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  6:19                                             ` Rik van Riel
@ 2016-07-15 12:17                                               ` Mark Brown
  2016-07-26 13:45                                                 ` David Woodhouse
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-15 12:17 UTC (permalink / raw)
  To: Rik van Riel; +Cc: ksummit-discuss, James Bottomley, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 1470 bytes --]

On Fri, Jul 15, 2016 at 02:19:17AM -0400, Rik van Riel wrote:

> RHEL is also developed on a git tree nowadays, because there is
> no need to extract patches from RHEL, since the code came from
> upstream to begin with.

> It sounds like the embedded people are causing themselves a lot
> of pain. Pain the distro people got all too familiar with a decade
> ago, and decided to leave behind.

No, at least not in the terms you're thinking of here.  The constraints
that go into supporting servers in distros are very different to the
constraints that apply when building systems based around trying to
exploit the new capabilities of silicon that was taped out rather more
close to product launch than might be comfortable.  Some of it is just
bad practice and technical debt but far from all of it, these aren't
solved problems.

This isn't just something that goes on in embedded either - look at the
experience people have buying laptops with brand new Intel chipsets:
even running bleeding edge upstream versions of things, it takes a good
few months after the systems start hitting the market for things to
become stable and reasonably functional.

We do need to bring these worlds closer together, things like LTSI and
LSK which backport generic functionality from upstream are part of that
story in that they help avoid people reinventing the wheel in their
product kernels and make it much easier for them to move towards
mainline.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 11:42                                                 ` Greg KH
  2016-07-15 11:47                                                   ` Jiri Kosina
@ 2016-07-15 12:17                                                   ` Geert Uytterhoeven
  1 sibling, 0 replies; 244+ messages in thread
From: Geert Uytterhoeven @ 2016-07-15 12:17 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

Hi Greg,

On Fri, Jul 15, 2016 at 1:42 PM, Greg KH <greg@kroah.com> wrote:
> But how do you deal with patches in the middle of the series that get
> changed, when you rebuild the git branch?  Or do you have a lot of
> different branches?  Any pointer to some workflow documentation that you
> might have for how this all works?

# Make a new branch pointing to previous LTSI release
git branch v4.1.27-ltsi v4.1.17-ltsi
# Rebase it on top of latest LTS
git rebase --onto v4.1.27 v4.1.17 v4.1.27-ltsi

If there were no conflicts, "git diff v4.1.17-ltsi..v4.1.27-ltsi" should be
identical (modulo context changes) to "git diff v4.1.17..v4.1.27".
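
A quick way to check that from bash (context-only noise will still show
up, of course):

diff <(git diff v4.1.17-ltsi..v4.1.27-ltsi) <(git diff v4.1.17..v4.1.27)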

If you have to make some changes in the middle of the series,
you could have used git rebase -i instead, and edit the commit list:
  - Drop/reorder/replace commits,
  - Mark commits for amendment ("edit"),
  - Add commits (but you better do that afterwards, using git cherry-pick -x).

Or do you mean recreating the quilt series after the rebase?
I'd suggest to use "git format-patch v4.1.27" for that.
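
E.g., with the output directory name just an example:

# regenerate the patch files from the rebased branch
git format-patch -o patches/ v4.1.27..v4.1.27-ltsi
# quilt also wants a series file listing them in order
(cd patches && ls *.patch > series)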

This is all basic workflow for developers rebasing their local work before
(re)submission ;-) So if you have questions, feel free to ask!

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 11:05                                               ` Geert Uytterhoeven
@ 2016-07-15 12:35                                                 ` James Bottomley
  2016-07-15 12:44                                                   ` Geert Uytterhoeven
  0 siblings, 1 reply; 244+ messages in thread
From: James Bottomley @ 2016-07-15 12:35 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: Trond Myklebust, ksummit-discuss

On Fri, 2016-07-15 at 13:05 +0200, Geert Uytterhoeven wrote:
> On Fri, Jul 15, 2016 at 8:32 AM, James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
> > > I do find quilt useful when backporting a series of patches so
> > > that I
> > > can resolve the conflicts on each patch individually and move
> > > backwards and forwards through the list of patches.  I don't
> > > think
> > > git has an easy way to store a branch of patches-that-I-need-to
> > > -apply
> > > and to then give me one at a time, removing them from the branch.
> > >   I
> > > could use 'stgit' for that if necessary, though it is very
> > > tempting
> > > to write something that is better integrated with git.
> > 
> > Git cherry and git cherry-pick can do this.  Git cherry-pick can
> > take a
> > range of patches to apply, so you can select a bunch of patches to
> > backport or otherwise move all at once.  Git cherry can tell you
> > (to
> > within an approximation, since it uses matching) what patches are
> > common between two branches even if they have differing commit ids.
> 
> ... which is basically the same as creating a new branch matching 
> your old private tree, and rebasing that --onto the new upstream.

You mean using rebase -i so you can pick the commits?  Yes, it sort of
is but there's the extra step of firing up the editor and selecting
them.  If you have the ids handy, you can feed them directly into git
cherry-pick without needing the extra edit and select step (which also
makes it scriptable).

You still need git cherry to see what is common and what you added (or
didn't add).
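
For example, with the branch names and FIRST/LAST being placeholders for
the oldest and newest commits you want to move:

# see which candidates already have an equivalent in 'backport'
# ('-' lines) and which still need picking ('+' lines)
git cherry -v backport upstream
# then apply a contiguous range in one go, recording where it came from
git checkout backport
git cherry-pick -x FIRST^..LAST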

James


> Gr{oetje,eeting}s,
> 
>                         Geert
> 
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- 
> geert@linux-m68k.org
> 
> In personal conversations with technical people, I call myself a
> hacker. But
> when I'm talking to journalists I just say "programmer" or something
> like that.
>                                 -- Linus Torvalds
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
> 

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 11:40                                       ` Greg KH
@ 2016-07-15 12:38                                         ` Mark Brown
  0 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-15 12:38 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1910 bytes --]

On Fri, Jul 15, 2016 at 08:40:37PM +0900, Greg KH wrote:
> On Fri, Jul 15, 2016 at 12:10:34PM +0100, Mark Brown wrote:

> > I do think that could be a useful topic to cover in stable discussions
> > at KS, we've always focused on the stable trees but there's a much
> > broader spectrum of work going on there.

> I agree it would be fun to talk about it, but the relevance of it to 90%
> of the people in the room whose day-job doesn't have to deal with that
> type of thing, is probably very low.

> Let's stick to the stable workflow issues here, not the "why aren't
> companies getting their code upstream and have to keep these big trees"
> issue.  Which might be a fine separate topic to bring up, but usually we
> all know the reasons there, and no one who is invited to KS can resolve
> them...

I wasn't thinking about the out of tree code discussion but rather the
discussion we've been having here about workflows for feature backports
and use/applicability of things like LTSI for sharing those.  That seems
to have generated some interest; it seems clear that we've got both
interest and some diversity of opinion.  Perhaps it might fit better
outside the main day as a breakout session of some kind, but we ought to
be able to do something useful.

> > > of patches, not a kernel tree, meaning merges or cherry-picks are non-trivial.
> > > Sure, one can create a kernel tree from it, but that is not the same.

> > This is actually the main reason why I've never got around to pushing
> > things back into LTSI (it has been a little while since I last did that
> > admittedly).  The effort involved in figuring out the tooling for LTSI
> > always got in the way before anything productive came of it, having a
> > directly usable git tree would be *so* much easier.

> Ok fine, I'll work on that, but if I do so, I will expect to see patches
> from you for it :)

I think I can manage two patches :)

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 12:35                                                 ` James Bottomley
@ 2016-07-15 12:44                                                   ` Geert Uytterhoeven
  0 siblings, 0 replies; 244+ messages in thread
From: Geert Uytterhoeven @ 2016-07-15 12:44 UTC (permalink / raw)
  To: James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

Hi James,

On Fri, Jul 15, 2016 at 2:35 PM, James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Fri, 2016-07-15 at 13:05 +0200, Geert Uytterhoeven wrote:
>> On Fri, Jul 15, 2016 at 8:32 AM, James Bottomley
>> <James.Bottomley@hansenpartnership.com> wrote:
>> > On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
>> > > I do find quilt useful when backporting a series of patches so
>> > > that I
>> > > can resolve the conflicts on each patch individually and move
>> > > backwards and forwards through the list of patches.  I don't
>> > > think
>> > > git has an easy way to store a branch of patches-that-I-need-to
>> > > -apply
>> > > and to then give me one at a time, removing them from the branch.
>> > >   I
>> > > could use 'stgit' for that if necessary, though it is very
>> > > tempting
>> > > to write something that is better integrated with git.
>> >
>> > Git cherry and git cherry-pick can do this.  Git cherry-pick can
>> > take a
>> > range of patches to apply, so you can select a bunch of patches to
>> > backport or otherwise move all at once.  Git cherry can tell you
>> > (to
>> > within an approximation, since it uses matching) what patches are
>> > common between two branches even if they have differing commit ids.
>>
>> ... which is basically the same as creating a new branch matching
>> your old private tree, and rebasing that --onto the new upstream.
>
> You mean using rebase -i so you can pick the commits?  Yes, it sort of

No, without the -i.

> is but there's the extra step of firing up the editor and selecting
> them.  If you have the ids handy, you can feed them directly into git
> cherry-pick without needing the extra edit and select step (which also
> makes it scriptable).

If it's about passing a range of patches to git cherry-pick, you could do the
same operation using one rebase command.
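
E.g. (with placeholder names; this leaves you on a detached HEAD, so the
branch still has to be moved afterwards):

# roughly "git cherry-pick FIRST^..LAST" while sitting on mybranch
git rebase --onto mybranch FIRST^ LAST
git checkout -B mybranch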

> You still need git cherry to see what is common and what you added (or
> didn't add).

git rebase already does the same filtering internally.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-15  9:27                 ` Zefan Li
@ 2016-07-15 13:52                   ` Guenter Roeck
  0 siblings, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-15 13:52 UTC (permalink / raw)
  To: Zefan Li, Jiri Kosina, Takashi Iwai; +Cc: James Bottomley, ksummit-discuss

On 07/15/2016 02:27 AM, Zefan Li wrote:
> On 2016/7/10 21:33, Guenter Roeck wrote:
>> On 07/10/2016 03:20 AM, Jiri Kosina wrote:
>>> On Sun, 10 Jul 2016, Takashi Iwai wrote:
>>>
>>>> IMO, we need a really better QA before releasing stable trees.  They are
>>>> all fixes, yes, but they aren't always fixes for stable trees, in
>>>> reality.
>>>
>>> I agree.
>>>
>>> BTW, how much coverage does -stable get from Fengguang's 0day robot? I
>>> think that as most of the stable tress don't really use the git workflow,
>>> the trees are being pushed out to git.kernel.org only shortly before
>>> actual release, so the 0day bot doesn't have enough time to catch up; but
>>> I have to admit I don't really know how exactly the timing and flow of
>>> patches works here.
>>>
>>
>> Greg tends to update his trees on a quite regular basis, as he applies patches.
>> I don't really know for sure about the others, but overall my impression is
>> that there tends to be a flurry of patches applied in the day before a stable
>> release candidate is announced.
>>
>
> I also set up a testing branch for 3.4.y so that Fengguang's 0day can test it
> before I send out stable-rc1.
>

Is that at a well-defined location? I would prefer to pick up the changes from there
instead of the quilt queue (and I am sure Kevin would like to do the same).

Thanks,
Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  9:29                                                   ` NeilBrown
@ 2016-07-15 16:08                                                     ` Dmitry Torokhov
  0 siblings, 0 replies; 244+ messages in thread
From: Dmitry Torokhov @ 2016-07-15 16:08 UTC (permalink / raw)
  To: NeilBrown; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Fri, Jul 15, 2016 at 2:29 AM, NeilBrown <neilb@suse.com> wrote:
> On Fri, Jul 15 2016, Dmitry Torokhov wrote:
>
>> On Fri, Jul 15, 2016 at 05:01:33PM +1000, NeilBrown wrote:
>>> On Fri, Jul 15 2016, James Bottomley wrote:
>>>
>>> > On Fri, 2016-07-15 at 15:52 +1000, NeilBrown wrote:
>>> >> I do find quilt useful when backporting a series of patches so that I
>>> >> can resolve the conflicts on each patch individually and move
>>> >> backwards and forwards through the list of patches.  I don't think
>>> >> git has an easy way to store a branch of patches-that-I-need-to-apply
>>> >> and to then give me one at a time, removing them from the branch.  I
>>> >> could use 'stgit' for that if necessary, though it is very tempting
>>> >> to write something that is better integrated with git.
>>> >
>>> > Git cherry and git cherry-pick can do this.  Git cherry-pick can take a
>>> > range of patches to apply, so you can select a bunch of patches to
>>> > backport or otherwise move all at once.  Git cherry can tell you (to
>>> > within an approximation, since it uses matching) what patches are
>>> > common between two branches even if they have differing commit ids.
>>> >
>>> > The format is a bit frightening if you're not used to it, which is why
>>> > stgit may be a better user experience, but you can do it with basic
>>> > git.
>>>
>>> I wasn't aware of "git cherry".  It certainly could be useful, but based
>>> on the man page it would get confused by modifications made to resolve
>>> conflicts.
>>> If "get cherry-pick" auto-added an "upstream HASHID" line to the comment, and
>>
>> "git cherry-pick -x <commit>" does this.
>
>  From the man page
>      This is done only for cherry picks without conflicts.
>  making it fairly useless for my use-case.

That is the man-page author's opinion, not something that is enforced by
the implementation.  In fact, if there is a conflict, cherry-pick adds a
section

Conflicts:
        file1.c
        file2.c
        ...

We use "git cherry-pick -x" all the time in Chrome OS kernel tree to
document origin of patches.
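
For reference, the resulting commit message ends up looking roughly like
this (hash and file name made up):

    foo: fix the bar corner case

    (cherry picked from commit 1a2b3c4d5e6f...)
    Conflicts:
        drivers/foo/bar.c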

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11 23:03                         ` Guenter Roeck
@ 2016-07-18  7:44                           ` Christian Borntraeger
  2016-07-18  8:44                             ` Hannes Reinecke
  0 siblings, 1 reply; 244+ messages in thread
From: Christian Borntraeger @ 2016-07-18  7:44 UTC (permalink / raw)
  To: Guenter Roeck, Kevin Hilman
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 07/12/2016 01:03 AM, Guenter Roeck wrote:
> On Mon, Jul 11, 2016 at 01:24:25PM -0700, Kevin Hilman wrote:
>> Trond Myklebust <trondmy@primarydata.com> writes:
>>
>>> So, we might as well make this a formal proposal.
>>>
>>> I’d like to propose that we have a discussion around how to make it
>>> easier to implement kernel unit tests. I’ve co-opted Dan as he has
>>> expressed both an interest and hands-on experience. :-)
>>
>> Count me in.
>>
>> I'm working on the kernelci.org project, where we're testing
>> mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
>> across ~30 unique SoC families: arm, arm64, x86.)
>>
>> Right now, we're mainly doing basic boot tests, but are starting to run
>> kselftests on all these platforms as well.
>>
> 
> Augmenting that: For my part the interest would be to improve qemu based
> testing along the same line (and maybe figure out if/how we can merge
> kerneltests.org into kernelci.org).

I am also interested in this topic. I would like to find a way to integrate
architectures with non-perfect qemu coverage, like s390, into that regression
testing. There are several installations which could be used to run nightly
regressions, but the hardware is not that widespread among kernel hackers.
I would like to discuss the different options (e.g. do we consider email
reports a workable way of reporting results, or not?).

Christian

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-18  7:44                           ` Christian Borntraeger
@ 2016-07-18  8:44                             ` Hannes Reinecke
  0 siblings, 0 replies; 244+ messages in thread
From: Hannes Reinecke @ 2016-07-18  8:44 UTC (permalink / raw)
  To: ksummit-discuss

On 07/18/2016 09:44 AM, Christian Borntraeger wrote:
> On 07/12/2016 01:03 AM, Guenter Roeck wrote:
>> On Mon, Jul 11, 2016 at 01:24:25PM -0700, Kevin Hilman wrote:
>>> Trond Myklebust <trondmy@primarydata.com> writes:
>>>
>>>> So, we might as well make this a formal proposal.
>>>>
>>>> I’d like to propose that we have a discussion around how to make it
>>>> easier to implement kernel unit tests. I’ve co-opted Dan as he has
>>>> expressed both an interest and hands-on experience. :-)
>>>
>>> Count me in.
>>>
>>> I'm working on the kernelci.org project, where we're testing
>>> mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
>>> across ~30 unique SoC families: arm, arm64, x86.)
>>>
>>> Right now, we're mainly doing basic boot tests, but are starting to run
>>> kselftests on all these platforms as well.
>>>
>>
>> Augmenting that: For my part the interest would be to improve qemu based
>> testing along the same line (and maybe figure out if/how we can merge
>> kerneltests.org into kernelci.org).
>
> I am also interested in this topic. I would like to find a way to integrate
> architectures (with non-perfect qemu coverage) like s390 in that regression
> testing. There are several installations which could be used to run some nightly
> regression but the hardware is not that wide-spread among kernel hackers.
> I would like to discuss the different options (e.g. do we consider email reports
> as working or no?)
>
I'd be interested in that, too.
While we're already doing quite a bit of testing (e.g. performance), we're 
looking into doing setup- or architecture-specific tests, with no clear
results yet, so a broader discussion would be good.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		               zSeries & Storage
hare@suse.com			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 17:03               ` Theodore Ts'o
                                   ` (4 preceding siblings ...)
  2016-07-11 22:42                 ` James Bottomley
@ 2016-07-20 17:50                 ` Stephen Hemminger
  5 siblings, 0 replies; 244+ messages in thread
From: Stephen Hemminger @ 2016-07-20 17:50 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, 11 Jul 2016 13:03:33 -0400
Theodore Ts'o <tytso@mit.edu> wrote:

> On Mon, Jul 11, 2016 at 04:13:00PM +0100, Mark Brown wrote:
> > On Sat, Jul 09, 2016 at 05:21:30PM -0400, Theodore Ts'o wrote:
> >   
> > > the latest stable kernel.  (But even if they do, apparently many
> > > device vendors aren't bothering to merge in changes from the SOC's BSP
> > > kernel, even if the BSP kernel is getting -stable updates.)  
> > 
> > It would be pretty irresponsible for device vendors to be merging BSP
> > trees, they're generally development things with ongoing feature updates
> > that might interact badly with things the system integrator has done
> > rather than something stable enough to just merge constantly.  
> 
> So the question is who actually uses -stable kernels, and does it make
> sense for it even to be managed in a git tree?
> 
> Very few people will actually be merging them, and in fact maybe
> having a patch queue which is checked into git might actually work
> better, since it sounds like most people are just cherry-picking
> specific patches.
> 

Actually, at Brocade they regularly merge stable kernels into the code base
without any serious issues, mostly because Linux is a platform and there are
very few vendor-specific changes. The kernel major version is selected early
in the release process, then stable kernels are merged during development; it
has never been a big issue. Like most distros, it is a continual battle to keep
the number of patches down, but never as big a problem as for RHEL or SLES.
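
For what it's worth, the mechanics are trivial (the remote, branch and
tag names below are just placeholders):

# product branch tracks one major version; stable releases get merged in
git remote add stable git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git fetch stable
git checkout product-4.4
git merge v4.4.15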

I think a lot of people actually use and depend on stable kernels; you
just never hear from the happy users. Only the 1% who get hit complain.
Not that it wouldn't be good to make that 0.001% instead.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15  4:29                                         ` Greg KH
  2016-07-15  5:52                                           ` NeilBrown
@ 2016-07-21  7:13                                           ` Daniel Vetter
  2016-07-21  7:44                                             ` Josh Triplett
  1 sibling, 1 reply; 244+ messages in thread
From: Daniel Vetter @ 2016-07-21  7:13 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, Jul 15, 2016 at 6:29 AM, Greg KH <greg@kroah.com> wrote:
> On Thu, Jul 14, 2016 at 07:56:43PM -0700, Guenter Roeck wrote:
>> Overall, I can not imagine that it is even possible to use quilt trees as a basis
>> for development in a company with active kernel development, even more so
>> if a large number of engineers and/or a number of branches are involved.
>> Sure, the QCOM example may be extreme, but do you really think that writing
>> those 2.5M LOC would have been possible if QCOM had used Quilt trees instead
>> of git ? Using Quilt would for sure have prevented them from writing those
>> 2.5M LOC, but then there would be nothing. That doesn't sound like a feasible
>> alternative either.
>
> It is possible, look at the Red Hat and SuSE kernel development teams.
> Yes, in the end, most of the patches are backports from upstream, but
> during new releases they use quilt for all of their work, adding and
> removing and updating patches all the time.
>
> There are the usual merge issues with doing that, but for an SoC, I
> don't think that would be all that hard given that almost all patches
> are driver/subsystem-specific and don't touch other places.
>
> It does take a better calibre of developer to do this type of thing,
> that might be a harder thing to deal with at some SoC vendors :)

Random tool plug: I stitched together a quilt+git thing, which through
hidden git refs makes sure that the underlying git baseline also gets
pushed around together with the quilt patches. Allows awesome stuff
like bisecting changes in the quilt pile over rebases:

https://cgit.freedesktop.org/drm-intel/tree/qf?h=maintainer-tools

We use that to maintain the internal drm/i915 patches. Unfortunately
everyone else uses plain git, since I fully agree with Greg: Quilt (or
some other pile-of-patches tool) is the only way to sanely manage
kernel trees which aren't directly upstream.

And yes it also takes a decent calibre of developer with some
understanding of the tooling to make effective use of it.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-21  7:13                                           ` Daniel Vetter
@ 2016-07-21  7:44                                             ` Josh Triplett
  0 siblings, 0 replies; 244+ messages in thread
From: Josh Triplett @ 2016-07-21  7:44 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Thu, Jul 21, 2016 at 09:13:37AM +0200, Daniel Vetter wrote:
> Random tool plug: I stitched together a quilt+git thing, which through
> hidden git refs makes sure that the underlying git baseline also gets
> pushed around together with the quilt patches. Allows awesome stuff
> like bisecting changes in the quilt pile over rebases:
> 
> https://cgit.freedesktop.org/drm-intel/tree/qf?h=maintainer-tools
> 
> We use that to maintain the internal drm/i915 patches. Unfortunatel
> everyone else uses plain git, since I fully agree with Greg: Quilt (or
> some other pile-of-patches tool) is the only way to sanely manage
> kernel trees which aren't directly upstream.

I just released a tool I've been working on to address that problem:
https://github.com/git-series/git-series

I plan to announce it on LKML and various other places at some point,
but since the topic came up and people are looking at workflows, it
seemed worth mentioning here.

git-series handles the "history of a patch series" problem, tracking
both the patches themselves and the history of the patch series in git.
It also tracks the baseline, and the cover letter.  And even when the
patch series goes through non-fast-forwarding changes like rebase -i or
rebasing onto a new baseline, git-series tracks that with refs that
themselves fast-forward and can be pushed and fetched with normal git
tools.

I wrote it after the Nth time of running into this problem and talking
to other people who ran into this problem.  Usually, you either have to
pull the patch series out of git into quilt to version it, or or you
keep the patches in git and version by branch names (which rapidly
start looking like filenames from a corporate email system).
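
A typical first session looks something like this (the names are of
course just examples):

git series start my-feature            # begin tracking a new series
git series base origin/master          # record what it applies on top of
# ...build the patches with ordinary git commits...
git series cover                       # write the cover letter
git series commit -a -m "v1 as sent"   # snapshot this version of the series
git series format --stdout > my-feature-v1.mbox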

Here's the manpage:

NAME
       git-series - track changes to a patch series with git

SYNOPSIS
       git series [SUBCOMMAND] [OPTIONS]

DESCRIPTION
       git  series  tracks  changes  to a patch series over time.  git
       series also tracks a cover letter for the patch series, formats
       the series for email, and prepares pull requests.

       Use git series start seriesname to start a patch series series‐
       name.  Use normal git commands to commit changes, and  use  git
       series  status to check what has changed.  Use git series cover
       to add or edit a cover letter.  Use  git  series  add  and  git
       series  commit  (or  git series commit -a) to commit changes to
       the patch series.  Use git series rebase -i to help  rework  or
       reorganize  the patch series.  Use git series format to prepare
       the patch series to send via email, or git series req  to  pre‐
       pare a "please pull" mail.

       Running  git  series  without arguments shows the list of patch
       series, marking the current patch series with a '*'.

SUBCOMMANDS
       git series add change...
              Add changes to the staging area for the next git  series
              commit.

              change...
                     Changes  to  add:  any  combination  of "series",
                     "base", and "cover".

       git series base [-d|--delete] [base]
              Get or set the base commit for the patch  series.   With
              no  parameters, print the hash of the base commit.  With
              parameters, set or delete the base commit.

              This only changes the base in the working version of the
              patch series; use git series add base to add that change
              to the next git series commit, or use git series  commit
              -a  to  commit the new base and all other changes to the
              series in one step.

              base   New base commit.  This can use a commit hash, ref
                     name,  or  special  syntaxes  such as refname^ or
                     refname~2.

              -d|--delete
                     Delete the current base commit.

       git series checkout name
              Resume work on the patch series name; check out the cur‐
              rent version as HEAD.

       git series commit [-a|--all] [-m message] [-v|--verbose]
              Record a new version of the patch series.  Without argu‐
              ments, this will run an editor to edit a commit message,
              and  then  commit  the changes previously added with git
              series add.

              -a|--all
                     Commit all changes, not just those added with git
                     series add.

              -m message
                     Use  message  as  the commit message, rather than
                     running an editor.

              -v|--verbose
                     Show a diff of the commit in  the  editor,  below
                     the  commit message, as a reminder of the changes
                     in the commit.  This diff will not appear in  the
                     commit message.

       git series cover [-d|--delete]
              Create  or  edit  the cover letter for the patch series.
              Without arguments, this will run an editor to  edit  the
              cover letter.

              This  only  changes the cover letter in the working ver‐
              sion of the patch series; use git series  add  cover  to
              add  that  change  to the next git series commit, or use
              git series commit -a to commit the new cover letter  and
              all other changes to the series in one step.

              -d|--delete
                     Delete the cover letter rather than editing it.

       git series delete name
              Delete  the series name, including any work in progress,
              staged or unstaged.

       git series detach
              Stop working  on  any  patch  series.   Any  changes  in
              progress,  staged  or  unstaged, will remain intact.  To
              start working on the branch again, use git series check‐
              out.

       git series format [--in-reply-to=Message-Id] [--stdout]
              Prepare  the  patch series to send via email.  This cre‐
              ates one file per patch in the series,  plus  one  addi‐
              tional  file  for  the  cover  letter if any.  The patch
              series must have a base set with  git  series  base,  to
              identify the series of patches to format.

              Each  file  contains  one email in mbox format, ready to
              send, with  email  headers  threading  all  the  patches
              together.   If the series has a cover letter, all of the
              patches will include headers to make them a reply to the
              cover letter; otherwise, all of the patches will include
              headers to make them a reply to the first patch.

              --in-reply-to=Message-Id
                     Make the first mail a reply to the specified Mes‐
                     sage-Id.   The Message-Id may include or omit the
                     surrounding angle brackets; git-series  will  add
                     them if not present.

              --stdout
                     Write  the  entire  patch series to stdout rather
                     than to separate patch files.

       git series help [subcommand]
              Show help for git series or a subcommand.  Without argu‐
              ments,  shows  a summary of the subcommands supported by
              git series.

              subcommand
                     Show help for subcommand.

       git series log [-p|--patch]
              Show the history of the patch series.

              -p|--patch
                     Include a patch for each change committed to  the
                     series.

       git series rebase [-i|--interactive] [onto]
              Rebase  the patch series, either onto a new base, inter‐
              actively, or both.  The patch series must  have  a  base
              set  with  git  series  base,  to identify the series of
              patches to rebase.

              onto   Commit to rebase the series onto.  This can use a
                     commit  hash,  ref name, or special syntaxes such
                     as refname^ or refname~2.

              -i|--interactive
                     Interactively edit the  list  of  commits.   This
                     uses the same format and syntax as git rebase -i,
                     to  allow  reordering,  dropping,  combining,  or
                     editing commits.

       git series req [-p|--patch] url tag
              Generate a mail requesting a pull of the patch series.

              Before  running  this  command, push the patch series to
              the repository at url, as a tag or branch named tag.

              A pull request  for  a  signed  or  annotated  tag  will
              include the message from the tag.  The pull request will
              also include the cover letter if  any,  unless  the  tag
              message  already contains the cover letter.  The subject
              of the mail will include the first line from  the  cover
              letter, or the name of the series if no cover letter.

              The  patch  series  must have a base set with git series
              base, to identify the series of  patches  to  request  a
              pull of.

              url    URL of the repository to pull from.

              tag    Name of a tag or branch to request a pull from.

              -p|--patch
                     Include  a patch showing the combined change made
                     by all the patches in the series.  This can  help
                     a reviewer see the effect of pulling the series.

       git series start name
              Start a new patch series named name.

       git series status
              Show the status of the current patch series.

              This  shows  any  changes staged for the next git series
              commit, changes in the  current  working  copy  but  not
              staged  for  the next git series commit, and hints about
              the next commands to run.

       git series unadd change
              Remove changes from the next git series commit,  undoing
              git series add.

              The changes remain in the current working version of the
              series.

              change...
                     Changes to remove: any combination  of  "series",
                     "base", and "cover".

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 10:05         ` James Bottomley
  2016-07-09 15:49           ` Trond Myklebust
  2016-07-10  7:29           ` Takashi Iwai
@ 2016-07-26 13:08           ` David Woodhouse
  2 siblings, 0 replies; 244+ messages in thread
From: David Woodhouse @ 2016-07-26 13:08 UTC (permalink / raw)
  To: James Bottomley, Dmitry Torokhov, Rafael J. Wysocki
  Cc: ksummit-discuss, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1113 bytes --]

On Sat, 2016-07-09 at 19:05 +0900, James Bottomley wrote:
> On Fri, 2016-07-08 at 17:43 -0700, Dmitry Torokhov wrote:
> > On Sat, Jul 09, 2016 at 02:37:40AM +0200, Rafael J. Wysocki wrote:
> > > I tend to think that all known bugs should be fixed, at least 
> > > because once they have been fixed, no one needs to remember about 
> > > them any more. :-)
> > > 
> > > Moreover, minor fixes don't really introduce regressions that often
> > 
> > Famous last words :)
> 
> Actually, beyond the humour, the idea that small fixes don't introduce
> regressions must be our most annoying anti-pattern.  The reality is
> that a lot of so called fixes do introduce bugs. 

And this one (commit fa731ac7ea0 upstream) is a prime example of the
kind of 'fix' that actually introduces bugs.

Commit messages which purely claim to fix a compiler warning are
*often* problematic. Although in this case there was reasonable
justification for it.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation

[-- Attachment #2: smime.p7s --]
[-- Type: application/x-pkcs7-signature, Size: 5760 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  3:00                     ` James Bottomley
  2016-07-10  3:07                       ` Trond Myklebust
@ 2016-07-26 13:35                       ` David Woodhouse
  2016-07-26 13:44                         ` Guenter Roeck
  2016-08-02 14:12                       ` Jani Nikula
  2 siblings, 1 reply; 244+ messages in thread
From: David Woodhouse @ 2016-07-26 13:35 UTC (permalink / raw)
  To: James Bottomley, Rafael J. Wysocki; +Cc: Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1201 bytes --]

On Sun, 2016-07-10 at 12:00 +0900, James Bottomley wrote:
> 
> Note: I'm not saying don't do testing, or even that testing isn't a
> suitable discussion topic for KS.  What I am saying is that I think we
> should discuss our stable practices separately from testing.

Well... there is a danger that by divorcing the discussion of the
problem, from any discussion of specific things we might do to fix it,
we could transform the former into little more than a navel-gazing
session.

Testing seems like the most productive way to reduce the number of
regressions we see.

We really should have more of an expectation that new code should be
submitted *with* test cases. After all, it's not like people are
generally submitting code that's *entirely* untested. It's more that
testing is ad-hoc, and sometimes depends on running on specific
hardware. But even the latter can often be fixed, with appropriate test
harnesses.

Even actual device drivers could sometimes be exercised with tools
based on MMIO tracing and playback.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation

[-- Attachment #2: smime.p7s --]
[-- Type: application/x-pkcs7-signature, Size: 5760 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-26 13:35                       ` David Woodhouse
@ 2016-07-26 13:44                         ` Guenter Roeck
  2016-07-26 14:33                           ` David Woodhouse
  0 siblings, 1 reply; 244+ messages in thread
From: Guenter Roeck @ 2016-07-26 13:44 UTC (permalink / raw)
  To: David Woodhouse, James Bottomley, Rafael J. Wysocki
  Cc: Trond Myklebust, ksummit-discuss

On 07/26/2016 06:35 AM, David Woodhouse wrote:
> On Sun, 2016-07-10 at 12:00 +0900, James Bottomley wrote:
>>
>> Note: I'm not saying don't do testing, or even that testing isn't a
>> suitable discussion topic for KS.  What I am saying is that I think we
>> should discuss our stable practices separately from testing.
>
> Well... there is a danger that by divorcing the discussion of the
> problem, from any discussion of specific things we might do to fix it,
> we could transform the former into little more than a navel-gazing
> session.
>
> Testing seems like the most productive way to reduce the number of
> regressions we see.
>
> We really should have more of an expectation that new code should be
> submitted *with* test cases. After all, it's not like people are
> generally submitting code that's *entirely* untested. It's more that
> testing is ad-hoc, and sometimes depends on running on specific
> hardware. But even the latter can often be fixed, with appropriate test
> harnesses.
>

Worthy goal, but knowing developers I am quite concerned that it would result
in (possibly far) fewer kernel contributions. In addition to contributions
from unaffiliated individuals, there is a lot of code in vendor trees which
is not upstreamed today. Demanding test cases for upstreaming would for sure
make the interest in upstreaming that code even lower than it is today.

Guenter

> Even actual device drivers could sometimes be exercised with tools
> based on MMIO tracing and playback.
>
>
>
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
>

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 12:17                                               ` Mark Brown
@ 2016-07-26 13:45                                                 ` David Woodhouse
  0 siblings, 0 replies; 244+ messages in thread
From: David Woodhouse @ 2016-07-26 13:45 UTC (permalink / raw)
  To: Mark Brown, Rik van Riel
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1721 bytes --]

On Fri, 2016-07-15 at 13:17 +0100, Mark Brown wrote:
> On Fri, Jul 15, 2016 at 02:19:17AM -0400, Rik van Riel wrote:
> 
> > RHEL is also developed on a git tree nowadays, because there is
> > no need to extract patches from RHEL, since the code came from
> > upstream to begin with.
> 
> > It sounds like the embedded people are causing themselves a lot
> > of pain. Pain the distro people got all too familiar with a decade
> > ago, and decided to leave behind.
> 
> No, at least not in the terms you're thinking of here.  The constraints
> that go into supporting servers in distros are very different to the
> constraints that apply when building systems based around trying to
> exploit the new capabilities of silicon that was taped out rather closer
> to product launch than might be comfortable.  Some of it is just
> bad practice and technical debt but far from all of it, these aren't
> solved problems.
> 
> This isn't just something that goes on in embedded either - look at the
> experience people have buying laptops with brand new Intel chipsets,
> even running bleeding edge upstream versions of things, it takes a good
> few months after the systems start hitting the market for things to
> become stable and reasonably functional. 

And it's not just laptops. Sure, laptops have to deal with graphics,
and there's always a lot of fun there. But we also spend a fair amount
of time ensuring new *server* features work nicely in the distro
kernels.

There really isn't all *that* much difference in the constraints.

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation

[-- Attachment #2: smime.p7s --]
[-- Type: application/x-pkcs7-signature, Size: 5760 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-26 13:44                         ` Guenter Roeck
@ 2016-07-26 14:33                           ` David Woodhouse
  2016-07-26 15:52                             ` Guenter Roeck
  2016-07-28 21:02                             ` Laurent Pinchart
  0 siblings, 2 replies; 244+ messages in thread
From: David Woodhouse @ 2016-07-26 14:33 UTC (permalink / raw)
  To: Guenter Roeck, James Bottomley, Rafael J. Wysocki
  Cc: Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1381 bytes --]

On Tue, 2016-07-26 at 06:44 -0700, Guenter Roeck wrote:
> 
> > We really should have more of an expectation that new code should be
> > submitted *with* test cases. After all, it's not like people are
> > generally submitting code that's *entirely* untested. It's more that
> > testing is ad-hoc, and sometimes depends on running on specific
> > hardware. But even the latter can often be fixed, with appropriate test
> > harnesses.
> >
> 
> Worthy goal, but knowing developers I am quite concerned that it would result
> in (possibly far) fewer kernel contributions. In addition to contributions
> from unaffiliated individuals, there is a lot of code in vendor trees which
> is not upstreamed today. Demanding test cases for upstreaming would for sure
> make the interest in upstreaming that code even lower than it is today.

Sure, but I did say an *expectation* rather than a hard requirement. We
are nothing if not pragmatic.

Having the *infrastructure* in place, and plenty of existing examples,
would make this a whole lot easier for submitters. And also might be a
good proving ground for people who would otherwise be doing precisely
the kind of 'trivial' patches which are often problematic...

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation

[-- Attachment #2: smime.p7s --]
[-- Type: application/x-pkcs7-signature, Size: 5760 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-26 14:33                           ` David Woodhouse
@ 2016-07-26 15:52                             ` Guenter Roeck
  2016-07-28 21:02                             ` Laurent Pinchart
  1 sibling, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-07-26 15:52 UTC (permalink / raw)
  To: David Woodhouse; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Tue, Jul 26, 2016 at 03:33:29PM +0100, David Woodhouse wrote:
> On Tue, 2016-07-26 at 06:44 -0700, Guenter Roeck wrote:
> > 
> > > We really should have more of an expectation that new code should be
> > > submitted *with* test cases. After all, it's not like people are
> > > generally submitting code that's *entirely* untested. It's more that
> > > testing is ad-hoc, and sometimes depends on running on specific
> > > hardware. But even the latter can often be fixed, with appropriate test
> > > harnesses.
> > >
> > 
> > Worthy goal, but knowing developers I am quite concerned that it would result
> > in (possibly far) fewer kernel contributions. In addition to contributions
> > from unaffiliated individuals, there is a lot of code in vendor trees which
> > is not upstreamed today. Demanding test cases for upstreaming would for sure
> > make the interest in upstreaming that code even lower than it is today.
> 
> Sure, but I did say an *expectation* rather than a hard requirement. We
> are nothing if not pragmatic.
> 
Ok, makes sense.

> Having the *infrastructure* in place, and plenty of existing examples,
> would make this a whole lot easier for submitters. And also might be a
> good proving ground for people who would otherwise be doing precisely
> the kind of 'trivial' patches which are often problematic...
> 
Now that sounds like an excellent idea!

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-11 14:18                 ` Vinod Koul
  2016-07-11 17:34                   ` Guenter Roeck
@ 2016-07-27  3:12                   ` Steven Rostedt
  2016-07-27  4:36                     ` Vinod Koul
  1 sibling, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-07-27  3:12 UTC (permalink / raw)
  To: Vinod Koul; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Mon, 11 Jul 2016 19:48:34 +0530
Vinod Koul <vinod.koul@intel.com> wrote:

> But the person might be slightly better off than you or me :-)
> 

I still believe that it comes down to the maintainer making the final
decision about marking a patch as stable. If they don't have the
hardware to test it, then they should ask the patch submitter to
test for stable. If the submitter doesn't want to for whatever reason,
then that patch simply shouldn't be marked for stable.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10 22:38               ` Rafael J. Wysocki
  2016-07-11  8:47                 ` Jiri Kosina
@ 2016-07-27  3:19                 ` Steven Rostedt
  1 sibling, 0 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-07-27  3:19 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: James Bottomley, ksummit-discuss, ksummit-discuss, Jason Cooper

On Mon, 11 Jul 2016 00:38:05 +0200
"Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:

> However, "long-term stable" trees started to appear at one point and those are
> quite different and serve a different purpose.  I'm not quite sure if handling
> them in the same way as 4.6.y is really the best approach.  At least it seems
> to lead to some mismatch between the expectations and what is really delivered.

I think what happens is simply time. For 4.6.y stable kernels, there's
nothing but bug fixes to add to them. But when you talk about older
kernels, there is a tendency to add stuff where it's questionable how
much of a "fix" it really is. Not to mention, the older a kernel is,
the more it diverges from mainline, and there may be fixes in mainline
that are "sorta" fixes for those kernels.

I've had fixes that fixed a bug that mutated over time. In older
kernels, it was still a bug, but perhaps not as critical. In the newer
kernels, the bug made a bigger impact. Sometimes it was simply that the
newer kernel was much more likely to trigger the race condition. How
far back to have a fix go becomes a gray area. Perhaps the fix for
4.6.y is obviously correct, but that same fix may not be so obvious for
older kernels.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-27  3:12                   ` Steven Rostedt
@ 2016-07-27  4:36                     ` Vinod Koul
  0 siblings, 0 replies; 244+ messages in thread
From: Vinod Koul @ 2016-07-27  4:36 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, ksummit-discuss, Jason Cooper

On Tue, Jul 26, 2016 at 11:12:42PM -0400, Steven Rostedt wrote:
> On Mon, 11 Jul 2016 19:48:34 +0530
> Vinod Koul <vinod.koul@intel.com> wrote:
> 
> > But the person might be slightly better off than you or me :-)
> > 
> 
> I still believe that it comes down to the maintainer making the final
> decision about marking a patch as stable. If they don't have the
> hardware to test it, then they should ask the patch submitter to
> test for stable. If the submitter doesn't want to for whatever reason,
> then that patch simply shouldn't be marked for stable.

Agreed, that's the reason I was asking if we should ask submitters. But
it looks like that adds a cost which no one wants to be burdened with!

-- 
~Vinod

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-26 14:33                           ` David Woodhouse
  2016-07-26 15:52                             ` Guenter Roeck
@ 2016-07-28 21:02                             ` Laurent Pinchart
  2016-07-29  0:10                               ` Steven Rostedt
  1 sibling, 1 reply; 244+ messages in thread
From: Laurent Pinchart @ 2016-07-28 21:02 UTC (permalink / raw)
  To: ksummit-discuss; +Cc: James Bottomley, Trond Myklebust

On Tuesday 26 Jul 2016 15:33:29 David Woodhouse wrote:
> On Tue, 2016-07-26 at 06:44 -0700, Guenter Roeck wrote:
> > > We really should have more of an expectation that new code should be
> > > submitted *with* test cases. After all, it's not like people are
> > > generally submitting code that's *entirely* untested. It's more that
> > > testing is ad-hoc, and sometimes depends on running on specific
> > > hardware. But even the latter can often be fixed, with appropriate test
> > > harnesses.
> > 
> > Worthy goal, but knowing developers I am quite concerned that it would
> > result in (possibly far) fewer kernel contributions. In addition to
> > contributions from unaffiliated individuals, there is a lot of code in
> > vendor trees which is not upstreamed today. Demanding test cases for
> > upstreaming would for sure make the interest in upstreaming that code
> > even lower than it is today.
> Sure, but I did say an *expectation* rather than a hard requirement. We
> are nothing if not pragmatic.
> 
> Having the *infrastructure* in place, and plenty of existing examples,
> would make this a whole lot easier for submitters. And also might be a
> good proving ground for people who would otherwise be doing precisely
> the kind of 'trivial' patches which are often problematic...

That's really worth pursuing. As the author of various media-related drivers 
I've obviously run tests before submitting patches, but most of the time they 
were based on scripts quickly hacked together in an ad-hoc fashion. I've been very 
creative at finding excuses for not investing more in testing, until one day 
I decided to bite the bullet and publish a unit test framework for one of 
those drivers (http://git.ideasonboard.com/renesas/vsp-tests.git). There were 
a few lessons learnt in the process that I find worth sharing.

- Having to deliver test cases to your customer (in the broader sense of the 
term, internal or external) is a very good incentive for developing a test 
suite. In this specific case I was even the one proposing to add tests as part 
as the acceptance criteria, but the important part is that the customer agreed 
to take test development time into account.

- The lack of a test infrastructure was a major reason for not developing 
tests earlier. If we want developers to write and even submit test cases we 
need to lower the barrier to entry. A proper infrastructure (with libraries, 
places where test cases can be submitted, documentation, examples, ...) will not 
only make it more likely that test cases will be created and published, but 
should also hopefully avoid the proliferation of incompatible test frameworks. 

- Catching bugs through self-developed test cases is motivating. We all know 
how useful test cases are in theory, but seeing them run in practice is really 
different. I believe we would get more test cases developed if we could push 
developers through the first few steps and make them experience this by 
themselves.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-11 20:24                       ` Kevin Hilman
  2016-07-11 23:03                         ` Guenter Roeck
@ 2016-07-28 21:09                         ` Laurent Pinchart
  2016-07-28 21:33                           ` Bird, Timothy
  2016-08-02 18:42                           ` Kevin Hilman
  1 sibling, 2 replies; 244+ messages in thread
From: Laurent Pinchart @ 2016-07-28 21:09 UTC (permalink / raw)
  To: ksummit-discuss; +Cc: James Bottomley, Trond Myklebust

On Monday 11 Jul 2016 13:24:25 Kevin Hilman wrote:
> Trond Myklebust <trondmy@primarydata.com> writes:
> > So, we might as well make this a formal proposal.
> > 
> > I’d like to propose that we have a discussion around how to make it
> > easier to implement kernel unit tests. I’ve co-opted Dan as he has
> > expressed both an interest and hands-on experience. :-)
> 
> Count me in.
> 
> I'm working on the kernelci.org project, where we're testing
> mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
> across ~30 unique SoC families: arm, arm64, x86.)
> 
> Right now, we're mainly doing basic boot tests, but are starting to run
> kselftests on all these platforms as well.

Would you be interested in running other test suites as well (for instance 
http://git.ideasonboard.com/renesas/vsp-tests.git) ? If so we need to decide 
on an interface for test suites to make it easy for you to integrate them in 
your test farm.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-28 21:09                         ` Laurent Pinchart
@ 2016-07-28 21:33                           ` Bird, Timothy
  2016-08-02 18:42                           ` Kevin Hilman
  1 sibling, 0 replies; 244+ messages in thread
From: Bird, Timothy @ 2016-07-28 21:33 UTC (permalink / raw)
  To: Laurent Pinchart, ksummit-discuss; +Cc: James Bottomley, Trond Myklebust



> -----Original Message-----
> From: ksummit-discuss-bounces@lists.linuxfoundation.org [mailto:ksummit-
> discuss-bounces@lists.linuxfoundation.org] On Behalf Of Laurent Pinchart
> Sent: Thursday, July 28, 2016 2:09 PM
> To: ksummit-discuss@lists.linuxfoundation.org
> Cc: James Bottomley <James.Bottomley@hansenpartnership.com>; Trond
> Myklebust <trondmy@primarydata.com>
> Subject: Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
> 
> On Monday 11 Jul 2016 13:24:25 Kevin Hilman wrote:
> > Trond Myklebust <trondmy@primarydata.com> writes:
> > > So, we might as well make this a formal proposal.
> > >
> > > I’d like to propose that we have a discussion around how to make it
> > > easier to implement kernel unit tests. I’ve co-opted Dan as he has
> > > expressed both an interest and hands-on experience. :-)
> >
> > Count me in.
> >
> > I'm working on the kernelci.org project, where we're testing
> > mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
> > across ~30 unique SoC families: arm, arm64, x86.)
> >
> > Right now, we're mainly doing basic boot tests, but are starting to run
> > kselftests on all these platforms as well.
> 
> Would you be interested in running other test suites as well (for instance
> http://git.ideasonboard.com/renesas/vsp-tests.git) ? If so we need to decide
> on an interface for test suites to make it easy for you to integrate them in
> your test farm.

I'm interested in integrating more test suites as part of the Fuego [1] project.
This project provides a test framework for the LTSI project and the AGL project
in the Linux Foundation, but we're always looking for more tests.  We are still,
admittedly, still in our early days, but I'd be interested in discussing the requirements
for these tests.

[1] http://bird.org/fuego/FrontPage.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-15 11:24                                             ` Vlastimil Babka
@ 2016-07-28 22:07                                               ` Laurent Pinchart
  0 siblings, 0 replies; 244+ messages in thread
From: Laurent Pinchart @ 2016-07-28 22:07 UTC (permalink / raw)
  To: ksummit-discuss; +Cc: James Bottomley, Vlastimil Babka, Trond Myklebust

On Friday 15 Jul 2016 13:24:10 Vlastimil Babka wrote:
> On 07/15/2016 07:52 AM, NeilBrown wrote:
> > On Fri, Jul 15 2016, Greg KH wrote:
> >> On Thu, Jul 14, 2016 at 07:56:43PM -0700, Guenter Roeck wrote:
> >>> Overall, I can not imagine that it is even possible to use quilt trees
> >>> as basis for development in a company with active kernel
> >>> development, even more so if a large number of engineers and/or a
> >>> number of branches are involved. Sure, the QCOM example may be extreme,
> >>> but do you really think that writing those 2.5M LOC would have been
> >>> possible if QCOM had used Quilt trees instead of git ? Using Quilt
> >>> would for sure have prevented them from writing those 2.5M LOC, but
> >>> then there would be nothing. That doesn't sound like a feasible
> >>> alternative either.
> >> 
> >> It is possible, look at the Red Hat and SuSE kernel development teams.
> >> Yes, in the end, most of the patches are backports from upstream, but
> > 
> > You are glossing over a key point.  We (or at least I as a SUSE kernel
> > developer) don't use quilt for development because, like Guenter says,
> > it would be too clumsy.  I do development upstream if git.  Upstream
> > first.
> > And I have scripts to help turn the result into something suitable for
> > quilt, making the use of quilt a pain rather than a nightmare.
> > 
> > I do find quilt useful when backporting a series of patches so that I
> > can resolve the conflicts on each patch individually and move backwards
> > and forwards through the list of patches.  I don't think git has an easy
> > way to store a branch of patches-that-I-need-to-apply and to then give
> > me one at a time, removing them from the branch.  I could use 'stgit'
> > for that if necessary, though it is very tempting to write something
> > that is better integrated with git.
> 
> I think (but have never actually tried it yet) this should be somehow possible
> with git rebase --interactive and git rebase --edit-todo (where the
> editing of todo would be automatic via some wrappers). You should be
> able to put any commits in the todo to "pick" (not just those you are
> rebasing), effectively cherry-picking them. "quilt push" equivalent
> becomes "git commit" and/or "git rebase --continue" (which reminds me,
> can't count how many times I've used git commit --amend instead of
> rebase --continue after resolving a conflict, which amended the
> *previous* commit, grrr. Having some intelligent wrapper over that would
> be great so you don't have to remember if there was a conflict or not,
> and which command is thus appropriate).

As long as you git add all the changes, git rebase --continue will do the 
right thing, regardless of whether there was a conflict or not. I've stopped 
using git commit --amend when rebasing interactively, unless git rebase --
continue complained that it can't proceed.

There are two main reasons why git rebase --continue can't proceed. The first 
one is when you still have changes in the working copy not added to the index, 
which can be the case when you want to commit the changes in multiple separate 
commits. git rebase --continue will complain, and the right command in that 
case is usually git commit, not git commit --amend. In any case, after the 
error message from git rebase --continue, you can decide how to proceed.

The second reason is also related to applying changes in multiple separate 
commits. If you have applied any commit (either through git commit or git 
commit --amend) since the interactive rebase stopped, even after adding all 
changes to the index git rebase --continue will complain. In that case you 
will again have to decide how to proceed.

The bottom line is that using git rebase --continue will either do the right 
thing, or complain without proceeding, letting you analyze the situation. I've 
found that this minimizes the countless accidental amendments of the previous 
commit that used to bite me all the time as well.

> "quilt pop" would take the HEAD,
> put it as the first thing to "pick" in the rebase todo, and do git reset
> --hard HEAD^.
> 
> Anyway sorry for adding to the OT :)

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-28 21:02                             ` Laurent Pinchart
@ 2016-07-29  0:10                               ` Steven Rostedt
  2016-07-29  8:59                                 ` Laurent Pinchart
  0 siblings, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-07-29  0:10 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, 29 Jul 2016 00:02:12 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:


> - The lack of a test infrastructure was a major reason for not developing 
> tests earlier. If we want developers to write and even submit test cases we 
> need to lower the barrier to entry. A proper infrastructure (with libraries, 
> locations where to submit test cases, documentation, examples, ...) will not 
> only make it more likely that test cases will be created and published, but 
> should also hopefully avoid the proliferation of incompatible test frameworks. 

Does tools/testing/selftests/ not satisfy this?

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29  0:10                               ` Steven Rostedt
@ 2016-07-29  8:59                                 ` Laurent Pinchart
  2016-07-29 14:28                                   ` Steven Rostedt
  2016-07-29 15:12                                   ` Mark Brown
  0 siblings, 2 replies; 244+ messages in thread
From: Laurent Pinchart @ 2016-07-29  8:59 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thursday 28 Jul 2016 20:10:10 Steven Rostedt wrote:
> On Fri, 29 Jul 2016 00:02:12 +0300 Laurent Pinchart wrote:
> >
> > - The lack of a test infrastructure was a major reason for not developing
> > tests earlier. If we want developers to write and even submit test cases
> > we need to lower the barrier to entry. A proper infrastructure (with
> > libraries, locations where to submit test cases, documentation, examples,
> > ...) will not only make it more likely that test cases will be created
> > and published, but should also hopefully avoid the proliferation of
> > incompatible test frameworks.
>
> Does tools/testing/selftests/ not satisfy this?

It does, but lacks features to support driver-related test cases. For instance 
it doesn't (for quite obvious reasons) provide machine-readable information 
about the hardware requirements for a particular test.

I'm not sure whether kselftest could/should be extended for that purpose. Due 
to its integration in the kernel, there is little need to standardize the test 
case interface beyond providing a Makefile to declare the list of test 
programs and compile them. Something slightly more formal is in my opinion 
needed if we want to scale to device driver tests with out-of-tree test cases.

Another limitation of kselftest is the lack of standardization for logging and 
status reporting. This would be needed to interpret the test output in a 
consistent way and generate reports. Regardless of whether we extend kselftest 
to cover device drivers this would in my opinion be worth fixing.
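
Coming back to the machine-readable hardware requirements point above, a
purely hypothetical sketch of what a per-test requirements declaration could
look like (nothing of the sort exists in kselftest today, and the kconfig
symbol and device node below are only illustrative):

struct test_requirement {
        const char *kconfig;    /* symbol that must be =y or =m */
        const char *dev_node;   /* device node that must be present */
};

/* Hypothetical example for a media driver test suite. */
static const struct test_requirement vsp_tests_require[] = {
        { .kconfig = "CONFIG_VIDEO_RENESAS_VSP1", .dev_node = "/dev/media0" },
        { }     /* sentinel */
};

A test runner could then decide up front whether a given test can run on a
given board, instead of discovering it halfway through.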

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29  8:59                                 ` Laurent Pinchart
@ 2016-07-29 14:28                                   ` Steven Rostedt
  2016-08-01 13:53                                     ` Shuah Khan
  2016-07-29 15:12                                   ` Mark Brown
  1 sibling, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-07-29 14:28 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: James Bottomley, Trond Myklebust, Shuah Khan, ksummit-discuss

On Fri, 29 Jul 2016 11:59:47 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> Another limitation of kselftest is the lack of standardization for logging and 
> status reporting. This would be needed to interpret the test output in a 
> consistent way and generate reports. Regardless of whether we extend kselftest 
> to cover device drivers this would in my opinion be worth fixing.
> 

Perhaps this should be a core topic at KS.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29  8:59                                 ` Laurent Pinchart
  2016-07-29 14:28                                   ` Steven Rostedt
@ 2016-07-29 15:12                                   ` Mark Brown
  2016-07-29 15:20                                     ` Steven Rostedt
  2016-08-01 13:35                                     ` Laurent Pinchart
  1 sibling, 2 replies; 244+ messages in thread
From: Mark Brown @ 2016-07-29 15:12 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1595 bytes --]

On Fri, Jul 29, 2016 at 11:59:47AM +0300, Laurent Pinchart wrote:
> On Thursday 28 Jul 2016 20:10:10 Steven Rostedt wrote:

> > Does tools/testing/selftests/ not satisfy this?

> It does, but lacks features to support driver-related test cases. For instance 
> it doesn't (for quite obvious reasons) provide machine-readable information 
> about the hardware requirements for a particular test.

Plus in general the hardware related tests can end up requiring some
specific environment beyond that which is machine enumerable.

> I'm not sure whether kselftest could/should be extended for that purpose. Due 
> to its integration in the kernel, there is little need to standardize the test 
> case interface beyond providing a Makefile to declare the list of test 
> programs and compile them. Something slightly more formal is in my opinion 
> needed if we want to scale to device driver tests with out-of-tree test cases.

There's also the risk that we make it harder for a random user to pick
up the tests and predict what the expected results should be - one of
the things that can really hurt a testsuite is if users don't find it
consistent and stable.

> Another limitation of kselftest is the lack of standardization for logging and 
> status reporting. This would be needed to interpret the test output in a 
> consistent way and generate reports. Regardless of whether we extend kselftest 
> to cover device drivers this would in my opinion be worth fixing.

I thought that was supposed to be logging via stdout/stderr and the
return code for the result.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 15:12                                   ` Mark Brown
@ 2016-07-29 15:20                                     ` Steven Rostedt
  2016-07-29 15:50                                       ` Mark Brown
  2016-08-01 13:35                                     ` Laurent Pinchart
  1 sibling, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-07-29 15:20 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, 29 Jul 2016 16:12:47 +0100
Mark Brown <broonie@kernel.org> wrote:

> On Fri, Jul 29, 2016 at 11:59:47AM +0300, Laurent Pinchart wrote:
> > On Thursday 28 Jul 2016 20:10:10 Steven Rostedt wrote:  
> 
> > > Does tools/testing/selftests/ not satisfy this?  
> 
> > It does, but lacks features to support driver-related test cases. For instance 
> > it doesn't (for quite obvious reasons) provide machine-readable information 
> > about the hardware requirements for a particular test.  
> 
> Plus in general the hardware related tests can end up requiring some
> specific environment beyond that which is machine enumerable.
> 
> > I'm not sure whether kselftest could/should be extended for that purpose. Due 
> > to its integration in the kernel, there is little need to standardize the test 
> > case interface beyond providing a Makefile to declare the list of test 
> > programs and compile them. Something slightly more formal is in my opinion 
> > needed if we want to scale to device driver tests with out-of-tree test cases.  
> 
> There's also the risk that we make it harder for a random user to pick
> up the tests and predict what the expected results should be - one of
> the things that can really hurt a testsuite is if users don't find it
> consistent and stable.

I believe the ideal solution would be that a test would check if the
hardware it wants to test is available or not. If it is not, it simply
returns "Unsupported", and not success or failure.

The kselftests should be run by anyone. If the user doesn't have a
kernel with the right configs, or the right hardware for the test, the
test should exit politely, saying that it could not run due to not
having the proper environment. But if the configs and HW are available,
do you envision having any other type of inconsistent result?
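
Concretely, the "check, then skip" idea is already expressible with the
ksft_exit_*() helpers from tools/testing/selftests/kselftest.h; a minimal
sketch, where the device node is just an illustrative hardware dependency:

#include <unistd.h>
#include "../kselftest.h"

int main(void)
{
        /* No hardware to exercise: report "skipped", not success or failure. */
        if (access("/dev/rtc0", R_OK))
                return ksft_exit_skip();

        /* ... exercise the hardware here ... */

        return ksft_exit_pass();
}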

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 15:20                                     ` Steven Rostedt
@ 2016-07-29 15:50                                       ` Mark Brown
  2016-07-29 16:06                                         ` Steven Rostedt
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-29 15:50 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1338 bytes --]

On Fri, Jul 29, 2016 at 11:20:19AM -0400, Steven Rostedt wrote:
> Mark Brown <broonie@kernel.org> wrote:

> > There's also the risk that we make it harder for a random user to pick
> > up the tests and predict what the expected results should be - one of
> > the things that can really hurt a testsuite is if users don't find it
> > consistent and stable.

> I believe the ideal solution would be that a test would check if the
> hardware it wants to test is available or not. If it is not, it simply
> returns "Unsupported", and not success or failure.

> The kselftests should be run by anyone. If the user doesn't have a
> kernel with the right configs, or the right hardware for the test, the
> test should exit politely, saying that it could not run due to not
> having the proper environment. But if the configs and HW are available,
> do you envision having any other type of inconsistent result?

Right, that's one good strategy - but that's still unpredictable for the
user and there's a reasonable class of bugs that don't get flagged up
when breakage causes something to fail to instantiate, so catching
regressions is that bit harder.  It does also mean we're restricted to
things which don't require any test environment beyond the simple
existence of the hardware, which can be a bit restrictive for some
classes of hardware.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 15:50                                       ` Mark Brown
@ 2016-07-29 16:06                                         ` Steven Rostedt
  2016-07-29 16:48                                           ` Mark Brown
  0 siblings, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-07-29 16:06 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, 29 Jul 2016 16:50:13 +0100
Mark Brown <broonie@kernel.org> wrote:


> Right, that's one good strategy - but that's still unpredictable for the
> user and there's a reasonable class of bugs that don't get flagged up
> when breakage causes something to fail to instantiate, so catching
> regressions is that bit harder.  It does also mean we're restricted to
> things which don't require any test environment beyond the simple
> existence of the hardware, which can be a bit restrictive for some
> classes of hardware.

Well, I don't think there's any answer to that. But I still think it's
better than nothing. If nobody has the hardware, do we ever care if it
gets tested? ;-)

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 16:06                                         ` Steven Rostedt
@ 2016-07-29 16:48                                           ` Mark Brown
  2016-07-29 17:02                                             ` Steven Rostedt
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-07-29 16:48 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 543 bytes --]

On Fri, Jul 29, 2016 at 12:06:52PM -0400, Steven Rostedt wrote:

> Well, I don't think there's any answer to that. But I still think it's
> better than nothing. If nobody has the hardware, do we ever care if it
> gets tested? ;-)

I think it'd be better to split such tests out so that there's a clear
distinction between those that we can tell should run reliably and those
that have some hardware dependency.  That way people who just want a
predictable testsuite without worrying if there's something wrong in
the environment can get one.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 16:48                                           ` Mark Brown
@ 2016-07-29 17:02                                             ` Steven Rostedt
  2016-07-29 21:07                                               ` Alexandre Belloni
  2016-07-30 16:19                                               ` Luis R. Rodriguez
  0 siblings, 2 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-07-29 17:02 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, 29 Jul 2016 17:48:42 +0100
Mark Brown <broonie@kernel.org> wrote:

> On Fri, Jul 29, 2016 at 12:06:52PM -0400, Steven Rostedt wrote:
> 
> > Well, I don't think there's any answer to that. But I still think it's
> > better than nothing. If nobody has the hardware, do we ever care if it
> > gets tested? ;-)  
> 
> I think it'd be better to split such tests out so that there's a clear
> distinction between those that we can tell should run reliably and those
> that have some hardware dependency.  That way people who just want a
> predictable testsuite without worrying if there's something wrong in
> the environment can get one.

Perhaps we should create a separate directory in kselftests for
"hardware dependent" tests.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 17:02                                             ` Steven Rostedt
@ 2016-07-29 21:07                                               ` Alexandre Belloni
  2016-07-29 21:40                                                 ` Steven Rostedt
  2016-07-30 16:19                                               ` Luis R. Rodriguez
  1 sibling, 1 reply; 244+ messages in thread
From: Alexandre Belloni @ 2016-07-29 21:07 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 29/07/2016 at 13:02:44 -0400, Steven Rostedt wrote :
> On Fri, 29 Jul 2016 17:48:42 +0100
> Mark Brown <broonie@kernel.org> wrote:
> 
> > On Fri, Jul 29, 2016 at 12:06:52PM -0400, Steven Rostedt wrote:
> > 
> > > Well, I don't think there's any answer to that. But I still think it's
> > > better than nothing. If nobody has the hardware, do we ever care if it
> > > gets tested? ;-)  
> > 
> > I think it'd be better to split such tests out so that there's a clear
> > distinction between those that we can tell should run reliably and those
> > that have some hardware dependency.  That way people who just want a
> > predictable testsuite without worrying if there's something wrong in
> > the environment can get one.
> 
> Perhaps we should create a separate directory in kselftests for
> "hardware dependent" tests.
> 

Well, some tests depend on hardware availability but any hardware can
work. I'm obviously thinking about RTCs. rtctest can run with any RTC.
Also, one of my questions here is whether kselftests could or couldn't be
destructive. Running rtctest will currently overwrite the next alarm
that may be set in an RTC. I was also planning to extend it in a way
that will unfortunately also overwrite the current date and time.
I'm not sure this is OK, especially for people that want to run those
tests automatically.


-- 
Alexandre Belloni, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 21:07                                               ` Alexandre Belloni
@ 2016-07-29 21:40                                                 ` Steven Rostedt
  2016-08-01 13:41                                                   ` Laurent Pinchart
  0 siblings, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-07-29 21:40 UTC (permalink / raw)
  To: Alexandre Belloni; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, 29 Jul 2016 23:07:39 +0200
Alexandre Belloni <alexandre.belloni@free-electrons.com> wrote:


> Well, some tests depend on hardware availability but any hardware can
> work. I'm obviously thinking about RTCs. rtctest can run with any RTC.
> Also, one of my questions here is whether kselftests could or couldn't be
> destructive. Running rtctest will currently overwrite the next alarm
> that may be set in an RTC. I was also planning to extend it in a way
> that will unfortunately also overwrite the current date and time.
> I'm not sure this is OK, especially for people that want to run those
> tests automatically.


Anything that can cause harm to the system probably shouldn't be added
to kselftests. Unless there's a way you can record what the settings
were, and reset them after the test.
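
For the RTC case specifically, bracketing the destructive steps looks quite
doable; a rough sketch, assuming /dev/rtc0 and the standard RTC ioctls, with
error handling trimmed:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>
#include "../kselftest.h"

int main(void)
{
        struct rtc_time saved_time;
        struct rtc_wkalrm saved_alarm;
        int fd = open("/dev/rtc0", O_RDWR);

        if (fd < 0)
                return ksft_exit_skip();        /* no RTC: skip, don't fail */

        /* Record the state we are about to clobber. */
        ioctl(fd, RTC_RD_TIME, &saved_time);
        ioctl(fd, RTC_WKALM_RD, &saved_alarm);

        /* ... destructive steps: set test alarms, change the date and time ... */

        /* Put things back the way we found them. */
        ioctl(fd, RTC_WKALM_SET, &saved_alarm);
        ioctl(fd, RTC_SET_TIME, &saved_time);
        close(fd);

        return ksft_exit_pass();
}

Restoring the saved wall time obviously loses however long the test ran, so a
real test would resynchronize from the system clock instead; the point is only
that the destructive part can be wrapped in a save and a restore.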

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 17:02                                             ` Steven Rostedt
  2016-07-29 21:07                                               ` Alexandre Belloni
@ 2016-07-30 16:19                                               ` Luis R. Rodriguez
  1 sibling, 0 replies; 244+ messages in thread
From: Luis R. Rodriguez @ 2016-07-30 16:19 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Fri, Jul 29, 2016 at 01:02:44PM -0400, Steven Rostedt wrote:
> On Fri, 29 Jul 2016 17:48:42 +0100
> Mark Brown <broonie@kernel.org> wrote:
> 
> > On Fri, Jul 29, 2016 at 12:06:52PM -0400, Steven Rostedt wrote:
> > 
> > > Well, I don't think there's any answer to that. But I still think it's
> > > better than nothing. If nobody has the hardware, do we ever care if it
> > > gets tested? ;-)  
> > 
> > I think it'd be better to split such tests out so that there's a clear
> > distinction between those that we can tell should run reliably and those
> > that have some hardware dependency.  That way people who just want a
> > predicatable testsuite without worrying if there's something wrong in
> > the environment can get one.
> 
> Perhaps we should create a separate directory in kselftests for
> "hardware dependent" tests.

Tests could depend on a list of soft kconfig entries, which map to
device drivers present -- whether built-in or modules; at run time it
should then be possible to ensure such kconfig symbols are available.
This of course depends on there being a deterministic mapping of
hardware devices to kconfig symbols, which we don't yet have.
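
The run-time half of that is easy enough to sketch today, leaving aside the
missing hardware-to-kconfig mapping. This assumes CONFIG_IKCONFIG_PROC=y so
that /proc/config.gz exists, and shells out to zcat purely for brevity; the
symbol name is only an example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "../kselftest.h"

/* Return 1 if the given symbol is =y or =m in the running kernel. */
static int config_enabled(const char *sym)
{
        char line[512];
        size_t len = strlen(sym);
        int found = 0;
        FILE *fp = popen("zcat /proc/config.gz 2>/dev/null", "r");

        if (!fp)
                return 0;
        while (fgets(line, sizeof(line), fp)) {
                if (!strncmp(line, sym, len) && line[len] == '=' &&
                    (line[len + 1] == 'y' || line[len + 1] == 'm')) {
                        found = 1;
                        break;
                }
        }
        pclose(fp);
        return found;
}

int main(void)
{
        if (!config_enabled("CONFIG_RTC_CLASS"))
                return ksft_exit_skip();

        /* ... the actual hardware-dependent test ... */
        return ksft_exit_pass();
}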

  Luis

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-09 19:15         ` Vlastimil Babka
@ 2016-08-01  9:32           ` Johannes Berg
  2016-08-01 11:10             ` Vlastimil Babka
  0 siblings, 1 reply; 244+ messages in thread
From: Johannes Berg @ 2016-08-01  9:32 UTC (permalink / raw)
  To: Vlastimil Babka, Jiri Kosina, Luck, Tony; +Cc: ksummit-discuss

(sorry for the long delay - was off for a scheduled surgery & vacation)

On Sat, 2016-07-09 at 21:15 +0200, Vlastimil Babka wrote:
> On 07/09/2016 11:29 AM, Johannes Berg wrote:
> > On Sat, 2016-07-09 at 10:34 +0200, Jiri Kosina wrote:
> > 
> > Perhaps a hybrid model, close to what we have today, would work? If
> > a
> > patch is proposed for stable, instead of including it by default,
> > ask
> > the maintainer(s) to separately acknowledge the patch for stable?
> > IOW,
> > rather than sending a patchbomb that requires an explicit NACK
> > (with
> > the previously discussed signal/noise problem), just send a list of
> > commits and ask maintainers to edit it? They could remove and add
> > commits then.
> 
> Does it have to be strictly maintainers? 

Perhaps not.

> What if we just require that
> *somebody* (maintainer or otherwise) tags the patch with something
> like a "Stable-Acked-By", which should mean taking more
> responsibility for it than just forwarding a patch to stable without
> consequences. It should imply that the acker has checked the patch in
> the context of the particular kernel version, and be clearly
> separated from acks/reviews of the mainline commit. It would be of
> course better if the stable tree maintainer would check if the acking
> person is a regular contributor of the subsystem (I guess get-
> maintainers.pl with its git checking can help here).
> This could be required initially at least for patches where the Cc:
> stable wasn't already present at time of maintainer's signed-off-by.

It comes down to a question of trust though - who are the people you
trust with a (hypothetical) "stable-acked-by"? There are no doubt a
number of people who will see that this is the process, and send such a
tag to expedite a patch going into the tree, just because of "process"
and little else.

IOW, I fear that such a thing, if anyone was able to give a go ahead,
wouldn't help anything at all.


johannes

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-01  9:32           ` Johannes Berg
@ 2016-08-01 11:10             ` Vlastimil Babka
  0 siblings, 0 replies; 244+ messages in thread
From: Vlastimil Babka @ 2016-08-01 11:10 UTC (permalink / raw)
  To: Johannes Berg, Jiri Kosina, Luck, Tony; +Cc: ksummit-discuss

On 08/01/2016 11:32 AM, Johannes Berg wrote:
> (sorry for the long delay - was off for a scheduled surgery & vacation)
>
>> What if we just require that
>> *somebody* (maintainer or otherwise) tags the patch with something
>> like a "Stable-Acked-By", which should mean taking more
>> responsibility for it than just forwarding a patch to stable without
>> consequences. It should imply that the acker has checked the patch in
>> the context of the particular kernel version, and be clearly
>> separated from acks/reviews of the mainline commit. It would be of
>> course better if the stable tree maintainer would check if the acking
>> person is a regular contributor of the subsystem (I guess get-
>> maintainers.pl with its git checking can help here).
>> This could be required initially at least for patches where the Cc:
>> stable wasn't already present at time of maintainer's signed-off-by.
>
> It comes down to a question of trust though - who are the people you
> trust with a (hypothetical) "stable-acked-by"? There are no doubt a

This shouldn't be that different from normal Acked-by or Reviewed-by, 
where the maintainer also shouldn't blindly trust everyone. Of course 
it's definitely easier to know the trusted people for a subsystem 
maintainer with a limited number of participants, compared to a 
whole-stable-tree maintainer. That's why I mentioned that the track record 
from git can help at least filter out completely unknown people acking 
things.

> number of people who will see that this is the process, and send such a
> tag to expedite a patch going into the tree, just because of "process"
> and little else.

I guess they are to some extent risking their reputation if they ack 
garbage.

> IOW, I fear that such a thing, if anyone was able to give a go ahead,
> wouldn't help anything at all.

Not "anyone", but anyone with sufficient reputation.

>
> johannes
>

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 15:12                                   ` Mark Brown
  2016-07-29 15:20                                     ` Steven Rostedt
@ 2016-08-01 13:35                                     ` Laurent Pinchart
  2016-08-01 14:24                                       ` Mark Brown
  1 sibling, 1 reply; 244+ messages in thread
From: Laurent Pinchart @ 2016-08-01 13:35 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Friday 29 Jul 2016 16:12:47 Mark Brown wrote:
> On Fri, Jul 29, 2016 at 11:59:47AM +0300, Laurent Pinchart wrote:
> > On Thursday 28 Jul 2016 20:10:10 Steven Rostedt wrote:
> > > Does tools/testing/selftests/ not satisfy this?
> > 
> > It does, but lacks features to support driver-related test cases. For
> > instance it doesn't (for quite obvious reasons) provide machine-readable
> > information about the hardware requirements for a particular test.
> 
> Plus in general the hardware related tests can end up requiring some
> specific environment beyond that which is machine enumerable.
> 
> > I'm not sure whether kselftest could/should be extended for that purpose.
> > Due to its integration in the kernel, there is little need to standardize
> > the test case interface beyond providing a Makefile to declare the list
> > of test programs and compile them. Something slightly more formal is in
> > my opinion needed if we want to scale to device driver tests with
> > out-of-tree test cases.
>
> There's also the risk that we make it harder for a random user to pick
> up the tests and predict what the expected results should be - one of
> the things that can really hurt a testsuite is if users don't find it
> consistent and stable.
> 
> > Another limitation of kselftest is the lack of standardization for logging
> > and status reporting. This would be needed to interpret the test output
> > in a consistent way and generate reports. Regardless of whether we extend
> > kselftest to cover device drivers this would in my opinion be worth
> > fixing.
> 
> I thought that was supposed to be logging via stdout/stderr and the
> return code for the result.

Yes, but that's a bit limited. For instance we have no way to differentiate a 
test that failed from a test that can't be run due to a missing dependency as 
the value of the error code isn't standardized.

Standardizing the format for the success or failure messages could also improve 
consistency. I'm not advocating (at least for now) for any specific format, 
but outputting messages in a standardized format that can easily be consumed 
by test runners (e.g. TAP [0], but that's just an example) could be 
beneficial.

[0] https://en.wikipedia.org/wiki/Test_Anything_Protocol
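
Just to make that concrete, TAP output is little more than a plan line
followed by ok/not ok lines; a toy reporter, only to illustrate the kind of
machine-readable output meant here (not a proposal for kselftest's format,
and the test names are made up):

#include <stdio.h>

static int test_no;

static void tap_result(int ok, const char *name)
{
        printf("%s %d - %s\n", ok ? "ok" : "not ok", ++test_no, name);
}

int main(void)
{
        printf("1..2\n");       /* the plan: two test results will follow */
        tap_result(1, "device node present");
        tap_result(1, "read back expected value");
        return 0;
}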

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 21:40                                                 ` Steven Rostedt
@ 2016-08-01 13:41                                                   ` Laurent Pinchart
  0 siblings, 0 replies; 244+ messages in thread
From: Laurent Pinchart @ 2016-08-01 13:41 UTC (permalink / raw)
  To: ksummit-discuss; +Cc: James Bottomley, Trond Myklebust

Hi Steven,

On Friday 29 Jul 2016 17:40:35 Steven Rostedt wrote:
> On Fri, 29 Jul 2016 23:07:39 +0200 Alexandre Belloni wrote:
> > Well, some tests depend on hardware availability but any hardware can
> > work. I'm obviously thinking about RTCs. rtctest can run with any RTC.
> > Also, one of my question here is whether kselftests could or couldn't be
> > destructive. Running rtctest will currently overwrite the next alarm
> > that may be set in an RTC. I was also planning to extend it in a way
> > that will unfortunately also overwrite the current date and time.
> > I'm not sure this is OK, especially for people that want to run those
> > tests automatically.
> 
> Anything that can cause harm to the system probably shouldn't be added
> to kselftests. Unless there's a way you can record what the settings
> were, and reset them after the test.

Agreed. If we could standardize the test framework enough to make out-of-tree 
tests possible, the kselftest unit tests and the out-of-tree tests could 
implement the same interface and be handled by the same test runners. 
kselftest could then be a standard library of core tests maintained in the 
kernel, with additional tests available from different locations.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-29 14:28                                   ` Steven Rostedt
@ 2016-08-01 13:53                                     ` Shuah Khan
  2016-08-03  4:47                                       ` Bird, Timothy
  0 siblings, 1 reply; 244+ messages in thread
From: Shuah Khan @ 2016-08-01 13:53 UTC (permalink / raw)
  To: Steven Rostedt, Laurent Pinchart
  Cc: James Bottomley, Trond Myklebust, Shuah Khan, ksummit-discuss

On 07/29/2016 08:28 AM, Steven Rostedt wrote:
> On Fri, 29 Jul 2016 11:59:47 +0300
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
> 
>> Another limitation of kselftest is the lack of standardization for logging and 
>> status reporting. This would be needed to interpret the test output in a 
>> consistent way and generate reports. Regardless of whether we extend kselftest 
>> to cover device drivers this would in my opinion be worth fixing.
>>
> 
> Perhaps this should be a core topic at KS.
> 

Yes definitely. There has been some effort in standardizing,
but not enough. We can discuss and see what would make the
kselftest more usable without adding external dependencies.

One thing we could do is add a script to interpret the test output
and turn it into a usable format.

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-01 13:35                                     ` Laurent Pinchart
@ 2016-08-01 14:24                                       ` Mark Brown
  0 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-01 14:24 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1534 bytes --]

On Mon, Aug 01, 2016 at 04:35:58PM +0300, Laurent Pinchart wrote:
> On Friday 29 Jul 2016 16:12:47 Mark Brown wrote:
> > On Fri, Jul 29, 2016 at 11:59:47AM +0300, Laurent Pinchart wrote:

> > > Another limitation of kselftest is the lack of standardization for logging
> > > and status reporting. This would be needed to interpret the test output
> > > in a consistent way and generate reports. Regardless of whether we extend
> > > kselftest to cover device drivers this would in my opinion be worth
> > > fixing.

> > I thought that was supposed to be logging via stdout/stderr and the
> > return code for the result.

> Yes, but that's a bit limited. For instance we have no way to differentiate a 
> test that failed from a test that can't be run due to a missing dependency as 
> the value of the error code isn't standardized.

I actually went and looked to see where we're at now - there are
standard exit codes for this in kselftest.h, following a discussion a few
years ago, which are getting some use (via helper functions also in
there).  We've got pass/fail, expected pass/fail and skip.

> Standardizing format for the success or failure messages could also improve 
> consistency. I'm not advocating (at least for now) for any specific format, 
> but outputting messages in a standardized format that can easily be consumed 
> by test runners (e.g. TAP [0], but that's just an example) could be 
> beneficial.

There's some stuff for summary lines in there but yes, this could use
some work.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-08 22:35 [Ksummit-discuss] [CORE TOPIC] stable workflow Jiri Kosina
                   ` (3 preceding siblings ...)
  2016-07-11  7:44 ` Christian Borntraeger
@ 2016-08-02 13:49 ` Jani Nikula
  4 siblings, 0 replies; 244+ messages in thread
From: Jani Nikula @ 2016-08-02 13:49 UTC (permalink / raw)
  To: Jiri Kosina, ksummit-discuss

On Sat, 09 Jul 2016, Jiri Kosina <jikos@kernel.org> wrote:
> Yeah, this topic again. It'd be a sad year on ksummit-discuss@ without it, 
> wouldn't it? :)
>
> As a SUSE Enterprise Linux kernel maintainer, stable kernel is one of the 
> crucial elements I rely on (and I also try to make sure that SUSE 
> contributes back as much as possible).
>
> Hence any planned changes in the workflow / releases are rather essential 
> for me, and I'd like to participate, should any such discussion take 
> place.

Whoa, I haven't even properly begun to read this massive thread yet, but
being the drm/i915 maintainer responsible for herding our fixes, I'd
like to participate.

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-07-10  3:00                     ` James Bottomley
  2016-07-10  3:07                       ` Trond Myklebust
  2016-07-26 13:35                       ` David Woodhouse
@ 2016-08-02 14:12                       ` Jani Nikula
  2016-08-02 15:34                         ` Mark Brown
  2 siblings, 1 reply; 244+ messages in thread
From: Jani Nikula @ 2016-08-02 14:12 UTC (permalink / raw)
  To: James Bottomley, Rafael J. Wysocki; +Cc: Trond Myklebust, ksummit-discuss

On Sun, 10 Jul 2016, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> If I'm suspicious of something, I usually mark it not to be backported
> until we've got some testing:
>
> cc: stable@vger.kernel.org # delay until 4.8-rc1
>
> Greg seems to be able to cope with this.

So what do you do to prevent said commit from being backported to stable
kernels if it turns out bad (at any step of the way, really)? How does
it work? cc: stable is fire-and-forget, but sadly without self-destruct
when things go bad.

And where do you set the paranoia level with that "delay until" thing?

Generally adding cc: stable is like, this is clearly a fix to a bug that
is present in stable kernels, and the bug should be fixed, but I have no
idea nor resources to review or test if this is the right fix across all
stable kernels. You end up relying on your gut feeling too much to be
comfortable. You have to make the call too early in the process.

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-02 14:12                       ` Jani Nikula
@ 2016-08-02 15:34                         ` Mark Brown
  2016-08-02 23:17                           ` Rafael J. Wysocki
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-08-02 15:34 UTC (permalink / raw)
  To: Jani Nikula; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:

> Generally adding cc: stable is like, this is clearly a fix to a bug that
> is present in stable kernels, and the bug should be fixed, but I have no
> idea nor resources to review or test if this is the right fix across all
> stable kernels. You end up relying on your gut feeling too much to be
> comfortable. You have to make the call too early in the process.

I think the problems here are more in the process of how things go from
being tagged stable to appearing in a stable release - the QA or lack
thereof and so on.  While I do share some of your misgivings here I do
also really like the fact that it's really easy for people to push
things out for the attention of those working on backports.  It's
essentially the same as the question I often find myself asking people
who don't upstream - "why would this fix not benefit other users?".

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-07-28 21:09                         ` Laurent Pinchart
  2016-07-28 21:33                           ` Bird, Timothy
@ 2016-08-02 18:42                           ` Kevin Hilman
  2016-08-02 19:44                             ` Laurent Pinchart
  1 sibling, 1 reply; 244+ messages in thread
From: Kevin Hilman @ 2016-08-02 18:42 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

Laurent Pinchart <laurent.pinchart@ideasonboard.com> writes:

> On Monday 11 Jul 2016 13:24:25 Kevin Hilman wrote:
>> Trond Myklebust <trondmy@primarydata.com> writes:
>> > So, we might as well make this a formal proposal.
>> > 
>> > I’d like to propose that we have a discussion around how to make it
>> > easier to implement kernel unit tests. I’ve co-opted Dan as he has
>> > expressed both an interest and hands-on experience. :-)
>> 
>> Count me in.
>> 
>> I'm working on the kernelci.org project, where we're testing
>> mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
>> across ~30 unique SoC families: arm, arm64, x86.)
>> 
>> Right now, we're mainly doing basic boot tests, but are starting to run
>> kselftests on all these platforms as well.
>
> Would you be interested in running other test suites as well (for instance 
> http://git.ideasonboard.com/renesas/vsp-tests.git) ? If so we need to decide 
> on an interface for test suites to make it easy for you to integrate them in 
> your test farm.

Sure.  Running tests is the easy part.  It's parsing results and making
useful reports about regressions that involves a little more work.

Kevin

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-08-02 18:42                           ` Kevin Hilman
@ 2016-08-02 19:44                             ` Laurent Pinchart
  2016-08-02 20:33                               ` Mark Brown
  0 siblings, 1 reply; 244+ messages in thread
From: Laurent Pinchart @ 2016-08-02 19:44 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

Hi Kevin,

On Tuesday 02 Aug 2016 11:42:30 Kevin Hilman wrote:
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> writes:
> > On Monday 11 Jul 2016 13:24:25 Kevin Hilman wrote:
> >> Trond Myklebust <trondmy@primarydata.com> writes:
> >> > So, we might as well make this a formal proposal.
> >> > 
> >> > I’d like to propose that we have a discussion around how to make it
> >> > easier to implement kernel unit tests. I’ve co-opted Dan as he has
> >> > expressed both an interest and hands-on experience. :-)
> >> 
> >> Count me in.
> >> 
> >> I'm working on the kernelci.org project, where we're testing
> >> mainline/next/stable-rc/stable etc. on real hardware (~200 unique boards
> >> across ~30 unique SoC families: arm, arm64, x86.)
> >> 
> >> Right now, we're mainly doing basic boot tests, but are starting to run
> >> kselftests on all these platforms as well.
> > 
> > Would you be interested in running other test suites as well (for instance
> > http://git.ideasonboard.com/renesas/vsp-tests.git) ? If so we need to
> > decide on an interface for test suites to make it easy for you to
> > integrate them in your test farm.
> 
> Sure.  Running tests is the easy part.  It's parsing results and making
> useful reports about regressions that involves a little more work.

Which again calls for a common format for test invocation and test results.
Is that something you've already given any thought to, by any chance?
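
Just to make that concrete, the contract could be as small as one wrapper
script per test suite, along these lines (a straw man only, all the names
below are made up):

  #!/bin/sh
  # run-tests.sh: common entry point a test farm could invoke for any
  # suite.  Assumed convention: exit 0 if everything passed, non-zero
  # otherwise, and print one "ok"/"not ok" line per test case on stdout
  # so that results can be parsed uniformly.
  i=0
  status=0
  for t in ./test-*.sh; do
      [ -e "$t" ] || continue
      i=$((i + 1))
      if "$t" > "$t.log" 2>&1; then
          echo "ok $i - $t"
      else
          echo "not ok $i - $t"
          status=1
      fi
  done
  echo "1..$i"
  exit $status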

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] kernel unit testing
  2016-08-02 19:44                             ` Laurent Pinchart
@ 2016-08-02 20:33                               ` Mark Brown
  0 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-02 20:33 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Tue, Aug 02, 2016 at 10:44:11PM +0300, Laurent Pinchart wrote:
> On Tuesday 02 Aug 2016 11:42:30 Kevin Hilman wrote:

> > Sure.  Running tests is the easy part.  It's parsing results and making
> > useful reports about regressions that involves a little more work.

> Which again calls for a common format for test invocation and test results.
> Is that something you've already given any thought to, by any chance?

Not substantially - the initial priority is to get something running to
generate data to allow reporting work to start and provide something to
start driving adoption in the community and then look at scaling out to
other testsuites.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-02 15:34                         ` Mark Brown
@ 2016-08-02 23:17                           ` Rafael J. Wysocki
  2016-08-03  9:36                             ` Jani Nikula
  0 siblings, 1 reply; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-08-02 23:17 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
> 
> > Generally adding cc: stable is like, this is clearly a fix to a bug that
> > is present in stable kernels, and the bug should be fixed, but I have no
> > idea nor resources to review or test if this is the right fix across all
> > stable kernels. You end up relying on your gut feeling too much to be
> > comfortable. You have to make the call too early in the process.
> 
> I think the problems here are more in the process of how things go from
> being tagged stable to appearing in a stable release - the QA or lack
> thereof and so on.  While I do share some of your misgivings here I do
> also really like the fact that it's really easy for people to push
> things out for the attention of those working on backports.  It's
> essentially the same as the question I often find myself asking people
> who don't upstream - "why would this fix not benefit other users?".

Agreed, and I think that's exactly where the expectations don't match what's
delivered in the long-term-stable trees.

It should be made clear that "stable" doesn't mean "no regressions".  What
it really means is "hey, if you care about backports, this is the stuff to take
into consideration in the first place".

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-01 13:53                                     ` Shuah Khan
@ 2016-08-03  4:47                                       ` Bird, Timothy
  0 siblings, 0 replies; 244+ messages in thread
From: Bird, Timothy @ 2016-08-03  4:47 UTC (permalink / raw)
  To: Shuah Khan, Steven Rostedt, Laurent Pinchart
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss



> -----Original Message-----
> From: ksummit-discuss-bounces@lists.linuxfoundation.org [mailto:ksummit-
> discuss-bounces@lists.linuxfoundation.org] On Behalf Of Shuah Khan
> On 07/29/2016 08:28 AM, Steven Rostedt wrote:
> > On Fri, 29 Jul 2016 11:59:47 +0300
> > Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
> >
> >> Another limitation of kselftest is the lack of standardization for logging and
> >> status reporting. This would be needed to interpret the test output in a
> >> consistent way and generate reports. Regardless of whether we extend
> >> kselftest
> >> to cover device drivers this would in my opinion be worth fixing.
> >>
> >
> > Perhaps this should be a core topic at KS.
> >
> 
> Yes definitely. There has been some effort in standardizing,
> but not enough. We can discuss and see what would make the
> kselftest more usable without adding external dependencies.
> 
> One thing we could do is add a script to interpret and turn the
> test output into a usable format.

Just FYI on what Fuego [1] does here:

It basically has to take the output from tests with many different output formats,
and convert each one into a single pass/fail value for each test, for the Jenkins interface.

It uses a short shell function called log_compare to scan a log (the test program
output) for a regular expression.  It is passed a test_name, a match_count, a
regular_expression, and a result_category.  The result category is "p" for positive
or "n" for negative.  The regular expression is passed to
"grep -E <regular_expression> <logfile> | wc -l" and the result is compared to the
match_count.  If it matches, an additional comparison is made between the logfile
filtered by the regular_expression and a previously saved filtered log.  If the number
of occurrences matches, and the current filtered log matches the previously filtered
log, then the test is considered to have succeeded.  The test_name is used to find
the previously saved filtered log.

Here is the code, in case the description is not clear:
function log_compare {
# 1 - test_name, 2 - match_count, 3 - regular_expression, 4 - n/p (i.e. negative or positive)

  cd "$FUEGO_LOGS_PATH/${JOB_NAME}/testlogs"
  LOGFILE="${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.log"
  PARSED_LOGFILE="${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.${4}.log"

  if [ -e $LOGFILE ]; then
    current_count=`cat $LOGFILE | grep -E "${3}" 2>&1 | wc -l`
    if [ $current_count -eq $2 ];then
      cat $LOGFILE | grep -E "${3}" | tee "$PARSED_LOGFILE"
      # run diff separately from 'local' so that $? below reflects diff's
      # exit status, not the exit status of the 'local' builtin
      local TMP_P
      TMP_P=`diff -u ${WORKSPACE}/../ref_logs/${JOB_NAME}/${1}_${4}.log "$PARSED_LOGFILE" 2>&1`
      if [ $? -ne 0 ];then
        echo -e "\nFuego error reason: Unexpected test log output:\n$TMP_P\n"
        check_create_functional_logrun "test error"
        false
      else
        check_create_functional_logrun "passed"
        true
      fi
    else
      echo -e "\nFuego error reason: Mismatch in expected ($2) and actual ($current_count) pos/neg ($4) results. (pattern: $3)\n"
      check_create_functional_logrun "failed"
      false
    fi
  else
    echo -e "\nFuego error reason: 'logs/${JOB_NAME}/testlogs/$LOGFILE' is missing.\n"
    check_create_functional_logrun "test error"
    false
  fi

  cd -
}

This is called with a line like the following:
   log_compare $TESTNAME "11" "^Test-.*OK" "p"
or
   log_compare $TESTNAME "0" "^Test-.*Failed" "n"

The reason for the match_count is that many tests that Fuego runs have
lots of sub-tests (LTP being a prime example), and you want to figure out
if you're getting the same number of positive or negative results
that you are expecting.  The match_count is sometimes parameterized, so
that you can tune the system to ignore some failures.

The system ships with <test_name>_p.log and <test_name>_n.log files
(previously filtered log files) for each test.

I think in general you want a system that provides default expected results
while still allowing developers to tune it for individual sub-tests that
fail for some reason on their system.  One of the biggest problems with
tests is that users often don't have a baseline of what they should expect
to see (what is "good" output vs. what actually shows a problem).

'grep -E <regular_expression>' is about the most basic thing you
can do in terms of parsing a log.  Fuego also includes a python-based
parser to extract out benchmarking data, for use in charting and
threshold regression checking, but that seems like overkill for a first pass
at this with kselftest. (IMHO)

FWIW I'm interested in how this shakes out because I want to wrap kselftest into
Fuego.  I'm not on the list for the summit, but I'd like to stay in the discussion via e-mail.
 -- Tim

[1] http://bird.org/fuego/FrontPage

P.S. by the way, there's a bug in the above log_compare code.  Don't use it directly.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-02 23:17                           ` Rafael J. Wysocki
@ 2016-08-03  9:36                             ` Jani Nikula
  2016-08-03 11:09                               ` Greg KH
  2016-08-03 11:12                               ` Mark Brown
  0 siblings, 2 replies; 244+ messages in thread
From: Jani Nikula @ 2016-08-03  9:36 UTC (permalink / raw)
  To: Rafael J. Wysocki, Mark Brown
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
>> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
>> 
>> > Generally adding cc: stable is like, this is clearly a fix to a bug that
>> > is present in stable kernels, and the bug should be fixed, but I have no
>> > idea nor resources to review or test if this is the right fix across all
>> > stable kernels. You end up relying on your gut feeling too much to be
>> > comfortable. You have to make the call too early in the process.
>> 
>> I think the problems here are more in the process of how things go from
>> being tagged stable to appearing in a stable release - the QA or lack
>> thereof and so on.  While I do share some of your misgivings here I do
>> also really like the fact that it's really easy for people to push
>> things out for the attention of those working on backports.  It's
>> essentially the same as the question I often find myself asking people
>> who don't upstream - "why would this fix not benefit other users?".
>
> Agreed, and I think that's exactly where the expectations don't match what's
> delivered in the long-term-stable trees.
>
> It should be made clear that "stable" doesn't mean "no regressions".  What
> it really means is "hey, if you care about backports, this is the stuff to take
> into consideration in the first place".

I think this interpretation matches reality better than what
Documentation/stable_kernel_rules.txt leads you to believe about adding
cc: stable tag.

However, I presume maintainers don't add cc: stable lightly, even when
the fix could benefit stable kernel users, if there's any risk of the
backport coming back to haunt you. I believe maintainers assume some
degree of responsibility for the backport when they add cc: stable, even
when they don't have the means to do QA. (And with the plethora of
longterm kernels around these days, who does?)

But does being more liberal in adding cc: stable tags and shifting the
responsibility for backports towards stable kernel maintainers work
either? The bugs will anyway be reported to subsystem/driver
maintainers, not stable maintainers.

Side note, I think it would be helpful to be allowed to revert clearly
broken stable kernel backports even without an accompanying mainline
revert. The original commit might be perfectly fine upstream, while the
backport is bogus.


BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03  9:36                             ` Jani Nikula
@ 2016-08-03 11:09                               ` Greg KH
  2016-08-03 13:05                                 ` Jani Nikula
                                                   ` (2 more replies)
  2016-08-03 11:12                               ` Mark Brown
  1 sibling, 3 replies; 244+ messages in thread
From: Greg KH @ 2016-08-03 11:09 UTC (permalink / raw)
  To: Jani Nikula; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:
> On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> > On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
> >> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
> >> 
> >> > Generally adding cc: stable is like, this is clearly a fix to a bug that
> >> > is present in stable kernels, and the bug should be fixed, but I have no
> >> > idea nor resources to review or test if this is the right fix across all
> >> > stable kernels. You end up relying on your gut feeling too much to be
> >> > comfortable. You have to make the call too early in the process.
> >> 
> >> I think the problems here are more in the process of how things go from
> >> being tagged stable to appearing in a stable release - the QA or lack
> >> thereof and so on.  While I do share some of your misgivings here I do
> >> also really like the fact that it's really easy for people to push
> >> things out for the attention of those working on backports.  It's
> >> essentially the same as the question I often find myself asking people
> >> who don't upstream - "why would this fix not benefit other users?".
> >
> > Agreed, and I think that's exactly where the expectations don't match what's
> > delivered in the long-term-stable trees.
> >
> > It should be made clear that "stable" doesn't mean "no regressions".  What
> > it really means is "hey, if you care about backports, this is the stuff to take
> > into consideration in the first place".
> 
> I think this interpretation matches reality better than what
> Documentation/stable_kernel_rules.txt leads you to believe about adding
> cc: stable tag.

really?  Yes, we have regressions at times in stable kernels, but
really, our % is _very_ low.  Probably less than "normal" releases, but
that's just a random guess, it would be good for someone to try to do
research on this before guessing...

> However, I presume maintainers don't add cc: stable lightly, even when
> the fix could benefit stable kernel users, if there's any risk of the
> backport coming back to haunt you. I believe maintainers assume some
> degree of responsibility for the backport when they add cc: stable, even
> when they don't have the means to do QA. (And with the plethora of
> longterm kernels around these days, who does?)
> 
> But does being more liberal in adding cc: stable tags and shifting the
> responsibility for backports towards stable kernel maintainers work
> either? The bugs will anyway be reported to subsystem/driver
> maintainers, not stable maintainers.
> 
> Side note, I think it would be helpful to be allowed to revert clearly
> broken stable kernel backports even without an accompanying mainline
> revert. The original commit might be perfectly fine upstream, while the
> backport is bogus.

Since when do we reject such reverts?  I'd much rather fix something
properly, the way it was fixed in Linus's tree, as that is better off
for users, don't you agree?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03  9:36                             ` Jani Nikula
  2016-08-03 11:09                               ` Greg KH
@ 2016-08-03 11:12                               ` Mark Brown
  1 sibling, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-03 11:12 UTC (permalink / raw)
  To: Jani Nikula; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:

> However, I presume maintainers don't add cc: stable lightly, even when
> the fix could benefit stable kernel users, if there's any risk of the
> backport coming back to haunt you. I believe maintainers assume some
> degree of responsibility for the backport when they add cc: stable, even
> when they don't have the means to do QA. (And with the plethora of
> longterm kernels around these days, who does?)

> But does being more liberal in adding cc: stable tags and shifting the
> responsibility for backports towards stable kernel maintainers work

I'm not saying be *more* liberal, I'm saying that's pretty much where
we are at the minute.

> either? The bugs will anyway be reported to subsystem/driver
> maintainers, not stable maintainers.

That's not happening in my experience; people working with stable
generally seem to involve the stable people in the few cases where
there's a problem.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 11:09                               ` Greg KH
@ 2016-08-03 13:05                                 ` Jani Nikula
  2016-08-03 13:26                                   ` Greg KH
  2016-08-03 13:20                                 ` Rafael J. Wysocki
  2016-08-03 15:47                                 ` Guenter Roeck
  2 siblings, 1 reply; 244+ messages in thread
From: Jani Nikula @ 2016-08-03 13:05 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, 03 Aug 2016, Greg KH <greg@kroah.com> wrote:
> On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:
>> On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
>> > On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
>> >> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
>> >> 
>> >> > Generally adding cc: stable is like, this is clearly a fix to a bug that
>> >> > is present in stable kernels, and the bug should be fixed, but I have no
>> >> > idea nor resources to review or test if this is the right fix across all
>> >> > stable kernels. You end up relying on your gut feeling too much to be
>> >> > comfortable. You have to make the call too early in the process.
>> >> 
>> >> I think the problems here are more in the process of how things go from
>> >> being tagged stable to appearing in a stable release - the QA or lack
>> >> thereof and so on.  While I do share some of your misgivings here I do
>> >> also really like the fact that it's really easy for people to push
>> >> things out for the attention of those working on backports.  It's
>> >> essentially the same as the question I often find myself asking people
>> >> who don't upstream - "why would this fix not benefit other users?".
>> >
>> > Agreed, and I think that's exactly where the expectations don't match what's
>> > delivered in the long-term-stable trees.
>> >
>> > It should be made clear that "stable" doesn't mean "no regressions".  What
>> > it really means is "hey, if you care about backports, this is the stuff to take
>> > into consideration in the first place".
>> 
>> I think this interpretation matches reality better than what
>> Documentation/stable_kernel_rules.txt leads you to believe about adding
>> cc: stable tag.
>
> really?  Yes, we have regressions at times in stable kernels, but
> really, our % is _very_ low.  Probably less than "normal" releases, but
> that's just a random guess, it would be good for someone to try to do
> research on this before guessing...

Sorry for not being clear. More than anything, I was looking for a
better definition of what cc: stable means when applied by developers
and maintainers, and what the expectation should be. Should it be
considered more a hint that the commit should be considered for
backporting or an explicit request to backport? This will affect how
easily maintainers add cc: stable. Should I add it to any fix that I
think might be useful that satisfies stable rules, or just the severe
ones that are absolutely needed, or something between? I just don't
think we have any consistency on this across the kernel.

>> However, I presume maintainers don't add cc: stable lightly, even when
>> the fix could benefit stable kernel users, if there's any risk of the
>> backport coming back to haunt you. I believe maintainers assume some
>> degree of responsibility for the backport when they add cc: stable, even
>> when they don't have the means to do QA. (And with the plethora of
>> longterm kernels around these days, who does?)
>> 
>> But does being more liberal in adding cc: stable tags and shifting the
>> responsibility for backports towards stable kernel maintainers work
>> either? The bugs will anyway be reported to subsystem/driver
>> maintainers, not stable maintainers.
>> 
>> Side note, I think it would be helpful to be allowed to revert clearly
>> broken stable kernel backports even without an accompanying mainline
>> revert. The original commit might be perfectly fine upstream, while the
>> backport is bogus.
>
> Since when do we reject such reverts?  I'd much rather fix something
> properly, the way it was fixed in Linus's tree, as that is better off
> for users, don't you agree?

Of course I agree with that, but it's not what I was saying! If Linus'
tree + some commit is the perfect fix, it doesn't mean that Linus' tree
half a dozen releases ago + some commit also is. If we tagged that
commit cc: stable and it fails in stable, we should be able to revert
that from stable without touching Linus' tree. Perhaps it's a corner
case and generally not a problem [citation needed], but we've hit
it. Not sure if that's enough to warrant a mention in stable rules?

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 11:09                               ` Greg KH
  2016-08-03 13:05                                 ` Jani Nikula
@ 2016-08-03 13:20                                 ` Rafael J. Wysocki
  2016-08-03 13:21                                   ` Jiri Kosina
  2016-08-03 13:39                                   ` Greg KH
  2016-08-03 15:47                                 ` Guenter Roeck
  2 siblings, 2 replies; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-08-03 13:20 UTC (permalink / raw)
  To: Greg KH; +Cc: ksummit-discuss, James Bottomley, Trond Myklebust

On Wednesday, August 03, 2016 01:09:35 PM Greg KH wrote:
> On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:
> > On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> > > On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
> > >> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
> > >> 
> > >> > Generally adding cc: stable is like, this is clearly a fix to a bug that
> > >> > is present in stable kernels, and the bug should be fixed, but I have no
> > >> > idea nor resources to review or test if this is the right fix across all
> > >> > stable kernels. You end up relying on your gut feeling too much to be
> > >> > comfortable. You have to make the call too early in the process.
> > >> 
> > >> I think the problems here are more in the process of how things go from
> > >> being tagged stable to appearing in a stable release - the QA or lack
> > >> thereof and so on.  While I do share some of your misgivings here I do
> > >> also really like the fact that it's really easy for people to push
> > >> things out for the attention of those working on backports.  It's
> > >> essentially the same as the question I often find myself asking people
> > >> who don't upstream - "why would this fix not benefit other users?".
> > >
> > > Agreed, and I think that's exactly where the expectations don't match what's
> > > delivered in the long-term-stable trees.
> > >
> > > It should be made clear that "stable" doesn't mean "no regressions".  What
> > > it really means is "hey, if you care about backports, this is the stuff to take
> > > into consideration in the first place".
> > 
> > I think this interpretation matches reality better than what
> > Documentation/stable_kernel_rules.txt leads you to believe about adding
> > cc: stable tag.
> 
> really?

Honestly, I think so.

> Yes, we have regressions at times in stable kernels, but
> really, our % is _very_ low.  Probably less than "normal" releases, but
> that's just a random guess, it would be good for someone to try to do
> research on this before guessing...

Jon did some of that at LWN (http://lwn.net/Articles/692866/) and he got
regression rate estimates for various -stable lines in the range between
0.6-1.4% (4.6) and 2.2-9.6% (3.14).

Of course, whether or not these numbers are significant is a matter of
discussion, but they are clearly nonzero.

Now, I understand why there are regressions in -stable and to me it would
be just fine to say that they will be there occasionally, so as to prevent
supporting the "no regressions in -stable at all" expectation that (a) is
unrealistic today and (b) seems to be quite widespread.

Or do we really want to meet that expectation?

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:20                                 ` Rafael J. Wysocki
@ 2016-08-03 13:21                                   ` Jiri Kosina
  2016-08-04  1:05                                     ` Rafael J. Wysocki
  2016-08-03 13:39                                   ` Greg KH
  1 sibling, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 13:21 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, 3 Aug 2016, Rafael J. Wysocki wrote:

> Now, I understand why there are regressions in -stable and to me it would
> be just fine to say that they will be there occasionally, so as to prevent
> supporting the "no regressions in -stable at all" expectation that (a) is
> unrealistic today and (b) seems to be quite widespread.
> 
> Or do we really want to meet that expectation?

My primary goal when I was starting this thread (and I certainly didn't 
expect it to become such a gigantic monster :) ) was to try to figure out 
how to do better on this front. Of course, perfect is the enemy of good, 
but *trying* to find ways to improve the regression rate seems like a
reasonable thing to attempt at least.

That's where some of my proposals in this thread some time ago were coming 
from.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:05                                 ` Jani Nikula
@ 2016-08-03 13:26                                   ` Greg KH
  2016-08-03 13:48                                     ` Jiri Kosina
  2016-08-03 14:12                                     ` Jani Nikula
  0 siblings, 2 replies; 244+ messages in thread
From: Greg KH @ 2016-08-03 13:26 UTC (permalink / raw)
  To: Jani Nikula; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Aug 03, 2016 at 04:05:35PM +0300, Jani Nikula wrote:
> > really?  Yes, we have regressions at times in stable kernels, but
> > really, our % is _very_ low.  Probably less than "normal" releases, but
> > that's just a random guess, it would be good for someone to try to do
> > research on this before guessing...
> 
> Sorry for not being clear. More than anything, I was looking for a
> better definition of what cc: stable means when applied by developers
> and maintainers, and what the expectation should be. Should it be
> considered more a hint that the commit should be considered for
> backporting or an explicit request to backport? This will affect how
> easily maintainers add cc: stable. Should I add it to any fix that I
> think might be useful that satisfies stable rules, or just the severe
> ones that are absolutely needed, or something between? I just don't
> think we have any consistency on this across the kernel.

Really?  After 10+ years of stable kernels people still don't understand
this?  Are you sure?  Isn't the stable rule list explicit enough?  What
more do I need to do to say, "stable patches fix bugs"?

It really isn't hard here people, don't make it more difficult than it
has to be.

> >> However, I presume maintainers don't add cc: stable lightly, even when
> >> the fix could benefit stable kernel users, if there's any risk of the
> >> backport coming back to haunt you. I believe maintainers assume some
> >> degree of responsibility for the backport when they add cc: stable, even
> >> when they don't have the means to do QA. (And with the plethora of
> >> longterm kernels around these days, who does?)
> >> 
> >> But does being more liberal in adding cc: stable tags and shifting the
> >> responsibility for backports towards stable kernel maintainers work
> >> either? The bugs will anyway be reported to subsystem/driver
> >> maintainers, not stable maintainers.
> >> 
> >> Side note, I think it would be helpful to be allowed to revert clearly
> >> broken stable kernel backports even without an accompanying mainline
> >> revert. The original commit might be perfectly fine upstream, while the
> >> backport is bogus.
> >
> > Since when do we reject such reverts?  I'd much rather fix something
> > properly, the way it was fixed in Linus's tree, as that is better off
> > for users, don't you agree?
> 
> Of course I agree with that, but it's not what I was saying! If Linus'
> tree + some commit is the perfect fix, it doesn't mean that Linus' tree
> half a dozen releases ago + some commit also is.

Are you sure about that?  My experience here says that is _exactly_ what
the perfect fix is.  Sure there are exceptions, but those are lost in
the noise of the general stable patch flow (8-10 patches a day, every
day, non-stop)

> If we tagged that commit cc: stable and it fails in stable, we should
> be able to revert that from stable without touching Linus' tree.

Sure, let me know, but again, I don't like this as it obviously was a
bug to be fixed, so why wouldn't we want to fix it?

> Perhaps it's a corner case and generally not a problem [citation
> needed], but we've hit it. Not sure if that's enough to warrant a
> mention in stable rules?

No, I don't think it is, as I think you are totally over-thinking this
whole thing.

What _specifically_ is wrong with the current workflow where you have
seen problems that stable kernel users have hit?

Real examples from now on please, if there are problems in the stable
workflow that we have today, everyone needs to show it with examples,
I'm tired of seeing mental gymnastics around stable kernels just because
it is "fun".

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:20                                 ` Rafael J. Wysocki
  2016-08-03 13:21                                   ` Jiri Kosina
@ 2016-08-03 13:39                                   ` Greg KH
  2016-08-03 14:10                                     ` Chris Mason
  2016-08-04  0:37                                     ` Rafael J. Wysocki
  1 sibling, 2 replies; 244+ messages in thread
From: Greg KH @ 2016-08-03 13:39 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: ksummit-discuss, James Bottomley, Trond Myklebust

On Wed, Aug 03, 2016 at 03:20:44PM +0200, Rafael J. Wysocki wrote:
> On Wednesday, August 03, 2016 01:09:35 PM Greg KH wrote:
> > On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:
> > > On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> > > > On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
> > > >> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
> > > >> 
> > > >> > Generally adding cc: stable is like, this is clearly a fix to a bug that
> > > >> > is present in stable kernels, and the bug should be fixed, but I have no
> > > >> > idea nor resources to review or test if this is the right fix across all
> > > >> > stable kernels. You end up relying on your gut feeling too much to be
> > > >> > comfortable. You have to make the call too early in the process.
> > > >> 
> > > >> I think the problems here are more in the process of how things go from
> > > >> being tagged stable to appearing in a stable release - the QA or lack
> > > >> thereof and so on.  While I do share some of your misgivings here I do
> > > >> also really like the fact that it's really easy for people to push
> > > >> things out for the attention of those working on backports.  It's
> > > >> essentially the same as the question I often find myself asking people
> > > >> who don't upstream - "why would this fix not benefit other users?".
> > > >
> > > > Agreed, and I think that's exactly where the expectations don't match what's
> > > > delivered in the long-term-stable trees.
> > > >
> > > > It should be made clear that "stable" doesn't mean "no regressions".  What
> > > > it really means is "hey, if you care about backports, this is the stuff to take
> > > > into consideration in the first place".
> > > 
> > > I think this interpretation matches reality better than what
> > > Documentation/stable_kernel_rules.txt leads you to believe about adding
> > > cc: stable tag.
> > 
> > really?
> 
> Honestly, I think so.
> 
> > Yes, we have regressions at times in stable kernels, but
> > really, our % is _very_ low.  Probably less than "normal" releases, but
> > that's just a random guess, it would be good for someone to try to do
> > research on this before guessing...
> 
> Jon did some of that at LWN (http://lwn.net/Articles/692866/) and he got
> regression rate estimates for various -stable lines in the range between
> 0.6-1.4% (4.6) and 2.2-9.6% (3.14).
> 
> Of course, whether or not these numbers are significant is a matter of
> discussion, but they are clearly nonzero.

I agree, they will always be nonzero, but what is the acceptable number?  :)

> Now, I understand why there are regressions in -stable and to me it would
> be just fine to say that they will be there occasionally, so as to prevent
> supporting the "no regressions in -stable at all" expectation that (a) is
> unrealistic today and (b) seems to be quite widespread.

The way Jon's numbers were produced was by just looking at the patches
and seeing if they said they fixed a patch that happened to be in a
previous stable kernel.  Sometimes such a "fix" isn't something that
people would notice, because the patch it fixes didn't really fix the
problem in the first place.  So that's not a regression that anyone
would notice, as the issue is just still there.  Teasing that out from
the patches we have will be a difficult thing to do, as I don't think it
can be automated.

But it might make for a good research paper, and someone could probably
get a master's thesis out of it, so I might propose it to a few
Universities that I am in communication with :)

We do have users that have real numbers saying "We tested every single
3.10-stable kernel on our infrastructure and nothing ever broke".  We
also have a huge body of past kernel releases that people can run
themselves to see how well we are doing on real systems and workloads.

I also know some users that have real problems with stable kernels for
very specific hardware reasons (i.e. some graphics chips), due to large
numbers of backports they are forced to keep on top of their kernel
tree.  That's a different issue, and one that the stable workflow is not
set up to address, as that would be impossible.

> Or do we really want to meet that expectation?

The expectation that I try to meet is "we will address any reported
issues as soon as possible".  After all, no one is paying for the
service we do, so there's not much else we can do here :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:26                                   ` Greg KH
@ 2016-08-03 13:48                                     ` Jiri Kosina
  2016-08-03 13:57                                       ` James Bottomley
  2016-08-03 14:19                                       ` Greg KH
  2016-08-03 14:12                                     ` Jani Nikula
  1 sibling, 2 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 13:48 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, 3 Aug 2016, Greg KH wrote:

> Real examples from now on please, if there are problems in the stable 
> workflow that we have today, everyone needs to show it with examples, 
> I'm tired of seeing mental gymnastics around stable kernels just because 
> it is "fun".

Let me pick an example I personally had a lot of issues quite some time 
ago:

	https://lkml.org/lkml/2013/4/22/259

This was a patch that got added to -stable to fix a problem that didn't 
exist there. It caused system bustage almost immediately, which indicates 
that very limited testing has been done prior to releasing the patch.

I believe that patches like this should really be caught during -stable 
review; anyone familiar with the VFS code and actually looking at the 
patch would notice immediately that it's fixing a bug that doesn't exist 
in the code at all in the first place; that seems to indicate that no one
has actually explicitly reviewed it for -stable, and therefore it's 
questionable whether it should have been applied.

Has anything changed in the process that would prevent patches like this
one from being merged these days?

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:48                                     ` Jiri Kosina
@ 2016-08-03 13:57                                       ` James Bottomley
  2016-08-03 13:59                                         ` Jiri Kosina
                                                           ` (2 more replies)
  2016-08-03 14:19                                       ` Greg KH
  1 sibling, 3 replies; 244+ messages in thread
From: James Bottomley @ 2016-08-03 13:57 UTC (permalink / raw)
  To: Jiri Kosina, Greg KH; +Cc: Trond Myklebust, ksummit-discuss

On Wed, 2016-08-03 at 15:48 +0200, Jiri Kosina wrote:
> On Wed, 3 Aug 2016, Greg KH wrote:
> 
> > Real examples from now on please, if there are problems in the 
> > stable workflow that we have today, everyone needs to show it with
> > examples, I'm tired of seeing mental gymnastics around stable 
> > kernels just because it is "fun".
> 
> Let me pick an example I personally had a lot of issues quite some 
> time ago:
> 
> 	https://lkml.org/lkml/2013/4/22/259
> 
> This was a patch that got added to -stable to fix a problem that 
> didn't exist there. It caused system bustage almost immediately, 
> which indicates that very limited testing has been done prior to
> releasing the patch.
> 
> I believe that patches like this should really be caught during 
> -stable review; anyone familiar with the VFS code and actually 
> looking at the patch would notice immediately that it's fixing a bug 
> that doesn't exist in the code at all in the first place; that seems 
> to indicate that no one has actually explicitly reviewed it for
> -stable, and therefore it's questionable whether it should have been
> applied.

This isn't a viable approach.  Firstly stable review is less thorough
than upstream review because the review mostly goes "yes, I already
reviewed this in upstream".  Secondly, if the upstream review didn't
catch the problems why would we suddenly catch them in a stable review?

The fact that possibly no-one reviewed the upstream patch indicates the
need for a better upstream process (so something like we now have in
SCSI which is no patches applied without one review tag), but expecting
stable to fix our upstream process isn't going to work.

James

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:57                                       ` James Bottomley
@ 2016-08-03 13:59                                         ` Jiri Kosina
  2016-08-03 14:04                                           ` James Bottomley
  2016-08-03 14:45                                         ` Mark Brown
  2016-08-04 13:48                                         ` Geert Uytterhoeven
  2 siblings, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 13:59 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss, Trond Myklebust

On Wed, 3 Aug 2016, James Bottomley wrote:

> This isn't a viable approach.  Firstly stable review is less thorough
> than upstream review because the review mostly goes "yes, I already
> reviewed this in upstream".  

Which is exactly the problem I am trying to bring more attention to.

> Secondly, if the upstream review didn't catch the problems why would we 
> suddenly catch them in a stable review?

The patch was pretty fine for upstream, as it fixed a real bug there. But 
the buggy code wasn't present in -stable.

> The fact that possibly no-one reviewed the upstream patch indicates the 
> need for a better upstream process 

Again, that's not really the issue here. The patch was perfectly valid 
upstream.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:59                                         ` Jiri Kosina
@ 2016-08-03 14:04                                           ` James Bottomley
  2016-08-03 14:10                                             ` Jiri Kosina
  2016-08-04  1:23                                             ` Steven Rostedt
  0 siblings, 2 replies; 244+ messages in thread
From: James Bottomley @ 2016-08-03 14:04 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: Trond Myklebust, ksummit-discuss

On Wed, 2016-08-03 at 15:59 +0200, Jiri Kosina wrote:
> On Wed, 3 Aug 2016, James Bottomley wrote:
> 
> > This isn't a viable approach.  Firstly stable review is less 
> > thorough than upstream review because the review mostly goes "yes, 
> > I already reviewed this in upstream".  
> 
> Which is exactly the problem I am trying to bring more attention to.

OK, so let me put the opposite point.  Most of us only keep the current
version of the kernel around for building and testing.  We already tell
people who complain about older kernels on the list to go away and try
upstream, so why would it be reasonable to expect us to go back to
older kernels for stable?  I honestly think the stable review process
doesn't add much value precisely because of this.  I do try to mark
patches for backport, either with a Fixes label (so you should
mechanically be able to catch the fact that a patch is applied before
what it's fixing) or manually with a # 4.7+ tag.  If you expect me to do
more, it's not going to happen.
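
Concretely, the marking I'm talking about is something like the following
in the commit message (the SHA and subject are obviously placeholders,
and the exact format of the comment after the '#' is from memory, so
check stable_kernel_rules.txt rather than trusting me):

    Fixes: 123456789abc ("subsys: some earlier change that broke things")
    Cc: stable@vger.kernel.org # 4.7+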

> > Secondly, if the upstream review didn't catch the problems why 
> > would we suddenly catch them in a stable review?
> 
> The patch was pretty fine for upstream, as it fixed a real bug there. 
> But the buggy code wasn't present in -stable.

OK, so how about you only apply stable patches with a cc stable and a
fixes tag?

James

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:39                                   ` Greg KH
@ 2016-08-03 14:10                                     ` Chris Mason
  2016-08-04  0:37                                     ` Rafael J. Wysocki
  1 sibling, 0 replies; 244+ messages in thread
From: Chris Mason @ 2016-08-03 14:10 UTC (permalink / raw)
  To: Greg KH, Rafael J. Wysocki
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 08/03/2016 09:39 AM, Greg KH wrote:
> On Wed, Aug 03, 2016 at 03:20:44PM +0200, Rafael J. Wysocki wrote:
>>
>>> Yes, we have regressions at times in stable kernels, but
>>> really, our % is _very_ low.  Probably less than "normal" releases, but
>>> that's just a random guess, it would be good for someone to try to do
>>> research on this before guessing...
>>
>> Jon did some of that at LWN (http://lwn.net/Articles/692866/) and he got
>> regression rate estimates for various -stable lines in the range between
>> 0.6-1.4% (4.6) and 2.2-9.6% (3.14).
>>
>> Of course, whether or not these numbers are significant is a matter of
>> discussion, but they are clearly nonzero.
>
> I agree, they will always be nonzero, but what is the acceptable number?  :)

Diary of a kernel engineer

Day1:

Crash in procfs, googled the stack trace, found fix upstream, back ported.

Crash in hugepages, googled the stack trace, found the fix upstream, 
backported.

Crash in some filesystem, googled the stack trace, found the fix 
upstream, backported.

Day 68:

Are these backports right?  Has anyone else tried them?  Do I have all 
the dependent patches?  Found fix upstream, backported.

Day 157: Oh shit a new bug!  Oh wait, found fix upstream, backported.

What does stable really do?  Obviously it's what you run when you want
the fixes, but it's a crucial collection point for those fixes and a set 
of discussions (in git) about what fixes are most important and how to 
pull them back to older kernels.  It's a thing with a name that we can 
point to when we want to explain how to turn kernel X into something 
that can survive bug Y without exploding.

I did once say that we'd never had a regression in production caused by 
stable, but eventually it did happen.  I'm pretty sure we found the fix 
in stable.

Of course, not every bug is already fixed upstream, but really a shocking
number already are.  Stable gives us the chance to focus our energy on 
the bugs that aren't already fixed, and on making our own new exciting 
bugs instead of fixing old boring ones.

Long story short, I'd rather we backported more and worried less.  If a 
maintainer has a proper flow of fixes into stable, everyone trying to 
depend on that subsystem benefits, even (especially?) when there are 
regressions from time to time.

None of this is meant to detract from regression tracking, which is a 
really important part of figuring out which subsystems need more love or 
testing before deploying into production.  Sometimes we have to track in 
our head, but anything to make it more formal is a great thing.

-chris

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 14:04                                           ` James Bottomley
@ 2016-08-03 14:10                                             ` Jiri Kosina
  2016-08-04  1:23                                             ` Steven Rostedt
  1 sibling, 0 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 14:10 UTC (permalink / raw)
  To: James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

On Wed, 3 Aug 2016, James Bottomley wrote:

> OK, so let me put the opposite point.  Most of us only keep the current
> version of the kernel around for building and testing.  We already tell
> people who complain about older kernels on the list to go away and try
> upstream, so why would it be reasonable to expect us to go back to
> older kernels for stable?  I honestly think the stable review process
> doesn't add much value precisely because of this.  I do try to mark
> patches for backport, either with a Fixes label (so you should
> mechanically be able to catch the fact that a patch is applied before
> what it's fixing) or manually with a # 4.7+ tag.  If you expect me to do
> more, it's not going to happen.

And that's very likely absolutely sufficient. If the person who marked the
patch we're currently discussing for stable had added such a base kernel
annotation, it'd have worked.

> > > Secondly, if the upstream review didn't catch the problems why 
> > > would we suddenly catch them in a stable review?
> > 
> > The patch was pretty fine for upstream, as it fixed a real bug there. 
> > But the buggy code wasn't present in -stable.
> 
> OK, so how about you only apply stable patches with a cc stable and a
> fixes tag?

Either that, or an explicit version range; both would be a big improvement
I think. Either one would make someone actually think before adding a stable
annotation, which is always a good thing :), but shouldn't be imposing
unacceptable overhead.
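
By an explicit version range I mean an annotation along these lines
(illustrative only; whether Greg's scripts would parse the range as-is or
treat it as a note for a human, I honestly don't know):

    Cc: stable@vger.kernel.org # 3.12.x - 4.4.x only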

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:26                                   ` Greg KH
  2016-08-03 13:48                                     ` Jiri Kosina
@ 2016-08-03 14:12                                     ` Jani Nikula
  2016-08-03 14:33                                       ` Daniel Vetter
  1 sibling, 1 reply; 244+ messages in thread
From: Jani Nikula @ 2016-08-03 14:12 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, 03 Aug 2016, Greg KH <greg@kroah.com> wrote:
> On Wed, Aug 03, 2016 at 04:05:35PM +0300, Jani Nikula wrote:
> Really?  After 10+ years of stable kernels people still don't understand
> this?  Are you sure?  Isn't the stable rule list explicit enough?  What
> more do I need to do to say, "stable patches fix bugs"?

I suppose the first rule on the list is most apt in this thread, "It
must be obviously correct and tested." Of course the maintainers test
the stuff on upstream, but who tests the commit on stable kernels and
when, if I add the cc: stable tag on it when I push?

The number of stable/longterm kernels has roughly doubled during those
ten years, and the oldest one is older than before.

> It really isn't hard here people, don't make it more difficult than it
> has to be.
...
> No, I don't think it is, as I think you are totally over-thinking this
> whole thing.

Fair enough. I'll rely on my judgement like I have before, and it hasn't
gone awfully wrong. Just please note that there really are maintainers
out here who haven't been doing this for 10+ years.

> What _specifically_ is wrong with the current workflow where you have
> seen problems that stable kernel users have hit?
>
> Real examples from now on please, if there are problems in the stable
> workflow that we have today, everyone needs to show it with examples,
> I'm tired of seeing mental gymnastics around stable kernels just because
> it is "fun".

The threads starting at [1] and [2]. Something was backported that
shouldn't have been. To work properly, it depended on several other
upstream commits that couldn't have been backported. Everything was fine
upstream, but backporting this one commit was not. Sure, we cleared it up
in the end (thanks again!), but there was no way for us to pre-emptively
prevent that patch from being picked up for backporting time and again,
and one backport did slip through.

Perhaps that's a rare corner case for you, but for us it was a hassle and 
possibly tweaked our dial towards being more conservative about adding 
cc: stable when pushing.

BR,
Jani.


[1] http://marc.info/?l=linux-kernel&m=146584513430303
[2] http://marc.info/?l=linux-stable-commits&m=146214101124509


-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:48                                     ` Jiri Kosina
  2016-08-03 13:57                                       ` James Bottomley
@ 2016-08-03 14:19                                       ` Greg KH
  2016-08-03 14:45                                         ` Jiri Kosina
  1 sibling, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-08-03 14:19 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 03:48:29PM +0200, Jiri Kosina wrote:
> Has anything changed in the process that'd just make patches like this one 
> to be not merged these days?

We have Guenter's test-bot that has helped out immensely here with this.

It seems most of these can all come down to "we need more testing",
which of course we do, no one can ever argue about that.  It's just that
almost[1] no one is willing to step up and do the work beyond what we
are currently doing, which is sad...

thanks,

greg k-h

[1] The people that are doing stable tree testing are doing a great job,
    Guenter, Shuah, kernelci, my build-bot, 0-day, etc.  I rely on them
    to do the majority of the testing of stable kernels as on my own I
    have very limited resources to do so.

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 14:12                                     ` Jani Nikula
@ 2016-08-03 14:33                                       ` Daniel Vetter
  0 siblings, 0 replies; 244+ messages in thread
From: Daniel Vetter @ 2016-08-03 14:33 UTC (permalink / raw)
  To: Jani Nikula; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 3, 2016 at 4:12 PM, Jani Nikula <jani.nikula@intel.com> wrote:
> On Wed, 03 Aug 2016, Greg KH <greg@kroah.com> wrote:
>> On Wed, Aug 03, 2016 at 04:05:35PM +0300, Jani Nikula wrote:
>> Really?  After 10+ years of stable kernels people still don't understand
>> this?  Are you sure?  Isn't the stable rule list explicit enough?  What
>> more do I need to do to say, "stable patches fix bugs"?
>
> I suppose the first rule on the list is most apt in this thread, "It
> must be obviously correct and tested." Of course the maintainers test
> the stuff on upstream, but who tests the commit on stable kernels and
> when, if I add the cc: stable tag on it when I push?
>
> The number of stable/longterm kernels has roughly doubled during those
> ten years, and the oldest one is older than before.
>
>> It really isn't hard here people, don't make it more difficult than it
>> has to be.
> ...
>> No, I don't think it is, as I think you are totally over-thinking this
>> whole thing.
>
> Fair enough. I'll rely on my judgement like I have before, and it hasn't
> gone awfully wrong. Just please note that there really are maintainers
> out here who haven't been doing this for 10+ years.
>
>> What _specifically_ is wrong with the current workflow where you have
>> seen problems that stable kernel users have hit?
>>
>> Real examples from now on please, if there are problems in the stable
>> workflow that we have today, everyone needs to show it with examples,
>> I'm tired of seeing mental gymnastics around stable kernels just because
>> it is "fun".
>
> The threads starting at [1] and [2]. Something was backported that
> shouldn't have. To work properly, it depended on several other upstream
> commits that couldn't have been backported. Everything was fine
> upstream, but backporting this commit was not. Sure, we cleared it up in
> the end (thanks again!), but there was no way for us to pre-emptively
> prevent that patch from being tried to backport time and again, and one
> backport did slip through.
>
> Perhaps that's a rare corner case for you, but for us it was a hassle
> and possibly tweaked our dial towards being more concervative about
> adding cc: stable when pushing.

I agree with Jani that at sufficient scale all bugs are shades of
grey. I guess we suffer a bit more since i915 is complex, used by all
kernel hackers, runs on ridiculously diverse hw, and is not really optional 
(at least for kernel hacker use-cases).

Same with "obviously correct", the list of "trivial" patches that I've
misjudged as obviously correct is rather epic ;-) And same with not
breaking stable kernels, we have a pretty solid track-record on that
too :(

Jumping in here since, as part of the group maintainership process, we 
also switched our handling of bugfixes for -rc kernels over to explicit 
cherry-picking from the -next queue - it's the only way to avoid a 
horrible coordination mess all the time between the 15 different people. 
We're still trying to figure out what the best process is, and we do have 
very similar (and routine) discussions about mistagged patches. So all 
the same, just at a smaller scale.

Or maybe the right answer is indeed what Greg says, folks overthink 
-stable and in the end it's just (varying) common sense and best effort 
and that's it. At least it feels like everyone has a different idea about 
what their dream-world -stable kernel would do.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:57                                       ` James Bottomley
  2016-08-03 13:59                                         ` Jiri Kosina
@ 2016-08-03 14:45                                         ` Mark Brown
  2016-08-04 13:48                                         ` Geert Uytterhoeven
  2 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-03 14:45 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 1029 bytes --]

On Wed, Aug 03, 2016 at 09:57:38AM -0400, James Bottomley wrote:

> This isn't a viable approach.  Firstly stable review is less thorough
> than upstream review because the review mostly goes "yes, I already
> reviewed this in upstream".  Secondly, if the upstream review didn't
> catch the problems why would we suddenly catch them in a stable review?

Some of the stable trees don't even want the review - the -ckt ones for
example don't seem to leave any gap between sending out the "this patch
will be added" mails and the applied mails and I guess they're happy
with the results (I have to confess I killfiled their mails at some
point).

> The fact that possibly no-one reviewed the upstream patch indicates the
> need for a better upstream process (so something like we now have in
> SCSI which is no patches applied without one review tag), but expecting
> stable to fix our upstream process isn't going to work.

I think the concern here is fixes that are valid upstream but rely on
context that hasn't been backported.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 14:19                                       ` Greg KH
@ 2016-08-03 14:45                                         ` Jiri Kosina
  2016-08-03 15:48                                           ` Guenter Roeck
  0 siblings, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 14:45 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, 3 Aug 2016, Greg KH wrote:

> > Has anything changed in the process that'd just make patches like this one 
> > to be not merged these days?
> 
> We have Guenter's test-bot that has helped out immensely here with this.

That's very good to know, I admit that I have close to zero idea about how 
the stable -rcs are being tested.

> It seems most of these can all come down to "we need more testing", 

That as well, but the main message I am trying to push here is "we need a 
little bit more thinking while annotating patches for stable".

It might very well be that some variation of what has been just proposed 
elsewhere in this thread (requiring all the stable commits to either 
contain explicit 'Fixes' tag, or be explicitly annotated by the kernel 
version range they should be applied to) would help tremendously on that 
front.

> [1] The people that are doing stable tree testing are doing a great job,
>     Guenter, Shuah, kernelci, my build-bot, 0-day, etc.  

Very good to know that the test coverage is being continuously extended. 

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 11:09                               ` Greg KH
  2016-08-03 13:05                                 ` Jani Nikula
  2016-08-03 13:20                                 ` Rafael J. Wysocki
@ 2016-08-03 15:47                                 ` Guenter Roeck
  2016-08-04  8:25                                   ` Greg KH
  2 siblings, 1 reply; 244+ messages in thread
From: Guenter Roeck @ 2016-08-03 15:47 UTC (permalink / raw)
  To: Greg KH, Jani Nikula; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 08/03/2016 04:09 AM, Greg KH wrote:
> On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:
>> On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
>>> On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
>>>> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
>>>>
>>>>> Generally adding cc: stable is like, this is clearly a fix to a bug that
>>>>> is present in stable kernels, and the bug should be fixed, but I have no
>>>>> idea nor resources to review or test if this is the right fix across all
>>>>> stable kernels. You end up relying on your gut feeling too much to be
>>>>> comfortable. You have to make the call too early in the process.
>>>>
>>>> I think the problems here are more in the process of how things go from
>>>> being tagged stable to appearing in a stable release - the QA or lack
>>>> thereof and so on.  While I do share some of your misgivings here I do
>>>> also really like the fact that it's really easy for people to push
>>>> things out for the attention of those working on backports.  It's
>>>> essentially the same as the question I often find myself asking people
>>>> who don't upstream - "why would this fix not benefit other users?".
>>>
>>> Agreed, and I think that's exactly where the expectations don't match what's
>>> delivered in the long-term-stable trees.
>>>
>>> It should be made clear that "stable" doesn't mean "no regressions".  What
>>> it reall means is "hey, if you care about backports, this is the stuff to take
>>> into consideration in the first place".
>>
>> I think this interpretation matches reality better than what
>> Documentation/stable_kernel_rules.txt leads you to believe about adding
>> cc: stable tag.
>
> really?  Yes, we have regressions at times in stable kernels, but
> really, our % is _very_ low.  Probably less than "normal" releases, but
> that's just a random guess, it would be good for someone to try to do
> research on this before guessing...
>

It is, but somehow there seems to be an expectation that it be 0.

We had one regression after merging 4.4.14 into chromeos-4.4, due to a patch
applied from mainline which was later reverted in mainline due to the problems
it caused (dea2cf7c0c6e, ecryptfs: forbid opening files without mmap handler).
The regression percentage (from 4.4.4 to 4.4.14) was 1 bad patch out of 1,044,
or 0.1% (I am sure there are probably more regressions in there, but there was
one that affected us). One would think that such a percentage is acceptable,
but judging from the heat I got for promoting that merge, it sounded like
the end of the world.

This is a problem of perception - regressions in stable releases are 
treated much differently than regressions in mainline or in vendor 
branches, without taking the benefits into account. The 1,043 bug fixes 
don't count because of one regression.

How does one address such perception problems ? I really have no idea.
However, I don't think that it would make sense to change the stable process
because of it. I think it works surprisingly well.

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 14:45                                         ` Jiri Kosina
@ 2016-08-03 15:48                                           ` Guenter Roeck
  2016-08-03 16:12                                             ` Dmitry Torokhov
                                                               ` (2 more replies)
  0 siblings, 3 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-08-03 15:48 UTC (permalink / raw)
  To: Jiri Kosina, Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On 08/03/2016 07:45 AM, Jiri Kosina wrote:
> On Wed, 3 Aug 2016, Greg KH wrote:
>
>>> Has anything changed in the process that'd just make patches like this one
>>> to be not merged these days?
>>
>> We have Guenter's test-bot that has helped out immensely here with this.
>
> That's very good to know, I admit that I have close to zero idea about how
> the stable -rcs are being tested.
>

... and when it doesn't work because I messed it up, we get issues such as 3.18
and 4.1 being broken for mips and sparc64 because a couple of patches which don't
apply to those kernels were tagged with an unqualified Cc: stable and applied.

So, if anything, the one problem I see with the current stable process is
those unqualified stable tags. Maybe those should be deprecated; expecting
stable maintainers to figure out if a patch applies to a given stable branch
or not is a bit too much to ask for. With stable releases as far back as
3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
patch tagged with an unqualified Cc: stable doesn't apply to all branches.

>> It seems most of these can all come down to "we need more testing",
>
> That as well, but the main message I am trying to push here is "we need a
> little bit more thinking while anotating patches for stable".
>
> It might very well be that some variation of what has been just proposed
> elsewhere in this thread (requiring all the stable commits to either
> contain explicit 'Fixes' tag, or be explicitly annotated by the kernel
> version range they should be applied to) would help tremendously on that
> front.
>
>> [1] The people that are doing stable tree testing are doing a great job,
>>      Guenter, Shuah, kernelci, my build-bot, 0-day, etc.
>

Maybe I or someone else can give a 10-15 minute presentation about the current
test efforts to bring everyone up to date on what is being tested and how.
Maybe we should make such a presentation a regular event at major conferences.

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 15:48                                           ` Guenter Roeck
@ 2016-08-03 16:12                                             ` Dmitry Torokhov
  2016-08-03 16:44                                               ` Guenter Roeck
                                                                 ` (2 more replies)
  2016-08-04  8:21                                             ` Greg KH
  2016-08-05  4:46                                             ` Jonathan Cameron
  2 siblings, 3 replies; 244+ messages in thread
From: Dmitry Torokhov @ 2016-08-03 16:12 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 08:48:34AM -0700, Guenter Roeck wrote:
> On 08/03/2016 07:45 AM, Jiri Kosina wrote:
> >On Wed, 3 Aug 2016, Greg KH wrote:
> >
> >>>Has anything changed in the process that'd just make patches like this one
> >>>to be not merged these days?
> >>
> >>We have Guenter's test-bot that has helped out immensely here with this.
> >
> >That's very good to know, I admit that I have close to zero idea about how
> >the stable -rcs are being tested.
> >
> 
> ... and when it doesn't work because I messed it up, we get issues such as 3.18
> and 4.1 being broken for mips and sparc64 because a couple of patches which don't
> apply to those kernels were tagged with an unqualified Cc: stable and applied.
> 
> So, if anything, the one problem I see with the current stable process is
> those unqualified stable tags. Maybe those should be deprecated; expecting
> stable maintainers to figure out if a patch applies to a given stable branch
> or not is a bit too much to ask for. With stable releases as far back as
> 3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
> patch tagged with an unqualified Cc: stable doesn't apply to all branches.

When I put cc:stable it is simply a suggestion for stable maintainers to 
figure out if this commit is suitable for _their_ stable. I might have an 
idea about the n-1.x stable series, but I certainly do not have any desire 
nor time to research whether this patch is applicable to the 3.2 or 3.0 
stable series.

Stable maintainership should be more than "sweep in everything marked as 
cc:stable, try compiling, and hope it's all good".

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 16:12                                             ` Dmitry Torokhov
@ 2016-08-03 16:44                                               ` Guenter Roeck
  2016-08-03 17:20                                                 ` Dmitry Torokhov
  2016-08-03 18:57                                                 ` Jiri Kosina
  2016-08-04  3:14                                               ` Steven Rostedt
  2016-08-04  8:27                                               ` Greg KH
  2 siblings, 2 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-08-03 16:44 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On 08/03/2016 09:12 AM, Dmitry Torokhov wrote:
> On Wed, Aug 03, 2016 at 08:48:34AM -0700, Guenter Roeck wrote:
>> On 08/03/2016 07:45 AM, Jiri Kosina wrote:
>>> On Wed, 3 Aug 2016, Greg KH wrote:
>>>
>>>>> Has anything changed in the process that'd just make patches like this one
>>>>> to be not merged these days?
>>>>
>>>> We have Guenter's test-bot that has helped out immensely here with this.
>>>
>>> That's very good to know, I admit that I have close to zero idea about how
>>> the stable -rcs are being tested.
>>>
>>
>> ... and when it doesn't work because I messed it up, we get issues such as 3.18
>> and 4.1 being broken for mips and sparc64 because a couple of patches which don't
>> apply to those kernels were tagged with an unqualified Cc: stable and applied.
>>
>> So, if anything, the one problem I see with the current stable process is
>> those unqualified stable tags. Maybe those should be deprecated; expecting
>> stable maintainers to figure out if a patch applies to a given stable branch
>> or not is a bit too much to ask for. With stable releases as far back as
>> 3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
>> patch tagged with an unqualified Cc: stable doesn't apply to all branches.
>
> When I put cc:stable it is simply a suggestion for stable maintainers to
> figure out if this commit is suitable for _their_ stable. I might have
> an idea about n-1.x stable series but I certainly do not have any desire
> nor time to research whether this patch applicable to 3.2 or 3.0 stable
> series.
>
> Stable maintaintership should be more than "swipe in everything marked
> as cc:stable, try compiling and hope it all good".
>

I don't think I can agree to that. Personally I see it as my responsibility
to give stable maintainers as much information as possible. Dave for networking
goes even further, essentially providing stable maintainers with the patches
to apply (granted, I have no idea how he finds the time to do that).

How can one reasonably expect a stable maintainer to determine if a patch for
an oddball architecture applies, or one for a random subsystem ? Following
your argument, stable maintainers would have to be experts on all architectures
and subsystems in the kernel - because that is what they would have to be
in order to do more than "it compiles, therefore it works". Even compilation
is difficult - I suspect I might run the only testbed which builds _all_
supported architectures and runs qemu tests on 14 of them (not counting le/be
and 32/64 bit variants).

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 16:44                                               ` Guenter Roeck
@ 2016-08-03 17:20                                                 ` Dmitry Torokhov
  2016-08-03 18:21                                                   ` Guenter Roeck
  2016-08-03 18:57                                                 ` Jiri Kosina
  1 sibling, 1 reply; 244+ messages in thread
From: Dmitry Torokhov @ 2016-08-03 17:20 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 09:44:56AM -0700, Guenter Roeck wrote:
> On 08/03/2016 09:12 AM, Dmitry Torokhov wrote:
> >On Wed, Aug 03, 2016 at 08:48:34AM -0700, Guenter Roeck wrote:
> >>On 08/03/2016 07:45 AM, Jiri Kosina wrote:
> >>>On Wed, 3 Aug 2016, Greg KH wrote:
> >>>
> >>>>>Has anything changed in the process that'd just make patches like this one
> >>>>>to be not merged these days?
> >>>>
> >>>>We have Guenter's test-bot that has helped out immensely here with this.
> >>>
> >>>That's very good to know, I admit that I have close to zero idea about how
> >>>the stable -rcs are being tested.
> >>>
> >>
> >>... and when it doesn't work because I messed it up, we get issues such as 3.18
> >>and 4.1 being broken for mips and sparc64 because a couple of patches which don't
> >>apply to those kernels were tagged with an unqualified Cc: stable and applied.
> >>
> >>So, if anything, the one problem I see with the current stable process is
> >>those unqualified stable tags. Maybe those should be deprecated; expecting
> >>stable maintainers to figure out if a patch applies to a given stable branch
> >>or not is a bit too much to ask for. With stable releases as far back as
> >>3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
> >>patch tagged with an unqualified Cc: stable doesn't apply to all branches.
> >
> >When I put cc:stable it is simply a suggestion for stable maintainers to
> >figure out if this commit is suitable for _their_ stable. I might have
> >an idea about n-1.x stable series but I certainly do not have any desire
> >nor time to research whether this patch applicable to 3.2 or 3.0 stable
> >series.
> >
> >Stable maintaintership should be more than "swipe in everything marked
> >as cc:stable, try compiling and hope it all good".
> >
> 
> I don't think I can agree to that. Personally I see it as my responsibility
> to give stable maintainers as much information as possible. Dave for networking
> goes even further, essentially providing stable maintainers with the patches
> to apply (granted, I have no idea how he finds the time to do that).

Me neither. There are probably 100s of him, like there were Alans back in 
the day ;)

> 
> How can one reasonable expect a stable maintainer to determine if a patch for
> an oddball architecture applies, or one for a random subsystem ? Following
> your argument, stable maintainers would have to be experts on all architectures
> and subsystems in the kernel - because that is what they would have to be
> in order to do more than "it compiles, therefore it works". Even compilation

Well, yes, they would have to. Or they would have to assemble a team that 
can cover this. As it is, it is quite easy to start a stable tree - do you 
need anything except to announce it? I do not see subsystem maintainers 
being asked whether they have the time/resources/desire to maintain said 
stable trees.

> is difficult - I suspect I might run the only testbed which builds _all_
> supported architectures and runs qemu tests on 14 of them (not counting le/be
> and 32/64 bit variants).
> 

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 17:20                                                 ` Dmitry Torokhov
@ 2016-08-03 18:21                                                   ` Guenter Roeck
  2016-08-03 18:59                                                     ` Dmitry Torokhov
  0 siblings, 1 reply; 244+ messages in thread
From: Guenter Roeck @ 2016-08-03 18:21 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 10:20:15AM -0700, Dmitry Torokhov wrote:
> On Wed, Aug 03, 2016 at 09:44:56AM -0700, Guenter Roeck wrote:
> > On 08/03/2016 09:12 AM, Dmitry Torokhov wrote:
> > >On Wed, Aug 03, 2016 at 08:48:34AM -0700, Guenter Roeck wrote:
> > >>On 08/03/2016 07:45 AM, Jiri Kosina wrote:
> > >>>On Wed, 3 Aug 2016, Greg KH wrote:
> > >>>
> > >>>>>Has anything changed in the process that'd just make patches like this one
> > >>>>>to be not merged these days?
> > >>>>
> > >>>>We have Guenter's test-bot that has helped out immensely here with this.
> > >>>
> > >>>That's very good to know, I admit that I have close to zero idea about how
> > >>>the stable -rcs are being tested.
> > >>>
> > >>
> > >>... and when it doesn't work because I messed it up, we get issues such as 3.18
> > >>and 4.1 being broken for mips and sparc64 because a couple of patches which don't
> > >>apply to those kernels were tagged with an unqualified Cc: stable and applied.
> > >>
> > >>So, if anything, the one problem I see with the current stable process is
> > >>those unqualified stable tags. Maybe those should be deprecated; expecting
> > >>stable maintainers to figure out if a patch applies to a given stable branch
> > >>or not is a bit too much to ask for. With stable releases as far back as
> > >>3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
> > >>patch tagged with an unqualified Cc: stable doesn't apply to all branches.
> > >
> > >When I put cc:stable it is simply a suggestion for stable maintainers to
> > >figure out if this commit is suitable for _their_ stable. I might have
> > >an idea about n-1.x stable series but I certainly do not have any desire
> > >nor time to research whether this patch applicable to 3.2 or 3.0 stable
> > >series.
> > >
> > >Stable maintaintership should be more than "swipe in everything marked
> > >as cc:stable, try compiling and hope it all good".
> > >
> > 
> > I don't think I can agree to that. Personally I see it as my responsibility
> > to give stable maintainers as much information as possible. Dave for networking
> > goes even further, essentially providing stable maintainers with the patches
> > to apply (granted, I have no idea how he finds the time to do that).
> 
> Me neither. There probably 100s of him, like Alans were in the days ;)
> 
> > 
> > How can one reasonable expect a stable maintainer to determine if a patch for
> > an oddball architecture applies, or one for a random subsystem ? Following
> > your argument, stable maintainers would have to be experts on all architectures
> > and subsystems in the kernel - because that is what they would have to be
> > in order to do more than "it compiles, therefore it works". Even compilation
> 
> Well, yes, they would have. Or they would have to assemble a team who
> can cover this. As it is it is quite easy to start a stable tree - do
> you need anything except to announce it? I do not see subsystems
> maintainers being asked if they have time/resources/desire in
> maintaining said stable trees.
> 

I think there are two questions to answer here. One is the level of support
required for stable releases by subsystem maintainers, the other is the level
of validation and testing expected from stable maintainers. Both are valid
questions to ask.

I try to resolve the first part by avoiding "Cc: stable" when possible
and using "Fixes:" instead. In most cases, that works just fine,
and I can even ask patch submitters to provide relevant "Fixes:" tags.
Setting up kerneltests.org was my attempt to address the second part,
though it is now getting overwhelmed by the sheer number of stable
releases.

So, what _are_ the expectations for stable release support from both
subsystem and stable tree maintainers ? And how can we reduce the number
of stable releases ?

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 16:44                                               ` Guenter Roeck
  2016-08-03 17:20                                                 ` Dmitry Torokhov
@ 2016-08-03 18:57                                                 ` Jiri Kosina
  2016-08-03 22:16                                                   ` Guenter Roeck
  1 sibling, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 18:57 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, 3 Aug 2016, Guenter Roeck wrote:

> How can one reasonable expect a stable maintainer to determine if a 
> patch for an oddball architecture applies, or one for a random subsystem?

So *who* is supposed to be *the* responsible person for cherry-picking the 
patch and verifying that it is applicable to a particular -stable?

The statement above suggests that it's definitely not the responsibility of 
the stable branch maintainer.

OTOH Greg (and not only him) argues that maintainers are already 
overloaded, so asking them to actually prepare patches for particular 
stable trees (at least the ones they care about) is too much to ask.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 18:21                                                   ` Guenter Roeck
@ 2016-08-03 18:59                                                     ` Dmitry Torokhov
  2016-08-03 21:25                                                       ` Jiri Kosina
  0 siblings, 1 reply; 244+ messages in thread
From: Dmitry Torokhov @ 2016-08-03 18:59 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 11:21:05AM -0700, Guenter Roeck wrote:
> On Wed, Aug 03, 2016 at 10:20:15AM -0700, Dmitry Torokhov wrote:
> > On Wed, Aug 03, 2016 at 09:44:56AM -0700, Guenter Roeck wrote:
> > > On 08/03/2016 09:12 AM, Dmitry Torokhov wrote:
> > > >On Wed, Aug 03, 2016 at 08:48:34AM -0700, Guenter Roeck wrote:
> > > >>On 08/03/2016 07:45 AM, Jiri Kosina wrote:
> > > >>>On Wed, 3 Aug 2016, Greg KH wrote:
> > > >>>
> > > >>>>>Has anything changed in the process that'd just make patches like this one
> > > >>>>>to be not merged these days?
> > > >>>>
> > > >>>>We have Guenter's test-bot that has helped out immensely here with this.
> > > >>>
> > > >>>That's very good to know, I admit that I have close to zero idea about how
> > > >>>the stable -rcs are being tested.
> > > >>>
> > > >>
> > > >>... and when it doesn't work because I messed it up, we get issues such as 3.18
> > > >>and 4.1 being broken for mips and sparc64 because a couple of patches which don't
> > > >>apply to those kernels were tagged with an unqualified Cc: stable and applied.
> > > >>
> > > >>So, if anything, the one problem I see with the current stable process is
> > > >>those unqualified stable tags. Maybe those should be deprecated; expecting
> > > >>stable maintainers to figure out if a patch applies to a given stable branch
> > > >>or not is a bit too much to ask for. With stable releases as far back as
> > > >>3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
> > > >>patch tagged with an unqualified Cc: stable doesn't apply to all branches.
> > > >
> > > >When I put cc:stable it is simply a suggestion for stable maintainers to
> > > >figure out if this commit is suitable for _their_ stable. I might have
> > > >an idea about n-1.x stable series but I certainly do not have any desire
> > > >nor time to research whether this patch applicable to 3.2 or 3.0 stable
> > > >series.
> > > >
> > > >Stable maintaintership should be more than "swipe in everything marked
> > > >as cc:stable, try compiling and hope it all good".
> > > >
> > > 
> > > I don't think I can agree to that. Personally I see it as my responsibility
> > > to give stable maintainers as much information as possible. Dave for networking
> > > goes even further, essentially providing stable maintainers with the patches
> > > to apply (granted, I have no idea how he finds the time to do that).
> > 
> > Me neither. There probably 100s of him, like Alans were in the days ;)
> > 
> > > 
> > > How can one reasonable expect a stable maintainer to determine if a patch for
> > > an oddball architecture applies, or one for a random subsystem ? Following
> > > your argument, stable maintainers would have to be experts on all architectures
> > > and subsystems in the kernel - because that is what they would have to be
> > > in order to do more than "it compiles, therefore it works". Even compilation
> > 
> > Well, yes, they would have. Or they would have to assemble a team who
> > can cover this. As it is it is quite easy to start a stable tree - do
> > you need anything except to announce it? I do not see subsystems
> > maintainers being asked if they have time/resources/desire in
> > maintaining said stable trees.
> > 
> 
> I think there are two questions to answer here. One is the level of support
> required for stable releases by subsystem maintainers, the other is the level
> of validation and testing expected from stable maintainers. Both are valid
> questions to ask.
> 
> I try to resolve the first part by avoiding "Cc: stable" when possible
> and using "Fixes:" instead. In most cases, that works just fine,

I wonder if we could change the meaning of a naked cc: stable@ to mean the 
latest stable only, and if the fix is important enough then the maintainer 
or somebody else can annotate how far back the fix should be applied? 
Ideally with "Fixes: XXX"?

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 18:59                                                     ` Dmitry Torokhov
@ 2016-08-03 21:25                                                       ` Jiri Kosina
  2016-08-03 21:31                                                         ` Dmitry Torokhov
  2016-08-04 14:02                                                         ` Jan Kara
  0 siblings, 2 replies; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 21:25 UTC (permalink / raw)
  To: Dmitry Torokhov, Greg KH
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, 3 Aug 2016, Dmitry Torokhov wrote:

> I wonder if we could change meaning of naked cc: stable@ to mean latest 
> stable only, and if fix is important enough then maintainer or somebody 
> else can annotate how far back the fix should be applied? Ideally with 
> "Fixes: XXX"?

Yeah, James already proposed that, and I totally agree.

Greg, would you have any objection to formulating some rule a la "all 
stable annotations should either be accompanied by an explicit 'Fixes:' 
tag, or by an explicit range of kernel versions the patch is applicable 
to"?

I believe this is a very reasonable compromise between "maintainers or 
submitters have to do their homework wrt. stable" and "we don't want to 
impose too much overhead on anybody".

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 21:25                                                       ` Jiri Kosina
@ 2016-08-03 21:31                                                         ` Dmitry Torokhov
  2016-08-03 21:36                                                           ` Jiri Kosina
  2016-08-03 22:25                                                           ` Guenter Roeck
  2016-08-04 14:02                                                         ` Jan Kara
  1 sibling, 2 replies; 244+ messages in thread
From: Dmitry Torokhov @ 2016-08-03 21:31 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 3, 2016 at 2:25 PM, Jiri Kosina <jikos@kernel.org> wrote:
> On Wed, 3 Aug 2016, Dmitry Torokhov wrote:
>
>> I wonder if we could change meaning of naked cc: stable@ to mean latest
>> stable only, and if fix is important enough then maintainer or somebody
>> else can annotate how far back the fix should be applied? Ideally with
>> "Fixes: XXX"?
>
> Yeah, James already proposed that and I totally agree with that.
>
> Greg, would you have any objection to formulating some rule a-la "all the
> stable anotations should either be accompanied by explicit 'Fixes:', or
> explicit range of kernel versions the patch is applicable to"?

Umm, that is not what I meant. I said that I wanted a "naked" cc:stable to 
only apply to the latest stable. The fix may be applicable to another 
stable, but I have not researched that. As opposed to, say, "# v3.10+", 
where I definitely want it to be applied to all stables starting with 
3.10.

Compare this to the current situation, where seeing an unqualified CC 
stable is taken as "take it as far back as humanly possible" by default.
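
To spell out what I am proposing (the version numbers are just examples):

  Cc: stable@vger.kernel.org            <- naked: latest stable only
  Cc: stable@vger.kernel.org # v3.10+   <- explicit: every stable from 3.10 on

The first form would mean "I have only really considered the current 
stable"; the second means "I actually want this that far back".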

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 21:31                                                         ` Dmitry Torokhov
@ 2016-08-03 21:36                                                           ` Jiri Kosina
  2016-08-04  3:06                                                             ` Steven Rostedt
  2016-08-03 22:25                                                           ` Guenter Roeck
  1 sibling, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-08-03 21:36 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, 3 Aug 2016, Dmitry Torokhov wrote:

> Umm, that is not what I meant. I said that I wanted "naked" cc:stable
> only to apply top the latest stable. 

I understand that, but I intentionally proposed the rule to be a little bit 
more "exact", because the meaning of "latest stable" might potentially be 
different at the time the patch is committed to the subsystem tree and at 
the time the commit actually appears in Linus' tree (which is the point at 
which -stable notices).

Probably a slight corner case, but why not make it more specific by 
default.

Thanks,

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 18:57                                                 ` Jiri Kosina
@ 2016-08-03 22:16                                                   ` Guenter Roeck
  0 siblings, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-08-03 22:16 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 08:57:26PM +0200, Jiri Kosina wrote:
> On Wed, 3 Aug 2016, Guenter Roeck wrote:
> 
> > How can one reasonable expect a stable maintainer to determine if a 
> > patch for an oddball architecture applies, or one for a random subsystem?
> 
> So *who* is supposed to be *the* responsible person for cherry-picking the 
> patch and verifying that it is applicable to particular -stable?
> 

Cherry-picking and verifying applicability are two separate tasks, so mixing
them together makes it a bit difficult to answer the question. I'll focus
on verifying applicability.

> The statement above suggests that it's definitely not a responsibility of 
> stable branch maintainer.
> 

The question is whether it _can_ reasonably be the responsibility of a 
stable branch maintainer to verify if a patch is applicable. I don't think 
it is, beyond "it is tagged for this release". Of course, I may be wrong, 
and both stable branch maintainers and the community may have a different 
opinion.

> OTOH Greg (and not only him) argues that maintainers are already 
> overloaded, so asking them to actually prepare patches for particular 
> stable trees (at least the ones they care about) is too much to ask.
> 

That is a bit different, though. I don't think I (or anyone else) suggested
that subsystem maintainers should start _preparing_ patches for stable trees.
The question is who can make the call if a patch applies to a specific stable
tree or not, especially if it is only marked with a generic "Cc: stable".

Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 21:31                                                         ` Dmitry Torokhov
  2016-08-03 21:36                                                           ` Jiri Kosina
@ 2016-08-03 22:25                                                           ` Guenter Roeck
  1 sibling, 0 replies; 244+ messages in thread
From: Guenter Roeck @ 2016-08-03 22:25 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 02:31:51PM -0700, Dmitry Torokhov wrote:
> On Wed, Aug 3, 2016 at 2:25 PM, Jiri Kosina <jikos@kernel.org> wrote:
> > On Wed, 3 Aug 2016, Dmitry Torokhov wrote:
> >
> >> I wonder if we could change meaning of naked cc: stable@ to mean latest
> >> stable only, and if fix is important enough then maintainer or somebody
> >> else can annotate how far back the fix should be applied? Ideally with
> >> "Fixes: XXX"?
> >
> > Yeah, James already proposed that and I totally agree with that.
> >
> > Greg, would you have any objection to formulating some rule a-la "all the
> > stable anotations should either be accompanied by explicit 'Fixes:', or
> > explicit range of kernel versions the patch is applicable to"?
> 
> Umm, that is not what I meant. I said that I wanted "naked" cc:stable
> only to apply top the latest stable. The fix may be applicable to
> another stable, but I have not researched that. As opposed to say "#
> v3.10+" where I definitely want it to be applied to all stables
> starting with 3.10.
> 
> Compare this to current situation where seeing unqualified CC stable
> is taken as "take it as far back as humanly possible" by default.
> 
I think this is an excellent idea. It would only apply to unqualified
stable annotations, which I suspect are the ones causing most of the trouble
in older branches. Whoever wants a patch applied to older stable kernels
would have to add either a range or a Fixes: tag, which I think is what Jiri
suggested above. This would make the rule something like

"All stable annotations should either be accompanied by explicit 'Fixes:',
 or explicit range of kernel versions the patch is applicable to. Unqualified
 stable annotations will only be applied to the most recent stable release."

Maybe this can be extended to "the most recent longterm stable release"
to find a middle ground.

Thanks,
Guenter

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:39                                   ` Greg KH
  2016-08-03 14:10                                     ` Chris Mason
@ 2016-08-04  0:37                                     ` Rafael J. Wysocki
  1 sibling, 0 replies; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-08-04  0:37 UTC (permalink / raw)
  To: Greg KH; +Cc: ksummit-discuss, James Bottomley, Trond Myklebust

On Wednesday, August 03, 2016 03:39:09 PM Greg KH wrote:
> On Wed, Aug 03, 2016 at 03:20:44PM +0200, Rafael J. Wysocki wrote:
> > On Wednesday, August 03, 2016 01:09:35 PM Greg KH wrote:
> > > On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:
> > > > On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> > > > > On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
> > > > >> On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
> > > > >> 
> > > > >> > Generally adding cc: stable is like, this is clearly a fix to a bug that
> > > > >> > is present in stable kernels, and the bug should be fixed, but I have no
> > > > >> > idea nor resources to review or test if this is the right fix across all
> > > > >> > stable kernels. You end up relying on your gut feeling too much to be
> > > > >> > comfortable. You have to make the call too early in the process.
> > > > >> 
> > > > >> I think the problems here are more in the process of how things go from
> > > > >> being tagged stable to appearing in a stable release - the QA or lack
> > > > >> thereof and so on.  While I do share some of your misgivings here I do
> > > > >> also really like the fact that it's really easy for people to push
> > > > >> things out for the attention of those working on backports.  It's
> > > > >> essentially the same as the question I often find myself asking people
> > > > >> who don't upstream - "why would this fix not benefit other users?".
> > > > >
> > > > > Agreed, and I think that's exactly where the expectations don't match what's
> > > > > delivered in the long-term-stable trees.
> > > > >
> > > > > It should be made clear that "stable" doesn't mean "no regressions".  What
> > > > > it reall means is "hey, if you care about backports, this is the stuff to take
> > > > > into consideration in the first place".
> > > > 
> > > > I think this interpretation matches reality better than what
> > > > Documentation/stable_kernel_rules.txt leads you to believe about adding
> > > > cc: stable tag.
> > > 
> > > really?
> > 
> > Honestly, I think so.
> > 
> > > Yes, we have regressions at times in stable kernels, but
> > > really, our % is _very_ low.  Probably less than "normal" releases, but
> > > that's just a random guess, it would be good for someone to try to do
> > > research on this before guessing...
> > 
> > Jon did some of that at LWN (http://lwn.net/Articles/692866/) and he got
> > regression rate estimates for various -stable lines in the range between
> > 0.6-1.4% (4.6) and 2.2-9.6% (3.14).
> > 
> > Of course, whether or not these numbers are significant is a matter of
> > discussion, but they are clearly nonzero.
> 
> I agree, they will always be nonzero, but what is the acceptable number?  :)

Well, that depends.

There are people, like Chris, who mostly care about easy access to backports
that make sense, so to speak.

There are other people who mostly care about having their systems up to date
with respect to security fixes and the like, but without having to follow the
Linus' release points.

The first group would like things to be put into -stable more aggressively,
while the other group would prefer pretty much the opposite.

There is the "right" balance somewhere in between.  I don't know where it is,
but that's where the "acceptable number" comes from.

> > Now, I understand why there are regressions in -stable and to me it would
> > be just fine to say that they will be there occasionally, so as to prevent
> > supporting the "no regressions in -stable at all" expectation that (a) is
> > unrealistic today and (b) seems to be quite widespread.
> 
> The way Jon's numbers were made was by just looking at the patches and
> seeing if they said they fixed a patch that happened to be in a previous
> stable kernel.  Sometimes a "fix" isn't something that people notice as
> it didn't really fix the problem.  So that's not a regression that
> anyone would notice as the issue is just still there.  Teasing that out
> from the patches we have will be a difficult thing to do, as I don't
> think it can be automated.
> 
> But it might make for a good research paper, and someone could probably
> get a master's thesis out of it, so I might propose it to a few
> Universities that I am in communication with :)
> 
> We do have users that have real numbers saying "We tested every single
> 3.10-stable kernel on our infrastructure and nothing ever broke".  We
> also have a huge body of past kernel releases that people can run
> themselves to see how well we are doing on real systems and workloads.
> 
> I also know some users that have real problems with stable kernels for
> very specific hardware reasons (i.e. some graphics chips), due to large
> numbers of backports they are forced to keep on top of their kernel
> tree.  That's a different issue, and one that the stable workflow is not
> set up to address, as that would be impossible.
> 
> > Or do we really want to meet that expectation?
> 
> The expectation that I try to meet is "we will address any reported
> issues as soon as possible".

That's a good one to meet and maybe it would help to just document it this way.

Something like "our process is not guaranteed to be free of regressions, but
we do our best to avoid them and if there are any, we will address them as
soon as reasonably possible"?

>  After all, no one is paying for the service we do, so there's not much
> else we can do here :)

Fair enough. :-)

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:21                                   ` Jiri Kosina
@ 2016-08-04  1:05                                     ` Rafael J. Wysocki
  0 siblings, 0 replies; 244+ messages in thread
From: Rafael J. Wysocki @ 2016-08-04  1:05 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wednesday, August 03, 2016 03:21:15 PM Jiri Kosina wrote:
> On Wed, 3 Aug 2016, Rafael J. Wysocki wrote:
> 
> > Now, I understand why there are regressions in -stable and to me it would
> > be just fine to say that they will be there occasionally, so as to prevent
> > supporting the "no regressions in -stable at all" expectation that (a) is
> > unrealistic today and (b) seems to be quite widespread.
> > 
> > Or do we really want to meet that expectation?
> 
> My primary goal when I was starting this thread (and I certainly didn't 
> expect it to become such a gigantic monster :) ) was to try to figure out 
> how to do better on this front. Of course, perfect is the enemy of good, 
> but *trying* to find ways how to improve the regression rate seems like a 
> reasoneble thing to attempt at least.

I tend to agree with Greg that it would be good to measure the problem
quantitatively for this purpose.

How many regressions are introduced into -stable?  How long do they live on 
average (from the report to the fix)?

It really would be good to know the answers to these questions for the
discussion to be productive IMO.

> That's where some of my proposals in this thread some time ago were coming 
> from.

OTOH, there are some things about the current process that may potentially
become problematic at least occasionally.  For example:

(a) It heavily depends on subsystem maintainers to DTRT when they tag mainline
    commits for -stable (and the more aggressively they do that, the more likely
    they are to overlook something potentially problematic).

(b) Non-maintainers who send -stable inclusion requests may not be aware of
    some potentially problematic side-effects of the commits they would like
    to see in -stable.

So the question is whether or not the above need to be addressed somehow, 
and that's quite independent of the numbers (the fact that we have been 
doing quite well so far doesn't mean that there won't be any problems in 
the future due to those things, and the other way around).

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 14:04                                           ` James Bottomley
  2016-08-03 14:10                                             ` Jiri Kosina
@ 2016-08-04  1:23                                             ` Steven Rostedt
  2016-08-04  8:20                                               ` Greg KH
  1 sibling, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04  1:23 UTC (permalink / raw)
  To: James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

On Wed, 03 Aug 2016 10:04:55 -0400
James Bottomley <James.Bottomley@HansenPartnership.com> wrote:


> OK, so how about you only apply stable patches with a cc stable and a
> fixes tag?

While reading this thread, I thought about replying and suggesting
exactly this. But you did it before I could.

I try to make it a habit to find the commit that a fix is for, and add
that as a Fixes tag and even add a # v<stable-version>+ to the Cc tag.

Maybe we should ask that all cc stable commits have this; otherwise they 
should only be applied to the latest stable and nothing earlier.

IIUC, Greg et al. will apply a stable-tagged commit to all previous 
stable trees as long as they apply cleanly. Greg, is that correct?
Perhaps we shouldn't apply them if they don't have a fixes tag or a
label that states what versions they are for.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 21:36                                                           ` Jiri Kosina
@ 2016-08-04  3:06                                                             ` Steven Rostedt
  0 siblings, 0 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04  3:06 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, 3 Aug 2016 23:36:11 +0200 (CEST)
Jiri Kosina <jikos@kernel.org> wrote:


> I understand that, but I intentionally proposed the rule a little bit more 
> "exact", because meaning of "latest stable" might potentially be different 
> at the time of the patch being commited to subsystem tree, and at the time 
> of the commit actually appearing in Linus' tree (which is the point at 
> which -stable notices).

So what if it is committed into a tree with a newer stable than when it 
was posted? A naked "stable" tag should *only* go into the most recent 
stable when it gets into Linus' tree.

Now, a stable maintainer can look at that commit and decide for
themselves if it should go in or not, but it's not guaranteed to be for
that tree. Basically, if unsure, then leave it alone (unless there's a
bug report in your branch that the commit fixes).

I agree that only commits marked with Fixes or explicit stable ranges
should be backported further than the latest stable branch.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 16:12                                             ` Dmitry Torokhov
  2016-08-03 16:44                                               ` Guenter Roeck
@ 2016-08-04  3:14                                               ` Steven Rostedt
  2016-08-04  3:32                                                 ` Dmitry Torokhov
  2016-08-04  8:27                                               ` Greg KH
  2 siblings, 1 reply; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04  3:14 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, 3 Aug 2016 09:12:34 -0700
Dmitry Torokhov <dmitry.torokhov@gmail.com> wrote:

> When I put cc:stable it is simply a suggestion for stable maintainers to
> figure out if this commit is suitable for _their_ stable. I might have
> an idea about n-1.x stable series but I certainly do not have any desire
> nor time to research whether this patch applicable to 3.2 or 3.0 stable
> series.

Can you at least find the commit that introduced the bug that a patch
fixes? I do that all the time. It's not that hard. git blame is perfect 
for such things.


 git blame broken-file.c

Find the commit that contains the broken code.

 git show SHA1-OF-WHAT-I-FOUND

Oh, it's a whitespace change (curses a little).

 git blame SHA1-OF-WHAT-I-FOUND~1 broken-file.c

(the ~1 will give you the tree before that whitespace fix)

Find the commit that contains the broken code.

Sometimes it takes a few iterations of the above, but once I find the
commit, I have what I need for the "Fixes" tag.

Now, I like to also add the stable range. To do that, I go to my Linus
tree, and do:

 git describe --contains SHA1-OF-BROKEN-COMMIT

and voila, it tells me what release it was added in. Like:

  git describe --contains 85f2b08268c01
v3.14-rc1~82^2~40

Then I know to add:

 Cc: stable@vger.kernel.org # v3.14+
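
If you do this often, the lookup is easy to script. A rough sketch
(nothing official; it assumes you are inside a Linus tree and that $1
is the SHA1 of the commit that introduced the bug):

 #!/bin/sh
 # Print ready-made Fixes: and Cc: stable lines for the commit
 # that introduced the bug.
 sha=$1
 rel=$(git describe --contains "$sha" | sed 's/[~^-].*//')
 echo "Fixes: $(git log -1 --abbrev=12 --format='%h ("%s")' "$sha")"
 echo "Cc: stable@vger.kernel.org # ${rel}+"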

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04  3:14                                               ` Steven Rostedt
@ 2016-08-04  3:32                                                 ` Dmitry Torokhov
  2016-08-04  4:05                                                   ` Steven Rostedt
  0 siblings, 1 reply; 244+ messages in thread
From: Dmitry Torokhov @ 2016-08-04  3:32 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On August 3, 2016 8:14:22 PM PDT, Steven Rostedt <rostedt@goodmis.org> wrote:
>On Wed, 3 Aug 2016 09:12:34 -0700
>Dmitry Torokhov <dmitry.torokhov@gmail.com> wrote:
>
>> When I put cc:stable it is simply a suggestion for stable maintainers to
>> figure out if this commit is suitable for _their_ stable. I might have
>> an idea about n-1.x stable series but I certainly do not have any desire
>> nor time to research whether this patch is applicable to 3.2 or 3.0 stable
>> series.
>
>Can you at least find the commit that introduced the bug that a patch
>fixes? I do that all the time. It's not that hard. git blame is perfect
>for such things.
>
>
> git blame broken-file.c
>
>Find the commit that contains the broken code.
>
> git show SHA1-OF-WHAT-I-FOUND
>
>Oh, it's a whitespace change (curses a little).
>
> git blame SHA1-OF-WHAT-I-FOUND~1 broken-file.c
>
>(the ~1 will give you the tree before that whitespace fix)
>
>Find the commit that contains the broken code.
>
>Sometimes it takes a few iterations of the above, but once I find the
>commit, I have what I need for the "Fixes" tag.
>
>Now, I like to also add the stable range. To do that, I go to my Linus
>tree, and do:
>
> git describe --contains SHA1-OF-BROKEN-COMMIT
>
>and voila, it tells me what release it was added in. Like:
>
>  git describe --contains 85f2b08268c01
>v3.14-rc1~82^2~40
>
>Then I know to add:
>
> Cc: stable@vger.kernel.org # v3.14+

Do you? Are you certain that everything that is needed for your fix for 4.8 to work properly is in 3.14.n?

Also, if it was introduced in 3.4 and you only get a report from 4.4.n just now, do we really need it in 3.14? Maybe it would be better to simply leave that old kernel alone? It's been what, 3 years?


Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04  3:32                                                 ` Dmitry Torokhov
@ 2016-08-04  4:05                                                   ` Steven Rostedt
  0 siblings, 0 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04  4:05 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, 03 Aug 2016 20:32:48 -0700
Dmitry Torokhov <dmitry.torokhov@gmail.com> wrote:

> Do you? Are you certain that everything that is needed for your fix
> for 4.8 to work properly is in 3.14.n?

Most cases yes. If it doesn't apply nicely all the way back, then I
expect an email report from someone telling me that their stable
backport failed to apply cleanly. And if I get time, I try to fix it.

> 
> Also, if it was introduced in 3.4 and you only get a report from 4.4.n
> just now, do we really need it in 3.14? Maybe it would be better to
> simply leave that old kernel alone? It's been what, 3 years?

Well, any bug that can cause an oops I like to get backported. Tracing
is a different beast than most of the kernel. There are 1000s of options
one can set, and some of these bugs only trigger if you enable one of
those 1000s of options. Thus, a bug may go unnoticed for years, but all
it takes is someone debugging their subsystem in an old kernel using the
tracing infrastructure, stumbling on one of these bugs (that I decided
not to backport because it's been what? 3 years?), and the system
crashes due to the tracing system. That person will be pretty pissed
and not trust tracing. So yeah, I do backport them.

I also maintain the stable -rt kernel releases. 8 of them to be exact,
and they range from 3.2-rt to 4.4-rt. I like to have these up to date
with the latest tracing fixes too.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04  1:23                                             ` Steven Rostedt
@ 2016-08-04  8:20                                               ` Greg KH
  2016-08-04 13:33                                                 ` Steven Rostedt
  0 siblings, 1 reply; 244+ messages in thread
From: Greg KH @ 2016-08-04  8:20 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Aug 03, 2016 at 09:23:32PM -0400, Steven Rostedt wrote:
> On Wed, 03 Aug 2016 10:04:55 -0400
> James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> 
> 
> > OK, so how about you only apply stable patches with a cc stable and a
> > fixes tag?
> 
> While reading this thread, I thought about replying and suggesting
> exactly this. But you did it before I could.
> 
> I try to make it a habit to find the commit that a fix is for, and add
> that as a Fixes tag and even add a # v<stable-version>+ to the Cc tag.
> 
> Maybe we ask that all cc stable commits have this, otherwise it should
> only be applied to the previous stable and nothing earlier.

No, again, that would put more burden on the maintainer and developer
than I want to "enforce".  I don't even want to do that extra work for
the trees I maintain, I just couldn't scale that way.

> IIUC, Greg et al. will apply a stable-tagged commit to all previous
> stable trees as long as they apply cleanly. Greg, is that correct?
> Perhaps we shouldn't apply them if they don't have a fixes tag or a
> label that states what versions they are for.

I apply them to older kernels based on my best judgement.  That includes
reading the patch, seeing how "cleanly" they apply, and judging the
severity of the patch.  I only notify developers if their patch doesn't
apply to an older kernel tree IF they have marked it as explicitly being
needed for an older kernel tree.
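
As an aside, a minimal way to check the "how cleanly" part against a
given stable branch is something like the following (branch name and
SHA1 are placeholders; this only checks textual application, not
whether the result is actually correct):

 git checkout linux-4.1.y
 git show SHA1-OF-FIX | git apply --check && echo "applies cleanly"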

Now I greatly appreciate the use of Fixes: and other hints to show how
far back a patch should be backported, don't get me wrong.  But I'm not
going to require that it be present in order to have a patch backported,
again, too much work for maintainers.

It's up to anyone who wants to maintain a "longterm" stable tree to do
this extra work on their own.  It's not easy, and it is work, but that's
just part of the job.  We can't force maintainers to care about older
kernel versions if they don't want to, as maintainers are our most
limited resource right now.

Remember, we _still_ have whole subsystems that never mark anything for
stable, let's focus on them please, that's the biggest issue for stable
trees that I can see right now.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 15:48                                           ` Guenter Roeck
  2016-08-03 16:12                                             ` Dmitry Torokhov
@ 2016-08-04  8:21                                             ` Greg KH
  2016-08-05  4:46                                             ` Jonathan Cameron
  2 siblings, 0 replies; 244+ messages in thread
From: Greg KH @ 2016-08-04  8:21 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Aug 03, 2016 at 08:48:34AM -0700, Guenter Roeck wrote:
> On 08/03/2016 07:45 AM, Jiri Kosina wrote:
> > On Wed, 3 Aug 2016, Greg KH wrote:
> > 
> > > > Has anything changed in the process that'd just make patches like this one
> > > > to be not merged these days?
> > > 
> > > We have Guenter's test-bot that has helped out immensely here with this.
> > 
> > That's very good to know, I admit that I have close to zero idea about how
> > the stable -rcs are being tested.
> > 
> 
> ... and when it doesn't work because I messed it up, we get issues such as 3.18
> and 4.1 being broken for mips and sparc64 because a couple of patches which don't
> apply to those kernels were tagged with an unqualified Cc: stable and applied.
> 
> So, if anything, the one problem I see with the current stable process is
> those unqualified stable tags. Maybe those should be deprecated; expecting
> stable maintainers to figure out if a patch applies to a given stable branch
> or not is a bit too much to ask for. With stable releases as far back as
> 3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
> patch tagged with an unqualified Cc: stable doesn't apply to all branches.

As I just wrote to Steve, it's up to the maintainer of such a longterm
kernel branch to do the work of determining what needs to be backported,
we can't force maintainers to do this work, that's not going to scale at
all, sorry.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 15:47                                 ` Guenter Roeck
@ 2016-08-04  8:25                                   ` Greg KH
  0 siblings, 0 replies; 244+ messages in thread
From: Greg KH @ 2016-08-04  8:25 UTC (permalink / raw)
  To: Guenter Roeck; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed, Aug 03, 2016 at 08:47:48AM -0700, Guenter Roeck wrote:
> On 08/03/2016 04:09 AM, Greg KH wrote:
> > On Wed, Aug 03, 2016 at 12:36:29PM +0300, Jani Nikula wrote:
> > > On Wed, 03 Aug 2016, "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> > > > On Tuesday, August 02, 2016 04:34:00 PM Mark Brown wrote:
> > > > > On Tue, Aug 02, 2016 at 05:12:47PM +0300, Jani Nikula wrote:
> > > > > 
> > > > > > Generally adding cc: stable is like, this is clearly a fix to a bug that
> > > > > > is present in stable kernels, and the bug should be fixed, but I have no
> > > > > > idea nor resources to review or test if this is the right fix across all
> > > > > > stable kernels. You end up relying on your gut feeling too much to be
> > > > > > comfortable. You have to make the call too early in the process.
> > > > > 
> > > > > I think the problems here are more in the process of how things go from
> > > > > being tagged stable to appearing in a stable release - the QA or lack
> > > > > thereof and so on.  While I do share some of your misgivings here I do
> > > > > also really like the fact that it's really easy for people to push
> > > > > things out for the attention of those working on backports.  It's
> > > > > essentially the same as the question I often find myself asking people
> > > > > who don't upstream - "why would this fix not benefit other users?".
> > > > 
> > > > Agreed, and I think that's exactly where the expectations don't match what's
> > > > delivered in the long-term-stable trees.
> > > > 
> > > > It should be made clear that "stable" doesn't mean "no regressions".  What
> > > > it really means is "hey, if you care about backports, this is the stuff to take
> > > > into consideration in the first place".
> > > 
> > > I think this interpretation matches reality better than what
> > > Documentation/stable_kernel_rules.txt leads you to believe about adding
> > > cc: stable tag.
> > 
> > really?  Yes, we have regressions at times in stable kernels, but
> > really, our % is _very_ low.  Probably less than "normal" releases, but
> > that's just a random guess, it would be good for someone to try to do
> > research on this before guessing...
> > 
> 
> It is, but somehow there seems to be an expectation that it be 0.

People are crazy.

> We had one regression after merging 4.4.14 into chromeos-4.4, due to a patch
> applied from mainline which was later reverted in mainline due to the problems
> it caused (dea2cf7c0c6e, ecryptfs: forbid opening files without mmap handler).
> The regression percentage (from 4.4.4 to 4.4.14) was 1 bad patch out of 1,044,
> or 0.1% (I am sure there are probably more regressions in there, but there was
> one that affected us). One should think that such a percentage is acceptable,
> but judging from the heat I got for promoting that merge, it sounded like
> the end of the world.

I love project managers who want stuff for free yet expect no work to be
needed on their own...

> This is a problem of perception - it treats regressions in stable releases
> much differently than regressions in mainline or in vendor branches, without
> taking into account the benefits. The 1,043 bug fixes don't count because
> of one regression.
> 
> How does one address such perception problems? I really have no idea.
> However, I don't think that it would make sense to change the stable process
> because of it. I think it works surprisingly well.

I've been working on the perception problem in my talks this past year,
but it's going to take time.  Which is fine, we have time, we aren't
going anywhere :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 16:12                                             ` Dmitry Torokhov
  2016-08-03 16:44                                               ` Guenter Roeck
  2016-08-04  3:14                                               ` Steven Rostedt
@ 2016-08-04  8:27                                               ` Greg KH
  2 siblings, 0 replies; 244+ messages in thread
From: Greg KH @ 2016-08-04  8:27 UTC (permalink / raw)
  To: Dmitry Torokhov; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Wed, Aug 03, 2016 at 09:12:34AM -0700, Dmitry Torokhov wrote:
> On Wed, Aug 03, 2016 at 08:48:34AM -0700, Guenter Roeck wrote:
> > On 08/03/2016 07:45 AM, Jiri Kosina wrote:
> > >On Wed, 3 Aug 2016, Greg KH wrote:
> > >
> > >>>Has anything changed in the process that'd just make patches like this one
> > >>>to be not merged these days?
> > >>
> > >>We have Guenter's test-bot that has helped out immensely here with this.
> > >
> > >That's very good to know, I admit that I have close to zero idea about how
> > >the stable -rcs are being tested.
> > >
> > 
> > ... and when it doesn't work because I messed it up, we get issues such as 3.18
> > and 4.1 being broken for mips and sparc64 because a couple of patches which don't
> > apply to those kernels were tagged with an unqualified Cc: stable and applied.
> > 
> > So, if anything, the one problem I see with the current stable process is
> > those unqualified stable tags. Maybe those should be deprecated; expecting
> > stable maintainers to figure out if a patch applies to a given stable branch
> > or not is a bit too much to ask for. With stable releases as far back as
> > 3.2 (or 338,020 commits as of right now) it is almost guaranteed that a
> > patch tagged with an unqualified Cc: stable doesn't apply to all branches.
> 
> When I put cc:stable it is simply a suggestion for stable maintainers to
> figure out if this commit is suitable for _their_ stable. I might have
> an idea about n-1.x stable series but I certainly do not have any desire
> nor time to research whether this patch is applicable to 3.2 or 3.0 stable
> series.

Nor should you be expected to.

> Stable maintaintership should be more than "swipe in everything marked
> as cc:stable, try compiling and hope it all good".

It is, at times, a bit more than that :)

Maybe that's why there are very few people doing this work, it's not
as simple as just "throw it all in and hope"...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04  8:20                                               ` Greg KH
@ 2016-08-04 13:33                                                 ` Steven Rostedt
  2016-08-04 15:32                                                   ` Takashi Iwai
  2016-08-04 15:44                                                   ` Mark Brown
  0 siblings, 2 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04 13:33 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 4 Aug 2016 10:20:18 +0200
Greg KH <greg@kroah.com> wrote:

> On Wed, Aug 03, 2016 at 09:23:32PM -0400, Steven Rostedt wrote:
> > On Wed, 03 Aug 2016 10:04:55 -0400
> > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > 
> >   
> > > OK, so how about you only apply stable patches with a cc stable and a
> > > fixes tag?  
> > 
> > While reading this thread, I thought about replying and suggesting
> > exactly this. But you did it before I could.
> > 
> > I try to make it a habit to find the commit that a fix is for, and add
> > that as a Fixes tag and even add a # v<stable-version>+ to the Cc tag.
> > 
> > Maybe we ask that all cc stable commits have this, otherwise it should
> > only be applied to the previous stable and nothing earlier.  
> 
> No, again, that would put more burden on the maintainer and developer
> than I want to "enforce".  I don't even want to do that extra work for
> the trees I maintain, I just couldn't scale that way.

Note, this isn't just good practice for sending patches to stable, it's
general good practice for maintaining code. It gives a nice history of a
change. If you look at the change log of code that looks "interesting",
it may be very educational to see that it was done as a fix for
something else. And a new developer may understand why the code was
added in the first place.

I don't buy this as burden on a maintainer. This should be part of the
maintenance procedure, regardless of sending to stable or not. Yes it
does take extra time, but I don't think that time is wasted.

> 
> > IIUC, Greg et al. will apply a stable-tagged commit to all previous
> > stable trees as long as they apply cleanly. Greg, is that correct?
> > Perhaps we shouldn't apply them if they don't have a fixes tag or a
> > label that states what versions they are for.  
> 
> I apply them to older kernels based on my best judgement.  That includes
> reading the patch, seeing how "cleanly" they apply, and judging the
> severity of the patch.  I only notify developers if their patch doesn't
> apply to an older kernel tree IF they have marked it as explicitly being
> needed for an older kernel tree.
> 
> Now I greatly appreciate the use of fixes: and other hints to show how
> old a patch should be backported to, don't get me wrong.  But I'm not
> going to require that it be present in order to have a patch backported,
> again, too much work for maintainers.

I was saying that it should be required for backporting beyond the most
recent stable. But it may also be a hint for other stable maintainers to
look at the patch. There's just no guarantee that it goes any farther
back than one release.

> 
> It's up to anyone who wants to maintain a "longterm" stable tree to do
> this extra work on their own.  It's not easy, and it is work, but that's
> just part of the job.  We can't force maintainers to care about older
> kernel versions if they don't want to, as maintainers are our most
> limited resource right now.

I agree. But my suggestion about the Fixes tag was more about trying to
get maintainers into a good practice for the maintenance of the code
itself.
 
> 
> Remember, we _still_ have whole subsystems that never mark anything for
> stable, let's focus on them please, that's the biggest issue for stable
> trees that I can see right now.

I'd be more interested in getting all subsystems to just add Fixes
tags ;-)

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 13:57                                       ` James Bottomley
  2016-08-03 13:59                                         ` Jiri Kosina
  2016-08-03 14:45                                         ` Mark Brown
@ 2016-08-04 13:48                                         ` Geert Uytterhoeven
  2 siblings, 0 replies; 244+ messages in thread
From: Geert Uytterhoeven @ 2016-08-04 13:48 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss, Trond Myklebust

On Wed, Aug 3, 2016 at 3:57 PM, James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Wed, 2016-08-03 at 15:48 +0200, Jiri Kosina wrote:
>> On Wed, 3 Aug 2016, Greg KH wrote:
>> > Real examples from now on please, if there are problems in the
>> > stable workflow that we have today, everyone needs to show it with
>> > examples, I'm tired of seeing mental gymnastics around stable
>> > kernels just because it is "fun".
>>
>> Let me pick an example I personally had a lot of issues quite some
>> time ago:
>>
>>       https://lkml.org/lkml/2013/4/22/259
>>
>> This was a patch that got added to -stable to fix a problem that
>> didn't exist there. It caused system bustage almost immediately,
>> which indicates that very limited testing has been done prior to
>> releasing the patch.
>>
>> I believe that patches like this should really be caught during
>> -stable review; anyone familiar with the VFS code and actually
>> looking at the patch would notice immediately that it's fixing a bug
>> that doesn't exist in the code at all in the first place; that seems
>> to indicate that noone has actually explicitly reviewed it for
>> -stable, and therefore it's questionable whether it should have been
>> applied.
>
> This isn't a viable approach.  Firstly stable review is less thorough
> than upstream review because the review mostly goes "yes, I already
> reviewed this in upstream".  Secondly, if the upstream review didn't
> catch the problems why would we suddenly catch them in a stable review?

Stable review is useful exactly for this. There have been a few occurrences
where I did reply "no, you must not backport this patch to that stable version"
during stable reviews.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 21:25                                                       ` Jiri Kosina
  2016-08-03 21:31                                                         ` Dmitry Torokhov
@ 2016-08-04 14:02                                                         ` Jan Kara
  1 sibling, 0 replies; 244+ messages in thread
From: Jan Kara @ 2016-08-04 14:02 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Wed 03-08-16 23:25:43, Jiri Kosina wrote:
> On Wed, 3 Aug 2016, Dmitry Torokhov wrote:
> 
> > I wonder if we could change meaning of naked cc: stable@ to mean latest 
> > stable only, and if fix is important enough then maintainer or somebody 
> > else can annotate how far back the fix should be applied? Ideally with 
> > "Fixes: XXX"?
> 
> Yeah, James already proposed that and I totally agree with that.
> 
> Greg, would you have any objection to formulating some rule a la "all the 
> stable annotations should either be accompanied by an explicit 'Fixes:', or 
> an explicit range of kernel versions the patch is applicable to"?
> 
> I believe this is a very reasonable compromise between "maintainers or 
> submitters have to do their homework wrt. stable" and "we don't want to 
> impose too much overhead to anybody".

Yeah. What I like about this proposal is that it is a constant burden per
patch rather than per patch per stable tree (so I as a maintainer /
developer don't have to care how many stable trees are out there). So IMHO
this has a chance to work.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 13:33                                                 ` Steven Rostedt
@ 2016-08-04 15:32                                                   ` Takashi Iwai
  2016-08-04 15:40                                                     ` Steven Rostedt
  2016-08-04 15:47                                                     ` Jiri Kosina
  2016-08-04 15:44                                                   ` Mark Brown
  1 sibling, 2 replies; 244+ messages in thread
From: Takashi Iwai @ 2016-08-04 15:32 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Thu, 04 Aug 2016 15:33:55 +0200,
Steven Rostedt wrote:
> 
> On Thu, 4 Aug 2016 10:20:18 +0200
> Greg KH <greg@kroah.com> wrote:
> 
> > On Wed, Aug 03, 2016 at 09:23:32PM -0400, Steven Rostedt wrote:
> > > On Wed, 03 Aug 2016 10:04:55 -0400
> > > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > > 
> > >   
> > > > OK, so how about you only apply stable patches with a cc stable and a
> > > > fixes tag?  
> > > 
> > > While reading this thread, I thought about replying and suggesting
> > > exactly this. But you did it before I could.
> > > 
> > > I try to make it a habit to find the commit that a fix is for, and add
> > > that as a Fixes tag and even add a # v<stable-version>+ to the Cc tag.
> > > 
> > > Maybe we ask that all cc stable commits have this, otherwise it should
> > > only be applied to the previous stable and nothing earlier.  
> > 
> > No, again, that would put more burden on the maintainer and developer
> > than I want to "enforce".  I don't even want to do that extra work for
> > the trees I maintain, I just couldn't scale that way.
> 
> Note, this isn't just good practice for sending patches to stable, it's
> general good practice for maintaining code. It gives a nice history of a
> change. If you look at the change log of code that looks "interesting",
> it may be very educational to see that it was done as a fix for
> something else. And a new developer may understand why the code was
> added in the first place.
> 
> I don't buy this as burden on a maintainer. This should be part of the
> maintenance procedure, regardless of sending to stable or not. Yes it
> does take extra time, but I don't think that time is wasted.

Agreed that it's a good practice.  But what if a fix isn't a
regression fix?  Many stable patches are trivial ones like PCI ID
additions.

We could point such a commit at the one that introduced the corresponding
code, but does it make sense at all?


thanks,

Takashi

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 15:32                                                   ` Takashi Iwai
@ 2016-08-04 15:40                                                     ` Steven Rostedt
  2016-08-04 15:47                                                     ` Jiri Kosina
  1 sibling, 0 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04 15:40 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Thu, 04 Aug 2016 17:32:48 +0200
Takashi Iwai <tiwai@suse.de> wrote:


> Agreed that it's a good practice.  But what if a fix isn't a
> regression fix?  Many stable patches are trivial ones like PCI ID
> additions.
> 
> We could point such a commit at the one that introduced the corresponding
> code, but does it make sense at all?

I agree that non-bug fixes don't need the Fixes tag. Because a missing
PCI ID is not a bug fix, it really is an enhancement (supporting new
hardware). Those are fine.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 13:33                                                 ` Steven Rostedt
  2016-08-04 15:32                                                   ` Takashi Iwai
@ 2016-08-04 15:44                                                   ` Mark Brown
  2016-08-04 15:56                                                     ` James Bottomley
  2016-08-04 16:14                                                     ` Steven Rostedt
  1 sibling, 2 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-04 15:44 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 1875 bytes --]

On Thu, Aug 04, 2016 at 09:33:55AM -0400, Steven Rostedt wrote:
> Greg KH <greg@kroah.com> wrote:

> > No, again, that would put more burden on the maintainer and developer
> > than I want to "enforce".  I don't even want to do that extra work for
> > the trees I maintain, I just couldn't scale that way.

> Note, this isn't just good practice for sending patches to stable, it's
> general good practice for maintaining code. It gives a nice history of a
> change. If you look at the change log of code that looks "interesting",
> it may be very educational to see that it was done as a fix for
> something else. And a new developer may understand why the code was
> added in the first place.

If it's a choice between me taking a bugfix for mainline and me getting
someone to give me a commit ID for exactly which commit introduced some
change I'm probably not going to do the latter, especially when a lot of
these things are more of the "we now understand the hardware better"
variety.

> I don't buy this as burden on a maintainer. This should be part of the
> maintenance procedure, regardless of sending to stable or not. Yes it
> does take extra time, but I don't think that time is wasted.

I'm really happy we've got people engaging upstream.  I'm happy if
people fill in the extra information but really I'm way more interested
in a clear changelog than in getting a Fixes tag, or in checking that
the tags people are adding are accurate.

One thing I've told driver authors before is that the big thing I'm
looking at for drivers for hardware which has limited distribution is
the impact it has on the subsystem and maintainability - to a great
degree if nobody can tell if the driver works there's a limited extent
to which I'm going to care about things.  That doesn't mean I'm not
reviewing at all but there's always a point where I just can't tell.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 15:32                                                   ` Takashi Iwai
  2016-08-04 15:40                                                     ` Steven Rostedt
@ 2016-08-04 15:47                                                     ` Jiri Kosina
  2016-08-04 16:18                                                       ` Takashi Iwai
  1 sibling, 1 reply; 244+ messages in thread
From: Jiri Kosina @ 2016-08-04 15:47 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 4 Aug 2016, Takashi Iwai wrote:

> Agreed that it's a good practice.  But what if a fix isn't a
> regression fix?  Many stable patches are trivial ones like PCI ID
> additions.

If it's not a bugfix, the applicability to -stable is very questionable in 
my eyes.

Device IDs might be made the same exception as post-merge window PCI ID 
additions to Linus' tree. Even there, post -rc1, everything should be 
bugfix, but device ID additions are allowed.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 15:44                                                   ` Mark Brown
@ 2016-08-04 15:56                                                     ` James Bottomley
  2016-08-04 17:01                                                       ` Mark Brown
  2016-08-04 16:14                                                     ` Steven Rostedt
  1 sibling, 1 reply; 244+ messages in thread
From: James Bottomley @ 2016-08-04 15:56 UTC (permalink / raw)
  To: Mark Brown, Steven Rostedt; +Cc: Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 2585 bytes --]

On Thu, 2016-08-04 at 16:44 +0100, Mark Brown wrote:
> On Thu, Aug 04, 2016 at 09:33:55AM -0400, Steven Rostedt wrote:
> > Greg KH <greg@kroah.com> wrote:
> 
> > > No, again, that would put more burden on the maintainer and 
> > > developer than I want to "enforce".  I don't even want to do that 
> > > extra work for the trees I maintain, I just couldn't scale that
> > > way.
> 
> > Note, this isn't just good practice for sending patches to stable, it's
> > general good practice for maintaining code. It gives a nice history of a
> > change. If you look at the change log of code that looks "interesting",
> > it may be very educational to see that it was done as a fix for
> > something else. And a new developer may understand why the code was
> > added in the first place.
> 
> If it's a choice between me taking a bugfix for mainline and me 
> getting someone to give me a commit ID for exactly which commit 
> introduced some change I'm probably not going to do the latter, 
> especially when a lot of these things are more of the "we now 
> understand the hardware better" variety.
> 
> > I don't buy this as burden on a maintainer. This should be part of 
> > the maintenance procedure, regardless of sending to stable or not. 
> > Yes it does take extra time, but I don't think that time is wasted.
> 
> I'm really happy we've got people engaging upstream.  I'm happy if
> people fill in the extra information but really I'm way more 
> interested in a clear changelog than in getting a Fixes tag, or in 
> checking that the tags people are adding are accurate.

Why not look at this in a different way: any bug fix that's correcting
a prior commit is actually a regression fix.  The reason we treat
regressions differently (and actually have someone to track them) is
that they're the ones users hate: it means that something that was
previously working (and thus relied on) suddenly doesn't work any more.
Note that not every bug fix with a Fixes tag is fixing a technical
regression: if the Fixes tag points to the commit that first
introduced the feature, it's not a regression because the non-buggy
behaviour was never visible to the end user.  However, the people who
track regressions are capable of sorting this out.

What I'm pointing out is that Fixes: has uses that go way beyond
stable.  That's why I do think it's good practice for the kernel.  It's
something we should already be doing that can assist the stable
process, not something that's just been invented for the purpose.

James


[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 15:44                                                   ` Mark Brown
  2016-08-04 15:56                                                     ` James Bottomley
@ 2016-08-04 16:14                                                     ` Steven Rostedt
  2016-08-04 17:51                                                       ` Mark Brown
  2016-08-04 18:16                                                       ` Geert Uytterhoeven
  1 sibling, 2 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04 16:14 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

On Thu, 4 Aug 2016 16:44:44 +0100
Mark Brown <broonie@kernel.org> wrote:


> If it's a choice between me taking a bugfix for mainline and me getting
> someone to give me a commit ID for exactly which commit introduced some
> change I'm probably not going to do the latter, especially when a lot of
> these things are more of the "we now understand the hardware better"
> variety.

Who said anything about a choice between one or the other?

> 
> > I don't buy this as burden on a maintainer. This should be part of the
> > maintenance procedure, regardless of sending to stable or not. Yes it
> > does take extra time, but I don't think that time is wasted.  
> 
> I'm really happy we've got people engaging upstream.  I'm happy if
> people fill in the extra information but really I'm way more interested
> in a clear changelog than in getting a Fixes tag, or in checking that
> the tags people are adding are accurate.

Having a clear change log is orthogonal to having a Fixes tag. Actually,
in my experience, change logs with Fixes tags tend to have clearer
explanations in the change log than those without. Because to get that
Fixes tag, one did some research to why the bug happened in the first
place.

> 
> One thing I've told driver authors before is that the big thing I'm
> looking at for drivers for hardware which has limited distribution is
> the impact it has on the subsystem and maintainability - to a great
> degree if nobody can tell if the driver works there's a limited extent
> to which I'm going to care about things.  That doesn't mean I'm not
> reviewing at all but there's always a point where I just can't tell.

For some exotic driver, clear change logs and Fixes tags are probably
not as important, as there are not many users and, who knows, maybe the
hardware won't even exist in the future when we would want to look into
the history of the code.

I mostly work in the core kernel, and any part of the kernel that has a
much larger area of use would greatly benefit from having clear change
logs as well as Fixes tags that point to regressions, or even to bugs
that were there all along. It helps out in many areas besides just
stable. As James mentions, it helps with regression tracking, which is
something that people have been wanting to see return.

I found a bug in an old version of the kernel that I was using and when
I looked upstream, there was a fixes tag. Not only did I easily find
the proper fix, but it helped me see that the broken code was indeed in
the kernel I was playing with. That's something similar to having
stable releases, but it helps out any other users that are using older
kernels.
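
That kind of lookup is trivial once the tags are there. For example, to
find upstream commits claiming to fix the commit I used as an example
earlier (this assumes the Fixes: tags quote at least that many hex
digits):

 git log --oneline --grep='Fixes: 85f2b08268c0'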

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 15:47                                                     ` Jiri Kosina
@ 2016-08-04 16:18                                                       ` Takashi Iwai
  2016-08-04 16:26                                                         ` Steven Rostedt
  0 siblings, 1 reply; 244+ messages in thread
From: Takashi Iwai @ 2016-08-04 16:18 UTC (permalink / raw)
  To: Jiri Kosina; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 04 Aug 2016 17:47:38 +0200,
Jiri Kosina wrote:
> 
> On Thu, 4 Aug 2016, Takashi Iwai wrote:
> 
> > Agreed that it's a good practice.  But what if a fix isn't a
> > regression fix?  Many stable patches are trivial ones like PCI ID
> > additions.
> 
> If it's not a bugfix, the applicability to -stable is very questionable in 
> my eyes.
> 
> Device IDs might be made the same exception as post-merge window PCI ID 
> additions to Linus' tree. Even there, post -rc1, everything should be 
> bugfix, but device ID additions are allowed.

Yes, I didn't mean a non-fix.  The oft-seen addition of PCI IDs or
addition of device-specific quirks are actually "fixes", but not a
regression fix one can point to a certain commit.  The driver code
itself is fine until the new hardware.  It's just a new device that
came up after the driver code.


Takashi

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 16:18                                                       ` Takashi Iwai
@ 2016-08-04 16:26                                                         ` Steven Rostedt
  0 siblings, 0 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04 16:26 UTC (permalink / raw)
  To: Takashi Iwai; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 04 Aug 2016 18:18:46 +0200
Takashi Iwai <tiwai@suse.de> wrote:


> Yes, I didn't mean a non-fix.  The oft-seen addition of PCI IDs or
> addition of device-specific quirks are actually "fixes", but not a
> regression fix one can point to a certain commit.  The driver code
> itself is fine until the new hardware.  It's just a new device that
> came up after the driver code.

I still call that an "enhancement" and not a "fix". That is, adding
support for new hardware is an enhancement to the driver. But because
it's such a trivial change where there's only benefit and not really
any chance of a bug, there's no reason not to add it, and this
"enhancement" becomes the exception to the rule. The only real risk is
that someone adds a wrong number. But even that can be quickly
remedied.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 15:56                                                     ` James Bottomley
@ 2016-08-04 17:01                                                       ` Mark Brown
  2016-08-04 17:11                                                         ` Steven Rostedt
  0 siblings, 1 reply; 244+ messages in thread
From: Mark Brown @ 2016-08-04 17:01 UTC (permalink / raw)
  To: James Bottomley; +Cc: Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 1376 bytes --]

On Thu, Aug 04, 2016 at 08:56:53AM -0700, James Bottomley wrote:

> What I'm pointing out is that Fixes: has uses that go way beyond
> stable.  That's why I do think it's good practice for the kernel.  It's
> something we should already be doing that can assist the stable
> process, not something that's just been invented for the purpose.

Right, I'm not saying it's not good practice, just that I don't think
insisting on it as a matter of pure process and bookkeeping is the best
way forwards - if people are providing it because it's good practice and
they've done the analysis against upstream, that's great, but if someone
is filling it in because they've got to check that box on the form I'm
less convinced.  I'm not sure that the degradation in the quality of
information that gets recorded (I'm pretty sure I at least don't have
the capacity to actively verify every Fixes tag), or the cases where fixes
don't end up in stable because the submitter doesn't care about that,
are going to be worth it.

I've had some bad experiences with some similar reporting requirements
in the past - if people aren't actively engaged and supportive they end
up working around rather than with the process which can make it harder
to use the information later on.  Perhaps I'm overreacting to those but
I'd much rather see this promoted as good practice than a stick to beat
people with.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 17:01                                                       ` Mark Brown
@ 2016-08-04 17:11                                                         ` Steven Rostedt
  2016-08-04 17:53                                                           ` Mark Brown
  2016-08-05  8:16                                                           ` Jani Nikula
  0 siblings, 2 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04 17:11 UTC (permalink / raw)
  To: Mark Brown; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 4 Aug 2016 18:01:26 +0100
Mark Brown <broonie@kernel.org> wrote:

> I've had some bad experiences with some similar reporting requirements
> in the past - if people aren't actively engaged and supportive they end
> up working around rather than with the process which can make it harder
> to use the information later on.  Perhaps I'm overreacting to those but
> I'd much rather see this promoted as good practice than a stick to beat
> people with.

Note, all this should probably be treated as guidelines or "best
practices" and not "hard requirements".

Perhaps we should document what people should strive to achieve, and
state that if you really don't have the time, one may do without. But
a clear change log and information showing where a bug happened make
maintenance of the code in the future a little bit easier.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 16:14                                                     ` Steven Rostedt
@ 2016-08-04 17:51                                                       ` Mark Brown
  2016-08-04 18:16                                                       ` Geert Uytterhoeven
  1 sibling, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-04 17:51 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, ksummit-discuss, Trond Myklebust

[-- Attachment #1: Type: text/plain, Size: 2151 bytes --]

On Thu, Aug 04, 2016 at 12:14:14PM -0400, Steven Rostedt wrote:
> Mark Brown <broonie@kernel.org> wrote:

> > I'm really happy we've got people engaging upstream.  I'm happy if
> > people fill in the extra information but really I'm way more interested
> > in a clear changelog than in getting a Fixes tag, or in checking that
> > the tags people are adding are accurate.

> Having a clear change log is orthogonal to having a Fixes tag. Actually,
> in my experience, change logs with Fixes tags tend to have clearer
> explanations in the change log than those without. Because to get that
> Fixes tag, one did some research into why the bug happened in the first
> place.

Yes, definitely - but equally the clear changelog will often have
something to the effect of "we did more evaluation of the chip",
"production versions of the chip have such and such a change" or
whatever.  

> I mostly work in the core kernel, and any part of the kernel that has a
> much larger area of use would greatly benefit from having clear change
> logs as well as Fixes tags that point to regressions, or even to bugs
> that were there all along. It helps out in many areas besides just

It's a sliding scale as to what should get the closest scrutiny and the
most detailed changelogs and so on, but hard rules affect everyone.

> stable. As James mentions, it helps with regression tracking, which is
> something that people have been wanting to see return.

Indeed.  More use for things like regression tracking would be a really
good reason to get people engaged and motivate them to provide the
information.

> I found a bug in an old version of the kernel that I was using and when
> I looked upstream, there was a fixes tag. Not only did I easily find
> the proper fix, but it helped me see that the broken code was indeed in
> the kernel I was playing with. That's something similar to having
> stable releases, but it helps out any other users that are using older
> kernels.

To be clear I think it's great to have the information, I just don't
think making it a requirement for every fix is going to have the desired
result.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 17:11                                                         ` Steven Rostedt
@ 2016-08-04 17:53                                                           ` Mark Brown
  2016-08-05  8:16                                                           ` Jani Nikula
  1 sibling, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-04 17:53 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 381 bytes --]

On Thu, Aug 04, 2016 at 01:11:46PM -0400, Steven Rostedt wrote:

> Perhaps we should document what people should strive to achieve, and
> state that if you really don't have the time, one may do without. But
> a clear change log and information showing where a bug happened make
> maintenance of the code in the future a little bit easier.

Definitely agreed.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 16:14                                                     ` Steven Rostedt
  2016-08-04 17:51                                                       ` Mark Brown
@ 2016-08-04 18:16                                                       ` Geert Uytterhoeven
  2016-08-04 18:44                                                         ` Steven Rostedt
  1 sibling, 1 reply; 244+ messages in thread
From: Geert Uytterhoeven @ 2016-08-04 18:16 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, Aug 4, 2016 at 6:14 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Thu, 4 Aug 2016 16:44:44 +0100
> Mark Brown <broonie@kernel.org> wrote:
>> If it's a choice between me taking a bugfix for mainline and me getting
>> someone to give me a commit ID for exactly which commit introduced some
>> change I'm probably not going to do the latter, especially when a lot of
>> these things are more of the "we now understand the hardware better"
>> variety.
>
> Who said anything about a choice between one or the other?
>
>> > I don't buy this as burden on a maintainer. This should be part of the
>> > maintenance procedure, regardless of sending to stable or not. Yes it
>> > does take extra time, but I don't think that time is wasted.
>>
>> I'm really happy we've got people engaging upstream.  I'm happy if
>> people fill in the extra information but really I'm way more interested
>> in a clear changelog than in getting a Fixes tag, or in checking that
>> the tags people are adding are accurate.
>
> Having a clear change log is orthogonal to having a Fixes tag. Actually,
> in my experience, change logs with Fixes tags tend to have clearer
> explanations in the change log than those without. Because to get that
> Fixes tag, one did some research into why the bug happened in the first
> place.

Would publishing statistics help, like the top 10 of Ackers and Reviewers?

E.g. Hall of Fame of bug fixers, based on the presence of Fixes tags, and
Hall of Shame, based on patches CCed to stable lacking Fixes tags?
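
Generating the latter would be a small scripting exercise. A rough
sketch (release range chosen arbitrarily), counting authors of
stable-tagged commits that lack a Fixes: tag:

 git log v4.6..v4.7 --grep='stable@vger.kernel.org' --format='%h %an' |
 while read sha author; do
        git log -1 --format=%B "$sha" | grep -q '^Fixes:' || echo "$author"
 done | sort | uniq -c | sort -rn | head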

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 18:16                                                       ` Geert Uytterhoeven
@ 2016-08-04 18:44                                                         ` Steven Rostedt
  2016-08-04 18:48                                                           ` Geert Uytterhoeven
  2016-08-04 18:52                                                           ` Laurent Pinchart
  0 siblings, 2 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04 18:44 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 4 Aug 2016 20:16:45 +0200
Geert Uytterhoeven <geert@linux-m68k.org> wrote:

> Would publishing statistics help, like the top 10 of Ackers and Reviewers?
> 
> E.g. Hall of Fame of bug fixers, based on the presence of Fixes tags, and
> Hall of Shame, based on patches CCed to stable lacking Fixes tags?

I would not have the Hall of Shame, as people just like seeing their
name in print; they may not care if it is fame or shame.

As for the Hall of Fame, I'm not sure what I would think if I made that
list, as most of my Fixes are for code that I originally wrote. Thus it
just points out all the bad code I made in the past.

Reminds me of a story I heard a long time ago, about a manager that
wanted to help encourage his programmers to find more bugs. He gave
them a $20 bonus for every bug they found and fixed. What he forgot to
take into consideration was that these were the same programmers that
were writing the code they would be finding bugs in. I heard that one
programmer racked up $10,000 before they fixed their mistake ;-)

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 18:44                                                         ` Steven Rostedt
@ 2016-08-04 18:48                                                           ` Geert Uytterhoeven
  2016-08-04 19:06                                                             ` Mark Brown
  2016-08-04 18:52                                                           ` Laurent Pinchart
  1 sibling, 1 reply; 244+ messages in thread
From: Geert Uytterhoeven @ 2016-08-04 18:48 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, Aug 4, 2016 at 8:44 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> As for the Hall of Fame, I'm not sure what I would think if I made that
> list, as most of my Fixes are for code that I originally wrote. Thus it
> just points out all the bad code I made in the past.

From a different point of view, that just means no one else is smart
enough to find bugs in your code ;-)

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 18:44                                                         ` Steven Rostedt
  2016-08-04 18:48                                                           ` Geert Uytterhoeven
@ 2016-08-04 18:52                                                           ` Laurent Pinchart
  2016-08-04 19:30                                                             ` Steven Rostedt
  1 sibling, 1 reply; 244+ messages in thread
From: Laurent Pinchart @ 2016-08-04 18:52 UTC (permalink / raw)
  To: ksummit-discuss; +Cc: James Bottomley, Trond Myklebust

On Thursday 04 Aug 2016 14:44:33 Steven Rostedt wrote:
> On Thu, 4 Aug 2016 20:16:45 +0200 Geert Uytterhoeven wrote:
> > Would publishing statistics help, like the top 10 of Ackers and Reviewers?
> > 
> > E.g. Hall of Fame of bug fixers, based on the presence of Fixes tags, and
> > Hall of Shame, based on patches CCed to stable lacking Fixes tags?
> 
> I would not have the Hall of Shame, as people just like seeing their
> name in print; they may not care if it is fame or shame.
> 
> As for the Hall of Fame, I'm not sure what I would think if I made that
> list, as most of my Fixes are for code that I originally wrote. Thus it
> just points out all the bad code I made in the past.
> 
> Reminds me of a story I heard a long time ago, about a manager that
> wanted to help encourage his programmers to find more bugs. He gave
> them a $20 bonus for every bug they found and fixed. What he forgot to
> take into consideration was that these were the same programmers that
> were writing the code they would be finding bugs in. I heard that one
> programmer racked up $10,000 before they fixed their mistake ;-)

http://dilbert.com/strip/1995-11-13

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 18:48                                                           ` Geert Uytterhoeven
@ 2016-08-04 19:06                                                             ` Mark Brown
  0 siblings, 0 replies; 244+ messages in thread
From: Mark Brown @ 2016-08-04 19:06 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, Aug 04, 2016 at 08:48:30PM +0200, Geert Uytterhoeven wrote:
> On Thu, Aug 4, 2016 at 8:44 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > As for the Hall of Fame, I'm not sure what I would think if I made that
> > list. Most of my Fixes are for code that I originally wrote, so it
> > just points out all the bad code I made in the past.
> 
> From a different point of view, that just means no one else is smart
> enough to find bugs in your code ;-)

I'm generous; I like helping others!  Every bug I write helps someone
else!

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 18:52                                                           ` Laurent Pinchart
@ 2016-08-04 19:30                                                             ` Steven Rostedt
  0 siblings, 0 replies; 244+ messages in thread
From: Steven Rostedt @ 2016-08-04 19:30 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 04 Aug 2016 21:52:14 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:


> > Reminds me of a story I heard a long time ago, about a manager who
> > wanted to encourage his programmers to find more bugs. He gave them a
> > $20 bonus for every bug they found and fixed. What he forgot to take
> > into account was that these were the same programmers who were writing
> > the code they would be finding bugs in. I heard that one programmer
> > racked up $10,000 before the manager fixed his mistake ;-)
> 
> http://dilbert.com/strip/1995-11-13
> 

Yeah, I remember that strip. I believe it came out shortly after I
heard about this story. Scott Adams is known for basing his cartoons
on real events.

-- Steve

^ permalink raw reply	[flat|nested] 244+ messages in thread

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-03 15:48                                           ` Guenter Roeck
  2016-08-03 16:12                                             ` Dmitry Torokhov
  2016-08-04  8:21                                             ` Greg KH
@ 2016-08-05  4:46                                             ` Jonathan Cameron
  2 siblings, 0 replies; 244+ messages in thread
From: Jonathan Cameron @ 2016-08-05  4:46 UTC (permalink / raw)
  To: Guenter Roeck, Jiri Kosina, Greg KH
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss



On 3 August 2016 21:18:34 GMT+05:30, Guenter Roeck <linux@roeck-us.net> wrote:
>On 08/03/2016 07:45 AM, Jiri Kosina wrote:
>> On Wed, 3 Aug 2016, Greg KH wrote:
>>
>>>> Has anything changed in the process that'd just make patches like
>>>> this one not be merged these days?
>>>
>>> We have Guenter's test-bot that has helped out immensely here with
>>> this.
>>
>> That's very good to know; I admit that I have close to zero idea
>> about how the stable -rcs are being tested.
>>
>
>... and when it doesn't work because I messed it up, we get issues
>such as 3.18 and 4.1 being broken for mips and sparc64 because a
>couple of patches which don't apply to those kernels were tagged with
>an unqualified Cc: stable and applied.
>
>So, if anything, the one problem I see with the current stable
>process is those unqualified stable tags. Maybe those should be
>deprecated; expecting stable maintainers to figure out if a patch
>applies to a given stable branch or not is a bit too much to ask for.
>With stable releases as far back as 3.2 (or 338,020 commits as of
>right now) it is almost guaranteed that a patch tagged with an
>unqualified Cc: stable doesn't apply to all branches.
>
>>> It seems most of these can all come down to "we need more testing",
>>
>> That as well, but the main message I am trying to push here is "we
>> need a little bit more thinking while annotating patches for
>> stable".
>>
>> It might very well be that some variation of what has just been
>> proposed elsewhere in this thread (requiring all stable commits to
>> either contain an explicit 'Fixes' tag, or be explicitly annotated
>> with the kernel version range they should be applied to) would help
>> tremendously on that front.
>>
>>> [1] The people that are doing stable tree testing are doing a great
>>>     job: Guenter, Shuah, kernelci, my build-bot, 0-day, etc.
>>
>
>Maybe I or someone else can give a 10-15 minute presentation about
>the current test efforts, to bring everyone up to date on what is
>being tested and how. Maybe we should make such a presentation a
>regular event at major conferences.

I would certainly find this, or a regular LWN feature or similar, both
interesting and useful.

Jonathan

>
>Guenter

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 244+ messages in thread
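
Guenter's complaint quoted above is about commits that carry only a bare
"Cc: stable@vger.kernel.org", with no hint of which stable branches they
are meant for; the alternatives discussed in the thread are a version
qualifier such as "Cc: <stable@vger.kernel.org> # 3.18.x" or a Fixes:
tag from which the range can be worked out. A rough sketch of the kind
of check this implies, assuming it runs inside a kernel clone; the
function name and the default revision range are made up for
illustration:

#!/usr/bin/env python3
# Flag commits in a range that are tagged for stable without a version
# qualifier ("# 4.4.x" style) and without a Fixes: tag, i.e. the ones
# the stable maintainers are left to guess about.
import re
import subprocess
import sys

STABLE = re.compile(r"^\s*cc:\s*<?stable@(vger\.)?kernel\.org>?(?P<rest>.*)$",
                    re.IGNORECASE | re.MULTILINE)
FIXES = re.compile(r"^\s*fixes:\s+[0-9a-f]{8,}", re.IGNORECASE | re.MULTILINE)

def unqualified_stable_commits(rev_range):
    shas = subprocess.run(["git", "rev-list", rev_range], capture_output=True,
                          text=True, check=True).stdout.split()
    for sha in shas:
        body = subprocess.run(["git", "show", "-s", "--pretty=%B", sha],
                              capture_output=True, text=True,
                              check=True).stdout
        tag = STABLE.search(body)
        if tag and "#" not in tag.group("rest") and not FIXES.search(body):
            yield sha[:12], body.splitlines()[0]

if __name__ == "__main__":
    rng = sys.argv[1] if len(sys.argv) > 1 else "v4.6..v4.7"
    for sha, subject in unqualified_stable_commits(rng):
        print('%s ("%s")' % (sha, subject))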

* Re: [Ksummit-discuss] [CORE TOPIC] stable workflow
  2016-08-04 17:11                                                         ` Steven Rostedt
  2016-08-04 17:53                                                           ` Mark Brown
@ 2016-08-05  8:16                                                           ` Jani Nikula
  1 sibling, 0 replies; 244+ messages in thread
From: Jani Nikula @ 2016-08-05  8:16 UTC (permalink / raw)
  To: Steven Rostedt, Mark Brown
  Cc: James Bottomley, Trond Myklebust, ksummit-discuss

On Thu, 04 Aug 2016, Steven Rostedt <rostedt@goodmis.org> wrote:
> Perhaps we should document what people should strive to achieve, and
> state that if you really don't have the time, you may do without. But
> the clearer the change log and the more information it gives about
> where a bug happened, the easier future maintenance of the code
> becomes.

You can usually tell from the patch and the commit message whether the
author has ever had to dig through git history, desperately looking for
clues why the code was written the way it was, and why it broke. People
burned in the past tend to leave helpful breadcrumbs for their future
selves and others in the commit messages, and keep their patches so
small that identifying why the bisect landed there is trivial.

I am all for improving documentation. Sadly, though, it seems you can
preach about this all you want; it is ultimately the personal banging
of one's head against the wall that leads to enlightenment.

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 244+ messages in thread
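
One concrete form of the breadcrumbs Jani describes is the Fixes: tag
pointing at the commit that introduced the bug, which the rest of the
thread keeps coming back to. A tiny illustrative helper, assuming the
offending commit has already been identified (for example by a bisect);
this is a sketch, not an existing kernel tool:

#!/usr/bin/env python3
# Print the Fixes: line for a given commit in the conventional
# 12-character abbreviated form, ready to paste into a fix's changelog.
import subprocess
import sys

def fixes_line(commitish="HEAD"):
    return subprocess.run(
        ["git", "log", "-1", "--abbrev=12",
         '--format=Fixes: %h ("%s")', commitish],
        capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    print(fixes_line(sys.argv[1] if len(sys.argv) > 1 else "HEAD"))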

end of thread, other threads:[~2016-08-05  8:17 UTC | newest]

Thread overview: 244+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-08 22:35 [Ksummit-discuss] [CORE TOPIC] stable workflow Jiri Kosina
2016-07-08 23:12 ` Guenter Roeck
2016-07-08 23:38   ` Luck, Tony
2016-07-09  8:34     ` Jiri Kosina
2016-07-09  8:58       ` Guenter Roeck
2016-07-09  9:29       ` Johannes Berg
2016-07-09 15:19         ` Jason Cooper
2016-07-09 16:04           ` Guenter Roeck
2016-07-09 19:15         ` Vlastimil Babka
2016-08-01  9:32           ` Johannes Berg
2016-08-01 11:10             ` Vlastimil Babka
2016-07-09 18:39       ` Andrew Lunn
2016-07-10  1:22       ` Rafael J. Wysocki
2016-07-08 23:52   ` Rafael J. Wysocki
2016-07-09  0:06     ` Dmitry Torokhov
2016-07-09  8:37       ` Jiri Kosina
2016-07-09  9:12         ` Mark Brown
2016-07-09  0:10   ` Dmitry Torokhov
2016-07-09  0:37     ` Rafael J. Wysocki
2016-07-09  0:43       ` Dmitry Torokhov
2016-07-09  1:53         ` Guenter Roeck
2016-07-09 10:05         ` James Bottomley
2016-07-09 15:49           ` Trond Myklebust
2016-07-09 22:41             ` Dan Williams
2016-07-10  1:34             ` James Bottomley
2016-07-10  1:43               ` Trond Myklebust
2016-07-10  1:56                 ` James Bottomley
2016-07-10  2:12                   ` Trond Myklebust
2016-07-10  2:15                   ` Rafael J. Wysocki
2016-07-10  3:00                     ` James Bottomley
2016-07-10  3:07                       ` Trond Myklebust
2016-07-26 13:35                       ` David Woodhouse
2016-07-26 13:44                         ` Guenter Roeck
2016-07-26 14:33                           ` David Woodhouse
2016-07-26 15:52                             ` Guenter Roeck
2016-07-28 21:02                             ` Laurent Pinchart
2016-07-29  0:10                               ` Steven Rostedt
2016-07-29  8:59                                 ` Laurent Pinchart
2016-07-29 14:28                                   ` Steven Rostedt
2016-08-01 13:53                                     ` Shuah Khan
2016-08-03  4:47                                       ` Bird, Timothy
2016-07-29 15:12                                   ` Mark Brown
2016-07-29 15:20                                     ` Steven Rostedt
2016-07-29 15:50                                       ` Mark Brown
2016-07-29 16:06                                         ` Steven Rostedt
2016-07-29 16:48                                           ` Mark Brown
2016-07-29 17:02                                             ` Steven Rostedt
2016-07-29 21:07                                               ` Alexandre Belloni
2016-07-29 21:40                                                 ` Steven Rostedt
2016-08-01 13:41                                                   ` Laurent Pinchart
2016-07-30 16:19                                               ` Luis R. Rodriguez
2016-08-01 13:35                                     ` Laurent Pinchart
2016-08-01 14:24                                       ` Mark Brown
2016-08-02 14:12                       ` Jani Nikula
2016-08-02 15:34                         ` Mark Brown
2016-08-02 23:17                           ` Rafael J. Wysocki
2016-08-03  9:36                             ` Jani Nikula
2016-08-03 11:09                               ` Greg KH
2016-08-03 13:05                                 ` Jani Nikula
2016-08-03 13:26                                   ` Greg KH
2016-08-03 13:48                                     ` Jiri Kosina
2016-08-03 13:57                                       ` James Bottomley
2016-08-03 13:59                                         ` Jiri Kosina
2016-08-03 14:04                                           ` James Bottomley
2016-08-03 14:10                                             ` Jiri Kosina
2016-08-04  1:23                                             ` Steven Rostedt
2016-08-04  8:20                                               ` Greg KH
2016-08-04 13:33                                                 ` Steven Rostedt
2016-08-04 15:32                                                   ` Takashi Iwai
2016-08-04 15:40                                                     ` Steven Rostedt
2016-08-04 15:47                                                     ` Jiri Kosina
2016-08-04 16:18                                                       ` Takashi Iwai
2016-08-04 16:26                                                         ` Steven Rostedt
2016-08-04 15:44                                                   ` Mark Brown
2016-08-04 15:56                                                     ` James Bottomley
2016-08-04 17:01                                                       ` Mark Brown
2016-08-04 17:11                                                         ` Steven Rostedt
2016-08-04 17:53                                                           ` Mark Brown
2016-08-05  8:16                                                           ` Jani Nikula
2016-08-04 16:14                                                     ` Steven Rostedt
2016-08-04 17:51                                                       ` Mark Brown
2016-08-04 18:16                                                       ` Geert Uytterhoeven
2016-08-04 18:44                                                         ` Steven Rostedt
2016-08-04 18:48                                                           ` Geert Uytterhoeven
2016-08-04 19:06                                                             ` Mark Brown
2016-08-04 18:52                                                           ` Laurent Pinchart
2016-08-04 19:30                                                             ` Steven Rostedt
2016-08-03 14:45                                         ` Mark Brown
2016-08-04 13:48                                         ` Geert Uytterhoeven
2016-08-03 14:19                                       ` Greg KH
2016-08-03 14:45                                         ` Jiri Kosina
2016-08-03 15:48                                           ` Guenter Roeck
2016-08-03 16:12                                             ` Dmitry Torokhov
2016-08-03 16:44                                               ` Guenter Roeck
2016-08-03 17:20                                                 ` Dmitry Torokhov
2016-08-03 18:21                                                   ` Guenter Roeck
2016-08-03 18:59                                                     ` Dmitry Torokhov
2016-08-03 21:25                                                       ` Jiri Kosina
2016-08-03 21:31                                                         ` Dmitry Torokhov
2016-08-03 21:36                                                           ` Jiri Kosina
2016-08-04  3:06                                                             ` Steven Rostedt
2016-08-03 22:25                                                           ` Guenter Roeck
2016-08-04 14:02                                                         ` Jan Kara
2016-08-03 18:57                                                 ` Jiri Kosina
2016-08-03 22:16                                                   ` Guenter Roeck
2016-08-04  3:14                                               ` Steven Rostedt
2016-08-04  3:32                                                 ` Dmitry Torokhov
2016-08-04  4:05                                                   ` Steven Rostedt
2016-08-04  8:27                                               ` Greg KH
2016-08-04  8:21                                             ` Greg KH
2016-08-05  4:46                                             ` Jonathan Cameron
2016-08-03 14:12                                     ` Jani Nikula
2016-08-03 14:33                                       ` Daniel Vetter
2016-08-03 13:20                                 ` Rafael J. Wysocki
2016-08-03 13:21                                   ` Jiri Kosina
2016-08-04  1:05                                     ` Rafael J. Wysocki
2016-08-03 13:39                                   ` Greg KH
2016-08-03 14:10                                     ` Chris Mason
2016-08-04  0:37                                     ` Rafael J. Wysocki
2016-08-03 15:47                                 ` Guenter Roeck
2016-08-04  8:25                                   ` Greg KH
2016-08-03 11:12                               ` Mark Brown
2016-07-10  2:27                   ` Dan Williams
2016-07-10  6:10                     ` Guenter Roeck
2016-07-11  4:03                     ` [Ksummit-discuss] [CORE TOPIC] kernel unit testing Trond Myklebust
2016-07-11  4:22                       ` James Bottomley
2016-07-11  4:30                         ` Trond Myklebust
2016-07-11  5:23                       ` Guenter Roeck
2016-07-11  8:56                         ` Hannes Reinecke
2016-07-11 16:20                         ` Mark Brown
2016-07-11 19:58                       ` Dan Williams
2016-07-12  9:35                         ` Jan Kara
2016-07-13  4:56                           ` Dan Williams
2016-07-13  9:04                             ` Jan Kara
2016-07-11 20:24                       ` Kevin Hilman
2016-07-11 23:03                         ` Guenter Roeck
2016-07-18  7:44                           ` Christian Borntraeger
2016-07-18  8:44                             ` Hannes Reinecke
2016-07-28 21:09                         ` Laurent Pinchart
2016-07-28 21:33                           ` Bird, Timothy
2016-08-02 18:42                           ` Kevin Hilman
2016-08-02 19:44                             ` Laurent Pinchart
2016-08-02 20:33                               ` Mark Brown
2016-07-13  4:48                       ` Alex Shi
2016-07-13  9:07                         ` Greg KH
2016-07-13 12:37                           ` Alex Shi
2016-07-13 19:59                             ` Olof Johansson
2016-07-13 22:23                               ` Alex Shi
2016-07-14  1:19                             ` Greg KH
2016-07-14  9:48                               ` Alex Shi
2016-07-14  9:54                                 ` Ard Biesheuvel
2016-07-14 14:13                                   ` Alex Shi
2016-07-13 14:34                           ` Mark Brown
2016-07-14  3:17                             ` Greg KH
2016-07-14 10:06                               ` Mark Brown
2016-07-15  0:22                                 ` Greg KH
2016-07-15  0:51                                   ` Guenter Roeck
2016-07-15  1:41                                     ` Greg KH
2016-07-15  2:56                                       ` Guenter Roeck
2016-07-15  4:29                                         ` Greg KH
2016-07-15  5:52                                           ` NeilBrown
2016-07-15  6:14                                             ` Greg KH
2016-07-15  7:02                                               ` Jiri Kosina
2016-07-15 11:42                                                 ` Greg KH
2016-07-15 11:47                                                   ` Jiri Kosina
2016-07-15 12:17                                                   ` Geert Uytterhoeven
2016-07-15  6:19                                             ` Rik van Riel
2016-07-15 12:17                                               ` Mark Brown
2016-07-26 13:45                                                 ` David Woodhouse
2016-07-15  6:32                                             ` James Bottomley
2016-07-15  7:01                                               ` NeilBrown
2016-07-15  7:28                                                 ` James Bottomley
2016-07-15  7:36                                                 ` Dmitry Torokhov
2016-07-15  9:29                                                   ` NeilBrown
2016-07-15 16:08                                                     ` Dmitry Torokhov
2016-07-15 11:05                                               ` Geert Uytterhoeven
2016-07-15 12:35                                                 ` James Bottomley
2016-07-15 12:44                                                   ` Geert Uytterhoeven
2016-07-15 11:24                                             ` Vlastimil Babka
2016-07-28 22:07                                               ` Laurent Pinchart
2016-07-21  7:13                                           ` Daniel Vetter
2016-07-21  7:44                                             ` Josh Triplett
2016-07-15 11:10                                     ` Mark Brown
2016-07-15 11:40                                       ` Greg KH
2016-07-15 12:38                                         ` Mark Brown
2016-07-10  2:07                 ` [Ksummit-discuss] [CORE TOPIC] stable workflow Rafael J. Wysocki
2016-07-10  6:19               ` Olof Johansson
2016-07-10 14:42                 ` Theodore Ts'o
2016-07-11  1:18                   ` Olof Johansson
2016-07-10  7:29           ` Takashi Iwai
2016-07-10 10:20             ` Jiri Kosina
2016-07-10 13:33               ` Guenter Roeck
2016-07-15  9:27                 ` Zefan Li
2016-07-15 13:52                   ` Guenter Roeck
2016-07-26 13:08           ` David Woodhouse
2016-07-10  7:37     ` Takashi Iwai
2016-07-09  0:06 ` Jason Cooper
2016-07-09  0:42   ` James Bottomley
2016-07-09  8:43     ` Jiri Kosina
2016-07-09  9:36       ` Mark Brown
2016-07-09 15:13         ` Guenter Roeck
2016-07-09 19:40           ` Sudip Mukherjee
2016-07-11  8:14             ` Jiri Kosina
2016-07-09 21:21           ` Theodore Ts'o
2016-07-11 15:13             ` Mark Brown
2016-07-11 17:03               ` Theodore Ts'o
2016-07-11 17:07                 ` Justin Forbes
2016-07-11 17:11                 ` Mark Brown
2016-07-11 17:13                   ` Olof Johansson
2016-07-11 17:17                     ` Mark Brown
2016-07-11 17:24                       ` Guenter Roeck
2016-07-11 17:44                         ` Mark Brown
2016-07-13  1:08                   ` Geert Uytterhoeven
2016-07-11 17:15                 ` Dmitry Torokhov
2016-07-11 17:20                   ` Theodore Ts'o
2016-07-11 17:26                     ` Dmitry Torokhov
2016-07-11 17:27                     ` Olof Johansson
2016-07-11 23:13                   ` Guenter Roeck
2016-07-11 17:17                 ` Josh Boyer
2016-07-11 22:42                 ` James Bottomley
2016-07-20 17:50                 ` Stephen Hemminger
2016-07-11  8:18           ` Jiri Kosina
2016-07-11 23:32             ` Guenter Roeck
2016-07-11 14:22           ` Mark Brown
2016-07-10 16:22         ` Vinod Koul
2016-07-10 17:01           ` Theodore Ts'o
2016-07-10 18:28             ` Guenter Roeck
2016-07-10 22:38               ` Rafael J. Wysocki
2016-07-11  8:47                 ` Jiri Kosina
2016-07-27  3:19                 ` Steven Rostedt
2016-07-10 22:39               ` Theodore Ts'o
2016-07-11  1:12                 ` Olof Johansson
2016-07-11  5:00             ` Vinod Koul
2016-07-11  5:13               ` Theodore Ts'o
2016-07-11 10:57                 ` Luis de Bethencourt
2016-07-11 14:18                 ` Vinod Koul
2016-07-11 17:34                   ` Guenter Roeck
2016-07-27  3:12                   ` Steven Rostedt
2016-07-27  4:36                     ` Vinod Koul
2016-07-09 14:57     ` Jason Cooper
2016-07-09 22:51       ` Jonathan Corbet
2016-07-10  7:21 ` Takashi Iwai
2016-07-11  7:44 ` Christian Borntraeger
2016-08-02 13:49 ` Jani Nikula
