Ksummit-Discuss Archive on lore.kernel.org
* [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
@ 2019-06-06 15:48 James Bottomley
  2019-06-06 15:58 ` Greg KH
                   ` (2 more replies)
  0 siblings, 3 replies; 77+ messages in thread
From: James Bottomley @ 2019-06-06 15:48 UTC (permalink / raw)
  To: ksummit-discuss

This is probably best done as two separate topics:

1) Pull network: The pull depth is effectively how many pulls your tree
does before it goes to Linus, so pull depth 0 is sent straight to
Linus, pull depth 1 is sent to a maintainer who sends to Linus and so
on.  We've previously spent time discussing how increasing the pull
depth of the network would reduce the amount of time Linus spends
handling pull requests.  However, in the areas I play, like security,
we seem to be moving in the opposite direction (encouraging people to
go from pull depth 1 to pull depth 0).  If we're deciding to move to a
flat tree model, where everything is depth 0, that's fine, I just think
we could do with making a formal decision on it so we don't waste
energy encouraging greater tree depth.

2) Patch Acceptance Consistency: At the moment, we have very different
acceptance criteria for patches into the various maintainer trees.
Some of these differences are due to deeply held stylistic beliefs, but
some could be streamlined to give a more consistent experience to
beginners, who often do batch fixes that cross trees and come away
more confused than anything else.  I'm not proposing to try to unify
our entire submission process, because that would never fly, but I was
thinking we could get a few sample maintainer trees to give their
criteria and then see if we could get any streamlining.  For instance,
SCSI has a fairly weak "match the current driver" style requirement, a
reasonably strong get-someone-else-to-review-it requirement, and the
usual requirements of a good change log and one patch per substantive
change.  Other subsystems look similar minus the review requirement;
some have very strict stylistic requirements (reverse Christmas tree,
one variable definition per line, etc.).  As I said, the goal wouldn't
be to beat up on the unusual requirements but to see if we could agree
on some global baselines that would at least make submission more
uniform.

James

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 15:48 [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency James Bottomley
@ 2019-06-06 15:58 ` Greg KH
  2019-06-06 16:24   ` James Bottomley
  2019-06-06 16:29   ` James Bottomley
  2019-06-06 16:18 ` Bart Van Assche
  2019-06-14 19:53 ` Bjorn Helgaas
  2 siblings, 2 replies; 77+ messages in thread
From: Greg KH @ 2019-06-06 15:58 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss

On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:
> This is probably best done as two separate topics
> 
> 1) Pull network: The pull depth is effectively how many pulls your tree
> does before it goes to Linus, so pull depth 0 is sent straight to
> Linus, pull depth 1 is sent to a maintainer who sends to Linus and so
> on.  We've previously spent time discussing how increasing the pull
> depth of the network would reduce the amount of time Linus spends
> handling pull requests.  However, in the areas I play, like security,
> we seem to be moving in the opposite direction (encouraging people to
> go from pull depth 1 to pull depth 0).  If we're deciding to move to a
> flat tree model, where everything is depth 0, that's fine, I just think
> we could do with making a formal decision on it so we don't waste
> energy encouraging greater tree depth.

That depth "change" was due to the perceived problems that having a
deeper pull depth was causing.  To sort that out, Linus asked for things
to go directly to him.

It seems like the real issue is a problem with that subsystem's
collection point, and the fact that the depth changed is a sign that
our model works well (i.e. everyone can be routed around).

So, maybe some work on fixing up subsystems that have problems
aggregating things?  Seems like some areas of the kernel do this just
fine, perhaps some workflow for the developers involved needs to be
adjusted?

> 2) Patch Acceptance Consistency: At the moment, we have very different
> acceptance criteria for patches into the various maintainer trees. 
> Some of these differences are due to deeply held stylistic beliefs, but
> some could be more streamlined to give a more consistent experience to
> beginners who end up doing batch fixes which cross trees and end up
> more confused than anything else.  I'm not proposing to try and unify
> our entire submission process, because that would never fly, but I was
> thinking we could get a few sample maintainer trees to give their
> criteria and then see if we could get any streamlining.  For instance,
> SCSI has a fairly weak "match the current driver" style requirement, a
> reasonably strong get someone else to review it requirement and the
> usual good change log and one patch per substantive change requirement.
>  Other subsystems look similar without the review requirement, some
> have very strict stylistic requirements (reverse christmas tree, one
> variable definition per line, etc).  As I said, the goal wouldn't be to
>  beat up on the unusual requirements but to see if we could agree some
> global baselines that would at least make submission more uniform.

I thought Dan's "maintainer document" was going to help resolve things
like this, both by putting in writing just what those rules are and by
making it much easier to point out where things might be going too far
in one direction or another, since the rules could then be compared.

thanks,

greg k-h


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 15:48 [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency James Bottomley
  2019-06-06 15:58 ` Greg KH
@ 2019-06-06 16:18 ` Bart Van Assche
  2019-06-14 19:53 ` Bjorn Helgaas
  2 siblings, 0 replies; 77+ messages in thread
From: Bart Van Assche @ 2019-06-06 16:18 UTC (permalink / raw)
  To: James Bottomley, ksummit-discuss

On 6/6/19 8:48 AM, James Bottomley wrote:
> 2) Patch Acceptance Consistency: At the moment, we have very different
> acceptance criteria for patches into the various maintainer trees. 
> Some of these differences are due to deeply held stylistic beliefs, but
> some could be more streamlined to give a more consistent experience to
> beginners who end up doing batch fixes which cross trees and end up
> more confused than anything else.  I'm not proposing to try and unify
> our entire submission process, because that would never fly, but I was
> thinking we could get a few sample maintainer trees to give their
> criteria and then see if we could get any streamlining.  For instance,
> SCSI has a fairly weak "match the current driver" style requirement, a
> reasonably strong get someone else to review it requirement and the
> usual good change log and one patch per substantive change requirement.
>  Other subsystems look similar without the review requirement, some
> have very strict stylistic requirements (reverse christmas tree, one
> variable definition per line, etc).  As I said, the goal wouldn't be to
>  beat up on the unusual requirements but to see if we could agree some
> global baselines that would at least make submission more uniform.

Thank you James for having brought this up. I agree that more
consistency for patch acceptance criteria would help. This would not
only help beginners but also long-time contributors who contribute to
multiple subsystems.

Bart.


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 15:58 ` Greg KH
@ 2019-06-06 16:24   ` James Bottomley
  2019-06-13 13:59     ` Mauro Carvalho Chehab
  2019-06-06 16:29   ` James Bottomley
  1 sibling, 1 reply; 77+ messages in thread
From: James Bottomley @ 2019-06-06 16:24 UTC (permalink / raw)
  To: Greg KH; +Cc: ksummit-discuss

[splitting issues to shorten replies]
On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:
> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:
> > This is probably best done as two separate topics
> > 
> > 1) Pull network: The pull depth is effectively how many pulls your
> > tree does before it goes to Linus, so pull depth 0 is sent straight
> > to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> > and so on.  We've previously spent time discussing how increasing
> > the pull depth of the network would reduce the amount of time Linus
> > spends handling pull requests.  However, in the areas I play, like
> > security, we seem to be moving in the opposite direction
> > (encouraging people to go from pull depth 1 to pull depth 0).  If
> > we're deciding to move to a flat tree model, where everything is
> > depth 0, that's fine, I just think we could do with making a formal
> > decision on it so we don't waste energy encouraging greater tree
> > depth.
> 
> That depth "change" was due to the perceived problems that having a
> deeper pull depth was causing.  To sort that out, Linus asked for
> things to go directly to him.

This seems to go beyond problems with one tree and is becoming a trend.

> It seems like the real issue is the problem with that subsystem
> collection point, and the fact that the depth changed is a sign that
> our model works well (i.e. everyone can be routed around.)

I'm not really interested in calling out "problem" maintainers, or
indeed in having another "my patch collection method is better than
yours" type discussion.  What I was fishing for is whether the general
impression that greater tree depth is worth striving for is actually
correct, or whether we should all give up now and simply accept that
the current flat tree is the best we can do and, indeed, the model that
works best for Linus.  I get the impression this may be the case, but I
think it would be useful to make sure by having an actual discussion
among the interested parties who will be at the kernel summit.

> So, maybe some work on fixing up subsystems that have problems
> aggregating things?  Seems like some areas of the kernel do this just
> fine, perhaps some workflow for the developers involved needs to be
> adjusted?

As I said, I'm not really that interested in upbraiding the problem
cases; I'm more interested in discussing the generalities and what we
as maintainers should be encouraging.

James


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 15:58 ` Greg KH
  2019-06-06 16:24   ` James Bottomley
@ 2019-06-06 16:29   ` James Bottomley
  2019-06-06 18:26     ` Dan Williams
  1 sibling, 1 reply; 77+ messages in thread
From: James Bottomley @ 2019-06-06 16:29 UTC (permalink / raw)
  To: Greg KH; +Cc: ksummit-discuss

On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:
> > 2) Patch Acceptance Consistency: At the moment, we have very
> > different acceptance criteria for patches into the various
> > maintainer trees.  Some of these differences are due to deeply held
> > stylistic beliefs, but some could be more streamlined to give a
> > more consistent experience to beginners who end up doing batch
> > fixes which cross trees and end up more confused than anything
> > else.  I'm not proposing to try and unify our entire submission
> > process, because that would never fly, but I was
> > thinking we could get a few sample maintainer trees to give their
> > criteria and then see if we could get any streamlining.  For
> > instance, SCSI has a fairly weak "match the current driver" style
> > requirement, a reasonably strong get someone else to review it
> > requirement and the usual good change log and one patch per
> > substantive change requirement.  Other subsystems look similar
> > without the review requirement, some have very strict stylistic
> > requirements (reverse christmas tree, one variable definition per
> > line, etc).  As I said, the goal wouldn't be to  beat up on the
> > unusual requirements but to see if we could agree some global
> > baselines that would at least make submission more uniform.
> 
> I thought Dan's "maintainer document" was going to help resolve
> things like this, both putting in writing just what those rules were,
> as well as help point out where things might be going too far in one
> direction or another in a much easier way, as they could be compared.

Well, um, I can't really comment on a document that doesn't yet exist.
However, I can note that the best kernel process documents describe
what we actually do (mostly because attempting to impose additional
processes by fiat [or by document] really doesn't go over well), and
that's orthogonal to what I'm proposing: that we examine critically
what we currently do and see whether there aren't more areas where we
could strive for greater consistency and uniformity.
Certainly, if Dan's doc exists by KS time it could be a useful input,
but effecting change in this area requires discussion and agreement by
the franchise holders (i.e. the maintainers), which is what I'm
proposing and for which KS is the ideal venue.

James


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 16:29   ` James Bottomley
@ 2019-06-06 18:26     ` Dan Williams
  2019-06-07 20:14       ` Martin K. Petersen
  2019-06-13 13:28       ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 77+ messages in thread
From: Dan Williams @ 2019-06-06 18:26 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit

On Thu, Jun 6, 2019 at 9:30 AM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:
> > > 2) Patch Acceptance Consistency: At the moment, we have very
> > > different acceptance criteria for patches into the various
> > > maintainer trees.  Some of these differences are due to deeply held
> > > stylistic beliefs, but some could be more streamlined to give a
> > > more consistent experience to beginners who end up doing batch
> > > fixes which cross trees and end up more confused than anything
> > > else.  I'm not proposing to try and unify our entire submission
> > > process, because that would never fly, but I was
> > > thinking we could get a few sample maintainer trees to give their
> > > criteria and then see if we could get any streamlining.  For
> > > instance, SCSI has a fairly weak "match the current driver" style
> > > requirement, a reasonably strong get someone else to review it
> > > requirement and the usual good change log and one patch per
> > > substantive change requirement.  Other subsystems look similar
> > > without the review requirement, some have very strict stylistic
> > > requirements (reverse christmas tree, one variable definition per
> > > line, etc).  As I said, the goal wouldn't be to  beat up on the
> > > unusual requirements but to see if we could agree some global
> > > baselines that would at least make submission more uniform.
> >
> > I thought Dan's "maintainer document" was going to help resolve
> > things like this, both putting in writing just what those rules were,
> > as well as help point out where things might be going too far in one
> > direction or another in a much easier way, as they could be compared.
>
> Well, um, I can't really comment on a document that doesn't yet exist.
> However, I can note that the best kernel process documents describe
> what we actually do (mostly because attempting to impose additional
> processes by fiat [or by document] really doesn't go over well) and
> that's orthogonal to what I'm proposing: I'm proposing that we examine
> critically what we currently do and see if there aren't any more areas
> where we could strive for greater consistency and uniformity.
> Certainly, if Dan's doc exists by KS time it could be a useful input,
> but to effect change in this area requires discussion and agreement by
> the franchise holders (i.e. the maintainers) which is what I'm
> proposing and which KS is the ideal venue to get.

The doc, which has failed to materialize, is only meant to be a
lightning rod to prompt conversations like this one: "how and why are
we inconsistent across subsystems?".  The lightning-rod aspect of the
topic partially explains the lack of progress; it needs about the same
level of care / attention as a core-mm patchset and I've kept it on
the backburner until I could dedicate the necessary time.

That said, I do think moving forward with the document would be
necessary pre-work for this conversation. Just the act of putting
subsystem-specific policies in writing, even if they differ, would go
a long way towards making the lives of contributors less fraught with
arbitrary peril. Then at ksummit maintainers can compare subsystem
notes and arm-wrestle over the policies that do or do not need to
remain differentiated.

The conversation, and certainly agreement, is secondary in my mind to
just documenting the local policy for contributors.

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 18:26     ` Dan Williams
@ 2019-06-07 20:14       ` Martin K. Petersen
  2019-06-13 13:49         ` Mauro Carvalho Chehab
  2019-06-13 13:28       ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 77+ messages in thread
From: Martin K. Petersen @ 2019-06-07 20:14 UTC (permalink / raw)
  To: Dan Williams; +Cc: James Bottomley, ksummit


Dan,

> That said, I do think moving forward with the document would be
> necessary pre-work for this conversation. Just the act of putting
> subsystem specific policies in writing even if they differ would go
> along way towards making the lives of contributors less fraught with
> arbitrary peril.

I think part of the problem is that some subsystems are older than
others.

It is much easier to enforce your favorite bike shed/Xmas tree if the
code is very similar and developed by like-minded people. Or written in
this millennium.

Whereas in SCSI I have 25+ years of changes in coding practices,
numerous vendor drivers influenced by styles in various other operating
systems, etc. to deal with.

I try to enforce current best practices on core code because that is a
very limited subset. And one which I can micro-manage. But trying to
enforce similar rules on old crusty stuff which probably has no active
maintainer is fraught with error. Plus things become completely
unreadable if you start mixing new and 25+ year old style inside a
single file.

So I am perfectly OK with having policies. But communicating and
enforcing them on a per-subsystem basis is too coarse a granularity for
the reality I have to deal with. Consequently, I think your MAINTAINERS
tagging idea is a good approach.

-- 
Martin K. Petersen	Oracle Linux Engineering


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 18:26     ` Dan Williams
  2019-06-07 20:14       ` Martin K. Petersen
@ 2019-06-13 13:28       ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-13 13:28 UTC (permalink / raw)
  To: Dan Williams; +Cc: James Bottomley, ksummit

On Thu, 6 Jun 2019 11:26:20 -0700,
Dan Williams <dan.j.williams@intel.com> wrote:

> On Thu, Jun 6, 2019 at 9:30 AM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> >
> > On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:  
> > > > 2) Patch Acceptance Consistency: At the moment, we have very
> > > > different acceptance criteria for patches into the various
> > > > maintainer trees.  Some of these differences are due to deeply held
> > > > stylistic beliefs, but some could be more streamlined to give a
> > > > more consistent experience to beginners who end up doing batch
> > > > fixes which cross trees and end up more confused than anything
> > > > else.  I'm not proposing to try and unify our entire submission
> > > > process, because that would never fly, but I was
> > > > thinking we could get a few sample maintainer trees to give their
> > > > criteria and then see if we could get any streamlining.  For
> > > > instance, SCSI has a fairly weak "match the current driver" style
> > > > requirement, a reasonably strong get someone else to review it
> > > > requirement and the usual good change log and one patch per
> > > > substantive change requirement.  Other subsystems look similar
> > > > without the review requirement, some have very strict stylistic
> > > > requirements (reverse christmas tree, one variable definition per
> > > > line, etc).  As I said, the goal wouldn't be to  beat up on the
> > > > unusual requirements but to see if we could agree some global
> > > > baselines that would at least make submission more uniform.  

Agreed. If all or most subsystems could have a common base of minimal
requirements, that would make it a lot easier for incoming people to
submit patches to different subsystems.

One of the current problems I face is that people who also work
on other related subsystems want to have another maintainer's model
applied to the media subsystem, or sometimes submit patches that
use other coding styles, which don't seem to fit too well with the
way we work.

For example, lately I have started receiving a lot of patches following
this comment style:

	/* foo
	 * bar
	 */

Instead of:

	/*
	 * foo
	 * bar
	 */

which seems to be OK for some subsystems, but violates the style we
adopt all over the media subsystem.

> > >
> > > I thought Dan's "maintainer document" was going to help resolve
> > > things like this, both putting in writing just what those rules were,
> > > as well as help point out where things might be going too far in one
> > > direction or another in a much easier way, as they could be compared.  
> >
> > Well, um, I can't really comment on a document that doesn't yet exist.
> > However, I can note that the best kernel process documents describe
> > what we actually do (mostly because attempting to impose additional
> > processes by fiat [or by document] really doesn't go over well) and
> > that's orthogonal to what I'm proposing: I'm proposing that we examine
> > critically what we currently do and see if there aren't any more areas
> > where we could strive for greater consistency and uniformity.
> > Certainly, if Dan's doc exists by KS time it could be a useful input,
> > but to effect change in this area requires discussion and agreement by
> > the franchise holders (i.e. the maintainers) which is what I'm
> > proposing and which KS is the ideal venue to get.  
> 
> The doc which has failed to materialize is only meant to be a
> lightning rod to prompt conversations like this of "how and why are we
> inconsistent across subsystems?". The lightning rod aspect of the
> topic partially explains the lack of progress, it needs about the same
> level of care / attention as a core-mm patchset and I've kept it on
> the backburner until I could dedicate the necessary time.
> 
> That said, I do think moving forward with the document would be
> necessary pre-work for this conversation.

Yeah, I think it is a good starting point.

> Just the act of putting
> subsystem specific policies in writing even if they differ would go
> along way towards making the lives of contributors less fraught with
> arbitrary peril. Then at ksummit maintainers can compare subsystem
> notes and arm wrestle for the policies that do or do not need to
> remain differentiated.

I have a pending follow-up patch on top of Dan's RFC
patch, describing how we work on media:

	https://patchwork.linuxtv.org/patch/52999/

As I wrote it a while ago, some things may have changed, but
the basic ideas about how we work are documented there.

> 
> The conversation and certainly agreement is secondary in my mind to
> just documenting the local policy for contributors

Thanks,
Mauro


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-07 20:14       ` Martin K. Petersen
@ 2019-06-13 13:49         ` Mauro Carvalho Chehab
  2019-06-13 14:35           ` James Bottomley
  2019-06-13 14:53           ` Martin K. Petersen
  0 siblings, 2 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-13 13:49 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: James Bottomley, ksummit

On Fri, 07 Jun 2019 16:14:46 -0400,
"Martin K. Petersen" <martin.petersen@oracle.com> wrote:

> Dan,
> 
> > That said, I do think moving forward with the document would be
> > necessary pre-work for this conversation. Just the act of putting
> > subsystem specific policies in writing even if they differ would go
> > along way towards making the lives of contributors less fraught with
> > arbitrary peril.  
> 
> I think part of the problem is that some subsystems are older than
> others.
> 
> It is much easier to enforce your favorite bike shed/Xmas tree if the
> code is very similar and developed by like-minded people. Or written in
> this millennium.
> 
> Whereas in SCSI I have 25+ years of changes in coding practices,
> numerous vendor drivers influenced by styles in various other operating
> systems, etc. to deal with.
> 
> I try to enforce current best practices on core code because that is a
> very limited subset. And one which I can micro-manage. But trying to
> enforce similar rules on old crusty stuff which probably has no active
> maintainer is fraught with error. Plus things become completely
> unreadable if you start mixing new and 25+ year old style inside a
> single file.
> 
> So I am perfectly OK with having policies. But communicating and
> enforcing them on a per-subsystem basis is too coarse a granularity for
> the reality I have to deal with. Consequently, I think your MAINTAINERS
> tagging idea is a good approach.
> 

That's true, but it doesn't mean that those old-style subsystems
can't be improved.

In the case of media we have 20+ years of changes, so the inherited
code had a myriad of different coding styles.

Yet, we do enforce the current coding practices on all new code
we receive.

Also, at least in the core (and in some drivers that people use as
reference for new code), when we receive patches that make a large
amount of changes to the code, and we have some spare time, we run
checkpatch.pl on the entire affected file and fix the style as
much as possible[1].

Yeah, that's painful, but as we have followed these practices for quite
some time, the code keeps improving and people now tend to use the
current style practices in their first-time submissions.

[1] We usually ignore 80-column warnings on legacy code though,
as a proper fix would mean rewriting the code to split functions
into smaller ones, which could cause regressions.

Thanks,
Mauro


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 16:24   ` James Bottomley
@ 2019-06-13 13:59     ` Mauro Carvalho Chehab
  2019-06-14 10:12       ` Laurent Pinchart
  0 siblings, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-13 13:59 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss

On Thu, 06 Jun 2019 19:24:35 +0300,
James Bottomley <James.Bottomley@HansenPartnership.com> wrote:

> [splitting issues to shorten replies]
> On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:
> > On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:  
> > > This is probably best done as two separate topics
> > > 
> > > 1) Pull network: The pull depth is effectively how many pulls your
> > > tree does before it goes to Linus, so pull depth 0 is sent straight
> > > to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> > > and so on.  We've previously spent time discussing how increasing
> > > the pull depth of the network would reduce the amount of time Linus
> > > spends handling pull requests.  However, in the areas I play, like
> > > security, we seem to be moving in the opposite direction
> > > (encouraging people to go from pull depth 1 to pull depth 0).  If
> > > we're deciding to move to a flat tree model, where everything is
> > > depth 0, that's fine, I just think we could do with making a formal
> > > decision on it so we don't waste energy encouraging greater tree
> > > depth.  
> > 
> > That depth "change" was due to the perceived problems that having a
> > deeper pull depth was causing.  To sort that out, Linus asked for
> > things to go directly to him.  
> 
> This seems to go beyond problems with one tree and is becoming a trend.
> 
> > It seems like the real issue is the problem with that subsystem
> > collection point, and the fact that the depth changed is a sign that
> > our model works well (i.e. everyone can be routed around.)  
> 
> I'm not really interested in calling out "problem" maintainers, or
> indeed having another "my patch collection method is better than yours"
> type discussion.  What I was fishing for is whether the general
> impression that greater tree depth is worth striving for is actually
> correct, or we should all give up now and simply accept that the
> current flat tree is the best we can do, and, indeed is the model that
> works best for Linus.  I get the impression this may be the case, but I
> think making sure by having an actual discussion among the interested
> parties who will be at the kernel summit, would be useful.

On media, we came from a "depth 1" model and are moving toward a
"depth 2" model:

patch author -> media/driver maintainer -> subsystem maintainer -> Linus

In other words, I'm trying hard not to apply patches directly. Still,
due to the huge number of patches we receive on media [1], I do tend to
apply some patches directly (especially trivial ones), in order to avoid
having a patch waiting a long time to be applied.

This model seems to be working fine for us, as it gives at least two
levels of review to each patch.

[1] Over the last 2 years, we're receiving about 400 to 1000 patches/month:
    https://linuxtv.org/patchwork_stats.php

> > So, maybe some work on fixing up subsystems that have problems
> > aggregating things?  Seems like some areas of the kernel do this just
> > fine, perhaps some workflow for the developers involved needs to be
> > adjusted?  
> 
> As I said, I'm not really that interested in upbraiding the problem
> cases, I'm more interested in discussing the generalities, and what we
> as maintainers should be encouraging.
> 
> James
> 

Thanks,
Mauro


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 13:49         ` Mauro Carvalho Chehab
@ 2019-06-13 14:35           ` James Bottomley
  2019-06-13 15:03             ` Martin K. Petersen
  2019-06-13 17:27             ` Mauro Carvalho Chehab
  2019-06-13 14:53           ` Martin K. Petersen
  1 sibling, 2 replies; 77+ messages in thread
From: James Bottomley @ 2019-06-13 14:35 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Martin K. Petersen; +Cc: ksummit

On Thu, 2019-06-13 at 10:49 -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 07 Jun 2019 16:14:46 -0400
> "Martin K. Petersen" <martin.petersen@oracle.com> escreveu:
> 
> > Dan,
> > 
> > > That said, I do think moving forward with the document would be
> > > necessary pre-work for this conversation. Just the act of putting
> > > subsystem specific policies in writing even if they differ would
> > > go a long way towards making the lives of contributors less
> > > fraught with arbitrary peril.  
> > 
> > I think part of the problem is that some subsystems are older than
> > others.
> > 
> > It is much easier to enforce your favorite bike shed/Xmas tree if
> > the code is very similar and developed by like-minded people. Or
> > written in this millennium.
> > 
> > Whereas in SCSI I have 25+ years of changes in coding practices,
> > numerous vendor drivers influenced by styles in various other
> > operating systems, etc. to deal with.
> > 
> > I try to enforce current best practices on core code because that
> > is a very limited subset. And one which I can micro-manage. But
> > trying to enforce similar rules on old crusty stuff which probably
> > has no active maintainer is fraught with error. Plus things become
> > completely unreadable if you start mixing new and 25+ year old
> > style inside a single file.
> > 
> > So I am perfectly OK with having policies. But communicating and
> > enforcing them on a per-subsystem basis is too coarse a granularity
> > for the reality I have to deal with. Consequently, I think your
> > MAINTAINERS tagging idea is a good approach.
> > 
> 
> That's true, but it doesn't mean that those old style subsystems
> can't be improved.

It depends: every patch you do to an old driver comes with a risk of
breakage.  What we've found is even apparently sane patches cause
breakage which isn't discovered until months later when someone with
the hardware actually tests.  So the general rule is:

   1. No whitespace/style changes to old drivers without a fix as well
   2. We might take changes in comments only (spelling updates or licence
      stuff) and other stuff that provably doesn't alter the binary.
   3. Fixes which are tested on the actual hardware are welcome.
   4. Any "obvious" bug fixes which aren't hardware tested really have to
      be obvious and well inspected (these are the ones that usually cause
      the problems)
   5. Systemwide sweeps we do and usually just pray it was right

However, if someone comes along with the actual hardware to test and
wants to take over maintaining it, they pretty much get carte blanche to
do whatever they want (see NCR 5380), so the above only applies to
unmaintained old drivers.

> In the case of media we have 20+ years of changes. So, the received
> code had a myriad of different coding styles.
> 
> Yet, we do enforce the current coding practices on all new code
> we receive.

We don't.  We enforce style in the existing driver for readability and
consistency unless you're the maintainer of the driver and wish to
change it.  Then we'd insist on changing it to kernel style.

> Also, at least at the core (and on some drivers that people use as
> references for new code), when we receive patches that make a large
> number of changes to the code, and we have some spare time, we run
> checkpatch.pl on the entire affected file, and we fix the style as
> much as possible[1].

We have done this, but only if the Maintainer wants to do it.  For
drivers with no maintainer, we definitely don't.

James

> Yeah, that's painful, but as we have followed such practices for quite
> some time, the code has been improving, and people now tend to make
> first-time submissions using the current style practices.
> 
> [1] We usually ignore 80 column warnings on legacy code though,
> as a proper fix would mean rewriting the code to split functions
> into smaller ones, which could cause regressions.
> 
> Thanks,
> Mauro
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 13:49         ` Mauro Carvalho Chehab
  2019-06-13 14:35           ` James Bottomley
@ 2019-06-13 14:53           ` Martin K. Petersen
  2019-06-13 17:09             ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 77+ messages in thread
From: Martin K. Petersen @ 2019-06-13 14:53 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit


Mauro,

> Yet, we do enforce the current coding practices on all new code
> we receive.

The problem in SCSI is that standalone new code is rare. Almost every
patch changes existing code.

Mixing new code indented with tabs into old code that uses, for
instance, two spaces results in code that is very hard to follow. That's
why the preference is to stick to the existing style of a given file.

Also, attempts to use code formatters to produce sensible results have
failed. Many of the drivers include tables or carefully formatted
comments or data structures. So without a human involved, automatic code
formatting produces complete junk.

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 14:35           ` James Bottomley
@ 2019-06-13 15:03             ` Martin K. Petersen
  2019-06-13 15:21               ` Bart Van Assche
                                 ` (4 more replies)
  2019-06-13 17:27             ` Mauro Carvalho Chehab
  1 sibling, 5 replies; 77+ messages in thread
From: Martin K. Petersen @ 2019-06-13 15:03 UTC (permalink / raw)
  To: James Bottomley; +Cc: Mauro Carvalho Chehab, ksummit


James,

> It depends: every patch you do to an old driver comes with a risk of
> breakage.  What we've found is even apparently sane patches cause
> breakage which isn't discovered until months later when someone with
> the hardware actually tests.

My pet peeve is with the constant stream of seemingly innocuous
helper-interface-of-the-week changes. Such as "Use kzfoobar() instead of
kfoobar() + memset()". And then a year later somebody decides kzfoobar()
had a subtle adverse side-effect and now we all need to switch to
kpzfoobar().

I appreciate that some of these helpers may have merit in terms of
facilitating static code checkers, etc. But other than that, I really
fail to see the value of this constant churn.

The devil is always in the details. It's almost inevitably these obvious
five-liners that cause regressions down the line.

So why do we keep doing this?

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:03             ` Martin K. Petersen
@ 2019-06-13 15:21               ` Bart Van Assche
  2019-06-13 15:27                 ` James Bottomley
  2019-06-13 15:35                 ` Guenter Roeck
  2019-06-13 19:28               ` James Bottomley
                                 ` (3 subsequent siblings)
  4 siblings, 2 replies; 77+ messages in thread
From: Bart Van Assche @ 2019-06-13 15:21 UTC (permalink / raw)
  To: Martin K. Petersen, James Bottomley; +Cc: Mauro Carvalho Chehab, ksummit

On 6/13/19 8:03 AM, Martin K. Petersen wrote:
> 
> James,
> 
>> It depends: every patch you do to an old driver comes with a risk of
>> breakage.  What we've found is even apparently sane patches cause
>> breakage which isn't discovered until months later when someone with
>> the hardware actually tests.
> 
> My pet peeve is with the constant stream of seemingly innocuous
> helper-interface-of-the-week changes. Such as "Use kzfoobar() instead of
> kfoobar() + memset()". And then a year later somebody decides kzfoobar()
> had a subtle adverse side-effect and now we all need to switch to
> kpzfoobar().
> 
> I appreciate that some of these helpers may have merit in terms of
> facilitating static code checkers, etc. But other than that, I really
> fail to see the value of this constant churn.
> 
> The devil is always in the details. It's almost inevitably these obvious
> five-liners that cause regressions down the line.
> 
> So why do we keep doing this?

How about discussing at the kernel summit whether or not patches that 
have not been tested on actual hardware should be ignored?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:21               ` Bart Van Assche
@ 2019-06-13 15:27                 ` James Bottomley
  2019-06-13 15:35                 ` Guenter Roeck
  1 sibling, 0 replies; 77+ messages in thread
From: James Bottomley @ 2019-06-13 15:27 UTC (permalink / raw)
  To: Bart Van Assche, Martin K. Petersen; +Cc: Mauro Carvalho Chehab, ksummit

On Thu, 2019-06-13 at 08:21 -0700, Bart Van Assche wrote:
> On 6/13/19 8:03 AM, Martin K. Petersen wrote:
> > 
> > James,
> > 
> > > It depends: every patch you do to an old driver comes with a risk
> > > of breakage.  What we've found is even apparently sane patches
> > > cause breakage which isn't discovered until months later when
> > > someone with the hardware actually tests.
> > 
> > My pet peeve is with the constant stream of seemingly innocuous
> > helper-interface-of-the-week changes. Such as "Use kzfoobar()
> > instead of kfoobar() + memset()". And then a year later somebody
> > decides kzfoobar() had a subtle adverse side-effect and now we all
> > need to switch to kpzfoobar().
> > 
> > I appreciate that some of these helpers may have merit in terms of
> > facilitating static code checkers, etc. But other than that, I
> > really fail to see the value of this constant churn.
> > 
> > The devil is always in the details. It's almost inevitably these
> > obvious five-liners that cause regressions down the line.
> > 
> > So why do we keep doing this?
> 
> How about discussing at the kernel summit whether or not patches
> that have not been tested on actual hardware should be ignored?

That might be a bit harsh, but perhaps "must be tested on hardware, or
provably not change the binary, or be part of a treewide update" would
be a reasonable rule to shoot for.  However, this would
disproportionately affect the security and coccinelle people because
they do one patch at a time stuff for which they definitely don't have
hardware, so for maintained drivers they could submit to the maintainer
for update and testing, but we'd be disallowing their changes for
unmaintained drivers.

James

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:21               ` Bart Van Assche
  2019-06-13 15:27                 ` James Bottomley
@ 2019-06-13 15:35                 ` Guenter Roeck
  2019-06-13 15:39                   ` Bart Van Assche
                                     ` (2 more replies)
  1 sibling, 3 replies; 77+ messages in thread
From: Guenter Roeck @ 2019-06-13 15:35 UTC (permalink / raw)
  To: Bart Van Assche, Martin K. Petersen, James Bottomley
  Cc: Mauro Carvalho Chehab, ksummit

On 6/13/19 8:21 AM, Bart Van Assche wrote:
> On 6/13/19 8:03 AM, Martin K. Petersen wrote:
>>
>> James,
>>
>>> It depends: every patch you do to an old driver comes with a risk of
>>> breakage.  What we've found is even apparently sane patches cause
>>> breakage which isn't discovered until months later when someone with
>>> the hardware actually tests.
>>
>> My pet peeve is with the constant stream of seemingly innocuous
>> helper-interface-of-the-week changes. Such as "Use kzfoobar() instead of
>> kfoobar() + memset()". And then a year later somebody decides kzfoobar()
>> had a subtle adverse side-effect and now we all need to switch to
>> kpzfoobar().
>>
>> I appreciate that some of these helpers may have merit in terms of
>> facilitating static code checkers, etc. But other than that, I really
>> fail to see the value of this constant churn.
>>
>> The devil is always in the details. It's almost inevitably these obvious
>> five-liners that cause regressions down the line.
>>
>> So why do we keep doing this?
> 
> How about discussing at the kernel summit whether or not patches that have not been tested on actual hardware should be ignored?
> 

A while ago I spent some time writing unit tests for various i2c based
hwmon drivers (https://github.com/groeck/module-tests). With those,
I found a substantial number of overflow conditions and other problems
in various drivers.

Similarly, my qemu boot tests have identified several problems over time,
by the nature of qemu often on hardware which is difficult, if not almost
impossible, to find nowadays (ohci-sm501 is a current example).

Are you saying that such problems should not be fixed unless they can be
verified on real hardware?

Guenter

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:35                 ` Guenter Roeck
@ 2019-06-13 15:39                   ` Bart Van Assche
  2019-06-14 11:53                     ` Leon Romanovsky
  2019-06-13 15:39                   ` James Bottomley
  2019-06-13 15:42                   ` Takashi Iwai
  2 siblings, 1 reply; 77+ messages in thread
From: Bart Van Assche @ 2019-06-13 15:39 UTC (permalink / raw)
  To: Guenter Roeck, Martin K. Petersen, James Bottomley
  Cc: Mauro Carvalho Chehab, ksummit

On 6/13/19 8:35 AM, Guenter Roeck wrote:
> On 6/13/19 8:21 AM, Bart Van Assche wrote:
>> On 6/13/19 8:03 AM, Martin K. Petersen wrote:
>>>
>>> James,
>>>
>>>> It depends: every patch you do to an old driver comes with a risk of
>>>> breakage.  What we've found is even apparently sane patches cause
>>>> breakage which isn't discovered until months later when someone with
>>>> the hardware actually tests.
>>>
>>> My pet peeve is with the constant stream of seemingly innocuous
>>> helper-interface-of-the-week changes. Such as "Use kzfoobar() instead of
>>> kfoobar() + memset()". And then a year later somebody decides kzfoobar()
>>> had a subtle adverse side-effect and now we all need to switch to
>>> kpzfoobar().
>>>
>>> I appreciate that some of these helpers may have merit in terms of
>>> facilitating static code checkers, etc. But other than that, I really
>>> fail to see the value of this constant churn.
>>>
>>> The devil is always in the details. It's almost inevitably these obvious
>>> five-liners that cause regressions down the line.
>>>
>>> So why do we keep doing this?
>>
>> How about discussing at the kernel summit whether or not patches that 
>> have not been tested on actual hardware should be ignored?
>>
> 
> A while ago I spent some time writing unit tests for various i2c based
> hwmon drivers (https://github.com/groeck/module-tests). With those,
> I found a substantial number of overflow conditions and other problems
> in various drivers.
> 
> Similarly, my qemu boot tests have identified several problems over time,
> by nature of qemu often on hardware which is difficult if not almost
> impossible to find nowadays (ohci-sm501 is a current example).
> 
> Are you saying that such problems should not be fixed unless they can be
> verified on real hardware?

How about leaving out "on actual hardware" from my e-mail? What you 
described sounds like valuable work to me. I think testing with qemu is 
sufficient.

Bart.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:35                 ` Guenter Roeck
  2019-06-13 15:39                   ` Bart Van Assche
@ 2019-06-13 15:39                   ` James Bottomley
  2019-06-13 15:42                   ` Takashi Iwai
  2 siblings, 0 replies; 77+ messages in thread
From: James Bottomley @ 2019-06-13 15:39 UTC (permalink / raw)
  To: Guenter Roeck, Bart Van Assche, Martin K. Petersen
  Cc: Mauro Carvalho Chehab, ksummit

On Thu, 2019-06-13 at 08:35 -0700, Guenter Roeck wrote:
> On 6/13/19 8:21 AM, Bart Van Assche wrote:
> > On 6/13/19 8:03 AM, Martin K. Petersen wrote:
> > > 
> > > James,
> > > 
> > > > It depends: every patch you do to an old driver comes with a
> > > > risk of breakage.  What we've found is even apparently sane
> > > > patches cause breakage which isn't discovered until months
> > > > later when someone with the hardware actually tests.
> > > 
> > > My pet peeve is with the constant stream of seemingly innocuous
> > > helper-interface-of-the-week changes. Such as "Use kzfoobar()
> > > instead of kfoobar() + memset()". And then a year later somebody
> > > decides kzfoobar() had a subtle adverse side-effect and now we
> > > all need to switch to kpzfoobar().
> > > 
> > > I appreciate that some of these helpers may have merit in terms
> > > of facilitating static code checkers, etc. But other than that, I
> > > really fail to see the value of this constant churn.
> > > 
> > > The devil is always in the details. It's almost inevitably these
> > > obvious five-liners that cause regressions down the line.
> > > 
> > > So why do we keep doing this?
> > 
> > How about discussing at the kernel summit whether or not patches
> > that have not been tested on actual hardware should be ignored?
> > 
> 
> A while ago I spent some time writing unit tests for various i2c
> based hwmon drivers (https://github.com/groeck/module-tests). With
> those, I found a substantial number of overflow conditions and other
> problems in various drivers.
> 
> Similarly, my qemu boot tests have identified several problems over
> time, by nature of qemu often on hardware which is difficult if not
> almost impossible to find nowadays (ohci-sm501 is a current example).
> 
> Are you saying that such problems should not be fixed unless they can
> be verified on real hardware?

I think virtual hardware testing is acceptable: most of the regressions
we get in old drivers from untested updates aren't anywhere near the
hardware interface handling code.  They're usually just an unintended
consequence of a well-meaning update to the generic part of the driver.
People who don't have the hardware and don't really understand the
driver rarely touch the core hardware handling pieces ... and if they
do, we usually do demand a hardware test.

James

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:35                 ` Guenter Roeck
  2019-06-13 15:39                   ` Bart Van Assche
  2019-06-13 15:39                   ` James Bottomley
@ 2019-06-13 15:42                   ` Takashi Iwai
  2 siblings, 0 replies; 77+ messages in thread
From: Takashi Iwai @ 2019-06-13 15:42 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: James Bottomley, Mauro Carvalho Chehab, ksummit, Bart Van Assche

On Thu, 13 Jun 2019 17:35:11 +0200,
Guenter Roeck wrote:
> 
> On 6/13/19 8:21 AM, Bart Van Assche wrote:
> > On 6/13/19 8:03 AM, Martin K. Petersen wrote:
> >>
> >> James,
> >>
> >>> It depends: every patch you do to an old driver comes with a risk of
> >>> breakage.  What we've found is even apparently sane patches cause
> >>> breakage which isn't discovered until months later when someone with
> >>> the hardware actually tests.
> >>
> >> My pet peeve is with the constant stream of seemingly innocuous
> >> helper-interface-of-the-week changes. Such as "Use kzfoobar() instead of
> >> kfoobar() + memset()". And then a year later somebody decides kzfoobar()
> >> had a subtle adverse side-effect and now we all need to switch to
> >> kpzfoobar().
> >>
> >> I appreciate that some of these helpers may have merit in terms of
> >> facilitating static code checkers, etc. But other than that, I really
> >> fail to see the value of this constant churn.
> >>
> >> The devil is always in the details. It's almost inevitably these obvious
> >> five-liners that cause regressions down the line.
> >>
> >> So why do we keep doing this?
> >
> > How about discussing at the kernel summit whether or not patches that have not been tested on actual hardware should be ignored?
> >
> 
> A while ago I spent some time writing unit tests for various i2c based
> hwmon drivers (https://github.com/groeck/module-tests). With those,
> I found a substantial number of overflow conditions and other problems
> in various drivers.
> 
> Similarly, my qemu boot tests have identified several problems over time,
> by nature of qemu often on hardware which is difficult if not almost
> impossible to find nowadays (ohci-sm501 is a current example).
> 
> Are you saying that such problems should not be fixed unless they can be
> verified on real hardware?

I think the issue here is about cleanup patches, which are supposedly
safe but often aren't.


thanks,

Takashi

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 14:53           ` Martin K. Petersen
@ 2019-06-13 17:09             ` Mauro Carvalho Chehab
  2019-06-14  3:03               ` Martin K. Petersen
  0 siblings, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-13 17:09 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: James Bottomley, ksummit

Em Thu, 13 Jun 2019 10:53:33 -0400
"Martin K. Petersen" <martin.petersen@oracle.com> escreveu:

> Mauro,
> 
> > Yet, we do enforce the current coding practices to all new code
> > we receive.  
> 
> The problem in SCSI is that standalone new code is rare. Almost every
> patch changes existing code.
> 
> Mixing code with tabs for indentation with old code that uses for
> instance two spaces results in code that is very hard to follow. That's
> why the preference is to stick to the existing style of a given file.

If you have code there with indentation that is not a multiple of 8,
that makes things harder.

Yet, if the file has a consistent indentation[1], you could try
something like:

	$ expand -t 8 drivers/scsi/gdth_proc.c | \
	  unexpand --first-only -t 4 | \
	  sed -E 's,\s+$,,' > a && mv a drivers/scsi/gdth_proc.c

[1] and if it doesn't, then indentation is already broken there.
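To illustrate what that pipeline does, here is a toy run on a
hypothetical 4-space-indented snippet (demo.c and its contents are
made up for the example; this assumes GNU coreutils expand/unexpand):

```shell
# Toy demonstration of the expand/unexpand reindent pipeline on a
# snippet that uses 4-space indentation (file names are illustrative):
printf 'void f(void)\n{\n    if (g())\n        h();\n}\n' > demo.c

expand -t 8 demo.c | unexpand --first-only -t 4 > demo.tabs.c

# each 4-space indentation level is now one leading tab;
# two lines of the snippet were indented, so this prints 2:
grep -c "$(printf '\t')" demo.tabs.c
```

The --first-only flag matters: it converts only leading whitespace, so
spacing inside the code and comments is left alone.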

> 
> Also, attempts to use code formatters to produce sensible results have
> failed. Many of the drivers include tables or carefully formatted
> comments or data structures. So without a human involved, automatic code
> formatting produces complete junk.

Yeah, human review is important to avoid that kind of issue.

Automatic indentation tools sometimes produce very crappy things.

-

Out of curiosity, I tried using astyle with a "basic" set of options:

	$ astyle --indent=force-tab=8 --convert-tabs --style=linux --lineend=linux --pad-oper --pad-comma --pad-header --align-pointer=name --align-reference=name --break-one-line-headers $(find drivers/scsi -type f -name '*.[ch]')

The result was not perfect but, at least on a quick look, it seemed
a lot better than the original in most places.

Yet, a human will take some time to check for bad things there,
due to the size of the diff:

	$ git diff|wc -l
	507277

Checking whether something broke could probably be automated,
by checking whether the produced .o files (or the corresponding .a
files) are identical to the files produced before the changes.

-

Being even more curious, I took a file that uses 3 spaces for indentation:

	$ ./scripts/checkpatch.pl -f drivers/scsi/gdth_proc.c --max-line-length=999
	total: 484 errors, 544 warnings, 586 lines checked

Using astyle:

	$ astyle --indent=force-tab=8 --convert-tabs --style=linux --lineend=linux --pad-oper --pad-comma --pad-header --align-pointer=name --align-reference=name --break-one-line-headers drivers/scsi/gdth_proc.c

A visual inspection looked pretty decent to my eyes. The
automatic tool also reported a lot fewer issues:


	$ ./scripts/checkpatch.pl -f drivers/scsi/gdth_proc.c --max-line-length=999
	total: 6 errors, 18 warnings, 590 lines checked

Running checkpatch in fix mode a few times makes it look even
better:

	$ ./scripts/checkpatch.pl --strict --fix-inplace -f drivers/scsi/gdth_proc.c
	total: 4 errors, 17 warnings, 590 lines checked

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 14:35           ` James Bottomley
  2019-06-13 15:03             ` Martin K. Petersen
@ 2019-06-13 17:27             ` Mauro Carvalho Chehab
  2019-06-13 18:41               ` James Bottomley
  1 sibling, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-13 17:27 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit

Em Thu, 13 Jun 2019 07:35:07 -0700
James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:

> On Thu, 2019-06-13 at 10:49 -0300, Mauro Carvalho Chehab wrote:
> > Em Fri, 07 Jun 2019 16:14:46 -0400
> > "Martin K. Petersen" <martin.petersen@oracle.com> escreveu:
> >   
> > > Dan,
> > >   
> > > > That said, I do think moving forward with the document would be
> > > > necessary pre-work for this conversation. Just the act of putting
> > > > subsystem specific policies in writing even if they differ would
> > > > go a long way towards making the lives of contributors less
> > > > fraught with arbitrary peril.    
> > > 
> > > I think part of the problem is that some subsystems are older than
> > > others.
> > > 
> > > It is much easier to enforce your favorite bike shed/Xmas tree if
> > > the code is very similar and developed by like-minded people. Or
> > > written in this millennium.
> > > 
> > > Whereas in SCSI I have 25+ years of changes in coding practices,
> > > numerous vendor drivers influenced by styles in various other
> > > operating systems, etc. to deal with.
> > > 
> > > I try to enforce current best practices on core code because that
> > > is a very limited subset. And one which I can micro-manage. But
> > > trying to enforce similar rules on old crusty stuff which probably
> > > has no active maintainer is fraught with error. Plus things become
> > > completely unreadable if you start mixing new and 25+ year old
> > > style inside a single file.
> > > 
> > > So I am perfectly OK with having policies. But communicating and
> > > enforcing them on a per-subsystem basis is too coarse a granularity
> > > for the reality I have to deal with. Consequently, I think your
> > > MAINTAINERS tagging idea is a good approach.
> > >   
> > 
> > That's true, but it doesn't mean that those old style subsystems
> > can't be improved.  
> 
> It depends: every patch you do to an old driver comes with a risk of
> breakage.  What we've found is even apparently sane patches cause
> breakage which isn't discovered until months later when someone with
> the hardware actually tests. 

True, but if you diff the .o file produced before the patch against
the one produced after it (and/or the associated .a file), you should
be able to discover whether the change caused a regression or not.

So, if the patch is a "pure" coding-style fix, you should be able to
avoid regressions.

> So the general rule is:
> 
>    1. No whitespace/style changes to old drivers without a fix as well

Yeah, we don't allow that either (except on staging, and in special
cases).

When I started as media maintainer, I did some whitespace/tabs/indent
cleaning, as it is easier to maintain a clean house.

>    2. We might take changes in comments only (spelling updates or licence
>       stuff) and other stuff that provably doesn't alter the binary.
>    3. Fixes which are tested on the actual hardware are welcome.
>    4. Any "obvious" bug fixes which aren't hardware tested really have to
>       be obvious and well inspected (these are the ones that usually cause
>       the problems)
>    5. Systemwide sweeps we do and usually just pray it was right
> 
> However, if someone comes along with the actual hardware to test and
> wants to take over maintaining it, they pretty much get carte blanche to
> do whatever they want (see NCR 5380), so the above only applies to
> unmaintained old drivers.
> 
> > In the case of media we have 20+ years of changes. So, the received
> > code had a myriad of different coding styles.
> > 
> > Yet, we do enforce the current coding practices on all new code
> > we receive.  
> 
> We don't.  We enforce style in the existing driver for readability and
> consistency unless you're the maintainer of the driver and wish to
> change it.  Then we'd insist on changing it to kernel style.
> 
> > Also, at least at the core (and on some drivers that people use as
> > references for new code), when we receive patches that make a large
> > number of changes to the code, and we have some spare time, we run
> > checkpatch.pl on the entire affected file, and we fix the style as
> > much as possible[1].
> 
> We have done this, but only if the Maintainer wants to do it.  For
> drivers with no maintainer, we definitely don't.
> 
> James
> 
> > Yeah, that's painful, but as we have followed such practices for
> > quite some time, the code has been improving, and people now tend
> > to make first-time submissions using the current style practices.
> > 
> > [1] We usually ignore 80 column warnings on legacy code though,
> > as a proper fix would mean rewriting the code to split functions
> > into smaller ones, which could cause regressions.
> > 
> > Thanks,
> > Mauro
> > _______________________________________________
> > Ksummit-discuss mailing list
> > Ksummit-discuss@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
> >   
> 



Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 17:27             ` Mauro Carvalho Chehab
@ 2019-06-13 18:41               ` James Bottomley
  2019-06-13 19:11                 ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 77+ messages in thread
From: James Bottomley @ 2019-06-13 18:41 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: ksummit

On Thu, 2019-06-13 at 14:27 -0300, Mauro Carvalho Chehab wrote:
> Em Thu, 13 Jun 2019 07:35:07 -0700
> James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:
> > It depends: every patch you do to an old driver comes with a risk
> > of breakage.  What we've found is even apparently sane patches
> > cause breakage which isn't discovered until months later when
> > someone with the hardware actually tests. 
> 
> True, but, if you do the diff between the .o file produced before
> the patch and after it (and/or the associated .a file), you should be
> able to discover if the change caused a regression or not.
> 
> So, if the patch is a "pure" coding style fix, you could be able to
> avoid regressions.

Right, that's why I said "provably doesn't alter the binary" in the
rules lower down.  However, the number of people who actually include
a same-binary-before-and-after section in their changelog is tiny ...

So perhaps we should document somewhere how to demonstrate that the
binary remains the same across a patch (or even provide a tool for it),
because that is an enormous help to subsystem maintainers.

> > So the general rule is:
> > 
> >    1. No whitespace/style changes to old drivers without a fix as
> > well
> 
> Yeah, we don't allow that either (except on staging - and on special
> cases).
> 
> When I started as media maintainer, I did some whitespace/tabs/indent
> cleaning, as it is easier to maintain a clean house.

We did this for some drivers, but usually only when changing
maintainers.  Even if a maintainer has slightly esoteric style ideas,
it's usually better to keep them happy than to be pedantic about
enforcing kernel style.

James

> >    2. We might take changes in comments only (spelling updates or
> > licence
> >       stuff) and other stuff that provably doesn't alter the
> > binary.
> >    3. Fixes which are tested on the actual hardware are welcome.
> >    4. Any "obvious" bug fixes which aren't hardware tested really
> > have to
> >       be obvious and well inspected (these are the ones that
> > usually cause
> >       the problems)
> >    5. Systemwide sweeps we do and usually just pray it was right
> > 
> > However, if someone comes along with the actual hardware to test
> > and wants to take over maintaining it, they pretty much get carte
> > blance to do whatever they want (see NCR 5380), so the above only
> > applies to unmaintained old drivers.
> > 
> > > In the case of media we have 20+ years of changes. So, the
> > > received code had a myriad of different coding styles.
> > > 
> > > Yet, we do enforce the current coding practices to all new code
> > > we receive.  
> > 
> > We don't.  We enforce style in the existing driver for readability
> > and consistency unless you're the maintainer of the driver and wish
> > to change it.  Then we'd insist on changing it to kernel style.
> > 
> > > Also, at least at the core (and on some drivers that people use
> > > as  reference for new codes), when we receive patches that do a
> > > large  amount of changes at the code, and we have some spare
> > > time, we run checkpatch.pl at the entire affected file, and we
> > > fix the style as much as possible[1].  
> > 
> > We have done this, but only if the Maintainer wants to do it.  For
> > drivers with no maintainer, we definitely don't.
> > 
> > James
> > 
> > > Yeah, that's painful, but as we have followed such practices for
> > > quite some time, the code has improved and people now tend to make
> > > first-time submissions using the current style practices.
> > > 
> > > [1] We usually ignore 80 column warnings on legacy code though,
> > > as a proper fix would mean rewriting the code to split functions
> > > into smaller ones, which could cause regressions.
> > > 
> > > Thanks,
> > > Mauro
> > > _______________________________________________
> > > Ksummit-discuss mailing list
> > > Ksummit-discuss@lists.linuxfoundation.org
> > > https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discus
> > > s
> > >   
> 
> 
> 
> Thanks,
> Mauro
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 18:41               ` James Bottomley
@ 2019-06-13 19:11                 ` Mauro Carvalho Chehab
  2019-06-13 19:20                   ` Joe Perches
  2019-06-13 19:57                   ` Martin K. Petersen
  0 siblings, 2 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-13 19:11 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit

Em Thu, 13 Jun 2019 11:41:32 -0700
James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:

> On Thu, 2019-06-13 at 14:27 -0300, Mauro Carvalho Chehab wrote:
> > Em Thu, 13 Jun 2019 07:35:07 -0700
> > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:  
> > > It depends: every patch you do to an old driver comes with a risk
> > > of breakage.  What we've found is even apparently sane patches
> > > cause breakage which isn't discovered until months later when
> > > someone with the hardware actually tests.   
> > 
> > True, but, if you do the diff between the .o file produced before
> > the patch and after it (and/or the associated .a file), you should be
> > able to discover if the change caused a regression or not.
> > 
> > So, if the patch is a "pure" coding style fix, you could be able to
> > avoid regressions.  
> 
> Right, that's why I said "doesn't change the binary in the rules lower
> down".  However, the number of people who actually come with a same
> binary before and after section in their changelog is tiny ...
> 
> So perhaps we should document somewhere how (or even provide a tool) to
> demonstrate the binary remains the same across the patch, because it is
> an enormous help to subsystem maintainers.

Yeah, a tool or a CI bot test that identifies binary-identical changes
would be really helpful, as it could help maintainers a lot in deciding
whether or not to take cleanup patches (including that coccinelle
stuff) without requiring hardware testing.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 19:11                 ` Mauro Carvalho Chehab
@ 2019-06-13 19:20                   ` Joe Perches
  2019-06-14  2:21                     ` Mauro Carvalho Chehab
  2019-06-13 19:57                   ` Martin K. Petersen
  1 sibling, 1 reply; 77+ messages in thread
From: Joe Perches @ 2019-06-13 19:20 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, James Bottomley; +Cc: ksummit

On Thu, 2019-06-13 at 16:11 -0300, Mauro Carvalho Chehab wrote:
> Yeah, a tool or a CI bot test that identifies binary-identical changes
> would be really helpful, as it could help maintainers a lot in deciding
> whether or not to take cleanup patches (including that coccinelle
> stuff) without requiring hardware testing.

An unfortunate aspect of GCC, for any such tool,
is that binary-identical object files are not
guaranteed even from identical input source files.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:03             ` Martin K. Petersen
  2019-06-13 15:21               ` Bart Van Assche
@ 2019-06-13 19:28               ` James Bottomley
  2019-06-14  9:08               ` Dan Carpenter
                                 ` (2 subsequent siblings)
  4 siblings, 0 replies; 77+ messages in thread
From: James Bottomley @ 2019-06-13 19:28 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: Mauro Carvalho Chehab, ksummit

On Thu, 2019-06-13 at 11:03 -0400, Martin K. Petersen wrote:
> James,
> 
> > It depends: every patch you do to an old driver comes with a risk
> > of breakage.  What we've found is even apparently sane patches
> > cause breakage which isn't discovered until months later when
> > someone with the hardware actually tests.
> 
> My pet peeve is with the constant stream of seemingly innocuous
> helper-interface-of-the-week changes. Such as "Use kzfoobar() instead
> of kfoobar() + memset()". And then a year later somebody decides
> kzfoobar() had a subtle adverse side-effect and now we all need to
> switch to kpzfoobar().
> 
> I appreciate that some of these helpers may have merit in terms of
> facilitating static code checkers, etc. But other than that, I really
> fail to see the value of this constant churn.
> 
> The devil is always in the details. It's almost inevitably these
> obvious five-liners that cause regressions down the line.
> 
> So why do we keep doing this?

Heh, this reminds me a lot of the module section annotations (you
remember, things like __devinit).  We built a huge tooling system which
allowed hundreds of patches to "fix" the problems for obscure Kconfig
values.  Simply eliminating the lot was a huge relief and no-one at all
has missed it ever ... it was basically a huge make-work patch
generation system.

Perhaps the time has come to require justification for replacing
kmalloc/memset with kzalloc ... I don't think the pattern is hugely
bad, but if it doesn't improve the code in any way, why do it?

So what I'd like to see is instead of simply "Coccinelle found this so
I'm sending a patch" require something like "Coccinelle found this and
it occurs in the fast path where using architecturally cleared memory
results in a 5% speed up of the code".
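(For context, the kmalloc+memset conversions in question are typically
generated from a Coccinelle semantic patch along these lines -- a sketch
of the well-known rule, not any specific script that was posted:)

```
@@
expression ptr, size, flags;
@@
- ptr = kmalloc(size, flags);
- memset(ptr, 0, size);
+ ptr = kzalloc(size, flags);
```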

James

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 19:11                 ` Mauro Carvalho Chehab
  2019-06-13 19:20                   ` Joe Perches
@ 2019-06-13 19:57                   ` Martin K. Petersen
  1 sibling, 0 replies; 77+ messages in thread
From: Martin K. Petersen @ 2019-06-13 19:57 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit


Mauro,

> Yeah, a tool or a CI bot test that identifies binary-identical
> changes would be really helpful, as it could help maintainers a lot
> in deciding whether or not to take cleanup patches (including that
> coccinelle stuff) without requiring hardware testing.

That would be extremely helpful. I would love to make that a requirement
for SCSI acceptance.

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 19:20                   ` Joe Perches
@ 2019-06-14  2:21                     ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14  2:21 UTC (permalink / raw)
  To: Joe Perches; +Cc: James Bottomley, ksummit

Em Thu, 13 Jun 2019 12:20:00 -0700
Joe Perches <joe@perches.com> escreveu:

> On Thu, 2019-06-13 at 16:11 -0300, Mauro Carvalho Chehab wrote:
> > Yeah, a tool or a CI bot test that identifies binary-identical changes
> > would be really helpful, as it could help maintainers a lot in deciding
> > whether or not to take cleanup patches (including that coccinelle
> > stuff) without requiring hardware testing.
> 
> An unfortunate aspect of GCC, for any such tool,
> is that binary-identical object files are not
> guaranteed even from identical input source files.

A quick look at:

	https://stackoverflow.com/questions/46801881/why-does-gcc-produce-different-compiled-binaries-for-programs-that-use-different

seems to indicate that using objdump -s would produce identical
results. I did a quick test here with gcc 9. It didn't work.

However, using objdump -S worked:

$ rm ./drivers/scsi/isci/port.o; make M=drivers/scsi/; objdump -S ./drivers/scsi/isci/port.o >./drivers/scsi/isci/port-v1.a; \
  rm ./drivers/scsi/isci/port.o; make M=drivers/scsi/; objdump -S ./drivers/scsi/isci/port.o >./drivers/scsi/isci/port-v2.a; \
  diff -u ./drivers/scsi/isci/port-v1.a ./drivers/scsi/isci/port-v2.a

  CC      drivers/scsi//isci/port.o
  AR      drivers/scsi//isci/built-in.a
  AR      drivers/scsi//built-in.a
  Building modules, stage 2.
  MODPOST 5 modules
  CC      drivers/scsi//isci/port.o
  AR      drivers/scsi//isci/built-in.a
  AR      drivers/scsi//built-in.a
  Building modules, stage 2.
  MODPOST 5 modules

<no diffs>

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 17:09             ` Mauro Carvalho Chehab
@ 2019-06-14  3:03               ` Martin K. Petersen
  2019-06-14  3:35                 ` Mauro Carvalho Chehab
  2019-06-14  7:31                 ` Joe Perches
  0 siblings, 2 replies; 77+ messages in thread
From: Martin K. Petersen @ 2019-06-14  3:03 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit


Mauro,

> Using astyle:
>
> 	$ astyle --indent=force-tab=8 --convert-tabs --style=linux
> --lineend=linux --pad-oper --pad-comma --pad-header
> --align-pointer=name --align-reference=name --break-one-line-headers
> drivers/scsi/gdth_proc.c
>
> A visual inspection on it looked pretty decent to my eyes. The
> automatic tool also reported a lot less issues:

Not questioning that things could be cleaned up and that tools can
help. However, many of these drivers will be removed when we get the
chance so it doesn't seem worthwhile to invest in reformatting and
updating them. Especially if cleaning things up will facilitate *more*
drive-by patches. I'd much rather let stale be stale to make it clear
that a given driver will be dropped from the tree unless somebody shows
a real interest.

While this may come across as a desire to discourage patches completely,
that's not actually the case. But I want patches from somebody who takes
ownership and who is willing to validate things. Using real hardware,
QEMU, output comparison, or interpretive dancing. Doesn't matter.

I am a bit concerned that our emphasis on teaching process to attract
new talent has encouraged a culture of non-committal drive-by cleanups.
Whereas I think there is much to be learned from the process of buying
an obsolete SCSI controller on eBay, beating a driver into shape,
getting the changes merged, and committing to maintaining things going
forward.

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14  3:03               ` Martin K. Petersen
@ 2019-06-14  3:35                 ` Mauro Carvalho Chehab
  2019-06-14  7:31                 ` Joe Perches
  1 sibling, 0 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14  3:35 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: James Bottomley, ksummit

Em Thu, 13 Jun 2019 23:03:15 -0400
"Martin K. Petersen" <martin.petersen@oracle.com> escreveu:

> Mauro,
> 
> > Using astyle:
> >
> > 	$ astyle --indent=force-tab=8 --convert-tabs --style=linux
> > --lineend=linux --pad-oper --pad-comma --pad-header
> > --align-pointer=name --align-reference=name --break-one-line-headers
> > drivers/scsi/gdth_proc.c
> >
> > A visual inspection on it looked pretty decent to my eyes. The
> > automatic tool also reported a lot less issues:  
> 
> Not questioning that things could be cleaned up and that tools can
> help. However, many of these drivers will be removed when we get the
> chance so it doesn't seem worthwhile to invest in reformatting and
> updating them. Especially if cleaning things up will facilitate *more*
> drive-by patches. I'd much rather let stale be stale to make it clear
> that a given driver will be dropped from the tree unless somebody shows
> a real interest.
> 
> While this may come across as a desire to discourage patches completely,
> that's not actually the case. But I want patches from somebody who takes
> ownership and who is willing to validate things. Using real hardware,
> QEMU, output comparison, or interpretive dancing. Doesn't matter.
>
> I am a bit concerned that our emphasis on teaching process to attract
> new talent has encouraged a culture of non-committal drive-by cleanups.
> Whereas I think there is much to be learned from the process of buying
> an obsolete SCSI controller on eBay, beating a driver into shape,
> getting the changes merged, and committing to maintaining things going
> forward.

Understood. Yeah, investing time in obsolete drivers is probably not
worthwhile and may attract people to do just cleanup patches. We have
a few such drivers in media that are there only because we don't have
any strong reason to send them to /dev/null. So, basically, they're
waiting for an excuse to be trashed.

We actually do that: from time to time, as we need to change some things
at the core, we decide to move obsolete drivers to drivers/staging, and,
if nobody takes any action to claim their maintainership, after a couple
of kernel releases we send them to the trash can.

The comments I made are more related to things that the subsystem
maintainers want to keep and may have serious coding style issues.

For that kind of code, IMO, it makes sense to use a
tool like astyle, plus checkpatch --fix-inplace, in order to bring the
code closer to the current coding style, especially at the
subsystem's core, as this may help keep the kernel coding style
coherent among different subsystems, which seems to be the point that
James raised with the "Patch Acceptance Consistency" topic.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14  3:03               ` Martin K. Petersen
  2019-06-14  3:35                 ` Mauro Carvalho Chehab
@ 2019-06-14  7:31                 ` Joe Perches
  1 sibling, 0 replies; 77+ messages in thread
From: Joe Perches @ 2019-06-14  7:31 UTC (permalink / raw)
  To: Martin K. Petersen, Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit

On Thu, 2019-06-13 at 23:03 -0400, Martin K. Petersen wrote:
> But I want patches from somebody who takes
> ownership and who is willing to validate things. Using real hardware,
> QEMU, output comparison, or interpretive dancing. Doesn't matter.

One of the strengths of linux is nominal support for old hardware.
But simultaneously that older hardware is also rarely tested.

Perhaps a mechanism to move these old, generally unsupported
by an actual maintainer, and rarely tested drivers out of the
mainline drivers directory into a separate obsolete directory
would help isolate the whitespace and trivial api changes.

> I think there is much to be learned from the process of buying
> an obsolete SCSI controller on eBay, beating a driver into shape,
> getting the changes merged, and committing to maintaining things going
> forward.

There is also a decided lack of general interest in that too.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:03             ` Martin K. Petersen
  2019-06-13 15:21               ` Bart Van Assche
  2019-06-13 19:28               ` James Bottomley
@ 2019-06-14  9:08               ` Dan Carpenter
  2019-06-14  9:43               ` Dan Carpenter
  2019-06-14 13:27               ` Dan Carpenter
  4 siblings, 0 replies; 77+ messages in thread
From: Dan Carpenter @ 2019-06-14  9:08 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: James Bottomley, Mauro Carvalho Chehab, ksummit

On Thu, Jun 13, 2019 at 11:03:53AM -0400, Martin K. Petersen wrote:
> 
> James,
> 
> > It depends: every patch you do to an old driver comes with a risk of
> > breakage.  What we've found is even apparently sane patches cause
> > breakage which isn't discovered until months later when someone with
> > the hardware actually tests.
> 
> My pet peeve is with the constant stream of seemingly innocuous
> helper-interface-of-the-week changes. Such as "Use kzfoobar() instead of
> kfoobar() + memset()". And then a year later somebody decides kzfoobar()
> had a subtle adverse side-effect and now we all need to switch to
> kpzfoobar().
> 
> I appreciate that some of these helpers may have merit in terms of
> facilitating static code checkers, etc. But other than that, I really
> fail to see the value of this constant churn.
> 
> The devil is always in the details. It's almost inevitably these obvious
> five-liners that cause regressions down the line.
> 
> So why do we keep doing this?
> 

You haven't provided any specifics so it's hard to discuss this...  The
only example I can think of is the memdup_user() conversions where
the function uses "goto out;" error handling.

out:
	kfree(ptr);
	return ret;

The problem is that in the original code if kmalloc() fails and we goto
out then kfree(NULL) is a no-op but in the new code if memdup_user()
fails "ptr" is an error pointer and kfree(ERR_PTR(-EFAULT)) is an Oops.

As a reviewer, any time I see a "goto out;" that raises a red flag for
me because this style of error handling tends to be more buggy.  Choose
a better label name and don't free stuff that wasn't allocated.

The good news is that static analysis will find these bugs, so it's
probably been a couple of years since one of these has made it to a
released kernel.

regards,
dan carpenter

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:03             ` Martin K. Petersen
                                 ` (2 preceding siblings ...)
  2019-06-14  9:08               ` Dan Carpenter
@ 2019-06-14  9:43               ` Dan Carpenter
  2019-06-14 13:27               ` Dan Carpenter
  4 siblings, 0 replies; 77+ messages in thread
From: Dan Carpenter @ 2019-06-14  9:43 UTC (permalink / raw)
  To: Martin K. Petersen; +Cc: James Bottomley, Mauro Carvalho Chehab, ksummit

[-- Attachment #1: Type: text/plain, Size: 551 bytes --]

We review tons and tons of mechanical patches in staging and one thing
which helps is my rename_rev.pl script.  I should put it in a git
archive somewhere.  Anyway, attached.  It tries to strip away mechanical
changes so that I can review the interesting parts.

cat patch.txt | rename_rev.pl <options>

-a # detect renames automatically
-e 's/foo\((.*?),.*\)/bar($1)/'  # do a perl substitution
-ea 's/[\{\}]//g' # remove every curly brace in the diff
-r NULL # change "foo == NULL" to "!foo"
-nc # ignore changes to comments

regards,
dan carpenter


[-- Attachment #2: rename_rev.pl --]
[-- Type: text/x-perl, Size: 10703 bytes --]

#!/usr/bin/perl

# This is a tool to help review variable rename patches. The goal is
# to strip out the automatic sed renames and the white space changes
# and leave the interesting code changes.
#
# Example 1: A patch renames openInfo to open_info:
#     cat diff | rename_review.pl openInfo open_info
#
# Example 2: A patch swaps the first two arguments to some_func():
#     cat diff | rename_review.pl \
#                    -e 's/some_func\((.*?),(.*?),/some_func\($2, $1,/'
#
# Example 3: A patch removes the xkcd_ prefix from some but not all the
# variables.  Instead of trying to figure out which variables were renamed
# just remove the prefix from them all:
#     cat diff | rename_review.pl -ea 's/xkcd_//g'
#
# Example 4: A patch renames 20 CamelCase variables.  To review this let's
# just ignore all case changes and all '_' chars.
#     cat diff | rename_review -ea 'tr/[A-Z]/[a-z]/' -ea 's/_//g'
#
# The other arguments are:
# -nc removes comments
# -ns removes '\' chars if they are at the end of the line.

use strict;
use File::Temp qw/ :mktemp  /;

sub usage() {
    print "usage: cat diff | $0 old new old new old new...\n";
    print "   or: cat diff | $0 -e 's/old/new/g'\n";
    print " -a : auto\n";
    print " -e : execute on old lines\n";
    print " -ea: execute on all lines\n";
    print " -nc: no comments\n";
    print " -nb: no unneeded braces\n";
    print " -ns: no slashes at the end of a line\n";
    print " -pull: for function pull.  deletes context.\n";
    print " -r <recipe>: NULL, bool\n";
    exit(1);
}
my @subs;
my @strict_subs;
my @cmds;
my $strip_comments;
my $strip_braces;
my $strip_slashes;
my $pull_context;
my $auto;

sub filter($) {
    my $line = shift();
    my $old = 0;
    if ($line =~ /^-/) {
        $old = 1;
    }
    # remove the first char
    $line =~ s/^[ +-]//;
    if ($strip_comments) {
        $line =~ s/\/\*.*?\*\///g;
        $line =~ s/\/\/.*//;
    }
    foreach my $cmd (@cmds) {
        if ($old || $cmd->[0] =~ /^-ea$/) {
            eval "\$line =~ $cmd->[1]";
        }
    }
    foreach my $sub (@subs) {
        if ($old) {
            $line =~ s/$sub->[0]/$sub->[1]/g;
        }
    }
    foreach my $sub (@strict_subs) {
        if ($old) {
            $line =~ s/\b$sub->[0]\b/$sub->[1]/g;
        }
    }

    # remove the newline so we can move curly braces here if we want.
    $line =~ s/\n//;
    return $line;
}

while (my $param1 = shift()) {
    if ($param1 =~ /^-a$/) {
        $auto = 1;
        next;
    }
    if ($param1 =~ /^-nc$/) {
        $strip_comments = 1;
        next;
    }
    if ($param1 =~ /^-nb$/) {
        $strip_braces = 1;
        next;
    }
    if ($param1 =~ /^-ns$/) {
        $strip_slashes = 1;
        next;
    }
    if ($param1 =~ /^-pull$/) {
        $pull_context = 1;
        next;
    }
    my $param2 = shift();
    if ($param2 =~ /^$/) {
        usage();
    }
    if ($param1 =~ /^-e(a|)$/) {
        push @cmds, [$param1, $param2];
        next;
    }
    if ($param1 =~ /^-r$/) {
        if ($param2 =~ /bool/) {
            push @cmds, ["-e", "s/== true//"];
            push @cmds, ["-e", "s/true ==//"];
            push @cmds, ["-e", "s/([a-zA-Z\-\>\._]+) == false/!\$1/"];
            next;
        } elsif ($param2 =~ /NULL/) {
            push @cmds, ["-e", "s/ != NULL//"];
            push @cmds, ["-e", "s/([a-zA-Z\-\>\._0-9]+) == NULL/!\$1/"];
            next;
        } elsif ($param2 =~ /BIT/) {
            push @cmds, ["-e", 's/1[uU]* *<< *(\d+)/BIT($1)/'];
            push @cmds, ["-e", 's/\(1 *<< *(\w+)\)/BIT($1)/'];
            push @cmds, ["-e", 's/\(BIT\((.*?)\)\)/BIT($1)/'];
            next;
        }
        usage();
    }

    push @subs, [$param1, $param2];
}

my ($oldfh, $oldfile) = mkstemp("/tmp/oldXXXXX");
my ($newfh, $newfile) = mkstemp("/tmp/newXXXXX");

my @input = <STDIN>;

# auto works on the observation that the - line comes before the + line when we
# rename variables.  Take the first - line.  Find the first + line.  Find the
# one word difference.  Test that the old word never occurs in the new text.
if ($auto) {
    my %c_keywords = (  auto => 1,
                        break => 1,
                        case => 1,
                        char => 1,
                        const => 1,
                        continue => 1,
                        default => 1,
                        do => 1,
                        double => 1,
                        else => 1,
                        enum => 1,
                        extern => 1,
                        float => 1,
                        for => 1,
                        goto => 1,
                        if => 1,
                        int => 1,
                        long => 1,
                        register => 1,
                        return => 1,
                        short => 1,
                        signed => 1,
                        sizeof => 1,
                        static => 1,
                        struct => 1,
                        switch => 1,
                        typedef => 1,
                        union => 1,
                        unsigned => 1,
                        void => 1,
                        volatile => 1,
                        while => 1);
    my %old_words;
    my %new_words;
    my %added_cmds;
    my @new_subs;

    my $inside = 0;
    foreach my $line (@input) {
        if ($line =~ /^(---|\+\+\+)/) {
            next;
        }

        if ($line =~ /^@/) {
            $inside = 1;
        }
        if ($inside && !(($line =~ /^[- @+]/) || ($line =~ /^$/))) {
            $inside = 0;
        }
        if (!$inside) {
            next;
        }

        if ($line =~ /^-/) {
            $line =~ s/^-//;
            my @words = split(/\W+/, $line);
            foreach my $word (@words) {
                $old_words{$word} = 1;
            }
        } elsif ($line =~ /^\+/) {
            $line =~ s/^\+//;
            my @words = split(/\W+/, $line);
            foreach my $word (@words) {
                $new_words{$word} = 1;
            }
        }
    }

    my $old_line;
    my $new_line;
    $inside = 0;
    foreach my $line (@input) {
        if ($line =~ /^(---|\+\+\+)/) {
            next;
        }

        if ($line =~ /^@/) {
            $inside = 1;
        }
        if ($inside && !(($line =~ /^[- @+]/) || ($line =~ /^$/))) {
            $inside = 0;
        }
        if (!$inside) {
            next;
        }


        if ($line =~ /^-/ && !$old_line) {
            $line =~ s/^-//;
            $old_line = $line;
            next;
        } elsif ($old_line && $line =~ /^\+/) {
            $line =~ s/^\+//;
            $new_line = $line;
        } else {
            next;
        }

        my @old_words = split(/\W+/, $old_line);
        my @new_words = split(/\W+/, $new_line);
        my @new_cmds;

        my $i;
        my $diff_count = 0;
        for ($i = 0; ; $i++) {
            if (!defined($old_words[$i]) && !defined($new_words[$i])) {
                last;
            }
            if (!defined($old_words[$i]) || !defined($new_words[$i])) {
                $diff_count = 1000;
                last;
            }
            if ($old_words[$i] eq $new_words[$i]) {
                next;
            }
            if ($c_keywords{$old_words[$i]}) {
                $diff_count = 1000;
                last;
            }
            if ($new_words{$old_words[$i]}) {
                $diff_count++;
            }
            push @new_cmds, [$old_words[$i], $new_words[$i]];
        }
        if ($diff_count <= 2) {
            foreach my $sub (@new_cmds) {
                if ($added_cmds{$sub->[0] . $sub->[1]}) {
                    next;
                }
                $added_cmds{$sub->[0] . $sub->[1]} = 1;
                push @new_subs, [$sub->[0] , $sub->[1]];
            }
        }

        $old_line = 0;
    }

    if (@new_subs) {
        print "RENAMES:\n";
        foreach my $sub (@new_subs) {
            print "$sub->[0] => $sub->[1]\n";
            push @strict_subs, [$sub->[0] , $sub->[1]];
        }
        print "---\n";
    }
}

my $output;

#recreate an old file and a new file
my $inside = 0;
foreach (@input) {
    if ($pull_context && !($_ =~ /^[+-@]/)) {
        next;
    }

    if ($_ =~ /^(---|\+\+\+)/) {
        next;
    }

    if ($_ =~ /^@/) {
        $inside = 1;
    }
    if ($inside && !(($_ =~ /^[- @+]/) || ($_ =~ /^$/))) {
        $inside = 0;
    }
    if (!$inside) {
        next;
    }

    $output = filter($_);

    if ($strip_braces && $_ =~ /^(\+|-)\W+{/) {
        $output =~ s/^[\t ]+(.*)/ $1/;
    } else {
        $output = "\n" . $output;
    }

    if ($_ =~ /^-/) {
        print $oldfh $output;
        next;
    }
    if ($_ =~ /^\+/) {
        print $newfh $output;
        next;
    }
    print $oldfh $output;
    print $newfh $output;

}
print $oldfh "\n";
print $newfh "\n";
# git diff puts a -- and version at the end of the diff.  put the -- into the
# new file as well so it's ignored
if ($output =~ /\n-/) {
    print $newfh "-\n";
}

my $hunk;
my $old_txt;
my $new_txt;

open diff, "diff -uw $oldfile $newfile |";
while (<diff>) {
    if ($_ =~ /^(---|\+\+\+)/) {
        next;
    }

    if ($_ =~ /^@/) {

        if ($strip_comments) {
            $old_txt =~ s/\/\*.*?\*\///g;
            $new_txt =~ s/\/\*.*?\*\///g;
        }
        if ($strip_braces) {
            $old_txt =~ s/{([^;{]*?);}/$1;/g;
            $new_txt =~ s/{([^;{]*?);}/$1;/g;
            # this is a hack because i don't know how to replace nested
            # unneeded curly braces.
            $old_txt =~ s/{([^;{]*?);}/$1;/g;
            $new_txt =~ s/{([^;{]*?);}/$1;/g;
        }

        if ($old_txt ne $new_txt) {
            print $hunk;
            print $_;
        }
        $hunk = "";
        $old_txt = "";
        $new_txt = "";
        next;
    }

    $hunk = $hunk . $_;

    if ($strip_slashes) {
        s/\\$//;
    }

    if ($_ =~ /^-/) {
        s/-//;
        s/[ \t\n]//g;
        $old_txt = $old_txt . $_;
        next;
    }
    if ($_ =~ /^\+/) {
        s/\+//;
        s/[ \t\n]//g;
        $new_txt = $new_txt . $_;
        next;
    }
    if ($_ =~ /^ /) {
        s/^ //;
        s/[ \t\n]//g;
        $old_txt = $old_txt . $_;
        $new_txt = $new_txt . $_;
    }
}

if ($old_txt ne $new_txt) {
    if ($strip_comments) {
        $old_txt =~ s/\/\*.*?\*\///g;
        $new_txt =~ s/\/\*.*?\*\///g;
    }
    if ($strip_braces) {
        # Repeat until stable so nested redundant braces are removed.
        1 while $old_txt =~ s/{([^;{]*?);}/$1;/g;
        1 while $new_txt =~ s/{([^;{]*?);}/$1;/g;
    }

    print $hunk;
}

unlink($oldfile);
unlink($newfile);

print "\ndone.\n";
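
For readers following along, the normalization the script applies before comparing the old and new hunk text (whitespace removed line by line, then comments and redundant braces stripped from the concatenated text) can be sketched in Python; the helper name and sample inputs are mine, not part of the script:

```python
import re

def normalize(text, strip_comments=True, strip_braces=True):
    # Mirror the script's pipeline: whitespace is dropped as lines are
    # accumulated, then /* */ comments and redundant single-statement
    # braces are stripped from the concatenated hunk text.
    text = re.sub(r"\s+", "", text)
    if strip_comments:
        text = re.sub(r"/\*.*?\*/", "", text)
    if strip_braces:
        prev = None
        while prev != text:  # repeat until stable to handle nesting
            prev = text
            text = re.sub(r"\{([^;{]*?);\}", r"\1;", text)
    return text

# A whitespace/comment/brace-only change compares equal...
assert normalize("if (x) { return 0; } /* note */") == normalize("if (x)\n\treturn 0;")
# ...while a substantive change does not.
assert normalize("a = 1;") != normalize("a = 2;")
```

Hunks whose normalized old and new text compare equal are the ones the script suppresses as noise.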

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 13:59     ` Mauro Carvalho Chehab
@ 2019-06-14 10:12       ` Laurent Pinchart
  2019-06-14 13:24         ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 77+ messages in thread
From: Laurent Pinchart @ 2019-06-14 10:12 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit-discuss

Hi Mauro,

On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:
> Em Thu, 06 Jun 2019 19:24:35 +0300 James Bottomley escreveu:
> 
> > [splitting issues to shorten replies]
> > On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:
> >> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:  
> >>> This is probably best done as two separate topics
> >>> 
> >>> 1) Pull network: The pull depth is effectively how many pulls your
> >>> tree does before it goes to Linus, so pull depth 0 is sent straight
> >>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> >>> and so on.  We've previously spent time discussing how increasing
> >>> the pull depth of the network would reduce the amount of time Linus
> >>> spends handling pull requests.  However, in the areas I play, like
> >>> security, we seem to be moving in the opposite direction
> >>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> >>> we're deciding to move to a flat tree model, where everything is
> >>> depth 0, that's fine, I just think we could do with making a formal
> >>> decision on it so we don't waste energy encouraging greater tree
> >>> depth.  
> >> 
> >> That depth "change" was due to the perceived problems that having a
> >> deeper pull depth was causing.  To sort that out, Linus asked for
> >> things to go directly to him.  
> > 
> > This seems to go beyond problems with one tree and is becoming a trend.
> > 
> >> It seems like the real issue is the problem with that subsystem
> >> collection point, and the fact that the depth changed is a sign that
> >> our model works well (i.e. everyone can be routed around.)  
> > 
> > I'm not really interested in calling out "problem" maintainers, or
> > indeed having another "my patch collection method is better than yours"
> > type discussion.  What I was fishing for is whether the general
> > impression that greater tree depth is worth striving for is actually
> > correct, or we should all give up now and simply accept that the
> > current flat tree is the best we can do, and, indeed is the model that
> > works best for Linus.  I get the impression this may be the case, but I
> > think making sure by having an actual discussion among the interested
> > parties who will be at the kernel summit, would be useful.
> 
> On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> 
> patch author -> media/driver maintainer -> subsystem maintainer -> Linus

I'd like to use this opportunity to ask again for pull requests to be
pulled instead of cherry-picked.

> In other words, I'm trying hard to not apply patches directly.  Still,
> due to the huge number of patches we receive on media [1], I do tend to
> apply patches directly (especially trivial ones), in order to avoid
> having a patch wait for a long time before being applied.
> 
> This model seems to be working fine for us, as it gives at least two
> levels of review to each patch.
> 
> [1] Over the last 2 years, we're receiving about 400 to 1000 patches/month:
>     https://linuxtv.org/patchwork_stats.php
> 
> >> So, maybe some work on fixing up subsystems that have problems
> >> aggregating things?  Seems like some areas of the kernel do this just
> >> fine, perhaps some workflow for the developers involved needs to be
> >> adjusted?  
> > 
> > As I said, I'm not really that interested in upbraiding the problem
> > cases, I'm more interested in discussing the generalities, and what we
> > as maintainers should be encouraging.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:39                   ` Bart Van Assche
@ 2019-06-14 11:53                     ` Leon Romanovsky
  2019-06-14 17:06                       ` Bart Van Assche
  0 siblings, 1 reply; 77+ messages in thread
From: Leon Romanovsky @ 2019-06-14 11:53 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: James Bottomley, Mauro Carvalho Chehab, ksummit

On Thu, Jun 13, 2019 at 08:39:22AM -0700, Bart Van Assche wrote:
> On 6/13/19 8:35 AM, Guenter Roeck wrote:
> > On 6/13/19 8:21 AM, Bart Van Assche wrote:
> > > On 6/13/19 8:03 AM, Martin K. Petersen wrote:
> > > >
> > > > James,
> > > >
> > > > > It depends: every patch you do to an old driver comes with a risk of
> > > > > breakage.  What we've found is even apparently sane patches cause
> > > > > breakage which isn't discovered until months later when someone with
> > > > > the hardware actually tests.
> > > >
> > > > My pet peeve is with the constant stream of seemingly innocuous
> > > > helper-interface-of-the-week changes. Such as "Use kzfoobar() instead of
> > > > kfoobar() + memset()". And then a year later somebody decides kzfoobar()
> > > > had a subtle adverse side-effect and now we all need to switch to
> > > > kpzfoobar().
> > > >
> > > > I appreciate that some of these helpers may have merit in terms of
> > > > facilitating static code checkers, etc. But other than that, I really
> > > > fail to see the value of this constant churn.
> > > >
> > > > The devil is always in the details. It's almost inevitably these obvious
> > > > five-liners that cause regressions down the line.
> > > >
> > > > So why do we keep doing this?
> > >
> > > How about discussing at the kernel summit whether or not patches
> > > that have not been tested on actual hardware should be ignored?
> > >
> >
> > A while ago I spent some time writing unit tests for various i2c based
> > hwmon drivers (https://github.com/groeck/module-tests). With those,
> > I found a substantial number of overflow conditions and other problems
> > in various drivers.
> >
> > Similar, my qemu boot tests have identified several problems over time,
> > by nature of qemu often on hardware which is difficult if not almost
> > impossible to find nowadays (ohci-sm501 is a current example).
> >
> > Are you saying that such problems should not be fixed unless they can be
> > verified on real hardware ?
>
> How about leaving out "on actual hardware" from my e-mail? What you
> described sounds like valuable work to me. I think testing with qemu is
> sufficient.

There are kernel subsystems without available QEMU virtual hardware,
or with special hardware that is not available to most of the active
developers.  Sometimes bugs in those drivers stop the whole subsystem
from moving forward and need to be fixed without the HW.

Thanks

>
> Bart.
>
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 10:12       ` Laurent Pinchart
@ 2019-06-14 13:24         ` Mauro Carvalho Chehab
  2019-06-14 13:31           ` Laurent Pinchart
  2019-06-14 13:58           ` Greg KH
  0 siblings, 2 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14 13:24 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, ksummit-discuss

Em Fri, 14 Jun 2019 13:12:22 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:

> Hi Mauro,
> 
> On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:
> > Em Thu, 06 Jun 2019 19:24:35 +0300 James Bottomley escreveu:
> >   
> > > [splitting issues to shorten replies]
> > > On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:  
> > >> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:    
> > >>> This is probably best done as two separate topics
> > >>> 
> > >>> 1) Pull network: The pull depth is effectively how many pulls your
> > >>> tree does before it goes to Linus, so pull depth 0 is sent straight
> > >>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> > >>> and so on.  We've previously spent time discussing how increasing
> > >>> the pull depth of the network would reduce the amount of time Linus
> > >>> spends handling pull requests.  However, in the areas I play, like
> > >>> security, we seem to be moving in the opposite direction
> > >>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> > >>> we're deciding to move to a flat tree model, where everything is
> > >>> depth 0, that's fine, I just think we could do with making a formal
> > >>> decision on it so we don't waste energy encouraging greater tree
> > >>> depth.    
> > >> 
> > >> That depth "change" was due to the perceived problems that having a
> > >> deeper pull depth was causing.  To sort that out, Linus asked for
> > >> things to go directly to him.    
> > > 
> > > This seems to go beyond problems with one tree and is becoming a trend.
> > >   
> > >> It seems like the real issue is the problem with that subsystem
> > >> collection point, and the fact that the depth changed is a sign that
> > >> our model works well (i.e. everyone can be routed around.)    
> > > 
> > > I'm not really interested in calling out "problem" maintainers, or
> > > indeed having another "my patch collection method is better than yours"
> > > type discussion.  What I was fishing for is whether the general
> > > impression that greater tree depth is worth striving for is actually
> > > correct, or we should all give up now and simply accept that the
> > > current flat tree is the best we can do, and, indeed is the model that
> > > works best for Linus.  I get the impression this may be the case, but I
> > > think making sure by having an actual discussion among the interested
> > > parties who will be at the kernel summit, would be useful.  
> > 
> > On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> > 
> > patch author -> media/driver maintainer -> subsystem maintainer -> Linus  
> 
> I'd like to use this opportunity to ask again for pull requests to be
> pulled instead of cherry-picked.

There are other forums for discussing internal media maintainership,
like the weekly meetings we have and our own mailing lists.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-13 15:03             ` Martin K. Petersen
                                 ` (3 preceding siblings ...)
  2019-06-14  9:43               ` Dan Carpenter
@ 2019-06-14 13:27               ` Dan Carpenter
  4 siblings, 0 replies; 77+ messages in thread
From: Dan Carpenter @ 2019-06-14 13:27 UTC (permalink / raw)
  To: Martin K. Petersen
  Cc: James Bottomley, Mauro Carvalho Chehab, ksummit, Greg Kroah-Hartman

[-- Attachment #1: Type: text/plain, Size: 1649 bytes --]

I help in staging and we get tons and tons of these cleanups.  I
decided to take a look at how bugs are introduced into staging.  I
sorted through everything with a Fixes tag and divided it into
"new code", "fixes" and "cleanups".

The initial driver upload only accounts for maybe a third of our bugs.
I was surprised this figure was so low.

Mechanical newbie patches are maybe 3-5% of our bugs.  A good chunk of
these were old patches that didn't go through the normal driver-devel
mailing list and review process.  Most of these bugs were detectable
using static analysis, so in terms of bugs that make it into a released
version of the kernel, the impact from these cleanups is tiny.

The majority of our bugs come from the maintainers doing
complicated-to-review cleanups.

Some people are overusing the Fixes tag.  If the patch is removing
unused variables, then that's not a runtime bug but we should still use
a Fixes tag.  But if we're just silencing GCC false positives then we
shouldn't use a Fixes tag.

I also looked at how the bugs were found.  Probably in staging the bugs
are reported to the vendor instead of to the driver-devel list.  I don't
know if the vendors are adding Reported-by tags correctly...  Out of the
fixes that have a Reported-by tag, most bugs are found by auto builders
and static analysis.  Another large chunk are found by maintainers (one
maintainer adds a bug and a different maintainer notices it).  It's
probably between two and five times per year that a regular user gets a
Reported-by tag.
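
A minimal sketch of this kind of tallying over `git log` output; the bot list and sample input here are mine and purely illustrative:

```python
import re
from collections import Counter

AUTOMATED = ("kbuild test robot", "syzbot")  # illustrative bot names

def tally_reporters(log_text):
    # Bucket every Reported-by trailer in `git log` output as coming
    # from an automated reporter or a human, keyed on the name part.
    counts = Counter()
    for name in re.findall(r"^\s*Reported-by:\s*([^<\n]+)", log_text, re.M):
        name = name.strip()
        kind = "automated" if any(b in name for b in AUTOMATED) else "human"
        counts[kind] += 1
    return counts

sample = (
    "    Reported-by: kbuild test robot <lkp@intel.com>\n"
    "    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>\n"
)
print(tally_reporters(sample))  # Counter({'automated': 1, 'human': 1})
```

Whether vendors consistently add the trailer in the first place is of course the open question above; the script only counts what made it into the log.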

I'm going to attach my raw data but I don't know if it's too large for
the mailing list.

regards,
dan carpenter

[-- Attachment #2: fixes --]
[-- Type: text/plain, Size: 66710 bytes --]

*** NEW FEATURES:

03aef4b6dc12 Staging: comedi: add ni_mio_common code
    1cbca5852d6c staging: comedi: ni_mio_common: protect register write overflow
9bdd203b4dc8 s3cmci: add debugfs support for examining driver and hardware state
    c7fc46fd1410 staging: ccree: mark debug_regs[] as static
8fc8598e61f6 Staging: Added Realtek rtl8192u driver to staging
    5e767cca2964 staging: rtl8192u: remove redundant nul check on pointer dev
    e1a7418529e3 staging: rtl8192u: return -ENOMEM on failed allocation of priv->oldaddr
2865d42c78a9 staging: r8712u: Add the new driver to the mainline kernel
    22c971db7dd4 staging: rtl8712: uninitialized memory in read_bbreg_hdl()
        Reported-by: Colin Ian King <colin.king@canonical.com>
    6e017006022a staging: rtl: fix possible NULL pointer dereference
35f6b6b86ede staging: iio: new ADT7316/7/8 and ADT7516/7/9 driver
    78accaea117c staging: iio: adt7316: fix the dac write calculation
    45130fb030ae staging: iio: adt7316: fix the dac read calculation
    76b7fe8d6c4d staging: iio: adt7316: fix handling of dac high resolution option
    e9de475723de staging: iio: adt7316: fix dac_bits assignment
    10bfe7cc1739 staging: iio: adt7316: allow adt751x to use internal vref for all dacs
    85a1c1191331 staging: iio: adt7316: invert the logic of the check for an ldac pin
    53a6f022b4fe staging: iio: adt7316: fix register and bit definitions
8d97a5877b85 staging: iio: meter: new driver for ADE7754 devices
    6cef2ab01636 staging:iio:ade7854: Fix the wrong number of bits to read
    4297b23d927f staging:iio:ade7854: Fix error handling on read/write
2919fa54ef64 staging: iio: meter: new driver for ADE7759 devices
    13ffe9a26df4 staging: iio: ade7759: fix signed extension bug on shift of a u8
fc96d58c1016 [media] v4l: omap4iss: Add support for OMAP4 camera interface - Video devices
    0894da849f14 media: staging: omap4iss: Include asm/cacheflush.h after generic includes
b9618c0cacd7 staging: IIO: ADC: New driver for AD7606/AD7606-6/AD7606-4
2051f25d2a26 iio: adc: New driver for AD7280A Lithium Ion Battery Monitoring System
    53e8785c248d staging: iio: adc: ad7280a: check for devm_kasprint() failure
959d2952d124 staging:iio: make iio_sw_buffer_preenable much more general.
    e10554738cab staging:iio:ade7758: Fix NULL pointer deref when enabling buffer
    4a53d3afa00b staging:iio:ad5933: Fix NULL pointer deref when enabling buffer
    824269c5868d staging:iio:ad5933: Fix NULL pointer deref when enabling buffer
3c97c08b5735 staging: iio: add TAOS tsl2x7x driver
    cf6c77323a96 staging: iio: tsl2x7x_core: Fix standard deviation calculation
        Reported-by: Abhiram Balasubramanian <abhiram@cs.utah.edu>
da4db94080f0 iio staging: add recently added modifiers to iio_event_monitor
7e8401b23e7f staging: comedi: daqboard2000: add back subsystem_device check
    80e162ee9b31 staging: comedi: daqboard2000: bug fix board type matching code
8f567c373c4b staging: comedi: new adl_pci7x3x driver
    ad83dbd974fe staging: comedi: adl_pci7x3x: fix digital output on PCI-7230
622897da67b3 [media] davinci: vpfe: add v4l2 video driver support
    c4a407b91f4b [media] staging: media: davinci_vpfe: unlock on error in vpfe_reqbufs()
d7e09d0397e8 staging: add Lustre file system client support
    134aecbc25fd staging: lustre: libcfs: Prevent harmless read underflow
    c3eec59659cf staging: lustre: ptlrpc: kfree used instead of kvfree
    092c3def24bb staging: lustre: obdclass: return -EFAULT if copy_from_user() fails
71bad7f08641 staging: add bcm2708 vchiq driver
    ca641bae6da9 staging: vc04_services: prevent integer overflow in create_pagelist()
    974d4d03fc02 staging: vchiq_2835_arm: Fix NULL ptr dereference in free_pagelist
61e121047645 staging: gdm7240: adding LTE USB driver
    b58f45c8fc30 staging: gdm724x: gdm_mux: fix use-after-free on module unload
9a7fe54ddc3a staging: r8188eu: Add source files for new driver - part 1
    784047eb2d34 staging: rtl8188eu: prevent an underflow in rtw_check_beacon_data()
7b464c9fa5cc staging: r8188eu: Add files for new driver - part 4
    123c0aab0050 staging: rtl8188eu: avoid a null dereference on pmlmepriv
    329862549c0f staging: rtl8188eu: rtw_mlme_ext.c: remove commented code
1cc18a22b96b staging: r8188eu: Add files for new driver - part 5
    6e017006022a staging: rtl: fix possible NULL pointer dereference
d6846af679e0 staging: r8188eu: Add files for new driver - part 7
    9dbd79aeb984 Staging: rtl8188eu: overflow in update_sta_support_rate()
b8d181e408af staging: drm/imx: add drm plane support
    3a44a2058747 imx-drm: ipuv3-plane: fix ipu_plane_dpms()
    9c74360f9adb staging: imx-drm: Fix modular build of DRM_IMX_IPUV3
fa590c222fba staging: rts5208: add support for rts5208 and rts5288
    c5fae4f4fd28 staging: rts5208: fix missing error check on call to rtsx_write_register
    7f7aeea7cf30 staging: rts5208: Fix "seg_no" calculation in reset_ms_card()
    34ff1bf49204 staging/rts5208: fix incorrect shift to extract upper nybble
    bf2ec0f9ada1 staging: rts5208: fix static checker warnings
ea313b5f88ed gpu: ion: Also shrink memory cached in the deferred free list
    54de9af9f0d7 gpu: ion: dereferencing an ERR_PTR
16c7eb6047bb staging: comedi: adv_pci1710: always enable PCI171x_PARANOIDCHECK code
    abe46b8932dd staging: comedi: adv_pci1710: fix AI INSN_READ for non-zero channel
33aa8d45a4fe staging: emxx_udc: Add Emma Mobile USB Gadget driver
    97972ccc083c staging: emxx_udc: Remove unused device_desc declaration
        Reported-by: Nick Desaulniers <ndesaulniers@google.com>
    4f3445067d5f staging: emxx_udc: remove incorrect __init annotations
    1fa2df0c70da staging: emxx_udc: fix the build error
ea2e813e8cc3 [media] tlg2300: move to staging in preparation for removal
    3cb99af5ea00 [media] tlg2300: Fix media dependencies
        Reported-by: Jim Davis <jim.epost@gmail.com>
c296d5f9957c staging: fbtft: core support
    0d0d4d21a099 staging: fbtft: array underflow in fbtft_request_gpios_match()
99dfc3357e98 staging: comedi: das1800: remove depends on ISA_DMA_API limitation
    d375278d6667 staging: comedi: das1800: fix possible NULL dereference
e56d03dee14a staging: comedi: cb_pcimdas: add main connector digital input/output
    b08ad6657aac staging: comedi: cb_pcimdas: fix handlers for DI and DO subdevices
81dee67e215b staging: sm750fb: add sm750 to staging
    d28fb1ffbaf4 staging: sm750fb: fix a type issue in sm750_set_chip_type()
c5c77ba18ea6 staging: wilc1000: Add SDIO/SPI 802.11 driver
    fea699163604 staging: wilc1000: Fix some double unlock bugs in wilc_wlan_cleanup()
    0a9019cc8ae0 Staging: wilc1000: unlock on error in init_chip()
    c58eef061dda staging: wilc1000: fix missing read_write setting when reading data
    6a27224f964d staging: wicl1000: fix dereference after free in wilc_wlan_cleanup()
68905a14e49c staging: unisys: Add s-Par visornic ethernet driver
    fa15d6d34663 staging: unisys: visornic: correct obvious double-allocation of workqueues
9bc79bbcd0c5 Staging: most: add MOST driver's aim-cdev module
    5ae890780e1b staging: most: cdev: add missing check for cdev_add failure
2870b52bae4c greybus: lights: add lights implementation
    428359cbfe08 media: staging: greybus: light: fix memory leak in v4l2 register
        Reported-by: Sakari Ailus <sakari.ailus@linux.intel.com>
8f83409cf238 staging/lustre: use 64-bit time for pl_recalc
    b8cb86fd95bb staging: lustre: ldlm: pl_recalc time handling is wrong
        Reported-by: James Simmons <jsimmons@infradead.org>
2b40182a19bc staging: android: ion: Add ion driver for Hi6220 SoC platform
    4a236d01b5e0 staging: android ion/hisi: fix dependencies
2d6ca60f3284 iio: Add a DMAengine framework based buffer
    7d2b8e6aaf9e staging: iio: ad5933: switch buffer mode to software
0cf55bbef2f9 staging: comedi: comedi_test: implement commands on AO subdevice
    403fe7f34e33 staging: comedi: comedi_test: fix timer race conditions
        Reported-by: Éric Piel <piel@delmic.com>
30135ce26df2 staging: wilc1000: wilc_wlan_init: add argument struct net_device
12927835d211 greybus: loopback: Add asynchronous bi-directional support
    44b02da39210 staging: greybus: loopback: Fix iteration count on async path
        Reported-by: Mitch Tasman <tasman@leaflabs.com>
b36f04fa9417 greybus: loopback: Convert thread delay to microseconds
    33b8807a6fe1 staging: greybus: loopback: fix broken udelay
73584a40d748 staging: wilc1000: add ops resuem/suspend/wakeup in cfg80211
    abb4f8addf1d staging: wilc1000: fix build failure
f164cbf98fa8 staging: comedi: ni_mio_common: add finite regeneration to dio output
    bafd9c64056c staging: comedi: ni_mio_common: Fix divide-by-zero for DIO cmdtest
        Reported-by: Ivan Vasilyev <grabesstimme@gmail.com>
3abb33ac6521 staging/hfi1: Add TID cache receive init and free funcs
    94158442eb0c IB/hfi1: Don't attempt to free resources if initialization failed
46a80d62e6e0 IB/qib, staging/rdma/hfi1: add s_hlock for use in post send
    747f4d7a9d1b IB/qib, IB/hfi1: Fix up UD loopback use of irq flags
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
14553ca11039 staging/rdma/hfi1: Adaptive PIO for short messages
    17f15bf66884 IB/hfi1: Fix pio wait counter double increment
b15d97d77017 greybus: core: add module abstraction
    e866dd8aab76 greybus: fix a leak on error in gb_module_create()
886aba558b9e greybus: arche-platform: Export fn to allow timesync driver to change the state
    d9966f1de990 staging: greybus: arche-platform: fix device reference leak
e7f63771b60e ION: Sys_heap: Add cached pool to spead up cached buffer alloc
    9bcf065e2812 staging: android: ion: fix sys heap pool's gfp_flags
13a9930d15b4 staging: ks7010: add driver from Nanonote extra-repository
    9afe11e95676 staging: ks7010: declare private functions static
    9d29f14db109 staging: ks7010: fix wait_for_completion_interruptible_timeout return handling
970dc85bd95d greybus: timesync: Add timesync core driver
    b17c1bba9cec staging: greybus: timesync: validate platform state callback
9881fe0ca187 [media] cec: add HDMI CEC framework (adapter)
    6a91d60aba67 [media] cec-adap.c: work around gcc-4.4.4 anon union initializer bug
ca684386e6e2 [media] cec: add HDMI CEC framework (api)
    ea8c535e30c1 [media] cec: add MEDIA_SUPPORT dependency
9d9d3777a9db greybus: es2: Add a new bulk in endpoint for APBridgeA RPC
    1305f2b2f52a greybus: es2: fix error return code in ap_probe()
3147b268400a staging: lustre: osc: Automatically increase the max_dirty_mb
    2fab9faf9b27 staging: lustre: fix bug in osc_enter_cache_try
e28a6c8b3fcc [media] pulse8-cec: sync configuration with adapter
    b82e39f85603 [media] pulse8-cec: avoid uninitialized data use
cc43368a3cde greybus: lights: Control runtime pm suspend/resume on AP side
    5cf62679153e staging: greybus: light: check the correct value of delay_on
13439479c7de staging: ion: Add files for parsing the devicetree
    0047b6e5f1b4 staging: android/ion: testing the wrong variable
02b23803c6af staging: android: ion: Add ioctl to query available heaps
    cf55902b9c30 staging: android: ion: Fix error handling in ion_query_heaps()
e0f3fc9b47e6 iio: accel: sca3000_core: implemented IIO_CHAN_INFO_SAMP_FREQ
    64bc2d02d754 iio: accel: sca3000_core: avoid potentially uninitialized variable
    a1427af59977 iio: accel: sca3000_core: avoid potentially uninitialized variable
239fd5d41f9b staging: lustre: libcfs: shortcut to create CPT from NUMA topology
    1b301e8343d2 staging: lustre: remove broken dead code in cfs_cpt_table_create_pattern
f0cf21abcccc staging: lustre: clio: add CIT_DATA_VERSION and remove IOC_LOV_GETINFO
    01220025b14a staging: lustre: lov: use correct env in lov_io_data_version_end()
2296c0623eb7 staging: iio: cdc: ad7746: implement IIO_CHAN_INFO_SAMP_FREQ
    3089ec2c104c staging: iio: cdc/ad7746: fix missing return value
6572389bcc11 staging: iio: cdc: ad7152: Implement IIO_CHAN_INFO_SAMP_FREQ attribute
    95264c8c6a90 staging: iio: ad7152: Fix deadlock in ad7152_write_raw_samp_freq()
11c647caf74b staging: lustre: obdclass: variable llog chunk size
    6c66a7b097f4 staging: lustre: Use 'kvfree()' for memory allocated by 'kvzalloc()'
96049bd1ecd0 staging: lustre: ptlrpc: embed highest XID in each request
    74e3bb75315c staging: lustre: ptlrpc: avoid warning on missing return
a98461d79ba5 staging: iio: ad9832: add DVDD regulator
    6826fdbd2e20 staging: iio: ad9832: allocate data before using
23b028c871e1 staging: bcm2835-audio: initial staging submission
    f5e2199ae574 staging: bcm2835-audio: allocate enough data for work queues
    021fbaa5fb5b Staging: bcm2835-audio: && vs & typo
    84472ecd7074 staging: bcm2835-audio: off by one in snd_bcm2835_playback_open_generic()
    b07525b89f95 staging: bcm2835-audio: fix empty-body warning
    fe822dc6c12b staging: bcm2835-audio: remove incorrect include path
7b3ad5abf027 staging: Import the BCM2835 MMAL-based V4L2 camera driver.
    ca4e4efbefbb Staging: vc04_services: Fix a couple error codes
    7566f39dfdc1 staging: bcm2835-camera: Abort probe if there is no camera
    b7afce51d957 staging: bcm2835-camera: fix timeout handling in wait_for_completion_timeout
    5b70084f6cbc staging: bcm2835-camera: handle wait_for_completion_timeout return properly
    8e17858a8818 staging: bcm2835-camera: fix error handling in init
    f4082c6f28a8 staging: bcm2835/mmal-vchiq: unlock on error in buffer_from_host()
    757b9bd07431 staging: bcm2835: mark all symbols as 'static'
97b35807cc4d staging: bcm2835-v4l2: Add a build system for the module.
    3ad13763b5f4 staging: bcm2835-v4l: remove incorrect include path
a49d25364dfb staging/atomisp: Add support for the Intel IPU v2
    d5426f4c2eba media: staging: atomisp: use clock framework for camera clocks
    8cd0cd065f37 media: staging/atomisp: fix header guards
    0b56d1c8fd89 media: staging: atomisp: fix bounds checking in mt9m114_s_exposure_selection()
    8033120f36c0 media: atomisp2: array underflow in imx_enum_frame_size()
    115b7ac211d1 media: atomisp2: array underflow in ap1302_enum_frame_size()
    7b065c554ca5 media: atomisp2: Array underflow in atomisp_enum_input()
    c32e3d1b490e [media] atomisp: putting NULs in the wrong place
    d1fec5bdeb18 [media] atomisp: one char read beyond end of string
        Reported-by: David Binderman <dcb314@hotmail.com>
    5795a9a5fed7 staging: atomisp: remove #ifdef for runtime PM functions
    9b7edbb60bb8 staging/atomisp: add ACPI dependency
    19740d6840b5 staging/atomisp: add PCI dependency
    418eaad30eea staging/atomisp: add VIDEO_V4L2_SUBDEV_API dependency
    902ea5fcd577 staging/atomisp: remove sh_css_lace_stat code
    a22933221c43 stating/atomisp: fix -Wold-style-definition warning
    72d2b01e84b2 staging/atomisp: fix empty-body warning
    b5563094d7df Staging: atomisp: fix an uninitialized variable bug
    53044529769b staging: atomicsp: fix a loop timeout
    bc44a73e1737 Staging: atomisp: kfreeing a devm allocated pointer
    f07d4b427067 staging: atomisp: off by one in atomisp_acc_load_extensions()
    39c116dcfd5f staging: atomisp: potential underflow in atomisp_get_metadata_by_type()
6bbfe4a76158 staging: vc04_services: Create new BCM_VIDEOCORE setting for VideoCore services.
    ce95e3a9c599 staging: vc04_services: make BCM_VIDEOCORE tristate
425e586cf95b speakup: add unicode variant of /dev/softsynth
    b96fba8d5855 staging: speakup: fix wraparound in uaccess length check
5569a1260933 staging: vchiq_arm: Add compatibility wrappers for ioctls
    5a96b2d38dc0 staging: vchiq_arm: fix compat VCHIQ_IOC_AWAIT_COMPLETION
d3269bdc7ebc bus: fsl-mc: dpio: add frame descriptor and scatter/gather APIs
    11270059e8d0 staging: fsl-mc/dpio: add cpu <--> LE conversion for dpaa2_fd
321eecb06bfb bus: fsl-mc: dpio: add QBMan portal APIs for DPAA2
    8bae455e57f1 staging: fsl-mc/dpio: remove unused function
    c96d886d7b2e staging: fsl-mc: bus: dpio: fix alter FQ state command
554c0a3abf21 staging: Add rtl8723bs sdio wifi driver
    c3e43d8b958b staging: rtl8723bs: Fix the return value in case of error in 'rtw_wx_read32()'
    a3d2ae043f64 staging: rtl8723bs: fix u8 less than zero check
    ec14121931a2 staging: rtl8723bs: avoid null pointer dereference on pmlmepriv
    c51b46dd5b99 staging: rtl8723bs: add missing range check on id
    ed6456afef0d Staging: rtl8723bs: fix an error code in isFileReadable()
    f55a6d457b21 staging: rtl8723bs: rework debug configuration handling
    c45112e467ab staging: rtl8723bs: fix empty-body warning
abefd6741d54 staging: ccree: introduce CryptoCell HW driver
    0f70db70339d staging: ccree: break send_request and fix ret val
    0f2f02d1b572 staging: ccree: use signal safe completion wait
    26f4b1f7a8da staging: ccree: fix buffer copy
50cfbbb7e627 staging: ccree: add ahash support
    c51831be99e1 staging: ccree: register setkey for none hash macs
    3d51b9562673 staging: ccree: add CRYPTO dependency
302ef8ebb4b2 staging: ccree: add skcipher support
    737aed947f9b staging: ccree: save ciphertext for CTS IV
    6c5ed91b0ca6 staging: ccree: remove unused function argument
0352d1d85201 staging: fsl-dpaa2/eth: Add APIs for DPNI objects
    b72d7451209a staging: fsl-dpaa2/eth: add ETHERNET dependency
6b9ad1c742bf staging: speakup: add send_xchar, tiocmset and input functionality for tty
    a1960e0f1639 staging: speakup: fix tty-operation NULL derefs
    e45423d76f1c staging: speakup: signedness bug in spk_ttyio_in_nowait()
2eccd4aa19fc staging: greybus: enable compile testing of arche driver
    0687090acf0d staging: greybus: mark PM functions as __maybe_unused
158aeefcb82f [media] atomisp: Add __printf validation and fix fallout
    22457cb2de13 media: atomisp: fix misleading addr information
ac669251087d staging: sm750fb: change default screen resolution
    888db9a6e02c staging: sm750fb: change default screen resolution
64b5a49df486 [media] media: imx: Add Capture Device Interface
    537b5c840c2f media: staging/imx: always select VIDEOBUF2_DMA_CONTIG
4a34ec8e470c [media] media: imx: Add CSI subdev driver
    2e0fe66e0a13 media: imx: csi: Disable CSI immediately after last EOF
        Reported-by: Gaël PORTAY <gael.portay@collabora.com>
f0d9c8924e2c [media] media: imx: Add IC subdev drivers
    a19c22677377 media: imx: prpencvf: Stop upstream before disabling IDMA channel
        Reported-by: Gaël PORTAY <gael.portay@collabora.com>
    f9cc48f1b1df media: imx: Fix VDIC CSI1 selection
46949b48568b staging: wilc1000: New cfg packet format in handle_set_wfi_drv_handler
    1bbf6a6d4091 staging: wilc1000: Fix bssid buffer offset in Txq
e130291212df [media] media: Add i.MX media core driver
    dee747f88167 media: imx: Don't register IPU subdevs/links if CSI port missing
    0b2e9e7947e7 media: staging/imx: remove confusing IS_ERR_OR_NULL usage
    4560cb4a0c99 media: imx: add VIDEO_V4L2_SUBDEV_API dependency
874bcba65f9a staging: pi433: New driver
    64c4c4ca6c12 staging: pi433: fix potential null dereference
    99859541a92d staging: pi433: use div_u64 for 64-bit division
    39ae5f1e4b86 staging: pi433: return -EFAULT if copy_to_user() fails
a037b7ec2eb7 staging: fsl-mc: allow the driver compile multi-arch
    5be271ff96ec staging: fsl-mc: fix resource_size.cocci warnings
03274850279c staging: fsl-mc: allow the driver compile multi-arch
    0116ced91d3a staging: fsl-mc: add explicit dependencies for compile-tested arches
56bde846304e staging: r8822be: Add existing rtlwifi and rtl_pci parts for new driver
    89ff9d58e6b6 staging: rtlwifi: add MAC80211 dependency
    03fef6c5c229 staging: rtlwifi: simplify logical operation
    a084cda42ece staging: rtlwifi: shut up -Wmaybe-uninitialized warning
938a0447f094 staging: r8822be: Add code for halmac sub-driver
    e4b08e16b7d9 staging: r8822be: check kzalloc return or bail
    e1bf28868ab0 staging: r8822be: fix null pointer dereferences with a null driver_adapter
    1919b0562bfc staging: r8822be: fix null pointer dereference with a null driver_adapter
    2ffabf50bd00 staging: r8822be: fix memory leak of eeprom_map on error exit return
7e5b796cde7e staging: r8822be: Add the driver code
    3eb23426e174 staging: rtl8822be: fix missing null check on dev_alloc_skb return
3f268f5d6669 staging: ccree: turn compile time debug log to params
    11cc84e708db staging: ccree: use size_t consistently
        Reported-by: kbuild test robot <fengguang.wu@intel.com>
bf3cfaa712e5 media: staging/imx: get CSI bus type from nearest upstream entity
    904371f90b2c media: imx: csi: Allow unknown nearest upstream entities
723fbf563a6a lib/scatterlist: Add SG_CHAIN and SG_END macros for LSB encodings
    b5d013bc09e9 staging: rts5208: rename SG_END macro
1628e2e4dc76 staging: fsl-mc/dpio: allow the driver to compile multi-arch
    7c979b473121 staging: fsl-mc/dpio: qbman_pull_desc_set_token() can be static
8f9439022648 staging: typec: handle vendor defined part and modify drp toggling flow
    7d287a5d5f80 staging: typec: modify parameter of tcpci_irq
44baaa43d7cc staging: fsl-dpaa2/ethsw: Add Freescale DPAA2 Ethernet Switch driver
    5555ebbbac82 staging: fsl-dpaa2/ethsw: fix memory leak of switchdev_work
    11f27765f611 staging: fsl-dpaa2: ethsw: Add missing netdevice check
0853c7a53eb3 staging: mt7621-dma: ralink: add rt2880 dma engine
    354e379684fc staging: mt7621-dma: fix potentially dereferencing uninitialized 'tx_desc'
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
e3cbf478f846 staging: mt7621-eth: add the drivers core files
    144e2643e2f5 staging: mt7621-eth: Use eth_hw_addr_random()
    85e1d42663a0 staging: mt7621-eth: Fix memory leak in mtk_add_mac() error path
    9d350d806a8b staging: mt7621-eth: fix return value check in mtk_connect_phy_node()
    960526d5970f staging: mt7621-eth: fix return value check in mtk_probe()
f079b6406348 staging: mt7621-eth: add gigabit switch driver (GSW)
    3eb3c3e32eef staging: mt7621-eth: fix return value check in mt7621_gsw_probe()
4907c73deefe media: staging: davinci_vpfe: allow building with COMPILE_TEST
    49dc762cffd8 media: staging: davinci_vpfe: disallow building with COMPILE_TEST
ce08eaeb6388 staging: typec: rt1711h typec chip driver
    e16711c32bca staging/typec: fix tcpci_rt1711h build errors
        Reported-by: kbuild test robot <lkp@intel.com>
3d2ec9dcd553 staging: Android: Add 'vsoc' driver for cuttlefish.
    060ea4271a82 staging: android: vsoc: fix copy_from_user overrun
9bdf43b3d40f staging: fsl-dpaa2/rtc: add rtc driver
    916c0c4b83de staging: fsl-dpaa2/rtc: fix PTP dependency
8ce76bff0e6a staging: ks7010: add new helpers to achieve mib set request and simplify code
    eb37430d402d staging: ks7010: call 'hostif_mib_set_request_int' instead of 'hostif_mib_set_request_bool'
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
9a69f5087ccc drivers/staging: Gasket driver framework + Apex driver
    c37a192ef442 Staging: Gasket: shift wrapping bug in gasket_read_modify_write_64()
    7cc6dfd076e8 Staging: Gasket: fix a couple off by one bugs
    97b23455ccd5 Staging: Gasket: uninitialized return in gasket_mmap()
2408898e3b6c staging: vboxvideo: Add page-flip support
    a5aca2057469 staging: vboxvideo: Fix modeset / page_flip error handling
    65aac1742328 staging: vboxvideo: Change address of scanout buffer on page-flip
4a965c5f89de staging: add driver for Xilinx AXI-Stream FIFO v4.1 IP core
    b9f13084580c staging: fix platform_no_drv_owner.cocci warnings
    6d4abf1c0e26 staging: axis-fifo: fix return value check in axis_fifo_probe()
3aa8ec716e52 staging: erofs: add directory operations
    33bac912840f staging: erofs: keep corrupted fs from crashing kernel in erofs_readdir()
d72d1ce60174 staging: erofs: add namei functions
    419d6efc50e9 staging: erofs: keep corrupted fs from crashing kernel in erofs_namei()
    d4104c5e783f staging: erofs: keep corrupted fs from crashing kernel in erofs_namei()
    38c6aa2175c3 staging: erofs: use the wrapped PTR_ERR_OR_ZERO instead of open code
        Reported-by: kbuild test robot <lkp@intel.com>
b17500a0fdba staging: erofs: introduce xattr & acl support
    62dc45979f3f staging: erofs: fix race of initializing xattrs of a inode at the same time
    3b1b5291f79d staging: erofs: fix memleak of inode's shared xattr array
    7077fffcb0b0 staging: erofs: fix fast symlink w/o xattr when fs xattr is on
        Reported-by: Li Guifu <bluce.liguifu@huawei.com>
0d40d6e399c1 staging: erofs: add a generic z_erofs VLE decompressor
    8bce6dcede65 staging: erofs: fix to handle error path of erofs_vmap()
3883a79abd02 staging: erofs: introduce VLE decompression support
    b6391ac73400 staging: erofs: fix error handling when failed to read compresssed data
    11152496021e staging: erofs: fix error handling when failed to read compresssed data
    8bce6dcede65 staging: erofs: fix to handle error path of erofs_vmap()
    a112152f6f3a staging: erofs: fix mis-acted TAIL merging behavior
    1e5ceeab6929 staging: erofs: fix illegal address access under memory pressure
    af692e117cb8 staging: erofs: compressed_pages should not be accessed again after freed
50e761516f2b media: platform: Add Cedrus VPU decoder driver
    53e9d838275d media: cedrus: Fix a NULL vs IS_ERR() check
    b12c7afc10b0 media: platform: fix platform_no_drv_owner.cocci warnings
    b7c56d7bfe83 drivers: staging: cedrus: find ctx before dereferencing it ctx
284db12cfda3 staging: erofs: add trace points for reading zipped data
    ba9ce771b018 staging: erofs: fix `trace_erofs_readpage' position
51fd36738383 staging: comedi: ni_mio_common: implement INSN_CONFIG_GET_CMD_TIMING_CONSTRAINTS
    9a1ec4eb6f37 staging: comedi: ni_mio_common: scale ao INSN_CONFIG_GET_CMD_TIMING_CONSTRAINTS
c893500a16ba media: imx: csi: Register a subdev notifier
    337e90ed0286 media: imx-csi: Input connections to CSI should be optional
347e244884c3 staging: comedi: tio: implement global tio/ctr routing
    4dc2a3cd2785 staging: comedi: clarify/unify macros for NI macro-defined terminals
    a7ed5b3e7dca staging: comedi: tio: fix multiple missing break in switch bugs
f5f2e4273518 media: staging/intel-ipu3: Add css pipeline programming
    c3c2eca87dcd media: staging/intel-ipu3: reduce kernel stack usage
    81a43d10b8ed media: staging: intel-ipu3: fix unsigned comparison with < 0
a0ca1627b450 media: staging/intel-ipu3: Add v4l2 driver based on media framework
    6d5f26f2e045 media: staging/intel-ipu3-v4l: reduce kernel stack usage
7fc7af649ca7 media: staging/intel-ipu3: Add imgu top level pci device driver
    948dff7cfa1d media: staging/intel-ipu3: mark PM function as __maybe_unused
439d8186fb23 media: imx: add capture compose rectangle
    5964cbd86922 media: imx: Set capture compose rectangle in capture_device_set_format
7807063b862b media: staging/imx7: add MIPI CSI-2 receiver subdev for i.MX7
    1fc79c4bb19b media: staging/imx7: Fix an error code in mipi_csis_clk_get()
07173c3ec276 block: enable multipage bvecs
    f4e97f5d4c9e staging: erofs: fix unexpected out-of-bound data access
7dc7967fc39a staging: kpc2000: add initial set of Daktronics drivers
    d4c596ebf627 staging: kpc2000: Fix build error without CONFIG_UIO
        Reported-by: Hulk Robot <hulkci@huawei.com>
        Reported-by: kbuild test robot <lkp@intel.com>
    f998a1180e14 staging: kpc2000: fix resource size calculation
    d687bdefba27 staging: kpc2000: Fix a stack information leak in kp2000_cdev_ioctl()
7df95299b94a staging: kpc2000: Add DMA driver
    c85aa326f5c5 staging: kpc2000: double unlock in error handling in kpc_dma_transfer()
284eb160681c staging: octeon-ethernet: support of_get_mac_address new ERR_PTR error
    da48be337343 staging: octeon-ethernet: Fix of_get_mac_address ERR_PTR check
3ef46bc97ca2 media: staging/imx: Improve pipeline searching
    c89b41343862 media: staging/imx: fix two NULL vs IS_ERR() bugs
43ad38191816 staging: kpc2000: kpc_i2c: add static qual to local symbols in kpc_i2c.c
    99bf7761b7cd staging: kpc2000: kpc_i2c: fix platform_no_drv_owner.cocci warnings

*** FIXES:

36b30d6138f4 staging: nvec: ps2: change serio type to passthrough
    17c1c9ba15b2 Revert "staging: nvec: ps2: change serio type to passthrough"
8b7a13c3f404 staging: r8712u: Fix possible buffer overrun
    300cd664865b staging: rtl8712: Fix possible buffer overrun
d35dcc89fc93 staging: comedi: quatech_daqp_cs: fix daqp_ao_insn_write()
    1376b0a21603 staging: comedi: quatech_daqp_cs: fix no-op loop daqp_ao_insn_write()
0a438d5b381e staging: vt6656: use free_netdev instead of kfree
    cb4855b49deb Staging: vt6655-6: potential NULL dereference in hostap_disable_hostapd()
3030d40b5036 staging: vt6655: use free_netdev instead of kfree
    cb4855b49deb Staging: vt6655-6: potential NULL dereference in hostap_disable_hostapd()
83271f6262c9 ion: hold reference to handle after ion_uhandle_get
    6fa92e2bcf63 staging: ion: fix corruption of ion_import_dma_buf
7ad82572348c staging:wlan-ng:Fix sparse warning
    2c474b8579e9 staging: wlan-ng: add missing byte order conversion
4a9fdbbecc18 staging: core: tiomap3430.c Fix line over 80 characters.
    ff4f58f0ca5d staging: tidspbridge: fix an erroneous removal of parentheses
0557344e2149 staging: comedi: ni_mio_common: fix local var for 32-bit read
    857a661020a2 staging: comedi: ni_mio_common: fix E series ni_ai_insn_read() data
f79b0d9c223c staging: speakup: Fixed warning <linux/serial.h> instead of <asm/serial.h>
    327b882d3bcc Staging: speakup: Fix getting port information
73e0e4dfed4c staging: comedi: comedi_test: fix timer lock-up
    403fe7f34e33 staging: comedi: comedi_test: fix timer race conditions
        Reported-by: Éric Piel <piel@delmic.com>
81fb0b901397 staging: android: ion_test: unregister the platform device
    ccbc2a9e7878 staging: android: ion_test: fix check of platform_device_register_simple() error code
0abb60c1c5c3 staging: unisys: visorchannel_write(): Handle partial channel_header writes
    d253058f490f staging: unisys: fix random memory corruption in visorchannel_write()
9535ebc5e9cc staging/wilc1000: fix Kconfig dependencies
    bcc43a4b5ed7 staging/wilc: fix Kconfig dependencies, second try
e6ffd1ba55a4 staging: fbtft: fix out of bound access
    11f2323ad357 staging: fbtft: fix build error
        Reported-by: kbuild test robot <fengguang.wu@intel.com>
82c2611daaf0 staging/rdma/hfi1: Handle packets with invalid RHF on context 0
    9d2f53ef42c1 staging/rdma/hfi1: Fix error in hfi1 driver build
9d15134d067e greybus: power_supply: rework get descriptors
    47830c1127ef staging: greybus: power_supply: fix prop-descriptor request size
a545f5308b6c staging/rdma/hfi: fix CQ completion order issue
    b96b040445f5 IB/hfi1: Fix potential panic with sdma drained mechanism
    b9b06cb6feda IB/hfi1: Fix missing lock/unlock in verbs drain callback
4d99b2581eff staging: lustre: avoid intensive reconnecting for ko2iblnd
    9b046013e583 staging: lustre: separate a connection destroy from free struct kib_conn
b08bb6bb5af5 staging: lustre: make lustre dependent on LNet
    2cc089e41d8e staging: lustre: really make lustre dependent on LNet
49d200deaa68 debugfs: prevent access to removed files' private data
    0fd9da9a979a staging/android: sync_debug: unproxify debugfs files' fops
ed2f549dc0f6 staging: lustre: libcfs: test if userland data is to small
    62cbe860c5c3 staging: lustre: libcfs: fix test for libcfs_ioctl_hdr minimum size
        Reported-by: Doug Oucharek <doug.s.oucharek@intel.com>
87787e5ef727 Staging: iio: Fix sparse endian warning
    7e982555d89c staging: iio: fix ad7606_spi regression
e88c9271d9f8 IB/hfi1: Fix buffer cache races which may cause corruption
    9565c6a37a9d IB/hfi1: Fix an interval RB node reference count leak
b788dc51e425 staging: lustre: llite: drop acl from cache
    ed7bdf5c9c15 staging: lustre: hide call to Posix ACL in ifdef
5bb2399a4fe4 [media] cec: fix Kconfig dependency problems
    a58d1191ca04 [media] cec: fix Kconfig help text
    cd70c37b5a23 [media] staging: add MEDIA_SUPPORT dependency
d806f30e639b staging: lustre: osc: revise unstable pages accounting
    c89d98e224b4 staging/lustre/llite: Move unstable_stats from sysfs to debugfs
    7894c263f200 staging: lustre: fix unstable pages tracking
eaf47b713b60 staging: rtl8188eu: fix missing unlock on error in rtw_resume_process()
    23bf40424a0f staging: rtl8188eu: fix double unlock error in rtw_resume_process()
57b978ada073 [media] s5p-cec: fix system and runtime PM integration
    eadf081146ec [media] s5p-cec: mark PM functions as __maybe_unused again
5231f7651c55 staging: lustre: statahead: small fixes and cleanup
    f689c72d7dbc staging: lustre: statahead: remove incorrect test on agl_list_empty()
0561155f6fc5 staging: iio: tsl2583: don't shutdown chip when updating the lux table
    c266cda29ae6 staging: iio: tsl2583: fix unused function warning
e895f00a8496 Staging: wlan-ng: hfa384x_usb.c Fixed too long code line warnings.
    a67fedd78818 staging: wlan-ng: fix adapter initialization failure
ec988ad78ed6 phy: Don't increment MDIO bus refcount unless it's a different owner
    e7c9a3d9e432 staging: octeon: Call SET_NETDEV_DEV()
        Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
757b9bd07431 staging: bcm2835: mark all symbols as 'static'
    156650083f7d staging: bcm2835: don't mark 'bcm2835_v4l2_debug' as static
0adbfd4694c2 staging: bcm2835-audio: fix memory leak in bcm2835_audio_open_connection()
    c97d96b4e612 staging: bcm2835-audio: Fix memory corruption
4b4eda001704 Staging: media: Unmap and release region obtained by ioremap_nocache
    3b6471c7becd media: Staging: media: Release the correct resource in an error handling path
2a55e7b5e544 staging: android: ion: Call dma_map_sg for syncing and mapping
    31eb79db420a staging: android: ion: Support cpu access during dma_buf_detach
0e490657c721 staging: wilc1000: Fix problem with wrong vif index
    dda037057a57 staging: wilc1000: fix to set correct value for 'vif_num'
ef9209b642f1 staging: rtl8723bs: Fix indenting errors and an off-by-one mistake in core/rtw_mlme_ext.c
    47dcb0802d28 Merge tag 'staging-4.20-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
    87e4a5405f08 Revert commit ef9209b642f "staging: rtl8723bs: Fix indenting errors and an off-by-one mistake in core/rtw_mlme_ext.c"
74e1e498e84e staging: rtl8188eu: fix comments with lines over 80 characters
    4004a9870bbe staging: rtl8188eu: Revert part of "staging: rtl8188eu: fix comments with lines over 80 characters"
d1eab9dec610 staging: vchiq_core: Bail out in case of invalid tx_pos
    8113b89fc615 staging: vchiq_core: Fix missing semaphore release in error case
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
39163c0ce0f4 staging: fsl-dpaa2/eth: Errors checking update
    58ad0d0263c5 staging: fsl-dpaa2: eth: move generic FD defines to DPIO
    11b86a84bc53 staging: fsl-dpaa2/eth: fix off-by-one FD ctrl bitmaks
0b2e9e7947e7 media: staging/imx: remove confusing IS_ERR_OR_NULL usage
    b605687cf517 media: staging: imx-media-vdic: fix inconsistent IS_ERR and PTR_ERR
9a5a6911aa3f staging: imx: fix non-static declarations
    4a3039e26eba media: staging: atomisp: imx: remove dead code
    09cbc5de1540 Revert "staging: imx: fix non-static declarations"
        Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
737aed947f9b staging: ccree: save ciphertext for CTS IV
    46df8824982e staging: ccree: NULLify backup_info when unused
87eb55e418b7 staging: fsl-dpaa2/eth: Fix potential endless loop
    466bcdc1fa30 staging: fsl-dpaa2/eth: Fix DMA mapping direction
4b2d9fe87950 staging: fsl-dpaa2/eth: Extra headroom in RX buffers
    441851b49a34 staging: fsl-dpaa2/eth: Don't set netdev->needed_headroom
    54ce89177988 staging: fsl-dpaa2/eth: Fix access to FAS field
c5f39d07860c staging: ccree: fix leak of import() after init()
    293edc27f8bc stating: ccree: revert "staging: ccree: fix leak of import() after init()"
    aece09024414 staging: ccree: Uninitialized return in ssi_ahash_import()
ce8a3a9e76d0 staging: android: ashmem: Fix a race condition in pin ioctls
    740a5759bf22 staging: android: ashmem: Fix possible deadlock in ashmem_ioctl
        Reported-by: syzbot+d7a918a7a8e1c952bc36@syzkaller.appspotmail.com
890f27693f2a media: imx: work around false-positive warning
    8d1a4817cce1 media: imx: work around false-positive warning, again
80782927e3aa staging: lustre: Fix unneeded byte-ordering cast
    127aaef460eb staging: lustre: lnet: use correct 'magic' test
aaea2164bdff staging: wilc1000: check for kmalloc allocation failures
    291b93ca2c50 staging: wilc1000: fix memdup.cocci warnings
fe014d4e6b55 staging: wilc1000: free memory allocated for general info message from firmware
    b00e2fd10429 staging: wilc1000: fix NULL pointer exception in host_int_parse_assoc_resp_info()
dc9f65cf9aea media: staging: atomisp: avoid a warning if 32 bits build
    e935dbfc9ff8 media: atomisp: remove an impossible condition
faa657641081 staging: wilc1000: refactor scan() to free kmalloc memory on failure cases
    ad109ba13786 staging: wilc1000: fix infinite loop and out-of-bounds access
69c90cf1b2fa staging: most: sound: call snd_card_new with struct device
    98592c1faca8 staging: most: sound: pass correct device when creating a sound card
        Reported-by: Eugeniu Rosca <erosca@de.adit-jv.com>
aba258b73101 staging: most: cdev: fix chrdev_region leak
    af708900e9a4 staging: most: cdev: fix chrdev_region leak in mod_exit
37b7b3087a2f staging/vc04_services: Register a platform device for the camera driver.
    25c7597af20d staging: vchiq_arm: Register a platform device for audio
    405e2f98637d staging: vchiq_arm: Fix camera device registration
4bebb0312ea9 staging/bcm2835-camera: Set ourselves up as a platform driver.
    3a2c20024a2b staging: bcm2835-camera: fix module autoloading
    4f566194cec3 staging: bcm2835-camera: Fix module section mismatch warnings.
        Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
bfd40eaff5ab mm: fix vma_is_anonymous() false-positives
    44960f2a7b63 staging: ashmem: Fix SIGBUS crash when traversing mmaped ashmem pages
        Reported-by: Amit Pundir <amit.pundir@linaro.org>
        Reported-by: Youling 257 <youling257@gmail.com>
9abc44ba4e2f staging: wilc1000: fix TODO to compile spi and sdio components in single module
    f45b8934b90b staging: wilc1000: revert "fix TODO to compile spi and sdio components in single module"
02211edc9a1f staging: wilc1000: fix endianness warnings reported by sparse
    3f285135bcff staging: wilc1000: fix compilation warning for ARCH PowerPC
        Reported-by: kbuild test robot <lkp@intel.com>
156c3df8d4db staging: erofs: disable compiling temporarile
    aca19723604c Revert "staging: erofs: disable compiling temporarile"
    f86cf25a6091 Revert "staging: erofs: disable compiling temporarile"
852b2876a8a8 staging: vchiq: rework remove_event handling
    77cf3f5dcf35 staging: vchiq: make wait events interruptible
    a50c4c9a6577 staging: vchiq: Fix local event signalling
a772f116702e staging: vchiq: switch to wait_for_completion_killable
    086efbabdc04 staging: vchiq: revert "switch to wait_for_completion_killable"
6e537b58de77 media: imx: vdic: rely on VDIC for correct field order
    ce3c2433b074 media: imx: vdic: Restore default case to prepare_vdi_in_buffers()
        Reported-by: Hans Verkuil <hverkuil@xs4all.nl>
e4b08e16b7d9 staging: r8822be: check kzalloc return or bail
    e8edc32d70a4 staging: rtlwifi: Use proper enum for return in halmac_parse_psd_data_88xx
6d4cd041f0af net: phy: at803x: disable delay only for RGMII mode
    9498da46d1ce staging: octeon-ethernet: fix incorrect PHY mode

*** CLEANUPS:

b37f9e1c3801 staging: rtl8723bs: Fix lines too long in update_recvframe_attrib().
    c948c6915b62 staging: rtl8723bs: Fix incorrect sense of ether_addr_equal
29148543c521 staging:iio:resolver:ad2s1210 minimal chan spec conversion.
    105967ad68d2 staging:iio:resolver:ad2s1210 fix negative IIO_ANGL_VEL read
550268ca1111 staging:iio: scrap scan_count and ensure all drivers use active_scan_mask
    79fa64eb2ee8 staging:iio:ade7758: Fix check if channels are enabled in prenable
d0f47ff17f29 ASoC: OMAP: Build config cleanup for McBSP
    d3921a03a89a staging: tidspbridge: check for CONFIG_SND_OMAP_SOC_MCBSP
fb1ef622e7a3 staging: comedi: usbduxsigma: tidy up analog output command support
    c04a1f17803e staging: comedi: usbduxsigma: don't clobber ao_timer in command test
b986be8527c7 staging: comedi: usbduxsigma: tidy up analog input command support
    423b24c37dd5 staging: comedi: usbduxsigma: don't clobber ai_timer in command test
7dc19d5affd7 drivers: convert shrinkers to new count/scan API
    5957324045ba staging: ashmem: Fix ASHMEM_PURGE_ALL_CACHES return value
        Reported-by: YongQin Liu <yongqin.liu@linaro.org>
b3ff824a81e8 staging: comedi: drivers: use comedi_dio_update_state() for complex cases
    9382c06e2d19 Staging: comedi: pcl730: fix some bitwise vs logical AND bugs
1b3f76756633 imx-drm: initialise drm components directly
    d9fdb9fba7ec imx-drm: imx-ldb: fix NULL pointer in imx_ldb_unbind()
10f74377eec3 staging: comedi: ni_tio: make ni_tio_winsn() a proper comedi (*insn_write)
    5ca05345c56c staging: comedi: ni_mio_common: fix wrong insn_write handler
        Reported-by: Éric Piel <piel@delmic.com>
c6cd0eefb27b staging: comedi: comedi_fops: introduce __comedi_get_user_chanlist()
    238b5ad85592 staging: comedi: fix memory leak / bad pointer freeing for chanlist
        Reported-by: H Hartley Sweeten <hsweeten@visionengravers.com>
    6cab7a37f5c0 staging: comedi: (regression) channel list must be set for COMEDI_CMD ioctl
        Reported-by: Bernd Porr <mail@berndporr.me.uk>
e534f3e9429f staging:nvec: Introduce the use of the managed version of kzalloc
    68fae2f3df45 staging: nvec: remove managed resource from PS2 driver
6a760394d7eb staging: comedi: ni_tiocmd: clarify the cmd->start_arg validation and use
    1fd24a4702d2 staging: comedi: ni_tiocmd: change mistaken use of start_src for start_arg
ebb657babfa9 staging: comedi: ni_mio_common: clarify the cmd->start_arg validation and use
    f0f4b0cc3a8c staging: comedi: ni_mio_common: fix AO inttrig backwards compatibility
        Reported-by: Spencer Olson <olsonse@umich.edu>
28a821c30688 Staging: speakup: Update __speakup_paste_selection() tty (ab)usage to match vt
    f4f9edcf9b52 staging/speakup: Use tty_ldisc_ref() for paste kworker
3fe563249374 staging: rtl8192u: r8192U_core.c: Cleaning up unclear and confusing code
    c3f463484bdd staging: rtl8192u: Fix crash due to pointers being "confusing"
fadbe0cd5292 staging: rtl8188eu:Remove rtw_zmalloc(), wrapper for kzalloc()
    1335a9516d3d staging: r8188eu: Fix scheduling while atomic splat
    33dc85c3c667 staging: r8188eu: Fix scheduling while atomic error introduced in commit fadbe0cd
    11306d1f20ca staging: rtl8188eu: use GFP_ATOMIC under spinlock
817144ae7fda staging: comedi: ni_mio_common: remove unnecessary use of 'board->adbits'
    655c4d442d12 staging: comedi: ni_mio_common: fix M Series ni_ai_insn_read() data mask
0953ee4acca0 staging: comedi: ni_mio_common: checkpatch.pl cleanup (else not useful)
    bd3a3cd6c27b staging: comedi: ni_mio_common: fix the ni_write[blw]() functions
4f9c63fe5333 staging: comedi: amplc_pci230: refactor iobase addresses
    94254d1baec7 staging: comedi: amplc_pci230: fix a precedence bug
91360b02ab48 ashmem: use vfs_llseek()
    97fbfef6bd59 staging: android: ashmem: lseek failed due to no FMODE_LSEEK.
fdedd94509fd staging/lustre/lvfs: remove the lvfs layer
    372d5b560707 staging/lustre/lvfs: fix building without CONFIG_PROC_FS
240512474424 staging: comedi: comedi_test: use comedi_handle_events()
    403fe7f34e33 staging: comedi: comedi_test: fix timer race conditions
        Reported-by: Éric Piel <piel@delmic.com>
    73e0e4dfed4c staging: comedi: comedi_test: fix timer lock-up
b3dd8957c23a staging: lustre: lustre: llite: Use kstrdup
    15f7330be7c0 staging: lustre: llite: initialize xattr->xe_namelen
8c4f13649731 Staging: lustre: Use put_unaligned_le64
    fb1de5a4c825 staging: lustre: Include unaligned.h instead of access_ok.h
b12fdf7da28f staging: unisys: rework signal remove/insert to avoid sparse lock warnings
    24ac1074c1da staging: unisys: fix random hangs with network stress in visornic
817bd7253291 dma-buf: cleanup dma_buf_export() to make it easily extensible
    72449cb47b01 staging: android: ion: fix wrong init of dma_buf_export_info
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
af3fa7c71bf6 staging/lustre/lnet: peer aliveness status and NI status
    9f088dba3cc2 staging/lustre: use jiffies for lp_last_query times
efde234674d9 ARM: OMAP4+: control: remove support for legacy pad read/write
    c02d7da3dd00 Merge tag 'media/v4.1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
    fefad2d54beb [media] v4l: omap4iss: Replace outdated OMAP4 control pad API with syscon
0aca78449b58 staging: unisys: remove ERRDEV macros
    bdbceb4de3e3 staging: unisys: fix some debugfs output
6501c8e7d86c Staging: rtl8712: Eliminate use of _cancel_timer_ex
    39a6e7376af0 staging: rtl8712: fix stack dump
        Reported-by: Arek Rusniak <arek.rusi@gmail.com>
    a1471eb9da4a staging: rtl8712: fix stack dump
        Reported-by: Arek Rusniak <arek.rusi@gmail.com>
382d020f4459 Staging: rtl8712: Eliminate use of _cancel_timer
    39a6e7376af0 staging: rtl8712: fix stack dump
        Reported-by: Arek Rusniak <arek.rusi@gmail.com>
    a1471eb9da4a staging: rtl8712: fix stack dump
        Reported-by: Arek Rusniak <arek.rusi@gmail.com>
bb046fef9668 staging: panel: register reboot
    7d98c63edc45 staging: panel: fix stackdump
c84a083b995b Staging: dgnc: Use goto for spinlock release before return
    5ec293650827 Staging: dgnc: release the lock before testing for nullity
5a2ca43fa54f Staging: lustre: Iterate list using list_for_each_entry
    a8da8e528cb0 staging: lustre: o2iblnd: Fix crash in kiblnd_handle_early_rxs()
45de432775d6 Staging: rtl8712: Use memdup_user() instead of copy_from_user()
    b5eed730bd3f staging: rtl8712: freeing an ERR_PTR
99d56ff7c1c2 staging/lustre: Always try kmalloc first for OBD_ALLOC_LARGE
    7f804436fbd3 staging: lustre: remove unused variable warning
d5b3f1dccee4 staging: unisys: move timskmod.h functionality
    1fc07f99134b staging: unisys: Allow visorbus to autoload
53490b545cb0 staging: unisys: move periodic_work.c into the visorbus directory
    f5ab93fa5e79 staging: unisys: remove reference of visorutil
    b99464b1da04 staging: unisys: cleanup UNISYS_VISORUTIL
795731627c74 staging: unisys: Clean up device sysfs attributes
    fd012d0def47 staging: unisys: correctly NULL-terminate visorbus sysfs attribute array
b32c4997c03d staging: unisys: Move channel creation up the stack
    a3ef1a8e9391 staging: unisys: Lock visorchannels associated with devices
af8a819a2513 [media] lirc_imon: simplify error handling code
    b833d0df943d [media] lirc_imon: do not leave imon_probe() with mutex held
ee0ec1946ec2 lustre: ptlrpc: Replace uses of OBD_{ALLOC,FREE}_LARGE
    c3eec59659cf staging: lustre: ptlrpc: kfree used instead of kvfree
68345dd7bc26 staging: rtl8188eu: rtw_mlme_ext.c: unexport message callbacks
    f996bd10a049 staging: rtl8188eu: don't define OnAuth() in non-AP mode
782eddd748d9 staging: rtl8188eu: unexport internal functions
    2b49e0fce249 staging: rtl8188eu: don't define issue_asocrsp() in non-AP mode
a9b693cd77d7 Staging: rts5208: helper function to manage delink states
    6c6f95a9351b Staging: rts5208: fix CHANGE_LINK_STATE value
4f016420d368 Staging: lustre: obdclass: Use kasprintf
    436630983b00 staging: lustre: obd_mount: use correct niduuid suffix.
1a02387063fb staging: comedi: me4000: remove 'board' from me4000_ai_insn_read()
    358d577ce1a7 staging: comedi: me4000: use bitwise AND instead of logical
db0fa0cb0157 scatterlist: use sg_phys()
    3e6110fd5480 Revert "scatterlist: use sg_phys()"
        Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
d42ab0838d04 staging: wilc1000: use id value as argument
    6ae9ac0b61a7 staging: wilc1000: off by one in get_handler_from_id()
fd2bb310ca3d Staging: iio: Move evgen interrupt generation to irq_work
    2e9fed42209b staging: iio: dummy: complete IIO events delivery to userspace
    aea545fa9081 staging: iio: select IRQ_WORK for IIO_DUMMY_EVGEN
367e8560e8d7 Staging: fbtbt: Replace timespec with ktime_t
    fc1e2c8ea85e Staging: fbtft: Fix bug in fbtft-core
56293ff232b9 staging: wilc1000: linux_wlan_spi: include header
    92af89d37c41 staging: wilc1000: restore wilc_spi_dev variable
a4ab1ade75a3 staging: wilc1000: replace drvHandler and hWFIDrv with hif_drv
    cc28e4bf6e52 staging: wilc1000: fix a bug when unload driver
8b8ad7bc90bc staging: wilc1000: rename wilc_firmware in the struct wilc
    6f72ed75e5c5 staging: wilc1000: fix rmmod failure
12ba5416dc77 staging: wilc1000: assign pointer of g_linux_wlan to sdio device data
    702c0e50f6b3 staging: wilc1000: fix build error on SPI
ebd43516d387 Staging: panel: usleep_range is preferred over udelay
    b64a1cbef6df Revert "Staging: panel: usleep_range is preferred over udelay"
        Reported-by: Huang, Ying <ying.huang@intel.com>
c1af9db78950 staging: wilc1000: call linux_sdio_init instead of io_init
    e663900aed9b staging: wilc1000: fix always return 0 error
e0c961bdaf27 iio: adc: mxs-lradc: Prefer using the BIT macro
    f89c2b39ce67 staging:iio:mxs-lradc Fix large integer implicitly truncated to unsigned warning
14b93bb6bbf0 staging: comedi: adv_pci_dio: separate out PCI-1760 support
    c71f20ee7634 staging: comedi: adv_pci1760: Do not return EINVAL for CMDF_ROUND_DOWN.
562ed3f1f78a staging/wilc1000: pass struct wilc to most linux_wlan.c functions
    c6866cc4be96 staging: wilc1000: tcp_process: fix a build warning
8e1d6c336d74 greybus: loopback: drop bus aggregate calculation
    5a70524bbf3b staging: greybus: loopback: Hold per-connection mutex across operations
    8563a49c4382 staging: greybus: remove unused kfifo_ts
e7f2b70fd3a9 staging: most: replace multiple if..else with table lookup
    13c45007e0a8 staging: most: use format specifier "%s" in snprintf
bd2f348db503 goldfish: refactor goldfish platform configs
    b0e302b40873 staging: goldfish: use div64_s64 instead of do_div
a44eb74cd413 staging/android: move SW_SYNC_USER to a debugfs file
    0fd9da9a979a staging/android: sync_debug: unproxify debugfs files' fops
080e6795cba3 staging: comedi: ni_mio_common: Cleans up/clarifies ni_ao_cmd
    15d5193104a4 staging: comedi: ni_mio_common: fix AO timer off-by-one regression
        Reported-by: Éric Piel <piel@delmic.com>
06fb9336acdc staging: wilc1000: wilc_wfi_cfgoperations.c: replaces PRINT_ER with netdev_err
    d99ee289b4ac staging: wilc1000: fix mgmt_tx()
633d27399514 staging/rdma/hfi1: use mod_timer when appropriate
    87717f0a7543 IB/hfi1: Remove unreachable code
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
6fba39cf32a3 staging: sm750fb: use BIT macro for PANEL_DISPLAY_CTRL single-bit fields
    992f961480d2 staging: sm750fb: Correctly set CLOCK_PHASE bit of display controller.
e280d71bea18 staging: rtl8723au: use list_for_each_entry*()
6fe5efa1415c staging: octeon: Convert create_singlethread_workqueue()
    8ad253fc0a09 staging: octeon: Fix line over 80 characters
de71daf5c839 Staging: fsl-mc: Replace pr_debug with dev_dbg
    e79e344a3d2f staging: fsl-mc: fix incorrect type passed to dev_dbg macros
        Reported-by: Guenter Roeck <linux@roeck-us.net>
454b0ec8bf99 Staging: fsl-mc: Replace pr_err with dev_err
    2e1159017168 staging: fsl-mc: fix incorrect type passed to dev_err macros
        Reported-by: Guenter Roeck <linux@roeck-us.net>
9899cb68c6c2 Staging: lustre: rpc: Use sizeof type *pointer instead of sizeof type.
    dc7ffefdcc28 staging/lustre/lnet: Fix allocation size for sv_cpt_data
aa94f2888825 staging: comedi: ni_660x: tidy up ni_660x_set_pfi_routing()
    479826cc8611 staging: comedi: ni_660x: fix missing break in switch statement
b42ca86ad605 staging: comedi: ni_tio: remove BUG() checks for ni_tio_get_clock_src()
    55abe8165f31 staging: comedi: ni_tio: fix buggy ni_tio_clock_period_ps() return value
60b3109e5e2d staging: dgnc: use tty_alloc_driver instead of kcalloc
    a0ca97b808c0 staging: dgnc: Fix a NULL pointer dereference
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
7c9574090d30 staging: comedi: dt2811: simplify A/D reference configuration
    5ac5c3bcf574 staging: comedi: dt2811: fix a precedence bug
ba3e67001b42 greybus: SPI: convert to a gpbridge driver
    770b03c2ca4a staging: greybus: spilib: fix use-after-free after deregistration
5a8d651a2bde usb: gadget: move gadget API functions to udc-core
    0bf048abebb6 staging: emxx_udc: allow modular build
32c8728d87dc staging/lustre/ptlrpc: reorganize ptlrpc_request
    9275036ec08f staging: lustre: ptlrpc: restore 64-bit time for struct ptlrpc_cli_req
1e1f9ff406fd staging: lustre: llite: break ll_getxattr_common into 2 functions
    d6a80699bad7 staging: lustre: hide unused variable
2518ac59eb27 staging: wilc1000: Replace kthread with workqueue for host interface
    1d4f1d53e1e2 Staging: wilc1000: Fix kernel Oops on opening the device
        Reported-by: Nicolas Ferre <Nicolas.Ferre@microchip.com>
11fb998986a7 mm: move most file-based accounting to the node
    7894c263f200 staging: lustre: fix unstable pages tracking
8f18c8a48b73 staging: lustre: lmv: separate master object with master stripe
    17556cdbe6ed staging: lustre: lmv: correctly iput lmo_root
70a251f68dea staging: lustre: obd: decruft md_enqueue() and md_intent_lock()
    26d2bf1e7152 staging: lustre: mdc: Make IT_OPEN take lookup bits lock
5c2ba8b85e35 rtl8712: pwrctrl_priv: Replace semaphore lock with mutex
    4db7c0bebdff staging: rtl8712: fix double lock bug in SetPSModeWorkItemCallback()
0a1200991234 staging: lustre: cleanup lustre_lib.h
    4091af4a0948 staging: lustre: Fix variable type declaration after refactoring
3d44a78f0d8b staging: rtl8712: Remove unnecessary 'else'
    8681a1d47b33 Fixes: 3d44a78f0d8b ("staging: rtl8712: Remove unnecessary 'else'")
1e1db2a97be5 staging: lustre: clio: Revise read ahead implementation
    186ae2f38ac2 staging: lustre: remove invariant in cl_io_read_ahead()
e10a431b3fd0 staging: lustre: lov: move LSM to LOV layer
    d4bcd7e75cce staging: lustre: restore initialization of return code
2eb9d8cbb3c3 staging: rts5208: rtsx.c: Alloc sizeof struct
    ef5aa934cf03 staging: rts5208: rtsx.c: Fix invalid use of sizeof in rtsx_probe()
cf9caf192988 staging: vc04_services: Replace dmac_map_area with dmac_map_sg
    ff92b9e3c9f8 staging: vc04_services: Fix bulk cache maintenance
        Reported-by: Stefan Wahren <stefan.wahren@i2se.com>
03140dabf584 staging: sm750fb: Replace functions CamelCase naming with underscores.
    52d0744d751d staging: sm750fb: prefix global identifiers
43a07e48af44 staging: iio: ad9832: clean-up regulator 'reg'
    6826fdbd2e20 staging: iio: ad9832: allocate data before using
735bb39ca3be staging: wilc1000: simplify vif[i]->ndev accesses
    dda037057a57 staging: wilc1000: fix to set correct value for 'vif_num'
    0e490657c721 staging: wilc1000: Fix problem with wrong vif index
8d78f0f2ba76 staging: lustre: lnet: cleanup some of the > 80 line issues
    5bba129eaa00 staging: lustre: lnet: memory corruption in selftest
bdfb95c4baab staging: greybus: remove timesync protocol support
    1e029b836108 staging: greybus: arche: remove timesync remains
0cec463e391e staging: bcm2835-audio: Simplify callback structure for write data
    7dd551e20ea2 staging: bcm2835-audio: use | instead of || otherwise result is just boolean 1
c075b6f2d357 staging: sm750fb: Replace POKE32 and PEEK32 by inline functions
    16808dcf605e staging: sm750fb: Fix parameter mistake in poke32
66812da3a689 staging: octeon: Use net_device_stats from struct net_device
    69eb1596b4df staging: octeon: remove unused variable
e31447f934d3 staging: rtl8188eu: Replace x==NULL by !x
    5629ff0ffe77 staging: rtl8188eu: fix some inverted conditions
7676b72428e8 staging: ks7010: move comparison to right hand side
    7bb6313d011f staging: ks7010: fix off by one error
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
5b6f9b95f7ae staging: unisys: visorbus: get rid of create_bus_type.
    b0512faf6082 staging: unisys: visorbus: fix kernel BUG discovered by day0 testing
da22013f7df4 atomisp: remove indirection from sh_css_malloc
    bfc133515ffb media: staging: atomisp: sh_css_calloc shall return a pointer to the allocated space
184f8e0981ef atomisp: remove satm kernel
    5162e1ae0c58 staging: atomisp: satm include directory is gone
204f672255c2 staging: android: ion: Use CMA APIs directly
    6e42d12ce0da staging: android: ion: Remove leftover comment
    6d79bd5bb6c7 staging: android: ion: Zero CMA allocated memory
    f292b9b28097 staging: ion: Fix ion_cma_heap allocations
15c6098cfec5 staging: android: ion: Remove ion_handle and ion_client
    35ba13e43cfb staging: android: ion: Clean unused debug_show memeber of the heap object
    f7a320ffebe2 staging: android: uapi: drop definitions of removed ION_IOC_{FREE,SHARE} ioctls
5b29aaaa1e3c staging: rtl8188eu: removes comparison to null
    011ce71609e9 staging: rtl8188eu: memory leak in rtw_free_cmd_obj()
c6f7f2f4591f staging: ccree: refactor LLI access macros
    e0b3f39092a1 staging: ccree: fix 64 bit scatter/gather DMA ops
        Reported-by: Stuart Yoder <stuart.yoder@arm.com>
b93ad9a067e1 staging: fsl-mc: be consistent when checking strcmp() return
    47f078339be9 Revert "staging: fsl-mc: be consistent when checking strcmp() return"
b7e607bf33a2 staging: ccree: move FIPS support to kernel infrastructure
    dc5591dc9c03 staging: ccree: fix fips event irq handling build
62f39d49d168 staging: pi433: reduce stack size in tx thread
    da3761feaec3 Staging: Pi433: Bugfix for wrong argument for sizeof() in TX thread
b03679f6a41a staging: lustre: uapi: remove obd_ioctl_popdata() wrapper
    3ca121c2f4be staging: lustre: obdclass: return -EFAULT if copy_to_user() fails
8e55b6fd0660 staging: lustre: lnet: replace list_for_each with list_for_each_entry
    a93639090a27 staging: lustre: lnet: Fix recent breakage from list_for_each conversion
edf188bee1d9 MIPS: Octeon: Remove usage of cvmx_wait() everywhere.
    0590cdfead8c staging: octeon-usb: use __delay() instead of cvmx_wait()
95b3b4238581 staging: rtl8723bs: remove ternary operators in assignmet statments
    f3c3a0b66ab5 staging: rtl8723bs: remove unused variables
b7749656e946 staging: rtl8188eu: Convert timers to use timer_setup()
    d96e8c10f81f staging: rtl8188eu: Fix bug introduced by convert timers to use timer_setup()
f8af6a323368 staging: rtlwifi: Convert timers to use timer_setup()
    2f9115820982 staging: rtlwifi: Remove unused variable
1b10a0316e2d staging: most: video: remove aim designators
    1f447e51c0b9 staging: most: video: fix registration of an empty comp core_component
b3ec9a6736f2 staging: ccree: staging: ccree: replace sysfs by debugfs interface
    c7fc46fd1410 staging: ccree: mark debug_regs[] as static
621b08eabcdd media: staging/imx: remove static media link arrays
    107927fa597c media: imx: Clear fwnode link struct for each endpoint iteration
6106c0f82481 staging: lustre: lnet: convert selftest to use workqueues
    7d70718de014 staging: lustre: lnet/selftest: fix compile error on UP build
        Reported-by: kbuild test robot <fengguang.wu@intel.com>
    e3675875c0a5 staging: lustre: lnet: avoid uninitialized return value
e9d4f0b9f559 staging: lustre: llite: use d_splice_alias for directories.
    1d6e65bedf58 staging: lustre: fix error deref in ll_splice_alias().
2b2ea09e74a5 staging:r8188eu: Use lib80211 to decrypt WEP-frames
    7775665aadc4 staging: rtl8188eu: Fix module loading from tasklet for WEP encryption
2baddf262e98 staging: lustre: use memdup_user to allocate memory and copy from user
    a139834ed6ce staging: lustre: selftest: freeing an error pointer
6bd082af7e36 staging:r8188eu: use lib80211 CCMP decrypt
    84cad97a717f staging: rtl8188eu: Fix module loading from tasklet for CCMP encryption
52e17089d185 media: imx: Don't initialize vars that won't be used
    2b7db29b7919 media: imx-media-csi: Fix inconsistent IS_ERR and PTR_ERR
    dd5747fb9235 media: imx-media-csi: Do not propagate the error when pinctrl is not found
    890f27693f2a media: imx: work around false-positive warning
b83b8b1881c4 staging:r8188eu: Use lib80211 to support TKIP
    69a1d98c831e Revert "staging:r8188eu: Use lib80211 to support TKIP"
9673d9f6f44b staging: mt7621-mmc: Refactor suspend, resume
    ace488268a90 staging: mt7621-mmc: Fix typo in function parameters
8f2395586cf0 staging: mt7621-mmc: Refactor msdc_init_gpd_bd
    e396de684ebe staging: mt7621-mmc: Fix calculation typo in msdc_init_gpd_bd
        Reported-by: NeilBrown <neil@brown.name>
9533b292a7ac IB: remove redundant INFINIBAND kconfig dependencies
    533d1daea8d8 IB: Revert "remove redundant INFINIBAND kconfig dependencies"
7d7cdb4fa552 staging: most: video: remove debugging code
    f7887f741e55 staging: most: video: fix build warnings
        Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
d4b4aaba515a staging: wilc1000: fix line over 80 characters in host_int_parse_join_bss_param()
    979eb0c96be9 staging: wilc1000: Avoid overriding rates_no while parsing ies element.
0922c0084b91 staging: lustre: remove libcfs_all from ptlrpc
    7a5abc3d9699 staging: lustre: fix more build errors in errno.c
        Reported-by: kbuild test robot <lkp@intel.com>
    3c24e170d49d staging: lustre: fix build error in errno.c
        Reported-by: kbuild test robot <lkp@intel.com>
73d65c8d1a85 staging: lustre: remove libcfs_all.h from lustre/include/*.h
    f7a258a8a450 staging: lustre: fix build error in mdc_request.c
        Reported-by: kbuild test robot <lkp@intel.com>
1daddbc8dec5 staging: vboxvideo: Update driver to use drm_dev_register.
    1ebafd1561a0 staging: vboxvideo: Fix IRQs no longer working
ff52a57a7a42 staging: wilc1000: move the allocation of cmd out of wilc_enqueue_cmd()
    15c3381e3abb staging: wilc1000: fix static checker warning to unlock mutex in wilc_deinit()
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
62b6215c11ea staging: mt7621-pinctrl: make use of pinctrl_utils_reserve_map
    0be0debe4a8a staging: mt7621-pinctrl: init *map to NULL for correct memory assignation
e12a1a6e087b staging: mt7621-pinctrl: refactor rt2880_pinctrl_dt_node_to_map function
    0ca1f90861b6 staging: mt7621-pinctrl: use pinconf-generic for 'dt_node_to_map' and 'dt_free_map'
        Reported-by: NeilBrown <neil@brown.name>
    cd56a5141331 staging: mt7621-pinctrl: fix uninitialized variable ngroups
515ce733e86e staging:r8188eu: Use lib80211 to encrypt (CCMP) tx frames
    c5fe50aaa20c Revert "staging:r8188eu: Use lib80211 to encrypt (CCMP) tx frames"
e624c58cf8eb staging: wilc1000: refactor code to avoid use of wilc_set_multicast_list global
    ae26aa844679 staging: wilc1000: Avoid GFP_KERNEL allocation from atomic context.
b3ee105c332e staging: wilc1000: refactor code to move initilization in wilc_netdev_init()
    b4a01d8fa311 staging: wilc1000: fix null checks on wilc
d7ca3a71545b staging: bcm2835-audio: Operate non-atomic PCM ops
    649496b60300 staging: bcm2835-audio: double free in init error path
acceb12a9f8b staging: wilc1000: refactor code to avoid static variables for config parameters
    8f6b8ed3b02e staging: wilc1000: fix incorrect allocation size for structure
35f3288c453e staging: vboxvideo: Atomic phase 1: convert cursor to universal plane
    c00e1d09e305 staging: vboxvideo: unlock on error in vbox_cursor_atomic_update()
2a54e3259e2a staging: mt7621-mmc: Remove #if 0 blocks in sd.c
    e894075934a4 staging: mt7621-mmc: Fix incompletely removed #if 0 block in sd.c
        Reported-by: NeilBrown <neil@brown.name>
745eeeac68d7 staging: mt7621-pci: factor out 'mt7621_pcie_enable_port' function
    e51844bf8251 staging: mt7621-pci: fix reset lines for each pcie port
        Reported-by: NeilBrown <neil@brown.name>
42e764d05712 staging: tegravde: replace bit assignment with macro
    9483804a725a media: staging: tegra-vde: print long unsigned using %lu format specifier
        Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
2159fb372929 staging: olpc_dcon: olpc_dcon_xo_1.c: Switch to the gpio descriptor interface
    ae0a6d2017f7 staging: olpc_dcon_xo_1: add missing 'const' qualifier
05f9d4a0c8c4 staging: erofs: use the new LZ4_decompress_safe_partial()
    7962e63a2f41 staging: erofs: fix undefined LZ4_decompress_safe_partial()
        Reported-by: kbuild test robot <lkp@intel.com>
cc9c58ef6e06 staging: iio: adc: ad7280a: use devm_* APIs
    794e20ee038e staging: iio: adc: ad7280a: fix overwrite of the returned value
f27e47bc6b8b staging: vchiq: use completions instead of semaphores
    a772f116702e staging: vchiq: switch to wait_for_completion_killable
        Reported-by: Arnd Bergmann <arnd@arndb.de>
    852b2876a8a8 staging: vchiq: rework remove_event handling
187ac53e590c staging: vchiq_arm: rework probe and init functions
    9b9c87cf5178 staging: vc04_services: Fix an error code in vchiq_probe()
147ccfd45102 staging: wilc1000: handle mgmt_frame_register ops from cfg82011 context
    b62ce02e157a staging: wilc1000: fix registration frame size
8f1a0ac1eba7 staging: wilc1000: handle scan operation callback from cfg80211 context
    0b7b9b6c3dee staging: wilc1000: fix NULL dereference inside wilc_scan()
        Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
ff5979ad8636 staging: vchiq_2835_arm: quit using custom down_interruptible()
    061ca1401f96 staging: vchiq_2835_arm: revert "quit using custom down_interruptible()"
2b3e88ea6528 net: phy: improve phy state checking
    49230b49c439 staging: octeon: fix broken phylib usage
4165079ba328 net: switch secpath to use skb extension infrastructure
    8762cdcd1d50 staging: octeon: fix build failure with XFRM enabled
        Reported-by: Guenter Roeck <linux@roeck-us.net>
67673ed55084 media: staging/imx: rearrange group id to take in account IPU
    55dde5094698 media: imx: vdic: Fix wrong CSI group ID
1b8b589d9103 staging: fsl-dpaa2: ethsw: Remove getting PORT_BRIDGE_FLAGS
    fd80a14363ee staging: fsl-dpaa2: ethsw: Remove unused port_priv variable
131ac62253db staging: most: core: use device description as name
    3970d0d81816 staging: most: core: replace strcpy() by strscpy()
2411a336c8ce staging: fieldbus: arcx-anybus: change custom -> mmio regmap
    0f2692f7f282 staging: fieldbus: Fix build error without CONFIG_REGMAP_MMIO
        Reported-by: Hulk Robot <hulkci@huawei.com>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 13:24         ` Mauro Carvalho Chehab
@ 2019-06-14 13:31           ` Laurent Pinchart
  2019-06-14 13:54             ` Mauro Carvalho Chehab
  2019-06-14 14:56             ` Mark Brown
  2019-06-14 13:58           ` Greg KH
  1 sibling, 2 replies; 77+ messages in thread
From: Laurent Pinchart @ 2019-06-14 13:31 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit-discuss

Hi Mauro,

On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 14 Jun 2019 13:12:22 +0300 Laurent Pinchart escreveu:
> > On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:
> >> Em Thu, 06 Jun 2019 19:24:35 +0300 James Bottomley escreveu:
> >>   
> >>> [splitting issues to shorten replies]
> >>> On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:  
> >>>> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:    
> >>>>> This is probably best done as two separate topics
> >>>>> 
> >>>>> 1) Pull network: The pull depth is effectively how many pulls your
> >>>>> tree does before it goes to Linus, so pull depth 0 is sent straight
> >>>>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> >>>>> and so on.  We've previously spent time discussing how increasing
> >>>>> the pull depth of the network would reduce the amount of time Linus
> >>>>> spends handling pull requests.  However, in the areas I play, like
> >>>>> security, we seem to be moving in the opposite direction
> >>>>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> >>>>> we're deciding to move to a flat tree model, where everything is
> >>>>> depth 0, that's fine, I just think we could do with making a formal
> >>>>> decision on it so we don't waste energy encouraging greater tree
> >>>>> depth.    
> >>>> 
> >>>> That depth "change" was due to the perceived problems that having a
> >>>> deeper pull depth was causing.  To sort that out, Linus asked for
> >>>> things to go directly to him.    
> >>> 
> >>> This seems to go beyond problems with one tree and is becoming a trend.
> >>>   
> >>>> It seems like the real issue is the problem with that subsystem
> >>>> collection point, and the fact that the depth changed is a sign that
> >>>> our model works well (i.e. everyone can be routed around.)    
> >>> 
> >>> I'm not really interested in calling out "problem" maintainers, or
> >>> indeed having another "my patch collection method is better than yours"
> >>> type discussion.  What I was fishing for is whether the general
> >>> impression that greater tree depth is worth striving for is actually
> >>> correct, or we should all give up now and simply accept that the
> >>> current flat tree is the best we can do, and, indeed is the model that
> >>> works best for Linus.  I get the impression this may be the case, but I
> >>> think making sure by having an actual discussion among the interested
> >>> parties who will be at the kernel summit, would be useful.  
> >> 
> >> On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> >> 
> >> patch author -> media/driver maintainer -> subsystem maintainer -> Linus  
> > 
> > I'd like to use this opportunity to ask again for pull requests to be
> > pulled instead of cherry-picked.
> 
> There are other forums for discussing internal media maintainership,
> like the weekly meetings we have and our own mailing lists.

Is this really an internal matter? If the pull network depth
increases, which is the topic of this e-mail thread, I think it's
important to decide how pull requests should be handled along the
pull chain. This becomes even more important for pull requests that
target multiple subsystems (this affects V4L2 and DRM, but not only
them), to avoid conflicts, but it is also a topic worth discussing from
a testing and stability point of view: cherry-picking instead of
merging a branch voids, to some extent, the tests the submitter
performed on their original branch.

-- 
Regards,

Laurent Pinchart


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 13:31           ` Laurent Pinchart
@ 2019-06-14 13:54             ` Mauro Carvalho Chehab
  2019-06-14 14:08               ` Laurent Pinchart
  2019-06-14 14:56             ` Mark Brown
  1 sibling, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14 13:54 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, ksummit-discuss

On Fri, 14 Jun 2019 16:31:32 +0300,
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> > There are other forums for discussing internal media maintainership,
> > like the weekly meetings we have and our own mailing lists.  
> 
> Is this really an internal matter ?

Yes.

Right now, each subsystem has its own criteria and procedures for
handling patches, in a way that best fits the subsystem's needs.

Also, discussing internal subsystem-specific aspects on a forum where
the affected developers don't participate is not nice.

Thanks,
Mauro


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 13:24         ` Mauro Carvalho Chehab
  2019-06-14 13:31           ` Laurent Pinchart
@ 2019-06-14 13:58           ` Greg KH
  2019-06-14 15:11             ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 77+ messages in thread
From: Greg KH @ 2019-06-14 13:58 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit-discuss

On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 14 Jun 2019 13:12:22 +0300
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:
> 
> > Hi Mauro,
> > 
> > On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:
> > > Em Thu, 06 Jun 2019 19:24:35 +0300 James Bottomley escreveu:
> > >   
> > > > [splitting issues to shorten replies]
> > > > On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:  
> > > >> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:    
> > > >>> This is probably best done as two separate topics
> > > >>> 
> > > >>> 1) Pull network: The pull depth is effectively how many pulls your
> > > >>> tree does before it goes to Linus, so pull depth 0 is sent straight
> > > >>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> > > >>> and so on.  We've previously spent time discussing how increasing
> > > >>> the pull depth of the network would reduce the amount of time Linus
> > > >>> spends handling pull requests.  However, in the areas I play, like
> > > >>> security, we seem to be moving in the opposite direction
> > > >>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> > > >>> we're deciding to move to a flat tree model, where everything is
> > > >>> depth 0, that's fine, I just think we could do with making a formal
> > > >>> decision on it so we don't waste energy encouraging greater tree
> > > >>> depth.    
> > > >> 
> > > >> That depth "change" was due to the perceived problems that having a
> > > >> deeper pull depth was causing.  To sort that out, Linus asked for
> > > >> things to go directly to him.    
> > > > 
> > > > This seems to go beyond problems with one tree and is becoming a trend.
> > > >   
> > > >> It seems like the real issue is the problem with that subsystem
> > > >> collection point, and the fact that the depth changed is a sign that
> > > >> our model works well (i.e. everyone can be routed around.)    
> > > > 
> > > > I'm not really interested in calling out "problem" maintainers, or
> > > > indeed having another "my patch collection method is better than yours"
> > > > type discussion.  What I was fishing for is whether the general
> > > > impression that greater tree depth is worth striving for is actually
> > > > correct, or we should all give up now and simply accept that the
> > > > current flat tree is the best we can do, and, indeed is the model that
> > > > works best for Linus.  I get the impression this may be the case, but I
> > > > think making sure by having an actual discussion among the interested
> > > > parties who will be at the kernel summit, would be useful.  
> > > 
> > > On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> > > 
> > > patch author -> media/driver maintainer -> subsystem maintainer -> Linus  
> > 
> > I'd like to use this opportunity to ask again for pull requests to be
> > pulled instead of cherry-picked.
> 
> There are other forums for discussing internal media maintainership,
> like the weekly meetings we have and our own mailing lists.

You all have weekly meetings?  That's crazy...

Anyway, I'll reiterate Laurent here: keeping things as a pull instead
of cherry-picking does make things a lot easier for contributors.  I
know I'm guilty of it as well as a maintainer, but only until I start
trusting the submitter.  Once that happens, pulling is _much_ easier
for a maintainer than taking individual patches, for the usual reason
that linux-next has already verified that the sub-tree works properly
before I merge it in.

Try it; it might reduce your load, as it has for me.

thanks,

greg k-h


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 13:54             ` Mauro Carvalho Chehab
@ 2019-06-14 14:08               ` Laurent Pinchart
  0 siblings, 0 replies; 77+ messages in thread
From: Laurent Pinchart @ 2019-06-14 14:08 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, ksummit-discuss

Hi Mauro,

On Fri, Jun 14, 2019 at 10:54:13AM -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 14 Jun 2019 16:31:32 +0300 Laurent Pinchart escreveu:
> 
> >> There are other forums for discussing internal media maintainership,
> >> like the weekly meetings we have and our own mailing lists.  
> > 
> > Is this really an internal matter ?
> 
> Yes.
> 
> Right now, each subsystem have their own criteria and procedures for
> handling patches in a way that it fits better for the subsystem's need.
> 
> Also, discussing internal subsystem-specific aspects on a forum where
> the affected developers don't participate is not nice.

I realise that my first reply was badly worded and could be interpreted
as raising an internal matter. While I have my preferences among the
multiple available solutions, what I'm after here is discussing how to
best handle increased pull network depths in general (assuming we
conclude, in reply to James' initial question, that increased depths
are desired). In particular, as Linus will ultimately pull subsystems
into his tree, we need to establish best practice rules that won't
affect this aspect of his work negatively.

Another point, besides pull vs. cherry-picking, is whether all pull
requests should use signed tags, or whether that requirement only
applies to the top level.
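
Either way, the mechanics at each level of the chain are cheap; a
minimal sketch of generating the conventional pull-request text in a
throwaway repository (invented names; a real submission would use
'git tag -s' and the receiver 'git verify-tag', which need GPG keys
and are therefore replaced by '-a' here):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git config user.email sub@example.com && git config user.name Submitter
git commit -q --allow-empty -m "base"
base=$(git rev-parse HEAD)
echo fix > fix.c && git add fix.c && git commit -q -m "media: example fix"

# A signed tag would be 'git tag -s'; '-a' avoids needing a GPG key here.
git tag -a -m "example fixes" fixes-for-sketch

# Boilerplate text for the mail sent up the pull chain.
req=$(git request-pull "$base" . fixes-for-sketch)
printf '%s\n' "$req" | head -n 7
```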

-- 
Regards,

Laurent Pinchart


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 13:31           ` Laurent Pinchart
  2019-06-14 13:54             ` Mauro Carvalho Chehab
@ 2019-06-14 14:56             ` Mark Brown
  1 sibling, 0 replies; 77+ messages in thread
From: Mark Brown @ 2019-06-14 14:56 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: Mauro Carvalho Chehab, James Bottomley, ksummit-discuss


On Fri, Jun 14, 2019 at 04:31:32PM +0300, Laurent Pinchart wrote:
> On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:

> > There are other forums for discussing internal media maintainership,
> > like the weekly meetings we have and our own mailing lists.

> Is this really an internal matter ? If the pull network depths
> increases, which is the topic of this e-mail thread, I think it's
> important to decide on how pull requests should be handled along the

It's at least worth noting that this happens, since cherry-picking
makes it less obvious to scripting that a pull model is going on, so
things might look a lot flatter than they really are.
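
A sketch of what such scripting sees, in a throwaway repository (in a
real tree the 'rev-list' counts would be run over e.g. a release
range):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git config user.email m@example.com && git config user.name Maint
git commit -q --allow-empty -m "base"

# A patch routed via a sub-maintainer arrives as a merge, visible to tooling.
git checkout -q -b sub
git commit -q --allow-empty -m "patch via sub-maintainer"
git checkout -q main
git merge -q --no-ff -m "Merge branch 'sub'" sub

# A cherry-picked/applied patch leaves no merge commit: the hop is invisible.
git commit -q --allow-empty -m "patch applied directly"

merges=$(git rev-list --count --merges HEAD)
direct=$(git rev-list --count --no-merges HEAD)
echo "merges=$merges non-merges=$direct"   # prints: merges=1 non-merges=3
```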



* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 13:58           ` Greg KH
@ 2019-06-14 15:11             ` Mauro Carvalho Chehab
  2019-06-14 15:23               ` James Bottomley
                                 ` (2 more replies)
  0 siblings, 3 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14 15:11 UTC (permalink / raw)
  To: Greg KH; +Cc: James Bottomley, media-submaintainers, ksummit-discuss

On Fri, 14 Jun 2019 15:58:07 +0200,
Greg KH <greg@kroah.com> wrote:

> On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:
> > Em Fri, 14 Jun 2019 13:12:22 +0300
> > Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:
> >   
> > > Hi Mauro,
> > > 
> > > On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:  
> > > > Em Thu, 06 Jun 2019 19:24:35 +0300 James Bottomley escreveu:
> > > >     
> > > > > [splitting issues to shorten replies]
> > > > > On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:    
> > > > >> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:      
> > > > >>> This is probably best done as two separate topics
> > > > >>> 
> > > > >>> 1) Pull network: The pull depth is effectively how many pulls your
> > > > >>> tree does before it goes to Linus, so pull depth 0 is sent straight
> > > > >>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> > > > >>> and so on.  We've previously spent time discussing how increasing
> > > > >>> the pull depth of the network would reduce the amount of time Linus
> > > > >>> spends handling pull requests.  However, in the areas I play, like
> > > > >>> security, we seem to be moving in the opposite direction
> > > > >>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> > > > >>> we're deciding to move to a flat tree model, where everything is
> > > > >>> depth 0, that's fine, I just think we could do with making a formal
> > > > >>> decision on it so we don't waste energy encouraging greater tree
> > > > >>> depth.      
> > > > >> 
> > > > >> That depth "change" was due to the perceived problems that having a
> > > > >> deeper pull depth was causing.  To sort that out, Linus asked for
> > > > >> things to go directly to him.      
> > > > > 
> > > > > This seems to go beyond problems with one tree and is becoming a trend.
> > > > >     
> > > > >> It seems like the real issue is the problem with that subsystem
> > > > >> collection point, and the fact that the depth changed is a sign that
> > > > >> our model works well (i.e. everyone can be routed around.)      
> > > > > 
> > > > > I'm not really interested in calling out "problem" maintainers, or
> > > > > indeed having another "my patch collection method is better than yours"
> > > > > type discussion.  What I was fishing for is whether the general
> > > > > impression that greater tree depth is worth striving for is actually
> > > > > correct, or we should all give up now and simply accept that the
> > > > > current flat tree is the best we can do, and, indeed is the model that
> > > > > works best for Linus.  I get the impression this may be the case, but I
> > > > > think making sure by having an actual discussion among the interested
> > > > > parties who will be at the kernel summit, would be useful.    
> > > > 
> > > > On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> > > > 
> > > > patch author -> media/driver maintainer -> subsystem maintainer -> Linus    
> > > 
> > > I'd like to use this opportunity to ask again for pull requests to be
> > > pulled instead of cherry-picked.  
> > 
> > There are other forums for discussing internal media maintainership,
> > like the weekly meetings we have and our own mailing lists.  
> 
> You all have weekly meetings?  That's crazy...

Yep, we hold a meeting every week, usually taking about an hour, via IRC
on this channel:

	https://linuxtv.org/irc/irclogger_logs//media-maint

> 
> Anyway, I'll reiterate Laurent here: keeping things as a pull instead of
> cherry-picking does make things a lot easier for contributors.  I know
> I'm guilty of it as well as a maintainer, but that's only until I start
> trusting the submitter.  Once that happens, pulling as a maintainer is
> _much_ easier than taking individual patches, for the usual reason that
> linux-next has already verified that the sub-tree works properly before
> I merge it in.
> 
> Try it, it might reduce your load; it has for me.

If you think this is relevant to a broader audience, let me reply with
a long answer about it. I prepared it intending to send it to our
internal media maintainers' ML (added as Cc).

Yet, I still think that this is the media maintainers' dirty laundry
and should be discussed elsewhere ;-)

---

Laurent,

I already explained a few times, including during the last Media Summit,
but it seems you missed the point.

As shown on our stats:
	https://linuxtv.org/patchwork_stats.php

We receive about 400 to 1000 patches per month, meaning 18 to 45
patches per working day (22 days/month). From those, we accept about
100 to 300 patches per month (4.5 to 13.6 patches per day).

Currently, I review all accepted patches.

I have the bandwidth to review 4.5 to 13.6 patches per day, though not
without a lot of personal effort. For that, I use part of my spare time,
as I have other duties, plus I develop patches myself. So, to be able to
handle those, I typically work almost non-stop starting at 6am and
sometimes going up to 10pm. Also, when there is too much pending (as in
busy months), I handle patches during weekends as well.

However, 45 patches/day (225 patches per week) is too much for me to
review. I can't commit to handling that many patches.

That's why I review patches after a first review from the other
media maintainers. The way I identify the patches I should review is
through the pull requests I receive.

We could adopt a different workflow. For example, once a media maintainer
reviews a patch, it could be delegated to me in patchwork. This would
likely increase the time for merging things, as the workflow would change
from:

 +-------+    +------------------+    +---------------+
 | patch | -> | media maintainer | -> | submaintainer | 
 +-------+    +------------------+    +---------------+

to: 

 +-------+    +------------------+    +---------------+    +------------------+    +---------------+
 | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
 +-------+    +------------------+    +---------------+    +------------------+    +---------------+

  \------------------------v--------------------------/    \---------------------v------------------/
			Patchwork                                           Pull Request

The pull request part of the new chain could eventually be (semi-)automated
by some scripting that would compute a checksum of the received patches
I had previously reviewed. If it matches, and if the patch passes the
usual checks I run for PR patches, the script would push it to some tree.
Still, it would take more time than the previous flow.
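
The checksum idea could be sketched with stock git and coreutils; the
function name below is illustrative, not an existing tool:

```shell
# Hypothetical sketch of the checksum step: hash only the textual diff
# of a commit (no author/date/subject), so a patch reviewed earlier
# can be recognized when it comes back inside a pull request.
patch_hash() {
    # "--format=" suppresses the commit header, leaving just the diff.
    git show --format= "$1" | md5sum | awk '{print $1}'
}
```

A list of such hashes for already-reviewed patches would then be enough
for the pull-request check to look each incoming commit up.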

Also, as discussed during the media summit, in order to have that kind
of automation we would need to improve our infrastructure, moving the
tests from a noisy, hot server at my desk to some VM in the cloud, once
we get funds for it.

In any case, a discussion that affects the patch latency and our internal
procedures within the media subsystem is something that should be discussed
with the other media maintainers, and not at KS.

-

That said, one day I may no longer be able to review all accepted patches.
When that day comes, I'll just apply the pull requests I receive.

-

Finally, if you're so interested in improving our maintenance model,
I beg you: please handle the patches delegated to you:

	https://patchwork.linuxtv.org/project/linux-media/list/?series=&submitter=&state=&q=&archive=&delegate=2510

As we agreed in our media meetings, a couple of weeks ago I handled
about 60 patches that had been waiting for your review since 2017 -
basically the ones not touching the drivers you currently maintain -
but there are still 23 patches sent between 2013 and 2018 over there,
plus the 48 patches sent in 2019.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 15:11             ` Mauro Carvalho Chehab
@ 2019-06-14 15:23               ` James Bottomley
  2019-06-14 15:43                 ` Mauro Carvalho Chehab
  2019-06-14 20:52               ` Vlastimil Babka
  2019-06-15 11:01               ` Laurent Pinchart
  2 siblings, 1 reply; 77+ messages in thread
From: James Bottomley @ 2019-06-14 15:23 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Greg KH; +Cc: media-submaintainers, ksummit-discuss

On Fri, 2019-06-14 at 12:11 -0300, Mauro Carvalho Chehab wrote:
[...]
> If you think this is relevant to a broader audience, let me reply
> with a long answer about that. I prepared it and intended to reply to
> our internal media maintainer's ML (added as c/c). 
> 
> Yet, I still think that this is media maintainer's dirty laundry
> and should be discussed elsewhere ;-)
> 
> ---

So, trying not to get into a huge email thread, I think this is the key
point:

[...]
> Currently, I review all accepted patches.

This means you effectively have a fully flat tree.  Even if you use
git, you're using it like an email transmission path.  One of the
points I was making about deepening the tree is that the maintainer in
the middle should trust the submaintainer they pull from, so there
should be no need to review all the patches because of that trust. 
This is how deepening the tree helps to offload maintainers because
review is one of the biggest burdens we have and deepening the tree is
a way to share it.  Without trust, we achieve no offloading and
therefore no utility from deepening the tree.

So, to get back to the original question, which was *should* we deepen
the tree: why don't you feel you can let branches with patches you
haven't reviewed into your tree?  I've characterised it as a trust
issue above, but perhaps it isn't. I think this is a key question which
would help us understand whether a deeper tree model is at all
possible.

James

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 15:23               ` James Bottomley
@ 2019-06-14 15:43                 ` Mauro Carvalho Chehab
  2019-06-14 15:49                   ` James Bottomley
  0 siblings, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14 15:43 UTC (permalink / raw)
  To: James Bottomley; +Cc: media-submaintainers, ksummit-discuss

Em Fri, 14 Jun 2019 08:23:05 -0700
James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:

> On Fri, 2019-06-14 at 12:11 -0300, Mauro Carvalho Chehab wrote:
> [...]
> > If you think this is relevant to a broader audience, let me reply
> > with a long answer about that. I prepared it and intended to reply to
> > our internal media maintainer's ML (added as c/c). 
> > 
> > Yet, I still think that this is media maintainer's dirty laundry
> > and should be discussed elsewhere ;-)
> > 
> > ---  
> 
> So trying not to get into huge email thread, I think this is the key
> point:
> 
> [...]
> > Currently, I review all accepted patches.  
> 
> This means you effectively have a fully flat tree.

Yes.

> Even if you use
> git, you're using it like an email transmission path.  One of the
> points I was making about deepening the tree is that the maintainer in
> the middle should trust the submaintainer they pull from, so there
> should be no need to review all the patches because of that trust. 

It is not a matter of trust. It is just that the media subsystem is
a complex puzzle. Just the V4L2 API has more than 80 ioctls.

So, the goal here is to do my best to ensure that patches will get
at least two reviews.

> This is how deepening the tree helps to offload maintainers because
> review is one of the biggest burdens we have and deepening the tree is
> a way to share it.  Without trust, we achieve no offloading and
> therefore no utility from deepening the tree.

Yeah, I know one day this won't scale. The day it happens, I'll
just start picking pull requests. As we already use git, a change 
like that would be trivial.

> So, to get back to the original question, which was *should* we deepen
> the tree: why don't you feel you can let branches with patches you
> haven't reviewed into your tree?  I've characterised it as a trust
> issue above, but perhaps it isn't. I think this is a key question which
> would help us understand whether a deeper tree model is at all
> possible.

One of the aspects is that developers nowadays are specialists in a
subset of the media devices. Most of them are working on complex
camera support, which involves a subset of the APIs we have. They
have never worked on a driver that would use other parts of the API,
like DVB, Remote Controllers, TV, V4L2 streaming devices, etc.

So, having someone with a more generalist view at the end of the
review process helps to identify potential problems that might
affect other devices, especially when there are API changes involved[1].

[1] Ever since I started maintaining the subsystem, back in 2005,
almost every single kernel cycle has included API changes in
order to support new types of hardware.

Thanks,
Mauro

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 15:43                 ` Mauro Carvalho Chehab
@ 2019-06-14 15:49                   ` James Bottomley
  2019-06-14 16:04                     ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 77+ messages in thread
From: James Bottomley @ 2019-06-14 15:49 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: media-submaintainers, ksummit-discuss

On Fri, 2019-06-14 at 12:43 -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 14 Jun 2019 08:23:05 -0700
> James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:
> 
> > On Fri, 2019-06-14 at 12:11 -0300, Mauro Carvalho Chehab wrote:
> > [...]
> > > If you think this is relevant to a broader audience, let me reply
> > > with a long answer about that. I prepared it and intended to
> > > reply to
> > > our internal media maintainer's ML (added as c/c). 
> > > 
> > > Yet, I still think that this is media maintainer's dirty laundry
> > > and should be discussed elsewhere ;-)
> > > 
> > > ---  
> > 
> > So trying not to get into huge email thread, I think this is the
> > key
> > point:
> > 
> > [...]
> > > Currently, I review all accepted patches.  
> > 
> > This means you effectively have a fully flat tree.
> 
> Yes.
> 
> > Even if you use
> > git, you're using it like an email transmission path.  One of the
> > points I was making about deepening the tree is that the maintainer
> > in
> > the middle should trust the submaintainer they pull from, so there
> > should be no need to review all the patches because of that trust. 
> 
> It is not a matter of trust. It is just that the media subsystem is
> a complex puzzle. Just the V4L2 API has more than 80 ioctls.
> 
> So, the goal here is to do my best to ensure that patches will get
> at least two reviews.
> 
> > This is how deepening the tree helps to offload maintainers because
> > review is one of the biggest burdens we have and deepening the tree
> > is
> > a way to share it.  Without trust, we achieve no offloading and
> > therefore no utility from deepening the tree.
> 
> Yeah, I know one day this won't scale. The day it happens, I'll
> just start picking pull requests. As we already use git, a change 
> like that would be trivial.
> 
> > So, to get back to the original question, which was *should* we
> > deepen
> > the tree: why don't you feel you can let branches with patches you
> > haven't reviewed into your tree?  I've characterised it as a trust
> > issue above, but perhaps it isn't. I think this is a key question
> > which
> > would help us understand whether a deeper tree model is at all
> > possible.
> 
> One of the aspects is that developers nowadays are specialists in a
> subset of the media devices. Most of them are working on complex
> camera support, which involves a subset of the APIs we have. They
> have never worked on a driver that would use other parts of the API,
> like DVB, Remote Controllers, TV, V4L2 streaming devices, etc.
> 
> So, having someone with a more generalist view at the end of the
> review process helps to identify potential problems that might
> affect other devices, especially when there are API changes
> involved[1].
> 
> [1] Ever since I started maintaining the subsystem, back in 2005,
> almost every single kernel cycle has included API changes in
> order to support new types of hardware.

Actually, this leads me to the patch acceptance criteria: Is there
value in requiring reviews?  We try to do this in SCSI (usually only
one review), but if all reviewers add a

Reviewed-by:

tag, which is accumulated in the tree, your pull machinery can detect
it on all commits in the pull and give you an automated decision about
whether to accept the pull or not.  If you require two with one from a
list of designated reviewers, it can do that as well (with a bit more
complexity in the pull hook script).
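
A minimal sketch of that pull check, assuming a plain shell helper
rather than any existing tool (the function name and range argument are
illustrative):

```shell
# Walk every non-merge commit in a range (e.g. "master..FETCH_HEAD")
# and fail if any of them lacks a Reviewed-by: tag.
check_reviewed_by() {
    range="$1"
    rc=0
    for c in $(git rev-list --no-merges "$range"); do
        if ! git log -1 --format=%B "$c" | grep -q '^Reviewed-by:'; then
            echo "commit $c lacks a Reviewed-by: tag" >&2
            rc=1
        fi
    done
    return $rc
}
```

Requiring two reviews, one of them from a designated reviewer list,
would only change the grep into a count plus a match against that list.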

So here's the question: If I help you script this, would you be willing
to accept pull requests in the media tree with this check in place? 
I'm happy to do this because it's an interesting experiment to see if
we can have automation offload work currently done by humans.

James

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 15:49                   ` James Bottomley
@ 2019-06-14 16:04                     ` Mauro Carvalho Chehab
  2019-06-14 16:16                       ` James Bottomley
  0 siblings, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14 16:04 UTC (permalink / raw)
  To: James Bottomley; +Cc: media-submaintainers, ksummit-discuss

Em Fri, 14 Jun 2019 08:49:46 -0700
James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:

> On Fri, 2019-06-14 at 12:43 -0300, Mauro Carvalho Chehab wrote:
> > Em Fri, 14 Jun 2019 08:23:05 -0700
> > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:
> >   
> > > On Fri, 2019-06-14 at 12:11 -0300, Mauro Carvalho Chehab wrote:
> > > [...]  
> > > > If you think this is relevant to a broader audience, let me reply
> > > > with a long answer about that. I prepared it and intended to
> > > > reply to
> > > > our internal media maintainer's ML (added as c/c). 
> > > > 
> > > > Yet, I still think that this is media maintainer's dirty laundry
> > > > and should be discussed elsewhere ;-)
> > > > 
> > > > ---    
> > > 
> > > So trying not to get into huge email thread, I think this is the
> > > key
> > > point:
> > > 
> > > [...]  
> > > > Currently, I review all accepted patches.    
> > > 
> > > This means you effectively have a fully flat tree.  
> > 
> > Yes.
> >   
> > > Even if you use
> > > git, you're using it like an email transmission path.  One of the
> > > points I was making about deepening the tree is that the maintainer
> > > in
> > > the middle should trust the submaintainer they pull from, so there
> > > should be no need to review all the patches because of that trust.   
> > 
> > It is not a matter of trust. It is just that the media subsystem is
> > a complex puzzle. Just the V4L2 API has more than 80 ioctls.
> > 
> > So, the goal here is to do my best to ensure that patches will get
> > at least two reviews.
> >   
> > > This is how deepening the tree helps to offload maintainers because
> > > review is one of the biggest burdens we have and deepening the tree
> > > is
> > > a way to share it.  Without trust, we achieve no offloading and
> > > therefore no utility from deepening the tree.  
> > 
> > Yeah, I know one day this won't scale. The day it happens, I'll
> > just start picking pull requests. As we already use git, a change 
> > like that would be trivial.
> >   
> > > So, to get back to the original question, which was *should* we
> > > deepen
> > > the tree: why don't you feel you can let branches with patches you
> > > haven't reviewed into your tree?  I've characterised it as a trust
> > > issue above, but perhaps it isn't. I think this is a key question
> > > which
> > > would help us understand whether a deeper tree model is at all
> > > possible.  
> > 
> > One of the aspects is that developers nowadays are specialists in a
> > subset of the media devices. Most of them are working on complex
> > camera support, which involves a subset of the APIs we have. They
> > have never worked on a driver that would use other parts of the API,
> > like DVB, Remote Controllers, TV, V4L2 streaming devices, etc.
> > 
> > So, having someone with a more generalist view at the end of the
> > review process helps to identify potential problems that might
> > affect other devices, especially when there are API changes
> > involved[1].
> > 
> > [1] Ever since I started maintaining the subsystem, back in 2005,
> > almost every single kernel cycle has included API changes in
> > order to support new types of hardware.
> 
> Actually, this leads me to the patch acceptance criteria: Is there
> value in requiring reviews?  We try to do this in SCSI (usually only
> one review), but if all reviewers add a
> 
> Reviewed-by:
> 
> tag, which is accumulated in the tree, your pull machinery can detect
> it on all commits in the pull and give you an automated decision about
> whether to accept the pull or not.  If you require two with one from a
> list of designated reviewers, it can do that as well (with a bit more
> complexity in the pull hook script).
> 
> So here's the question: If I help you script this, would you be willing
> to accept pull requests in the media tree with this check in place? 
> I'm happy to do this because it's an interesting experiment to see if
> we can have automation offload work currently done by humans.

We could experiment with something like that, provided that people are
aware that it can be undone if something goes wrong.

Yet, as we discussed at the Media Summit, we currently have an
issue: our infrastructure lacks resources for that kind of automation.

Thanks,
Mauro

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 16:04                     ` Mauro Carvalho Chehab
@ 2019-06-14 16:16                       ` James Bottomley
  2019-06-14 17:48                         ` Mauro Carvalho Chehab
  2019-06-15 10:55                         ` [Ksummit-discuss] " Daniel Vetter
  0 siblings, 2 replies; 77+ messages in thread
From: James Bottomley @ 2019-06-14 16:16 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: media-submaintainers, ksummit-discuss

On Fri, 2019-06-14 at 13:04 -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 14 Jun 2019 08:49:46 -0700
> James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:
> 
> > On Fri, 2019-06-14 at 12:43 -0300, Mauro Carvalho Chehab wrote:
> > > Em Fri, 14 Jun 2019 08:23:05 -0700
> > > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:
> > >   
> > > > On Fri, 2019-06-14 at 12:11 -0300, Mauro Carvalho Chehab wrote:
> > > > [...]  
> > > > > If you think this is relevant to a broader audience, let me
> > > > > reply with a long answer about that. I prepared it and
> > > > > intended to reply to our internal media maintainer's ML
> > > > > (added as c/c). 
> > > > > 
> > > > > Yet, I still think that this is media maintainer's dirty
> > > > > laundry and should be discussed elsewhere ;-)
> > > > > 
> > > > > ---    
> > > > 
> > > > So trying not to get into huge email thread, I think this is
> > > > the key point:
> > > > 
> > > > [...]  
> > > > > Currently, I review all accepted patches.    
> > > > 
> > > > This means you effectively have a fully flat tree.  
> > > 
> > > Yes.
> > >   
> > > > Even if you use git, you're using it like an email transmission
> > > > path.  One of the points I was making about deepening the tree
> > > > is that the maintainer in the middle should trust the
> > > > submaintainer they pull from, so there should be no need to
> > > > review all the patches because of that trust.   
> > > 
> > > It is not a matter of trust. It is just that the media subsystem
> > > is a complex puzzle. Just the V4L2 API has more than 80 ioctls.
> > > 
> > > So, the goal here is to do my best to ensure that patches will
> > > get at least two reviews.
> > >   
> > > > This is how deepening the tree helps to offload maintainers
> > > > because review is one of the biggest burdens we have and
> > > > deepening the tree is a way to share it.  Without trust, we
> > > > achieve no offloading and therefore no utility from deepening
> > > > the tree.  
> > > 
> > > Yeah, I know one day this won't scale. The day it happens, I'll
> > > just start picking pull requests. As we already use git, a
> > > change like that would be trivial.
> > >   
> > > > So, to get back to the original question, which was *should* we
> > > > deepen the tree: why don't you feel you can let branches with
> > > > patches you haven't reviewed into your
> > > > tree?  I've characterised it as a
> > > > trust issue above, but perhaps it isn't. I think this is a key
> > > > question which would help us understand whether a deeper tree
> > > > model is at all possible.  
> > > 
> > > One of the aspects is that developers nowadays are specialists in
> > > a subset of the media devices. Most of them are working on
> > > complex camera support, which involves a subset of the APIs we
> > > have. They have never worked on a driver that would use other
> > > parts of the API, like DVB, Remote Controllers, TV, V4L2 streaming
> > > devices, etc.
> > > 
> > > So, having someone with a more generalist view at the end of the
> > > review process helps to identify potential problems that might
> > > affect other devices, especially when there are API changes
> > > involved[1].
> > > 
> > > [1] Ever since I started maintaining the subsystem, back in 2005,
> > > almost every single kernel cycle has included API changes in
> > > order to support new types of hardware.
> > 
> > Actually, this leads me to the patch acceptance criteria: Is there
> > value in requiring reviews?  We try to do this in SCSI (usually
> > only one review), but if all reviewers add a
> > 
> > Reviewed-by:
> > 
> > tag, which is accumulated in the tree, your pull machinery can
> > detect it on all commits in the pull and give you an automated
> > decision about whether to accept the pull or not.  If you require
> > two with one from a list of designated reviewers, it can do that as
> > well (with a bit more complexity in the pull hook script).
> > 
> > So here's the question: If I help you script this, would you be
> > willing to accept pull requests in the media tree with this check
> > in place?  I'm happy to do this because it's an interesting
> > experiment to see if we can have automation offload work currently
> > done by humans.
> 
> We could experiment with something like that, provided that people are
> aware that it can be undone if something goes wrong.
> 
> Yet, as we discussed at the Media Summit, we currently have an
> issue: our infrastructure lacks resources for that kind of
> automation.

This one doesn't require an automation infrastructure: the script runs
as a local pull hook on the machine you accept the pull request from
(presumably your laptop?). So the workflow is: you receive a pull
request and pull it into your tree; if the pull hook finds a bogus
commit, it rejects the pull and tells you why. If the script accepts
the pull, you do whatever additional checks you like, then push to
kernel.org once you're satisfied it didn't break anything.

James

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 11:53                     ` Leon Romanovsky
@ 2019-06-14 17:06                       ` Bart Van Assche
  2019-06-15  7:20                         ` Leon Romanovsky
  0 siblings, 1 reply; 77+ messages in thread
From: Bart Van Assche @ 2019-06-14 17:06 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: James Bottomley, Mauro Carvalho Chehab, ksummit

On 6/14/19 4:53 AM, Leon Romanovsky wrote:
> There are kernel subsystems without available QEMU virtual hardware
> or with special hardware which is not available to most of the active
> developers.  Sometimes bugs in those drivers stop the whole subsystem
> from moving forward and need to be fixed without HW.

Hi Leon,

Are you perhaps referring to API refactoring that affects an entire 
subsystem? I was referring to patches that affect a single driver.

Thanks,

Bart.

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 16:16                       ` James Bottomley
@ 2019-06-14 17:48                         ` Mauro Carvalho Chehab
  2019-06-17  7:01                           ` Geert Uytterhoeven
  2019-06-15 10:55                         ` [Ksummit-discuss] " Daniel Vetter
  1 sibling, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-14 17:48 UTC (permalink / raw)
  To: James Bottomley; +Cc: media-submaintainers, ksummit-discuss

Em Fri, 14 Jun 2019 09:16:34 -0700
James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:

> On Fri, 2019-06-14 at 13:04 -0300, Mauro Carvalho Chehab wrote:
> > Em Fri, 14 Jun 2019 08:49:46 -0700
> > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:
> >   
> > > On Fri, 2019-06-14 at 12:43 -0300, Mauro Carvalho Chehab wrote:  
> > > > Em Fri, 14 Jun 2019 08:23:05 -0700
> > > > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:
> > > >     
> > > > > On Fri, 2019-06-14 at 12:11 -0300, Mauro Carvalho Chehab wrote:
> > > > > [...]    
> > > > > > If you think this is relevant to a broader audience, let me
> > > > > > reply with a long answer about that. I prepared it and
> > > > > > intended to reply to our internal media maintainer's ML
> > > > > > (added as c/c). 
> > > > > > 
> > > > > > Yet, I still think that this is media maintainer's dirty
> > > > > > laundry and should be discussed elsewhere ;-)
> > > > > > 
> > > > > > ---      
> > > > > 
> > > > > So trying not to get into huge email thread, I think this is
> > > > > the key point:
> > > > > 
> > > > > [...]    
> > > > > > Currently, I review all accepted patches.      
> > > > > 
> > > > > This means you effectively have a fully flat tree.    
> > > > 
> > > > Yes.
> > > >     
> > > > > Even if you use git, you're using it like an email transmission
> > > > > path.  One of the points I was making about deepening the tree
> > > > > is that the maintainer in the middle should trust the
> > > > > submaintainer they pull from, so there should be no need to
> > > > > review all the patches because of that trust.     
> > > > 
> > > > It is not a matter of trust. It is just that the media subsystem
> > > > is a complex puzzle. Just the V4L2 API has more than 80 ioctls.
> > > > 
> > > > So, the goal here is to do my best to ensure that patches will
> > > > get at least two reviews.
> > > >     
> > > > > This is how deepening the tree helps to offload maintainers
> > > > > because review is one of the biggest burdens we have and
> > > > > deepening the tree is a way to share it.  Without trust, we
> > > > > achieve no offloading and therefore no utility from deepening
> > > > > the tree.    
> > > > 
> > > > Yeah, I know one day this won't scale. The day it happens, I'll
> > > > just start picking pull requests. As we already use git, a
> > > > change like that would be trivial.
> > > >     
> > > > > So, to get back to the original question, which was *should* we
> > > > > deepen the tree: why don't you feel you can let branches with
> > > > > patches you haven't reviewed into your
> > > > > tree?  I've characterised it as a
> > > > > trust issue above, but perhaps it isn't. I think this is a key
> > > > > question which would help us understand whether a deeper tree
> > > > > model is at all possible.    
> > > > 
> > > > One of the aspects is that developers nowadays are specialists in
> > > > a subset of the media devices. Most of them are working on
> > > > complex camera support, which involves a subset of the APIs we
> > > > have. They have never worked on a driver that would use other
> > > > parts of the API, like DVB, Remote Controllers, TV, V4L2 streaming
> > > > devices, etc.
> > > > 
> > > > So, having someone with a more generalist view at the end of the
> > > > review process helps to identify potential problems that might
> > > > affect other devices, especially when there are API changes
> > > > involved[1].
> > > > 
> > > > [1] Ever since I started maintaining the subsystem, back in 2005,
> > > > almost every single kernel cycle has included API changes in
> > > > order to support new types of hardware.
> > > 
> > > Actually, this leads me to the patch acceptance criteria: Is there
> > > value in requiring reviews?  We try to do this in SCSI (usually
> > > only one review), but if all reviewers add a
> > > 
> > > Reviewed-by:
> > > 
> > > tag, which is accumulated in the tree, your pull machinery can
> > > detect it on all commits in the pull and give you an automated
> > > decision about whether to accept the pull or not.  If you require
> > > two with one from a list of designated reviewers, it can do that as
> > > well (with a bit more complexity in the pull hook script).
> > > 
> > > So here's the question: If I help you script this, would you be
> > > willing to accept pull requests in the media tree with this check
> > > in place?  I'm happy to do this because it's an interesting
> > > experiment to see if we can have automation offload work currently
> > > done by humans.  
> > 
> > We could experiment with something like that, provided that people are
> > aware that it can be undone if something goes wrong.
> > 
> > Yet, as we discussed at the Media Summit, we currently have an
> > issue: our infrastructure lacks resources for that kind of
> > automation.
> 
> This one doesn't require an automation infrastructure: the script runs
> as a local pull hook on the machine you accept the pull request from
> (presumably your laptop?)

No, I run it on a 40-core HP server that sits below my desk. I turn it on
only when doing patch review (to save power, and because it produces a lot
of heat in the small room where I work).

Right now, I use a script which converts a pull request into a quilt tree.
Then, for each patch there, after a manual review, I run:

	- checkpatch --strict
	- make ARCH=i386  CF=-D__CHECK_ENDIAN__ CONFIG_DEBUG_SECTION_MISMATCH=y C=1 W=1 CHECK='compile_checks' M=drivers/staging/media
	- make ARCH=i386  CF=-D__CHECK_ENDIAN__ CONFIG_DEBUG_SECTION_MISMATCH=y C=1 W=1 CHECK='compile_checks' M=drivers/media

where compile_checks is this script:

	#!/bin/bash
	/devel/smatch/smatch -p=kernel "$@"
	# This is too pedantic and produces lots of false positives:
	#/devel/smatch/smatch --two-passes -- -p=kernel "$@"
	/devel/sparse/sparse "$@"

(Currently, I review on one screen, while the check script runs on a
terminal on a second screen)
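The quilt-tree conversion described above can be sketched with plain git; this is a hypothetical outline of the idea, not Mauro's actual script, and the range argument and output directory are illustrative:

```shell
# Hypothetical sketch: export each commit of a pull request as a patch
# file (a quilt-style queue) so per-patch checks such as checkpatch,
# sparse and smatch can be run on them one by one.
pull_to_queue() {
    local range="$1" outdir="$2"
    mkdir -p "$outdir"
    git format-patch -o "$outdir" "$range" >/dev/null
    ls "$outdir"/*.patch
}
```

Each file it lists could then be fed to `scripts/checkpatch.pl --strict` and the `make ... C=1 W=1` builds above.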

If a patch in the queue fails, the server beeps, and I either fix it
manually or complain.

When the patch series is accepted, for every applied patch, I run
a script that updates the patch status at patchwork, plus the
status of the git pull request.

When I reject a patch, I update patchwork accordingly.

> so  the workflow is you receive a pull
> request, pull it into your tree and if the pull hook finds a bogus
> commit it will reject the pull and tell you why; if the script accepts
> the pull then you do whatever additional checks you like, then push it
> to kernel.org when you're satisfied it didn't break anything.
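The Reviewed-by check described in the quoted proposal could be sketched roughly like this (a minimal illustration, not an actual hook; a real hook would run it against the incoming range, e.g. ORIG_HEAD..FETCH_HEAD, before accepting the pull):

```shell
# Sketch of the proposed pull check: refuse the pull if any commit in
# the incoming range lacks a Reviewed-by: tag.
check_reviewed_by() {
    local range="$1" missing=0
    for c in $(git rev-list "$range"); do
        if ! git log -1 --format=%B "$c" | grep -q '^Reviewed-by:'; then
            echo "commit $c lacks a Reviewed-by: tag" >&2
            missing=1
        fi
    done
    return "$missing"
}
```

Requiring two tags, or one from a designated-reviewer list, would only change the grep/count logic inside the loop.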

A script that would work for me should do a similar job:

- apply the patches one by one, test with the above programs and check the
  results. If any errors/warnings are returned, mailbomb the involved
  parties so they rework the pull request, and update the status
  of the git request at patchwork.

- If the pull request succeeds, update the patches at patchwork, using
  the Patchwork-ID field for the git pull request and the patch diff
  md5sum for the applied patches (and for any past versions of them,
  if the checksum is the same).
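The md5sum matching mentioned above could look roughly like this (patchwork's real hashing normalizes the diff first, which this simplified sketch skips):

```shell
# Sketch of hashing a commit's diff so an applied patch can be matched
# back to its patchwork entry.  Real patchwork hashes a normalized
# diff; this version hashes the raw diff of the given commit.
diff_hash() {
    git show --format= "$1" | md5sum | awk '{print $1}'
}
```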

Alternatively (and that's what I actually prefer), when someone
sends a pull request, a CI bot would do the above checks, doing the
mailbomb part and marking the pull request as rejected at patchwork,
or delegating it to me otherwise.

This way, I would have to deal only with already verified pull
requests.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-06 15:48 [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency James Bottomley
  2019-06-06 15:58 ` Greg KH
  2019-06-06 16:18 ` Bart Van Assche
@ 2019-06-14 19:53 ` Bjorn Helgaas
  2019-06-14 23:21   ` Bjorn Helgaas
  2 siblings, 1 reply; 77+ messages in thread
From: Bjorn Helgaas @ 2019-06-14 19:53 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss

On Thu, Jun 6, 2019 at 10:49 AM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:

> 2) Patch Acceptance Consistency: At the moment, we have very different
> acceptance criteria for patches into the various maintainer trees.
> Some of these differences are due to deeply held stylistic beliefs, but
> some could be more streamlined to give a more consistent experience to
> beginners who end up doing batch fixes which cross trees and end up
> more confused than anything else.  I'm not proposing to try and unify
> our entire submission process, because that would never fly, but I was
> thinking we could get a few sample maintainer trees to give their
> criteria and then see if we could get any streamlining.  For instance,
> SCSI has a fairly weak "match the current driver" style requirement, a
> reasonably strong get someone else to review it requirement and the
> usual good change log and one patch per substantive change requirement.
>  Other subsystems look similar without the review requirement, some
> have very strict stylistic requirements (reverse christmas tree, one
> variable definition per line, etc).  As I said, the goal wouldn't be to
>  beat up on the unusual requirements but to see if we could agree some
> global baselines that would at least make submission more uniform.

The "when in Rome" rule (follow local conventions) would cover a large
fraction of the style issues without requiring global uniformity or
even documentation.  I'm amazed at how often it is ignored.


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 15:11             ` Mauro Carvalho Chehab
  2019-06-14 15:23               ` James Bottomley
@ 2019-06-14 20:52               ` Vlastimil Babka
  2019-06-15 11:01               ` Laurent Pinchart
  2 siblings, 0 replies; 77+ messages in thread
From: Vlastimil Babka @ 2019-06-14 20:52 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Greg KH
  Cc: James Bottomley, media-submaintainers, ksummit-discuss

On 6/14/19 5:11 PM, Mauro Carvalho Chehab wrote:

> We're receiving about 400 to 1000 patches per month, meaning 18 to 45
> patches per working day (22 days/month). From those, we accept about
> 100 to 300 patches per month (4.5 to 13.6 patches per day).
> 
> Currently, I review all accepted patches.

...

> I typically work almost non-stop starting at 6am and sometimes
> going up to 10pm. Also, when there is too much stuff pending (like in
> busy months), I handle patches during weekends as well.

...

> That's said, one day I may not be able to review all accepted patches.
> When this day comes, I'll just apply the pull requests I receive.

I'd say the day should come very soon. This is simply unsustainable and
unhealthy.


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 19:53 ` Bjorn Helgaas
@ 2019-06-14 23:21   ` Bjorn Helgaas
  2019-06-17 10:35     ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 77+ messages in thread
From: Bjorn Helgaas @ 2019-06-14 23:21 UTC (permalink / raw)
  To: James Bottomley; +Cc: ksummit-discuss

On Fri, Jun 14, 2019 at 2:53 PM Bjorn Helgaas <bhelgaas@google.com> wrote:
>
> On Thu, Jun 6, 2019 at 10:49 AM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
>
> > 2) Patch Acceptance Consistency: At the moment, we have very different
> > acceptance criteria for patches into the various maintainer trees.
> > Some of these differences are due to deeply held stylistic beliefs, but
> > some could be more streamlined to give a more consistent experience to
> > beginners who end up doing batch fixes which cross trees and end up
> > more confused than anything else.  I'm not proposing to try and unify
> > our entire submission process, because that would never fly, but I was
> > thinking we could get a few sample maintainer trees to give their
> > criteria and then see if we could get any streamlining.  For instance,
> > SCSI has a fairly weak "match the current driver" style requirement, a
> > reasonably strong get someone else to review it requirement and the
> > usual good change log and one patch per substantive change requirement.
> >  Other subsystems look similar without the review requirement, some
> > have very strict stylistic requirements (reverse christmas tree, one
> > variable definition per line, etc).  As I said, the goal wouldn't be to
> >  beat up on the unusual requirements but to see if we could agree some
> > global baselines that would at least make submission more uniform.
>
> The "when in Rome" rule (follow local conventions) would cover a large
> fraction of the style issues without requiring global uniformity or
> even documentation.  I'm amazed at how often it is ignored.

I should have expanded this a little.  Somebody pointed out to me off-list that:

| I'm NOT amazed at how often undocumented, strange, local style
| (and submission and timing) conventions are not followed by new or
| drive-by contributors to a sub-system.  How would one expect local
| conventions to be followed by newbies when the conventions
| are undocumented?

| Many sub-systems have mixed styles. In the past I've wished for
| documentation as simple as: 'file xyz.c is representative
| of the preferred style for this sub-system'.

What I meant was that we should follow the indentation, comment,
declaration, etc. style of the existing code in the same file.  We
should also look at the git history of the file and follow the style
of subject lines and commit logs.

Even if a subsystem has mixed styles, I think the most important rule
is that each file should be internally consistent.  If we want to
unify subsystem style, that's even better, but we should do that with
subsystem-wide patches that specifically improve consistency, not
incrementally as a by-product of other patches.

I'm not necessarily opposed to documenting coding styles, although I
think per-subsystem coding style rules might be a little bit onerous to
submitters.  If we pay attention to the surrounding code and commit
history, we can produce good style even without a style guide.
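One cheap way to follow the commit-log conventions just described is to look at a file's recent history before writing a subject line; the helper below is a trivial illustration (the example file path is arbitrary):

```shell
# Show the subject-line style recently used for a given file, per the
# "when in Rome" advice above.
subject_style() {
    git log --oneline -10 -- "$1"
}
# e.g.: subject_style drivers/scsi/hosts.c
```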

Bjorn


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 17:06                       ` Bart Van Assche
@ 2019-06-15  7:20                         ` Leon Romanovsky
  0 siblings, 0 replies; 77+ messages in thread
From: Leon Romanovsky @ 2019-06-15  7:20 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: James Bottomley, Mauro Carvalho Chehab, ksummit

On Fri, Jun 14, 2019 at 10:06:48AM -0700, Bart Van Assche wrote:
> On 6/14/19 4:53 AM, Leon Romanovsky wrote:
> > There are kernel subsystems without available QEMU virtual hardware
> > or with special hardware which is not available for most of the active
> > developers. Sometimes bugs in those drivers stop whole subsystem for
> > moving forward and needed to be fixed without HW.
>
> Hi Leon,
>
> Are you perhaps referring to API refactoring that affects an entire
> subsystem? I was referring to patches that affect a single driver.

I got the impression that refactoring/cleanup/other mass changes are usually
the ones with the potential to introduce regressions, not a one-shot
change to a specific driver from some worried user.

Thanks

>
> Thanks,
>
> Bart.
>


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 16:16                       ` James Bottomley
  2019-06-14 17:48                         ` Mauro Carvalho Chehab
@ 2019-06-15 10:55                         ` Daniel Vetter
  1 sibling, 0 replies; 77+ messages in thread
From: Daniel Vetter @ 2019-06-15 10:55 UTC (permalink / raw)
  To: James Bottomley; +Cc: Mauro Carvalho Chehab, media-submaintainers, ksummit

On Fri, Jun 14, 2019 at 6:16 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Fri, 2019-06-14 at 13:04 -0300, Mauro Carvalho Chehab wrote:
> > On Fri, 14 Jun 2019 08:49:46 -0700,
> > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> >
> > > On Fri, 2019-06-14 at 12:43 -0300, Mauro Carvalho Chehab wrote:
> > > > On Fri, 14 Jun 2019 08:23:05 -0700,
> > > > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > > >
> > > > > On Fri, 2019-06-14 at 12:11 -0300, Mauro Carvalho Chehab wrote:
> > > > > [...]
> > > > > > If you think this is relevant to a broader audience, let me
> > > > > > reply with a long answer about that. I prepared it and
> > > > > > intended to reply to our internal media maintainer's ML
> > > > > > (added as c/c).
> > > > > >
> > > > > > Yet, I still think that this is media maintainer's dirty
> > > > > > laundry and should be discussed elsewhere ;-)
> > > > > >
> > > > > > ---
> > > > >
> > > > > So trying not to get into huge email thread, I think this is
> > > > > the key point:
> > > > >
> > > > > [...]
> > > > > > Currently, I review all accepted patches.
> > > > >
> > > > > This means you effectively have a fully flat tree.
> > > >
> > > > Yes.
> > > >
> > > > > Even if you use git, you're using it like an email transmission
> > > > > path.  One of the points I was making about deepening the tree
> > > > > is that the maintainer in the middle should trust the
> > > > > submaintainer they pull from, so there should be no need to
> > > > > review all the patches because of that trust.
> > > >
> > > > It is not a matter of trust. It is just that the media subsystem
> > > > is a complex puzzle. Just the V4L2 API has more than 80 ioctls.
> > > >
> > > > So, the goal here is to do my best to ensure that patches will
> > > > get at least two reviews.
> > > >
> > > > > This is how deepening the tree helps to offload maintainers
> > > > > because review is one of the biggest burdens we have and
> > > > > deepening the tree is a way to share it.  Without trust, we
> > > > > achieve no offloading and therefore no utility from deepening
> > > > > the tree.
> > > >
> > > > Yeah, I know one day this won't scale. The day it happens, I'll
> > > > just start picking pull requests. As we already use git, a
> > > > change like that would be trivial.
> > > >
> > > > > So, to get back to the original question, which was *should* we
> > > > > deepen the tree: why don't you feel you can let branches with
> > > > > patches you haven't reviewed into your
> > > > > tree?  I've characterised it as a
> > > > > trust issue above, but perhaps it isn't. I think this is a key
> > > > > question which would help us understand whether a deeper tree
> > > > > model is at all possible.
> > > >
> > > > One of the aspects is that developers nowadays are specialists on
> > > > a subset of the media devices. Most of them are working on
> > > > complex camera support, with envolves a subset of the APIs we
> > > > have. They never worked on a driver that would use other parts of
> > > > the API, like DVB, Remote Controllers, TV, V4L2 streaming
> > > > devices, etc.
> > > >
> > > > So, having someone with a more generalist view at the end of the
> > > > review process helps to identify potential problems that might
> > > > affect other devices, especially when there are API changes
> > > > involved[1].
> > > >
> > > > [1] Since I started maintaining the subsystem, back in 2005,
> > > > almost every single kernel cycle has included API changes in
> > > > order to support new types of hardware.
> > >
> > > Actually, this leads me to the patch acceptance criteria: Is there
> > > value in requiring reviews?  We try to do this in SCSI (usually
> > > only one review), but if all reviewers add a
> > >
> > > Reviewed-by:
> > >
> > > tag, which is accumulated in the tree, your pull machinery can
> > > detect it on all commits in the pull and give you an automated
> > > decision about whether to accept the pull or not.  If you require
> > > two with one from a list of designated reviewers, it can do that as
> > > well (with a bit more complexity in the pull hook script).
> > >
> > > So here's the question: If I help you script this, would you be
> > > willing to accept pull requests in the media tree with this check
> > > in place?  I'm happy to do this because it's an interesting
> > > experiment to see if we can have automation offload work currently
> > > done by humans.
> >
> > We could experiment with something like that, provided that people are
> > aware that it can be undone if something goes wrong.
> >
> > Yet, as we discussed at the Media Summit, we currently have an
> > issue: our infrastructure lacks resources for that kind of
> > automation.
>
> This one doesn't require an automation infrastructure: the script runs
> as a local pull hook on the machine you accept the pull request from
> (presumably your laptop?) so  the workflow is you receive a pull
> request, pull it into your tree and if the pull hook finds a bogus
> commit it will reject the pull and tell you why; if the script accepts
> the pull then you do whatever additional checks you like, then push it
> to kernel.org when you're satisfied it didn't break anything.

Jumping in here with a +1 on sharing scripts. We have the drm
inglorious maintainer scripts, which originated from drm-intel.git but
are now used for drm-misc.git and drm.git overall too. There are
essentially now three kinds of trees we pull into drm.git:

- the well-maintained ones that use those scripts: Experienced
maintainers and reviewers make sure the big picture is solid, and the
scripting makes sure all the details are done correctly too (e.g.
we've recently added the Fixes: validation that linux-next started
reporting). Processing those pulls is a button-push fire&forget
affair.

- well-maintained trees without good tooling. The high level will be all
solid, but our scripting will catch the oddball screwed-up Fixes: tag,
misformatted s-o-b or slightly botched last-minute rebase. So
occasionally it's not just a single button push but takes another
iteration to report the issues and create a revised pull.

- the not-so-well-maintained trees. Usually just a handful of patches
(or fewer) per release cycle. For those you actually have to look at the
patches and check a pile of things. I'm trying to get them merged into
bigger existing teams (like drm-misc.git) so they could benefit and learn
from all the tooling and experience, but for some folks it's a hard
sell. And I don't want to be too obnoxious about enforcing a certain
process and maybe annoying the maintainer of that driver.

As a rule, every time someone screws up and the script doesn't catch it,
we volunteer them to improve the scripting.
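The Fixes: validation mentioned above could be sketched like this (an illustration of the idea only, not the actual drm tooling; the tag-parsing details are assumptions):

```shell
# Sketch of a Fixes:-tag check: every "Fixes: <sha> (...)" line in a
# commit range must name a commit that actually exists in the tree.
check_fixes() {
    local range="$1" rc=0
    for c in $(git rev-list "$range"); do
        for sha in $(git log -1 --format=%B "$c" |
                     sed -n 's/^Fixes: \([0-9a-f]\{8,\}\).*/\1/p'); do
            if ! git cat-file -e "${sha}^{commit}" 2>/dev/null; then
                echo "commit $c: unknown Fixes: $sha" >&2
                rc=1
            fi
        done
    done
    return "$rc"
}
```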
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 15:11             ` Mauro Carvalho Chehab
  2019-06-14 15:23               ` James Bottomley
  2019-06-14 20:52               ` Vlastimil Babka
@ 2019-06-15 11:01               ` Laurent Pinchart
  2019-06-17 11:03                 ` Mauro Carvalho Chehab
  2 siblings, 1 reply; 77+ messages in thread
From: Laurent Pinchart @ 2019-06-15 11:01 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: James Bottomley, media-submaintainers, ksummit-discuss

Hi Mauro,

On Fri, Jun 14, 2019 at 12:11:37PM -0300, Mauro Carvalho Chehab wrote:
> On Fri, 14 Jun 2019 15:58:07 +0200, Greg KH wrote:
> > On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:
> >> On Fri, 14 Jun 2019 13:12:22 +0300, Laurent Pinchart wrote:
> >>> On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:  
> >>>> On Thu, 06 Jun 2019 19:24:35 +0300, James Bottomley wrote:
> >>>>     
> >>>>> [splitting issues to shorten replies]
> >>>>> On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:    
> >>>>>> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:      
> >>>>>>> This is probably best done as two separate topics
> >>>>>>> 
> >>>>>>> 1) Pull network: The pull depth is effectively how many pulls your
> >>>>>>> tree does before it goes to Linus, so pull depth 0 is sent straight
> >>>>>>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> >>>>>>> and so on.  We've previously spent time discussing how increasing
> >>>>>>> the pull depth of the network would reduce the amount of time Linus
> >>>>>>> spends handling pull requests.  However, in the areas I play, like
> >>>>>>> security, we seem to be moving in the opposite direction
> >>>>>>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> >>>>>>> we're deciding to move to a flat tree model, where everything is
> >>>>>>> depth 0, that's fine, I just think we could do with making a formal
> >>>>>>> decision on it so we don't waste energy encouraging greater tree
> >>>>>>> depth.      
> >>>>>> 
> >>>>>> That depth "change" was due to the perceived problems that having a
> >>>>>> deeper pull depth was causing.  To sort that out, Linus asked for
> >>>>>> things to go directly to him.      
> >>>>> 
> >>>>> This seems to go beyond problems with one tree and is becoming a trend.
> >>>>>     
> >>>>>> It seems like the real issue is the problem with that subsystem
> >>>>>> collection point, and the fact that the depth changed is a sign that
> >>>>>> our model works well (i.e. everyone can be routed around.)      
> >>>>> 
> >>>>> I'm not really interested in calling out "problem" maintainers, or
> >>>>> indeed having another "my patch collection method is better than yours"
> >>>>> type discussion.  What I was fishing for is whether the general
> >>>>> impression that greater tree depth is worth striving for is actually
> >>>>> correct, or we should all give up now and simply accept that the
> >>>>> current flat tree is the best we can do, and, indeed is the model that
> >>>>> works best for Linus.  I get the impression this may be the case, but I
> >>>>> think making sure by having an actual discussion among the interested
> >>>>> parties who will be at the kernel summit, would be useful.    
> >>>> 
> >>>> On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> >>>> 
> >>>> patch author -> media/driver maintainer -> subsystem maintainer -> Linus    
> >>> 
> >>> I'd like to use this opportunity to ask again for pull requests to be
> >>> pulled instead of cherry-picked.  
> >> 
> >> There are other forums for discussing internal media maintainership,
> >> like the weekly meetings we have and our own mailing lists.  
> > 
> > You all have weekly meetings?  That's crazy...
> 
> Yep, we hold a meeting every week, usually taking about 1 hour via IRC,
> on this channel:
> 
> 	https://linuxtv.org/irc/irclogger_logs//media-maint
> 
> > Anyway, I'll reiterate Laurent here, keeping things as a pull instead of
> > cherry-picking does make things a lot easier for contributors.  I know
> > I'm guilty of it as well as a maintainer, but that's only until I start
> > trusting the submitter.  Once that happens, pulling is _much_ easier as
> > a maintainer instead of individual patches for the usual reason that
> > linux-next has already verified that the sub-tree works properly before
> > I merge it in.
> > 
> > Try it, it might make your load be reduced, it has for me.
> 
> If you think this is relevant to a broader audience, let me reply with
> a long answer about that. I prepared it and intended to reply to our
> internal media maintainer's ML (added as c/c). 
> 
> Yet, I still think that this is media maintainer's dirty laundry
> and should be discussed elsewhere ;-)

I'll do my best to reply below with comments that are not too specific
to the media subsystem, hoping it will be useful for a wider audience
:-)

> ---
> 
> Laurent,
> 
> I already explained a few times, including during the last Media Summit,
> but it seems you missed the point.
> 
> As shown on our stats:
> 	https://linuxtv.org/patchwork_stats.php
> 
> We're receiving about 400 to 1000 patches per month, meaning 18 to 45
> patches per working day (22 days/month). From those, we accept about
> 100 to 300 patches per month (4.5 to 13.6 patches per day).
> 
> Currently, I review all accepted patches.

As others have said or hinted, this is where things start going wrong.
As a maintainer your duty isn't to work 24 hours a day and review every
single patch. The duty of a maintainer is to help the subsystem stay
healthy and move forward. This can involve lots of technical work, but
it doesn't have to; that work can also be delegated (provided, of course,
that the subsystem has technically competent and reliable
contributors who are willing to help there). In my opinion
maintaining a subsystem is partly a technical job and partly a social
job. Being excellent at both is the icing on the cake, not a minimal
requirement.

> I have bandwidth to review 4.5 to 13.6 patches per day, though not without
> a lot of personal effort. For that, I use part of my spare time, as I have
> other duties, plus I develop patches myself. So, in order to be able to
> handle those, I typically work almost non-stop starting at 6am and
> sometimes going up to 10pm. Also, when there is too much stuff pending
> (like in busy months), I handle patches during weekends as well.

I wasn't aware of your personal work schedule, and I'm sorry to hear
it's so extreme. This is not sustainable, and I think this clearly shows
that a purely flat tree model with a single maintainer has difficulty
scaling for large subsystems. If anything, this calls in my opinion for
increasing the pull network depth to make your job bearable again.

> However, 45 patches/day (225 patches per week) is a lot for me to
> review. I can't commit to handle such amount of patches.
> 
> That's why I review patches after a first review from the other
> media maintainers. The way I identify the patches I should review is
> when I receive pull requests.
> 
> We could do a different workflow. For example, once a media maintainer
> reviews a patch, it could be delegated to me at patchwork. This would likely
> increase the time for merging stuff, as the workflow would change from:
> 
>  +-------+    +------------------+    +---------------+
>  | patch | -> | media maintainer | -> | submaintainer | 
>  +-------+    +------------------+    +---------------+
> 
> to: 
> 
>  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
>  | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
>  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
> 
>   \------------------------v--------------------------/    \---------------------v------------------/
> 			Patchwork                                           Pull Request
> 
> The pull request part of the new chain could eventually be (semi-)automated
> by some scripting that would just run a checksum on the received patches
> that were previously reviewed by me. If it matches, and if it passes the
> usual checks I run for PR patches, it would push to some tree. Still, it
> would take more time than the previous flow.

I'm sorry, but I don't think this goes in the right direction. With the
number of patches increasing, and the number of hours in a maintainer's
day desperately not agreeing to increase above 24, the only scalable
solution I see is to stop reviewing every single patch that is accepted
in the subsystem tree, through delegation/sharing of maintainer's
duties, and trust. I know it can be difficult to let go of a driver one
has authored and let it live its life, so I can only guess the
psychological effect is much worse for a whole subsystem. I've authored
drivers that I cared and still care about, and I need to constantly
remind myself that too much love can lead to suffocating. The most loving
parent has to accept that their children will one day leave home, but
that it doesn't mean their lives will part forever. I think the same
applies to free software.

> Also, as discussed during the media summit, in order to have that
> kind of automation, we would need to improve our infrastructure, moving
> the tests from a noisy/heated server I have at my desk to some VM
> in the cloud, once we get funds for it.

Sure, and I think this is a topic that would gain from being discussed
with a wider audience. The media subsystem isn't the only one to be
large enough that it would benefit a lot from automation (I would even
argue that all subsystems could benefit from that), so sharing
experiences, and hearing other subsystem's wishes, would be useful here.

> In any case, a discussion that affects the patch latency and our internal
> procedures within the media subsystem is something that should be discussed
> with other media maintainers, and not at KS.

Isn't improving patch latency something that would be welcome throughout
the kernel?

> -
> 
> That's said, one day I may not be able to review all accepted patches.
> When this day comes, I'll just apply the pull requests I receive.
> 
> -
> 
> Finally, if you're so interested on improving our maintenance model,
> I beg you: please handle the patches delegated to you:
> 
> 	https://patchwork.linuxtv.org/project/linux-media/list/?series=&submitter=&state=&q=&archive=&delegate=2510
> 
> As we agreed in our media meetings, a couple of weeks ago I handled
> about 60 patches that had been waiting for your review since 2017 -
> basically the ones not touching the drivers you currently
> maintain, but there are still 23 patches sent between 2013-2018
> over there, plus the 48 patches sent in 2019.

-- 
Regards,

Laurent Pinchart


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 17:48                         ` Mauro Carvalho Chehab
@ 2019-06-17  7:01                           ` Geert Uytterhoeven
  2019-06-17 13:31                             ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 77+ messages in thread
From: Geert Uytterhoeven @ 2019-06-17  7:01 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, media-submaintainers, ksummit

Hi Mauro,

On Fri, Jun 14, 2019 at 7:48 PM Mauro Carvalho Chehab
<mchehab+samsung@kernel.org> wrote:
> On Fri, 14 Jun 2019 09:16:34 -0700,
> James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > On Fri, 2019-06-14 at 13:04 -0300, Mauro Carvalho Chehab wrote:
> > > On Fri, 14 Jun 2019 08:49:46 -0700,
> > > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > > > Actually, this leads me to the patch acceptance criteria: Is there
> > > > value in requiring reviews?  We try to do this in SCSI (usually
> > > > only one review), but if all reviewers add a
> > > >
> > > > Reviewed-by:
> > > >
> > > > tag, which is accumulated in the tree, your pull machinery can
> > > > detect it on all commits in the pull and give you an automated
> > > > decision about whether to accept the pull or not.  If you require
> > > > two with one from a list of designated reviewers, it can do that as
> > > > well (with a bit more complexity in the pull hook script).
> > > >
> > > > So here's the question: If I help you script this, would you be
> > > > willing to accept pull requests in the media tree with this check
> > > > in place?  I'm happy to do this because it's an interesting
> > > > experiment to see if we can have automation offload work currently
> > > > done by humans.
> > >
> > > We could experiment with something like that, provided that people are
> > > aware that it can be undone if something goes wrong.
> > >
> > > Yet, as we discussed at the Media Summit, we currently have an
> > > issue: our infrastructure lacks resources for that kind of
> > > automation.
> >
> > This one doesn't require an automation infrastructure: the script runs
> > as a local pull hook on the machine you accept the pull request from
> > (presumably your laptop?)
>
> No, I run it on a 40-core HP server that sits below my desk. I turn it on
> only when doing patch review (to save power, and because it produces a lot
> of heat in the small room where I work).
>
> Right now, I use a script that converts a pull request into a quilt tree.
> Then, for each patch there, after a manual review, I run:

I think this process can be improved/optimized:

>         - checkpatch --strict

Should have been done by your (trusted) submaintainer who sent you
the pull request.

>         - make ARCH=i386  CF=-D__CHECK_ENDIAN__ CONFIG_DEBUG_SECTION_MISMATCH=y C=1 W=1 CHECK='compile_checks' M=drivers/staging/media
>         - make ARCH=i386  CF=-D__CHECK_ENDIAN__ CONFIG_DEBUG_SECTION_MISMATCH=y C=1 W=1 CHECK='compile_checks' M=drivers/media
>
> where compile_checks is this script:
>
>         #!/bin/bash
>         /devel/smatch/smatch -p=kernel $@
>         # This is too pedantic and produces lots of false positives
>         #/devel/smatch/smatch --two-passes -- -p=kernel $@
>         /devel/sparse/sparse $@

Should have been done by the various bots, as soon as the public
branch for the pull request was updated.

> (Currently, I review on one screen, while the check script runs on a
> terminal on a second screen)

If all of the above are automated, or already done, you can focus on
reviewing on the mailing list, i.e. before the patch is accepted by your
submaintainer (the earlier an issue is found, the cheaper it is to fix
it).  And you don't have to review everything; review can be done in
parallel by multiple people.

> If a patch in the queue fails, the server beeps, and I either fix it
> manually or complain.

Should have been fixed before the pull request was sent.

> When the patch series is accepted, for every applied patch, I run
> a script that updates the patch status at patchwork, plus the
> status of the git pull request.

There's some WIP automation for this on patchwork.kernel.org, which
can update patch status based on appearance in linux-next.
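The gist of that automation can be sketched as follows. This is a toy
stand-in, not the patchwork.kernel.org code: sample files replace the real
patchwork and linux-next queries, and patches are matched by subject line
for simplicity (the real tooling matches by patch hash):

```shell
#!/bin/sh
# Toy sketch: a subject that shows up in the linux-next shortlog would be
# flipped to "Accepted" in patchwork; anything else stays pending.
cat > /tmp/pw-pending.txt <<'EOF'
media: vimc: fix component match compare
media: foo: add bar support
EOF

cat > /tmp/next-shortlog.txt <<'EOF'
media: vimc: fix component match compare
scsi: qla2xxx: unrelated cleanup
EOF

while IFS= read -r subject; do
	if grep -Fxq "$subject" /tmp/next-shortlog.txt; then
		echo "ACCEPTED: $subject"	# would update the patchwork state here
	else
		echo "PENDING:  $subject"
	fi
done < /tmp/pw-pending.txt > /tmp/pw-result.txt
cat /tmp/pw-result.txt
```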

> When I reject a patch, I update patchwork accordingly.
>
> > so  the workflow is you receive a pull
> > request, pull it into your tree and if the pull hook finds a bogus
> > commit it will reject the pull and tell you why; if the script accepts
> > the pull then you do whatever additional checks you like, then push it
> > to kernel.org when you're satisfied it didn't break anything.
>
> A script that would work for me should do a similar job:
>
> - apply the patches one by one, test with the above programs and check the
>   results. If any errors/warnings are returned, mailbomb the involved
>   parties for them to rework the pull request, and update the status
>   of the git request at patchwork.
>
> - If the pull request succeeds, update the patches at patchwork, using
>   the Patchwork-ID field for the git pull request and the patch diff
>   md5sum for the applied patches (and for any past versions of them,
>   if the checksum is the same).
>
> Alternatively (and this is what I'd actually prefer), when someone
> sends a pull request, a CI bot would do the above checks, doing the
> mailbomb part and marking the pull request as rejected at patchwork,
> delegating to me otherwise.
>
> This way, I would have to deal only with already verified pull
> requests.
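The per-patch gate described in that last quote could be sketched like
this. A toy stand-in only: placeholder patch files and a trivial sign-off
grep take the place of checkpatch/sparse/smatch, and the "mailbomb" step
is just a comment:

```shell
#!/bin/sh
# Toy sketch of the per-patch gate over a quilt-style queue.
mkdir -p /tmp/queue
printf 'Signed-off-by: A Dev <a@example.com>\n+some change\n' > /tmp/queue/0001-good.patch
printf '+a change without a sign-off\n' > /tmp/queue/0002-bad.patch

run_checks() {
	# Placeholder: the real script runs checkpatch --strict, sparse, smatch.
	grep -q '^Signed-off-by: ' "$1"
}

status=0
for p in /tmp/queue/*.patch; do
	if run_checks "$p"; then
		echo "OK   $(basename "$p")"
	else
		# Here the real script would mail the involved parties and
		# mark the pull request as rejected in patchwork.
		echo "FAIL $(basename "$p")"
		status=1
	fi
done > /tmp/queue-result.txt
echo "queue status: $status" >> /tmp/queue-result.txt
```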

So I think most of the automation is already there?
The interfacing with patchwork seems to be the hardest part.
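For the patch-matching part, the md5sum idea quoted above can be made
resilient to resends by hashing a normalized diff, a rough cousin of what
`git patch-id` computes. A simplified sketch (not the real linuxtv
scripts; only the volatile "index" lines are stripped here):

```shell
#!/bin/sh
# Hash the diff with the "index" lines stripped, so a resend of the same
# change maps to the same identity.
norm_id() {
	grep -v '^index ' "$1" | md5sum | cut -d' ' -f1
}

cat > /tmp/v1.diff <<'EOF'
diff --git a/drivers/media/foo.c b/drivers/media/foo.c
index 1111111..2222222 100644
--- a/drivers/media/foo.c
+++ b/drivers/media/foo.c
@@ -1 +1 @@
-old line
+new line
EOF

# The same change rebased: only the blob ids in the "index" line differ.
sed 's/^index .*/index 3333333..4444444 100644/' /tmp/v1.diff > /tmp/v2.diff

id1=$(norm_id /tmp/v1.diff)
id2=$(norm_id /tmp/v2.diff)
[ "$id1" = "$id2" ] && echo "same patch: $id1" > /tmp/patchid-result.txt
```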

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-14 23:21   ` Bjorn Helgaas
@ 2019-06-17 10:35     ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-17 10:35 UTC (permalink / raw)
  To: Bjorn Helgaas via Ksummit-discuss; +Cc: James Bottomley

Em Fri, 14 Jun 2019 18:21:15 -0500
Bjorn Helgaas via Ksummit-discuss <ksummit-discuss@lists.linuxfoundation.org> escreveu:

> On Fri, Jun 14, 2019 at 2:53 PM Bjorn Helgaas <bhelgaas@google.com> wrote:
> >
> > On Thu, Jun 6, 2019 at 10:49 AM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:
> >  
> > > 2) Patch Acceptance Consistency: At the moment, we have very different
> > > acceptance criteria for patches into the various maintainer trees.
> > > Some of these differences are due to deeply held stylistic beliefs, but
> > > some could be more streamlined to give a more consistent experience to
> > > beginners who end up doing batch fixes which cross trees and end up
> > > more confused than anything else.  I'm not proposing to try and unify
> > > our entire submission process, because that would never fly, but I was
> > > thinking we could get a few sample maintainer trees to give their
> > > criteria and then see if we could get any streamlining.  For instance,
> > > SCSI has a fairly weak "match the current driver" style requirement, a
> > > reasonably strong get someone else to review it requirement and the
> > > usual good change log and one patch per substantive change requirement.
> > >  Other subsystems look similar without the review requirement, some
> > > have very strict stylistic requirements (reverse christmas tree, one
> > > variable definition per line, etc).  As I said, the goal wouldn't be to
> > >  beat up on the unusual requirements but to see if we could agree some
> > > global baselines that would at least make submission more uniform.  
> >
> > The "when in Rome" rule (follow local conventions) would cover a large
> > fraction of the style issues without requiring global uniformity or
> > even documentation.  I'm amazed at how often it is ignored.  
> 
> I should have expanded this a little.  Somebody pointed out to me off-list that:
> 
> | I'm NOT amazed at how often undocumented, strange, local style
> |  (and submission and timing) conventions are not followed by new or
> | drive-by contributors to a sub-system.  How would one expect local
> | conventions to be followed by newbies when they conventions
> | are undocumented?
> 
> | Many sub-systems have mixed styles. In the past I've wished for
> | documentation as simple as: 'file xyz.c is representative
> | of the preferred style for this sub-system'.
> 
> What I meant was that we should follow the indentation, comment,
> declaration, etc. style of the existing code in the same file.  We
> should also look at the git history of the file and follow the style
> of subject lines and commit logs.

Looking at each file's coding style would mean a lot of extra work
for someone applying a patch that is subsystem-wide (or even
kernel-wide), for no good reason.

My experience from the time we didn't enforce the kernel coding style
on media: people keep pushing such patches assuming the kernel style.
It takes a lot more time to argue why they should handle file A, file
B, ... differently than to just accept a patch that messes with the
file's coding style, or to run a script subsystem-wide to make the
coding style more uniform.

Also, even the ones who work within the subsystem will end up violating it.

For example, on media, the DVB part used to identify pointers as:

	struct dvb_foo* bar;

While, on the V4L side, it follows the kernel style. As we have several
drivers that implement both APIs, people tend to use the kernel style.
So, when a developer writes a patch that touches both the DVB core and
driver-specific code, they write the above as:

	struct dvb_foo *bar;

After a few years, the DVB core files became a mess with both styles.

I ended up fixing the core a couple of years ago when we added kernel-doc
markups to the DVB core files, moving the existing ones from the *.c files
to their *.h files. Now, at least the core is closer to the kernel style
(there are still DVB-specific files with the old style, but those are very
seldom patched) and we don't care much about mixing styles there anyway.

> 
> Even if a subsystem has mixed styles, I think the most important rule
> is that each file should be internally consistent.  If we want to
> unify subsystem style, that's even better, but we should do that with
> subsystem-wide patches that specifically improve consistency, not
> incrementally as a by-product of other patches.
> 
> I'm not necessarily opposed to documenting coding styles, although I
> think per-subsystem coding style rules might be a little bit onerous to
> submitters.  If we pay attention to the surrounding code and commit
> history, we can produce good style even without a style guide.
> 
> Bjorn
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss



Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-15 11:01               ` Laurent Pinchart
@ 2019-06-17 11:03                 ` Mauro Carvalho Chehab
  2019-06-17 12:28                   ` Mark Brown
  2019-06-17 14:18                   ` Laurent Pinchart
  0 siblings, 2 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-17 11:03 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: James Bottomley, media-submaintainers, ksummit-discuss

Em Sat, 15 Jun 2019 14:01:07 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:

> Hi Mauro,
> 
> On Fri, Jun 14, 2019 at 12:11:37PM -0300, Mauro Carvalho Chehab wrote:
> > Em Fri, 14 Jun 2019 15:58:07 +0200 Greg KH escreveu:  
> > > On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:  
> > >> Em Fri, 14 Jun 2019 13:12:22 +0300 Laurent Pinchart escreveu:  
> > >>> On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:    
> > >>>> Em Thu, 06 Jun 2019 19:24:35 +0300 James Bottomley escreveu:
> > >>>>       
> > >>>>> [splitting issues to shorten replies]
> > >>>>> On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:      
> > >>>>>> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:        
> > >>>>>>> This is probably best done as two separate topics
> > >>>>>>> 
> > >>>>>>> 1) Pull network: The pull depth is effectively how many pulls your
> > >>>>>>> tree does before it goes to Linus, so pull depth 0 is sent straight
> > >>>>>>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> > >>>>>>> and so on.  We've previously spent time discussing how increasing
> > >>>>>>> the pull depth of the network would reduce the amount of time Linus
> > >>>>>>> spends handling pull requests.  However, in the areas I play, like
> > >>>>>>> security, we seem to be moving in the opposite direction
> > >>>>>>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> > >>>>>>> we're deciding to move to a flat tree model, where everything is
> > >>>>>>> depth 0, that's fine, I just think we could do with making a formal
> > >>>>>>> decision on it so we don't waste energy encouraging greater tree
> > >>>>>>> depth.        
> > >>>>>> 
> > >>>>>> That depth "change" was due to the perceived problems that having a
> > >>>>>> deeper pull depth was causing.  To sort that out, Linus asked for
> > >>>>>> things to go directly to him.        
> > >>>>> 
> > >>>>> This seems to go beyond problems with one tree and is becoming a trend.
> > >>>>>       
> > >>>>>> It seems like the real issue is the problem with that subsystem
> > >>>>>> collection point, and the fact that the depth changed is a sign that
> > >>>>>> our model works well (i.e. everyone can be routed around.)        
> > >>>>> 
> > >>>>> I'm not really interested in calling out "problem" maintainers, or
> > >>>>> indeed having another "my patch collection method is better than yours"
> > >>>>> type discussion.  What I was fishing for is whether the general
> > >>>>> impression that greater tree depth is worth striving for is actually
> > >>>>> correct, or we should all give up now and simply accept that the
> > >>>>> current flat tree is the best we can do, and, indeed is the model that
> > >>>>> works best for Linus.  I get the impression this may be the case, but I
> > >>>>> think making sure by having an actual discussion among the interested
> > >>>>> parties who will be at the kernel summit, would be useful.      
> > >>>> 
> > >>>> On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> > >>>> 
> > >>>> patch author -> media/driver maintainer -> subsystem maintainer -> Linus      
> > >>> 
> > >>> I'd like to use this opportunity to ask again for pull requests to be
> > >>> pulled instead of cherry-picked.    
> > >> 
> > >> There are other forums for discussing internal media maintainership,
> > >> like the weekly meetings we have and our own mailing lists.    
> > > 
> > > You all have weekly meetings?  That's crazy...  
> > 
> > Yep, every week we do a meeting, usually taking about 1 hour via irc,
> > on this channel:
> > 
> > 	https://linuxtv.org/irc/irclogger_logs//media-maint
> >   
> > > Anyway, I'll reiterate Laurent here, keeping things as a pull instead of
> > > cherry-picking does make things a lot easier for contributors.  I know
> > > I'm guilty of it as well as a maintainer, but that's only until I start
> > > trusting the submitter.  Once that happens, pulling is _much_ easier as
> > > a maintainer instead of individual patches for the usual reason that
> > > linux-next has already verified that the sub-tree works properly before
> > > I merge it in.
> > > 
> > > Try it, it might make your load be reduced, it has for me.  
> > 
> > If you think this is relevant to a broader audience, let me reply with
> > a long answer about that. I prepared it and intended to reply to our
> > internal media maintainer's ML (added as c/c). 
> > 
> > Yet, I still think that this is media maintainer's dirty laundry
> > and should be discussed elsewhere ;-)  
> 
> I'll do my best to reply below with comments that are not too specific
> to the media subsystem, hoping it will be useful for a wider audience
> :-)
> 
> > ---
> > 
> > Laurent,
> > 
> > I already explained a few times, including during the last Media Summit,
> > but it seems you missed the point.
> > 
> > As shown on our stats:
> > 	https://linuxtv.org/patchwork_stats.php
> > 
> > We're receiving about 400 to 1000 patches per month, meaning 18 to 45
> > patches per working day (22 days/month). From those, we accept about
> > 100 to 300 patches per month (4.5 to 13.6 patches per day).
> > 
> > Currently, I review all accepted patches.  
> 
> As others have said or hinted, this is where things start going wrong.
> As a maintainer your duty isn't to work for 24h a day and review every
> single patch. The duty of a maintainer is to help the subsystem stay
> healthy and move forward. This can involve lots of technical work, but
> it doesn't have to; that can also be delegated (provided, of course,
> that the subsystem has technically competent and reliable
> contributors who are willing to help there). In my opinion
> maintaining a subsystem is partly a technical job, and partly a social
> job. Being excellent at both is the icing on the cake, not a minimal
> requirement.

There are a couple of reasons why I keep doing that. Among them:

1) I'd like to follow what's happening in the subsystem. Reviewing the
patches allows me to have at least a rough idea of what's going on,
which makes it easier when we need to discuss possible changes to
the core;

2) An additional reviewer improves code quality. One piece of feedback
I get from sub-maintainers is that we need more core review. So, I'm
doing my part.

3) I like my work.

> 
> > I have bandwidth to review 4.5 to 13.6 patches per day, not without a lot
> > of personal effort. For that, I use part of my spare time, as I have other
> > duties, plus I develop patches myself. So, in order to be able to handle
> > those, I typically work almost non-stop starting at 6am and sometimes
> > going up to 10pm. Also, when there is too much stuff pending (like in
> > busy months), I also handle patches during weekends.
> 
> I wasn't aware of your personal work schedule, and I'm sorry to hear
> it's so extreme. This is not sustainable, and I think this clearly shows
> that a purely flat tree model with a single maintainer has difficulty
> scaling for large subsystems. If anything, this calls in my opinion for
> increasing the pull network depth to make your job bearable again.

It has been sustainable: I've been doing it for the last 10 years.

It's not every day that I go from 6am to 10pm, and it's not that I don't
have a social life. I still have time for my hobbies and for my family.

> 
> > However, 45 patches/day (225 patches per week) is a lot for me to
> > review. I can't commit to handle such amount of patches.
> > 
> > That's why I review patches after a first review from the other
> > media maintainers. The way I identify the patches I should review is
> > when I receive pull requests.
> > 
> > We could do a different workflow. For example, once a media maintainer
> > reviews a patch, it could be delegated to me at patchwork. This would likely 
> > increase the time for merging stuff, as the workflow would change from:
> > 
> >  +-------+    +------------------+    +---------------+
> >  | patch | -> | media maintainer | -> | submaintainer | 
> >  +-------+    +------------------+    +---------------+
> > 
> > to: 
> > 
> >  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
> >  | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
> >  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
> > 
> >   \------------------------v--------------------------/    \---------------------v------------------/
> > 			Patchwork                                           Pull Request
> > 
> > The pull request part of the new chain could eventually be (semi-)automated
> > by some scripting that would just run a checksum on the received patches 
> > that were previously reviewed by me. If matches, and if it passes on the 
> > usual checks I run for PR patches, it would push on some tree. Still, it 
> > would take more time than the previous flow.  
> 
> I'm sorry, but I don't think this goes in the right direction. With the
> number of patches increasing, and the number of hours in a maintainer's
> day desperately not agreeing to increase above 24, the only scalable
> solution I see is to stop reviewing every single patch that is accepted
> in the subsystem tree, through delegation/sharing of maintainer's
> duties, and trust. I know it can be difficult to let go of a driver one
> has authored and let it live its life, so I can only guess the
> psychological effect is much worse for a whole subsystem. I've authored
> drivers that I cared and still care about, and I need to constantly
> remind myself that too much love can suffocate. The most loving
> parent has to accept that their children will one day leave home, but
> that it doesn't mean their lives will part forever. I think the same
> applies to free software.

That's not the point. The point is that, while I have time for doing
patch reviews, I want to keep doing it.

Also, as I said, this is the media subsystem's dirty laundry: whether I
keep doing patch reviews or not - and how this will work - is something
for our internal discussions, and not for KS.

> 
> > Also, as also discussed during the media summit, in order to have such
> > kind of automation, we would need to improve our infrastructure, moving
> > the tests from a noisy/heated server I have over my desk to some VM
> > inside the cloud, once we get funds for it.  
> 
> Sure, and I think this is a topic that would gain from being discussed
> with a wider audience. The media subsystem isn't the only one to be
> large enough that it would benefit a lot from automation (I would even
> argue that all subsystems could benefit from that), so sharing
> experiences, and hearing other subsystem's wishes, would be useful here.

Maybe.

Are there any other subsystems currently working to get funding for
hosting/automation?

> 
> > In any case, a discussion that affects the patch latency and our internal
> > procedures within the media subsystem is something that should be discussed
> > with other media mantainers, and not at KS.  
> 
> Isn't improving patch latency something that would be welcome throughout
> the kernel ?

Your proposed change won't improve it. It will either stay the same,
if we keep the current flow:

	  +-------+    +------------------+    +---------------+
	  | patch | -> | media maintainer | -> | submaintainer | 
	  +-------+    +------------------+    +---------------+

(whether I review the patch or not, the flow will be the same - and
so will the patch latency)

Or make it higher, if we change it to:

  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
  | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
 
   \------------------------v--------------------------/    \---------------------v------------------/
 			Patchwork                                           Pull Request

More below:

> 
> > -
> > 
> > That said, one day I may not be able to review all accepted patches.
> > When this day comes, I'll just apply the pull requests I receive.
> > 
> > -
> > 
> > Finally, if you're so interested in improving our maintenance model,
> > I beg you: please handle the patches delegated to you:
> > 
> > 	https://patchwork.linuxtv.org/project/linux-media/list/?series=&submitter=&state=&q=&archive=&delegate=2510
> > 
> > As we agreed in our media meetings, a couple of weeks ago I handled
> > about ~60 patches that had been waiting for your review since 2017 -
> > basically the ones that don't touch the drivers you currently
> > maintain. But there are still 23 patches sent between 2013-2018
> > over there, plus the 48 patches sent in 2019.

Reviewing the patches in your queue, or delegating them to others, is a
concrete thing you can do to reduce the patch handling latency.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17 11:03                 ` Mauro Carvalho Chehab
@ 2019-06-17 12:28                   ` Mark Brown
  2019-06-17 16:48                     ` Tim.Bird
  2019-06-17 14:18                   ` Laurent Pinchart
  1 sibling, 1 reply; 77+ messages in thread
From: Mark Brown @ 2019-06-17 12:28 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: James Bottomley, media-submaintainers, Tim.Bird, ksummit-discuss

[-- Attachment #1: Type: text/plain, Size: 3928 bytes --]

On Mon, Jun 17, 2019 at 08:03:15AM -0300, Mauro Carvalho Chehab wrote:
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:
> > On Fri, Jun 14, 2019 at 12:11:37PM -0300, Mauro Carvalho Chehab wrote:

> > > We're receiving about 400 to 1000 patches per month, meaning 18 to 45
> > > patches per working day (22 days/month). From those, we accept about
> > > 100 to 300 patches per month (4.5 to 13.6 patches per day).

> > > Currently, I review all accepted patches.  

> > As others have said or hinted, this is where things start going wrong.
> > As a maintainer your duty isn't to work for 24h a day and review every
> > single patch. The duty of a maintainer is to help the subsystem stay
> > healthy and move forward. This can involve lots of technical work, but
> > it doesn't have to, that can also be delegated (providing, of course,

> There are a couple of reasons why I keep doing that. Among them:

> 1) I'd like to follow what's happening in the subsystem. Reviewing the
> patches allows me to have at least a rough idea of what's going on,
> which makes it easier when we need to discuss possible changes to
> the core;

> 2) An additional reviewer improves code quality. One piece of feedback
> I get from sub-maintainers is that we need more core review. So, I'm
> doing my part.

> 3) I like my work.

This doesn't have to be an either/or thing - one of the things that you
can do is vary how much attention you're paying depending on whatever
factors are useful (which can be very fuzzy sometimes).  Thinking too
much about formalizing things can get in the way sometimes, both with
decision-making paralysis and with making things seem scarier for
contributors who are being asked to take on responsibility.

> Also, as I said, this is the media subsystem's dirty laundry: whether I
> keep doing patch reviews or not - and how this will work - is something
> for our internal discussions, and not for KS.

The specific example is from the media subsystem but these are general
issues.

> > > Also, as also discussed during the media summit, in order to have such
> > > kind of automation, we would need to improve our infrastructure, moving
> > > the tests from a noisy/heated server I have over my desk to some VM
> > > inside the cloud, once we get funds for it.  

> > Sure, and I think this is a topic that would gain from being discussed
> > with a wider audience. The media subsystem isn't the only one to be
> > large enough that it would benefit a lot from automation (I would even
> > argue that all subsystems could benefit from that), so sharing
> > experiences, and hearing other subsystem's wishes, would be useful here.

> Maybe.

> Are there any other subsystem currently working to get funding for
> hosting/automation?

Not sure if it's specifically what you're looking for, but there's stuff
going on that's at least very adjacent to this, more from the angle of
providing general infrastructure than subsystem-specific things, and
currently mainly focused on getting tests run.  To me that sort of
approach seems good since it avoids duplicated effort between
subsystems.

There are people working on things like KernelCI (people are working on
expanding it to include runtime tests, and there are active efforts on
securing more funding) and CKI, which aren't focused on specific
subsystems but more on general infrastructure.  Tim Bird (CCed) has been
pushing on trying to get people working in this area talking to each
other - there's a mailing list and monthly call:

    https://elinux.org/Automated_Testing

and one of the things people are talking about is what sorts of things
the kernel community would find useful here, so it's probably worth at
least putting ideas for useful things in the heads of the people who are
interested in working on the infrastructure and automation end of
things.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17  7:01                           ` Geert Uytterhoeven
@ 2019-06-17 13:31                             ` Mauro Carvalho Chehab
  2019-06-17 14:26                               ` Takashi Iwai
  2019-06-19  7:53                               ` Dan Carpenter
  0 siblings, 2 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-17 13:31 UTC (permalink / raw)
  To: Geert Uytterhoeven; +Cc: James Bottomley, media-submaintainers, ksummit

Em Mon, 17 Jun 2019 09:01:06 +0200
Geert Uytterhoeven <geert@linux-m68k.org> escreveu:

> Hi Mauro,
> 
> On Fri, Jun 14, 2019 at 7:48 PM Mauro Carvalho Chehab
> <mchehab+samsung@kernel.org> wrote:
> > Em Fri, 14 Jun 2019 09:16:34 -0700
> > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:  
> > > On Fri, 2019-06-14 at 13:04 -0300, Mauro Carvalho Chehab wrote:  
> > > > Em Fri, 14 Jun 2019 08:49:46 -0700
> > > > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:  
> > > > > Actually, this leads me to the patch acceptance criteria: Is there
> > > > > value in requiring reviews?  We try to do this in SCSI (usually
> > > > > only one review), but if all reviewers add a
> > > > >
> > > > > Reviewed-by:
> > > > >
> > > > > tag, which is accumulated in the tree, your pull machinery can
> > > > > detect it on all commits in the pull and give you an automated
> > > > > decision about whether to accept the pull or not.  If you require
> > > > > two with one from a list of designated reviewers, it can do that as
> > > > > well (with a bit more complexity in the pull hook script).
> > > > >
> > > > > So here's the question: If I help you script this, would you be
> > > > > willing to accept pull requests in the media tree with this check
> > > > > in place?  I'm happy to do this because it's an interesting
> > > > > experiment to see if we can have automation offload work currently
> > > > > done by humans.  
> > > >
> > > > We could experiment something like that, provided that people will be
> > > > aware that it can be undone if something gets wrong.
> > > >
> > > > Yet, as we discussed at the Media Summit, we currently have an
> > > > issue: our infrastructure lacks resources for that kind of
> > > > automation.  
> > >
> > > This one doesn't require an automation infrastructure: the script runs
> > > as a local pull hook on the machine you accept the pull request from
> > > (presumably your laptop?)  
> >
> > No, I run it on a 40-core HP server that sits below my desk. I turn it on
> > only when doing patch review (to save power, and because it produces a lot
> > of heat in the small room where I work).
> >
> > Right now, I use a script that converts a pull request into a quilt tree.  
> > Then, for each patch there, after a manual review, I run:  
> 
> I think this process can be improved/optimized:
> 
> >         - checkpatch --strict  
> 
> Should have been done by your (trusted) submaintainer that sent you
> the pull request.

Things are getting better with time, but I still catch issues - which
seems to indicate that people sometimes forget to run it.

In a recent case, I received a few pull requests lacking the SOB from
the patch author.
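A pre-merge trailer check, along the lines James proposed, would catch
exactly that kind of miss. A minimal sketch, with a sample commit message
inlined; a real hook would loop over every commit in the pull range:

```shell
#!/bin/sh
# Toy sketch: validate the trailers of one commit message read from stdin.
check_trailers() {
	msg=$(cat)
	rc=0
	printf '%s\n' "$msg" | grep -q '^Signed-off-by: ' || { echo "missing Signed-off-by"; rc=1; }
	printf '%s\n' "$msg" | grep -q '^Reviewed-by: ' || { echo "missing Reviewed-by"; rc=1; }
	[ "$rc" -eq 0 ] && echo "trailers ok"
	return "$rc"
}

check_trailers > /tmp/trailer-result.txt <<'EOF'
media: foo: fix the bar handler

Fix an off-by-one in the bar handler.

Signed-off-by: A Developer <a@example.com>
Reviewed-by: B Reviewer <b@example.com>
EOF
```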

> 
> >         - make ARCH=i386  CF=-D__CHECK_ENDIAN__ CONFIG_DEBUG_SECTION_MISMATCH=y C=1 W=1 CHECK='compile_checks' M=drivers/staging/media
> >         - make ARCH=i386  CF=-D__CHECK_ENDIAN__ CONFIG_DEBUG_SECTION_MISMATCH=y C=1 W=1 CHECK='compile_checks' M=drivers/media
> >
> > where compile_checks is this script:
> >
> >         #!/bin/bash
> >         /devel/smatch/smatch -p=kernel "$@"
> >         # This one is too pedantic and produces lots of false positives:
> >         #/devel/smatch/smatch --two-passes -- -p=kernel "$@"
> >         /devel/sparse/sparse "$@"  
> 
> Should have been done by the various bots, as soon as the public
> branch for the pull request was updated.

True, but we don't have any way, right now, to automatically parse the
bot results so that a patch/pull request only moves to a "ready to
merge" queue after the bots pass.

Also, the bots usually don't build with W=1, as, on most subsystems,
this causes lots of warnings[1].

[1] On media, we have zero warnings with W=1.
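The missing glue is mostly log parsing. As a toy version, assuming a
(made-up) bot report format of one line per pull request, routing could
be as simple as:

```shell
#!/bin/sh
# Toy sketch: route pull requests based on bot report lines. The report
# format here is invented; real bots each have their own output.
cat > /tmp/bot-report.txt <<'EOF'
PR#101 build=pass sparse=pass smatch=pass
PR#102 build=pass sparse=warn smatch=pass
EOF

awk '
/^PR#/ && !/fail|warn/ { print $1 " -> ready-to-merge"; next }
/^PR#/                 { print $1 " -> bounce-to-submitter" }
' /tmp/bot-report.txt > /tmp/route-result.txt
cat /tmp/route-result.txt
```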

> > (Currently, I review on one screen, while the check script runs on a
> > terminal on a second screen)  
> 
> If all of the above are automated, or already done, you can focus on
> reviewing on the mailing list, i.e. before the patch is accepted by your
> submaintainer (the earlier an issue is found, the cheaper it is to fix
> it).  And you don't have to review everything; review can be done in
> parallel by multiple people.

Easier said than done.

We agreed on some infra changes during the last Media Summit with
the ~20 people who were there, and we're sticking to the plan.

Trying to rush something before having everything set up, due to the
demand of a single developer, is a terrible idea, as we need to make
changes in a way that won't disrupt the community as a whole.

Today's count is that we have ~500 patches queued. Over the last two
weeks, we handled ~200 patches per week. We need to keep doing our work,
as this is the busiest period of the last 12 months.

At the end of June, we'll upgrade from patchwork 1.0 to 2.1, and we're
working towards improving the delegation features in patchwork, in order
to be able to have more people handling patches there, and towards
upgrading our web server soon, as it is running a distro that is going
out of support.

> 
> > If a patch in the queue fails, the server beeps, and I either fix it
> > manually or I complain.  
> 
> Should have been fixed before the pull request was sent.

Agreed, but we still get build breakage from time to time due to
bad pull requests.

> 
> > When the patch series is accepted, for every applied patch, I run
> > a script that updates the patch status at patchwork, plus the
> > status of the git pull request.  
> 
> There's some WIP automation for this on patchwork.kernel.org, which
> can update patch status based on appearance in linux-next.

We've been doing that since 2013, using our custom scripting.

> 
> > When I reject a patch, I update patchwork accordingly.
> >  
> > > so  the workflow is you receive a pull
> > > request, pull it into your tree and if the pull hook finds a bogus
> > > commit it will reject the pull and tell you why; if the script accepts
> > > the pull then you do whatever additional checks you like, then push it
> > > to kernel.org when you're satisfied it didn't break anything.  
> >
> > A script that would work for me should do a similar job:
> >
> > - apply patch per patch, test with the above programs and check for
> >   results. If any errors/warnings are returned, mailbomb the involved
> >   parties for them to rework the pull request, and update the status
> >   of the git request at patchwork.
> >
> > - If the pull request succeeds, update the patches at patchwork, using
> >   the Patchwork-ID field for the git pull request and the patch diff
> >   md5sum for the applied patches (and for any past versions of them,
> >   if the checksum is the same).
> >
> > Alternatively (and that's what I'd actually prefer), when someone
> > sends a pull request, a CI bot would do the above checks, doing the
> > mailbomb part and marking the pull request as rejected in patchwork,
> > and delegating it to me otherwise.
> >
> > This way, I would have to deal only with already verified pull
> > requests.  
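
For reference, the checksum part could be as simple as hashing the
normalized diff body, so the same change can be matched even after a
rebase. A rough sketch (the function name and the normalization rules
are just illustrative, not our actual patchwork scripts):

```shell
#!/bin/sh
# Illustrative sketch only, not the real linuxtv.org tooling: fingerprint
# a patch by its diff content so it can be matched between patchwork and
# a pull request even if the headers or hunk offsets changed.
patch_hash() {
    # Keep only added/removed lines; drop the +++/--- file headers.
    # Hunk headers (@@ ...) don't start with + or -, so they're skipped,
    # which keeps the hash stable across rebases.
    grep -E '^[+-]' "$1" | grep -vE '^(\+\+\+|---)' | md5sum | cut -d' ' -f1
}
```

Two exports of the same change would then hash identically even when the
commit message was reworded or the hunks moved.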
> 
> So I think most of the automation is already there?
> The interfacing with patchwork seems to be the hardest part.
> 
> Gr{oetje,eeting}s,
> 
>                         Geert
> 



Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17 11:03                 ` Mauro Carvalho Chehab
  2019-06-17 12:28                   ` Mark Brown
@ 2019-06-17 14:18                   ` Laurent Pinchart
  1 sibling, 0 replies; 77+ messages in thread
From: Laurent Pinchart @ 2019-06-17 14:18 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: James Bottomley, media-submaintainers, ksummit-discuss

Hi Mauro,

On Mon, Jun 17, 2019 at 08:03:15AM -0300, Mauro Carvalho Chehab wrote:
> Em Sat, 15 Jun 2019 14:01:07 +0300 Laurent Pinchart escreveu:
> > On Fri, Jun 14, 2019 at 12:11:37PM -0300, Mauro Carvalho Chehab wrote:
> >> Em Fri, 14 Jun 2019 15:58:07 +0200 Greg KH escreveu:  
> >>> On Fri, Jun 14, 2019 at 10:24:24AM -0300, Mauro Carvalho Chehab wrote:  
> >>>> Em Fri, 14 Jun 2019 13:12:22 +0300 Laurent Pinchart escreveu:  
> >>>>> On Thu, Jun 13, 2019 at 10:59:16AM -0300, Mauro Carvalho Chehab wrote:    
> >>>>>> Em Thu, 06 Jun 2019 19:24:35 +0300 James Bottomley escreveu:
> >>>>>>       
> >>>>>>> [splitting issues to shorten replies]
> >>>>>>> On Thu, 2019-06-06 at 17:58 +0200, Greg KH wrote:      
> >>>>>>>> On Thu, Jun 06, 2019 at 06:48:36PM +0300, James Bottomley wrote:        
> >>>>>>>>> This is probably best done as two separate topics
> >>>>>>>>> 
> >>>>>>>>> 1) Pull network: The pull depth is effectively how many pulls your
> >>>>>>>>> tree does before it goes to Linus, so pull depth 0 is sent straight
> >>>>>>>>> to Linus, pull depth 1 is sent to a maintainer who sends to Linus
> >>>>>>>>> and so on.  We've previously spent time discussing how increasing
> >>>>>>>>> the pull depth of the network would reduce the amount of time Linus
> >>>>>>>>> spends handling pull requests.  However, in the areas I play, like
> >>>>>>>>> security, we seem to be moving in the opposite direction
> >>>>>>>>> (encouraging people to go from pull depth 1 to pull depth 0).  If
> >>>>>>>>> we're deciding to move to a flat tree model, where everything is
> >>>>>>>>> depth 0, that's fine, I just think we could do with making a formal
> >>>>>>>>> decision on it so we don't waste energy encouraging greater tree
> >>>>>>>>> depth.        
> >>>>>>>> 
> >>>>>>>> That depth "change" was due to the perceived problems that having a
> >>>>>>>> deeper pull depth was causing.  To sort that out, Linus asked for
> >>>>>>>> things to go directly to him.        
> >>>>>>> 
> >>>>>>> This seems to go beyond problems with one tree and is becoming a trend.
> >>>>>>>       
> >>>>>>>> It seems like the real issue is the problem with that subsystem
> >>>>>>>> collection point, and the fact that the depth changed is a sign that
> >>>>>>>> our model works well (i.e. everyone can be routed around.)        
> >>>>>>> 
> >>>>>>> I'm not really interested in calling out "problem" maintainers, or
> >>>>>>> indeed having another "my patch collection method is better than yours"
> >>>>>>> type discussion.  What I was fishing for is whether the general
> >>>>>>> impression that greater tree depth is worth striving for is actually
> >>>>>>> correct, or we should all give up now and simply accept that the
> >>>>>>> current flat tree is the best we can do, and, indeed is the model that
> >>>>>>> works best for Linus.  I get the impression this may be the case, but I
> >>>>>>> think making sure by having an actual discussion among the interested
> >>>>>>> parties who will be at the kernel summit, would be useful.      
> >>>>>> 
> >>>>>> On media, we came from a "depth 1" model, moving toward a "depth 2" level: 
> >>>>>> 
> >>>>>> patch author -> media/driver maintainer -> subsystem maintainer -> Linus      
> >>>>> 
> >>>>> I'd like to use this opportunity to ask again for pull requests to be
> >>>>> pulled instead of cherry-picked.    
> >>>> 
> >>>> There are other forums for discussing internal media maintainership,
> >>>> like the weekly meetings we have and our own mailing lists.    
> >>> 
> >>> You all have weekly meetings?  That's crazy...  
> >> 
> >> Yep, every week we do a meeting, usually taking about 1 hour via irc,
> >> on this channel:
> >> 
> >> 	https://linuxtv.org/irc/irclogger_logs//media-maint
> >>   
> >>> Anyway, I'll reiterate Laurent here, keeping things as a pull instead of
> >>> cherry-picking does make things a lot easier for contributors.  I know
> >>> I'm guilty of it as well as a maintainer, but that's only until I start
> >>> trusting the submitter.  Once that happens, pulling is _much_ easier as
> >>> a maintainer instead of individual patches for the usual reason that
> >>> linux-next has already verified that the sub-tree works properly before
> >>> I merge it in.
> >>> 
> >>> Try it, it might make your load be reduced, it has for me.  
> >> 
> >> If you think this is relevant to a broader audience, let me reply with
> >> a long answer about that. I prepared it and intended to reply to our
> >> internal media maintainer's ML (added as c/c). 
> >> 
> >> Yet, I still think that this is media maintainer's dirty laundry
> >> and should be discussed elsewhere ;-)  
> > 
> > I'll do my best to reply below with comments that are not too specific
> > to the media subsystem, hoping it will be useful for a wider audience
> > :-)
> > 
> >> ---
> >> 
> >> Laurent,
> >> 
> >> I already explained a few times, including during the last Media Summit,
> >> but it seems you missed the point.
> >> 
> >> As shown on our stats:
> >> 	https://linuxtv.org/patchwork_stats.php
> >> 
> >> We're receiving about 400 to 1000 patches per month, meaning 18 to 45
> >> patches per working day (22 days/month). From those, we accept about
> >> 100 to 300 patches per month (4.5 to 13.6 patches per day).
> >> 
> >> Currently, I review all accepted patches.  
> > 
> > As others have said or hinted, this is where things start going wrong.
> > As a maintainer your duty isn't to work for 24h a day and review every
> > single patch. The duty of a maintainer is to help the subsystem stay
> > healthy and move forward. This can involve lots of technical work, but
> > it doesn't have to, that can also be delegated (providing, of course,
> > that the subsystem would have technically competent and reliable
> > contributors who would be willing to help there). In my opinion
> > maintaining a subsystem is partly a technical job, and partly a social
> > job. Being excellent at both is the icing on the cake, not a minimal
> > requirement.
> 
> There are a couple of reasons why I keep doing that. Among them:
> 
> 1) I'd like to follow what's happening in the subsystem. Reviewing the
> patches allows me to have at least a rough idea about what's going on,
> which makes it easier when we need to discuss possible changes to
> the core;

I don't think anyone is calling for maintainers not to follow what is
happening :-) That should however not by itself be a reason to introduce
bottlenecks.

> 2) An additional reviewer improves code quality. One piece of feedback
> I get from sub-maintainers is that we need more core review. So, I'm
> doing my part.

I agree here too, but once again, it's not a reason to introduce
bottlenecks. You can certainly catch issues in patches that are
submitted, and it can then improve quality of the code that gets merged,
but that's true for every experienced reviewer, and we don't require
review coverage of all core developers for every single patch.

> 3) I like my work.
> 
> >> I have bandwidth to review 4.5 to 13.6 patches per day, though not without
> >> a lot of personal effort. For that, I use part of my spare time, as I have
> >> other duties, plus I develop patches myself. So, in order to be able to
> >> handle those, I typically work almost non-stop starting at 6am, sometimes
> >> going up to 10pm. Also, when there is too much stuff pending (like in
> >> busy months), I handle patches during weekends as well.
> > 
> > I wasn't aware of your personal work schedule, and I'm sorry to hear
> > it's so extreme. This is not sustainable, and I think this clearly shows
> > that a purely flat tree model with a single maintainer has difficulty
> > scaling for large subsystems. If anything, this calls in my opinion for
> > increasing the pull network depth to make your job bearable again.
> 
> It has been sustainable. I've been doing it for the last 10 years.

I'm sorry, but the above description doesn't look at all sustainable to
me, or even healthy (neither from a person's point of view nor from a
subsystem's point of view). The fact that the status quo has been
preserved for a long time doesn't necessarily mean it's a desirable or
good option.

> It is not every day I go from 6am to 10pm. Also, it is not that I don't have
> a social life. I still have time for my hobbies and for my family.
> 
> >> However, 45 patches/day (225 patches per week) is a lot for me to
> >> review. I can't commit to handle such amount of patches.
> >> 
> >> That's why I review patches after a first review from the other
> >> media maintainers. The way I identify the patches I should review is
> >> when I receive pull requests.
> >> 
> >> We could do a different workflow. For example, once a media maintainer
> >> reviews a patch, it could be delegated to me in patchwork. This would likely 
> >> increase the time for merging stuff, as the workflow would change from:
> >> 
> >>  +-------+    +------------------+    +---------------+
> >>  | patch | -> | media maintainer | -> | submaintainer | 
> >>  +-------+    +------------------+    +---------------+
> >> 
> >> to: 
> >> 
> >>  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
> >>  | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
> >>  +-------+    +------------------+    +---------------+    +------------------+    +---------------+
> >> 
> >>   \------------------------v--------------------------/    \---------------------v------------------/
> >> 			Patchwork                                           Pull Request
> >> 
> >> The pull request part of the new chain could eventually be (semi-)automated
> >> by some scripting that would just compute a checksum of the received patches 
> >> that I had previously reviewed. If it matches, and if it passes the 
> >> usual checks I run for PR patches, it would be pushed to some tree. Still, it 
> >> would take more time than the previous flow.
> > 
> > I'm sorry, but I don't think this goes in the right direction. With the
> > number of patches increasing, and the number of hours in a maintainer's
> > day desperately not agreeing to increase above 24, the only scalable
> > solution I see is to stop reviewing every single patch that is accepted
> > in the subsystem tree, through delegation/sharing of maintainer's
> > duties, and trust. I know it can be difficult to let go of a driver one
> > has authored and let it live its life, so I can only guess the
> > psychological effect is much worse for a whole subsystem. I've authored
> > drivers that I cared and still care about, and I need to constantly
> > remind myself that too much love can lead to suffocation. The most loving
> > parent has to accept that their children will one day leave home, but
> > that it doesn't mean their lives will part forever. I think the same
> > applies to free software.
> 
> That's not the point. The point is that, while I have time for doing
> patch reviews, I want to keep doing them.
> 
> Also, as I said, this is media dirty laundry: whether I will keep
> doing patch reviews or not - and how this will work - is something for
> our internal discussions, and not for KS.

As Mark said, while this discussion uses the media subsystem as an
example, how best to handle pull networks is a kernel-wide
problem.

> >> Also, as also discussed during the media summit, in order to have such
> >> kind of automation, we would need to improve our infrastructure, moving
> >> the tests from a noisy/heated server I have over my desk to some VM
> >> inside the cloud, once we get funds for it.  
> > 
> > Sure, and I think this is a topic that would gain from being discussed
> > with a wider audience. The media subsystem isn't the only one to be
> > large enough that it would benefit a lot from automation (I would even
> > argue that all subsystems could benefit from that), so sharing
> > experiences, and hearing other subsystem's wishes, would be useful here.
> 
> Maybe.
> 
> Are there any other subsystems currently working to get funding for
> hosting/automation?

The example that immediately comes to my mind is DRM/KMS (which since
v4.0 has merged between 2 and 8 times as many patches as the media
subsystem, depending on the kernel version, so they require lots of
review bandwidth as well). With the X.org Foundation, under the SPI
umbrella, they raise a significant amount of money to cover the GitLab
CI hosting costs (the cost comes from hosting an open-source GitLab
instance, not from licenses paid to GitLab). I am personally quite fond of
going through a real non-profit foundation, as it brings transparency to
the accounting.

> >> In any case, a discussion that affects the patch latency and our internal
> >> procedures within the media subsystem is something that should be discussed
> >> with other media maintainers, and not at KS.
> > 
> > Isn't improving patch latency something that would be welcome throughout
> > the kernel ?
> 
> Your proposed change won't improve it. It will either stay the same,
> if we keep the current flow:
> 
> 	  +-------+    +------------------+    +---------------+
> 	  | patch | -> | media maintainer | -> | submaintainer | 
> 	  +-------+    +------------------+    +---------------+
> 
> (either if I review the patch or not, the flow will be the same - and
> so the patch latency)
> 
> Or make it higher, if we change it to:
> 
>   +-------+    +------------------+    +---------------+    +------------------+    +---------------+
>   | patch | -> | media maintainer | -> | submaintainer | -> | media maintainer | -> | submaintainer | 
>   +-------+    +------------------+    +---------------+    +------------------+    +---------------+
>  
>    \------------------------v--------------------------/    \---------------------v------------------/
>  			Patchwork                                           Pull Request

The whole idea is that it wouldn't need to go through you in the first
place. Patches would be reviewed and applied by "submaintainers" (I've
grown quite unhappy with that word as it can be quite belittling, but
that's a separate issue) to their tree, who would then send pull
requests to you.

> More below:
> 
> >> -
> >> 
> >> That's said, one day I may not be able to review all accepted patches.
> >> When this day comes, I'll just apply the pull requests I receive.
> >> 
> >> -
> >> 
> >> Finally, if you're so interested on improving our maintenance model,
> >> I beg you: please handle the patches delegated to you:
> >> 
> >> 	https://patchwork.linuxtv.org/project/linux-media/list/?series=&submitter=&state=&q=&archive=&delegate=2510
> >> 
> >> As we agreed in our media meetings, a couple of weeks ago I handled 
> >> about ~60 patches that had been waiting for your review since 2017 - 
> >> basically the ones not touching the drivers you currently
> >> maintain - but there are still 23 patches sent between 2013-2018
> >> over there, plus the 48 patches sent in 2019.
> 
> Reviewing the patches on your queue or delegating them to others is
> actually a concrete thing you can do in order to reduce the patch
> handling latency.

I've purposefully refrained from answering this part in my previous
e-mail, but as you bring it up, here's what I had written and deleted.

We are here reaching the interesting topic of how to motivate (or
demotivate) contributors. I think it's worth a global discussion as it
applies to the kernel in general (and not just to the linux-media
subsystem), but I think it would overtake this discussion thread. If
someone has an interest in discussing this topic, let me know and I'll
split it to another thread.

-- 
Regards,

Laurent Pinchart


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17 13:31                             ` Mauro Carvalho Chehab
@ 2019-06-17 14:26                               ` Takashi Iwai
  2019-06-19  7:53                               ` Dan Carpenter
  1 sibling, 0 replies; 77+ messages in thread
From: Takashi Iwai @ 2019-06-17 14:26 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: James Bottomley, media-submaintainers, ksummit

On Mon, 17 Jun 2019 15:31:15 +0200,
Mauro Carvalho Chehab wrote:
> 
> Em Mon, 17 Jun 2019 09:01:06 +0200
> Geert Uytterhoeven <geert@linux-m68k.org> escreveu:
> 
> > Hi Mauro,
> > 
> > On Fri, Jun 14, 2019 at 7:48 PM Mauro Carvalho Chehab
> > <mchehab+samsung@kernel.org> wrote:
> > > Em Fri, 14 Jun 2019 09:16:34 -0700
> > > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:  
> > > > On Fri, 2019-06-14 at 13:04 -0300, Mauro Carvalho Chehab wrote:  
> > > > > Em Fri, 14 Jun 2019 08:49:46 -0700
> > > > > James Bottomley <James.Bottomley@HansenPartnership.com> escreveu:  
> > > > > > Actually, this leads me to the patch acceptance criteria: Is there
> > > > > > value in requiring reviews?  We try to do this in SCSI (usually
> > > > > > only one review), but if all reviewers add a
> > > > > >
> > > > > > Reviewed-by:
> > > > > >
> > > > > > tag, which is accumulated in the tree, your pull machinery can
> > > > > > detect it on all commits in the pull and give you an automated
> > > > > > decision about whether to accept the pull or not.  If you require
> > > > > > two with one from a list of designated reviewers, it can do that as
> > > > > > well (with a bit more complexity in the pull hook script).
> > > > > >
> > > > > > So here's the question: If I help you script this, would you be
> > > > > > willing to accept pull requests in the media tree with this check
> > > > > > in place?  I'm happy to do this because it's an interesting
> > > > > > experiment to see if we can have automation offload work currently
> > > > > > done by humans.  
> > > > >
> > > > > We could experiment with something like that, provided that people are
> > > > > aware that it can be undone if something goes wrong.
> > > > >
> > > > > Yet, as we discussed at the Media Summit, we currently have an
> > > > > issue: our infrastructure lack resources for such kind of
> > > > > automation.  
> > > >
> > > > This one doesn't require an automation infrastructure: the script runs
> > > > as a local pull hook on the machine you accept the pull request from
> > > > (presumably your laptop?)  
> > >
> > > No, I run it on a 40-core HP server that sits below my desk. I turn it on
> > > only when doing patch review (to save power, and because it produces a lot
> > > of heat in the small room where I work).
> > >
> > > Right now, I use a script which converts a pull request into a quilt tree.
> > > Then, for each patch there, after a manual review, I run:
> > 
> > I think this process can be improved/optimized:
> > 
> > >         - checkpatch --strict  
> > 
> > Should have been done by your (trusted) submaintainer that sent you
> > the pull request.
> 
> Things are getting better with time, but I still catch issues - which
> seems to indicate that people sometimes forget to run it.
> 
> On a recent case, I received a few pull requests lacking the SOB from
> the patch author.

But you can simply refuse to pull in such a fatal-error case, instead
of fixing it on your side.  And in a trivial-error case, you can apply
the fix after pulling, too.

A push-back stops the flow, but at the same time it helps subsystem
maintainers learn something, so it's not always bad.
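
As a sketch of how cheap such a gate can be (the helper name here is
made up, not an existing tool; a real pre-merge hook would run it over
the pulled range via git rev-list):

```shell
#!/bin/sh
# Hypothetical helper: read a commit message on stdin and fail if it
# lacks a Signed-off-by trailer from the given author e-mail.
# A pre-merge hook could loop over the pulled range, e.g.:
#   for c in $(git rev-list HEAD..FETCH_HEAD); do
#       git log -1 --format=%B "$c" |
#           check_author_sob "$(git log -1 --format=%ae "$c")" || exit 1
#   done
check_author_sob() {
    grep -qi "^Signed-off-by:.*$1" || {
        echo "missing Signed-off-by from $1" >&2
        return 1
    }
}
```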


thanks,

Takashi


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17 12:28                   ` Mark Brown
@ 2019-06-17 16:48                     ` Tim.Bird
  2019-06-17 17:23                       ` Geert Uytterhoeven
  2019-06-17 23:13                       ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 77+ messages in thread
From: Tim.Bird @ 2019-06-17 16:48 UTC (permalink / raw)
  To: broonie, mchehab+samsung
  Cc: ksummit-discuss, James.Bottomley, media-submaintainers, dvyukov



> -----Original Message-----
> From: Mark Brown [mailto:broonie@kernel.org]
> 
> On Mon, Jun 17, 2019 at 08:03:15AM -0300, Mauro Carvalho Chehab wrote:
...
> > Are there any other subsystems currently working to get funding for
> > hosting/automation?
> 
> Not sure if it's specifically what you're looking at but there's stuff
> going on that's at least very adjacent to this, more from the angle of
> providing general infrastructure than subsystem specific things and
> currently mainly focused on getting tests run.  To me that sort of
> approach seems good since it avoids duplicated efforts between
> subsystems.
> 
> There's people working on things like KernelCI (people are working on
> expanding to include runtime tests, and there's active efforts on
> securing more funding) and CKI which aren't focused on specific
> subsystems but more on general infrastructure.  Tim Bird (CCed) has been
> pushing on trying to get people working in this area talking to each
> other - there's a mailing list and monthly call:
> 
>     https://elinux.org/Automated_Testing
> 
> and one of the things people are talking about is what sorts of things
> the kernel community would find useful here so it's probably useful at
> least putting ideas for things that'd be useful in the heads of people
> who are interested in working on the infrastructure and automation end
> of things.

Indeed.  Although I haven't piped up, I have been paying close attention to
these discussions, and I know from talking to others who are involved with
automated testing that they are as well.

I'm about to go on vacation, so I'll be incommunicado for about a week, but
just to highlight some of the stuff I'm keen on following up on:
 - if Linus likes the syzbot notification mechanism, we should definitely
follow up and try to have more tools and frameworks adopt that mechanism.
It's on my to-do list to investigate this further and see how it would integrate
with my particular framework (Fuego).  I think Dmitry said we should avoid
introducing bots with lots of different notification mechanisms, as that will
overload developers and just turn them off.
- I recently saw a very cool system for isolating new warnings (at all levels
of W=[0-3]) introduced by a new patch.  This would be a great thing to 
add to the kernel build system, IMHO, and that's also on my to-do list.
- I was very interested in discussions about the mechanism to check whether
a patch modified the binary or not.   It would be nice to make this part of
the build system as well (something like: "make check-for-binary-change").

Both of the latter items require the ability to set a baseline to compare
against.  So the usage sequence would be:
 - make save-baseline
 - <do commit or pull>
 - make show-new-warnings
or
 - make check-for-binary-change
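
As a rough sketch of what show-new-warnings could do under the hood
(the log file names are made up; the only subtle part is normalizing
away line/column numbers so that moved code doesn't show up as a new
warning):

```shell
#!/bin/sh
# Sketch: list warnings that appear in a new build log but not in the
# saved baseline.  File names here are illustrative; a save-baseline
# target would just stash the normalized old log under the build dir.
norm_warnings() {
    # Strip the line:col positions so unrelated code motion doesn't
    # make an old warning look new.
    grep 'warning:' "$1" | sed 's/:[0-9]*:[0-9]*:/::/' | sort -u
}
new_warnings() {
    norm_warnings "$1" > /tmp/baseline.w
    norm_warnings "$2" > /tmp/current.w
    comm -13 /tmp/baseline.w /tmp/current.w   # lines only in the new log
}
```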

Just FYI, some of the stuff that the automated testing folk are working on
are:
 - standards for lab equipment management (/board management)
 - standards for test definition
 - standards for test results format and aggregation
 - setting up a multi-framework results aggregation site

Keep the ideas flowing!  While we sometimes don't chime in, I know people
are listening and thinking about the ideas presented.  And many of us will be
at plumbers for live discussions, and at ELC Europe.  A few of us are starting a new event
called the Automated Testing Summit that we're co-locating (at least this year) with 
OSSEU/ELCE on October 31, in Lyon, France.  Check the Linux Foundation events site
for more information.
-- Tim


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17 16:48                     ` Tim.Bird
@ 2019-06-17 17:23                       ` Geert Uytterhoeven
  2019-06-17 23:13                       ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 77+ messages in thread
From: Geert Uytterhoeven @ 2019-06-17 17:23 UTC (permalink / raw)
  To: Bird, Timothy
  Cc: ksummit, James Bottomley, media-submaintainers,
	Mauro Carvalho Chehab, Dmitry Vyukov

Hi Tim,

On Mon, Jun 17, 2019 at 6:48 PM <Tim.Bird@sony.com> wrote:
> - I recently saw a very cool system for isolating new warnings (at all levels
> of W=[0-3]) introduced by a new patch.  This would be a great thing to
> add to the kernel build system, IMHO, and that's also on my to-do list.
> - I was very interested in discussions about the mechanism to check whether
> a patch modified the binary or not.   It would be nice to make this part of
> the build system as well (something like: "make check-for-binary-change"
>
> Both of the latter items require the ability to set a baseline to compare
> against.  So the usage sequence would be:
>  - make save-baseline
>  - <do commit or pull>
>  - make show-new-warnings

https://github.com/geertu/linux-scripts/blob/master/linux-log-diff

linux-log-diff build.log.old build.log.new

That's what I use for "Build regressions/improvements in v5.2-rc5"
https://lore.kernel.org/lkml/20190617071937.22838-1-geert@linux-m68k.org/

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17 16:48                     ` Tim.Bird
  2019-06-17 17:23                       ` Geert Uytterhoeven
@ 2019-06-17 23:13                       ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-17 23:13 UTC (permalink / raw)
  To: Tim.Bird; +Cc: James.Bottomley, media-submaintainers, dvyukov, ksummit-discuss

Em Mon, 17 Jun 2019 16:48:03 +0000
<Tim.Bird@sony.com> escreveu:

> > -----Original Message-----
> > From: Mark Brown [mailto:broonie@kernel.org]
> > 
> > On Mon, Jun 17, 2019 at 08:03:15AM -0300, Mauro Carvalho Chehab wrote:  
> ...
> > > Are there any other subsystems currently working to get funding for
> > > hosting/automation?  
> > 
> > Not sure if it's specifically what you're looking at but there's stuff
> > going on that's at least very adjacent to this, more from the angle of
> > providing general infrastructure than subsystem specific things and
> > currently mainly foucsed on getting tests run.  To me that sort of
> > approach seems good since it avoids duplicated efforts between
> > subsystems.
> > 
> > There's people working on things like KernelCI (people are working on
> > expanding to include runtime tests, and there's active efforts on
> > securing more funding) and CKI which aren't focused on specific
> > subsystems but more on general infrastructure.  Tim Bird (CCed) has been
> > pushing on trying to get people working in this area talking to each
> > other - there's a mailing list and monthly call:
> > 
> >     https://elinux.org/Automated_Testing
> > 
> > and one of the things people are talking about is what sorts of things
> > the kernel community would find useful here so it's probably useful at
> > least putting ideas for things that'd be useful in the heads of people
> > who are interested in working on the infrastructure and automation end
> > of things.  

Nice to know. Yeah, this is the sort of thing we're looking for,
in order to help our workflow.

> 
> Indeed.  Although I haven't piped up, I have been paying close attention to
> these discussions, and I know from talking to others who are involved with
> automated testing that they are as well.
> 
> I'm about to go on vacation, so I'll be incommunicado for about a week, but
> just to highlight some of the stuff I'm keen on following up on:
>  - if Linus like the syzbot notification mechanism, we should definitely
> follow up and try to have more tools and frameworks adopt that mechanism
> It's on my to-do list to investigate this further and see how it would integrate
> with my particular framework (Fuego). 

At least on media, syzbot has provided us with some interesting reports,
keeping us busy fixing stuff :-)

> I think Dmitry said we should avoid
> introducing bots with lots of different notification mechanisms, as that will
> overload developers and just turn them off.

Yeah, receiving the same thing from different sources won't help,
and will just bug developers for no gain.

> - I recently saw a very cool system for isolating new warnings (at all levels
> of W=[0-3]) introduced by a new patch. 

The last time I tested W=2, it was almost unusable: lots of warnings due to
the char/unsigned char mess. There are simply too many places where these
can be used interchangeably. Cleaning up this mess would be a huge effort
for almost no gain.

W=1 did help us to find and fix bugs.

Getting a report about new W=2/W=3 warnings sounds interesting, but if 
those things end up adding too much noise, it's better to disable them.

> This would be a great thing to 
> add to the kernel build system, IMHO, and that's also on my to-do list.
> - I was very interested in discussions about the mechanism to check whether
> a patch modified the binary or not.   It would be nice to make this part of
> the build system as well (something like: "make check-for-binary-change"

Yeah, something like that doesn't sound complex to implement (using
objdump -S). Tests are required. I'll see if I can find some time to do
more tests here.

> 
> Both of the latter items require the ability to set a baseline to compare
> against.  So the usage sequence would be:
>  - make save-baseline
>  - <do commit or pull>
>  - make show-new-warnings
> or
>  - make check-for-binary-change

Probably the most complex part of such a script would be identifying
which modules are affected by a change. I mean, the script would 
need to identify which *.o files will be recompiled after a change to
the source code.

GNU make will know, but I'm not sure if there is a way to retrieve
that information from it. Maybe the build could do something like:

1) build without the patch that needs to be checked;

2) replace make with a program that listens for inotify events
   and calls make internally;

3) that yields a list of all *.o files modified due to the new
   patch; for those, store the disassembly generated with
   'objdump -S';

4) revert the patch, rebuild, and get the 'objdump -S' output for
   the same object files;

5) if they're identical, report it. Otherwise, show the asm
   differences for someone to check manually.

I suspect that something like the above may work, but it needs testing.

I'll try to implement something like that and see what happens.
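To make the idea concrete, steps 3)-5) could be sketched as a pair of
shell helpers. Everything here (the function names, the baseline
directory layout, the overridable dump command) is hypothetical, and the
list of affected *.o files from step 2) is assumed to already be known:

```shell
#!/bin/sh
# Snapshot the disassembly of a set of object files, then compare a
# later build against it.  The dump command is overridable via $DUMP so
# the logic can be exercised without real object files; the kernel case
# would use "objdump -S".
DUMP="${DUMP:-objdump -S}"

save_baseline() {            # save_baseline <dir> <obj>...
    dir="$1"; shift
    mkdir -p "$dir"
    for obj in "$@"; do
        $DUMP "$obj" > "$dir/$(basename "$obj").dump"
    done
}

check_binary_change() {      # check_binary_change <dir> <obj>...
    dir="$1"; shift; changed=0
    for obj in "$@"; do
        # On mismatch, show the asm differences for manual review.
        if ! $DUMP "$obj" | diff -u "$dir/$(basename "$obj").dump" -; then
            changed=1
        fi
    done
    [ "$changed" -eq 0 ] && echo "no binary changes"
}
```

The hard part remains step 2): producing the list of affected object
files in the first place.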

> Just FYI, some of the stuff that the automated testing folk are working on
> are:
>  - standards for lab equipment management (/board management)
>  - standards for test definition
>  - standards for test results format and aggregation
>  - setting up a multi-framework results aggregation site
> 
> Keep the ideas flowing!  While we sometimes don't chime in, I know people
> are listening and thinking about the ideas presented.  And many of us will be
> at plumbers for live discussions, and at ELC Europe.  A few of us are starting a new event
> called Automated Testing Summit that we're co-locating (at least this year) with 
> OSSUE/ELCE on October 31, in Lyon France.  Check the Linux Foundation events site
> for more information.
> -- Tim
> 
> _______________________________________________
> Ksummit-discuss mailing list
> Ksummit-discuss@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss



Thanks,
Mauro

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-17 13:31                             ` Mauro Carvalho Chehab
  2019-06-17 14:26                               ` Takashi Iwai
@ 2019-06-19  7:53                               ` Dan Carpenter
  2019-06-19  8:13                                 ` [Ksummit-discuss] [kbuild] " Philip Li
  2019-06-19  8:33                                 ` [Ksummit-discuss] " Daniel Vetter
  1 sibling, 2 replies; 77+ messages in thread
From: Dan Carpenter @ 2019-06-19  7:53 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, kbuild
  Cc: James Bottomley, media-submaintainers, ksummit

On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab wrote:
> Also, usually, the bots don't build with W=1, as, on most subsystems,
> this cause lots of warnings[1].
> 
> [1] On media, we have zero warnings with W=1.
> 

We could ask the kbuild devs if they would consider making W=1 a per
tree option.

regards,
dan carpenter


* Re: [Ksummit-discuss] [kbuild] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19  7:53                               ` Dan Carpenter
@ 2019-06-19  8:13                                 ` " Philip Li
  2019-06-19  8:33                                 ` [Ksummit-discuss] " Daniel Vetter
  1 sibling, 0 replies; 77+ messages in thread
From: Philip Li @ 2019-06-19  8:13 UTC (permalink / raw)
  To: Dan Carpenter
  Cc: ksummit, James Bottomley, media-submaintainers, kbuild,
	Mauro Carvalho Chehab

On Wed, Jun 19, 2019 at 10:53:51AM +0300, Dan Carpenter wrote:
> On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab wrote:
> > Also, usually, the bots don't build with W=1, as, on most subsystems,
> > this cause lots of warnings[1].
> > 
> > [1] On media, we have zero warnings with W=1.
> > 
> 
> We could ask the kbuild devs if they would consider making W=1 a per
> tree option.
thanks for the suggestion, we can consider this, at least for specific
repos.

> 
> regards,
> dan carpenter
> 
> _______________________________________________
> kbuild mailing list
> kbuild@lists.01.org
> https://lists.01.org/mailman/listinfo/kbuild


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19  7:53                               ` Dan Carpenter
  2019-06-19  8:13                                 ` [Ksummit-discuss] [kbuild] " Philip Li
@ 2019-06-19  8:33                                 ` " Daniel Vetter
  2019-06-19 14:39                                   ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 77+ messages in thread
From: Daniel Vetter @ 2019-06-19  8:33 UTC (permalink / raw)
  To: Dan Carpenter
  Cc: Mauro Carvalho Chehab, James Bottomley, media-submaintainers,
	kbuild, ksummit

On Wed, Jun 19, 2019 at 9:56 AM Dan Carpenter <dan.carpenter@oracle.com> wrote:
>
> On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab wrote:
> > Also, usually, the bots don't build with W=1, as, on most subsystems,
> > this cause lots of warnings[1].
> >
> > [1] On media, we have zero warnings with W=1.
> >
>
> We could ask the kbuild devs if they would consider making W=1 a per
> tree option.

No need to ask, just add a Kconfig option which sets additional cflags
for your tree and you're good. The usual combinatorial testing will
discover the new warnings. That's at least what we do for i915.ko
(including -Werror). Gets the job done.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19  8:33                                 ` [Ksummit-discuss] " Daniel Vetter
@ 2019-06-19 14:39                                   ` Mauro Carvalho Chehab
  2019-06-19 14:48                                     ` [Ksummit-discuss] [media-submaintainers] " Laurent Pinchart
  0 siblings, 1 reply; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-19 14:39 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: James Bottomley, media-submaintainers, kbuild, ksummit, Dan Carpenter

Em Wed, 19 Jun 2019 10:33:23 +0200
Daniel Vetter <daniel.vetter@ffwll.ch> escreveu:

> On Wed, Jun 19, 2019 at 9:56 AM Dan Carpenter <dan.carpenter@oracle.com> wrote:
> >
> > On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab wrote:  
> > > Also, usually, the bots don't build with W=1, as, on most subsystems,
> > > this cause lots of warnings[1].
> > >
> > > [1] On media, we have zero warnings with W=1.
> > >  
> >
> > We could ask the kbuild devs if they would consider making W=1 a per
> > tree option.  
> 
> No need to ask, just add a Kconfig which sets additional cflags for
> you for your tree and your good. The usual combinatorial testing will
> discover the new warnings. That's at least what we do for i915.ko
> (including -Werror). Gets the job done.

While this works, having a W=1 per tree would, IMHO, work better: as
new warnings get added to W=1, we'd get them for free.

-

I don't like the idea of having -Werror added automatically, as
this may cause problems when people try to compile with a different
compiler version - or on some weird architectures.

Especially on drivers that build with COMPILE_TEST[1]: depending on the
architecture they're built for, false-positive warnings arise,
especially on unusual architectures which have different definitions
for some arch-specific typedefs (signed/unsigned, a different integer
type, use or not of volatile, a different address space, etc).

[1] On media, our goal is that everything should build with
COMPILE_TEST.

Thanks,
Mauro


* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 14:39                                   ` Mauro Carvalho Chehab
@ 2019-06-19 14:48                                     ` " Laurent Pinchart
  2019-06-19 15:19                                       ` Mauro Carvalho Chehab
                                                         ` (2 more replies)
  0 siblings, 3 replies; 77+ messages in thread
From: Laurent Pinchart @ 2019-06-19 14:48 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: ksummit, James Bottomley, media-submaintainers, kbuild, Dan Carpenter

Hi Mauro,

On Wed, Jun 19, 2019 at 11:39:02AM -0300, Mauro Carvalho Chehab wrote:
> Em Wed, 19 Jun 2019 10:33:23 +0200 Daniel Vetter escreveu:
> > On Wed, Jun 19, 2019 at 9:56 AM Dan Carpenter wrote:
> > > On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab wrote:  
> > > > Also, usually, the bots don't build with W=1, as, on most subsystems,
> > > > this cause lots of warnings[1].
> > > >
> > > > [1] On media, we have zero warnings with W=1.
> > >
> > > We could ask the kbuild devs if they would consider making W=1 a per
> > > tree option.  
> > 
> > No need to ask, just add a Kconfig which sets additional cflags for
> > you for your tree and your good. The usual combinatorial testing will
> > discover the new warnings. That's at least what we do for i915.ko
> > (including -Werror). Gets the job done.
> 
> While this works, having a W=1 per tree would, IMHO, work better, as,
> as new warnings get added to W=1, we'll get those for free.
> 
> -
> 
> I don't like the idea of having -Werror being automatically added, as
> this may cause problems when people try to compile with a different
> compiler version - or on some weird architectures.

It's not automatic though, if it depends on a Kconfig option that is
disabled by default. The build bots can enable it, while users would
ignore it. That being said, having it as a per-tree build bot option
should work as well.

> Specially on drivers that build with COMPILE_TEST[1], depending on the 
> architecture they're built, false-positive warnings rise, specially
> on unusual architecture with has different defines for some 
> arch-specific typedefs (signed/unsigned, different integer type,
> usage or not of volatile, a different address space, etc).

All my kernel compilation scripts use -Werror, and that does a great job
at catching problems. It can be a bit annoying at times when someone
introduces a warning, but usually a fix will already be posted when I
notice my build breaks. The more we use -Werror globally, the faster
those new warnings will be caught.

> [1] On media, our goal is that everything should build with
> COMPILE_TEST.

-- 
Regards,

Laurent Pinchart


* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 14:48                                     ` [Ksummit-discuss] [media-submaintainers] " Laurent Pinchart
@ 2019-06-19 15:19                                       ` Mauro Carvalho Chehab
  2019-06-19 15:46                                       ` James Bottomley
  2019-06-19 15:56                                       ` Mark Brown
  2 siblings, 0 replies; 77+ messages in thread
From: Mauro Carvalho Chehab @ 2019-06-19 15:19 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: ksummit, James Bottomley, media-submaintainers, kbuild, Dan Carpenter

Em Wed, 19 Jun 2019 17:48:08 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:

> Hi Mauro,
> 
> On Wed, Jun 19, 2019 at 11:39:02AM -0300, Mauro Carvalho Chehab wrote:
> > Em Wed, 19 Jun 2019 10:33:23 +0200 Daniel Vetter escreveu:  
> > > On Wed, Jun 19, 2019 at 9:56 AM Dan Carpenter wrote:  
> > > > On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab wrote:    
> > > > > Also, usually, the bots don't build with W=1, as, on most subsystems,
> > > > > this cause lots of warnings[1].
> > > > >
> > > > > [1] On media, we have zero warnings with W=1.  
> > > >
> > > > We could ask the kbuild devs if they would consider making W=1 a per
> > > > tree option.    
> > > 
> > > No need to ask, just add a Kconfig which sets additional cflags for
> > > you for your tree and your good. The usual combinatorial testing will
> > > discover the new warnings. That's at least what we do for i915.ko
> > > (including -Werror). Gets the job done.  
> > 
> > While this works, having a W=1 per tree would, IMHO, work better, as,
> > as new warnings get added to W=1, we'll get those for free.
> > 
> > -
> > 
> > I don't like the idea of having -Werror being automatically added, as
> > this may cause problems when people try to compile with a different
> > compiler version - or on some weird architectures.  
> 
> It's not automatic though, if it depends on a Kconfig option that is
> disabled by default. The built bots can enable it, while users would
> ignore it. That being said, having it as a per-tree build bot option
> should work as well.

Having a Kconfig option is OK.

What I'm saying is that I don't like the idea of having something
like:

	ccflags-y := -Werror

unconditionally added to some Makefile. Having it depend on a Kconfig
option (or manually added with something like "make CFLAGS=-Werror") is OK.

We had -Werror unconditionally enabled in the past on a couple of
Makefiles under media.

-

It seems, however, that there's no consensus in that regard, as
some subsystems enable it unconditionally:

	ccflags-y := -Werror

Others enable when there's some other make option:

	ifeq (, $(findstring -W,$(EXTRA_CFLAGS)))
		ccflags-y += -Werror
	endif

And yet other ones have their own subsystem-specific option to
enable it:

	ccflags-$(CONFIG_PPC_WERROR)  += -Werror
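For completeness, the Kconfig side of that last variant could look
something like the following; the symbol name and wording are made up
for illustration, not copied from any existing subsystem:

```kconfig
config FOO_WERROR
	bool "Treat foo subsystem compiler warnings as errors"
	default n
	help
	  Build the foo subsystem with -Werror, so that any compiler
	  warning breaks the build.  Mainly intended for developers
	  and build bots; leave disabled for normal use.
```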


Thanks,
Mauro


* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 14:48                                     ` [Ksummit-discuss] [media-submaintainers] " Laurent Pinchart
  2019-06-19 15:19                                       ` Mauro Carvalho Chehab
@ 2019-06-19 15:46                                       ` James Bottomley
  2019-06-19 16:23                                         ` Mark Brown
  2019-06-20 10:36                                         ` Jani Nikula
  2019-06-19 15:56                                       ` Mark Brown
  2 siblings, 2 replies; 77+ messages in thread
From: James Bottomley @ 2019-06-19 15:46 UTC (permalink / raw)
  To: Laurent Pinchart, Mauro Carvalho Chehab
  Cc: media-submaintainers, kbuild, Dan Carpenter, ksummit

On Wed, 2019-06-19 at 17:48 +0300, Laurent Pinchart wrote:
> Hi Mauro,
> 
> On Wed, Jun 19, 2019 at 11:39:02AM -0300, Mauro Carvalho Chehab
> wrote:
> > Em Wed, 19 Jun 2019 10:33:23 +0200 Daniel Vetter escreveu:
> > > On Wed, Jun 19, 2019 at 9:56 AM Dan Carpenter wrote:
> > > > On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab
> > > > wrote:  
> > > > > Also, usually, the bots don't build with W=1, as, on most
> > > > > subsystems, this cause lots of warnings[1].
> > > > > 
> > > > > [1] On media, we have zero warnings with W=1.
> > > > 
> > > > We could ask the kbuild devs if they would consider making W=1
> > > > a per tree option.  
> > > 
> > > No need to ask, just add a Kconfig which sets additional cflags
> > > for you for your tree and your good. The usual combinatorial
> > > testing will discover the new warnings. That's at least what we
> > > do for i915.ko (including -Werror). Gets the job done.
> > 
> > While this works, having a W=1 per tree would, IMHO, work better,
> > as, as new warnings get added to W=1, we'll get those for free.
> > 
> > -
> > 
> > I don't like the idea of having -Werror being automatically added,
> > as this may cause problems when people try to compile with a
> > different compiler version - or on some weird architectures.
> 
> It's not automatic though, if it depends on a Kconfig option that is
> disabled by default. The built bots can enable it, while users would
> ignore it. That being said, having it as a per-tree build bot option
> should work as well.

I really don't think well made build bots would enable this.  The
problem with -Werror is that it halts the build at the first problem.
What a generic build bot wants to do is compile the entire tree and
then diff the output to find the additional warnings for everything.  I
could see a tree-specific build bot being more interested (until the
build fails on an unrelated subsystem).

> > Specially on drivers that build with COMPILE_TEST[1], depending on
> > the architecture they're built, false-positive warnings rise,
> > specially on unusual architecture with has different defines for
> > some arch-specific typedefs (signed/unsigned, different integer
> > type, usage or not of volatile, a different address space, etc).
> 
> All my kernel compilation scripts use -Werror, and that does a great
> job at catching problems. It can be a bit annoying at times when
> someone introduces a warning, but usually a fix will already be
> posted when I notice my build breaks. The more we use -Werror
> globally, the faster those new warnings will be caught.

I buy this for small projects, and would own up to using it in my own,
because it's a great way to force contributors not to introduce
warnings in their patches, since the build breaks if they do.  The
problem with something huge like Linux, especially when it is fairly
deeply entwined with compiler specifics, is twofold:

   1. You're going to force us to annotate all those spurious warnings
      that we've been ignoring because gcc should get fixed; incorrectly
      flagged uninitialized variables being the most annoying.
   2. Different versions of gcc produce different warnings: so now we'll
      eventually have to target a specific gcc version and not upgrade
      until we're ready because newer versions come with shiny new
      warnings.

That's not to say we should forbid bots and subsystems from doing this,
that's what we're currently doing with the per-subdir enabling of
-Werror using subdir-ccflags-y if you look, but we shouldn't globally
mandate it.

James


* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 14:48                                     ` [Ksummit-discuss] [media-submaintainers] " Laurent Pinchart
  2019-06-19 15:19                                       ` Mauro Carvalho Chehab
  2019-06-19 15:46                                       ` James Bottomley
@ 2019-06-19 15:56                                       ` Mark Brown
  2019-06-19 16:09                                         ` Laurent Pinchart
  2 siblings, 1 reply; 77+ messages in thread
From: Mark Brown @ 2019-06-19 15:56 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: ksummit, James Bottomley, media-submaintainers, kbuild,
	Mauro Carvalho Chehab, Dan Carpenter


On Wed, Jun 19, 2019 at 05:48:08PM +0300, Laurent Pinchart wrote:
> On Wed, Jun 19, 2019 at 11:39:02AM -0300, Mauro Carvalho Chehab wrote:

> > Specially on drivers that build with COMPILE_TEST[1], depending on the 
> > architecture they're built, false-positive warnings rise, specially
> > on unusual architecture with has different defines for some 
> > arch-specific typedefs (signed/unsigned, different integer type,
> > usage or not of volatile, a different address space, etc).

> All my kernel compilation scripts use -Werror, and that does a great job
> at catching problems. It can be a bit annoying at times when someone
> introduces a warning, but usually a fix will already be posted when I
> notice my build breaks. The more we use -Werror globally, the faster
> those new warnings will be caught.

-Werror is a bit user hostile, it can be incredibly irritating when
you're debugging things to get things like unused variable warnings from
your debug code or to be working with an unusual config/arch that throws
up warnings that aren't normally seen.  A clean build doesn't require us
to enable -Werror, it requires us to pay attention to warnings.



* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 15:56                                       ` Mark Brown
@ 2019-06-19 16:09                                         ` Laurent Pinchart
  0 siblings, 0 replies; 77+ messages in thread
From: Laurent Pinchart @ 2019-06-19 16:09 UTC (permalink / raw)
  To: Mark Brown
  Cc: ksummit, James Bottomley, media-submaintainers, kbuild,
	Mauro Carvalho Chehab, Dan Carpenter

Hi Mark,

On Wed, Jun 19, 2019 at 04:56:02PM +0100, Mark Brown wrote:
> On Wed, Jun 19, 2019 at 05:48:08PM +0300, Laurent Pinchart wrote:
> > On Wed, Jun 19, 2019 at 11:39:02AM -0300, Mauro Carvalho Chehab wrote:
> 
> > > Specially on drivers that build with COMPILE_TEST[1], depending on the 
> > > architecture they're built, false-positive warnings rise, specially
> > > on unusual architecture with has different defines for some 
> > > arch-specific typedefs (signed/unsigned, different integer type,
> > > usage or not of volatile, a different address space, etc).
> 
> > All my kernel compilation scripts use -Werror, and that does a great job
> > at catching problems. It can be a bit annoying at times when someone
> > introduces a warning, but usually a fix will already be posted when I
> > notice my build breaks. The more we use -Werror globally, the faster
> > those new warnings will be caught.
> 
> -Werror is a bit user hostile, it can be incredibly irritating when
> you're debugging things to get things like unused variable warnings from
> your debug code or to be working with an unusual config/arch that throws
> up warnings that aren't normally seen.  A clean build doesn't require us
> to enable -Werror, it requires us to pay attention to warnings.

I agree about the latter. Regarding debugging code I found it was just a
matter of getting used to it and avoiding generating warnings, even in
debug code. -Werror saved me from not noticing warnings introduced by my
code, and with it a git rebase -x compile test stops when I made a
mistake, which is really valuable. I'm not saying it should be enabled
through the whole kernel all of a sudden, but if we can slowly expand
its usage I think we'll end up with better code in the end.

-- 
Regards,

Laurent Pinchart


* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 15:46                                       ` James Bottomley
@ 2019-06-19 16:23                                         ` Mark Brown
  2019-06-20 12:24                                           ` Geert Uytterhoeven
  2019-06-20 10:36                                         ` Jani Nikula
  1 sibling, 1 reply; 77+ messages in thread
From: Mark Brown @ 2019-06-19 16:23 UTC (permalink / raw)
  To: James Bottomley
  Cc: ksummit, media-submaintainers, kbuild, Mauro Carvalho Chehab,
	Dan Carpenter


On Wed, Jun 19, 2019 at 08:46:19AM -0700, James Bottomley wrote:
> On Wed, 2019-06-19 at 17:48 +0300, Laurent Pinchart wrote:

> > It's not automatic though, if it depends on a Kconfig option that is
> > disabled by default. The built bots can enable it, while users would
> > ignore it. That being said, having it as a per-tree build bot option
> > should work as well.

> I really don't think well made build bots would enable this.  The
> problem with -Werror is it's single threaded on the first problem. 
> What a generic build bot wants to do is compile the entire tree and
> then diff the output to find the additional warnings for everything.  I
> could see a tree specific build bot being more interested (until the
> build fails on an unrelated subsystem).

If you're doing build coverage you can always use make -k and still
build everything but yeah.

> > All my kernel compilation scripts use -Werror, and that does a great
> > job at catching problems. It can be a bit annoying at times when

>    1. You're going to force us to annotate all those spurious warnings
>       that we've been ignoring because gcc should get fixed; incorrectly
>       flagged uninitialized variables being the most annoying.

The next time I have to write a "this just shuts up the warning, it
doesn't even consider if there might be a real problem" mail I'm
probably going to turn it into a form letter :(

>    2. Different versions of gcc produce different warnings: so now we'll
>       eventually have to target a specific gcc version and not upgrade
>       until we're ready because newer versions come with shiny new
>       warnings.

This isn't as bad as it used to be, since we have people looking at
new compiler versions as they come down the line and trying to ensure
that things work cleanly before the compilers even get released (Arnd
does a bunch of this, and I know some of the clang people are paying
attention as well).  It's something the compiler people have been
interested in as part of their QA; there's no guarantee everything will
be perfect, but things tend to do reasonably well these days.

A similar issue applies with older compiler versions and false
positives, people do look at that but it gets a bit less coverage.
There's infrastructure for this in KernelCI (which is currently used for
the clang/arm64 testing in production), if there were build capacity
available it'd be relatively easy in technical terms to have some
coverage there.



* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 15:46                                       ` James Bottomley
  2019-06-19 16:23                                         ` Mark Brown
@ 2019-06-20 10:36                                         ` Jani Nikula
  1 sibling, 0 replies; 77+ messages in thread
From: Jani Nikula @ 2019-06-20 10:36 UTC (permalink / raw)
  To: James Bottomley, Laurent Pinchart, Mauro Carvalho Chehab
  Cc: media-submaintainers, kbuild, ksummit, Dan Carpenter

On Wed, 19 Jun 2019, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> On Wed, 2019-06-19 at 17:48 +0300, Laurent Pinchart wrote:
>> Hi Mauro,
>> 
>> On Wed, Jun 19, 2019 at 11:39:02AM -0300, Mauro Carvalho Chehab
>> wrote:
>> > Em Wed, 19 Jun 2019 10:33:23 +0200 Daniel Vetter escreveu:
>> > > On Wed, Jun 19, 2019 at 9:56 AM Dan Carpenter wrote:
>> > > > On Mon, Jun 17, 2019 at 10:31:15AM -0300, Mauro Carvalho Chehab
>> > > > wrote:  
>> > > > > Also, usually, the bots don't build with W=1, as, on most
>> > > > > subsystems, this cause lots of warnings[1].
>> > > > > 
>> > > > > [1] On media, we have zero warnings with W=1.
>> > > > 
>> > > > We could ask the kbuild devs if they would consider making W=1
>> > > > a per tree option.  
>> > > 
>> > > No need to ask, just add a Kconfig which sets additional cflags
>> > > for you for your tree and your good. The usual combinatorial
>> > > testing will discover the new warnings. That's at least what we
>> > > do for i915.ko (including -Werror). Gets the job done.
>> > 
>> > While this works, having a W=1 per tree would, IMHO, work better,
>> > as, as new warnings get added to W=1, we'll get those for free.
>> > 
>> > -
>> > 
>> > I don't like the idea of having -Werror being automatically added,
>> > as this may cause problems when people try to compile with a
>> > different compiler version - or on some weird architectures.
>> 
>> It's not automatic though, if it depends on a Kconfig option that is
>> disabled by default. The built bots can enable it, while users would
>> ignore it. That being said, having it as a per-tree build bot option
>> should work as well.
>
> I really don't think well made build bots would enable this.  The
> problem with -Werror is it's single threaded on the first problem. 
> What a generic build bot wants to do is compile the entire tree and
> then diff the output to find the additional warnings for everything.  I
> could see a tree specific build bot being more interested (until the
> build fails on an unrelated subsystem).
>
>> > Specially on drivers that build with COMPILE_TEST[1], depending on
>> > the architecture they're built, false-positive warnings rise,
>> > specially on unusual architecture with has different defines for
>> > some arch-specific typedefs (signed/unsigned, different integer
>> > type, usage or not of volatile, a different address space, etc).
>> 
>> All my kernel compilation scripts use -Werror, and that does a great
>> job at catching problems. It can be a bit annoying at times when
>> someone introduces a warning, but usually a fix will already be
>> posted when I notice my build breaks. The more we use -Werror
>> globally, the faster those new warnings will be caught.
>
> I buy this for small projects, and would own up to using it in my own
> because it's a great way to force contributors not to introduce
> warnings in their patches if the build breaks.  The problem with
> something huge like linux, especially when it is fairly deeply entwined
> with compiler specifics, is twofold:
>
>    1. You're going to force us to annotate all those spurious warnings
>       that we've been ignoring because gcc should get fixed; incorrectly
>       flagged uninitialized variables being the most annoying.

In i915 we basically start off with -Wall -Wextra, and then disable the
warnings that we want ignored. Some of the disables are on a per-file
basis. We then have -Werror behind a config knob.

It's a nice way to clean up warnings in our corner of the codebase, and
ensure it stays that way. I'm sure other drivers and subsystems would
have a slightly different set of warnings disabled, and necessarily the
global config would need to be the union of those sets.
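The pattern described above looks roughly like the following in a
subdirectory Makefile; the warning choices, file name and CONFIG symbol
here are illustrative, not i915's exact set:

```makefile
# Start strict for the whole subdirectory...
subdir-ccflags-y := -Wall -Wextra
# ...then opt out of the warnings this subsystem has decided to ignore.
subdir-ccflags-y += -Wno-unused-parameter
subdir-ccflags-y += -Wno-type-limits
# Per-file opt-outs are possible too.
CFLAGS_foo_tables.o += -Wno-override-init
# Finally, -Werror behind a config knob for developers and build bots.
subdir-ccflags-$(CONFIG_FOO_WERROR) += -Werror
```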

>    2. Different versions of gcc produce different warnings: so now we'll
>       eventually have to target a specific gcc version and not upgrade
>       until we're ready because newer versions come with shiny new
>       warnings.

On the other hand this allows us to be more aware of the new warnings
and take advantage of them. But with -Werror it obviously only works in
a limited or local setting.

> That's not to say we should forbid bots and subsystems from doing this,
> that's what we're currently doing with the per-subdir enabling of
> -Werror using subdir-ccflags-y if you look, but we shouldn't globally
> mandate it.

Agreed.

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Graphics Center


* Re: [Ksummit-discuss] [media-submaintainers] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency
  2019-06-19 16:23                                         ` Mark Brown
@ 2019-06-20 12:24                                           ` Geert Uytterhoeven
  0 siblings, 0 replies; 77+ messages in thread
From: Geert Uytterhoeven @ 2019-06-20 12:24 UTC (permalink / raw)
  To: Mark Brown
  Cc: ksummit, James Bottomley, media-submaintainers, kbuild,
	Mauro Carvalho Chehab, Dan Carpenter

Hi Mark,

On Wed, Jun 19, 2019 at 6:24 PM Mark Brown <broonie@kernel.org> wrote:
> On Wed, Jun 19, 2019 at 08:46:19AM -0700, James Bottomley wrote:
> > On Wed, 2019-06-19 at 17:48 +0300, Laurent Pinchart wrote:
> > > It's not automatic though, if it depends on a Kconfig option that is
> > > disabled by default. The built bots can enable it, while users would
> > > ignore it. That being said, having it as a per-tree build bot option
> > > should work as well.
>
> > I really don't think well made build bots would enable this.  The
> > problem with -Werror is it's single threaded on the first problem.
> > What a generic build bot wants to do is compile the entire tree and
> > then diff the output to find the additional warnings for everything.  I
> > could see a tree specific build bot being more interested (until the
> > build fails on an unrelated subsystem).
>
> If you're doing build coverage you can always use make -k and still
> build everything, but yeah.

While that does allow building most individual components, you will
fail to catch link errors and section mismatches.

Still better than nothing, of course....
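The "compile the entire tree and then diff the output" workflow James
describes can be sketched with standard tools. The warning logs below are
canned stand-ins so the pipeline is self-contained; in a real bot they
would come from -k builds of the base and patched trees as shown in the
comments (file names are hypothetical):

```shell
# In a real run (hypothetical paths):
#   make -k 2>&1 | grep 'warning:' | sort -u > base-warnings.txt   # base tree
#   make -k 2>&1 | grep 'warning:' | sort -u > new-warnings.txt    # patched tree
# Canned stand-ins so the pipeline below runs as-is:
printf 'drivers/foo/foo.c:10:5: warning: unused variable [-Wunused-variable]\n' \
    > base-warnings.txt
printf 'drivers/bar/bar.c:20:9: warning: cast discards const qualifier\ndrivers/foo/foo.c:10:5: warning: unused variable [-Wunused-variable]\n' \
    > new-warnings.txt
# comm -13 keeps lines unique to the second (sorted) file: the warnings
# the patch introduced, wherever in the tree they appear.
comm -13 base-warnings.txt new-warnings.txt
```

Unlike -Werror, this never stops on the first problem, and it only
flags warnings that are actually new relative to the baseline.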

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds



Thread overview: 77+ messages
2019-06-06 15:48 [Ksummit-discuss] [MAINTAINERS SUMMIT] Pull network and Patch Acceptance Consistency James Bottomley
2019-06-06 15:58 ` Greg KH
2019-06-06 16:24   ` James Bottomley
2019-06-13 13:59     ` Mauro Carvalho Chehab
2019-06-14 10:12       ` Laurent Pinchart
2019-06-14 13:24         ` Mauro Carvalho Chehab
2019-06-14 13:31           ` Laurent Pinchart
2019-06-14 13:54             ` Mauro Carvalho Chehab
2019-06-14 14:08               ` Laurent Pinchart
2019-06-14 14:56             ` Mark Brown
2019-06-14 13:58           ` Greg KH
2019-06-14 15:11             ` Mauro Carvalho Chehab
2019-06-14 15:23               ` James Bottomley
2019-06-14 15:43                 ` Mauro Carvalho Chehab
2019-06-14 15:49                   ` James Bottomley
2019-06-14 16:04                     ` Mauro Carvalho Chehab
2019-06-14 16:16                       ` James Bottomley
2019-06-14 17:48                         ` Mauro Carvalho Chehab
2019-06-17  7:01                           ` Geert Uytterhoeven
2019-06-17 13:31                             ` Mauro Carvalho Chehab
2019-06-17 14:26                               ` Takashi Iwai
2019-06-19  7:53                               ` Dan Carpenter
2019-06-19  8:13                                 ` [Ksummit-discuss] [kbuild] " Philip Li
2019-06-19  8:33                                 ` [Ksummit-discuss] " Daniel Vetter
2019-06-19 14:39                                   ` Mauro Carvalho Chehab
2019-06-19 14:48                                     ` [Ksummit-discuss] [media-submaintainers] " Laurent Pinchart
2019-06-19 15:19                                       ` Mauro Carvalho Chehab
2019-06-19 15:46                                       ` James Bottomley
2019-06-19 16:23                                         ` Mark Brown
2019-06-20 12:24                                           ` Geert Uytterhoeven
2019-06-20 10:36                                         ` Jani Nikula
2019-06-19 15:56                                       ` Mark Brown
2019-06-19 16:09                                         ` Laurent Pinchart
2019-06-15 10:55                         ` [Ksummit-discuss] " Daniel Vetter
2019-06-14 20:52               ` Vlastimil Babka
2019-06-15 11:01               ` Laurent Pinchart
2019-06-17 11:03                 ` Mauro Carvalho Chehab
2019-06-17 12:28                   ` Mark Brown
2019-06-17 16:48                     ` Tim.Bird
2019-06-17 17:23                       ` Geert Uytterhoeven
2019-06-17 23:13                       ` Mauro Carvalho Chehab
2019-06-17 14:18                   ` Laurent Pinchart
2019-06-06 16:29   ` James Bottomley
2019-06-06 18:26     ` Dan Williams
2019-06-07 20:14       ` Martin K. Petersen
2019-06-13 13:49         ` Mauro Carvalho Chehab
2019-06-13 14:35           ` James Bottomley
2019-06-13 15:03             ` Martin K. Petersen
2019-06-13 15:21               ` Bart Van Assche
2019-06-13 15:27                 ` James Bottomley
2019-06-13 15:35                 ` Guenter Roeck
2019-06-13 15:39                   ` Bart Van Assche
2019-06-14 11:53                     ` Leon Romanovsky
2019-06-14 17:06                       ` Bart Van Assche
2019-06-15  7:20                         ` Leon Romanovsky
2019-06-13 15:39                   ` James Bottomley
2019-06-13 15:42                   ` Takashi Iwai
2019-06-13 19:28               ` James Bottomley
2019-06-14  9:08               ` Dan Carpenter
2019-06-14  9:43               ` Dan Carpenter
2019-06-14 13:27               ` Dan Carpenter
2019-06-13 17:27             ` Mauro Carvalho Chehab
2019-06-13 18:41               ` James Bottomley
2019-06-13 19:11                 ` Mauro Carvalho Chehab
2019-06-13 19:20                   ` Joe Perches
2019-06-14  2:21                     ` Mauro Carvalho Chehab
2019-06-13 19:57                   ` Martin K. Petersen
2019-06-13 14:53           ` Martin K. Petersen
2019-06-13 17:09             ` Mauro Carvalho Chehab
2019-06-14  3:03               ` Martin K. Petersen
2019-06-14  3:35                 ` Mauro Carvalho Chehab
2019-06-14  7:31                 ` Joe Perches
2019-06-13 13:28       ` Mauro Carvalho Chehab
2019-06-06 16:18 ` Bart Van Assche
2019-06-14 19:53 ` Bjorn Helgaas
2019-06-14 23:21   ` Bjorn Helgaas
2019-06-17 10:35     ` Mauro Carvalho Chehab
