linux-spdx.vger.kernel.org archive mirror
* Workflow
@ 2019-05-09 14:11 Thomas Gleixner
  2019-05-09 14:45 ` Workflow Kate Stewart
  2019-05-10  9:06 ` Workflow Greg KH
  0 siblings, 2 replies; 9+ messages in thread
From: Thomas Gleixner @ 2019-05-09 14:11 UTC (permalink / raw)
  To: linux-spdx

Folks,

we should agree ASAP on a workflow for going through those 504
patches. Here is my proposal:

  1) Use the one patch per identified pattern approach as demonstrated
     with the first 4.

  2) Focus the review on the 'pattern -> SPDX id' conclusion

  3) Trust the automated patcher to do the right thing.
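
For illustration, the grouping behind 1) can be sketched in a few lines
of Python. The pattern table, SPDX ids and file contents below are
invented examples; the real patterns come from the scanner tooling:

```python
import re
from collections import defaultdict

# Invented example table: license boilerplate pattern -> SPDX id.
# Order matters: the more specific pattern must be checked first.
PATTERNS = {
    r"either version 2.*any later version": "GPL-2.0-or-later",
    r"GNU General Public License.*version 2": "GPL-2.0-only",
}

def group_by_pattern(files):
    """Group files by the first license pattern they match.

    Each group becomes one patch, so reviewing the single
    'pattern -> SPDX id' conclusion covers every file in it.
    """
    groups = defaultdict(list)
    for name, text in files.items():
        for pattern, spdx_id in PATTERNS.items():
            if re.search(pattern, text, re.DOTALL):
                groups[spdx_id].append(name)
                break
        else:
            groups["UNMATCHED"].append(name)
    return dict(groups)
```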

From my experience with this, it's the most sensible way, as it keeps
the process scalable.

Versus trusting the patcher: I surely spend time looking at the actual
changes, but that is also largely based on automated analysis, which
exposes only unexpected changes and does not force me to stare at 20k+
patched instances.

If we can agree on the above, then I'd like to send out batches of N
patches, where N would be something in the range of 10-25. These patches
are basically changelog only, because quite a few of the patches are too
long for posting on the list. They will contain a git URL so you can
look at the actual file changes as well (if you are masochistic enough).

Ideally we get quick feedback (OK/NOK) for each patch in a batch. The OK
should preferably be in the form of a 'Reviewed-by: Your Name <your@mail>'
tag. We'll mention in the changelog that the review is limited to the
pattern -> SPDX id conclusion and does not cover the actual file level
changes. I'll take the blame when the patcher gets it wrong :)

If a patch is deemed NOK, we don't have to sort it out immediately. We
can postpone it and handle it on the side so the queue is not held up.

Once a patch has collected Reviewed-by tags we would apply it to a
repository and send it in batches to Linus.

Once a batch is consumed (except for the NOK parts), the next batch will
be posted. Assuming we can handle 10 'pattern -> SPDX id' reviews per
day, that would take ~10 weeks. That is quite some time, given that we
want to be halfway SPDX clean for the next LTS kernel. So I'd rather see
larger batches processed faster :)
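
Spelled out, the estimate above (assuming 5-day review weeks and
ignoring postponed NOK patches):

```python
PATCHES = 504

def review_weeks(reviews_per_day, days_per_week=5):
    """Working weeks needed to get through all patches."""
    return PATCHES / reviews_per_day / days_per_week

# 10 reviews/day -> 50.4 working days, i.e. ~10 weeks;
# 25 reviews/day cuts that to ~4 weeks.
```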

Any opinions on the size of the batches and how long it will take to get
the reviews done or any other suggestions for a workable solution?

Thanks,

	tglx



^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Workflow
  2019-05-09 14:11 Workflow Thomas Gleixner
@ 2019-05-09 14:45 ` Kate Stewart
  2019-05-09 21:56   ` Workflow Thomas Gleixner
  2019-05-10  9:06 ` Workflow Greg KH
  1 sibling, 1 reply; 9+ messages in thread
From: Kate Stewart @ 2019-05-09 14:45 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: linux-spdx

Hi Thomas,

On Thu, May 9, 2019 at 9:11 AM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Folks,
>
> we should agree ASAP on a workflow for going through those 504
> patches. Here is my proposal:
>
>   1) Use the one patch per identified pattern approach as demonstrated
>      with the first 4.
>
>   2) Focus the review on the 'pattern -> SPDX id' conclusion
>
>   3) Trust the automated patcher to do the right thing.
>
> From my experience with this, it's the most sensible way, as it keeps
> the process scalable.
>
> Versus trusting the patcher: I surely spend time looking at the actual
> changes, but that is also largely based on automated analysis, which
> exposes only unexpected changes and does not force me to stare at 20k+
> patched instances.
>
> If we can agree on the above, then I'd like to send out batches of N
> patches, where N would be something in the range of 10-25. These patches
> are basically changelog only, because quite a few of the patches are too
> long for posting on the list. They will contain a git URL so you can
> look at the actual file changes as well (if you are masochistic enough).
>
> Ideally we get quick feedback (OK/NOK) for each patch in a batch. The OK
> should preferably be in the form of a 'Reviewed-by: Your Name <your@mail>'
> tag. We'll mention in the changelog that the review is limited to the
> pattern -> SPDX id conclusion and does not cover the actual file level
> changes. I'll take the blame when the patcher gets it wrong :)
>
> If a patch is deemed NOK, we don't have to sort it out immediately. We
> can postpone it and handle it on the side so the queue is not held up.
>
> Once a patch has collected Reviewed-by tags we would apply it to a
> repository and send it in batches to Linus.
>
> Once a batch is consumed (except for the NOK parts), the next batch will
> be posted. Assuming we can handle 10 'pattern -> SPDX id' reviews per
> day, that would take ~10 weeks. That is quite some time, given that we
> want to be halfway SPDX clean for the next LTS kernel. So I'd rather see
> larger batches processed faster :)
>
> Any opinions on the size of the batches and how long it will take to get
> the reviews done or any other suggestions for a workable solution?

Grouping them in digestible sets, and organizing each patch around a
pattern makes sense to me.     +1

Kate


* Re: Workflow
  2019-05-09 14:45 ` Workflow Kate Stewart
@ 2019-05-09 21:56   ` Thomas Gleixner
  2019-05-10  9:25     ` Workflow Allison Randal
                       ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Thomas Gleixner @ 2019-05-09 21:56 UTC (permalink / raw)
  To: Kate Stewart; +Cc: linux-spdx

Kate,

On Thu, 9 May 2019, Kate Stewart wrote:
> On Thu, May 9, 2019 at 9:11 AM Thomas Gleixner <tglx@linutronix.de> wrote:
> > Any opinions on the size of the batches and how long it will take to get
> > the reviews done or any other suggestions for a workable solution?
> 
> Grouping them in digestible sets, and organizing each patch around a
> pattern makes sense to me.     +1

I appreciate the +1, but I would appreciate even more some hint as to
what you think is a digestible set size and how much time you think it
takes to review such a set. You know that this is just a question of
time.

At a set size of one and one week per set, it would take 504 weeks ~= 9.7
years from now, not taking the changing code base into account. That's
still slightly faster (by a year or so) than waiting for it to be cleaned
up manually by random people. :)
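
The worst case above is plain division:

```python
PATTERNS = 504
WEEKS_PER_YEAR = 52

# One pattern reviewed per week:
years = PATTERNS / WEEKS_PER_YEAR  # just short of a decade
```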

Thanks,

	tglx




* Re: Workflow
  2019-05-09 14:11 Workflow Thomas Gleixner
  2019-05-09 14:45 ` Workflow Kate Stewart
@ 2019-05-10  9:06 ` Greg KH
  1 sibling, 0 replies; 9+ messages in thread
From: Greg KH @ 2019-05-10  9:06 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: linux-spdx

On Thu, May 09, 2019 at 04:11:38PM +0200, Thomas Gleixner wrote:
> Folks,
> 
> we should agree ASAP on a workflow for going through those 504
> patches. Here is my proposal:
> 
>   1) Use the one patch per identified pattern approach as demonstrated
>      with the first 4.
> 
>   2) Focus the review on the 'pattern -> SPDX id' conclusion
> 
>   3) Trust the automated patcher to do the right thing.

Sounds good to me.

> From my experience with this, it's the most sensible way, as it keeps
> the process scalable.
> 
> Versus trusting the patcher: I surely spend time looking at the actual
> changes, but that is also largely based on automated analysis, which
> exposes only unexpected changes and does not force me to stare at 20k+
> patched instances.
> 
> If we can agree on the above, then I'd like to send out batches of N
> patches, where N would be something in the range of 10-25. These patches
> are basically changelog only, because quite a few of the patches are too
> long for posting on the list. They will contain a git URL so you can
> look at the actual file changes as well (if you are masochistic enough).

I like longer batches, as I'm used to them, but 20-25 is fine.

> Ideally we get quick feedback (OK/NOK) for each patch in a batch. The OK
> should preferably be in the form of a 'Reviewed-by: Your Name <your@mail>'
> tag. We'll mention in the changelog that the review is limited to the
> pattern -> SPDX id conclusion and does not cover the actual file level
> changes. I'll take the blame when the patcher gets it wrong :)
> 
> If a patch is deemed NOK, we don't have to sort it out immediately. We
> can postpone it and handle it on the side so the queue is not held up.
> 
> Once a patch has collected Reviewed-by tags we would apply it to a
> repository and send it in batches to Linus.

Sounds reasonable.

> Once a batch is consumed (except for the NOK parts), the next batch will
> be posted. Assuming we can handle 10 'pattern -> SPDX id' reviews per
> day, that would take ~10 weeks. That is quite some time, given that we
> want to be halfway SPDX clean for the next LTS kernel. So I'd rather see
> larger batches processed faster :)

You should be able to send multiple batches at a time, right?  But 10
weeks isn't all that bad; I would shoot to get these all into the 5.4
kernel, so we can be done with it :)

> Any opinions on the size of the batches and how long it will take to get
> the reviews done or any other suggestions for a workable solution?

Normally I just wait 2 weeks for tiny patches, or one kernel release
cycle.  If there are no objections, I then merge to a tree that Linus
can pull from.

As a good example of this, I have some debugfs x86 patches that I posted
a while ago; no maintainer said anything or took them into their own
tree, so I'll just merge them to a local tree to be sent off for 5.3-rc1 :)

thanks,

greg k-h


* Re: Workflow
  2019-05-09 21:56   ` Workflow Thomas Gleixner
@ 2019-05-10  9:25     ` Allison Randal
  2019-05-10 10:54       ` Workflow Thomas Gleixner
  2019-05-10 13:01     ` Workflow Kate Stewart
  2019-05-10 13:18     ` Workflow Bradley M. Kuhn
  2 siblings, 1 reply; 9+ messages in thread
From: Allison Randal @ 2019-05-10  9:25 UTC (permalink / raw)
  To: Thomas Gleixner, Kate Stewart; +Cc: linux-spdx

On 5/9/19 10:56 PM, Thomas Gleixner wrote:
> 
> I appreciate the +1, but I would appreciate even more some hint as to
> what you think is a digestible set size and how much time you think it
> takes to review such a set. You know that this is just a question of
> time.

I'm fine with the 10-25/day you suggested, as long as I know to expect
them. I'll just do a quick run with my morning coffee. Personally, I'm
also fine if you just want to do all 500+ at once, and I'll work my way
through them in larger batches (dealing with several thousand emails a
day is not unusual for me, and these are quick reviews that don't need a
substantial reply). But, I suspect you're right that smaller batches
will be easier for most people.

I will use the git URL to look at the actual changes, because my brain
is trained to think in diffs, but I'll only review the first few files
in each diff as a sample.

Allison


* Re: Workflow
  2019-05-10  9:25     ` Workflow Allison Randal
@ 2019-05-10 10:54       ` Thomas Gleixner
  2019-05-10 11:02         ` Workflow Allison Randal
  0 siblings, 1 reply; 9+ messages in thread
From: Thomas Gleixner @ 2019-05-10 10:54 UTC (permalink / raw)
  To: Allison Randal; +Cc: Kate Stewart, linux-spdx

Allison,

On Fri, 10 May 2019, Allison Randal wrote:

> On 5/9/19 10:56 PM, Thomas Gleixner wrote:
> > 
> > I appreciate the +1, but I would appreciate even more some hint as
> > to what you think is a digestible set size and how much time you
> > think it takes to review such a set. You know that this is just a
> > question of time.
> 
> I'm fine with the 10-25/day you suggested, as long as I know to expect
> them. I'll just do a quick run with my morning coffee. Personally, I'm
> also fine if you just want to do all 500+ at once, and I'll work my way
> through them in larger batches (dealing with several thousand emails a
> day is not unusual for me, and these are quick reviews that don't need a
> substantial reply). But, I suspect you're right that smaller batches
> will be easier for most people.
> 
> I will use the git URL to look at the actual changes, because my brain
> is trained to think in diffs, but I'll only review the first few files
> in each diff as a sample.

I'm keeping the first few file diffs in the mail anyway; I just need to
cut off the larger patches to avoid the size limit of the list. The URL
is there for reference and for those who really want to look at every
change :)

Thanks,

	tglx




* Re: Workflow
  2019-05-10 10:54       ` Workflow Thomas Gleixner
@ 2019-05-10 11:02         ` Allison Randal
  0 siblings, 0 replies; 9+ messages in thread
From: Allison Randal @ 2019-05-10 11:02 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: linux-spdx

On 5/10/19 11:54 AM, Thomas Gleixner wrote:
> 
> I'm keeping the first few file diffs in the mail anyway; I just need to
> cut off the larger patches to avoid the size limit of the list. The URL
> is there for reference and for those who really want to look at every
> change :)

Makes sense. That'll be perfect for me. :)

Allison


* Re: Workflow
  2019-05-09 21:56   ` Workflow Thomas Gleixner
  2019-05-10  9:25     ` Workflow Allison Randal
@ 2019-05-10 13:01     ` Kate Stewart
  2019-05-10 13:18     ` Workflow Bradley M. Kuhn
  2 siblings, 0 replies; 9+ messages in thread
From: Kate Stewart @ 2019-05-10 13:01 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: linux-spdx

On Thu, May 9, 2019 at 4:56 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Kate,
>
> On Thu, 9 May 2019, Kate Stewart wrote:
> > On Thu, May 9, 2019 at 9:11 AM Thomas Gleixner <tglx@linutronix.de> wrote:
> > > Any opinions on the size of the batches and how long it will take to get
> > > the reviews done or any other suggestions for a workable solution?
> >
> > Grouping them in digestible sets, and organizing each patch around a
> > pattern makes sense to me.     +1
>
> I appreciate the +1, but I would appreciate even more some hint as to
> what you think is a digestible set size and how much time you think it
> takes to review such a set. You know that this is just a question of
> time.

Heh, fair enough. I meant I am fine with the proposal of 10-25 a day
you outlined. Biasing toward the 25 side is OK - the sooner we're
through the bulk of them, the better. :-)

Thanks, Kate


* Re: Workflow
  2019-05-09 21:56   ` Workflow Thomas Gleixner
  2019-05-10  9:25     ` Workflow Allison Randal
  2019-05-10 13:01     ` Workflow Kate Stewart
@ 2019-05-10 13:18     ` Bradley M. Kuhn
  2 siblings, 0 replies; 9+ messages in thread
From: Bradley M. Kuhn @ 2019-05-10 13:18 UTC (permalink / raw)
  To: linux-spdx

Thomas Gleixner wrote:
> I appreciate the +1, but I would appreciate even more some hint as to
> what you think is a digestible set size and how much time you think it
> takes to review such a set.

I think these questions are hard to answer until we try some.  License
provenance review is very different from code review.  I suggest we just
try it at the batch size you suggest and then be willing to readjust
based on everyone's feedback after doing it for a while.

--
Bradley M. Kuhn

Pls. support the charity where I work, Software Freedom Conservancy:
https://sfconservancy.org/supporter/

