From: Thomas Gleixner <tglx@linutronix.de>
To: linux-spdx@vger.kernel.org
Subject: Workflow
Date: Thu, 9 May 2019 16:11:38 +0200 (CEST)
Message-ID: <alpine.DEB.2.21.1905091542340.3139@nanos.tec.linutronix.de>

Folks,

we should agree ASAP on a workflow for going through those 504
patches. Here is my proposal:

  1) Use the one patch per identified pattern approach as demonstrated
     with the first 4.

  2) Focus the review on the 'pattern -> SPDX id' conclusion

  3) Trust the automated patcher to do the right thing.

From my experience with this, it's the most sensible way to proceed, as
it's the only approach which scales.
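
To make item 2 concrete: a 'pattern -> SPDX id' conclusion boils down to
something like the following (a made-up example; the actual patterns and
the resulting ids are quoted in each patch changelog):

  License boilerplate matched in a .c file:

    /*
     * This program is free software; you can redistribute it and/or
     * modify it under the terms of the GNU General Public License
     * version 2 as published by the Free Software Foundation.
     */

  SPDX id which the patcher adds (while removing the boilerplate):

    // SPDX-License-Identifier: GPL-2.0-only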

As for trusting the patcher: I do spend time looking at the actual
changes myself, but that is also heavily based on automated analysis
which exposes only unexpected changes and does not force me to stare at
20k+ patched instances.

If we can agree on the above, then I'd like to send out batches of N
patches, where N would be something in the range of 10-25. These patches
are basically changelog only, because quite a few of them are too long
for posting on the list. They will of course contain a git URL so you
can look at the actual file changes as well (if you are masochistic
enough).
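
If you do want to look at the file level changes, something along these
lines should do the trick (assuming the patch in question is the tip
commit of whatever the git URL in the changelog points to):

  $ git fetch <git URL from the changelog>
  $ git show FETCH_HEAD        # full file level diff of that patch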

Ideally we get quick feedback (OK/NOK) for each patch in a batch. The OK
should preferably come in the form of a 'Reviewed-by: Your Name <your@mail>'
tag. We'll mention in the changelog that the review is limited to the
pattern -> SPDX id conclusion and does not cover the actual file level
changes. I'll take the blame when the patcher gets it wrong :)
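
In other words, a minimal OK reply to one patch of a batch needs to
contain nothing more than (name and address are placeholders, of
course):

  Reviewed-by: Jane Doe <jane@example.org>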

If a patch is deemed NOK, we don't have to sort it out immediately. We
can postpone it and handle it on the side so the queue is not held up.

Once a patch has collected Reviewed-by tags, we would apply it to a
repository and send the accumulated patches to Linus in batches.

Once a batch is consumed (except for the NOK parts), the next batch
would be posted. Assuming we can handle 10 'pattern -> SPDX id' reviews
per day, the 504 patches would take roughly 50 working days, i.e. ~10
weeks. That's quite some time given that we want to be at least halfway
SPDX clean for the next LTS kernel, so I'd rather see larger batches
processed faster :)

Any opinions on the size of the batches and how long it will take to get
the reviews done or any other suggestions for a workable solution?

Thanks,

	tglx




Thread overview: 14+ messages
2019-05-09 14:11 Thomas Gleixner [this message]
2019-05-09 14:45 ` Workflow Kate Stewart
2019-05-09 21:56   ` Workflow Thomas Gleixner
2019-05-10  9:25     ` Workflow Allison Randal
2019-05-10 10:54       ` Workflow Thomas Gleixner
2019-05-10 11:02         ` Workflow Allison Randal
2019-05-10 13:01     ` Workflow Kate Stewart
2019-05-10 13:18     ` Workflow Bradley M. Kuhn
2019-05-10  9:06 ` Workflow Greg KH
