From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Cornelia Huck <cohuck@redhat.com>
Cc: "Thomas Huth" <thuth@redhat.com>,
	qemu-devel@nongnu.org,
	"Wainer dos Santos Moschetta" <wainersm@redhat.com>,
	"Philippe Mathieu-Daudé" <f4bug@amsat.org>,
	"Willian Rampazzo" <willianr@redhat.com>,
	"Alex Bennée" <alex.bennee@linaro.org>
Subject: Re: [PATCH 2/2] gitlab: don't run CI jobs by default on push to user forks
Date: Mon, 16 Aug 2021 13:01:46 +0100
Message-ID: <YRpTqmv/yXU0cK5H@redhat.com>
In-Reply-To: <87v945txvn.fsf@redhat.com>

On Mon, Aug 16, 2021 at 01:47:08PM +0200, Cornelia Huck wrote:
> On Mon, Aug 16 2021, Daniel P. Berrangé <berrange@redhat.com> wrote:
> 
> > On Mon, Aug 16, 2021 at 12:44:02PM +0200, Cornelia Huck wrote:
> >> On Thu, Aug 12 2021, Daniel P. Berrangé <berrange@redhat.com> wrote:
> >> > The minimal job set covers:
> >> >
> >> >   * Fedora, CentOS, Ubuntu & Debian system emulator builds
> >> >     to cover all arch targets.
> >> >   * Linux user static build
> >> >   * Cross build i386 (to identify any 32-bit build issues)
> >> >   * Cross build s390x (to identify any big endian build issues)
> >> >   * Containers needed for the above
> >> >   * Cirrus CI FreeBSD 12 and macOS 11 (to identify non-Linux issues)
> >> >   * Simple checks - python unittests, DCO check, checkpatch.pl, etc
> >> >
> >> > This gives about 30 jobs instead of 120 from the "Full" group. It
> >> > is likely reasonable to cut the minimal set down even more, as IMHO
> >> > there are too many system emulator jobs there.
> >> 
> >> Where do you think we should start to cut them down? Limit the set of
> >> tested arch targets to the most common ones?
> >
> > Some of our targets are obviously much more important and
> > frequently changed than others.  For contributors, our goal is
> > to minimize breakage after patches are submitted. Most of our
> > contributors' changes will be well covered by x86-64 + aarch64
> > alone. Other targets give varying degrees of extra benefit.
> 
> I'd probably add s390x to that list, not just because of personal bias
> :), but also because it has a unique set of devices, is big endian, and
> has been broken in the past.

The desire to have something covering big endian is why I included
the s390x cross-build in the "minimal" set. Even if we don't run
the tests, building it is still worthwhile.
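
Something like this is what I have in mind - just a sketch, with the
job name, image path and opt-in variable invented for illustration
rather than taken from the patch:

  cross-s390x-system:
    image: registry.gitlab.com/qemu-project/qemu/qemu/debian-s390x-cross:latest
    rules:
      # only run when the pusher explicitly opts in
      - if: '$QEMU_CI'
        when: manual
    script:
      - mkdir build && cd build
      - ../configure --cross-prefix=s390x-linux-gnu-
      # build only; the minimal set skips running the test suite
      - make -j"$(nproc)"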

> > When I'm working on changing gitlab CI rules, I burn loads of
> > minutes, which is especially troubling - limited CI minutes will
> > make it very hard for me to debug future CI rule changes :-(
> 
> I hope that will not make gitlab CI a complete non-starter -- if you
> cannot easily debug a failing test case, it's mostly useless. We've
> seen too many cases where a failure could not be reproduced when the
> test case was run locally.

One of the best things about GitLab compared to what we had with
Travis is that the build environment is 100% container based (at
least until Cleber's custom runners arrived).  This means that when
something fails in GitLab, you can pull the container image down from
the gitlab registry and run the build locally. You still have
differences due to hardware or CPU resources, but it's a hell of a
lot easier to reproduce than before. This is good enough for most
contributors in general, but not for the CI maintainers, who
especially need to debug the CI system as it exists on GitLab.
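
For example, to reproduce a failure from a job that used the centos8
image, roughly (the image path here is illustrative):

  docker pull registry.gitlab.com/qemu-project/qemu/qemu/centos8:latest
  docker run -it --rm -v "$PWD:/qemu" -w /qemu \
      registry.gitlab.com/qemu-project/qemu/qemu/centos8:latest \
      bash -c 'mkdir -p build && cd build && ../configure && make -j"$(nproc)"'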


> >> (c) a subsystem maintainer is preparing a pull request
> >> 
> >> Ideally, that should run the 'gating' set, to eliminate needless bounces
> >> of the pull request; plus some subsystem-specific manual testing on
> >> top. In practice, the 'full' set might be good enough.
> >
> > Yeah, the full/gating set is what I would think subsys maintainers
> > would want to use, to minimize the risk that Peter's tests throw
> > back the merge due to failure. The only difference between gating
> > and full is whether the acceptance tests run.
> 
> I can at least run a subset of the acceptance tests locally, but I think
> I may be missing some platforms? Still, better than nothing.

As ever, the big problem for most people is non-x86_64 platforms. The
custom runners solve this for gitlab, and in theory people can deploy
a VM using QEMU TCG to do this locally, but it is massively slower.
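
e.g. something along these lines boots an aarch64 guest under TCG on
an x86_64 host (the firmware and disk image paths are placeholders):

  qemu-system-aarch64 -M virt -cpu max -m 4G -smp 4 \
      -bios QEMU_EFI.fd \
      -drive file=debian-arm64.qcow2,if=virtio \
      -nographic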


> >> - define a 'ci-subsystem' set that covers usual pain points for a
> >>   subsystem
> >> - have subsystem maintainers run a pull req test in the qemu-project
> >>   context (i.e. using extra CI minutes that the project may have), or
> >>   put them on a special list of subsystem maintainers so they can use
> >>   more minutes
> >
> > I think ultimately we should be looking to take email out of the loop
> > for merging pull requests from subsys maintainers.
> >
> > If we expect subsys maintainers to use gitlab CI before sending, then
> > we have a crazy situation where a subsys maintainer pushes to gitlab,
> > has CI run, then sends an email PULL to the list; then Peter downloads
> > and applies the mails, pushes back to gitlab, runs CI again, and then
> > pushes to gitlab again for master.
> 
> Hm, I thought Peter pulled the subsys maintainers' signed tags?

We could have a CI job that validates that the merge request is
associated with a signed tag whose signing key is on a designated
keyring of known maintainers.
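
Roughly along these lines - a sketch of the idea only, not an existing
job, and the keyring filename is an assumption:

  check-signed-tag:
    image: debian:stable-slim
    rules:
      - if: '$CI_COMMIT_TAG'
    script:
      - apt-get update && apt-get -y install git gnupg
      # import the designated maintainer keyring (illustrative name)
      - gpg --import scripts/maintainer-keys.gpg
      # fail unless the tag has a good signature from an imported key
      - git verify-tag "$CI_COMMIT_TAG"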


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




Thread overview: 19+ messages
2021-08-12 18:04 [PATCH 0/2] gitlab: prepare for limited CI minutes by not running by default Daniel P. Berrangé
2021-08-12 18:04 ` [PATCH 1/2] docs: split the CI docs into two files Daniel P. Berrangé
2021-08-16 11:02   ` Philippe Mathieu-Daudé
2021-08-24 16:29   ` Willian Rampazzo
2021-08-12 18:04 ` [PATCH 2/2] gitlab: don't run CI jobs by default on push to user forks Daniel P. Berrangé
2021-08-16 10:44   ` Cornelia Huck
2021-08-16 11:03     ` Daniel P. Berrangé
2021-08-16 11:20       ` Philippe Mathieu-Daudé
2021-08-16 11:35         ` Daniel P. Berrangé
2021-08-16 11:45           ` Philippe Mathieu-Daudé
2021-08-16 11:47       ` Cornelia Huck
2021-08-16 12:01         ` Daniel P. Berrangé [this message]
2021-08-16 13:19           ` Cornelia Huck
2021-08-16 13:23             ` Daniel P. Berrangé
2021-08-16 15:16           ` Philippe Mathieu-Daudé
2021-08-25 10:42   ` Thomas Huth
2021-09-15 14:45     ` Daniel P. Berrangé
2022-05-10  8:51   ` Thomas Huth
2022-05-10  9:13     ` Daniel P. Berrangé
