From: "Guillaume Tucker" <gtucker@collabora.com>
To: kernelci@groups.io, dan.rue@linaro.org,
	Kevin Hilman <khilman@baylibre.com>
Subject: Re: kci_build proposal
Date: Thu, 28 May 2020 07:43:40 +0100	[thread overview]
Message-ID: <77677a1d-1862-408d-9169-c92eb4f81066@collabora.com> (raw)
In-Reply-To: <20200527195831.nl2ptfjlcfkh4tyb@xps.therub.org>

On 27/05/2020 20:58, Dan Rue wrote:
> On Fri, May 22, 2020 at 11:22:17AM -0700, Kevin Hilman wrote:
>> "Dan Rue" <dan.rue@linaro.org> writes:
>>
>>> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
>>>> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
>>
>> [...]
>>
>>>> Basically, rather than adapting kci_build for other purposes, I'm
>>>> suggesting to create a generic tool that can be used by kci_build
>>>> as well as any other kernel CI system.
>>>>
>>>> I don't know what to call this toolbox or where to host it, but
>>>> it would seem wise imho to keep it separate from KernelCI, LKFT
>>>> or any other CI system that would be using it.  It would probably
>>>> also make sense to at least keep it kernel-centric to focus on a
>>>> group of use-cases.  How does that sound?
>>>
>>> If there's agreement to do this together with kernelci, which I'm really
>>> happy about, then I propose it be something like
>>> github.com/kernelci/<somename>, where somename is available in the pypi
>>> namespace (spanner isn't).
>>>
>>> I think our requirements are largely the same. We do need to decide
>>> about the docker bits, and about how opinionated the tool should be.
>>
>> Having spent the last few weeks getting the existing kci_build working
>> in k8s environment[1], I think how docker/containers fits in here is the
>> key thing we should agree upon.
>>
>> For the initial k8s migration, I started with the requirement to use
>> kci_build as-is, but some of the issues Dan raised are already issues
>> with the k8s build e.g. non-trivial to discover which part of the build
>> failed (and if it's critical or not.)
>>
>> I'm fully supportive of rethinking/reworking this tool to be
>> cloud-native.  I would also add that we need to be intentional
>> about how artifacts/results are passed between the various
>> phases, so we build flexible pipelines that fit into a variety of
>> CI/CD pipeline tools (I'd really like to find someone with the
>> time to explore reworking our jenkins pipeline into tekton[2])

I think that passing artifacts between build stages is really
part of the pipeline integration work and shouldn't be imposed by
the tools.  The tools, however, should produce intermediate data
and files in a way that can be integrated into various pipelines
with different requirements.

It seems we can improve this in kci_build by first generating a
JSON file with the static meta-data before starting a kernel
build; the build step would then add its data to that file, and
the install step would append more data in turn.
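As a rough sketch of that idea (the file name and keys are made up
for illustration, not an existing kci_build format):

```python
import json
from pathlib import Path

META = Path("meta.json")  # hypothetical metadata file name

def update_meta(**fields):
    """Merge new fields into the metadata file, creating it if needed."""
    data = json.loads(META.read_text()) if META.exists() else {}
    data.update(fields)
    META.write_text(json.dumps(data, indent=2))

# Before the build: static meta-data.
update_meta(tree="mainline", arch="arm64", defconfig="defconfig")
# After the build step: build results are added to the same file.
update_meta(build_status="pass", warnings=0)
# After the install step: artifact locations are appended in turn.
update_meta(artifacts=["vmlinux", "Image", "modules.tar.xz"])
```

Each stage only appends what it knows, so any pipeline can pick up
the file between stages and decide for itself how to move it around.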

See also my comment below about having the ability to build
individual steps, which should help with error handling.

> Hi Kevin!
> 
> OK, we've started to put some thoughts together on an implementation at
> https://gitlab.com/Linaro/tuxmake.
> 
> Some of the details are still a bit preliminary, but we plan to start on
> it soon and get the basic interface and use-cases in place.
> 
> Would the design meet your needs? If you have any feedback, we would
> really appreciate it. I'm really not sure how it would fit in with k8s,
> but I anticipate if it were used as a python library there would be a
> lot of fine grained control of the steps.

I still strongly believe that making the tool call Docker itself
is going to be a big issue - it should really be the other way
round, or at least there should be a way to run the kernel build
directly with an option to wrap it inside a Docker container at
the user's discretion.
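One way that inversion could look: the build command stays the same
either way, and the container is an opt-in wrapper chosen by the
caller.  The helper names and the image name below are purely
illustrative, not any real kci_build or tuxmake API.

```python
import os
import shlex
import subprocess

def build_command(make_args, container_image=None):
    """Build the command line; the container is an opt-in wrapper."""
    cmd = ["make"] + make_args
    if container_image:  # e.g. "kernelci/gcc-9" (illustrative name)
        cmd = ["docker", "run", "--rm",
               "-v", f"{os.getcwd()}:/src", "-w", "/src",
               container_image] + cmd
    return cmd

def run_build(make_args, container_image=None):
    """Run the build directly, or wrapped in Docker if asked to."""
    cmd = build_command(make_args, container_image)
    print("+", " ".join(shlex.quote(c) for c in cmd))
    return subprocess.run(cmd, check=True)
```

With no image given, this is a plain native build; passing an image
wraps exactly the same `make` invocation in a container, which keeps
the decision with the user rather than baked into the tool.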

It's also important to be able to run each individual make
command separately: to create the .config, build the image, the
modules, the dtbs, or other arbitrary things such as kselftest.
The current kci_build implementation already does too much in the
build_kernel() function; I've had to hack it many times to stop
it from building modules...
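A sketch of what per-step helpers could look like, instead of one
monolithic build_kernel().  The step names and function signatures
here are hypothetical, only meant to show the shape:

```python
import subprocess

def make_cmd(*targets, jobs=8, output="build"):
    """Build one kernel make invocation for a single, separate step."""
    return ["make", f"-j{jobs}", f"O={output}", *targets]

def run_step(name, targets):
    """Run one step; each step reports its own pass/fail status."""
    res = subprocess.run(make_cmd(*targets))
    return name, res.returncode == 0

# A pipeline picks exactly the steps it needs, in order, and can
# tell which individual step failed -- e.g. skip "modules" entirely:
pipeline = [
    ("config",  ["defconfig"]),
    ("kernel",  ["Image"]),
    ("modules", ["modules"]),
    ("dtbs",    ["dtbs"]),
]
```

Because each step is a separate invocation, error handling also gets
simpler: a failure points at one step rather than at the whole build.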

About k8s, yes, I agree a Python library should provide the
required flexibility.  It shouldn't need any special "cloud"
features, as k8s is just one use-case among many.  The key
thing is to provide just enough added value without turning it
into a complete integration.  I guess one could see the stack as
follows:


user
------------
environment
------------
tuxmake
------------
make
compiler
kernel source code


The "environment" can be a plain shell in a terminal, or in a
Docker container, which may be managed by Kubernetes, which may
have been started by Jenkins or Tekton or anything else.  By
keeping the environment as a higher-level entity in the stack we
also keep the tool (tuxmake) generic.

> We hope that this would be useful for kernelci and tuxbuild projects as
> a library, but also for kernel engineers to be able to more easily
> perform and reproduce a variety of builds locally using the cli.

Err, do you want to call it tuxbuild or tuxmake? ;)

I wonder if tuxmake might sound more like a lower-level "make"
tool, such as Ninja or Meson.  So tuxbuild is probably a bit
clearer in that it's about facilitating builds (i.e. above
the "make" layer in the stack...).

You suggested having a project under github.com/kernelci, which
probably makes more sense if we want KernelCI to provide a
common set of tools for others to reuse (not being biased at all
here).  Or, we could do what I was trying to suggest in my
previous email and create a new "namespace" for these tools,
so they don't need to be associated with KernelCI, or LKFT or
anything else.

>>> If there's wide agreement then we could establish such a repository and
>>> start iterating on a design document. I have someone available to start
>>> working on the actual implementation in a few weeks.
>>>
>>> There are other details to discuss, like how to deal with config (we had
>>> to do a bit of a workaround for it outside of kci_build), licensing,
>>> etc.
>>
>> I propose that we dedicate one of our Tuesday weekly calls to this
>> topic.  Would Tues, June 2nd work?
> 
> I can make that work,

Well, the main topic for that meeting is going to be to review
the KernelCI plans for the next 3 months.  You're welcome to join
in any case, and if we don't spend too much time on the plan we
may have time to discuss that.  However, maybe we could aim for
the 9th instead?

Thanks,
Guillaume

>> [1] https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
>> [2] https://tekton.dev/

Thread overview: 23+ messages
2020-04-20 16:36 kci_build proposal Dan Rue
2020-04-21 15:46 ` Guillaume Tucker
2020-04-21 15:53   ` Mark Brown
2020-04-21 17:32     ` Dan Rue
2020-04-21 17:28   ` Dan Rue
2020-05-22 18:22     ` Kevin Hilman
2020-05-27 19:58       ` Dan Rue
2020-05-28  6:43         ` Guillaume Tucker [this message]
2020-05-28 17:28           ` Dan Rue
2020-05-28 21:03             ` Guillaume Tucker
2020-05-29 15:53               ` Dan Rue
2020-06-15 13:22                 ` "Audience"/"Guest" can join for this meeting running on 23rd? koti koti
2020-06-16 16:16                   ` Mark Brown
2020-06-17  1:49                     ` koti koti
2020-06-17 10:31                       ` Mark Brown
2020-06-17 10:55                         ` koti koti
2020-05-28 23:31             ` kci_build proposal Kevin Hilman
2020-05-29  7:42               ` Mathieu Acher
2020-05-29 10:44                 ` Mark Brown
2020-05-29 14:27                 ` Guillaume Tucker
2020-06-16 16:33                 ` Nick Desaulniers
2020-06-23  7:28                   ` Mathieu Acher
2020-06-23 23:48                     ` Nick Desaulniers
