All of lore.kernel.org
 help / color / mirror / Atom feed
* kci_build proposal
@ 2020-04-20 16:36 Dan Rue
  2020-04-21 15:46 ` Guillaume Tucker
  0 siblings, 1 reply; 23+ messages in thread
From: Dan Rue @ 2020-04-20 16:36 UTC (permalink / raw)
  To: kernelci

Hey Everyone!

I've been working on a kernel building service/api that's designed to perform
massive numbers of builds. The actual kernel build implementation reuses
kernelci's kci_build script.

So far we have not had to modify kernelci-core's kci_build, but we're
approaching a point where we think we're going to have to modify it
fundamentally to meet our needs. I would really like to continue sharing a
build implementation with kernelci, but some of the changes needed are
architectural in nature, and so I'd like to share a brief proposal for
consideration:

Standalone python-based kernel builder with the following properties:
- Independent python package (library and cli) that only does a local kernel
  build
- Takes as arguments things like architecture, toolchain, kernel config, etc.
- Dockerzed toolchain environments are supported directly (e.g. it will run
  "docker run make")
- Builds are done within the context of an existing kernel tree (doing git
  checkout, git fetch, etc are out of scope)
- Artifacts go to some named or default directory path. Uploading them
  elsewhere is out of scope.
- Exception handling is built in and makes it usable as a python library
- Logging is built in with flexibility in mind (log to console, verbose or
  silent, log to file, etc.)
- Simple, one-line dockerized reproducers are given as a build artifact, for
  reproducing builds locally.
- Partial build failure support - e.g. what if kselftest fails but the rest of
  the build passes?
- Rigorous unit testing
- Implementation is modular enough to support things like a documentation build
  and other targets in the tree such as various tools/, sparse checker,
  coccinelle, etc.
- Support for kernel fragement files or urls in addition to supporting
  individual kernel config arguments

Use Case Examples:
- Install using pip, build using the cli interface
  - "pip install --user kernel-builder; cd linux; kernel-builder --arch arm64 --toolchain clang-9"
- Run using a published docker container. This example would need permission to
  launch additional docker containers to perform the build steps.
  - "docker run kernel-builder --arch arm64 --toolchain clang-9"
- Install using pip, and use it from python as a library
  - "import kernel-builder"

In the example above, "kernel-builder" is a placeholder for whatever the name
would be.

The artifacts produced would be similar to the set of artifacts kci_build
produces today, with the addition of a build reproducer script and probably
some differences in metadata files.

Docker support would be optional (you could use a locally installed toolchain),
but allows for arbitrary toolchains to be available and perfect
reproducibility.

It's hard to change kci_build to be anything like this because it's tightly
coupled with the kernelci-core repo, and kernelci itself. The design above also
fundamentally changes how the builder interfaces with docker, such that it
becomes supported directly rather than something that's implemented outside the
scope of the builder.

It's important that docker is supported directly because it makes the
reproducible build use-case a one-liner, it solves the problem of having to get
python and the builder into the build containers, and it allows for more
granular control of the build.

So, thoughts? How close or how far is this from the roadmap for kci_build as it
stands today? We are likely to start working on this soon, and would really
like to work together on it if possible.

Thanks for your feedback,
Dan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-04-20 16:36 kci_build proposal Dan Rue
@ 2020-04-21 15:46 ` Guillaume Tucker
  2020-04-21 15:53   ` Mark Brown
  2020-04-21 17:28   ` Dan Rue
  0 siblings, 2 replies; 23+ messages in thread
From: Guillaume Tucker @ 2020-04-21 15:46 UTC (permalink / raw)
  To: kernelci, Dan Rue

[-- Attachment #1: Type: text/plain, Size: 7734 bytes --]

On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:

> Hey Everyone!
>

Nice to have you back on the mailing list :)


> I've been working on a kernel building service/api that's designed to
> perform
> massive numbers of builds. The actual kernel build implementation reuses
> kernelci's kci_build script.
>

Out of interest, how do you manage the builders?

We're looking into Kubernetes now so we might find issues similar
to those in your use-case.


> So far we have not had to modify kernelci-core's kci_build, but we're
> approaching a point where we think we're going to have to modify it
> fundamentally to meet our needs. I would really like to continue sharing a
> build implementation with kernelci, but some of the changes needed are
> architectural in nature, and so I'd like to share a brief proposal for
> consideration:
>
> Standalone python-based kernel builder with the following properties:
> - Independent python package (library and cli) that only does a local
> kernel
>   build
>

Yes - see my more detailed reply further down.


> - Takes as arguments things like architecture, toolchain, kernel config,
> etc.
> - Dockerzed toolchain environments are supported directly (e.g. it will run
>   "docker run make")
> - Builds are done within the context of an existing kernel tree (doing git
>   checkout, git fetch, etc are out of scope)
> - Artifacts go to some named or default directory path. Uploading them
>   elsewhere is out of scope.
> - Exception handling is built in and makes it usable as a python library
> - Logging is built in with flexibility in mind (log to console, verbose or
>   silent, log to file, etc.)
> - Simple, one-line dockerized reproducers are given as a build artifact,
> for
>   reproducing builds locally.
> - Partial build failure support - e.g. what if kselftest fails but the
> rest of
>   the build passes?
>

Another "out of interest" question: have you modified kci_build
already to cover kselftest?

We have an open PR for that, if you're already using something in
production then it would be worth comparing what you've come up
with.


> - Rigorous unit testing
> - Implementation is modular enough to support things like a documentation
> build
>   and other targets in the tree such as various tools/, sparse checker,
>   coccinelle, etc.
>

Yes, however we need to draw the line somewhere.  I guess that
would be, anything that can already be built from a plain kernel
source tree.


> - Support for kernel fragement files or urls in addition to supporting
>   individual kernel config arguments
>
> Use Case Examples:
> - Install using pip, build using the cli interface
>   - "pip install --user kernel-builder; cd linux; kernel-builder --arch
> arm64 --toolchain clang-9"
> - Run using a published docker container. This example would need
> permission to
>   launch additional docker containers to perform the build steps.
>   - "docker run kernel-builder --arch arm64 --toolchain clang-9"
> - Install using pip, and use it from python as a library
>   - "import kernel-builder"
>
> In the example above, "kernel-builder" is a placeholder for whatever the
> name
> would be.
>
> The artifacts produced would be similar to the set of artifacts kci_build
> produces today, with the addition of a build reproducer script and probably
> some differences in metadata files.
>
> Docker support would be optional (you could use a locally installed
> toolchain),
> but allows for arbitrary toolchains to be available and perfect
> reproducibility.
>

I agree that we need to have a way to fully reproduce builds and
that includes having control over the environment, typically a
Docker image.  However, I'm a bit skeptical about making the tool
call Docker rather than have a wrapper to call it.  I think it's
more a matter of hierarchy in the stack, I see it as:

  docker -> kernel-builder -> make -> gcc

Say, what if someone wanted to use another environment than
Docker?  Rather than make it an optional feature, I think it may
be better to keep it one level higher if we can still generate
reproducers like you're suggesting.


> It's hard to change kci_build to be anything like this because it's tightly
> coupled with the kernelci-core repo, and kernelci itself. The design above
> also
> fundamentally changes how the builder interfaces with docker, such that it
> becomes supported directly rather than something that's implemented
> outside the
> scope of the builder.
>
> It's important that docker is supported directly because it makes the
> reproducible build use-case a one-liner, it solves the problem of having
> to get
> python and the builder into the build containers, and it allows for more
> granular control of the build.
>
> So, thoughts? How close or how far is this from the roadmap for kci_build
> as it
> stands today? We are likely to start working on this soon, and would really
> like to work together on it if possible.
>

Above all, thanks for the detailed proposal and for collaborating
on a common kernel build utility to share with KernelCI.  What
you're suggesting makes a lot of sense to me, and it's broadly in
line with where kernelci-core is going.

The initial goal of kernelci-core was to decouple the KernelCI
steps from Jenkins or any other execution environment.  It has
succeeded in this respect as it's now possible to do everything
on the command line or in other systems such as Gitlab CI, and
you've been able to reuse kci_build too.  I think it is important
that we keep a central repository with the KernelCI specific
orchestration logic and tools to handle configuration.  That
should become a proper "kernelci" Python package at some point,
when the repository has completed its transition.

Now, this kernelci package should ideally be a very thin layer to
just link components together and create pipeline steps.  What I
think we should have next is a toolbox of low-level stand-alone
packages that specialise in only one task, and I believe this is
where it would meet your use-case.  For example, I started
working on the "scalpel" tool last year to do bisections with the
aim of making it independent of KernelCI.  Now I guess we need
another tool to do kernel builds following the same approach,
maybe call it "spanner" or something constructive like that :)

That way, we may even keep kci_build with its current cli but
rather than have all the build steps in kernelci/build.py it
would be importing a common module (say, toolbox.spanner for the
sake of the argument) and your build service would be doing that
too.  Or there would also be some command line tool provided with
the package to run a build directly.

Basically, rather than adapting kci_build for other purposes, I'm
suggesting to create a generic tool that can be used by kci_build
as well as any other kernel CI system.

I don't know how to call this toolbox and where to host it, but
it would seem wise imho to keep it separate from KernelCI, LKFT
or any other CI system that would be using it.  It would probably
also make sense to at least keep it kernel-centric to focus on a
group of use-cases.  How does that sound?

If you agree with this approach, the next steps would be to find
the least common denominator between the use-cases that we
already know about, which I guess are KernelCI and LKFT.  I think
it's largely doable, with probably some compromises on all parts
and some built-in flexibility to make this a truly generic tool.
The details of how to build each artifact, how to use containers,
and the format of the meta-data are probably areas where we may
need some custom code to do slightly different things.  But this
is like any software library, and it can be done iteratively.

Best wishes,
Guillaume

[-- Attachment #2: Type: text/html, Size: 9688 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-04-21 15:46 ` Guillaume Tucker
@ 2020-04-21 15:53   ` Mark Brown
  2020-04-21 17:32     ` Dan Rue
  2020-04-21 17:28   ` Dan Rue
  1 sibling, 1 reply; 23+ messages in thread
From: Mark Brown @ 2020-04-21 15:53 UTC (permalink / raw)
  To: kernelci, guillaume.tucker; +Cc: Dan Rue

[-- Attachment #1: Type: text/plain, Size: 696 bytes --]

On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:

> > I've been working on a kernel building service/api that's designed to
> > perform
> > massive numbers of builds. The actual kernel build implementation reuses
> > kernelci's kci_build script.

> Out of interest, how do you manage the builders?

> We're looking into Kubernetes now so we might find issues similar
> to those in your use-case.

Indeed if the tool that you're working on is free software (I'm guessing
it might be if it's for Linaro?) then it might make sense to just deploy
a copy of that software rather than developing something similar.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-04-21 15:46 ` Guillaume Tucker
  2020-04-21 15:53   ` Mark Brown
@ 2020-04-21 17:28   ` Dan Rue
  2020-05-22 18:22     ` Kevin Hilman
  1 sibling, 1 reply; 23+ messages in thread
From: Dan Rue @ 2020-04-21 17:28 UTC (permalink / raw)
  To: Guillaume Tucker; +Cc: kernelci

On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
> 
> > Hey Everyone!
> >
> 
> Nice to have you back on the mailing list :)
> 
> 
> > I've been working on a kernel building service/api that's designed to
> > perform
> > massive numbers of builds. The actual kernel build implementation reuses
> > kernelci's kci_build script.
> >
> 
> Out of interest, how do you manage the builders?
> 
> We're looking into Kubernetes now so we might find issues similar
> to those in your use-case.

We have built a custom cloud-native service to do linux kernel builds
which has a public API. It's still in limited beta mode but we have a
number of users, including LKFT, that rely on it. I think it probably
could be usable by kernelci, because the build implementation is very
similar, and it's designed for scale.

> 
> 
> > So far we have not had to modify kernelci-core's kci_build, but we're
> > approaching a point where we think we're going to have to modify it
> > fundamentally to meet our needs. I would really like to continue sharing a
> > build implementation with kernelci, but some of the changes needed are
> > architectural in nature, and so I'd like to share a brief proposal for
> > consideration:
> >
> > Standalone python-based kernel builder with the following properties:
> > - Independent python package (library and cli) that only does a local
> > kernel
> >   build
> >
> 
> Yes - see my more detailed reply further down.
> 
> 
> > - Takes as arguments things like architecture, toolchain, kernel config,
> > etc.
> > - Dockerzed toolchain environments are supported directly (e.g. it will run
> >   "docker run make")
> > - Builds are done within the context of an existing kernel tree (doing git
> >   checkout, git fetch, etc are out of scope)
> > - Artifacts go to some named or default directory path. Uploading them
> >   elsewhere is out of scope.
> > - Exception handling is built in and makes it usable as a python library
> > - Logging is built in with flexibility in mind (log to console, verbose or
> >   silent, log to file, etc.)
> > - Simple, one-line dockerized reproducers are given as a build artifact,
> > for
> >   reproducing builds locally.
> > - Partial build failure support - e.g. what if kselftest fails but the
> > rest of
> >   the build passes?
> >
> 
> Another "out of interest" question: have you modified kci_build
> already to cover kselftest?
> 
> We have an open PR for that, if you're already using something in
> production then it would be worth comparing what you've come up
> with.

No, we haven't implemented kselftest yet either. It's not easy because
of the reasons I'll get into below.

> 
> 
> > - Rigorous unit testing
> > - Implementation is modular enough to support things like a documentation
> > build
> >   and other targets in the tree such as various tools/, sparse checker,
> >   coccinelle, etc.
> >
> 
> Yes, however we need to draw the line somewhere.  I guess that
> would be, anything that can already be built from a plain kernel
> source tree.

I agree that it should be restricted to things in the kernel tree; but
that is a huge surface area that requires a certain robustness in the
builder architecture to be able to support so many different targets.

For example, let's take kselftest's case (PR #339) - it gets the job
done, but if it fails the whole build fails. That's probably correct in
a strict sense, but what if you are building 10 additional targets - do
you really want the whole build to fail if any one fails? In reality, we
need to be able to deal with a failure in one part of the tree without
blocking testing of the rest of the tree.

e.g. recently -next was broken for syzkaller and lkft, but for different
reasons. In LKFT's case, it was merely because the perf build failed,
and caused the entire kernel build to fail. As a result, we had a
multi-week gap in data and once perf was fixed, we had a backlog of
problems to deal with.

Different targets also have different build-time requirements. Things
like the documentation build could have a specific container because of
its specific requirements, rather than putting all requirements for
every target into every build container.

> 
> 
> > - Support for kernel fragement files or urls in addition to supporting
> >   individual kernel config arguments
> >
> > Use Case Examples:
> > - Install using pip, build using the cli interface
> >   - "pip install --user kernel-builder; cd linux; kernel-builder --arch
> > arm64 --toolchain clang-9"
> > - Run using a published docker container. This example would need
> > permission to
> >   launch additional docker containers to perform the build steps.
> >   - "docker run kernel-builder --arch arm64 --toolchain clang-9"
> > - Install using pip, and use it from python as a library
> >   - "import kernel-builder"
> >
> > In the example above, "kernel-builder" is a placeholder for whatever the
> > name
> > would be.
> >
> > The artifacts produced would be similar to the set of artifacts kci_build
> > produces today, with the addition of a build reproducer script and probably
> > some differences in metadata files.
> >
> > Docker support would be optional (you could use a locally installed
> > toolchain),
> > but allows for arbitrary toolchains to be available and perfect
> > reproducibility.
> >
> 
> I agree that we need to have a way to fully reproduce builds and
> that includes having control over the environment, typically a
> Docker image.  However, I'm a bit skeptical about making the tool
> call Docker rather than have a wrapper to call it.  I think it's
> more a matter of hierarchy in the stack, I see it as:
> 
>   docker -> kernel-builder -> make -> gcc
> 
> Say, what if someone wanted to use another environment than
> Docker?  Rather than make it an optional feature, I think it may
> be better to keep it one level higher if we can still generate
> reproducers like you're suggesting.

Aye. This is what caused me to end up here in the first place.

What we did is we wrapped kci_build like the following:

kci_build_wrapper -> docker -> kci_build -> make -> gcc.

Because there's "docker" in between our python wrapper and the kci_build
implementation, the interface is subprocess() rather than python. And,
the containers need python and any other kci_build run-time requirements
installed, even if they're not needed to build the kernel.

One consequence of this is it's really hard for example to tell if a
kernel build failed or if the defconfig failed to build, because they're
both done in kci_build build_kernel. The wrapper just sees a bad exit
code from kci_build and has to try to figure out what went wrong.

So why not move the wrapper into the container then? Then the containers
would also need any additional run-time requirements of the wrapper,
which also have nothing to do with building kernels. I don't think it's
correct to install e.g. boto3 into a linux kernel toolchain container,
and it makes the containers themselves non-portable.

If the containers' responsibility were to only contain the things
necessary to perform the kernel build, then their implementation,
hosting, etc could be shared.

Another problem is user runtime when trying to reproduce builds. You
have to either publish linux kernel toolchain containers with the
"kernel-builder" inside it, or you have to explain to users how to
mount/copy it in, or you have to do reproducers without kernel-builder
(which makes it a much harder problem). If kernel-builder is inside, now
you have a container versioning problem because both the toolchain
version and the kernel-builder version matter; which is to say, they
should be separate containers each with their own versioning.

Finally, I'll say that it's possible but not easy to reuse kci_build as
it is today - in part because it leaves docker as an exercise for the
user. Getting it into a container, getting the python paths correct, and
dealing with the yaml files are barriers to entry for casual users.

So, the options for keeping kernel-builder unaware of docker are all
suboptimal. Trust me, I would have much preferred a workaround to what
I'm proposing here.

> 
> 
> > It's hard to change kci_build to be anything like this because it's tightly
> > coupled with the kernelci-core repo, and kernelci itself. The design above
> > also
> > fundamentally changes how the builder interfaces with docker, such that it
> > becomes supported directly rather than something that's implemented
> > outside the
> > scope of the builder.
> >
> > It's important that docker is supported directly because it makes the
> > reproducible build use-case a one-liner, it solves the problem of having
> > to get
> > python and the builder into the build containers, and it allows for more
> > granular control of the build.
> >
> > So, thoughts? How close or how far is this from the roadmap for kci_build
> > as it
> > stands today? We are likely to start working on this soon, and would really
> > like to work together on it if possible.
> >
> 
> Above all, thanks for the detailed proposal and for collaborating
> on a common kernel build utility to share with KernelCI.  What
> you're suggesting makes a lot of sense to me, and it's broadly in
> line with where kernelci-core is going.
> 
> The initial goal of kernelci-core was to decouple the KernelCI
> steps from Jenkins or any other execution environment.  It has
> succeeded in this respect as it's now possible to do everything
> on the command line or in other systems such as Gitlab CI, and
> you've been able to reuse kci_build too.  I think it is important
> that we keep a central repository with the KernelCI specific
> orchestration logic and tools to handle configuration.  That
> should become a proper "kernelci" Python package at some point,
> when the repository has completed its transition.
> 
> Now, this kernelci package should ideally be a very thin layer to
> just link components together and create pipeline steps.  What I
> think we should have next is a toolbox of low-level stand-alone
> packages that specialise in only one task, and I believe this is
> where it would meet your use-case.  For example, I started
> working on the "scalpel" tool last year to do bisections with the
> aim of making it independent of KernelCI.  Now I guess we need
> another tool to do kernel builds following the same approach,
> maybe call it "spanner" or something constructive like that :)
> 
> That way, we may even keep kci_build with its current cli but
> rather than have all the build steps in kernelci/build.py it
> would be importing a common module (say, toolbox.spanner for the
> sake of the argument) and your build service would be doing that
> too.  Or there would also be some command line tool provided with
> the package to run a build directly.

I agree with most of that - kci_build stays as is but eventually uses
some external build library rather than most of what's in build.py
today.

I'm not sure what you mean about the notion of a "toolbox"? I would make
the kernel build implementation totally independent of any other tools,
rather than trying to co-locate them in the same python package/git repo
(if that's what you meant).

> 
> Basically, rather than adapting kci_build for other purposes, I'm
> suggesting to create a generic tool that can be used by kci_build
> as well as any other kernel CI system.
> 
> I don't know how to call this toolbox and where to host it, but
> it would seem wise imho to keep it separate from KernelCI, LKFT
> or any other CI system that would be using it.  It would probably
> also make sense to at least keep it kernel-centric to focus on a
> group of use-cases.  How does that sound?

If there's agreement to do this together with kernelci, which I'm really
happy about, then I propose it be something like
github.com/kernelci/<somename>, where somename is available in the pypi
namespace (spanner isn't).

I think our requirements are largely the same. We do need to decide
about the docker bits, and about how opinionated the tool should be.

If there's wide agreement then we could establish such a repository and
start iterating on a design document. I have someone available to start
working on the actual implementation in a few weeks.

There are other details to discuss, like how to deal with config (we had
to do a bit of a workaround for it outside of kci_build), licensing,
etc.

> 
> If you agree with this approach, the next steps would be to find
> the least common denominator between the use-cases that we
> already know about, which I guess are KernelCI and LKFT.  I think
> it's largely doable, with probably some compromises on all parts
> and some built-in flexibility to make this a truly generic tool.
> The details of how to build each artifact, how to use containers,
> and the format of the meta-data are probably areas where we may
> need some custom code to do slightly different things.  But this
> is like any software library, and it can be done iteratively.

Yea - the nice thing is that if it's usable as a library or a cli, it's
easy to wrap and transform things like metadata into different use-cases
to maintain backward compatibility.

> 
> Best wishes,
> Guillaume

-- 
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-04-21 15:53   ` Mark Brown
@ 2020-04-21 17:32     ` Dan Rue
  0 siblings, 0 replies; 23+ messages in thread
From: Dan Rue @ 2020-04-21 17:32 UTC (permalink / raw)
  To: Mark Brown; +Cc: kernelci, guillaume.tucker

On Tue, Apr 21, 2020 at 04:53:59PM +0100, Mark Brown wrote:
> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
> > On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
> 
> > > I've been working on a kernel building service/api that's designed to
> > > perform
> > > massive numbers of builds. The actual kernel build implementation reuses
> > > kernelci's kci_build script.
> 
> > Out of interest, how do you manage the builders?
> 
> > We're looking into Kubernetes now so we might find issues similar
> > to those in your use-case.
> 
> Indeed if the tool that you're working on is free software (I'm guessing
> it might be if it's for Linaro?) then it might make sense to just deploy
> a copy of that software rather than developing something similar.

Hi Mark -

The cloud orchestration stuff isn't public at this point, but it is
available as a service for interested parties.

Thanks,
Dan

-- 
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-04-21 17:28   ` Dan Rue
@ 2020-05-22 18:22     ` Kevin Hilman
  2020-05-27 19:58       ` Dan Rue
  0 siblings, 1 reply; 23+ messages in thread
From: Kevin Hilman @ 2020-05-22 18:22 UTC (permalink / raw)
  To: kernelci, dan.rue, Guillaume Tucker

"Dan Rue" <dan.rue@linaro.org> writes:

> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
>> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:

[...]

>> Basically, rather than adapting kci_build for other purposes, I'm
>> suggesting to create a generic tool that can be used by kci_build
>> as well as any other kernel CI system.
>> 
>> I don't know how to call this toolbox and where to host it, but
>> it would seem wise imho to keep it separate from KernelCI, LKFT
>> or any other CI system that would be using it.  It would probably
>> also make sense to at least keep it kernel-centric to focus on a
>> group of use-cases.  How does that sound?
>
> If there's agreement to do this together with kernelci, which I'm really
> happy about, then I propose it be something like
> github.com/kernelci/<somename>, where somename is available in the pypi
> namespace (spanner isn't).
>
> I think our requirements are largely the same. We do need to decide
> about the docker bits, and about how opinionated the tool should be.

Having spent the last few weeks getting the existing kci_build working
in k8s environment[1], I think how docker/containers fits in here is the
key thing we should agree upon.

For the initial k8s migration, I started with the requirement to use
kci_build as-is, but some of the issues Dan raised are already issues
with the k8s build e.g. non-trivial to discover which part of the build
failed (and if it's critical or not.)

I'm fully supportive rethinking/reworking this tool to be cloud-native.
I would also add we also need to be intentional about how
artifacts/results are passed between the various phases so we build
flexible piplines that fit into a variety of CI/CD pipeline tools (I'd
really like to find someone with the time explore reworking our jenkins
pipline into tekton[2])

> If there's wide agreement then we could establish such a repository and
> start iterating on a design document. I have someone available to start
> working on the actual implementation in a few weeks.
>
> There are other details to discuss, like how to deal with config (we had
> to do a bit of a workaround for it outside of kci_build), licensing,
> etc.

I propose that we we dedicate one of our tuesday weekly calls to this
topic.  Would Tues, June 2nd work?

Kevin

[1] https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
[2] https://tekton.dev/

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-22 18:22     ` Kevin Hilman
@ 2020-05-27 19:58       ` Dan Rue
  2020-05-28  6:43         ` Guillaume Tucker
  0 siblings, 1 reply; 23+ messages in thread
From: Dan Rue @ 2020-05-27 19:58 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: kernelci, Guillaume Tucker

On Fri, May 22, 2020 at 11:22:17AM -0700, Kevin Hilman wrote:
> "Dan Rue" <dan.rue@linaro.org> writes:
> 
> > On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
> >> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
> 
> [...]
> 
> >> Basically, rather than adapting kci_build for other purposes, I'm
> >> suggesting to create a generic tool that can be used by kci_build
> >> as well as any other kernel CI system.
> >> 
> >> I don't know how to call this toolbox and where to host it, but
> >> it would seem wise imho to keep it separate from KernelCI, LKFT
> >> or any other CI system that would be using it.  It would probably
> >> also make sense to at least keep it kernel-centric to focus on a
> >> group of use-cases.  How does that sound?
> >
> > If there's agreement to do this together with kernelci, which I'm really
> > happy about, then I propose it be something like
> > github.com/kernelci/<somename>, where somename is available in the pypi
> > namespace (spanner isn't).
> >
> > I think our requirements are largely the same. We do need to decide
> > about the docker bits, and about how opinionated the tool should be.
> 
> Having spent the last few weeks getting the existing kci_build working
> in k8s environment[1], I think how docker/containers fits in here is the
> key thing we should agree upon.
> 
> For the initial k8s migration, I started with the requirement to use
> kci_build as-is, but some of the issues Dan raised are already issues
> with the k8s build e.g. non-trivial to discover which part of the build
> failed (and if it's critical or not.)
> 
> I'm fully supportive rethinking/reworking this tool to be cloud-native.
> I would also add we also need to be intentional about how
> artifacts/results are passed between the various phases so we build
> flexible piplines that fit into a variety of CI/CD pipeline tools (I'd
> really like to find someone with the time explore reworking our jenkins
> pipline into tekton[2])

Hi Kevin!

OK, we've started to put some thoughts together on an implementation at
https://gitlab.com/Linaro/tuxmake.

Some of the details are still a bit preliminary, but we plan to start on
it soon and get the basic interface and use-cases in place.

Would the design meet your needs? If you have any feedback, we would
really appreciate it. I'm really not sure how it would fit in with k8s,
but I anticipate if it were used as a python library there would be a
lot of fine grained control of the steps.

We hope that this would be useful for kernelci and tuxbuild projects as
a library, but also for kernel engineers to be able to more easily
perform and reproduce a variety of builds locally using the cli.

> 
> > If there's wide agreement then we could establish such a repository and
> > start iterating on a design document. I have someone available to start
> > working on the actual implementation in a few weeks.
> >
> > There are other details to discuss, like how to deal with config (we had
> > to do a bit of a workaround for it outside of kci_build), licensing,
> > etc.
> 
> I propose that we we dedicate one of our tuesday weekly calls to this
> topic.  Would Tues, June 2nd work?

I can make that work,

Thanks,
Dan

> 
> Kevin
> 
> [1] https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
> [2] https://tekton.dev/

-- 
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-27 19:58       ` Dan Rue
@ 2020-05-28  6:43         ` Guillaume Tucker
  2020-05-28 17:28           ` Dan Rue
  0 siblings, 1 reply; 23+ messages in thread
From: Guillaume Tucker @ 2020-05-28  6:43 UTC (permalink / raw)
  To: kernelci, dan.rue, Kevin Hilman

On 27/05/2020 20:58, Dan Rue wrote:
> On Fri, May 22, 2020 at 11:22:17AM -0700, Kevin Hilman wrote:
>> "Dan Rue" <dan.rue@linaro.org> writes:
>>
>>> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
>>>> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
>>
>> [...]
>>
>>>> Basically, rather than adapting kci_build for other purposes, I'm
>>>> suggesting to create a generic tool that can be used by kci_build
>>>> as well as any other kernel CI system.
>>>>
>>>> I don't know how to call this toolbox and where to host it, but
>>>> it would seem wise imho to keep it separate from KernelCI, LKFT
>>>> or any other CI system that would be using it.  It would probably
>>>> also make sense to at least keep it kernel-centric to focus on a
>>>> group of use-cases.  How does that sound?
>>>
>>> If there's agreement to do this together with kernelci, which I'm really
>>> happy about, then I propose it be something like
>>> github.com/kernelci/<somename>, where somename is available in the pypi
>>> namespace (spanner isn't).
>>>
>>> I think our requirements are largely the same. We do need to decide
>>> about the docker bits, and about how opinionated the tool should be.
>>
>> Having spent the last few weeks getting the existing kci_build working
>> in k8s environment[1], I think how docker/containers fits in here is the
>> key thing we should agree upon.
>>
>> For the initial k8s migration, I started with the requirement to use
>> kci_build as-is, but some of the issues Dan raised are already issues
>> with the k8s build e.g. non-trivial to discover which part of the build
>> failed (and if it's critical or not.)
>>
>> I'm fully supportive rethinking/reworking this tool to be cloud-native.
>> I would also add we also need to be intentional about how
>> artifacts/results are passed between the various phases so we build
>> flexible piplines that fit into a variety of CI/CD pipeline tools (I'd
>> really like to find someone with the time explore reworking our jenkins
>> pipline into tekton[2])

I think that passing artifacts between build stages is really
part of the pipeline integration work and shouldn't be imposed by
the tools.  The tools however should produce intermediary data
and files in a way that can be integrated into various pipelines
with different requirements.

It seems we can improve this in kci_build by first generating a
JSON file with the static meta-data before starting a kernel
build, then the build would add to that data, then the install
step would add more data again.

See also my comment below about having the ability to build
individual steps, which should help with error handling.

> Hi Kevin!
> 
> OK, we've started to put some thoughts together on an implementation at
> https://gitlab.com/Linaro/tuxmake.
> 
> Some of the details are still a bit preliminary, but we plan to start on
> it soon and get the basic interface and use-cases in place.
> 
> Would the design meet your needs? If you have any feedback, we would
> really appreciate it. I'm really not sure how it would fit in with k8s,
> but I anticipate if it were used as a python library there would be a
> lot of fine grained control of the steps.

I still strongly believe that making the tool call Docker itself
is going to be a big issue - it should really be the other way
round, or at least there should be a way to run the kernel build
directly with an option to wrap it inside a Docker container at
the user's discretion.

It's also important to have the possibility to run each
individual make command separately, to create the .config, build
the image, build the modules, the dtbs, or other arbitrary things
such as kselftest.  The current kci_build implementation does too
much already in the build_kernel() function, I've had to hack it
so many times to force it to not build modules...

About k8s, yes I agree a Python library should provide the
required flexibility.  It shouldn't need to have any "cloud"
special features as it's just one use-case among many.  The key
thing is to provide just enough added value without turning it
into a complete integration.  I guess one could see the stack as
follows:


user
------------
environment
------------
tuxmake
------------
make
compiler
kernel source code


The "environment" can be a plain shell in a terminal, or in a
Docker container, which may be managed by Kubernetes, which may
have been started by Jenkins or Tekton or anything else.  By
keeping the environment as a higher-level entity in the stack we
also keep the tool (tuxmake) generic.

> We hope that this would be useful for kernelci and tuxbuild projects as
> a library, but also for kernel engineers to be able to more easily
> perform and reproduce a variety of builds locally using the cli.

Err, do you want to call it tuxbuild or tuxmake? ;)

I wonder if tuxmake would sound more like a lower-lavel "make"
tool, such as Ninja or Meson.  So tuxbuild is probably a bit
clearer in that it's about facilitating builds (i.e. above
the "make" layer in the stack...).

You suggested to have a project under github.com/kernelci, which
I believe probably makes more sense if we want KernelCI to
provide a common set of tools for others to reuse (not being
biased at all here).  Or, we can do what I was trying to suggest
in my previous email and have a new "namespace" for these tools
that don't need to be associated with KernelCI, or LKFT or
anything else.

>>> If there's wide agreement then we could establish such a repository and
>>> start iterating on a design document. I have someone available to start
>>> working on the actual implementation in a few weeks.
>>>
>>> There are other details to discuss, like how to deal with config (we had
>>> to do a bit of a workaround for it outside of kci_build), licensing,
>>> etc.
>>
>> I propose that we we dedicate one of our tuesday weekly calls to this
>> topic.  Would Tues, June 2nd work?
> 
> I can make that work,

Well, the main topic for that meeting is going to be to review
the KernelCI plans for the next 3 months.  You're welcome to join
in any case, and if we don't spend too much time on the plan we
may have time to discuss that.  However, maybe we could aim for
the 9th instead?

Thanks,
Guillaume

>> [1] https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
>> [2] https://tekton.dev/

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-28  6:43         ` Guillaume Tucker
@ 2020-05-28 17:28           ` Dan Rue
  2020-05-28 21:03             ` Guillaume Tucker
  2020-05-28 23:31             ` kci_build proposal Kevin Hilman
  0 siblings, 2 replies; 23+ messages in thread
From: Dan Rue @ 2020-05-28 17:28 UTC (permalink / raw)
  To: Guillaume Tucker; +Cc: kernelci, Kevin Hilman

On Thu, May 28, 2020 at 07:43:40AM +0100, Guillaume Tucker wrote:
> On 27/05/2020 20:58, Dan Rue wrote:
> > On Fri, May 22, 2020 at 11:22:17AM -0700, Kevin Hilman wrote:
> >> "Dan Rue" <dan.rue@linaro.org> writes:
> >>
> >>> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
> >>>> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
> >>
> >> [...]
> >>
> >>>> Basically, rather than adapting kci_build for other purposes, I'm
> >>>> suggesting to create a generic tool that can be used by kci_build
> >>>> as well as any other kernel CI system.
> >>>>
> >>>> I don't know how to call this toolbox and where to host it, but
> >>>> it would seem wise imho to keep it separate from KernelCI, LKFT
> >>>> or any other CI system that would be using it.  It would probably
> >>>> also make sense to at least keep it kernel-centric to focus on a
> >>>> group of use-cases.  How does that sound?
> >>>
> >>> If there's agreement to do this together with kernelci, which I'm really
> >>> happy about, then I propose it be something like
> >>> github.com/kernelci/<somename>, where somename is available in the pypi
> >>> namespace (spanner isn't).
> >>>
> >>> I think our requirements are largely the same. We do need to decide
> >>> about the docker bits, and about how opinionated the tool should be.
> >>
> >> Having spent the last few weeks getting the existing kci_build working
> >> in k8s environment[1], I think how docker/containers fits in here is the
> >> key thing we should agree upon.
> >>
> >> For the initial k8s migration, I started with the requirement to use
> >> kci_build as-is, but some of the issues Dan raised are already issues
> >> with the k8s build e.g. non-trivial to discover which part of the build
> >> failed (and if it's critical or not.)
> >>
> >> I'm fully supportive rethinking/reworking this tool to be cloud-native.
> >> I would also add we also need to be intentional about how
> >> artifacts/results are passed between the various phases so we build
> >> flexible piplines that fit into a variety of CI/CD pipeline tools (I'd
> >> really like to find someone with the time explore reworking our jenkins
> >> pipline into tekton[2])
> 
> I think that passing artifacts between build stages is really
> part of the pipeline integration work and shouldn't be imposed by
> the tools.  The tools however should produce intermediary data
> and files in a way that can be integrated into various pipelines
> with different requirements.
> 
> It seems we can improve this in kci_build by first generating a
> JSON file with the static meta-data before starting a kernel
> build, then the build would add to that data, then the install
> step would add more data again.
> 
> See also my comment below about having the ability to build
> individual steps, which should help with error handling.
> 
> > Hi Kevin!
> > 
> > OK, we've started to put some thoughts together on an implementation at
> > https://gitlab.com/Linaro/tuxmake.
> > 
> > Some of the details are still a bit preliminary, but we plan to start on
> > it soon and get the basic interface and use-cases in place.
> > 
> > Would the design meet your needs? If you have any feedback, we would
> > really appreciate it. I'm really not sure how it would fit in with k8s,
> > but I anticipate if it were used as a python library there would be a
> > lot of fine grained control of the steps.
> 
> I still strongly believe that making the tool call Docker itself
> is going to be a big issue - it should really be the other way
> round, or at least there should be a way to run the kernel build
> directly with an option to wrap it inside a Docker container at
> the user's discretion.

I'll address this further down.

> 
> It's also important to have the possibility to run each
> individual make command separately, to create the .config, build
> the image, build the modules, the dtbs, or other arbitrary things
> such as kselftest.  The current kci_build implementation does too
> much already in the build_kernel() function, I've had to hack it
> so many times to force it to not build modules...

We agree on this point - it should be possible as a cli and as a library
to run each build command separately.

In the tuxmake readme, we have the following proposed to handle this
(from a cli point of view):

    tuxmake --targets kernel,selftests,htmldocs

If --targets were specified, only those targets (and any dependent
targets) would be called. If --targets is not specified,
kernel+modules+dtbs (as applicable) would be built by default.

Config itself is a special case which I have some ideas for, but are not
yet described in the readme.

> 
> About k8s, yes I agree a Python library should provide the
> required flexibility.  It shouldn't need to have any "cloud"
> special features as it's just one use-case among many.  The key
> thing is to provide just enough added value without turning it
> into a complete integration.  I guess one could see the stack as
> follows:
> 
> 
> user
> ------------
> environment
> ------------
> tuxmake
> ------------
> make
> compiler
> kernel source code
> 
> 
> The "environment" can be a plain shell in a terminal, or in a
> Docker container, which may be managed by Kubernetes, which may
> have been started by Jenkins or Tekton or anything else.  By
> keeping the environment as a higher-level entity in the stack we
> also keep the tool (tuxmake) generic.

Either the environment needs to be aware of the tool, or the tool needs
to be aware of the environment. Right now, our container environments
are not generic because they have to have support for kci_build, for
example, built in (python, required python modules, even jenkins
requirements are leaking in). Someone else reusing the tools would have
to bake their own dependencies in, making the environments non-portable.

Worse, I'm really trying to focus on the kernel developer use-cases, and
reproducing a kci_build today is non-trivial - largely because it's up
to the user to perform multiple steps to get a container running, get
their source code into the container, get kci_build into the container,
and rerun the kernelci build.

>From a correctness point of view, a container that is designed to
provide an environment for a kernel build should *only* contain things
that are required by the kernel build itself.

I believe there's an opportunity here to provide a layer of abstraction
that makes build environments portable, and then builds themselves more
portable - making it easier for more developers to do more build
combinations more easily. Right now it's a hassle that every developer
has to solve for themselves (or just not solve it). And if a developer
finds a build problem, it may be difficult for another developer to
reproduce the problem if it's environmental.

> 
> > We hope that this would be useful for kernelci and tuxbuild projects as
> > a library, but also for kernel engineers to be able to more easily
> > perform and reproduce a variety of builds locally using the cli.
> 
> Err, do you want to call it tuxbuild or tuxmake? ;)

tuxbuild is a commercial build service and tuxmake is an open source
build implementation. For what it's worth, I think kernelci should
consider using tuxbuild; it's designed to be highly scalable (thousands
of concurrent builds) and reliable, and would solve kernelci's build
capacity problems. LKFT has been using it in production since January
and it has reduced our build times dramatically.

> 
> I wonder if tuxmake would sound more like a lower-lavel "make"
> tool, such as Ninja or Meson.  So tuxbuild is probably a bit
> clearer in that it's about facilitating builds (i.e. above
> the "make" layer in the stack...).
> 
> You suggested to have a project under github.com/kernelci, which
> I believe probably makes more sense if we want KernelCI to
> provide a common set of tools for others to reuse (not being
> biased at all here).  Or, we can do what I was trying to suggest
> in my previous email and have a new "namespace" for these tools
> that don't need to be associated with KernelCI, or LKFT or
> anything else.

I don't think a new namespace is necessary. At this point, it's a Linaro
project so I've put it there. We can re-assess as necessary but I don't
think the location of the git repo is a critical factor at this point,
nor is it necessarily permanent.

> 
> >>> If there's wide agreement then we could establish such a repository and
> >>> start iterating on a design document. I have someone available to start
> >>> working on the actual implementation in a few weeks.
> >>>
> >>> There are other details to discuss, like how to deal with config (we had
> >>> to do a bit of a workaround for it outside of kci_build), licensing,
> >>> etc.
> >>
> >> I propose that we we dedicate one of our tuesday weekly calls to this
> >> topic.  Would Tues, June 2nd work?
> > 
> > I can make that work,
> 
> Well, the main topic for that meeting is going to be to review
> the KernelCI plans for the next 3 months.  You're welcome to join
> in any case, and if we don't spend too much time on the plan we
> may have time to discuss that.  However, maybe we could aim for
> the 9th instead?

I'm unavailable on the 9th and the 16th of June. Just ping me on irc
if it ends up on the agenda.

Dan

> 
> Thanks,
> Guillaume
> 
> >> [1] https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
> >> [2] https://tekton.dev/

-- 
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-28 17:28           ` Dan Rue
@ 2020-05-28 21:03             ` Guillaume Tucker
  2020-05-29 15:53               ` Dan Rue
  2020-05-28 23:31             ` kci_build proposal Kevin Hilman
  1 sibling, 1 reply; 23+ messages in thread
From: Guillaume Tucker @ 2020-05-28 21:03 UTC (permalink / raw)
  To: dan.rue; +Cc: kernelci, Kevin Hilman

On 28/05/2020 18:28, Dan Rue wrote:
> On Thu, May 28, 2020 at 07:43:40AM +0100, Guillaume Tucker wrote:
>> On 27/05/2020 20:58, Dan Rue wrote:
>>> On Fri, May 22, 2020 at 11:22:17AM -0700, Kevin Hilman wrote:
>>>> "Dan Rue" <dan.rue@linaro.org> writes:
>>>>
>>>>> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
>>>>>> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
>>>>
>>>> [...]
>>>>
>>>>>> Basically, rather than adapting kci_build for other purposes, I'm
>>>>>> suggesting to create a generic tool that can be used by kci_build
>>>>>> as well as any other kernel CI system.
>>>>>>
>>>>>> I don't know how to call this toolbox and where to host it, but
>>>>>> it would seem wise imho to keep it separate from KernelCI, LKFT
>>>>>> or any other CI system that would be using it.  It would probably
>>>>>> also make sense to at least keep it kernel-centric to focus on a
>>>>>> group of use-cases.  How does that sound?
>>>>>
>>>>> If there's agreement to do this together with kernelci, which I'm really
>>>>> happy about, then I propose it be something like
>>>>> github.com/kernelci/<somename>, where somename is available in the pypi
>>>>> namespace (spanner isn't).
>>>>>
>>>>> I think our requirements are largely the same. We do need to decide
>>>>> about the docker bits, and about how opinionated the tool should be.
>>>>
>>>> Having spent the last few weeks getting the existing kci_build working
>>>> in k8s environment[1], I think how docker/containers fits in here is the
>>>> key thing we should agree upon.
>>>>
>>>> For the initial k8s migration, I started with the requirement to use
>>>> kci_build as-is, but some of the issues Dan raised are already issues
>>>> with the k8s build e.g. non-trivial to discover which part of the build
>>>> failed (and if it's critical or not.)
>>>>
>>>> I'm fully supportive rethinking/reworking this tool to be cloud-native.
>>>> I would also add we also need to be intentional about how
>>>> artifacts/results are passed between the various phases so we build
>>>> flexible piplines that fit into a variety of CI/CD pipeline tools (I'd
>>>> really like to find someone with the time explore reworking our jenkins
>>>> pipline into tekton[2])
>>
>> I think that passing artifacts between build stages is really
>> part of the pipeline integration work and shouldn't be imposed by
>> the tools.  The tools however should produce intermediary data
>> and files in a way that can be integrated into various pipelines
>> with different requirements.
>>
>> It seems we can improve this in kci_build by first generating a
>> JSON file with the static meta-data before starting a kernel
>> build, then the build would add to that data, then the install
>> step would add more data again.
>>
>> See also my comment below about having the ability to build
>> individual steps, which should help with error handling.
>>
>>> Hi Kevin!
>>>
>>> OK, we've started to put some thoughts together on an implementation at
>>> https://gitlab.com/Linaro/tuxmake.
>>>
>>> Some of the details are still a bit preliminary, but we plan to start on
>>> it soon and get the basic interface and use-cases in place.
>>>
>>> Would the design meet your needs? If you have any feedback, we would
>>> really appreciate it. I'm really not sure how it would fit in with k8s,
>>> but I anticipate if it were used as a python library there would be a
>>> lot of fine grained control of the steps.
>>
>> I still strongly believe that making the tool call Docker itself
>> is going to be a big issue - it should really be the other way
>> round, or at least there should be a way to run the kernel build
>> directly with an option to wrap it inside a Docker container at
>> the user's discretion.
> 
> I'll address this further down.
> 
>>
>> It's also important to have the possibility to run each
>> individual make command separately, to create the .config, build
>> the image, build the modules, the dtbs, or other arbitrary things
>> such as kselftest.  The current kci_build implementation does too
>> much already in the build_kernel() function, I've had to hack it
>> so many times to force it to not build modules...
> 
> We agree on this point - it should be possible as a cli and as a library
> to run each build command separately.
> 
> In the tuxmake readme, we have the following proposed to handle this
> (from a cli point of view):
> 
>     tuxmake --targets kernel,selftests,htmldocs
> 
> If --targets were specified, only those targets (and any dependent
> targets) would be called. If --targets is not specified,
> kernel+modules+dtbs (as applicable) would be built by default.
> 
> Config itself is a special case which I have some ideas for, but are not
> yet described in the readme.
> 
>>
>> About k8s, yes I agree a Python library should provide the
>> required flexibility.  It shouldn't need to have any "cloud"
>> special features as it's just one use-case among many.  The key
>> thing is to provide just enough added value without turning it
>> into a complete integration.  I guess one could see the stack as
>> follows:
>>
>>
>> user
>> ------------
>> environment
>> ------------
>> tuxmake
>> ------------
>> make
>> compiler
>> kernel source code
>>
>>
>> The "environment" can be a plain shell in a terminal, or in a
>> Docker container, which may be managed by Kubernetes, which may
>> have been started by Jenkins or Tekton or anything else.  By
>> keeping the environment as a higher-level entity in the stack we
>> also keep the tool (tuxmake) generic.
> 
> Either the environment needs to be aware of the tool, or the tool needs
> to be aware of the environment. Right now, our container environments
> are not generic because they have to have support for kci_build, for
> example, built in (python, required python modules, even jenkins
> requirements are leaking in). Someone else reusing the tools would have
> to bake their own dependencies in, making the environments non-portable.

What do you mean exactly by "our container environments"?  Are
they Docker containers used by LKFT?

> Worse, I'm really trying to focus on the kernel developer use-cases, and
> reproducing a kci_build today is non-trivial - largely because it's up
> to the user to perform multiple steps to get a container running, get
> their source code into the container, get kci_build into the container,
> and rerun the kernelci build.

That's right, kci_build is currently mostly designed to get
integrated into an automated build pipeline using say, Jenkins or
Gitlab CI.  We all know that having a way to wrap that in a
Docker call would make things easier for developers - and there's
nothing really hard about it.  We could for example have a "kci"
higher-level command which would act as a shell-oriented
integration to first pull a Docker image, then run some commands
inside it.  It could have one command for each step in a typical
automated pipeline: build, test, report...

> From a correctness point of view, a container that is designed to
> provide an environment for a kernel build should *only* contain things
> that are required by the kernel build itself.

I'm sorry but I really don't get what you mean here.  If you mean
that the container should only have the dependencies to be able
to run plain build commands such as "make defconfig && make",
then how can you have any tool on top of that?  Would it need to
be merged in the upstream kernel build system to satisfy your
criteria?

The README says "Requirements: python 3.6+, docker", so you
actually want to use Docker images but _still_ require users to
install a particular version of Python outside the container, and
tuxmake not depend on any other Python module?

> I believe there's an opportunity here to provide a layer of abstraction
> that makes build environments portable, and then builds themselves more
> portable - making it easier for more developers to do more build
> combinations more easily. Right now it's a hassle that every developer
> has to solve for themselves (or just not solve it). And if a developer
> finds a build problem, it may be difficult for another developer to
> reproduce the problem if it's environmental.

Sounds great, although I'm still struggling to understand how
you're proposing to achieve that in practice.

>>> We hope that this would be useful for kernelci and tuxbuild projects as
>>> a library, but also for kernel engineers to be able to more easily
>>> perform and reproduce a variety of builds locally using the cli.
>>
>> Err, do you want to call it tuxbuild or tuxmake? ;)
> 
> tuxbuild is a commercial build service and tuxmake is an open source
> build implementation. For what it's worth, I think kernelci should
> consider using tuxbuild; it's designed to be highly scalable (thousands
> of concurrent builds) and reliable, and would solve kernelci's build
> capacity problems. LKFT has been using it in production since January
> and it has reduced our build times dramatically.

OK, makes more sense now that I've read this:

  https://pypi.org/project/tuxbuild/

About using it in KernelCI, for sure there are lots of things we
could do to improve our build efficiency.  Being tied to a
particular service is probably not going to play out well though,
unless it's only an option when deploying instances (i.e. same as
we use Jenkins and now Kubernetes but don't require them etc...).

So assuming KernelCI could depend on tuxmake, and one possible
way to run it was via tuxbuild, then that might work.

Otherwise, importing tuxmake into kci_build would in theory be
fine as long as tuxmake does cover our use-case in practice.

>> I wonder if tuxmake would sound more like a lower-lavel "make"
>> tool, such as Ninja or Meson.  So tuxbuild is probably a bit
>> clearer in that it's about facilitating builds (i.e. above
>> the "make" layer in the stack...).
>>
>> You suggested to have a project under github.com/kernelci, which
>> I believe probably makes more sense if we want KernelCI to
>> provide a common set of tools for others to reuse (not being
>> biased at all here).  Or, we can do what I was trying to suggest
>> in my previous email and have a new "namespace" for these tools
>> that don't need to be associated with KernelCI, or LKFT or
>> anything else.
> 
> I don't think a new namespace is necessary. At this point, it's a Linaro
> project so I've put it there. We can re-assess as necessary but I don't
> think the location of the git repo is a critical factor at this point,
> nor is it necessarily permanent.

Right, sorry I was missing some background about tuxbuild.

The git URL obviously has very little significance, but the
organisation which has the destiny of the tool in its hands does.
If it's under /linaro, it's a Linaro tool, and obviously it will
at least appear to be designed for the very specific use-case
that is tuxbuild and LKFT:

  https://gitlab.com/Linaro/lkft/ci-scripts/-/blob/master/branch-gitlab-ci.yml#L52

>>>>> If there's wide agreement then we could establish such a repository and
>>>>> start iterating on a design document. I have someone available to start
>>>>> working on the actual implementation in a few weeks.
>>>>>
>>>>> There are other details to discuss, like how to deal with config (we had
>>>>> to do a bit of a workaround for it outside of kci_build), licensing,
>>>>> etc.

To clarify one thing, have you completely stopped using kci_build
with LKFT?

>>>> I propose that we we dedicate one of our tuesday weekly calls to this
>>>> topic.  Would Tues, June 2nd work?
>>>
>>> I can make that work,
>>
>> Well, the main topic for that meeting is going to be to review
>> the KernelCI plans for the next 3 months.  You're welcome to join
>> in any case, and if we don't spend too much time on the plan we
>> may have time to discuss that.  However, maybe we could aim for
>> the 9th instead?
> 
> I'm unavailable on the 9th and the 16th of June. Just ping me on irc
> if it ends up on the agenda.

OK let's catch up again by Monday at the latest.

Thanks,
Guillaume

>>>> [1] https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
>>>> [2] https://tekton.dev/

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-28 17:28           ` Dan Rue
  2020-05-28 21:03             ` Guillaume Tucker
@ 2020-05-28 23:31             ` Kevin Hilman
  2020-05-29  7:42               ` Mathieu Acher
  1 sibling, 1 reply; 23+ messages in thread
From: Kevin Hilman @ 2020-05-28 23:31 UTC (permalink / raw)
  To: Dan Rue, Guillaume Tucker; +Cc: kernelci

Dan Rue <dan.rue@linaro.org> writes:

[...]

> tuxbuild is a commercial build service and tuxmake is an open source
> build implementation. For what it's worth, I think kernelci should
> consider using tuxbuild; it's designed to be highly scalable (thousands
> of concurrent builds) and reliable, and would solve kernelci's build
> capacity problems. LKFT has been using it in production since January
> and it has reduced our build times dramatically.

Cool, thanks for sharing about tuxbuild.  It's the first I've heard of
it.  It definitely looks promising.

I think we need to know a lot more about how it works, what are the
underlying compute resources, how (and who) is capable of bringing more
compute capacity to the table, how (and who) will be capable of
contributing code, etc. etc.  Using a commercial service like this is
probably unlikely, but I certainly hope we can collaborate on the
tooling as the use-case is identical.

After a quick glance at the gitlab project
(https://gitlab.com/Linaro/tuxbuild) and screencast, it seems to me that
whatever your backend is, it's basically k8s batch jobs (or a
reinvention of them.)  We're in the process of converting to k8s batch
jobs (currently running on kci staging), and with that we can scale
build parallelism dramatically.  We're able to use compute from any
cloud-provider that supports k8s (our staging instance is currently
using both Google Compute and MS Azure) but the kci-tools are
cloud-provider agnostic.  We just use the standard kubectl cmdline tool
and the python3-kubernetes package.

Kevin

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-28 23:31             ` kci_build proposal Kevin Hilman
@ 2020-05-29  7:42               ` Mathieu Acher
  2020-05-29 10:44                 ` Mark Brown
                                   ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Mathieu Acher @ 2020-05-29  7:42 UTC (permalink / raw)
  To: Kevin Hilman, kernelci

Dear all, 

I'm glad to hear the ongoing effort about containerization (Docker + Python 3) of KernelCI. 
I didn't know the existence of tuxbuild and I'm as curious as Kevin for having more details. 

As an academic, we have some use cases that can further motivate the use of a "cloud-based solution".
In 2017, we have developed TuxML (https://github.com/TuxML/ProjetIrma) which a solution with Docker and Python 3 for massively compiling kernel configurations. 
The "ml" part stands for statistical learning. So far, we have collected 200K+ configurations with a distributed computing infrastructure (a kind of cloud). 
Our focus is to explore the configuration space of the Linux kernel in the large; we fed several config files (mainly with randconfig and some pre-set options) and then perform some "tests" (in a broad sense). 

We have two major use cases:
 * bug-finding related to configurations: we are able to locate faulty (combinations of) options and even prevent/fix some failures at the Kconfig level (more details here: https://hal.inria.fr/hal-02147012)
 * size prediction of a configuration (without compiling it) and identificaton of options that influence size (more details here: https://hal.inria.fr/hal-02314830) 

In both cases, we need an infrastructure capable of compiling *any* configuration of the kernel.
For measuring the size, we need to control all factors, like gcc version and so forth. Docker was helpful here to have a canonical environment. 
One issue we found is to build the right Docker image: some configurations of the kernel require specific tools and anticipating all can be tricky, leading to some false-positive failures.
Have you encountered this situation? 

In general, my colleagues at University of Rennes 1/Inria and I are really interested to be part of this new effort through discussions or technical contributions. 
We plan to modernize TuxML with KernelCI toolchain, especially if it's possible to deploy it on several distributed machines. 

Can I somehow participate in the next call? Mainly to learn and as a first step to contribute in this very nice project. 

Best, 

Mathieu Acher 



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-29  7:42               ` Mathieu Acher
@ 2020-05-29 10:44                 ` Mark Brown
  2020-05-29 14:27                 ` Guillaume Tucker
  2020-06-16 16:33                 ` Nick Desaulniers
  2 siblings, 0 replies; 23+ messages in thread
From: Mark Brown @ 2020-05-29 10:44 UTC (permalink / raw)
  To: kernelci, mathieu.acher; +Cc: Kevin Hilman

[-- Attachment #1: Type: text/plain, Size: 657 bytes --]

On Fri, May 29, 2020 at 12:42:04AM -0700, Mathieu Acher wrote:

> One issue we found is to build the right Docker image: some
> configurations of the kernel require specific tools and anticipating
> all can be tricky, leading to some false-positive failures.  Have you
> encountered this situation? 

We do, generally when we notice something like this has happened we will
just add the new tool to our images and move on.  Since linux-next is
included in our coverage we tend to notice when things hit that and
people are a bit more tolerant of breakage there than they might be in
other trees so long as it gets addressed reasonably promptly.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-29  7:42               ` Mathieu Acher
  2020-05-29 10:44                 ` Mark Brown
@ 2020-05-29 14:27                 ` Guillaume Tucker
  2020-06-16 16:33                 ` Nick Desaulniers
  2 siblings, 0 replies; 23+ messages in thread
From: Guillaume Tucker @ 2020-05-29 14:27 UTC (permalink / raw)
  To: mathieu.acher; +Cc: kernelci, Kevin Hilman

Hi Mathieu,

On 29/05/2020 08:42, Mathieu Acher wrote:
> Dear all, 
> 
> I'm glad to hear the ongoing effort about containerization (Docker + Python 3) of KernelCI. 
> I didn't know the existence of tuxbuild and I'm as curious as Kevin for having more details. 
> 
> As an academic, we have some use cases that can further motivate the use of a "cloud-based solution".
> In 2017, we have developed TuxML (https://github.com/TuxML/ProjetIrma) which a solution with Docker and Python 3 for massively compiling kernel configurations. 
> The "ml" part stands for statistical learning. So far, we have collected 200K+ configurations with a distributed computing infrastructure (a kind of cloud). 
> Our focus is to explore the configuration space of the Linux kernel in the large; we fed several config files (mainly with randconfig and some pre-set options) and then perform some "tests" (in a broad sense). 
> 
> We have two major use cases:
>  * bug-finding related to configurations: we are able to locate faulty (combinations of) options and even prevent/fix some failures at the Kconfig level (more details here: https://hal.inria.fr/hal-02147012)
>  * size prediction of a configuration (without compiling it) and identificaton of options that influence size (more details here: https://hal.inria.fr/hal-02314830) 
> 
> In both cases, we need an infrastructure capable of compiling *any* configuration of the kernel.
> For measuring the size, we need to control all factors, like gcc version and so forth. Docker was helpful here to have a canonical environment. 
> One issue we found is to build the right Docker image: some configurations of the kernel require specific tools and anticipating all can be tricky, leading to some false-positive failures.
> Have you encountered this situation? 
> 
> In general, my colleagues at University of Rennes 1/Inria and I are really interested to be part of this new effort through discussions or technical contributions. 
> We plan to modernize TuxML with KernelCI toolchain, especially if it's possible to deploy it on several distributed machines. 

Thank you for getting in touch, your project sounds really
interesting.  To echo what Mark wrote in his email, it does seem
like we're all fighting for a common cause.

> Can I somehow participate in the next call? Mainly to learn and as a first step to contribute in this very nice project. 

As it turns out, we're now planning to have a meeting with Dan on
Tuesday 23rd June to focus on the tuxmake/tuxbuild topic he just
brought up.  Please feel free to join us then, as well as any of
the weekly Tuesday meetings so we can introduce ourselves.  I've
added you to the calendar invitation.

Out of interest, do you cover other compilers than gcc?  We're
working closely with the Clang-built Linux team and this may be
of interest to them too.

Best wishes,
Guillaume

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-28 21:03             ` Guillaume Tucker
@ 2020-05-29 15:53               ` Dan Rue
  2020-06-15 13:22                 ` "Audience"/"Guest" can join for this meeting running on 23rd? koti koti
  0 siblings, 1 reply; 23+ messages in thread
From: Dan Rue @ 2020-05-29 15:53 UTC (permalink / raw)
  To: Guillaume Tucker; +Cc: kernelci, Kevin Hilman

Let's add the questions and items raised in this thread to the agenda
for the discussion scheduled for June 23rd.

Thanks,
Dan

On Thu, May 28, 2020 at 10:03:20PM +0100, Guillaume Tucker wrote:
> On 28/05/2020 18:28, Dan Rue wrote:
> > On Thu, May 28, 2020 at 07:43:40AM +0100, Guillaume Tucker wrote:
> >> On 27/05/2020 20:58, Dan Rue wrote:
> >>> On Fri, May 22, 2020 at 11:22:17AM -0700, Kevin Hilman wrote:
> >>>> "Dan Rue" <dan.rue@linaro.org> writes:
> >>>>
> >>>>> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
> >>>>>> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org> wrote:
> >>>>
> >>>> [...]
> >>>>
> >>>>>> Basically, rather than adapting kci_build for other purposes, I'm
> >>>>>> suggesting to create a generic tool that can be used by kci_build
> >>>>>> as well as any other kernel CI system.
> >>>>>>
> >>>>>> I don't know how to call this toolbox and where to host it, but
> >>>>>> it would seem wise imho to keep it separate from KernelCI, LKFT
> >>>>>> or any other CI system that would be using it.  It would probably
> >>>>>> also make sense to at least keep it kernel-centric to focus on a
> >>>>>> group of use-cases.  How does that sound?
> >>>>>
> >>>>> If there's agreement to do this together with kernelci, which I'm really
> >>>>> happy about, then I propose it be something like
> >>>>> github.com/kernelci/<somename>, where somename is available in the pypi
> >>>>> namespace (spanner isn't).
> >>>>>
> >>>>> I think our requirements are largely the same. We do need to decide
> >>>>> about the docker bits, and about how opinionated the tool should be.
> >>>>
> >>>> Having spent the last few weeks getting the existing kci_build working
> >>>> in k8s environment[1], I think how docker/containers fits in here is the
> >>>> key thing we should agree upon.
> >>>>
> >>>> For the initial k8s migration, I started with the requirement to use
> >>>> kci_build as-is, but some of the issues Dan raised are already issues
> >>>> with the k8s build e.g. non-trivial to discover which part of the build
> >>>> failed (and if it's critical or not.)
> >>>>
> >>>> I'm fully supportive rethinking/reworking this tool to be cloud-native.
> >>>> I would also add we also need to be intentional about how
> >>>> artifacts/results are passed between the various phases so we build
> >>>> flexible piplines that fit into a variety of CI/CD pipeline tools (I'd
> >>>> really like to find someone with the time explore reworking our jenkins
> >>>> pipline into tekton[2])
> >>
> >> I think that passing artifacts between build stages is really
> >> part of the pipeline integration work and shouldn't be imposed by
> >> the tools.  The tools however should produce intermediary data
> >> and files in a way that can be integrated into various pipelines
> >> with different requirements.
> >>
> >> It seems we can improve this in kci_build by first generating a
> >> JSON file with the static meta-data before starting a kernel
> >> build, then the build would add to that data, then the install
> >> step would add more data again.
> >>
> >> See also my comment below about having the ability to build
> >> individual steps, which should help with error handling.
> >>
> >>> Hi Kevin!
> >>>
> >>> OK, we've started to put some thoughts together on an implementation at
> >>> https://gitlab.com/Linaro/tuxmake.
> >>>
> >>> Some of the details are still a bit preliminary, but we plan to start on
> >>> it soon and get the basic interface and use-cases in place.
> >>>
> >>> Would the design meet your needs? If you have any feedback, we would
> >>> really appreciate it. I'm really not sure how it would fit in with k8s,
> >>> but I anticipate if it were used as a python library there would be a
> >>> lot of fine grained control of the steps.
> >>
> >> I still strongly believe that making the tool call Docker itself
> >> is going to be a big issue - it should really be the other way
> >> round, or at least there should be a way to run the kernel build
> >> directly with an option to wrap it inside a Docker container at
> >> the user's discretion.
> > 
> > I'll address this further down.
> > 
> >>
> >> It's also important to have the possibility to run each
> >> individual make command separately, to create the .config, build
> >> the image, build the modules, the dtbs, or other arbitrary things
> >> such as kselftest.  The current kci_build implementation does too
> >> much already in the build_kernel() function, I've had to hack it
> >> so many times to force it to not build modules...
> > 
> > We agree on this point - it should be possible as a cli and as a library
> > to run each build command separately.
> > 
> > In the tuxmake readme, we have the following proposed to handle this
> > (from a cli point of view):
> > 
> >     tuxmake --targets kernel,selftests,htmldocs
> > 
> > If --targets were specified, only those targets (and any dependent
> > targets) would be called. If --targets is not specified,
> > kernel+modules+dtbs (as applicable) would be built by default.
> > 
> > Config itself is a special case which I have some ideas for, but are not
> > yet described in the readme.
> > 
> >>
> >> About k8s, yes I agree a Python library should provide the
> >> required flexibility.  It shouldn't need to have any "cloud"
> >> special features as it's just one use-case among many.  The key
> >> thing is to provide just enough added value without turning it
> >> into a complete integration.  I guess one could see the stack as
> >> follows:
> >>
> >>
> >> user
> >> ------------
> >> environment
> >> ------------
> >> tuxmake
> >> ------------
> >> make
> >> compiler
> >> kernel source code
> >>
> >>
> >> The "environment" can be a plain shell in a terminal, or in a
> >> Docker container, which may be managed by Kubernetes, which may
> >> have been started by Jenkins or Tekton or anything else.  By
> >> keeping the environment as a higher-level entity in the stack we
> >> also keep the tool (tuxmake) generic.
> > 
> > Either the environment needs to be aware of the tool, or the tool needs
> > to be aware of the environment. Right now, our container environments
> > are not generic because they have to have support for kci_build, for
> > example, built in (python, required python modules, even jenkins
> > requirements are leaking in). Someone else reusing the tools would have
> > to bake their own dependencies in, making the environments non-portable.
> 
> What do you mean exactly by "our container environments"?  Are
> they Docker containers used by LKFT?
> 
> > Worse, I'm really trying to focus on the kernel developer use-cases, and
> > reproducing a kci_build today is non-trivial - largely because it's up
> > to the user to perform multiple steps to get a container running, get
> > their source code into the container, get kci_build into the container,
> > and rerun the kernelci build.
> 
> That's right, kci_build is currently mostly designed to get
> integrated into an automated build pipeline using say, Jenkins or
> Gitlab CI.  We all know that having a way to wrap that in a
> Docker call would make things easier for developers - and there's
> nothing really hard about it.  We could for example have a "kci"
> higher-level command which would act as a shell-oriented
> integration to first pull a Docker image, then run some commands
> inside it.  It could have one command for each step in a typical
> automated pipeline: build, test, report...
> 
> > From a correctness point of view, a container that is designed to
> > provide an environment for a kernel build should *only* contain things
> > that are required by the kernel build itself.
> 
> I'm sorry but I really don't get what you mean here.  If you mean
> that the container should only have the dependencies to be able
> to run plain build commands such as "make defconfig && make",
> then how can you have any tool on top of that?  Would it need to
> be merged in the upstream kernel build system to satisfy your
> criteria?
> 
> The README says "Requirements: python 3.6+, docker", so you
> actually want to use Docker images but _still_ require users to
> install a particular version of Python outside the container, and
> tuxmake not depend on any other Python module?
> 
> > I believe there's an opportunity here to provide a layer of abstraction
> > that makes build environments portable, and then builds themselves more
> > portable - making it easier for more developers to do more build
> > combinations more easily. Right now it's a hassle that every developer
> > has to solve for themselves (or just not solve it). And if a developer
> > finds a build problem, it may be difficult for another developer to
> > reproduce the problem if it's environmental.
> 
> Sounds great, although I'm still struggling to understand how
> you're proposing to achieve that in practice.
> 
> >>> We hope that this would be useful for kernelci and tuxbuild projects as
> >>> a library, but also for kernel engineers to be able to more easily
> >>> perform and reproduce a variety of builds locally using the cli.
> >>
> >> Err, do you want to call it tuxbuild or tuxmake? ;)
> > 
> > tuxbuild is a commercial build service and tuxmake is an open source
> > build implementation. For what it's worth, I think kernelci should
> > consider using tuxbuild; it's designed to be highly scalable (thousands
> > of concurrent builds) and reliable, and would solve kernelci's build
> > capacity problems. LKFT has been using it in production since January
> > and it has reduced our build times dramatically.
> 
> OK, makes more sense now that I've read this:
> 
>   https://pypi.org/project/tuxbuild/
> 
> About using it in KernelCI, for sure there are lots of things we
> could do to improve our build efficiency.  Being tied to a
> particular service is probably not going to play out well though,
> unless it's only an option when deploying instances (i.e. same as
> we use Jenkins and now Kubernetes but don't require them etc...).
> 
> So assuming KernelCI could depend on tuxmake, and one possible
> way to run it was via tuxbuild, then that might work.
> 
> Otherwise, importing tuxmake into kci_build would in theory be
> fine as long as tuxmake does cover our use-case in practice.
> 
> >> I wonder if tuxmake would sound more like a lower-lavel "make"
> >> tool, such as Ninja or Meson.  So tuxbuild is probably a bit
> >> clearer in that it's about facilitating builds (i.e. above
> >> the "make" layer in the stack...).
> >>
> >> You suggested to have a project under github.com/kernelci, which
> >> I believe probably makes more sense if we want KernelCI to
> >> provide a common set of tools for others to reuse (not being
> >> biased at all here).  Or, we can do what I was trying to suggest
> >> in my previous email and have a new "namespace" for these tools
> >> that don't need to be associated with KernelCI, or LKFT or
> >> anything else.
> > 
> > I don't think a new namespace is necessary. At this point, it's a Linaro
> > project so I've put it there. We can re-assess as necessary but I don't
> > think the location of the git repo is a critical factor at this point,
> > nor is it necessarily permanent.
> 
> Right, sorry I was missing some background about tuxbuild.
> 
> The git URL obviously has very little significance, but the
> organisation which has the destiny of the tool in its hands does.
> If it's under /linaro, it's a Linaro tool, and obviously it will
> at least appear to be designed for the very specific use-case
> that is tuxbuild and LKFT:
> 
>   https://gitlab.com/Linaro/lkft/ci-scripts/-/blob/master/branch-gitlab-ci.yml#L52
> 
> >>>>> If there's wide agreement then we could establish such a repository and
> >>>>> start iterating on a design document. I have someone available to start
> >>>>> working on the actual implementation in a few weeks.
> >>>>>
> >>>>> There are other details to discuss, like how to deal with config (we had
> >>>>> to do a bit of a workaround for it outside of kci_build), licensing,
> >>>>> etc.
> 
> To clarify one thing, have you completely stopped using kci_build
> with LKFT?
> 
> >>>> I propose that we we dedicate one of our tuesday weekly calls to this
> >>>> topic.  Would Tues, June 2nd work?
> >>>
> >>> I can make that work,
> >>
> >> Well, the main topic for that meeting is going to be to review
> >> the KernelCI plans for the next 3 months.  You're welcome to join
> >> in any case, and if we don't spend too much time on the plan we
> >> may have time to discuss that.  However, maybe we could aim for
> >> the 9th instead?
> > 
> > I'm unavailable on the 9th and the 16th of June. Just ping me on irc
> > if it ends up on the agenda.
> 
> OK let's catch up again by Monday at the latest.
> 
> Thanks,
> Guillaume
> 
> >>>> [1] https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
> >>>> [2] https://tekton.dev/

-- 
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 23+ messages in thread

* "Audience"/"Guest" can join for this meeting running on 23rd?
  2020-05-29 15:53               ` Dan Rue
@ 2020-06-15 13:22                 ` koti koti
  2020-06-16 16:16                   ` Mark Brown
  0 siblings, 1 reply; 23+ messages in thread
From: koti koti @ 2020-06-15 13:22 UTC (permalink / raw)
  To: kernelci, Dan Rue; +Cc: Guillaume Tucker, Kevin Hilman

Hi ,

As per below Dan email, there is a meeting on 23rd June.

Is it possible to join Audience/"Guest Members" ?

If yes, Please can someone let me know the steps to join this meeting?

Regards,
Koti

On Fri, 29 May 2020 at 21:24, Dan Rue <dan.rue@linaro.org> wrote:

> Let's add the questions and items raised in this thread to the agenda
> for the discussion scheduled for June 23rd.
>
> Thanks,
> Dan
>
> On Thu, May 28, 2020 at 10:03:20PM +0100, Guillaume Tucker wrote:
> > On 28/05/2020 18:28, Dan Rue wrote:
> > > On Thu, May 28, 2020 at 07:43:40AM +0100, Guillaume Tucker wrote:
> > >> On 27/05/2020 20:58, Dan Rue wrote:
> > >>> On Fri, May 22, 2020 at 11:22:17AM -0700, Kevin Hilman wrote:
> > >>>> "Dan Rue" <dan.rue@linaro.org> writes:
> > >>>>
> > >>>>> On Tue, Apr 21, 2020 at 04:46:34PM +0100, Guillaume Tucker wrote:
> > >>>>>> On Mon, Apr 20, 2020 at 5:36 PM Dan Rue <dan.rue@linaro.org>
> wrote:
> > >>>>
> > >>>> [...]
> > >>>>
> > >>>>>> Basically, rather than adapting kci_build for other purposes, I'm
> > >>>>>> suggesting to create a generic tool that can be used by kci_build
> > >>>>>> as well as any other kernel CI system.
> > >>>>>>
> > >>>>>> I don't know how to call this toolbox and where to host it, but
> > >>>>>> it would seem wise imho to keep it separate from KernelCI, LKFT
> > >>>>>> or any other CI system that would be using it.  It would probably
> > >>>>>> also make sense to at least keep it kernel-centric to focus on a
> > >>>>>> group of use-cases.  How does that sound?
> > >>>>>
> > >>>>> If there's agreement to do this together with kernelci, which I'm
> really
> > >>>>> happy about, then I propose it be something like
> > >>>>> github.com/kernelci/<somename>, where somename is available in
> the pypi
> > >>>>> namespace (spanner isn't).
> > >>>>>
> > >>>>> I think our requirements are largely the same. We do need to decide
> > >>>>> about the docker bits, and about how opinionated the tool should
> be.
> > >>>>
> > >>>> Having spent the last few weeks getting the existing kci_build
> working
> > >>>> in k8s environment[1], I think how docker/containers fits in here
> is the
> > >>>> key thing we should agree upon.
> > >>>>
> > >>>> For the initial k8s migration, I started with the requirement to use
> > >>>> kci_build as-is, but some of the issues Dan raised are already
> issues
> > >>>> with the k8s build e.g. non-trivial to discover which part of the
> build
> > >>>> failed (and if it's critical or not.)
> > >>>>
> > >>>> I'm fully supportive rethinking/reworking this tool to be
> cloud-native.
> > >>>> I would also add we also need to be intentional about how
> > >>>> artifacts/results are passed between the various phases so we build
> > >>>> flexible piplines that fit into a variety of CI/CD pipeline tools
> (I'd
> > >>>> really like to find someone with the time explore reworking our
> jenkins
> > >>>> pipline into tekton[2])
> > >>
> > >> I think that passing artifacts between build stages is really
> > >> part of the pipeline integration work and shouldn't be imposed by
> > >> the tools.  The tools however should produce intermediary data
> > >> and files in a way that can be integrated into various pipelines
> > >> with different requirements.
> > >>
> > >> It seems we can improve this in kci_build by first generating a
> > >> JSON file with the static meta-data before starting a kernel
> > >> build, then the build would add to that data, then the install
> > >> step would add more data again.
> > >>
> > >> See also my comment below about having the ability to build
> > >> individual steps, which should help with error handling.
> > >>
> > >>> Hi Kevin!
> > >>>
> > >>> OK, we've started to put some thoughts together on an implementation
> at
> > >>> https://gitlab.com/Linaro/tuxmake.
> > >>>
> > >>> Some of the details are still a bit preliminary, but we plan to
> start on
> > >>> it soon and get the basic interface and use-cases in place.
> > >>>
> > >>> Would the design meet your needs? If you have any feedback, we would
> > >>> really appreciate it. I'm really not sure how it would fit in with
> k8s,
> > >>> but I anticipate if it were used as a python library there would be a
> > >>> lot of fine grained control of the steps.
> > >>
> > >> I still strongly believe that making the tool call Docker itself
> > >> is going to be a big issue - it should really be the other way
> > >> round, or at least there should be a way to run the kernel build
> > >> directly with an option to wrap it inside a Docker container at
> > >> the user's discretion.
> > >
> > > I'll address this further down.
> > >
> > >>
> > >> It's also important to have the possibility to run each
> > >> individual make command separately, to create the .config, build
> > >> the image, build the modules, the dtbs, or other arbitrary things
> > >> such as kselftest.  The current kci_build implementation does too
> > >> much already in the build_kernel() function, I've had to hack it
> > >> so many times to force it to not build modules...
> > >
> > > We agree on this point - it should be possible as a cli and as a
> library
> > > to run each build command separately.
> > >
> > > In the tuxmake readme, we have the following proposed to handle this
> > > (from a cli point of view):
> > >
> > >     tuxmake --targets kernel,selftests,htmldocs
> > >
> > > If --targets were specified, only those targets (and any dependent
> > > targets) would be called. If --targets is not specified,
> > > kernel+modules+dtbs (as applicable) would be built by default.
> > >
> > > Config itself is a special case which I have some ideas for, but are
> not
> > > yet described in the readme.
> > >
> > >>
> > >> About k8s, yes I agree a Python library should provide the
> > >> required flexibility.  It shouldn't need to have any "cloud"
> > >> special features as it's just one use-case among many.  The key
> > >> thing is to provide just enough added value without turning it
> > >> into a complete integration.  I guess one could see the stack as
> > >> follows:
> > >>
> > >>
> > >> user
> > >> ------------
> > >> environment
> > >> ------------
> > >> tuxmake
> > >> ------------
> > >> make
> > >> compiler
> > >> kernel source code
> > >>
> > >>
> > >> The "environment" can be a plain shell in a terminal, or in a
> > >> Docker container, which may be managed by Kubernetes, which may
> > >> have been started by Jenkins or Tekton or anything else.  By
> > >> keeping the environment as a higher-level entity in the stack we
> > >> also keep the tool (tuxmake) generic.
> > >
> > > Either the environment needs to be aware of the tool, or the tool needs
> > > to be aware of the environment. Right now, our container environments
> > > are not generic because they have to have support for kci_build, for
> > > example, built in (python, required python modules, even jenkins
> > > requirements are leaking in). Someone else reusing the tools would have
> > > to bake their own dependencies in, making the environments
> non-portable.
> >
> > What do you mean exactly by "our container environments"?  Are
> > they Docker containers used by LKFT?
> >
> > > Worse, I'm really trying to focus on the kernel developer use-cases,
> and
> > > reproducing a kci_build today is non-trivial - largely because it's up
> > > to the user to perform multiple steps to get a container running, get
> > > their source code into the container, get kci_build into the container,
> > > and rerun the kernelci build.
> >
> > That's right, kci_build is currently mostly designed to get
> > integrated into an automated build pipeline using say, Jenkins or
> > Gitlab CI.  We all know that having a way to wrap that in a
> > Docker call would make things easier for developers - and there's
> > nothing really hard about it.  We could for example have a "kci"
> > higher-level command which would act as a shell-oriented
> > integration to first pull a Docker image, then run some commands
> > inside it.  It could have one command for each step in a typical
> > automated pipeline: build, test, report...
> >
> > > From a correctness point of view, a container that is designed to
> > > provide an environment for a kernel build should *only* contain things
> > > that are required by the kernel build itself.
> >
> > I'm sorry but I really don't get what you mean here.  If you mean
> > that the container should only have the dependencies to be able
> > to run plain build commands such as "make defconfig && make",
> > then how can you have any tool on top of that?  Would it need to
> > be merged in the upstream kernel build system to satisfy your
> > criteria?
> >
> > The README says "Requirements: python 3.6+, docker", so you
> > actually want to use Docker images but _still_ require users to
> > install a particular version of Python outside the container, and
> > tuxmake not depend on any other Python module?
> >
> > > I believe there's an opportunity here to provide a layer of abstraction
> > > that makes build environments portable, and then builds themselves more
> > > portable - making it easier for more developers to do more build
> > > combinations more easily. Right now it's a hassle that every developer
> > > has to solve for themselves (or just not solve it). And if a developer
> > > finds a build problem, it may be difficult for another developer to
> > > reproduce the problem if it's environmental.
> >
> > Sounds great, although I'm still struggling to understand how
> > you're proposing to achieve that in practice.
> >
> > >>> We hope that this would be useful for kernelci and tuxbuild projects
> as
> > >>> a library, but also for kernel engineers to be able to more easily
> > >>> perform and reproduce a variety of builds locally using the cli.
> > >>
> > >> Err, do you want to call it tuxbuild or tuxmake? ;)
> > >
> > > tuxbuild is a commercial build service and tuxmake is an open source
> > > build implementation. For what it's worth, I think kernelci should
> > > consider using tuxbuild; it's designed to be highly scalable (thousands
> > > of concurrent builds) and reliable, and would solve kernelci's build
> > > capacity problems. LKFT has been using it in production since January
> > > and it has reduced our build times dramatically.
> >
> > OK, makes more sense now that I've read this:
> >
> >   https://pypi.org/project/tuxbuild/
> >
> > About using it in KernelCI, for sure there are lots of things we
> > could do to improve our build efficiency.  Being tied to a
> > particular service is probably not going to play out well though,
> > unless it's only an option when deploying instances (i.e. same as
> > we use Jenkins and now Kubernetes but don't require them etc...).
> >
> > So assuming KernelCI could depend on tuxmake, and one possible
> > way to run it was via tuxbuild, then that might work.
> >
> > Otherwise, importing tuxmake into kci_build would in theory be
> > fine as long as tuxmake does cover our use-case in practice.
> >
> > >> I wonder if tuxmake would sound more like a lower-lavel "make"
> > >> tool, such as Ninja or Meson.  So tuxbuild is probably a bit
> > >> clearer in that it's about facilitating builds (i.e. above
> > >> the "make" layer in the stack...).
> > >>
> > >> You suggested to have a project under github.com/kernelci, which
> > >> I believe probably makes more sense if we want KernelCI to
> > >> provide a common set of tools for others to reuse (not being
> > >> biased at all here).  Or, we can do what I was trying to suggest
> > >> in my previous email and have a new "namespace" for these tools
> > >> that don't need to be associated with KernelCI, or LKFT or
> > >> anything else.
> > >
> > > I don't think a new namespace is necessary. At this point, it's a
> Linaro
> > > project so I've put it there. We can re-assess as necessary but I don't
> > > think the location of the git repo is a critical factor at this point,
> > > nor is it necessarily permanent.
> >
> > Right, sorry I was missing some background about tuxbuild.
> >
> > The git URL obviously has very little significance, but the
> > organisation which has the destiny of the tool in its hands does.
> > If it's under /linaro, it's a Linaro tool, and obviously it will
> > at least appear to be designed for the very specific use-case
> > that is tuxbuild and LKFT:
> >
> >
> https://gitlab.com/Linaro/lkft/ci-scripts/-/blob/master/branch-gitlab-ci.yml#L52
> >
> > >>>>> If there's wide agreement then we could establish such a
> repository and
> > >>>>> start iterating on a design document. I have someone available to
> start
> > >>>>> working on the actual implementation in a few weeks.
> > >>>>>
> > >>>>> There are other details to discuss, like how to deal with config
> (we had
> > >>>>> to do a bit of a workaround for it outside of kci_build),
> licensing,
> > >>>>> etc.
> >
> > To clarify one thing, have you completely stopped using kci_build
> > with LKFT?
> >
> > >>>> I propose that we we dedicate one of our tuesday weekly calls to
> this
> > >>>> topic.  Would Tues, June 2nd work?
> > >>>
> > >>> I can make that work,
> > >>
> > >> Well, the main topic for that meeting is going to be to review
> > >> the KernelCI plans for the next 3 months.  You're welcome to join
> > >> in any case, and if we don't spend too much time on the plan we
> > >> may have time to discuss that.  However, maybe we could aim for
> > >> the 9th instead?
> > >
> > > I'm unavailable on the 9th and the 16th of June. Just ping me on irc
> > > if it ends up on the agenda.
> >
> > OK let's catch up again by Monday at the latest.
> >
> > Thanks,
> > Guillaume
> >
> > >>>> [1]
> https://github.com/kernelci/kernelci-core/pull/366/commits/4ec097537de61fa8a4f6de4fe9c1cf6b26f6ac04
> > >>>> [2] https://tekton.dev/
>
> --
> Linaro LKFT
> https://lkft.linaro.org
>
> 
>
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: "Audience"/"Guest" can join for this meeting running on 23rd?
  2020-06-15 13:22                 ` "Audience"/"Guest" can join for this meeting running on 23rd? koti koti
@ 2020-06-16 16:16                   ` Mark Brown
  2020-06-17  1:49                     ` koti koti
  0 siblings, 1 reply; 23+ messages in thread
From: Mark Brown @ 2020-06-16 16:16 UTC (permalink / raw)
  To: kernelci, kotisoftwaretest; +Cc: Dan Rue, Guillaume Tucker, Kevin Hilman

[-- Attachment #1: Type: text/plain, Size: 455 bytes --]

On Mon, Jun 15, 2020 at 06:52:22PM +0530, koti koti wrote:

> As per below Dan email, there is a meeting on 23rd June.

> Is it possible to join Audience/"Guest Members" ?

> If yes, Please can someone let me know the steps to join this meeting?

No problem at all - the meeting is weekly at 9am PST/5pm BST and you can
join via the URL below:

   https://meet.kernel.social/kernelci-dev

from a web browser with WebRTC support (most work).

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-05-29  7:42               ` Mathieu Acher
  2020-05-29 10:44                 ` Mark Brown
  2020-05-29 14:27                 ` Guillaume Tucker
@ 2020-06-16 16:33                 ` Nick Desaulniers
  2020-06-23  7:28                   ` Mathieu Acher
  2 siblings, 1 reply; 23+ messages in thread
From: Nick Desaulniers @ 2020-06-16 16:33 UTC (permalink / raw)
  To: mathieu.acher; +Cc: Kevin Hilman, kernelci, clang-built-linux

+ clang-built-linux

On Fri, May 29, 2020 at 12:42 AM Mathieu Acher <mathieu.acher@irisa.fr> wrote:
>
> Dear all,
>
> I'm glad to hear the ongoing effort about containerization (Docker + Python 3) of KernelCI.
> I didn't know the existence of tuxbuild and I'm as curious as Kevin for having more details.
>
> As an academic, we have some use cases that can further motivate the use of a "cloud-based solution".
> In 2017, we have developed TuxML (https://github.com/TuxML/ProjetIrma) which a solution with Docker and Python 3 for massively compiling kernel configurations.
> The "ml" part stands for statistical learning. So far, we have collected 200K+ configurations with a distributed computing infrastructure (a kind of cloud).
> Our focus is to explore the configuration space of the Linux kernel in the large; we fed several config files (mainly with randconfig and some pre-set options) and then perform some "tests" (in a broad sense).
>
> We have two major use cases:
>  * bug-finding related to configurations: we are able to locate faulty (combinations of) options and even prevent/fix some failures at the Kconfig level (more details here: https://hal.inria.fr/hal-02147012)
>  * size prediction of a configuration (without compiling it) and identificaton of options that influence size (more details here: https://hal.inria.fr/hal-02314830)

Hi Mathieu,
I'm focused on building the Linux kernel with Clang.  One of the
issues we're running into is long tail bug reports from configs beyond
defconfigs or the defconfigs from major Linux distros.  In particular,
randconfig builds tend to dig up all kinds of issues for us.  It can
also take time to differentiate whether this issue is
toolchain-agnostic or specific to Clang.

>From your presentation
https://static.sched.com/hosted_files/osseu19/ca/TuxML-OSS2019-v3.pdf
the slides on classification trees have me curious.  I suppose if you
had data from builds with GCC+Clang, you could help us spot what
configs are still problematic just for Clang and not for GCC?  That
kind of pattern analysis would be invaluable in trying to automate bug
finding.  Already the configuration space is unfathomable, and adding
yet another label (toolchain) doesn't help, but it is something that
could have a quantifiable impact and really help.

If it's something that you have capacity for, let's chat more?

>
> In both cases, we need an infrastructure capable of compiling *any* configuration of the kernel.
> For measuring the size, we need to control all factors, like gcc version and so forth. Docker was helpful here to have a canonical environment.
> One issue we found is to build the right Docker image: some configurations of the kernel require specific tools and anticipating all can be tricky, leading to some false-positive failures.
> Have you encountered this situation?
>
> In general, my colleagues at University of Rennes 1/Inria and I are really interested to be part of this new effort through discussions or technical contributions.
> We plan to modernize TuxML with KernelCI toolchain, especially if it's possible to deploy it on several distributed machines.
>
> Can I somehow participate in the next call? Mainly to learn and as a first step to contribute in this very nice project.
>
> Best,
>
> Mathieu Acher
>
>
>
> 
>


-- 
Thanks,
~Nick Desaulniers

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: "Audience"/"Guest" can join for this meeting running on 23rd?
  2020-06-16 16:16                   ` Mark Brown
@ 2020-06-17  1:49                     ` koti koti
  2020-06-17 10:31                       ` Mark Brown
  0 siblings, 1 reply; 23+ messages in thread
From: koti koti @ 2020-06-17  1:49 UTC (permalink / raw)
  To: Mark Brown; +Cc: kernelci, Dan Rue, Guillaume Tucker, Kevin Hilman

Thanks Mark Brown.

Is it every week Monday or how it is?

Regards,
Koti

On Tuesday, 16 June 2020, Mark Brown <broonie@kernel.org> wrote:

> On Mon, Jun 15, 2020 at 06:52:22PM +0530, koti koti wrote:
>
> > As per below Dan email, there is a meeting on 23rd June.
>
> > Is it possible to join Audience/"Guest Members" ?
>
> > If yes, Please can someone let me know the steps to join this meeting?
>
> No problem at all - the meeting is weekly at 9am PST/5pm BST and you can
> join via the URL below:
>
>    https://meet.kernel.social/kernelci-dev
>
> from a web browser with WebRTC support (most work).
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: "Audience"/"Guest" can join for this meeting running on 23rd?
  2020-06-17  1:49                     ` koti koti
@ 2020-06-17 10:31                       ` Mark Brown
  2020-06-17 10:55                         ` koti koti
  0 siblings, 1 reply; 23+ messages in thread
From: Mark Brown @ 2020-06-17 10:31 UTC (permalink / raw)
  To: koti koti; +Cc: kernelci, Dan Rue, Guillaume Tucker, Kevin Hilman

[-- Attachment #1: Type: text/plain, Size: 184 bytes --]

On Wed, Jun 17, 2020 at 07:19:43AM +0530, koti koti wrote:
> Thanks Mark Brown.
> 
> Is it every week Monday or how it is?

Ah, sorry - should have said that it's every Tuesday!

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: "Audience"/"Guest" can join for this meeting running on 23rd?
  2020-06-17 10:31                       ` Mark Brown
@ 2020-06-17 10:55                         ` koti koti
  0 siblings, 0 replies; 23+ messages in thread
From: koti koti @ 2020-06-17 10:55 UTC (permalink / raw)
  To: Mark Brown; +Cc: kernelci, Dan Rue, Guillaume Tucker, Kevin Hilman

Thanks.

Regards,
Koti

On Wed, 17 Jun 2020 at 16:01, Mark Brown <broonie@kernel.org> wrote:

> On Wed, Jun 17, 2020 at 07:19:43AM +0530, koti koti wrote:
> > Thanks Mark Brown.
> >
> > Is it every week Monday or how it is?
>
> Ah, sorry - should have said that it's every Tuesday!
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-06-16 16:33                 ` Nick Desaulniers
@ 2020-06-23  7:28                   ` Mathieu Acher
  2020-06-23 23:48                     ` Nick Desaulniers
  0 siblings, 1 reply; 23+ messages in thread
From: Mathieu Acher @ 2020-06-23  7:28 UTC (permalink / raw)
  To: Nick Desaulniers, kernelci

Hi Nick, 

Thanks for your interest. 
We didn't target and gather data about Clang, but it was only a technical limitation at that time. 
Right now, it seems possible to build kernel configurations with Clang (thanks to kernelci tool chain) and we are very interested to invest some resources/time here. 

Indeed, we could differentiate GCC and Clang build and see what's going on. 
We can also pinpoint combinations of options that lead to failures: it can be useful to indicate the root cause of the issues and investigate whether it's specific to Clang. 

I'm available to have a chat


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: kci_build proposal
  2020-06-23  7:28                   ` Mathieu Acher
@ 2020-06-23 23:48                     ` Nick Desaulniers
  0 siblings, 0 replies; 23+ messages in thread
From: Nick Desaulniers @ 2020-06-23 23:48 UTC (permalink / raw)
  To: Mathieu Acher; +Cc: clang-built-linux

bcc: kernelci
cc: clangbuiltlinux

On Tue, Jun 23, 2020 at 12:28 AM Mathieu Acher <mathieu.acher@irisa.fr> wrote:
>
> Hi Nick,
>
> Thanks for your interest.
> We didn't target and gather data about Clang, but it was only a technical limitation at that time.
> Right now, it seems possible to build kernel configurations with Clang (thanks to kernelci tool chain) and we are very interested to invest some resources/time here.
>
> Indeed, we could differentiate GCC and Clang build and see what's going on.
> We can also pinpoint combinations of options that lead to failures: it can be useful to indicate the root cause of the issues and investigate whether it's specific to Clang.
>
> I'm available to have a chat
>

Cool, we have a bi-weekly (every other week) public meeting:
https://calendar.google.com/calendar/embed?src=google.com_bbf8m6m4n8nq5p2bfjpele0n5s%40group.calendar.google.com
IDK if that works for you, but I think if you gave a 10 minute demo
that would be neat, then we could discuss more?  Otherwise happy to
stick to email, too?
-- 
Thanks,
~Nick Desaulniers

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2020-06-23 23:48 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-20 16:36 kci_build proposal Dan Rue
2020-04-21 15:46 ` Guillaume Tucker
2020-04-21 15:53   ` Mark Brown
2020-04-21 17:32     ` Dan Rue
2020-04-21 17:28   ` Dan Rue
2020-05-22 18:22     ` Kevin Hilman
2020-05-27 19:58       ` Dan Rue
2020-05-28  6:43         ` Guillaume Tucker
2020-05-28 17:28           ` Dan Rue
2020-05-28 21:03             ` Guillaume Tucker
2020-05-29 15:53               ` Dan Rue
2020-06-15 13:22                 ` "Audience"/"Guest" can join for this meeting running on 23rd? koti koti
2020-06-16 16:16                   ` Mark Brown
2020-06-17  1:49                     ` koti koti
2020-06-17 10:31                       ` Mark Brown
2020-06-17 10:55                         ` koti koti
2020-05-28 23:31             ` kci_build proposal Kevin Hilman
2020-05-29  7:42               ` Mathieu Acher
2020-05-29 10:44                 ` Mark Brown
2020-05-29 14:27                 ` Guillaume Tucker
2020-06-16 16:33                 ` Nick Desaulniers
2020-06-23  7:28                   ` Mathieu Acher
2020-06-23 23:48                     ` Nick Desaulniers

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.