kernelci.lists.linux.dev archive mirror
* Re: Linking CIP testing with KernelCI
       [not found] <OSAPR01MB2385E0367B8CA545E1D66422B7890@OSAPR01MB2385.jpnprd01.prod.outlook.com>
@ 2020-06-11 21:11 ` Guillaume Tucker
  2020-06-12  7:54   ` santiago.esteban
  2020-06-12 11:03   ` Chris Paterson
  0 siblings, 2 replies; 5+ messages in thread
From: Guillaume Tucker @ 2020-06-11 21:11 UTC (permalink / raw)
  To: Chris Paterson; +Cc: Kevin Hilman, kernelci

[-- Attachment #1: Type: text/plain, Size: 5811 bytes --]

+ kernelci@groups.io as discussed

Hi Chris,

Thanks for starting this discussion, connecting the CIP test
system with KernelCI sounds like a perfect example of how the
project can grow.

On Thu, Jun 4, 2020 at 11:15 AM Chris Paterson <Chris.Paterson2@renesas.com>
wrote:

> Hello Guillaume, Kevin,
>
> I'd really like to get CIP linked into KernelCI somehow before ELC-NA in a
> few weeks.
> Are you both available for a quick chat this week/next to determine what
> kind of integration would be suitable, and how we can go about achieving it?
>

Sure, let's try to find a slot next week.


> Some ramblings below to start the discussion...
>
> # Background on the current CIP CI setup
> - We have a GitLab mirror of our kernel.org kernel repo
> - Our branches contain a simple .gitlab-ci.yml file that links to our
> build/test GitLab CI pipeline repository
> - Kernel builds are kicked off using the various arches/configs supported
> by CIP
> - Test jobs are then submitted to our LAVA master
> - We currently run simple boot tests, spectre/meltdown checker,
> cyclictest+hackbench (RT configs only) and LTP (only run on CIP release
> candidates)
> - If the builds and tests finish okay then GitLab shows a green tick, if
> there is some kind of build error, or test infrastructure error then GitLab
> shows a red cross
> - If individual test cases fail, or there is a test regression, currently
> there is no way of knowing in the Gitlab UI. This is why I want to start
> using KernelCI
> - We follow a similar process for Greg's stable-rc 4.4 and 4.19 branches,
> but don't run LTP
>
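[To illustrate the setup described above: a branch-level .gitlab-ci.yml of this kind can be little more than an include of the shared pipeline project. This is a hypothetical sketch; the project path, ref and file name are invented, not CIP's actual ones.]

```yaml
# Hypothetical sketch only: a minimal .gitlab-ci.yml kept on each kernel
# branch, deferring the real pipeline to a shared repository.
# The project path, ref and file name are assumptions.
include:
  - project: 'cip-project/linux-cip-pipeline'
    ref: 'master'
    file: '.gitlab-ci.yml'
```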

This sounds very similar to the experiment Kevin did with his own
tree and Gitlab CI, except he's using the command line tools from
kernelci-core.


> # Options for KernelCI integration (?)
> 1) Keep our current build/test pipeline the same, just add a notify
> section to the LAVA jobs to send the results to kernelci.org
> - We'd need to include the correct metadata etc.
>

I think we might be able to have commands that generate just the
meta-data for a given kernel build without actually doing a
build.  That would allow you to keep your current build system
and also stay in sync with the kernelci meta-data definitions.
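[For what a "notify section" could look like in practice: LAVA job definitions support an HTTP callback on job completion, which is how results can be pushed to a KernelCI backend. The sketch below is an assumption for illustration only; the exact callback URL, lab name and token description would need to match the real instance.]

```yaml
# Hypothetical sketch of a LAVA notify block: when the job finishes,
# LAVA POSTs the result set to a KernelCI callback endpoint.
# The URL, lab name and token description are illustrative assumptions.
notify:
  criteria:
    status: finished
  callback:
    url: https://api.kernelci.org/callback/lava/test?lab_name=lab-cip
    method: POST
    dataset: all
    token: kernelci-callback-token
    content-type: json
```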


> - Would this work if there is no corresponding 'build job'?
>

I don't think anyone is doing this right now but in any case it's
something we need to support, for example with static analysis
and other tests that don't require any build.  The only thing
that should be really required is the kernel revision (tree,
branch, sha1, git describe).  So, "yes" this needs to work.
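[In other words, the minimum a result submission would need to carry is the revision block; sketched here with illustrative field names, not the exact KernelCI schema.]

```yaml
# Illustrative only: the kernel revision meta-data listed above, with
# assumed field names rather than the exact KernelCI schema.
revision:
  tree: cip
  branch: linux-4.19.y-cip
  commit: <full sha1 of the tested commit>
  describe: <output of 'git describe' for that commit>
```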


> - Would you allow test results from the CIP SLTS Kernels on kernelci.org?
> Or would we need to set up our own instance?
>

Ideally, only upstream-oriented branches should be built and
tested on kernelci.org to keep it in line with the project's
mission and upstream kernel development.  If the CIP branches
consist only of backported patches from mainline, maybe it could
be accepted in the same way that we have stable branches.  If
however there are patches that are not intended to get merged
upstream and are there for specific products, then in my opinion
that would not be acceptable - see below for an alternative as
you suggested.


> 1b) Same as 1) but using either cip.kernelci.org or
> kernelci.ciplatform.org


Yes, having a separate instance would enable a separate
configuration, separate database, special features etc... so no
possible interference with the main kernelci.org
activities.  Having it as cip.kernelci.org means it would
probably be managed by the KernelCI LF project, I guess that
would make sense for a member.  But then if you host and manage
your own instance on kernelci.ciplatform.org you're of course
free to do anything you want with it.


> 2) kernelci.org starts monitoring/building/testing the CIP SLTS Kernels
> - Tests would run through our LAVA master
> - Perhaps kernelci.org could make use of our build infrastructure?
> 2b) Same as 2) but using either cip.kernelci.org or
> kernelci.ciplatform.org
>

The question of kernelci.org vs another instance remains the
same, in my opinion it all hinges on what is on the branches.

Doing this would give you automated bisection since the KernelCI
pipeline would be able to run arbitrary jobs in the LAVA lab.
Using your own build infrastructure would probably be another
argument in favour of a separate instance (e.g. cip.kernelci.org)
as this would be a custom use-case, unless you're happy to have
any kernelci.org kernel builds run on your infrastructure.

> 3) kernelci.org start submitting their usual test jobs to the CIP LAVA
> master
> - CIP currently have 11 different device-types (20 boards in total) which
> should help expand what is available to KernelCI
>

Maybe we could do that anyway, with filters to limit the upstream
branches and test suites to run if you want, and also work out a
solution for the CIP SLTS branches?


> My preferred option would be 1) as it would be nice not to lose all the
> work spent on the GitLab CI setup
> (or perhaps integrating the GitLab CI approach into KernelCI's setup is a
> worthwhile task, but it'll take time)
>

Yes, I think Option 1 would be good given the timeframe you've
set.  Also it's going to be a recurring use-case, KernelCI is all
about being flexible enough to integrate existing CI systems with
no real modifications to them.  We can try things out with
staging.kernelci.org initially.  Option 2 could be done as a
follow-up if you think the added features justify the extra
efforts.

And Option 3 seems really straightforward to me, any LAVA lab can
be added to kernelci.org with an API token and a bit of YAML
configuration.  That would definitely be a quick win if you want
to be sure to have something to show at ELC-NA :)
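[As a rough idea of the "bit of YAML configuration" involved, a lab entry in the kernelci-core configuration looks broadly like the sketch below; the lab name, URL and filter values here are invented for illustration.]

```yaml
# Assumed sketch of a lab entry, loosely modelled on kernelci-core's
# lab configuration; the lab name, URL and filters are illustrative.
labs:
  lab-cip:
    lab_type: lava
    url: 'https://lava.ciplatform.org/RPC2/'
    filters:
      - passlist:
          tree:
            - mainline
            - stable
```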

Best wishes,
Guillaume


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Linking CIP testing with KernelCI
  2020-06-11 21:11 ` Linking CIP testing with KernelCI Guillaume Tucker
@ 2020-06-12  7:54   ` santiago.esteban
  2020-06-12 11:03   ` Chris Paterson
  1 sibling, 0 replies; 5+ messages in thread
From: santiago.esteban @ 2020-06-12  7:54 UTC (permalink / raw)
  To: kernelci, guillaume.tucker, Chris.Paterson2; +Cc: khilman

Dear Guillaume, Chris, Kevin,

I apologize for the blunt intrusion, but could I join your discussions as a listener? I'm very interested in them.

At some point in the future I would like, if possible, to connect Microchip's CI system with kernelci.org. We are building a board farm for testing Linux, using the kernelci-core command-line tools together with Jenkins and Labgrid. Currently we can only perform basic boot tests on 4 boards, but we are working on extending both the board set and the test set. This corresponds to something close to "Option 1" mentioned in your discussion.

Best regards,

Santiago Esteban



* Re: Linking CIP testing with KernelCI
  2020-06-11 21:11 ` Linking CIP testing with KernelCI Guillaume Tucker
  2020-06-12  7:54   ` santiago.esteban
@ 2020-06-12 11:03   ` Chris Paterson
  2021-06-24 19:31     ` Guillaume Tucker
  1 sibling, 1 reply; 5+ messages in thread
From: Chris Paterson @ 2020-06-12 11:03 UTC (permalink / raw)
  To: Guillaume Tucker; +Cc: Kevin Hilman, kernelci

Hello Guillaume,

Thank you for the feedback.
I’ve added some comments below (sorry, email didn’t come through in plain text…)

Kind regards, Chris

From: Guillaume Tucker <guillaume.tucker@gmail.com>
Sent: 11 June 2020 22:12
To: Chris Paterson <Chris.Paterson2@renesas.com>
Cc: Kevin Hilman <khilman@baylibre.com>; kernelci@groups.io
Subject: Re: Linking CIP testing with KernelCI

+ kernelci@groups.io as discussed

Hi Chris,

Thanks for starting this discussion, connecting the CIP test
system with KernelCI sounds like a perfect example of how the
project can grow.

On Thu, Jun 4, 2020 at 11:15 AM Chris Paterson <Chris.Paterson2@renesas.com> wrote:
Hello Guillaume, Kevin,

I'd really like to get CIP linked into KernelCI somehow before ELC-NA in a few weeks.
Are you both available for a quick chat this week/next to determine what kind of integration would be suitable, and how we can go about achieving it?

Sure, let's try to find a slot next week.

Some ramblings below to start the discussion...

# Background on the current CIP CI setup
- We have a GitLab mirror of our kernel.org kernel repo
- Our branches contain a simple .gitlab-ci.yml file that links to our build/test GitLab CI pipeline repository
- Kernel builds are kicked off using the various arches/configs supported by CIP
- Test jobs are then submitted to our LAVA master
- We currently run simple boot tests, spectre/meltdown checker, cyclictest+hackbench (RT configs only) and LTP (only run on CIP release candidates)
- If the builds and tests finish okay then GitLab shows a green tick, if there is some kind of build error, or test infrastructure error then GitLab shows a red cross
- If individual test cases fail, or there is a test regression, currently there is no way of knowing in the Gitlab UI. This is why I want to start using KernelCI
- We follow a similar process for Greg's stable-rc 4.4 and 4.19 branches, but don't run LTP

This sounds very similar to the experiment Kevin did with his own
tree and Gitlab CI, except he's using the command line tools from
kernelci-core.

# Options for KernelCI integration (?)
1) Keep our current build/test pipeline the same, just add a notify section to the LAVA jobs to send the results to kernelci.org
- We'd need to include the correct metadata etc.

I think we might be able to have commands that generate just the
meta-data for a given kernel build without actually doing a
build.  That would allow you to keep your current build system
and also stay in sync with the kernelci meta-data definitions.

CPA: Sounds sensible.

- Would this work if there is no corresponding 'build job'?

I don't think anyone is doing this right now but in any case it's
something we need to support, for example with static analysis
and other tests that don't require any build.  The only thing
that should be really required is the kernel revision (tree,
branch, sha1, git describe).  So, "yes" this needs to work.

CPA: Presumably this information could be provided as part of the LAVA job notification?

- Would you allow test results from the CIP SLTS Kernels on kernelci.org? Or would we need to set up our own instance?

Ideally, only upstream-oriented branches should be built and
tested on kernelci.org to keep it in line with the project's
mission and upstream kernel development.  If the CIP branches
consist only of backported patches from mainline, maybe it could
be accepted in the same way that we have stable branches.  If
however there are patches that are not intended to get merged
upstream and are there for specific products, then in my opinion
that would not be acceptable - see below for an alternative as
you suggested.

CPA: CIP follow an upstream-first policy, so almost every patch comes from mainline, mostly via the stable LTS releases.
CPA: The SLTS Kernels aren’t focused on a particular product; they’re meant to be part of a base layer for end users.
CPA: They consist of LTS + some extra board support that has been backported from mainline. Sometimes a novel patch is required when adding the extra board support, as v5.7 is a lot different to v4.4.
CPA: When LTS support from Greg ends on the Kernels CIP are using (currently v4.4 & v4.19), CIP aims to pick up a very similar role.


1b) Same as 1) but using either cip.kernelci.org or kernelci.ciplatform.org

Yes, having a separate instance would enable a separate
configuration, separate database, special features etc... so no
possible interference with the main kernelci.org
activities.  Having it as cip.kernelci.org means it would
probably be managed by the KernelCI LF project, I guess that
would make sense for a member.  But then if you host and manage
your own instance on kernelci.ciplatform.org you're of course
free to do anything you want with it.

CPA: I agree that it may make most sense to have a separate frontend for CIP’s test efforts – this will avoid confusion, especially as many would consider CIP’s Kernels a ‘downstream’ product.
CPA: cip.kernelci.org, (partly) managed by the KernelCI project, would show a clear benefit of joining the KernelCI project, perhaps helping to attract new members.


2) kernelci.org starts monitoring/building/testing the CIP SLTS Kernels
- Tests would run through our LAVA master
- Perhaps kernelci.org could make use of our build infrastructure?
2b) Same as 2) but using either cip.kernelci.org or kernelci.ciplatform.org

The question of kernelci.org vs another instance remains the
same, in my opinion it all hinges on what is on the branches.

Doing this would give you automated bisection since the KernelCI
pipeline would be able to run arbitrary jobs in the LAVA lab.
Using your own build infrastructure would probably be another
argument in favour of a separate instance (e.g. cip.kernelci.org)
as this would be a custom use-case, unless you're happy to have
any kernelci.org kernel builds run on your infrastructure.

CPA: This would be something to explore at a later point. We’d have to evaluate the cost impact etc.

3) kernelci.org start submitting their usual test jobs to the CIP LAVA master
- CIP currently have 11 different device-types (20 boards in total) which should help expand what is available to KernelCI

Maybe we could do that anyway, with filters to limit the upstream
branches and test suites to run if you want, and also work out a
solution for the CIP SLTS branches?

CPA: This would be easy to set up and I’m all for it, seeing as we’re members of the project 😊
CPA: Our current testing load isn’t huge.

My preferred option would be 1) as it would be nice not to lose all the work spent on the GitLab CI setup
(or perhaps integrating the GitLab CI approach into KernelCI's setup is a worthwhile task, but it'll take time)

Yes I think Option 1 would be good given the timeframe you've
set.  Also it's going to be a recurring use-case, KernelCI is all
about being flexible enough to integrate existing CI systems with
no real modifications to them.  We can try things out with
staging.kernelci.org initially.  Option 2 could be done as a
follow-up if you think the added features justify the extra
efforts.

And Option 3 seems really straightforward to me, any LAVA lab can
be added to kernelci.org with an API token and a bit of YAML
configuration.  That would definitely be a quick win if you want
to be sure to have something to show at ELC-NA :)

CPA: Fantastic. Thanks Guillaume.

Best wishes,
Guillaume


* Re: Linking CIP testing with KernelCI
  2020-06-12 11:03   ` Chris Paterson
@ 2021-06-24 19:31     ` Guillaume Tucker
  2021-06-28 12:06       ` Chris Paterson
  0 siblings, 1 reply; 5+ messages in thread
From: Guillaume Tucker @ 2021-06-24 19:31 UTC (permalink / raw)
  To: chris.paterson2, Alice Ferrazzi; +Cc: Kevin Hilman, kernelci

+Alice

It's been one year already since the last reply on this thread!

On 12/06/2020 12:03, Chris Paterson wrote:
> Hello Guillaume,
>
> Thank you for the feedback.
> I’ve added some comments below (sorry, email didn’t come through in plain text…)
>
> Kind regards, Chris
>
>> From: Guillaume Tucker <guillaume.tucker@gmail.com>
>> Sent: 11 June 2020 22:12
>> To: Chris Paterson <Chris.Paterson2@renesas.com>
>> Cc: Kevin Hilman <khilman@baylibre.com>; kernelci@groups.io
>> Subject: Re: Linking CIP testing with KernelCI
>>
>> + kernelci@groups.io as discussed
>>
>> Hi Chris,
>>
>> Thanks for starting this discussion, connecting the CIP test
>> system with KernelCI sounds like a perfect example of how the
>> project can grow.
>>
>> On Thu, Jun 4, 2020 at 11:15 AM Chris Paterson <Chris.Paterson2@renesas.com> wrote:
>>> Hello Guillaume, Kevin,
>>>
>>> I'd really like to get CIP linked into KernelCI somehow before ELC-NA in a few weeks.
>>> Are you both available for a quick chat this week/next to determine what kind of integration would be suitable, and how we can go about achieving it?
>>
>> Sure, let's try to find a slot next week.

This did happen, we enabled some CIP branches on the main
KernelCI instance:

  https://linux.kernelci.org/job/cip/

It's running regular KernelCI tests, just with these CIP branches
which are like "special" LTS branches.

>>> Some ramblings below to start the discussion...
>>>
>>> # Background on the current CIP CI setup
>>> - We have a GitLab mirror of our kernel.org kernel repo
>>> - Our branches contain a simple .gitlab-ci.yml file that links to our build/test GitLab CI pipeline repository
>>> - Kernel builds are kicked off using the various arches/configs supported by CIP
>>> - Test jobs are then submitted to our LAVA master
>>> - We currently run simple boot tests, spectre/meltdown checker, cyclictest+hackbench (RT configs only) and LTP (only run on CIP release candidates)
>>> - If the builds and tests finish okay then GitLab shows a green tick, if there is some kind of build error, or test infrastructure error then GitLab shows a red cross
>>> - If individual test cases fail, or there is a test regression, currently there is no way of knowing in the Gitlab UI. This is why I want to start using KernelCI
>>> - We follow a similar process for Greg's stable-rc 4.4 and 4.19 branches, but don't run LTP
>>
>> This sounds very similar to the experiment Kevin did with his own
>> tree and Gitlab CI, except he's using the command line tools from
>> kernelci-core.
>>
>>> # Options for KernelCI integration (?)
>>> 1) Keep our current build/test pipeline the same, just add a notify section to the LAVA jobs to send the results to kernelci.org
>>> - We'd need to include the correct metadata etc.
>>
>> I think we might be able to have commands that generate just the
>> meta-data for a given kernel build without actually doing a
>> build.  That would allow you to keep your current build system
>> and also stay in sync with the kernelci meta-data definitions.
>
> CPA: Sounds sensible.

What I've heard this time is that maybe CIP would like to
completely adopt KernelCI's build tools.  Is that right?

If that was the case, it would of course be easier to integrate
CIP tests with KernelCI.  In the same way that we're starting to
have specific kernel builds on chromeos.kernelci.org using the
same Kubernetes build infrastructure as the main instance, we
could do that for CIP kernels with particular defconfigs or
toolchains.
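[For context, enabling a tree in the KernelCI build system is itself mostly YAML. The sketch below is loosely modelled on kernelci-core's build configuration; the entry names and branch are illustrative assumptions, though linux-cip.git is the public CIP repository.]

```yaml
# Assumed sketch of a build configuration entry for a CIP branch,
# loosely modelled on kernelci-core's build-configs.yaml.
trees:
  cip:
    url: "https://git.kernel.org/pub/scm/linux/kernel/git/cip/linux-cip.git"

build_configs:
  cip_4.19:
    tree: cip
    branch: 'linux-4.19.y-cip'
```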

>>> - Would this work if there is no corresponding 'build job'?
>>
>> I don't think anyone is doing this right now but in any case it's
>> something we need to support, for example with static analysis
>> and other tests that don't require any build.  The only thing
>> that should be really required is the kernel revision (tree,
>> branch, sha1, git describe).  So, "yes" this needs to work.
>
> CPA: Presumably this information could be provided as part of the LAVA job notification?
>
>>> - Would you allow test results from the CIP SLTS Kernels on kernelci.org? Or would we need to set up our own instance?
>>
>> Ideally, only upstream-oriented branches should be built and
>> tested on kernelci.org to keep it in line with the project's
>> mission and upstream kernel development.  If the CIP branches
>> consist only of backported patches from mainline, maybe it could
>> be accepted in the same way that we have stable branches.  If
>> however there are patches that are not intended to get merged
>> upstream and are there for specific products, then in my opinion
>> that would not be acceptable - see below for an alternative as
>> you suggested.
>>
> CPA: CIP follow an upstream first policy, so almost every patch comes from mainline, mostly via the stable LTS releases.
> CPA: The SLTS Kernels aren’t focused at a particular product. It’s meant to be part of a base layer for end users.
> CPA: They consist of LTS + some extra board support that has been backported from mainline. Sometimes a novel patch is required when adding the extra board support, as v5.7 is a lot different to v4.4.
> CPA: When LTS support from Greg ends on the Kernels CIP are using (currently v4.4 & v4.19) CIP aims to pick up a very similar role.

As mentioned earlier, this has now been enabled as the branches
are considered close enough to upstream.

>>> 1b) Same as 1) but using either cip.kernelci.org or kernelci.ciplatform.org
>>
>> Yes, having a separate instance would enable a separate
>> configuration, separate database, special features etc... so no
>> possible interference with the main kernelci.org
>> activities.  Having it as cip.kernelci.org means it would
>> probably be managed by the KernelCI LF project, I guess that
>> would make sense for a member.  But then if you host and manage
>> your own instance on kernelci.ciplatform.org you're of course
>> free to do anything you want with it.
>
> CPA: I agree that it may make most sense to have a separate frontend for CIP’s test efforts – this will avoid confusion. Especially as many would consider CIP’s Kernels a ‘downstream’ product.
> CPA: cip.kernelci.org, (partly)managed by the KernelCI project would should a good benefit of joining the KernelCI project, perhaps helping attract new members.

Do you still agree it would be worthwhile having a separate
frontend on cip.kernelci.org with extra tests and special builds
that aren't on linux.kernelci.org?

>>> 2) kernelci.org starts monitoring/building/testing the CIP SLTS Kernels
>>> - Tests would run through our LAVA master
>>> - Perhaps kernelci.org could make use of our build infrastructure?
>>> 2b) Same as 2) but using either cip.kernelci.org or kernelci.ciplatform.org
>>
>> The question of kernelci.org vs another instance remains the
>> same, in my opinion it all hinges on what is on the branches.
>>
>> Doing this would give you automated bisection since the KernelCI
>> pipeline would be able to run arbitrary jobs in the LAVA lab.
>> Using your own build infrastructure would probably be another
>> argument in favour of a separate instance (e.g. cip.kernelci.org)
>> as this would be a custom use-case, unless you're happy to have
>> any kernelci.org kernel builds run on your infrastructure.
>
> CPA: This would be something to explore at a later point. We’d have to evaluate the cost impact etc.
 
What are your thoughts on this now?  Would you want to keep using
your own build tools, or use kci_build but with your build infra,
or just rely on the KernelCI Kubernetes build infra as mentioned
earlier?

>>> 3) kernelci.org start submitting their usual test jobs to the CIP LAVA master
>>> - CIP currently have 11 different device-types (20 boards in total) which should help expand what is available to KernelCI
>>
>> Maybe we could do that anyway, with filters to limit the upstream
>> branches and test suites to run if you want, and also work out a
>> solution for the CIP SLTS branches?
>>
> CPA: This would be easy to set up and I’m all for it, seeing as we’re members of the project
> CPA: Our current testing load isn’t huge.

That's already done, for regular KernelCI tests.

>>> My preferred option would be 1) as it would be nice not to lose all the work spent on the GitLab CI setup
>>> (or perhaps integrating the GitLab CI approach into KernelCI's setup is a worthwhile task, but it'll take time)
>>
>> Yes I think Option 1 would be good given the timeframe you've
>> set.  Also it's going to be a recurring use-case, KernelCI is all
>> about being flexible enough to integrate existing CI systems with
>> no real modifications to them.  We can try things out with
>> staging.kernelci.org initially.  Option 2 could be done as a
>> follow-up if you think the added features justify the extra
>> efforts.
>>
>> And Option 3 seems really straightforward to me, any LAVA lab can
>> be added to kernelci.org with an API token and a bit of YAML
>> configuration.  That would definitely be a quick win if you want
>> to be sure to have something to show at ELC-NA :)
>
> CPA: Fantastic. Thanks Guillaume.

I guess all these aspects are still relevant, except the ones
that have already been implemented.  Basically, has CIP's point
of view changed since last year with regards to your existing
Gitlab CI system?

It would probably be good to go through this again in next
Tuesday's meeting if Chris can make it.  Otherwise we can
schedule a specific meeting at another time.

Best wishes,
Guillaume


* Re: Linking CIP testing with KernelCI
  2021-06-24 19:31     ` Guillaume Tucker
@ 2021-06-28 12:06       ` Chris Paterson
  0 siblings, 0 replies; 5+ messages in thread
From: Chris Paterson @ 2021-06-28 12:06 UTC (permalink / raw)
  To: Guillaume Tucker, Alice Ferrazzi; +Cc: Kevin Hilman, kernelci, masashi.kudo

Hello Guillaume,

> From: Guillaume Tucker <guillaume.tucker@collabora.com>
> Sent: 24 June 2021 20:31
> 
> +Alice
> 
> It's been one year already since the last reply on this thread!

Scary!
Thank you for getting the conversation moving again.

> 
> On 12/06/2020 12:03, Chris Paterson wrote:
> > Hello Guillaume,
> >
> > Thank you for the feedback.
> > I’ve added some comments below (sorry, email didn’t come through in
> > plain text…)
> >
> > Kind regards, Chris
> >
> >> From: Guillaume Tucker <guillaume.tucker@gmail.com>
> >> Sent: 11 June 2020 22:12
> >> To: Chris Paterson <Chris.Paterson2@renesas.com>
> >> Cc: Kevin Hilman <khilman@baylibre.com>; kernelci@groups.io
> >> Subject: Re: Linking CIP testing with KernelCI
> >>
> >> + kernelci@groups.io<mailto:kernelci@groups.io> as discussed
> >>
> >> Hi Chris,
> >>
> >> Thanks for starting this discussion, connecting the CIP test
> >> system with KernelCI sounds like a perfect example of how the
> >> project can grow.
> >>
> >> On Thu, Jun 4, 2020 at 11:15 AM Chris Paterson
> <Chris.Paterson2@renesas.com<mailto:Chris.Paterson2@renesas.com>
> wrote:
> >>> Hello Guillaume, Kevin,
> >>>
> >>> I'd really like to get CIP linked into KernelCI somehow before ELC-NA in a
> few weeks.
> >>> Are you both available for a quick chat this week/next to determine
> what kind of integration would be suitable, and how we can go about
> achieving it?
> >>
> >> Sure, let's try to find a slot next week.
> 
> This did happen, we enabled some CIP branches on the main
> KernelCI instance:
> 
> 
> https://linux.kernelci.org/job/cip/
> 
> It's running regular KernelCI tests, just with these CIP branches
> which are like "special" LTS branches.

And seems to be working well, thank you.

> 
> >>> Some ramblings below to start the discussion...
> >>>
> >>> # Background on the current CIP CI setup
> >>> - We have a GitLab mirror of our kernel.org kernel repo
> >>> - Our branches contain a simple .gitlab-ci.yml file that links to our
> build/test GitLab CI pipeline repository
> >>> - Kernel builds are kicked off using the various arches/configs supported
> by CIP
> >>> - Test jobs are then submitted to our LAVA master
> >>> - We currently run simple boot tests, spectre/meltdown checker,
> cyclictest+hackbench (RT configs only) and LTP (only run on CIP release
> candidates)
> >>> - If the builds and tests finish okay then GitLab shows a green tick, if
> there is some kind of build error, or test infrastructure error then GitLab
> shows a red cross
> >>> - If individual test cases fail, or there is a test regression, currently there
> is no way of knowing in the Gitlab UI. This is why I want to start using
> KernelCI
> >>> - We follow a similar process for Greg's stable-rc 4.4 and 4.19 branches,
> but don't run LTP
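
For illustration, the `.gitlab-ci.yml` stub described above could simply pull a shared pipeline definition in from a central repository; the project path and file name below are hypothetical, not CIP's actual ones:

```yaml
# Hypothetical sketch: each kernel branch carries only this stub and
# includes the shared build/test pipeline from a central repository.
include:
  - project: 'cip-project/cip-testing/linux-cip-pipeline'
    file: '/trigger.yml'
```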
> >>
> >> This sounds very similar to the experiment Kevin did with his own
> >> tree and Gitlab CI, except he's using the command line tools from
> >> kernelci-core.
> >>
> >>> # Options for KernelCI integration (?)
> >>> 1) Keep our current build/test pipeline the same, just add a notify
> section to the LAVA jobs to send the results to kernelci.org
> >>> - We'd need to include the correct metadata etc.
> >>
> >> I think we might be able to have commands that generate just the
> >> meta-data for a given kernel build without actually doing a
> >> build.  That would allow you to keep your current build system
> >> and also stay in sync with the kernelci meta-data definitions.
> >
> > CPA: Sounds sensible.
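
As a sketch of option 1, a LAVA job definition can carry a `notify` callback that posts the results, together with the kernel metadata, back to a KernelCI backend. The URL, token name and metadata fields below are assumptions for illustration, not the exact schema:

```yaml
# Illustrative LAVA notify callback; field names and the callback URL
# are assumptions and should be checked against the kernelci-backend
# documentation for the exact schema it expects.
metadata:
  kernel.tree: cip
  kernel.branch: linux-4.19.y-cip
  kernel.describe: v4.19.127-cip27
notify:
  criteria:
    status: finished
  callback:
    url: https://api.kernelci.example.org/callback/lava/test?lab_name=lab-cip
    method: POST
    token: kernelci-callback-token
    content-type: json
    dataset: all
```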
> 
> What I've heard this time is that maybe CIP would like to
> completely adopt KernelCI's build tools.  Is that right?

Maybe. We're still working out what the best approach may be.

> 
> If that was the case, it would of course be easier to integrate
> CIP tests with KernelCI.  In the same way that we're starting to
> have specific kernel builds on chromeos.kernelci.org using the
> same Kubernetes build infrastructure as the main instance, we
> could do that for CIP kernels with particular defconfigs or
> toolchains.

This would probably be the "easiest" approach, but I guess we don't want to use KernelCI's build resources for our project.
This is the main point for me: if we were to use all of KernelCI's tooling, should we add GitLab CI integration (to use CIP's GitLab runners), or donate money/EC2 instances to KernelCI?

> 
> >>> - Would this work if there is no corresponding 'build job'?
> >>
> >> I don't think anyone is doing this right now but in any case it's
> >> something we need to support, for example with static analysis
> >> and other tests that don't require any build.  The only thing
> >> that should be really required is the kernel revision (tree,
> >> branch, sha1, git describe).  So, "yes" this needs to work.
> >
> > CPA: Presumably this information could be provided as part of the LAVA
> job notification?
> >
> >>> - Would you allow test results from the CIP SLTS Kernels on kernelci.org?
> Or would we need to set up our own instance?
> >>
> >> Ideally, only upstream-oriented branches should be built and
> >> tested on kernelci.org to keep it in line with the project's
> >> mission and upstream kernel development.  If the CIP branches
> >> consist only of backported patches from mainline, maybe it could
> >> be accepted in the same way that we have stable branches.  If
> >> however there are patches that are not intended to get merged
> >> upstream and are there for specific products, then in my opinion
> >> that would not be acceptable - see below for an alternative as
> >> you suggested.
> >>
> > CPA: CIP follow an upstream first policy, so almost every patch comes from
> mainline, mostly via the stable LTS releases.
> > CPA: The SLTS Kernels aren’t focused at a particular product. It’s meant to
> be part of a base layer for end users.
> > CPA: They consist of LTS + some extra board support that has been
> backported from mainline. Sometimes a novel patch is required when adding
> the extra board support, as v5.7 is a lot different to v4.4.
> > CPA: When LTS support from Greg ends on the Kernels CIP are using
> (currently v4.4 & v4.19) CIP aims to pick up a very similar role.
> 
> As mentioned earlier, this has now been enabled as the branches
> are considered close enough to upstream.
> 
> >>> 1b) Same as 1) but using either cip.kernelci.org or kernelci.ciplatform.org
> >>
> >> Yes, having a separate instance would enable a separate
> >> configuration, separate database, special features etc... so no
> >> possible interference with the main kernelci.org
> >> activities.  Having it as cip.kernelci.org means it would
> >> probably be managed by the KernelCI LF project, I guess that
> >> would make sense for a member.  But then if you host and manage
> >> your own instance on kernelci.ciplatform.org you're of course
> >> free to do anything you want with it.
> >
> > CPA: I agree that it may make most sense to have a separate frontend for
> CIP’s test efforts – this will avoid confusion. Especially as many would
> consider CIP’s Kernels a ‘downstream’ product.
> > CPA: cip.kernelci.org, (partly) managed by the KernelCI project, would
> show a good benefit of joining the KernelCI project, perhaps helping attract
> new members.
> 
> Do you still agree it would be worthwhile having a separate
> frontend on cip.kernelci.org with extra tests and special builds
> that aren't on linux.kernelci.org?

Yes. With our testing we'd want to use our own filesystem, and start adding in more userspace testing.
Whilst we're happy to upstream such support, there may be some functionality that doesn't fit in with KernelCI's scope.

So running a fork on a different subdomain may be a sensible approach.

> 
> >>> 2) kernelci.org starts monitoring/building/testing the CIP SLTS Kernels
> >>> - Tests would run through our LAVA master
> >>> - Perhaps kernelci.org could make use of our build infrastructure?
> >>> 2b) Same as 2) but using either cip.kernelci.org or kernelci.ciplatform.org
> >>
> >> The question of kernelci.org vs another instance remains the
> >> same, in my opinion it all hinges on what is on the branches.
> >>
> >> Doing this would give you automated bisection since the KernelCI
> >> pipeline would be able to run arbitrary jobs in the LAVA lab.
> >> Using your own build infrastructure would probably be another
> >> argument in favour of a separate instance (e.g. cip.kernelci.org)
> >> as this would be a custom use-case, unless you're happy to have
> >> any kernelci.org kernel builds run on your infrastructure.
> >
> > CPA: This would be something to explore at a later point. We’d have to
> evaluate the cost impact etc.
> 
> What are your thoughts on this now?  Would you want to keep using
> your own build tools, or use kci_build but with your build infra,
> or just rely on the KernelCI Kubernetes build infra as mentioned
> earlier?
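
For reference, the kci_build route mentioned here would look roughly like the following when run on CIP's own builders. The sub-commands and flags are from memory of kernelci-core's legacy CLI and the tree/defconfig values are examples only, so everything should be verified against the checked-out kernelci-core version:

```sh
# Rough sketch of driving KernelCI's build tooling from existing
# build infrastructure; commands and flags are illustrative.
./kci_build build_kernel --kdir=linux --arch=arm64 \
    --build-env=gcc-8 --defconfig=defconfig
./kci_build install_kernel --kdir=linux --tree-name=cip \
    --branch=linux-4.19.y-cip
```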
> 
> >>> 3) kernelci.org start submitting their usual test jobs to the CIP LAVA
> master
> >>> - CIP currently have 11 different device-types (20 boards in total) which
> should help expand what is available to KernelCI
> >>
> >> Maybe we could do that anyway, with filters to limit the upstream
> >> branches and test suites to run if you want, and also work out a
> >> solution for the CIP SLTS branches?
> >>
> > CPA: This would be easy to set up and I’m all for it, seeing as we’re
> members of the project
> > CPA: Our current testing load isn’t huge.
> 
> That's already done, for regular KernelCI tests.

Let me know if you want to expand the list of tests run in our labs.

> 
> >>> My preferred option would be 1) as it would be nice not to lose all the
> work spent on the GitLab CI setup
> >>> (or perhaps integrating the GitLab CI approach into KernelCI's setup is a
> worthwhile task, but it'll take time)
> >>
> >> Yes I think Option 1 would be good given the timeframe you've
> >> set.  Also it's going to be a recurring use-case, KernelCI is all
> >> about being flexible enough to integrate existing CI systems with
> >> no real modifications to them.  We can try things out with
> >> staging.kernelci.org initially.  Option 2 could be done as a
> >> follow-up if you think the added features justify the extra
> >> efforts.
> >>
> >> And Option 3 seems really straightforward to me, any LAVA lab can
> >> be added to kernelci.org with an API token and a bit of YAML
> >> configuration.  That would definitely be a quick win if you want
> >> to be sure to have something to show at ELC-NA :)
> >
> > CPA: Fantastic. Thanks Guillaume.
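
The "bit of YAML configuration" for option 3 would be a lab entry in kernelci-core along these lines; the lab name, URL and filter syntax are made up for illustration and the API token itself is stored separately:

```yaml
# Hypothetical lab entry for kernelci-core's lab configuration;
# names, URL and filter syntax are illustrative only.
labs:
  lab-cip:
    lab_type: lava
    url: 'https://lava.ciplatform.org/RPC2/'
    filters:
      - passlist: {tree: [cip, mainline, stable]}
```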
> 
> I guess all these aspects are still relevant, except the ones
> that have already been implemented.  Basically, has CIP's point
> of view changed since last year with regard to your existing
> GitLab CI system?

Our current approach, whilst it works, isn't overly scalable if we want to add more and more test cases in the long run.
Whilst obviously we could improve the current scripts, it may be more efficient to just use KernelCI's approach.
This way both projects can easily benefit from any improvements/test case expansion down the line.

> 
> It would probably be good to go through this again in next
> Tuesday's meeting if Chris can make it.  Otherwise we can
> schedule a specific meeting at another time.

Alice and I will join on Tuesday.
Thank you for the time.

Kind regards, Chris

> 
> Best wishes,
> Guillaume

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2021-06-28 12:06 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <OSAPR01MB2385E0367B8CA545E1D66422B7890@OSAPR01MB2385.jpnprd01.prod.outlook.com>
2020-06-11 21:11 ` Linking CIP testing with KernelCI Guillaume Tucker
2020-06-12  7:54   ` santiago.esteban
2020-06-12 11:03   ` Chris Paterson
2021-06-24 19:31     ` Guillaume Tucker
2021-06-28 12:06       ` Chris Paterson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).