* Baseline test plan and bootrr
From: Guillaume Tucker @ 2019-02-07 18:54 UTC
  To: kernelci; +Cc: Enric Balletbo i Serra

Hi,

Currently, boot tests entirely rely on the LAVA logic to detect
when a login prompt has been reached.  While this mostly works,
it does not guarantee that the platform is actually usable or
that everything went smoothly during the boot.

For this reason, it would seem useful to introduce a "baseline"
test plan which would essentially boot and then do some fast
checks to verify that nothing is obviously broken.  This can
include things like grepping the kernel log for any errors and
checking that drivers have been initialised correctly.  There's a
lot that can be done in a few seconds with a basic ramdisk.


So we could have a list of regular expressions to detect any
issues in the kernel log and report a LAVA test case result for
each of them.  The log fragment associated with each match should
also be available to include the actual errors in a report.
Doing this on the device means we keep the test definition in the
same location as the other test plans, and we can run it like any
test suite.
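
As a rough illustration, each pattern could be wrapped in something
like this in the test definition (a sketch only; the helper name and
the patterns are examples, not an actual implementation):

  # One LAVA test case per pattern: pass when the pattern does NOT
  # match the kernel log.  lava-test-case is the helper provided by
  # the LAVA overlay; check_log and the patterns are illustrative.
  check_log() {
      name="$1"; pattern="$2"
      if dmesg | grep -E "$pattern"; then   # print matches for the report
          lava-test-case "$name" --result fail
      else
          lava-test-case "$name" --result pass
      fi
  }
  check_log kernel-bug 'BUG:|Oops'
  check_log kernel-warning 'WARNING:'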


Another useful tool is bootrr, which my colleague Enric has
started to use on some Chromebook devices.  It can check that
drivers have been loaded and devices probed correctly.  It's
entirely written in shell scripts so it can be run in our current
buildroot rootfs and can easily be extended to run other checks.
It looks up the device tree platform name and uses that to call a
platform-specific script with relevant drivers and devices.
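
In rough terms the dispatch amounts to something like this (a
simplified sketch under my own assumptions, not bootrr's actual code;
the boards/ layout is illustrative):

  # Pick a board-specific check script based on the device tree
  # compatible string.
  compatible="$(tr '\0' '\n' < /proc/device-tree/compatible | head -n1)"
  board_script="boards/${compatible}"    # e.g. boards/google,kevin
  if [ -x "$board_script" ]; then
      "$board_script"    # runs the driver/device presence checks
  else
      echo "no board script for ${compatible}" >&2
  fi
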
Here's Enric's branch with such scripts for a few devices and a
LAVA test definition:

  https://github.com/eballetbo/bootrr/commits/master

and some sample runs with buildroot:

  https://lava.collabora.co.uk/scheduler/job/1478554
  https://lava.collabora.co.uk/scheduler/job/1478553


Does this look like something worth running in KernelCI?  Should
we just have a bootrr test plan or go for the "baseline" test
plan I'm suggesting, to be run on all devices and ultimately
instead of the current boot-to-login jobs?

Best wishes,
Guillaume


* Re: Baseline test plan and bootrr
From: Anibal Limon @ 2019-02-07 19:49 UTC
  To: kernelci, Guillaume Tucker, Enric Balletbo i Serra
  Cc: Nicolas Dechesne, Bjorn Andersson


Hi Guillaume and Enric,

I have been working on bootrr testing for Linux mainline and our
integration branch on dragonboard{410,820}c devices [1][2].  Basically it
takes the KernelCI binaries (kernel, dt, modules) and builds an Android
boot image with our generated ramdisk (built with OpenEmbedded) [3].
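
For reference, the image is assembled roughly like this, using the
skales mkbootimg (file names and cmdline are illustrative, not our
exact invocation):

  # Pack the KernelCI kernel, device tree and our ramdisk into an
  # Android boot image.
  mkbootimg --kernel Image \
            --ramdisk initramfs.cpio.gz \
            --dt dt.img \
            --cmdline "console=ttyMSM0,115200n8 root=/dev/ram0" \
            --pagesize 2048 \
            --output boot.img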

I'm trying to push results back to KernelCI using the notify block in the
LAVA definition, without success so far [4] (I need to debug).  It would
be good if KernelCI created a bootrr test plan that allows specifying the
bootrr repo/branch and the ramdisk to use.

Regards,
Anibal

[1] https://validation.linaro.org/scheduler/job/1905654
[2] https://validation.linaro.org/scheduler/job/1905579
[3] http://snapshots.linaro.org/member-builds/qcomlt/testimages/arm64/17/
[4] https://validation.linaro.org/scheduler/job/1905579/definition#defline13


On Thu, 7 Feb 2019 at 12:55, Guillaume Tucker <guillaume.tucker@gmail.com>
wrote:

> [...]
>
> Does this look like something worth running in KernelCI?  Should
> we just have a bootrr test plan or go for the "baseline" test
> plan I'm suggesting, to be run on all devices and ultimately
> instead of the current boot-to-login jobs?
>
> Best wishes,
> Guillaume



* Re: Baseline test plan and bootrr
From: Tomeu Vizoso @ 2019-02-11 13:16 UTC
  To: kernelci, guillaume.tucker; +Cc: Enric Balletbo i Serra

On 2/7/19 7:54 PM, Guillaume Tucker wrote:
> [...]
>
> Another useful tool is bootrr, which my colleague Enric has
> started to use on some Chromebook devices.  It can check that
> drivers have been loaded and devices probed correctly.
>
> [...]
>
> Does this look like something worth running in KernelCI?  Should
> we just have a bootrr test plan or go for the "baseline" test
> plan I'm suggesting, to be run on all devices and ultimately
> instead of the current boot-to-login jobs?

I think we could get a substantial part of the benefits that bootrr
provides by trivially checking that /sys/kernel/debug/devices_deferred is
empty after all modules have been loaded.
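
Something along these lines would do it (a minimal sketch, assuming
debugfs is mounted at /sys/kernel/debug):

  # Fail if any device is still waiting on -EPROBE_DEFER once all
  # modules have been loaded.  debugfs files report a size of 0, so
  # read the contents instead of testing the file size.
  deferred="$(cat /sys/kernel/debug/devices_deferred)"
  if [ -n "$deferred" ]; then
      echo "deferred devices:"
      echo "$deferred"
      exit 1
  fi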

We probably want to do that even if we had bootrr, as I find it unlikely
that people will maintain device-specific scripts for all the devices we
test on.

Cheers,

Tomeu




* Re: Baseline test plan and bootrr
From: eballetbo @ 2019-02-11 17:18 UTC
  To: kernelci, Tomeu Vizoso
  Cc: guillaume.tucker, Enric Balletbo i Serra, anibal.limon,
	nicolas.dechesne, bjorn.andersson

cc'ing: anibal.limon@linaro.org, nicolas.dechesne@linaro.org,
bjorn.andersson@linaro.org

Message from Tomeu Vizoso <tomeu.vizoso@collabora.com> on Mon, 11
Feb 2019 at 14:16:
>
> On 2/7/19 7:54 PM, Guillaume Tucker wrote:
> > [...]
>
> I think we could get a substantial part of the benefits that bootrr
> provides by trivially checking that /sys/kernel/debug/devices_deferred is
> empty after all modules have been loaded.
>

Agreed, although this doesn't cover the case where a driver simply fails
(without returning -EPROBE_DEFER), or where a driver that should be
instantiated is not for some reason (so its probe is never called).  But
it is a simple, trivial test that *we should definitely add.*

I was also thinking about the idea of implementing a
/sys/kernel/debug/devices_failed file with a list of devices that failed
to probe, but I'm not sure whether it makes sense.  Again, this would not
solve the problem of a driver that should be instantiated but isn't, and
we can also have drivers that fail but are expected to fail on a specific
platform.  In that case, we would need some kind of per-device ignore
list to avoid false boot regressions.
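
For instance, something like this (an entirely hypothetical layout and
file names, just to illustrate the idea):

  # Filter the proposed devices_failed list against a per-platform
  # ignore list, so expected failures don't appear as regressions.
  platform="$(tr '\0' '\n' < /proc/device-tree/compatible | head -n1)"
  ignore="/etc/bootrr/ignore/${platform}"     # hypothetical path
  if [ -f "$ignore" ]; then
      grep -v -x -f "$ignore" devices_failed  # hypothetical file
  else
      cat devices_failed
  fi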

> We probably want to do that even if we had bootrr, as I find it unlikely
> that people will maintain device-specific scripts for all the devices we
> test on.
>

That's also one of my worries: how difficult will it be to maintain these
device-specific scripts?  I'm wondering whether the bootrr developers
have, or have at some point thought about, some kind of automatic tool to
generate the scripts (e.g. by parsing the DT, though that doesn't cover
all the cases), or whether it is always done manually.

Cheers,
  Enric


* Re: Baseline test plan and bootrr
From: Tomeu Vizoso @ 2019-02-12  6:52 UTC
  To: Enric Balletbo Serra, kernelci
  Cc: guillaume.tucker, Enric Balletbo i Serra, anibal.limon,
	nicolas.dechesne, bjorn.andersson

On 2/11/19 6:18 PM, Enric Balletbo Serra wrote:
> cc'ing: anibal.limon@linaro.org, nicolas.dechesne@linaro.org,
> bjorn.andersson@linaro.org
> 
> Message from Tomeu Vizoso <tomeu.vizoso@collabora.com> on Mon, 11
> Feb 2019 at 14:16:
>>
>> On 2/7/19 7:54 PM, Guillaume Tucker wrote:
>>> [...]
>>
>> I think we could get a substantial part of the benefits that bootrr
>> provides by trivially checking that /sys/kernel/debug/devices_deferred is
>> empty after all modules have been loaded.
>>
> 
> Agreed, although this doesn't cover the case where a driver simply fails
> (without returning -EPROBE_DEFER)

In these cases, an error or warning should have been printed, which
would be parsed and counted as a test failure.

Of course, misbehaving drivers could fail silently, but that's not that
different from a driver not returning an error in probe() but remaining
in a non-functional state.

> or where a driver that should be
> instantiated is not for some reason (so its probe is never called).

Yeah, I guess that requires some way to specify which drivers we expect
to be available on each board.

But that's information we'll need to provide anyway when we enable more
advanced test suites.  For each device, we'll need details such as which
storage device to run a specific test case on, etc.  Right now we only
specify which device types should be able to run specific test suites.

> But it is a simple, trivial test that *we should definitely add.*

Exactly my point; I think there's a lot of value in there.

> I was also thinking about the idea of implementing a
> /sys/kernel/debug/devices_failed file with a list of devices that failed
> to probe, but I'm not sure whether it makes sense.  Again, this would not
> solve the problem of a driver that should be instantiated but isn't, and
> we can also have drivers that fail but are expected to fail on a specific
> platform.  In that case, we would need some kind of per-device ignore
> list to avoid false boot regressions.

Yeah, beyond devices_deferred the benefit/cost ratio starts declining
sharply :)

>> We probably want to do that even if we had bootrr, as I find it unlikely
>> that people will maintain device-specific scripts for all the devices we
>> test on.
>>
> 
> That's also one of my worries: how difficult will it be to maintain these
> device-specific scripts?  I'm wondering whether the bootrr developers
> have, or have at some point thought about, some kind of automatic tool to
> generate the scripts (e.g. by parsing the DT, though that doesn't cover
> all the cases), or whether it is always done manually.

The kernel already exposes the device list to userspace, doesn't it? But 
that doesn't help with knowing which of them aren't expected to probe.

I think I would personally just keep adding test suites, which of course 
would fail if the target device isn't present. This should also catch 
bugs that caused a dependency to fail probing (clocks, regulators, 
pinmux, etc).

Cheers,

Tomeu




* Re: Baseline test plan and bootrr
From: Mark Brown @ 2019-02-12 12:33 UTC
  To: kernelci, tomeu.vizoso
  Cc: Enric Balletbo Serra, guillaume.tucker, Enric Balletbo i Serra,
	anibal.limon, nicolas.dechesne, bjorn.andersson


On Tue, Feb 12, 2019 at 07:52:03AM +0100, Tomeu Vizoso wrote:
> On 2/11/19 6:18 PM, Enric Balletbo Serra wrote:

> > device-specific scripts?  I'm wondering whether the bootrr developers
> > have, or have at some point thought about, some kind of automatic tool
> > to generate the scripts (e.g. by parsing the DT, though that doesn't
> > cover all the cases), or whether it is always done manually.

> The kernel already exposes the device list to userspace, doesn't it? But
> that doesn't help with knowing which of them aren't expected to probe.

The kernel gives you a list of devices available to bind and a list of
drivers loaded; from these you can infer which devices should be able to
probe.
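
A quick sketch of that inference for platform devices (a bound device
has a "driver" symlink in sysfs; bear in mind that some devices
legitimately have no driver):

  # List platform devices with no driver currently bound.
  for dev in /sys/bus/platform/devices/*; do
      [ -e "$dev/driver" ] || echo "unbound: ${dev##*/}"
  done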



* Re: Baseline test plan and bootrr
From: Guillaume Tucker @ 2019-02-12 13:41 UTC
  To: Tomeu Vizoso
  Cc: Enric Balletbo Serra, kernelci, Enric Balletbo i Serra,
	Anibal Limon, nicolas.dechesne, bjorn.andersson


On Tue, Feb 12, 2019 at 6:52 AM Tomeu Vizoso <tomeu.vizoso@collabora.com>
wrote:

> I think I would personally just keep adding test suites, which of course
> would fail if the target device isn't present. This should also catch
> bugs that caused a dependency to fail probing (clocks, regulators,
> pinmux, etc).
>

Sure, we need to do that as well.  The idea here was to make an
improved boot test plan (I called it "baseline") with quick checks
that help detect whether anything obvious went wrong during boot,
something we could ultimately run on all devices instead of the
current boot-to-login test.

Guillaume



* Re: Baseline test plan and bootrr
From: Guillaume Tucker @ 2019-05-02 11:27 UTC
  To: Kevin Hilman; +Cc: kernelci, Enric Balletbo i Serra


On Thu, Feb 14, 2019 at 7:51 PM Kevin Hilman <khilman@baylibre.com> wrote:

> "Guillaume Tucker" <guillaume.tucker@gmail.com> writes:
>
> > [...]
> >
> > Any thoughts on this part?  Does anyone see any issue with having
> > a series of patterns to grep the kernel log and a test case
> > result for each of them in LAVA? (i.e. pass if the pattern was not
> > matched)
>
> I think this is a good idea.
>
> The difficulty comes in looking for device/board specific stuff, but
> that shouldn't stop us from doing something generic that looks for the
> obvious stuff.
>
> It could also look for (and count) kernel errors, warnings, backtraces,
> etc. etc.
>

Thanks for the feedback so far.  I've now created an initial PR
to start testing this on staging:

  https://github.com/kernelci/kernelci-core/pull/55

I believe the bootrr script should be part of the rootfs before
we can merge this and roll it out in production.

Guillaume



* Re: Baseline test plan and bootrr
From: Kevin Hilman @ 2019-02-14 19:51 UTC
  To: kernelci, Guillaume Tucker
  Cc: Enric Balletbo i Serra

"Guillaume Tucker" <guillaume.tucker@gmail.com> writes:

> On Thu, Feb 7, 2019 at 6:55 PM Guillaume Tucker via Groups.Io
> <guillaume.tucker=gmail.com@groups.io> wrote:
>
>> [...]
>>
>> So we could have a list of regular expressions to detect any
>> issues in the kernel log and report a LAVA test case result for
>> each of them.  The log fragment associated with each match should
>> also be available to include the actual errors in a report.
>> Doing this on the device means we keep the test definition in the
>> same location as the other test plans, and we can run it like any
>> test suite.
>>
>
> Any thoughts on this part?  Does anyone see any issue with having
> a series of patterns to grep the kernel log and a test case
> result for each of them in LAVA? (i.e. pass if the pattern was not
> matched)

I think this is a good idea.

The difficulty comes in looking for device/board specific stuff, but
that shouldn't stop us from doing something generic that looks for the
obvious stuff.

It could also look for (and count) kernel errors, warnings, backtraces,
etc. etc.
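
For instance (a sketch; --level is the util-linux dmesg, the busybox
one may not support it):

  # Count error- and warning-level kernel messages as a rough boot
  # health signal.
  errors="$(dmesg --level=err,crit,alert,emerg | wc -l)"
  warnings="$(dmesg --level=warn | wc -l)"
  echo "errors=$errors warnings=$warnings"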

Kevin


* Re: Baseline test plan and bootrr
From: Guillaume Tucker @ 2019-02-12 13:46 UTC
  To: kernelci, Guillaume Tucker; +Cc: Enric Balletbo i Serra


On Thu, Feb 7, 2019 at 6:55 PM Guillaume Tucker via Groups.Io
<guillaume.tucker=gmail.com@groups.io> wrote:

> [...]
>
> So we could have a list of regular expressions to detect any
> issues in the kernel log and report a LAVA test case result for
> each of them.  The log fragment associated with each match should
> also be available to include the actual errors in a report.
> Doing this on the device means we keep the test definition in the
> same location as the other test plans, and we can run it like any
> test suite.
>

Any thoughts on this part?  Does anyone see any issue with having
a series of patterns to grep the kernel log and a test case
result for each of them in LAVA? (i.e. pass if the pattern was not
matched)

Guillaume


