* [PATCH 0/2] image specific configuration with oeqa runtime tests
@ 2022-11-17  7:12 Mikko Rapeli
  2022-11-17  7:12 ` [PATCH 1/2] oeqa: add utils/data.py with get_data() function Mikko Rapeli
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-17  7:12 UTC (permalink / raw)
  To: openembedded-core; +Cc: Mikko Rapeli

Many runtime tests need customization for different machines and
images. Currently, some tests like parselogs.py hard-code
machine-specific exceptions into the test itself. I think these
machine-specific exceptions fit better as image-specific ones, since a
single machine config can generate multiple images which behave
differently. Thus, create a "testimage_data.json" file format which
image recipes can deploy. Tests like parselogs.py then use it to find
the image-specific exception list.
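
For illustration, an image recipe could deploy a testimage_data.json
like this (the error string here is made up):

{"test_parselogs":{"ignore_errors":[
    "example kernel error string to ignore"
]}}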

The same approach would fit other runtime tests too. For example,
systemd tests could include a test case which checks that an
image-specific list of services is running.
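
Hypothetically, such a test could read its expected list from the same
file, e.g. (key and service name made up):

{"test_systemd_services":{"expected_services":[
    "dbus.service"
]}}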

I don't know how this data storage would be used with SDK or selftests,
but maybe it could work there too with some small tweaks.

Mikko Rapeli (2):
  oeqa: add utils/data.py with get_data() function
  oeqa parselogs.py: use get_data() to fetch image specific error list

 meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
 meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
 2 files changed, 54 insertions(+), 4 deletions(-)
 create mode 100644 meta/lib/oeqa/utils/data.py

-- 
2.34.1




* [PATCH 1/2] oeqa: add utils/data.py with get_data() function
  2022-11-17  7:12 [PATCH 0/2] image specific configuration with oeqa runtime tests Mikko Rapeli
@ 2022-11-17  7:12 ` Mikko Rapeli
  2022-11-17  7:12 ` [PATCH 2/2] oeqa parselogs.py: use get_data() to fetch image specific error list Mikko Rapeli
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-17  7:12 UTC (permalink / raw)
  To: openembedded-core; +Cc: Mikko Rapeli

get_data() uses the oeqa test method name and an optional
key to get data from an image-specific "testimage_data.json"
file located in the image deploy directory. Image recipes can
provide custom versions of this file which configure
generic tests for a specific image when testing with
testimage.bbclass.

For example, the parselogs.py runtime test needs image-specific
configuration when the image produces new errors from the kernel
which are acceptable and can be ignored.

The same machine can be used to generate multiple images with
different runtime behavior, so the image, not the machine, is used as
the key.
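
A test case fetches its data roughly like this (sketch; see the
parselogs.py change in the following patch):

    from oeqa.utils.data import get_data
    ...
    errorlist = get_data(self, key="ignore_errors")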

Signed-off-by: Mikko Rapeli <mikko.rapeli@linaro.org>
---
 meta/lib/oeqa/utils/data.py | 41 +++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)
 create mode 100644 meta/lib/oeqa/utils/data.py

diff --git a/meta/lib/oeqa/utils/data.py b/meta/lib/oeqa/utils/data.py
new file mode 100644
index 0000000000..4b8c10c1d0
--- /dev/null
+++ b/meta/lib/oeqa/utils/data.py
@@ -0,0 +1,41 @@
+# Copyright (C) 2022 Linaro Limited
+#
+# SPDX-License-Identifier: MIT
+
+import os
+import json
+
+from oeqa.core.utils.test import getCaseID, getCaseFile, getCaseMethod
+
+
+def get_data(self, key=None):
+    """get_data() returns test case specific data to the test case implementation.
+    Data is stored in an image-specific json file called "testimage_data.json" in
+    the image deploy directory. Image recipes can provide custom versions of this file.
+    Data matching the test method name and an optional key is returned to the test case.
+    This data can then be used by generic test cases to match image-specific functionality
+    and expected behavior. For example, the list of expected kernel error strings, the list
+    of active systemd services etc. can be image-specific while the test case
+    implementation that checks them is generic. Example json file for the runtime
+    test parselogs.py to ignore image-specific kernel error strings in dmesg:
+
+    {"test_parselogs":{"ignore_errors":[
+        "Error to be ignored in dmesg"
+    ]}}
+    """
+    test_method = getCaseMethod(self)
+    self.logger.info("%s: get_data() called by test_method = %s, key = %s" % (__file__, test_method, key))
+
+    json_file_name = os.path.join(self.td['DEPLOY_DIR_IMAGE'], "testimage_data.json")
+    self.logger.debug("%s: json_file_name = %s" % (__file__, json_file_name))
+
+    with open(json_file_name) as json_file:
+        self.logger.debug("%s: json_file = %s" % (__file__, json_file))
+        json_data = json.load(json_file)
+        self.logger.debug("%s: json_data = %s" % (__file__, json_data))
+        if key:
+            data = json_data[test_method][key]
+        else:
+            data = json_data[test_method]
+        self.logger.debug("%s: data = %s" % (__file__, data))
+        return data
-- 
2.34.1




* [PATCH 2/2] oeqa parselogs.py: use get_data() to fetch image specific error list
  2022-11-17  7:12 [PATCH 0/2] image specific configuration with oeqa runtime tests Mikko Rapeli
  2022-11-17  7:12 ` [PATCH 1/2] oeqa: add utils/data.py with get_data() function Mikko Rapeli
@ 2022-11-17  7:12 ` Mikko Rapeli
  2022-11-17 14:22 ` [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests Alexandre Belloni
  2022-11-17 15:17 ` Richard Purdie
  3 siblings, 0 replies; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-17  7:12 UTC (permalink / raw)
  To: openembedded-core; +Cc: Mikko Rapeli

The runtime oeqa test parselogs.py checks dmesg output for errors. It
has hard-coded machine-specific exceptions for errors which can be
ignored. To re-use this test on other machine targets and images, use
the get_data() function to fetch the list of error strings to ignore
("ignore_errors") from the image-specific "testimage_data.json" file.
The json file stores this data as a list under the test method name
and the key "ignore_errors". For example:

{"test_parselogs":{"ignore_errors":[
    "error string which will be ignored",
    "another error string which will be ignored"
]}}

If the json file does not exist, parselogs.py still falls back to
using the hard-coded defaults.
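
The file is read from ${DEPLOY_DIR_IMAGE}/testimage_data.json, as
implemented by get_data() in the previous patch.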

Signed-off-by: Mikko Rapeli <mikko.rapeli@linaro.org>
---
 meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/meta/lib/oeqa/runtime/cases/parselogs.py b/meta/lib/oeqa/runtime/cases/parselogs.py
index e67d3750da..c1d92db5d6 100644
--- a/meta/lib/oeqa/runtime/cases/parselogs.py
+++ b/meta/lib/oeqa/runtime/cases/parselogs.py
@@ -12,6 +12,7 @@ from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.data import skipIfDataVar
 from oeqa.runtime.decorator.package import OEHasPackage
+from oeqa.utils.data import get_data
 
 #in the future these lists could be moved outside of module
 errors = ["error", "cannot", "can\'t", "failed"]
@@ -316,10 +317,18 @@ class ParseLogsTest(OERuntimeTestCase):
         grepcmd += '" ' + str(log) + " | grep -Eiv \'"
 
         try:
-            errorlist = ignore_errors[self.getMachine()]
-        except KeyError:
-            self.msg += 'No ignore list found for this machine, using default\n'
-            errorlist = ignore_errors['default']
+            # get the list of strings to ignore from the image-specific testimage_data.json, format:
+            # {"test_parselogs": {"ignore_errors":["string to ignore", "second string to ignore"]}}
+            errorlist = get_data(self, key="ignore_errors")
+        except Exception as e:
+            self.logger.debug("%s: Exception e = %s" % (__file__, e))
+            try:
+                errorlist = ignore_errors[self.getMachine()]
+            except KeyError:
+                warning_string = 'No ignore list found for this machine and no valid testimage_data.json, using defaults'
+                self.msg += '%s\n' % (warning_string)
+                self.logger.warning(warning_string)
+                errorlist = ignore_errors['default']
 
         for ignore_error in errorlist:
             ignore_error = ignore_error.replace('(', r'\(')
-- 
2.34.1




* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-17  7:12 [PATCH 0/2] image specific configuration with oeqa runtime tests Mikko Rapeli
  2022-11-17  7:12 ` [PATCH 1/2] oeqa: add utils/data.py with get_data() function Mikko Rapeli
  2022-11-17  7:12 ` [PATCH 2/2] oeqa parselogs.py: use get_data() to fetch image specific error list Mikko Rapeli
@ 2022-11-17 14:22 ` Alexandre Belloni
  2022-11-17 14:28   ` Mikko Rapeli
  2022-11-17 15:17 ` Richard Purdie
  3 siblings, 1 reply; 14+ messages in thread
From: Alexandre Belloni @ 2022-11-17 14:22 UTC (permalink / raw)
  To: Mikko Rapeli; +Cc: openembedded-core

Hello,

With these two patches, I have multiple new warnings on the autobuilders
for qemuarm and qemuarm-alt:

https://autobuilder.yoctoproject.org/typhoon/#/builders/53/builds/6185/steps/13/logs/stdio
https://autobuilder.yoctoproject.org/typhoon/#/builders/110/builds/5064/steps/12/logs/stdio

On 17/11/2022 09:12:21+0200, Mikko Rapeli wrote:
> Many runtime tests need customization for different machines and
> images. Currently, some tests like parselogs.py hard-code
> machine-specific exceptions into the test itself. I think these
> machine-specific exceptions fit better as image-specific ones, since a
> single machine config can generate multiple images which behave
> differently. Thus, create a "testimage_data.json" file format which
> image recipes can deploy. Tests like parselogs.py then use it to find
> the image-specific exception list.
> 
> The same approach would fit other runtime tests too. For example,
> systemd tests could include a test case which checks that an
> image-specific list of services is running.
> 
> I don't know how this data storage would be used with SDK or selftests,
> but maybe it could work there too with some small tweaks.
> 
> Mikko Rapeli (2):
>   oeqa: add utils/data.py with get_data() function
>   oeqa parselogs.py: use get_data() to fetch image specific error list
> 
>  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
>  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
>  2 files changed, 54 insertions(+), 4 deletions(-)
>  create mode 100644 meta/lib/oeqa/utils/data.py
> 
> -- 
> 2.34.1
> 



-- 
Alexandre Belloni, co-owner and COO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com



* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-17 14:22 ` [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests Alexandre Belloni
@ 2022-11-17 14:28   ` Mikko Rapeli
  0 siblings, 0 replies; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-17 14:28 UTC (permalink / raw)
  To: Alexandre Belloni; +Cc: openembedded-core

Hi,

On Thu, Nov 17, 2022 at 03:22:20PM +0100, Alexandre Belloni wrote:
> Hello,
> 
> With these two patches, I have multiple new warnings on the autobuilders
> for qemuarm and qemuarm-alt:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/53/builds/6185/steps/13/logs/stdio
> https://autobuilder.yoctoproject.org/typhoon/#/builders/110/builds/5064/steps/12/logs/stdio

WARNING: core-image-sato-sdk-1.0-r0 do_testimage: No ignore list found for this machine and no valid testimage_data.json, using defaults

I can change these to info level messages. Previously
these were only in the test output log, not in the bitbake output.
I think they need to be in both.
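
Concretely, something like this in parselogs.py (sketch):

    self.msg += '%s\n' % (warning_string)
    self.logger.info(warning_string)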

Cheers,

-Mikko

> On 17/11/2022 09:12:21+0200, Mikko Rapeli wrote:
> > Many runtime tests need customization for different machines and
> > images. Currently, some tests like parselogs.py hard-code
> > machine-specific exceptions into the test itself. I think these
> > machine-specific exceptions fit better as image-specific ones, since a
> > single machine config can generate multiple images which behave
> > differently. Thus, create a "testimage_data.json" file format which
> > image recipes can deploy. Tests like parselogs.py then use it to find
> > the image-specific exception list.
> > 
> > The same approach would fit other runtime tests too. For example,
> > systemd tests could include a test case which checks that an
> > image-specific list of services is running.
> > 
> > I don't know how this data storage would be used with SDK or selftests,
> > but maybe it could work there too with some small tweaks.
> > 
> > Mikko Rapeli (2):
> >   oeqa: add utils/data.py with get_data() function
> >   oeqa parselogs.py: use get_data() to fetch image specific error list
> > 
> >  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
> >  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
> >  2 files changed, 54 insertions(+), 4 deletions(-)
> >  create mode 100644 meta/lib/oeqa/utils/data.py
> > 
> > -- 
> > 2.34.1
> > 
> 
> 
> 
> -- 
> Alexandre Belloni, co-owner and COO, Bootlin
> Embedded Linux and Kernel engineering
> https://bootlin.com



* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-17  7:12 [PATCH 0/2] image specific configuration with oeqa runtime tests Mikko Rapeli
                   ` (2 preceding siblings ...)
  2022-11-17 14:22 ` [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests Alexandre Belloni
@ 2022-11-17 15:17 ` Richard Purdie
  2022-11-17 15:39   ` Mikko Rapeli
  3 siblings, 1 reply; 14+ messages in thread
From: Richard Purdie @ 2022-11-17 15:17 UTC (permalink / raw)
  To: Mikko Rapeli, openembedded-core

On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> Many runtime tests need customization for different machines and
> images. Currently, some tests like parselogs.py hard-code
> machine-specific exceptions into the test itself. I think these
> machine-specific exceptions fit better as image-specific ones, since a
> single machine config can generate multiple images which behave
> differently. Thus, create a "testimage_data.json" file format which
> image recipes can deploy. Tests like parselogs.py then use it to find
> the image-specific exception list.
> 
> The same approach would fit other runtime tests too. For example,
> systemd tests could include a test case which checks that an
> image-specific list of services is running.
> 
> I don't know how this data storage would be used with SDK or selftests,
> but maybe it could work there too with some small tweaks.
> 
> Mikko Rapeli (2):
>   oeqa: add utils/data.py with get_data() function
>   oeqa parselogs.py: use get_data() to fetch image specific error list
> 
>  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
>  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
>  2 files changed, 54 insertions(+), 4 deletions(-)
>  create mode 100644 meta/lib/oeqa/utils/data.py

This patch looks like it is one side of the equation, i.e. importing
the data into the tests. How does the data get into the deploy
directory in the first place? I assume there are other patches which do
that?

We have a bit of contention with two approaches to data management in
OEQA. One is where the runtime tests are directly run against an image,
in which case the datastore is available. You could therefore have
markup in the recipe as normal variables and access them directly in
the tests.

The second is the "testexport" approach where the tests are run without
the main metadata. I know Ross and I would like to see testexport
dropped as it complicates things and is a pain.

This new file "feels" a lot like more extensions in the testexport
direction and I'm not sure we need to do that. Could we handle this
with more markup in the image recipe?
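
(As a purely illustrative sketch, something like

TESTIMAGE_PARSELOGS_IGNORE_ERRORS = "dma timeout"

in the image recipe, read in the test via self.td; the variable name
is made up.)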

Cheers,

Richard







* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-17 15:17 ` Richard Purdie
@ 2022-11-17 15:39   ` Mikko Rapeli
  2022-11-17 16:57     ` Richard Purdie
  0 siblings, 1 reply; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-17 15:39 UTC (permalink / raw)
  To: Richard Purdie; +Cc: openembedded-core

Hi,

On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > Many runtime tests need customization for different machines and
> > images. Currently, some tests like parselogs.py hard-code
> > machine-specific exceptions into the test itself. I think these
> > machine-specific exceptions fit better as image-specific ones, since a
> > single machine config can generate multiple images which behave
> > differently. Thus, create a "testimage_data.json" file format which
> > image recipes can deploy. Tests like parselogs.py then use it to find
> > the image-specific exception list.
> > 
> > The same approach would fit other runtime tests too. For example,
> > systemd tests could include a test case which checks that an
> > image-specific list of services is running.
> > 
> > I don't know how this data storage would be used with SDK or selftests,
> > but maybe it could work there too with some small tweaks.
> > 
> > Mikko Rapeli (2):
> >   oeqa: add utils/data.py with get_data() function
> >   oeqa parselogs.py: use get_data() to fetch image specific error list
> > 
> >  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
> >  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
> >  2 files changed, 54 insertions(+), 4 deletions(-)
> >  create mode 100644 meta/lib/oeqa/utils/data.py
> 
> This patch looks like it is one side of the equation, i.e. importing
> the data into the tests. How does the data get into the deploy
> directory in the first place? I assume there are other patches which do
> that?

Patches in other layers do that, yes.

> We have a bit of contention with two approaches to data management in
> OEQA. One is where the runtime tests are directly run against an image,
> in which case the datastore is available. You could therefore have
> markup in the recipe as normal variables and access them directly in
> the tests.

My use case is running tests right after build, but I would like to export
them to execute later as well.

> The second is the "testexport" approach where the tests are run without
> the main metadata. I know Ross and I would like to see testexport
> dropped as it complicates things and is a pain.
> 
> This new file "feels" a lot like more extensions in the testexport
> direction and I'm not sure we need to do that. Could we handle this
> with more markup in the image recipe?

For simple variables this would do but how about a long list of strings
like poky/meta/lib/oeqa/runtime/cases/parselogs.py:

common_errors = [
    "(WW) warning, (EE) error, (NI) not implemented, (??) unknown.",
    "dma timeout",
    "can\'t add hid device:",
    "usbhid: probe of ",
    "_OSC failed (AE_ERROR)",
    "_OSC failed (AE_SUPPORT)",
    "AE_ALREADY_EXISTS",
    "ACPI _OSC request failed (AE_SUPPORT)",
    "can\'t disable ASPM",
    "Failed to load module \"vesa\"",
    "Failed to load module vesa",
    "Failed to load module \"modesetting\"",
    "Failed to load module modesetting",
    "Failed to load module \"glx\"",
    "Failed to load module \"fbdev\"",
    "Failed to load module fbdev",
    "Failed to load module glx"
]

Embed json into a bitbake variable? Or embed directly as python code?
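
(A hypothetical sketch of the former, variable name made up:

TESTIMAGE_DATA = '{"test_parselogs": {"ignore_errors": ["dma timeout"]}}'

with the test then doing json.loads(self.td["TESTIMAGE_DATA"]).)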

Cheers,

-Mikko



* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-17 15:39   ` Mikko Rapeli
@ 2022-11-17 16:57     ` Richard Purdie
  2022-11-18 14:32       ` Mikko Rapeli
  0 siblings, 1 reply; 14+ messages in thread
From: Richard Purdie @ 2022-11-17 16:57 UTC (permalink / raw)
  To: Mikko Rapeli; +Cc: openembedded-core

On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> Hi,
> 
> On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > Many runtime tests need customization for different machines and
> > > images. Currently, some tests like parselogs.py hard-code
> > > machine-specific exceptions into the test itself. I think these
> > > machine-specific exceptions fit better as image-specific ones, since a
> > > single machine config can generate multiple images which behave
> > > differently. Thus, create a "testimage_data.json" file format which
> > > image recipes can deploy. Tests like parselogs.py then use it to find
> > > the image-specific exception list.
> > > 
> > > The same approach would fit other runtime tests too. For example,
> > > systemd tests could include a test case which checks that an
> > > image-specific list of services is running.
> > > 
> > > I don't know how this data storage would be used with SDK or selftests,
> > > but maybe it could work there too with some small tweaks.
> > > 
> > > Mikko Rapeli (2):
> > >   oeqa: add utils/data.py with get_data() function
> > >   oeqa parselogs.py: use get_data() to fetch image specific error list
> > > 
> > >  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
> > >  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
> > >  2 files changed, 54 insertions(+), 4 deletions(-)
> > >  create mode 100644 meta/lib/oeqa/utils/data.py
> > 
> > This patch looks like it is one side of the equation, i.e. importing
> > the data into the tests. How does the data get into the deploy
> > directory in the first place? I assume there are other patches which do
> > that?
> 
> Patches in other layers do that, yes.
> 
> > We have a bit of contention with two approaches to data management in
> > OEQA. One is where the runtime tests are directly run against an image,
> > in which case the datastore is available. You could therefore have
> > markup in the recipe as normal variables and access them directly in
> > the tests.
> 
> My use case is running tests right after build, but I would like to export
> them to execute later as well.

When you execute later, are you going to use testexport or will the
metadata still be available? As I mentioned, removing testexport would
be desirable for a number of reasons but I suspect there are people who
might want it.
> 

> > The second is the "testexport" approach where the tests are run without
> > the main metadata. I know Ross and I would like to see testexport
> > dropped as it complicates things and is a pain.
> > 
> > This new file "feels" a lot like more extensions in the testexport
> > direction and I'm not sure we need to do that. Could we handle this
> > with more markup in the image recipe?
> 
> For simple variables this would do but how about a long list of strings
> like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> 
> common_errors = [
>     "(WW) warning, (EE) error, (NI) not implemented, (??) unknown.",
>     "dma timeout",
>     "can\'t add hid device:",
>     "usbhid: probe of ",
>     "_OSC failed (AE_ERROR)",
>     "_OSC failed (AE_SUPPORT)",
>     "AE_ALREADY_EXISTS",
>     "ACPI _OSC request failed (AE_SUPPORT)",
>     "can\'t disable ASPM",
>     "Failed to load module \"vesa\"",
>     "Failed to load module vesa",
>     "Failed to load module \"modesetting\"",
>     "Failed to load module modesetting",
>     "Failed to load module \"glx\"",
>     "Failed to load module \"fbdev\"",
>     "Failed to load module fbdev",
>     "Failed to load module glx"
> ]
> 
> Embed json into a bitbake variable? Or embed directly as python code?

I've wondered if we could add some new syntax to bitbake to support
this somehow, does anyone have any ideas to propose?

I'd wondered about both python data and/or json format (at which point
someone will want yaml :/).

Cheers,

Richard



* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-17 16:57     ` Richard Purdie
@ 2022-11-18 14:32       ` Mikko Rapeli
  2022-11-18 15:04         ` Richard Purdie
  0 siblings, 1 reply; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-18 14:32 UTC (permalink / raw)
  To: Richard Purdie; +Cc: openembedded-core

Hi,

On Thu, Nov 17, 2022 at 04:57:36PM +0000, Richard Purdie wrote:
> On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> > Hi,
> > 
> > On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > > Many runtime tests need customization for different machines and
> > > > images. Currently, some tests like parselogs.py hard-code
> > > > machine-specific exceptions into the test itself. I think these
> > > > machine-specific exceptions fit better as image-specific ones, since a
> > > > single machine config can generate multiple images which behave
> > > > differently. Thus, create a "testimage_data.json" file format which
> > > > image recipes can deploy. Tests like parselogs.py then use it to find
> > > > the image-specific exception list.
> > > > 
> > > > The same approach would fit other runtime tests too. For example,
> > > > systemd tests could include a test case which checks that an
> > > > image-specific list of services is running.
> > > > 
> > > > I don't know how this data storage would be used with SDK or selftests,
> > > > but maybe it could work there too with some small tweaks.
> > > > 
> > > > Mikko Rapeli (2):
> > > >   oeqa: add utils/data.py with get_data() function
> > > >   oeqa parselogs.py: use get_data() to fetch image specific error list
> > > > 
> > > >  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
> > > >  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
> > > >  2 files changed, 54 insertions(+), 4 deletions(-)
> > > >  create mode 100644 meta/lib/oeqa/utils/data.py
> > > 
> > > This patch looks like it is one side of the equation, i.e. importing
> > > the data into the tests. How does the data get into the deploy
> > > directory in the first place? I assume there are other patches which do
> > > that?
> > 
> > Patches in other layers do that, yes.

Note to self and anyone else interested in this, it is rather
tricky to get SRC_URI and do_deploy() working in image recipes.
Something like this will do it though:

SUMMARY = "Test image"
LICENSE = "MIT"

SRC_URI = "file://testimage_data.json"

inherit deploy

# re-enable SRC_URI handling, it's disabled in image.bbclass
python __anonymous() {
    d.delVarFlag("do_fetch", "noexec")
    d.delVarFlag("do_unpack", "noexec")
}
...
do_deploy() {
    # to customise oeqa tests
    mkdir -p "${DEPLOYDIR}"
    install "${WORKDIR}/testimage_data.json" "${DEPLOYDIR}"
}
# do_unpack needed to run do_fetch and do_unpack which are disabled by image.bbclass.
addtask deploy before do_build after do_rootfs do_unpack
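
With this, the deploy class publishes testimage_data.json into
${DEPLOY_DIR_IMAGE}, which is where get_data() looks for it.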

> > > We have a bit of contention with two approaches to data management in
> > > OEQA. One is where the runtime tests are directly run against an image,
> > > in which case the datastore is available. You could therefore have
> > > markup in the recipe as normal variables and access them directly in
> > > the tests.
> > 
> > My use case is running tests right after build, but I would like to export
> > them to execute later as well.
> 
> When you execute later, are you going to use testexport or will the
> metadata still be available? As I mentioned, removing testexport would
> be desirable for a number of reasons but I suspect there are people who
> might want it.

I was planning to use testexport and also make sure all images and other
things needed for running tests are in the output of a build.

> > > The second is the "testexport" approach where the tests are run without
> > > the main metadata. I know Ross and I would like to see testexport
> > > dropped as it complicates things and is a pain.
> > > 
> > > This new file "feels" a lot like more extensions in the testexport
> > > direction and I'm not sure we need to do that. Could we handle this
> > > with more markup in the image recipe?
> > 
> > For simple variables this would do but how about a long list of strings
> > like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> > 
> > common_errors = [
> >     "(WW) warning, (EE) error, (NI) not implemented, (??) unknown.",
> >     "dma timeout",
> >     "can\'t add hid device:",
> >     "usbhid: probe of ",
> >     "_OSC failed (AE_ERROR)",
> >     "_OSC failed (AE_SUPPORT)",
> >     "AE_ALREADY_EXISTS",
> >     "ACPI _OSC request failed (AE_SUPPORT)",
> >     "can\'t disable ASPM",
> >     "Failed to load module \"vesa\"",
> >     "Failed to load module vesa",
> >     "Failed to load module \"modesetting\"",
> >     "Failed to load module modesetting",
> >     "Failed to load module \"glx\"",
> >     "Failed to load module \"fbdev\"",
> >     "Failed to load module fbdev",
> >     "Failed to load module glx"
> > ]
> > 
> > Embed json into a bitbake variable? Or embed directly as python code?
> 
> I've wondered if we could add some new syntax to bitbake to support
> this somehow, does anyone have any ideas to propose?
> 
> I'd wondered about both python data and/or json format (at which point
> someone will want yaml :/).

This sounds pretty far-fetched currently. json files are quite simple
to work with in python so I'd just stick to this. If this approach is
ok I could update the testimage.bbclass documentation with these
details. I really want to re-use tests and infrastructure for running
them but I need to customize various details.

Cheers,

-Mikko



* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-18 14:32       ` Mikko Rapeli
@ 2022-11-18 15:04         ` Richard Purdie
  2022-11-18 15:57           ` Mikko Rapeli
  0 siblings, 1 reply; 14+ messages in thread
From: Richard Purdie @ 2022-11-18 15:04 UTC (permalink / raw)
  To: Mikko Rapeli; +Cc: openembedded-core

On Fri, 2022-11-18 at 16:32 +0200, Mikko Rapeli wrote:
> Hi,
> 
> On Thu, Nov 17, 2022 at 04:57:36PM +0000, Richard Purdie wrote:
> > On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> > > Hi,
> > > 
> > > On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > > > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > > > Many runtime tests need customization for different machines and
> > > > > images. Currently, some tests like parselogs.py hard-code
> > > > > machine-specific exceptions into the test itself. I think these
> > > > > machine-specific exceptions fit better as image-specific ones, since a
> > > > > single machine config can generate multiple images which behave
> > > > > differently. Thus, create a "testimage_data.json" file format which
> > > > > image recipes can deploy. Tests like parselogs.py then use it to find
> > > > > the image-specific exception list.
> > > > > 
> > > > > The same approach would fit other runtime tests too. For example,
> > > > > systemd tests could include a test case which checks that an
> > > > > image-specific list of services is running.
> > > > > 
> > > > > I don't know how this data storage would be used with SDK or selftests,
> > > > > but maybe it could work there too with some small tweaks.
> > > > > 
> > > > > Mikko Rapeli (2):
> > > > >   oeqa: add utils/data.py with get_data() function
> > > > >   oeqa parselogs.py: use get_data() to fetch image specific error list
> > > > > 
> > > > >  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
> > > > >  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
> > > > >  2 files changed, 54 insertions(+), 4 deletions(-)
> > > > >  create mode 100644 meta/lib/oeqa/utils/data.py
> > > > 
> > > > This patch looks like it is one side of the equation, i.e. importing
> > > > the data into the tests. How does the data get into the deploy
> > > > directory in the first place? I assume there are other patches which do
> > > > that?
> > > 
> > > Patches in other layers do that, yes.
> 
> Note to self and anyone else interested in this, it is rather
> tricky to get SRC_URI and do_deploy() working in image recipes.
> Something like this will do it though:
> 
> SUMMARY = "Test image"
> LICENSE = "MIT"
> 
> SRC_URI = "file://testimage_data.json"
> 
> inherit deploy
> 
> # re-enable SRC_URI handling, it's disabled in image.bbclass
> python __anonymous() {
>     d.delVarFlag("do_fetch", "noexec")
>     d.delVarFlag("do_unpack", "noexec")
> }
> ...
> do_deploy() {
>     # to customise oeqa tests
>     mkdir -p "${DEPLOYDIR}"
>     install "${WORKDIR}/testimage_data.json" "${DEPLOYDIR}"
> }
> # do_unpack needed to run do_fetch and do_unpack which are disabled by image.bbclass.
> addtask deploy before do_build after do_rootfs do_unpack

Since the image code doesn't need SRC_URI and has its own handling of
deployment, we didn't really expect anyone to need to do that :/.

> > > 
> > When you execute later, are you going to use testexport or will the
> > metadata still be available? As I mentioned, removing testexport would
> > be desirable for a number of reasons but I suspect there are people who
> > might want it.
> 
> I was planning to use testexport and also make sure all images and other
> things needed for running tests are in the output of a build.

I guess that means if we were to propose patches removing testexport
functionality you'd be very much opposed then? :(


> 
> > > > The second is the "testexport" approach where the tests are run without
> > > > the main metadata. I know Ross and I would like to see testexport
> > > > dropped as it complicates things and is a pain.
> > > > 
> > > > This new file "feels" a lot like more extensions in the testexport
> > > > direction and I'm not sure we need to do that. Could we handle this
> > > > with more markup in the image recipe?
> > > 
> > > For simple variables this would do but how about a long list of strings
> > > like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> > > 
> > > common_errors = [
> > >     "(WW) warning, (EE) error, (NI) not implemented, (??) unknown.",
> > >     "dma timeout",
> > >     "can\'t add hid device:",
> > >     "usbhid: probe of ",
> > >     "_OSC failed (AE_ERROR)",
> > >     "_OSC failed (AE_SUPPORT)",
> > >     "AE_ALREADY_EXISTS",
> > >     "ACPI _OSC request failed (AE_SUPPORT)",
> > >     "can\'t disable ASPM",
> > >     "Failed to load module \"vesa\"",
> > >     "Failed to load module vesa",
> > >     "Failed to load module \"modesetting\"",
> > >     "Failed to load module modesetting",
> > >     "Failed to load module \"glx\"",
> > >     "Failed to load module \"fbdev\"",
> > >     "Failed to load module fbdev",
> > >     "Failed to load module glx"
> > > ]
> > > 
> > > Embed json into a bitbake variable? Or embed directly as python code?
> > 
> > I've wondered if we could add some new syntax to bitbake to support
> > this somehow, does anyone have any ideas to propose?
> > 
> > I'd wondered about both python data and/or json format (at which point
> > someone will want yaml :/).
> 
> > This sounds pretty far-fetched currently.

Not really. If we can find a syntax that works, the rest of the code in
bitbake can support that fairly easily. The datastore already handles
objects of different types.

> json files are quite simple to work with in python so I'd just stick to 
> this. If this approach is ok I could update the testimage.bbclass 
> documentation with these details.
> I really want to re-use tests and infratructure for running them but I need
> to customize various details.

My concern is having multiple different file formats and data streams.
It means we no longer have one definitive data mechanism but two, and
then the argument for people also shipping yaml and other files with
recipes becomes difficult to counter. We'd also have people wanting to
query from one to the other eventually.

The real issue here seems to be that our data format (.bb) is
struggling with some forms of data. I've therefore a preference for
fixing that rather than encouraging working around it.

Cheers,

Richard








* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-18 15:04         ` Richard Purdie
@ 2022-11-18 15:57           ` Mikko Rapeli
  2022-11-18 16:04             ` Richard Purdie
  2022-11-18 16:11             ` Richard Purdie
  0 siblings, 2 replies; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-18 15:57 UTC (permalink / raw)
  To: Richard Purdie; +Cc: openembedded-core

On Fri, Nov 18, 2022 at 03:04:29PM +0000, Richard Purdie wrote:
> On Fri, 2022-11-18 at 16:32 +0200, Mikko Rapeli wrote:
> > Hi,
> > 
> > On Thu, Nov 17, 2022 at 04:57:36PM +0000, Richard Purdie wrote:
> > > On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> > > > Hi,
> > > > 
> > > > On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > > > > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > > > > Many runtime tests need customization for different machines and
> > > > > > images. Currently, some tests like parselogs.py hard-code
> > > > > > machine-specific exceptions into the test itself. I think these
> > > > > > machine-specific exceptions fit better as image-specific ones, since a
> > > > > > single machine config can generate multiple images which behave
> > > > > > differently. Thus, create a "testimage_data.json" file format which
> > > > > > image recipes can deploy. Tests like parselogs.py then use it to find
> > > > > > the image-specific exception list.
> > > > > > 
> > > > > > The same approach would fit other runtime tests too. For example,
> > > > > > systemd tests could include a test case which checks that an
> > > > > > image-specific list of services is running.
> > > > > > 
> > > > > > I don't know how this data storage would be used with SDK or selftests,
> > > > > > but maybe it could work there too with some small tweaks.
> > > > > > 
> > > > > > Mikko Rapeli (2):
> > > > > >   oeqa: add utils/data.py with get_data() function
> > > > > >   oeqa parselogs.py: use get_data() to fetch image specific error list
> > > > > > 
> > > > > >  meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
> > > > > >  meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
> > > > > >  2 files changed, 54 insertions(+), 4 deletions(-)
> > > > > >  create mode 100644 meta/lib/oeqa/utils/data.py
> > > > > 
> > > > > This patch looks like it is one side of the equation, i.e. importing
> > > > > the data into the tests. How does the data get into the deploy
> > > > > directory in the first place? I assume there are other patches which do
> > > > > that?
> > > > 
> > > > Patches in other layers do that, yes.
> > 
> > Note to self and anyone else interested in this, it is rather
> > tricky to get SRC_URI and do_deploy() working in image recipes.
> > Something like this will do it though:
> > 
> > SUMMARY = "Test image"
> > LICENSE = "MIT"
> > 
> > SRC_URI = "file://testimage_data.json"
> > 
> > inherit deploy
> > 
> > # re-enable SRC_URI handling, it's disabled in image.bbclass
> > python __anonymous() {
> >     d.delVarFlag("do_fetch", "noexec")
> >     d.delVarFlag("do_unpack", "noexec")
> > }
> > ...
> > do_deploy() {
> >     # to customise oeqa tests
> >     mkdir -p "${DEPLOYDIR}"
> >     install "${WORKDIR}/testimage_data.json" "${DEPLOYDIR}"
> > }
> > # do_unpack needed to run do_fetch and do_unpack which are disabled by image.bbclass.
> > addtask deploy before do_build after do_rootfs do_unpack
> 
> Since the image code doesn't need SRC_URI and has its own handling of
> deployment, we didn't really expect anyone to need to do that :/.

Yep, but it can be done. Images can deploy files from SRC_URI too.

> > > > 
> > > When you execute later, are you going to use testexport or will the
> > > metadata still be available? As I mentioned, removing testexport would
> > > be desirable for a number of reasons but I suspect there are people who
> > > might want it.
> > 
> > I was planning to use testexport and also make sure all images and other
> > things needed for running tests are in the output of a build.
> 
> I guess that means if we were to propose patches removing testexport
> functionality you'd be very much opposed then? :(

An alternative would be nice. I like that the build environment provides
full infrastructure for running tests also elsewhere. But if that is too
tricky, then running tests only works on build machines after the build.

> > > > > The second is the "testexport" approach where the tests are run without
> > > > > the main metadata. I know Ross and I would like to see testexport
> > > > > dropped as it complicates things and is a pain.
> > > > > 
> > > > > This new file "feels" a lot like more extensions in the testexport
> > > > > direction and I'm not sure we need to do that. Could we handle this
> > > > > with more markup in the image recipe?
> > > > 
> > > > For simple variables this would do but how about a long list of strings
> > > > like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> > > > 
> > > > common_errors = [
> > > >     "(WW) warning, (EE) error, (NI) not implemented, (??) unknown.",
> > > >     "dma timeout",
> > > >     "can\'t add hid device:",
> > > >     "usbhid: probe of ",
> > > >     "_OSC failed (AE_ERROR)",
> > > >     "_OSC failed (AE_SUPPORT)",
> > > >     "AE_ALREADY_EXISTS",
> > > >     "ACPI _OSC request failed (AE_SUPPORT)",
> > > >     "can\'t disable ASPM",
> > > >     "Failed to load module \"vesa\"",
> > > >     "Failed to load module vesa",
> > > >     "Failed to load module \"modesetting\"",
> > > >     "Failed to load module modesetting",
> > > >     "Failed to load module \"glx\"",
> > > >     "Failed to load module \"fbdev\"",
> > > >     "Failed to load module fbdev",
> > > >     "Failed to load module glx"
> > > > ]
> > > > 
> > > > Embed json into a bitbake variable? Or embed directly as python code?
> > > 
> > > I've wondered if we could add some new syntax to bitbake to support
> > > this somehow, does anyone have any ideas to propose?
> > > 
> > > I'd wondered about both python data and/or json format (at which point
> > > someone will want yaml :/).
> > 
> > This sounds pretty far-fetched currently.
> 
> Not really. If we can find a syntax that works, the rest of the code in
> bitbake can support that fairly easily. The datastore already handles
> objects of different types.
> 
> > json files are quite simple to work with in python so I'd just stick to
> > this. If this approach is ok I could update the testimage.bbclass
> > documentation with these details.
> > I really want to re-use tests and infrastructure for running them but I need
> > to customize various details.
> 
> My concern is having multiple different file formats and data streams.
> It means we no longer have one definitive data mechanism but two, then
> the argument for people also shipping yaml and other files with recipes
> also becomes difficult. We'd also have people wanting to query from one
> to the other eventually.
> 
> The real issue here seems to be that our data format (.bb) is
> struggling with some forms of data. I've therefore a preference for
> fixing that rather than encouraging working around it.

For oeqa runtime tests I propose this json file. If tests have any
customization needs, they should use image recipe variables, or this
file format where recipe variables can't express the data. For other
alternatives I'd need pointers on where to implement them and what.
ptests are normal packages so they don't complicate this.

Additionally, I'm currently interested in the kirkstone LTS branch, so I
would like any changes to be there too...

Cheers,

-Mikko

> Cheers,
> 
> Richard
> 
> 
> 
> 
> 



* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-18 15:57           ` Mikko Rapeli
@ 2022-11-18 16:04             ` Richard Purdie
  2022-11-18 16:09               ` Mikko Rapeli
  2022-11-18 16:11             ` Richard Purdie
  1 sibling, 1 reply; 14+ messages in thread
From: Richard Purdie @ 2022-11-18 16:04 UTC (permalink / raw)
  To: Mikko Rapeli; +Cc: openembedded-core

On Fri, 2022-11-18 at 17:57 +0200, Mikko Rapeli wrote:
> Additionally, I'm currently interested in the kirkstone LTS branch, so I
> would like any changes to be there too...

The idea is that we develop things in master, then they become
available in the next LTS release, not the previous one.

People somehow seem to think that we add development patches to master
and immediately backport them to the last LTS. This is not how
development or the LTS is meant to work :(.

We're not going to be forced into poor interface/API choices just
because people want things backported to the current LTS.

This is why I'm so worried about the lack of planning and development
happening on master; people need to be thinking and planning ahead. To
be clear, this isn't just about this issue but a pattern we're seeing
far more widely.

Cheers,

Richard







* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-18 16:04             ` Richard Purdie
@ 2022-11-18 16:09               ` Mikko Rapeli
  0 siblings, 0 replies; 14+ messages in thread
From: Mikko Rapeli @ 2022-11-18 16:09 UTC (permalink / raw)
  To: Richard Purdie; +Cc: openembedded-core

Hi,

On Fri, Nov 18, 2022 at 04:04:01PM +0000, Richard Purdie wrote:
> On Fri, 2022-11-18 at 17:57 +0200, Mikko Rapeli wrote:
> > Additionally, I'm currently interested in the kirkstone LTS branch, so I
> > would like any changes to be there too...
> 
> The idea is that we develop things in master, then they become
> available in the next LTS release, not the previous one.
> 
> People somehow seem to think that we add development patches to master
> and immediately backport them to the last LTS. This is not how
> development or the LTS is meant to work :(.
> 
> We're not going to be forced into poor interface/API choices just
> because people want things backported to the current LTS.
> 
> This is why I'm so worried about the lack of planning and development
> happening on master; people need to be thinking and planning ahead. To
> be clear, this isn't just about this issue but a pattern we're seeing
> far more widely.

Agreed, I'm trying to get to working closer to master, but I'm not
there yet...

Cheers,

-Mikko



* Re: [OE-core] [PATCH 0/2] image specific configuration with oeqa runtime tests
  2022-11-18 15:57           ` Mikko Rapeli
  2022-11-18 16:04             ` Richard Purdie
@ 2022-11-18 16:11             ` Richard Purdie
  1 sibling, 0 replies; 14+ messages in thread
From: Richard Purdie @ 2022-11-18 16:11 UTC (permalink / raw)
  To: Mikko Rapeli; +Cc: openembedded-core

On Fri, 2022-11-18 at 17:57 +0200, Mikko Rapeli wrote:
> On Fri, Nov 18, 2022 at 03:04:29PM +0000, Richard Purdie wrote:
> 
> > My concern is having multiple different file formats and data streams.
> > It means we no longer have one definitive data mechanism but two, and
> > then the argument for people also shipping yaml and other files with
> > recipes becomes difficult to counter. We'd also have people wanting to
> > query from one to the other eventually.
> > 
> > The real issue here seems to be that our data format (.bb) is
> > struggling with some forms of data. I've therefore a preference for
> > fixing that rather than encouraging working around it.
> 
> For oeqa runtime tests I propose this json file. If tests have any
> customization needs, they should use image recipe variables, or this
> file format where recipe variables can't express the data. For other
> alternatives I'd need pointers on where to implement them and what.
> ptests are normal packages so they don't complicate this.

The key question this comes down to is: can anyone suggest a syntax for
including python data structures in our metadata (and/or json data)?

Cheers,

Richard


