* Device Tree runtime unit tests: Harmonisation
@ 2022-02-02 11:31 Naresh Kamboju
  2022-02-02 18:38 ` Frank Rowand
  0 siblings, 1 reply; 9+ messages in thread
From: Naresh Kamboju @ 2022-02-02 11:31 UTC (permalink / raw)
  To: Brendan Higgins, Frank Rowand, Rob Herring
  Cc: open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

Linaro runs Linux Kernel Functional Testing (LKFT).
As part of LKFT, we recently enabled CONFIG_OF_UNITTEST=y in our
daily test CI.

The test output looks as shown below. The current problem is that we
have a hard time seeing (grepping for) a pass/fail result for each
individual test; we only see a summary at the end with x passes and
y failures.
We would like your opinion on how hard it would be to include a
per-test result in the output, perhaps in TAP version 14 format.
Another question: how hard do you think it would be to rewrite this
as a KUnit test, if that is even applicable? I have provided the
KUnit output links at the end of this email.
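
For reference, a TAP version 14 stream emits one result line per
test, which is what would make grepping trivial. The test names
below are invented purely for illustration:

```
TAP version 14
1..3
ok 1 - phandle-tests/consumer-a
not ok 2 - overlay/fragment-merge
ok 3 - duplicate-name
```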


Test output:
------------
[    0.000000] Booting Linux on physical CPU 0x0000000100 [0x410fd033]
[    0.000000] Linux version 5.17.0-rc1-next-20220127
(tuxmake@tuxmake) (aarch64-linux-gnu-gcc (Debian 11.2.0-9) 11.2.0, GNU
ld (GNU Binutils for Debian) 2.37) #1 SMP PREEMPT @1643255563
[    0.000000] Machine model: ARM Juno development board (r2)

<trimmed output>

[    3.285226] ### dt-test ### start of unittest - you will see error messages
[    3.293269] ### dt-test ### EXPECT \ : Duplicate name in
testcase-data, renamed to \"duplicate-name#1\"
[    3.293456] Duplicate name in testcase-data, renamed to \"duplicate-name#1\"
[    3.313367] ### dt-test ### EXPECT / : Duplicate name in
testcase-data, renamed to \"duplicate-name#1\"
[    3.314709] ### dt-test ### EXPECT \ : OF:
/testcase-data/phandle-tests/consumer-a: could not get
#phandle-cells-missing for /testcase-data/phandle-tests/provider1
[    3.323968] OF: /testcase-data/phandle-tests/consumer-a: could not
get #phandle-cells-missing for /testcase-data/phandle-tests/provider1

<trimmed output>

[    5.118400] ### dt-test ### EXPECT / : OF: overlay: ERROR: multiple
fragments add and/or delete node
/testcase-data-2/substation@100/motor-1/electric
[    5.121358] atkbd serio1: keyboard reset failed on 1c070000.kmi
[    5.134160] ### dt-test ### end of unittest - 257 passed, 0 failed
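
A minimal sketch of the grep problem: of the lines above, only the
final summary reliably carries pass/fail counts. Here the captured
log is reduced to two of the lines shown above (the file name
`boot.log` is arbitrary):

```shell
# Recreate a two-line sample of the captured boot log shown above.
cat > boot.log <<'EOF'
[    3.285226] ### dt-test ### start of unittest - you will see error messages
[    5.134160] ### dt-test ### end of unittest - 257 passed, 0 failed
EOF

# The only per-run pass/fail signal is the final summary line:
grep '### dt-test ### end of unittest' boot.log
# -> [    5.134160] ### dt-test ### end of unittest - 257 passed, 0 failed
```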


Ref:
Full test output of of-unittest
https://lkft.validation.linaro.org/scheduler/job/4458582#L1019
https://lkft.validation.linaro.org/scheduler/job/4404330#L428

KUnit example test output that we are running in our daily CI loop.
https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/tests/

Full KUnit test logs:
https://lkft.validation.linaro.org/scheduler/job/3643324

https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/test/kunit_log_test/log


--
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-02 11:31 Device Tree runtime unit tests: Harmonisation Naresh Kamboju
@ 2022-02-02 18:38 ` Frank Rowand
  2022-02-02 20:29   ` Rob Herring
  2022-02-02 20:54   ` Brendan Higgins
  0 siblings, 2 replies; 9+ messages in thread
From: Frank Rowand @ 2022-02-02 18:38 UTC (permalink / raw)
  To: Naresh Kamboju, Brendan Higgins, Rob Herring
  Cc: open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On 2/2/22 5:31 AM, Naresh Kamboju wrote:
> Linaro started doing Linux kernel Functional Validation (LKFT).
> As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
> daily test CI.
> 
> The output of the test looks as below. The current problem is that we
> have a hard time to see (grep) pass/fail for each individual test. We
> only see a summary at the end with x pass and y fails.

The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
loglevel to allow debug output.

Unfortunately this can add lots of debug output, unless you use dynamic
debug to only enable debug for drivers/of/unittest.o.  There are only
a few other pr_debug() messages in unittest.
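
Concretely (commands are illustrative and assume
CONFIG_DYNAMIC_DEBUG=y), enabling debug output for just the unittest
code looks like:

```sh
# At boot, on the kernel command line (works for built-in code):
#   dyndbg="file drivers/of/unittest.c +p"

# Or at runtime, via debugfs:
echo 'file drivers/of/unittest.c +p' > /sys/kernel/debug/dynamic_debug/control

# pr_debug() still emits at KERN_DEBUG, so raise the console loglevel
# (or boot with ignore_loglevel) if watching the console rather than dmesg:
echo 8 > /proc/sys/kernel/printk
```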

I think a better solution would be to add a config option, something
like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
at loglevel KERN_ERR.  I'll submit a patch for that and see what the
review responses are.

> We would like to get your opinion of how hard it would be to include
> that in the output per test. Maybe like TAP version 14?
> Another question would be how hard do you think it would be to rewrite
> this to a kunit test, if even applicable? I have provided the kunit
> output links at the end of this email.

Devicetree unittests were suggested as a good candidate as a first
test to convert to kunit when kunit was implemented.  Brendan tried
to convert it, and we quickly saw that it was not a good candidate.
Devicetree unittests do not fit the unit test mold; they are a very
different creature.  Brendan has a good term for this type of test
(Brendan, was it "acceptance" test?).

> 
> 
> Test output:
> ------------
> [    0.000000] Booting Linux on physical CPU 0x0000000100 [0x410fd033]
> [    0.000000] Linux version 5.17.0-rc1-next-20220127
> (tuxmake@tuxmake) (aarch64-linux-gnu-gcc (Debian 11.2.0-9) 11.2.0, GNU
> ld (GNU Binutils for Debian) 2.37) #1 SMP PREEMPT @1643255563
> [    0.000000] Machine model: ARM Juno development board (r2)
> 
> <trimmed output>
> 
> [    3.285226] ### dt-test ### start of unittest - you will see error messages
> [    3.293269] ### dt-test ### EXPECT \ : Duplicate name in
> testcase-data, renamed to \"duplicate-name#1\"
> [    3.293456] Duplicate name in testcase-data, renamed to \"duplicate-name#1\"
> [    3.313367] ### dt-test ### EXPECT / : Duplicate name in
> testcase-data, renamed to \"duplicate-name#1\"
> [    3.314709] ### dt-test ### EXPECT \ : OF:
> /testcase-data/phandle-tests/consumer-a: could not get
> #phandle-cells-missing for /testcase-data/phandle-tests/provider1
> [    3.323968] OF: /testcase-data/phandle-tests/consumer-a: could not
> get #phandle-cells-missing for /testcase-data/phandle-tests/provider1
> 
> <trimmed output>
> 
> [    5.118400] ### dt-test ### EXPECT / : OF: overlay: ERROR: multiple
> fragments add and/or delete node
> /testcase-data-2/substation@100/motor-1/electric
> [    5.121358] atkbd serio1: keyboard reset failed on 1c070000.kmi
> [    5.134160] ### dt-test ### end of unittest - 257 passed, 0 failed
> 
> 
> Ref:
> Full test output of of-unittest
> https://lkft.validation.linaro.org/scheduler/job/4458582#L1019
> https://lkft.validation.linaro.org/scheduler/job/4404330#L428
> 
> Kunit example test output that we are running in our daily CI loop.
> https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/tests/
> 
> Kunit Full test logs:
> https://lkft.validation.linaro.org/scheduler/job/3643324
> 
> https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/test/kunit_log_test/log
> 
> 
> --
> Linaro LKFT
> https://lkft.linaro.org
> 



* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-02 18:38 ` Frank Rowand
@ 2022-02-02 20:29   ` Rob Herring
  2022-02-02 21:14     ` Brendan Higgins
  2022-02-02 22:01     ` Frank Rowand
  2022-02-02 20:54   ` Brendan Higgins
  1 sibling, 2 replies; 9+ messages in thread
From: Rob Herring @ 2022-02-02 20:29 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Naresh Kamboju, Brendan Higgins,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On Wed, Feb 2, 2022 at 12:38 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/2/22 5:31 AM, Naresh Kamboju wrote:
> > Linaro started doing Linux kernel Functional Validation (LKFT).
> > As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
> > daily test CI.
> >
> > The output of the test looks as below. The current problem is that we
> > have a hard time to see (grep) pass/fail for each individual test. We
> > only see a summary at the end with x pass and y fails.
>
> The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
> are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
> loglevel to allow debug output.

That alone is not enough. Unless there's a DEBUG define, the
pr_debug() is going to print nothing.

> Unfortunately this can add lots of debug output, unless you use dynamic
> debug to only enable debug for drivers/of/unittest.o.  There are only
> a few other pr_debug() messages in unittest.

Dynamic debug is one option. Another would be a module param to enable
running the tests. Then it can be built, but has to be explicitly
enabled at boot time. A 3rd option is making it work as a module, then
it's run when loaded. (That was the original plan.)

> I think a better solution would be to add a config option, something
> like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
> at loglevel KERN_ERR.  I'll submit a patch for that and see what the
> review responses are.

Nak for another config option.

> > We would like to get your opinion of how hard it would be to include
> > that in the output per test. Maybe like TAP version 14?
> > Another question would be how hard do you think it would be to rewrite
> > this to a kunit test, if even applicable? I have provided the kunit
> > output links at the end of this email.
>
> Devicetree unittests were suggested as a good candidate as a first
> test to convert to kunit when kunit was implemented.  Brendan tried
> to convert it, and we quickly saw that it was not a good candidate.
> Devicetree unittests do not fit the unit test mold; they are a very
> different creature.  Brendan has a good term for this type of test
> (Brendan, was it "acceptance" test?).

I thought you ended up agreeing with using kunit? Whatever you want to
call the DT tests, there's not really any good reason to do our own
pass/fail messages.

Rob


* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-02 18:38 ` Frank Rowand
  2022-02-02 20:29   ` Rob Herring
@ 2022-02-02 20:54   ` Brendan Higgins
  2022-02-02 22:04     ` Frank Rowand
  1 sibling, 1 reply; 9+ messages in thread
From: Brendan Higgins @ 2022-02-02 20:54 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Naresh Kamboju, Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On Wed, Feb 2, 2022 at 1:38 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/2/22 5:31 AM, Naresh Kamboju wrote:
> > Linaro started doing Linux kernel Functional Validation (LKFT).
> > As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
> > daily test CI.
> >
> > The output of the test looks as below. The current problem is that we
> > have a hard time to see (grep) pass/fail for each individual test. We
> > only see a summary at the end with x pass and y fails.
>
> The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
> are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
> loglevel to allow debug output.
>
> Unfortunately this can add lots of debug output, unless you use dynamic
> debug to only enable debug for drivers/of/unittest.o.  There are only
> a few other pr_debug() messages in unittest.
>
> I think a better solution would be to add a config option, something
> like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
> at loglevel KERN_ERR.  I'll submit a patch for that and see what the
> review responses are.
>
> > We would like to get your opinion of how hard it would be to include
> > that in the output per test. Maybe like TAP version 14?
> > Another question would be how hard do you think it would be to rewrite
> > this to a kunit test, if even applicable? I have provided the kunit
> > output links at the end of this email.
>
> Devicetree unittests were suggested as a good candidate as a first
> test to convert to kunit when kunit was implemented.  Brendan tried
> to convert it, and we quickly saw that it was not a good candidate.
> Devicetree unittests do not fit the unit test mold; they are a very
> different creature.  Brendan has a good term for this type of test
> (Brendan, was it "acceptance" test?).

I understood that it was either an integration test or end-to-end test
(probably an integration test): https://lkml.org/lkml/2019/3/21/1124

Standardizing integration tests in the kernel is still something that
hasn't happened yet, but there are some examples of integration tests
being written in KUnit (the KASAN KUnit test is probably the most
notable example). There are definitely some others written in
kselftest. It's kind of a tough area because integration tests are
kind of defined by being in between unit tests and end-to-end tests.

> > Test output:
> > ------------
> > [    0.000000] Booting Linux on physical CPU 0x0000000100 [0x410fd033]
> > [    0.000000] Linux version 5.17.0-rc1-next-20220127
> > (tuxmake@tuxmake) (aarch64-linux-gnu-gcc (Debian 11.2.0-9) 11.2.0, GNU
> > ld (GNU Binutils for Debian) 2.37) #1 SMP PREEMPT @1643255563
> > [    0.000000] Machine model: ARM Juno development board (r2)
> >
> > <trimmed output>
> >
> > [    3.285226] ### dt-test ### start of unittest - you will see error messages
> > [    3.293269] ### dt-test ### EXPECT \ : Duplicate name in
> > testcase-data, renamed to \"duplicate-name#1\"
> > [    3.293456] Duplicate name in testcase-data, renamed to \"duplicate-name#1\"
> > [    3.313367] ### dt-test ### EXPECT / : Duplicate name in
> > testcase-data, renamed to \"duplicate-name#1\"
> > [    3.314709] ### dt-test ### EXPECT \ : OF:
> > /testcase-data/phandle-tests/consumer-a: could not get
> > #phandle-cells-missing for /testcase-data/phandle-tests/provider1
> > [    3.323968] OF: /testcase-data/phandle-tests/consumer-a: could not
> > get #phandle-cells-missing for /testcase-data/phandle-tests/provider1
> >
> > <trimmed output>
> >
> > [    5.118400] ### dt-test ### EXPECT / : OF: overlay: ERROR: multiple
> > fragments add and/or delete node
> > /testcase-data-2/substation@100/motor-1/electric
> > [    5.121358] atkbd serio1: keyboard reset failed on 1c070000.kmi
> > [    5.134160] ### dt-test ### end of unittest - 257 passed, 0 failed
> >
> >
> > Ref:
> > Full test output of of-unittest
> > https://lkft.validation.linaro.org/scheduler/job/4458582#L1019
> > https://lkft.validation.linaro.org/scheduler/job/4404330#L428
> >
> > Kunit example test output that we are running in our daily CI loop.
> > https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/tests/
> >
> > Kunit Full test logs:
> > https://lkft.validation.linaro.org/scheduler/job/3643324
> >
> > https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/test/kunit_log_test/log
> >
> >
> > --
> > Linaro LKFT
> > https://lkft.linaro.org
> >
>


* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-02 20:29   ` Rob Herring
@ 2022-02-02 21:14     ` Brendan Higgins
  2022-02-02 22:01     ` Frank Rowand
  1 sibling, 0 replies; 9+ messages in thread
From: Brendan Higgins @ 2022-02-02 21:14 UTC (permalink / raw)
  To: Rob Herring
  Cc: Frank Rowand, Naresh Kamboju,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On Wed, Feb 2, 2022 at 3:29 PM Rob Herring <robh+dt@kernel.org> wrote:
>
> On Wed, Feb 2, 2022 at 12:38 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >
> > On 2/2/22 5:31 AM, Naresh Kamboju wrote:
> > > Linaro started doing Linux kernel Functional Validation (LKFT).
> > > As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
> > > daily test CI.
> > >
> > > The output of the test looks as below. The current problem is that we
> > > have a hard time to see (grep) pass/fail for each individual test. We
> > > only see a summary at the end with x pass and y fails.
> >
> > The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
> > are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
> > loglevel to allow debug output.
>
> That alone is not enough. Unless there's a DEBUG define, the
> pr_debug() is going to print nothing.
>
> > Unfortunately this can add lots of debug output, unless you use dynamic
> > debug to only enable debug for drivers/of/unittest.o.  There are only
> > a few other pr_debug() messages in unittest.
>
> Dynamic debug is one option. Another would be a module param to enable
> running the tests. Then it can be built, but has to be explicitly
> enabled at boot time. A 3rd option is making it work as a module, then
> it's run when loaded. (That was the original plan.)
>
> > I think a better solution would be to add a config option, something
> > like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
> > at loglevel KERN_ERR.  I'll submit a patch for that and see what the
> > review responses are.
>
> Nak for another config option.
>
> > > We would like to get your opinion of how hard it would be to include
> > > that in the output per test. Maybe like TAP version 14?
> > > Another question would be how hard do you think it would be to rewrite
> > > this to a kunit test, if even applicable? I have provided the kunit
> > > output links at the end of this email.
> >
> > Devicetree unittests were suggested as a good candidate as a first
> > test to convert to kunit when kunit was implemented.  Brendan tried
> > to convert it, and we quickly saw that it was not a good candidate.
> > Devicetree unittests do not fit the unit test mold; they are a very
> > different creature.  Brendan has a good term for this type of test
> > (Brendan, was it "acceptance" test?).
>
> I thought you ended up agreeing with using kunit? Whatever you want to
> call the DT tests, there's not really any good reason to do our own
> pass/fail messages.

I think you are referring to this email[1]?

I talked to Frank since then a number of times at conferences and on
email, and I think this topic came up a couple of times, but I don't
remember where things ended up. I just assumed that nothing was going
to happen here because of how much time had passed.

Nevertheless, if it helps, we now have an option for printing test
statistics to dmesg[2]; I remember that is something that Frank had
asked for.

Cheers!

[1] https://lkml.org/lkml/2019/9/21/188
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=acd8e8407b8fcc3229d6d8558cac338bea801aed


* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-02 20:29   ` Rob Herring
  2022-02-02 21:14     ` Brendan Higgins
@ 2022-02-02 22:01     ` Frank Rowand
  2022-02-03  0:15       ` Rob Herring
  1 sibling, 1 reply; 9+ messages in thread
From: Frank Rowand @ 2022-02-02 22:01 UTC (permalink / raw)
  To: Rob Herring
  Cc: Naresh Kamboju, Brendan Higgins,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On 2/2/22 2:29 PM, Rob Herring wrote:
> On Wed, Feb 2, 2022 at 12:38 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/2/22 5:31 AM, Naresh Kamboju wrote:
>>> Linaro started doing Linux kernel Functional Validation (LKFT).
>>> As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
>>> daily test CI.
>>>
>>> The output of the test looks as below. The current problem is that we
>>> have a hard time to see (grep) pass/fail for each individual test. We
>>> only see a summary at the end with x pass and y fails.
>>
>> The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
>> are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
>> loglevel to allow debug output.
> 
> That alone is not enough. Unless there's a DEBUG define, the
> pr_debug() is going to print nothing.

I almost mentioned that detail, but decided I didn't need to given my
reference below to dynamic debug.

> 
>> Unfortunately this can add lots of debug output, unless you use dynamic
>> debug to only enable debug for drivers/of/unittest.o.  There are only
>> a few other pr_debug() messages in unittest.
> 
> Dynamic debug is one option. Another would be a module param to enable
> running the tests

I could implement that.

But that does not address the issue of the individual test pass messages
being printed at loglevel KERN_DEBUG.  Are you thinking I should add a
second module param that would enable printing the test pass messages
at the same loglevel as the test fail messages?

I'm not up to date on module params.  I'm assuming that I can pass these
new params on the boot command line if I build unittest as a built-in
instead of as a module.

> Then it can be built, but has to be explicitly
> enabled at boot time.

> A 3rd option is making it work as a module, then
> it's run when loaded. (That was the original plan.)
> 
>> I think a better solution would be to add a config option, something
>> like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
>> at loglevel KERN_ERR.  I'll submit a patch for that and see what the
>> review responses are.
> 
> Nak for another config option.

Because?

> 
>>> We would like to get your opinion of how hard it would be to include
>>> that in the output per test. Maybe like TAP version 14?
>>> Another question would be how hard do you think it would be to rewrite
>>> this to a kunit test, if even applicable? I have provided the kunit
>>> output links at the end of this email.
>>
>> Devicetree unittests were suggested as a good candidate as a first
>> test to convert to kunit when kunit was implemented.  Brendan tried
>> to convert it, and we quickly saw that it was not a good candidate.
>> Devicetree unittests do not fit the unit test mold; they are a very
>> different creature.  Brendan has a good term for this type of test
>> (Brendan, was it "acceptance" test?).
> 
> I thought you ended up agreeing with using kunit? Whatever you want to

Not the kunit _framework_.

> call the DT tests, there's not really any good reason to do our own
> pass/fail messages.

Yes, I would like to change the pass fail messages to follow the same
standard as kunit, so that the test frameworks could easily process
the unittest results.  That has been on my todo list.

> 
> Rob
> .
> 



* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-02 20:54   ` Brendan Higgins
@ 2022-02-02 22:04     ` Frank Rowand
  0 siblings, 0 replies; 9+ messages in thread
From: Frank Rowand @ 2022-02-02 22:04 UTC (permalink / raw)
  To: Brendan Higgins
  Cc: Naresh Kamboju, Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On 2/2/22 2:54 PM, Brendan Higgins wrote:
> On Wed, Feb 2, 2022 at 1:38 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/2/22 5:31 AM, Naresh Kamboju wrote:
>>> Linaro started doing Linux kernel Functional Validation (LKFT).
>>> As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
>>> daily test CI.
>>>
>>> The output of the test looks as below. The current problem is that we
>>> have a hard time to see (grep) pass/fail for each individual test. We
>>> only see a summary at the end with x pass and y fails.
>>
>> The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
>> are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
>> loglevel to allow debug output.
>>
>> Unfortunately this can add lots of debug output, unless you use dynamic
>> debug to only enable debug for drivers/of/unittest.o.  There are only
>> a few other pr_debug() messages in unittest.
>>
>> I think a better solution would be to add a config option, something
>> like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
>> at loglevel KERN_ERR.  I'll submit a patch for that and see what the
>> review responses are.
>>
>>> We would like to get your opinion of how hard it would be to include
>>> that in the output per test. Maybe like TAP version 14?
>>> Another question would be how hard do you think it would be to rewrite
>>> this to a kunit test, if even applicable? I have provided the kunit
>>> output links at the end of this email.
>>
>> Devicetree unittests were suggested as a good candidate as a first
>> test to convert to kunit when kunit was implemented.  Brendan tried
>> to convert it, and we quickly saw that it was not a good candidate.
>> Devicetree unittests do not fit the unit test mold; they are a very
>> different creature.  Brendan has a good term for this type of test
>> (Brendan, was it "acceptance" test?).
> 
> I understood that it was either an integration test or end-to-end test
> (probably an integration test): https://lkml.org/lkml/2019/3/21/1124

Yes, thanks.  Those are the terms I was trying to remember.

-Frank

> 
> Standardizing integration tests in the kernel is still something that
> hasn't happened yet, but there are some examples of integration tests
> being written in KUnit (the KASAN KUnit test is probably the most
> notable example). There are definitely some others written in
> kselftest. It's kind of a tough area because integration tests are
> kind of defined by being in between unit tests and end-to-end tests.
> 
>>> Test output:
>>> ------------
>>> [    0.000000] Booting Linux on physical CPU 0x0000000100 [0x410fd033]
>>> [    0.000000] Linux version 5.17.0-rc1-next-20220127
>>> (tuxmake@tuxmake) (aarch64-linux-gnu-gcc (Debian 11.2.0-9) 11.2.0, GNU
>>> ld (GNU Binutils for Debian) 2.37) #1 SMP PREEMPT @1643255563
>>> [    0.000000] Machine model: ARM Juno development board (r2)
>>>
>>> <trimmed output>
>>>
>>> [    3.285226] ### dt-test ### start of unittest - you will see error messages
>>> [    3.293269] ### dt-test ### EXPECT \ : Duplicate name in
>>> testcase-data, renamed to \"duplicate-name#1\"
>>> [    3.293456] Duplicate name in testcase-data, renamed to \"duplicate-name#1\"
>>> [    3.313367] ### dt-test ### EXPECT / : Duplicate name in
>>> testcase-data, renamed to \"duplicate-name#1\"
>>> [    3.314709] ### dt-test ### EXPECT \ : OF:
>>> /testcase-data/phandle-tests/consumer-a: could not get
>>> #phandle-cells-missing for /testcase-data/phandle-tests/provider1
>>> [    3.323968] OF: /testcase-data/phandle-tests/consumer-a: could not
>>> get #phandle-cells-missing for /testcase-data/phandle-tests/provider1
>>>
>>> <trimmed output>
>>>
>>> [    5.118400] ### dt-test ### EXPECT / : OF: overlay: ERROR: multiple
>>> fragments add and/or delete node
>>> /testcase-data-2/substation@100/motor-1/electric
>>> [    5.121358] atkbd serio1: keyboard reset failed on 1c070000.kmi
>>> [    5.134160] ### dt-test ### end of unittest - 257 passed, 0 failed
>>>
>>>
>>> Ref:
>>> Full test output of of-unittest
>>> https://lkft.validation.linaro.org/scheduler/job/4458582#L1019
>>> https://lkft.validation.linaro.org/scheduler/job/4404330#L428
>>>
>>> Kunit example test output that we are running in our daily CI loop.
>>> https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/tests/
>>>
>>> Kunit Full test logs:
>>> https://lkft.validation.linaro.org/scheduler/job/3643324
>>>
>>> https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.10.y/build/v5.10.70/testrun/5965109/suite/kunit/test/kunit_log_test/log
>>>
>>>
>>> --
>>> Linaro LKFT
>>> https://lkft.linaro.org
>>>
>>



* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-02 22:01     ` Frank Rowand
@ 2022-02-03  0:15       ` Rob Herring
  2022-02-03  4:52         ` Frank Rowand
  0 siblings, 1 reply; 9+ messages in thread
From: Rob Herring @ 2022-02-03  0:15 UTC (permalink / raw)
  To: Frank Rowand
  Cc: Naresh Kamboju, Brendan Higgins,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On Wed, Feb 2, 2022 at 4:01 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/2/22 2:29 PM, Rob Herring wrote:
> > On Wed, Feb 2, 2022 at 12:38 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> On 2/2/22 5:31 AM, Naresh Kamboju wrote:
> >>> Linaro started doing Linux kernel Functional Validation (LKFT).
> >>> As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
> >>> daily test CI.
> >>>
> >>> The output of the test looks as below. The current problem is that we
> >>> have a hard time to see (grep) pass/fail for each individual test. We
> >>> only see a summary at the end with x pass and y fails.
> >>
> >> The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
> >> are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
> >> loglevel to allow debug output.
> >
> > That alone is not enough. Unless there's a DEBUG define, the
> > pr_debug() is going to print nothing.
>
> I almost mentioned that detail, but decided I didn't need to given my
> reference below to dynamic debug.
>
> >
> >> Unfortunately this can add lots of debug output, unless you use dynamic
> >> debug to only enable debug for drivers/of/unittest.o.  There are only
> >> a few other pr_debug() messages in unittest.
> >
> > Dynamic debug is one option. Another would be a module param to enable
> > running the tests
>
> I could implement that.
>
> But that does not address the issue of the individual test pass messages
> being printed at loglevel KERN_DEBUG.  Are you thinking I should add a
> second module param that would enable printing the test pass messages
> at the same loglevel as the test fail messages?

Make them info level perhaps. If someone wants to run the unittests,
then I think we should just print everything. It's already
incomprehensible with all the EXPECT lines...

> I'm not up to date on module params.  I'm assuming that I can pass these
> new params on the boot command line if I build unittest as a built-in
> instead of as a module.

Yes.
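
For background, parameters of built-in code are passed on the kernel
command line as `<module>.<param>=<value>`. The parameter name below
is purely hypothetical, since unittest does not define one today:

```sh
# Hypothetical parameter name, for illustration only:
#   built-in:   append  of_unittest.run_tests=1  to the kernel command line
#   as module:  modprobe of_unittest run_tests=1
```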

> > Then it can be built, but has to be explicitly
> > enabled at boot time.
>
> > A 3rd option is making it work as a module, then
> > it's run when loaded. (That was the original plan.)
> >
> >> I think a better solution would be to add a config option, something
> >> like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
> >> at loglevel KERN_ERR.  I'll submit a patch for that and see what the
> >> review responses are.
> >
> > Nak for another config option.
>
> Because?

It's another config option... Another build combination to test...
Users have to rebuild to change behavior...

> >>> We would like to get your opinion of how hard it would be to include
> >>> that in the output per test. Maybe like TAP version 14?
> >>> Another question would be how hard do you think it would be to rewrite
> >>> this to a kunit test, if even applicable? I have provided the kunit
> >>> output links at the end of this email.
> >>
> >> Devicetree unittests were suggested as a good candidate as a first
> >> test to convert to kunit when kunit was implemented.  Brendan tried
> >> to convert it, and we quickly saw that it was not a good candidate.
> >> Devicetree unittests do not fit the unit test mold; they are a very
> >> different creature.  Brendan has a good term for this type of test
> >> (Brendan, was it "acceptance" test?).
> >
> > I thought you ended up agreeing with using kunit? Whatever you want to
>
> Not the kunit _framework_.
>
> > call the DT tests, there's not really any good reason to do our own
> > pass/fail messages.
>
> Yes, I would like to change the pass fail messages to follow the same
> standard as kunit, so that the test frameworks could easily process
> the unittest results.  That has been on my todo list.

Ah, I misunderstood.

Rob
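
[Until the output format changes, the grep problem Naresh describes can be
worked around by matching the summary line unittest prints at the end of a
run. A minimal sketch; the "### dt-test ###" prefix and summary wording
follow what drivers/of/unittest.c prints, but the sample log and counts
below are fabricated for illustration:]

```shell
# Build a sample boot log containing a dt-test summary line,
# then extract the pass/fail counts from it.
cat > boot.log <<'EOF'
[    3.285226] ### dt-test ### start of unittest - you will see error messages
[    3.293456] Duplicate name in testcase-data, renamed to "duplicate-name#1"
[    4.123456] ### dt-test ### end of unittest - 224 passed, 0 failed
EOF

# Match only the end-of-run summary, then pull out the two counts.
grep -E '### dt-test ### end of unittest' boot.log \
  | sed -E 's/.*- ([0-9]+) passed, ([0-9]+) failed/\1 \2/'
```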

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Device Tree runtime unit tests: Harmonisation
  2022-02-03  0:15       ` Rob Herring
@ 2022-02-03  4:52         ` Frank Rowand
  0 siblings, 0 replies; 9+ messages in thread
From: Frank Rowand @ 2022-02-03  4:52 UTC (permalink / raw)
  To: Rob Herring
  Cc: Naresh Kamboju, Brendan Higgins,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Anders Roxell

On 2/2/22 6:15 PM, Rob Herring wrote:
> On Wed, Feb 2, 2022 at 4:01 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/2/22 2:29 PM, Rob Herring wrote:
>>> On Wed, Feb 2, 2022 at 12:38 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>
>>>> On 2/2/22 5:31 AM, Naresh Kamboju wrote:
>>>>> Linaro started doing Linux kernel Functional Validation (LKFT).
>>>>> As part of LKFT recently we have enabled CONFIG_OF_UNITTEST=y in our
>>>>> daily test CI.
>>>>>
>>>>> The output of the test looks as below. The current problem is that we
>>>>> have a hard time seeing (grepping) pass/fail for each individual test. We
>>>>> only see a summary at the end with x passes and y fails.
>>>>
>>>> The FAIL messages are printed at loglevel KERN_ERR.  The pass messages
>>>> are printed at loglevel KERN_DEBUG.  To see the pass messages, set the
>>>> loglevel to allow debug output.
>>>
>>> That alone is not enough. Unless there's a DEBUG define, the
>>> pr_debug() is going to print nothing.
>>
>> I almost mentioned that detail, but decided I didn't need to given my
>> reference below to dynamic debug.
>>
>>>
>>>> Unfortunately this can add lots of debug output, unless you use dynamic
>>>> debug to only enable debug for drivers/of/unittest.o.  There are only
>>>> a few other pr_debug() messages in unittest.
>>>
>>> Dynamic debug is one option. Another would be a module param to enable
>>> running the tests
>>
>> I could implement that.
>>
>> But that does not address the issue of the individual test pass messages
>> being printed at loglevel KERN_DEBUG.  Are you thinking I should add a
>> second module param that would enable printing the test pass messages
>> at the same loglevel as the test fail messages?
> 
> Make them info level perhaps. If someone wants to run the unittests,
> then I think we should just print everything. It's already
> incomprehensible with all the EXPECT lines...

OK.  I thought there would be pushback against just printing everything.
I'll redo the patch to have unittest print the pass messages always.


> 
>> I'm not up to date on module params.  I'm assuming that I can pass these
>> new params on the boot command line if I build unittest as a built-in
>> instead of as a module.
> 
> Yes.
> 
>>> Then it can be built, but has to be explicitly
>>> enabled at boot time.
>>
>>> A 3rd option is making it work as a module, then
>>> it's run when loaded. (That was the original plan.)
>>>
>>>> I think a better solution would be to add a config option, something
>>>> like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
>>>> at loglevel KERN_ERR.  I'll submit a patch for that and see what the
>>>> review responses are.
>>>
>>> Nak for another config option.
>>
>> Because?
> 
> It's another config option... Another build combination to test...
> Users have to rebuild to change behavior...

Thanks for the explanation.

-Frank

> 
>>>>> We would like to get your opinion of how hard it would be to include
>>>>> that in the output per test. Maybe like TAP version 14?
>>>>> Another question would be how hard do you think it would be to rewrite
>>>>> this to a kunit test, if even applicable? I have provided the kunit
>>>>> output links at the end of this email.
>>>>
>>>> Devicetree unittests were suggested as a good candidate as a first
>>>> test to convert to kunit when kunit was implemented.  Brendan tried
>>>> to convert it, and we quickly saw that it was not a good candidate.
>>>> Devicetree unittests do not fit the unit test mold; they are a very
>>>> different creature.  Brendan has a good term for this type of test
>>>> (Brendan, was it "acceptance" test?).
>>>
>>> I thought you ended up agreeing with using kunit? Whatever you want to
>>
>> Not the kunit _framework_.
>>
>>> call the DT tests, there's not really any good reason to do our own
>>> pass/fail messages.
>>
>> Yes, I would like to change the pass/fail messages to follow the same
>> standard as kunit, so that test frameworks can easily process the
>> unittest results.  That has been on my todo list.
> 
> Ah, I misunderstood.
> 
> Rob
> 

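[For anyone who wants the per-test pass messages before any patches land,
the dynamic-debug route Frank mentions looks roughly like this. A sketch,
not a recipe: it assumes CONFIG_DYNAMIC_DEBUG=y and a mounted debugfs, and
that unittest is built in so its pr_debug() sites are present at boot.]

```shell
# At runtime: enable only the pr_debug() calls in the unittest source file,
# avoiding the flood of debug output from the rest of the kernel.
echo 'file drivers/of/unittest.c +p' > /sys/kernel/debug/dynamic_debug/control

# Or on the kernel command line, combined with a setting that lets
# KERN_DEBUG messages reach the console:
#   dyndbg="file drivers/of/unittest.c +p" ignore_loglevel
```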


end of thread, other threads:[~2022-02-03  4:52 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-02 11:31 Device Tree runtime unit tests: Harmonisation Naresh Kamboju
2022-02-02 18:38 ` Frank Rowand
2022-02-02 20:29   ` Rob Herring
2022-02-02 21:14     ` Brendan Higgins
2022-02-02 22:01     ` Frank Rowand
2022-02-03  0:15       ` Rob Herring
2022-02-03  4:52         ` Frank Rowand
2022-02-02 20:54   ` Brendan Higgins
2022-02-02 22:04     ` Frank Rowand
