* [Buildroot] Buildroot runtime test infrastructure prototype
@ 2015-06-25 18:02 Thomas Petazzoni
  2015-06-26 15:48 ` Andreas Naumann
  2015-06-26 18:12 ` Thomas De Schampheleire
  0 siblings, 2 replies; 17+ messages in thread
From: Thomas Petazzoni @ 2015-06-25 18:02 UTC (permalink / raw)
  To: buildroot

Hello,

Our http://autobuild.buildroot.org tests are great, but they are only
build-time tests, and they only test random configurations of packages.
Things like bootloader packages, filesystem images and kernel builds are
never tested in an automated way, and no runtime testing is done. I
also want to be able to build small test programs exercising specific
features and run them on the target.

For a while, I have wanted to set up a small infrastructure to describe
specific test cases: a Buildroot configuration to build, with the
ability to boot some of those configurations automatically in Qemu.

So I finally went ahead and created such a small infrastructure. It's
very basic and minimal right now, and should be considered an
experiment. It's available at:

   https://github.com/tpetazzoni/buildroot-runtime-test

It's based on the Python unittest mechanism, and allows you to describe
test cases that consist of a Buildroot defconfig, plus a Python
function that can do some checks on the resulting build, boot the
system under Qemu, run some commands inside the Qemu system and check
their results.
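
To make this concrete, here is a rough sketch of what such a test case
could look like (the defconfig, Qemu command line, prompts and timeouts
are illustrative assumptions, not the prototype's actual API):

import os
import subprocess
import unittest

import pexpect  # assumed available on the host, drives the Qemu serial console


class TestDropbear(unittest.TestCase):
    defconfig = "qemu_arm_versatile_defconfig"   # illustrative
    builddir = "output-dropbear"

    def setUp(self):
        # Configure and build Buildroot out of tree (slow).
        subprocess.check_call(["make", "O=" + self.builddir, self.defconfig])
        subprocess.check_call(["make", "O=" + self.builddir])

    def test_ssh_server_listening(self):
        images = os.path.join(self.builddir, "images")
        # Boot the resulting image under Qemu and log in on the console.
        qemu = pexpect.spawn(
            "qemu-system-arm -M versatilepb -nographic"
            " -kernel %s/zImage"
            " -drive file=%s/rootfs.ext2,if=scsi,format=raw"
            " -append 'root=/dev/sda console=ttyAMA0'" % (images, images))
        qemu.expect("buildroot login:", timeout=300)
        qemu.sendline("root")
        qemu.expect("# ")
        # Run a command on the target and check its output.
        qemu.sendline("netstat -ltn")
        qemu.expect("# ")
        self.assertIn(":22", qemu.before.decode())


if __name__ == "__main__":
    unittest.main()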

I've written a few tests to show how it could work: simple tests for
Dropbear and Python (on the package side) and several tests for
filesystem images: ext2/3/4, squashfs, jffs2, ubi, iso9660, yaffs2 (all
of them are boot tested, except yaffs2).

For now, it's very crude and basic. The README file explains how it
works, and gives a TODO list of some of the things that certainly need
to be improved.

Currently, this runtime test infrastructure is not set up to be
executed every day on the latest Git, but that is obviously the goal.

Contributions, comments and suggestions are welcome.

Best regards,

Thomas
-- 
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-25 18:02 [Buildroot] Buildroot runtime test infrastructure prototype Thomas Petazzoni
@ 2015-06-26 15:48 ` Andreas Naumann
  2015-06-26 16:20   ` Jeremy Rosen
  2015-06-26 18:12 ` Thomas De Schampheleire
  1 sibling, 1 reply; 17+ messages in thread
From: Andreas Naumann @ 2015-06-26 15:48 UTC (permalink / raw)
  To: buildroot

Dear Thomas,


Am 25.06.2015 um 20:02 schrieb Thomas Petazzoni:
> Hello,
>
> Our http://autobuild.buildroot.org tests are great, but they are only
> build-time tests, and they only test random configurations of packages.
> Things like bootloader packages, filesystem images, kernel build are
> never tested in an automated way, and no runtime testing is done. I
> also want to be able to build small test programs exercising specific
> features and run them on the target.

this is something I have been looking into for a while. Our goal was to
have some kind of acceptance test for our BSPs. I started testing
internals with qemu, but soon the focus shifted to physical boards and
real I/O, so the qemu target is hardly maintained any more.
Anyway, in the beginning I took an approach similar to the JUnit
framework, writing the testcases in Python, using TAP (the Test Anything
Protocol) for output and creating a number of actor objects to control
the input/output of the DUT (UART, RS485, telnet, ssh, powerswitch, CAN
transceiver, ...).
While this worked well for the developers, it became apparent that
understanding what was tested, and how, was rather difficult for people
who don't do this 100% of the time. In addition, documentation, test
reports and reusability of test steps were less than ideal. We then toyed a
little with a self-invented test language, but discarded all those
efforts once we discovered the Python-based 'Robot Framework'.
Its keyword-driven approach fits our needs very well because of the
resulting abstraction and readability of testcases, the configurability,
the many available libraries (most importantly for us SSH and telnet, easily
extensible with Python modules), the testcase editor RIDE, the Jenkins
plugin, xUnit output if needed ...

Maybe it's too high-level for the buildroot project, and I'm not an
expert in the xUnit world so I can't really compare. But anyway, Robot
provides functionality and structure that match the needs of embedded
testing quite well, so you may want to give its Quickstart a try.


best regards,
Andreas



* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-26 15:48 ` Andreas Naumann
@ 2015-06-26 16:20   ` Jeremy Rosen
  2015-06-28  9:50     ` Thomas Petazzoni
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Rosen @ 2015-06-26 16:20 UTC (permalink / raw)
  To: buildroot

Hello Thomas

More answers in the mail below, but we have a trainee working on an
automated test framework based on "Robot Framework" this summer, and
we plan to upstream it when we have something to show...

Fortunately, the work we are doing is not solving the same problem
as yours, but at this point it's really worth discussing so that the
two aspects can work together.

Our approach was mainly to have each package (on the buildroot side)
have its own RFW scriptlet, and have buildroot assemble these
scriptlets at make time to provide a test for the resulting system
that would only contain the tests for whatever packages were actually
compiled in.

We don't intend to automate the testing itself. Once an RFW script is
available, running it is not too complicated. The hard part is
connecting to the target, and that is very target-dependent.

At this point it would be possible for us to switch back to any test
framework if RFW is not the one the buildroot project wants to use,
but I personally think that RFW is really a good framework for
high-level system testing (as opposed to single-software unit testing):
it is very simple to learn by looking at examples, the keyword approach
is very readable, and adding our own modules in Python is fairly
easy.


> 
> Am 25.06.2015 um 20:02 schrieb Thomas Petazzoni:
> > Hello,
> >
> > Our http://autobuild.buildroot.org tests are great, but they are
> > only
> > build-time tests, and they only test random configurations of
> > packages.
> > Things like bootloader packages, filesystem images, kernel build
> > are
> > never tested in an automated way, and no runtime testing is done. I
> > also want to be able to build small test programs exercising
> > specific
> > features and run them on the target.
> 
> this is something I have been looking into for a while. Our goal was
> to
> have some kind of acceptance test for our BSPs. I started testing
> internals with qemu, but soon the focus shifted to physical boards
> and
> real IO, so the qemu target is hardly maintained any more.
> Anyway, in the beginning I took a similar approach to the JUnit
> framework, writing the testcases in Python using the TAP (test
> anything
> protocol) for output and creating a number of actor objects to
> control
> the input/output of the DUT (UART, RS485, telnet, ssh, powerswitch,
> CAN
> transceiver, ...).

I second that need. This is the big motivation behind this work. The
systematic testing and retesting of system functionality is very
long and tedious. Some manual testing will always be needed, but
being able to check things like "there is a running ssh server" or
"a specific user was created" would save us a huge amount of time.
It is very important for us that those tests can be tuned to match
our specific tuning of the distro (i.e. when we overlay a config file
for a daemon, we need to run the corresponding tests).

> While this worked well for the developers, it became apparent that
> understanding what was tested and how was rather difficult for people
> that dont do this 100% of the time. In addition, documentation, test
> reports, reusability of test-steps was less than ideal. We then toyed
> a
> little with a self invented test language, but discarded all those
> efforts once we discovered the Python based 'Robot Framework'.
> Their keyword-driven approach fits our needs very well because of the
> resulting abstraction and readability of testcases, the
> configurability,
> the many available libraries (most important for us ssh and telnet,
> easy
> extensible with Python modules), the testcase editor RIDE, Jenkins
> plugin, xUnit output if needed ...
> 

This matches our story perfectly, though I had never heard of RIDE. I'll
have a look.

RFW is also able to export its test results (including the keywords, which
are the source code of the test for all practical purposes) as XML. We used
that to autogenerate complete test reports.


> Maybe it's too high level for the buildroot project and I'm not an
> expert in the xUnit world so I cant really compare. But anyway Robot
> provides functionality and structure that matches the need of
> embedded
> testing quite well, so you may want to give their Quickstart a try.
> 

As said above, I highly second that. RFW is really targeting system
testing and it would be a shame not to use it where its strengths lie...

Again, we have a trainee working on that, which means things can get
done here, and the point is to upstream. So feel free to discuss what you
want and we will do our best to fit it into our own approach.

best regards

Jérémy


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-25 18:02 [Buildroot] Buildroot runtime test infrastructure prototype Thomas Petazzoni
  2015-06-26 15:48 ` Andreas Naumann
@ 2015-06-26 18:12 ` Thomas De Schampheleire
  2015-06-26 18:26   ` Thomas De Schampheleire
  1 sibling, 1 reply; 17+ messages in thread
From: Thomas De Schampheleire @ 2015-06-26 18:12 UTC (permalink / raw)
  To: buildroot

Hi Thomas,

On Thu, Jun 25, 2015 at 8:02 PM, Thomas Petazzoni
<thomas.petazzoni@free-electrons.com> wrote:
> Hello,
>
> Our http://autobuild.buildroot.org tests are great, but they are only
> build-time tests, and they only test random configurations of packages.
> Things like bootloader packages, filesystem images, kernel build are
> never tested in an automated way, and no runtime testing is done. I
> also want to be able to build small test programs exercising specific
> features and run them on the target.
>
> Since a while, I wanted to setup a small infrastructure to be able to
> describe specific test cases: a Buildroot configuration to build, with
> the ability to boot some of those configurations automatically in Qemu.
>
> So I finally went ahead and create such a small infrastructure. It's
> very basic and minimal right now, and should be considered as an
> experiment. It's available at:
>
>    https://github.com/tpetazzoni/buildroot-runtime-test
>
> It's based on the Python unittest mechanism, and allows to describe
> test cases that consist in a Buildroot defconfig, plus a Python
> function that can do some checks on the resulting build, boot the
> system under Qemu, run some commands inside the Qemu system and check
> their results.
>
> I've written a few tests to show how it could work: simple tests for
> Dropbear and Python (on the package side) and several tests for
> filesystem images: ext2/3/4, squashfs, jffs2, ubi, iso9660, yaffs2 (all
> of them are boot tested, except yaffs2).
>
> For now, it's very crude and basic. The README file explains how it
> works, and gives a TODO listing some of the things that for sure need
> to be improved.
>
> Currently, this runtime test infrastructure is not set up to be
> executed every day on the latest Git, but it is obviously the goal.
>
> Contributions, comments and suggestions are welcome.

A while ago I learned about pytest, an alternative test framework to
unittest. The website http://pytest.org/latest/ contains good
documentation, but here are some of the benefits:

- intelligent asserts: you can just write 'assert <expression>' where
the expression can be any Python expression. For example, 'assert x == 3'
or 'assert "Login:" in output'. You don't need special 'assertEqual',
'assertNotEqual', 'assertIn', etc. Pytest will analyze these assert
statements and do the right thing. Also, the error reporting in case
the assert is not met is quite intelligent and much better than the
standard output of unittest.
(http://pytest.org/latest/assert.html#assert-with-the-assert-statement)

- parametrization of tests: it is very easy to run the same test with
different parameters. Say that you want to repeat the JFFS2 test, but
with a slightly different config; see the sketch after this list.
(http://pytest.org/latest/parametrize.html#parametrized-test-functions)
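
A small sketch combining both points (the Buildroot option names are
purely illustrative; a real test would build and boot the result
instead of only checking the config fragment):

import pytest


@pytest.mark.parametrize("config", [
    ["BR2_TARGET_ROOTFS_JFFS2=y"],
    ["BR2_TARGET_ROOTFS_JFFS2=y", "BR2_TARGET_ROOTFS_JFFS2_PAD=y"],
    ["BR2_TARGET_ROOTFS_EXT2=y", "BR2_TARGET_ROOTFS_EXT2_4=y"],
])
def test_rootfs_fragment(config):
    fragment = "\n".join(config) + "\n"
    # Plain asserts are enough; pytest shows the offending values on failure.
    assert "BR2_TARGET_ROOTFS" in fragment
    assert all(line.endswith("=y") for line in fragment.splitlines())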

Converting the existing code to use pytest isn't much work, really.
In fact, you can already run the existing unittests through pytest
without changes: you would just call 'py.test' (after installing it
with 'pip install pytest'; you may like 'pip install pytest-sugar'
too).

Best regards,
Thomas


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-26 18:12 ` Thomas De Schampheleire
@ 2015-06-26 18:26   ` Thomas De Schampheleire
  0 siblings, 0 replies; 17+ messages in thread
From: Thomas De Schampheleire @ 2015-06-26 18:26 UTC (permalink / raw)
  To: buildroot

On Fri, Jun 26, 2015 at 8:12 PM, Thomas De Schampheleire
<patrickdepinguin@gmail.com> wrote:
> Hi Thomas,
>
> On Thu, Jun 25, 2015 at 8:02 PM, Thomas Petazzoni
> <thomas.petazzoni@free-electrons.com> wrote:
>> Hello,
>>
>> Our http://autobuild.buildroot.org tests are great, but they are only
>> build-time tests, and they only test random configurations of packages.
>> Things like bootloader packages, filesystem images, kernel build are
>> never tested in an automated way, and no runtime testing is done. I
>> also want to be able to build small test programs exercising specific
>> features and run them on the target.
>>
>> Since a while, I wanted to setup a small infrastructure to be able to
>> describe specific test cases: a Buildroot configuration to build, with
>> the ability to boot some of those configurations automatically in Qemu.
>>
>> So I finally went ahead and create such a small infrastructure. It's
>> very basic and minimal right now, and should be considered as an
>> experiment. It's available at:
>>
>>    https://github.com/tpetazzoni/buildroot-runtime-test
>>
>> It's based on the Python unittest mechanism, and allows to describe
>> test cases that consist in a Buildroot defconfig, plus a Python
>> function that can do some checks on the resulting build, boot the
>> system under Qemu, run some commands inside the Qemu system and check
>> their results.
>>
>> I've written a few tests to show how it could work: simple tests for
>> Dropbear and Python (on the package side) and several tests for
>> filesystem images: ext2/3/4, squashfs, jffs2, ubi, iso9660, yaffs2 (all
>> of them are boot tested, except yaffs2).
>>
>> For now, it's very crude and basic. The README file explains how it
>> works, and gives a TODO listing some of the things that for sure need
>> to be improved.
>>
>> Currently, this runtime test infrastructure is not set up to be
>> executed every day on the latest Git, but it is obviously the goal.
>>
>> Contributions, comments and suggestions are welcome.
>
> A while ago I learned about pyTest, an alternative test suite over unittest.
> The website http://pytest.org/latest/ contains good documentation, but
> some benefits:
>
> - intelligent asserts: you can just write 'assert <expression>' where
> expression can be any Python expression. For example, 'assert x == 3'
> or 'assert "Login:" in output'. You don't need special 'assertEqual',
> 'assertNotEqual', 'assertIn', etc. Pytest will analyze these assert
> statements and do the right thing. Also, the error reporting in case
> the assert is not met is quite intelligent and much better than
> standard output of unittest.
> (http://pytest.org/latest/assert.html#assert-with-the-assert-statement)
>
> - parametrization of tests: it is very easy to run the same test with
> different parameters. Say that you want to repeat the JFFS2 test, but
> with a slightly different config.
> (http://pytest.org/latest/parametrize.html#parametrized-test-functions)

- and I forgot: fixtures. They make it easy to create building
blocks that different tests can build upon. A fixture can then perform
some common action without needing to copy/paste it all the time.
http://pytest.org/latest/fixture.html#fixture
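
For example, a minimal sketch, assuming it is run from a Buildroot
source tree (the defconfig name is illustrative):

import subprocess

import pytest


@pytest.fixture
def buildroot_outputdir(tmpdir):
    # Common building block: give each test a fresh out-of-tree output
    # directory with a generated .config, and clean it up afterwards.
    outdir = str(tmpdir.join("output"))
    subprocess.check_call(["make", "O=" + outdir, "qemu_arm_versatile_defconfig"])
    yield outdir
    subprocess.check_call(["make", "O=" + outdir, "clean"])


def test_config_is_generated(buildroot_outputdir):
    # Tests just take the fixture as an argument and reuse the setup
    # without copy/pasting it.
    with open(buildroot_outputdir + "/.config") as f:
        assert "BR2_ARCH" in f.read()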


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-26 16:20   ` Jeremy Rosen
@ 2015-06-28  9:50     ` Thomas Petazzoni
  2015-06-30  7:06       ` Andreas Naumann
  2015-06-30 16:06       ` Jeremy Rosen
  0 siblings, 2 replies; 17+ messages in thread
From: Thomas Petazzoni @ 2015-06-28  9:50 UTC (permalink / raw)
  To: buildroot

Jeremy,

On Fri, 26 Jun 2015 18:20:33 +0200 (CEST), Jeremy Rosen wrote:

> more answers in the mail below, but we have a trainee working on an
> automated test framework based on "Robot Framework" this summer and
> we planned to upstream it when we have something to show...

In fact, when I started working on this runtime test infrastructure, I
also posted on Google Plus about the available frameworks, and one of
the answers was Robot Framework. So I had a quick look, and definitely
could not understand how it can work: you are apparently not writing
the tests in some real programming language, but in some sort of weird
"tabular" format. I really don't understand how you can express
complicated test scenarios with such a limited language.

The Python unittest stuff just runs Python code, so you can express
whatever complicated test logic you want.

> Our approch was mainly to have each package (on the buildroot side
> have its own RFW scriptlet and have buildroot assemble these 
> scriptlets at make time to provide a test for the resulting system
> that would only contain the tests for whatever package was actually
> compiled in.

At some point I had indeed thought of having some per-package test cases
directly in package/<foo>/, for each package. But you anyway also need
to describe a complete Buildroot configuration for each test, in order
to have a complete system that actually boots and makes sense.

And in fact, testing packages is not the only target of this "runtime
test infrastructure". I also want to validate core features like rootfs
overlays, post-build scripts, the users/permissions/device tables, the
myriad possibilities of Linux kernel building, global patch directories,
etc.

For example, just this week I discovered that if you write:

FOO_OVERRIDE_SRCDIR = /path/to/sources/ 

Buildroot will rsync your *entire* root filesystem in
output/build/foo-custom/ if you had the crazy idea of leaving a
trailing space at the end of /path/to/sources.

> At this point it would be possible for us to switch back to any test
> framework if RFW is not the one the buildroot project wants to use
> but I personally think that RFW is really a good framework for high
> level system testing (opposed to single-software unit testing) it
> is very simple to learn by looking at examples, the keyword approch
> is very readable and adding our own modules in python is fairly 
> easy

Do you have examples of what the RFW test cases for Buildroot look like?

I've released my code with the intent that others can have a look and
get a clear view of what the prototype looks like. Without seeing how
it looks with RFW, I can't really make up my mind on whether it is a
good alternative solution or not.

Best regards,

Thomas
-- 
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-28  9:50     ` Thomas Petazzoni
@ 2015-06-30  7:06       ` Andreas Naumann
  2015-06-30  7:39         ` Thomas Petazzoni
  2015-06-30 16:06       ` Jeremy Rosen
  1 sibling, 1 reply; 17+ messages in thread
From: Andreas Naumann @ 2015-06-30  7:06 UTC (permalink / raw)
  To: buildroot

Hi Thomas,

Am 28.06.2015 um 11:50 schrieb Thomas Petazzoni:

> the answers was Robot Framework. So I had a quick look, and definitely
> could not understand how it can work: you are apparently not writing
> the tests in some real programming language, but in some sort of weird
> "tabular" format. I really don't understand how you can express
> complicated test scenarios with such a limited language.
>
> The Python unittest stuff just runs Python code, so you can express
> whatever complicated test logic you want.

RFW also runs plain Python code; the only difference is that it does it
exclusively via its internal, external or user-written libraries. The
libraries are simply Python classes with function calls like
open_connection(self, host, port=23, prompt, ...) or
execute_command(self, command, loglevel=None).
In RFW's tabular format this then looks like:
Open Connection | 192.168.0.37 | prompt=[root]#
Login           | root         | pw
Execute Command | echo Hello World

which in an HTML table is quite readable. RFW calls these keywords. You can
also create custom keywords consisting of any other keywords.
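
For reference, such a user-written library is just a plain Python class,
where each public method name becomes a keyword ("open_connection" becomes
"Open Connection", and so on). A rough sketch, with illustrative names
rather than any actual library we use:

import telnetlib


class TargetLibrary(object):

    def open_connection(self, host, port=23):
        # Robot Framework exposes this as the 'Open Connection' keyword.
        self._conn = telnetlib.Telnet(host, int(port))

    def login(self, user, password):
        self._conn.read_until(b"login: ")
        self._conn.write(user.encode() + b"\n")
        self._conn.read_until(b"Password: ")
        self._conn.write(password.encode() + b"\n")

    def execute_command(self, command):
        # Returns everything printed up to the next shell prompt.
        self._conn.write(command.encode() + b"\n")
        return self._conn.read_until(b"# ").decode()
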
So while the libraries are the equivalent of the fixtures that Thomas S.
was talking about, you can group keywords that form certain steps into
domain-specific resource files, like filesystem, package, qemu...
Using those higher-level keywords you can then hide the lengthy
parameter set from the top-level testcase table, which leads e.g. to
something like:
Buildroot.Add Package To Defconfig | dropbear
Buildroot.Compile
Qemu.Start System
Telnet.Connect And Login
Command Output Should Contain | netstat -ltn 2 | 0.0.0.0:22

There is another resource type, Variables, which allows for
configuration of keywords and testcases, but rather than explaining any
longer, I'll try to come up with a working example...

regards,
Andreas


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-30  7:06       ` Andreas Naumann
@ 2015-06-30  7:39         ` Thomas Petazzoni
  2015-06-30  8:38           ` Jeremy Rosen
  0 siblings, 1 reply; 17+ messages in thread
From: Thomas Petazzoni @ 2015-06-30  7:39 UTC (permalink / raw)
  To: buildroot

Andreas,

On Tue, 30 Jun 2015 09:06:25 +0200, Andreas Naumann wrote:

> RFW also runs plain Python code, the only difference is, it does it 
> exclusively via its internal, external or user-written libraries. The 
> libraries are simply Python classes with function calls like 
> open_connection(self, host, port=23, prompt, ...) or 
> execute_command(self, command, loglevel=None) .
> In the RFWs tabular format this then looks like
> Open Connection | 192.168.0.37 | prompt=[root]#
> Login           | root         | pw
> Execute Command | echo Hello World
> 
> which in an HTML table is quite readable. RFW calls it keywords. You can 
> also create custom keywords consisting of any other keywords.
> So while the libraries are the equivalent of the fixtures that Thomas S. 
> was talking about, you can group keywords that form certain steps into 
> domain specific resource files, like filesystem, package, qemu...
> Using those higher level keywords you then can hide the lengthy 
> parameter set from the top level testcase table, which leads e.g. to 
> something like:
> Buildroot.Add Package To Defconfig | dropbear
> Buildroot.Compile
> Qemu.Start System
> Telnet.Connect And Login
> Command Output Should Contain | netstat -ltn 2 | 0.0.0.0:22
> 
> There is another resource type, Variables, which allows for 
> configuration of keywords and testcases but before explaining any 
> longer, I'll try to come up with a working example...

Thanks a lot for giving more details about this. Indeed having a
working example would be nice. However, I'm not entirely convinced this
higher-level tabular format is really much more readable/useful than a
pure Python solution. This higher-level tabular format remains in any
case more limited than a real programming language, and it's a special
syntax you have to learn, while Python is known by a large number of
people already.

But maybe I would be more convinced by some other features of RFW. What
are its reporting capabilities? Can it run tests in parallel? Can we
easily integrate the tests with Jenkins to have them run every day?

Thanks,

Thomas
-- 
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-30  7:39         ` Thomas Petazzoni
@ 2015-06-30  8:38           ` Jeremy Rosen
  2015-06-30  8:46             ` Thomas Petazzoni
  2015-06-30 20:26             ` Andreas Naumann
  0 siblings, 2 replies; 17+ messages in thread
From: Jeremy Rosen @ 2015-06-30  8:38 UTC (permalink / raw)
  To: buildroot



> 
> Thanks a lot for giving more details about this. Indeed having a
> working example would be nice. However, I'm not entirely convinced
> this
> higher-level tabular format is really much more readable/useful than
> a
> pure Python solution. This higher-level tabular format remains in any
> case more limited than a real programming language, and it's a
> special
> syntax you have to learn, while Python is known by a large number of
> people already.
> 

We are in the process of reimplementing your examples in RFW to provide
some food for thought, stay tuned...

> But maybe I would be more convinced by some other features of RFW.
> What
> are its reporting capabilities?

Pretty good: RFW can report in xUnit-compatible XML, which can be
easily parsed by whatever tool you prefer. I have been autogenerating
reports with it for quite some time.

RFW also generates some HTML pages ready to be pushed to a server,
but that's less useful for the buildroot use-case.

An example of generated HTML can be found here:

http://robotframework.org/robotframework/latest/images/log_passed.html

> Can it run tests in parallel?

No, the RFW core has no parallel testing capabilities by itself. There
are plugins to do that, though...

> Can we
> easily integrate the tests with Jenkins to have them run everyday?

RFW has its own Jenkins plugin to gather test results; integration
is very easy.

The plugin is available here:

https://wiki.jenkins-ci.org/display/JENKINS/Robot+Framework+Plugin



* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-30  8:38           ` Jeremy Rosen
@ 2015-06-30  8:46             ` Thomas Petazzoni
  2015-06-30 20:26             ` Andreas Naumann
  1 sibling, 0 replies; 17+ messages in thread
From: Thomas Petazzoni @ 2015-06-30  8:46 UTC (permalink / raw)
  To: buildroot

Jeremy,

On Tue, 30 Jun 2015 10:38:49 +0200 (CEST), Jeremy Rosen wrote:

> We are in the process of reimplementing your examples in RFW to provide
> some food for thought, stay tuned...

Ok, good. Will be interesting to see.

> pretty good, RFW can report in an xunit-compatible xml which can be
> easily parsed by whatever tool you prefer. I have been autogenerating
> reports with it for quite some time
> 
> RFW also generates some HTML pages ready to be pushed on a server,
> but that's less usefull for the buildroot use-case.
> 
> An exemple of generated HTML can be found here:
> 
> http://robotframework.org/robotframework/latest/images/log_passed.html

Ok.

> > Can it run tests in parallel?
> 
> no, RFW core has no parallel testing capabilities by itself. There 
> are plugins to do that, though...

This is a very problematic thing. We will have lots of tests, and they
will be long. Not being able to run them in parallel is a big issue.

> > Can we
> > easily integrate the tests with Jenkins to have them run everyday?
> 
> RFW has its own jenkins plugin to harness test results. integration
> is very easy
> 
> the plugin is available here :
> 
> https://wiki.jenkins-ci.org/display/JENKINS/Robot+Framework+Plugin

This is good, however.

Best regards,

Thomas
-- 
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-28  9:50     ` Thomas Petazzoni
  2015-06-30  7:06       ` Andreas Naumann
@ 2015-06-30 16:06       ` Jeremy Rosen
  2015-07-01  7:30         ` Andreas Naumann
  1 sibling, 1 reply; 17+ messages in thread
From: Jeremy Rosen @ 2015-06-30 16:06 UTC (permalink / raw)
  To: buildroot

I am replying to an earlier mail of Thomas because there were
unanswered questions in there. See below...


----- Mail original -----
> Jeremy,
> 
> On Fri, 26 Jun 2015 18:20:33 +0200 (CEST), Jeremy Rosen wrote:
> 
> > more answers in the mail below, but we have a trainee working on an
> > automated test framework based on "Robot Framework" this summer and
> > we planned to upstream it when we have something to show...
> 
> In fact, when I started working on this runtime test infrastructure,
> I
> also posted on Google Plus about the available frameworks, and one of
> the answers was Robot Framework. So I had a quick look, and
> definitely
> could not understand how it can work: you are apparently not writing
> the tests in some real programming language, but in some sort of
> weird
> "tabular" format. I really don't understand how you can express
> complicated test scenarios with such a limited language.
> 

There is the possibility to do loops in RFW (see the "continue for loop"
and related keywords in the builtin library), but complicated things are
usually implemented as Python modules (or other languages, but mainly
Python).

Note that the philosophy of RFW is to implement building blocks as
libraries and assemble them into actual tests using the tabular language.

Test suites tend to be lots of very similar tests with slight variations,
and the tabular language is designed to make that kind of test easy
to read and write.

The existing libraries are pretty handy for implementing all the common
tasks of system testing, like starting/stopping/checking processes,
opening network connections and crafting packets, using an SSH
connection, dealing with XML to talk to an application, etc... Python
can do that too, of course (RFW is coded in Python after all), but RFW
does all the heavy lifting for us. Using raw Python or a lower-level
Python framework would probably mean reinventing the wheel in that
regard...

The point is that the high-level language is very readable, especially
for non-coders, which allows autogenerated documentation of the tests.

> The Python unittest stuff just runs Python code, so you can express
> whatever complicated test logic you want.
> 
> > Our approch was mainly to have each package (on the buildroot side
> > have its own RFW scriptlet and have buildroot assemble these
> > scriptlets at make time to provide a test for the resulting system
> > that would only contain the tests for whatever package was actually
> > compiled in.
> 
> At some point I indeed though of having some per-package test cases
> directly in package/<foo>/, for each package. But you also anyway
> need
> to describe a complete Buildroot configuration for each test, in
> order
> to have a complete system that actually boots and make sense.
> 

Yes and no... your approach is targeting the autobuild system, whereas I
was thinking more of providing tests for a user's own configuration or a
semi-randomly generated config. Easily testing on real hardware was
also part of the idea.

If we want to combine the two, there would be a need for a base, bootable
config (for example based on qemu) with a randomized overlay.

The per-package part would help mainly with the randomly added packages.
That would not prevent also having a package-independent base of tests
for the most basic functionalities. Per-package also has the advantage
that the tests will be maintained with the packages, thus by the people
with the know-how of what is being tested.

Overall I don't really see a reason not to have both approaches; they are
not really exclusive...

Now, one of the big differences is that you have host tests and target
tests intermixed. That is a bit problematic if one wants to use your
framework on a real board, since the part responsible for starting
the board and logging in is intermixed with the test. Again, my use-case
is slightly different from yours. Having host-side tests is interesting
and something we hadn't thought of. It should not be very hard to add
with an RFW approach if we go in that direction.

Separating the target and host tests (whatever approach is taken) seems
like a good idea to me; a way to override/disable the boot/login part
would allow users to run your tests on real hardware. That seems like
a good idea independently of the framework used.

> And in fact, testing packages is not the only target of this "runtime
> test infrastructure". I also want to validate core features like
> rootfs
> overlay, post-build script, users/permission/device table, the myriad
> possibilities of Linux kernel building, global patch directories,
> etc.
> 

Ok, so you want to test not just the image generated by buildroot but
also buildroot itself. That is different from what we have in mind,
but I can see the need for that. A complete framework would have to
deal with both.

Again, I think it is important to separate target testing from
host/buildroot testing. The former can be qemu but might also be
real hardware. In the case of real hardware, buildroot can't guess
how to upload and boot the image, but there are still things to
do, either by sending the tests to the target or by running them
through a remote connection (serial or ssh).


> 
> > At this point it would be possible for us to switch back to any
> > test
> > framework if RFW is not the one the buildroot project wants to use
> > but I personally think that RFW is really a good framework for high
> > level system testing (opposed to single-software unit testing) it
> > is very simple to learn by looking at examples, the keyword approch
> > is very readable and adding our own modules in python is fairly
> > easy
> 
> Do you have examples on what the RFW test cases for Buildroot look
> like?
> 
> I've released my code with the intent that others can have a look and
> get a clear view of what the prototype looks like. Without seeing how
> it looks with RFW, I can't really make up my mind on whether it is a
> good alternate solution or not.
> 

Denis reimplemented an infrastructure to redo your tests. The complete
infrastructure can be found here:

https://github.com/etkaDT/Rfw-buildroot-tests

Note that this is still a quick-and-dirty job that is not integrated
into buildroot, but it can start the custom qemu to test things on
the target. A better integration could lead to easier keywords, etc.

Denis also included the output of running the Python package test in
the output/ subdirectory. The RFW-specific XML is quite interesting
because it contains the complete decomposition of keywords and how
they failed (there is a failed test in the example).

(Denis is the Open Wide trainee working on buildroot this summer;
he is the one pushing the scanpypy patch and the related
robotframework package.)

We hope this helps with the overall debate.



* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-30  8:38           ` Jeremy Rosen
  2015-06-30  8:46             ` Thomas Petazzoni
@ 2015-06-30 20:26             ` Andreas Naumann
  2015-07-01  8:53               ` Jeremy Rosen
  1 sibling, 1 reply; 17+ messages in thread
From: Andreas Naumann @ 2015-06-30 20:26 UTC (permalink / raw)
  To: buildroot

Hi Thomas, Jeremy,

here is my first shot:
https://bitbucket.org/anaum/br_robot

For now I have implemented building a config and running the ext2/3/4
filesystem checks. I didn't get around to doing the qemu part yet, but it
should be enough to get the idea. Just follow the README.
Install the RIDE editor if you can; even though it crashes sometimes, it
makes the structure very clear and provides additional debug output
while executing the tests.

Also I have not invested in test documentation, but look at the 
report/log.html and see if that's suitable. Comments on other 
interesting features below...


>> syntax you have to learn, while Python is known by a large number of
>> people already.
True to some extent, and I must admit that the variable syntax takes a
little getting used to.
But besides that, the API documentation for the libraries is very good
and, what's more, if you use the RIDE editor to write the testcases (which
I do almost exclusively), you hardly have to worry about the syntax. It
works a bit Excel-like, suggests keywords and shows their documentation
in a tooltip (when pressing CTRL). It helps with navigation from testcases
to keyword implementations, does global renames...


> pretty good, RFW can report in an xunit-compatible xml which can be
> easily parsed by whatever tool you prefer. I have been autogenerating
> reports with it for quite some time

I'd be interested: how is that different from the report.html that you
mention below? What tools are you using?

> RFW also generates some HTML pages ready to be pushed on a server,
> but that's less usefull for the buildroot use-case.

One feature that I really like is the HTML linkage of the test report to
the log, so often it's possible to find the problematic spot without
digging through ASCII logfiles. Have a look at that and maybe make a
testcase fail.
>
>> Can it run tests in parallel?
>
> no, RFW core has no parallel testing capabilities by itself. There
> are plugins to do that, though...

Well, I haven't looked at any parallel plugin, but you can start the pybot
execution as often as you like. It's just a matter of defining different
tests for different runs. I think the tagging feature helps a lot with
that. E.g. each package's specific tests would be tagged with the
package name, and then you would have a run for each tag starting with
a*, b*, ... so about 26 threads running side by side.
Another solution could be running each suite separately.

There's a tool called rebot which can be used to combine the results 
(output.xml) of the different runs.

And there's an option in pybot to run just tests that either failed or 
have not been executed in a previous run (using the previous output.xml).
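
A rough sketch of what that could look like from the host side (tag
names, paths and the worker count are illustrative; pybot and rebot are
the standard Robot Framework command-line tools):

import string
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_tag(letter):
    # One pybot run per package-name prefix, each with its own output file.
    output = "output-%s.xml" % letter
    subprocess.call(["pybot", "--include", letter + "*",
                     "--output", output, "tests/"])
    return output

with ThreadPoolExecutor(max_workers=4) as pool:
    outputs = list(pool.map(run_tag, string.ascii_lowercase))

# rebot merges the individual result files into one combined report.
subprocess.call(["rebot", "--name", "Combined",
                 "--output", "combined.xml"] + outputs)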

>> Can we
>> easily integrate the tests with Jenkins to have them run everyday?

The Jenkins plugin works great; we use it in combination with Jenkins
matrix jobs to get the parallelization. At the moment we run about 10
different targets in our testbed, but they mostly do the same tests with
different configuration data. Anyway, there's a job at the end that
accumulates all the results into one report.

And finally, we are interested in getting more coverage in that report,
as well as being able to provide results from real HW instead of qemu
only. So of course I'd be happy to see a way of integrating buildroot's
tests with our system...


regards,
Andreas


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-30 16:06       ` Jeremy Rosen
@ 2015-07-01  7:30         ` Andreas Naumann
  2015-07-01  7:57           ` Jeremy Rosen
  2015-07-01 10:28           ` Denis Thulin
  0 siblings, 2 replies; 17+ messages in thread
From: Andreas Naumann @ 2015-07-01  7:30 UTC (permalink / raw)
  To: buildroot

Hi Jeremy,

Am 30.06.2015 um 18:06 schrieb Jeremy Rosen:

>>> Our approch was mainly to have each package (on the buildroot side
>>> have its own RFW scriptlet and have buildroot assemble these
>>> scriptlets at make time to provide a test for the resulting system
>>> that would only contain the tests for whatever package was actually
>>> compiled in.
>>
>> At some point I indeed though of having some per-package test cases
>> directly in package/<foo>/, for each package. But you also anyway
>> need
>> to describe a complete Buildroot configuration for each test, in
>> order
>> to have a complete system that actually boots and make sense.
>>
>
> yes and no... your approch is targeting the autobuild system, I was more
> thinking of providing tests for a user's own configuration or a
> semi-randomly generated config. Easily testing on real hardware was
> also part of the idea.
>
> If we want to combine the two, there would be a need for a base, bootable
> config (for example based on qemu) with a randomized overlay.
>
> The per-package part would help mainly with the randomly added packages.
> That would not prevent to also have a package independant base of test
> for the most basic functionalities. Per-Package also has the advantage
> that the tests will be maintained with the packages, thus by the people
> with the knowhow of what is being tested.
>
> overall I don't really see a reason not to have both approch. they are
> not really exclusive...
>
> now, one of the big difference is that you have host tests and target
> tests intermixed. That is a bit problematic if one wants to use your
> framework on a real board since the part responsible for starting
> the card and login is intermixed with the test. Again, my use-case is
> slightly different from yours. Having host-side tests is interesting
> and something we hadn't thought of. It should not be very hard to add
> with a RFW approch if we go in that direction

I agree with most things above and indeed, what to test is not exactly
the same in our approach and buildroot's. But I think it's very possible
and beneficial to combine the efforts.
E.g. the deploy routines for our boards differ quite a bit. We solved
this by implementing the same keyword in different (target-specific)
resources, so the testcases using it don't need to change. When running
the tests, we supply a DUT-specific variable file which then takes care
of importing the correct resources with the applicable deploy step (and
setting HW capabilities, and so on).

>
> separating the target and host test (whatever approch is taken) seems
> like a good idea to me, a way to override/disable the boot/login part
> would allow users to run your tests on real hardware. That seems like
> a good idea independantly of the framework used.

This is something that can easily be done via tags, and then just running
--include host* or --include target*.


>> I've released my code with the intent that others can have a look and
>> get a clear view of what the prototype looks like. Without seeing how
>> it looks with RFW, I can't really make up my mind on whether it is a
>> good alternate solution or not.
>>
>
> Denis reimplemented an infrastructure to redo your tests. The complete
> infrastructure can be found here :
>
> https://github.com/etkaDT/Rfw-buildroot-tests
>
> Note that this is still a quick and dirty job that is not integrated
> into buildroot, but it can start the custom qemu to test things on
> the target. A better integration could lead to easier keywords etc.
>
> Denis also included the output of running the python package test in
> the output/ subdirectory. The RFW specific XML is quite interesting
> because it contains the complete decomposition of keywords and how
> they failed (there is a failed test in the example)
>
> (Denis is the Open Wide trainee working on buildroot this summer,
> he is the one pushing the scanpypy patch and the related
> robotframework package)
>
> We hope this helps with the overall debate.

Well, great to have another RFW implementation to look at and learn from!
Remarkable also that you guys implemented the qemu part while I did the
building part, without even talking about it :)
One obvious difference is that you create a large number of small, almost
atomic testcases, while my testcases contain more of the steps that you
probably would put into the prepare step. But I like the much more
detailed report this approach leads to. Since RFW only provides for one
keyword in the prepare step, this probably leads to having one
prepare keyword for almost every suite.
So far I'm using a global keyword 'Connect And Login' in all my suites,
but it may be helpful to change that.

Btw, what editor did you use for the creation?


regards,
Andreas




* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-07-01  7:30         ` Andreas Naumann
@ 2015-07-01  7:57           ` Jeremy Rosen
  2015-07-01 10:28           ` Denis Thulin
  1 sibling, 0 replies; 17+ messages in thread
From: Jeremy Rosen @ 2015-07-01  7:57 UTC (permalink / raw)
  To: buildroot


----- Mail original -----
> Hi Jeremy,
> 
> Am 30.06.2015 um 18:06 schrieb Jeremy Rosen:
> 
> >>> Our approch was mainly to have each package (on the buildroot
> >>> side
> >>> have its own RFW scriptlet and have buildroot assemble these
> >>> scriptlets at make time to provide a test for the resulting
> >>> system
> >>> that would only contain the tests for whatever package was
> >>> actually
> >>> compiled in.
> >>
> >> At some point I indeed though of having some per-package test
> >> cases
> >> directly in package/<foo>/, for each package. But you also anyway
> >> need
> >> to describe a complete Buildroot configuration for each test, in
> >> order
> >> to have a complete system that actually boots and make sense.
> >>
> >
> > yes and no... your approch is targeting the autobuild system, I was
> > more
> > thinking of providing tests for a user's own configuration or a
> > semi-randomly generated config. Easily testing on real hardware was
> > also part of the idea.
> >
> > If we want to combine the two, there would be a need for a base,
> > bootable
> > config (for example based on qemu) with a randomized overlay.
> >
> > The per-package part would help mainly with the randomly added
> > packages.
> > That would not prevent to also have a package independant base of
> > test
> > for the most basic functionalities. Per-Package also has the
> > advantage
> > that the tests will be maintained with the packages, thus by the
> > people
> > with the knowhow of what is being tested.
> >
> > overall I don't really see a reason not to have both approch. they
> > are
> > not really exclusive...
> >
> > now, one of the big difference is that you have host tests and
> > target
> > tests intermixed. That is a bit problematic if one wants to use
> > your
> > framework on a real board since the part responsible for starting
> > the card and login is intermixed with the test. Again, my use-case
> > is
> > slightly different from yours. Having host-side tests is
> > interesting
> > and something we hadn't thought of. It should not be very hard to
> > add
> > with a RFW approch if we go in that direction
> 
> I agree with most things above and indeed, what to test is not
> exactly
> that same in our approach and buildroots. But I think it's very
> possible
> and beneficial to combine the efforts.
> E.g. the deploy routines for our boards differ quite a bit. We solved
> this by implementing the same keyword in different (target-specific)
> resources, so the testcases using it doesnt need to change. When
> running
> the tests, we supply a DUT-specific variable file which then takes
> care
> of importing the correct resources with the applicable deploy step
> (and
> setting HW capabilities, and so on).
> 

Your RFW/buildroot integration seems more advanced than ours at this
point; is it possible to share it?

> >
> > separating the target and host test (whatever approch is taken)
> > seems
> > like a good idea to me, a way to override/disable the boot/login
> > part
> > would allow users to run your tests on real hardware. That seems
> > like
> > a good idea independantly of the framework used.
> 
> This is something that can easily be done via tags and then just run
> --include host* or --include target*
> 
> 
> >> I've released my code with the intent that others can have a look
> >> and
> >> get a clear view of what the prototype looks like. Without seeing
> >> how
> >> it looks with RFW, I can't really make up my mind on whether it is
> >> a
> >> good alternate solution or not.
> >>
> >
> > Denis reimplemented an infrastructure to redo your tests. The
> > complete
> > infrastructure can be found here :
> >
> > https://github.com/etkaDT/Rfw-buildroot-tests
> >
> > Note that this is still a quick and dirty job that is not
> > integrated
> > into buildroot, but it can start the custom qemu to test things on
> > the target. A better integration could lead to easier keywords etc.
> >
> > Denis also included the output of running the python package test
> > in
> > the output/ subdirectory. The RFW specific XML is quite interesting
> > because it contains the complete decomposition of keywords and how
> > they failed (there is a failed test in the example)
> >
> > (Denis is the Open Wide trainee working on buildroot this summer,
> > he is the one pushing the scanpypy patch and the related
> > robotframework package)
> >
> > We hope this helps with the overall debate.
> 
> Well, great to have another RFW implementation to look at and learn!
> Remarkable also that you guys implemented the qemu part while I did
> the
> building part without even talking about it :)
> One obvious difference is that you create a large number of small
> almost
> atomic testcases while my testcase contain more of the steps that you
> probably would put into the prepare step. But I like the much more
> detailed report this approach leads to. Since RFW provides only for
> one
> keyword in the prepare step, this probably leads to having one
> prepare-keyword for almost every suite.
> So far I'm using a global keyword 'Connect And Login' in all my
> suites,
> but it may be helpful to change that.
> 
> Btw, what Editor did you use for the creation?
> 

I'll let Denis answer that one :) 




* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-06-30 20:26             ` Andreas Naumann
@ 2015-07-01  8:53               ` Jeremy Rosen
  0 siblings, 0 replies; 17+ messages in thread
From: Jeremy Rosen @ 2015-07-01  8:53 UTC (permalink / raw)
  To: buildroot


> 
> I'd be interested, how is that different from the report.html that
> you
> mention below? What tools are you using?
> 

RFW can generate multiple outputs:

* html reports for human viewers
* output.xml, which is a very complete XML report, but in an RFW-specific
  format. That is the one I used to autogenerate my weekly reports
  and corresponding slides. I generate some LibreOffice files from
  parsing that data.
* xunit.xml, which contains very little information but is compatible
  with the standard. This is useful for integrating with other tools
  that already know that format.

Overall, output.xml is the useful output if you want to automatically
analyze test results. Since that was what I was doing, I used that file.
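
For example, a minimal sketch of that kind of parsing (assuming the
usual <robot>/<suite>/<test>/<status> layout of output.xml; it just
counts passed and failed tests):

import xml.etree.ElementTree as ET

root = ET.parse("output.xml").getroot()
results = {}
for test in root.iter("test"):
    # Each <test> element carries a <status status="PASS|FAIL"> child.
    status = test.find("status").attrib["status"]
    results[status] = results.get(status, 0) + 1
print(results)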


* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-07-01  7:30         ` Andreas Naumann
  2015-07-01  7:57           ` Jeremy Rosen
@ 2015-07-01 10:28           ` Denis Thulin
  2015-07-02 13:57             ` Andreas Naumann
  1 sibling, 1 reply; 17+ messages in thread
From: Denis Thulin @ 2015-07-01 10:28 UTC (permalink / raw)
  To: buildroot

Hi Andreas,

I'm answering your two emails inside this one

----- Mail original -----
> Hi Jeremy,
> 
> Am 30.06.2015 um 18:06 schrieb Jeremy Rosen:
> 
> >>> Our approch was mainly to have each package (on the buildroot
> >>> side
> >>> have its own RFW scriptlet and have buildroot assemble these
> >>> scriptlets at make time to provide a test for the resulting
> >>> system
> >>> that would only contain the tests for whatever package was
> >>> actually
> >>> compiled in.
> >>
> >> At some point I indeed though of having some per-package test
> >> cases
> >> directly in package/<foo>/, for each package. But you also anyway
> >> need
> >> to describe a complete Buildroot configuration for each test, in
> >> order
> >> to have a complete system that actually boots and make sense.
> >>
> >
> > Yes and no... your approach is targeting the autobuild system; I was
> > more thinking of providing tests for a user's own configuration or a
> > semi-randomly generated config. Easily testing on real hardware was
> > also part of the idea.
> >
> > If we want to combine the two, there would be a need for a base,
> > bootable config (for example based on qemu) with a randomized
> > overlay.
> >
> > The per-package part would help mainly with the randomly added
> > packages. That would not prevent us from also having a
> > package-independent base of tests for the most basic functionalities.
> > Per-package also has the advantage that the tests will be maintained
> > with the packages, and thus by the people with the know-how about
> > what is being tested.
> >
> > Overall I don't really see a reason not to have both approaches; they
> > are not really exclusive...
> >
> > Now, one of the big differences is that you have host tests and
> > target tests intermixed. That is a bit problematic if one wants to
> > use your framework on a real board, since the part responsible for
> > starting the board and logging in is intermixed with the test. Again,
> > my use case is slightly different from yours. Having host-side tests
> > is interesting and something we hadn't thought of. It should not be
> > very hard to add with an RFW approach if we go in that direction.
> 
> I agree with most things above, and indeed what to test is not exactly
> the same in our approach and in Buildroot's. But I think it's very
> possible and beneficial to combine the efforts.
> E.g. the deploy routines for our boards differ quite a bit. We solved
> this by implementing the same keyword in different (target-specific)
> resources, so the testcases using it don't need to change. When
> running the tests, we supply a DUT-specific variable file which then
> takes care of importing the correct resources with the applicable
> deploy step (and setting HW capabilities, and so on).
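
A rough Robot Framework sketch of that pattern (the file, variable and
keyword names here are made up for illustration): each DUT gets a
variable file, passed to pybot with --variablefile, which says which
resource file provides the board-specific keywords.

*** Settings ***
# ${DEPLOY_RESOURCE} is expected to come from the DUT-specific variable
# file, e.g. pybot --variablefile boards/my_board.py tests/
Suite Setup    Import Board Resources

*** Test Cases ***
Deploy Image To Board
    # 'Deploy Image' is implemented in each board's resource file, always
    # under the same keyword name, so this testcase never has to change
    Deploy Image

*** Keywords ***
Import Board Resources
    Import Resource    ${DEPLOY_RESOURCE}
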
> 
> >
> > Separating the target and host tests (whatever approach is taken)
> > seems like a good idea to me; a way to override/disable the
> > boot/login part would allow users to run your tests on real
> > hardware. That seems like a good idea independently of the framework
> > used.
> 
> This is something that can easily be done via tags, and then just run
> with --include host* or --include target*.
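
A minimal sketch of how such tagging could look (the test content, the
${BR_OUTPUT} variable, and the assumption that the suite setup already
opened a target connection are purely illustrative):

*** Settings ***
Library    OperatingSystem
Library    Telnet

*** Test Cases ***
Rootfs Image Was Generated
    [Tags]    host
    # runs on the build machine only
    File Should Exist    ${BR_OUTPUT}/images/rootfs.ext2

Dropbear Is Running On Target
    [Tags]    target
    # assumes the suite setup already booted the board and logged in
    ${out}=    Execute Command    pidof dropbear
    Should Not Be Empty    ${out}

Running pybot with --include host* or --include target* then picks the
matching subset.
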
> 
> 
> >> I've released my code with the intent that others can have a look
> >> and get a clear view of what the prototype looks like. Without
> >> seeing how it looks with RFW, I can't really make up my mind on
> >> whether it is a good alternative solution or not.
> >>
> >
> > Denis reimplemented an infrastructure to redo your tests. The
> > complete infrastructure can be found here:
> >
> > https://github.com/etkaDT/Rfw-buildroot-tests
> >
> > Note that this is still a quick and dirty job that is not integrated
> > into Buildroot, but it can start the custom qemu to test things on
> > the target. A better integration could lead to easier keywords etc.
> >
> > Denis also included the output of running the Python package test in
> > the output/ subdirectory. The RFW-specific XML is quite interesting
> > because it contains the complete decomposition of keywords and how
> > they failed (there is a failed test in the example).
> >
> > (Denis is the Open Wide trainee working on buildroot this summer;
> > he is the one pushing the scanpypy patch and the related
> > robotframework package.)
> >
> > We hope this helps with the overall debate.
> 
> Well, it's great to have another RFW implementation to look at and
> learn from! Remarkable also that you guys implemented the qemu part
> while I did the building part, without us even talking about it :)
> One obvious difference is that you create a large number of small,
> almost atomic testcases, while my testcases contain more of the steps
> that you would probably put into the prepare step. But I like the much
> more detailed report this approach leads to. Since RFW provides for
> only one keyword in the prepare step, this probably leads to having
> one prepare-keyword for almost every suite.
> So far I'm using a global keyword 'Connect And Login' in all my
> suites,

Well, I like having tests as atomic as possible, and since the log
file gives a detailed report of everything, including setups and
teardowns, you can easily find out why a test failed.

I also have a 'Boot And Connect' keyword (it logs in too) that I use
almost everywhere. It needs some variables to be set in each test
suite, and a keyword.
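
For illustration, a minimal Robot Framework sketch of what such a suite
could look like; the qemu command line, console port and login details
below are placeholders, not the actual implementation:

*** Settings ***
Library           Process
Library           Telnet
Suite Setup       Boot And Connect
Suite Teardown    Terminate All Processes

*** Variables ***
# per-suite values; a real suite would point these at its own image
${QEMU_CMD}    qemu-system-arm -M versatilepb -kernel output/images/zImage -serial telnet:localhost:4321,server,nowait -nographic
${CONSOLE}     4321

*** Test Cases ***
Target Boots To A Shell
    ${out}=    Execute Command    uname -a
    Should Contain    ${out}    Linux

*** Keywords ***
Boot And Connect
    # boot the image in the background, then log in on the emulated console
    Start Process    ${QEMU_CMD}    shell=True
    Open Connection    localhost    port=${CONSOLE}    timeout=120s    prompt=\#
    Login    root    ${EMPTY}

Each suite would then only override the variables for its own image.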


> but it may be helpful to change that.
> 
> Btw, what Editor did you use for the creation?
> 

I used Vim with the robotframework plugin.
Unfortunately, the plugin only does syntax highlighting.

https://github.com/mfukar/robotframework-vim

Don't stop reading here, the answer to your other mail is below.

> 
> regards,
> Andreas
> 
> 
> 
> >
> >
> >> Best regards,
> >>
> >> Thomas
> >> --
> >> Thomas Petazzoni, CTO, Free Electrons
> >> Embedded Linux, Kernel and Android engineering
> >> http://free-electrons.com
> >>
> >
> _______________________________________________
> buildroot mailing list
> buildroot at busybox.net
> http://lists.busybox.net/mailman/listinfo/buildroot
> 



> Hi Thomas, Jeremy,
> 
> here is my first shot:
> https://bitbucket.org/anaum/br_robot

I noticed you used the OperatingSystem library's keyword
"Run And Return Rc", but I read in the Process library's documentation
that using OperatingSystem for running programs is no longer
recommended and that this functionality will most likely become
deprecated. It seems like using Process is the correct way to go.
See:
http://robotframework.org/robotframework/latest/libraries/OperatingSystem.html
and
http://robotframework.org/robotframework/latest/libraries/Process.html
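
To give an idea of the difference, here is a minimal sketch of the
Process-based variant (the ${BR_DIR} and ${DEFCONFIG} variables are
just placeholders for whatever the suite already defines):

*** Settings ***
Library    Process

*** Test Cases ***
Build Defconfig Via Process Library
    # Run Process returns a result object rather than just the return code
    ${result}=    Run Process    make    -C    ${BR_DIR}    ${DEFCONFIG}
    Should Be Equal As Integers    ${result.rc}    0
    # stdout/stderr stay available for the log if the build fails
    Log    ${result.stdout}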

> 
> For now I have implemented building a config and running the ext2/3/4
> filesystem checks. I didn't get around to doing the qemu part yet, but
> it should be enough to get the idea. Just follow the README.
> Install the RIDE editor if you can; even though it crashes sometimes,
> it makes the structure very clear and provides additional debug output
> while executing the tests.
> 
> Also I have not invested in test documentation, but look at the
> report/log.html and see if that's suitable. Comments on other
> interesting features below...
> 
> 
> >> syntax you have to learn, while Python is known by a large number
> >> of
> >> people already.
> True to some extent, and I must admit that the variable syntax takes
> a little getting used to.
> But besides that, the API documentation for the libraries is very good
> and, what's more, if you use the RIDE editor to write the testcases
> (which I do almost exclusively), you hardly have to worry about the
> syntax. It works a bit Excel-like, suggests keywords and shows their
> documentation in a tooltip when pressing CTRL. It helps in navigating
> from testcases to keyword implementations, does global renames...
> 
> 
> > Pretty good: RFW can report in an xunit-compatible XML which can be
> > easily parsed by whatever tool you prefer. I have been
> > autogenerating reports with it for quite some time.
> 
> I'd be interested: how is that different from the report.html that
> you mention below? What tools are you using?

Just run pybot with the option '-x filename'; it creates an xunit file
in addition to the usual log/output/report files.


Regards,

Denis

> 
> > RFW also generates some HTML pages ready to be pushed to a server,
> > but that's less useful for the buildroot use case.
> 
> One feature that I really like is the HTML linking from the test
> report to the log, so it's often possible to find the problematic spot
> without digging through ASCII log files. Have a look at that and maybe
> make a testcase fail.
> >
> >> Can it run tests in parallel?
> >
> > no, RFW core has no parallel testing capabilities by itself. There
> > are plugins to do that, though...
> 
> Well, I haven't looked at any parallel plugin, but you can start the
> pybot execution as often as you like. It's just a matter of defining
> different tests for different runs. I think the tagging feature helps
> a lot with that. E.g. each package's tests would be tagged with their
> package name, and then you would have a run for each tag starting with
> a*, b*, ... so about 26 threads running side by side.
> Another solution could be running each suite separately.
> 
> There's a tool called rebot which can be used to combine the results
> (output.xml) of the different runs.
> 
> And there's an option in pybot to run just tests that either failed or
> have not been executed in a previous run (using the previous
> output.xml).
> 
> >> Can we easily integrate the tests with Jenkins to have them run
> >> every day?
> 
> The Jenkins plugin works great; we use it in combination with Jenkins
> matrix jobs to get the parallelization. At the moment we run about 10
> different targets in our testbed, but they mostly do the same tests
> with different configuration data. Anyway, there's a job at the end
> that accumulates all the results into one report.
> 
> And finally, we are interested in getting more coverage in that
> report, and in return we would be able to provide results from real HW
> instead of qemu only. So of course I'd be happy to see a way of
> integrating Buildroot's tests with our system...
> 
> 
> regards,
> Andreas
> 
> _______________________________________________
> buildroot mailing list
> buildroot at busybox.net
> http://lists.busybox.net/mailman/listinfo/buildroot
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [Buildroot] Buildroot runtime test infrastructure prototype
  2015-07-01 10:28           ` Denis Thulin
@ 2015-07-02 13:57             ` Andreas Naumann
  0 siblings, 0 replies; 17+ messages in thread
From: Andreas Naumann @ 2015-07-02 13:57 UTC (permalink / raw)
  To: buildroot

Hi Denis,

On 01.07.2015 at 12:28, Denis Thulin wrote:

>
> Well, I like having tests as atomic as possible and since the log
> file gives a detailed report of everything including setups and
> teardowns you can find easily why it failed.

Ok, I can see the benefit of having a more detailed report and simpler
testcases.


>>
>> Btw, what Editor did you use for the creation?
>>
>
> I used Vim with the robotframework plugin.
> Unfortunately, the plugin only does syntax highlighting.

I see. Our vim geek tried that too but finally converted to using RIDE :)

>
> I noticed you used the OperatingSystem library's keyword
> "Run And Return Rc", but I read in the Process library's documentation
> that using OperatingSystem for running programs is no longer
> recommended and that this functionality will most likely become
> deprecated. It seems like using Process is the correct way to go.
> See:
> http://robotframework.org/robotframework/latest/libraries/OperatingSystem.html
> and
> http://robotframework.org/robotframework/latest/libraries/Process.html

Thanks for pointing that out, I will check and adjust my keywords.


regards,
Andreas

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2015-07-02 13:57 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-25 18:02 [Buildroot] Buildroot runtime test infrastructure prototype Thomas Petazzoni
2015-06-26 15:48 ` Andreas Naumann
2015-06-26 16:20   ` Jeremy Rosen
2015-06-28  9:50     ` Thomas Petazzoni
2015-06-30  7:06       ` Andreas Naumann
2015-06-30  7:39         ` Thomas Petazzoni
2015-06-30  8:38           ` Jeremy Rosen
2015-06-30  8:46             ` Thomas Petazzoni
2015-06-30 20:26             ` Andreas Naumann
2015-07-01  8:53               ` Jeremy Rosen
2015-06-30 16:06       ` Jeremy Rosen
2015-07-01  7:30         ` Andreas Naumann
2015-07-01  7:57           ` Jeremy Rosen
2015-07-01 10:28           ` Denis Thulin
2015-07-02 13:57             ` Andreas Naumann
2015-06-26 18:12 ` Thomas De Schampheleire
2015-06-26 18:26   ` Thomas De Schampheleire
