* [Fuego] Stuff Tim's been working on
@ 2017-01-13  4:32 Bird, Timothy
  2017-01-18  8:55 ` Daniel Sangorrin
  2017-01-19  1:48 ` Daniel Sangorrin
  0 siblings, 2 replies; 5+ messages in thread
From: Bird, Timothy @ 2017-01-13  4:32 UTC (permalink / raw)
  To: fuego

Hello all,

Daniel has done some excellent work over the last few weeks. 
Despite taking a bit of time off for Christmas and New Year's,
I've been working on a few things for Fuego as well.

They're not quite ready to publish, but I thought I'd mention them so you know what's up.

My top priorities are:
 - implementing a system for test dependencies
 - simplifying the testplan and test spec features
 - working towards a test package format

The test dependency system is modeled after something I saw in 0day, which allows a test to indicate
any dependencies it has on features, kernel config, memory, etc. on the target.
In theory, you can already do this in an ad-hoc fashion with test variables, assert_define
statements, and shell code.  But I'm implementing something that 1) is easy to use to specify
some well-known and common dependencies, and 2) allows the dependencies
to be expressed in a declarative, rather than imperative, fashion.  This is how 0day does it,
and I think it will be useful (sometime in the future) for fuego to be able to automatically
detect and record test dependencies.  One of the bigger problem areas that fuego
has to deal with is making sure we avoid false failures due to missing dependencies on
the target.
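
To give a rough idea of the difference (a sketch only; the variable names here
are illustrative, not final), the ad-hoc approach today looks something like:

    # imperative: hand-written shell checks in the base script
    test_pre_check() {
        assert_define BENCHMARK_MYTEST_LOOPS
        # ... more ad-hoc probing of the target ...
    }

while the declarative version would be closer to:

    # declarative: the test only states what it needs; the framework checks it
    NEED_MEMORY=64M
    NEED_KCONFIG="CONFIG_PRINTK_TIME=y"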

The testplan and test spec features of fuego are, IMHO, overengineered for what they 
accomplish.  0day does the same thing using simple shell variables; Fuego uses JSON.
I'm not changing that just yet, but I do want to modify ovgen.py so that if a test has
no test variables (nothing to put in a spec file) and no test plans besides 'testplan_default',
then no JSON files are required.  We have too many false failures caused by
test specs and plans not working correctly.  I'd like to fix this.

Finally, I think it's important to create a test package system.  I would like, eventually,
for there to be an ecosystem of tests outside of a single git repository.  Modifying
the tests so that they do not need to be accompanied by binaries (tarfiles) will be
important for this.  So I'm interested in the ideas we've discussed about retrieving
test program source using git (or other fetch mechanisms).  Also, I want to improve
the support for using test programs that are already present on the target because
they ship with its Linux distribution.

I think formalizing the test packages will also help us isolate the tests themselves
from the framework.  This will help us implement update mechanisms, for when
fundamental framework changes occur.  I want the system to be modular enough
to handle changes to different system parts without stuff breaking.  I strongly applaud
Daniel for his refactoring work to make it possible to update the Jenkins
front end more easily.

I haven't forgotten about the ideas we've discussed for unified test output
parsing (or a canonical test output format).  Right now, those are on the back-burner
for me.

That's it for now.  I've been preoccupied with work on ELC the last few weeks, but
I hope to have some time to finish up some of the above items shortly (hopefully
within the next few weeks, but we'll see; the dependency work has already taken
longer than I expected).

Regards,
  -- Tim



* Re: [Fuego] Stuff Tim's been working on
  2017-01-13  4:32 [Fuego] Stuff Tim's been working on Bird, Timothy
@ 2017-01-18  8:55 ` Daniel Sangorrin
  2017-01-19  1:48 ` Daniel Sangorrin
  1 sibling, 0 replies; 5+ messages in thread
From: Daniel Sangorrin @ 2017-01-18  8:55 UTC (permalink / raw)
  To: 'Bird, Timothy', fuego

Hi Tim,

> From: fuego-bounces@lists.linuxfoundation.org [mailto:fuego-bounces@lists.linuxfoundation.org] On Behalf Of Bird, Timothy

> My top priorities are:
>  - implementing a system for test dependencies
>  - simplifying the testplan and test spec features
>  - working towards a test package format
> 
> The test dependency system is modeled after something I saw in 0day, which allows a test to indicate
> any dependencies it has on features, kernel config, memory, etc. on the target.
> In theory, you can already do this in an ad-hoc fashion with test variables, assert_define
> statements, and shell code.  But I'm implementing something that 1) is easy to use to specify
> some well-known and common dependencies, and 2) allows the dependencies
> to be expressed in a declarative, rather than imperative, fashion.  This is how 0day does it,
> and I think it will be useful (sometime in the future) for fuego to be able to automatically
> detect and record test dependencies.  One of the bigger problem areas that fuego
> has to deal with is making sure we avoid false failures due to missing dependencies on
> the target.
> 
> The testplan and test spec features of fuego are, IMHO, overengineered for what they
> accomplish.  0day does the same thing using simple shell variables; Fuego uses JSON.
> I'm not changing that just yet, but I do want to modify ovgen.py so that if a test has
> no test variables (nothing to put in a spec file) and no test plans besides 'testplan_default',
> then no JSON files are required.  We have too many false failures caused by
> test specs and plans not working correctly.  I'd like to fix this.

Yeah, if a test doesn't need parameters it shouldn't fail. I also think ovgen.py has some
problems that need to be fixed.

I am using testplans to generate jobs in my upgraded version. One thing I learned is that
they can be useful for defining which tests will execute on a board, and with which specs. They
also allow specifying timeouts, for example.

About 0day: is that the same project as LKP (Linux Kernel Performance)?
As far as I can see, they define the parameters through YAML files, not shell variables. Check here:

https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/tree/jobs/hackbench.yaml

# I may have missed something because I only looked at the code briefly.

...
> That's it for now.  I've been preoccupied with work on ELC the last few weeks, but
> I hope to have some time to finish up some of the above items shortly (hopefully
> within the next few weeks, but we'll see; the dependency work has already taken
> longer than I expected).

I will be at ELC as well. 
In my case, I'm going to try to get kernelci working with fuego by then, although I'm
going to be busy, so I can't really promise.

Cheers,
Daniel




* Re: [Fuego] Stuff Tim's been working on
  2017-01-13  4:32 [Fuego] Stuff Tim's been working on Bird, Timothy
  2017-01-18  8:55 ` Daniel Sangorrin
@ 2017-01-19  1:48 ` Daniel Sangorrin
  2017-01-20  1:39   ` Bird, Timothy
  1 sibling, 1 reply; 5+ messages in thread
From: Daniel Sangorrin @ 2017-01-19  1:48 UTC (permalink / raw)
  To: 'Bird, Timothy', fuego

> Finally, I think it's important to create a test package system.  I would like, eventually,
> for there to be an ecosystem of tests outside of a single git repository.  Modifying
> the tests so that they do not need to be accompanied by binaries (tarfiles) will be
> important for this.  So I'm interested in the ideas we've discussed about retrieving
> test program source using git (or other fetch mechanisms).  
> 
> I think formalizing the test packages will also help us isolate the tests themselves
> from the framework.  This will help us implement update mechanisms, for when
> fundamental framework changes occur.  I want the system to be modular enough
> to handle changes to different system parts without stuff breaking.  I strongly applaud

I've given it some thought and I'd like to propose the following. Let me know your opinions:

Installing a test package
    - Officially registered test packages
        + Database stored as a single centralized file (testpackagelist.yaml) online:
            -> testpackagelist.yaml
                name: testname
                    description: this is a test for this and that.
                    categories: security, network
                    arch: arm, x86 (supported architectures)
                    stars: ** (number of stars 0-5 depending on the quality of your test package)
                    url: https://bitbucket.org/fuego/fuego-testname.git
                    sha512: safwerwr23fwefwfwsfawfwe...
                name: iozone
                    description: this is iozone for testing disks
                    url: https://www.github.com/fuego/fuego-iozone.git
                    alt-url: ftp://123.123.123.123/fuego-iozone.zip
                    sha512: asfeewrfwfsdfsdfsd...
                ...
        + Workflow
            docker# fuego-update-testpackagelist
                -> downloads the latest testpackagelist.yaml to /etc/fuego/
            docker# fuego-search-test disk
                iozone
                    this is iozone for testing disks
                bonnie++
                    this is bonnie++ for testing disks
                [Note] you can also search by category, supported arch, etc.
            docker# fuego-install-test iozone bonnie++
    - Non-official test packages
        docker# fuego-install-test https://www.gitlab.com/pepe/fuego-supertest.git
        docker# fuego-install-test /path/to/developer/test/folder/

Installing from a testplan
    docker# fuego-install-test -p testplan_raspi.json (install tests in a testplan)
        -> non-official packages: write the url/folder in the testplan

Updating a test package
    docker# fuego-update-tests [testname]
        -> if no name is provided all tests will be updated

Contents of fuego-testname.git:
    - testname.yaml: 
        [Note] used to autogenerate testpackagelist.yaml periodically
        description: this is a test for fuego
        type: Functional | Benchmark
        categories: security, network
        stars: ** (number of stars 0-5 depending on the quality of your test package)
        arch: arm, x86 (supported architectures)
            -> when a job is created (fuego-create-jobs), the board's 
               architecture is matched to the ones supported by the test. 
               If not found, an error is displayed unless -f (force) is provided
               which converts it into a warning.
        host-dep: python-pandas, .. (debian packages to install in docker, apart from default or the toolchain)
            -> a script (install.sh) can be passed instead (e.g.: pip install, yum install ..)
            -> target dependencies are checked by test_pre_check
    - test/ (contents go to fuego-core/engine/tests/Category.testname/)
        testname.sh
            + It should contain the same functions as it does now:
                test_pre_check, test_build, test_deploy, test_run, test_processing
            + It should also contain a new one: test_download (see the sketch after this outline)
                -> downloads stuff needed for the test into buildzone (source code, binaries..)
                    + Executed as jenkins (not as root), so no installing
                -> e.g.: git clone -b next https://www.gitlab.com/ramon/testname.git
                -> e.g.: tar zxvf $TESTDIR/testname.tar.gz (for tarballs)
        other files: parser.py, tarball, .. (optional)
    - testname.spec (optional)
        + Test specification (with default parameters first)
            -> Tests without parameters don't need to provide it
        + An entry will be added to testplan_default by fuego-install-test
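
To make this concrete, here is a minimal skeleton of the base script described
above (the helper calls follow fuego's current conventions, but treat the
function bodies as illustrative sketches, not a final implementation):

    #!/bin/bash
    # testname.sh: base script for Functional.testname

    test_download() {
        # fetch the test source into the buildzone (runs as jenkins, no installs)
        git clone -b next https://www.gitlab.com/ramon/testname.git
    }

    test_pre_check() {
        # check target-side dependencies here
        assert_define FUNCTIONAL_TESTNAME_ARG
    }

    test_build() {
        make
    }

    test_deploy() {
        # copy the built program to the board
        put testname $BOARD_TESTDIR/fuego.$TESTDIR/
    }

    test_run() {
        report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./testname $FUNCTIONAL_TESTNAME_ARG"
    }

    test_processing() {
        log_compare "$TESTDIR" "1" "^OK" "p"
    }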

> Also, I want to improve
> the support for using test programs that are already present on the target because
> they ship with its Linux distribution.

This should be easy: add a new variable "deploy" to indicate whether the test_deploy
function needs to be called or not ("rebuild" should be false).
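
Something like this in the logic that drives the test phases (a sketch; the
variable name is not settled):

    # skip deployment when the target's own binary should be used
    if [ "$DEPLOY" = "false" ]; then
        echo "using the test program provided by the target's distribution"
    else
        test_deploy
    fi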

Regards,
Daniel





* Re: [Fuego] Stuff Tim's been working on
  2017-01-19  1:48 ` Daniel Sangorrin
@ 2017-01-20  1:39   ` Bird, Timothy
  2017-01-20  8:04     ` Daniel Sangorrin
  0 siblings, 1 reply; 5+ messages in thread
From: Bird, Timothy @ 2017-01-20  1:39 UTC (permalink / raw)
  To: Daniel Sangorrin, fuego

Comments inline below

> -----Original Message-----
> From: Daniel Sangorrin on Wednesday, January 18, 2017 5:48 PM
> > Finally, I think it's important to create a test package system.  I would like, eventually,
> > for there to be an ecosystem of tests outside of a single git repository.  Modifying
> > the tests so that they do not need to be accompanied by binaries (tarfiles) will be
> > important for this.  So I'm interested in the ideas we've discussed about retrieving
> > test program source using git (or other fetch mechanisms).
> >
> > I think formalizing the test packages will also help us isolate the tests themselves
> > from the framework.  This will help us implement update mechanisms, for when
> > fundamental framework changes occur.  I want the system to be modular enough
> > to handle changes to different system parts without stuff breaking.  I strongly applaud
> 
> I've given it some thought and I'd like to propose the following. Let me know your
> opinions:
> 
> Installing a test package
>     - Officially registered test packages
>         + Database stored as a single centralized file (testpackagelist.yaml) online:

Yes.  There should be a set of "curated" tests that comprise the official "fuego test distribution".
I've come to the conclusion that many of the features I envision for fuego are similar to
features that Linux distributions have, so a lot of the same ideas and principles apply.

>             -> testpackagelist.yaml
>                 name: testname
>                     description: this is a test for this and that.
>                     categories: security, network
>                     arch: arm, x86 (supported architectures)
>                     stars: ** (number of stars 0-5 depending on the quality of your test package)
>                     url: https://bitbucket.org/fuego/fuego-testname.git
>                     sha512: safwerwr23fwefwfwsfawfwe...

I'm not sure about the format (yaml), but this captures the basic idea.
From this, a web interface can also be presented very easily, so humans
can browse for tests that they want to download to their fuego instance.

What is the sha512 for?

You have a url that points to a git repository (and below
you have a reference to a test directory in the local filesystem).  This avoids
packaging issues, but I'd still like to see a test stored in a repository as a
single standalone file.  So maybe something like this would also be
supported:
url: https://test-repository.fuego.org/version3.0/fuego-testname.deb

The use of '.deb' in that filename is only intended to convey that it's a single
standalone file that is a package archive with stuff inside it (and that we
might re-use a standard package manager for managing test packages).
It does not imply that you would use 'dpkg' to install a test to your host
(or the docker container), instead of a specialized fuego package install command.

>                 name: iozone
>                     description: this is iozone for testing disks
>                     url: https://www.github.com/fuego/fuego-iozone.git
>                     alt-url: ftp://123.123.123.123/fuego-iozone.zip
>                     sha512: asfeewrfwfsdfsdfsd...
>                 ...
>         + Workflow
>             docker# fuego-update-testpackagelist
>                 -> downloads the latest testpackagelist.yaml to /etc/fuego/
Sounds OK, but does this need to be a separate operation?

Is the package list too big to just download every time we add a test
to the system?  I guess the question is: why do Linux distros
keep offline lists of the available packages?  Is the update operation so costly
that it can't be done just as a side effect of package search or install?

>             docker# fuego-search-test disk
>                 iozone
>                     this is iozone for testing disks
>                 bonnie++
>                     this is bonnie++ for testing disks
>                 [Note] you can also search by category, supported arch, etc.

Yes.  That's what I had in mind.  I envision something like an app store,
with recommendations, ratings, etc., except for tests.

Note that I also envision meta-data for tests, stored
somewhere (on fuego.org?).  IMHO there needs to be a central
repository of results that people can use to compare their results
with, for each test, as well.

>             docker# fuego-install-test iozone bonnie++
Yes.

>     - Non-official test packages
>         docker# fuego-install-test https://www.gitlab.com/pepe/fuego-supertest.git
>         docker# fuego-install-test /path/to/developer/test/folder/

Yes, and something like a standalone file (a .deb, .tar, or .fgo).

I think there should also be:
docker# fuego-create-package tims-test
that will create a standalone test package from the materials in the docker
container, suitable for sharing with the world.
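
Conceptually, that command would just bundle the test's directory and manifest
into one file, e.g. (a sketch only; the name and format are not final):

    # sketch of what fuego-create-package might do internally
    cd /fuego-core/engine/tests
    tar czf /tmp/tims-test.fgo Functional.tims-test/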

> 
> Installing from a testplan
>     docker# fuego-install-test -p testplan_raspi.json (install tests in a testplan)
>         -> non-official packages: write the url/folder in the testplan

I'm not sure I understand this.  Fuego can have multiple testplans per test,
with each one specifying a selection of test variables that control the execution
of that test.  This sounds like something else.  Can you elaborate?

> Updating a test package
>     docker# fuego-update-tests [testname]
>         -> if no name is provided all tests will be updated

Sounds good.

> 
> Contents of fuego-testname.git:
>     - testname.yaml:

We need some kind of test manifest.  I'm not exactly sure of the list of variables to have
in the manifest (or the desired format), but this looks like a reasonable set to 
start with.

>         [Note] used to autogenerate testpackagelist.yaml periodically
>         description: this is a test for fuego
>         type: Functional | Benchmark
>         categories: security, network
>         stars: ** (number of stars 0-5 depending on the quality of your test package)
>         arch: arm, x86 (supported architectures)
>             -> when a job is created (fuego-create-jobs), the board's
>                architecture is matched to the ones supported by the test.
>                If not found, an error is displayed unless -f (force) is provided
>                which converts it into a warning.
>         host-dep: python-pandas, .. (debian packages to install in docker, apart from default or the toolchain)
Yes, this would be good.

>             -> a script (install.sh) can be passed instead (e.g.: pip install, yum install ..)
>             -> target dependencies are checked by test_pre_check
>     - test/ (contents go to fuego-core/engine/tests/Category.testname/)
>         testname.sh
>             + It should contain the same functions as it does now:
>                 test_pre_check, test_build, test_deploy, test_run, test_processing
>             + It should also contain a new one: test_download
>                 -> downloads stuff needed for the test into buildzone (source code, binaries..)
>                     + Executed as jenkins (not as root), so no installing
>                 -> e.g.: git clone -b next https://www.gitlab.com/ramon/testname.git
>                 -> e.g.: tar zxvf $TESTDIR/testname.tar.gz (for tarballs)
>         other files: parser.py, tarball, .. (optional)
>     - testname.spec (optional)
>         + Test specification (with default parameters first)
>             -> Tests without parameters don't need to provide it
>         + An entry will be added to testplan_default by fuego-install-test

Yeah - these all sound good.

I would add some dependency statements (but maybe these just go in the
testname.sh), e.g. NEED_KCONFIG=CONFIG_PRINTK_TIME=y.
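
A statement like that could then be checked generically before the test runs.
A sketch (this would run on the target, e.g. via the transport, and assumes
the target kernel has CONFIG_IKCONFIG_PROC enabled):

    # verify a kernel config dependency on the target
    NEED_KCONFIG="CONFIG_PRINTK_TIME=y"
    if ! zcat /proc/config.gz | grep -q "^${NEED_KCONFIG}$"; then
        echo "kernel config dependency not met: $NEED_KCONFIG"
        exit 1   # i.e. abort the job before running the test
    fi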
 
> > Also, I want to improve
> > the support for using test programs that are already present on the target because
> > they ship with its Linux distribution.
> 
> This should be easy: add a new variable "deploy" to indicate whether the test_deploy
> function needs to be called or not ("rebuild" should be false).
I think it can be done pretty easily with a new variable, but there are additional
issues besides build and deploy.

I'd like to make it so that an individual test can be used either way.  That
is, it should be a fuego user decision whether to use iozone built from
fuego sources, or to use the iozone built in to a distribution.  But I'd
like the same iozone.sh (base script) to work in either case.   This will
require more than just modifying the deploy step.  For example, the
test_run step needs to execute something off the path, or use
an absolute path to the program, rather than executing it from the
test directory on the target.

Also, I want fuego to be able to find the test binary on the target, so that the test
script can be independent of where different distributions might place the 
program.  I did some of the work on this with the function is_on_target(),
which is intended to be called during pre_test.  After this call the test
has a variable it can use to invoke the test program with an absolute path.
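
In other words, the base script would end up looking something like this
(a sketch; the exact is_on_target arguments and variable names are illustrative):

    test_pre_check() {
        # sets PROGRAM_IOZONE to the absolute path if the binary is found
        is_on_target iozone PROGRAM_IOZONE /bin:/usr/bin:/usr/local/bin
        assert_define PROGRAM_IOZONE
    }

    test_run() {
        # same invocation whether iozone came from the distro or from fuego
        report "$PROGRAM_IOZONE -a"
    }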

But I haven't yet integrated the trigger for using a pre-existing program versus doing the build
and deploy, or figured out how to make it so that test_run isn't required
to do an 'if' statement depending on the mode.
Also, the control variable shouldn't go in the board file (well, it could, but
it might be better as a global fuego setting, which we don't have yet).

I'm still thinking about the mechanism for this.

Thanks for the suggestions.  What you've outlined is very similar to what I've
been thinking about.
 -- Tim



* Re: [Fuego] Stuff Tim's been working on
  2017-01-20  1:39   ` Bird, Timothy
@ 2017-01-20  8:04     ` Daniel Sangorrin
  0 siblings, 0 replies; 5+ messages in thread
From: Daniel Sangorrin @ 2017-01-20  8:04 UTC (permalink / raw)
  To: 'Bird, Timothy', fuego

> >             -> testpackagelist.yaml
> >                 name: testname
> >                     description: this is a test for this and that.
> >                     categories: security, network
> >                     arch: arm, x86 (supported architectures)
> >                     stars: ** (number of stars 0-5 depending on the quality of your test package)
> >                     url: https://bitbucket.org/fuego/fuego-testname.git
> >                     sha512: safwerwr23fwefwfwsfawfwe...
> 
> I'm not sure about the format (yaml), but this captures the basic idea.
> From this, a web interface can also be presented very easily, so humans
> can browse for tests that they want to download to their fuego instance.
> 
> What is the sha512 for?

It's for checking that you downloaded the test correctly and that nobody has modified it.
# In practice, when checking, we should use the sha512 from the testpackagelist file (which should be signed).
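
The check itself could be as simple as:

    # verify the download against the sha512 from the (signed) testpackagelist
    echo "$SHA512  fuego-iozone.zip" | sha512sum -c -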

> You have a url that points to a git repository (and below
> you have a reference to a test directory in the local filesystem).  This avoids
> packaging issues, but I'd still like to see a test stored in a repository as a
> single standalone file.  So maybe something like this would also be
> supported:
> url: https://test-repository.fuego.org/version3.0/fuego-testname.deb

Yes, sure. Actually, I already provided a similar one in my mail:
alt-url: ftp://123.123.123.123/fuego-iozone.zip

I think there should be a generic way to download certain formats; for
the rest, the test's author will need to provide their own method in
the test_download() function. This keeps the mechanism generic but flexible.
[Note] the alt- prefix means that if the first URL fails, this one is tried. You probably
need separate sha512 values in this case.
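
A sketch of the fallback logic (the variable names are illustrative):

    # try the main url first, then the alt-url; each has its own sha512
    try_download() {
        local url="$1" sum="$2"
        wget -q -O fuego-iozone.zip "$url" &&
            echo "$sum  fuego-iozone.zip" | sha512sum -c -
    }
    try_download "$URL" "$SHA512" || try_download "$ALT_URL" "$ALT_SHA512"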

> The use of '.deb' in that filename is only intended to convey that it's a single
> standalone file that is a package archive with stuff inside it (and that we
> might re-use a standard package manager for managing test packages).
> It does not imply that you would use 'dpkg' to install a test to your host
> (or the docker container), instead of a specialized fuego package install command.
> 
> >                 name: iozone
> >                     description: this is iozone for testing disks
> >                     url: https://www.github.com/fuego/fuego-iozone.git
> >                     alt-url: ftp://123.123.123.123/fuego-iozone.zip
> >                     sha512: asfeewrfwfsdfsdfsd...
> >                 ...
> >         + Workflow
> >             docker# fuego-update-testpackagelist
> >                 -> downloads the latest testpackagelist.yaml to /etc/fuego/
> Sounds OK, but does this need to be a separate operation?

I was mimicking 'apt update' / 'apt install', but I think it actually makes sense
in certain situations.
For example, suppose you want to run exactly the same tests one year later.
Then all you have to do is save the testpackagelist.yaml and download
the old test versions (we would need to add version information for this, such as
the commit id and minor numbers).

> Note that I also envision test meta-data for tests as well, stored
> somewhere (on fuego.org?).  IMHO there needs to be a central
> repository of results that people can use to compare their results
> with, for each test, as well.

Maybe we need to create a REST API for that. 

I think it can be done in Jenkins, but it would probably need some Groovy.
How about a separate tornado or flask web application?  We can probably
use kernel-ci's frontend and backend for that.
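
A REST API would also let a test runner push its results with a single call,
e.g. (the endpoint below is completely hypothetical):

    # upload a test's results to a central results server
    curl -X POST -H "Content-Type: application/json" \
         -d @results.json https://fuego.org/api/v1/results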

> I think there should also be:
> docker# fuego-create-package tims-test
> that will create a standalone test package from the materials in the docker
> container, suitable for sharing with the world.

Do you mean a package containing the results of a test?

> > Installing from a testplan
> >     docker# fuego-install-test -p testplan_raspi.json (install tests in a testplan)
> >         -> non-official packages: write the url/folder in the testplan
> 
> I'm not sure I understand this.  Fuego can have multiple testplans per test,
> with each one specifying a selection of test variables that control the execution
> of that test.  This sounds like something else.  Can you elaborate?

I think it's explained above. Otherwise let me know :)

> I would add some dependency statements (but maybe these are just in the
> testname.sh).  e.g. NEED_KCONFIG=CONFIG_PRINTK_TIME=y
Mmm, good idea, but I think that should go in the test spec's JSON file, because
the required kernel functionality may differ depending on the test parameters.
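
For example, the spec file could carry the requirement per spec, roughly like
this (the need_kconfig field name is hypothetical):

    {
        "testName": "Benchmark.testname",
        "specs": [
            {
                "name": "default",
                "loops": "10",
                "need_kconfig": "CONFIG_PRINTK_TIME=y"
            }
        ]
    }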

> > > Also, I want to improve
> > > the support for using test programs that are already present on the target because
> > > they ship with its Linux distribution.
> >
> > This should be easy: add a new variable "deploy" to indicate whether the test_deploy
> > function needs to be called or not ("rebuild" should be false).
> I think it can be done pretty easily with a new variable, but there are additional
> issues besides build and deploy.
> 
> I'd like to make it so that an individual test can be used either way.  That
> is, it should be a fuego user decision whether to use iozone built from
> fuego sources, or to use the iozone built in to a distribution.  But I'd
> like the same iozone.sh (base script) to work in either case.   This will
> require more than just modifying the deploy step.  For example, the
> test_run step needs to execute something off the path, or use
> an absolute path to the program, rather than executing it from the
> test directory on the target.
> 
> Also, I want fuego to be able to find the test binary on the target, so that the test
> script can be independent of where different distributions might place the
> program.  I did some of the work on this with the function is_on_target(),
> which is intended to be called during pre_test.  After this call the test
> has a variable it can use to invoke the test program with an absolute path.

OK, I see your point now.
 
> But I haven't yet integrated the trigger for using a pre-existing program versus doing the build
> and deploy, or figured out how to make it so that test_run isn't required
> to do an 'if' statement depending on the mode.
> Also, the control variable shouldn't go in the board file (well, it could, but
> it might be better as a global fuego setting, which we don't have yet).

Does it need to be global? Couldn't it be a per-job parameter that you can specify in
your testplan file?

> Thanks for the suggestions.  What you've outlined is very similar to what I've
> been thinking about.

Thanks for your feedback. I'm glad we are on the same page.
Daniel



