* [Fuego] Integration of Fuego and Linaro test-definitions
@ 2019-02-14  1:53 daniel.sangorrin
  2019-02-14  8:10 ` daniel.sangorrin
  2019-02-14  8:27 ` Chase Qi
  0 siblings, 2 replies; 14+ messages in thread
From: daniel.sangorrin @ 2019-02-14  1:53 UTC (permalink / raw)
  To: chase.qi; +Cc: fuego

Hi Chase,

Thanks for your advice and comments. I have created a new thread for discussing the integration of Fuego and Linaro test-definitions.

> From: Chase Qi <chase.qi@linaro.org>
> BTW, I see you also started working on running fuego tests with LAVA.
> I did some investigation before Chinese New Year holiday. Here are my
> findings:
> 
> * fuego is very much docker and jenkins depended, it is not possible,
> at least no easy way to run without them.
> * it is possible to run fuego tests from command line.

I agree that there is no easy way.

Fuego depends on Jenkins for serializing/scheduling tests to boards/nodes. But it can also run without Jenkins when used from the command line, as you mention. To be sure, I would have to check whether there are any loose ends that affect ftc when Jenkins is removed from the Dockerfile.
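
For reference, a Jenkins-less run from inside the container looks roughly like this (the board and test names are just examples, assuming the usual ftc syntax):

	$ ftc list-boards
	$ ftc run-test -b bbb -t Functional.hello_world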

Having Fuego run on docker makes sure that anyone can get the same environment quickly and it protects the host system from Fuego bugs. Having said that, I would like to prepare a script to install fuego on the host system in the future.

> * as you pointed, parsing fuego's test result file in LAVA is easy to do.

The only problem is that I would need to run the Fuego parser on the target board.
For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board would need to install the python modules required by fuego-parser. This has been on my TODO list since I proposed it during the last Fuego jamboree. I will try to do it as soon as I can.

What alternatives do I have?
- Send the results to LAVA through a REST API instead of having it monitor the serial cable? Probably not possible.
- Create a simplified parser in the test itself (e.g. using our log_compare function). Not ideal, but possible (a sketch follows below).
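
For the second alternative, this is a minimal sketch of the kind of check I have in mind, based on Fuego's log_compare (test id, expected match count, regex, 'p'/'n' for positive/negative matches); the regexes are only examples:

	function test_processing {
		# require at least one "hello world" line in the test log
		log_compare "$TESTDIR" "1" "hello world" "p"
		# and no error lines
		log_compare "$TESTDIR" "0" "ERROR" "n"
	}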

In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python), while Linaro uses grep/awk/sed directly on the target. There is a trade-off there.

> * existing way to run fuego tests in LAVA are hacks. The problem is
> they don't scale, 'scale' means remote and distributed CI setup.

Yes, it is a hack.
I think Fuego is not supposed to run with LAVA, because the goals are very different.
But parts of Fuego can run with LAVA. This is what I think we can collaborate on.

> * I am tring to hanld both fuego host controller and DUT with LAVA.
> The first part is hard part. Still tring to find a way. About the host
> controller part, I started with LAVA-lxc protocol, but hit some
> jenkins and docker related issues. I feel build, publish and pull a
> fuego docker image is the way to go now.

I think this approach might be too hard.

This is my current work-in-progress approach:
https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego

- Manual usage (run locally)
	$ git clone https://github.com/sangorrin/test-definitions
	$ cd test-definitions
	$ . ./automated/bin/setenv.sh
	$ cd automated/linux/fuego/
	$ ./fuego.sh -d Functional.hello_world
	$  tree output/
		output/
		├── build <- equivalent to fuego buildzone
		│   ├── hello
		│   ├── hello.c
		│   ├── Makefile
		│   └── README.md
		├── fuego.Functional.hello_world <- equivalent to board test folder
		│   └── hello
		└── logs <- equivalent to logdir
			└── testlog.txt
- test-runner usage (run on remote board)
	$ cd test-definitions
	$ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
	$ ls ../output
		result.csv
		result.json

I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
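
For reference, the result.txt format and the fallback LAVA signal lines are simple; roughly (the paths and test case id are just examples):

	$ cat output/result.txt
	hello_world pass
	$ ../../utils/send-to-lava.sh output/result.txt
	<TEST_CASE_ID=hello_world RESULT=pass>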

By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on bash.
Currently Functional.hello_world works on sh, but I expect to find similar issues as I add more test definitions.
Is sh a hard requirement for you, or would you be fine with tests requiring bash?
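
For example, the 'function' keyword (used throughout fuego_test.sh) and arrays are typical bashisms that break under dash:

	# bash
	function test_run {
		steps=( build run )
	}

	# POSIX sh
	test_run() {
		steps="build run"
	}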
 
Thanks,
Daniel

> We probably should start a new thread for this topic to share progress?
> 
> Thanks,
> Chase
> 
> [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> 
> 
> > Thanks,
> > Daniel
> >
> > > -----Original Message-----
> > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > Sent: Thursday, February 14, 2019 6:51 AM
> > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > fuego@lists.linuxfoundation.org
> > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > >
> > > Comments inline below.
> > >
> > > > -----Original Message-----
> > > > From: Daniel Sangorrin
> > > >
> > > > This adds initial support for reusing Linaro test-definitions.
> > > > It is still a proof of concept and only tested with
> > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > is left.
> > > >
> > > > To try it follow these steps:
> > > >
> > > > - prepare SSH_KEY for your board
> > > >     Eg: Inside fuego's docker container do
> > > >     > su jenkins
> > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > >     > vi ~/.ssh/config
> > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > - ftc add-job -b bbb -t Functional.linaro
> > > > - execute the job from jenkins
> > > > - expected results
> > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > >     - run.json
> > > >     - csv
> > > >
> > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > ---
> > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > +++++++++++++++++++++++++++++++
> > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > >  5 files changed, 130 insertions(+)
> > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > >
> > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > b/tests/Functional.linaro/chart_config.json
> > > > new file mode 100644
> > > > index 0000000..b8c8fb6
> > > > --- /dev/null
> > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > @@ -0,0 +1,3 @@
> > > > +{
> > > > +    "chart_type": "testcase_table"
> > > > +}
> > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > b/tests/Functional.linaro/fuego_test.sh
> > > > new file mode 100755
> > > > index 0000000..17b56a9
> > > > --- /dev/null
> > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > @@ -0,0 +1,59 @@
> > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > +
> > > > +# Root permissions required for
> > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > specified
> > > > +# - executing some of the tests
> > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > +NEED_ROOT=1
> > > > +
> > > > +function test_pre_check {
> > > > +    # linaro parser dependencies
> > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > +    assert_has_program sed
> > > > +    assert_has_program awk
> > > > +    assert_has_program grep
> > > > +    assert_has_program egrep
> > > > +    assert_has_program tee
> > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > no need to check for them here.
> > > I already made a patch to remove those lines.
> > >
> > > > +
> > > > +    # test-runner requires a password-less connection
> > > > +    # Eg: Inside fuego's docker container do
> > > > +    # su jenkins
> > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > +    # vi ~/.ssh/config
> > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > ro/boards/$NODE_NAME.board)"
> > > > +}
> > > > +
> > > > +function test_build {
> > > > +    source ./automated/bin/setenv.sh
> > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > >
> > > OK.  I gave this a spin, and here's an error I got:
> > >
> > > ===== doing fuego phase: build =====
> > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > Cloning into 'fuego_git_repo'...
> > > Checkout branch/tag/commit id master.
> > > Already on 'master'
> > > Your branch is up-to-date with 'origin/master'.
> > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > BIN_PATH:
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > Downloading/unpacking pexpect (from -r
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in /usr/lib/python2.7/dist-packages (from -r
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 2))
> > > Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages (from
> -r
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 3))
> > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > Installing collected packages: pexpect, ptyprocess
> > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > >     transport, pw = yield from asyncio.get_event_loop()\
> > >                              ^
> > > SyntaxError: invalid syntax
> > >
> > > Successfully installed pexpect ptyprocess
> > > Cleaning up...
> > > Fuego test_build duration=1.56257462502 seconds
> > >
> > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > error after the first build of the job.
> > >
> > > > +}
> > > > +
> > > > +function test_run {
> > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > +
> > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > "automated/linux/smoke/smoke.yaml"}
> > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > +            abort_job "$yaml_file not found"
> > > > +    fi
> > > > +
> > > > +    if startswith "$yaml_file" "plans"; then
> > > > +            echo "using test plan: $yaml_file"
> > > > +            test_or_plan_flag="-p"
> > > > +    else
> > > > +            echo "using test definition: $yaml_file"
> > > > +            test_or_plan_flag="-d"
> > > > +    fi
> > > > +
> > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > +    else
> > > > +        PARAMS=""
> > > > +    fi
> > > > +
> > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > +}
> > > > +
> > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > repository, clean unnecessary files
> > > > diff --git a/tests/Functional.linaro/parser.py
> > > > b/tests/Functional.linaro/parser.py
> > > > new file mode 100755
> > > > index 0000000..48b502b
> > > > --- /dev/null
> > > > +++ b/tests/Functional.linaro/parser.py
> > > > @@ -0,0 +1,25 @@
> > > > +#!/usr/bin/python
> > > > +
> > > > +import os, sys, collections
> > > > +import common as plib
> > > > +import json
> > > > +
> > > > +# allocate variable to store the results
> > > > +measurements = {}
> > > > +measurements = collections.OrderedDict()
> > > > +
> > > > +# read results from linaro result.json format
> > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > +    data = json.load(f)[0]
> > > > +
> > > > +for test_case in data['metrics']:
> > > > +    test_case_id = test_case['test_case_id']
> > > > +    result = test_case['result']
> > > > +    # FIXTHIS: add measurements when available
> > > > +    # measurement = test_case['measurement']
> > > > +    # units = test_case['units']
> > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > +
> > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > +
> > > > +sys.exit(plib.process(measurements))
> > > > diff --git a/tests/Functional.linaro/spec.json
> > > > b/tests/Functional.linaro/spec.json
> > > > new file mode 100644
> > > > index 0000000..561e2ab
> > > > --- /dev/null
> > > > +++ b/tests/Functional.linaro/spec.json
> > > > @@ -0,0 +1,16 @@
> > > > +{
> > > > +    "testName": "Functional.linaro",
> > > > +    "specs": {
> > > > +        "default": {
> > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > +        },
> > > > +        "smoke": {
> > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > +            "params": "TESTS='pwd'",
> > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > +        }
> > > > +    }
> > > > +}
> > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > b/tests/Functional.linaro/test.yaml
> > > > new file mode 100644
> > > > index 0000000..a2efee8
> > > > --- /dev/null
> > > > +++ b/tests/Functional.linaro/test.yaml
> > > > @@ -0,0 +1,27 @@
> > > > +fuego_package_version: 1
> > > > +name: Functional.linaro
> > > > +description: |
> > > > +    Linaro test-definitions
> > > > +license: GPL-2.0
> > > > +author: Milosz Wasilewski, Chase Qi
> > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > +version: latest git commits
> > > > +fuego_release: 1
> > > > +type: Functional
> > > > +tags: ['kernel', 'linaro']
> > > > +git_src: https://github.com/Linaro/test-definitions
> > > > +params:
> > > > +    - YAML:
> > > > +        description: test definiton or plan.
> > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > +        optional: no
> > > > +    - PARAMS:
> > > > +        description: List of params for the test PARAM1=VALUE1
> > > > [PARAM2=VALUE2]
> > > > +        example: "TESTS='pwd'"
> > > > +        optional: yes
> > > > +data_files:
> > > > +    - chart_config.json
> > > > +    - fuego_test.sh
> > > > +    - parser.py
> > > > +    - spec.json
> > > > +    - test.yaml
> > > > --
> > > > 2.7.4
> > >
> > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > The issue may be something weird in my board file or configuration.
> > >
> > > ===== doing fuego phase: run =====
> > > -------------------------------------------------
> > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > BIN_PATH:
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > using test definition: automated/linux/smoke/smoke.yaml
> > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on root@10.0.1.74
> > > {'path':
> > > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > + export TESTRUN_ID=smoke-tests-basic
> > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > + cat uuid
> > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > + cd ./automated/linux/smoke/
> > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > INFO: install_deps skipped
> > >
> > > INFO: Running pwd test...
> > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > pwd pass
> > >
> > > INFO: Running lsb_release test...
> > > ./smoke.sh: 1: eval: lsb_release: not found
> > > lsb_release fail
> > >
> > > INFO: Running uname test...
> > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > uname pass
> > >
> > > INFO: Running ip test...
> > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > >     inet 127.0.0.1/8 scope host lo
> > >        valid_lft forever preferred_lft forever
> > >     inet6 ::1/128 scope host
> > >        valid_lft forever preferred_lft forever
> > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> > > qlen 1000
> > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > >        valid_lft forever preferred_lft forever
> > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > >        valid_lft forever preferred_lft forever
> > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > >     link/can
> > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > >     link/can
> > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
> > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > >        valid_lft forever preferred_lft forever
> > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > >        valid_lft forever preferred_lft forever
> > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
> > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > >        valid_lft forever preferred_lft forever
> > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > >        valid_lft forever preferred_lft forever
> > > ip pass
> > >
> > > INFO: Running lscpu test...
> > > Architecture:          armv7l
> > > Byte Order:            Little Endian
> > > CPU(s):                1
> > > On-line CPU(s) list:   0
> > > Thread(s) per core:    1
> > > Core(s) per socket:    1
> > > Socket(s):             1
> > > Model:                 2
> > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > CPU max MHz:           1000.0000
> > > CPU min MHz:           300.0000
> > > BogoMIPS:              995.32
> > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > lscpu pass
> > >
> > > INFO: Running vmstat test...
> > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > vmstat pass
> > >
> > > INFO: Running lsblk test...
> > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > mmcblk0      179:0    0 14.9G  0 disk
> > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > mmcblk1      179:8    0  1.8G  0 disk
> > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > mmcblk1boot0 179:16   0    1M  1 disk
> > > mmcblk1boot1 179:24   0    1M  1 disk
> > > lsblk pass
> > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > <TEST_CASE_ID=pwd RESULT=pass>
> > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > <TEST_CASE_ID=uname RESULT=pass>
> > > <TEST_CASE_ID=ip RESULT=pass>
> > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > >
> > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > =vmstat RESULT=pass>
> > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > --- Printing result.csv ---
> > > name,test_case_id,result,measurement,units,test_params
> > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > lsblk;SKIP_INSTALL=False"
> > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > lsblk;SKIP_INSTALL=False"
> > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > lsblk;SKIP_INSTALL=False"
> > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > lsblk;SKIP_INSTALL=False"
> > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > lsblk;SKIP_INSTALL=False"
> > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > lsblk;SKIP_INSTALL=False"
> > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > lsblk;SKIP_INSTALL=False"
> > >
> > > -------------------------------------------------
> > > ===== doing fuego phase: post_test =====
> > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > INFO: the test did not produce a test log on the target
> > > ===== doing fuego phase: processing =====
> > > ### WARNING: Program returned exit code ''
> > > ### WARNING: Log evaluation may be invalid
> > > ### Unrecognized results format
> > > ### Unrecognized results format
> > > ### Unrecognized results format
> > > ### Unrecognized results format
> > > ### Unrecognized results format
> > > ### Unrecognized results format
> > > ### Unrecognized results format
> > > ERROR: results did not satisfy the threshold
> > > Fuego: requested test phases complete!
> > > Build step 'Execute shell' marked build as failure
> > >
> > > ----------
> > >
> > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > that I should fix, please let me know.
> > >
> > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > >
> > > Just FYI.  Thanks for the code.
> > >  -- Tim
> >
> > _______________________________________________
> > Fuego mailing list
> > Fuego@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/fuego


* Re: [Fuego] Integration of Fuego and Linaro test-definitions
  2019-02-14  1:53 [Fuego] Integration of Fuego and Linaro test-definitions daniel.sangorrin
@ 2019-02-14  8:10 ` daniel.sangorrin
  2019-02-14  8:51   ` Chase Qi
  2019-02-14  8:27 ` Chase Qi
  1 sibling, 1 reply; 14+ messages in thread
From: daniel.sangorrin @ 2019-02-14  8:10 UTC (permalink / raw)
  To: daniel.sangorrin, chase.qi; +Cc: fuego

Hi again Chase,

> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org <fuego-bounces@lists.linuxfoundation.org> On Behalf Of
[...]
> This is my current work-in-progress approach:
> https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> 
> - Manual usage (run locally)
> 	$ git clone https://github.com/sangorrin/test-definitions
> 	$ cd test-definitions
> 	$ . ./automated/bin/setenv.sh
> 	$ cd automated/linux/fuego/
> 	$ ./fuego.sh -d Functional.hello_world
> 	$  tree output/
> 		output/
> 		├── build <- equivalent to fuego buildzone
> 		│   ├── hello
> 		│   ├── hello.c
> 		│   ├── Makefile
> 		│   └── README.md
> 		├── fuego.Functional.hello_world <- equivalent to board test folder
> 		│   └── hello
> 		└── logs <- equivalent to logdir
> 			└── testlog.txt
> - test-runner usage (run on remote board)
> 	$ cd test-definitions
> 	$ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> 	$ ls ../output
> 		result.csv
> 		result.json
> 
> I have yet to add the LAVA messages and prepare result.txt but it will be working soon.

I have modified the code to prepare a result.txt and generate LAVA messages, but I have no LAVA setup right now. If you test it, please let me know whether LAVA recognizes the test results.

Thanks,
Daniel

> 
> By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on bash.
> Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> 
> Thanks,
> Daniel
> 
> > We probably should start a new thread for this topic to share progress?
> >
> > Thanks,
> > Chase
> >
> > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> >
> >
> > > Thanks,
> > > Daniel
> > >
> > > > -----Original Message-----
> > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > fuego@lists.linuxfoundation.org
> > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > >
> > > > Comments inline below.
> > > >
> > > > > -----Original Message-----
> > > > > From: Daniel Sangorrin
> > > > >
> > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > It is still a proof of concept and only tested with
> > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > is left.
> > > > >
> > > > > To try it follow these steps:
> > > > >
> > > > > - prepare SSH_KEY for your board
> > > > >     Eg: Inside fuego's docker container do
> > > > >     > su jenkins
> > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > >     > vi ~/.ssh/config
> > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > - execute the job from jenkins
> > > > > - expected results
> > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > >     - run.json
> > > > >     - csv
> > > > >
> > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > ---
> > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > +++++++++++++++++++++++++++++++
> > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > >  5 files changed, 130 insertions(+)
> > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > >
> > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > b/tests/Functional.linaro/chart_config.json
> > > > > new file mode 100644
> > > > > index 0000000..b8c8fb6
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > @@ -0,0 +1,3 @@
> > > > > +{
> > > > > +    "chart_type": "testcase_table"
> > > > > +}
> > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > new file mode 100755
> > > > > index 0000000..17b56a9
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > @@ -0,0 +1,59 @@
> > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > +
> > > > > +# Root permissions required for
> > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > specified
> > > > > +# - executing some of the tests
> > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > +NEED_ROOT=1
> > > > > +
> > > > > +function test_pre_check {
> > > > > +    # linaro parser dependencies
> > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > +    assert_has_program sed
> > > > > +    assert_has_program awk
> > > > > +    assert_has_program grep
> > > > > +    assert_has_program egrep
> > > > > +    assert_has_program tee
> > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > no need to check for them here.
> > > > I already made a patch to remove those lines.
> > > >
> > > > > +
> > > > > +    # test-runner requires a password-less connection
> > > > > +    # Eg: Inside fuego's docker container do
> > > > > +    # su jenkins
> > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > +    # vi ~/.ssh/config
> > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > ro/boards/$NODE_NAME.board)"
> > > > > +}
> > > > > +
> > > > > +function test_build {
> > > > > +    source ./automated/bin/setenv.sh
> > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > >
> > > > OK.  I gave this a spin, and here's an error I got:
> > > >
> > > > ===== doing fuego phase: build =====
> > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > Cloning into 'fuego_git_repo'...
> > > > Checkout branch/tag/commit id master.
> > > > Already on 'master'
> > > > Your branch is up-to-date with 'origin/master'.
> > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > BIN_PATH:
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > Downloading/unpacking pexpect (from -r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in /usr/lib/python2.7/dist-packages (from
> -r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 2))
> > > > Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages
> (from
> > -r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 3))
> > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > Installing collected packages: pexpect, ptyprocess
> > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > >                              ^
> > > > SyntaxError: invalid syntax
> > > >
> > > > Successfully installed pexpect ptyprocess
> > > > Cleaning up...
> > > > Fuego test_build duration=1.56257462502 seconds
> > > >
> > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > error after the first build of the job.
> > > >
> > > > > +}
> > > > > +
> > > > > +function test_run {
> > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > +
> > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > +            abort_job "$yaml_file not found"
> > > > > +    fi
> > > > > +
> > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > +            echo "using test plan: $yaml_file"
> > > > > +            test_or_plan_flag="-p"
> > > > > +    else
> > > > > +            echo "using test definition: $yaml_file"
> > > > > +            test_or_plan_flag="-d"
> > > > > +    fi
> > > > > +
> > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > +    else
> > > > > +        PARAMS=""
> > > > > +    fi
> > > > > +
> > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > +}
> > > > > +
> > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > repository, clean unnecessary files
> > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > b/tests/Functional.linaro/parser.py
> > > > > new file mode 100755
> > > > > index 0000000..48b502b
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > @@ -0,0 +1,25 @@
> > > > > +#!/usr/bin/python
> > > > > +
> > > > > +import os, sys, collections
> > > > > +import common as plib
> > > > > +import json
> > > > > +
> > > > > +# allocate variable to store the results
> > > > > +measurements = {}
> > > > > +measurements = collections.OrderedDict()
> > > > > +
> > > > > +# read results from linaro result.json format
> > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > +    data = json.load(f)[0]
> > > > > +
> > > > > +for test_case in data['metrics']:
> > > > > +    test_case_id = test_case['test_case_id']
> > > > > +    result = test_case['result']
> > > > > +    # FIXTHIS: add measurements when available
> > > > > +    # measurement = test_case['measurement']
> > > > > +    # units = test_case['units']
> > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > +
> > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > +
> > > > > +sys.exit(plib.process(measurements))
> > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > b/tests/Functional.linaro/spec.json
> > > > > new file mode 100644
> > > > > index 0000000..561e2ab
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > @@ -0,0 +1,16 @@
> > > > > +{
> > > > > +    "testName": "Functional.linaro",
> > > > > +    "specs": {
> > > > > +        "default": {
> > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > +        },
> > > > > +        "smoke": {
> > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > +            "params": "TESTS='pwd'",
> > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > +        }
> > > > > +    }
> > > > > +}
> > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > b/tests/Functional.linaro/test.yaml
> > > > > new file mode 100644
> > > > > index 0000000..a2efee8
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > @@ -0,0 +1,27 @@
> > > > > +fuego_package_version: 1
> > > > > +name: Functional.linaro
> > > > > +description: |
> > > > > +    Linaro test-definitions
> > > > > +license: GPL-2.0
> > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > +version: latest git commits
> > > > > +fuego_release: 1
> > > > > +type: Functional
> > > > > +tags: ['kernel', 'linaro']
> > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > +params:
> > > > > +    - YAML:
> > > > > +        description: test definiton or plan.
> > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > +        optional: no
> > > > > +    - PARAMS:
> > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > [PARAM2=VALUE2]
> > > > > +        example: "TESTS='pwd'"
> > > > > +        optional: yes
> > > > > +data_files:
> > > > > +    - chart_config.json
> > > > > +    - fuego_test.sh
> > > > > +    - parser.py
> > > > > +    - spec.json
> > > > > +    - test.yaml
> > > > > --
> > > > > 2.7.4
> > > >
> > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > The issue may be something weird in my board file or configuration.
> > > >
> > > > ===== doing fuego phase: run =====
> > > > -------------------------------------------------
> > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > BIN_PATH:
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on root@10.0.1.74
> > > > {'path':
> > > > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > + export TESTRUN_ID=smoke-tests-basic
> > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > + cat uuid
> > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > + cd ./automated/linux/smoke/
> > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > INFO: install_deps skipped
> > > >
> > > > INFO: Running pwd test...
> > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > pwd pass
> > > >
> > > > INFO: Running lsb_release test...
> > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > lsb_release fail
> > > >
> > > > INFO: Running uname test...
> > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > uname pass
> > > >
> > > > INFO: Running ip test...
> > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > >     inet 127.0.0.1/8 scope host lo
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 ::1/128 scope host
> > > >        valid_lft forever preferred_lft forever
> > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> default
> > > > qlen 1000
> > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > >     link/can
> > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > >     link/can
> > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen
> 1000
> > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen
> 1000
> > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > ip pass
> > > >
> > > > INFO: Running lscpu test...
> > > > Architecture:          armv7l
> > > > Byte Order:            Little Endian
> > > > CPU(s):                1
> > > > On-line CPU(s) list:   0
> > > > Thread(s) per core:    1
> > > > Core(s) per socket:    1
> > > > Socket(s):             1
> > > > Model:                 2
> > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > CPU max MHz:           1000.0000
> > > > CPU min MHz:           300.0000
> > > > BogoMIPS:              995.32
> > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > lscpu pass
> > > >
> > > > INFO: Running vmstat test...
> > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > vmstat pass
> > > >
> > > > INFO: Running lsblk test...
> > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > lsblk pass
> > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > >
> > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > =vmstat RESULT=pass>
> > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > --- Printing result.csv ---
> > > > name,test_case_id,result,measurement,units,test_params
> > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > >
> > > > -------------------------------------------------
> > > > ===== doing fuego phase: post_test =====
> > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > INFO: the test did not produce a test log on the target
> > > > ===== doing fuego phase: processing =====
> > > > ### WARNING: Program returned exit code ''
> > > > ### WARNING: Log evaluation may be invalid
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ERROR: results did not satisfy the threshold
> > > > Fuego: requested test phases complete!
> > > > Build step 'Execute shell' marked build as failure
> > > >
> > > > ----------
> > > >
> > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > that I should fix, please let me know.
> > > >
> > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > >
> > > > Just FYI.  Thanks for the code.
> > > >  -- Tim
> > >
> > > _______________________________________________
> > > Fuego mailing list
> > > Fuego@lists.linuxfoundation.org
> > > https://lists.linuxfoundation.org/mailman/listinfo/fuego
> _______________________________________________
> Fuego mailing list
> Fuego@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/fuego


* Re: [Fuego] Integration of Fuego and Linaro test-definitions
  2019-02-14  1:53 [Fuego] Integration of Fuego and Linaro test-definitions daniel.sangorrin
  2019-02-14  8:10 ` daniel.sangorrin
@ 2019-02-14  8:27 ` Chase Qi
  2019-02-21  5:45   ` daniel.sangorrin
  1 sibling, 1 reply; 14+ messages in thread
From: Chase Qi @ 2019-02-14  8:27 UTC (permalink / raw)
  To: daniel.sangorrin; +Cc: fuego

Hi Daniel,

Thanks for the comments.

On Thu, Feb 14, 2019 at 9:53 AM <daniel.sangorrin@toshiba.co.jp> wrote:
>
> Hi Chase,
>
> Thanks for your advice and comments. I have created a new thread for discussing the integration of Fuego and Linaro test-definitions.
>
> > From: Chase Qi <chase.qi@linaro.org>
> > BTW, I see you also started working on running fuego tests with LAVA.
> > I did some investigation before Chinese New Year holiday. Here are my
> > findings:
> >
> > * fuego is very much docker and jenkins depended, it is not possible,
> > at least no easy way to run without them.
> > * it is possible to run fuego tests from command line.
>
> I agree that there is no easy way.
>
> Fuego depends on Jenkins for serializing/scheduling tests to boards/nodes. But it can also run without Jenkins when used from the command line, as you mention. To be sure I would have to check if there is some loose end that affects ftc when removing Jenkins from the Dockerfile.
>
> Having Fuego run on docker makes sure that anyone can get the same environment quickly and it protects the host system from Fuego bugs. Having said that, I would like to prepare a script to install fuego on the host system in the future.

Please post on the ML or just let me know when you have it. I *want* it.

>
> > * as you pointed, parsing fuego's test result file in LAVA is easy to do.
>
> The only problem is that I would need to run the Fuego parser on the target board.
> For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board would need to install the python modules required by fuego-parser. This is on my TODO list since I proposed it during the last Fuego jamboree. I will try to do it as soon as i can.
>
> What alternatives do I have?
> - send the results to LAVA through a REST API instead of having it monitor the serial cable? probably not possible.
> - create a simplified parser on the test (e.g. using our log_compare function). Not ideal, but possible.
>
> In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python), while Linaro uses grep/awk/sed directly on the target. There is a trade-off there.
>
> > * existing way to run fuego tests in LAVA are hacks. The problem is
> > they don't scale, 'scale' means remote and distributed CI setup.
>
> Yes, it is a hack.
> I think Fuego is not supposed to run with LAVA, because the goals are very different.
> But parts of Fuego can run with LAVA. This is what I think we can collaborate on.

Yes, +1. When running with LAVA, IMHO, only the backend and real tests
are needed.

>
> > * I am tring to hanld both fuego host controller and DUT with LAVA.
> > The first part is hard part. Still tring to find a way. About the host
> > controller part, I started with LAVA-lxc protocol, but hit some
> > jenkins and docker related issues. I feel build, publish and pull a
> > fuego docker image is the way to go now.
>
> I think this approach might be too hard.

LAVA v2 introduced the lxc protocol. With the protocol, a single-node
test job can deploy and boot an lxc container to control the DUT. Here
is an example: https://lkft.validation.linaro.org/scheduler/job/605270 .
The example job uses the lxc container to deploy images to the DUT. If
the DUT is configured with a static IP, the IP is known to the lxc
container via the LAVA helper lava-target-ip, so an ssh connection
between the lxc and the DUT is possible. Based on these features, I
thought we could run fuego tests with LAVA just like how we run them
now. As mentioned above, there is no, and will be no, support for a
docker protocol in LAVA, and migrating the fuego installation to lxc
is also problematic. Please do let me know once you have a script for
fuego installation. I am having problems with that: jenkins missing,
docker missing, permission issues, etc. Once I am able to install
fuego within lxc, I can prepare a job example. It would be one test
definition for all fuego tests. This is how we did it before;
`automated/linux/workload-automation3` is a good example.
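
Just to illustrate the ssh part, inside the lxc something like the
following (the exact command is only an example) should be enough to
reach the DUT:

	DUT_IP=$(lava-target-ip)
	ssh root@"${DUT_IP}" 'uname -a'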

Alternatively, I could launch a docker device and the DUT with a
multinode job, but that is complex. Also, the fuego docker container
eats a lot of memory (blame jenkins?). The existing docker devices in
our lab only have 1G of memory configured.

>
> This is my current work-in-progress approach:
> https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
>
> - Manual usage (run locally)
>         $ git clone https://github.com/sangorrin/test-definitions
>         $ cd test-definitions
>         $ . ./automated/bin/setenv.sh
>         $ cd automated/linux/fuego/
>         $ ./fuego.sh -d Functional.hello_world
>         $  tree output/
>                 output/
>                 ├── build <- equivalent to fuego buildzone
>                 │   ├── hello
>                 │   ├── hello.c
>                 │   ├── Makefile
>                 │   └── README.md
>                 ├── fuego.Functional.hello_world <- equivalent to board test folder
>                 │   └── hello
>                 └── logs <- equivalent to logdir
>                         └── testlog.txt
> - test-runner usage (run on remote board)
>         $ cd test-definitions
>         $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
>         $ ls ../output
>                 result.csv
>                 result.json
>
> I have yet to add the LAVA messages and prepare result.txt but it will be working soon.

You don't have to. It looks like a done job to me: send-to-lava.sh
will take care of it. When running in LAVA, the helper uses
lava-test-case for result collection, and when running without LAVA,
the helper prints result lines in a fixed format for result parsing
within test-runner. (While writing this, I noticed your next reply;
maybe I am looking at the latest code already. I will give it a spin
with LAVA and come back to you.)
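
Roughly, for each "test-case-id result" line in result.txt the helper
either calls lava-test-case (when running in LAVA) or prints the
fixed-format line, e.g. (example test case):

	lava-test-case pwd --result pass
	# or, when lava-test-case is not available:
	<TEST_CASE_ID=pwd RESULT=pass>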

So basically, we are approaching this from two different directions.
From my point of view, you are porting fuego tests to Linaro
test-definitions natively. Although I am not yet sure how the
integration between these two projects will go, we are happy to see
this happening :)

>
> By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on bash.
> Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
>

lava-test-shell requires a POSIX shell. We normally use /bin/sh, which
links to dash on Debian-based distros, and we also have some test
definitions, like ltp and android tradefed, that use bash. bash has
some extensions that are not POSIX compatible. IMHO, using bash
without these extensions is totally fine. We are using shellcheck in
the sanity check to detect potential POSIX issues.
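
For example, running shellcheck locally catches most of them (the path
below is just an example):

	$ shellcheck --shell=sh automated/linux/fuego/fuego.sh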

Thanks,
Chase


> Thanks,
> Daniel
>
> > We probably should start a new thread for this topic to share progress?
> >
> > Thanks,
> > Chase
> >
> > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> >
> >
> > > Thanks,
> > > Daniel
> > >
> > > > -----Original Message-----
> > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > fuego@lists.linuxfoundation.org
> > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > >
> > > > Comments inline below.
> > > >
> > > > > -----Original Message-----
> > > > > From: Daniel Sangorrin
> > > > >
> > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > It is still a proof of concept and only tested with
> > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > is left.
> > > > >
> > > > > To try it follow these steps:
> > > > >
> > > > > - prepare SSH_KEY for your board
> > > > >     Eg: Inside fuego's docker container do
> > > > >     > su jenkins
> > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > >     > vi ~/.ssh/config
> > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > - execute the job from jenkins
> > > > > - expected results
> > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > >     - run.json
> > > > >     - csv
> > > > >
> > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > ---
> > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > +++++++++++++++++++++++++++++++
> > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > >  5 files changed, 130 insertions(+)
> > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > >
> > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > b/tests/Functional.linaro/chart_config.json
> > > > > new file mode 100644
> > > > > index 0000000..b8c8fb6
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > @@ -0,0 +1,3 @@
> > > > > +{
> > > > > +    "chart_type": "testcase_table"
> > > > > +}
> > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > new file mode 100755
> > > > > index 0000000..17b56a9
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > @@ -0,0 +1,59 @@
> > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > +
> > > > > +# Root permissions required for
> > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > specified
> > > > > +# - executing some of the tests
> > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > +NEED_ROOT=1
> > > > > +
> > > > > +function test_pre_check {
> > > > > +    # linaro parser dependencies
> > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > +    assert_has_program sed
> > > > > +    assert_has_program awk
> > > > > +    assert_has_program grep
> > > > > +    assert_has_program egrep
> > > > > +    assert_has_program tee
> > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > no need to check for them here.
> > > > I already made a patch to remove those lines.
> > > >
> > > > > +
> > > > > +    # test-runner requires a password-less connection
> > > > > +    # Eg: Inside fuego's docker container do
> > > > > +    # su jenkins
> > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > +    # vi ~/.ssh/config
> > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > ro/boards/$NODE_NAME.board)"
> > > > > +}
> > > > > +
> > > > > +function test_build {
> > > > > +    source ./automated/bin/setenv.sh
> > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > >
> > > > OK.  I gave this a spin, and here's an error I got:
> > > >
> > > > ===== doing fuego phase: build =====
> > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > Cloning into 'fuego_git_repo'...
> > > > Checkout branch/tag/commit id master.
> > > > Already on 'master'
> > > > Your branch is up-to-date with 'origin/master'.
> > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > BIN_PATH:
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > Downloading/unpacking pexpect (from -r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in /usr/lib/python2.7/dist-packages (from -r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 2))
> > > > Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages (from
> > -r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 3))
> > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > Installing collected packages: pexpect, ptyprocess
> > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > >                              ^
> > > > SyntaxError: invalid syntax
> > > >
> > > > Successfully installed pexpect ptyprocess
> > > > Cleaning up...
> > > > Fuego test_build duration=1.56257462502 seconds
> > > >
> > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > error after the first build of the job.
> > > >
> > > > > +}
> > > > > +
> > > > > +function test_run {
> > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > +
> > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > +            abort_job "$yaml_file not found"
> > > > > +    fi
> > > > > +
> > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > +            echo "using test plan: $yaml_file"
> > > > > +            test_or_plan_flag="-p"
> > > > > +    else
> > > > > +            echo "using test definition: $yaml_file"
> > > > > +            test_or_plan_flag="-d"
> > > > > +    fi
> > > > > +
> > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > +    else
> > > > > +        PARAMS=""
> > > > > +    fi
> > > > > +
> > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > +}
> > > > > +
> > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > repository, clean unnecessary files
> > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > b/tests/Functional.linaro/parser.py
> > > > > new file mode 100755
> > > > > index 0000000..48b502b
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > @@ -0,0 +1,25 @@
> > > > > +#!/usr/bin/python
> > > > > +
> > > > > +import os, sys, collections
> > > > > +import common as plib
> > > > > +import json
> > > > > +
> > > > > +# allocate variable to store the results
> > > > > +measurements = {}
> > > > > +measurements = collections.OrderedDict()
> > > > > +
> > > > > +# read results from linaro result.json format
> > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > +    data = json.load(f)[0]
> > > > > +
> > > > > +for test_case in data['metrics']:
> > > > > +    test_case_id = test_case['test_case_id']
> > > > > +    result = test_case['result']
> > > > > +    # FIXTHIS: add measurements when available
> > > > > +    # measurement = test_case['measurement']
> > > > > +    # units = test_case['units']
> > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > +
> > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > +
> > > > > +sys.exit(plib.process(measurements))
> > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > b/tests/Functional.linaro/spec.json
> > > > > new file mode 100644
> > > > > index 0000000..561e2ab
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > @@ -0,0 +1,16 @@
> > > > > +{
> > > > > +    "testName": "Functional.linaro",
> > > > > +    "specs": {
> > > > > +        "default": {
> > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > +        },
> > > > > +        "smoke": {
> > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > +            "params": "TESTS='pwd'",
> > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > +        }
> > > > > +    }
> > > > > +}
> > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > b/tests/Functional.linaro/test.yaml
> > > > > new file mode 100644
> > > > > index 0000000..a2efee8
> > > > > --- /dev/null
> > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > @@ -0,0 +1,27 @@
> > > > > +fuego_package_version: 1
> > > > > +name: Functional.linaro
> > > > > +description: |
> > > > > +    Linaro test-definitions
> > > > > +license: GPL-2.0
> > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > +version: latest git commits
> > > > > +fuego_release: 1
> > > > > +type: Functional
> > > > > +tags: ['kernel', 'linaro']
> > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > +params:
> > > > > +    - YAML:
> > > > > +        description: test definiton or plan.
> > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > +        optional: no
> > > > > +    - PARAMS:
> > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > [PARAM2=VALUE2]
> > > > > +        example: "TESTS='pwd'"
> > > > > +        optional: yes
> > > > > +data_files:
> > > > > +    - chart_config.json
> > > > > +    - fuego_test.sh
> > > > > +    - parser.py
> > > > > +    - spec.json
> > > > > +    - test.yaml
> > > > > --
> > > > > 2.7.4
> > > >
> > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > The issue may be something weird in my board file or configuration.
> > > >
> > > > ===== doing fuego phase: run =====
> > > > -------------------------------------------------
> > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > BIN_PATH:
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on root@10.0.1.74
> > > > {'path':
> > > > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > + export TESTRUN_ID=smoke-tests-basic
> > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > + cat uuid
> > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > + cd ./automated/linux/smoke/
> > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > INFO: install_deps skipped
> > > >
> > > > INFO: Running pwd test...
> > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > pwd pass
> > > >
> > > > INFO: Running lsb_release test...
> > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > lsb_release fail
> > > >
> > > > INFO: Running uname test...
> > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > uname pass
> > > >
> > > > INFO: Running ip test...
> > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > >     inet 127.0.0.1/8 scope host lo
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 ::1/128 scope host
> > > >        valid_lft forever preferred_lft forever
> > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> > > > qlen 1000
> > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > >     link/can
> > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > >     link/can
> > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
> > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
> > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > ip pass
> > > >
> > > > INFO: Running lscpu test...
> > > > Architecture:          armv7l
> > > > Byte Order:            Little Endian
> > > > CPU(s):                1
> > > > On-line CPU(s) list:   0
> > > > Thread(s) per core:    1
> > > > Core(s) per socket:    1
> > > > Socket(s):             1
> > > > Model:                 2
> > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > CPU max MHz:           1000.0000
> > > > CPU min MHz:           300.0000
> > > > BogoMIPS:              995.32
> > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > lscpu pass
> > > >
> > > > INFO: Running vmstat test...
> > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > vmstat pass
> > > >
> > > > INFO: Running lsblk test...
> > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > lsblk pass
> > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > >
> > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > =vmstat RESULT=pass>
> > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > --- Printing result.csv ---
> > > > name,test_case_id,result,measurement,units,test_params
> > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > lsblk;SKIP_INSTALL=False"
> > > >
> > > > -------------------------------------------------
> > > > ===== doing fuego phase: post_test =====
> > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > INFO: the test did not produce a test log on the target
> > > > ===== doing fuego phase: processing =====
> > > > ### WARNING: Program returned exit code ''
> > > > ### WARNING: Log evaluation may be invalid
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ### Unrecognized results format
> > > > ERROR: results did not satisfy the threshold
> > > > Fuego: requested test phases complete!
> > > > Build step 'Execute shell' marked build as failure
> > > >
> > > > ----------
> > > >
> > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > that I should fix, please let me know.
> > > >
> > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > >
> > > > Just FYI.  Thanks for the code.
> > > >  -- Tim
> > >
> > > _______________________________________________
> > > Fuego mailing list
> > > Fuego@lists.linuxfoundation.org
> > > https://lists.linuxfoundation.org/mailman/listinfo/fuego


* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-14  8:10 ` daniel.sangorrin
@ 2019-02-14  8:51   ` Chase Qi
  2019-02-19  5:28     ` daniel.sangorrin
  0 siblings, 1 reply; 14+ messages in thread
From: Chase Qi @ 2019-02-14  8:51 UTC (permalink / raw)
  To: daniel.sangorrin; +Cc: fuego

On Thu, Feb 14, 2019 at 4:10 PM <daniel.sangorrin@toshiba.co.jp> wrote:
>
> Hi again Chase,
>
> > -----Original Message-----
> > From: fuego-bounces@lists.linuxfoundation.org <fuego-bounces@lists.linuxfoundation.org> On Behalf Of
> [...]
> > This is my current work-in-progress approach:
> > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> >
> > - Manual usage (run locally)
> >       $ git clone https://github.com/sangorrin/test-definitions
> >       $ cd test-definitions
> >       $ . ./automated/bin/setenv.sh
> >       $ cd automated/linux/fuego/
> >       $ ./fuego.sh -d Functional.hello_world
> >       $  tree output/
> >               output/
> >               ├── build <- equivalent to fuego buildzone
> >               │   ├── hello
> >               │   ├── hello.c
> >               │   ├── Makefile
> >               │   └── README.md
> >               ├── fuego.Functional.hello_world <- equivalent to board test folder
> >               │   └── hello
> >               └── logs <- equivalent to logdir
> >                       └── testlog.txt
> > - test-runner usage (run on remote board)
> >       $ cd test-definitions
> >       $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> >       $ ls ../output
> >               result.csv
> >               result.json
> >
> > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
>
> I have modified the code to prepare a result.txt and generate LAVA messages but I have no LAVA setup right now. If you test it please let me know if LAVA recognized the test result.

The result looks good in LAVA. I tested with an lxc device. In
https://validation.linaro.org/scheduler/job/1906258 the test failed
because make was missing. Once build-essential was installed, the test
passed: https://validation.linaro.org/results/1906260/1_fuego-hello-world

Thanks,
Chase

>
> Thanks,
> Daniel
>
> >
> > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on bash.
> > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> > Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> >
> > Thanks,
> > Daniel
> >
> > > We probably should start a new thread for this topic to share progress?
> > >
> > > Thanks,
> > > Chase
> > >
> > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > >
> > >
> > > > Thanks,
> > > > Daniel
> > > >
> > > > > -----Original Message-----
> > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > fuego@lists.linuxfoundation.org
> > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > >
> > > > > Comments inline below.
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Daniel Sangorrin
> > > > > >
> > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > It is still a proof of concept and only tested with
> > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > is left.
> > > > > >
> > > > > > To try it follow these steps:
> > > > > >
> > > > > > - prepare SSH_KEY for your board
> > > > > >     Eg: Inside fuego's docker container do
> > > > > >     > su jenkins
> > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > >     > vi ~/.ssh/config
> > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > - execute the job from jenkins
> > > > > > - expected results
> > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > >     - run.json
> > > > > >     - csv
> > > > > >
> > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > ---
> > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > +++++++++++++++++++++++++++++++
> > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > >  5 files changed, 130 insertions(+)
> > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > >
> > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > new file mode 100644
> > > > > > index 0000000..b8c8fb6
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > @@ -0,0 +1,3 @@
> > > > > > +{
> > > > > > +    "chart_type": "testcase_table"
> > > > > > +}
> > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > new file mode 100755
> > > > > > index 0000000..17b56a9
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > @@ -0,0 +1,59 @@
> > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > +
> > > > > > +# Root permissions required for
> > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > specified
> > > > > > +# - executing some of the tests
> > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > +NEED_ROOT=1
> > > > > > +
> > > > > > +function test_pre_check {
> > > > > > +    # linaro parser dependencies
> > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > +    assert_has_program sed
> > > > > > +    assert_has_program awk
> > > > > > +    assert_has_program grep
> > > > > > +    assert_has_program egrep
> > > > > > +    assert_has_program tee
> > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > no need to check for them here.
> > > > > I already made a patch to remove those lines.
> > > > >
> > > > > > +
> > > > > > +    # test-runner requires a password-less connection
> > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > +    # su jenkins
> > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > +    # vi ~/.ssh/config
> > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > +}
> > > > > > +
> > > > > > +function test_build {
> > > > > > +    source ./automated/bin/setenv.sh
> > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > >
> > > > > OK.  I gave this a spin, and here's an error I got:
> > > > >
> > > > > ===== doing fuego phase: build =====
> > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > Cloning into 'fuego_git_repo'...
> > > > > Checkout branch/tag/commit id master.
> > > > > Already on 'master'
> > > > > Your branch is up-to-date with 'origin/master'.
> > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > BIN_PATH:
> > > > >
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > Downloading/unpacking pexpect (from -r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in /usr/lib/python2.7/dist-packages (from
> > -r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 2))
> > > > > Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages
> > (from
> > > -r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 3))
> > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt (line 1))
> > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > Installing collected packages: pexpect, ptyprocess
> > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > >                              ^
> > > > > SyntaxError: invalid syntax
> > > > >
> > > > > Successfully installed pexpect ptyprocess
> > > > > Cleaning up...
> > > > > Fuego test_build duration=1.56257462502 seconds
> > > > >
> > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > error after the first build of the job.
> > > > >
> > > > > > +}
> > > > > > +
> > > > > > +function test_run {
> > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > +
> > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > +            abort_job "$yaml_file not found"
> > > > > > +    fi
> > > > > > +
> > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > +            echo "using test plan: $yaml_file"
> > > > > > +            test_or_plan_flag="-p"
> > > > > > +    else
> > > > > > +            echo "using test definition: $yaml_file"
> > > > > > +            test_or_plan_flag="-d"
> > > > > > +    fi
> > > > > > +
> > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > +    else
> > > > > > +        PARAMS=""
> > > > > > +    fi
> > > > > > +
> > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > +}
> > > > > > +
> > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > repository, clean unnecessary files
> > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > b/tests/Functional.linaro/parser.py
> > > > > > new file mode 100755
> > > > > > index 0000000..48b502b
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > @@ -0,0 +1,25 @@
> > > > > > +#!/usr/bin/python
> > > > > > +
> > > > > > +import os, sys, collections
> > > > > > +import common as plib
> > > > > > +import json
> > > > > > +
> > > > > > +# allocate variable to store the results
> > > > > > +measurements = {}
> > > > > > +measurements = collections.OrderedDict()
> > > > > > +
> > > > > > +# read results from linaro result.json format
> > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > +    data = json.load(f)[0]
> > > > > > +
> > > > > > +for test_case in data['metrics']:
> > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > +    result = test_case['result']
> > > > > > +    # FIXTHIS: add measurements when available
> > > > > > +    # measurement = test_case['measurement']
> > > > > > +    # units = test_case['units']
> > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > +
> > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > +
> > > > > > +sys.exit(plib.process(measurements))
> > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > b/tests/Functional.linaro/spec.json
> > > > > > new file mode 100644
> > > > > > index 0000000..561e2ab
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > @@ -0,0 +1,16 @@
> > > > > > +{
> > > > > > +    "testName": "Functional.linaro",
> > > > > > +    "specs": {
> > > > > > +        "default": {
> > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > +        },
> > > > > > +        "smoke": {
> > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > +            "params": "TESTS='pwd'",
> > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > +        }
> > > > > > +    }
> > > > > > +}
> > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > new file mode 100644
> > > > > > index 0000000..a2efee8
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > @@ -0,0 +1,27 @@
> > > > > > +fuego_package_version: 1
> > > > > > +name: Functional.linaro
> > > > > > +description: |
> > > > > > +    Linaro test-definitions
> > > > > > +license: GPL-2.0
> > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > +version: latest git commits
> > > > > > +fuego_release: 1
> > > > > > +type: Functional
> > > > > > +tags: ['kernel', 'linaro']
> > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > +params:
> > > > > > +    - YAML:
> > > > > > +        description: test definiton or plan.
> > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > +        optional: no
> > > > > > +    - PARAMS:
> > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > [PARAM2=VALUE2]
> > > > > > +        example: "TESTS='pwd'"
> > > > > > +        optional: yes
> > > > > > +data_files:
> > > > > > +    - chart_config.json
> > > > > > +    - fuego_test.sh
> > > > > > +    - parser.py
> > > > > > +    - spec.json
> > > > > > +    - test.yaml
> > > > > > --
> > > > > > 2.7.4
> > > > >
> > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > The issue may be something weird in my board file or configuration.
> > > > >
> > > > > ===== doing fuego phase: run =====
> > > > > -------------------------------------------------
> > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > BIN_PATH:
> > > > >
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on root@10.0.1.74
> > > > > {'path':
> > > > > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > + cat uuid
> > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > + cd ./automated/linux/smoke/
> > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > INFO: install_deps skipped
> > > > >
> > > > > INFO: Running pwd test...
> > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > pwd pass
> > > > >
> > > > > INFO: Running lsb_release test...
> > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > lsb_release fail
> > > > >
> > > > > INFO: Running uname test...
> > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > uname pass
> > > > >
> > > > > INFO: Running ip test...
> > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > >     inet 127.0.0.1/8 scope host lo
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 ::1/128 scope host
> > > > >        valid_lft forever preferred_lft forever
> > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > default
> > > > > qlen 1000
> > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > >     link/can
> > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > >     link/can
> > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen
> > 1000
> > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen
> > 1000
> > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > ip pass
> > > > >
> > > > > INFO: Running lscpu test...
> > > > > Architecture:          armv7l
> > > > > Byte Order:            Little Endian
> > > > > CPU(s):                1
> > > > > On-line CPU(s) list:   0
> > > > > Thread(s) per core:    1
> > > > > Core(s) per socket:    1
> > > > > Socket(s):             1
> > > > > Model:                 2
> > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > CPU max MHz:           1000.0000
> > > > > CPU min MHz:           300.0000
> > > > > BogoMIPS:              995.32
> > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > lscpu pass
> > > > >
> > > > > INFO: Running vmstat test...
> > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > vmstat pass
> > > > >
> > > > > INFO: Running lsblk test...
> > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > lsblk pass
> > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > >
> > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > =vmstat RESULT=pass>
> > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > --- Printing result.csv ---
> > > > > name,test_case_id,result,measurement,units,test_params
> > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > >
> > > > > -------------------------------------------------
> > > > > ===== doing fuego phase: post_test =====
> > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > INFO: the test did not produce a test log on the target
> > > > > ===== doing fuego phase: processing =====
> > > > > ### WARNING: Program returned exit code ''
> > > > > ### WARNING: Log evaluation may be invalid
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ERROR: results did not satisfy the threshold
> > > > > Fuego: requested test phases complete!
> > > > > Build step 'Execute shell' marked build as failure
> > > > >
> > > > > ----------
> > > > >
> > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > that I should fix, please let me know.
> > > > >
> > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > >
> > > > > Just FYI.  Thanks for the code.
> > > > >  -- Tim
> > > >
> > > > _______________________________________________
> > > > Fuego mailing list
> > > > Fuego@lists.linuxfoundation.org
> > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego
> > _______________________________________________
> > Fuego mailing list
> > Fuego@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/fuego


* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-14  8:51   ` Chase Qi
@ 2019-02-19  5:28     ` daniel.sangorrin
  0 siblings, 0 replies; 14+ messages in thread
From: daniel.sangorrin @ 2019-02-19  5:28 UTC (permalink / raw)
  To: chase.qi; +Cc: fuego

Hello Chase!

> -----Original Message-----
> From: Chase Qi <chase.qi@linaro.org>
> Sent: Thursday, February 14, 2019 5:51 PM
> To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>
> Cc: fuego@lists.linuxfoundation.org
> Subject: Re: Integration of Fuego and Linaro test-definitons
> 
> On Thu, Feb 14, 2019 at 4:10 PM <daniel.sangorrin@toshiba.co.jp> wrote:
> >
> > Hi again Chase,
> >
> > > -----Original Message-----
> > > From: fuego-bounces@lists.linuxfoundation.org <fuego-bounces@lists.linuxfoundation.org> On Behalf Of
> > [...]
> > > This is my current work-in-progress approach:
> > > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> > >
> > > - Manual usage (run locally)
> > >       $ git clone https://github.com/sangorrin/test-definitions
> > >       $ cd test-definitions
> > >       $ . ./automated/bin/setenv.sh
> > >       $ cd automated/linux/fuego/
> > >       $ ./fuego.sh -d Functional.hello_world
> > >       $  tree output/
> > >               output/
> > >               ├── build <- equivalent to fuego buildzone
> > >               │   ├── hello
> > >               │   ├── hello.c
> > >               │   ├── Makefile
> > >               │   └── README.md
> > >               ├── fuego.Functional.hello_world <- equivalent to board test folder
> > >               │   └── hello
> > >               └── logs <- equivalent to logdir
> > >                       └── testlog.txt
> > > - test-runner usage (run on remote board)
> > >       $ cd test-definitions
> > >       $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> > >       $ ls ../output
> > >               result.csv
> > >               result.json
> > >
> > > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
> >
> > I have modified the code to prepare a result.txt and generate LAVA messages but I have no LAVA setup right now.
> If you test it please let me know if LAVA recognized the test result.
> 
> Result looks good in LAVA. I tested with lxc device,
> https://validation.linaro.org/scheduler/job/1906258 test failed as
> make is missing. Once build-essential installed, test passed
> https://validation.linaro.org/results/1906260/1_fuego-hello-world

Sorry for the late reply!
Thank you very much for testing, I am glad that it worked. I will add build-essential to the dependencies.
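
For the record, a rough sketch of how I plan to declare it (this
assumes the usual install_deps helper from sh-test-lib and an
illustrative package list, so treat it as a sketch rather than the
final patch):

    # in automated/linux/fuego/fuego.sh (sketch)
    . ../../lib/sh-test-lib
    pkgs="git build-essential"
    # install the packages unless the job was started with SKIP_INSTALL
    install_deps "${pkgs}" "${SKIP_INSTALL}"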

Thanks,
Daniel

> Thanks,
> Chase
> 
> >
> > Thanks,
> > Daniel
> >
> > >
> > > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on bash.
> > > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> > > Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> > >
> > > Thanks,
> > > Daniel
> > >
> > > > We probably should start a new thread for this topic to share progress?
> > > >
> > > > Thanks,
> > > > Chase
> > > >
> > > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > > >
> > > >
> > > > > Thanks,
> > > > > Daniel
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > > fuego@lists.linuxfoundation.org
> > > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > > >
> > > > > > Comments inline below.
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Daniel Sangorrin
> > > > > > >
> > > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > > It is still a proof of concept and only tested with
> > > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > > is left.
> > > > > > >
> > > > > > > To try it follow these steps:
> > > > > > >
> > > > > > > - prepare SSH_KEY for your board
> > > > > > >     Eg: Inside fuego's docker container do
> > > > > > >     > su jenkins
> > > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > >     > vi ~/.ssh/config
> > > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > > - execute the job from jenkins
> > > > > > > - expected results
> > > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > > >     - run.json
> > > > > > >     - csv
> > > > > > >
> > > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > ---
> > > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > > +++++++++++++++++++++++++++++++
> > > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > > >  5 files changed, 130 insertions(+)
> > > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > > >
> > > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > > new file mode 100644
> > > > > > > index 0000000..b8c8fb6
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > > @@ -0,0 +1,3 @@
> > > > > > > +{
> > > > > > > +    "chart_type": "testcase_table"
> > > > > > > +}
> > > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > > new file mode 100755
> > > > > > > index 0000000..17b56a9
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > > @@ -0,0 +1,59 @@
> > > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > > +
> > > > > > > +# Root permissions required for
> > > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > > specified
> > > > > > > +# - executing some of the tests
> > > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > > +NEED_ROOT=1
> > > > > > > +
> > > > > > > +function test_pre_check {
> > > > > > > +    # linaro parser dependencies
> > > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > > +    assert_has_program sed
> > > > > > > +    assert_has_program awk
> > > > > > > +    assert_has_program grep
> > > > > > > +    assert_has_program egrep
> > > > > > > +    assert_has_program tee
> > > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > > no need to check for them here.
> > > > > > I already made a patch to remove those lines.
> > > > > >
> > > > > > > +
> > > > > > > +    # test-runner requires a password-less connection
> > > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > > +    # su jenkins
> > > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > +    # vi ~/.ssh/config
> > > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > > +}
> > > > > > > +
> > > > > > > +function test_build {
> > > > > > > +    source ./automated/bin/setenv.sh
> > > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > > >
> > > > > > OK.  I gave this a spin, and here's an error I got:
> > > > > >
> > > > > > ===== doing fuego phase: build =====
> > > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > > Cloning into 'fuego_git_repo'...
> > > > > > Checkout branch/tag/commit id master.
> > > > > > Already on 'master'
> > > > > > Your branch is up-to-date with 'origin/master'.
> > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > BIN_PATH:
> > > > > >
> > > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > Downloading/unpacking pexpect (from -r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 1))
> > > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in /usr/lib/python2.7/dist-packages
> (from
> > > -r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 2))
> > > > > > Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages
> > > (from
> > > > -r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 3))
> > > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 1))
> > > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > > Installing collected packages: pexpect, ptyprocess
> > > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > > >                              ^
> > > > > > SyntaxError: invalid syntax
> > > > > >
> > > > > > Successfully installed pexpect ptyprocess
> > > > > > Cleaning up...
> > > > > > Fuego test_build duration=1.56257462502 seconds
> > > > > >
> > > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > > error after the first build of the job.
> > > > > >
> > > > > > > +}
> > > > > > > +
> > > > > > > +function test_run {
> > > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > > +
> > > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > > +            abort_job "$yaml_file not found"
> > > > > > > +    fi
> > > > > > > +
> > > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > > +            echo "using test plan: $yaml_file"
> > > > > > > +            test_or_plan_flag="-p"
> > > > > > > +    else
> > > > > > > +            echo "using test definition: $yaml_file"
> > > > > > > +            test_or_plan_flag="-d"
> > > > > > > +    fi
> > > > > > > +
> > > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > > +    else
> > > > > > > +        PARAMS=""
> > > > > > > +    fi
> > > > > > > +
> > > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > > +}
> > > > > > > +
> > > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > > repository, clean unnecessary files
> > > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > > b/tests/Functional.linaro/parser.py
> > > > > > > new file mode 100755
> > > > > > > index 0000000..48b502b
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > > @@ -0,0 +1,25 @@
> > > > > > > +#!/usr/bin/python
> > > > > > > +
> > > > > > > +import os, sys, collections
> > > > > > > +import common as plib
> > > > > > > +import json
> > > > > > > +
> > > > > > > +# allocate variable to store the results
> > > > > > > +measurements = {}
> > > > > > > +measurements = collections.OrderedDict()
> > > > > > > +
> > > > > > > +# read results from linaro result.json format
> > > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > > +    data = json.load(f)[0]
> > > > > > > +
> > > > > > > +for test_case in data['metrics']:
> > > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > > +    result = test_case['result']
> > > > > > > +    # FIXTHIS: add measurements when available
> > > > > > > +    # measurement = test_case['measurement']
> > > > > > > +    # units = test_case['units']
> > > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > > +
> > > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > > +
> > > > > > > +sys.exit(plib.process(measurements))
> > > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > > b/tests/Functional.linaro/spec.json
> > > > > > > new file mode 100644
> > > > > > > index 0000000..561e2ab
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > > @@ -0,0 +1,16 @@
> > > > > > > +{
> > > > > > > +    "testName": "Functional.linaro",
> > > > > > > +    "specs": {
> > > > > > > +        "default": {
> > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > +        },
> > > > > > > +        "smoke": {
> > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > +            "params": "TESTS='pwd'",
> > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > +        }
> > > > > > > +    }
> > > > > > > +}
> > > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > > new file mode 100644
> > > > > > > index 0000000..a2efee8
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > > @@ -0,0 +1,27 @@
> > > > > > > +fuego_package_version: 1
> > > > > > > +name: Functional.linaro
> > > > > > > +description: |
> > > > > > > +    Linaro test-definitions
> > > > > > > +license: GPL-2.0
> > > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > +version: latest git commits
> > > > > > > +fuego_release: 1
> > > > > > > +type: Functional
> > > > > > > +tags: ['kernel', 'linaro']
> > > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > > +params:
> > > > > > > +    - YAML:
> > > > > > > +        description: test definiton or plan.
> > > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > > +        optional: no
> > > > > > > +    - PARAMS:
> > > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > > [PARAM2=VALUE2]
> > > > > > > +        example: "TESTS='pwd'"
> > > > > > > +        optional: yes
> > > > > > > +data_files:
> > > > > > > +    - chart_config.json
> > > > > > > +    - fuego_test.sh
> > > > > > > +    - parser.py
> > > > > > > +    - spec.json
> > > > > > > +    - test.yaml
> > > > > > > --
> > > > > > > 2.7.4
> > > > > >
> > > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > > The issue may be something weird in my board file or configuration.
> > > > > >
> > > > > > ===== doing fuego phase: run =====
> > > > > > -------------------------------------------------
> > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > BIN_PATH:
> > > > > >
> > > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on root@10.0.1.74
> > > > > > {'path':
> > > > > >
> '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > + cat uuid
> > > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > + cd ./automated/linux/smoke/
> > > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > > INFO: install_deps skipped
> > > > > >
> > > > > > INFO: Running pwd test...
> > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > > pwd pass
> > > > > >
> > > > > > INFO: Running lsb_release test...
> > > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > > lsb_release fail
> > > > > >
> > > > > > INFO: Running uname test...
> > > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > > uname pass
> > > > > >
> > > > > > INFO: Running ip test...
> > > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > >     inet 127.0.0.1/8 scope host lo
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 ::1/128 scope host
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > > default
> > > > > > qlen 1000
> > > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > >     link/can
> > > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > >     link/can
> > > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> qlen
> > > 1000
> > > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> qlen
> > > 1000
> > > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > ip pass
> > > > > >
> > > > > > INFO: Running lscpu test...
> > > > > > Architecture:          armv7l
> > > > > > Byte Order:            Little Endian
> > > > > > CPU(s):                1
> > > > > > On-line CPU(s) list:   0
> > > > > > Thread(s) per core:    1
> > > > > > Core(s) per socket:    1
> > > > > > Socket(s):             1
> > > > > > Model:                 2
> > > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > > CPU max MHz:           1000.0000
> > > > > > CPU min MHz:           300.0000
> > > > > > BogoMIPS:              995.32
> > > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > > lscpu pass
> > > > > >
> > > > > > INFO: Running vmstat test...
> > > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > > vmstat pass
> > > > > >
> > > > > > INFO: Running lsblk test...
> > > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > > lsblk pass
> > > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > > >
> > > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > =vmstat RESULT=pass>
> > > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > --- Printing result.csv ---
> > > > > > name,test_case_id,result,measurement,units,test_params
> > > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > >
> > > > > > -------------------------------------------------
> > > > > > ===== doing fuego phase: post_test =====
> > > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > > INFO: the test did not produce a test log on the target
> > > > > > ===== doing fuego phase: processing =====
> > > > > > ### WARNING: Program returned exit code ''
> > > > > > ### WARNING: Log evaluation may be invalid
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ERROR: results did not satisfy the threshold
> > > > > > Fuego: requested test phases complete!
> > > > > > Build step 'Execute shell' marked build as failure
> > > > > >
> > > > > > ----------
> > > > > >
> > > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > > that I should fix, please let me know.
> > > > > >
> > > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > > >
> > > > > > Just FYI.  Thanks for the code.
> > > > > >  -- Tim
> > > > >
> > > > > _______________________________________________
> > > > > Fuego mailing list
> > > > > Fuego@lists.linuxfoundation.org
> > > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego
> > > _______________________________________________
> > > Fuego mailing list
> > > Fuego@lists.linuxfoundation.org
> > > https://lists.linuxfoundation.org/mailman/listinfo/fuego

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-14  8:27 ` Chase Qi
@ 2019-02-21  5:45   ` daniel.sangorrin
  2019-02-22  7:00     ` Chase Qi
  0 siblings, 1 reply; 14+ messages in thread
From: daniel.sangorrin @ 2019-02-21  5:45 UTC (permalink / raw)
  To: chase.qi; +Cc: fuego

Hello Chase,

> -----Original Message-----
> From: Chase Qi <chase.qi@linaro.org>
[...]
> > Having Fuego run on docker makes sure that anyone can get the same environment quickly and it protects the
> host system from Fuego bugs. Having said that, I would like to prepare a script to install fuego on the host system
> in the future.
> 
> Please post on the ML or just let me know when you have it. I *want* it.

OK, I am going to start on this.
First I need to know which OS you would like to install Fuego on.
Debian Jessie would be the easiest, because the Fuego docker image is based on Jessie, but I can port it to Debian Buster or Ubuntu 18.04, for example.
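Roughly, I am picturing something along these lines (only a sketch: the package list, the repository placeholders and the ftc path are my assumptions, not the final installer):

	# host dependencies (package names are guesses, to be refined)
	$ sudo apt-get update
	$ sudo apt-get install -y git python python-pip python-yaml openssh-client sshpass curl
	# fetch fuego and fuego-core side by side (repo URLs left as placeholders)
	$ git clone <fuego repo> fuego
	$ git clone <fuego-core repo> fuego-core
	# point the scripts at host paths instead of the container paths
	$ export FUEGO_RO=$PWD/fuego/fuego-ro
	$ export FUEGO_RW=$PWD/fuego/fuego-rw
	$ export FUEGO_CORE=$PWD/fuego-core
	# smoke check from the command line, without Jenkins
	$ fuego-core/engine/scripts/ftc list-boards

Jenkins would stay optional and only be installed if the web UI is wanted.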

> > > * as you pointed, parsing fuego's test result file in LAVA is easy to do.
> >
> > The only problem is that I would need to run the Fuego parser on the target board.
> > For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board would
> need to install the python modules required by fuego-parser. This is on my TODO list since I proposed it during
> the last Fuego jamboree. I will try to do it as soon as i can.
> >
> > What alternatives do I have?
> > - send the results to LAVA through a REST API instead of having it monitor the serial cable? probably not
> possible.
> > - create a simplified parser on the test (e.g. using our log_compare function). Not ideal, but possible.
> >
> > In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python), while
> Linaro uses grep/awk/sed directly on the target. There is a trade-off there.
> >
> > > * existing way to run fuego tests in LAVA are hacks. The problem is
> > > they don't scale, 'scale' means remote and distributed CI setup.
> >
> > Yes, it is a hack.
> > I think Fuego is not supposed to run with LAVA, because the goals are very different.
> > But parts of Fuego can run with LAVA. This is what I think we can collaborate on.
> 
> Yes, +1. When running with LAVA, IMHO, only the backend and real tests
> are needed.
> 
> >
> > > * I am tring to hanld both fuego host controller and DUT with LAVA.
> > > The first part is hard part. Still tring to find a way. About the host
> > > controller part, I started with LAVA-lxc protocol, but hit some
> > > jenkins and docker related issues. I feel build, publish and pull a
> > > fuego docker image is the way to go now.
> >
> > I think this approach might be too hard.
> 
> LAVA v2 introduced the lxc protocol. With it, a single-node test job
> can deploy and boot an lxc container to control the DUT. Here is an
> example: https://lkft.validation.linaro.org/scheduler/job/605270 .
> That job uses the lxc container to deploy images to the DUT. If the
> DUT is configured with a static IP, the IP is known to the lxc
> container via the LAVA helper lava-target-ip, so an ssh connection
> between the lxc and the DUT is possible. Based on these features, I
> thought we could run fuego tests with LAVA just like we run them now.
> As mentioned above, there is no docker protocol in LAVA and there
> will not be one, and migrating the fuego installation into lxc is
> also problematic. Please do let me know once you have a script for
> the fuego installation. I am having problems with that: jenkins
> missing, docker missing, permission issues, etc. Once I am able to
> install fuego within lxc, I can prepare a job example. It would be
> one test definition for all fuego tests, which is how we have done it
> before; `automated/linux/workload-automation3` is a good example.
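Concretely, the ssh step you describe would look something like this from inside the lxc (just a sketch; it assumes the LAVA helpers are on PATH in the test shell and that the board accepts the key):

	DUT_IP=$(lava-target-ip)
	ssh -o StrictHostKeyChecking=no root@"$DUT_IP" 'uname -a'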

I see what you want to do. Using LXC sounds doable.
But I guess that having Fuego installed on the target (or on an LXC DUT) would be much easier.
I am going to work on the native installation of Fuego then.
By the way, if you export the docker filesystem (docker export ...) and import it into LXC, you would get a DUT with Fuego installed. Wouldn't that solve your problem? Fuego can run tests on the host (see docker.board), although to run with "root" permissions you need to change the jenkins user's permissions.
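The conversion itself would be roughly as follows (a sketch only; the container name, lxc paths and config keys are examples for a classic lxc 2.x host):

	$ docker export fuego-container > fuego-rootfs.tar
	$ sudo mkdir -p /var/lib/lxc/fuego/rootfs
	$ sudo tar -xpf fuego-rootfs.tar -C /var/lib/lxc/fuego/rootfs
	$ sudo tee /var/lib/lxc/fuego/config <<'EOF'
	lxc.utsname = fuego
	lxc.rootfs = /var/lib/lxc/fuego/rootfs
	lxc.network.type = none
	EOF
	$ sudo lxc-start -n fuego -F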
 
> Alternatively, I can launch a docker device and the DUT with a
> multinode job, but that is complex. And the fuego docker container
> eats a lot of memory (blame jenkins?). The existing docker devices in
> our lib only have 1G of memory configured.

I haven't checked the memory consumption myself, but I guess the reason is Java.
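If you want a quick number, something like this should show it (the container name is just an example):

	$ docker stats --no-stream fuego-container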

> > This is my current work-in-progress approach:
> > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> >
> > - Manual usage (run locally)
> >         $ git clone https://github.com/sangorrin/test-definitions
> >         $ cd test-definitions
> >         $ . ./automated/bin/setenv.sh
> >         $ cd automated/linux/fuego/
> >         $ ./fuego.sh -d Functional.hello_world
> >         $  tree output/
> >                 output/
> >                 ├── build <- equivalent to fuego buildzone
> >                 │   ├── hello
> >                 │   ├── hello.c
> >                 │   ├── Makefile
> >                 │   └── README.md
> >                 ├── fuego.Functional.hello_world <- equivalent to board test folder
> >                 │   └── hello
> >                 └── logs <- equivalent to logdir
> >                         └── testlog.txt
> > - test-runner usage (run on remote board)
> >         $ cd test-definitions
> >         $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> >         $ ls ../output
> >                 result.csv
> >                 result.json
> >
> > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
> 
> You don't have to. It looks like a done job to me. send-to-lava.sh
> will take care of it. When running in LAVA, the helper uses
> lava-test-case to collect results, and when running without LAVA, the
> helper prints result lines in a fixed format that test-runner parses.
> (While writing this I noticed your next reply, so maybe I am already
> looking at the latest code; I will give it a spin with LAVA and come
> back to you.)

Thanks again for checking. I am glad that it worked for you. I have a LAVA setup in the CIP project, so I have started running tests there.

> So basically, we are running in two different directions. From my
> point of view, you are porting fuego tests to Linaro test-definitions
> natively. Although I am not yet sure how the integration between these
> two projects goes, we are happy to see this happening :)

Thanks, you are right. But porting the tests to test-definitions loses a lot of the good Fuego features, such as the pass criteria. Perhaps your approach is better.

> > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on bash.
> > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> > Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> >
> 
> lava-test-shell requires a POSIX shell. We normally use /bin/sh, which
> links to dash on Debian-based distros, but we also have some test
> definitions, like ltp and android tradefed, that use bash. bash has
> some extensions that are not POSIX compatible. IMHO, using bash
> without those extensions is totally fine. We use shellcheck in the
> sanity check to detect potential POSIX issues.

OK, I got it. Thank you!
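For my own reference, I will run the same check locally before sending patches, along these lines (assuming shellcheck is installed on the host):

	$ shellcheck --shell=sh automated/linux/fuego/fuego.sh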

Kind regards,
Daniel

> 
> Thanks,
> Chase
> 
> 
> > Thanks,
> > Daniel
> >
> > > We probably should start a new thread for this topic to share progress?
> > >
> > > Thanks,
> > > Chase
> > >
> > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > >
> > >
> > > > Thanks,
> > > > Daniel
> > > >
> > > > > -----Original Message-----
> > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > fuego@lists.linuxfoundation.org
> > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > >
> > > > > Comments inline below.
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Daniel Sangorrin
> > > > > >
> > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > It is still a proof of concept and only tested with
> > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > is left.
> > > > > >
> > > > > > To try it follow these steps:
> > > > > >
> > > > > > - prepare SSH_KEY for your board
> > > > > >     Eg: Inside fuego's docker container do
> > > > > >     > su jenkins
> > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > >     > vi ~/.ssh/config
> > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > - execute the job from jenkins
> > > > > > - expected results
> > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > >     - run.json
> > > > > >     - csv
> > > > > >
> > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > ---
> > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > +++++++++++++++++++++++++++++++
> > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > >  5 files changed, 130 insertions(+)
> > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > >
> > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > new file mode 100644
> > > > > > index 0000000..b8c8fb6
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > @@ -0,0 +1,3 @@
> > > > > > +{
> > > > > > +    "chart_type": "testcase_table"
> > > > > > +}
> > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > new file mode 100755
> > > > > > index 0000000..17b56a9
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > @@ -0,0 +1,59 @@
> > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > +
> > > > > > +# Root permissions required for
> > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > specified
> > > > > > +# - executing some of the tests
> > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > +NEED_ROOT=1
> > > > > > +
> > > > > > +function test_pre_check {
> > > > > > +    # linaro parser dependencies
> > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > +    assert_has_program sed
> > > > > > +    assert_has_program awk
> > > > > > +    assert_has_program grep
> > > > > > +    assert_has_program egrep
> > > > > > +    assert_has_program tee
> > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > no need to check for them here.
> > > > > I already made a patch to remove those lines.
> > > > >
> > > > > > +
> > > > > > +    # test-runner requires a password-less connection
> > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > +    # su jenkins
> > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > +    # vi ~/.ssh/config
> > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > +}
> > > > > > +
> > > > > > +function test_build {
> > > > > > +    source ./automated/bin/setenv.sh
> > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > >
> > > > > OK.  I gave this a spin, and here's an error I got:
> > > > >
> > > > > ===== doing fuego phase: build =====
> > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > Cloning into 'fuego_git_repo'...
> > > > > Checkout branch/tag/commit id master.
> > > > > Already on 'master'
> > > > > Your branch is up-to-date with 'origin/master'.
> > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > BIN_PATH:
> > > > >
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > Downloading/unpacking pexpect (from -r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 1))
> > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in /usr/lib/python2.7/dist-packages
> (from -r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 2))
> > > > > Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages
> (from
> > > -r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 3))
> > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> (line 1))
> > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > Installing collected packages: pexpect, ptyprocess
> > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > >                              ^
> > > > > SyntaxError: invalid syntax
> > > > >
> > > > > Successfully installed pexpect ptyprocess
> > > > > Cleaning up...
> > > > > Fuego test_build duration=1.56257462502 seconds
> > > > >
> > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > error after the first build of the job.
> > > > >
> > > > > > +}
> > > > > > +
> > > > > > +function test_run {
> > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > +
> > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > +            abort_job "$yaml_file not found"
> > > > > > +    fi
> > > > > > +
> > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > +            echo "using test plan: $yaml_file"
> > > > > > +            test_or_plan_flag="-p"
> > > > > > +    else
> > > > > > +            echo "using test definition: $yaml_file"
> > > > > > +            test_or_plan_flag="-d"
> > > > > > +    fi
> > > > > > +
> > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > +    else
> > > > > > +        PARAMS=""
> > > > > > +    fi
> > > > > > +
> > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > +}
> > > > > > +
> > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > repository, clean unnecessary files
> > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > b/tests/Functional.linaro/parser.py
> > > > > > new file mode 100755
> > > > > > index 0000000..48b502b
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > @@ -0,0 +1,25 @@
> > > > > > +#!/usr/bin/python
> > > > > > +
> > > > > > +import os, sys, collections
> > > > > > +import common as plib
> > > > > > +import json
> > > > > > +
> > > > > > +# allocate variable to store the results
> > > > > > +measurements = {}
> > > > > > +measurements = collections.OrderedDict()
> > > > > > +
> > > > > > +# read results from linaro result.json format
> > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > +    data = json.load(f)[0]
> > > > > > +
> > > > > > +for test_case in data['metrics']:
> > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > +    result = test_case['result']
> > > > > > +    # FIXTHIS: add measurements when available
> > > > > > +    # measurement = test_case['measurement']
> > > > > > +    # units = test_case['units']
> > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > +
> > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > +
> > > > > > +sys.exit(plib.process(measurements))
> > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > b/tests/Functional.linaro/spec.json
> > > > > > new file mode 100644
> > > > > > index 0000000..561e2ab
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > @@ -0,0 +1,16 @@
> > > > > > +{
> > > > > > +    "testName": "Functional.linaro",
> > > > > > +    "specs": {
> > > > > > +        "default": {
> > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > +        },
> > > > > > +        "smoke": {
> > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > +            "params": "TESTS='pwd'",
> > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > +        }
> > > > > > +    }
> > > > > > +}
> > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > new file mode 100644
> > > > > > index 0000000..a2efee8
> > > > > > --- /dev/null
> > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > @@ -0,0 +1,27 @@
> > > > > > +fuego_package_version: 1
> > > > > > +name: Functional.linaro
> > > > > > +description: |
> > > > > > +    Linaro test-definitions
> > > > > > +license: GPL-2.0
> > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > +version: latest git commits
> > > > > > +fuego_release: 1
> > > > > > +type: Functional
> > > > > > +tags: ['kernel', 'linaro']
> > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > +params:
> > > > > > +    - YAML:
> > > > > > +        description: test definiton or plan.
> > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > +        optional: no
> > > > > > +    - PARAMS:
> > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > [PARAM2=VALUE2]
> > > > > > +        example: "TESTS='pwd'"
> > > > > > +        optional: yes
> > > > > > +data_files:
> > > > > > +    - chart_config.json
> > > > > > +    - fuego_test.sh
> > > > > > +    - parser.py
> > > > > > +    - spec.json
> > > > > > +    - test.yaml
> > > > > > --
> > > > > > 2.7.4
> > > > >
> > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > The issue may be something weird in my board file or configuration.
> > > > >
> > > > > ===== doing fuego phase: run =====
> > > > > -------------------------------------------------
> > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > BIN_PATH:
> > > > >
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on root@10.0.1.74
> > > > > {'path':
> > > > >
> '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > + cat uuid
> > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > + cd ./automated/linux/smoke/
> > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > INFO: install_deps skipped
> > > > >
> > > > > INFO: Running pwd test...
> > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > pwd pass
> > > > >
> > > > > INFO: Running lsb_release test...
> > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > lsb_release fail
> > > > >
> > > > > INFO: Running uname test...
> > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > uname pass
> > > > >
> > > > > INFO: Running ip test...
> > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > >     inet 127.0.0.1/8 scope host lo
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 ::1/128 scope host
> > > > >        valid_lft forever preferred_lft forever
> > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> default
> > > > > qlen 1000
> > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > >     link/can
> > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > >     link/can
> > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> qlen 1000
> > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> qlen 1000
> > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > ip pass
> > > > >
> > > > > INFO: Running lscpu test...
> > > > > Architecture:          armv7l
> > > > > Byte Order:            Little Endian
> > > > > CPU(s):                1
> > > > > On-line CPU(s) list:   0
> > > > > Thread(s) per core:    1
> > > > > Core(s) per socket:    1
> > > > > Socket(s):             1
> > > > > Model:                 2
> > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > CPU max MHz:           1000.0000
> > > > > CPU min MHz:           300.0000
> > > > > BogoMIPS:              995.32
> > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > lscpu pass
> > > > >
> > > > > INFO: Running vmstat test...
> > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > vmstat pass
> > > > >
> > > > > INFO: Running lsblk test...
> > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > lsblk pass
> > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > >
> > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > =vmstat RESULT=pass>
> > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > --- Printing result.csv ---
> > > > > name,test_case_id,result,measurement,units,test_params
> > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > lsblk;SKIP_INSTALL=False"
> > > > >
> > > > > -------------------------------------------------
> > > > > ===== doing fuego phase: post_test =====
> > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > INFO: the test did not produce a test log on the target
> > > > > ===== doing fuego phase: processing =====
> > > > > ### WARNING: Program returned exit code ''
> > > > > ### WARNING: Log evaluation may be invalid
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ### Unrecognized results format
> > > > > ERROR: results did not satisfy the threshold
> > > > > Fuego: requested test phases complete!
> > > > > Build step 'Execute shell' marked build as failure
> > > > >
> > > > > ----------
> > > > >
> > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > that I should fix, please let me know.
> > > > >
> > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > >
> > > > > Just FYI.  Thanks for the code.
> > > > >  -- Tim
> > > >
> > > > _______________________________________________
> > > > Fuego mailing list
> > > > Fuego@lists.linuxfoundation.org
> > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-21  5:45   ` daniel.sangorrin
@ 2019-02-22  7:00     ` Chase Qi
  2019-02-22  8:14       ` daniel.sangorrin
  0 siblings, 1 reply; 14+ messages in thread
From: Chase Qi @ 2019-02-22  7:00 UTC (permalink / raw)
  To: daniel.sangorrin; +Cc: fuego

Hi Daniel,

On Thu, Feb 21, 2019 at 1:45 PM <daniel.sangorrin@toshiba.co.jp> wrote:
>
> Hello Chase,
>
> > -----Original Message-----
> > From: Chase Qi <chase.qi@linaro.org>
> [...]
> > > Having Fuego run on docker makes sure that anyone can get the same environment quickly and it protects the
> > host system from Fuego bugs. Having said that, I would like to prepare a script to install fuego on the host system
> > in the future.
> >
> > Please post on the ML or just let me know when you have it. I *want* it.
>
> OK, I am going to start this.
> I need to know what OS you would like to install Fuego first.
> Debian Jessie would be the easiest, because Fuego docker uses Jessie but I can port it to Debian Buster or Ubuntu 18.04 for example.
>

Thanks a lot for doing this. Jessie is good enough. We have some jobs
that use Jessie for fastboot deployment, and they work well.

> > > > * as you pointed, parsing fuego's test result file in LAVA is easy to do.
> > >
> > > The only problem is that I would need to run the Fuego parser on the target board.
> > > For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board would
> > need to install the python modules required by fuego-parser. This is on my TODO list since I proposed it during
> > the last Fuego jamboree. I will try to do it as soon as i can.
> > >
> > > What alternatives do I have?
> > > - send the results to LAVA through a REST API instead of having it monitor the serial cable? probably not
> > possible.
> > > - create a simplified parser on the test (e.g. using our log_compare function). Not ideal, but possible.
> > >
> > > In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python), while
> > Linaro uses grep/awk/sed directly on the target. There is a trade-off there.
> > >
> > > > * existing way to run fuego tests in LAVA are hacks. The problem is
> > > > they don't scale, 'scale' means remote and distributed CI setup.
> > >
> > > Yes, it is a hack.
> > > I think Fuego is not supposed to run with LAVA, because the goals are very different.
> > > But parts of Fuego can run with LAVA. This is what I think we can collaborate on.
> >
> > Yes, +1. When running with LAVA, IMHO, only the backend and real tests
> > are needed.
> >
> > >
> > > > * I am tring to hanld both fuego host controller and DUT with LAVA.
> > > > The first part is hard part. Still tring to find a way. About the host
> > > > controller part, I started with LAVA-lxc protocol, but hit some
> > > > jenkins and docker related issues. I feel build, publish and pull a
> > > > fuego docker image is the way to go now.
> > >
> > > I think this approach might be too hard.
> >
> > LAVA v2 introduced lxc-protocol. With the protocol, single node test
> > job can deploy and boot a lxc container to control DUT. Here is an
> > example: https://lkft.validation.linaro.org/scheduler/job/605270 . The
> > example job use lxc contianer to deploy imgs to DUT. If DUT was
> > configed with static IP, the IP is known to lxc container with LAVA
> > helper lava-target-ip, then ssh connection between lxc and DUT is
> > possible. Based on these features, I thought we can run fuego tests
> > with LAVA just like how we run it now. As mentioned above, there is no
> > and will be no support for docker-protocol in LAVA, and migrating
> > fuego installation to lxc also is problemic. Please do let me know
> > once you have a script for fuego installation. I am having problem to
> > do that, hit jenkins missing, docker missing, permission issues, etc.
> > Once I am alble to install fuego within lxc, I can propare a job
> > example. It would be one test definition for all fuego tests. This is
> > how we do it before. `automated/linux/workload-automation3
> > ` is a good example.
>
> I see what you want to do. Using LXC sounds doable.
> But I guess that having Fuego installed on the target (or an LXC DUT) would be much easier.

Yeah, I guess if the target runs a Debian-based distro, then installing
on the DUT will be easier. Most of our targets run OpenEmbedded-based
distros, and it is hard to install fuego on them. It is possible to
build docker into these OE images and run fuego on the target inside a
docker container, but some of the boards don't have the resources for
that... In LAVA, an LXC or an LXC DUT is mainly used as a host to
control other ARM-based DUTs.
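For the boards that do have the resources, the image-side change should be roughly the following in conf/local.conf (a sketch; it assumes meta-virtualization is already listed in bblayers.conf and uses the _append override syntax):

	DISTRO_FEATURES_append = " virtualization"
	IMAGE_INSTALL_append = " docker"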

> I am going to work on the installation of Fuego natively then.
> By the way, if you export the docker filesystem (docker export..) and import it in LXC you would get a DUT with Fuego installed. Wouldn't that solve your problem? Fuego can run tests on the host (see docker.board) although to run with "root" permissions you need to change jenkins permissions.

I tried testing docker.board within the fuego docker container; it
works well, and yes, I hit the root permissions issue. I haven't tried
importing the fuego docker filesystem into LXC; that is a new concept
to me. Does it require docker to be installed and running in the LXC
container? If yes, that is a problem in LAVA. I think we would need to
modify the lxc configuration somehow on lava-dispatcher to support
docker in lxc.
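If it comes to that, I expect it would be something along these lines in the container config, though I have not verified it against the lava-dispatcher templates (key names are from the lxc 2.x format and effectively make the container privileged):

	lxc.aa_profile = unconfined
	lxc.cgroup.devices.allow = a
	lxc.cap.drop =
	lxc.mount.auto = proc:rw sys:rw cgroup:rw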

I am getting close to having my whole setup working with a LAVA
multinode job. Here are the test definitions in case anyone is
interested:
https://github.com/chase-qi/test-definitions/tree/fuego/automated/linux/fuego-multinode
I will share a job example once I have it.

Thanks,
Chase

>
> > Alternatively, I can lunch docker device and DUT with multinode job,
> > but that is complex. And fuego docker container eats a lot of
> > memory(blame jenkins?). The exsting docker devices in our lib only
> > have 1G memory configured.
>
> I haven't checked the memory consumed, I guess the reason is Java.
>
> > > This is my current work-in-progress approach:
> > > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> > >
> > > - Manual usage (run locally)
> > >         $ git clone https://github.com/sangorrin/test-definitions
> > >         $ cd test-definitions
> > >         $ . ./automated/bin/setenv.sh
> > >         $ cd automated/linux/fuego/
> > >         $ ./fuego.sh -d Functional.hello_world
> > >         $  tree output/
> > >                 output/
> > >                 ├── build <- equivalent to fuego buildzone
> > >                 │   ├── hello
> > >                 │   ├── hello.c
> > >                 │   ├── Makefile
> > >                 │   └── README.md
> > >                 ├── fuego.Functional.hello_world <- equivalent to board test folder
> > >                 │   └── hello
> > >                 └── logs <- equivalent to logdir
> > >                         └── testlog.txt
> > > - test-runner usage (run on remote board)
> > >         $ cd test-definitions
> > >         $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> > >         $ ls ../output
> > >                 result.csv
> > >                 result.json
> > >
> > > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
> >
> > You don't have to. It looks like a done job to me. send-to-lava.sh
> > will take care of it. When running in LAVA, the helper uses
> > lava-test-case for result collecting, and when running without LAVA,
> > the helper prints result lines in a fixed format for result parsing
> > within test-runner. (When I writing this, I noticed your next reply,
> > maybe I am looking at the latest code already, I will give it a spin
> > with LAVA and come back to you)
>
> Thanks again for checking. I am glad that it worked for your. I have a LAVA setup on the CIP project so I have started to do tests there.
>
> > So basically, we are running in two different directions. From my
> > point of view, you are porting fuego tests to Linaro test-definitions
> > natively. Although I am not yet sure how the integration between these
> > two projects goes, we are happy to see this happening :)
>
> Thanks, you are right. But porting it to Fuego misses a lot of the good features in Fuego such as the passing criteria. Perhaps your approach is better.
>
> > > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on bash.
> > > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> > > Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> > >
> >
> > lava-test-shell requires POSIX shell. We normally use /bin/sh which
> > links to dash on Debian based distros, and we also have some test
> > definitions like ltp and android trandfed using bash. bash has some
> > extensions are not POSIX compatiable. IMHO, using bash without these
> > extensions is totally fine. We are using shellcheck in sanity check to
> > dedect potential POSIX issues.
>
> OK, I got it. Thank you!
>
> Kind regards,
> Daniel
>
> >
> > Thanks,
> > Chase
> >
> >
> > > Thanks,
> > > Daniel
> > >
> > > > We probably should start a new thread for this topic to share progress?
> > > >
> > > > Thanks,
> > > > Chase
> > > >
> > > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > > >
> > > >
> > > > > Thanks,
> > > > > Daniel
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > > fuego@lists.linuxfoundation.org
> > > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > > >
> > > > > > Comments inline below.
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Daniel Sangorrin
> > > > > > >
> > > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > > It is still a proof of concept and only tested with
> > > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > > is left.
> > > > > > >
> > > > > > > To try it follow these steps:
> > > > > > >
> > > > > > > - prepare SSH_KEY for your board
> > > > > > >     Eg: Inside fuego's docker container do
> > > > > > >     > su jenkins
> > > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > >     > vi ~/.ssh/config
> > > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > > - execute the job from jenkins
> > > > > > > - expected results
> > > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > > >     - run.json
> > > > > > >     - csv
> > > > > > >
> > > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > ---
> > > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > > +++++++++++++++++++++++++++++++
> > > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > > >  5 files changed, 130 insertions(+)
> > > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > > >
> > > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > > new file mode 100644
> > > > > > > index 0000000..b8c8fb6
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > > @@ -0,0 +1,3 @@
> > > > > > > +{
> > > > > > > +    "chart_type": "testcase_table"
> > > > > > > +}
> > > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > > new file mode 100755
> > > > > > > index 0000000..17b56a9
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > > @@ -0,0 +1,59 @@
> > > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > > +
> > > > > > > +# Root permissions required for
> > > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > > specified
> > > > > > > +# - executing some of the tests
> > > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > > +NEED_ROOT=1
> > > > > > > +
> > > > > > > +function test_pre_check {
> > > > > > > +    # linaro parser dependencies
> > > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > > +    assert_has_program sed
> > > > > > > +    assert_has_program awk
> > > > > > > +    assert_has_program grep
> > > > > > > +    assert_has_program egrep
> > > > > > > +    assert_has_program tee
> > > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > > no need to check for them here.
> > > > > > I already made a patch to remove those lines.
> > > > > >
> > > > > > > +
> > > > > > > +    # test-runner requires a password-less connection
> > > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > > +    # su jenkins
> > > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > +    # vi ~/.ssh/config
> > > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > > +}
> > > > > > > +
> > > > > > > +function test_build {
> > > > > > > +    source ./automated/bin/setenv.sh
> > > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > > >
> > > > > > OK.  I gave this a spin, and here's an error I got:
> > > > > >
> > > > > > ===== doing fuego phase: build =====
> > > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > > Cloning into 'fuego_git_repo'...
> > > > > > Checkout branch/tag/commit id master.
> > > > > > Already on 'master'
> > > > > > Your branch is up-to-date with 'origin/master'.
> > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > BIN_PATH:
> > > > > >
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > Downloading/unpacking pexpect (from -r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > (line 1))
> > > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in /usr/lib/python2.7/dist-packages
> > (from -r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > (line 2))
> > > > > > Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages
> > (from
> > > > -r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > (line 3))
> > > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > (line 1))
> > > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > > Installing collected packages: pexpect, ptyprocess
> > > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > > >                              ^
> > > > > > SyntaxError: invalid syntax
> > > > > >
> > > > > > Successfully installed pexpect ptyprocess
> > > > > > Cleaning up...
> > > > > > Fuego test_build duration=1.56257462502 seconds
> > > > > >
> > > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > > error after the first build of the job.
> > > > > >
> > > > > > > +}
> > > > > > > +
> > > > > > > +function test_run {
> > > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > > +
> > > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > > +            abort_job "$yaml_file not found"
> > > > > > > +    fi
> > > > > > > +
> > > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > > +            echo "using test plan: $yaml_file"
> > > > > > > +            test_or_plan_flag="-p"
> > > > > > > +    else
> > > > > > > +            echo "using test definition: $yaml_file"
> > > > > > > +            test_or_plan_flag="-d"
> > > > > > > +    fi
> > > > > > > +
> > > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > > +    else
> > > > > > > +        PARAMS=""
> > > > > > > +    fi
> > > > > > > +
> > > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > > +}
> > > > > > > +
> > > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > > repository, clean unnecessary files
> > > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > > b/tests/Functional.linaro/parser.py
> > > > > > > new file mode 100755
> > > > > > > index 0000000..48b502b
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > > @@ -0,0 +1,25 @@
> > > > > > > +#!/usr/bin/python
> > > > > > > +
> > > > > > > +import os, sys, collections
> > > > > > > +import common as plib
> > > > > > > +import json
> > > > > > > +
> > > > > > > +# allocate variable to store the results
> > > > > > > +measurements = {}
> > > > > > > +measurements = collections.OrderedDict()
> > > > > > > +
> > > > > > > +# read results from linaro result.json format
> > > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > > +    data = json.load(f)[0]
> > > > > > > +
> > > > > > > +for test_case in data['metrics']:
> > > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > > +    result = test_case['result']
> > > > > > > +    # FIXTHIS: add measurements when available
> > > > > > > +    # measurement = test_case['measurement']
> > > > > > > +    # units = test_case['units']
> > > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > > +
> > > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > > +
> > > > > > > +sys.exit(plib.process(measurements))
> > > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > > b/tests/Functional.linaro/spec.json
> > > > > > > new file mode 100644
> > > > > > > index 0000000..561e2ab
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > > @@ -0,0 +1,16 @@
> > > > > > > +{
> > > > > > > +    "testName": "Functional.linaro",
> > > > > > > +    "specs": {
> > > > > > > +        "default": {
> > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > +        },
> > > > > > > +        "smoke": {
> > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > +            "params": "TESTS='pwd'",
> > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > +        }
> > > > > > > +    }
> > > > > > > +}
> > > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > > new file mode 100644
> > > > > > > index 0000000..a2efee8
> > > > > > > --- /dev/null
> > > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > > @@ -0,0 +1,27 @@
> > > > > > > +fuego_package_version: 1
> > > > > > > +name: Functional.linaro
> > > > > > > +description: |
> > > > > > > +    Linaro test-definitions
> > > > > > > +license: GPL-2.0
> > > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > +version: latest git commits
> > > > > > > +fuego_release: 1
> > > > > > > +type: Functional
> > > > > > > +tags: ['kernel', 'linaro']
> > > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > > +params:
> > > > > > > +    - YAML:
> > > > > > > +        description: test definiton or plan.
> > > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > > +        optional: no
> > > > > > > +    - PARAMS:
> > > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > > [PARAM2=VALUE2]
> > > > > > > +        example: "TESTS='pwd'"
> > > > > > > +        optional: yes
> > > > > > > +data_files:
> > > > > > > +    - chart_config.json
> > > > > > > +    - fuego_test.sh
> > > > > > > +    - parser.py
> > > > > > > +    - spec.json
> > > > > > > +    - test.yaml
> > > > > > > --
> > > > > > > 2.7.4
> > > > > >
> > > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > > The issue may be something weird in my board file or configuration.
> > > > > >
> > > > > > ===== doing fuego phase: run =====
> > > > > > -------------------------------------------------
> > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > BIN_PATH:
> > > > > >
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on root@10.0.1.74
> > > > > > {'path':
> > > > > >
> > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > + cat uuid
> > > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > + cd ./automated/linux/smoke/
> > > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > > INFO: install_deps skipped
> > > > > >
> > > > > > INFO: Running pwd test...
> > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > > pwd pass
> > > > > >
> > > > > > INFO: Running lsb_release test...
> > > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > > lsb_release fail
> > > > > >
> > > > > > INFO: Running uname test...
> > > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > > uname pass
> > > > > >
> > > > > > INFO: Running ip test...
> > > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> > > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > >     inet 127.0.0.1/8 scope host lo
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 ::1/128 scope host
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > default
> > > > > > qlen 1000
> > > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > >     link/can
> > > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > >     link/can
> > > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> > qlen 1000
> > > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default
> > qlen 1000
> > > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > > >        valid_lft forever preferred_lft forever
> > > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > > >        valid_lft forever preferred_lft forever
> > > > > > ip pass
> > > > > >
> > > > > > INFO: Running lscpu test...
> > > > > > Architecture:          armv7l
> > > > > > Byte Order:            Little Endian
> > > > > > CPU(s):                1
> > > > > > On-line CPU(s) list:   0
> > > > > > Thread(s) per core:    1
> > > > > > Core(s) per socket:    1
> > > > > > Socket(s):             1
> > > > > > Model:                 2
> > > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > > CPU max MHz:           1000.0000
> > > > > > CPU min MHz:           300.0000
> > > > > > BogoMIPS:              995.32
> > > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > > lscpu pass
> > > > > >
> > > > > > INFO: Running vmstat test...
> > > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > > vmstat pass
> > > > > >
> > > > > > INFO: Running lsblk test...
> > > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > > lsblk pass
> > > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > > >
> > > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > =vmstat RESULT=pass>
> > > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > --- Printing result.csv ---
> > > > > > name,test_case_id,result,measurement,units,test_params
> > > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > lsblk;SKIP_INSTALL=False"
> > > > > >
> > > > > > -------------------------------------------------
> > > > > > ===== doing fuego phase: post_test =====
> > > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > > INFO: the test did not produce a test log on the target
> > > > > > ===== doing fuego phase: processing =====
> > > > > > ### WARNING: Program returned exit code ''
> > > > > > ### WARNING: Log evaluation may be invalid
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ### Unrecognized results format
> > > > > > ERROR: results did not satisfy the threshold
> > > > > > Fuego: requested test phases complete!
> > > > > > Build step 'Execute shell' marked build as failure
> > > > > >
> > > > > > ----------
> > > > > >
> > > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > > that I should fix, please let me know.
> > > > > >
> > > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > > >
> > > > > > Just FYI.  Thanks for the code.
> > > > > >  -- Tim
> > > > >
> > > > > _______________________________________________
> > > > > Fuego mailing list
> > > > > Fuego@lists.linuxfoundation.org
> > > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-22  7:00     ` Chase Qi
@ 2019-02-22  8:14       ` daniel.sangorrin
  2019-02-25  5:24         ` Chase Qi
  0 siblings, 1 reply; 14+ messages in thread
From: daniel.sangorrin @ 2019-02-22  8:14 UTC (permalink / raw)
  To: chase.qi; +Cc: fuego

[-- Attachment #1: Type: text/plain, Size: 35153 bytes --]

Hello Chase,

> From: Chase Qi <chase.qi@linaro.org>
> > > > Having Fuego run on docker makes sure that anyone can get the same environment quickly and it protects
> the
> > > host system from Fuego bugs. Having said that, I would like to prepare a script to install fuego on the host
> system
> > > in the future.
> > >
> > > Please post on the ML or just let me know when you have it. I *want* it.
> >
> > OK, I am going to start this.
> > I need to know which OS you would like to install Fuego on first.
> > Debian Jessie would be the easiest, because the Fuego docker image uses Jessie, but I can port it to Debian Buster
> > or Ubuntu 18.04, for example.
> >
> 
> Thanks a lot for doing so. Jessie is good enough. We have some jobs
> that use Jessie for fastboot deployment, and they work well.

Today I sent a patch series that allows you to install Fuego in Docker without Jenkins. Maybe that will solve your previous problems. I also submitted a few more changes that let users change the port where Jenkins listens.
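
For reference, with those patches applied a single test can still be launched from the command line without Jenkins, roughly like this (this assumes a board file called 'bbb' already exists; adjust the board and test names to your setup):

	$ ftc list-boards
	$ ftc run-test -b bbb -t Functional.hello_world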

I am also preparing a native install script. Unfortunately, my time is up today and I could not test it. I am sending it attached in case you want to give it a try, but make sure you run it in a container or VM where nothing bad can happen ;)
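
In case it helps, this is roughly how I intend to try it myself in a throwaway container (the host path is just an example; mounting the repository at /fuego makes the script skip the initial clone):

	$ docker run -it --rm -v /path/to/fuego:/fuego debian:jessie bash /fuego/native-install-jessie.sh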

> > > > > * as you pointed, parsing fuego's test result file in LAVA is easy to do.
> > > >
> > > > The only problem is that I would need to run the Fuego parser on the target board.
> > > > For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board
> would
> > > need to install the python modules required by fuego-parser. This is on my TODO list since I proposed it
> during
> > > the last Fuego jamboree. I will try to do it as soon as i can.
> > > >
> > > > What alternatives do I have?
> > > > - send the results to LAVA through a REST API instead of having it monitor the serial cable? probably not
> > > possible.
> > > > - create a simplified parser on the test (e.g. using our log_compare function). Not ideal, but possible.
> > > >
> > > > In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python), while
> > > Linaro uses grep/awk/sed directly on the target. There is a trade-off there.
> > > >
> > > > > * existing way to run fuego tests in LAVA are hacks. The problem is
> > > > > they don't scale, 'scale' means remote and distributed CI setup.
> > > >
> > > > Yes, it is a hack.
> > > > I think Fuego is not supposed to run with LAVA, because the goals are very different.
> > > > But parts of Fuego can run with LAVA. This is what I think we can collaborate on.
> > >
> > > Yes, +1. When running with LAVA, IMHO, only the backend and real tests
> > > are needed.
> > >
> > > >
> > > > > * I am tring to hanld both fuego host controller and DUT with LAVA.
> > > > > The first part is hard part. Still tring to find a way. About the host
> > > > > controller part, I started with LAVA-lxc protocol, but hit some
> > > > > jenkins and docker related issues. I feel build, publish and pull a
> > > > > fuego docker image is the way to go now.
> > > >
> > > > I think this approach might be too hard.
> > >
> > > LAVA v2 introduced the lxc-protocol. With the protocol, a single node
> > > test job can deploy and boot an lxc container to control the DUT. Here
> > > is an example: https://lkft.validation.linaro.org/scheduler/job/605270 .
> > > The example job uses an lxc container to deploy images to the DUT. If
> > > the DUT is configured with a static IP, the IP is known to the lxc
> > > container through the LAVA helper lava-target-ip, so an ssh connection
> > > between the lxc and the DUT is possible. Based on these features, I
> > > thought we could run fuego tests with LAVA just like we run them now.
> > > As mentioned above, there is no and will be no support for a
> > > docker-protocol in LAVA, and migrating the fuego installation to lxc is
> > > also problematic. Please do let me know once you have a script for
> > > fuego installation. I am having problems doing that: jenkins missing,
> > > docker missing, permission issues, etc. Once I am able to install fuego
> > > within lxc, I can prepare a job example. It would be one test
> > > definition for all fuego tests. This is how we did it before;
> > > `automated/linux/workload-automation3` is a good example.
> >
> > I see what you want to do. Using LXC sounds doable.
> > But I guess that having Fuego installed on the target (or an LXC DUT) would be much easier.
> 
> Yeah, I guess if the target runs a Debian based distro, then installing on
> the DUT will be easier. Most of our targets run openembedded based distros,
> and it is hard to install fuego on them. It is possible to build docker
> into these OE images and run fuego on the target within a docker container,
> but some of the boards don't have the resources for that... In LAVA, LXC
> or an LXC DUT is mainly used as a host to control other ARM based DUTs.

I see. Running Fuego on OE might require some work, so it is probably easier to start with Debian images.

> > I am going to work on the installation of Fuego natively then.
> > By the way, if you export the docker filesystem (docker export..) and import it in LXC you would get a DUT with
> Fuego installed. Wouldn't that solve your problem? Fuego can run tests on the host (see docker.board) although
> to run with "root" permissions you need to change jenkins permissions.
> 
> I tried to test docker.board within the fuego docker container; it works
> well, and yes, I hit the root permissions issue. I haven't tried to
> import the fuego docker filesystem in LXC, that is a new concept to me.
> Does it require docker installed and running in the LXC container? If
> yes, that is a problem in LAVA. I think we would need to modify the lxc
> configuration somehow on lava-dispatcher to support docker in lxc.

No, I just meant to use Docker to create the filesystem tree and then use it in LXC.
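
Something like this is what I had in mind (untested; I am assuming the container is named fuego-container and using the classic LXC rootfs path):

	$ docker export fuego-container > fuego-rootfs.tar
	$ sudo mkdir -p /var/lib/lxc/fuego/rootfs
	$ sudo tar -xf fuego-rootfs.tar -C /var/lib/lxc/fuego/rootfs
	(then write or adapt /var/lib/lxc/fuego/config by hand)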

> I am getting close to getting my whole setup working with a LAVA
> multinode job. Here are the test definitions in case anyone is interested:
> https://github.com/chase-qi/test-definitions/tree/fuego/automated/linux/fuego-multinode
> I will share a job example once I have it.

Great! Thanks a lot!

Kind regards,
Daniel


> 
> Thanks,
> Chase
> 
> >
> > > Alternatively, I can launch a docker device and the DUT with a multinode
> > > job, but that is complex. And the fuego docker container eats a lot of
> > > memory (blame jenkins?). The existing docker devices in our lab only
> > > have 1G of memory configured.
> >
> > I haven't checked how much memory it consumes, but I guess the reason is Java.
> >
> > > > This is my current work-in-progress approach:
> > > > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> > > >
> > > > - Manual usage (run locally)
> > > >         $ git clone https://github.com/sangorrin/test-definitions
> > > >         $ cd test-definitions
> > > >         $ . ./automated/bin/setenv.sh
> > > >         $ cd automated/linux/fuego/
> > > >         $ ./fuego.sh -d Functional.hello_world
> > > >         $  tree output/
> > > >                 output/
> > > >                 ├── build <- equivalent to fuego buildzone
> > > >                 │   ├── hello
> > > >                 │   ├── hello.c
> > > >                 │   ├── Makefile
> > > >                 │   └── README.md
> > > >                 ├── fuego.Functional.hello_world <- equivalent to board test folder
> > > >                 │   └── hello
> > > >                 └── logs <- equivalent to logdir
> > > >                         └── testlog.txt
> > > > - test-runner usage (run on remote board)
> > > >         $ cd test-definitions
> > > >         $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> > > >         $ ls ../output
> > > >                 result.csv
> > > >                 result.json
> > > >
> > > > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
> > >
> > > You don't have to. It looks like a done job to me. send-to-lava.sh
> > > will take care of it. When running in LAVA, the helper uses
> > > lava-test-case for result collecting, and when running without LAVA,
> > > the helper prints result lines in a fixed format for result parsing
> > > within test-runner. (While I was writing this, I noticed your next reply,
> > > so maybe I am looking at the latest code already. I will give it a spin
> > > with LAVA and come back to you.)
> >
> > Thanks again for checking. I am glad that it worked for you. I have a LAVA setup on the CIP project, so I have
> > started to do tests there.
> >
> > > So basically, we are running in two different directions. From my
> > > point of view, you are porting fuego tests to Linaro test-definitions
> > > natively. Although I am not yet sure how the integration between these
> > > two projects goes, we are happy to see this happening :)
> >
> > Thanks, you are right. But porting the tests to test-definitions misses a lot of the good features in Fuego, such as
> > the pass criteria. Perhaps your approach is better.
> >
> > > > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on
> bash.
> > > > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> > > > Is sh a hard requirement for you guys? Or would you be fine with tests requiring bash?
> > > >
> > >
> > > lava-test-shell requires a POSIX shell. We normally use /bin/sh, which
> > > links to dash on Debian based distros, and we also have some test
> > > definitions like ltp and android tradefed that use bash. bash has some
> > > extensions that are not POSIX compatible. IMHO, using bash without these
> > > extensions is totally fine. We are using shellcheck in the sanity check to
> > > detect potential POSIX issues.
> >
> > OK, I got it. Thank you!
> >
> > Kind regards,
> > Daniel
> >
> > >
> > > Thanks,
> > > Chase
> > >
> > >
> > > > Thanks,
> > > > Daniel
> > > >
> > > > > We probably should start a new thread for this topic to share progress?
> > > > >
> > > > > Thanks,
> > > > > Chase
> > > > >
> > > > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > > > >
> > > > >
> > > > > > Thanks,
> > > > > > Daniel
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > > > fuego@lists.linuxfoundation.org
> > > > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > > > >
> > > > > > > Comments inline below.
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Daniel Sangorrin
> > > > > > > >
> > > > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > > > It is still a proof of concept and only tested with
> > > > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > > > is left.
> > > > > > > >
> > > > > > > > To try it follow these steps:
> > > > > > > >
> > > > > > > > - prepare SSH_KEY for your board
> > > > > > > >     Eg: Inside fuego's docker container do
> > > > > > > >     > su jenkins
> > > > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > >     > vi ~/.ssh/config
> > > > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > > > - execute the job from jenkins
> > > > > > > > - expected results
> > > > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > > > >     - run.json
> > > > > > > >     - csv
> > > > > > > >
> > > > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > ---
> > > > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > > > +++++++++++++++++++++++++++++++
> > > > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > > > >  5 files changed, 130 insertions(+)
> > > > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > > > >
> > > > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > > > new file mode 100644
> > > > > > > > index 0000000..b8c8fb6
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > > > @@ -0,0 +1,3 @@
> > > > > > > > +{
> > > > > > > > +    "chart_type": "testcase_table"
> > > > > > > > +}
> > > > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > new file mode 100755
> > > > > > > > index 0000000..17b56a9
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > @@ -0,0 +1,59 @@
> > > > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > > > +
> > > > > > > > +# Root permissions required for
> > > > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > > > specified
> > > > > > > > +# - executing some of the tests
> > > > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > > > +NEED_ROOT=1
> > > > > > > > +
> > > > > > > > +function test_pre_check {
> > > > > > > > +    # linaro parser dependencies
> > > > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > > > +    assert_has_program sed
> > > > > > > > +    assert_has_program awk
> > > > > > > > +    assert_has_program grep
> > > > > > > > +    assert_has_program egrep
> > > > > > > > +    assert_has_program tee
> > > > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > > > no need to check for them here.
> > > > > > > I already made a patch to remove those lines.
> > > > > > >
> > > > > > > > +
> > > > > > > > +    # test-runner requires a password-less connection
> > > > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > > > +    # su jenkins
> > > > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > > +    # vi ~/.ssh/config
> > > > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +function test_build {
> > > > > > > > +    source ./automated/bin/setenv.sh
> > > > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > > > >
> > > > > > > OK.  I gave this a spin, and here's an error I got:
> > > > > > >
> > > > > > > ===== doing fuego phase: build =====
> > > > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > > > Cloning into 'fuego_git_repo'...
> > > > > > > Checkout branch/tag/commit id master.
> > > > > > > Already on 'master'
> > > > > > > Your branch is up-to-date with 'origin/master'.
> > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > BIN_PATH:
> > > > > > >
> > > > >
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > Downloading/unpacking pexpect (from -r
> > > > > > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > (line 1))
> > > > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in
> /usr/lib/python2.7/dist-packages
> > > (from -r
> > > > > > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > (line 2))
> > > > > > > Requirement already satisfied (use --upgrade to upgrade): requests in
> /usr/lib/python2.7/dist-packages
> > > (from
> > > > > -r
> > > > > > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > (line 3))
> > > > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > (line 1))
> > > > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > > > Installing collected packages: pexpect, ptyprocess
> > > > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > > > >                              ^
> > > > > > > SyntaxError: invalid syntax
> > > > > > >
> > > > > > > Successfully installed pexpect ptyprocess
> > > > > > > Cleaning up...
> > > > > > > Fuego test_build duration=1.56257462502 seconds
> > > > > > >
> > > > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > > > error after the first build of the job.
> > > > > > >
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +function test_run {
> > > > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > > > +
> > > > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > > > +            abort_job "$yaml_file not found"
> > > > > > > > +    fi
> > > > > > > > +
> > > > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > > > +            echo "using test plan: $yaml_file"
> > > > > > > > +            test_or_plan_flag="-p"
> > > > > > > > +    else
> > > > > > > > +            echo "using test definition: $yaml_file"
> > > > > > > > +            test_or_plan_flag="-d"
> > > > > > > > +    fi
> > > > > > > > +
> > > > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > > > +    else
> > > > > > > > +        PARAMS=""
> > > > > > > > +    fi
> > > > > > > > +
> > > > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > > > repository, clean unnecessary files
> > > > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > > > b/tests/Functional.linaro/parser.py
> > > > > > > > new file mode 100755
> > > > > > > > index 0000000..48b502b
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > > > @@ -0,0 +1,25 @@
> > > > > > > > +#!/usr/bin/python
> > > > > > > > +
> > > > > > > > +import os, sys, collections
> > > > > > > > +import common as plib
> > > > > > > > +import json
> > > > > > > > +
> > > > > > > > +# allocate variable to store the results
> > > > > > > > +measurements = {}
> > > > > > > > +measurements = collections.OrderedDict()
> > > > > > > > +
> > > > > > > > +# read results from linaro result.json format
> > > > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > > > +    data = json.load(f)[0]
> > > > > > > > +
> > > > > > > > +for test_case in data['metrics']:
> > > > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > > > +    result = test_case['result']
> > > > > > > > +    # FIXTHIS: add measurements when available
> > > > > > > > +    # measurement = test_case['measurement']
> > > > > > > > +    # units = test_case['units']
> > > > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > > > +
> > > > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > > > +
> > > > > > > > +sys.exit(plib.process(measurements))
> > > > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > > > b/tests/Functional.linaro/spec.json
> > > > > > > > new file mode 100644
> > > > > > > > index 0000000..561e2ab
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > > > @@ -0,0 +1,16 @@
> > > > > > > > +{
> > > > > > > > +    "testName": "Functional.linaro",
> > > > > > > > +    "specs": {
> > > > > > > > +        "default": {
> > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > +        },
> > > > > > > > +        "smoke": {
> > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > +            "params": "TESTS='pwd'",
> > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > +        }
> > > > > > > > +    }
> > > > > > > > +}
> > > > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > > > new file mode 100644
> > > > > > > > index 0000000..a2efee8
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > > > @@ -0,0 +1,27 @@
> > > > > > > > +fuego_package_version: 1
> > > > > > > > +name: Functional.linaro
> > > > > > > > +description: |
> > > > > > > > +    Linaro test-definitions
> > > > > > > > +license: GPL-2.0
> > > > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > +version: latest git commits
> > > > > > > > +fuego_release: 1
> > > > > > > > +type: Functional
> > > > > > > > +tags: ['kernel', 'linaro']
> > > > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > > > +params:
> > > > > > > > +    - YAML:
> > > > > > > > +        description: test definiton or plan.
> > > > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > > > +        optional: no
> > > > > > > > +    - PARAMS:
> > > > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > > > [PARAM2=VALUE2]
> > > > > > > > +        example: "TESTS='pwd'"
> > > > > > > > +        optional: yes
> > > > > > > > +data_files:
> > > > > > > > +    - chart_config.json
> > > > > > > > +    - fuego_test.sh
> > > > > > > > +    - parser.py
> > > > > > > > +    - spec.json
> > > > > > > > +    - test.yaml
> > > > > > > > --
> > > > > > > > 2.7.4
> > > > > > >
> > > > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > > > The issue may be something weird in my board file or configuration.
> > > > > > >
> > > > > > > ===== doing fuego phase: run =====
> > > > > > > -------------------------------------------------
> > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > BIN_PATH:
> > > > > > >
> > > > >
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > > >
> /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on
> root@10.0.1.74
> > > > > > > {'path':
> > > > > > >
> > > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > + cat uuid
> > > > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > > + cd ./automated/linux/smoke/
> > > > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > > > INFO: install_deps skipped
> > > > > > >
> > > > > > > INFO: Running pwd test...
> > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > > > pwd pass
> > > > > > >
> > > > > > > INFO: Running lsb_release test...
> > > > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > > > lsb_release fail
> > > > > > >
> > > > > > > INFO: Running uname test...
> > > > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > > > uname pass
> > > > > > >
> > > > > > > INFO: Running ip test...
> > > > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen
> 1
> > > > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > > >     inet 127.0.0.1/8 scope host lo
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > >     inet6 ::1/128 scope host
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
> group
> > > default
> > > > > > > qlen 1000
> > > > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > >     link/can
> > > > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > >     link/can
> > > > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> default
> > > qlen 1000
> > > > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> default
> > > qlen 1000
> > > > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > ip pass
> > > > > > >
> > > > > > > INFO: Running lscpu test...
> > > > > > > Architecture:          armv7l
> > > > > > > Byte Order:            Little Endian
> > > > > > > CPU(s):                1
> > > > > > > On-line CPU(s) list:   0
> > > > > > > Thread(s) per core:    1
> > > > > > > Core(s) per socket:    1
> > > > > > > Socket(s):             1
> > > > > > > Model:                 2
> > > > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > > > CPU max MHz:           1000.0000
> > > > > > > CPU min MHz:           300.0000
> > > > > > > BogoMIPS:              995.32
> > > > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > > > lscpu pass
> > > > > > >
> > > > > > > INFO: Running vmstat test...
> > > > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > > > vmstat pass
> > > > > > >
> > > > > > > INFO: Running lsblk test...
> > > > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > > > lsblk pass
> > > > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > > > >
> > > > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > > >
> /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > =vmstat RESULT=pass>
> > > > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > --- Printing result.csv ---
> > > > > > > name,test_case_id,result,measurement,units,test_params
> > > > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > >
> > > > > > > -------------------------------------------------
> > > > > > > ===== doing fuego phase: post_test =====
> > > > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > > > INFO: the test did not produce a test log on the target
> > > > > > > ===== doing fuego phase: processing =====
> > > > > > > ### WARNING: Program returned exit code ''
> > > > > > > ### WARNING: Log evaluation may be invalid
> > > > > > > ### Unrecognized results format
> > > > > > > ### Unrecognized results format
> > > > > > > ### Unrecognized results format
> > > > > > > ### Unrecognized results format
> > > > > > > ### Unrecognized results format
> > > > > > > ### Unrecognized results format
> > > > > > > ### Unrecognized results format
> > > > > > > ERROR: results did not satisfy the threshold
> > > > > > > Fuego: requested test phases complete!
> > > > > > > Build step 'Execute shell' marked build as failure
> > > > > > >
> > > > > > > ----------
> > > > > > >
> > > > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > > > that I should fix, please let me know.
> > > > > > >
> > > > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > > > >
> > > > > > > Just FYI.  Thanks for the code.
> > > > > > >  -- Tim
> > > > > >
> > > > > > _______________________________________________
> > > > > > Fuego mailing list
> > > > > > Fuego@lists.linuxfoundation.org
> > > > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego

[-- Attachment #2: native-install-jessie.sh --]
[-- Type: application/octet-stream, Size: 3933 bytes --]

#!/bin/bash
# 2019 (c) Toshiba corp. <daniel.sangorrin@toshiba.co.jp>
#
# Usage:
#  $ sudo /fuego/native-install-jessie.sh
#
# Proxy users:
# - echo "use_proxy = on" >> /etc/wgetrc
# - make sure http(s)_proxy is defined

if [[ $EUID -ne 0 ]]; then
	echo "Sorry, you need root permissions"
	exit 1
fi

# make sure that Fuego is cloned
if [ ! -d "/fuego" ]; then
	cd /
	git clone https://bitbucket.org/nirrognas/fuego.git
	ln -s /fuego/fuego-ro /fuego-ro
	cd /fuego
	git clone https://bitbucket.org/nirrognas/fuego-core.git
fi

# ==============================================================================
# Install dependencies
# ==============================================================================

apt-get update

# Fuego python dependencies
apt-get -yV install \
	python-lxml python-simplejson python-yaml python-openpyxl \
	python-requests python-reportlab python-parsedatetime \
	python-pip
pip install filelock

# Fuego command dependencies
apt-get -yV install \
	git sshpass openssh-client sudo net-tools wget curl lava-tool

# Default SDK for testing locally or on an x86 board
apt-get -yV install \
	gcc g++ make cmake bison flex autoconf automake libtool \
	libelf-dev libssl-dev libsdl1.2-dev libcairo2-dev libxmu-dev \
	libxmuu-dev libglib2.0-dev libaio-dev u-boot-tools pkg-config

# Default test host dependencies
apt-get -yV install \
	iperf iperf3 netperf bzip2 bc python-matplotlib python-xmltodict
pip install flake8

echo "dash dash/sh boolean false" | debconf-set-selections
dpkg-reconfigure dash
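# the two lines above switch /bin/sh from dash back to bash (some Fuego helper scripts assume bash)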

# ==============================================================================
# get ttc script and helpers
# ==============================================================================
git clone https://github.com/tbird20d/ttc.git /usr/local/src/ttc
/usr/local/src/ttc/install.sh /usr/local/bin
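# point ttc at Fuego's board configuration directory instead of /etc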
perl -p -i -e "s#config_dir = \"/etc\"#config_dir = \"/fuego-ro/conf\"#" /usr/local/bin/ttc

# ==============================================================================
# Serial Config
# ==============================================================================
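# serio (sercp/sersh) provides scp- and ssh-like file transfer and command execution
# over a serial console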

git clone https://github.com/frowand/serio.git /usr/local/src/serio
cp /usr/local/src/serio/serio /usr/local/bin/
ln -s /usr/local/bin/serio /usr/local/bin/sercp
ln -s /usr/local/bin/serio /usr/local/bin/sersh

git clone https://github.com/tbird20d/serlogin.git /usr/local/src/serlogin
cp /usr/local/src/serlogin/serlogin /usr/local/bin/

# ==============================================================================
# fserver
# ==============================================================================

git clone https://github.com/tbird20d/fserver.git /usr/local/lib/fserver
ln -s /usr/local/lib/fserver/start_local_bg_server /usr/local/bin/start_local_bg_server

# ==============================================================================
# ftc post installation
# ==============================================================================

ln -s /fuego-core/scripts/ftc /usr/local/bin/

# ==============================================================================
# Lava
# ==============================================================================

ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin

# ==============================================================================
# Small guide
# ==============================================================================
echo "Edit /fuego-ro/conf/fuego.conf, and set jenkins_enabled=0"
echo "Run 'service netperf start' to start a netperf server"
echo "Run 'iperf3 -V -s -D -f M' to start an iperf3 server"
echo "Run 'ftc list-boards' to see the available boards"
echo "Run 'ftc list-tests' to see the available tests"
echo "Run 'ftc -b local -t Functional.hello_world' to run a hello world"
echo "Run 'ftc -b local -t Benchmark.Dhrystone -s 500M' to run Dhrystone"


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-22  8:14       ` daniel.sangorrin
@ 2019-02-25  5:24         ` Chase Qi
  2019-02-25  5:35           ` daniel.sangorrin
  0 siblings, 1 reply; 14+ messages in thread
From: Chase Qi @ 2019-02-25  5:24 UTC (permalink / raw)
  To: daniel.sangorrin; +Cc: fuego

On Fri, Feb 22, 2019 at 4:14 PM <daniel.sangorrin@toshiba.co.jp> wrote:
>
> Hello Chase,
>
> > From: Chase Qi <chase.qi@linaro.org>
> > > > > Having Fuego run on docker makes sure that anyone can get the same environment quickly and it protects
> > the
> > > > host system from Fuego bugs. Having said that, I would like to prepare a script to install fuego on the host
> > system
> > > > in the future.
> > > >
> > > > Please post on the ML or just let me know when you have it. I *want* it.
> > >
> > > OK, I am going to start this.
> > > I need to know what OS you would like to install Fuego first.
> > > Debian Jessie would be the easiest, because Fuego docker uses Jessie but I can port it to Debian Buster or
> > Ubuntu 18.04 for example.
> > >
> >
> > Thanks a lot for doing so. Jessie is good enough. We have some jobs
> > use Jessie for fastboot deployment, they work well.
>
> Today, I sent a patch series that allows you to install Fuego without Jenkins on Docker. Maybe that will solve your previous problems. I also submitted a few more changes to allow users changing the port where Jenkins listens.

I noticed the patches. I will definitely give them a spin later on. I am
currently still using a fuego v1.40 based docker image for prototyping
with a LAVA multinode job. I built and uploaded a fuego docker image with
the fuego test code included here:
https://cloud.docker.com/repository/docker/chaseqi/standalone-fuego/tags
BTW, do you guys plan to publish an official fuego docker image?

Here are the changes I made in the Dockerfile.

```
$ git diff
diff --git a/Dockerfile b/Dockerfile
index 269e1f6..16586fa 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -114,6 +114,15 @@ RUN CHROME_DRIVER_VERSION=$(curl --silent --fail \
 RUN echo "jenkins ALL = (root) NOPASSWD:ALL" >> /etc/sudoers


+# ==============================================================================
+#Install fuego
+# ==============================================================================
+RUN git clone https://bitbucket.org/tbird20d/fuego.git /fuego \
+    && git clone https://bitbucket.org/tbird20d/fuego-core.git /fuego-core \
+    && ln -s /fuego/fuego-ro/ / \
+    && ln -s /fuego/fuego-rw/ /
+
+
 # ==============================================================================
 # get ttc script and helpers
 # ==============================================================================
@@ -201,8 +210,8 @@ RUN chown -R jenkins:jenkins $JENKINS_HOME/
 # Lava
 # ==============================================================================

-RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
-RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
+# RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
+# RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
 # CONVENIENCE HACKS
 # not mounted, yet
 #RUN echo "fuego-create-node --board raspberrypi3" >> /root/firststart.sh
@@ -218,6 +227,14 @@ RUN ln -s
/fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
 #RUN DEBIAN_FRONTEND=noninteractive apt-get update
 #RUN DEBIAN_FRONTEND=noninteractive apt-get -yV install
crossbuild-essential-armhf cpp-arm-linux-gnueabihf
gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf

+# ==============================================================================
+#Install arm64 toolchain
+# ==============================================================================
+RUN /fuego-ro/toolchains/install_cross_toolchain.sh arm64 \
+    && apt-get clean \
+    && rm -rf /tmp/* /var/tmp/*
+
+
 # ==============================================================================
 # Setup startup command
 # ==============================================================================
```
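
For reference, building and publishing the image from the patched Dockerfile could
look roughly like this (just a sketch; the repository name and tag below are only
examples):

```
# build an image from the modified Dockerfile and push it to a registry
# (chaseqi/standalone-fuego:v1.40 is only an example name)
docker build -t chaseqi/standalone-fuego:v1.40 .
docker push chaseqi/standalone-fuego:v1.40
```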

>
> I am also preparing a native install script. Unfortunately, my time is up today and I couldn't test it. I send it to you attached in case you want to give it a try. But make sure you do it on a container or VM where nothing bad can happen ;)

Thanks a lot. I just tried the script within an LXC jessie container.
Most of it just works. The following runs with ftc commands are
problematic, which is probably expected. I guess we also need to
patch ftc to make jenkins and docker optional.

```
root@fuego-native:/fuego/fuego-core/engine# ftc list-boards
sudo: docker: command not found
$ diff native-install-jessie.sh native-install-jessie.sh.original
86,92c86
< # ftc list-boards reports
< # sudo: docker: command not found
< # TODO: patch ftc to make docker optional.
< curl -fsSL https://get.docker.com -o get-docker.sh
< sh get-docker.sh
<
< ln -s /fuego/fuego-core/engine/scripts/ftc /usr/local/bin/
---
> ln -s /fuego-core/scripts/ftc /usr/local/bin/

root@fuego-native:/fuego/fuego-core/engine/scripts# git diff
diff --git a/engine/scripts/ftc b/engine/scripts/ftc
index ab2d2cb..1b09812 100755
--- a/engine/scripts/ftc
+++ b/engine/scripts/ftc
@@ -4665,7 +4665,7 @@ def main():
                 print "Can't do rm-jobs outside the container! Aborting."
                 sys.exit(1)
             command += arg + " "
-        container_command(command)
+        #container_command(command)

     if len(sys.argv) < 2:
         error_out('Missing command\nUse "ftc help" to get usage help.', 1)
@@ -4781,7 +4781,7 @@ def main():
         # shows fuego boards
         do_list_boards(conf)

-    import jenkins
+    #import jenkins
     server = jenkins.Jenkins('http://localhost:8080/fuego')

     if command.startswith("add-job"):

# ftc run-test -b raspberrypi3 -t Benchmark.fio -s default
Traceback (most recent call last):
  File "/usr/local/bin/ftc", line 4929, in <module>
    main()
  File "/usr/local/bin/ftc", line 4785, in main
    server = jenkins.Jenkins('http://localhost:8080/fuego')
AttributeError: 'NoneType' object has no attribute 'Jenkins
```

Thanks,
Chase

>
> > > > > > * as you pointed, parsing fuego's test result file in LAVA is easy to do.
> > > > >
> > > > > The only problem is that I would need to run the Fuego parser on the target board.
> > > > > For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board
> > would
> > > > need to install the python modules required by fuego-parser. This is on my TODO list since I proposed it
> > during
> > > > the last Fuego jamboree. I will try to do it as soon as i can.
> > > > >
> > > > > What alternatives do I have?
> > > > > - send the results to LAVA through a REST API instead of having it monitor the serial cable? probably not
> > > > possible.
> > > > > - create a simplified parser on the test (e.g. using our log_compare function). Not ideal, but possible.
> > > > >
> > > > > In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python), while
> > > > Linaro uses grep/awk/sed directly on the target. There is a trade-off there.
> > > > >
> > > > > > * existing way to run fuego tests in LAVA are hacks. The problem is
> > > > > > they don't scale, 'scale' means remote and distributed CI setup.
> > > > >
> > > > > Yes, it is a hack.
> > > > > I think Fuego is not supposed to run with LAVA, because the goals are very different.
> > > > > But parts of Fuego can run with LAVA. This is what I think we can collaborate on.
> > > >
> > > > Yes, +1. When running with LAVA, IMHO, only the backend and real tests
> > > > are needed.
> > > >
> > > > >
> > > > > > * I am tring to hanld both fuego host controller and DUT with LAVA.
> > > > > > The first part is hard part. Still tring to find a way. About the host
> > > > > > controller part, I started with LAVA-lxc protocol, but hit some
> > > > > > jenkins and docker related issues. I feel build, publish and pull a
> > > > > > fuego docker image is the way to go now.
> > > > >
> > > > > I think this approach might be too hard.
> > > >
> > > > LAVA v2 introduced lxc-protocol. With the protocol, single node test
> > > > job can deploy and boot a lxc container to control DUT. Here is an
> > > > example: https://lkft.validation.linaro.org/scheduler/job/605270 . The
> > > > example job use lxc contianer to deploy imgs to DUT. If DUT was
> > > > configed with static IP, the IP is known to lxc container with LAVA
> > > > helper lava-target-ip, then ssh connection between lxc and DUT is
> > > > possible. Based on these features, I thought we can run fuego tests
> > > > with LAVA just like how we run it now. As mentioned above, there is no
> > > > and will be no support for docker-protocol in LAVA, and migrating
> > > > fuego installation to lxc also is problemic. Please do let me know
> > > > once you have a script for fuego installation. I am having problem to
> > > > do that, hit jenkins missing, docker missing, permission issues, etc.
> > > > Once I am alble to install fuego within lxc, I can propare a job
> > > > example. It would be one test definition for all fuego tests. This is
> > > > how we do it before. `automated/linux/workload-automation3
> > > > ` is a good example.
> > >
> > > I see what you want to do. Using LXC sounds doable.
> > > But I guess that having Fuego installed on the target (or an LXC DUT) would be much easier.
> >
> > Yeah, I guess if target run Debian based distros, then installing on
> > DUT will be easier. Most of our targets run openembedded based distro,
> > it is hard to install fuego on them. It is possible to build docker
> > into these OE images and run fuego on target within docker container,
> > but some of the boards don't have the resource for that...  In LAVA,
> > LXC or LXC DUT are mainly used as host to control other ARM based
> > DUTs.
>
> I see, maybe running Fuego on OE might require some work. Maybe it is easier to start with Debian images.
>
> > > I am going to work on the installation of Fuego natively then.
> > > By the way, if you export the docker filesystem (docker export..) and import it in LXC you would get a DUT with
> > Fuego installed. Wouldn't that solve your problem? Fuego can run tests on the host (see docker.board) although
> > to run with "root" permissions you need to change jenkins permissions.
> >
> > I tried to test docker.board within fuego docker container, it works
> > well, and yes, I hit the root permissions issue. I haven't tried to
> > import fuego docker filesystem in LXC, that is a new concept to me.
> > Does it require docker installed and running in LXC container? If yes,
> > that is a problem in LAVA. I think we will need to modify lxc
> > cofiguration somehow on lava-dispatcher to support docker in lxc.
>
> No, I just meant to use Docker to create the filesystem tree and then use it in LXC.
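
Something like this, I suppose (untested sketch; the container and path names below
are just placeholders):

```
# untested sketch: export the Fuego container filesystem and reuse it as an LXC rootfs
docker export fuego-container > fuego-rootfs.tar
mkdir -p /var/lib/lxc/fuego-native/rootfs
tar -xf fuego-rootfs.tar -C /var/lib/lxc/fuego-native/rootfs
# an LXC config for the "fuego-native" container still has to be written separately
```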
>
> > I am getting close to get my whole setup working with LAVA multinode
> > job. Here is the test definitions in case anyone interested in
> > https://github.com/chase-qi/test-definitions/tree/fuego/automated/linux/fuego-multinode
> > . I will share a job example once I have it.
>
> Great! Thanks a lot!
>
> Kind regards,
> Daniel
>
>
> >
> > Thanks,
> > Chase
> >
> > >
> > > > Alternatively, I can lunch docker device and DUT with multinode job,
> > > > but that is complex. And fuego docker container eats a lot of
> > > > memory(blame jenkins?). The exsting docker devices in our lib only
> > > > have 1G memory configured.
> > >
> > > I haven't checked the memory consumed, I guess the reason is Java.
> > >
> > > > > This is my current work-in-progress approach:
> > > > > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> > > > >
> > > > > - Manual usage (run locally)
> > > > >         $ git clone https://github.com/sangorrin/test-definitions
> > > > >         $ cd test-definitions
> > > > >         $ . ./automated/bin/setenv.sh
> > > > >         $ cd automated/linux/fuego/
> > > > >         $ ./fuego.sh -d Functional.hello_world
> > > > >         $  tree output/
> > > > >                 output/
> > > > >                 ├── build <- equivalent to fuego buildzone
> > > > >                 │   ├── hello
> > > > >                 │   ├── hello.c
> > > > >                 │   ├── Makefile
> > > > >                 │   └── README.md
> > > > >                 ├── fuego.Functional.hello_world <- equivalent to board test folder
> > > > >                 │   └── hello
> > > > >                 └── logs <- equivalent to logdir
> > > > >                         └── testlog.txt
> > > > > - test-runner usage (run on remote board)
> > > > >         $ cd test-definitions
> > > > >         $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> > > > >         $ ls ../output
> > > > >                 result.csv
> > > > >                 result.json
> > > > >
> > > > > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
> > > >
> > > > You don't have to. It looks like a done job to me. send-to-lava.sh
> > > > will take care of it. When running in LAVA, the helper uses
> > > > lava-test-case for result collecting, and when running without LAVA,
> > > > the helper prints result lines in a fixed format for result parsing
> > > > within test-runner. (When I writing this, I noticed your next reply,
> > > > maybe I am looking at the latest code already, I will give it a spin
> > > > with LAVA and come back to you)
> > >
> > > Thanks again for checking. I am glad that it worked for your. I have a LAVA setup on the CIP project so I have
> > started to do tests there.
> > >
> > > > So basically, we are running in two different directions. From my
> > > > point of view, you are porting fuego tests to Linaro test-definitions
> > > > natively. Although I am not yet sure how the integration between these
> > > > two projects goes, we are happy to see this happening :)
> > >
> > > Thanks, you are right. But porting it to Fuego misses a lot of the good features in Fuego such as the passing
> > criteria. Perhaps your approach is better.
> > >
> > > > > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend on
> > bash.
> > > > > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test definitions.
> > > > > Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> > > > >
> > > >
> > > > lava-test-shell requires POSIX shell. We normally use /bin/sh which
> > > > links to dash on Debian based distros, and we also have some test
> > > > definitions like ltp and android trandfed using bash. bash has some
> > > > extensions are not POSIX compatiable. IMHO, using bash without these
> > > > extensions is totally fine. We are using shellcheck in sanity check to
> > > > dedect potential POSIX issues.
> > >
> > > OK, I got it. Thank you!
> > >
> > > Kind regards,
> > > Daniel
> > >
> > > >
> > > > Thanks,
> > > > Chase
> > > >
> > > >
> > > > > Thanks,
> > > > > Daniel
> > > > >
> > > > > > We probably should start a new thread for this topic to share progress?
> > > > > >
> > > > > > Thanks,
> > > > > > Chase
> > > > > >
> > > > > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > > > > >
> > > > > >
> > > > > > > Thanks,
> > > > > > > Daniel
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > > > > fuego@lists.linuxfoundation.org
> > > > > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > > > > >
> > > > > > > > Comments inline below.
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Daniel Sangorrin
> > > > > > > > >
> > > > > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > > > > It is still a proof of concept and only tested with
> > > > > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > > > > is left.
> > > > > > > > >
> > > > > > > > > To try it follow these steps:
> > > > > > > > >
> > > > > > > > > - prepare SSH_KEY for your board
> > > > > > > > >     Eg: Inside fuego's docker container do
> > > > > > > > >     > su jenkins
> > > > > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > > >     > vi ~/.ssh/config
> > > > > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > > > > - execute the job from jenkins
> > > > > > > > > - expected results
> > > > > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > > > > >     - run.json
> > > > > > > > >     - csv
> > > > > > > > >
> > > > > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > > ---
> > > > > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > > > > +++++++++++++++++++++++++++++++
> > > > > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > > > > >  5 files changed, 130 insertions(+)
> > > > > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > > > > >
> > > > > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > > > > new file mode 100644
> > > > > > > > > index 0000000..b8c8fb6
> > > > > > > > > --- /dev/null
> > > > > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > > > > @@ -0,0 +1,3 @@
> > > > > > > > > +{
> > > > > > > > > +    "chart_type": "testcase_table"
> > > > > > > > > +}
> > > > > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > new file mode 100755
> > > > > > > > > index 0000000..17b56a9
> > > > > > > > > --- /dev/null
> > > > > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > @@ -0,0 +1,59 @@
> > > > > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > > > > +
> > > > > > > > > +# Root permissions required for
> > > > > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > > > > specified
> > > > > > > > > +# - executing some of the tests
> > > > > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > > > > +NEED_ROOT=1
> > > > > > > > > +
> > > > > > > > > +function test_pre_check {
> > > > > > > > > +    # linaro parser dependencies
> > > > > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > > > > +    assert_has_program sed
> > > > > > > > > +    assert_has_program awk
> > > > > > > > > +    assert_has_program grep
> > > > > > > > > +    assert_has_program egrep
> > > > > > > > > +    assert_has_program tee
> > > > > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > > > > no need to check for them here.
> > > > > > > > I already made a patch to remove those lines.
> > > > > > > >
> > > > > > > > > +
> > > > > > > > > +    # test-runner requires a password-less connection
> > > > > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > > > > +    # su jenkins
> > > > > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > > > +    # vi ~/.ssh/config
> > > > > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +function test_build {
> > > > > > > > > +    source ./automated/bin/setenv.sh
> > > > > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > > > > >
> > > > > > > > OK.  I gave this a spin, and here's an error I got:
> > > > > > > >
> > > > > > > > ===== doing fuego phase: build =====
> > > > > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > > > > Cloning into 'fuego_git_repo'...
> > > > > > > > Checkout branch/tag/commit id master.
> > > > > > > > Already on 'master'
> > > > > > > > Your branch is up-to-date with 'origin/master'.
> > > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > > BIN_PATH:
> > > > > > > >
> > > > > >
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > > Downloading/unpacking pexpect (from -r
> > > > > > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > (line 1))
> > > > > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in
> > /usr/lib/python2.7/dist-packages
> > > > (from -r
> > > > > > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > (line 2))
> > > > > > > > Requirement already satisfied (use --upgrade to upgrade): requests in
> > /usr/lib/python2.7/dist-packages
> > > > (from
> > > > > > -r
> > > > > > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > (line 3))
> > > > > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > (line 1))
> > > > > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > > > > Installing collected packages: pexpect, ptyprocess
> > > > > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > > > > >                              ^
> > > > > > > > SyntaxError: invalid syntax
> > > > > > > >
> > > > > > > > Successfully installed pexpect ptyprocess
> > > > > > > > Cleaning up...
> > > > > > > > Fuego test_build duration=1.56257462502 seconds
> > > > > > > >
> > > > > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > > > > error after the first build of the job.
> > > > > > > >
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +function test_run {
> > > > > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > > > > +
> > > > > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > > > > +            abort_job "$yaml_file not found"
> > > > > > > > > +    fi
> > > > > > > > > +
> > > > > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > > > > +            echo "using test plan: $yaml_file"
> > > > > > > > > +            test_or_plan_flag="-p"
> > > > > > > > > +    else
> > > > > > > > > +            echo "using test definition: $yaml_file"
> > > > > > > > > +            test_or_plan_flag="-d"
> > > > > > > > > +    fi
> > > > > > > > > +
> > > > > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > > > > +    else
> > > > > > > > > +        PARAMS=""
> > > > > > > > > +    fi
> > > > > > > > > +
> > > > > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > > > > repository, clean unnecessary files
> > > > > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > > > > b/tests/Functional.linaro/parser.py
> > > > > > > > > new file mode 100755
> > > > > > > > > index 0000000..48b502b
> > > > > > > > > --- /dev/null
> > > > > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > > > > @@ -0,0 +1,25 @@
> > > > > > > > > +#!/usr/bin/python
> > > > > > > > > +
> > > > > > > > > +import os, sys, collections
> > > > > > > > > +import common as plib
> > > > > > > > > +import json
> > > > > > > > > +
> > > > > > > > > +# allocate variable to store the results
> > > > > > > > > +measurements = {}
> > > > > > > > > +measurements = collections.OrderedDict()
> > > > > > > > > +
> > > > > > > > > +# read results from linaro result.json format
> > > > > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > > > > +    data = json.load(f)[0]
> > > > > > > > > +
> > > > > > > > > +for test_case in data['metrics']:
> > > > > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > > > > +    result = test_case['result']
> > > > > > > > > +    # FIXTHIS: add measurements when available
> > > > > > > > > +    # measurement = test_case['measurement']
> > > > > > > > > +    # units = test_case['units']
> > > > > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > > > > +
> > > > > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > > > > +
> > > > > > > > > +sys.exit(plib.process(measurements))
> > > > > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > > > > b/tests/Functional.linaro/spec.json
> > > > > > > > > new file mode 100644
> > > > > > > > > index 0000000..561e2ab
> > > > > > > > > --- /dev/null
> > > > > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > > > > @@ -0,0 +1,16 @@
> > > > > > > > > +{
> > > > > > > > > +    "testName": "Functional.linaro",
> > > > > > > > > +    "specs": {
> > > > > > > > > +        "default": {
> > > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > > +        },
> > > > > > > > > +        "smoke": {
> > > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > > +            "params": "TESTS='pwd'",
> > > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > > +        }
> > > > > > > > > +    }
> > > > > > > > > +}
> > > > > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > > > > new file mode 100644
> > > > > > > > > index 0000000..a2efee8
> > > > > > > > > --- /dev/null
> > > > > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > > > > @@ -0,0 +1,27 @@
> > > > > > > > > +fuego_package_version: 1
> > > > > > > > > +name: Functional.linaro
> > > > > > > > > +description: |
> > > > > > > > > +    Linaro test-definitions
> > > > > > > > > +license: GPL-2.0
> > > > > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > > +version: latest git commits
> > > > > > > > > +fuego_release: 1
> > > > > > > > > +type: Functional
> > > > > > > > > +tags: ['kernel', 'linaro']
> > > > > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > > > > +params:
> > > > > > > > > +    - YAML:
> > > > > > > > > +        description: test definiton or plan.
> > > > > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > > > > +        optional: no
> > > > > > > > > +    - PARAMS:
> > > > > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > > > > [PARAM2=VALUE2]
> > > > > > > > > +        example: "TESTS='pwd'"
> > > > > > > > > +        optional: yes
> > > > > > > > > +data_files:
> > > > > > > > > +    - chart_config.json
> > > > > > > > > +    - fuego_test.sh
> > > > > > > > > +    - parser.py
> > > > > > > > > +    - spec.json
> > > > > > > > > +    - test.yaml
> > > > > > > > > --
> > > > > > > > > 2.7.4
> > > > > > > >
> > > > > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > > > > The issue may be something weird in my board file or configuration.
> > > > > > > >
> > > > > > > > ===== doing fuego phase: run =====
> > > > > > > > -------------------------------------------------
> > > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > > BIN_PATH:
> > > > > > > >
> > > > > >
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > > > >
> > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on
> > root@10.0.1.74
> > > > > > > > {'path':
> > > > > > > >
> > > > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > + cat uuid
> > > > > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > > > + cd ./automated/linux/smoke/
> > > > > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > > > > INFO: install_deps skipped
> > > > > > > >
> > > > > > > > INFO: Running pwd test...
> > > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > > > > pwd pass
> > > > > > > >
> > > > > > > > INFO: Running lsb_release test...
> > > > > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > > > > lsb_release fail
> > > > > > > >
> > > > > > > > INFO: Running uname test...
> > > > > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > > > > uname pass
> > > > > > > >
> > > > > > > > INFO: Running ip test...
> > > > > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen
> > 1
> > > > > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > > > >     inet 127.0.0.1/8 scope host lo
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > >     inet6 ::1/128 scope host
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
> > group
> > > > default
> > > > > > > > qlen 1000
> > > > > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > > >     link/can
> > > > > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > > >     link/can
> > > > > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > default
> > > > qlen 1000
> > > > > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > default
> > > > qlen 1000
> > > > > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > ip pass
> > > > > > > >
> > > > > > > > INFO: Running lscpu test...
> > > > > > > > Architecture:          armv7l
> > > > > > > > Byte Order:            Little Endian
> > > > > > > > CPU(s):                1
> > > > > > > > On-line CPU(s) list:   0
> > > > > > > > Thread(s) per core:    1
> > > > > > > > Core(s) per socket:    1
> > > > > > > > Socket(s):             1
> > > > > > > > Model:                 2
> > > > > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > > > > CPU max MHz:           1000.0000
> > > > > > > > CPU min MHz:           300.0000
> > > > > > > > BogoMIPS:              995.32
> > > > > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > > > > lscpu pass
> > > > > > > >
> > > > > > > > INFO: Running vmstat test...
> > > > > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > > > > vmstat pass
> > > > > > > >
> > > > > > > > INFO: Running lsblk test...
> > > > > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > > > > lsblk pass
> > > > > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > > > > >
> > > > > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > > > >
> > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > =vmstat RESULT=pass>
> > > > > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > --- Printing result.csv ---
> > > > > > > > name,test_case_id,result,measurement,units,test_params
> > > > > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > >
> > > > > > > > -------------------------------------------------
> > > > > > > > ===== doing fuego phase: post_test =====
> > > > > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > > > > INFO: the test did not produce a test log on the target
> > > > > > > > ===== doing fuego phase: processing =====
> > > > > > > > ### WARNING: Program returned exit code ''
> > > > > > > > ### WARNING: Log evaluation may be invalid
> > > > > > > > ### Unrecognized results format
> > > > > > > > ### Unrecognized results format
> > > > > > > > ### Unrecognized results format
> > > > > > > > ### Unrecognized results format
> > > > > > > > ### Unrecognized results format
> > > > > > > > ### Unrecognized results format
> > > > > > > > ### Unrecognized results format
> > > > > > > > ERROR: results did not satisfy the threshold
> > > > > > > > Fuego: requested test phases complete!
> > > > > > > > Build step 'Execute shell' marked build as failure
> > > > > > > >
> > > > > > > > ----------
> > > > > > > >
> > > > > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > > > > that I should fix, please let me know.
> > > > > > > >
> > > > > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > > > > >
> > > > > > > > Just FYI.  Thanks for the code.
> > > > > > > >  -- Tim
> > > > > > >
> > > > > > > _______________________________________________
> > > > > > > Fuego mailing list
> > > > > > > Fuego@lists.linuxfoundation.org
> > > > > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-25  5:24         ` Chase Qi
@ 2019-02-25  5:35           ` daniel.sangorrin
  2019-02-26  8:50             ` Chase Qi
  0 siblings, 1 reply; 14+ messages in thread
From: daniel.sangorrin @ 2019-02-25  5:35 UTC (permalink / raw)
  To: chase.qi; +Cc: fuego

Hello Chase,

> From: Chase Qi <chase.qi@linaro.org>
[...]
> > Today, I sent a patch series that allows you to install Fuego without Jenkins on Docker. Maybe that will solve
> your previous problems. I also submitted a few more changes to allow users changing the port where Jenkins
> listens.
> 
> I noticed the patches. I will definitely give them a spin later on. I am
> currently still using a fuego v1.40 based docker image for prototyping
> with a LAVA multinode job. I built and uploaded a fuego docker image with
> the fuego test code included here:
> https://cloud.docker.com/repository/docker/chaseqi/standalone-fuego/tags
> BTW, do you guys plan to publish an official fuego docker image?

Nice.
We have talked about that but we haven't published an official image yet.
 
> Here are the changes I made in the Dockerfile.
> 
> ```
> $ git diff
> diff --git a/Dockerfile b/Dockerfile
> index 269e1f6..16586fa 100644
> --- a/Dockerfile
> +++ b/Dockerfile
> @@ -114,6 +114,15 @@ RUN CHROME_DRIVER_VERSION=$(curl --silent --fail \
>  RUN echo "jenkins ALL = (root) NOPASSWD:ALL" >> /etc/sudoers
> 
> 
> +# ==============================================================================
> +#Install fuego
> +# ==============================================================================
> +RUN git clone https://bitbucket.org/tbird20d/fuego.git /fuego \
> +    && git clone https://bitbucket.org/tbird20d/fuego-core.git /fuego-core \
> +    && ln -s /fuego/fuego-ro/ / \
> +    && ln -s /fuego/fuego-rw/ /

The upstream repositories have changed to:
https://bitbucket.org/fuegotest/fuego.git
https://bitbucket.org/fuegotest/fuego-core.git
Also, if you use the next branch, you now have to clone fuego-core inside the fuego folder.
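
So the clone step in your Dockerfile would become something like this (an untested
sketch; I am assuming here that both repositories have a 'next' branch):

```
# sketch only: clone from the new upstream location, with fuego-core inside the fuego folder
RUN git clone -b next https://bitbucket.org/fuegotest/fuego.git /fuego \
    && git clone -b next https://bitbucket.org/fuegotest/fuego-core.git /fuego/fuego-core \
    && ln -s /fuego/fuego-ro/ / \
    && ln -s /fuego/fuego-rw/ /
```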

> +
> +
>  # ==============================================================================
>  # get ttc script and helpers
>  # ==============================================================================
> @@ -201,8 +210,8 @@ RUN chown -R jenkins:jenkins $JENKINS_HOME/
>  # Lava
>  # ==============================================================================
> 
> -RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> -RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
>  # CONVENIENCE HACKS
>  # not mounted, yet
>  #RUN echo "fuego-create-node --board raspberrypi3" >> /root/firststart.sh
> @@ -218,6 +227,14 @@ RUN ln -s
> /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
>  #RUN DEBIAN_FRONTEND=noninteractive apt-get update
>  #RUN DEBIAN_FRONTEND=noninteractive apt-get -yV install
> crossbuild-essential-armhf cpp-arm-linux-gnueabihf
> gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf
> 
> +# ==============================================================================
> +#Install arm64 toolchain
> +# ==============================================================================
> +RUN /fuego-ro/toolchains/install_cross_toolchain.sh arm64 \
> +    && apt-get clean \
> +    && rm -rf /tmp/* /var/tmp/*
> +
> +
>  # ==============================================================================
>  # Setup startup command
>  # ==============================================================================
> ```
> 
> >
> > I am also preparing a native install script. Unfortunately, my time is up today and I couldn't test it. I send it to
> you attached in case you want to give it a try. But make sure you do it on a container or VM where nothing bad
> can happen ;)
> 
> Thanks a lot. I just tried the script within an LXC jessie container.
> Most of it just works. The following runs with ftc commands are
> problematic, which is probably expected. I guess we also need to
> patch ftc to make jenkins and docker optional.

Sorry about that, I am working on it. I will release a better script soon, maybe today or tomorrow. I will let you know when it's ready.

Thanks,
Daniel


> ```
> root@fuego-native:/fuego/fuego-core/engine# ftc list-boards
> sudo: docker: command not found
> $ diff native-install-jessie.sh native-install-jessie.sh.original
> 86,92c86
> < # ftc list-boards reports
> < # sudo: docker: command not found
> < # TODO: patch ftc to make docker optional.
> < curl -fsSL https://get.docker.com -o get-docker.sh
> < sh get-docker.sh
> <
> < ln -s /fuego/fuego-core/engine/scripts/ftc /usr/local/bin/
> ---
> > ln -s /fuego-core/scripts/ftc /usr/local/bin/
> 
> root@fuego-native:/fuego/fuego-core/engine/scripts# git diff
> diff --git a/engine/scripts/ftc b/engine/scripts/ftc
> index ab2d2cb..1b09812 100755
> --- a/engine/scripts/ftc
> +++ b/engine/scripts/ftc
> @@ -4665,7 +4665,7 @@ def main():
>                  print "Can't do rm-jobs outside the container! Aborting."
>                  sys.exit(1)
>              command += arg + " "
> -        container_command(command)
> +        #container_command(command)
> 
>      if len(sys.argv) < 2:
>          error_out('Missing command\nUse "ftc help" to get usage help.', 1)
> @@ -4781,7 +4781,7 @@ def main():
>          # shows fuego boards
>          do_list_boards(conf)
> 
> -    import jenkins
> +    #import jenkins
>      server = jenkins.Jenkins('http://localhost:8080/fuego')
> 
>      if command.startswith("add-job"):
> 
> # ftc run-test -b raspberrypi3 -t Benchmark.fio -s default
> Traceback (most recent call last):
>   File "/usr/local/bin/ftc", line 4929, in <module>
>     main()
>   File "/usr/local/bin/ftc", line 4785, in main
>     server = jenkins.Jenkins('http://localhost:8080/fuego')
> AttributeError: 'NoneType' object has no attribute 'Jenkins
> ```
> 
> Thanks,
> Chase
> 
> >
> > > > > > > * as you pointed, parsing fuego's test result file in LAVA is easy to do.
> > > > > >
> > > > > > The only problem is that I would need to run the Fuego parser on the target board.
> > > > > > For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board
> > > would
> > > > > need to install the python modules required by fuego-parser. This is on my TODO list since I proposed
> it
> > > during
> > > > > the last Fuego jamboree. I will try to do it as soon as i can.
> > > > > >
> > > > > > What alternatives do I have?
> > > > > > - send the results to LAVA through a REST API instead of having it monitor the serial cable? probably
> not
> > > > > possible.
> > > > > > - create a simplified parser on the test (e.g. using our log_compare function). Not ideal, but possible.
> > > > > >
> > > > > > In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python),
> while
> > > > > Linaro uses grep/awk/sed directly on the target. There is a trade-off there.
> > > > > >
> > > > > > > * existing way to run fuego tests in LAVA are hacks. The problem is
> > > > > > > they don't scale, 'scale' means remote and distributed CI setup.
> > > > > >
> > > > > > Yes, it is a hack.
> > > > > > I think Fuego is not supposed to run with LAVA, because the goals are very different.
> > > > > > But parts of Fuego can run with LAVA. This is what I think we can collaborate on.
> > > > >
> > > > > Yes, +1. When running with LAVA, IMHO, only the backend and real tests
> > > > > are needed.
> > > > >
> > > > > >
> > > > > > > * I am tring to hanld both fuego host controller and DUT with LAVA.
> > > > > > > The first part is hard part. Still tring to find a way. About the host
> > > > > > > controller part, I started with LAVA-lxc protocol, but hit some
> > > > > > > jenkins and docker related issues. I feel build, publish and pull a
> > > > > > > fuego docker image is the way to go now.
> > > > > >
> > > > > > I think this approach might be too hard.
> > > > >
> > > > > LAVA v2 introduced lxc-protocol. With the protocol, single node test
> > > > > job can deploy and boot a lxc container to control DUT. Here is an
> > > > > example: https://lkft.validation.linaro.org/scheduler/job/605270 . The
> > > > > example job use lxc contianer to deploy imgs to DUT. If DUT was
> > > > > configed with static IP, the IP is known to lxc container with LAVA
> > > > > helper lava-target-ip, then ssh connection between lxc and DUT is
> > > > > possible. Based on these features, I thought we can run fuego tests
> > > > > with LAVA just like how we run it now. As mentioned above, there is no
> > > > > and will be no support for docker-protocol in LAVA, and migrating
> > > > > fuego installation to lxc also is problemic. Please do let me know
> > > > > once you have a script for fuego installation. I am having problem to
> > > > > do that, hit jenkins missing, docker missing, permission issues, etc.
> > > > > Once I am alble to install fuego within lxc, I can propare a job
> > > > > example. It would be one test definition for all fuego tests. This is
> > > > > how we do it before. `automated/linux/workload-automation3
> > > > > ` is a good example.
> > > >
> > > > I see what you want to do. Using LXC sounds doable.
> > > > But I guess that having Fuego installed on the target (or an LXC DUT) would be much easier.
> > >
> > > Yeah, I guess if target run Debian based distros, then installing on
> > > DUT will be easier. Most of our targets run openembedded based distro,
> > > it is hard to install fuego on them. It is possible to build docker
> > > into these OE images and run fuego on target within docker container,
> > > but some of the boards don't have the resource for that...  In LAVA,
> > > LXC or LXC DUT are mainly used as host to control other ARM based
> > > DUTs.
> >
> > I see, maybe running Fuego on OE might require some work. Maybe it is easier to start with Debian images.
> >
> > > > I am going to work on the installation of Fuego natively then.
> > > > By the way, if you export the docker filesystem (docker export..) and import it in LXC you would get a DUT
> with
> > > Fuego installed. Wouldn't that solve your problem? Fuego can run tests on the host (see docker.board)
> although
> > > to run with "root" permissions you need to change jenkins permissions.
> > >
> > > I tried to test docker.board within fuego docker container, it works
> > > well, and yes, I hit the root permissions issue. I haven't tried to
> > > import fuego docker filesystem in LXC, that is a new concept to me.
> > > Does it require docker installed and running in LXC container? If yes,
> > > that is a problem in LAVA. I think we will need to modify lxc
> > > cofiguration somehow on lava-dispatcher to support docker in lxc.
> >
> > No, I just meant to use Docker to create the filesystem tree and then use it in LXC.
> >
> > > I am getting close to get my whole setup working with LAVA multinode
> > > job. Here is the test definitions in case anyone interested in
> > > https://github.com/chase-qi/test-definitions/tree/fuego/automated/linux/fuego-multinode
> > > . I will share a job example once I have it.
> >
> > Great! Thanks a lot!
> >
> > Kind regards,
> > Daniel
> >
> >
> > >
> > > Thanks,
> > > Chase
> > >
> > > >
> > > > > Alternatively, I can lunch docker device and DUT with multinode job,
> > > > > but that is complex. And fuego docker container eats a lot of
> > > > > memory(blame jenkins?). The exsting docker devices in our lib only
> > > > > have 1G memory configured.
> > > >
> > > > I haven't checked the memory consumed, I guess the reason is Java.
> > > >
> > > > > > This is my current work-in-progress approach:
> > > > > > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> > > > > >
> > > > > > - Manual usage (run locally)
> > > > > >         $ git clone https://github.com/sangorrin/test-definitions
> > > > > >         $ cd test-definitions
> > > > > >         $ . ./automated/bin/setenv.sh
> > > > > >         $ cd automated/linux/fuego/
> > > > > >         $ ./fuego.sh -d Functional.hello_world
> > > > > >         $  tree output/
> > > > > >                 output/
> > > > > >                 ├── build <- equivalent to fuego buildzone
> > > > > >                 │   ├── hello
> > > > > >                 │   ├── hello.c
> > > > > >                 │   ├── Makefile
> > > > > >                 │   └── README.md
> > > > > >                 ├── fuego.Functional.hello_world <- equivalent to board test folder
> > > > > >                 │   └── hello
> > > > > >                 └── logs <- equivalent to logdir
> > > > > >                         └── testlog.txt
> > > > > > - test-runner usage (run on remote board)
> > > > > >         $ cd test-definitions
> > > > > >         $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> > > > > >         $ ls ../output
> > > > > >                 result.csv
> > > > > >                 result.json
> > > > > >
> > > > > > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
> > > > >
> > > > > You don't have to. It looks like a done job to me. send-to-lava.sh
> > > > > will take care of it. When running in LAVA, the helper uses
> > > > > lava-test-case for result collecting, and when running without LAVA,
> > > > > the helper prints result lines in a fixed format for result parsing
> > > > > within test-runner. (When I was writing this, I noticed your next reply,
> > > > > maybe I am looking at the latest code already, I will give it a spin
> > > > > with LAVA and come back to you)
> > > >
> > > > Thanks again for checking. I am glad that it worked for you. I have a LAVA setup on the CIP project so
> I have
> > > started to do tests there.
> > > >
> > > > > So basically, we are running in two different directions. From my
> > > > > point of view, you are porting fuego tests to Linaro test-definitions
> > > > > natively. Although I am not yet sure how the integration between these
> > > > > two projects goes, we are happy to see this happening :)
> > > >
> > > > Thanks, you are right. But porting it to test-definitions misses a lot of the good features in Fuego such as the passing
> > > criteria. Perhaps your approach is better.
> > > >
> > > > > > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend
> on
> > > bash.
> > > > > > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test
> definitions.
> > > > > > Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> > > > > >
> > > > >
> > > > > lava-test-shell requires POSIX shell. We normally use /bin/sh which
> > > > > links to dash on Debian based distros, and we also have some test
> > > > > definitions like ltp and android tradefed using bash. bash has some
> > > > > extensions that are not POSIX compatible. IMHO, using bash without these
> > > > > extensions is totally fine. We are using shellcheck in the sanity check to
> > > > > detect potential POSIX issues.
> > > >
> > > > OK, I got it. Thank you!
> > > >
> > > > Kind regards,
> > > > Daniel
> > > >
> > > > >
> > > > > Thanks,
> > > > > Chase
> > > > >
> > > > >
> > > > > > Thanks,
> > > > > > Daniel
> > > > > >
> > > > > > > We probably should start a new thread for this topic to share progress?
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Chase
> > > > > > >
> > > > > > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > > > > > >
> > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Daniel
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > > > > > fuego@lists.linuxfoundation.org
> > > > > > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > > > > > >
> > > > > > > > > Comments inline below.
> > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Daniel Sangorrin
> > > > > > > > > >
> > > > > > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > > > > > It is still a proof of concept and only tested with
> > > > > > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > > > > > is left.
> > > > > > > > > >
> > > > > > > > > > To try it follow these steps:
> > > > > > > > > >
> > > > > > > > > > - prepare SSH_KEY for your board
> > > > > > > > > >     Eg: Inside fuego's docker container do
> > > > > > > > > >     > su jenkins
> > > > > > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > > > >     > vi ~/.ssh/config
> > > > > > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > > > > > - execute the job from jenkins
> > > > > > > > > > - expected results
> > > > > > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > > > > > >     - run.json
> > > > > > > > > >     - csv
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > > > ---
> > > > > > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > > > > > +++++++++++++++++++++++++++++++
> > > > > > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > > > > > >  5 files changed, 130 insertions(+)
> > > > > > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > > > > > >
> > > > > > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > > > > > new file mode 100644
> > > > > > > > > > index 0000000..b8c8fb6
> > > > > > > > > > --- /dev/null
> > > > > > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > > > > > @@ -0,0 +1,3 @@
> > > > > > > > > > +{
> > > > > > > > > > +    "chart_type": "testcase_table"
> > > > > > > > > > +}
> > > > > > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > > new file mode 100755
> > > > > > > > > > index 0000000..17b56a9
> > > > > > > > > > --- /dev/null
> > > > > > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > > @@ -0,0 +1,59 @@
> > > > > > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > > > > > +
> > > > > > > > > > +# Root permissions required for
> > > > > > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > > > > > specified
> > > > > > > > > > +# - executing some of the tests
> > > > > > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > > > > > +NEED_ROOT=1
> > > > > > > > > > +
> > > > > > > > > > +function test_pre_check {
> > > > > > > > > > +    # linaro parser dependencies
> > > > > > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > > > > > +    assert_has_program sed
> > > > > > > > > > +    assert_has_program awk
> > > > > > > > > > +    assert_has_program grep
> > > > > > > > > > +    assert_has_program egrep
> > > > > > > > > > +    assert_has_program tee
> > > > > > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > > > > > no need to check for them here.
> > > > > > > > > I already made a patch to remove those lines.
> > > > > > > > >
> > > > > > > > > > +
> > > > > > > > > > +    # test-runner requires a password-less connection
> > > > > > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > > > > > +    # su jenkins
> > > > > > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > > > > +    # vi ~/.ssh/config
> > > > > > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +function test_build {
> > > > > > > > > > +    source ./automated/bin/setenv.sh
> > > > > > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > > > > > >
> > > > > > > > > OK.  I gave this a spin, and here's an error I got:
> > > > > > > > >
> > > > > > > > > ===== doing fuego phase: build =====
> > > > > > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > > > > > Cloning into 'fuego_git_repo'...
> > > > > > > > > Checkout branch/tag/commit id master.
> > > > > > > > > Already on 'master'
> > > > > > > > > Your branch is up-to-date with 'origin/master'.
> > > > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > > > BIN_PATH:
> > > > > > > > >
> > > > > > >
> > > > >
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > > > Downloading/unpacking pexpect (from -r
> > > > > > > > >
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > (line 1))
> > > > > > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in
> > > /usr/lib/python2.7/dist-packages
> > > > > (from -r
> > > > > > > > >
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > (line 2))
> > > > > > > > > Requirement already satisfied (use --upgrade to upgrade): requests in
> > > /usr/lib/python2.7/dist-packages
> > > > > (from
> > > > > > > -r
> > > > > > > > >
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > (line 3))
> > > > > > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > > > > >
> > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > (line 1))
> > > > > > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > > > > > Installing collected packages: pexpect, ptyprocess
> > > > > > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > > > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > > > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > > > > > >                              ^
> > > > > > > > > SyntaxError: invalid syntax
> > > > > > > > >
> > > > > > > > > Successfully installed pexpect ptyprocess
> > > > > > > > > Cleaning up...
> > > > > > > > > Fuego test_build duration=1.56257462502 seconds
> > > > > > > > >
> > > > > > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > > > > > error after the first build of the job.
> > > > > > > > >
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +function test_run {
> > > > > > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > > > > > +
> > > > > > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > > > > > +            abort_job "$yaml_file not found"
> > > > > > > > > > +    fi
> > > > > > > > > > +
> > > > > > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > > > > > +            echo "using test plan: $yaml_file"
> > > > > > > > > > +            test_or_plan_flag="-p"
> > > > > > > > > > +    else
> > > > > > > > > > +            echo "using test definition: $yaml_file"
> > > > > > > > > > +            test_or_plan_flag="-d"
> > > > > > > > > > +    fi
> > > > > > > > > > +
> > > > > > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > > > > > +    else
> > > > > > > > > > +        PARAMS=""
> > > > > > > > > > +    fi
> > > > > > > > > > +
> > > > > > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > > > > > repository, clean unnecessary files
> > > > > > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > > > > > b/tests/Functional.linaro/parser.py
> > > > > > > > > > new file mode 100755
> > > > > > > > > > index 0000000..48b502b
> > > > > > > > > > --- /dev/null
> > > > > > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > > > > > @@ -0,0 +1,25 @@
> > > > > > > > > > +#!/usr/bin/python
> > > > > > > > > > +
> > > > > > > > > > +import os, sys, collections
> > > > > > > > > > +import common as plib
> > > > > > > > > > +import json
> > > > > > > > > > +
> > > > > > > > > > +# allocate variable to store the results
> > > > > > > > > > +measurements = {}
> > > > > > > > > > +measurements = collections.OrderedDict()
> > > > > > > > > > +
> > > > > > > > > > +# read results from linaro result.json format
> > > > > > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > > > > > +    data = json.load(f)[0]
> > > > > > > > > > +
> > > > > > > > > > +for test_case in data['metrics']:
> > > > > > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > > > > > +    result = test_case['result']
> > > > > > > > > > +    # FIXTHIS: add measurements when available
> > > > > > > > > > +    # measurement = test_case['measurement']
> > > > > > > > > > +    # units = test_case['units']
> > > > > > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > > > > > +
> > > > > > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > > > > > +
> > > > > > > > > > +sys.exit(plib.process(measurements))
> > > > > > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > > > > > b/tests/Functional.linaro/spec.json
> > > > > > > > > > new file mode 100644
> > > > > > > > > > index 0000000..561e2ab
> > > > > > > > > > --- /dev/null
> > > > > > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > > > > > @@ -0,0 +1,16 @@
> > > > > > > > > > +{
> > > > > > > > > > +    "testName": "Functional.linaro",
> > > > > > > > > > +    "specs": {
> > > > > > > > > > +        "default": {
> > > > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > > > +        },
> > > > > > > > > > +        "smoke": {
> > > > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > > > +            "params": "TESTS='pwd'",
> > > > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > > > +        }
> > > > > > > > > > +    }
> > > > > > > > > > +}
> > > > > > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > > > > > new file mode 100644
> > > > > > > > > > index 0000000..a2efee8
> > > > > > > > > > --- /dev/null
> > > > > > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > > > > > @@ -0,0 +1,27 @@
> > > > > > > > > > +fuego_package_version: 1
> > > > > > > > > > +name: Functional.linaro
> > > > > > > > > > +description: |
> > > > > > > > > > +    Linaro test-definitions
> > > > > > > > > > +license: GPL-2.0
> > > > > > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > > > +version: latest git commits
> > > > > > > > > > +fuego_release: 1
> > > > > > > > > > +type: Functional
> > > > > > > > > > +tags: ['kernel', 'linaro']
> > > > > > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > > > > > +params:
> > > > > > > > > > +    - YAML:
> > > > > > > > > > +        description: test definiton or plan.
> > > > > > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > > > > > +        optional: no
> > > > > > > > > > +    - PARAMS:
> > > > > > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > > > > > [PARAM2=VALUE2]
> > > > > > > > > > +        example: "TESTS='pwd'"
> > > > > > > > > > +        optional: yes
> > > > > > > > > > +data_files:
> > > > > > > > > > +    - chart_config.json
> > > > > > > > > > +    - fuego_test.sh
> > > > > > > > > > +    - parser.py
> > > > > > > > > > +    - spec.json
> > > > > > > > > > +    - test.yaml
> > > > > > > > > > --
> > > > > > > > > > 2.7.4
> > > > > > > > >
> > > > > > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > > > > > The issue may be something weird in my board file or configuration.
> > > > > > > > >
> > > > > > > > > ===== doing fuego phase: run =====
> > > > > > > > > -------------------------------------------------
> > > > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > > > BIN_PATH:
> > > > > > > > >
> > > > > > >
> > > > >
> > >
> /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > > > > >
> > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on
> > > root@10.0.1.74
> > > > > > > > > {'path':
> > > > > > > > >
> > > > >
> '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > + cat uuid
> > > > > > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > > > > + cd ./automated/linux/smoke/
> > > > > > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > > > > > INFO: install_deps skipped
> > > > > > > > >
> > > > > > > > > INFO: Running pwd test...
> > > > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > > > > > pwd pass
> > > > > > > > >
> > > > > > > > > INFO: Running lsb_release test...
> > > > > > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > > > > > lsb_release fail
> > > > > > > > >
> > > > > > > > > INFO: Running uname test...
> > > > > > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > > > > > uname pass
> > > > > > > > >
> > > > > > > > > INFO: Running ip test...
> > > > > > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
> qlen
> > > 1
> > > > > > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > > > > >     inet 127.0.0.1/8 scope host lo
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > >     inet6 ::1/128 scope host
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP
> > > group
> > > > > default
> > > > > > > > > qlen 1000
> > > > > > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > > > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > > > >     link/can
> > > > > > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > > > >     link/can
> > > > > > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > > default
> > > > > qlen 1000
> > > > > > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > > > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > > default
> > > > > qlen 1000
> > > > > > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > > > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > ip pass
> > > > > > > > >
> > > > > > > > > INFO: Running lscpu test...
> > > > > > > > > Architecture:          armv7l
> > > > > > > > > Byte Order:            Little Endian
> > > > > > > > > CPU(s):                1
> > > > > > > > > On-line CPU(s) list:   0
> > > > > > > > > Thread(s) per core:    1
> > > > > > > > > Core(s) per socket:    1
> > > > > > > > > Socket(s):             1
> > > > > > > > > Model:                 2
> > > > > > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > > > > > CPU max MHz:           1000.0000
> > > > > > > > > CPU min MHz:           300.0000
> > > > > > > > > BogoMIPS:              995.32
> > > > > > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > > > > > lscpu pass
> > > > > > > > >
> > > > > > > > > INFO: Running vmstat test...
> > > > > > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > > > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > > > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > > > > > vmstat pass
> > > > > > > > >
> > > > > > > > > INFO: Running lsblk test...
> > > > > > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > > > > > lsblk pass
> > > > > > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > > > > > >
> > > > > > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > > > > >
> > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > =vmstat RESULT=pass>
> > > > > > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > --- Printing result.csv ---
> > > > > > > > > name,test_case_id,result,measurement,units,test_params
> > > > > > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu,
> vmstat,
> > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > >
> > > > > > > > > -------------------------------------------------
> > > > > > > > > ===== doing fuego phase: post_test =====
> > > > > > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > > > > > INFO: the test did not produce a test log on the target
> > > > > > > > > ===== doing fuego phase: processing =====
> > > > > > > > > ### WARNING: Program returned exit code ''
> > > > > > > > > ### WARNING: Log evaluation may be invalid
> > > > > > > > > ### Unrecognized results format
> > > > > > > > > ### Unrecognized results format
> > > > > > > > > ### Unrecognized results format
> > > > > > > > > ### Unrecognized results format
> > > > > > > > > ### Unrecognized results format
> > > > > > > > > ### Unrecognized results format
> > > > > > > > > ### Unrecognized results format
> > > > > > > > > ERROR: results did not satisfy the threshold
> > > > > > > > > Fuego: requested test phases complete!
> > > > > > > > > Build step 'Execute shell' marked build as failure
> > > > > > > > >
> > > > > > > > > ----------
> > > > > > > > >
> > > > > > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > > > > > that I should fix, please let me know.
> > > > > > > > >
> > > > > > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > > > > > >
> > > > > > > > > Just FYI.  Thanks for the code.
> > > > > > > > >  -- Tim
> > > > > > > >
> > > > > > > > _______________________________________________
> > > > > > > > Fuego mailing list
> > > > > > > > Fuego@lists.linuxfoundation.org
> > > > > > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-25  5:35           ` daniel.sangorrin
@ 2019-02-26  8:50             ` Chase Qi
  2019-02-27  6:13               ` Tim.Bird
  0 siblings, 1 reply; 14+ messages in thread
From: Chase Qi @ 2019-02-26  8:50 UTC (permalink / raw)
  To: daniel.sangorrin; +Cc: Anders Roxell, fuego

Hi Daniel,

Thanks a lot for the comments.

On Mon, Feb 25, 2019 at 1:35 PM <daniel.sangorrin@toshiba.co.jp> wrote:
>
> Hello Chase,
>
> > From: Chase Qi <chase.qi@linaro.org>
> [...]
> > > Today, I sent a patch series that allows you to install Fuego without Jenkins on Docker. Maybe that will solve
> > your previous problems. I also submitted a few more changes to allow users changing the port where Jenkins
> > listens.
> >
> > I noticed the patches. I definitely will give it a spin later on. I am
> > currently still using fuego v1.40 based docker image for prototyping
> > with a LAVA multinode job. I built and uploaded a fuego docker image with
> > fuego test code included here
> > https://cloud.docker.com/repository/docker/chaseqi/standalone-fuego/tags
> > . BTW, do you guys plan to publish an official fuego docker image?
>
> Nice.
> We have talked about that but we haven't published an official image yet.
>

Ok, I will stick with the one I made for the moment.

> > Here is the changes I made in the dockerfile.
> >
> > ```
> > $ git diff
> > diff --git a/Dockerfile b/Dockerfile
> > index 269e1f6..16586fa 100644
> > --- a/Dockerfile
> > +++ b/Dockerfile
> > @@ -114,6 +114,15 @@ RUN CHROME_DRIVER_VERSION=$(curl --silent --fail \
> >  RUN echo "jenkins ALL = (root) NOPASSWD:ALL" >> /etc/sudoers
> >
> >
> > +#
> > =================================================================
> > =============
> > +#Install fuego
> > +#
> > =================================================================
> > =============
> > +RUN git clone https://bitbucket.org/tbird20d/fuego.git /fuego \
> > +    && git clone https://bitbucket.org/tbird20d/fuego-core.git /fuego-core \
> > +    && ln -s /fuego/fuego-ro/ / \
> > +    && ln -s /fuego/fuego-rw/ /
>
> The upstream repositories have changed to
> https://bitbucket.org/fuegotest/fuego.git
> https://bitbucket.org/fuegotest/fuego-core.git
> Also, if you use the next branch you have to clone fuego-core within the fuego folder now.
>

Thanks for the pointers. I switched to the new links.

> > +
> > +
> >  #
> > =================================================================
> > =============
> >  # get ttc script and helpers
> >  #
> > =================================================================
> > =============
> > @@ -201,8 +210,8 @@ RUN chown -R jenkins:jenkins $JENKINS_HOME/
> >  # Lava
> >  #
> > =================================================================
> > =============
> >
> > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> >  # CONVENIENCE HACKS
> >  # not mounted, yet
> >  #RUN echo "fuego-create-node --board raspberrypi3" >> /root/firststart.sh
> > @@ -218,6 +227,14 @@ RUN ln -s
> > /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> >  #RUN DEBIAN_FRONTEND=noninteractive apt-get update
> >  #RUN DEBIAN_FRONTEND=noninteractive apt-get -yV install
> > crossbuild-essential-armhf cpp-arm-linux-gnueabihf
> > gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf
> >
> > +#
> > =================================================================
> > =============
> > +#Install arm64 toolchain
> > +#
> > =================================================================
> > =============
> > +RUN /fuego-ro/toolchains/install_cross_toolchain.sh arm64 \
> > +    && apt-get clean \
> > +    && rm -rf /tmp/* /var/tmp/*
> > +
> > +
> >  #
> > =================================================================
> > =============
> >  # Setup startup command
> >  #
> > =================================================================
> > =============
> > ```
> >
> > >
> > > I am also preparing a native install script. Unfortunately, my time is up today and I couldn't test it. I send it to
> > you attached in case you want to give it a try. But make sure you do it on a container or VM where nothing bad
> > can happen ;)
> >
> > Thanks a lot. I just tried the script within an lxc jessie container.
> > Most of it just works. The following runs with ftc cmds are
> > problematic, which probably is expected. I guess we also need to
> > patch ftc to make jenkins and docker optional.
>
> Sorry about that, I am working on it. I will release a better script soon, maybe today or tomorrow. I will let you know when it's ready.
>
> Thanks,
> Daniel
>
>
> > ```
> > root@fuego-native:/fuego/fuego-core/engine# ftc list-boards
> > sudo: docker: command not found
> > $ diff native-install-jessie.sh native-install-jessie.sh.original
> > 86,92c86
> > < # ftc list-boards reports
> > < # sudo: docker: command not found
> > < # TODO: patch ftc to make docker optional.
> > < curl -fsSL https://get.docker.com -o get-docker.sh
> > < sh get-docker.sh
> > <
> > < ln -s /fuego/fuego-core/engine/scripts/ftc /usr/local/bin/
> > ---
> > > ln -s /fuego-core/scripts/ftc /usr/local/bin/
> >
> > root@fuego-native:/fuego/fuego-core/engine/scripts# git diff
> > diff --git a/engine/scripts/ftc b/engine/scripts/ftc
> > index ab2d2cb..1b09812 100755
> > --- a/engine/scripts/ftc
> > +++ b/engine/scripts/ftc
> > @@ -4665,7 +4665,7 @@ def main():
> >                  print "Can't do rm-jobs outside the container! Aborting."
> >                  sys.exit(1)
> >              command += arg + " "
> > -        container_command(command)
> > +        #container_command(command)
> >
> >      if len(sys.argv) < 2:
> >          error_out('Missing command\nUse "ftc help" to get usage help.', 1)
> > @@ -4781,7 +4781,7 @@ def main():
> >          # shows fuego boards
> >          do_list_boards(conf)
> >
> > -    import jenkins
> > +    #import jenkins
> >      server = jenkins.Jenkins('http://localhost:8080/fuego')
> >
> >      if command.startswith("add-job"):
> >
> > # ftc run-test -b raspberrypi3 -t Benchmark.fio -s default
> > Traceback (most recent call last):
> >   File "/usr/local/bin/ftc", line 4929, in <module>
> >     main()
> >   File "/usr/local/bin/ftc", line 4785, in main
> >     server = jenkins.Jenkins('http://localhost:8080/fuego')
> > AttributeError: 'NoneType' object has no attribute 'Jenkins
> > ```
> >
> > Thanks,
> > Chase
> >
> > >
> > > > > > > > * as you pointed, parsing fuego's test result file in LAVA is easy to do.
> > > > > > >
> > > > > > > The only problem is that I would need to run the Fuego parser on the target board.
> > > > > > > For that, I would need to modularize the parser into a library (e.g. import fuego-parser), and the board
> > > > would
> > > > > > need to install the python modules required by fuego-parser. This is on my TODO list since I proposed
> > it
> > > > during
> > > > > > the last Fuego jamboree. I will try to do it as soon as i can.
> > > > > > >
> > > > > > > What alternatives do I have?
> > > > > > > - send the results to LAVA through a REST API instead of having it monitor the serial cable? probably
> > not
> > > > > > possible.
> > > > > > > - create a simplified parser on the test (e.g. using our log_compare function). Not ideal, but possible.
> > > > > > >
> > > > > > > In the end, this stems from the fact that Fuego assumes parsing is done in the host (to use python),
> > while
> > > > > > Linaro uses grep/awk/sed directly on the target. There is a trade-off there.
> > > > > > >
> > > > > > > > * existing way to run fuego tests in LAVA are hacks. The problem is
> > > > > > > > they don't scale, 'scale' means remote and distributed CI setup.
> > > > > > >
> > > > > > > Yes, it is a hack.
> > > > > > > I think Fuego is not supposed to run with LAVA, because the goals are very different.
> > > > > > > But parts of Fuego can run with LAVA. This is what I think we can collaborate on.
> > > > > >
> > > > > > Yes, +1. When running with LAVA, IMHO, only the backend and real tests
> > > > > > are needed.
> > > > > >
> > > > > > >
> > > > > > > > * I am tring to hanld both fuego host controller and DUT with LAVA.
> > > > > > > > The first part is hard part. Still tring to find a way. About the host
> > > > > > > > controller part, I started with LAVA-lxc protocol, but hit some
> > > > > > > > jenkins and docker related issues. I feel build, publish and pull a
> > > > > > > > fuego docker image is the way to go now.
> > > > > > >
> > > > > > > I think this approach might be too hard.
> > > > > >
> > > > > > LAVA v2 introduced lxc-protocol. With the protocol, single node test
> > > > > > job can deploy and boot a lxc container to control DUT. Here is an
> > > > > > example: https://lkft.validation.linaro.org/scheduler/job/605270 . The
> > > > > > example job uses an lxc container to deploy images to the DUT. If the DUT is
> > > > > > configured with a static IP, the IP is known to the lxc container via the LAVA
> > > > > > helper lava-target-ip, so an ssh connection between lxc and DUT is
> > > > > > possible. Based on these features, I thought we could run fuego tests
> > > > > > with LAVA just like how we run it now. As mentioned above, there is no
> > > > > > and will be no support for a docker-protocol in LAVA, and migrating
> > > > > > the fuego installation to lxc is also problematic. Please do let me know
> > > > > > once you have a script for fuego installation. I am having problems
> > > > > > doing that, hitting jenkins missing, docker missing, permission issues, etc.
> > > > > > Once I am able to install fuego within lxc, I can prepare a job
> > > > > > example. It would be one test definition for all fuego tests. This is
> > > > > > how we did it before. `automated/linux/workload-automation3
> > > > > > ` is a good example.
> > > > >
> > > > > I see what you want to do. Using LXC sounds doable.
> > > > > But I guess that having Fuego installed on the target (or an LXC DUT) would be much easier.
> > > >
> > > > Yeah, I guess if target run Debian based distros, then installing on
> > > > DUT will be easier. Most of our targets run openembedded based distro,
> > > > it is hard to install fuego on them. It is possible to build docker
> > > > into these OE images and run fuego on target within docker container,
> > > > but some of the boards don't have the resource for that...  In LAVA,
> > > > LXC or LXC DUT are mainly used as host to control other ARM based
> > > > DUTs.
> > >
> > > I see, maybe running Fuego on OE might require some work. Maybe it is easier to start with Debian images.
> > >
> > > > > I am going to work on the installation of Fuego natively then.
> > > > > By the way, if you export the docker filesystem (docker export..) and import it in LXC you would get a DUT
> > with
> > > > Fuego installed. Wouldn't that solve your problem? Fuego can run tests on the host (see docker.board)
> > although
> > > > to run with "root" permissions you need to change jenkins permissions.
> > > >
> > > > I tried to test docker.board within fuego docker container, it works
> > > > well, and yes, I hit the root permissions issue. I haven't tried to
> > > > import fuego docker filesystem in LXC, that is a new concept to me.
> > > > Does it require docker installed and running in LXC container? If yes,
> > > > that is a problem in LAVA. I think we will need to modify lxc
> > > > configuration somehow on lava-dispatcher to support docker in lxc.
> > >
> > > No, I just meant to use Docker to create the filesystem tree and then use it in LXC.
> > >
> > > > I am getting close to get my whole setup working with LAVA multinode
> > > > job. Here is the test definitions in case anyone interested in
> > > > https://github.com/chase-qi/test-definitions/tree/fuego/automated/linux/fuego-multinode
> > > > . I will share a job example once I have it.

Here is a LAVA job example:
https://lava.slipslow.ml/scheduler/job/119. The test job uses:
* a docker device as host, using the customized fuego docker image; refer to
the description at
https://cloud.docker.com/u/chaseqi/repository/docker/chaseqi/standalone-fuego
* a raspberry pi3 as DUT; the DUT boots an OE based build via tftp and nfsrootfs.
* the lava multinode protocol to sync machine status between host and DUT,
and to send the IP and ssh key between them (a rough sketch of such a job
is shown below).
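
To make the structure concrete, here is a minimal sketch of what such a
two-role multinode job definition could look like. This is only an
illustration: the device type name, docker image tag, artifact URLs and the
test definition file name are assumptions, not values copied from job 119.

```
job_name: fuego-multinode-sketch
visibility: public
timeouts:
  job:
    minutes: 90

protocols:
  lava-multinode:
    roles:
      host:                              # fuego host controller in a docker device
        device_type: docker
        count: 1
      dut:                               # raspberry pi3 booting an OE build
        device_type: bcm2837-rpi-3-b-32  # assumed device type name
        count: 1
    timeout:
      minutes: 10

actions:
- deploy:
    role: [host]
    to: docker
    image: chaseqi/standalone-fuego      # customized fuego image mentioned above
# (host boot and the multinode synchronisation test steps are omitted here)
- deploy:
    role: [dut]
    to: tftp
    kernel:
      url: https://example.org/oe/zImage           # placeholder artifact URLs
    dtb:
      url: https://example.org/oe/bcm2837-rpi-3-b.dtb
    nfsrootfs:
      url: https://example.org/oe/rootfs.tar.xz
      compression: xz
- boot:
    role: [dut]
    method: u-boot                       # assumed boot method for the rpi3
    commands: nfs
- test:
    role: [host]
    definitions:
    - repository: https://github.com/chase-qi/test-definitions
      from: git
      branch: fuego
      path: automated/linux/fuego-multinode/fuego-multinode.yaml  # assumed file name
      name: fuego-multinode
```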

From my point of view, here are the pros and cons of this approach.
Pros:
* Doesn't need a local fuego host controller any more; it runs natively
within LAVA.
* Easy to scale. Jobs can be submitted from any client to any available device.
* Pretty fast: once the docker image is pulled on a lava-dispatcher, the
following test jobs will use it directly to create the container acting as
the fuego host.

Cons:
* Complicated, especially for new LAVA users or whoever doesn't want to
touch LAVA multinode.
* Requires SSH access to the DUT. In case networking isn't
supported, it blocks all tests.

With your effort on the native installer and running without Jenkins, I
think it is possible to do something similar with a single node job. As I
wrote, it would be the lxc protocol plus a DUT with a static IP (a minimal
sketch of the protocol stanza is below). However, when the network isn't
available, we are in trouble too.
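
For reference, a single node job along those lines would mainly add a
lava-lxc protocol block plus an lxc deploy/boot pair before the test action.
The container name, distribution, release and package list below are
assumptions, shown only to illustrate the shape of the stanza (namespaces
and the DUT actions are omitted):

```
protocols:
  lava-lxc:
    name: lxc-fuego-host        # assumed container name
    template: debian
    distribution: debian
    release: stretch
    arch: amd64

actions:
- deploy:
    to: lxc
    os: debian
    packages:                   # whatever the fuego native install needs
    - git
    - python-pip
- boot:
    method: lxc
    prompts:
    - 'root@(.*):/#'
# the DUT with a static IP would then be reached over ssh from the lxc,
# e.g. via the lava-target-ip helper mentioned earlier in the thread
```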

IMHO, this is hard, but it is a faster way to get fuego tests running within
LAVA as it works just like how we use fuego now. I am adding it to
Linaro test-definitions with PR
https://github.com/Linaro/test-definitions/pull/22 . I believe you
guys are the best people to review the PR. Any comments would be appreciated.

> > >
> > > Great! Thanks a lot!
> > >
> > > Kind regards,
> > > Daniel
> > >
> > >
> > > >
> > > > Thanks,
> > > > Chase
> > > >
> > > > >
> > > > > > Alternatively, I can launch a docker device and DUT with a multinode job,
> > > > > > but that is complex. And the fuego docker container eats a lot of
> > > > > > memory (blame jenkins?). The existing docker devices in our lib only
> > > > > > have 1G of memory configured.
> > > > >
> > > > > I haven't checked the memory consumed, I guess the reason is Java.
> > > > >
> > > > > > > This is my current work-in-progress approach:
> > > > > > > https://github.com/sangorrin/test-definitions/tree/master/automated/linux/fuego
> > > > > > >
> > > > > > > - Manual usage (run locally)
> > > > > > >         $ git clone https://github.com/sangorrin/test-definitions
> > > > > > >         $ cd test-definitions
> > > > > > >         $ . ./automated/bin/setenv.sh
> > > > > > >         $ cd automated/linux/fuego/
> > > > > > >         $ ./fuego.sh -d Functional.hello_world
> > > > > > >         $  tree output/
> > > > > > >                 output/
> > > > > > >                 ├── build <- equivalent to fuego buildzone
> > > > > > >                 │   ├── hello
> > > > > > >                 │   ├── hello.c
> > > > > > >                 │   ├── Makefile
> > > > > > >                 │   └── README.md
> > > > > > >                 ├── fuego.Functional.hello_world <- equivalent to board test folder
> > > > > > >                 │   └── hello
> > > > > > >                 └── logs <- equivalent to logdir
> > > > > > >                         └── testlog.txt
> > > > > > > - test-runner usage (run on remote board)
> > > > > > >         $ cd test-definitions
> > > > > > >         $ test-runner -g root@192.168.1.45 -d ./automated/linux/fuego/fuego.yaml -s -o ../output
> > > > > > >         $ ls ../output
> > > > > > >                 result.csv
> > > > > > >                 result.json
> > > > > > >
> > > > > > > I have yet to add the LAVA messages and prepare result.txt but it will be working soon.
> > > > > >
> > > > > > You don't have to. It looks like a done job to me. send-to-lava.sh
> > > > > > will take care of it. When running in LAVA, the helper uses
> > > > > > lava-test-case for result collecting, and when running without LAVA,
> > > > > > the helper prints result lines in a fixed format for result parsing
> > > > > > within test-runner. (When I was writing this, I noticed your next reply,
> > > > > > maybe I am looking at the latest code already, I will give it a spin
> > > > > > with LAVA and come back to you)
> > > > >
> > > > > Thanks again for checking. I am glad that it worked for you. I have a LAVA setup on the CIP project so
> > I have
> > > > started to do tests there.
> > > > >
> > > > > > So basically, we are running in two different directions. From my
> > > > > > point of view, you are porting fuego tests to Linaro test-definitions
> > > > > > natively. Although I am not yet sure how the integration between these
> > > > > > two projects goes, we are happy to see this happening :)
> > > > >
> > > > > Thanks, you are right. But porting it to test-definitions misses a lot of the good features in Fuego such as the passing
> > > > criteria. Perhaps your approach is better.

Not really :) As I wrote above, my approach requires ssh access to the DUT,
which isn't always available. It is also one of the reasons we do
install (optional) -> run -> parsing on the target. lava-dispatcher will
clone the test repos and apply them to the rootfs as an overlay before
deployment. To some extent, that solves the 'no network' issue. A sketch of
that on-target flow is below.
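
To illustrate that flow, here is a rough sketch of a test definition in the
Lava-Test Shell format, loosely modelled on the smoke test seen in the logs
earlier in this thread. The metadata values, parameters and exact script
options are assumptions; the point is that install, run and result reporting
all happen on the target:

```
metadata:
    format: Lava-Test Test Definition 1.0
    name: fuego-style-smoke-sketch
    description: "Illustrative definition: optional install, run and result
                  reporting on the target"
    os:
        - debian
        - openembedded
    scope:
        - functional

params:
    SKIP_INSTALL: "false"
    TESTS: "pwd uname vmstat"

run:
    steps:
        - cd ./automated/linux/smoke/
        # the script installs deps unless SKIP_INSTALL=true, then runs the tests
        - ./smoke.sh -s "${SKIP_INSTALL}" -t "${TESTS}"
        # turns the pass/fail lines into lava-test-case calls on the target
        - ../../utils/send-to-lava.sh ./output/result.txt
```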

When dependencies can be installed or pre-installed, your approach will
work everywhere. We, or at least I, will be very happy to see it in
Linaro test-definitions. It is a very good example of adding fuego
tests to the test-definitions project.

Thanks,
Chase

> > > > >
> > > > > > > By the way, I couldn't reuse some parts of Fuego that usually run on the host because they depend
> > on
> > > > bash.
> > > > > > > Currently Functional.hello_world is working on sh but I will find similar issues as I add more test
> > definitions.
> > > > > > > Is sh a hard requirement for you guys? or would you be fine with tests requiring bash.
> > > > > > >
> > > > > >
> > > > > > lava-test-shell requires POSIX shell. We normally use /bin/sh which
> > > > > > links to dash on Debian based distros, and we also have some test
> > > > > > definitions like ltp and android tradefed using bash. bash has some
> > > > > > extensions that are not POSIX compatible. IMHO, using bash without these
> > > > > > extensions is totally fine. We are using shellcheck in the sanity check to
> > > > > > detect potential POSIX issues.
> > > > >
> > > > > OK, I got it. Thank you!
> > > > >
> > > > > Kind regards,
> > > > > Daniel
> > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Chase
> > > > > >
> > > > > >
> > > > > > > Thanks,
> > > > > > > Daniel
> > > > > > >
> > > > > > > > We probably should start a new thread for this topic to share progress?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Chase
> > > > > > > >
> > > > > > > > [1] https://github.com/Linaro/test-definitions/blob/master/automated/lib/sh-test-lib#L250
> > > > > > > >
> > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Daniel
> > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> > > > > > > > > > Sent: Thursday, February 14, 2019 6:51 AM
> > > > > > > > > > To: sangorrin daniel(サンゴリン ダニエル ○SWC□OST) <daniel.sangorrin@toshiba.co.jp>;
> > > > > > > > > > fuego@lists.linuxfoundation.org
> > > > > > > > > > Subject: RE: [Fuego] [PATCH] tests: add support for Linaro test-definitons
> > > > > > > > > >
> > > > > > > > > > Comments inline below.
> > > > > > > > > >
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: Daniel Sangorrin
> > > > > > > > > > >
> > > > > > > > > > > This adds initial support for reusing Linaro test-definitions.
> > > > > > > > > > > It is still a proof of concept and only tested with
> > > > > > > > > > > smoke tests. I have written a few FIXTHIS to indicate what
> > > > > > > > > > > is left.
> > > > > > > > > > >
> > > > > > > > > > > To try it follow these steps:
> > > > > > > > > > >
> > > > > > > > > > > - prepare SSH_KEY for your board
> > > > > > > > > > >     Eg: Inside fuego's docker container do
> > > > > > > > > > >     > su jenkins
> > > > > > > > > > >     > cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > > > > >     > vi ~/.ssh/config
> > > > > > > > > > >     >  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > > > > >     >    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > > > > - ftc add-job -b bbb -t Functional.linaro
> > > > > > > > > > > - execute the job from jenkins
> > > > > > > > > > > - expected results
> > > > > > > > > > >     - table with each test case and the results (PASS/FAIL/SKIP)
> > > > > > > > > > >     - run.json
> > > > > > > > > > >     - csv
> > > > > > > > > > >
> > > > > > > > > > > Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > > > > ---
> > > > > > > > > > >  tests/Functional.linaro/chart_config.json |  3 ++
> > > > > > > > > > >  tests/Functional.linaro/fuego_test.sh     | 59
> > > > > > > > > > > +++++++++++++++++++++++++++++++
> > > > > > > > > > >  tests/Functional.linaro/parser.py         | 25 +++++++++++++
> > > > > > > > > > >  tests/Functional.linaro/spec.json         | 16 +++++++++
> > > > > > > > > > >  tests/Functional.linaro/test.yaml         | 27 ++++++++++++++
> > > > > > > > > > >  5 files changed, 130 insertions(+)
> > > > > > > > > > >  create mode 100644 tests/Functional.linaro/chart_config.json
> > > > > > > > > > >  create mode 100755 tests/Functional.linaro/fuego_test.sh
> > > > > > > > > > >  create mode 100755 tests/Functional.linaro/parser.py
> > > > > > > > > > >  create mode 100644 tests/Functional.linaro/spec.json
> > > > > > > > > > >  create mode 100644 tests/Functional.linaro/test.yaml
> > > > > > > > > > >
> > > > > > > > > > > diff --git a/tests/Functional.linaro/chart_config.json
> > > > > > > > > > > b/tests/Functional.linaro/chart_config.json
> > > > > > > > > > > new file mode 100644
> > > > > > > > > > > index 0000000..b8c8fb6
> > > > > > > > > > > --- /dev/null
> > > > > > > > > > > +++ b/tests/Functional.linaro/chart_config.json
> > > > > > > > > > > @@ -0,0 +1,3 @@
> > > > > > > > > > > +{
> > > > > > > > > > > +    "chart_type": "testcase_table"
> > > > > > > > > > > +}
> > > > > > > > > > > diff --git a/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > > > b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > > > new file mode 100755
> > > > > > > > > > > index 0000000..17b56a9
> > > > > > > > > > > --- /dev/null
> > > > > > > > > > > +++ b/tests/Functional.linaro/fuego_test.sh
> > > > > > > > > > > @@ -0,0 +1,59 @@
> > > > > > > > > > > +gitrepo="https://github.com/Linaro/test-definitions.git"
> > > > > > > > > > > +
> > > > > > > > > > > +# Root permissions required for
> > > > > > > > > > > +# - installing dependencies on the target (debian/centos) when -s is not
> > > > > > > > > > > specified
> > > > > > > > > > > +# - executing some of the tests
> > > > > > > > > > > +# FIXTHIS: don't force root permissions for tests that do not require them
> > > > > > > > > > > +NEED_ROOT=1
> > > > > > > > > > > +
> > > > > > > > > > > +function test_pre_check {
> > > > > > > > > > > +    # linaro parser dependencies
> > > > > > > > > > > +    # FIXTHIS: use dependencies specified in the test definition yaml
> > > > > > > > > > > +    assert_has_program sed
> > > > > > > > > > > +    assert_has_program awk
> > > > > > > > > > > +    assert_has_program grep
> > > > > > > > > > > +    assert_has_program egrep
> > > > > > > > > > > +    assert_has_program tee
> > > > > > > > > > I missed this earlier, but Fuego requires 'grep' and 'tee', so there's
> > > > > > > > > > no need to check for them here.
> > > > > > > > > > I already made a patch to remove those lines.
> > > > > > > > > >
> > > > > > > > > > > +
> > > > > > > > > > > +    # test-runner requires a password-less connection
> > > > > > > > > > > +    # Eg: Inside fuego's docker container do
> > > > > > > > > > > +    # su jenkins
> > > > > > > > > > > +    # cp path/to/bbb_id_rsa ~/.ssh/
> > > > > > > > > > > +    # vi ~/.ssh/config
> > > > > > > > > > > +    #  Host 192.167.1.99 <- replace with your boards ip address ($IPADDR)
> > > > > > > > > > > +    #    IdentityFile ~/.ssh/bbb_id_rsa
> > > > > > > > > > > +    assert_define SSH_KEY "Please setup SSH_KEY on your board file (fuego-
> > > > > > > > > > > ro/boards/$NODE_NAME.board)"
> > > > > > > > > > > +}
> > > > > > > > > > > +
> > > > > > > > > > > +function test_build {
> > > > > > > > > > > +    source ./automated/bin/setenv.sh
> > > > > > > > > > > +    pip install -r $REPO_PATH/automated/utils/requirements.txt --user
> > > > > > > > > >
> > > > > > > > > > OK.  I gave this a spin, and here's an error I got:
> > > > > > > > > >
> > > > > > > > > > ===== doing fuego phase: build =====
> > > > > > > > > > Clone repository https://github.com/Linaro/test-definitions.git.
> > > > > > > > > > Cloning into 'fuego_git_repo'...
> > > > > > > > > > Checkout branch/tag/commit id master.
> > > > > > > > > > Already on 'master'
> > > > > > > > > > Your branch is up-to-date with 'origin/master'.
> > > > > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > > > > BIN_PATH:
> > > > > > > > > >
> > > > > > > >
> > > > > >
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > > > > Downloading/unpacking pexpect (from -r
> > > > > > > > > >
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > > (line 1))
> > > > > > > > > > Requirement already satisfied (use --upgrade to upgrade): pyyaml in
> > > > /usr/lib/python2.7/dist-packages
> > > > > > (from -r
> > > > > > > > > >
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > > (line 2))
> > > > > > > > > > Requirement already satisfied (use --upgrade to upgrade): requests in
> > > > /usr/lib/python2.7/dist-packages
> > > > > > (from
> > > > > > > > -r
> > > > > > > > > >
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > > (line 3))
> > > > > > > > > > Downloading/unpacking ptyprocess>=0.5 (from pexpect->-r
> > > > > > > > > >
> > > > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/utils/requirements.txt
> > > > > > (line 1))
> > > > > > > > > >   Downloading ptyprocess-0.6.0-py2.py3-none-any.whl
> > > > > > > > > > Installing collected packages: pexpect, ptyprocess
> > > > > > > > > > Compiling /tmp/pip-build-65KFAp/pexpect/pexpect/_async.py ...
> > > > > > > > > >   File "/tmp/pip-build-65KFAp/pexpect/pexpect/_async.py", line 19
> > > > > > > > > >     transport, pw = yield from asyncio.get_event_loop()\
> > > > > > > > > >                              ^
> > > > > > > > > > SyntaxError: invalid syntax
> > > > > > > > > >
> > > > > > > > > > Successfully installed pexpect ptyprocess
> > > > > > > > > > Cleaning up...
> > > > > > > > > > Fuego test_build duration=1.56257462502 seconds
> > > > > > > > > >
> > > > > > > > > > Weirdly, I only see this on the first build.  I think the pip install alters the docker
> > > > > > > > > > container, so that even if I specify 'rebuild' for this job, I don't see the "compilation"
> > > > > > > > > > error after the first build of the job.
> > > > > > > > > >
> > > > > > > > > > > +}
> > > > > > > > > > > +
> > > > > > > > > > > +function test_run {
> > > > > > > > > > > +    source $WORKSPACE/$JOB_BUILD_DIR/automated/bin/setenv.sh
> > > > > > > > > > > +
> > > > > > > > > > > +    yaml_file=${FUNCTIONAL_LINARO_YAML:-
> > > > > > > > > > > "automated/linux/smoke/smoke.yaml"}
> > > > > > > > > > > +    if [ ! -e "${REPO_PATH}/$yaml_file" ]; then
> > > > > > > > > > > +            abort_job "$yaml_file not found"
> > > > > > > > > > > +    fi
> > > > > > > > > > > +
> > > > > > > > > > > +    if startswith "$yaml_file" "plans"; then
> > > > > > > > > > > +            echo "using test plan: $yaml_file"
> > > > > > > > > > > +            test_or_plan_flag="-p"
> > > > > > > > > > > +    else
> > > > > > > > > > > +            echo "using test definition: $yaml_file"
> > > > > > > > > > > +            test_or_plan_flag="-d"
> > > > > > > > > > > +    fi
> > > > > > > > > > > +
> > > > > > > > > > > +    if [ -n "$FUNCTIONAL_LINARO_PARAMS" ]; then
> > > > > > > > > > > +        PARAMS="-r $FUNCTIONAL_LINARO_PARAMS"
> > > > > > > > > > > +    else
> > > > > > > > > > > +        PARAMS=""
> > > > > > > > > > > +    fi
> > > > > > > > > > > +
> > > > > > > > > > > +    # FIXTHIS: don't use -s for targets with debian/centos
> > > > > > > > > > > +    test-runner -o ${LOGDIR} $test_or_plan_flag ${REPO_PATH}/$yaml_file
> > > > > > > > > > > $PARAMS -g $LOGIN@$IPADDR -s -e
> > > > > > > > > > > +}
> > > > > > > > > > > +
> > > > > > > > > > > +# FIXTHIS: the log directory is populated with a copy of the whole
> > > > > > > > > > > repository, clean unnecessary files
> > > > > > > > > > > diff --git a/tests/Functional.linaro/parser.py
> > > > > > > > > > > b/tests/Functional.linaro/parser.py
> > > > > > > > > > > new file mode 100755
> > > > > > > > > > > index 0000000..48b502b
> > > > > > > > > > > --- /dev/null
> > > > > > > > > > > +++ b/tests/Functional.linaro/parser.py
> > > > > > > > > > > @@ -0,0 +1,25 @@
> > > > > > > > > > > +#!/usr/bin/python
> > > > > > > > > > > +
> > > > > > > > > > > +import os, sys, collections
> > > > > > > > > > > +import common as plib
> > > > > > > > > > > +import json
> > > > > > > > > > > +
> > > > > > > > > > > +# allocate variable to store the results
> > > > > > > > > > > +measurements = {}
> > > > > > > > > > > +measurements = collections.OrderedDict()
> > > > > > > > > > > +
> > > > > > > > > > > +# read results from linaro result.json format
> > > > > > > > > > > +with open(plib.LOGDIR + "/result.json") as f:
> > > > > > > > > > > +    data = json.load(f)[0]
> > > > > > > > > > > +
> > > > > > > > > > > +for test_case in data['metrics']:
> > > > > > > > > > > +    test_case_id = test_case['test_case_id']
> > > > > > > > > > > +    result = test_case['result']
> > > > > > > > > > > +    # FIXTHIS: add measurements when available
> > > > > > > > > > > +    # measurement = test_case['measurement']
> > > > > > > > > > > +    # units = test_case['units']
> > > > > > > > > > > +    measurements['default.' + test_case_id] = result.upper()
> > > > > > > > > > > +
> > > > > > > > > > > +# FIXTHIS: think about how to get each test's log from stdout.log
> > > > > > > > > > > +
> > > > > > > > > > > +sys.exit(plib.process(measurements))
> > > > > > > > > > > diff --git a/tests/Functional.linaro/spec.json
> > > > > > > > > > > b/tests/Functional.linaro/spec.json
> > > > > > > > > > > new file mode 100644
> > > > > > > > > > > index 0000000..561e2ab
> > > > > > > > > > > --- /dev/null
> > > > > > > > > > > +++ b/tests/Functional.linaro/spec.json
> > > > > > > > > > > @@ -0,0 +1,16 @@
> > > > > > > > > > > +{
> > > > > > > > > > > +    "testName": "Functional.linaro",
> > > > > > > > > > > +    "specs": {
> > > > > > > > > > > +        "default": {
> > > > > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > > > > +        },
> > > > > > > > > > > +        "smoke": {
> > > > > > > > > > > +            "yaml": "automated/linux/smoke/smoke.yaml",
> > > > > > > > > > > +            "params": "TESTS='pwd'",
> > > > > > > > > > > +            "extra_success_links": {"csv": "result.csv"},
> > > > > > > > > > > +            "extra_fail_links": {"csv": "results.csv"}
> > > > > > > > > > > +        }
> > > > > > > > > > > +    }
> > > > > > > > > > > +}
> > > > > > > > > > > diff --git a/tests/Functional.linaro/test.yaml
> > > > > > > > > > > b/tests/Functional.linaro/test.yaml
> > > > > > > > > > > new file mode 100644
> > > > > > > > > > > index 0000000..a2efee8
> > > > > > > > > > > --- /dev/null
> > > > > > > > > > > +++ b/tests/Functional.linaro/test.yaml
> > > > > > > > > > > @@ -0,0 +1,27 @@
> > > > > > > > > > > +fuego_package_version: 1
> > > > > > > > > > > +name: Functional.linaro
> > > > > > > > > > > +description: |
> > > > > > > > > > > +    Linaro test-definitions
> > > > > > > > > > > +license: GPL-2.0
> > > > > > > > > > > +author: Milosz Wasilewski, Chase Qi
> > > > > > > > > > > +maintainer: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> > > > > > > > > > > +version: latest git commits
> > > > > > > > > > > +fuego_release: 1
> > > > > > > > > > > +type: Functional
> > > > > > > > > > > +tags: ['kernel', 'linaro']
> > > > > > > > > > > +git_src: https://github.com/Linaro/test-definitions
> > > > > > > > > > > +params:
> > > > > > > > > > > +    - YAML:
> > > > > > > > > > > +        description: test definiton or plan.
> > > > > > > > > > > +        example: "automated/linux/smoke/smoke.yaml"
> > > > > > > > > > > +        optional: no
> > > > > > > > > > > +    - PARAMS:
> > > > > > > > > > > +        description: List of params for the test PARAM1=VALUE1
> > > > > > > > > > > [PARAM2=VALUE2]
> > > > > > > > > > > +        example: "TESTS='pwd'"
> > > > > > > > > > > +        optional: yes
> > > > > > > > > > > +data_files:
> > > > > > > > > > > +    - chart_config.json
> > > > > > > > > > > +    - fuego_test.sh
> > > > > > > > > > > +    - parser.py
> > > > > > > > > > > +    - spec.json
> > > > > > > > > > > +    - test.yaml
> > > > > > > > > > > --
> > > > > > > > > > > 2.7.4
> > > > > > > > > >
> > > > > > > > > > And here's output from one of my initial runs.  I haven't debugged it yet.
> > > > > > > > > > The issue may be something weird in my board file or configuration.
> > > > > > > > > >
> > > > > > > > > > ===== doing fuego phase: run =====
> > > > > > > > > > -------------------------------------------------
> > > > > > > > > > REPO_PATH: /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf
> > > > > > > > > > BIN_PATH:
> > > > > > > > > >
> > > > > > > >
> > > > > >
> > > >
> > /fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/bin:/usr/local/bin:/usr/local/bin:
> > > > > > > > > > /usr/bin:/bin:/usr/local/games:/usr/games
> > > > > > > > > > using test definition: automated/linux/smoke/smoke.yaml
> > > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > > 2019-02-13 21:46:14,364 - RUNNER: INFO: Tests to run:
> > > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > > 2019-02-13 21:46:14,814 - RUNNER.TestSetup: INFO: Test repo copied to:
> > > > > > > > > >
> > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > > 2019-02-13 21:46:14,826 - RUNNER.TestRun: INFO: Archiving test files
> > > > > > > > > > 2019-02-13 21:46:14,845 - RUNNER.TestRun: INFO: Creating test path
> > > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > > 2019-02-13 21:46:15,133 - RUNNER.TestRun: INFO: Copying test archive to target host
> > > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > > 2019-02-13 21:46:16,260 - RUNNER.TestRun: INFO: Unarchiving test files on target
> > > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > > 2019-02-13 21:46:16,674 - RUNNER.TestRun: INFO: Removing test file archive from target
> > > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > > 2019-02-13 21:46:16,978 - RUNNER.TestRun: INFO: Executing
> > > > > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/run.sh remotely on
> > > > root@10.0.1.74
> > > > > > > > > > {'path':
> > > > > > > > > >
> > > > > >
> > '/fuego-rw/buildzone/bbb.default.Functional.linaro-debian-armhf/automated/linux/smoke/smoke.yaml',
> > > > > > > > > > 'uuid': '2efe23f3-2a7b-4655-b785-d25c11b84ea8', 'timeout': None, 'skip_install': True}
> > > > > > > > > > Warning: Permanently added '10.0.1.74' (ECDSA) to the list of known hosts.
> > > > > > > > > > + export TESTRUN_ID=smoke-tests-basic
> > > > > > > > > > + cd /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > > + cat uuid
> > > > > > > > > > + UUID=2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > > + echo <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > > <STARTRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > > + export PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
> > > > > > > > > > + cd ./automated/linux/smoke/
> > > > > > > > > > + ./smoke.sh -s True -t pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat, lsblk
> > > > > > > > > > INFO: install_deps skipped
> > > > > > > > > >
> > > > > > > > > > INFO: Running pwd test...
> > > > > > > > > > /root/output/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8/automated/linux/smoke
> > > > > > > > > > pwd pass
> > > > > > > > > >
> > > > > > > > > > INFO: Running lsb_release test...
> > > > > > > > > > ./smoke.sh: 1: eval: lsb_release: not found
> > > > > > > > > > lsb_release fail
> > > > > > > > > >
> > > > > > > > > > INFO: Running uname test...
> > > > > > > > > > Linux beaglebone 4.4.88-ti-r125 #1 SMP Thu Sep 21 19:23:24 UTC 2017 armv7l GNU/Linux
> > > > > > > > > > uname pass
> > > > > > > > > >
> > > > > > > > > > INFO: Running ip test...
> > > > > > > > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
> > qlen
> > > > 1
> > > > > > > > > >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > > > > > > >     inet 127.0.0.1/8 scope host lo
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > >     inet6 ::1/128 scope host
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > > 2: eth0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> > UP
> > > > group
> > > > > > default
> > > > > > > > > > qlen 1000
> > > > > > > > > >     link/ether 90:59:af:54:cd:e6 brd ff:ff:ff:ff:ff:ff
> > > > > > > > > >     inet 10.0.1.74/24 brd 10.0.1.255 scope global eth0
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > >     inet6 fe80::9259:afff:fe54:cde6/64 scope link
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > > 3: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > > > > >     link/can
> > > > > > > > > > 4: can1: <NOARP,ECHO> mtu 16 qdisc noop state DOWN group default qlen 10
> > > > > > > > > >     link/can
> > > > > > > > > > 5: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > > > default
> > > > > > qlen 1000
> > > > > > > > > >     link/ether 90:59:af:54:cd:e8 brd ff:ff:ff:ff:ff:ff
> > > > > > > > > >     inet 192.168.7.2/30 brd 192.168.7.3 scope global usb0
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > >     inet6 fe80::9259:afff:fe54:cde8/64 scope link
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > > 6: usb1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
> > > > default
> > > > > > qlen 1000
> > > > > > > > > >     link/ether 90:59:af:54:cd:eb brd ff:ff:ff:ff:ff:ff
> > > > > > > > > >     inet 192.168.6.2/30 brd 192.168.6.3 scope global usb1
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > >     inet6 fe80::9259:afff:fe54:cdeb/64 scope link
> > > > > > > > > >        valid_lft forever preferred_lft forever
> > > > > > > > > > ip pass
> > > > > > > > > >
> > > > > > > > > > INFO: Running lscpu test...
> > > > > > > > > > Architecture:          armv7l
> > > > > > > > > > Byte Order:            Little Endian
> > > > > > > > > > CPU(s):                1
> > > > > > > > > > On-line CPU(s) list:   0
> > > > > > > > > > Thread(s) per core:    1
> > > > > > > > > > Core(s) per socket:    1
> > > > > > > > > > Socket(s):             1
> > > > > > > > > > Model:                 2
> > > > > > > > > > Model name:            ARMv7 Processor rev 2 (v7l)
> > > > > > > > > > CPU max MHz:           1000.0000
> > > > > > > > > > CPU min MHz:           300.0000
> > > > > > > > > > BogoMIPS:              995.32
> > > > > > > > > > Flags:                 half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpd32
> > > > > > > > > > lscpu pass
> > > > > > > > > >
> > > > > > > > > > INFO: Running vmstat test...
> > > > > > > > > > procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> > > > > > > > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
> > > > > > > > > >  0  0      0 381532   2780  58072    0    0     1     3   38    2  0  0 99  0  0
> > > > > > > > > > vmstat pass
> > > > > > > > > >
> > > > > > > > > > INFO: Running lsblk test...
> > > > > > > > > > NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> > > > > > > > > > mmcblk0      179:0    0 14.9G  0 disk
> > > > > > > > > > |-mmcblk0p1  179:1    0    6G  0 part /
> > > > > > > > > > `-mmcblk0p2  179:2    0  8.9G  0 part /data
> > > > > > > > > > mmcblk1      179:8    0  1.8G  0 disk
> > > > > > > > > > `-mmcblk1p1  179:9    0  1.8G  0 part /mnt/on-board-mmc
> > > > > > > > > > mmcblk1boot0 179:16   0    1M  1 disk
> > > > > > > > > > mmcblk1boot1 179:24   0    1M  1 disk
> > > > > > > > > > lsblk pass
> > > > > > > > > > + ../../utils/send-to-lava.sh ./output/result.txt
> > > > > > > > > > <TEST_CASE_ID=pwd RESULT=pass>
> > > > > > > > > > <TEST_CASE_ID=lsb_release RESULT=fail>
> > > > > > > > > > <TEST_CASE_ID=uname RESULT=pass>
> > > > > > > > > > <TEST_CASE_ID=ip RESULT=pass>
> > > > > > > > > > <TEST_CASE_ID=lscpu RESULT=pass>
> > > > > > > > > > <TEST_CASE_ID2019-02-13 21:46:18,376 - RUNNER.TestRun: INFO:
> > > > > > > > > > smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8 test finished.
> > > > > > > > > >
> > > > > > > > > > 2019-02-13 21:46:18,397 - RUNNER.ResultParser: INFO: Result files saved to:
> > > > > > > > > >
> > > > /fuego-rw/logs/Functional.linaro/bbb.default.6.6/smoke_2efe23f3-2a7b-4655-b785-d25c11b84ea8
> > > > > > > > > > =vmstat RESULT=pass>
> > > > > > > > > > <TEST_CASE_ID=lsblk RESULT=pass>
> > > > > > > > > > + echo <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > > <ENDRUN smoke-tests-basic 2efe23f3-2a7b-4655-b785-d25c11b84ea8>
> > > > > > > > > > --- Printing result.csv ---
> > > > > > > > > > name,test_case_id,result,measurement,units,test_params
> > > > > > > > > > smoke-tests-basic,pwd,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > > smoke-tests-basic,lsb_release,fail,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu,
> > vmstat,
> > > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > > smoke-tests-basic,uname,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > > smoke-tests-basic,ip,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > > smoke-tests-basic,lscpu,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > > smoke-tests-basic,vmstat,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > > smoke-tests-basic,lsblk,pass,,,"TESTS=pwd, lsb_release -a, uname -a, ip a, lscpu, vmstat,
> > > > > > > > > > lsblk;SKIP_INSTALL=False"
> > > > > > > > > >
> > > > > > > > > > -------------------------------------------------
> > > > > > > > > > ===== doing fuego phase: post_test =====
> > > > > > > > > > scp: /home/fuego/fuego.Functional.linaro/Functional.linaro.log: No such file or directory
> > > > > > > > > > INFO: the test did not produce a test log on the target
> > > > > > > > > > ===== doing fuego phase: processing =====
> > > > > > > > > > ### WARNING: Program returned exit code ''
> > > > > > > > > > ### WARNING: Log evaluation may be invalid
> > > > > > > > > > ### Unrecognized results format
> > > > > > > > > > ### Unrecognized results format
> > > > > > > > > > ### Unrecognized results format
> > > > > > > > > > ### Unrecognized results format
> > > > > > > > > > ### Unrecognized results format
> > > > > > > > > > ### Unrecognized results format
> > > > > > > > > > ### Unrecognized results format
> > > > > > > > > > ERROR: results did not satisfy the threshold
> > > > > > > > > > Fuego: requested test phases complete!
> > > > > > > > > > Build step 'Execute shell' marked build as failure
> > > > > > > > > >
> > > > > > > > > > ----------
> > > > > > > > > >
> > > > > > > > > > It looks like I'm close.  I'll keep playing with it, but if you see something
> > > > > > > > > > that I should fix, please let me know.
> > > > > > > > > >
> > > > > > > > > > Note that I *do* get a results table in Jenkins.  lsb_release fails, but the
> > > > > > > > > > other tests (ip, lsblk, lscpu, pwd, uname, and vmstat) all pass.   But
> > > > > > > > > > testlog.txt has 'INFO: the test did not produce a test log on the target'.
> > > > > > > > > >
> > > > > > > > > > Just FYI.  Thanks for the code.
> > > > > > > > > >  -- Tim
> > > > > > > > >
> > > > > > > > > _______________________________________________
> > > > > > > > > Fuego mailing list
> > > > > > > > > Fuego@lists.linuxfoundation.org
> > > > > > > > > https://lists.linuxfoundation.org/mailman/listinfo/fuego

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-26  8:50             ` Chase Qi
@ 2019-02-27  6:13               ` Tim.Bird
  2019-02-27  8:16                 ` Chase Qi
  2019-02-27 10:32                 ` Milosz Wasilewski
  0 siblings, 2 replies; 14+ messages in thread
From: Tim.Bird @ 2019-02-27  6:13 UTC (permalink / raw)
  To: chase.qi, daniel.sangorrin; +Cc: anders.roxell, fuego

Sorry for the belated reply, but I was on vacation last week.

I have a few comments I'll put inline below.


> -----Original Message-----
> From: Chase Qi 
>
> Hi Daniel,
> 
> Thanks a lot for the comments.
> 
> On Mon, Feb 25, 2019 at 1:35 PM <daniel.sangorrin@toshiba.co.jp> wrote:
> >
> > Hello Chase,
> >
> > > From: Chase Qi <chase.qi@linaro.org>
> > [...]
> > > > Today, I sent a patch series that allows you to install Fuego without
> Jenkins on Docker. Maybe that will solve
> > > your previous problems. I also submitted a few more changes to allow
> users changing the port where Jenkins
> > > listens.
> > >
> > > I noticed the patches. I definitely will give it a spin later on. I am
> > > currently still using fuego v1.40 based docker image for prototyping
> > > with LAVA multinode job.
I think that configuring Fuego support in LAVA as a multinode job is
a good approach.  But I'll comment more at your pro/con list below.

> > > I built and uploaded fuego docker imge with
> > > fuego test code included here
> > > https://cloud.docker.com/repository/docker/chaseqi/standalone-
> fuego/tags
> > > .  BTW, are you guys plan to publish fuego official docker image?
> >
> > Nice.
> > We have talked about that but we haven't published an official image yet.
With the 1.3 release, we almost released an official image.  But we ran into
some issues stabilizing it to run in people's environments.  There always has
to be a customization step (such as dealing with people's firewalls and proxies,
and matching the user account on the host that is using the docker container),
and we couldn't quite finish that work for that release.
As Daniel says, it's on our 'to-do' list.
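Just to make that concrete, here's a rough sketch of the kind of per-site
customization I mean (the image name, port mapping and paths below are
placeholders, not our actual install scripts):

```
# Illustrative only -- not the official Fuego image or start script.
# Pass the host's proxy settings into the container, publish Jenkins on a
# host port, and share the fuego-rw area with the host.
docker run -it --name fuego-container \
    -e http_proxy="$http_proxy" -e https_proxy="$https_proxy" \
    -v "$PWD/fuego-rw:/fuego-rw" \
    -p 8090:8080 \
    my-fuego-image
# Matching the host user account usually also means adjusting the UID of the
# container's jenkins user to the invoking user's UID, which is the part that
# is hard to do generically in a published image.
```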

> >
> 
> Ok, I will stick with the one I made for the moment.
> 
> > > Here is the changes I made in the dockerfile.
> > >
> > > ```
> > > $ git diff
> > > diff --git a/Dockerfile b/Dockerfile
> > > index 269e1f6..16586fa 100644
> > > --- a/Dockerfile
> > > +++ b/Dockerfile
> > > @@ -114,6 +114,15 @@ RUN CHROME_DRIVER_VERSION=$(curl --silent -
> -fail \
> > >  RUN echo "jenkins ALL = (root) NOPASSWD:ALL" >> /etc/sudoers
> > >
> > >
> > > +#
> > >
> ==========================================================
> =======
> > > =============
> > > +#Install fuego
> > > +#
> > >
> ==========================================================
> =======
> > > =============
> > > +RUN git clone https://bitbucket.org/tbird20d/fuego.git /fuego \
> > > +    && git clone https://bitbucket.org/tbird20d/fuego-core.git /fuego-
> core \
> > > +    && ln -s /fuego/fuego-ro/ / \
> > > +    && ln -s /fuego/fuego-rw/ /
> >
> > The upstream repository has changed to
> > https://bitbucket.org/fuegotest/fuego.git
> > https://bitbucket.org/fuegotest/fuego-core.git
> > Also, if you use the next branch you have to clone fuego-core within the
> fuego folder now.
> >
> 
> Thanks for the pointers. I switched to the new links.
> 
> > > +
> > > +
> > >  #
> > >
> ==========================================================
> =======
> > > =============
> > >  # get ttc script and helpers
> > >  #
> > >
> ==========================================================
> =======
> > > =============
> > > @@ -201,8 +210,8 @@ RUN chown -R jenkins:jenkins $JENKINS_HOME/
> > >  # Lava
> > >  #
> > >
> ==========================================================
> =======
> > > =============
> > >
> > > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > >  # CONVENIENCE HACKS
> > >  # not mounted, yet
> > >  #RUN echo "fuego-create-node --board raspberrypi3" >>
> /root/firststart.sh
> > > @@ -218,6 +227,14 @@ RUN ln -s
> > > /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > >  #RUN DEBIAN_FRONTEND=noninteractive apt-get update
> > >  #RUN DEBIAN_FRONTEND=noninteractive apt-get -yV install
> > > crossbuild-essential-armhf cpp-arm-linux-gnueabihf
> > > gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf
> > >
> > > +#
> > >
> ==========================================================
> =======
> > > =============
> > > +#Install arm64 toolchain
> > > +#
> > >
> ==========================================================
> =======
> > > =============
> > > +RUN /fuego-ro/toolchains/install_cross_toolchain.sh arm64 \
> > > +    && apt-get clean \
> > > +    && rm -rf /tmp/* /var/tmp/*
> > > +
> > > +
> > >  #
> > >
> ==========================================================
> =======
> > > =============
> > >  # Setup startup command
> > >  #
> > >
> ==========================================================
> =======
> > > =============
> > > ```
> > >
> > > >
> > > > I am also preparing a native install script. Unfortunately, my time is up
> today and I couldn't test it. I send it to
> > > you attached in case you want to give it a try. But make sure you do it on
> a container or VM where nothing bad
> > > can happen ;)
> > >
> > > Thanks a lot. I just tried the script within lxc jessie container.
> > > Most of it just works. The following run with ftc cmds are
> > > problematic, which probably are expected. I guess we also need to
> > > patch ftc to make jenkins and docker optional.
> >
> > Sorry about that, I am working on it. I will release a better script soon,
> maybe today or tommorow. I will let you know when it's ready.
> >
> > Thanks,
> > Daniel
> >
> >
> > > ```
> > > root@fuego-native:/fuego/fuego-core/engine# ftc list-boards
> > > sudo: docker: command not found
> > > $ diff native-install-jessie.sh native-install-jessie.sh.original
> > > 86,92c86
> > > < # ftc list-boards reports
> > > < # sudo: docker: command not found
> > > < # TODO: patch ftc to make docker optional.
> > > < curl -fsSL https://get.docker.com -o get-docker.sh
> > > < sh get-docker.sh
> > > <
> > > < ln -s /fuego/fuego-core/engine/scripts/ftc /usr/local/bin/
> > > ---
> > > > ln -s /fuego-core/scripts/ftc /usr/local/bin/
> > >
> > > root@fuego-native:/fuego/fuego-core/engine/scripts# git diff
> > > diff --git a/engine/scripts/ftc b/engine/scripts/ftc
> > > index ab2d2cb..1b09812 100755
> > > --- a/engine/scripts/ftc
> > > +++ b/engine/scripts/ftc
> > > @@ -4665,7 +4665,7 @@ def main():
> > >                  print "Can't do rm-jobs outside the container! Aborting."
> > >                  sys.exit(1)
> > >              command += arg + " "
> > > -        container_command(command)
> > > +        #container_command(command)
> > >
> > >      if len(sys.argv) < 2:
> > >          error_out('Missing command\nUse "ftc help" to get usage help.', 1)
> > > @@ -4781,7 +4781,7 @@ def main():
> > >          # shows fuego boards
> > >          do_list_boards(conf)
> > >
> > > -    import jenkins
> > > +    #import jenkins
> > >      server = jenkins.Jenkins('http://localhost:8080/fuego')
> > >
> > >      if command.startswith("add-job"):
> > >
> > > # ftc run-test -b raspberrypi3 -t Benchmark.fio -s default
> > > Traceback (most recent call last):
> > >   File "/usr/local/bin/ftc", line 4929, in <module>
> > >     main()
> > >   File "/usr/local/bin/ftc", line 4785, in main
> > >     server = jenkins.Jenkins('http://localhost:8080/fuego')
> > > AttributeError: 'NoneType' object has no attribute 'Jenkins
> > > ```
> > >
> > > Thanks,
> > > Chase
> > >
> > > >
> > > > > > > > > * as you pointed, parsing fuego's test result file in LAVA is easy
> to do.
> > > > > > > >
> > > > > > > > The only problem is that I would need to run the Fuego parser
> on the target board.
> > > > > > > > For that, I would need to modularize the parser into a library
> (e.g. import fuego-parser), and the board
> > > > > would
> > > > > > > need to install the python modules required by fuego-parser. This
> is on my TODO list since I proposed
> > > it
> > > > > during
> > > > > > > the last Fuego jamboree. I will try to do it as soon as i can.
> > > > > > > >
> > > > > > > > What alternatives do I have?
> > > > > > > > - send the results to LAVA through a REST API instead of having
> it monitor the serial cable? probably
> > > not
> > > > > > > possible.
> > > > > > > > - create a simplified parser on the test (e.g. using our
> log_compare function). Not ideal, but possible.
> > > > > > > >
> > > > > > > > In the end, this stems from the fact that Fuego assumes parsing
> is done in the host (to use python),
> > > while
> > > > > > > Linaro uses grep/awk/sed directly on the target. There is a trade-
> off there.
> > > > > > > >
> > > > > > > > > * existing way to run fuego tests in LAVA are hacks. The
> problem is
> > > > > > > > > they don't scale, 'scale' means remote and distributed CI
> setup.
> > > > > > > >
> > > > > > > > Yes, it is a hack.
> > > > > > > > I think Fuego is not supposed to run with LAVA, because the
> goals are very different.
> > > > > > > > But parts of Fuego can run with LAVA. This is what I think we can
> collaborate on.
> > > > > > >
> > > > > > > Yes, +1. When running with LAVA, IMHO, only the backend and
> real tests
> > > > > > > are needed.
> > > > > > >
> > > > > > > >
> > > > > > > > > * I am tring to hanld both fuego host controller and DUT with
> LAVA.
> > > > > > > > > The first part is hard part. Still tring to find a way. About the
> host
> > > > > > > > > controller part, I started with LAVA-lxc protocol, but hit some
> > > > > > > > > jenkins and docker related issues. I feel build, publish and pull
> a
> > > > > > > > > fuego docker image is the way to go now.
> > > > > > > >
> > > > > > > > I think this approach might be too hard.
> > > > > > >
> > > > > > > LAVA v2 introduced lxc-protocol. With the protocol, single node
> test
> > > > > > > job can deploy and boot a lxc container to control DUT. Here is an
> > > > > > > example: https://lkft.validation.linaro.org/scheduler/job/605270 .
> The
> > > > > > > example job use lxc contianer to deploy imgs to DUT. If DUT was
> > > > > > > configed with static IP, the IP is known to lxc container with LAVA
> > > > > > > helper lava-target-ip, then ssh connection between lxc and DUT is
> > > > > > > possible. Based on these features, I thought we can run fuego
> tests
> > > > > > > with LAVA just like how we run it now. As mentioned above,
> there is no
> > > > > > > and will be no support for docker-protocol in LAVA, and migrating
> > > > > > > fuego installation to lxc also is problemic. Please do let me know
> > > > > > > once you have a script for fuego installation. I am having problem
> to
> > > > > > > do that, hit jenkins missing, docker missing, permission issues, etc.
> > > > > > > Once I am alble to install fuego within lxc, I can propare a job
> > > > > > > example. It would be one test definition for all fuego tests. This is
> > > > > > > how we do it before. `automated/linux/workload-automation3
> > > > > > > ` is a good example.
> > > > > >
> > > > > > I see what you want to do. Using LXC sounds doable.
> > > > > > But I guess that having Fuego installed on the target (or an LXC DUT)
> would be much easier.
> > > > >
> > > > > Yeah, I guess if target run Debian based distros, then installing on
> > > > > DUT will be easier. Most of our targets run openembedded based
> distro,
> > > > > it is hard to install fuego on them. It is possible to build docker
> > > > > into these OE images and run fuego on target within docker
> container,
> > > > > but some of the boards don't have the resource for that...  In LAVA,
> > > > > LXC or LXC DUT are mainly used as host to control other ARM based
> > > > > DUTs.
> > > >
> > > > I see, maybe running Fuego on OE might require some work. Maybe it
> is easier to start with Debian images.
> > > >
> > > > > > I am going to work on the installation of Fuego natively then.
> > > > > > By the way, if you export the docker filesystem (docker export..)
> and import it in LXC you would get a DUT
> > > with
> > > > > Fuego installed. Wouldn't that solve your problem? Fuego can run
> tests on the host (see docker.board)
> > > although
> > > > > to run with "root" permissions you need to change jenkins
> permissions.
> > > > >
> > > > > I tried to test docker.board within fuego docker container, it works
> > > > > well, and yes, I hit the root permissions issue. I haven't tried to
> > > > > import fuego docker filesystem in LXC, that is a new concept to me.
> > > > > Does it require docker installed and running in LXC container? If yes,
> > > > > that is a problem in LAVA. I think we will need to modify lxc
> > > > > cofiguration somehow on lava-dispatcher to support docker in lxc.
> > > >
> > > > No, I just meant to use Docker to create the filesystem tree and then
> use it in LXC.
> > > >
> > > > > I am getting close to get my whole setup working with LAVA
> multinode
> > > > > job. Here is the test definitions in case anyone interested in
> > > > > https://github.com/chase-qi/test-
> definitions/tree/fuego/automated/linux/fuego-multinode
> > > > > . I will share a job example once I have it.
> 
> Here is a  LAVA job example
> https://lava.slipslow.ml/scheduler/job/119. The test job uses:
> * docker device as host using the customized fuego docker image, refer
> the description
> https://cloud.docker.com/u/chaseqi/repository/docker/chaseqi/standalone
> -fuego
> * raspberry pi3 as DUT, the DUT boots OE based build via tftp and nfsrootfs.
> * lava multinode protocol to sync machine status between host and dut,
> and sent IP and ssh key between them.

Just to clarify, is the Fuego docker container running on the LAVA host, but
treated like another DUT via LAVA multinode mechanisms?  Or is it running
on a separate (3rd) machine?

I prefer this approach to the "run Fuego tests natively on the board" approach
because the latter requires a lot of overhead on the target board
that we were trying to avoid (bash, python, toolchain).  The last bit,
about actually building the software, I think we'll eventually get rid of for
most users, with a server-side cache of pre-built packages.  However, I'd still like
to require as little as possible on the board.  That's the whole reason Fuego
has the architecture it does - driving test commands from the host - so that
any complex logic or coordination can happen on a more full-featured machine
rather than the target board.
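To illustrate what I mean by "driving test commands from the host", a typical
fuego_test.sh looks roughly like the sketch below (written from memory in the
style of Functional.hello_world, not an exact copy of it):

```
# Host-side test script sketch: put/report/log_compare run on the host, and
# only the actual test command is executed on the board.
function test_deploy {
    put $TEST_HOME/hello.sh $BOARD_TESTDIR/fuego.$TESTDIR/
}

function test_run {
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR && sh ./hello.sh"
}

function test_processing {
    log_compare "$TESTDIR" "1" "SUCCESS" "p"
}
```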

> 
> From my point of view, here are the pros and cons of this approach.
> Pros:
> * Doesn't need local fuego host controller any more, it is native run
> within LAVA.
> * Easy to scale. Jobs can be submitted from any client to any devices
> available.
> * Pretty fast, once the docker image pulled, on the same
> lava-dispatcher, the following test jobs will use it directly to
> create container as fuego host.
> 
> Cons:
> * Complicated, typically for new LAVA users or whoever don't want to
> touch LAVA multinode.
That's interesting.  I don't know enough about LAVA to know how big
this issue is.  Do most users not use multinode?

One aspect of the Fuego architecture that is attractive (IMHO) is that every
test is implicitly multinode, because there is always a host and a board;
when the host and the board are directly communicating and the
test is already running on the host, that simplifies
some of the multi-machine setup.  But this is only true when the connection
between the host and the board has the hardware configuration that applies
to the test (e.g. they are connected via USB for USB testing, or via network
for network testing, or serial for serial testing, etc.)

My own opinion is that this whole area of managing off-DUT hardware and
connections is not fully realized in either Fuego or LAVA, and I'm hoping
to discuss ideas about this at Connect in Bangkok.

> * Requires SSH access to the DUT. In the case of network isn't
> supported, it blocks all tests.

Fuego doesn't require SSH access, though that is the most common
'transport' used by Fuego users.  I'm not sure what you mean
when you say "if the network isn't supported, it blocks all tests".
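For context, the transport is selected per board in the board file; a minimal
sketch (using the variable names from this thread -- the values are just
examples taken from my bbb setup) looks something like:

```
# fuego-ro/boards/bbb.board (sketch; values are examples)
IPADDR="10.0.1.74"
LOGIN="root"
TRANSPORT="ssh"    # a serial transport can be used instead when available
SSH_KEY="/fuego-ro/boards/bbb_id_rsa"
```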

> 
> With your effort on native installer and non-jenkins, I think it is
> possible to do something similar with single node job. As I wrote, it
> will be lxc protocol plus DUT(with static IP). However, when network
> isn't available, then we are in trouble too.
> 
> IMHO, this is hard but a faster way to get fuego tests running within
> LAVA as it works just like how we use fuego now.  I am adding it to
> Linaro test-definitions with PR
> https://github.com/Linaro/test-definitions/pull/22 . I believe you
> guys are the best to review the PR. Any comments would be appreciated.
> 
> > > >
> > > > Great! Thanks a lot!
> > > >
> > > > Kind regards,
> > > > Daniel
> > > >
> > > >
> > > > >
> > > > > Thanks,
> > > > > Chase
> > > > >
> > > > > >
> > > > > > > Alternatively, I can lunch docker device and DUT with multinode
> job,
> > > > > > > but that is complex. And fuego docker container eats a lot of
> > > > > > > memory(blame jenkins?). The exsting docker devices in our lib
> only
> > > > > > > have 1G memory configured.
> > > > > >
> > > > > > I haven't checked the memory consumed, I guess the reason is
> Java.
> > > > > >
> > > > > > > > This is my current work-in-progress approach:
> > > > > > > > https://github.com/sangorrin/test-
> definitions/tree/master/automated/linux/fuego
> > > > > > > >
> > > > > > > > - Manual usage (run locally)
> > > > > > > >         $ git clone https://github.com/sangorrin/test-definitions
> > > > > > > >         $ cd test-definitions
> > > > > > > >         $ . ./automated/bin/setenv.sh
> > > > > > > >         $ cd automated/linux/fuego/
> > > > > > > >         $ ./fuego.sh -d Functional.hello_world
> > > > > > > >         $  tree output/
> > > > > > > >                 output/
> > > > > > > >                 ├── build <- equivalent to fuego buildzone
> > > > > > > >                 │   ├── hello
> > > > > > > >                 │   ├── hello.c
> > > > > > > >                 │   ├── Makefile
> > > > > > > >                 │   └── README.md
> > > > > > > >                 ├── fuego.Functional.hello_world <- equivalent to
> board test folder
> > > > > > > >                 │   └── hello
> > > > > > > >                 └── logs <- equivalent to logdir
> > > > > > > >                         └── testlog.txt
> > > > > > > > - test-runner usage (run on remote board)
> > > > > > > >         $ cd test-definitions
> > > > > > > >         $ test-runner -g root@192.168.1.45 -d
> ./automated/linux/fuego/fuego.yaml -s -o ../output
> > > > > > > >         $ ls ../output
> > > > > > > >                 result.csv
> > > > > > > >                 result.json
> > > > > > > >
> > > > > > > > I have yet to add the LAVA messages and prepare result.txt but
> it will be working soon.
> > > > > > >
> > > > > > > You don't have to. It looks like a done job to me. send-to-lava.sh
> > > > > > > will take care of it. When running in LAVA, the helper uses
> > > > > > > lava-test-case for result collecting, and when running without
> LAVA,
> > > > > > > the helper prints result lines in a fixed format for result parsing
> > > > > > > within test-runner. (When I writing this, I noticed your next reply,
> > > > > > > maybe I am looking at the latest code already, I will give it a spin
> > > > > > > with LAVA and come back to you)
> > > > > >
> > > > > > Thanks again for checking. I am glad that it worked for your. I have a
> LAVA setup on the CIP project so
> > > I have
> > > > > started to do tests there.
> > > > > >
> > > > > > > So basically, we are running in two different directions. From my
> > > > > > > point of view, you are porting fuego tests to Linaro test-
> definitions
> > > > > > > natively. Although I am not yet sure how the integration between
> these
> > > > > > > two projects goes, we are happy to see this happening :)
> > > > > >
> > > > > > Thanks, you are right. But porting it to Fuego misses a lot of the
> good features in Fuego such as the passing
> > > > > criteria. Perhaps your approach is better.
> 
> No really : ) As I wrote above, my approach requests ssh access to DUT
> which isn't always the case.

OK - maybe I understand your statement above better.  But I'm not sure.

> It also is one of the reasons we do
> install(optional) -> run -> parsing on target. lava-dispatcher will
> clone test repos and apply them to rootfs as overlay before
> deployment. To some extend, it solves 'no network' issue.

I presume this means that every test in LAVA requires a boot/deploy
cycle on the target board, which is a downside in that it takes longer
per test (but an upside in that each test starts with a clean slate).

I think when we have a pre-built test package cache in Fuego, we
might be able to support this same operational flow.
> 
> When dependence can be installed or pre-installed, your approach will
> work every where. We, at least me, will be very happy to see it in
> Linaro test-definitions. It is a very good example for adding fuego
> tests to test-definitions project.

Thanks very much for working on this.  We are much farther along
at integrating Fuego and LAVA than I thought we would be at this point.
And we are starting to shake out some interesting issues which I think
will help each project deal with the other's idiosyncrasies and use cases.

Regards,
 -- Tim


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-27  6:13               ` Tim.Bird
@ 2019-02-27  8:16                 ` Chase Qi
  2019-02-27 10:32                 ` Milosz Wasilewski
  1 sibling, 0 replies; 14+ messages in thread
From: Chase Qi @ 2019-02-27  8:16 UTC (permalink / raw)
  To: Tim.Bird; +Cc: Anders Roxell, fuego

Hello Tim,

On Wed, Feb 27, 2019 at 2:14 PM <Tim.Bird@sony.com> wrote:

> Sorry for the belated reply, but I was on vacation last week.
>
> I have a few comments I'll put inline below.
>
>
Thanks a lot for the comments. I hope you had a nice vacation.


>
> > -----Original Message-----
> > From: Chase Qi
> >
> > Hi Daniel,
> >
> > Thanks a lot for the comments.
> >
> > On Mon, Feb 25, 2019 at 1:35 PM <daniel.sangorrin@toshiba.co.jp> wrote:
> > >
> > > Hello Chase,
> > >
> > > > From: Chase Qi <chase.qi@linaro.org>
> > > [...]
> > > > > Today, I sent a patch series that allows you to install Fuego
> without
> > Jenkins on Docker. Maybe that will solve
> > > > your previous problems. I also submitted a few more changes to allow
> > users changing the port where Jenkins
> > > > listens.
> > > >
> > > > I noticed the patches. I definitely will give it a spin later on. I
> am
> > > > currently still using fuego v1.40 based docker image for prototyping
> > > > with LAVA multinode job.
> I think that using configuring Fuego support in LAVA as a multinode job is
> a good approach.  But I'll comment more at your pro/con list below.
>
> > > > I built and uploaded fuego docker imge with
> > > > fuego test code included here
> > > > https://cloud.docker.com/repository/docker/chaseqi/standalone-
> > fuego/tags
> > > > .  BTW, are you guys plan to publish fuego official docker image?
> > >
> > > Nice.
> > > We have talked about that but we haven't published an official image
> yet.
> With the 1.3 release, we almost released an official image.  But we ran
> into
> some issues stabilizing it to run in people's environments.  There always
> has
> to be a customization step (such as dealing with people's firewalls and
> proxies,
> and matching the user account on the host that is using the docker
> container),
> which we couldn't quite finish the work on for that release.
> As Daniel says, it's on our 'to-do' list.
>
>
Thanks for the info. It is very good to know.


> > >
> >
> > Ok, I will stick with the one I made for the moment.
> >
> > > > Here is the changes I made in the dockerfile.
> > > >
> > > > ```
> > > > $ git diff
> > > > diff --git a/Dockerfile b/Dockerfile
> > > > index 269e1f6..16586fa 100644
> > > > --- a/Dockerfile
> > > > +++ b/Dockerfile
> > > > @@ -114,6 +114,15 @@ RUN CHROME_DRIVER_VERSION=$(curl --silent -
> > -fail \
> > > >  RUN echo "jenkins ALL = (root) NOPASSWD:ALL" >> /etc/sudoers
> > > >
> > > >
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +#Install fuego
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +RUN git clone https://bitbucket.org/tbird20d/fuego.git /fuego \
> > > > +    && git clone https://bitbucket.org/tbird20d/fuego-core.git
> /fuego-
> > core \
> > > > +    && ln -s /fuego/fuego-ro/ / \
> > > > +    && ln -s /fuego/fuego-rw/ /
> > >
> > > The upstream repository has changed to
> > > https://bitbucket.org/fuegotest/fuego.git
> > > https://bitbucket.org/fuegotest/fuego-core.git
> > > Also, if you use the next branch you have to clone fuego-core within
> the
> > fuego folder now.
> > >
> >
> > Thanks for the pointers. I switched to the new links.
> >
> > > > +
> > > > +
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > >  # get ttc script and helpers
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > @@ -201,8 +210,8 @@ RUN chown -R jenkins:jenkins $JENKINS_HOME/
> > > >  # Lava
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > >
> > > > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > > > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown
> /usr/local/bin
> > > > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > > > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown
> /usr/local/bin
> > > >  # CONVENIENCE HACKS
> > > >  # not mounted, yet
> > > >  #RUN echo "fuego-create-node --board raspberrypi3" >>
> > /root/firststart.sh
> > > > @@ -218,6 +227,14 @@ RUN ln -s
> > > > /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > > >  #RUN DEBIAN_FRONTEND=noninteractive apt-get update
> > > >  #RUN DEBIAN_FRONTEND=noninteractive apt-get -yV install
> > > > crossbuild-essential-armhf cpp-arm-linux-gnueabihf
> > > > gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf
> > > >
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +#Install arm64 toolchain
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +RUN /fuego-ro/toolchains/install_cross_toolchain.sh arm64 \
> > > > +    && apt-get clean \
> > > > +    && rm -rf /tmp/* /var/tmp/*
> > > > +
> > > > +
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > >  # Setup startup command
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > ```
> > > >
> > > > >
> > > > > I am also preparing a native install script. Unfortunately, my
> time is up
> > today and I couldn't test it. I send it to
> > > > you attached in case you want to give it a try. But make sure you do
> it on
> > a container or VM where nothing bad
> > > > can happen ;)
> > > >
> > > > Thanks a lot. I just tried the script within lxc jessie container.
> > > > Most of it just works. The following run with ftc cmds are
> > > > problematic, which probably are expected. I guess we also need to
> > > > patch ftc to make jenkins and docker optional.
> > >
> > > Sorry about that, I am working on it. I will release a better script
> soon,
> > maybe today or tommorow. I will let you know when it's ready.
> > >
> > > Thanks,
> > > Daniel
> > >
> > >
> > > > ```
> > > > root@fuego-native:/fuego/fuego-core/engine# ftc list-boards
> > > > sudo: docker: command not found
> > > > $ diff native-install-jessie.sh native-install-jessie.sh.original
> > > > 86,92c86
> > > > < # ftc list-boards reports
> > > > < # sudo: docker: command not found
> > > > < # TODO: patch ftc to make docker optional.
> > > > < curl -fsSL https://get.docker.com -o get-docker.sh
> > > > < sh get-docker.sh
> > > > <
> > > > < ln -s /fuego/fuego-core/engine/scripts/ftc /usr/local/bin/
> > > > ---
> > > > > ln -s /fuego-core/scripts/ftc /usr/local/bin/
> > > >
> > > > root@fuego-native:/fuego/fuego-core/engine/scripts# git diff
> > > > diff --git a/engine/scripts/ftc b/engine/scripts/ftc
> > > > index ab2d2cb..1b09812 100755
> > > > --- a/engine/scripts/ftc
> > > > +++ b/engine/scripts/ftc
> > > > @@ -4665,7 +4665,7 @@ def main():
> > > >                  print "Can't do rm-jobs outside the container!
> Aborting."
> > > >                  sys.exit(1)
> > > >              command += arg + " "
> > > > -        container_command(command)
> > > > +        #container_command(command)
> > > >
> > > >      if len(sys.argv) < 2:
> > > >          error_out('Missing command\nUse "ftc help" to get usage
> help.', 1)
> > > > @@ -4781,7 +4781,7 @@ def main():
> > > >          # shows fuego boards
> > > >          do_list_boards(conf)
> > > >
> > > > -    import jenkins
> > > > +    #import jenkins
> > > >      server = jenkins.Jenkins('http://localhost:8080/fuego')
> > > >
> > > >      if command.startswith("add-job"):
> > > >
> > > > # ftc run-test -b raspberrypi3 -t Benchmark.fio -s default
> > > > Traceback (most recent call last):
> > > >   File "/usr/local/bin/ftc", line 4929, in <module>
> > > >     main()
> > > >   File "/usr/local/bin/ftc", line 4785, in main
> > > >     server = jenkins.Jenkins('http://localhost:8080/fuego')
> > > > AttributeError: 'NoneType' object has no attribute 'Jenkins
> > > > ```
> > > >
> > > > Thanks,
> > > > Chase
> > > >
> > > > >
> > > > > > > > > > * as you pointed, parsing fuego's test result file in
> LAVA is easy
> > to do.
> > > > > > > > >
> > > > > > > > > The only problem is that I would need to run the Fuego
> parser
> > on the target board.
> > > > > > > > > For that, I would need to modularize the parser into a
> library
> > (e.g. import fuego-parser), and the board
> > > > > > would
> > > > > > > > need to install the python modules required by fuego-parser.
> This
> > is on my TODO list since I proposed
> > > > it
> > > > > > during
> > > > > > > > the last Fuego jamboree. I will try to do it as soon as i
> can.
> > > > > > > > >
> > > > > > > > > What alternatives do I have?
> > > > > > > > > - send the results to LAVA through a REST API instead of
> having
> > it monitor the serial cable? probably
> > > > not
> > > > > > > > possible.
> > > > > > > > > - create a simplified parser on the test (e.g. using our
> > log_compare function). Not ideal, but possible.
> > > > > > > > >
> > > > > > > > > In the end, this stems from the fact that Fuego assumes
> parsing
> > is done in the host (to use python),
> > > > while
> > > > > > > > Linaro uses grep/awk/sed directly on the target. There is a
> trade-
> > off there.
> > > > > > > > >
> > > > > > > > > > * existing way to run fuego tests in LAVA are hacks. The
> > problem is
> > > > > > > > > > they don't scale, 'scale' means remote and distributed CI
> > setup.
> > > > > > > > >
> > > > > > > > > Yes, it is a hack.
> > > > > > > > > I think Fuego is not supposed to run with LAVA, because the
> > goals are very different.
> > > > > > > > > But parts of Fuego can run with LAVA. This is what I think
> we can
> > collaborate on.
> > > > > > > >
> > > > > > > > Yes, +1. When running with LAVA, IMHO, only the backend and
> > real tests
> > > > > > > > are needed.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > * I am tring to hanld both fuego host controller and DUT
> with
> > LAVA.
> > > > > > > > > > The first part is hard part. Still tring to find a way.
> About the
> > host
> > > > > > > > > > controller part, I started with LAVA-lxc protocol, but
> hit some
> > > > > > > > > > jenkins and docker related issues. I feel build, publish
> and pull
> > a
> > > > > > > > > > fuego docker image is the way to go now.
> > > > > > > > >
> > > > > > > > > I think this approach might be too hard.
> > > > > > > >
> > > > > > > > LAVA v2 introduced lxc-protocol. With the protocol, single
> node
> > test
> > > > > > > > job can deploy and boot a lxc container to control DUT. Here
> is an
> > > > > > > > example:
> https://lkft.validation.linaro.org/scheduler/job/605270 .
> > The
> > > > > > > > example job use lxc contianer to deploy imgs to DUT. If DUT
> was
> > > > > > > > configed with static IP, the IP is known to lxc container
> with LAVA
> > > > > > > > helper lava-target-ip, then ssh connection between lxc and
> DUT is
> > > > > > > > possible. Based on these features, I thought we can run fuego
> > tests
> > > > > > > > with LAVA just like how we run it now. As mentioned above,
> > there is no
> > > > > > > > and will be no support for docker-protocol in LAVA, and
> migrating
> > > > > > > > fuego installation to lxc also is problemic. Please do let
> me know
> > > > > > > > once you have a script for fuego installation. I am having
> problem
> > to
> > > > > > > > do that, hit jenkins missing, docker missing, permission
> issues, etc.
> > > > > > > > Once I am alble to install fuego within lxc, I can propare a
> job
> > > > > > > > example. It would be one test definition for all fuego
> tests. This is
> > > > > > > > how we do it before. `automated/linux/workload-automation3
> > > > > > > > ` is a good example.
> > > > > > >
> > > > > > > I see what you want to do. Using LXC sounds doable.
> > > > > > > But I guess that having Fuego installed on the target (or an
> LXC DUT)
> > would be much easier.
> > > > > >
> > > > > > Yeah, I guess if target run Debian based distros, then
> installing on
> > > > > > DUT will be easier. Most of our targets run openembedded based
> > distro,
> > > > > > it is hard to install fuego on them. It is possible to build
> docker
> > > > > > into these OE images and run fuego on target within docker
> > container,
> > > > > > but some of the boards don't have the resource for that...  In
> LAVA,
> > > > > > LXC or LXC DUT are mainly used as host to control other ARM based
> > > > > > DUTs.
> > > > >
> > > > > I see, maybe running Fuego on OE might require some work. Maybe it
> > is easier to start with Debian images.
> > > > >
> > > > > > > I am going to work on the installation of Fuego natively then.
> > > > > > > By the way, if you export the docker filesystem (docker
> export..)
> > and import it in LXC you would get a DUT
> > > > with
> > > > > > Fuego installed. Wouldn't that solve your problem? Fuego can run
> > tests on the host (see docker.board)
> > > > although
> > > > > > to run with "root" permissions you need to change jenkins
> > permissions.
> > > > > >
> > > > > > I tried to test docker.board within fuego docker container, it
> works
> > > > > > well, and yes, I hit the root permissions issue. I haven't tried
> to
> > > > > > import fuego docker filesystem in LXC, that is a new concept to
> me.
> > > > > > Does it require docker installed and running in LXC container?
> If yes,
> > > > > > that is a problem in LAVA. I think we will need to modify lxc
> > > > > > cofiguration somehow on lava-dispatcher to support docker in lxc.
> > > > >
> > > > > No, I just meant to use Docker to create the filesystem tree and
> then
> > use it in LXC.
> > > > >
> > > > > > I am getting close to get my whole setup working with LAVA
> > multinode
> > > > > > job. Here is the test definitions in case anyone interested in
> > > > > > https://github.com/chase-qi/test-
> > definitions/tree/fuego/automated/linux/fuego-multinode
> > > > > > . I will share a job example once I have it.
> >
> > Here is a  LAVA job example
> > https://lava.slipslow.ml/scheduler/job/119. The test job uses:
> > * docker device as host using the customized fuego docker image, refer
> > the description
> > https://cloud.docker.com/u/chaseqi/repository/docker/chaseqi/standalone
> > -fuego
> > * raspberry pi3 as DUT, the DUT boots OE based build via tftp and
> nfsrootfs.
> > * lava multinode protocol to sync machine status between host and dut,
> > and sent IP and ssh key between them.
>
> Just to clarify, is the Fuego docker container running on the LAVA host,
> but
> treated like another DUT via LAVA multimode mechanisms?  Or is it running
> on a separate (3rd) machine?
>
>
The Fuego docker container runs on the lava-dispatcher. It is a bit
confusing. Just like the raspberry pi3, docker is another device in
LAVA. LAVA contains two major components, lava-master and
lava-dispatcher. They can be installed on the same machine, which is
called a standalone installation, or on separate machines, which we
call a distributed installation; in that case they are connected using
ZMQ. All devices (dev boards, phones, qemu, lxc, docker, etc.) are
connected to and controlled by lava-dispatcher. Test jobs are scheduled
by lava-master. LAVA treats docker as a normal device, so users can use
it as a DUT, a controller, or whatever they need it to be. The LAVA
multinode mechanism is required when one test job needs two devices,
which is the case for fuego. In the example job, I use docker as the
fuego host and the raspberry pi3 as the DUT.
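
To make the host/DUT split a bit more concrete, here is a minimal
sketch (not taken from the actual job above) of how the two roles could
exchange the DUT address with the LAVA MultiNode shell helpers; the
message name and variable names are assumptions made up for this
example.

```
# Sketch only: assumes lava-send/lava-wait (LAVA MultiNode API helpers)
# are available in the test shell of both roles.

# DUT role (raspberry pi3): publish its IP address to the group.
# "dut-ip" is an example message name, not a LAVA-defined one.
ipaddr=$(hostname -I | awk '{print $1}')
lava-send dut-ip addr="${ipaddr}"

# Host role (fuego docker device): block until the DUT reports its
# address, read it back (the cache file path may vary between LAVA
# versions), then drive the test over ssh as fuego normally would.
lava-wait dut-ip
dut_addr=$(grep '^addr=' /tmp/lava_multi_node_cache.txt | cut -d= -f2)
ssh root@"${dut_addr}" 'echo DUT reachable'
```

In the real job the host side would of course go on to run the fuego
test phases against that address rather than a plain echo.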


> I prefer this approach to the "run Fuego tests natively on the board"
> approach
> because the latter approach requires a lot of overhead on the target board
> that we were trying to avoid (bash, python, toolchain).  The last bit,
> about actually building the software, I think we'll eventually get rid of
> for
> most users, with a  server-side cache of pre-built packages.  However, I'd
> still like
> to require as little as possible on the board.  That's the whole reason
> Fuego
> has the architecture it does - driving test commands from the host - so
> that
> any complex logic or coordination can happen on a more full-featured
> machine
> rather than the target board.
>
> >
> > From my point of view, here are the pros and cons of this approach.
> > Pros:
> > * Doesn't need local fuego host controller any more, it is native run
> > within LAVA.
> > * Easy to scale. Jobs can be submitted from any client to any devices
> > available.
> > * Pretty fast, once the docker image pulled, on the same
> > lava-dispatcher, the following test jobs will use it directly to
> > create container as fuego host.
> >
> > Cons:
> > * Complicated, typically for new LAVA users or whoever don't want to
> > touch LAVA multinode.
> That's interesting.  I don't know enough about LAVA to know how big
> this issue is.  Do most users not use multinode?
>
>
I am not sure about that. I would say most of our test jobs are
single-node jobs. Multinode is not a big issue, but getting familiar
with it takes time, especially for new test writers. Once a test
definition and a job example have been developed, it is pretty easy for
users to copy or modify them.


> One aspect of Fuego architecture that is attractive (IMHO), is that every
> test is implicitly multimode because there is always a host and a board,
> and when the host and the board are directly communicating, and the
> tests is already running on the host, it simplifies
> some of the multi-machine setup.  But this is only true when the connection
> between the host and the board has the hardware configuration that applies
> to the test (e.g. they are connected via USB for USB testing, or via
> network
> for network testing, or serial for serial testing, etc.)
>
> My own opinion is that this whole area of managing off-DUT hardware and
> connections is not fully realized in either Fuego or LAVA, and I'm hoping
> to discuss ideas about this at Connect in Bangkok.
>
>
Yeah. Good to know you guys are joining. Looks like we have started
warming up already :)


> > * Requires SSH access to the DUT. In the case of network isn't
> > supported, it blocks all tests.
>
> Fuego doesn't require SSH access, though that is the most common
> 'transport' used by most Fuego users.  I'm not sure what you mean
> when you say "if the network isn't supported, it blocks all tests".
>
>
I realized fuego supports a serial transport, but the serial port is
occupied by LAVA for devices under its control, so I am using the ssh
transport. I meant that when networking isn't supported on the DUT,
this approach cannot run any tests.
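
One small mitigation (just a sketch, not something that exists in
fuego.sh today; DUT_ADDR and the result line are made-up examples) is
to fail fast when the ssh precondition isn't met, so the job reports a
clear skip instead of a pile of timeouts:

```
# Sketch: bail out early if the DUT is not reachable over ssh, since
# the whole ssh-transport approach depends on it.
DUT_ADDR="${DUT_ADDR:-192.168.1.45}"   # example address, normally a parameter
if ! ssh -o BatchMode=yes -o ConnectTimeout=5 "root@${DUT_ADDR}" true; then
    # Report one skipped case in whatever result-line format the
    # surrounding helpers expect, then stop instead of timing out
    # test by test.
    echo "fuego-ssh-precondition skip"
    exit 0
fi
```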


> >
> > With your effort on native installer and non-jenkins, I think it is
> > possible to do something similar with single node job. As I wrote, it
> > will be lxc protocol plus DUT(with static IP). However, when network
> > isn't available, then we are in trouble too.
> >
> > IMHO, this is hard but a faster way to get fuego tests running within
> > LAVA as it works just like how we use fuego now.  I am adding it to
> > Linaro test-definitions with PR
> > https://github.com/Linaro/test-definitions/pull/22 . I believe you
> > guys are the best to review the PR. Any comments would be appreciated.
> >
> > > > >
> > > > > Great! Thanks a lot!
> > > > >
> > > > > Kind regards,
> > > > > Daniel
> > > > >
> > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Chase
> > > > > >
> > > > > > >
> > > > > > > > Alternatively, I can lunch docker device and DUT with
> multinode
> > job,
> > > > > > > > but that is complex. And fuego docker container eats a lot of
> > > > > > > > memory(blame jenkins?). The exsting docker devices in our lib
> > only
> > > > > > > > have 1G memory configured.
> > > > > > >
> > > > > > > I haven't checked the memory consumed, I guess the reason is
> > Java.
> > > > > > >
> > > > > > > > > This is my current work-in-progress approach:
> > > > > > > > > https://github.com/sangorrin/test-
> > definitions/tree/master/automated/linux/fuego
> > > > > > > > >
> > > > > > > > > - Manual usage (run locally)
> > > > > > > > >         $ git clone
> https://github.com/sangorrin/test-definitions
> > > > > > > > >         $ cd test-definitions
> > > > > > > > >         $ . ./automated/bin/setenv.sh
> > > > > > > > >         $ cd automated/linux/fuego/
> > > > > > > > >         $ ./fuego.sh -d Functional.hello_world
> > > > > > > > >         $  tree output/
> > > > > > > > >                 output/
> > > > > > > > >                 ├── build <- equivalent to fuego buildzone
> > > > > > > > >                 │   ├── hello
> > > > > > > > >                 │   ├── hello.c
> > > > > > > > >                 │   ├── Makefile
> > > > > > > > >                 │   └── README.md
> > > > > > > > >                 ├── fuego.Functional.hello_world <-
> equivalent to
> > board test folder
> > > > > > > > >                 │   └── hello
> > > > > > > > >                 └── logs <- equivalent to logdir
> > > > > > > > >                         └── testlog.txt
> > > > > > > > > - test-runner usage (run on remote board)
> > > > > > > > >         $ cd test-definitions
> > > > > > > > >         $ test-runner -g root@192.168.1.45 -d
> > ./automated/linux/fuego/fuego.yaml -s -o ../output
> > > > > > > > >         $ ls ../output
> > > > > > > > >                 result.csv
> > > > > > > > >                 result.json
> > > > > > > > >
> > > > > > > > > I have yet to add the LAVA messages and prepare result.txt
> but
> > it will be working soon.
> > > > > > > >
> > > > > > > > You don't have to. It looks like a done job to me.
> send-to-lava.sh
> > > > > > > > will take care of it. When running in LAVA, the helper uses
> > > > > > > > lava-test-case for result collecting, and when running
> without
> > LAVA,
> > > > > > > > the helper prints result lines in a fixed format for result
> parsing
> > > > > > > > within test-runner. (When I writing this, I noticed your
> next reply,
> > > > > > > > maybe I am looking at the latest code already, I will give
> it a spin
> > > > > > > > with LAVA and come back to you)
> > > > > > >
> > > > > > > Thanks again for checking. I am glad that it worked for your.
> I have a
> > LAVA setup on the CIP project so
> > > > I have
> > > > > > started to do tests there.
> > > > > > >
> > > > > > > > So basically, we are running in two different directions.
> From my
> > > > > > > > point of view, you are porting fuego tests to Linaro test-
> > definitions
> > > > > > > > natively. Although I am not yet sure how the integration
> between
> > these
> > > > > > > > two projects goes, we are happy to see this happening :)
> > > > > > >
> > > > > > > Thanks, you are right. But porting it to Fuego misses a lot of
> the
> > good features in Fuego such as the passing
> > > > > > criteria. Perhaps your approach is better.
> >
> > No really : ) As I wrote above, my approach requests ssh access to DUT
> > which isn't always the case.
>
> OK - maybe I understand your statement above better.  But I'm not sure.
>
> > It also is one of the reasons we do
> > install(optional) -> run -> parsing on target. lava-dispatcher will
> > clone test repos and apply them to rootfs as overlay before
> > deployment. To some extend, it solves 'no network' issue.
>
> I presume this means that every test in LAVA requires a boot/deploy
> cycle on the target board, which is a downside in that it takes longer
> per test (but an upside in that each test starts with a clean slate).
>
> I think when we have a pre-built test package cache in Fuego, we
> might be able to support this same operational flow.
>

A LAVA test job normally runs the full 'deploy -> boot -> test' cycle.
LAVA devices are shared, which means the device's (software) state is
unknown to the current job; the previous job may have deployed or
installed something else, and the device assigned may not even boot at
all.


> >
> > When dependence can be installed or pre-installed, your approach will
> > work every where. We, at least me, will be very happy to see it in
> > Linaro test-definitions. It is a very good example for adding fuego
> > tests to test-definitions project.
>
> Thanks very much for working on this.  We are much farther along
> at integrating Fuego and LAVA than I thought we would be at this point.
> And we are starting to shake out some interesting issues which I think
> will help each project deal with the other's idiosyncrasies and use cases.
>
>
Many thanks to you guys.

- Chase


> Regards,
>  -- Tim
>
>



* Re: [Fuego] Integration of Fuego and Linaro test-definitons
  2019-02-27  6:13               ` Tim.Bird
  2019-02-27  8:16                 ` Chase Qi
@ 2019-02-27 10:32                 ` Milosz Wasilewski
  1 sibling, 0 replies; 14+ messages in thread
From: Milosz Wasilewski @ 2019-02-27 10:32 UTC (permalink / raw)
  To: Tim.Bird; +Cc: Anders Roxell, fuego

On Wed, 27 Feb 2019 at 06:14, <Tim.Bird@sony.com> wrote:
>
> Sorry for the belated reply, but I was on vacation last week.
>
> I have a few comments I'll put inline below.
>
>
> > -----Original Message-----
> > From: Chase Qi
> >
> > Hi Daniel,
> >
> > Thanks a lot for the comments.
> >
> > On Mon, Feb 25, 2019 at 1:35 PM <daniel.sangorrin@toshiba.co.jp> wrote:
> > >
> > > Hello Chase,
> > >
> > > > From: Chase Qi <chase.qi@linaro.org>
> > > [...]
> > > > > Today, I sent a patch series that allows you to install Fuego without
> > Jenkins on Docker. Maybe that will solve
> > > > your previous problems. I also submitted a few more changes to allow
> > users changing the port where Jenkins
> > > > listens.
> > > >
> > > > I noticed the patches. I definitely will give it a spin later on. I am
> > > > currently still using fuego v1.40 based docker image for prototyping
> > > > with LAVA multinode job.
> I think that using configuring Fuego support in LAVA as a multinode job is
> a good approach.  But I'll comment more at your pro/con list below.
>
> > > > I built and uploaded fuego docker imge with
> > > > fuego test code included here
> > > > https://cloud.docker.com/repository/docker/chaseqi/standalone-
> > fuego/tags
> > > > .  BTW, are you guys plan to publish fuego official docker image?
> > >
> > > Nice.
> > > We have talked about that but we haven't published an official image yet.
> With the 1.3 release, we almost released an official image.  But we ran into
> some issues stabilizing it to run in people's environments.  There always has
> to be a customization step (such as dealing with people's firewalls and proxies,
> and matching the user account on the host that is using the docker container),
> which we couldn't quite finish the work on for that release.
> As Daniel says, it's on our 'to-do' list.
>
> > >
> >
> > Ok, I will stick with the one I made for the moment.
> >
> > > > Here is the changes I made in the dockerfile.
> > > >
> > > > ```
> > > > $ git diff
> > > > diff --git a/Dockerfile b/Dockerfile
> > > > index 269e1f6..16586fa 100644
> > > > --- a/Dockerfile
> > > > +++ b/Dockerfile
> > > > @@ -114,6 +114,15 @@ RUN CHROME_DRIVER_VERSION=$(curl --silent -
> > -fail \
> > > >  RUN echo "jenkins ALL = (root) NOPASSWD:ALL" >> /etc/sudoers
> > > >
> > > >
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +#Install fuego
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +RUN git clone https://bitbucket.org/tbird20d/fuego.git /fuego \
> > > > +    && git clone https://bitbucket.org/tbird20d/fuego-core.git /fuego-
> > core \
> > > > +    && ln -s /fuego/fuego-ro/ / \
> > > > +    && ln -s /fuego/fuego-rw/ /
> > >
> > > The upstream repository has changed to
> > > https://bitbucket.org/fuegotest/fuego.git
> > > https://bitbucket.org/fuegotest/fuego-core.git
> > > Also, if you use the next branch you have to clone fuego-core within the
> > fuego folder now.
> > >
> >
> > Thanks for the pointers. I switched to the new links.
> >
> > > > +
> > > > +
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > >  # get ttc script and helpers
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > @@ -201,8 +210,8 @@ RUN chown -R jenkins:jenkins $JENKINS_HOME/
> > > >  # Lava
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > >
> > > > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > > > -RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > > > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-setup /usr/local/bin
> > > > +# RUN ln -s /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > > >  # CONVENIENCE HACKS
> > > >  # not mounted, yet
> > > >  #RUN echo "fuego-create-node --board raspberrypi3" >>
> > /root/firststart.sh
> > > > @@ -218,6 +227,14 @@ RUN ln -s
> > > > /fuego-ro/scripts/fuego-lava-target-teardown /usr/local/bin
> > > >  #RUN DEBIAN_FRONTEND=noninteractive apt-get update
> > > >  #RUN DEBIAN_FRONTEND=noninteractive apt-get -yV install
> > > > crossbuild-essential-armhf cpp-arm-linux-gnueabihf
> > > > gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf
> > > >
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +#Install arm64 toolchain
> > > > +#
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > +RUN /fuego-ro/toolchains/install_cross_toolchain.sh arm64 \
> > > > +    && apt-get clean \
> > > > +    && rm -rf /tmp/* /var/tmp/*
> > > > +
> > > > +
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > >  # Setup startup command
> > > >  #
> > > >
> > ==========================================================
> > =======
> > > > =============
> > > > ```
> > > >
> > > > >
> > > > > I am also preparing a native install script. Unfortunately, my time is up
> > today and I couldn't test it. I send it to
> > > > you attached in case you want to give it a try. But make sure you do it on
> > a container or VM where nothing bad
> > > > can happen ;)
> > > >
> > > > Thanks a lot. I just tried the script within lxc jessie container.
> > > > Most of it just works. The following run with ftc cmds are
> > > > problematic, which probably are expected. I guess we also need to
> > > > patch ftc to make jenkins and docker optional.
> > >
> > > Sorry about that, I am working on it. I will release a better script soon,
> > maybe today or tommorow. I will let you know when it's ready.
> > >
> > > Thanks,
> > > Daniel
> > >
> > >
> > > > ```
> > > > root@fuego-native:/fuego/fuego-core/engine# ftc list-boards
> > > > sudo: docker: command not found
> > > > $ diff native-install-jessie.sh native-install-jessie.sh.original
> > > > 86,92c86
> > > > < # ftc list-boards reports
> > > > < # sudo: docker: command not found
> > > > < # TODO: patch ftc to make docker optional.
> > > > < curl -fsSL https://get.docker.com -o get-docker.sh
> > > > < sh get-docker.sh
> > > > <
> > > > < ln -s /fuego/fuego-core/engine/scripts/ftc /usr/local/bin/
> > > > ---
> > > > > ln -s /fuego-core/scripts/ftc /usr/local/bin/
> > > >
> > > > root@fuego-native:/fuego/fuego-core/engine/scripts# git diff
> > > > diff --git a/engine/scripts/ftc b/engine/scripts/ftc
> > > > index ab2d2cb..1b09812 100755
> > > > --- a/engine/scripts/ftc
> > > > +++ b/engine/scripts/ftc
> > > > @@ -4665,7 +4665,7 @@ def main():
> > > >                  print "Can't do rm-jobs outside the container! Aborting."
> > > >                  sys.exit(1)
> > > >              command += arg + " "
> > > > -        container_command(command)
> > > > +        #container_command(command)
> > > >
> > > >      if len(sys.argv) < 2:
> > > >          error_out('Missing command\nUse "ftc help" to get usage help.', 1)
> > > > @@ -4781,7 +4781,7 @@ def main():
> > > >          # shows fuego boards
> > > >          do_list_boards(conf)
> > > >
> > > > -    import jenkins
> > > > +    #import jenkins
> > > >      server = jenkins.Jenkins('http://localhost:8080/fuego')
> > > >
> > > >      if command.startswith("add-job"):
> > > >
> > > > # ftc run-test -b raspberrypi3 -t Benchmark.fio -s default
> > > > Traceback (most recent call last):
> > > >   File "/usr/local/bin/ftc", line 4929, in <module>
> > > >     main()
> > > >   File "/usr/local/bin/ftc", line 4785, in main
> > > >     server = jenkins.Jenkins('http://localhost:8080/fuego')
> > > > AttributeError: 'NoneType' object has no attribute 'Jenkins
> > > > ```
> > > >
> > > > Thanks,
> > > > Chase
> > > >
> > > > >
> > > > > > > > > > * as you pointed, parsing fuego's test result file in LAVA is easy
> > to do.
> > > > > > > > >
> > > > > > > > > The only problem is that I would need to run the Fuego parser
> > on the target board.
> > > > > > > > > For that, I would need to modularize the parser into a library
> > (e.g. import fuego-parser), and the board
> > > > > > would
> > > > > > > > need to install the python modules required by fuego-parser. This
> > is on my TODO list since I proposed
> > > > it
> > > > > > during
> > > > > > > > the last Fuego jamboree. I will try to do it as soon as i can.
> > > > > > > > >
> > > > > > > > > What alternatives do I have?
> > > > > > > > > - send the results to LAVA through a REST API instead of having
> > it monitor the serial cable? probably
> > > > not
> > > > > > > > possible.
> > > > > > > > > - create a simplified parser on the test (e.g. using our
> > log_compare function). Not ideal, but possible.
> > > > > > > > >
> > > > > > > > > In the end, this stems from the fact that Fuego assumes parsing
> > is done in the host (to use python),
> > > > while
> > > > > > > > Linaro uses grep/awk/sed directly on the target. There is a trade-
> > off there.
> > > > > > > > >
> > > > > > > > > > * existing way to run fuego tests in LAVA are hacks. The
> > problem is
> > > > > > > > > > they don't scale, 'scale' means remote and distributed CI
> > setup.
> > > > > > > > >
> > > > > > > > > Yes, it is a hack.
> > > > > > > > > I think Fuego is not supposed to run with LAVA, because the
> > goals are very different.
> > > > > > > > > But parts of Fuego can run with LAVA. This is what I think we can
> > collaborate on.
> > > > > > > >
> > > > > > > > Yes, +1. When running with LAVA, IMHO, only the backend and
> > real tests
> > > > > > > > are needed.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > * I am tring to hanld both fuego host controller and DUT with
> > LAVA.
> > > > > > > > > > The first part is hard part. Still tring to find a way. About the
> > host
> > > > > > > > > > controller part, I started with LAVA-lxc protocol, but hit some
> > > > > > > > > > jenkins and docker related issues. I feel build, publish and pull
> > a
> > > > > > > > > > fuego docker image is the way to go now.
> > > > > > > > >
> > > > > > > > > I think this approach might be too hard.
> > > > > > > >
> > > > > > > > LAVA v2 introduced lxc-protocol. With the protocol, single node
> > test
> > > > > > > > job can deploy and boot a lxc container to control DUT. Here is an
> > > > > > > > example: https://lkft.validation.linaro.org/scheduler/job/605270 .
> > The
> > > > > > > > example job use lxc contianer to deploy imgs to DUT. If DUT was
> > > > > > > > configed with static IP, the IP is known to lxc container with LAVA
> > > > > > > > helper lava-target-ip, then ssh connection between lxc and DUT is
> > > > > > > > possible. Based on these features, I thought we can run fuego
> > tests
> > > > > > > > with LAVA just like how we run it now. As mentioned above,
> > there is no
> > > > > > > > and will be no support for docker-protocol in LAVA, and migrating
> > > > > > > > fuego installation to lxc also is problemic. Please do let me know
> > > > > > > > once you have a script for fuego installation. I am having problem
> > to
> > > > > > > > do that, hit jenkins missing, docker missing, permission issues, etc.
> > > > > > > > Once I am alble to install fuego within lxc, I can propare a job
> > > > > > > > example. It would be one test definition for all fuego tests. This is
> > > > > > > > how we do it before. `automated/linux/workload-automation3
> > > > > > > > ` is a good example.
> > > > > > >
> > > > > > > I see what you want to do. Using LXC sounds doable.
> > > > > > > But I guess that having Fuego installed on the target (or an LXC DUT)
> > would be much easier.
> > > > > >
> > > > > > Yeah, I guess if target run Debian based distros, then installing on
> > > > > > DUT will be easier. Most of our targets run openembedded based
> > distro,
> > > > > > it is hard to install fuego on them. It is possible to build docker
> > > > > > into these OE images and run fuego on target within docker
> > container,
> > > > > > but some of the boards don't have the resource for that...  In LAVA,
> > > > > > LXC or LXC DUT are mainly used as host to control other ARM based
> > > > > > DUTs.
> > > > >
> > > > > I see, maybe running Fuego on OE might require some work. Maybe it
> > is easier to start with Debian images.
> > > > >
> > > > > > > I am going to work on the installation of Fuego natively then.
> > > > > > > By the way, if you export the docker filesystem (docker export..)
> > and import it in LXC you would get a DUT
> > > > with
> > > > > > Fuego installed. Wouldn't that solve your problem? Fuego can run
> > tests on the host (see docker.board)
> > > > although
> > > > > > to run with "root" permissions you need to change jenkins
> > permissions.
> > > > > >
> > > > > > I tried to test docker.board within fuego docker container, it works
> > > > > > well, and yes, I hit the root permissions issue. I haven't tried to
> > > > > > import fuego docker filesystem in LXC, that is a new concept to me.
> > > > > > Does it require docker installed and running in LXC container? If yes,
> > > > > > that is a problem in LAVA. I think we will need to modify lxc
> > > > > > cofiguration somehow on lava-dispatcher to support docker in lxc.
> > > > >
> > > > > No, I just meant to use Docker to create the filesystem tree and then
> > use it in LXC.
> > > > >
> > > > > > I am getting close to get my whole setup working with LAVA
> > multinode
> > > > > > job. Here is the test definitions in case anyone interested in
> > > > > > https://github.com/chase-qi/test-
> > definitions/tree/fuego/automated/linux/fuego-multinode
> > > > > > . I will share a job example once I have it.
> >
> > Here is a  LAVA job example
> > https://lava.slipslow.ml/scheduler/job/119. The test job uses:
> > * docker device as host using the customized fuego docker image, refer
> > the description
> > https://cloud.docker.com/u/chaseqi/repository/docker/chaseqi/standalone
> > -fuego
> > * raspberry pi3 as DUT, the DUT boots OE based build via tftp and nfsrootfs.
> > * lava multinode protocol to sync machine status between host and dut,
> > and sent IP and ssh key between them.
>
> Just to clarify, is the Fuego docker container running on the LAVA host, but
> treated like another DUT via LAVA multimode mechanisms?  Or is it running
> on a separate (3rd) machine?
>
> I prefer this approach to the "run Fuego tests natively on the board" approach
> because the latter approach requires a lot of overhead on the target board
> that we were trying to avoid (bash, python, toolchain).  The last bit,
> about actually building the software, I think we'll eventually get rid of for
> most users, with a  server-side cache of pre-built packages.  However, I'd still like
> to require as little as possible on the board.  That's the whole reason Fuego
> has the architecture it does - driving test commands from the host - so that
> any complex logic or coordination can happen on a more full-featured machine
> rather than the target board.
>
> >
> > From my point of view, here are the pros and cons of this approach.
> > Pros:
> > * Doesn't need local fuego host controller any more, it is native run
> > within LAVA.
> > * Easy to scale. Jobs can be submitted from any client to any devices
> > available.
> > * Pretty fast, once the docker image pulled, on the same
> > lava-dispatcher, the following test jobs will use it directly to
> > create container as fuego host.
> >
> > Cons:
> > * Complicated, typically for new LAVA users or whoever don't want to
> > touch LAVA multinode.
> That's interesting.  I don't know enough about LAVA to know how big
> this issue is.  Do most users not use multinode?
>
> One aspect of Fuego architecture that is attractive (IMHO), is that every
> test is implicitly multimode because there is always a host and a board,
> and when the host and the board are directly communicating, and the
> tests is already running on the host, it simplifies
> some of the multi-machine setup.  But this is only true when the connection
> between the host and the board has the hardware configuration that applies
> to the test (e.g. they are connected via USB for USB testing, or via network
> for network testing, or serial for serial testing, etc.)

In LAVA, multi-node means something different. The name isn't the
best, IMHO. It serves the purpose of synchronizing testing steps across
multiple targets. An example might be a 'server' running on DUT1 and a
'client' running on DUT2. Both DUTs have to be provisioned; if DUT1
finishes provisioning faster than DUT2, it has to wait before the
actual testing begins. This is what the multi-node helpers in LAVA do.
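
Purely as an illustration of that synchronization (the message and
barrier names and the server/client commands below are placeholders,
not an existing test definition), the two roles could look like this
with the MultiNode shell helpers:

```
# Sketch: assumes lava-send/lava-wait/lava-sync are available in the
# test shell of both DUTs.

# DUT1 ("server" role): start the service, then announce it is ready.
start-my-server &             # placeholder for the real server command
lava-send server-ready port=8080

# DUT2 ("client" role): wait until the server side is up before the
# client part of the test starts, which absorbs any provisioning skew.
lava-wait server-ready
run-my-client                 # placeholder for the real client command

# Both roles: meet at a common barrier before tearing down.
lava-sync test-done
```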

The case you're describing above is much closer to 'single node with
host', also known as 'LXC', in LAVA. In this case the LAVA job consists
of an LXC container simulating the host and a target board connected to
it. This is done to isolate the host environment from the LAVA
dispatcher. It allows us to always start with a 'clean' host and not
pollute the dispatcher with potentially dangerous tools. Unfortunately
Fuego doesn't work in LXC right now, and docker can't run inside an LXC
container. So we have to experiment with multi-node as Chase described.

milosz

>
> My own opinion is that this whole area of managing off-DUT hardware and
> connections is not fully realized in either Fuego or LAVA, and I'm hoping
> to discuss ideas about this at Connect in Bangkok.
>
> > * Requires SSH access to the DUT. In the case of network isn't
> > supported, it blocks all tests.
>
> Fuego doesn't require SSH access, though that is the most common
> 'transport' used by most Fuego users.  I'm not sure what you mean
> when you say "if the network isn't supported, it blocks all tests".
>
> >
> > With your effort on native installer and non-jenkins, I think it is
> > possible to do something similar with single node job. As I wrote, it
> > will be lxc protocol plus DUT(with static IP). However, when network
> > isn't available, then we are in trouble too.
> >
> > IMHO, this is hard but a faster way to get fuego tests running within
> > LAVA as it works just like how we use fuego now.  I am adding it to
> > Linaro test-definitions with PR
> > https://github.com/Linaro/test-definitions/pull/22 . I believe you
> > guys are the best to review the PR. Any comments would be appreciated.
> >
> > > > >
> > > > > Great! Thanks a lot!
> > > > >
> > > > > Kind regards,
> > > > > Daniel
> > > > >
> > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Chase
> > > > > >
> > > > > > >
> > > > > > > > Alternatively, I can lunch docker device and DUT with multinode
> > job,
> > > > > > > > but that is complex. And fuego docker container eats a lot of
> > > > > > > > memory(blame jenkins?). The exsting docker devices in our lib
> > only
> > > > > > > > have 1G memory configured.
> > > > > > >
> > > > > > > I haven't checked the memory consumed, I guess the reason is
> > Java.
> > > > > > >
> > > > > > > > > This is my current work-in-progress approach:
> > > > > > > > > https://github.com/sangorrin/test-
> > definitions/tree/master/automated/linux/fuego
> > > > > > > > >
> > > > > > > > > - Manual usage (run locally)
> > > > > > > > >         $ git clone https://github.com/sangorrin/test-definitions
> > > > > > > > >         $ cd test-definitions
> > > > > > > > >         $ . ./automated/bin/setenv.sh
> > > > > > > > >         $ cd automated/linux/fuego/
> > > > > > > > >         $ ./fuego.sh -d Functional.hello_world
> > > > > > > > >         $  tree output/
> > > > > > > > >                 output/
> > > > > > > > >                 ├── build <- equivalent to fuego buildzone
> > > > > > > > >                 │   ├── hello
> > > > > > > > >                 │   ├── hello.c
> > > > > > > > >                 │   ├── Makefile
> > > > > > > > >                 │   └── README.md
> > > > > > > > >                 ├── fuego.Functional.hello_world <- equivalent to
> > board test folder
> > > > > > > > >                 │   └── hello
> > > > > > > > >                 └── logs <- equivalent to logdir
> > > > > > > > >                         └── testlog.txt
> > > > > > > > > - test-runner usage (run on remote board)
> > > > > > > > >         $ cd test-definitions
> > > > > > > > >         $ test-runner -g root@192.168.1.45 -d
> > ./automated/linux/fuego/fuego.yaml -s -o ../output
> > > > > > > > >         $ ls ../output
> > > > > > > > >                 result.csv
> > > > > > > > >                 result.json
> > > > > > > > >
> > > > > > > > > I have yet to add the LAVA messages and prepare result.txt but
> > it will be working soon.
> > > > > > > >
> > > > > > > > You don't have to. It looks like a done job to me. send-to-lava.sh
> > > > > > > > will take care of it. When running in LAVA, the helper uses
> > > > > > > > lava-test-case for result collecting, and when running without
> > LAVA,
> > > > > > > > the helper prints result lines in a fixed format for result parsing
> > > > > > > > within test-runner. (When I writing this, I noticed your next reply,
> > > > > > > > maybe I am looking at the latest code already, I will give it a spin
> > > > > > > > with LAVA and come back to you)
> > > > > > >
> > > > > > > Thanks again for checking. I am glad that it worked for your. I have a
> > LAVA setup on the CIP project so
> > > > I have
> > > > > > started to do tests there.
> > > > > > >
> > > > > > > > So basically, we are running in two different directions. From my
> > > > > > > > point of view, you are porting fuego tests to Linaro test-
> > definitions
> > > > > > > > natively. Although I am not yet sure how the integration between
> > these
> > > > > > > > two projects goes, we are happy to see this happening :)
> > > > > > >
> > > > > > > Thanks, you are right. But porting it to Fuego misses a lot of the
> > good features in Fuego such as the passing
> > > > > > criteria. Perhaps your approach is better.
> >
> > No really : ) As I wrote above, my approach requests ssh access to DUT
> > which isn't always the case.
>
> OK - maybe I understand your statement above better.  But I'm not sure.
>
> > It also is one of the reasons we do
> > install(optional) -> run -> parsing on target. lava-dispatcher will
> > clone test repos and apply them to rootfs as overlay before
> > deployment. To some extend, it solves 'no network' issue.
>
> I presume this means that every test in LAVA requires a boot/deploy
> cycle on the target board, which is a downside in that it takes longer
> per test (but an upside in that each test starts with a clean slate).
>
> I think when we have a pre-built test package cache in Fuego, we
> might be able to support this same operational flow.
> >
> > When dependence can be installed or pre-installed, your approach will
> > work every where. We, at least me, will be very happy to see it in
> > Linaro test-definitions. It is a very good example for adding fuego
> > tests to test-definitions project.
>
> Thanks very much for working on this.  We are much farther along
> at integrating Fuego and LAVA than I thought we would be at this point.
> And we are starting to shake out some interesting issues which I think
> will help each project deal with the other's idiosyncrasies and use cases.
>
> Regards,
>  -- Tim
>


end of thread, other threads:[~2019-02-27 10:32 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-14  1:53 [Fuego] Integration of Fuego and Linaro test-definitons daniel.sangorrin
2019-02-14  8:10 ` daniel.sangorrin
2019-02-14  8:51   ` Chase Qi
2019-02-19  5:28     ` daniel.sangorrin
2019-02-14  8:27 ` Chase Qi
2019-02-21  5:45   ` daniel.sangorrin
2019-02-22  7:00     ` Chase Qi
2019-02-22  8:14       ` daniel.sangorrin
2019-02-25  5:24         ` Chase Qi
2019-02-25  5:35           ` daniel.sangorrin
2019-02-26  8:50             ` Chase Qi
2019-02-27  6:13               ` Tim.Bird
2019-02-27  8:16                 ` Chase Qi
2019-02-27 10:32                 ` Milosz Wasilewski
