* [Fuego] Unification of functional and benchmark tests
@ 2017-04-24  8:37 Daniel Sangorrin
  2017-04-24  8:37 ` [Fuego] [PATCH 1/3] core: the great unification " Daniel Sangorrin
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Daniel Sangorrin @ 2017-04-24  8:37 UTC (permalink / raw)
  To: fuego

Hi,

These patches contain a big change that unifies functional and
benchmark tests. The main points are summarized below; for more
detailed fixes, please check the source code.

- functional.sh and benchmark.sh have been merged into main.sh which
is called directly from Jenkins (ftc needs update).
  + TODO: rename bc.sh to fuego_test.sh (for all tests)
- Both types of tests (Functional and Benchmark) now output a
"results.json" file in the same format. When I implement "ftc report",
all those results.json files will be combined to create a pdf/html/excel report.
  + TODO: runs.json information needs to be merged into results.json
    and we should make sure that we can generate KernelCI requests as well.
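
As a rough sketch of what "ftc report" could do (the actual
results.json schema is not defined in this series, so the "name" and
"status" fields below are only assumptions for illustration):

```python
# Hypothetical aggregation of per-job results.json files, as an
# "ftc report" command might do. The schema (one "name" and one
# "status" field per file) is an assumption, not the real format.
import json

def summarize(results_files):
    """Map each job's test name to its reported status."""
    summary = {}
    for path in results_files:
        with open(path) as f:
            data = json.load(f)
        summary[data["name"]] = data["status"]  # assumed fields
    return summary
```

The real command would additionally merge in runs.json metadata and
render the combined summary as pdf/html/excel.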

fuego patches:
[PATCH] flot: unify functional and benchmark

fuego-core patches:
[PATCH 1/3] core: the great unification of functional and benchmark
[PATCH 2/3] fix expat test and add it to testplan docker
[PATCH 3/3] add Functional.jpeg test to testplan docker since it

Next steps
   - Parse LTP to produce json instead of an excel sheet
   - Add functionality for blacklisting test cases
   - merge runs.json information into results.json
   - implement ftc report

Thanks,
Daniel



* [Fuego] [PATCH 1/3] core: the great unification of functional and benchmark tests
  2017-04-24  8:37 [Fuego] Unification of functional and benchmark tests Daniel Sangorrin
@ 2017-04-24  8:37 ` Daniel Sangorrin
  2017-04-27  0:08   ` Bird, Timothy
  2017-04-24  8:37 ` [Fuego] [PATCH] flot: unify functional and benchmark Daniel Sangorrin
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 8+ messages in thread
From: Daniel Sangorrin @ 2017-04-24  8:37 UTC (permalink / raw)
  To: fuego

Until now, functional and benchmark tests were kept separate. The
problem with that was that we couldn't share important code, such
as json output generation, between the two.

For most functional tests this doesn't change anything. They
can continue using log_compare (improved by the patch) and
the core code will automatically parse the results and output
them in the same json format as the one used for benchmarks.

In other words, ALL TESTS now output a results.json.

For tests with various groups, like LTP, we can use a parser.py
instance. (TODO: implement LTP in this way).

I consider logruns and reports deprecated, so I discarded
everything related to them. They will all be replaced by an
"ftc report" command that will collect the results.json for
each job in a testplan (or on the command line) and produce a
report.
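
For reference, a test-specific parser.py under this scheme would
extract per-case values from testlog.txt and hand them to
process_data() in common.py. The 'case: value' log format in this
sketch is an invented example, not the format of any particular test:

```python
# Sketch of a per-test parser.py. Only extract_results() is concrete
# here; the 'case: value' log line format is an assumed example.
import re

def extract_results(log_text):
    """Collect {case_name: value_string} from lines like 'Dhrystone: 42'."""
    results = {}
    for m in re.finditer(r'^([\w.]+):\s*([\d.]+)\s*$', log_text, re.M):
        results[m.group(1)] = m.group(2)
    return results

# A real parser.py would then end with something roughly like:
#   import common as plib
#   sys.exit(plib.process_data(test_results=extract_results(log), label='...'))
```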

Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
---
 engine/scripts/README                              |   5 +-
 engine/scripts/benchmark.sh                        |  61 -----
 engine/scripts/ftc                                 |   8 +-
 engine/scripts/functional.sh                       |  61 -----
 engine/scripts/functions.sh                        | 274 ++++++++++-----------
 engine/scripts/generic_parser.py                   |  11 +
 engine/scripts/main.sh                             |  59 +++++
 engine/scripts/parser/common.py                    |  25 +-
 engine/tests/Benchmark.Dhrystone/Dhrystone.sh      |   2 +-
 engine/tests/Benchmark.GLMark/GLMark.sh            |   2 +-
 engine/tests/Benchmark.IOzone/IOzone.sh            |   2 +-
 engine/tests/Benchmark.Interbench/Interbench.sh    |   2 +-
 engine/tests/Benchmark.Java/Java.sh                |   2 +-
 engine/tests/Benchmark.OpenSSL/OpenSSL.sh          |   2 +-
 engine/tests/Benchmark.Stream/Stream.sh            |   2 +-
 engine/tests/Benchmark.Whetstone/Whetstone.sh      |   2 +-
 engine/tests/Benchmark.aim7/aim7.sh                |   2 +-
 engine/tests/Benchmark.blobsallad/blobsallad.sh    |   2 +-
 engine/tests/Benchmark.bonnie/bonnie.sh            |   2 +-
 engine/tests/Benchmark.cyclictest/cyclictest.sh    |   2 +-
 engine/tests/Benchmark.dbench/dbench.sh            |   2 +-
 engine/tests/Benchmark.ebizzy/ebizzy.sh            |   2 +-
 engine/tests/Benchmark.ffsb/ffsb.sh                |   2 +-
 engine/tests/Benchmark.fio/fio.sh                  |   2 +-
 .../fuego_check_plots.sh                           |   2 +-
 engine/tests/Benchmark.gtkperf/gtkperf.sh          |   2 +-
 engine/tests/Benchmark.hackbench/hackbench.sh      |   2 +-
 engine/tests/Benchmark.himeno/himeno.sh            |   2 +-
 engine/tests/Benchmark.iperf/iperf.sh              |   2 +-
 engine/tests/Benchmark.linpack/linpack.sh          |   2 +-
 engine/tests/Benchmark.lmbench2/lmbench2.sh        |   2 +-
 engine/tests/Benchmark.nbench-byte/nbench-byte.sh  |   2 +-
 engine/tests/Benchmark.nbench_byte/nbench_byte.sh  |   2 +-
 engine/tests/Benchmark.netperf/netperf.sh          |   2 +-
 engine/tests/Benchmark.reboot/reboot.sh            |   2 +-
 engine/tests/Benchmark.signaltest/signaltest.sh    |   2 +-
 engine/tests/Benchmark.tiobench/tiobench.sh        |   2 +-
 engine/tests/Benchmark.x11perf/x11perf.sh          |   2 +-
 engine/tests/Functional.LTP/LTP.sh                 |   2 +-
 engine/tests/Functional.OpenSSL/OpenSSL.sh         |   2 +-
 engine/tests/Functional.aiostress/aiostress.sh     |   2 +-
 engine/tests/Functional.arch_timer/arch_timer.sh   |   2 +-
 engine/tests/Functional.bc/bc.sh                   |   2 +-
 engine/tests/Functional.boost/boost.sh             |   2 +-
 engine/tests/Functional.bsdiff/bsdiff.sh           |   2 +-
 engine/tests/Functional.bzip2/bzip2.sh             |   2 +-
 engine/tests/Functional.cmt/cmt.sh                 |   2 +-
 .../Functional.commonAPI_C++/commonAPI_C++.sh      |   2 +-
 .../Functional.commonAPI_Dbus/commonAPI_Dbus.sh    |   2 +-
 .../commonAPI_SomeIp.sh                            |   2 +-
 engine/tests/Functional.crashme/crashme.sh         |   2 +-
 engine/tests/Functional.croco/croco.sh             |   2 +-
 engine/tests/Functional.curl/curl.sh               |   2 +-
 engine/tests/Functional.expat/expat.sh             |   2 +-
 engine/tests/Functional.fixesproto/fixesproto.sh   |   2 +-
 engine/tests/Functional.fontconfig/fontconfig.sh   |   2 +-
 engine/tests/Functional.fuego_abort/fuego_abort.sh |   2 +-
 .../fuego_board_check.sh                           |   2 +-
 .../fuego_test_phases.sh                           |   2 +-
 .../Functional.fuego_transport/fuego_transport.sh  |   2 +-
 engine/tests/Functional.fuse/fuse.sh               |   2 +-
 engine/tests/Functional.giflib/giflib-scripts.sh   |   2 +-
 engine/tests/Functional.glib/glib.sh               |   2 +-
 engine/tests/Functional.glib2/glib2-scripts.sh     |   2 +-
 engine/tests/Functional.glibc/glibc.sh             |   2 +-
 engine/tests/Functional.hciattach/hciattach.sh     |   2 +-
 engine/tests/Functional.hello_world/hello_world.sh |   2 +-
 engine/tests/Functional.imagemagick/imagemagick.sh |   2 +-
 engine/tests/Functional.iptables/iptables.sh       |   2 +-
 engine/tests/Functional.iputils/iputils.sh         |   2 +-
 engine/tests/Functional.ipv6connect/ipv6connect.sh |   2 +-
 engine/tests/Functional.jpeg/jpeg.sh               |   2 +-
 .../tests/Functional.kernel_build/kernel_build.sh  |   2 +-
 engine/tests/Functional.kmod/kmod.sh               |   2 +-
 engine/tests/Functional.libogg/ogg.sh              |   2 +-
 engine/tests/Functional.libpcap/libpcap.sh         |   2 +-
 engine/tests/Functional.librsvg/librsvg-scripts.sh |   2 +-
 engine/tests/Functional.libspeex/speex.sh          |   2 +-
 engine/tests/Functional.libtar/libtar.sh           |   2 +-
 .../tests/Functional.libwebsocket/libwebsocket.sh  |   2 +-
 .../tests/Functional.linus_stress/linus_stress.sh  |   2 +-
 engine/tests/Functional.lwip/lwip.sh               |   2 +-
 engine/tests/Functional.neon/neon-scripts.sh       |   2 +-
 .../Functional.net-tools/net-tools-scripts.sh      |   2 +-
 engine/tests/Functional.netperf/netperf.sh         |   2 +-
 engine/tests/Functional.pixman/pixman.sh           |   2 +-
 engine/tests/Functional.pppd/pppd.sh               |   2 +-
 engine/tests/Functional.protobuf/protobuf.sh       |   2 +-
 engine/tests/Functional.rmaptest/rmaptest.sh       |   2 +-
 engine/tests/Functional.scifab/scifab.sh           |   2 +-
 engine/tests/Functional.sdhi_0/sdhi_0.sh           |   2 +-
 engine/tests/Functional.stress/stress.sh           |   2 +-
 engine/tests/Functional.synctest/synctest.sh       |   2 +-
 engine/tests/Functional.thrift/thrift.sh           |   2 +-
 engine/tests/Functional.tiff/tiff.sh               |   2 +-
 engine/tests/Functional.vsomeip/vsomeip-scripts.sh |   2 +-
 engine/tests/Functional.xorg-macros/xorg-macros.sh |   2 +-
 engine/tests/Functional.zlib/zlib.sh               |   2 +-
 98 files changed, 307 insertions(+), 377 deletions(-)
 delete mode 100644 engine/scripts/benchmark.sh
 delete mode 100644 engine/scripts/functional.sh
 create mode 100755 engine/scripts/generic_parser.py
 create mode 100644 engine/scripts/main.sh

diff --git a/engine/scripts/README b/engine/scripts/README
index 41f38d7..59b2d0c 100644
--- a/engine/scripts/README
+++ b/engine/scripts/README
@@ -7,10 +7,7 @@ overlays.sh
 reports.sh
 
 -- Scripts with basic test sequences --
-benchmark.sh
-functional.sh
-stress.sh
-
+main.sh
 
 -- Toolchain config --
 tools.sh
diff --git a/engine/scripts/benchmark.sh b/engine/scripts/benchmark.sh
deleted file mode 100644
index f10d73a..0000000
--- a/engine/scripts/benchmark.sh
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright (c) 2014 Cogent Embedded, Inc.
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-# THE SOFTWARE.
-
-# DESCRIPTION
-# This script contains a sequence of calls that are needed for running benchmakr test
-
-if [ -n "$FUEGO_DEBUG" ] ; then
-	set -x
-fi
-set -e
-
-source $FUEGO_CORE/engine/scripts/overlays.sh
-set_overlay_vars
-
-source $FUEGO_CORE/engine/scripts/functions.sh
-
-source $FUEGO_CORE/engine/scripts/reports.sh
-
-echo "##### doing fuego phase: pre_test ########"
-pre_test $TESTDIR
-
-echo "##### doing fuego phase: build ########"
-if $Rebuild; then
-    build
-fi
-
-echo "##### doing fuego phase: deploy ########"
-deploy
-
-echo "##### doing fuego phase: run ########"
-test_run
-
-echo "##### doing fuego phases: get_testlog AND processing ########"
-set_testres_file
-
-FUEGO_RESULT=0
-bench_processing
-export FUEGO_RESULT=$?
-check_create_logrun
-
-post_test $TESTDIR
-echo "Fuego: all test phases complete!"
-return $FUEGO_RESULT
-
diff --git a/engine/scripts/ftc b/engine/scripts/ftc
index c534688..6552c7f 100755
--- a/engine/scripts/ftc
+++ b/engine/scripts/ftc
@@ -781,11 +781,7 @@ def get_includes(include_filename, conf):
     return inc_vars
 
 def create_job(board, test):
-    # flot only necessary for Benchmarks
-    if test.test_type == 'Benchmark':
-        flot_link = '<flotile.FlotPublisher plugin="flot@1.0-SNAPSHOT"/>'
-    else:
-        flot_link = ''
+    flot_link = '<flotile.FlotPublisher plugin="flot@1.0-SNAPSHOT"/>'
 
     # prepare links for the descriptionsetter plugin
     test_spec_path = '/fuego-core/engine/tests/%s/%s.spec' % (test.name, test.name)
@@ -838,7 +834,7 @@ export TESTDIR={testdir}
 export TESTNAME={testname}
 export TESTSPEC={testspec}
 #export FUEGO_DEBUG=1
-timeout --signal=9 {timeout} /bin/bash $FUEGO_CORE/engine/tests/${{TESTDIR}}/${{TESTNAME}}.sh
+timeout --signal=9 {timeout} /bin/bash $FUEGO_CORE/engine/scripts/main.sh
 </command>
     </hudson.tasks.Shell>
     </builders>
diff --git a/engine/scripts/functional.sh b/engine/scripts/functional.sh
deleted file mode 100644
index 9574cdd..0000000
--- a/engine/scripts/functional.sh
+++ /dev/null
@@ -1,61 +0,0 @@
-# Copyright (c) 2014 Cogent Embedded, Inc.
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-# THE SOFTWARE.
-
-# DESCRIPTION
-# This script contains a sequence of calls that are needed for running functional test
-
-if [ -n "$FUEGO_DEBUG" ] ; then
-	set -x
-fi
-set -e
-
-source $FUEGO_CORE/engine/scripts/overlays.sh
-set_overlay_vars
-
-source $FUEGO_CORE/engine/scripts/reports.sh
-source $FUEGO_CORE/engine/scripts/functions.sh
-
-echo "##### doing fuego phase: pre_test ########"
-pre_test $TESTDIR
-
-echo "##### doing fuego phase: build ########"
-if $Rebuild; then
-    build
-fi
-
-echo "##### doing fuego phase: deploy ########"
-deploy
-
-echo "##### doing fuego phase: run ########"
-call_if_present test_run
-
-echo "##### doing fuego phase: get_testlog ########"
-get_testlog $TESTDIR
-
-echo "##### doing fuego phase: processing ########"
-FUEGO_RESULT=0
-set +e
-call_if_present test_processing
-export FUEGO_RESULT=$?
-set -e
-
-post_test $TESTDIR
-echo "Fuego: all test phases complete!"
-return $FUEGO_RESULT
diff --git a/engine/scripts/functions.sh b/engine/scripts/functions.sh
index b9828c8..0e5e1ad 100755
--- a/engine/scripts/functions.sh
+++ b/engine/scripts/functions.sh
@@ -253,84 +253,126 @@ function target_setup_route_to_host () {
 }
 
 function pre_test {
-# $1 - testdir
-# Make sure the target is alive, and prepare workspace for the test
-  source $FUEGO_RO/toolchains/tools.sh
-  export SSHPASS=$PASSWORD
+    # Make sure the target is alive, and prepare workspace for the test
+    source $FUEGO_RO/toolchains/tools.sh
+    export SSHPASS=$PASSWORD
 
-  is_empty $1
+    is_empty $TESTDIR
 
-  # Setup routing to target if needed
-  [ -n "$TARGET_SETUP_LINK" ] && $TARGET_SETUP_LINK
+    # Setup routing to target if needed
+    [ -n "$TARGET_SETUP_LINK" ] && $TARGET_SETUP_LINK
 
-  cmd "true" || abort_job "Cannot connect to $DEVICE via $TRANSPORT"
+    cmd "true" || abort_job "Cannot connect to $DEVICE via $TRANSPORT"
 
-# Target cleanup flag check
-  [ "$Target_PreCleanup" = "true" ] && target_cleanup $1 || true
+    # Target cleanup flag check
+    [ "$Target_PreCleanup" = "true" ] && target_cleanup $TESTDIR || true
 
-  export LOGDIR="$FUEGO_RW/logs/$TESTDIR/${NODE_NAME}.${TESTSPEC}.${BUILD_NUMBER}.${BUILD_ID}"
+    export LOGDIR="$FUEGO_RW/logs/$TESTDIR/${NODE_NAME}.${TESTSPEC}.${BUILD_NUMBER}.${BUILD_ID}"
 
-# call test_pre_check if defined
-  call_if_present test_pre_check
+    # call test_pre_check if defined
+    call_if_present test_pre_check
 
-# Get target device firmware.
-  firmware
-  cmd "echo \"Firmware revision:\" $FWVER" || abort_job "Error while ROOTFS_FWVER command execution on target"
+    # Get target device firmware.
+    firmware
+    cmd "echo \"Firmware revision:\" $FWVER" || abort_job "Error while ROOTFS_FWVER command execution on target"
 
-# XXX: Sync date/time between target device and framework host
-# Also log memory and disk status as well as non-kernel processes,and interrupts
+    # XXX: Sync date/time between target device and framework host
+    # Also log memory and disk status as well as non-kernel processes, and interrupts
 
-  ov_rootfs_state
+    ov_rootfs_state
 
-  cmd "if [ ! -d $BOARD_TESTDIR ]; then mkdir -p $BOARD_TESTDIR; fi" || abort_job "ERROR: cannot find nor create $BOARD_TESTDIR"
+    cmd "if [ ! -d $BOARD_TESTDIR ]; then mkdir -p $BOARD_TESTDIR; fi" || abort_job "ERROR: cannot find nor create $BOARD_TESTDIR"
 
-  local fuego_test_dir=$BOARD_TESTDIR/fuego.$1
+    local fuego_test_dir=$BOARD_TESTDIR/fuego.$TESTDIR
 
-  # use a /tmp dir in case logs should be on a different partition
-  # a board file can override the default of /tmp by setting FUEGO_TARGET_TMP
-  local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$1
+    # use a /tmp dir in case logs should be on a different partition
+    # a board file can override the default of /tmp by setting FUEGO_TARGET_TMP
+    local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$TESTDIR
 
-  cmd "rm -rf ${fuego_test_dir} ${fuego_test_tmp}; mkdir -p ${fuego_test_dir}" || abort_job "Could not create ${fuego_test_dir} on $NODE_NAME"
-  # note that dump_syslogs (below) creates ${fuego_test_tmp} if needed
+    cmd "rm -rf ${fuego_test_dir} ${fuego_test_tmp}; mkdir -p ${fuego_test_dir}" || abort_job "Could not create ${fuego_test_dir} on $NODE_NAME"
+    # note that dump_syslogs (below) creates ${fuego_test_tmp} if needed
 
-# Log test name
-  ov_logger "Starting test ${JOB_NAME}"
+    # Log test name
+    ov_logger "Starting test ${JOB_NAME}"
 
-  dump_syslogs ${fuego_test_tmp} "before"
+    dump_syslogs ${fuego_test_tmp} "before"
 
-# flush buffers to physical media and drop filesystem caches to make system load more predictable during test execution
-  ov_rootfs_sync
+    # flush buffers to physical media and drop filesystem caches to make system load more predictable during test execution
+    ov_rootfs_sync
 
-  ov_rootfs_drop_caches
+    ov_rootfs_drop_caches
 }
 
-function bench_processing {
-  firmware
-  export GEN_TESTRES_FILE=$GEN_TESTRES_FILE
+function processing {
+    # PWD: /fuego-rw/buildzone/board.spec.testtype.testcase-platform
+    local fuego_test_dir=${BOARD_TESTDIR}/fuego.$TESTDIR
+    local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$TESTDIR
+    local RETURN_VALUE=0
 
-  echo -e "\n RESULT ANALYSIS \n"
+    # fetch data for processing
+    firmware
+    get $BOARD_TESTDIR/fuego.$TESTDIR/$TESTDIR.log ${LOGDIR}/testlog.txt
 
-  # Get the test results
-  get_testlog $TESTDIR $BOARD_TESTDIR/fuego.$TESTDIR/$TESTDIR.log
+    dump_syslogs ${fuego_test_tmp} "after"
+    get ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.before ${LOGDIR}/syslog.before.txt
+    if [ $? -ne 0 ] ; then
+        echo "Fuego error: Can't read 'before' system log, possibly because /tmp was cleared on boot"
+        echo "Consider setting FUEGO_TARGET_TMP in your board file to a directory on target that won't get cleared on boot"
+        touch ${LOGDIR}/syslog.before.txt
+    fi
+    get ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.after ${LOGDIR}/syslog.after.txt
 
-  PYTHON_ARGS="-W ignore::DeprecationWarning -W ignore::UserWarning"
+    # process the fetched data
+    call_if_present test_processing
+    if [ $? -ne 0 ]; then
+        echo "ERROR: test_processing returned an error"
+        RETURN_VALUE=1
+    fi
 
-  # parse the test log and create a plot
-  # return codes: 0 (everything ok), 1 (problem while parsing, see log), 2 (the results didn't satisfy the threshold)
-  run_python $PYTHON_ARGS $FUEGO_CORE/engine/tests/${TESTDIR}/parser.py && rc=0 || rc=$?
+    fail_check_cases
+    if [ $? -ne 0 ]; then
+        echo "ERROR: fail_check_cases returned an error"
+        RETURN_VALUE=1
+    fi
+
+    syslog_cmp
+    if [ $? -ne 0 ]; then
+        echo "ERROR: syslog_cmp returned an error"
+        RETURN_VALUE=1
+    fi
 
-  if [ $rc -eq 0 ] || [ $rc -eq 2 ]; then
-    # store results as a json file fro the flot plugin
-    run_python $PYTHON_ARGS $FUEGO_CORE/engine/scripts/parser/dataload.py && rc=0 || echo "dataload.py didn't work properly"
-    if [ $rc -eq 0 ]; then
-        # FIXTHIS: this should not be here
-        ln -s "../plot.png" "$LOGDIR/plot.png" || true
+    PYTHON_ARGS="-W ignore::DeprecationWarning -W ignore::UserWarning"
+    if [ -e "$TEST_HOME/parser.py" ] ; then
+        # FIXTHIS: make sure that json is generated even on failures
+        run_python $PYTHON_ARGS $FUEGO_CORE/engine/tests/${TESTDIR}/parser.py && rc=0 || rc=$?
     else
-        false
+        run_python $PYTHON_ARGS $FUEGO_CORE/engine/scripts/generic_parser.py $RETURN_VALUE && rc=0 || rc=$?
     fi
-  else
-    false
-  fi
+
+    # return codes: 0 (everything ok), 1 (problem while parsing, see log), 2 (the results didn't satisfy the threshold)
+    if [ $rc -eq 0 ] || [ $rc -eq 2 ]; then
+        # store results as a json file for the flot plugin
+        run_python $PYTHON_ARGS $FUEGO_CORE/engine/scripts/parser/dataload.py && rc=0 || echo "dataload.py didn't work properly"
+        if [ $rc -eq 0 ]; then
+            # FIXTHIS: this should not be here
+            ln -s "../plot.png" "$LOGDIR/plot.png" || true
+        else
+            echo "ERROR: problem while running dataload.py"
+            RETURN_VALUE=1
+        fi
+    else
+        echo "ERROR: problem while running the parser"
+        RETURN_VALUE=1
+    fi
+
+    # make a convenience link to the Jenkins console log, if the log doesn't exist
+    # this code assumes that if consolelog.txt doesn't exist, this was a Jenkins job build,
+    # and the console log is over in the jenkins build directory.
+    if [ ! -e $LOGDIR/consolelog.txt ] ; then
+        ln -s "/var/lib/jenkins/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log" $LOGDIR/consolelog.txt
+    fi
+
+    return $RETURN_VALUE
 }
 
 # search in test log for {!JOB_NAME}_FAIL_PATTERN_n fail cases and abort with message {!JOB_NAME}_FAIL_MESSAGE_n if found
@@ -364,9 +406,10 @@ function fail_check_cases () {
         if [ ! -z "$fpslog" ]
         then
 
-            if diff -ua ${slog_prefix}.before ${slog_prefix}.after | grep -vEf "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e $fptemplate;
+            if diff -ua ${slog_prefix}.before.txt ${slog_prefix}.after.txt | grep -vEf "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e $fptemplate;
             then
                 echo "Detected fail message in syslog diff: $fpmessage"
+                return 1
             else
                 continue
             fi
@@ -375,6 +418,7 @@ function fail_check_cases () {
         if grep -e "$fptemplate" $testlog ;
         then
             echo "Detected fail message in $testlog: $fpmessage"
+            return 1
         fi
     done
 }
@@ -448,7 +492,6 @@ function post_term_handler {
 
 # $1 - $TESTDIR
 function post_test {
-  echo "##### doing fuego phase: post_test ########"
   # reset the signal handler to avoid an infinite loop
   trap post_term_handler SIGTERM
   trap - SIGHUP SIGALRM SIGINT ERR EXIT
@@ -466,55 +509,11 @@ function post_test {
   # anything outside the test directories
   call_if_present test_cleanup
 
-  local fuego_test_dir=${BOARD_TESTDIR}/fuego.$1
-  local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$1
-
-  # log test completion message.
-  ov_logger "Test $1 is finished"
-
-  # Syslog dump
-  dump_syslogs ${fuego_test_tmp} "after"
-
-  # Get syslogs
-  set +e
-  get ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.before ${LOGDIR}/syslog.before.txt
-  if [ $? -ne 0 ] ; then
-     echo "Fuego error: Can't read 'before' system log, possibly because /tmp was cleared on boot"
-     echo "Consider setting FUEGO_TARGET_TMP in your board file to a directory on target that won't get cleared on boot"
-     touch ${LOGDIR}/syslog.before.txt
-  fi
-
-  get ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.after ${LOGDIR}/syslog.after.txt
-
-  # make a convenience link to the Jenkins console log, if the log doesn't exist
-  # this code assumes that if consolelog.txt doesn't exist, this was a Jenkins job build,
-  # and the console log is over in the jenkins build directory.
-  if [ ! -e $LOGDIR/consolelog.txt ] ; then
-     ln -s "/var/lib/jenkins/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log" $LOGDIR/consolelog.txt
-  fi
-  set -e
-
   # Remove work and log dirs
   [ "$Target_PostCleanup" = "true" ] && target_cleanup $1 || true
 
-  # Syslog comparison
-  # FIXTHIS: should affect FUEGO_RESULT
-  syslog_cmp
-
-  # FIXTHIS: should affect FUEGO_RESULT
-  fail_check_cases  || true
-
-  # create functional result file
-  # don't freak out if the parsing doesn't happen
-  set +e
-  PYTHON_ARGS="-W ignore::DeprecationWarning -W ignore::UserWarning"
-  if startswith $TESTDIR "Functional." ; then
-      if [ -e "$TEST_HOME/parser.py" ] ; then
-         # FIXTHIS: this interface has changed
-         run_python $PYTHON_ARGS "$TEST_HOME/parser.py" -t $TESTDIR -b $NODE_NAME -j $JOB_NAME -n $BUILD_NUMBER -s $BUILD_TIMESTAMP -r $FUEGO_RESULT
-      fi
-  fi
-  set -e
+  # log test completion message.
+  echo "Test $1 is finished"
 
   # Teardown communication link to target if board specifies
   # a function to do so
@@ -564,61 +563,44 @@ function build_cleanup {
 
 # sets $FUEGO_RESULT
 function log_compare {
-# 1 - $TESTDIR, 2 - number of results, 3 - Regex, 4 - n or p (i.e. negative or positive)
-
-  if [ ! $FUEGO_RESULT="0" ] ; then
-    return $FUEGO_RESULT
-  fi
+    # 1 - $TESTDIR, 2 - number of results, 3 - Regex, 4 - n, p (i.e. negative or positive)
+    local RETURN_VALUE=0
+    local PARSED_LOGFILE="testlog.${4}.txt"
+
+    if [ -f ${LOGDIR}/testlog.txt ]; then
+        current_count=`cat ${LOGDIR}/testlog.txt | grep -E "${3}" 2>&1 | wc -l`
+        if [ "$4" = "p" ]; then
+            if [ $current_count -ge $2 ] ; then
+                echo "log_compare: pattern $3 found $current_count times (expected greater than or equal to $2)"
+                FUEGO_RESULT=0
+            else
+                echo "ERROR: log_compare: pattern $3 found $current_count times (expected greater than or equal to $2)"
+                RETURN_VALUE=1
+            fi
+        fi
 
-  cd ${LOGDIR}
-  LOGFILE="testlog.txt"
-  PARSED_LOGFILE="testlog.${4}.txt"
-
-  if [ -f $LOGFILE ]; then
-    current_count=`cat $LOGFILE | grep -E "${3}" 2>&1 | wc -l`
-    if [ $current_count -eq $2 ] ; then
-      FUEGO_RESULT=0
-      # FIXTHIS: make this work and support both the p and n log files
-      # cat $LOGFILE | grep -E "${3}" | tee "$PARSED_LOGFILE"
-      # local TMP_P=`diff -u ${FUEGO_CORE}/engine/tests/${TESTDIR}/${1}_${4}.log "$PARSED_LOGFILE" 2>&1`
-      # if [ $? -ne 0 ];then
-      #   echo -e "\nFuego error reason: Unexpected test log output:\n$TMP_P\n"
-      #   check_create_functional_logrun "test error"
-      #   FUEGO_RESULT=1
-      # else
-      #   check_create_functional_logrun "passed"
-      #   FUEGO_RESULT=0
-      # fi
+        if [ "$4" = "n" ]; then
+            if [ $current_count -le $2 ] ; then
+                echo "log_compare: pattern $3 found $current_count times (expected less or equal than $2)"
+                FUEGO_RESULT=0
+            else
+                echo "ERROR: log_compare: pattern $3 found $current_count times (expected less or equal than $2)"
+                RETURN_VALUE=1
+            fi
+        fi
     else
-      echo -e "\nFuego error reason: Mismatch in expected ($2) and actual ($current_count) pos/neg ($4) results. (pattern: $3)\n"
-      check_create_functional_logrun "failed"
-      FUEGO_RESULT=1
+        echo -e "\nFuego error reason: '$LOGDIR/testlog.txt' is missing.\n"
+        RETURN_VALUE=1
     fi
-  else
-    echo -e "\nFuego error reason: '$LOGDIR/$LOGFILE' is missing.\n"
-    check_create_functional_logrun "test error"
-    FUEGO_RESULT=1
-  fi
 
-  cd -
-  return $FUEGO_RESULT
-}
-
-function get_testlog {
-# $1 - testdir,  $2 - full path to logfile
-# XXX: It will be unified
-  if [ -n "$2" ]; then
-    get ${2} ${LOGDIR}/testlog.txt
-  else
-    get $BOARD_TESTDIR/fuego.$1/$1.log ${LOGDIR}/testlog.txt
-  fi;
+    return $RETURN_VALUE
 }
 
 function syslog_cmp {
   PREFIX="$LOGDIR/syslog"
   rc=0
-  if [ -f ${PREFIX}.before ]; then
-    if diff -ua ${PREFIX}.before ${PREFIX}.after | grep -vEf "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e '\.(Bug:|Oops)'; then
+  if [ -f ${PREFIX}.before.txt ]; then
+    if diff -ua ${PREFIX}.before.txt ${PREFIX}.after.txt | grep -vEf "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e '\.(Bug:|Oops)'; then
       rc=1
     fi
   # else # special case for "reboot" test
diff --git a/engine/scripts/generic_parser.py b/engine/scripts/generic_parser.py
new file mode 100755
index 0000000..08e4443
--- /dev/null
+++ b/engine/scripts/generic_parser.py
@@ -0,0 +1,11 @@
+#!/bin/python
+# sys.argv[1]: 0 (PASS) or 1 (FAIL)
+import os
+import sys
+sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
+import common as plib
+
+print "Received: " + str(sys.argv[1])
+cur_dict = {'fail_or_pass' : sys.argv[1]}
+sys.exit(plib.process_data(test_results=cur_dict, label='FAIL or PASS'))
+
diff --git a/engine/scripts/main.sh b/engine/scripts/main.sh
new file mode 100644
index 0000000..06e4e5d
--- /dev/null
+++ b/engine/scripts/main.sh
@@ -0,0 +1,59 @@
+# Copyright (c) 2014 Cogent Embedded, Inc.
+
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+
+# DESCRIPTION
+# This script contains a sequence of calls that are needed for running a test
+
+if [ -n "$FUEGO_DEBUG" ] ; then
+    set -x
+fi
+
+source $FUEGO_CORE/engine/scripts/overlays.sh
+set_overlay_vars
+
+source $FUEGO_CORE/engine/scripts/functions.sh
+
+# FIXTHIS: use a fixed name like fuego_test.sh instead of $TESTNAME.sh
+source $FUEGO_CORE/engine/tests/$TESTDIR/$TESTNAME.sh
+
+echo "##### doing fuego phase: pre_test ########"
+pre_test
+
+echo "##### doing fuego phase: build ########"
+if $Rebuild; then
+    build
+fi
+
+echo "##### doing fuego phase: deploy ########"
+deploy
+
+echo "##### doing fuego phase: run ########"
+call_if_present test_run
+
+echo "##### doing fuego phase: processing ########"
+FUEGO_RESULT=0
+processing
+export FUEGO_RESULT=$?
+
+echo "##### doing fuego phase: post_test ########"
+post_test $TESTDIR
+
+echo "Fuego: all test phases complete!"
+exit $FUEGO_RESULT
diff --git a/engine/scripts/parser/common.py b/engine/scripts/parser/common.py
index 25baf2f..cdc2700 100644
--- a/engine/scripts/parser/common.py
+++ b/engine/scripts/parser/common.py
@@ -70,13 +70,14 @@ def write_report_results(rep_data):
         else:
                 print ("Not writing testres file")
 
-def process_data(ref_section_pat, test_results, plot_type, label):
+def process_data(ref_section_pat=None, test_results={}, plot_type='s', label='default_label'):
     """
     Parameters
     ----------
     ref_section_pat: regular expression that matches the 'section' in a
-      reference.log entry (FIXTHIS: use "^\[([\w\d&._/()-]+)\|([gle]{2})\]"
-      for all of them
+      reference.log entry. If None, we assume the same keys as the
+      test results, with a PASS or FAIL result.
+      FIXTHIS: use "^\[([\w\d&._/()-]+)\|([gle]{2})\]" for all of them
     test_results: dictionary where keys are reflog sections, and values
       are test results extracted from the test log.
     plot_type: single plot (s) multiplot (m, l, or xl)
@@ -88,7 +89,13 @@ def process_data(ref_section_pat, test_results, plot_type, label):
     if custom_write_report:
         write_report_results(cur_dict)
 
-    thresholds, criteria = read_thresholds_data(ref_section_pat)
+    if ref_section_pat:
+        thresholds, criteria = read_thresholds_data(ref_section_pat)
+    else:
+        # special case for fail or pass tests (see generic_parser.py)
+        thresholds = {'fail_or_pass': '0'}
+        criteria = {'fail_or_pass': 'eq'}
+
     rc = compare(thresholds, test_results, criteria)
     store_plot_data(thresholds, test_results)
     set_plot_properties(plot_type)
@@ -182,11 +189,8 @@ def store_plot_data(thresholds, test_results):
         plot_file = open(PLOT_DATA,"w") # Create new
 
     for key in sorted(test_results.iterkeys()):
-        thresholds_split = thresholds[key].split()
-        test_results_split = test_results[key].split()
-        for i,u in enumerate(test_results_split):
-            line = "%s %s %s %s %s %s %s %s %s %s %s\n" % (NODE_NAME, TESTDIR, TESTSPEC, BUILD_NUMBER, BUILD_ID, BUILD_TIMESTAMP, FWVER, PLATFORM, key, thresholds_split[i], test_results_split[i])
-            plot_file.write(line)
+        line = "%s %s %s %s %s %s %s %s %s %s %s\n" % (NODE_NAME, TESTDIR, TESTSPEC, BUILD_NUMBER, BUILD_ID, BUILD_TIMESTAMP, FWVER, PLATFORM, key, thresholds[key], test_results[key])
+        plot_file.write(line)
 
     plot_file.close()
     print "Data file "+PLOT_DATA+" was updated."
@@ -430,6 +434,9 @@ def compare(thresholds, test_results, criteria):
             elif criteria[key] == 'le' and comparison_result > 0:
                 hls("Test section %s: test result %s is greater than threshold %s." % (key, test_results_split[i], thresholds_split[i]),'e')
                 return 2
+            elif criteria[key] == 'eq' and comparison_result != 0:
+                hls("Test section %s: test result %s is different from %s (PASS)." % (key, test_results_split[i], thresholds_split[i]),'e')
+                return 2
             else:
                 print "Test section %s: test result %s satisfies (%s) threshold %s." % (key, test_results_split[i], criteria[key], thresholds_split[i])
     return 0
diff --git a/engine/tests/Benchmark.Dhrystone/Dhrystone.sh b/engine/tests/Benchmark.Dhrystone/Dhrystone.sh
index 617ce94..bc3f4c8 100755
--- a/engine/tests/Benchmark.Dhrystone/Dhrystone.sh
+++ b/engine/tests/Benchmark.Dhrystone/Dhrystone.sh
@@ -19,4 +19,4 @@ function test_run {
     report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./dhrystone $BENCHMARK_DHRYSTONE_LOOPS"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.GLMark/GLMark.sh b/engine/tests/Benchmark.GLMark/GLMark.sh
index 8a940f1..f409109 100755
--- a/engine/tests/Benchmark.GLMark/GLMark.sh
+++ b/engine/tests/Benchmark.GLMark/GLMark.sh
@@ -19,4 +19,4 @@ function test_run {
 	safe_cmd "{ cd $BOARD_TESTDIR/fuego.$TESTDIR; export DISPLAY=:0; xrandr |awk '/\*/ {split(\$1, a, \"x\"); print a[1], a[2], 32, 1}' > params; ./glmark &>   < params; } || { [ \$? -eq 142 ] && exit 0; }"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.IOzone/IOzone.sh b/engine/tests/Benchmark.IOzone/IOzone.sh
index 59f89e3..fe06e75 100755
--- a/engine/tests/Benchmark.IOzone/IOzone.sh
+++ b/engine/tests/Benchmark.IOzone/IOzone.sh
@@ -46,4 +46,4 @@ function test_cleanup {
     kill_procs iozone
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.Interbench/Interbench.sh b/engine/tests/Benchmark.Interbench/Interbench.sh
index 3ba1a71..5f950e2 100755
--- a/engine/tests/Benchmark.Interbench/Interbench.sh
+++ b/engine/tests/Benchmark.Interbench/Interbench.sh
@@ -17,4 +17,4 @@ function test_cleanup {
 	kill_procs interbench
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.Java/Java.sh b/engine/tests/Benchmark.Java/Java.sh
index 457d2d3..b0483d2 100755
--- a/engine/tests/Benchmark.Java/Java.sh
+++ b/engine/tests/Benchmark.Java/Java.sh
@@ -23,4 +23,4 @@ function test_cleanup {
     #kill_procs java
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.OpenSSL/OpenSSL.sh b/engine/tests/Benchmark.OpenSSL/OpenSSL.sh
index 2cc5f13..34973fb 100755
--- a/engine/tests/Benchmark.OpenSSL/OpenSSL.sh
+++ b/engine/tests/Benchmark.OpenSSL/OpenSSL.sh
@@ -10,4 +10,4 @@ function test_run {
     report "cd $BOARD_TESTDIR/fuego.$TESTDIR; apps/openssl speed"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.Stream/Stream.sh b/engine/tests/Benchmark.Stream/Stream.sh
index e29dfa6..ddeb26c 100755
--- a/engine/tests/Benchmark.Stream/Stream.sh
+++ b/engine/tests/Benchmark.Stream/Stream.sh
@@ -12,4 +12,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./stream_c.exe"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.Whetstone/Whetstone.sh b/engine/tests/Benchmark.Whetstone/Whetstone.sh
index 27e09a9..925bfcb 100755
--- a/engine/tests/Benchmark.Whetstone/Whetstone.sh
+++ b/engine/tests/Benchmark.Whetstone/Whetstone.sh
@@ -14,4 +14,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR && ./whetstone $BENCHMARK_WHETSTONE_LOOPS"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.aim7/aim7.sh b/engine/tests/Benchmark.aim7/aim7.sh
index 17bce92..2cee552 100755
--- a/engine/tests/Benchmark.aim7/aim7.sh
+++ b/engine/tests/Benchmark.aim7/aim7.sh
@@ -17,4 +17,4 @@ function test_run {
 	report_append "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./reaim -c ./data/reaim.config -f ./data/workfile.all_utime"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.blobsallad/blobsallad.sh b/engine/tests/Benchmark.blobsallad/blobsallad.sh
index b6b5268..ed5273d 100755
--- a/engine/tests/Benchmark.blobsallad/blobsallad.sh
+++ b/engine/tests/Benchmark.blobsallad/blobsallad.sh
@@ -17,4 +17,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; export DISPLAY=:0; xrandr | awk '/\*/ {split(\$1, a, \"x\"); exit(system(\"./blobsallad \" a[1]  a[2]))}'"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.bonnie/bonnie.sh b/engine/tests/Benchmark.bonnie/bonnie.sh
index edfbb65..a7f7c0d 100755
--- a/engine/tests/Benchmark.bonnie/bonnie.sh
+++ b/engine/tests/Benchmark.bonnie/bonnie.sh
@@ -29,4 +29,4 @@ function test_run {
     hd_test_clean_umount $BENCHMARK_BONNIE_MOUNT_BLOCKDEV $BENCHMARK_BONNIE_MOUNT_POINT
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.cyclictest/cyclictest.sh b/engine/tests/Benchmark.cyclictest/cyclictest.sh
index 0c1ff57..657de9b 100755
--- a/engine/tests/Benchmark.cyclictest/cyclictest.sh
+++ b/engine/tests/Benchmark.cyclictest/cyclictest.sh
@@ -13,4 +13,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./cyclictest -t 2 -l $BENCHMARK_CYCLICTEST_LOOPS -q"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.dbench/dbench.sh b/engine/tests/Benchmark.dbench/dbench.sh
index 5aedf2c..d84c820 100755
--- a/engine/tests/Benchmark.dbench/dbench.sh
+++ b/engine/tests/Benchmark.dbench/dbench.sh
@@ -25,4 +25,4 @@ function test_run {
     hd_test_clean_umount $BENCHMARK_DBENCH_MOUNT_BLOCKDEV $BENCHMARK_DBENCH_MOUNT_POINT
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.ebizzy/ebizzy.sh b/engine/tests/Benchmark.ebizzy/ebizzy.sh
index ed6ad6c..1e1e7ac 100755
--- a/engine/tests/Benchmark.ebizzy/ebizzy.sh
+++ b/engine/tests/Benchmark.ebizzy/ebizzy.sh
@@ -17,4 +17,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./ebizzy -m -n $BENCHMARK_EBIZZY_CHUNKS -P -R -s $BENCHMARK_EBIZZY_CHUNK_SIZE  -S $BENCHMARK_EBIZZY_TIME -t $BENCHMARK_EBIZZY_THREADS"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.ffsb/ffsb.sh b/engine/tests/Benchmark.ffsb/ffsb.sh
index 04fd64e..2037c8f 100755
--- a/engine/tests/Benchmark.ffsb/ffsb.sh
+++ b/engine/tests/Benchmark.ffsb/ffsb.sh
@@ -23,4 +23,4 @@ function test_run {
     hd_test_clean_umount $BENCHMARK_FFSB_MOUNT_BLOCKDEV $BENCHMARK_FFSB_MOUNT_POINT
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.fio/fio.sh b/engine/tests/Benchmark.fio/fio.sh
index 5ac95dc..212ed62 100755
--- a/engine/tests/Benchmark.fio/fio.sh
+++ b/engine/tests/Benchmark.fio/fio.sh
@@ -47,4 +47,4 @@ function test_cleanup {
     kill_procs fio
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.fuego_check_plots/fuego_check_plots.sh b/engine/tests/Benchmark.fuego_check_plots/fuego_check_plots.sh
index 88ddfc4..562b659 100644
--- a/engine/tests/Benchmark.fuego_check_plots/fuego_check_plots.sh
+++ b/engine/tests/Benchmark.fuego_check_plots/fuego_check_plots.sh
@@ -11,4 +11,4 @@ function test_run {
     report "echo fuego_check_plots result: $RESULT"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.gtkperf/gtkperf.sh b/engine/tests/Benchmark.gtkperf/gtkperf.sh
index 78b77be..20e9a29 100755
--- a/engine/tests/Benchmark.gtkperf/gtkperf.sh
+++ b/engine/tests/Benchmark.gtkperf/gtkperf.sh
@@ -27,4 +27,4 @@ function test_cleanup {
 	kill_procs gtkperf
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.hackbench/hackbench.sh b/engine/tests/Benchmark.hackbench/hackbench.sh
index c46c4e3..3b45dd3 100755
--- a/engine/tests/Benchmark.hackbench/hackbench.sh
+++ b/engine/tests/Benchmark.hackbench/hackbench.sh
@@ -12,4 +12,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hackbench $groups"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.himeno/himeno.sh b/engine/tests/Benchmark.himeno/himeno.sh
index 22fb7ab..debce2f 100755
--- a/engine/tests/Benchmark.himeno/himeno.sh
+++ b/engine/tests/Benchmark.himeno/himeno.sh
@@ -13,4 +13,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR && ./bmt"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.iperf/iperf.sh b/engine/tests/Benchmark.iperf/iperf.sh
index 3b4f55a..74b4437 100755
--- a/engine/tests/Benchmark.iperf/iperf.sh
+++ b/engine/tests/Benchmark.iperf/iperf.sh
@@ -41,4 +41,4 @@ function test_cleanup {
 	kill_procs iperf
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.linpack/linpack.sh b/engine/tests/Benchmark.linpack/linpack.sh
index b785b93..a42d665 100755
--- a/engine/tests/Benchmark.linpack/linpack.sh
+++ b/engine/tests/Benchmark.linpack/linpack.sh
@@ -13,4 +13,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR && ./linpack"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.lmbench2/lmbench2.sh b/engine/tests/Benchmark.lmbench2/lmbench2.sh
index 5bebabc..67d4afe 100755
--- a/engine/tests/Benchmark.lmbench2/lmbench2.sh
+++ b/engine/tests/Benchmark.lmbench2/lmbench2.sh
@@ -30,4 +30,4 @@ function test_cleanup {
    kill_procs lmbench lat_mem_rd par_mem
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.nbench-byte/nbench-byte.sh b/engine/tests/Benchmark.nbench-byte/nbench-byte.sh
index f9b9f3b..650b71f 100755
--- a/engine/tests/Benchmark.nbench-byte/nbench-byte.sh
+++ b/engine/tests/Benchmark.nbench-byte/nbench-byte.sh
@@ -15,4 +15,4 @@ function test_run {
 	report  "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./nbench"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.nbench_byte/nbench_byte.sh b/engine/tests/Benchmark.nbench_byte/nbench_byte.sh
index f11a7fd..42b78d1 100755
--- a/engine/tests/Benchmark.nbench_byte/nbench_byte.sh
+++ b/engine/tests/Benchmark.nbench_byte/nbench_byte.sh
@@ -15,4 +15,4 @@ function test_run {
 	report  "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./nbench"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.netperf/netperf.sh b/engine/tests/Benchmark.netperf/netperf.sh
index 9c3eb28..bcc7c4a 100755
--- a/engine/tests/Benchmark.netperf/netperf.sh
+++ b/engine/tests/Benchmark.netperf/netperf.sh
@@ -23,5 +23,5 @@ function test_run {
     report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./netperf-rabench_script $srv"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
 
diff --git a/engine/tests/Benchmark.reboot/reboot.sh b/engine/tests/Benchmark.reboot/reboot.sh
index f998f65..603e545 100755
--- a/engine/tests/Benchmark.reboot/reboot.sh
+++ b/engine/tests/Benchmark.reboot/reboot.sh
@@ -20,4 +20,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./get_reboot_time.sh"
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.signaltest/signaltest.sh b/engine/tests/Benchmark.signaltest/signaltest.sh
index 87a74e5..7c0e3fb 100755
--- a/engine/tests/Benchmark.signaltest/signaltest.sh
+++ b/engine/tests/Benchmark.signaltest/signaltest.sh
@@ -14,4 +14,4 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./signaltest  -l $BENCHMARK_SIGNALTEST_LOOPS -q"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.tiobench/tiobench.sh b/engine/tests/Benchmark.tiobench/tiobench.sh
index dc8986a..a974859 100755
--- a/engine/tests/Benchmark.tiobench/tiobench.sh
+++ b/engine/tests/Benchmark.tiobench/tiobench.sh
@@ -23,4 +23,4 @@ function test_run {
     hd_test_clean_umount $BENCHMARK_TIOBENCH_MOUNT_BLOCKDEV $BENCHMARK_TIOBENCH_MOUNT_POINT
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
diff --git a/engine/tests/Benchmark.x11perf/x11perf.sh b/engine/tests/Benchmark.x11perf/x11perf.sh
index deedb90..b706d2b 100755
--- a/engine/tests/Benchmark.x11perf/x11perf.sh
+++ b/engine/tests/Benchmark.x11perf/x11perf.sh
@@ -20,5 +20,5 @@ function test_run {
 	report "cd $BOARD_TESTDIR/fuego.$TESTDIR; export DISPLAY=:0; ./x11perf -repeat 1 -time $BENCHMARK_X11PERF_TIME -dot -oddtilerect10 -seg100c2 -64poly10complex"  
 }
 
-. $FUEGO_CORE/engine/scripts/benchmark.sh
+
 
diff --git a/engine/tests/Functional.LTP/LTP.sh b/engine/tests/Functional.LTP/LTP.sh
index 6905ec0..d20b9f7 100755
--- a/engine/tests/Functional.LTP/LTP.sh
+++ b/engine/tests/Functional.LTP/LTP.sh
@@ -179,4 +179,4 @@ function test_processing {
     fi
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.OpenSSL/OpenSSL.sh b/engine/tests/Functional.OpenSSL/OpenSSL.sh
index bfed5d1..2cc0b16 100755
--- a/engine/tests/Functional.OpenSSL/OpenSSL.sh
+++ b/engine/tests/Functional.OpenSSL/OpenSSL.sh
@@ -18,4 +18,4 @@ function test_run {
     report "cd $BOARD_TESTDIR/fuego.$TESTDIR; bash run-tests.sh"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.aiostress/aiostress.sh b/engine/tests/Functional.aiostress/aiostress.sh
index 7f7cf64..2bebb67 100755
--- a/engine/tests/Functional.aiostress/aiostress.sh
+++ b/engine/tests/Functional.aiostress/aiostress.sh
@@ -23,5 +23,5 @@ function test_processing {
 	true
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.arch_timer/arch_timer.sh b/engine/tests/Functional.arch_timer/arch_timer.sh
index 48ba491..de6922b 100755
--- a/engine/tests/Functional.arch_timer/arch_timer.sh
+++ b/engine/tests/Functional.arch_timer/arch_timer.sh
@@ -22,5 +22,5 @@ function test_processing {
     log_compare "$TESTDIR" $FUNCTIONAL_ARCH_TIMER_RES_LINES_COUNT "Test passed" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.bc/bc.sh b/engine/tests/Functional.bc/bc.sh
index 7b41fb4..7285e6b 100755
--- a/engine/tests/Functional.bc/bc.sh
+++ b/engine/tests/Functional.bc/bc.sh
@@ -25,4 +25,4 @@ function test_processing {
     log_compare "$TESTDIR" "1" "OK" "p"          
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.boost/boost.sh b/engine/tests/Functional.boost/boost.sh
index 883f771..602b9e3 100755
--- a/engine/tests/Functional.boost/boost.sh
+++ b/engine/tests/Functional.boost/boost.sh
@@ -63,6 +63,6 @@ function test_processing {
     	log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.bsdiff/bsdiff.sh b/engine/tests/Functional.bsdiff/bsdiff.sh
index 7cc5759..48723c0 100755
--- a/engine/tests/Functional.bsdiff/bsdiff.sh
+++ b/engine/tests/Functional.bsdiff/bsdiff.sh
@@ -20,4 +20,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.bzip2/bzip2.sh b/engine/tests/Functional.bzip2/bzip2.sh
index a42279a..f7b5ccc 100755
--- a/engine/tests/Functional.bzip2/bzip2.sh
+++ b/engine/tests/Functional.bzip2/bzip2.sh
@@ -38,4 +38,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.cmt/cmt.sh b/engine/tests/Functional.cmt/cmt.sh
index cbcde96..5a0482d 100755
--- a/engine/tests/Functional.cmt/cmt.sh
+++ b/engine/tests/Functional.cmt/cmt.sh
@@ -18,5 +18,5 @@ function test_processing {
     log_compare "$TESTDIR" $FUNCTIONAL_CMT_LINES_COUNT "Test passed" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.commonAPI_C++/commonAPI_C++.sh b/engine/tests/Functional.commonAPI_C++/commonAPI_C++.sh
index bb6b985..94d74c4 100755
--- a/engine/tests/Functional.commonAPI_C++/commonAPI_C++.sh
+++ b/engine/tests/Functional.commonAPI_C++/commonAPI_C++.sh
@@ -35,6 +35,6 @@ function test_processing {
     	log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.commonAPI_Dbus/commonAPI_Dbus.sh b/engine/tests/Functional.commonAPI_Dbus/commonAPI_Dbus.sh
index fbac53a..485d279 100755
--- a/engine/tests/Functional.commonAPI_Dbus/commonAPI_Dbus.sh
+++ b/engine/tests/Functional.commonAPI_Dbus/commonAPI_Dbus.sh
@@ -22,6 +22,6 @@ function test_processing {
     	log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.commonAPI_SomeIp/commonAPI_SomeIp.sh b/engine/tests/Functional.commonAPI_SomeIp/commonAPI_SomeIp.sh
index 21d5612..c47d56b 100755
--- a/engine/tests/Functional.commonAPI_SomeIp/commonAPI_SomeIp.sh
+++ b/engine/tests/Functional.commonAPI_SomeIp/commonAPI_SomeIp.sh
@@ -23,6 +23,6 @@ function test_processing {
     	log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.crashme/crashme.sh b/engine/tests/Functional.crashme/crashme.sh
index 2e93235..58eac9f 100755
--- a/engine/tests/Functional.crashme/crashme.sh
+++ b/engine/tests/Functional.crashme/crashme.sh
@@ -23,4 +23,4 @@ function test_processing {
 	log_compare "$TESTDIR" "1" "0 ...  3000" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.croco/croco.sh b/engine/tests/Functional.croco/croco.sh
index 3e4a975..f1a26dc 100755
--- a/engine/tests/Functional.croco/croco.sh
+++ b/engine/tests/Functional.croco/croco.sh
@@ -30,4 +30,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.curl/curl.sh b/engine/tests/Functional.curl/curl.sh
index 79a9e93..bdd3d88 100755
--- a/engine/tests/Functional.curl/curl.sh
+++ b/engine/tests/Functional.curl/curl.sh
@@ -20,4 +20,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.expat/expat.sh b/engine/tests/Functional.expat/expat.sh
index 431adc3..7b41cbf 100755
--- a/engine/tests/Functional.expat/expat.sh
+++ b/engine/tests/Functional.expat/expat.sh
@@ -42,4 +42,4 @@ function test_processing {
 }
 
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.fixesproto/fixesproto.sh b/engine/tests/Functional.fixesproto/fixesproto.sh
index e6c2db2..8521368 100755
--- a/engine/tests/Functional.fixesproto/fixesproto.sh
+++ b/engine/tests/Functional.fixesproto/fixesproto.sh
@@ -23,4 +23,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.fontconfig/fontconfig.sh b/engine/tests/Functional.fontconfig/fontconfig.sh
index 30af92e..a80d2e4 100755
--- a/engine/tests/Functional.fontconfig/fontconfig.sh
+++ b/engine/tests/Functional.fontconfig/fontconfig.sh
@@ -17,4 +17,4 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "TEST FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.fuego_abort/fuego_abort.sh b/engine/tests/Functional.fuego_abort/fuego_abort.sh
index 8d2606d..6865052 100755
--- a/engine/tests/Functional.fuego_abort/fuego_abort.sh
+++ b/engine/tests/Functional.fuego_abort/fuego_abort.sh
@@ -116,4 +116,4 @@ function test_cleanup {
     set -e
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.fuego_board_check/fuego_board_check.sh b/engine/tests/Functional.fuego_board_check/fuego_board_check.sh
index f0bc448..2374ae4 100755
--- a/engine/tests/Functional.fuego_board_check/fuego_board_check.sh
+++ b/engine/tests/Functional.fuego_board_check/fuego_board_check.sh
@@ -59,4 +59,4 @@ function test_processing {
     log_compare "$TESTDIR" "1" "SUCCESS" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.fuego_test_phases/fuego_test_phases.sh b/engine/tests/Functional.fuego_test_phases/fuego_test_phases.sh
index 8d9dd5a..80251c9 100755
--- a/engine/tests/Functional.fuego_test_phases/fuego_test_phases.sh
+++ b/engine/tests/Functional.fuego_test_phases/fuego_test_phases.sh
@@ -74,4 +74,4 @@ function test_cleanup {
     set -e
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.fuego_transport/fuego_transport.sh b/engine/tests/Functional.fuego_transport/fuego_transport.sh
index 658a8de..bf4667b 100644
--- a/engine/tests/Functional.fuego_transport/fuego_transport.sh
+++ b/engine/tests/Functional.fuego_transport/fuego_transport.sh
@@ -54,4 +54,4 @@ function test_processing {
     log_compare "$TESTDIR" "6" "^ok" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.fuse/fuse.sh b/engine/tests/Functional.fuse/fuse.sh
index ba422e8..2d6405a 100755
--- a/engine/tests/Functional.fuse/fuse.sh
+++ b/engine/tests/Functional.fuse/fuse.sh
@@ -24,4 +24,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.giflib/giflib-scripts.sh b/engine/tests/Functional.giflib/giflib-scripts.sh
index 3382827..6c300c9 100755
--- a/engine/tests/Functional.giflib/giflib-scripts.sh
+++ b/engine/tests/Functional.giflib/giflib-scripts.sh
@@ -49,7 +49,7 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
 
diff --git a/engine/tests/Functional.glib/glib.sh b/engine/tests/Functional.glib/glib.sh
index 8a23698..c346f05 100755
--- a/engine/tests/Functional.glib/glib.sh
+++ b/engine/tests/Functional.glib/glib.sh
@@ -48,4 +48,4 @@ function test_processing {
 	true
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.glib2/glib2-scripts.sh b/engine/tests/Functional.glib2/glib2-scripts.sh
index 03a960a..3be0ab0 100755
--- a/engine/tests/Functional.glib2/glib2-scripts.sh
+++ b/engine/tests/Functional.glib2/glib2-scripts.sh
@@ -74,6 +74,6 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.glibc/glibc.sh b/engine/tests/Functional.glibc/glibc.sh
index f6d9fee..e387900 100755
--- a/engine/tests/Functional.glibc/glibc.sh
+++ b/engine/tests/Functional.glibc/glibc.sh
@@ -55,4 +55,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.hciattach/hciattach.sh b/engine/tests/Functional.hciattach/hciattach.sh
index 77bc33b..afe66aa 100755
--- a/engine/tests/Functional.hciattach/hciattach.sh
+++ b/engine/tests/Functional.hciattach/hciattach.sh
@@ -15,4 +15,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.hello_world/hello_world.sh b/engine/tests/Functional.hello_world/hello_world.sh
index 3f39dc6..27a8344 100755
--- a/engine/tests/Functional.hello_world/hello_world.sh
+++ b/engine/tests/Functional.hello_world/hello_world.sh
@@ -19,4 +19,4 @@ function test_processing {
     log_compare "$TESTDIR" "1" "SUCCESS" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.imagemagick/imagemagick.sh b/engine/tests/Functional.imagemagick/imagemagick.sh
index 38125fd..3f1c69f 100755
--- a/engine/tests/Functional.imagemagick/imagemagick.sh
+++ b/engine/tests/Functional.imagemagick/imagemagick.sh
@@ -60,4 +60,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.iptables/iptables.sh b/engine/tests/Functional.iptables/iptables.sh
index 92d987e..023c2d9 100755
--- a/engine/tests/Functional.iptables/iptables.sh
+++ b/engine/tests/Functional.iptables/iptables.sh
@@ -67,6 +67,6 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.iputils/iputils.sh b/engine/tests/Functional.iputils/iputils.sh
index 9568e09..1a043b4 100755
--- a/engine/tests/Functional.iputils/iputils.sh
+++ b/engine/tests/Functional.iputils/iputils.sh
@@ -25,7 +25,7 @@ function test_processing {
     	log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
 
diff --git a/engine/tests/Functional.ipv6connect/ipv6connect.sh b/engine/tests/Functional.ipv6connect/ipv6connect.sh
index 75f1892..bcda1f3 100755
--- a/engine/tests/Functional.ipv6connect/ipv6connect.sh
+++ b/engine/tests/Functional.ipv6connect/ipv6connect.sh
@@ -16,4 +16,4 @@ function test_processing {
 	true
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.jpeg/jpeg.sh b/engine/tests/Functional.jpeg/jpeg.sh
index bfb6dad..55f34d6 100755
--- a/engine/tests/Functional.jpeg/jpeg.sh
+++ b/engine/tests/Functional.jpeg/jpeg.sh
@@ -32,4 +32,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.kernel_build/kernel_build.sh b/engine/tests/Functional.kernel_build/kernel_build.sh
index 285d5ba..c3c1744 100755
--- a/engine/tests/Functional.kernel_build/kernel_build.sh
+++ b/engine/tests/Functional.kernel_build/kernel_build.sh
@@ -77,4 +77,4 @@ function test_processing {
     fi
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.kmod/kmod.sh b/engine/tests/Functional.kmod/kmod.sh
index 786e576..d9236ad 100755
--- a/engine/tests/Functional.kmod/kmod.sh
+++ b/engine/tests/Functional.kmod/kmod.sh
@@ -27,6 +27,6 @@ function test_processing {
     	log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.libogg/ogg.sh b/engine/tests/Functional.libogg/ogg.sh
index 94a56a4..1a71032 100755
--- a/engine/tests/Functional.libogg/ogg.sh
+++ b/engine/tests/Functional.libogg/ogg.sh
@@ -22,4 +22,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.libpcap/libpcap.sh b/engine/tests/Functional.libpcap/libpcap.sh
index 1594b6b..b013249 100755
--- a/engine/tests/Functional.libpcap/libpcap.sh
+++ b/engine/tests/Functional.libpcap/libpcap.sh
@@ -35,7 +35,7 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
 
diff --git a/engine/tests/Functional.librsvg/librsvg-scripts.sh b/engine/tests/Functional.librsvg/librsvg-scripts.sh
index 6df8c2e..75513a0 100755
--- a/engine/tests/Functional.librsvg/librsvg-scripts.sh
+++ b/engine/tests/Functional.librsvg/librsvg-scripts.sh
@@ -49,6 +49,6 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.libspeex/speex.sh b/engine/tests/Functional.libspeex/speex.sh
index bfe9a7e..5395a88 100755
--- a/engine/tests/Functional.libspeex/speex.sh
+++ b/engine/tests/Functional.libspeex/speex.sh
@@ -22,4 +22,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.libtar/libtar.sh b/engine/tests/Functional.libtar/libtar.sh
index 6f66454..06ef34b 100755
--- a/engine/tests/Functional.libtar/libtar.sh
+++ b/engine/tests/Functional.libtar/libtar.sh
@@ -38,4 +38,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.libwebsocket/libwebsocket.sh b/engine/tests/Functional.libwebsocket/libwebsocket.sh
index 7150a4d..2d950c5 100755
--- a/engine/tests/Functional.libwebsocket/libwebsocket.sh
+++ b/engine/tests/Functional.libwebsocket/libwebsocket.sh
@@ -34,6 +34,6 @@ function test_processing {
     	log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.linus_stress/linus_stress.sh b/engine/tests/Functional.linus_stress/linus_stress.sh
index 6edeeca..b5b2afb 100755
--- a/engine/tests/Functional.linus_stress/linus_stress.sh
+++ b/engine/tests/Functional.linus_stress/linus_stress.sh
@@ -26,4 +26,4 @@ function test_processing {
 	true
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.lwip/lwip.sh b/engine/tests/Functional.lwip/lwip.sh
index 287f747..d5c8c80 100755
--- a/engine/tests/Functional.lwip/lwip.sh
+++ b/engine/tests/Functional.lwip/lwip.sh
@@ -45,4 +45,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.neon/neon-scripts.sh b/engine/tests/Functional.neon/neon-scripts.sh
index 1a3d8e2..58b4c9f 100755
--- a/engine/tests/Functional.neon/neon-scripts.sh
+++ b/engine/tests/Functional.neon/neon-scripts.sh
@@ -74,6 +74,6 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.net-tools/net-tools-scripts.sh b/engine/tests/Functional.net-tools/net-tools-scripts.sh
index 1b2704f..b25f8b7 100755
--- a/engine/tests/Functional.net-tools/net-tools-scripts.sh
+++ b/engine/tests/Functional.net-tools/net-tools-scripts.sh
@@ -26,6 +26,6 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.netperf/netperf.sh b/engine/tests/Functional.netperf/netperf.sh
index 9630e94..5a3029e 100755
--- a/engine/tests/Functional.netperf/netperf.sh
+++ b/engine/tests/Functional.netperf/netperf.sh
@@ -23,4 +23,4 @@ function test_run {
     report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./netperf-random_rr_script $srv 50 5"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.pixman/pixman.sh b/engine/tests/Functional.pixman/pixman.sh
index 798d0ce..7999b0d 100755
--- a/engine/tests/Functional.pixman/pixman.sh
+++ b/engine/tests/Functional.pixman/pixman.sh
@@ -205,5 +205,5 @@ function test_processing {
     log_compare "$TESTDIR" "30" "^TEST.*OK" "p"
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.pppd/pppd.sh b/engine/tests/Functional.pppd/pppd.sh
index e8cab80..a622f95 100755
--- a/engine/tests/Functional.pppd/pppd.sh
+++ b/engine/tests/Functional.pppd/pppd.sh
@@ -39,4 +39,4 @@ function test_processing {
     log_compare "$TESTDIR" "1" "^TEST.*OK" "p"
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.protobuf/protobuf.sh b/engine/tests/Functional.protobuf/protobuf.sh
index e8e868b..57037d3 100755
--- a/engine/tests/Functional.protobuf/protobuf.sh
+++ b/engine/tests/Functional.protobuf/protobuf.sh
@@ -43,4 +43,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.rmaptest/rmaptest.sh b/engine/tests/Functional.rmaptest/rmaptest.sh
index bee45b6..69e08b0 100755
--- a/engine/tests/Functional.rmaptest/rmaptest.sh
+++ b/engine/tests/Functional.rmaptest/rmaptest.sh
@@ -22,4 +22,4 @@ function test_processing {
 	true
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.scifab/scifab.sh b/engine/tests/Functional.scifab/scifab.sh
index ff4e85f..d874964 100755
--- a/engine/tests/Functional.scifab/scifab.sh
+++ b/engine/tests/Functional.scifab/scifab.sh
@@ -22,4 +22,4 @@ function test_processing {
     log_compare "$TESTDIR" $FUNCTIONAL_SCIFAB_RES_LINES_COUNT "Passed:$FUNCTIONAL_SCIFAB_RES_PASS_COUNT Failed:$FUNCTIONAL_SCIFAB_RES_FAIL_COUNT" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.sdhi_0/sdhi_0.sh b/engine/tests/Functional.sdhi_0/sdhi_0.sh
index 267ca07..b290808 100755
--- a/engine/tests/Functional.sdhi_0/sdhi_0.sh
+++ b/engine/tests/Functional.sdhi_0/sdhi_0.sh
@@ -21,5 +21,5 @@ function test_processing {
 
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.stress/stress.sh b/engine/tests/Functional.stress/stress.sh
index 888cec3..e9c467f 100755
--- a/engine/tests/Functional.stress/stress.sh
+++ b/engine/tests/Functional.stress/stress.sh
@@ -25,5 +25,5 @@ function test_processing {
 	log_compare "$TESTDIR" "1" "successful run completed in" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.synctest/synctest.sh b/engine/tests/Functional.synctest/synctest.sh
index 40df5ed..aff0820 100755
--- a/engine/tests/Functional.synctest/synctest.sh
+++ b/engine/tests/Functional.synctest/synctest.sh
@@ -24,5 +24,5 @@ function test_processing {
 	log_compare "$TESTDIR" "1" "PASS : sync interrupted" "p"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.thrift/thrift.sh b/engine/tests/Functional.thrift/thrift.sh
index 1995001..8310e4e 100755
--- a/engine/tests/Functional.thrift/thrift.sh
+++ b/engine/tests/Functional.thrift/thrift.sh
@@ -142,5 +142,5 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
diff --git a/engine/tests/Functional.tiff/tiff.sh b/engine/tests/Functional.tiff/tiff.sh
index 87235a0..35703fb 100755
--- a/engine/tests/Functional.tiff/tiff.sh
+++ b/engine/tests/Functional.tiff/tiff.sh
@@ -42,4 +42,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAILED" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.vsomeip/vsomeip-scripts.sh b/engine/tests/Functional.vsomeip/vsomeip-scripts.sh
index 309022c..37dcd50 100755
--- a/engine/tests/Functional.vsomeip/vsomeip-scripts.sh
+++ b/engine/tests/Functional.vsomeip/vsomeip-scripts.sh
@@ -194,6 +194,6 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
diff --git a/engine/tests/Functional.xorg-macros/xorg-macros.sh b/engine/tests/Functional.xorg-macros/xorg-macros.sh
index ffe6c3c..95490be 100755
--- a/engine/tests/Functional.xorg-macros/xorg-macros.sh
+++ b/engine/tests/Functional.xorg-macros/xorg-macros.sh
@@ -22,4 +22,4 @@ function test_processing {
     log_compare "$TESTDIR" "0" "^TEST.*FAIL" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
diff --git a/engine/tests/Functional.zlib/zlib.sh b/engine/tests/Functional.zlib/zlib.sh
index 3e70a69..7615b5d 100755
--- a/engine/tests/Functional.zlib/zlib.sh
+++ b/engine/tests/Functional.zlib/zlib.sh
@@ -29,6 +29,6 @@ function test_processing {
 	log_compare "$TESTDIR" "0" "${N_CRIT}" "n"
 }
 
-. $FUEGO_CORE/engine/scripts/functional.sh
+
 
 
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [Fuego] [PATCH] flot: unify functional and benchmark
  2017-04-24  8:37 [Fuego] Unification of functional and benchmark tests Daniel Sangorrin
  2017-04-24  8:37 ` [Fuego] [PATCH 1/3] core: the great unification " Daniel Sangorrin
@ 2017-04-24  8:37 ` Daniel Sangorrin
  2017-04-25 23:55   ` Bird, Timothy
  2017-04-24  8:37 ` [Fuego] [PATCH 2/3] fix expat test and add it to testplan docker Daniel Sangorrin
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 8+ messages in thread
From: Daniel Sangorrin @ 2017-04-24  8:37 UTC (permalink / raw)
  To: fuego

Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
---
 .../plugins/flot-plotter-plugin/src/main/webapp/flot/mod.js            | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/frontend-install/plugins/flot-plotter-plugin/src/main/webapp/flot/mod.js b/frontend-install/plugins/flot-plotter-plugin/src/main/webapp/flot/mod.js
index 1c4cc3a..d3bb785 100644
--- a/frontend-install/plugins/flot-plotter-plugin/src/main/webapp/flot/mod.js
+++ b/frontend-install/plugins/flot-plotter-plugin/src/main/webapp/flot/mod.js
@@ -28,6 +28,7 @@ var jenkins_logs_path = 'http://'+location['host'] + '/fuego/userContent/fuego.l
 
 // get the test name from the URL
 var localurl = jQuery(location).attr('href').split("/");
+var testtype = localurl[localurl.length - 2].split(".")[2] // E.g.: Functional
 var testsuite = localurl[localurl.length - 2].split(".")[3] // E.g.: Dhrystone
 
 // results.json file
@@ -230,6 +231,6 @@ function plot_all_groupnames(series) {
     });
 }
 
-jQuery.ajax({ url: jenkins_logs_path+'/Benchmark.'+testsuite+'/results.json', method: 'GET', dataType: 'json', async: false, success: plot_all_groupnames});
+jQuery.ajax({ url: jenkins_logs_path+'/'+testtype+'.'+testsuite+'/results.json', method: 'GET', dataType: 'json', async: false, success: plot_all_groupnames});
 
 })
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 8+ messages in thread
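The flot patch above derives the test type from the job name embedded in the page URL instead of hard-coding "Benchmark.". The same parsing can be sketched in shell. The URL below is made up for illustration; the job-name layout (board.spec.testtype.testsuite) follows the comment in the functions.sh hunks of patch 1/3.

```shell
#!/bin/sh
# Sketch of the job-name parsing mod.js now does in JavaScript.
# The URL is hypothetical; Fuego job names follow
# <board>.<spec>.<testtype>.<testsuite>, so dot-field 3 is the
# test type and dot-field 4 is the test suite.
url="http://myserver/job/myboard.default.Functional.jpeg/flot"
job_name=$(echo "$url" | awk -F/ '{print $(NF-1)}')
testtype=$(echo "$job_name" | cut -d. -f3)    # Functional
testsuite=$(echo "$job_name" | cut -d. -f4)   # jpeg
echo "${testtype}.${testsuite}/results.json"
```

With the hard-coded "Benchmark." prefix gone, the same flot page can fetch results.json for either test type.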

* [Fuego] [PATCH 2/3] fix expat test and add it to testplan docker
  2017-04-24  8:37 [Fuego] Unification of functional and benchmark tests Daniel Sangorrin
  2017-04-24  8:37 ` [Fuego] [PATCH 1/3] core: the great unification " Daniel Sangorrin
  2017-04-24  8:37 ` [Fuego] [PATCH] flot: unify functional and benchmark Daniel Sangorrin
@ 2017-04-24  8:37 ` Daniel Sangorrin
  2017-04-24  8:37 ` [Fuego] [PATCH 3/3] add Functional.jpeg test to testplan docker since it works Daniel Sangorrin
  2017-04-27  0:14 ` [Fuego] Unification of functional and benchmark tests Bird, Timothy
  4 siblings, 0 replies; 8+ messages in thread
From: Daniel Sangorrin @ 2017-04-24  8:37 UTC (permalink / raw)
  To: fuego

Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
---
 engine/overlays/testplans/testplan_docker.json      | 3 +++
 engine/tests/Functional.expat/Functional.expat.spec | 5 ++++-
 engine/tests/Functional.expat/expat.sh              | 8 ++++----
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/engine/overlays/testplans/testplan_docker.json b/engine/overlays/testplans/testplan_docker.json
index f8db7d5..160f04e 100644
--- a/engine/overlays/testplans/testplan_docker.json
+++ b/engine/overlays/testplans/testplan_docker.json
@@ -56,6 +56,9 @@
             "timeout": "100m"
         },
         {
+            "testName": "Functional.expat"
+        },
+        {
             "testName": "Functional.aiostress"
         },
         {
diff --git a/engine/tests/Functional.expat/Functional.expat.spec b/engine/tests/Functional.expat/Functional.expat.spec
index 7804c78..73b32b4 100644
--- a/engine/tests/Functional.expat/Functional.expat.spec
+++ b/engine/tests/Functional.expat/Functional.expat.spec
@@ -3,6 +3,9 @@
     "success_links": {"log": "testlog.txt"},
     "fail_links": {"log": "testlog.txt"},
     "specs": {
-        "default": {}
+        "default": {
+            "subtest_count_pos" : "1768",
+            "subtest_count_neg" : "41"
+        }
     }
 }
diff --git a/engine/tests/Functional.expat/expat.sh b/engine/tests/Functional.expat/expat.sh
index 7b41cbf..5a0f4f5 100755
--- a/engine/tests/Functional.expat/expat.sh
+++ b/engine/tests/Functional.expat/expat.sh
@@ -34,11 +34,11 @@ function test_run {
 }
 
 function test_processing {
-    assert_define EXPAT_SUBTEST_COUNT_POS
-    assert_define EXPAT_SUBTEST_COUNT_NEG
+    assert_define FUNCTIONAL_EXPAT_SUBTEST_COUNT_POS
+    assert_define FUNCTIONAL_EXPAT_SUBTEST_COUNT_NEG
 
-    log_compare "$TESTDIR" $EXPAT_SUBTEST_COUNT_POS "100%: Checks: 48|passed" "p"
-    log_compare "$TESTDIR" $EXPAT_SUBTEST_COUNT_NEG "failed" "n"
+    log_compare "$TESTDIR" $FUNCTIONAL_EXPAT_SUBTEST_COUNT_POS "^.*\.xml passed\.$" "p"
+    log_compare "$TESTDIR" $FUNCTIONAL_EXPAT_SUBTEST_COUNT_NEG "^.*\.xml failed\.$" "n"
 }
 
 
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 8+ messages in thread
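The expat spec now pins expected pass/fail counts (1768 and 41) that log_compare checks against the new per-file patterns. log_compare's implementation is not shown in this excerpt, but the check amounts to counting log lines that match the given pattern, roughly as follows (the three log lines are invented for the example):

```shell
#!/bin/sh
# Count pass/fail lines the way the new expat patterns imply.
# The sample log is made up; a real run would be expected to yield
# 1768 "passed" and 41 "failed" lines per the updated spec.
log=/tmp/expat_testlog.txt
cat > "$log" <<'EOF'
ns_tests.xml passed.
bad_doctype.xml failed.
utf8.xml passed.
EOF
pos=$(grep -c '^.*\.xml passed\.$' "$log")
neg=$(grep -c '^.*\.xml failed\.$' "$log")
echo "positive=$pos negative=$neg"
```

Anchoring the patterns with `^` and `\.$` avoids the earlier brittle match on `"100%: Checks: 48|passed"`.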

* [Fuego] [PATCH 3/3] add Functional.jpeg test to testplan docker since it works
  2017-04-24  8:37 [Fuego] Unification of functional and benchmark tests Daniel Sangorrin
                   ` (2 preceding siblings ...)
  2017-04-24  8:37 ` [Fuego] [PATCH 2/3] fix expat test and add it to testplan docker Daniel Sangorrin
@ 2017-04-24  8:37 ` Daniel Sangorrin
  2017-04-27  0:14 ` [Fuego] Unification of functional and benchmark tests Bird, Timothy
  4 siblings, 0 replies; 8+ messages in thread
From: Daniel Sangorrin @ 2017-04-24  8:37 UTC (permalink / raw)
  To: fuego

Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
---
 engine/overlays/testplans/testplan_docker.json | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/engine/overlays/testplans/testplan_docker.json b/engine/overlays/testplans/testplan_docker.json
index 160f04e..d018fa7 100644
--- a/engine/overlays/testplans/testplan_docker.json
+++ b/engine/overlays/testplans/testplan_docker.json
@@ -92,6 +92,9 @@
             "testName": "Functional.hello_world"
         },
         {
+            "testName": "Functional.jpeg"
+        },
+        {
             "testName": "Functional.netperf",
             "spec": "docker",
             "timeout": "10m"
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [Fuego] [PATCH] flot: unify functional and benchmark
  2017-04-24  8:37 ` [Fuego] [PATCH] flot: unify functional and benchmark Daniel Sangorrin
@ 2017-04-25 23:55   ` Bird, Timothy
  0 siblings, 0 replies; 8+ messages in thread
From: Bird, Timothy @ 2017-04-25 23:55 UTC (permalink / raw)
  To: Daniel Sangorrin, fuego

Accepted into 'next'.
 -- Tim


> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org [mailto:fuego-
> bounces@lists.linuxfoundation.org] On Behalf Of Daniel Sangorrin
> Sent: Monday, April 24, 2017 1:38 AM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] [PATCH] flot: unify functional and benchmark
> 
> Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> ---
>  .../plugins/flot-plotter-plugin/src/main/webapp/flot/mod.js            | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/frontend-install/plugins/flot-plotter-
> plugin/src/main/webapp/flot/mod.js b/frontend-install/plugins/flot-plotter-
> plugin/src/main/webapp/flot/mod.js
> index 1c4cc3a..d3bb785 100644
> --- a/frontend-install/plugins/flot-plotter-
> plugin/src/main/webapp/flot/mod.js
> +++ b/frontend-install/plugins/flot-plotter-
> plugin/src/main/webapp/flot/mod.js
> @@ -28,6 +28,7 @@ var jenkins_logs_path = 'http://'+location['host'] +
> '/fuego/userContent/fuego.l
> 
>  // get the test name from the URL
>  var localurl = jQuery(location).attr('href').split("/");
> +var testtype = localurl[localurl.length - 2].split(".")[2] // E.g.: Functional
>  var testsuite = localurl[localurl.length - 2].split(".")[3] // E.g.: Dhrystone
> 
>  // results.json file
> @@ -230,6 +231,6 @@ function plot_all_groupnames(series) {
>      });
>  }
> 
> -jQuery.ajax({ url:
> jenkins_logs_path+'/Benchmark.'+testsuite+'/results.json', method: 'GET',
> dataType: 'json', async: false, success: plot_all_groupnames});
> +jQuery.ajax({ url:
> jenkins_logs_path+'/'+testtype+'.'+testsuite+'/results.json', method: 'GET',
> dataType: 'json', async: false, success: plot_all_groupnames});
> 
>  })
> --
> 2.7.4
> 
> 
> _______________________________________________
> Fuego mailing list
> Fuego@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/fuego

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Fuego] [PATCH 1/3] core: the great unification of functional and benchmark tests
  2017-04-24  8:37 ` [Fuego] [PATCH 1/3] core: the great unification " Daniel Sangorrin
@ 2017-04-27  0:08   ` Bird, Timothy
  0 siblings, 0 replies; 8+ messages in thread
From: Bird, Timothy @ 2017-04-27  0:08 UTC (permalink / raw)
  To: Daniel Sangorrin, fuego

Looks good.  Accepted into my 'next' branch.

A few minor comments inline below.
 -- Tim

> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org [mailto:fuego-
> bounces@lists.linuxfoundation.org] On Behalf Of Daniel Sangorrin
> Sent: Monday, April 24, 2017 1:38 AM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] [PATCH 1/3] core: the great unification of functional and
> benchmark tests
> 
> Until now we had functional and benchmark tests separated. The
> problem with that is that we couldn't share important code
> such as json output generation between the two.
> 
> For most functional tests this doesn't change anything. They
> can continue using log_compare (improved by the patch) and
> the core code will automatically parse the results and output
> them in the same json format as the one used for benchmarks.
> 
> In other words, ALL TESTS now output a results.json.
> 
> For tests with various groups, like LTP, we can use a parser.py
> instance. (TODO: implement LTP in this way).
> 
> I consider logruns and reports deprecated so I discarded
> everything related. It will all be replaced by an
> "ftc report" command that will collect the results.json for
> each job in a testplan (or in the command line) and produce a
> report.
As long as the functionality shows up eventually, that should
be OK.  logruns basically just captured all the data from a batch
job into a single file, for use in generating a report for that batch
run.
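For reference, a minimal sketch of the collection step such an "ftc report" command could start from, assuming per-test results.json files land under a logs tree (the directory names and JSON contents below are invented; only the "all tests emit results.json" premise comes from the patch):

```shell
#!/bin/sh
# Hypothetical: gather every results.json under a logs tree, as a
# future "ftc report" might, before merging them into one report.
LOGS=/tmp/fuego-logs-demo
mkdir -p "$LOGS/Functional.jpeg" "$LOGS/Benchmark.Dhrystone"
echo '{"status": "PASS"}' > "$LOGS/Functional.jpeg/results.json"
echo '{"status": "FAIL"}' > "$LOGS/Benchmark.Dhrystone/results.json"
count=$(find "$LOGS" -name results.json | wc -l)
echo "collected $count results files"
```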

> 
> Signed-off-by: Daniel Sangorrin <daniel.sangorrin@toshiba.co.jp>
> ---
>  engine/scripts/README                              |   5 +-
>  engine/scripts/benchmark.sh                        |  61 -----
>  engine/scripts/ftc                                 |   8 +-
>  engine/scripts/functional.sh                       |  61 -----
>  engine/scripts/functions.sh                        | 274 ++++++++++-----------
>  engine/scripts/generic_parser.py                   |  11 +
>  engine/scripts/main.sh                             |  59 +++++
>  engine/scripts/parser/common.py                    |  25 +-
>  engine/tests/Benchmark.Dhrystone/Dhrystone.sh      |   2 +-
>  engine/tests/Benchmark.GLMark/GLMark.sh            |   2 +-
>  engine/tests/Benchmark.IOzone/IOzone.sh            |   2 +-
>  engine/tests/Benchmark.Interbench/Interbench.sh    |   2 +-
>  engine/tests/Benchmark.Java/Java.sh                |   2 +-
>  engine/tests/Benchmark.OpenSSL/OpenSSL.sh          |   2 +-
>  engine/tests/Benchmark.Stream/Stream.sh            |   2 +-
>  engine/tests/Benchmark.Whetstone/Whetstone.sh      |   2 +-
>  engine/tests/Benchmark.aim7/aim7.sh                |   2 +-
>  engine/tests/Benchmark.blobsallad/blobsallad.sh    |   2 +-
>  engine/tests/Benchmark.bonnie/bonnie.sh            |   2 +-
>  engine/tests/Benchmark.cyclictest/cyclictest.sh    |   2 +-
>  engine/tests/Benchmark.dbench/dbench.sh            |   2 +-
>  engine/tests/Benchmark.ebizzy/ebizzy.sh            |   2 +-
>  engine/tests/Benchmark.ffsb/ffsb.sh                |   2 +-
>  engine/tests/Benchmark.fio/fio.sh                  |   2 +-
>  .../fuego_check_plots.sh                           |   2 +-
>  engine/tests/Benchmark.gtkperf/gtkperf.sh          |   2 +-
>  engine/tests/Benchmark.hackbench/hackbench.sh      |   2 +-
>  engine/tests/Benchmark.himeno/himeno.sh            |   2 +-
>  engine/tests/Benchmark.iperf/iperf.sh              |   2 +-
>  engine/tests/Benchmark.linpack/linpack.sh          |   2 +-
>  engine/tests/Benchmark.lmbench2/lmbench2.sh        |   2 +-
>  engine/tests/Benchmark.nbench-byte/nbench-byte.sh  |   2 +-
>  engine/tests/Benchmark.nbench_byte/nbench_byte.sh  |   2 +-
>  engine/tests/Benchmark.netperf/netperf.sh          |   2 +-
>  engine/tests/Benchmark.reboot/reboot.sh            |   2 +-
>  engine/tests/Benchmark.signaltest/signaltest.sh    |   2 +-
>  engine/tests/Benchmark.tiobench/tiobench.sh        |   2 +-
>  engine/tests/Benchmark.x11perf/x11perf.sh          |   2 +-
>  engine/tests/Functional.LTP/LTP.sh                 |   2 +-
>  engine/tests/Functional.OpenSSL/OpenSSL.sh         |   2 +-
>  engine/tests/Functional.aiostress/aiostress.sh     |   2 +-
>  engine/tests/Functional.arch_timer/arch_timer.sh   |   2 +-
>  engine/tests/Functional.bc/bc.sh                   |   2 +-
>  engine/tests/Functional.boost/boost.sh             |   2 +-
>  engine/tests/Functional.bsdiff/bsdiff.sh           |   2 +-
>  engine/tests/Functional.bzip2/bzip2.sh             |   2 +-
>  engine/tests/Functional.cmt/cmt.sh                 |   2 +-
>  .../Functional.commonAPI_C++/commonAPI_C++.sh      |   2 +-
>  .../Functional.commonAPI_Dbus/commonAPI_Dbus.sh    |   2 +-
>  .../commonAPI_SomeIp.sh                            |   2 +-
>  engine/tests/Functional.crashme/crashme.sh         |   2 +-
>  engine/tests/Functional.croco/croco.sh             |   2 +-
>  engine/tests/Functional.curl/curl.sh               |   2 +-
>  engine/tests/Functional.expat/expat.sh             |   2 +-
>  engine/tests/Functional.fixesproto/fixesproto.sh   |   2 +-
>  engine/tests/Functional.fontconfig/fontconfig.sh   |   2 +-
>  engine/tests/Functional.fuego_abort/fuego_abort.sh |   2 +-
>  .../fuego_board_check.sh                           |   2 +-
>  .../fuego_test_phases.sh                           |   2 +-
>  .../Functional.fuego_transport/fuego_transport.sh  |   2 +-
>  engine/tests/Functional.fuse/fuse.sh               |   2 +-
>  engine/tests/Functional.giflib/giflib-scripts.sh   |   2 +-
>  engine/tests/Functional.glib/glib.sh               |   2 +-
>  engine/tests/Functional.glib2/glib2-scripts.sh     |   2 +-
>  engine/tests/Functional.glibc/glibc.sh             |   2 +-
>  engine/tests/Functional.hciattach/hciattach.sh     |   2 +-
>  engine/tests/Functional.hello_world/hello_world.sh |   2 +-
>  engine/tests/Functional.imagemagick/imagemagick.sh |   2 +-
>  engine/tests/Functional.iptables/iptables.sh       |   2 +-
>  engine/tests/Functional.iputils/iputils.sh         |   2 +-
>  engine/tests/Functional.ipv6connect/ipv6connect.sh |   2 +-
>  engine/tests/Functional.jpeg/jpeg.sh               |   2 +-
>  .../tests/Functional.kernel_build/kernel_build.sh  |   2 +-
>  engine/tests/Functional.kmod/kmod.sh               |   2 +-
>  engine/tests/Functional.libogg/ogg.sh              |   2 +-
>  engine/tests/Functional.libpcap/libpcap.sh         |   2 +-
>  engine/tests/Functional.librsvg/librsvg-scripts.sh |   2 +-
>  engine/tests/Functional.libspeex/speex.sh          |   2 +-
>  engine/tests/Functional.libtar/libtar.sh           |   2 +-
>  .../tests/Functional.libwebsocket/libwebsocket.sh  |   2 +-
>  .../tests/Functional.linus_stress/linus_stress.sh  |   2 +-
>  engine/tests/Functional.lwip/lwip.sh               |   2 +-
>  engine/tests/Functional.neon/neon-scripts.sh       |   2 +-
>  .../Functional.net-tools/net-tools-scripts.sh      |   2 +-
>  engine/tests/Functional.netperf/netperf.sh         |   2 +-
>  engine/tests/Functional.pixman/pixman.sh           |   2 +-
>  engine/tests/Functional.pppd/pppd.sh               |   2 +-
>  engine/tests/Functional.protobuf/protobuf.sh       |   2 +-
>  engine/tests/Functional.rmaptest/rmaptest.sh       |   2 +-
>  engine/tests/Functional.scifab/scifab.sh           |   2 +-
>  engine/tests/Functional.sdhi_0/sdhi_0.sh           |   2 +-
>  engine/tests/Functional.stress/stress.sh           |   2 +-
>  engine/tests/Functional.synctest/synctest.sh       |   2 +-
>  engine/tests/Functional.thrift/thrift.sh           |   2 +-
>  engine/tests/Functional.tiff/tiff.sh               |   2 +-
>  engine/tests/Functional.vsomeip/vsomeip-scripts.sh |   2 +-
>  engine/tests/Functional.xorg-macros/xorg-macros.sh |   2 +-
>  engine/tests/Functional.zlib/zlib.sh               |   2 +-
>  98 files changed, 307 insertions(+), 377 deletions(-)
>  delete mode 100644 engine/scripts/benchmark.sh
>  delete mode 100644 engine/scripts/functional.sh
>  create mode 100755 engine/scripts/generic_parser.py
>  create mode 100644 engine/scripts/main.sh
> 
> diff --git a/engine/scripts/README b/engine/scripts/README
> index 41f38d7..59b2d0c 100644
> --- a/engine/scripts/README
> +++ b/engine/scripts/README
> @@ -7,10 +7,7 @@ overlays.sh
>  reports.sh
> 
>  -- Scripts with basic test sequences --
> -benchmark.sh
> -functional.sh
> -stress.sh

Functional.scrashme and Functional.pi_tests still reference $FUEGO_CORE/engine/scripts/stress.sh.

Is there a FIXTHIS somewhere for adapting these tests to the unification?


> -
> +main.sh
> 
>  -- Toolchain config --
>  tools.sh
> diff --git a/engine/scripts/benchmark.sh b/engine/scripts/benchmark.sh
> deleted file mode 100644
> index f10d73a..0000000
> --- a/engine/scripts/benchmark.sh
> +++ /dev/null
> @@ -1,61 +0,0 @@
> -# Copyright (c) 2014 Cogent Embedded, Inc.
> -
> -# Permission is hereby granted, free of charge, to any person obtaining a
> copy
> -# of this software and associated documentation files (the "Software"), to
> deal
> -# in the Software without restriction, including without limitation the rights
> -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> -# copies of the Software, and to permit persons to whom the Software is
> -# furnished to do so, subject to the following conditions:
> -
> -# The above copyright notice and this permission notice shall be included in
> -# all copies or substantial portions of the Software.
> -
> -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> EXPRESS OR
> -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> MERCHANTABILITY,
> -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
> EVENT SHALL THE
> -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES
> OR OTHER
> -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> ARISING FROM,
> -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> DEALINGS IN
> -# THE SOFTWARE.
> -
> -# DESCRIPTION
> -# This script contains a sequence of calls that are needed for running
> benchmakr test
> -
> -if [ -n "$FUEGO_DEBUG" ] ; then
> -	set -x
> -fi
> -set -e
> -
> -source $FUEGO_CORE/engine/scripts/overlays.sh
> -set_overlay_vars
> -
> -source $FUEGO_CORE/engine/scripts/functions.sh
> -
> -source $FUEGO_CORE/engine/scripts/reports.sh
> -
> -echo "##### doing fuego phase: pre_test ########"
> -pre_test $TESTDIR
> -
> -echo "##### doing fuego phase: build ########"
> -if $Rebuild; then
> -    build
> -fi
> -
> -echo "##### doing fuego phase: deploy ########"
> -deploy
> -
> -echo "##### doing fuego phase: run ########"
> -test_run
> -
> -echo "##### doing fuego phases: get_testlog AND processing ########"
> -set_testres_file
> -
> -FUEGO_RESULT=0
> -bench_processing
> -export FUEGO_RESULT=$?
> -check_create_logrun
> -
> -post_test $TESTDIR
> -echo "Fuego: all test phases complete!"
> -return $FUEGO_RESULT
> -
> diff --git a/engine/scripts/ftc b/engine/scripts/ftc
> index c534688..6552c7f 100755
> --- a/engine/scripts/ftc
> +++ b/engine/scripts/ftc
> @@ -781,11 +781,7 @@ def get_includes(include_filename, conf):
>      return inc_vars
> 
>  def create_job(board, test):
> -    # flot only necessary for Benchmarks
> -    if test.test_type == 'Benchmark':
> -        flot_link = '<flotile.FlotPublisher plugin="flot@1.0-SNAPSHOT"/>'
> -    else:
> -        flot_link = ''
> +    flot_link = '<flotile.FlotPublisher plugin="flot@1.0-SNAPSHOT"/>'
> 
>      # prepare links for the descriptionsetter plugin
>      test_spec_path = '/fuego-core/engine/tests/%s/%s.spec' % (test.name,
> test.name)
> @@ -838,7 +834,7 @@ export TESTDIR={testdir}
>  export TESTNAME={testname}
>  export TESTSPEC={testspec}
>  #export FUEGO_DEBUG=1
> -timeout --signal=9 {timeout} /bin/bash
> $FUEGO_CORE/engine/tests/${{TESTDIR}}/${{TESTNAME}}.sh
> +timeout --signal=9 {timeout} /bin/bash
> $FUEGO_CORE/engine/scripts/main.sh
>  </command>
>      </hudson.tasks.Shell>
>      </builders>
> diff --git a/engine/scripts/functional.sh b/engine/scripts/functional.sh
> deleted file mode 100644
> index 9574cdd..0000000
> --- a/engine/scripts/functional.sh
> +++ /dev/null
> @@ -1,61 +0,0 @@
> -# Copyright (c) 2014 Cogent Embedded, Inc.
> -
> -# Permission is hereby granted, free of charge, to any person obtaining a
> copy
> -# of this software and associated documentation files (the "Software"), to
> deal
> -# in the Software without restriction, including without limitation the rights
> -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> -# copies of the Software, and to permit persons to whom the Software is
> -# furnished to do so, subject to the following conditions:
> -
> -# The above copyright notice and this permission notice shall be included in
> -# all copies or substantial portions of the Software.
> -
> -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> EXPRESS OR
> -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> MERCHANTABILITY,
> -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
> EVENT SHALL THE
> -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES
> OR OTHER
> -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> ARISING FROM,
> -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> DEALINGS IN
> -# THE SOFTWARE.
> -
> -# DESCRIPTION
> -# This script contains a sequence of calls that are needed for running
> functional test
> -
> -if [ -n "$FUEGO_DEBUG" ] ; then
> -	set -x
> -fi
> -set -e
> -
> -source $FUEGO_CORE/engine/scripts/overlays.sh
> -set_overlay_vars
> -
> -source $FUEGO_CORE/engine/scripts/reports.sh
> -source $FUEGO_CORE/engine/scripts/functions.sh
> -
> -echo "##### doing fuego phase: pre_test ########"
> -pre_test $TESTDIR
> -
> -echo "##### doing fuego phase: build ########"
> -if $Rebuild; then
> -    build
> -fi
> -
> -echo "##### doing fuego phase: deploy ########"
> -deploy
> -
> -echo "##### doing fuego phase: run ########"
> -call_if_present test_run
> -
> -echo "##### doing fuego phase: get_testlog ########"
> -get_testlog $TESTDIR
> -
> -echo "##### doing fuego phase: processing ########"
> -FUEGO_RESULT=0
> -set +e
> -call_if_present test_processing
> -export FUEGO_RESULT=$?
> -set -e
> -
> -post_test $TESTDIR
> -echo "Fuego: all test phases complete!"
> -return $FUEGO_RESULT
> diff --git a/engine/scripts/functions.sh b/engine/scripts/functions.sh
> index b9828c8..0e5e1ad 100755
> --- a/engine/scripts/functions.sh
> +++ b/engine/scripts/functions.sh
> @@ -253,84 +253,126 @@ function target_setup_route_to_host () {
>  }
> 
>  function pre_test {
> -# $1 - testdir
> -# Make sure the target is alive, and prepare workspace for the test
> -  source $FUEGO_RO/toolchains/tools.sh
> -  export SSHPASS=$PASSWORD
> +    # Make sure the target is alive, and prepare workspace for the test
> +    source $FUEGO_RO/toolchains/tools.sh
> +    export SSHPASS=$PASSWORD
> 
> -  is_empty $1
> +    is_empty $TESTDIR
> 
> -  # Setup routing to target if needed
> -  [ -n "$TARGET_SETUP_LINK" ] && $TARGET_SETUP_LINK
> +    # Setup routing to target if needed
> +    [ -n "$TARGET_SETUP_LINK" ] && $TARGET_SETUP_LINK
> 
> -  cmd "true" || abort_job "Cannot connect to $DEVICE via $TRANSPORT"
> +    cmd "true" || abort_job "Cannot connect to $DEVICE via $TRANSPORT"
> 
> -# Target cleanup flag check
> -  [ "$Target_PreCleanup" = "true" ] && target_cleanup $1 || true
> +    # Target cleanup flag check
> +    [ "$Target_PreCleanup" = "true" ] && target_cleanup $TESTDIR || true
> 
> -  export
> LOGDIR="$FUEGO_RW/logs/$TESTDIR/${NODE_NAME}.${TESTSPEC}.${BUIL
> D_NUMBER}.${BUILD_ID}"
> +    export
> LOGDIR="$FUEGO_RW/logs/$TESTDIR/${NODE_NAME}.${TESTSPEC}.${BUIL
> D_NUMBER}.${BUILD_ID}"
> 
> -# call test_pre_check if defined
> -  call_if_present test_pre_check
> +    # call test_pre_check if defined
> +    call_if_present test_pre_check
> 
> -# Get target device firmware.
> -  firmware
> -  cmd "echo \"Firmware revision:\" $FWVER" || abort_job "Error while
> ROOTFS_FWVER command execution on target"
> +    # Get target device firmware.
> +    firmware
> +    cmd "echo \"Firmware revision:\" $FWVER" || abort_job "Error while
> ROOTFS_FWVER command execution on target"
> 
> -# XXX: Sync date/time between target device and framework host
> -# Also log memory and disk status as well as non-kernel processes,and
> interrupts
> +    # XXX: Sync date/time between target device and framework host
> +    # Also log memory and disk status as well as non-kernel processes,and
> interrupts
> 
> -  ov_rootfs_state
> +    ov_rootfs_state
> 
> -  cmd "if [ ! -d $BOARD_TESTDIR ]; then mkdir -p $BOARD_TESTDIR; fi" ||
> abort_job "ERROR: cannot find nor create $BOARD_TESTDIR"
> +    cmd "if [ ! -d $BOARD_TESTDIR ]; then mkdir -p $BOARD_TESTDIR; fi" ||
> abort_job "ERROR: cannot find nor create $BOARD_TESTDIR"
> 
> -  local fuego_test_dir=$BOARD_TESTDIR/fuego.$1
> +    local fuego_test_dir=$BOARD_TESTDIR/fuego.$TESTDIR
> 
> -  # use a /tmp dir in case logs should be on a different partition
> -  # a board file can override the default of /tmp by setting
> FUEGO_TARGET_TMP
> -  local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$1
> +    # use a /tmp dir in case logs should be on a different partition
> +    # a board file can override the default of /tmp by setting
> FUEGO_TARGET_TMP
> +    local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$TESTDIR
> 
> -  cmd "rm -rf ${fuego_test_dir} ${fuego_test_tmp}; mkdir -p
> ${fuego_test_dir}" || abort_job "Could not create ${fuego_test_dir} on
> $NODE_NAME"
> -  # note that dump_syslogs (below) creates ${fuego_test_tmp} if needed
> +    cmd "rm -rf ${fuego_test_dir} ${fuego_test_tmp}; mkdir -p
> ${fuego_test_dir}" || abort_job "Could not create ${fuego_test_dir} on
> $NODE_NAME"
> +    # note that dump_syslogs (below) creates ${fuego_test_tmp} if needed
> 
> -# Log test name
> -  ov_logger "Starting test ${JOB_NAME}"
> +    # Log test name
> +    ov_logger "Starting test ${JOB_NAME}"
> 
> -  dump_syslogs ${fuego_test_tmp} "before"
> +    dump_syslogs ${fuego_test_tmp} "before"
> 
> -# flush buffers to physical media and drop filesystem caches to make system
> load more predictable during test execution
> -  ov_rootfs_sync
> +    # flush buffers to physical media and drop filesystem caches to make
> system load more predictable during test execution
> +    ov_rootfs_sync
> 
> -  ov_rootfs_drop_caches
> +    ov_rootfs_drop_caches
>  }
> 
> -function bench_processing {
> -  firmware
> -  export GEN_TESTRES_FILE=$GEN_TESTRES_FILE
> +function processing {
> +    # PWD: /fuego-rw/buildzone/board.spec.testtype.testcase-platform
> +    local fuego_test_dir=${BOARD_TESTDIR}/fuego.$TESTDIR
> +    local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$TESTDIR
> +    local RETURN_VALUE=0
> 
> -  echo -e "\n RESULT ANALYSIS \n"
> +    # fetch data for processing
> +    firmware
> +    get $BOARD_TESTDIR/fuego.$TESTDIR/$TESTDIR.log
> ${LOGDIR}/testlog.txt
> 
> -  # Get the test results
> -  get_testlog $TESTDIR $BOARD_TESTDIR/fuego.$TESTDIR/$TESTDIR.log
> +    dump_syslogs ${fuego_test_tmp} "after"
> +    get
> ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.befor
> e ${LOGDIR}/syslog.before.txt
> +    if [ $? -ne 0 ] ; then
> +        echo "Fuego error: Can't read 'before' system log, possibly because
> /tmp was cleared on boot"
> +        echo "Consider setting FUEGO_TARGET_TMP in your board file to a
> directory on target that won't get cleared on boot"
> +        touch ${LOGDIR}/syslog.before.txt
> +    fi
> +    get
> ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.after
> ${LOGDIR}/syslog.after.txt
> 
> -  PYTHON_ARGS="-W ignore::DeprecationWarning -W ignore::UserWarning"
> +    # process the fetched data
> +    call_if_present test_processing
> +    if [ $? -ne 0 ]; then
> +        echo "ERROR: test_processing returned an error"
> +        RETURN_VALUE=1
> +    fi
> 
> -  # parse the test log and create a plot
> -  # return codes: 0 (everything ok), 1 (problem while parsing, see log), 2 (the
> results didn't satisfy the threshold)
> -  run_python $PYTHON_ARGS
> $FUEGO_CORE/engine/tests/${TESTDIR}/parser.py && rc=0 || rc=$?
> +    fail_check_cases
> +    if [ $? -ne 0 ]; then
> +        echo "ERROR: fail_check_cases returned an error"
> +        RETURN_VALUE=1
> +    fi
> +
> +    syslog_cmp
> +    if [ $? -ne 0 ]; then
> +        echo "ERROR: syslog_cmp returned an error"
> +        RETURN_VALUE=1
> +    fi
> 
> -  if [ $rc -eq 0 ] || [ $rc -eq 2 ]; then
> -    # store results as a json file fro the flot plugin
> -    run_python $PYTHON_ARGS
> $FUEGO_CORE/engine/scripts/parser/dataload.py && rc=0 || echo
> "dataload.py didn't work properly"
> -    if [ $rc -eq 0 ]; then
> -        # FIXTHIS: this should not be here
> -        ln -s "../plot.png" "$LOGDIR/plot.png" || true
> +    PYTHON_ARGS="-W ignore::DeprecationWarning -W
> ignore::UserWarning"
> +    if [ -e "$TEST_HOME/parser.py" ] ; then
> +        # FIXTHIS: make sure that json is generated even on failures
> +        run_python $PYTHON_ARGS
> $FUEGO_CORE/engine/tests/${TESTDIR}/parser.py && rc=0 || rc=$?
>      else
> -        false
> +        run_python $PYTHON_ARGS
> $FUEGO_CORE/engine/scripts/generic_parser.py $RETURN_VALUE && rc=0
> || rc=$?
>      fi
> -  else
> -    false
> -  fi
> +
> +    # return codes: 0 (everything ok), 1 (problem while parsing, see log), 2
> (the results didn't satisfy the threshold)
> +    if [ $rc -eq 0 ] || [ $rc -eq 2 ]; then
> +        # store results as a json file fro the flot plugin
"fro" -> "for"

> +        run_python $PYTHON_ARGS
> $FUEGO_CORE/engine/scripts/parser/dataload.py && rc=0 || echo
> "dataload.py didn't work properly"
> +        if [ $rc -eq 0 ]; then
> +            # FIXTHIS: this should not be here
> +            ln -s "../plot.png" "$LOGDIR/plot.png" || true
> +        else
> +            "ERROR: problem while running dataload.py"
> +            RETURN_VALUE=1
> +        fi

(By the way, it looks like an "echo" is missing before the ERROR string a couple of
lines up, so that branch would try to execute the message as a command.)

> +    else
> +        echo "ERROR: problem while running the parser"
> +        RETURN_VALUE=1
> +    fi
> +
> +    # make a convenience link to the Jenkins console log, if the log doesn't
> exist
> +    # this code assumes that if consolelog.txt doesn't exist, this was a Jenkins
> job build,
> +    # and the console log is over in the jenkins build directory.
> +    if [ ! -e $LOGDIR/consolelog.txt ] ; then
> +        ln -s
> "/var/lib/jenkins/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log"
> $LOGDIR/consolelog.txt
> +    fi
> +
> +    return $RETURN_VALUE
>  }
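
The control flow in the new processing() is worth calling out: each check runs to
completion and failures are accumulated rather than aborting on the first one. A toy
sketch of that pattern (step names reused from the hunk, exit codes simulated — this
is not real Fuego code):

```shell
#!/bin/sh
# Toy of the aggregation pattern in processing(): run every check,
# record a failure without aborting, and return the combined result.
rv=0
step() {  # $1 = step name, $2 = exit code to simulate
    ( exit "$2" ) || { echo "ERROR: $1 returned an error"; rv=1; }
}
step test_processing 0
step fail_check_cases 1
step syslog_cmp 0
echo "final result: $rv"
```

This keeps the diagnostics from every phase in the console log instead of stopping
at the first failure.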
> 
>  # search in test log for {!JOB_NAME}_FAIL_PATTERN_n fail cases and abort
> with message {!JOB_NAME}_FAIL_MESSAGE_n if found
> @@ -364,9 +406,10 @@ function fail_check_cases () {
>          if [ ! -z "$fpslog" ]
>          then
> 
> -            if diff -ua ${slog_prefix}.before ${slog_prefix}.after | grep -vEf
> "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e $fptemplate;
> +            if diff -ua ${slog_prefix}.before.txt ${slog_prefix}.after.txt | grep -vEf
> "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e $fptemplate;
>              then
>                  echo "Detected fail message in syslog diff: $fpmessage"
> +                return 1
>              else
>                  continue
>              fi
> @@ -375,6 +418,7 @@ function fail_check_cases () {
>          if grep -e "$fptemplate" $testlog ;
>          then
>              echo "Detected fail message in $testlog: $fpmessage"
> +            return 1
>          fi
>      done
>  }
> @@ -448,7 +492,6 @@ function post_term_handler {
> 
>  # $1 - $TESTDIR
>  function post_test {
> -  echo "##### doing fuego phase: post_test ########"
>    # reset the signal handler to avoid an infinite loop
>    trap post_term_handler SIGTERM
>    trap - SIGHUP SIGALRM SIGINT ERR EXIT
> @@ -466,55 +509,11 @@ function post_test {
>    # anything outside the test directories
>    call_if_present test_cleanup
> 
> -  local fuego_test_dir=${BOARD_TESTDIR}/fuego.$1
> -  local fuego_test_tmp=${FUEGO_TARGET_TMP:-/tmp}/fuego.$1
> -
> -  # log test completion message.
> -  ov_logger "Test $1 is finished"
> -
> -  # Syslog dump
> -  dump_syslogs ${fuego_test_tmp} "after"
> -
> -  # Get syslogs
> -  set +e
> -  get
> ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.befor
> e ${LOGDIR}/syslog.before.txt
> -  if [ $? -ne 0 ] ; then
> -     echo "Fuego error: Can't read 'before' system log, possibly because /tmp
> was cleared on boot"
> -     echo "Consider setting FUEGO_TARGET_TMP in your board file to a
> directory on target that won't get cleared on boot"
> -     touch ${LOGDIR}/syslog.before.txt
> -  fi
> -
> -  get
> ${fuego_test_tmp}/${NODE_NAME}.${BUILD_ID}.${BUILD_NUMBER}.after
> ${LOGDIR}/syslog.after.txt
> -
> -  # make a convenience link to the Jenkins console log, if the log doesn't
> exist
> -  # this code assumes that if consolelog.txt doesn't exist, this was a Jenkins
> job build,
> -  # and the console log is over in the jenkins build directory.
> -  if [ ! -e $LOGDIR/consolelog.txt ] ; then
> -     ln -s "/var/lib/jenkins/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/log"
> $LOGDIR/consolelog.txt
> -  fi
> -  set -e
> -
>    # Remove work and log dirs
>    [ "$Target_PostCleanup" = "true" ] && target_cleanup $1 || true
> 
> -  # Syslog comparison
> -  # FIXTHIS: should affect FUEGO_RESULT
> -  syslog_cmp
> -
> -  # FIXTHIS: should affect FUEGO_RESULT
> -  fail_check_cases  || true
> -
> -  # create functional result file
> -  # don't freak out if the parsing doesn't happen
> -  set +e
> -  PYTHON_ARGS="-W ignore::DeprecationWarning -W ignore::UserWarning"
> -  if startswith $TESTDIR "Functional." ; then
> -      if [ -e "$TEST_HOME/parser.py" ] ; then
> -         # FIXTHIS: this interface has changed
> -         run_python $PYTHON_ARGS "$TEST_HOME/parser.py" -t $TESTDIR -b
> $NODE_NAME -j $JOB_NAME -n $BUILD_NUMBER -s $BUILD_TIMESTAMP -r
> $FUEGO_RESULT
> -      fi
> -  fi
> -  set -e
> +  # log test completion message.
> +  echo "Test $1 is finished"
> 
>    # Teardown communication link to target if board specifies
>    # a function to do so
> @@ -564,61 +563,44 @@ function build_cleanup {
> 
>  # sets $FUEGO_RESULT
>  function log_compare {
> -# 1 - $TESTDIR, 2 - number of results, 3 - Regex, 4 - n or p (i.e. negative or
> positive)
> -
> -  if [ ! $FUEGO_RESULT="0" ] ; then
> -    return $FUEGO_RESULT
> -  fi
> +    # 1 - $TESTDIR, 2 - number of results, 3 - Regex, 4 - n, p (i.e. negative or
> positive)
> +    local RETURN_VALUE=0
> +    local PARSED_LOGFILE="testlog.${4}.txt"
> +
> +    if [ -f ${LOGDIR}/testlog.txt ]; then
> +        current_count=`cat ${LOGDIR}/testlog.txt | grep -E "${3}" 2>&1 | wc -l`
> +        if [ "$4" = "p" ]; then
> +            if [ $current_count -ge $2 ] ; then
> +                echo "log_compare: pattern $3 found $current_count times
> (expected greater or equal than $2)"
> +                FUEGO_RESULT=0
> +            else
> +                echo "ERROR: log_compare: pattern $3 found $current_count times
> (expected greater or equal than $2)"
> +                RETURN_VALUE=1
> +            fi
> +        fi
> 
> -  cd ${LOGDIR}
> -  LOGFILE="testlog.txt"
> -  PARSED_LOGFILE="testlog.${4}.txt"
> -
> -  if [ -f $LOGFILE ]; then
> -    current_count=`cat $LOGFILE | grep -E "${3}" 2>&1 | wc -l`
> -    if [ $current_count -eq $2 ] ; then
> -      FUEGO_RESULT=0
> -      # FIXTHIS: make this work and support both the p and n log files
> -      # cat $LOGFILE | grep -E "${3}" | tee "$PARSED_LOGFILE"
> -      # local TMP_P=`diff -u
> ${FUEGO_CORE}/engine/tests/${TESTDIR}/${1}_${4}.log "$PARSED_LOGFILE"
> 2>&1`
> -      # if [ $? -ne 0 ];then
> -      #   echo -e "\nFuego error reason: Unexpected test log
> output:\n$TMP_P\n"
> -      #   check_create_functional_logrun "test error"
> -      #   FUEGO_RESULT=1
> -      # else
> -      #   check_create_functional_logrun "passed"
> -      #   FUEGO_RESULT=0
> -      # fi
> +        if [ "$4" = "n" ]; then
> +            if [ $current_count -le $2 ] ; then
> +                echo "log_compare: pattern $3 found $current_count times
> (expected less or equal than $2)"
> +                FUEGO_RESULT=0
> +            else
> +                echo "ERROR: log_compare: pattern $3 found $current_count times
> (expected less or equal than $2)"
> +                RETURN_VALUE=1
> +            fi
> +        fi
>      else
> -      echo -e "\nFuego error reason: Mismatch in expected ($2) and actual
> ($current_count) pos/neg ($4) results. (pattern: $3)\n"
> -      check_create_functional_logrun "failed"
> -      FUEGO_RESULT=1
> +        echo -e "\nFuego error reason: '$LOGDIR/testlog.txt' is missing.\n"
> +        RETURN_VALUE=1
>      fi
> -  else
> -    echo -e "\nFuego error reason: '$LOGDIR/$LOGFILE' is missing.\n"
> -    check_create_functional_logrun "test error"
> -    FUEGO_RESULT=1
> -  fi
> 
> -  cd -
> -  return $FUEGO_RESULT
> -}
> -
> -function get_testlog {
> -# $1 - testdir,  $2 - full path to logfile
> -# XXX: It will be unified
> -  if [ -n "$2" ]; then
> -    get ${2} ${LOGDIR}/testlog.txt
> -  else
> -    get $BOARD_TESTDIR/fuego.$1/$1.log ${LOGDIR}/testlog.txt
> -  fi;
> +    return $RETURN_VALUE
>  }
> 
>  function syslog_cmp {
>    PREFIX="$LOGDIR/syslog"
>    rc=0
> -  if [ -f ${PREFIX}.before ]; then
> -    if diff -ua ${PREFIX}.before ${PREFIX}.after | grep -vEf
> "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e '\.(Bug:|Oops)';
> then
> +  if [ -f ${PREFIX}.before.txt ]; then
> +    if diff -ua ${PREFIX}.before.txt ${PREFIX}.after.txt | grep -vEf
> "$FUEGO_CORE/engine/scripts/syslog.ignore" | grep -E -e '\.(Bug:|Oops)';
> then
>        rc=1
>      fi
>    # else # special case for "reboot" test
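
The .before/.after rename above is easy to sanity-check with a toy of the syslog
comparison (file contents and the ignore pattern below are invented, and the trouble
pattern is simplified from the one in the patch):

```shell
#!/bin/sh
# Toy of the syslog comparison: diff the before/after logs, drop lines
# matching an ignore list, and flag kernel trouble markers.
before=$(mktemp); after=$(mktemp); ignore=$(mktemp)
printf 'boot ok\n' > "$before"
printf 'boot ok\nusb reset\nkernel: Oops: 0002\n' > "$after"
printf 'usb reset\n' > "$ignore"

if diff -ua "$before" "$after" | grep -vEf "$ignore" \
        | grep -E 'Bug:|Oops'; then
    echo "syslog check failed"
fi
rm -f "$before" "$after" "$ignore"
```

The known-noise line ("usb reset") is filtered out, while the Oops line survives the
ignore list and flips the result to a failure.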
> diff --git a/engine/scripts/generic_parser.py
> b/engine/scripts/generic_parser.py
> new file mode 100755
> index 0000000..08e4443
> --- /dev/null
> +++ b/engine/scripts/generic_parser.py
> @@ -0,0 +1,11 @@
> +#!/bin/python
> +# sys.argv[1]: 0 (PASS) or 1 (FAIL)
> +import os
> +import sys
> +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> +import common as plib
> +
> +print "Received: " + str(sys.argv[1])
> +cur_dict = {'fail_or_pass' : sys.argv[1]}
> +sys.exit(plib.process_data(test_results=cur_dict, label='FAIL or PASS'))
> +
> diff --git a/engine/scripts/main.sh b/engine/scripts/main.sh
> new file mode 100644
> index 0000000..06e4e5d
> --- /dev/null
> +++ b/engine/scripts/main.sh
> @@ -0,0 +1,59 @@
> +# Copyright (c) 2014 Cogent Embedded, Inc.
> +
> +# Permission is hereby granted, free of charge, to any person obtaining a
> copy
> +# of this software and associated documentation files (the "Software"), to
> deal
> +# in the Software without restriction, including without limitation the rights
> +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> +# copies of the Software, and to permit persons to whom the Software is
> +# furnished to do so, subject to the following conditions:
> +
> +# The above copyright notice and this permission notice shall be included in
> +# all copies or substantial portions of the Software.
> +
> +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> EXPRESS OR
> +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> MERCHANTABILITY,
> +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
> EVENT SHALL THE
> +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES
> OR OTHER
> +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> ARISING FROM,
> +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> DEALINGS IN
> +# THE SOFTWARE.
> +
> +# DESCRIPTION
> +# This script contains a sequence of calls that are needed for running a test
> +
> +if [ -n "$FUEGO_DEBUG" ] ; then
> +    set -x
> +fi
> +
> +source $FUEGO_CORE/engine/scripts/overlays.sh
> +set_overlay_vars
> +
> +source $FUEGO_CORE/engine/scripts/functions.sh
> +
> +# FIXTHIS: use a fixed name like fuego_test.sh instead of $TESTNAME.sh
> +source $FUEGO_CORE/engine/tests/$TESTDIR/$TESTNAME.sh

Hmmm.  This seems problematic.  Originally, $TESTNAME.sh sourced functional.sh (or benchmark.sh),
so the definitions in that file already existed by the time the overlay generator was called.
It seems like this 'source' should happen very early (especially before the call to the overlay
generator) to keep the variable assignment order as close to the original as possible.  I don't
have any examples of things I know will break, but I'd rather it was up about 7 lines.

> +
> +echo "##### doing fuego phase: pre_test ########"
> +pre_test
> +
> +echo "##### doing fuego phase: build ########"
> +if $Rebuild; then
> +    build
> +fi
> +
> +echo "##### doing fuego phase: deploy ########"
> +deploy
> +
> +echo "##### doing fuego phase: run ########"
> +call_if_present test_run
> +
> +echo "##### doing fuego phase: processing ########"
> +FUEGO_RESULT=0
> +processing
> +export FUEGO_RESULT=$?
> +
> +echo "##### doing fuego phase: post_test ########"
> +post_test $TESTDIR
> +
> +echo "Fuego: all test phases complete!"
> +exit $FUEGO_RESULT
> diff --git a/engine/scripts/parser/common.py
> b/engine/scripts/parser/common.py
> index 25baf2f..cdc2700 100644
> --- a/engine/scripts/parser/common.py
> +++ b/engine/scripts/parser/common.py
> @@ -70,13 +70,14 @@ def write_report_results(rep_data):
>          else:
>                  print ("Not writing testres file")
> 
> -def process_data(ref_section_pat, test_results, plot_type, label):
> +def process_data(ref_section_pat=None, test_results={}, plot_type='s',
> label='default_label'):
>      """
>      Parameters
>      ----------
>      ref_section_pat: regular expression that matches the 'section' in a
> -      reference.log entry (FIXTHIS: use "^\[([\w\d&._/()-]+)\|([gle]{2})\]"
> -      for all of them
> +      reference.log entry. If none, we assume the same keys as the
> +      test results, and a PASS or FAIL result.
> +      FIXTHIS: use "^\[([\w\d&._/()-]+)\|([gle]{2})\]" for all of them
>      test_results: dictionary where keys are reflog sections, and values
>        are test results extracted from the test log.
>      plot_type: single plot (s) multiplot (m, l, or xl)
> @@ -88,7 +89,13 @@ def process_data(ref_section_pat, test_results,
> plot_type, label):
>      if custom_write_report:
>          write_report_results(cur_dict)
> 
> -    thresholds, criteria = read_thresholds_data(ref_section_pat)
> +    if ref_section_pat:
> +        thresholds, criteria = read_thresholds_data(ref_section_pat)
> +    else:
> +        # special case for fail or pass tests (see generic_parser.py)
> +        thresholds = {'fail_or_pass': '0'}
> +        criteria = {'fail_or_pass': 'eq'}
> +
>      rc = compare(thresholds, test_results, criteria)
>      store_plot_data(thresholds, test_results)
>      set_plot_properties(plot_type)
> @@ -182,11 +189,8 @@ def store_plot_data(thresholds, test_results):
>          plot_file = open(PLOT_DATA,"w") # Create new
> 
>      for key in sorted(test_results.iterkeys()):
> -        thresholds_split = thresholds[key].split()
> -        test_results_split = test_results[key].split()
> -        for i,u in enumerate(test_results_split):
> -            line = "%s %s %s %s %s %s %s %s %s %s %s\n" % (NODE_NAME,
> TESTDIR, TESTSPEC, BUILD_NUMBER, BUILD_ID, BUILD_TIMESTAMP, FWVER,
> PLATFORM, key, thresholds_split[i], test_results_split[i])
> -            plot_file.write(line)
> +        line = "%s %s %s %s %s %s %s %s %s %s %s\n" % (NODE_NAME,
> TESTDIR, TESTSPEC, BUILD_NUMBER, BUILD_ID, BUILD_TIMESTAMP, FWVER,
> PLATFORM, key, thresholds[key], test_results[key])
> +        plot_file.write(line)
> 
>      plot_file.close()
>      print "Data file "+PLOT_DATA+" was updated."
> @@ -430,6 +434,9 @@ def compare(thresholds, test_results, criteria):
>              elif criteria[key] == 'le' and comparison_result > 0:
>                  hls("Test section %s: test result %s is greater than threshold %s." %
> (key, test_results_split[i], thresholds_split[i]),'e')
>                  return 2
> +            elif criteria[key] == 'eq' and comparison_result != 0:
> +                hls("Test section %s: test result %s is different than %s (PASS)." %
> (key, test_results_split[i], thresholds_split[i]),'e')
> +                return 2
>              else:
>                  print "Test section %s: test result %s satisfies (%s) threshold %s." %
> (key, test_results_split[i], criteria[key], thresholds_split[i])
>      return 0
> diff --git a/engine/tests/Benchmark.Dhrystone/Dhrystone.sh
> b/engine/tests/Benchmark.Dhrystone/Dhrystone.sh
> index 617ce94..bc3f4c8 100755
> --- a/engine/tests/Benchmark.Dhrystone/Dhrystone.sh
> +++ b/engine/tests/Benchmark.Dhrystone/Dhrystone.sh
> @@ -19,4 +19,4 @@ function test_run {
>      report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./dhrystone
> $BENCHMARK_DHRYSTONE_LOOPS"
>  }
> 
> -. $FUEGO_CORE/engine/scripts/benchmark.sh
> +

This leaves 2 blank lines at the end of every base test script.
Is this intentional?  It doesn't bother me a lot, but would be nice to clean up.

Also, this switch to no longer needing to source the appropriate test-type script
from a test's base script will require some documentation rewrites.
I'll add a to-do item for that.

It's looking like there are enough changes that I'll have to do another major
documentation snapshot to isolate 1.1 and 1.2 documentation.

I've started to collect notes for the 1.2 release on this page:
http://bird.org/fuego/Release_1.2_To_Do

> diff --git a/engine/tests/Benchmark.GLMark/GLMark.sh
> b/engine/tests/Benchmark.GLMark/GLMark.sh
> index 8a940f1..f409109 100755
> --- a/engine/tests/Benchmark.GLMark/GLMark.sh
> +++ b/engine/tests/Benchmark.GLMark/GLMark.sh
> @@ -19,4 +19,4 @@ function test_run {
>  	safe_cmd "{ cd $BOARD_TESTDIR/fuego.$TESTDIR; export
> DISPLAY=:0; xrandr |awk '/\*/ {split(\$1, a, \"x\"); print a[1], a[2], 32, 1}' >
> params; ./glmark &>   < params; } || { [ \$? -eq 142 ] && exit 0; }"
>  }
> 
> -. $FUEGO_CORE/engine/scripts/benchmark.sh
> +

[rest of these omitted]

This is a nice simplification.  I'm trying to think of a use case for running the base script
directly (manually) that we would lose support for with this change, but I can't think of one.
I'll modify ftc run-test to adapt to the new calling convention, and that should support using 
Fuego at the command line.
 -- Tim



* Re: [Fuego] Unification of functional and benchmark tests
  2017-04-24  8:37 [Fuego] Unification of functional and benchmark tests Daniel Sangorrin
                   ` (3 preceding siblings ...)
  2017-04-24  8:37 ` [Fuego] [PATCH 3/3] add Functional.jpeg test to testplan docker since it works Daniel Sangorrin
@ 2017-04-27  0:14 ` Bird, Timothy
  4 siblings, 0 replies; 8+ messages in thread
From: Bird, Timothy @ 2017-04-27  0:14 UTC (permalink / raw)
  To: Daniel Sangorrin, fuego



> -----Original Message-----
> From: Daniel Sangorrin on Monday, April 24, 2017 1:38 AM
> These patches contain a big change that unifies functional and
> benchmark tests. These are the main points but for more detailed
> fixes please check the source code.
> 
> - functional.sh and benchmark.sh have been merged into main.sh which
> is called directly from Jenkins (ftc needs update).
I've made a todo for this.

>   + TODO: rename bc.sh to fuego_test.sh (for all tests)
Also still on my todo list.  If we're serious about this, it should happen
this release (1.2).  I don't want to go through another major
refactoring after the 1.2 release, as I'm hoping that after 1.2 I
can start promoting a usage model where tests live outside the
fuego-core repository.  Once that happens, we'll need to have backwards
compatibility, so major refactoring will be more difficult.

> - Both type of tests (Functional and Benchmark) now output a
> "results.json" file in the same format. When I implement "ftc report",
> all those results.json files will be combined to create a pdf/html/excel report.
Sounds good!

>   + TODO: runs.json information needs to be merged into results.json
>     and we should make sure that we can generate KernelCI requests as well.
Agreed.
  -- Tim

> 
> fuego patches:
> [PATCH] flot: unify functional and benchmark
> 
> fuego-core patches:
> [PATCH 1/3] core: the great unification of functional and benchmark
> [PATCH 2/3] fix expat test and add it to testplan docker
> [PATCH 3/3] add Functional.jpeg test to testplan docker since it
> 
> Next steps
>    - Parse LTP to produce json instead of an excel sheet
>    - Add functionality for blacklisting test cases
Yes!  This is highly needed.

>    - merge runs.json information into results.json
>    - implement ftc report

Sounds like a good list.  Let me know what I can help with.
 -- Tim

