* [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
       [not found] <1530708469-14477-1-git-send-email-lixm.fnst@cn.fujitsu.com>
@ 2018-07-06  5:50 ` Li, Xiaoming
  2018-07-06  8:03   ` Daniel Sangorrin
                     ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Li, Xiaoming @ 2018-07-06  5:50 UTC (permalink / raw)
  To: fuego


Signed-off-by: Li Xiaoming <lixm.fnst@cn.fujitsu.com>
---
 engine/tests/Functional.LTP_Smack/fuego_test.sh |  77 +++++++
 engine/tests/Functional.LTP_Smack/parser.py     | 270 ++++++++++++++++++++++++
 engine/tests/Functional.LTP_Smack/spec.json     |   7 +
 engine/tests/Functional.LTP_Smack/test_mount.sh |  28 +++
 4 files changed, 382 insertions(+)
 create mode 100755 engine/tests/Functional.LTP_Smack/fuego_test.sh
 create mode 100755 engine/tests/Functional.LTP_Smack/parser.py
 create mode 100644 engine/tests/Functional.LTP_Smack/spec.json
 create mode 100644 engine/tests/Functional.LTP_Smack/test_mount.sh

diff --git a/engine/tests/Functional.LTP_Smack/fuego_test.sh b/engine/tests/Functional.LTP_Smack/fuego_test.sh
new file mode 100755
index 0000000..2fc2fe5
--- /dev/null
+++ b/engine/tests/Functional.LTP_Smack/fuego_test.sh
@@ -0,0 +1,77 @@
+# Don't allow jobs to share build directories
+# the "test_successfully_built" flag is for one spec
+function test_build {
+    # check for LTP build directory
+    LTP_BUILD_DIR="${WORKSPACE}/$(echo $JOB_BUILD_DIR | sed s/LTP_one_test/LTP/ | sed s/$TESTSPEC/default/)"
+    echo "LTP_BUILD_DIR=${LTP_BUILD_DIR}"
+
+    # if not already built, build LTP
+    if [ ! -e ${LTP_BUILD_DIR}/fuego_test_successfully_built ] ; then
+        echo "Building parent LTP test..."
+        ftc run-test -b $NODE_NAME -t Functional.LTP -p pcb
+        # NOTE: vars used in ftc run-test should not leak into this environment
+        # that is, none of our test vars should have changed.
+    fi
+}
+
+function test_deploy {
+    # set LTP_BUILD_DIR (possibly again), in case test_build was skipped
+    LTP_BUILD_DIR="${WORKSPACE}/$(echo $JOB_BUILD_DIR | sed s/LTP_Smack/LTP/ | sed s/$TESTSPEC/default/)"
+    echo "LTP_BUILD_DIR=${LTP_BUILD_DIR}"     
+
+    local bdir="$BOARD_TESTDIR/fuego.$TESTDIR"
+    echo "bdir=${bdir}"    
+
+    # copy helper files, runltp, ltp-pan and the
+    # test program to the board
+    cmd "mkdir -p $bdir/bin $bdir/runtest  $bdir/testcases/bin "
+    put ${LTP_BUILD_DIR}/target_bin/IDcheck.sh $bdir/
+    put ${LTP_BUILD_DIR}/target_bin/ver_linux $bdir/
+    put ${LTP_BUILD_DIR}/target_bin/Version $bdir/
+    put ${LTP_BUILD_DIR}/target_bin/runltp $bdir/
+    put ${LTP_BUILD_DIR}/target_bin/bin/ltp-pan $bdir/bin/
+    
+    put ${LTP_BUILD_DIR}/target_bin/runtest/smack $bdir/runtest
+
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_direct.sh      $bdir/testcases/bin 
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_current.sh     $bdir/testcases/bin 
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_socket_labels  $bdir/testcases/bin 
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_onlycap.sh     $bdir/testcases/bin 
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_cipso.sh       $bdir/testcases/bin 
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_doi.sh         $bdir/testcases/bin 
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_file_access.sh     $bdir/testcases/bin 
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_notroot            $bdir/testcases/bin
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_ambient.sh     $bdir/testcases/bin
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_netlabel.sh    $bdir/testcases/bin
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_common.sh          $bdir/testcases/bin
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_load.sh        $bdir/testcases/bin    
+    
+    # smack test cases need them
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/test.sh                  $bdir/testcases/bin
+    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/tst_ansi_color.sh        $bdir/testcases/bin
+   
+    # test_mount.sh sets up the smack env
+    put $TEST_HOME/test_mount.sh $bdir/
+}
+
+function test_run {
+    
+    local bdir="$BOARD_TESTDIR/fuego.$TESTDIR"
+    local odir="$BOARD_TESTDIR/fuego.$TESTDIR/result/default"
+    echo "test_run__bdir:" $bdir
+    
+    report "cd $bdir; chmod +x test_mount.sh; ./test_mount.sh start"
+    report "cd $bdir; mkdir -p $odir; ./runltp -f smack -l $odir/result.log -o $odir/output.log"
+    report "cd $bdir; ./test_mount.sh end"
+}
+
+function test_fetch_results {
+    echo "Fetching LTP Smack results"
+    rm -rf result/
+    get $BOARD_TESTDIR/fuego.$TESTDIR/result $LOGDIR
+}
+
+function test_processing {
+    return
+}
+
diff --git a/engine/tests/Functional.LTP_Smack/parser.py b/engine/tests/Functional.LTP_Smack/parser.py
new file mode 100755
index 0000000..2dc44a8
--- /dev/null
+++ b/engine/tests/Functional.LTP_Smack/parser.py
@@ -0,0 +1,270 @@
+#!/usr/bin/python
+# -*- coding: UTF-8 -*-
+import os, os.path, re, sys
+sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser') 
+import common as plib
+
+SAVEDIR=os.getcwd()
+LOGDIR=os.environ["LOGDIR"]
+
+def abort(msg):
+    print msg
+    os.chdir(SAVEDIR)
+    sys.exit(1)
+
+def split_output_per_testcase (test_category):
+    '''
+        For each test category/group (e.g. syscalls) there is an output.log
+        file that contains the output log of each testcase. This function
+        splits output.log into per-testcase files
+    '''
+
+    # open input
+    try:
+        output_all = open("%s/output.log" % test_category)
+    except IOError:
+        abort('"%s/output.log" cannot be opened.' % test_category)
+
+    # prepare for outputs
+    out_dir = test_category + "/outputs"
+    try:
+        os.mkdir(out_dir)
+    except OSError:
+        pass
+
+    lines = output_all.readlines()
+    output_all.close()
+    # initialize loop state, so lines before the first <<<test_start>>>
+    # don't reference undefined variables
+    in_loop = 0
+    loop_end = 0
+    for line in lines:
+        m = re.compile("^<<<test_start>>>").match(line)
+        if m is not None:
+            loop_end = 0
+            in_loop = 1
+            try:
+                output_each = open(out_dir+"/tmp.log", "w")
+            except IOError:
+                abort('"%s/tmp.log" cannot be opened.' % out_dir)
+
+        m = re.compile("^tag=([^ ]*)").match(line)
+        if m is not None:
+            test_case = m.group(1)
+
+        m = re.compile("^<<<test_end>>>").match(line)
+        if m is not None:
+            loop_end = 1
+
+        if in_loop:
+            output_each.write("%s" % line)
+
+        if in_loop and loop_end:
+            output_each.close()
+            os.rename(out_dir+"/tmp.log", out_dir+"/%s.log" % test_case)
+            in_loop = 0
+
+def read_output (test_category, test_case):
+    '''
+        Reads one of the files split by split_output_per_testcase
+    '''
+    case_log = "%s/outputs/%s.log" % (test_category, test_case)
+    try:
+        output_each = open(case_log)
+    except IOError:
+        abort('"%s" cannot be opened.' % case_log)
+
+    output = output_each.read()
+    output_each.close()
+
+    m = re.compile("<<<test_output>>>\n(.*)\n<<<execution_status>>>", re.M | re.S).search(output)
+    if m is not None:
+        result = m.group(1)
+    else:
+        result = ""
+
+    return result
+
+
+# Check for results dir, and cd there
+try:
+    os.chdir(LOGDIR+"/result")
+except:
+    print "WARNING: no result directory (probably a build only test)."
+    sys.exit(3)
+
+# there are three types of results - regular, posix and realtime
+# parse the regular results, first, into test_results
+
+# Loop that processes each test folder
+tests = os.listdir('.')
+tests.sort()
+test_results = {}
+for test_category in tests:
+    if not os.path.isdir(test_category):
+        continue
+
+    split_output_per_testcase(test_category)
+
+    ## Check result.log
+    try:
+        f = open("%s/result.log" % test_category)
+    except IOError:
+        print '"%s/result.log" cannot be opened.' % test_category
+        continue
+
+    lines = f.readlines()
+    f.close()
+    regc = re.compile("^tag=([^ ]*) stime=([^ ]*) dur=([^ ]*) exit=([^ ]*) stat=([^ ]*) core=([^ ]*) cu=([^ ]*) cs=([^ ]*)")
+    for line in lines:
+        m = regc.match(line)
+        if m is not None:
+            test_case = m.group(1)
+            result = m.group(5)
+
+            errtype = []
+            decision = 0 # 0: PASS, 1: FAIL
+
+            if int(result) == 0:
+                errtype.append("PASS")
+
+            if int(result) & 32 != 0:
+                errtype.append("CONF")
+                decision = 0
+
+            if int(result) & 16 != 0:
+                errtype.append("INFO")
+                decision = 1
+
+            if int(result) & 4 != 0:
+                errtype.append("WARN")
+                decision = 1
+
+            if int(result) & 2 != 0:
+                errtype.append("BROK")
+                decision = 1
+
+            if int(result) & 1 != 0:
+                errtype.append("FAIL")
+                decision = 1
+
+            if int(result) & 0x100 != 0:
+                decision = 1
+                errtype.append("ERRNO")
+
+            if int(result) & 0x200 != 0:
+                decision = 1
+                errtype.append("TERRNO")
+
+            if int(result) & 0x300 != 0:
+                decision = 1
+                errtype.append("RERRNO")
+
+            if decision == 0:
+                print "%s:%s passed" % (test_category, test_case)
+                status = "PASS"
+            else:
+                print "%s:%s failed" % (test_category, test_case)
+                status = "FAIL"
+
+            # FIXTHIS: show errtype
+            # FIXTHIS: add sub-test data
+            test_results[test_category + '.' + test_case] = status
+
+            # put test output to console log
+            output = read_output(test_category, test_case)
+            print output
+
+# now process posix results - from pts.log file
+posix_results = {}
+pts_logfile=LOGDIR+"/result/pts.log"
+lines = []
+if os.path.exists(pts_logfile):
+    try:
+        f = open(pts_logfile)
+        lines = f.readlines()
+        f.close()
+    except IOError:
+        print '"%s" cannot be opened.' % pts_logfile
+
+regc = re.compile(r"^conformance/([^/]*)/([^/]*)/([^/]*): execution: (.*)")
+for line in lines:
+    m = regc.match(line)
+    if m:
+        test_set = m.group(2)
+        test_case = m.group(3)
+        result = m.group(4)
+
+        test_id = test_set+"."+test_case
+        status = "ERROR"
+        if result.startswith("PASS"):
+            status = "PASS"
+        elif result.startswith("FAIL"):
+            status = "FAIL"
+        elif result.startswith("UNTESTED"):
+            status = "SKIP"
+        posix_results[test_id] = status
+
+# hope no posix tests have the same test_ids as regular tests
+test_results.update(posix_results)
+
+if os.path.exists('rt.log'):
+    rt_results = {}
+    with open('rt.log') as f:
+        rt_testcase_regex = r"^--- Running testcase (.*)  ---$"
+        rt_results_regex = r"^\s*Result:\s*(.*)$"
+        for line in f:
+            m = re.match(rt_testcase_regex, line.rstrip())
+            if m:
+                test_case = m.group(1)
+            m = re.match(rt_results_regex, line.rstrip())
+            if m:
+                test_result = m.group(1)
+                test_id = "rt." + test_case
+                rt_results[test_id] = test_result
+    test_results.update(rt_results)
+
+os.chdir(SAVEDIR)
+sys.exit(plib.process(test_results))
+
+# Posix Test Suite processing
+#last_was_conformance = False
+#set_pts_format = False
+#fills = {'UNRESOLVED':brok_fill, 'FAILED':fail_fill, 'PASS':pass_fill, 'UNTESTED':conf_fill, 'UNSUPPORTED':info_fill}
+
+#def pts_set_style(ws):
+    #for r in range(1, ws.get_highest_row()):
+        #ws.cell(row=r, column=1).style.fill = fills[str(ws.cell(row=r, column=1).value)]
+    ## adjust column widths
+    #dims ={}
+    #for row in ws.rows:
+        #for cell in row:
+            #if cell.value:
+                #dims[cell.column] = max((dims.get(cell.column, 0), len(cell.value) + 2))
+    #for col, value in dims.items():
+        #ws.column_dimensions[col].width = value
+
+#if os.path.exists('pts.log'):
+    ## create one sheet per test group and fill the cells with the results
+    #with open('pts.log') as f:
+        #for line in f:
+            #line = line.rstrip()
+            #if not line:
+                #continue
+            #splitted = line.split(':')
+            #if splitted[0] in ['AIO', 'MEM', 'MSG', 'SEM', 'SIG', 'THR', 'TMR', 'TPS']:
+                #if set_pts_format:
+                    #pts_set_style(ws)
+                #ws = book.create_sheet(title=splitted[0])
+                #ws.append(["Test", "Result", "Log"])
+                #last_was_conformance = False
+                #set_pts_format = True
+            #elif splitted[0].startswith('conformance'):
+                #last_was_conformance = True
+                #ws.append([os.path.basename(splitted[0]), splitted[2].lstrip()])
+            #else:
+                #if last_was_conformance:
+                    #cell = ws.cell(row=ws.get_highest_row() - 1, column=2)
+                    #if cell.value:
+                        #cell.value = str(cell.value) + '\n' + line
+                    #else:
+                        #cell.value = line
+
+
+
diff --git a/engine/tests/Functional.LTP_Smack/spec.json b/engine/tests/Functional.LTP_Smack/spec.json
new file mode 100644
index 0000000..5d03076
--- /dev/null
+++ b/engine/tests/Functional.LTP_Smack/spec.json
@@ -0,0 +1,7 @@
+{
+    "testName": "Functional.LTP_Smack",
+    "specs": {
+        "default": {
+        }
+    }
+}
diff --git a/engine/tests/Functional.LTP_Smack/test_mount.sh b/engine/tests/Functional.LTP_Smack/test_mount.sh
new file mode 100644
index 0000000..9d78154
--- /dev/null
+++ b/engine/tests/Functional.LTP_Smack/test_mount.sh
@@ -0,0 +1,28 @@
+#!/bin/sh
+
+if [ "$1" = "start" ]; then
+    touch test_mount.log
+    mount | grep -v /sys/fs/smackfs | grep /smack > /dev/null
+    if [ $? -eq 0 ]; then
+        exit 0
+    fi
+
+    if [ ! -d /smack ]; then
+        mkdir /smack > /dev/null
+        echo "NEW_DIR" >> test_mount.log
+    fi
+
+    mount -t smackfs smackfs /smack
+    echo "NEW_MOUNT" >> test_mount.log
+fi
+
+
+if [ "$1" = "end" ]; then
+    if grep "NEW_MOUNT" test_mount.log > /dev/null; then
+        umount /smack
+    fi
+
+    if grep "NEW_DIR" test_mount.log > /dev/null; then
+        rmdir /smack
+    fi
+fi
--
2.7.4





* Re: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
  2018-07-06  5:50 ` [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module Li, Xiaoming
@ 2018-07-06  8:03   ` Daniel Sangorrin
  2018-07-13 17:14     ` Tim.Bird
  2018-07-06  8:30   ` Daniel Sangorrin
  2018-07-13 17:05   ` Tim.Bird
  2 siblings, 1 reply; 7+ messages in thread
From: Daniel Sangorrin @ 2018-07-06  8:03 UTC (permalink / raw)
  To: 'Li, Xiaoming', fuego

Hi Li, Tim:

The smack tests can be run using:
  $ ftc run-test -b myboard -t Functional.LTP --dynamic-vars "{'tests':'smack'}"

Alternatively, while "Dynamic vars" are merged, you can instead add a spec to Functional.LTP/spec.json

Note: make sure that you prepare your machine for smack by modifying fstab and adding the security=smack kernel parameter
# vi /etc/fstab
smackfs /sys/fs/smackfs smackfs defaults 0 0
# reboot
  -> grub: add security=smack

If you want to run a single smack test (e.g. smack_set_ambient) then you should be able to use Functional.LTP_one_test. Unfortunately, Functional.LTP_one_test's test_deploy function still needs some improvements.

After those improvements this should work:
  $ ftc run-test -b myboard -t Functional.LTP_one_test --dynamic-vars "{'TEST':'smack_set_environment', 'scenario':'smack'}"

Thanks,
Daniel

> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org
> <fuego-bounces@lists.linuxfoundation.org> On Behalf Of Li, Xiaoming
> Sent: Friday, July 6, 2018 2:50 PM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
> [...]





* Re: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
  2018-07-06  5:50 ` [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module Li, Xiaoming
  2018-07-06  8:03   ` Daniel Sangorrin
@ 2018-07-06  8:30   ` Daniel Sangorrin
  2018-07-13 20:03     ` Tim.Bird
  2018-07-13 17:05   ` Tim.Bird
  2 siblings, 1 reply; 7+ messages in thread
From: Daniel Sangorrin @ 2018-07-06  8:30 UTC (permalink / raw)
  To: 'Li, Xiaoming', fuego

Hi Li, Tim

> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org
> <fuego-bounces@lists.linuxfoundation.org> On Behalf Of Li, Xiaoming
[...]
> diff --git a/engine/tests/Functional.LTP_Smack/test_mount.sh
> b/engine/tests/Functional.LTP_Smack/test_mount.sh
> new file mode 100644
> index 0000000..9d78154
> --- /dev/null
> +++ b/engine/tests/Functional.LTP_Smack/test_mount.sh
> @@ -0,0 +1,28 @@
> +#!/bin/sh
> +
> +if [ "$1" = "start" ]; then
> +    touch test_mount.log
> +    mount | grep -v /sys/fs/smackfs | grep /smack > /dev/null
> +    if [ $? -eq 0 ]; then
> +        exit 0
> +    fi
> +
> +    if [ ! -d /smack ]; then
> +        mkdir /smack > /dev/null
> +        echo "NEW_DIR" >> test_mount.log
> +    fi
> +
> +    mount -t smackfs smackfs /smack
> +    echo "NEW_MOUNT" >> test_mount.log
> +fi
> +
> +
> +if [ "$1" = "end" ]; then
> +    if grep "NEW_MOUNT" test_mount.log > /dev/null; then
> +        umount /smack
> +    fi
> +
> +    if grep "NEW_DIR" test_mount.log > /dev/null; then
> +        rmdir /smack
> +    fi
> +fi

This could be added as prechecks in LTP (note that it requires root permissions). 
But I think we don't need to mount it for the user; just checking that smack is ready, and reporting an error otherwise, should be fine.
Additionally, checking the SMACK Kconfig values in the target board's kernel configuration would be helpful.
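
For instance, a minimal pre_check sketch (untested; it assumes the usual
cmd/abort_job helpers, and that smack support shows up in /proc/filesystems
and /proc/mounts on the target):

  function test_pre_check {
      # fail early if the target kernel has no smack support
      cmd "grep -q smackfs /proc/filesystems" || \
          abort_job "smack is not enabled in the target kernel"
      # fail early if smackfs has not been mounted yet
      cmd "grep -q smackfs /proc/mounts" || \
          abort_job "smackfs is not mounted; see the fstab/security=smack setup above"
  }
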
What do you think Tim?

Best regards,
Daniel





* Re: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
  2018-07-06  5:50 ` [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module Li, Xiaoming
  2018-07-06  8:03   ` Daniel Sangorrin
  2018-07-06  8:30   ` Daniel Sangorrin
@ 2018-07-13 17:05   ` Tim.Bird
  2 siblings, 0 replies; 7+ messages in thread
From: Tim.Bird @ 2018-07-13 17:05 UTC (permalink / raw)
  To: lixm.fnst, fuego

Li,

Thank you for this test.  I will make some comments about the test
in this e-mail (see below),  and then address Daniel's comments in a separate
e-mail. 

> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org [mailto:fuego-
> bounces@lists.linuxfoundation.org] On Behalf Of Li, Xiaoming
> Sent: Thursday, July 05, 2018 10:50 PM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
> 
> 
> Signed-off-by: Li Xiaoming <lixm.fnst@cn.fujitsu.com>
> ---
>  engine/tests/Functional.LTP_Smack/fuego_test.sh |  77 +++++++
>  engine/tests/Functional.LTP_Smack/parser.py     | 270
> ++++++++++++++++++++++++
>  engine/tests/Functional.LTP_Smack/spec.json     |   7 +
>  engine/tests/Functional.LTP_Smack/test_mount.sh |  28 +++
>  4 files changed, 382 insertions(+)
>  create mode 100755 engine/tests/Functional.LTP_Smack/fuego_test.sh
>  create mode 100755 engine/tests/Functional.LTP_Smack/parser.py
>  create mode 100644 engine/tests/Functional.LTP_Smack/spec.json
>  create mode 100644 engine/tests/Functional.LTP_Smack/test_mount.sh
> 
> diff --git a/engine/tests/Functional.LTP_Smack/fuego_test.sh
> b/engine/tests/Functional.LTP_Smack/fuego_test.sh
> new file mode 100755
> index 0000000..2fc2fe5
> --- /dev/null
> +++ b/engine/tests/Functional.LTP_Smack/fuego_test.sh
> @@ -0,0 +1,77 @@
> +# Don't allow jobs to share build directories
> +# the "test_successfully_built" flag is for one spec
> +function test_build {
> +    # check for LTP build directory
> +    LTP_BUILD_DIR="${WORKSPACE}/$(echo $JOB_BUILD_DIR | sed
> s/LTP_one_test/LTP/ | sed s/$TESTSPEC/default/)"

This needs to be:
LTP_BUILD_DIR="${WORKSPACE}/$(echo $JOB_BUILD_DIR | sed s/LTP_Smack/LTP/ | sed s/$TESTSPEC/default/)"

This construct is used to convert this test's build directory into the LTP
build directory.  But this test is not LTP_one_test, so the sed substitution
from that test won't work.
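
For illustration, with a hypothetical build directory name (the exact
JOB_BUILD_DIR format may differ), the substitution is meant to do:

  Functional.LTP_Smack-default  ->  Functional.LTP-default

With s/LTP_one_test/LTP/ the LTP_Smack part is left untouched, so
LTP_BUILD_DIR ends up pointing at a directory that never exists.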

I'm curious - did this work for you?  It looks like you have the right substitution
below, so maybe this would only affect the build.  I think that in the worst
case, this will never detect a pre-built LTP, and might not cause a build when
one was needed (or vice-versa - trigger a build even when one was not
needed).

> +    echo "LTP_BUILD_DIR=${LTP_BUILD_DIR}"
> +
> +    # if not already built, build LTP
> +    if [ ! -e ${LTP_BUILD_DIR}/fuego_test_successfully_built ] ; then
> +        echo "Building parent LTP test..."
> +        ftc run-test -b $NODE_NAME -t Functional.LTP -p pcb
> +        # NOTE: vars used in ftc run-test should not leak into this environment
> +        # that is, none of our test vars should have changed.
> +    fi
> +}
> +
> +function test_deploy {
> +    # set LTP_BUILD_DIR (possibly again), in case test_build was skipped
> +    LTP_BUILD_DIR="${WORKSPACE}/$(echo $JOB_BUILD_DIR | sed
> s/LTP_Smack/LTP/ | sed s/$TESTSPEC/default/)"
OK - this sets the correct LTP_BUILD_DIR
 
> +    echo "LTP_BUILD_DIR=${LTP_BUILD_DIR}"
> +
> +    local bdir="$BOARD_TESTDIR/fuego.$TESTDIR"
> +    echo "bdir=${bdir}"
> +
> +    # copy helper files, runltp, ltp-pan and the
> +    # test program to the board
> +    cmd "mkdir -p $bdir/bin $bdir/runtest  $bdir/testcases/bin "
> +    put ${LTP_BUILD_DIR}/target_bin/IDcheck.sh $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/ver_linux $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/Version $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/runltp $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/bin/ltp-pan $bdir/bin/
> +
> +    put ${LTP_BUILD_DIR}/target_bin/runtest/smack $bdir/runtest
> +
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_direct.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_current.sh
> $bdir/testcases/bin
> +    put
> ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_socket_labels
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_onlycap.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_cipso.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_doi.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_file_access.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_notroot
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_ambient.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_netlabel.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_common.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_load.sh
> $bdir/testcases/bin

Using a wildcard here will reduce the lines needed above:
put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_* $bdir/testcases/bin

> +
> +    # smack test cases need them
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/test.sh
> $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/tst_ansi_color.sh
> $bdir/testcases/bin
> +
> +    # test_mount.sh sets up the smack env
> +    put $TEST_HOME/test_mount.sh $bdir/
> +}
> +
> +function test_run {
> +
> +    local bdir="$BOARD_TESTDIR/fuego.$TESTDIR"
> +    local odir="$BOARD_TESTDIR/fuego.$TESTDIR/result/default"
> +    echo "test_run__bdir:" $bdir
> +
> +    report "cd $bdir; chmod +x test_mount.sh; ./test_mount.sh start"
> +    report "cd $bdir; mkdir -p $odir; ./runltp -f smack -l $odir/result.log -o
> $odir/output.log"
> +    report "cd $bdir; ./test_mount.sh end"
> +}
> +
> +function test_fetch_results {
> +    echo "Fetching LTP Smack results"
> +    rm -rf result/
> +    get $BOARD_TESTDIR/fuego.$TESTDIR/result $LOGDIR
> +}
> +
> +function test_processing {
> +    return
> +}
> +
> diff --git a/engine/tests/Functional.LTP_Smack/parser.py
> b/engine/tests/Functional.LTP_Smack/parser.py

Is this a direct copy of Functional.LTP/parser.py?  If so, is there a way to 
share that one (maybe a symlink)?

If we do keep this parser.py separate, we don't need the posix parsing
at all, so a lot of this code could be eliminated.
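
For example, something like this might work (untested; assuming relative
symlinks survive checkout and the test install step):

  $ cd engine/tests/Functional.LTP_Smack
  $ ln -sf ../Functional.LTP/parser.py parser.py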

> [...]
> diff --git a/engine/tests/Functional.LTP_Smack/test_mount.sh
> b/engine/tests/Functional.LTP_Smack/test_mount.sh
> new file mode 100644
> index 0000000..9d78154
> --- /dev/null
> +++ b/engine/tests/Functional.LTP_Smack/test_mount.sh
> @@ -0,0 +1,28 @@
> +#!/bin/sh
> +
> +if [ "$1" = "start" ]; then
> +    touch test_mount.log
> +    mount | grep -v /sys/fs/smackfs | grep /smack > /dev/null
> +    if [ $? -eq 0 ]; then
> +        exit 0
> +    fi
> +
> +    if [ ! -d /smack ]; then
> +        mkdir /smack > /dev/null
> +        echo "NEW_DIR" >> test_mount.log
> +    fi
> +
> +    mount -t smackfs smackfs /smack
> +    echo "NEW_MOUNT" >> test_mount.log
> +fi
> +
> +
> +if [ "$1" = "end" ]; then
> +    if grep "NEW_MOUNT" test_mount.log > /dev/null; then
> +        umount /smack
> +    fi
> +
> +    if grep "NEW_DIR" test_mount.log > /dev/null; then
> +        rmdir /smack
> +    fi
> +fi

Do we need to remove test_mount.log when 'end' is called?
Or does the default test cleanup get rid of it?  Just in case the
user specifies Target_PreCleanup=false and Target_PostCleanup=false,
I think it would be good to explicitly remove it when we're done with it.
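
Something like this at the bottom of the "end" branch would do (sketch only):

  if [ "$1" = "end" ]; then
      if grep "NEW_MOUNT" test_mount.log > /dev/null; then
          umount /smack
      fi
      if grep "NEW_DIR" test_mount.log > /dev/null; then
          rmdir /smack
      fi
      # remove our own state file, even when the generic cleanup is skipped
      rm -f test_mount.log
  fi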

As stated above, more comments on a separate thread.
 -- Tim



* Re: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
  2018-07-06  8:03   ` Daniel Sangorrin
@ 2018-07-13 17:14     ` Tim.Bird
  2018-07-18  2:54       ` Daniel Sangorrin
  0 siblings, 1 reply; 7+ messages in thread
From: Tim.Bird @ 2018-07-13 17:14 UTC (permalink / raw)
  To: daniel.sangorrin, lixm.fnst, fuego



> -----Original Message-----
> From: Daniel Sangorrin
> 
> Hi Li, Tim:
> 
> The smack tests can be run using:
>   $ ftc run-test -b myboard -t Functional.LTP --dynamic-vars "{'tests':'smack'}"

Which reminds me that I want to get the dynamic-vars feature merged.
Maybe we can find some time next week to split it from the 'call-ftc-directly'
patches and get it into master?

> 
> Alternatively, while "Dynamic vars" are merged, you can instead add a spec
> to Functional.LTP/spec.json

Do you mean "not merged"?

But yes, Li, you should be able to do this as a spec for Functional.LTP.  Is
there a reason this was done as a separate test?  I thought there would
be special setup (and there is, to some degree).  Is that the reason this
is not just another LTP spec?

> 
> Note: make sure that you prepare your machine for smack by modifying
> fstab and adding the security=smack kernel parameter
> # vi /etc/fstab
> smackfs /sys/fs/smackfs smackfs defaults 0 0
> # reboot
>   -> grub: add security=smack
> 
> If you want to run a single smack test (e.g. smack_set_ambient) then you
> should be able to use Functional.LTP_one_test. Unfortunately,
> Functional.LTP_one_test's test_deploy function still needs some
> improvements.
> 
> After those improvements this should work:
>   $ ftc run-test -b myboard -t Functional.LTP_one_test --dynamic-vars
> "{'TEST':'smack_set_environment', 'scenario':'smack'}"

This looks like it will perform the smack setup, but does it also
run the smack tests?
  -- Tim



* Re: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
  2018-07-06  8:30   ` Daniel Sangorrin
@ 2018-07-13 20:03     ` Tim.Bird
  0 siblings, 0 replies; 7+ messages in thread
From: Tim.Bird @ 2018-07-13 20:03 UTC (permalink / raw)
  To: daniel.sangorrin, lixm.fnst, fuego

> -----Original Message-----
> From: Daniel Sangorrin
> 
> Hi Li, Tim
> 
> > -----Original Message-----
> > From: fuego-bounces@lists.linuxfoundation.org
> > <fuego-bounces@lists.linuxfoundation.org> On Behalf Of Li, Xiaoming
> [...]
> > diff --git a/engine/tests/Functional.LTP_Smack/test_mount.sh
> > b/engine/tests/Functional.LTP_Smack/test_mount.sh
> > [...]
> 
> This could be added as prechecks in LTP (note that it requires root
> permissions).
> But I think we don't need to mount it for the user; just checking that smack
> is ready, and reporting an error otherwise, should be fine.
> Additionally, checking the SMACK Kconfig values in the target board's kernel
> configuration would be helpful.
> What do you think Tim?

There are a few different issues here, that I think are worth discussing.

First, what approach to take with the test depends a bit on the
expected status of the kernel and distribution, and what you are
trying to test.  If it is expected that the kernel will have smack
configured and "running" (mounted), then the best approach is
to add some pre_checks for those conditions, and abort the test if
they are not met.

I don't think it makes much sense for someone to test smack, unless
they are actually using it in normal practice.

So, I think a check of kconfig would be good, as well as possibly a
test that smack is turned on, and already active.  That's probably
what this test should have.
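
For the kconfig part, something along these lines could go in a pre_check
(a sketch; it assumes the target kernel exposes /proc/config.gz, which is
itself a config option):

  # verify the target kernel was built with smack support
  cmd "zcat /proc/config.gz | grep -q '^CONFIG_SECURITY_SMACK=y'" || \
      abort_job "CONFIG_SECURITY_SMACK is not set in the target kernel"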

(OK - now waxing a bit philosophical and looking at the bigger picture...
Sorry in advance for the mental detour.)

Another kind of test, however, is of these mechanisms themselves
(configuring the kernel, passing the kernel command line, mounting
the smackfs).  That test of smack (a different one than the one embedded
in LTP) would require a different approach.

Fuego tests usually have a restricted scope of execution, which is 
after the core software is built, the board is provisioned, and the board
is booted.  Specifically, we currently leave provisioning and hardware
booting as an exercise for the user.  If we need to handle these other
steps, and string multiple actions together, we can somewhat do that
with batch jobs.

However, one can imagine certain kinds of testing where we had a different
scope of execution for test_run.  Here are some different scopes:
1) minimal - the board is running, the test is already built and present on the board
(e.g. as part of the base distribution for the board), and we just execute the test.
In this case, we can skip test_build and test_deploy, and just do test_run and
other phases.
2) normal - the board is already running, and fuego builds the test, deploys it, and executes
it on the board.
3) boot test - the board is not already running.  Fuego boots the board as part of
run_test (possibly to test kernel command line parameters, or examine other
boot-time operations)
4) build & boot test - the kernel and/or distribution software are built as part
of the test, the board is provisioned and booted as part of the test, and fuego
records information about this.  This scope of testing allows us to alter 
kernel configuration parameters (possibly from different specs, or using dynamic
vars to do e.g. config bisecting), as well as alter boot parameters.  This type of
testing is currently outside the scope of Fuego.

Fuego currently has an API that can handle rebooting the board, but it does
not have functions for managing boot parameters (varying the kernel
at boot time).  Also, Fuego does not have an API for managing base software
building, or for provisioning the board.  So, there are various kinds of 
variation testing that Fuego is not well-suited for, at the moment.  I hope
to add APIs for Fuego to do these kinds of operations in the future.  At the
moment, you could use Fuego tests or Jenkins jobs to perform these other
operations, and string them together with Fuego batch jobs.  However,
there are a few things that need to be done to support this more effectively,
and especially to be able to share such artifacts with each other.

If we string Fuego tests together in batch jobs to handle these types of
expanded scope testing, we need some clear protocols between tests, to
allow them to coordinate their inputs and outputs.  For example, there
needs to be communication between a test that does kernel build and a test that
does board provisioning, to specify the location of the kernel and modules
to install on the board.

You can do these types of tests with Fuego, but it is more work than it should
be to deal with different board configurations and especially provisioning
methods.  This is one reason I'm working with other groups to formulate
some industry-wide APIs for these things.

</end of detour>

Let's get this test fixed up, and see where to go next on this stuff.
 -- Tim



* Re: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
  2018-07-13 17:14     ` Tim.Bird
@ 2018-07-18  2:54       ` Daniel Sangorrin
  0 siblings, 0 replies; 7+ messages in thread
From: Daniel Sangorrin @ 2018-07-18  2:54 UTC (permalink / raw)
  To: Tim.Bird, lixm.fnst, fuego

> From: Tim.Bird@sony.com <Tim.Bird@sony.com>
> Sent: Saturday, July 14, 2018 2:15 AM
> > From: Daniel Sangorrin
> >
> > Hi Li, Tim:
> >
> > The smack tests can be run using:
> >   $ ftc run-test -b myboard -t Functional.LTP --dynamic-vars "{'tests':'smack'}"
> 
> Which reminds me that I want to get the dynamic-vars feature merged.
> Maybe we can find some time next week to split it from the 'call-ftc-directly'
> patches and get it into master?

OK, I will prepare clean patches for master and re-send.


> 
> >
> > Alternatively, while "Dynamic vars" are merged, you can instead add a spec
> > to Functional.LTP/spec.json
> 
> Do you mean "not merged"?

Yes, sorry.

> But yes, Li, you should be able to do this as a spec for Functional.LTP.  Is
> there a reason this was done as a separate test?  I thought there would
> be special setup (and there is, to some degree).  Is that the reason this
> is not just another LTP spec?
> 
> >
> > Note: make sure that you prepare your machine for smack by modifying
> > fstab and adding the security=smack kernel parameter
> > # vi /etc/fstab
> > smackfs /sys/fs/smackfs smackfs defaults 0 0
> > # reboot
> >   -> grub: add security=smack
> >
> > If you want to run a single smack test (e.g. smack_set_ambient) then you
> > should be able to use Function.LTP_one_test. Unfortunately,
> > Function.LTP_one_test's test_deploy function still needs some
> > improvements.
> >
> > After those improvements this should work:
> >   $ ftc run-test -b myboard -t Functional.LTP_one_test --dynamic-vars
> > "{'TEST':'smack_set_environment', 'scenario':'smack'}"
> 
> This looks like it will perform the smack setup, but does it also
> run the smack tests?

Sorry, the test was supposed to read smack_set_ambient:
https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/security/smack/smack_set_ambient.sh

Thanks,
Daniel






