From: "Daniel Sangorrin" <daniel.sangorrin@toshiba.co.jp>
To: "'Li, Xiaoming'" <lixm.fnst@cn.fujitsu.com>,
	fuego@lists.linuxfoundation.org
Subject: Re: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
Date: Fri, 6 Jul 2018 17:03:54 +0900	[thread overview]
Message-ID: <000601d414ff$e1f7dac0$a5e79040$@toshiba.co.jp> (raw)
In-Reply-To: <62079D2F712F7747B0BDCC6821B8DF0D013CF2F1@G08CNEXMBPEKD02.g08.fujitsu.local>

Hi Li, Tim:

The smack tests can be run using:
  $ ftc run-test -b myboard -t Functional.LTP --dynamic-vars "{'tests':'smack'}"

Alternatively, until the "Dynamic vars" feature is merged, you can instead add a spec to Functional.LTP/spec.json
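For example, a hypothetical "smack" spec could look like the following (the variable name mirrors the 'tests' dynamic var above; please check Functional.LTP's fuego_test.sh for the exact variable it reads):

```json
{
    "testName": "Functional.LTP",
    "specs": {
        "smack": {
            "tests": "smack"
        }
    }
}
```

and then run it with: ftc run-test -b myboard -t Functional.LTP -s smack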

Note: make sure that you prepare your board for smack by modifying /etc/fstab and adding the security=smack kernel parameter:
# vi /etc/fstab
smackfs /sys/fs/smackfs smackfs defaults 0 0
# reboot
  -> grub: add security=smack
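As a sanity check after rebooting, something along these lines (a sketch, not tested on a real smack-enabled board) can confirm on the target that smack support is present and smackfs is mounted:

```shell
# Sanity check: run on the target after rebooting with security=smack.
check_smack() {
    # kernel compiled with smack support?
    if grep -q smackfs /proc/filesystems; then
        echo "smack: kernel support present"
    else
        echo "smack: no kernel support"
    fi
    # smackfs mounted where the tests expect it?
    if mount | grep -q smackfs; then
        echo "smack: smackfs mounted"
    else
        echo "smack: smackfs not mounted"
    fi
}
check_smack
```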

If you want to run a single smack test (e.g. smack_set_ambient) then you should be able to use Functional.LTP_one_test. Unfortunately, Functional.LTP_one_test's test_deploy function still needs some improvements.

After those improvements this should work:
  $ ftc run-test -b myboard -t Functional.LTP_one_test --dynamic-vars "{'TEST':'smack_set_ambient', 'scenario':'smack'}"

Thanks,
Daniel

> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org
> <fuego-bounces@lists.linuxfoundation.org> On Behalf Of Li, Xiaoming
> Sent: Friday, July 6, 2018 2:50 PM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module
> 
> 
> Signed-off-by: Li Xiaoming <lixm.fnst@cn.fujitsu.com>
> ---
>  engine/tests/Functional.LTP_Smack/fuego_test.sh |  77 +++++++
>  engine/tests/Functional.LTP_Smack/parser.py     | 270
> ++++++++++++++++++++++++
>  engine/tests/Functional.LTP_Smack/spec.json     |   7 +
>  engine/tests/Functional.LTP_Smack/test_mount.sh |  28 +++
>  4 files changed, 382 insertions(+)
>  create mode 100755 engine/tests/Functional.LTP_Smack/fuego_test.sh
>  create mode 100755 engine/tests/Functional.LTP_Smack/parser.py
>  create mode 100644 engine/tests/Functional.LTP_Smack/spec.json
>  create mode 100644 engine/tests/Functional.LTP_Smack/test_mount.sh
> 
> diff --git a/engine/tests/Functional.LTP_Smack/fuego_test.sh
> b/engine/tests/Functional.LTP_Smack/fuego_test.sh
> new file mode 100755
> index 0000000..2fc2fe5
> --- /dev/null
> +++ b/engine/tests/Functional.LTP_Smack/fuego_test.sh
> @@ -0,0 +1,77 @@
> +# Don't allow jobs to share build directories
> +# (the "test_successfully_built" flag is for one spec)
> +function test_build {
> +    # check for LTP build directory
> +    LTP_BUILD_DIR="${WORKSPACE}/$(echo $JOB_BUILD_DIR | sed s/LTP_Smack/LTP/ | sed s/$TESTSPEC/default/)"
> +    echo "LTP_BUILD_DIR=${LTP_BUILD_DIR}"
> +
> +    # if not already built, build LTP
> +    if [ ! -e ${LTP_BUILD_DIR}/fuego_test_successfully_built ] ; then
> +        echo "Building parent LTP test..."
> +        ftc run-test -b $NODE_NAME -t Functional.LTP -p pcb
> +        # NOTE: vars used in ftc run-test should not leak into this environment
> +        # that is, none of our test vars should have changed.
> +    fi
> +}
> +
> +function test_deploy {
> +    # set LTP_BUILD_DIR (possibly again), in case test_build was skipped
> +    LTP_BUILD_DIR="${WORKSPACE}/$(echo $JOB_BUILD_DIR | sed s/LTP_Smack/LTP/ | sed s/$TESTSPEC/default/)"
> +    echo "LTP_BUILD_DIR=${LTP_BUILD_DIR}"
> +
> +    local bdir="$BOARD_TESTDIR/fuego.$TESTDIR"
> +    echo "bdir=${bdir}"
> +
> +    # copy helper files, runltp, ltp-pan and the
> +    # test program to the board
> +    cmd "mkdir -p $bdir/bin $bdir/runtest  $bdir/testcases/bin "
> +    put ${LTP_BUILD_DIR}/target_bin/IDcheck.sh $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/ver_linux $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/Version $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/runltp $bdir/
> +    put ${LTP_BUILD_DIR}/target_bin/bin/ltp-pan $bdir/bin/
> +
> +    put ${LTP_BUILD_DIR}/target_bin/runtest/smack $bdir/runtest
> +
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_direct.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_current.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_socket_labels $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_onlycap.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_cipso.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_doi.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_file_access.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_notroot $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_ambient.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_netlabel.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_common.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/smack_set_load.sh $bdir/testcases/bin
> +
> +    # smack test cases need them
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/test.sh $bdir/testcases/bin
> +    put ${LTP_BUILD_DIR}/target_bin/testcases/bin/tst_ansi_color.sh $bdir/testcases/bin
> +
> +    # test_mount.sh sets up the smack environment
> +    put $TEST_HOME/test_mount.sh $bdir/
> +}
> +
> +function test_run {
> +
> +    local bdir="$BOARD_TESTDIR/fuego.$TESTDIR"
> +    local odir="$BOARD_TESTDIR/fuego.$TESTDIR/result/default"
> +    echo "test_run__bdir:" $bdir
> +
> +    report "cd $bdir; chmod +x test_mount.sh; ./test_mount.sh start"
> +    report "cd $bdir; mkdir -p $odir; ./runltp -f smack -l $odir/result.log -o $odir/output.log"
> +    report "cd $bdir; ./test_mount.sh end"
> +}
> +
> +function test_fetch_results {
> +    echo "Fetching LTP Smack results"
> +    rm -rf result/
> +    get $BOARD_TESTDIR/fuego.$TESTDIR/result $LOGDIR
> +}
> +
> +function test_processing {
> +    return
> +}
> +
> diff --git a/engine/tests/Functional.LTP_Smack/parser.py
> b/engine/tests/Functional.LTP_Smack/parser.py
> new file mode 100755
> index 0000000..2dc44a8
> --- /dev/null
> +++ b/engine/tests/Functional.LTP_Smack/parser.py
> @@ -0,0 +1,270 @@
> +#!/usr/bin/python
> +# -*- coding: UTF-8 -*-
> +import os, os.path, re, sys
> +sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
> +import common as plib
> +
> +SAVEDIR=os.getcwd()
> +LOGDIR=os.environ["LOGDIR"]
> +
> +def abort(msg):
> +    print msg
> +    os.chdir(SAVEDIR)
> +    sys.exit(1)
> +
> +def split_output_per_testcase (test_category):
> +    '''
> +        For each test category/group (e.g. syscalls) there is an output.log
> +        file that contains the output log of each testcase. This function
> +        splits output.log into per-testcase files
> +    '''
> +
> +    # open input
> +    try:
> +        output_all = open("%s/output.log" % test_category)
> +    except IOError:
> +        abort('"%s/output.log" cannot be opened.' % test_category)
> +
> +    # prepare for outputs
> +    out_dir = test_category + "/outputs"
> +    try:
> +        os.mkdir(out_dir)
> +    except OSError:
> +        pass
> +
> +    lines = output_all.readlines()
> +    output_all.close()
> +    # initialize here so a malformed log (no <<<test_start>>>) cannot
> +    # cause a NameError below
> +    in_loop = 0
> +    loop_end = 0
> +    for line in lines:
> +        m = re.compile("^<<<test_start>>>").match(line)
> +        if m is not None:
> +            loop_end = 0
> +            in_loop = 1
> +            try:
> +                output_each = open(out_dir+"/tmp.log", "w")
> +            except IOError:
> +                abort('"%s/tmp.log" cannot be opened.' % out_dir)
> +
> +        m = re.compile("^tag=([^ ]*)").match(line)
> +        if m is not None:
> +            test_case = m.group(1)
> +
> +        m = re.compile("^<<<test_end>>>").match(line)
> +        if m is not None:
> +            loop_end = 1
> +
> +        if in_loop:
> +            output_each.write("%s" % line)
> +
> +        if in_loop and loop_end:
> +            output_each.close()
> +            os.rename(out_dir+"/tmp.log", out_dir+"/%s.log" % test_case)
> +            in_loop = 0
> +
> +def read_output (test_category, test_case):
> +    '''
> +        Reads one of the per-testcase files written by split_output_per_testcase
> +    '''
> +    case_log = "%s/outputs/%s.log" % (test_category, test_case)
> +    try:
> +        output_each = open(case_log)
> +    except IOError:
> +        abort('"%s" cannot be opened.' % case_log)
> +
> +    output = output_each.read()
> +    output_each.close()
> +
> +    m = re.compile("<<<test_output>>>\n(.*)\n<<<execution_status>>>", re.M | re.S).search(output)
> +    if m is not None:
> +        result = m.group(1)
> +    else:
> +        result = ""
> +
> +    return result
> +
> +
> +# Check for results dir, and cd there
> +try:
> +    os.chdir(LOGDIR+"/result")
> +except OSError:
> +    print "WARNING: no result directory (probably a build only test)."
> +    sys.exit(3)
> +
> +# there are three types of results - regular, posix and realtime
> +# parse the regular results, first, into test_results
> +
> +# Loop that processes each test folder
> +tests = os.listdir('.')
> +tests.sort()
> +test_results = {}
> +for test_category in tests:
> +    if not os.path.isdir(test_category):
> +        continue
> +
> +    split_output_per_testcase(test_category)
> +
> +    ## Check result.log
> +    try:
> +        f = open("%s/result.log" % test_category)
> +    except IOError:
> +        print '"%s/result.log" cannot be opened.' % test_category
> +        continue
> +
> +    lines = f.readlines()
> +    f.close()
> +    regc = re.compile("^tag=([^ ]*) stime=([^ ]*) dur=([^ ]*) exit=([^ ]*) stat=([^ ]*) core=([^ ]*) cu=([^ ]*) cs=([^ ]*)")
> +    for line in lines:
> +        m = regc.match(line)
> +        if m is not None:
> +            test_case = m.group(1)
> +            result = m.group(5)
> +
> +            errtype = []
> +            decision = 0 # 0: PASS, 1: FAIL
> +
> +            if int(result) == 0:
> +                errtype.append("PASS")
> +
> +            if int(result) & 32 != 0:
> +                errtype.append("CONF")
> +                decision = 0
> +
> +            if int(result) & 16 != 0:
> +                errtype.append("INFO")
> +                decision = 1
> +
> +            if int(result) & 4 != 0:
> +                errtype.append("WARN")
> +                decision = 1
> +
> +            if int(result) & 2 != 0:
> +                errtype.append("BROK")
> +                decision = 1
> +
> +            if int(result) & 1 != 0:
> +                errtype.append("FAIL")
> +                decision = 1
> +
> +            if int(result) & 0x100 != 0:
> +                decision = 1
> +                errtype.append("ERRNO")
> +
> +            if int(result) & 0x200 != 0:
> +                decision = 1
> +                errtype.append("TERRNO")
> +
> +            if int(result) & 0x300 != 0:
> +                decision = 1
> +                errtype.append("RERRNO")
> +
> +            if decision == 0:
> +                print "%s:%s passed" % (test_category, test_case)
> +                status = "PASS"
> +            else:
> +                print "%s:%s failed" % (test_category, test_case)
> +                status = "FAIL"
> +
> +            # FIXTHIS: show errtype
> +            # FIXTHIS: add sub-test data
> +            test_results[test_category + '.' + test_case] = status
> +
> +            # put test output to console log
> +            output = read_output(test_category, test_case)
> +            print output
> +
> +# now process posix results - from pts.log file
> +posix_results = {}
> +pts_logfile=LOGDIR+"/result/pts.log"
> +lines = []
> +if os.path.exists(pts_logfile):
> +    try:
> +        f = open(pts_logfile)
> +        lines = f.readlines()
> +        f.close()
> +    except IOError:
> +        print '"%s" cannot be opened.' % pts_logfile
> +
> +regc = re.compile(r"^conformance/([^/]*)/([^/]*)/([^/]*): execution: (.*)")
> +for line in lines:
> +    m = regc.match(line)
> +    if m:
> +        test_set = m.group(2)
> +        test_case = m.group(3)
> +        result = m.group(4)
> +
> +        test_id = test_set+"."+test_case
> +        status = "ERROR"
> +        if result.startswith("PASS"):
> +            status = "PASS"
> +        elif result.startswith("FAIL"):
> +            status = "FAIL"
> +        elif result.startswith("UNTESTED"):
> +            status = "SKIP"
> +        posix_results[test_id] = status
> +
> +# hope no posix tests have the same test_ids as regular tests
> +test_results.update(posix_results)
> +
> +if os.path.exists('rt.log'):
> +    rt_results = {}
> +    with open('rt.log') as f:
> +        rt_testcase_regex = r"^--- Running testcase (.*)  ---$"
> +        rt_results_regex = r"^\s*Result:\s*(.*)$"
> +        for line in f:
> +            m = re.match(rt_testcase_regex, line.rstrip())
> +            if m:
> +                test_case = m.group(1)
> +            m = re.match(rt_results_regex, line.rstrip())
> +            if m:
> +                test_result = m.group(1)
> +                test_id = "rt." + test_case
> +                rt_results[test_id] = test_result
> +    test_results.update(rt_results)
> +
> +os.chdir(SAVEDIR)
> +sys.exit(plib.process(test_results))
> +
> +# Posix Test Suite processing
> +#last_was_conformance = False
> +#set_pts_format = False
> +#fills = {'UNRESOLVED':brok_fill, 'FAILED':fail_fill, 'PASS':pass_fill,
> +'UNTESTED':conf_fill, 'UNSUPPORTED':info_fill}
> +
> +#def pts_set_style(ws):
> +    #for r in range(1, ws.get_highest_row()):
> +        #ws.cell(row=r, column=1).style.fill = fills[str(ws.cell(row=r,
> column=1).value)]
> +    ## adjust column widths
> +    #dims ={}
> +    #for row in ws.rows:
> +        #for cell in row:
> +            #if cell.value:
> +                #dims[cell.column] = max((dims.get(cell.column, 0),
> len(cell.value) + 2))
> +    #for col, value in dims.items():
> +        #ws.column_dimensions[col].width = value
> +
> +#if os.path.exists('pts.log'):
> +    ## create one sheet per test group and fill the cells with the results
> +    #with open('pts.log') as f:
> +        #for line in f:
> +            #line = line.rstrip()
> +            #if not line:
> +                #continue
> +            #splitted = line.split(':')
> +            #if splitted[0] in ['AIO', 'MEM', 'MSG', 'SEM', 'SIG', 'THR', 'TMR', 'TPS']:
> +                #if set_pts_format:
> +                    #pts_set_style(ws)
> +                #ws = book.create_sheet(title=splitted[0])
> +                #ws.append(["Test", "Result", "Log"])
> +                #last_was_conformance = False
> +                #set_pts_format = True
> +            #elif splitted[0].startswith('conformance'):
> +                #last_was_conformance = True
> +                #ws.append([os.path.basename(splitted[0]),
> splitted[2].lstrip()])
> +            #else:
> +                #if last_was_conformance:
> +                    #cell = ws.cell(row=ws.get_highest_row() - 1, column=2)
> +                    #if cell.value:
> +                        #cell.value = str(cell.value) + '\n' + line
> +                    #else:
> +                        #cell.value = line
> +
> +
> +
> diff --git a/engine/tests/Functional.LTP_Smack/spec.json
> b/engine/tests/Functional.LTP_Smack/spec.json
> new file mode 100644
> index 0000000..5d03076
> --- /dev/null
> +++ b/engine/tests/Functional.LTP_Smack/spec.json
> @@ -0,0 +1,7 @@
> +{
> +    "testName": "Functional.LTP_Smack",
> +    "specs": {
> +        "default": {
> +        }
> +    }
> +}
> diff --git a/engine/tests/Functional.LTP_Smack/test_mount.sh
> b/engine/tests/Functional.LTP_Smack/test_mount.sh
> new file mode 100644
> index 0000000..9d78154
> --- /dev/null
> +++ b/engine/tests/Functional.LTP_Smack/test_mount.sh
> @@ -0,0 +1,28 @@
> +#!/bin/sh
> +
> +if [ "$1" = "start" ]; then
> +    touch test_mount.log
> +    mount | grep -v /sys/fs/smackfs | grep /smack > /dev/null
> +    if [ $? -eq 0 ]; then
> +        exit 0
> +    fi
> +
> +    if [ ! -d /smack ]; then
> +        mkdir /smack > /dev/null
> +        echo "NEW_DIR" >> test_mount.log
> +    fi
> +
> +    mount -t smackfs smackfs /smack
> +    echo "NEW_MOUNT" >> test_mount.log
> +fi
> +
> +
> +if [ "$1" = "end" ]; then
> +    if grep "NEW_MOUNT" test_mount.log > /dev/null; then
> +        umount /smack
> +    fi
> +
> +    if grep "NEW_DIR" test_mount.log > /dev/null; then
> +        rmdir /smack
> +    fi
> +fi
> --
> 2.7.4
> 
> 
> 
> _______________________________________________
> Fuego mailing list
> Fuego@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/fuego




Thread overview: 7+ messages
     [not found] <1530708469-14477-1-git-send-email-lixm.fnst@cn.fujitsu.com>
2018-07-06  5:50 ` [Fuego] [PATCH] LTP_Smack: add a new job to test "smack" module Li, Xiaoming
2018-07-06  8:03   ` Daniel Sangorrin [this message]
2018-07-13 17:14     ` Tim.Bird
2018-07-18  2:54       ` Daniel Sangorrin
2018-07-06  8:30   ` Daniel Sangorrin
2018-07-13 20:03     ` Tim.Bird
2018-07-13 17:05   ` Tim.Bird
