* [PATCH 0/2 v7] test-case-mgmt
From: Yeoh Ee Peng @ 2019-02-14  5:50 UTC
  To: openembedded-core

v1:
  Faced KeyError from oe-git-archive
  Undesirable behavior when storing to multiple git branches

v2:
  Include fix for oe-git-archive
  Include fix for storing results to multiple git branches
  Improve git commit messages

v3:
  Enhance the oe-git-archive fix by using an exception catch to
  improve code readability

v4:
  Add new features: merge result files & regression analysis
  Add selftests for the merge, store, report and regression functionality
  Revise codebase to be more Pythonic

v5:
  Add files required by the store selftest

v6:
  Add regression for directories and git repositories
  Enable regression pairing of a base set to multiple target sets
  Revise selftest coverage for regression

v7:
  Optimize regression computation for ptest results
  Rename entry point script to resulttool

Mazliana (1):
  scripts/resulttool: enable manual execution and result creation

Yeoh Ee Peng (1):
  resulttool: enable merge, store, report and regression analysis

 meta/lib/oeqa/files/testresults/testresults.json   |  40 ++++
 meta/lib/oeqa/selftest/cases/resulttooltests.py    | 104 +++++++++++
 scripts/lib/resulttool/__init__.py                 |   0
 scripts/lib/resulttool/manualexecution.py          | 137 ++++++++++++++
 scripts/lib/resulttool/merge.py                    |  71 +++++++
 scripts/lib/resulttool/regression.py               | 208 +++++++++++++++++++++
 scripts/lib/resulttool/report.py                   | 113 +++++++++++
 scripts/lib/resulttool/resultsutils.py             |  67 +++++++
 scripts/lib/resulttool/store.py                    | 110 +++++++++++
 .../resulttool/template/test_report_full_text.txt  |  35 ++++
 scripts/resulttool                                 |  92 +++++++++
 11 files changed, 977 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resulttooltests.py
 create mode 100644 scripts/lib/resulttool/__init__.py
 create mode 100755 scripts/lib/resulttool/manualexecution.py
 create mode 100644 scripts/lib/resulttool/merge.py
 create mode 100644 scripts/lib/resulttool/regression.py
 create mode 100644 scripts/lib/resulttool/report.py
 create mode 100644 scripts/lib/resulttool/resultsutils.py
 create mode 100644 scripts/lib/resulttool/store.py
 create mode 100644 scripts/lib/resulttool/template/test_report_full_text.txt
 create mode 100755 scripts/resulttool

-- 
2.7.4




* [PATCH 1/2 v7] resulttool: enable merge, store, report and regression analysis
From: Yeoh Ee Peng @ 2019-02-14  5:50 UTC
  To: openembedded-core

OEQA outputs test results into JSON files, and these files are
archived by the Autobuilder during QA releases. For example, each
oe-selftest run by the Autobuilder for a different host distro
generates a testresults.json file.

These scripts were developed as test result tools to manage
these testresults.json files.

Using the "store" operation, user can store multiple testresults.json
files as well as the pre-configured directories used to hold those files.

Using the "merge" operation, user can merge multiple testresults.json
files to a target file.

Using the "report" operation, user can view the test result summary
for all available testresults.json files inside a ordinary directory
or a git repository.

Using the "regression-file" operation, user can perform regression
analysis on testresults.json files specified. Using the "regression-dir"
and "regression-git" operations, user can perform regression analysis
on directory and git accordingly.
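
For example (illustrative values), oe-selftest results are uniquely
identified by their TEST_TYPE, HOST_DISTRO and MACHINE configuration
values, so a base result configured as

    {"TEST_TYPE": "oeselftest", "HOST_DISTRO": "ubuntu-16.04", "MACHINE": "qemux86"}

is paired only with target results carrying the same three values.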

These resulttool operations expect the testresults.json file to use
the JSON format below.
{
    "<testresult_1>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
    ...
    "<testresult_n>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
}
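
As a minimal concrete instance of this format (all values below are
illustrative):

    {
        "runtime_core-image-minimal_qemux86_20190101000000": {
            "configuration": {
                "MACHINE": "qemux86",
                "TEST_TYPE": "runtime"
            },
            "result": {
                "ping.PingTest.test_ping": {
                    "status": "PASSED",
                    "log": ""
                }
            }
        }
    }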

To use these scripts, first source the OE build environment, then run
the entry point script to see the help.
    $ resulttool

To store test results from OEQA automated tests, run:
    $ resulttool store <source_dir> <git_branch>
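
To store into an existing git repository instead of a new one, pass
the destination via the optional -d/--git-dir flag:
    $ resulttool store <source_dir> <git_branch> -d <existing_git_dir>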

To merge multiple testresults.json files, run:
    $ resulttool merge <base_result_file> <target_result_file>
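
The merge operation also accepts an optional -t/--target-result-id to
merge only a single result set from the target file, and -o/--output-dir
to choose where the merged testresults.json is written:
    $ resulttool merge <base_result_file> <target_result_file> -t <target_result_id> -o <output_dir>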

To generate a test report, run:
    $ resulttool report <source_dir>

To perform regression analysis on result files, run:
    $ resulttool regression-file <base_result_file> <target_result_file>

To perform regression analysis on directories, run:
    $ resulttool regression-dir <base_result_dir> <target_result_dir>

To perform regression analysis across git branches, run:
    $ resulttool regression-git <source_dir> <base_branch> <target_branch>
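
For example, to compare the results stored on two QA branches of a
results repository (the branch names below are hypothetical):
    $ resulttool regression-git <source_dir> qa-cycle-2.6 qa-cycle-2.7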

[YOCTO# 13012]
[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/lib/oeqa/files/testresults/testresults.json   |  40 ++++
 meta/lib/oeqa/selftest/cases/resulttooltests.py    | 104 +++++++++++
 scripts/lib/resulttool/__init__.py                 |   0
 scripts/lib/resulttool/merge.py                    |  71 +++++++
 scripts/lib/resulttool/regression.py               | 208 +++++++++++++++++++++
 scripts/lib/resulttool/report.py                   | 113 +++++++++++
 scripts/lib/resulttool/resultsutils.py             |  67 +++++++
 scripts/lib/resulttool/store.py                    | 110 +++++++++++
 .../resulttool/template/test_report_full_text.txt  |  35 ++++
 scripts/resulttool                                 |  84 +++++++++
 10 files changed, 832 insertions(+)
 create mode 100644 meta/lib/oeqa/files/testresults/testresults.json
 create mode 100644 meta/lib/oeqa/selftest/cases/resulttooltests.py
 create mode 100644 scripts/lib/resulttool/__init__.py
 create mode 100644 scripts/lib/resulttool/merge.py
 create mode 100644 scripts/lib/resulttool/regression.py
 create mode 100644 scripts/lib/resulttool/report.py
 create mode 100644 scripts/lib/resulttool/resultsutils.py
 create mode 100644 scripts/lib/resulttool/store.py
 create mode 100644 scripts/lib/resulttool/template/test_report_full_text.txt
 create mode 100755 scripts/resulttool

diff --git a/meta/lib/oeqa/files/testresults/testresults.json b/meta/lib/oeqa/files/testresults/testresults.json
new file mode 100644
index 0000000..1a62155
--- /dev/null
+++ b/meta/lib/oeqa/files/testresults/testresults.json
@@ -0,0 +1,40 @@
+{
+    "runtime_core-image-minimal_qemuarm_20181225195701": {
+        "configuration": {
+            "DISTRO": "poky",
+            "HOST_DISTRO": "ubuntu-16.04",
+            "IMAGE_BASENAME": "core-image-minimal",
+            "IMAGE_PKGTYPE": "rpm",
+            "LAYERS": {
+                "meta": {
+                    "branch": "master",
+                    "commit": "801745d918e83f976c706f29669779f5b292ade3",
+                    "commit_count": 52782
+                },
+                "meta-poky": {
+                    "branch": "master",
+                    "commit": "801745d918e83f976c706f29669779f5b292ade3",
+                    "commit_count": 52782
+                },
+                "meta-yocto-bsp": {
+                    "branch": "master",
+                    "commit": "801745d918e83f976c706f29669779f5b292ade3",
+                    "commit_count": 52782
+                }
+            },
+            "MACHINE": "qemuarm",
+            "STARTTIME": "20181225195701",
+            "TEST_TYPE": "runtime"
+        },
+        "result": {
+            "apt.AptRepoTest.test_apt_install_from_repo": {
+                "log": "Test requires apt to be installed",
+                "status": "PASSED"
+            },
+            "buildcpio.BuildCpioTest.test_cpio": {
+                "log": "Test requires autoconf to be installed",
+                "status": "ERROR"
+            }
+        }
+    }
+}
\ No newline at end of file
diff --git a/meta/lib/oeqa/selftest/cases/resulttooltests.py b/meta/lib/oeqa/selftest/cases/resulttooltests.py
new file mode 100644
index 0000000..7bf1ec6
--- /dev/null
+++ b/meta/lib/oeqa/selftest/cases/resulttooltests.py
@@ -0,0 +1,104 @@
+import os
+import sys
+basepath = os.path.abspath(os.path.dirname(__file__) + '/../../../../../')
+lib_path = basepath + '/scripts/lib'
+sys.path = sys.path + [lib_path]
+from resulttool.report import ResultsTextReport
+from resulttool.regression import ResultsRegressionSelector, ResultsRegression
+from resulttool.merge import ResultsMerge
+from resulttool.store import ResultsGitStore
+from resulttool.resultsutils import checkout_git_dir
+from oeqa.selftest.case import OESelftestTestCase
+
+class ResultToolTests(OESelftestTestCase):
+
+    def test_report_can_aggregate_test_result(self):
+        result_data = {'result': {'test1': {'status': 'PASSED'},
+                                  'test2': {'status': 'PASSED'},
+                                  'test3': {'status': 'FAILED'},
+                                  'test4': {'status': 'ERROR'},
+                                  'test5': {'status': 'SKIPPED'}}}
+        report = ResultsTextReport()
+        result_report = report.get_aggregated_test_result(None, result_data)
+        self.assertTrue(result_report['passed'] == 2, msg="Passed count not correct:%s" % result_report['passed'])
+        self.assertTrue(result_report['failed'] == 2, msg="Failed count not correct:%s" % result_report['failed'])
+        self.assertTrue(result_report['skipped'] == 1, msg="Skipped count not correct:%s" % result_report['skipped'])
+
+    def test_regression_can_get_regression_base_target_pair(self):
+        base_results_data = {'base_result1': {'configuration': {"TEST_TYPE": "oeselftest",
+                                                                "HOST": "centos-7"}},
+                             'base_result2': {'configuration': {"TEST_TYPE": "oeselftest",
+                                                                "HOST": "centos-7",
+                                                                "MACHINE": "qemux86-64"}}}
+        target_results_data = {'target_result1': {'configuration': {"TEST_TYPE": "oeselftest",
+                                                                    "HOST": "centos-7"}},
+                               'target_result2': {'configuration': {"TEST_TYPE": "oeselftest",
+                                                                    "HOST": "centos-7",
+                                                                    "MACHINE": "qemux86"}},
+                               'target_result3': {'configuration': {"TEST_TYPE": "oeselftest",
+                                                                    "HOST": "centos-7",
+                                                                    "MACHINE": "qemux86-64"}}}
+        regression = ResultsRegressionSelector()
+        pair = regression.get_regression_base_target_pair(self.logger, base_results_data, target_results_data)
+        self.assertTrue('target_result1' in pair['base_result1'], msg="Pair not correct:%s" % pair['base_result1'])
+        self.assertTrue('target_result3' in pair['base_result2'], msg="Pair not correct:%s" % pair['base_result2'])
+
+    def test_regression_can_get_regression_result(self):
+        base_result_data = {'result': {'test1': {'status': 'PASSED'},
+                                       'test2': {'status': 'PASSED'},
+                                       'test3': {'status': 'FAILED'},
+                                       'test4': {'status': 'ERROR'},
+                                       'test5': {'status': 'SKIPPED'}}}
+        target_result_data = {'result': {'test1': {'status': 'PASSED'},
+                                         'test2': {'status': 'FAILED'},
+                                         'test3': {'status': 'PASSED'},
+                                         'test4': {'status': 'ERROR'},
+                                         'test5': {'status': 'SKIPPED'}}}
+        regression = ResultsRegression()
+        result = regression.get_regression_result(self.logger, base_result_data, target_result_data)
+        self.assertTrue(result['test2']['base'] == 'PASSED',
+                        msg="regression not correct:%s" % result['test2']['base'])
+        self.assertTrue(result['test2']['target'] == 'FAILED',
+                        msg="regression not correct:%s" % result['test2']['target'])
+        self.assertTrue(result['test3']['base'] == 'FAILED',
+                        msg="regression not correct:%s" % result['test3']['base'])
+        self.assertTrue(result['test3']['target'] == 'PASSED',
+                        msg="regression not correct:%s" % result['test3']['target'])
+
+    def test_merge_can_merge_results(self):
+        base_results_data = {'base_result1': {},
+                             'base_result2': {}}
+        target_results_data = {'target_result1': {},
+                               'target_result2': {},
+                               'target_result3': {}}
+
+        merge = ResultsMerge()
+        results = merge.merge_results(base_results_data, target_results_data)
+        self.assertTrue(len(results.keys()) == 5, msg="merge not correct:%s" % len(results.keys()))
+
+    def test_store_can_store_to_new_git_repository(self):
+        basepath = os.path.abspath(os.path.dirname(__file__) + '/../../')
+        source_dir = basepath + '/files/testresults'
+        git_branch = 'qa-cycle-2.7'
+        store = ResultsGitStore()
+        output_dir = store.store_to_new(self.logger, source_dir, git_branch)
+        self.assertTrue(checkout_git_dir(output_dir, git_branch), msg="store to new git repository failed:%s" %
+                                                                      output_dir)
+        store._remove_temporary_workspace_dir(output_dir)
+
+    def test_store_can_store_to_existing(self):
+        basepath = os.path.abspath(os.path.dirname(__file__) + '/../../')
+        source_dir = basepath + '/files/testresults'
+        git_branch = 'qa-cycle-2.6'
+        store = ResultsGitStore()
+        output_dir = store.store_to_new(self.logger, source_dir, git_branch)
+        self.assertTrue(checkout_git_dir(output_dir, git_branch), msg="store to new git repository failed:%s" %
+                                                                      output_dir)
+        git_branch = 'qa-cycle-2.7'
+        output_dir = store.store_to_existing_with_new_branch(self.logger, source_dir, output_dir, git_branch)
+        self.assertTrue(checkout_git_dir(output_dir, git_branch), msg="store to existing git repository failed:%s" %
+                                                                      output_dir)
+        output_dir = store.store_to_existing(self.logger, source_dir, output_dir, git_branch)
+        self.assertTrue(checkout_git_dir(output_dir, git_branch), msg="store to existing git repository failed:%s" %
+                                                                      output_dir)
+        store._remove_temporary_workspace_dir(output_dir)
diff --git a/scripts/lib/resulttool/__init__.py b/scripts/lib/resulttool/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/scripts/lib/resulttool/merge.py b/scripts/lib/resulttool/merge.py
new file mode 100644
index 0000000..441dfad
--- /dev/null
+++ b/scripts/lib/resulttool/merge.py
@@ -0,0 +1,71 @@
+# test result tool - merge multiple testresults.json files
+#
+# Copyright (c) 2019, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+from resulttool.resultsutils import load_json_file, get_dict_value, dump_json_data
+import os
+import json
+
+class ResultsMerge(object):
+
+    def get_test_results(self, logger, file, result_id):
+        results = load_json_file(file)
+        if result_id:
+            result = get_dict_value(logger, results, result_id)
+            if result:
+                return {result_id: result}
+            return result
+        return results
+
+    def merge_results(self, base_results, target_results):
+        for k in target_results:
+            base_results[k] = target_results[k]
+        return base_results
+
+    def _get_write_dir(self):
+        basepath = os.environ['BUILDDIR']
+        return basepath + '/tmp/'
+
+    def dump_merged_results(self, results, output_dir):
+        file_output_dir = output_dir if output_dir else self._get_write_dir()
+        dump_json_data(file_output_dir, 'testresults.json', results)
+        print('Successfully merged results to: %s' % os.path.join(file_output_dir, 'testresults.json'))
+
+    def run(self, logger, base_result_file, target_result_file, target_result_id, output_dir):
+        base_results = self.get_test_results(logger, base_result_file, '')
+        target_results = self.get_test_results(logger, target_result_file, target_result_id)
+        if base_results and target_results:
+            merged_results = self.merge_results(base_results, target_results)
+            self.dump_merged_results(merged_results, output_dir)
+
+def merge(args, logger):
+    merge = ResultsMerge()
+    merge.run(logger, args.base_result_file, args.target_result_file, args.target_result_id, args.output_dir)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('merge', help='merge test results',
+                                         description='merge results from multiple files',
+                                         group='setup')
+    parser_build.set_defaults(func=merge)
+    parser_build.add_argument('base_result_file',
+                              help='base result file providing the base result set')
+    parser_build.add_argument('target_result_file',
+                              help='target result file providing the target result set to be merged into the '
+                                   'base result set')
+    parser_build.add_argument('-t', '--target-result-id', default='',
+                              help='(optional) by default, merge all result sets available from the target into '
+                                   'the base, unless a specific target result id is provided')
+    parser_build.add_argument('-o', '--output-dir', default='',
+                              help='(optional) by default, write merged results to <poky>/build/tmp/, unless a '
+                                   'specific output directory is provided')
diff --git a/scripts/lib/resulttool/regression.py b/scripts/lib/resulttool/regression.py
new file mode 100644
index 0000000..8f3f5d2
--- /dev/null
+++ b/scripts/lib/resulttool/regression.py
@@ -0,0 +1,208 @@
+# test result tool - regression analysis
+#
+# Copyright (c) 2019, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+from resulttool.resultsutils import load_json_file, get_dict_value, pop_dict_element
+import json
+
+class ResultsRegressionSelector(object):
+
+    def get_results_unique_configurations(self, logger, results):
+        unique_configurations_map = {"oeselftest": ['TEST_TYPE', 'HOST_DISTRO', 'MACHINE'],
+                                     "runtime": ['TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE'],
+                                     "sdk": ['TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE'],
+                                     "sdkext": ['TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE']}
+        results_unique_configs = {}
+        for k in results:
+            result = results[k]
+            result_configs = get_dict_value(logger, result, 'configuration')
+            result_test_type = get_dict_value(logger, result_configs, 'TEST_TYPE')
+            unique_configuration_keys = get_dict_value(logger, unique_configurations_map, result_test_type)
+            result_unique_config = {}
+            for ck in unique_configuration_keys:
+                config_value = get_dict_value(logger, result_configs, ck)
+                if config_value:
+                    result_unique_config[ck] = config_value
+            results_unique_configs[k] = result_unique_config
+        return results_unique_configs
+
+    def get_regression_base_target_pair(self, logger, base_results, target_results):
+        base_configs = self.get_results_unique_configurations(logger, base_results)
+        logger.debug('Retrieved base configuration: config=%s' % base_configs)
+        target_configs = self.get_results_unique_configurations(logger, target_results)
+        logger.debug('Retrieved target configuration: config=%s' % target_configs)
+        regression_pair = {}
+        for bk in base_configs:
+            base_config = base_configs[bk]
+            for tk in target_configs:
+                target_config = target_configs[tk]
+                if base_config == target_config:
+                    if bk in regression_pair:
+                        regression_pair[bk].append(tk)
+                    else:
+                        regression_pair[bk] = [tk]
+        return regression_pair
+
+    def run_regression_with_regression_pairing(self, logger, regression_pair, base_results, target_results):
+        regression = ResultsRegression()
+        for base in regression_pair:
+            for target in regression_pair[base]:
+                print('Getting regression for base=%s target=%s' % (base, target))
+                regression.run(logger, base_results[base], target_results[target])
+
+class ResultsRegression(object):
+
+    def print_regression_result(self, result):
+        if result:
+            print('============================Start Regression============================')
+            print('Showing only the test cases where the base status differs from the target')
+            print('<test case> : <base status> -> <target status>')
+            print('========================================================================')
+            for k in result:
+                print(k, ':', result[k]['base'], '->', result[k]['target'])
+            print('==============================End Regression==============================')
+
+    def get_regression_result(self, logger, base_result, target_result):
+        base_result = get_dict_value(logger, base_result, 'result')
+        target_result = get_dict_value(logger, target_result, 'result')
+        result = {}
+        if base_result and target_result:
+            logger.debug('Getting regression result')
+            for k in base_result:
+                base_testcase = base_result[k]
+                base_status = get_dict_value(logger, base_testcase, 'status')
+                if base_status:
+                    target_testcase = get_dict_value(logger, target_result, k)
+                    target_status = get_dict_value(logger, target_testcase, 'status')
+                    if base_status != target_status:
+                        result[k] = {'base': base_status, 'target': target_status}
+                else:
+                    logger.error('Failed to retrieve the base test case status: %s' % k)
+        return result
+
+    def run(self, logger, base_result, target_result):
+        if base_result and target_result:
+            result = self.get_regression_result(logger, base_result, target_result)
+            logger.debug('Retrieved regression result =%s' % result)
+            self.print_regression_result(result)
+        else:
+            logger.error('Input data objects must not be empty (base_result=%s, target_result=%s)' %
+                         (base_result, target_result))
+
+def get_results_from_directory(logger, source_dir):
+    from resulttool.merge import ResultsMerge
+    from resulttool.resultsutils import get_directory_files
+    result_files = get_directory_files(source_dir, ['.git'], 'testresults.json')
+    base_results = {}
+    for file in result_files:
+        merge = ResultsMerge()
+        results = merge.get_test_results(logger, file, '')
+        base_results = merge.merge_results(base_results, results)
+    return base_results
+
+def remove_testcases_to_optimize_regression_runtime(logger, results):
+    test_case_removal = ['ptestresult.rawlogs', 'ptestresult.sections']
+    for r in test_case_removal:
+        for k in results:
+            result = get_dict_value(logger, results[k], 'result')
+            pop_dict_element(logger, result, r)
+
+def regression_file(args, logger):
+    base_results = load_json_file(args.base_result_file)
+    print('Successfully loaded base test results from: %s' % args.base_result_file)
+    target_results = load_json_file(args.target_result_file)
+    print('Successfully loaded target test results from: %s' % args.target_result_file)
+    remove_testcases_to_optimize_regression_runtime(logger, base_results)
+    remove_testcases_to_optimize_regression_runtime(logger, target_results)
+    if args.base_result_id and args.target_result_id:
+        base_result = get_dict_value(logger, base_results, args.base_result_id)
+        print('Getting base test result with result_id=%s' % args.base_result_id)
+        target_result = get_dict_value(logger, target_results, args.target_result_id)
+        print('Getting target test result with result_id=%s' % args.target_result_id)
+        regression = ResultsRegression()
+        regression.run(logger, base_result, target_result)
+    else:
+        regression = ResultsRegressionSelector()
+        regression_pair = regression.get_regression_base_target_pair(logger, base_results, target_results)
+        logger.debug('Retrieved regression pair=%s' % regression_pair)
+        regression.run_regression_with_regression_pairing(logger, regression_pair, base_results, target_results)
+    return 0
+
+def regression_directory(args, logger):
+    base_results = get_results_from_directory(logger, args.base_result_directory)
+    target_results = get_results_from_directory(logger, args.target_result_directory)
+    remove_testcases_to_optimize_regression_runtime(logger, base_results)
+    remove_testcases_to_optimize_regression_runtime(logger, target_results)
+    regression = ResultsRegressionSelector()
+    regression_pair = regression.get_regression_base_target_pair(logger, base_results, target_results)
+    logger.debug('Retrieved regression pair=%s' % regression_pair)
+    regression.run_regression_with_regression_pairing(logger, regression_pair, base_results, target_results)
+    return 0
+
+def regression_git(args, logger):
+    from resulttool.resultsutils import checkout_git_dir
+    base_results = {}
+    target_results = {}
+    if checkout_git_dir(args.source_dir, args.base_git_branch):
+        base_results = get_results_from_directory(logger, args.source_dir)
+    if checkout_git_dir(args.source_dir, args.target_git_branch):
+        target_results = get_results_from_directory(logger, args.source_dir)
+    if base_results and target_results:
+        remove_testcases_to_optimize_regression_runtime(logger, base_results)
+        remove_testcases_to_optimize_regression_runtime(logger, target_results)
+        regression = ResultsRegressionSelector()
+        regression_pair = regression.get_regression_base_target_pair(logger, base_results, target_results)
+        logger.debug('Retrieved regression pair=%s' % regression_pair)
+        regression.run_regression_with_regression_pairing(logger, regression_pair, base_results, target_results)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('regression-file', help='regression file analysis',
+                                         description='regression analysis comparing base result set to target '
+                                                     'result set',
+                                         group='analysis')
+    parser_build.set_defaults(func=regression_file)
+    parser_build.add_argument('base_result_file',
+                              help='base result file providing the base result set')
+    parser_build.add_argument('target_result_file',
+                              help='target result file providing the target result set for comparison with the base result')
+    parser_build.add_argument('-b', '--base-result-id', default='',
+                              help='(optional) by default, select regression pairs based on configurations, unless a '
+                                   'base result id is provided')
+    parser_build.add_argument('-t', '--target-result-id', default='',
+                              help='(optional) by default, select regression pairs based on configurations, unless a '
+                                   'target result id is provided')
+
+    parser_build = subparsers.add_parser('regression-dir', help='regression directory analysis',
+                                         description='regression analysis comparing base result set to target '
+                                                     'result set',
+                                         group='analysis')
+    parser_build.set_defaults(func=regression_directory)
+    parser_build.add_argument('base_result_directory',
+                              help='base result directory providing the files for the base result set')
+    parser_build.add_argument('target_result_directory',
+                              help='target result directory providing the files for the target result set for '
+                                   'comparison with the base result')
+
+    parser_build = subparsers.add_parser('regression-git', help='regression git analysis',
+                                         description='regression analysis comparing base result set to target '
+                                                     'result set',
+                                         group='analysis')
+    parser_build.set_defaults(func=regression_git)
+    parser_build.add_argument('source_dir',
+                              help='source directory that contains the git repository with the test result files')
+    parser_build.add_argument('base_git_branch',
+                              help='base git branch that provides the files for the base result set')
+    parser_build.add_argument('target_git_branch',
+                              help='target git branch that provides the files for the target result set for '
+                                   'comparison with the base result')
diff --git a/scripts/lib/resulttool/report.py b/scripts/lib/resulttool/report.py
new file mode 100644
index 0000000..c8fbdc9
--- /dev/null
+++ b/scripts/lib/resulttool/report.py
@@ -0,0 +1,113 @@
+# test result tool - report text based test results
+#
+# Copyright (c) 2019, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import os
+import glob
+import json
+from resulttool.resultsutils import checkout_git_dir, load_json_file, get_dict_value, get_directory_files
+
+class ResultsTextReport(object):
+
+    def get_aggregated_test_result(self, logger, testresult):
+        test_count_report = {'passed': 0, 'failed': 0, 'skipped': 0, 'failed_testcases': []}
+        result_types = {'passed': ['PASSED', 'passed'],
+                        'failed': ['FAILED', 'failed', 'ERROR', 'error', 'UNKNOWN'],
+                        'skipped': ['SKIPPED', 'skipped']}
+        result = get_dict_value(logger, testresult, 'result')
+        for k in result:
+            test_status = get_dict_value(logger, result[k], 'status')
+            for tk in result_types:
+                if test_status in result_types[tk]:
+                    test_count_report[tk] += 1
+            if test_status in result_types['failed']:
+                test_count_report['failed_testcases'].append(k)
+        return test_count_report
+
+    def get_test_result_percentage(self, test_result_count):
+        total_tested = test_result_count['passed'] + test_result_count['failed'] + test_result_count['skipped']
+        test_percent_report = {'passed': 0, 'failed': 0, 'skipped': 0}
+        for k in test_percent_report:
+            test_percent_report[k] = format(test_result_count[k] / total_tested * 100, '.2f')
+        return test_percent_report
+
+    def add_test_configurations(self, test_report, source_dir, file, result_id):
+        test_report['file_dir'] = self._get_short_file_dir(source_dir, file)
+        test_report['result_id'] = result_id
+        test_report['test_file_dir_result_id'] = '%s_%s' % (test_report['file_dir'], test_report['result_id'])
+
+    def _get_short_file_dir(self, source_dir, file):
+        file_dir = os.path.dirname(file)
+        source_dir = source_dir[:-1] if source_dir[-1] == '/' else source_dir
+        if file_dir == source_dir:
+            return 'None'
+        return file_dir.replace(source_dir, '')
+
+    def get_max_string_len(self, test_result_list, key, default_max_len):
+        max_len = default_max_len
+        for test_result in test_result_list:
+            value_len = len(test_result[key])
+            if value_len > max_len:
+                max_len = value_len
+        return max_len
+
+    def print_test_report(self, template_file_name, test_count_reports, test_percent_reports,
+                          max_len_dir, max_len_result_id):
+        from jinja2 import Environment, FileSystemLoader
+        script_path = os.path.dirname(os.path.realpath(__file__))
+        file_loader = FileSystemLoader(script_path + '/template')
+        env = Environment(loader=file_loader, trim_blocks=True)
+        template = env.get_template(template_file_name)
+        output = template.render(test_count_reports=test_count_reports,
+                                 test_percent_reports=test_percent_reports,
+                                 max_len_dir=max_len_dir,
+                                 max_len_result_id=max_len_result_id)
+        print('Printing text-based test report:')
+        print(output)
+
+    def view_test_report(self, logger, source_dir, git_branch):
+        if git_branch:
+            checkout_git_dir(source_dir, git_branch)
+        test_count_reports = []
+        test_percent_reports = []
+        for file in get_directory_files(source_dir, ['.git'], 'testresults.json'):
+            logger.debug('Computing result for test result file: %s' % file)
+            testresults = load_json_file(file)
+            for k in testresults:
+                test_count_report = self.get_aggregated_test_result(logger, testresults[k])
+                test_percent_report = self.get_test_result_percentage(test_count_report)
+                self.add_test_configurations(test_count_report, source_dir, file, k)
+                self.add_test_configurations(test_percent_report, source_dir, file, k)
+                test_count_reports.append(test_count_report)
+                test_percent_reports.append(test_percent_report)
+        max_len_dir = self.get_max_string_len(test_count_reports, 'file_dir', len('file_dir'))
+        max_len_result_id = self.get_max_string_len(test_count_reports, 'result_id', len('result_id'))
+        self.print_test_report('test_report_full_text.txt', test_count_reports, test_percent_reports,
+                               max_len_dir, max_len_result_id)
+
+def report(args, logger):
+    report = ResultsTextReport()
+    report.view_test_report(logger, args.source_dir, args.git_branch)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('report', help='report test result summary',
+                                         description='report text-based test result summary from the source directory',
+                                         group='analysis')
+    parser_build.set_defaults(func=report)
+    parser_build.add_argument('source_dir',
+                              help='source directory that contains the test result files for reporting')
+    parser_build.add_argument('-b', '--git-branch', default='',
+                              help='(optional) by default, assume the source directory contains all available files '
+                                   'for reporting, unless a git branch is provided, in which case the tool will try '
+                                   'to check out that branch, assuming the source directory is a git repository')
diff --git a/scripts/lib/resulttool/resultsutils.py b/scripts/lib/resulttool/resultsutils.py
new file mode 100644
index 0000000..bfcf36b
--- /dev/null
+++ b/scripts/lib/resulttool/resultsutils.py
@@ -0,0 +1,67 @@
+# test result tool - utilities
+#
+# Copyright (c) 2019, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import os
+import json
+import scriptpath
+scriptpath.add_oe_lib_path()
+from oeqa.utils.git import GitRepo, GitError
+
+def load_json_file(file):
+    with open(file, "r") as f:
+        return json.load(f)
+
+def dump_json_data(write_dir, file_name, json_data):
+    file_content = json.dumps(json_data, sort_keys=True, indent=4)
+    file_path = os.path.join(write_dir, file_name)
+    with open(file_path, 'w') as the_file:
+        the_file.write(file_content)
+
+def get_dict_value(logger, dict, key):
+    try:
+        return dict[key]
+    except KeyError:
+        if logger:
+            logger.debug('Faced KeyError exception: dict=%s: key=%s' % (dict, key))
+        return None
+    except TypeError:
+        if logger:
+            logger.debug('Faced TypeError exception: dict=%s: key=%s' % (dict, key))
+        return None
+
+def pop_dict_element(logger, dict, key):
+    try:
+        dict.pop(key)
+    except KeyError:
+        if logger:
+            logger.debug('Faced KeyError exception: dict=%s: key=%s' % (dict, key))
+    except AttributeError:
+        if logger:
+            logger.debug('Faced AttributeError exception: dict=%s: key=%s' % (dict, key))
+
+def checkout_git_dir(git_dir, git_branch):
+    try:
+        repo = GitRepo(git_dir, is_topdir=True)
+        repo.run_cmd('checkout %s' % git_branch)
+        return True
+    except GitError:
+        return False
+
+def get_directory_files(source_dir, excludes, file):
+    files_in_dir = []
+    for root, dirs, files in os.walk(source_dir, topdown=True):
+        dirs[:] = [d for d in dirs if d not in excludes]  # prune excluded directories in place
+        for name in files:
+            if name == file:
+                files_in_dir.append(os.path.join(root, name))
+    return files_in_dir
\ No newline at end of file
diff --git a/scripts/lib/resulttool/store.py b/scripts/lib/resulttool/store.py
new file mode 100644
index 0000000..0b59ccf
--- /dev/null
+++ b/scripts/lib/resulttool/store.py
@@ -0,0 +1,110 @@
+# test result tool - store test results
+#
+# Copyright (c) 2019, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import datetime
+import tempfile
+import os
+import subprocess
+import scriptpath
+scriptpath.add_bitbake_lib_path()
+scriptpath.add_oe_lib_path()
+from resulttool.resultsutils import checkout_git_dir
+try:
+    import bb
+except ImportError:
+    pass
+
+class ResultsGitStore(object):
+
+    def _get_output_dir(self):
+        basepath = os.environ['BUILDDIR']
+        return basepath + '/testresults_%s/' % datetime.datetime.now().strftime("%Y%m%d%H%M%S")
+
+    def _create_temporary_workspace_dir(self):
+        return tempfile.mkdtemp(prefix='testresults.')
+
+    def _remove_temporary_workspace_dir(self, workspace_dir):
+        return subprocess.run(["rm", "-rf",  workspace_dir])
+
+    def _oe_copy_files(self, source_dir, destination_dir):
+        from oe.path import copytree
+        copytree(source_dir, destination_dir)
+
+    def _copy_files(self, source_dir, destination_dir, copy_ignore=None):
+        from shutil import copytree
+        copytree(source_dir, destination_dir, ignore=copy_ignore)
+
+    def _store_files_to_git(self, logger, file_dir, git_dir, git_branch, commit_msg_subject, commit_msg_body):
+        logger.debug('Storing test result into git repository (%s) and branch (%s)'
+                     % (git_dir, git_branch))
+        return subprocess.run(["oe-git-archive",
+                               file_dir,
+                               "-g", git_dir,
+                               "-b", git_branch,
+                               "--commit-msg-subject", commit_msg_subject,
+                               "--commit-msg-body", commit_msg_body])
+
+    def store_to_existing(self, logger, source_dir, git_dir, git_branch):
+        logger.debug('Storing files to existing git repository and branch')
+        from shutil import ignore_patterns
+        dest_dir = self._create_temporary_workspace_dir()
+        dest_top_dir = os.path.join(dest_dir, 'top_dir')
+        self._copy_files(git_dir, dest_top_dir, copy_ignore=ignore_patterns('.git'))
+        self._oe_copy_files(source_dir, dest_top_dir)
+        self._store_files_to_git(logger, dest_top_dir, git_dir, git_branch,
+                                 'Store as existing git and branch', 'Store as existing git repository and branch')
+        self._remove_temporary_workspace_dir(dest_dir)
+        return git_dir
+
+    def store_to_existing_with_new_branch(self, logger, source_dir, git_dir, git_branch):
+        logger.debug('Storing files to existing git repository with new branch')
+        self._store_files_to_git(logger, source_dir, git_dir, git_branch,
+                                 'Store as existing git with new branch',
+                                 'Store as existing git repository with new branch')
+        return git_dir
+
+    def store_to_new(self, logger, source_dir, git_branch):
+        logger.debug('Storing files to new git repository')
+        output_dir = self._get_output_dir()
+        self._store_files_to_git(logger, source_dir, output_dir, git_branch,
+                                 'Store as new', 'Store as new git repository')
+        return output_dir
+
+    def store(self, logger, source_dir, git_dir, git_branch):
+        if git_dir:
+            if checkout_git_dir(git_dir, git_branch):
+                self.store_to_existing(logger, source_dir, git_dir, git_branch)
+            else:
+                self.store_to_existing_with_new_branch(logger, source_dir, git_dir, git_branch)
+        else:
+            self.store_to_new(logger, source_dir, git_branch)
+
+def store(args, logger):
+    gitstore = ResultsGitStore()
+    gitstore.store(logger, args.source_dir, args.git_dir, args.git_branch)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('store', help='store test result files and directories into git repository',
+                                         description='store the testresults.json files and related directories '
+                                                     'from the source directory into the destination git repository '
+                                                     'with the given git branch',
+                                         group='setup')
+    parser_build.set_defaults(func=store)
+    parser_build.add_argument('source_dir',
+                              help='source directory that contains the test result files and directories to be stored')
+    parser_build.add_argument('git_branch', help='git branch used for the store')
+    parser_build.add_argument('-d', '--git-dir', default='',
+                              help='(optional) by default, store to a new <top_dir>/<build>/<testresults_datetime> '
+                                   'directory, unless an existing git repository is provided as the destination')
diff --git a/scripts/lib/resulttool/template/test_report_full_text.txt b/scripts/lib/resulttool/template/test_report_full_text.txt
new file mode 100644
index 0000000..2e80d59
--- /dev/null
+++ b/scripts/lib/resulttool/template/test_report_full_text.txt
@@ -0,0 +1,35 @@
+==============================================================================================================
+Test Report (Count of passed, failed, skipped group by file_dir, result_id)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'file_dir'.ljust(max_len_dir) }} | {{ 'result_id'.ljust(max_len_result_id) }} | {{ 'passed'.ljust(10) }} | {{ 'failed'.ljust(10) }} | {{ 'skipped'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_count_reports |sort(attribute='test_file_dir_result_id') %}
+{{ report.file_dir.ljust(max_len_dir) }} | {{ report.result_id.ljust(max_len_result_id) }} | {{ (report.passed|string).ljust(10) }} | {{ (report.failed|string).ljust(10) }} | {{ (report.skipped|string).ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Percent of passed, failed, skipped group by file_dir, result_id)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'file_dir'.ljust(max_len_dir) }} | {{ 'result_id'.ljust(max_len_result_id) }} | {{ 'passed_%'.ljust(10) }} | {{ 'failed_%'.ljust(10) }} | {{ 'skipped_%'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_percent_reports |sort(attribute='test_file_dir_result_id') %}
+{{ report.file_dir.ljust(max_len_dir) }} | {{ report.result_id.ljust(max_len_result_id) }} | {{ (report.passed|string).ljust(10) }} | {{ (report.failed|string).ljust(10) }} | {{ (report.skipped|string).ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Failed test cases group by file_dir, result_id)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_count_reports |sort(attribute='test_file_dir_result_id') %}
+{% if report.failed_testcases %}
+file_dir | result_id : {{ report.file_dir }} | {{ report.result_id }}
+{% for testcase in report.failed_testcases %}
+    {{ testcase }}
+{% endfor %}
+{% endif %}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
\ No newline at end of file
diff --git a/scripts/resulttool b/scripts/resulttool
new file mode 100755
index 0000000..ebb5fc8
--- /dev/null
+++ b/scripts/resulttool
@@ -0,0 +1,84 @@
+#!/usr/bin/env python3
+#
+# test results tool - tool for testresults.json (merge test results, regression analysis)
+#
+# To see the help information:
+#    $ resulttool
+#
+# To store test results from OEQA automated tests, run:
+#     $ resulttool store <source_dir> <git_branch>
+#
+# To merge test results, run:
+#    $ resulttool merge <base_result_file> <target_result_file>
+#
+# To generate a test report, run:
+#     $ resulttool report <source_dir>
+#
+# To perform regression file analysis, run:
+#     $ resulttool regression-file <base_result_file> <target_result_file>
+#
+# Copyright (c) 2019, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+
+import os
+import sys
+import argparse
+import logging
+script_path = os.path.dirname(os.path.realpath(__file__))
+lib_path = script_path + '/lib'
+sys.path = sys.path + [lib_path]
+import argparse_oe
+import scriptutils
+import resulttool.merge
+import resulttool.store
+import resulttool.regression
+import resulttool.report
+logger = scriptutils.logger_create('resulttool')
+
+def _validate_user_input_arguments(args):
+    if hasattr(args, "source_dir"):
+        if not os.path.isdir(args.source_dir):
+            logger.error('source_dir argument needs to be a directory: %s' % args.source_dir)
+            return False
+    return True
+
+def main():
+    parser = argparse_oe.ArgumentParser(description="OpenEmbedded test results tool.",
+                                        epilog="Use %(prog)s <subcommand> --help to get help on a specific command")
+    parser.add_argument('-d', '--debug', help='enable debug output', action='store_true')
+    parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
+    subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
+    subparsers.required = True
+    subparsers.add_subparser_group('setup', 'setup', 200)
+    resulttool.merge.register_commands(subparsers)
+    resulttool.store.register_commands(subparsers)
+    subparsers.add_subparser_group('analysis', 'analysis', 100)
+    resulttool.regression.register_commands(subparsers)
+    resulttool.report.register_commands(subparsers)
+
+    args = parser.parse_args()
+    if args.debug:
+        logger.setLevel(logging.DEBUG)
+    elif args.quiet:
+        logger.setLevel(logging.ERROR)
+
+    if not _validate_user_input_arguments(args):
+        return -1
+
+    try:
+        ret = args.func(args, logger)
+    except argparse_oe.ArgumentUsageError as ae:
+        parser.error_subcommand(ae.message, ae.subcommand)
+    return ret
+
+if __name__ == "__main__":
+    sys.exit(main())
-- 
2.7.4




* [PATCH 2/2 v7] scripts/resulttool: enable manual execution and result creation
From: Yeoh Ee Peng @ 2019-02-14  5:50 UTC
  To: openembedded-core

From: Mazliana <mazliana.mohamad@intel.com>

Integrated the "manualexecution" operation into the resulttool
scripts. Manual execution is a helper script to execute all manual
test cases from a baseline command; each test case consists of user
guideline steps and the expected results. The last step asks the user
to provide the execution result as input; the input options are the
passed/failed/blocked/skipped statuses. The given result will be
written to testresults.json, including the error log from the user
input and the configuration, if there is any. The output test result
JSON file is created using the OEQA library.

The configuration part is keyed in manually by the user. The system
allows the user to specify how many configurations they want to add,
and they need to define the required configuration name and value
pairs. From a QA perspective, "configuration" means the test
environments and parameters used during QA setup before testing can
be carried out. Examples of configurations: the image used for boot
up, the host machine distro used, poky configurations, etc.

The purpose of adding the configuration is to standardize the output
test result format between automated and manual execution.

To use these scripts, first source the OE build environment, then run
the entry point script to see the help.
        $ resulttool

To execute manual test cases, run:
        $ resulttool manualexecution <manualjsonfile>

By default, testresults.json is stored in <build_dir>/tmp/log/manual/
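
For reference, a minimal sketch of the manual test case JSON consumed
by manualexecution, inferred from the parsing code in
manualexecution.py; only the "@alias" and "execution" fields are read
directly by the runner, and the per-step field names shown here are
assumptions:

        [
            {
                "test": {
                    "@alias": "<module>.<suite>.<testcase>",
                    "execution": {
                        "1": {"action": "<step to perform>",
                              "expected_results": "<what to verify>"},
                        "2": {"action": "<step to perform>",
                              "expected_results": "<what to verify>"}
                    }
                }
            }
        ]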

[YOCTO #12651]

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
---
 scripts/lib/resulttool/manualexecution.py | 137 ++++++++++++++++++++++++++++++
 scripts/resulttool                        |   8 ++
 2 files changed, 145 insertions(+)
 create mode 100755 scripts/lib/resulttool/manualexecution.py

diff --git a/scripts/lib/resulttool/manualexecution.py b/scripts/lib/resulttool/manualexecution.py
new file mode 100755
index 0000000..64ec581
--- /dev/null
+++ b/scripts/lib/resulttool/manualexecution.py
@@ -0,0 +1,137 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+from resulttool.resultsutils import load_json_file
+
+class ManualTestRunner(object):
+    def __init__(self):
+        self.jdata = ''
+        self.test_module = ''
+        self.test_suite = ''
+        self.test_cases = ''
+        self.configuration = ''
+        self.starttime = ''
+        self.result_id = ''
+        self.write_dir = ''
+
+    def _get_testcases(self, file):
+        self.jdata = load_json_file(file)
+        self.test_cases = []
+        self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+        self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+        for i in self.jdata:
+            self.test_cases.append(i['test']['@alias'].split('.', 2)[2])
+
+    def _get_input(self, config):
+        while True:
+            output = input('{} = '.format(config))
+            if re.match('^[a-zA-Z0-9_]+$', output):
+                break
+            print('Only alphanumeric and underscore are allowed. Please try again')
+        return output
+
+    def _create_config(self):
+        self.configuration = {}
+        while True:
+            try:
+                conf_total = int(input('\nPlease specify how many configurations you want to save\n'))
+                break
+            except ValueError:
+                print('Invalid input. Please enter a number.')
+        for i in range(conf_total):
+            print('---------------------------------------------')
+            print('This is configuration #%s. Please provide the configuration name and its value' % (i + 1))
+            print('---------------------------------------------')
+            name_conf = self._get_input('Configuration Name')
+            value_conf = self._get_input('Configuration Value')
+            print('---------------------------------------------\n')
+            self.configuration[name_conf.upper()] = value_conf
+        current_datetime = datetime.datetime.now()
+        self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = self.test_module
+
+    def _create_result_id(self):
+        self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+    def _execute_test_steps(self, test_id):
+        test_result = {}
+        testcase_id = self.test_module + '.' + self.test_suite + '.' + self.test_cases[test_id]
+        total_steps = len(self.jdata[test_id]['test']['execution'].keys())
+        print('------------------------------------------------------------------------')
+        print('Executing test case: ' + self.test_cases[test_id])
+        print('------------------------------------------------------------------------')
+        print('You have a total of ' + str(total_steps) + ' test steps to be executed.')
+        print('------------------------------------------------------------------------\n')
+        for step in sorted((self.jdata[test_id]['test']['execution']).keys()):
+            print('Step %s: ' % step + self.jdata[test_id]['test']['execution']['%s' % step]['action'])
+            print('Expected output: ' + self.jdata[test_id]['test']['execution']['%s' % step]['expected_results'])
+            input('\nPlease press ENTER when you are done to proceed to the next step.\n')
+        # Map the single-letter answer to a result status; defined once
+        # outside the input loop
+        result_types = {'p': 'PASSED',
+                        'f': 'FAILED',
+                        'b': 'BLOCKED',
+                        's': 'SKIPPED'}
+        while True:
+            done = input('\nPlease provide test results: (P)assed/(F)ailed/(B)locked/(S)kipped? \n')
+            done = done.lower()
+            if done in result_types:
+                res = result_types[done]
+                if res == 'FAILED':
+                    log_input = input('\nPlease enter the error and the description of the log: (Ex:log:211 Error Bitbake)\n')
+                    test_result.update({testcase_id: {'status': res, 'log': log_input}})
+                else:
+                    test_result.update({testcase_id: {'status': res}})
+                break
+            print('Invalid input!')
+        return test_result
+
+    def _create_write_dir(self):
+        basepath = os.environ['BUILDDIR']
+        self.write_dir = basepath + '/tmp/log/manual/'
+
+    def run_test(self, file):
+        self._get_testcases(file)
+        self._create_config()
+        self._create_result_id()
+        self._create_write_dir()
+        test_results = {}
+        print('\nTotal number of test cases in this test suite: %s\n' % len(self.jdata))
+        for i in range(len(self.jdata)):
+            test_result = self._execute_test_steps(i)
+            test_results.update(test_result)
+        return self.configuration, self.result_id, self.write_dir, test_results
+
+def manualexecution(args, logger):
+    testrunner = ManualTestRunner()
+    get_configuration, get_result_id, get_write_dir, get_test_results = testrunner.run_test(args.file)
+    resultjsonhelper = OETestResultJSONHelper()
+    resultjsonhelper.dump_testresult_file(get_write_dir, get_configuration, get_result_id,
+                                          get_test_results)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('manualexecution', help='helper script for populating results during manual test execution.',
+                                         description='helper script for populating results during manual test execution. You can find the manual test case JSON files in meta/lib/oeqa/manual/',
+                                         group='manualexecution')
+    parser_build.set_defaults(func=manualexecution)
+    parser_build.add_argument('file', help='specify the path to the manual test case JSON file. Note: please use \"\" to encapsulate the file path.')
\ No newline at end of file
diff --git a/scripts/resulttool b/scripts/resulttool
index ebb5fc8..13430e1 100755
--- a/scripts/resulttool
+++ b/scripts/resulttool
@@ -17,6 +17,11 @@
 # To perform regression file analysis, execute the below
 #     $ resulttool regression-file <base_result_file> <target_result_file>
 #
+# To execute manual test cases, execute the below
+#     $ resulttool manualexecution <manualjsonfile>
+#
+# By default, testresults.json for manualexecution is stored in <build>/tmp/log/manual/
+#
 # Copyright (c) 2019, Intel Corporation.
 #
 # This program is free software; you can redistribute it and/or modify it
@@ -42,6 +47,7 @@ import resulttool.merge
 import resulttool.store
 import resulttool.regression
 import resulttool.report
+import resulttool.manualexecution
 logger = scriptutils.logger_create('resulttool')
 
 def _validate_user_input_arguments(args):
@@ -58,6 +64,8 @@ def main():
     parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
     subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
     subparsers.required = True
+    subparsers.add_subparser_group('manualexecution', 'manual testcases', 300)
+    resulttool.manualexecution.register_commands(subparsers)
     subparsers.add_subparser_group('setup', 'setup', 200)
     resulttool.merge.register_commands(subparsers)
     resulttool.store.register_commands(subparsers)
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-14  5:50 [PATCH 0/2 v7] test-case-mgmt Yeoh Ee Peng
  2019-02-14  5:50 ` [PATCH 1/2 v7] resulttool: enable merge, store, report and regression analysis Yeoh Ee Peng
  2019-02-14  5:50 ` [PATCH 2/2 v7] scripts/resulttool: enable manual execution and result creation Yeoh Ee Peng
@ 2019-02-17 16:09 ` Richard Purdie
  2019-02-17 17:54   ` Richard Purdie
                     ` (2 more replies)
  2 siblings, 3 replies; 18+ messages in thread
From: Richard Purdie @ 2019-02-17 16:09 UTC (permalink / raw)
  To: Yeoh Ee Peng, openembedded-core

On Thu, 2019-02-14 at 13:50 +0800, Yeoh Ee Peng wrote:
> v1:
>   Face key error from oe-git-archive
>   Undesirable behavior when storing to multiple git branch
> 
> v2: 
>   Include fix for oe-git-archive
>   Include fix for store result to multiple git branch
>   Improve git commit message   
> 
> v3:
>   Enhance fix for oe-git-archive by using exception catch to
>   improve code readability and easy to understand
> 
> v4:
>   Add new features, merge result files & regression analysis 
>   Add selftest to merge, store, report and regression functionalities
>   Revise codebase for pythonic
>   
> v5:
>   Add required files for selftest testing store
>   
> v6:
>   Add regression for directory and git repository
>   Enable regression pairing base set to multiple target sets 
>   Revise selftest testing for regression
>   
> v7: 
>   Optimize regression computation for ptest results
>   Rename entry point script to resulttool
> 
> Mazliana (1):
>   scripts/resulttool: enable manual execution and result creation
> 
> Yeoh Ee Peng (1):
>   resulttool: enable merge, store, report and regression analysis

Hi Ee Peng,

Thanks for working on this, it does get better each iteration. I've
been struggling a little to explain what we need to do to finish this
off. Firstly I wanted to give some feedback on some general python
tips:

a) We can't use subprocess.run() as it's a python 3.6 feature and we
have autobuilder workers with 3.5. This led to failures like: 
https://autobuilder.yoctoproject.org/typhoon/#/builders/56/builds/242
We can use check_call or other functions instead.
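
For illustration, a 3.5-safe sketch (the git commands here are made up
for the example; check_call() and check_output() long predate 3.5):

    import subprocess

    # Raises CalledProcessError on a non-zero exit, like run(..., check=True)
    subprocess.check_call(['git', 'fetch', '--all'])

    # 3.5-safe capture of output where run(..., stdout=PIPE) was used
    head = subprocess.check_output(['git', 'rev-parse', 'HEAD']).decode().strip()
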

b) I'd not recommend using "file" as a variable name in python as it
shadows a built-in name, similarly "dict" (in resultutils.py).

c) get_dict_value() is something we can probably avoid needing if we
use the .get() methods of dicts (you can specify a value to return if a
value isn't present).
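
A minimal sketch of point (c) (the keys shown are just examples):

    configuration = {'MACHINE': 'qemux86-64'}
    machine = configuration.get('MACHINE', 'unknown')  # 'qemux86-64'
    distro = configuration.get('DISTRO', 'unknown')    # 'unknown', no KeyError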

I started to experiment with the tool to try and get it to follow the
workflow we need with the autobuilder QA process. Right now I'm heavily
focusing on what we need it to do to generate reports from the
autobuilder, to the extent that I'm ignoring most other workflows.

The reason for this is that I want to get it merged and use this to run
2.7 M3 testing on the autobuilder. The other workflows can be added
if/as/when we find we have need of them.

I ended up making a few changes to alter the tool to do the things I
think we need it to and to improve its output/usability. I'll send out
a separate patch with my changes so far. I've tried to summarise some
of the reasoning here:

* Rename resultsutils -> resultutils to match the resultstool ->
resulttool rename

* Formalised the handling of "file_name" to "TESTSERIES", which the code
will now add into the json configuration data if it's not present, based
on the directory name (a rough sketch of the idea follows this list).

* When we don't have failed test cases, print something saying so
instead of an empty table

* Tweak the table headers in the report to be more readable (reference
"Test Series" instead of file_id and ID instead of results_id)

* Improve/simplify the max string length handling

* Merge the counts and percentage data into one table in the report
since printing two reports of the same data confuses the user

* Removed the confusing header in the regression report

* Show matches, then regressions, then unmatched runs in the regression
report, also remove chatty, unneeded output

* Try harder to "pair" up matching configurations to reduce noise in
the regression report

* Abstracted the "mapping" table concept used for pairing in the
regression code to general code in resultutils

* Created multiple mappings for results analysis, results storage and
'flattening' results data in a merge

* Simplify the merge command to take a source and a destination,
letting the destination be a directory or a file, removing the need for
an output directory parameter

* Add the 'IMAGE_PKGTYPE' and 'DISTRO' config options to the regression
mappings

* Have the store command place the testresults files in a layout from
the mapping, making commits into the git repo for results storage more
useful for simple comparison purposes

* Set the oe-git-archive tag format appropriately for oeqa results
storage (and simplify the commit messages closer to their defaults)

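As an illustration of the TESTSERIES handling mentioned above, a rough
sketch of the idea (the function and argument names here are my
assumptions, not the actual resultutils code):

    import os

    def append_test_series(configuration, results_path):
        # Derive TESTSERIES from the directory holding the results file
        # when the json configuration data doesn't already carry it
        if 'TESTSERIES' not in configuration:
            configuration['TESTSERIES'] = os.path.basename(os.path.dirname(results_path))

And the simplified merge command described above would be invoked along
the lines of (exact argument handling as per the final help text):

    $ resulttool merge path/to/new/results path/to/merged/testresults.json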

Despite my changes there are things that still need to be done.
Essential things which need to happen before this code merges:

* oe-git-archive is importing using the commit/branch of the current 
  repo, not the data in the results file.

* Fix the -t option to merge command

* Audit the command option help

* Revisit and redo the way the git branch handling is happening. We 
  really want to model how oe-build-perf-report handles git repos for 
  comparisons (see the sketch after this list):
  - It's able to query data from git repos without changing the current 
    working branch, 
  - it can search on tag formats to find comparison data
* Add ptest summary to the report command

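As a sketch of the oe-build-perf-report style of git querying described
above (the file path and tag pattern are assumptions for illustration):

    import subprocess

    def read_results_at_rev(repo_dir, rev, path='testresults.json'):
        # 'git show <rev>:<path>' reads a blob from any branch or tag
        # without changing the current working branch
        return subprocess.check_output(['git', 'show', '%s:%s' % (rev, path)],
                                       cwd=repo_dir).decode('utf-8')

    def find_comparison_tags(repo_dir, pattern):
        # Search on a tag format, e.g. pattern='*/*/*', to find comparison data
        out = subprocess.check_output(['git', 'tag', '-l', pattern], cwd=repo_dir)
        return out.decode('utf-8').split()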

Things which may be "nice to have" which can come in the future:

* Make the percentage vs. count in the report a commandline option? 
  (not sure but I wondered if that would be better)

* Add ptest sub-command to extract log data

* Generate HTML report

* Generate graphical ptest result charts


I'd be interested in your feedback on my changes and hope you agree
with them! I'll continue to work on some of the above items as I'd like
to get this merged sooner rather than later. If you're going to work on
any of them, let me know first. I'll try and keep 
http://git.yoctoproject.org/clean/cgit.cgi/poky-contrib/commit/?h=rpurdie/t222
up to date with my changes.

Cheers,

Richard



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-17 16:09 ` [PATCH 0/2 v7] test-case-mgmt Richard Purdie
@ 2019-02-17 17:54   ` Richard Purdie
  2019-02-17 22:45     ` Richard Purdie
  2019-02-18  1:28   ` Yeoh, Ee Peng
  2019-02-18  8:33   ` Yeoh, Ee Peng
  2 siblings, 1 reply; 18+ messages in thread
From: Richard Purdie @ 2019-02-17 17:54 UTC (permalink / raw)
  To: Yeoh Ee Peng, openembedded-core

> Despite my changes there are things that still need to be done.
> Essential things which need to happen before this code merges:
> 
> * oe-git-archive is importing using the commit/branch of the current 
>   repo, not the data in the results file.
> 
> * Fix the -t option to merge command

Got rid of this for now; we can add it back as a "nice to have" later
if we need it.

> * Audit the command option help

Done on my branch.

> * Revisit and redo the way the git branch handling is happening. We 
>   really want to model how oe-build-perf-report handles git repos
> for 
>   comparisons:
>   - Its able to query data from git repos without changing the
> current 
>     working branch, 
>   - it can search on tag formats to find comparison data
> 
> * Add ptest summary to the report command

Done on my branch.

Cheers,

Richard



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-17 17:54   ` Richard Purdie
@ 2019-02-17 22:45     ` Richard Purdie
  2019-02-18  8:09       ` Yeoh, Ee Peng
  2019-02-20  6:27       ` Yeoh, Ee Peng
  0 siblings, 2 replies; 18+ messages in thread
From: Richard Purdie @ 2019-02-17 22:45 UTC (permalink / raw)
  To: Yeoh Ee Peng, openembedded-core

On Sun, 2019-02-17 at 17:54 +0000, Richard Purdie wrote:
> > Despite my changes there are things that still need to be done.
> > Essential things which need to happen before this code merges:
> > 
> > * oe-git-archive is importing using the commit/branch of the
> > current 
> >   repo, not the data in the results file.

Also now fixed. I put my patches into master-next too.

With this working, I was able to run something along the lines of:

for D in $1/*; do
    resulttool store $D $2 --allow-empty
done

on the autobuilder's recent results, which led to the creation of this
repository:

http://git.yoctoproject.org/cgit.cgi/yocto-testresults/


> > * Revisit and redo the way the git branch handling is happening.
> > We 
> >   really want to model how oe-build-perf-report handles git repos
> > for 
> >   comparisons:
> >   - Its able to query data from git repos without changing the
> > current 
> >     working branch, 
> >   - it can search on tag formats to find comparison data

Which means we now need to make the git branch functionality of the
report and regression commands compare against the above repo, so we're
a step closer to getting this merged.

Ultimately we'll auto-populate the above repo by having the autobuilder
run a "store" command at the end of its runs.

I have a feeling I may have broken the resulttool selftests so that is
something else which will need to be fixed before anything merges. Time
for me to step away from the keyboard for a bit too.

Cheers,

Richard




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-17 16:09 ` [PATCH 0/2 v7] test-case-mgmt Richard Purdie
  2019-02-17 17:54   ` Richard Purdie
@ 2019-02-18  1:28   ` Yeoh, Ee Peng
  2019-02-18  8:33   ` Yeoh, Ee Peng
  2 siblings, 0 replies; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-18  1:28 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core

Hi RP,

Thank you very much for providing your precious advice; I will definitely look into it. 

Let me look into all the improvements that you have developed, and I will try my best to provide any further improvements needed. 

Best regards,
Yeoh Ee Peng 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-17 22:45     ` Richard Purdie
@ 2019-02-18  8:09       ` Yeoh, Ee Peng
  2019-02-18  9:07         ` Richard Purdie
  2019-02-20  6:27       ` Yeoh, Ee Peng
  1 sibling, 1 reply; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-18  8:09 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core

Hi RP,

Thank you very much again for continuously providing your precious feedback.
Also thank you very much for spending a great amount of time improving this patchset significantly. 
 
I did some testing with the latest commit, "resulttool: Update to use gitarchive library function": 
http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t222&id=b9eecaabe56db5bcafff31e67cdabadc42e2d2e4

I have 2 questions. 
1. For "resulttool regression", currently it compares result id sets without considering the difference in the host distro used to execute the oeselftest. Example: it matched an oeselftest run on a fedora28 host distro with an oeselftest run on an ubuntu18 host distro; is this the expected behavior? 
Match: oeselftest_fedora-28_qemux86-64_20190201181656
       oeselftest_ubuntu-18.04_qemux86-64_20190201175023
Match: oeselftest_fedora-26_qemux86-64_20190131144317
       oeselftest_fedora-26_qemux86-64_20190131144317
Match: oeselftest_ubuntu-18.04_qemux86-64_20190201175023
       oeselftest_fedora-28_qemux86-64_20190201181656
Match: oeselftest_opensuse-42.3_qemux86-64_20190126152612
       oeselftest_opensuse-42.3_qemux86-64_20190126152612

I believe we should include the 'HOST_DISTRO' configuration in the regression_map.  
regression_map = {
-    "oeselftest": ['TEST_TYPE', 'MACHINE'],
+    "oeselftest": ['TEST_TYPE', 'HOST_DISTRO', 'MACHINE'],
     "runtime": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'IMAGE_PKGTYPE', 'DISTRO'],
     "sdk": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE'],
     "sdkext": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE']
 }

After including 'HOST_DISTRO', it was able to perform regression analysis for oeselftest runs with matching host distros.
Match: oeselftest_ubuntu-18.04_qemux86-64_20190201175023
       oeselftest_ubuntu-18.04_qemux86-64_20190201175023
Match: oeselftest_opensuse-42.3_qemux86-64_20190126152612
       oeselftest_opensuse-42.3_qemux86-64_20190126152612
Match: oeselftest_fedora-26_qemux86-64_20190131144317
       oeselftest_fedora-26_qemux86-64_20190131144317
Match: oeselftest_fedora-28_qemux86-64_20190201181656
       oeselftest_fedora-28_qemux86-64_20190201181656

2. For "resulttool store", I had noticed that it will now generally stored testresults.json in a meaningful file directory structure based on the store_map except oeselftest. oeselftest currently store multiple result id set inside oselftest file directory without comprehend the host distro. 

For example, runtime stores testresults.json according to the configured store_map. 
├── oeselftest
│   └── testresults.json
├── runtime
│   ├── poky
│   │   ├── qemuarm
│   │   │   ├── core-image-minimal
│   │   │   │   └── testresults.json
│   │   │   ├── core-image-sato
│   │   │   │   └── testresults.json
│   │   │   └── core-image-sato-sdk
│   │   │       └── testresults.json
│   │   ├── qemuarm64
│   │   │   ├── core-image-minimal
│   │   │   │   └── testresults.json
│   │   │   ├── core-image-sato
│   │   │   │   └── testresults.json
│   │   │   └── core-image-sato-sdk
│   │   │       └── testresults.json

I believe we should again include the 'HOST_DISTRO' configuration in the store_map.  
store_map = {
-    "oeselftest": ['TEST_TYPE'],
+    "oeselftest": ['TEST_TYPE','HOST_DISTRO'],
     "runtime": ['TEST_TYPE', 'DISTRO', 'MACHINE', 'IMAGE_BASENAME'],
     "sdk": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME'],
     "sdkext": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME']

Doing so will store oeselftest results in a more useful directory structure, with the host distro included. 
└── oeselftest
    ├── fedora-26
    │   └── testresults.json
    ├── fedora-28
    │   └── testresults.json
    ├── opensuse-42.3
    │   └── testresults.json
    └── ubuntu-18.04
        └── testresults.json

Please let me know if you have any questions related to the above. 

Best regards,
Yeoh Ee Peng 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-17 16:09 ` [PATCH 0/2 v7] test-case-mgmt Richard Purdie
  2019-02-17 17:54   ` Richard Purdie
  2019-02-18  1:28   ` Yeoh, Ee Peng
@ 2019-02-18  8:33   ` Yeoh, Ee Peng
  2019-02-18  9:17     ` Richard Purdie
  2 siblings, 1 reply; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-18  8:33 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core

Hi RP,

I have a question for "TESTSERIES".
* Formalised the handling of "file_name" to "TESTSERIES" which the code will now add into the json configuration data if its not present, based on the directory name.

May I know why "TESTSERIES" was added as one of the key configurations for regression comparison selection inside the regression_map? 
regression_map = {
    "oeselftest": ['TEST_TYPE', 'MACHINE'],
    "runtime": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'IMAGE_PKGTYPE', 'DISTRO'],
    "sdk": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE'],
    "sdkext": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE', 'SDKMACHINE']
}

Firstly, from the current yocto-testresults repository, I noticed that "TESTSERIES" mostly duplicates "MACHINE", or "MACHINE" & "DISTRO", or "TEST_TYPE" for the selftest case. 

Secondly, since "TESTSERIES" is derived from the name of the source directory being used, could this introduce unexpected complications for regression comparison in the future if the source directory name changes? If the name changed even slightly, for example for runtime_core-image-lsb from "qemuarm-lsb" to "qemuarm_lsb", I believe the regression comparison would no longer be able to pair the result id sets, even though they have the same configurations and are meant to be compared directly. 

Examples: 
"runtime_core-image-minimal_qemuarm_20190215014628": {
        "configuration": {
            "DISTRO": "poky",
            "HOST_DISTRO": "ubuntu-18.04",
            "IMAGE_BASENAME": "core-image-minimal",
            "IMAGE_PKGTYPE": "rpm",
            "LAYERS": {
                "meta": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                },
                "meta-poky": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                },
                "meta-yocto-bsp": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                }
            },
            "MACHINE": "qemuarm",
            "STARTTIME": "20190215014628",
            "TESTSERIES": "qemuarm",
            "TEST_TYPE": "runtime"
        },

"runtime_core-image-lsb_qemuarm_20190215014624": {
        "configuration": {
            "DISTRO": "poky-lsb",
            "HOST_DISTRO": "ubuntu-18.04",
            "IMAGE_BASENAME": "core-image-lsb",
            "IMAGE_PKGTYPE": "rpm",
            "LAYERS": {
                "meta": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                },
                "meta-poky": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                },
                "meta-yocto-bsp": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                }
            },
            "MACHINE": "qemuarm",
            "STARTTIME": "20190215014624",
            "TESTSERIES": "qemuarm-lsb",
            "TEST_TYPE": "runtime"
        },

    "oeselftest_debian-9_qemux86-64_20190215010815": {
        "configuration": {
            "HOST_DISTRO": "debian-9",
            "HOST_NAME": "debian9-ty-2.yocto.io",
            "LAYERS": {
                "meta": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                },
                "meta-poky": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                },
                "meta-selftest": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                },
                "meta-yocto-bsp": {
                    "branch": "master",
                    "commit": "5fa3b5b15229babc9f96606c79436ab83651bf83",
                    "commit_count": 53265
                }
            },
            "MACHINE": "qemux86-64",
            "STARTTIME": "20190215010815",
            "TESTSERIES": "oe-selftest",
            "TEST_TYPE": "oeselftest"
        },

Please let me know if you have any questions about the above. 

Best regards,
Yeoh Ee Peng 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-18  8:09       ` Yeoh, Ee Peng
@ 2019-02-18  9:07         ` Richard Purdie
  2019-02-18  9:20           ` Yeoh, Ee Peng
  0 siblings, 1 reply; 18+ messages in thread
From: Richard Purdie @ 2019-02-18  9:07 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core

Hi Ee Peng,

On Mon, 2019-02-18 at 08:09 +0000, Yeoh, Ee Peng wrote: 
> I did some testing with the latest from resulttool: Update to use
> gitarchive library function. 
> http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t222&id=b9eecaabe56db5bcafff31e67cdabadc42e2d2e4
> 
> I had 2 questions. 
> 1. For "resulttool regression", currently it was comparing result id
> set without comprehending the difference in the host distro used to
> executed the oeselftest. Example: it was matching oeselftest run with
> fedora28 host distro with oeselftest run with ubuntu18 host distro,
> is this the expected behavior? 
> Match: oeselftest_fedora-28_qemux86-64_20190201181656
>        oeselftest_ubuntu-18.04_qemux86-64_20190201175023
> Match: oeselftest_fedora-26_qemux86-64_20190131144317
>        oeselftest_fedora-26_qemux86-64_20190131144317
> Match: oeselftest_ubuntu-18.04_qemux86-64_20190201175023
>        oeselftest_fedora-28_qemux86-64_20190201181656
> Match: oeselftest_opensuse-42.3_qemux86-64_20190126152612
>        oeselftest_opensuse-42.3_qemux86-64_20190126152612

There were two reasons for this:

a) the results of the selftest should be independent of which
HOST_DISTRO they're run on so they can be compared.

b) some builds only have one oe-selftest (a-quick) and some have four
(a-full). In an a-quick build, the HOST_DISTRO would likely therefore
be different between two builds but we still would like the tool to
compare them.

> 2. For "resulttool store", I had noticed that it will now generally
> stored testresults.json in a meaningful file directory structure
> based on the store_map except oeselftest. oeselftest currently store
> multiple result id set inside oselftest file directory without
> comprehend the host distro. 
> 
> For example runtime, store testresult.json with the configured
> store_map. 
> ├── oeselftest
> │   └── testresults.json
> ├── runtime
> │   ├── poky
> │   │   ├── qemuarm
> │   │   │   ├── core-image-minimal
> │   │   │   │   └── testresults.json
> │   │   │   ├── core-image-sato
> │   │   │   │   └── testresults.json
> │   │   │   └── core-image-sato-sdk
> │   │   │       └── testresults.json
> │   │   ├── qemuarm64
> │   │   │   ├── core-image-minimal
> │   │   │   │   └── testresults.json
> │   │   │   ├── core-image-sato
> │   │   │   │   └── testresults.json
> │   │   │   └── core-image-sato-sdk
> │   │   │       └── testresults.json
> 
> I believe that we shall again comprehend the 'HOST_DISTRO'
> configuration inside the store_map.  
> store_map = {
> -    "oeselftest": ['TEST_TYPE'],
> +    "oeselftest": ['TEST_TYPE','HOST_DISTRO'],
>      "runtime": ['TEST_TYPE', 'DISTRO', 'MACHINE', 'IMAGE_BASENAME'],
>      "sdk": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE', 'IMAGE_BASENAME'],
>      "sdkext": ['TEST_TYPE', 'MACHINE', 'SDKMACHINE',
> 'IMAGE_BASENAME']
> 
> Doing so, it will store oeselftest in a more useful file directory
> structure with host distro comprehended. 
> └── oeselftest
>     ├── fedora-26
>     │   └── testresults.json
>     ├── fedora-28
>     │   └── testresults.json
>     ├── opensuse-42.3
>     │   └── testresults.json
>     └── ubuntu-18.04
>         └── testresults.json

The reasoning is the same as the above; it's more useful to allow the
files to be directly compared between different host distros.

Cheers,

Richard





^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-18  8:33   ` Yeoh, Ee Peng
@ 2019-02-18  9:17     ` Richard Purdie
  0 siblings, 0 replies; 18+ messages in thread
From: Richard Purdie @ 2019-02-18  9:17 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core

On Mon, 2019-02-18 at 08:33 +0000, Yeoh, Ee Peng wrote:
> Hi RP,
> 
> I have a question for "TESTSERIES".
> * Formalised the handling of "file_name" to "TESTSERIES" which the
> code will now add into the json configuration data if its not
> present, based on the directory name.
> 
> May I know why was "TESTSERIES" was added as one of the key
> configuration for regression comparison selection inside
> regression_map? 
> regression_map = {
>     "oeselftest": ['TEST_TYPE', 'MACHINE'],
>     "runtime": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME',
> 'MACHINE', 'IMAGE_PKGTYPE', 'DISTRO'],
>     "sdk": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME', 'MACHINE',
> 'SDKMACHINE'],
>     "sdkext": ['TESTSERIES', 'TEST_TYPE', 'IMAGE_BASENAME',
> 'MACHINE', 'SDKMACHINE']
> }
> 
> Firstly, from the current yocto-testresults repository, I noticed
> that "TESTSERIES" was mostly duplicated with "MACHINE", or "MACHINE"
> & "DISTRO", or "TEST_TYPE" for selftest case.

The store_map doesn't use TESTSERIES as a key since I didn't think it
was needed for the git file layout. Particularly for the runtime tests,
it does mean we end up with a larger number of results under the
qemux86* files.

When performing regression analysis it seemed like a useful way to know
which results to compare and lowered the number of inexact matches we
had to make.

> Secondly, since "TESTSERIES" was created based on directory name from
> the source directory being used, will this introduce unexpected
> complication to regression comparison in the future if directory name
> for the source was changed? If directory name was changed even
> slightly, example for runtime_core-image-lsb, if the source directory
> name changed from "qemuarm-lsb" to "qemuarm_lsb", I believe the
> regression comparison will not able to compare the result id set even
> though they were having same configurations and they were meant to be
> compare directly. 

For Yocto QA usage, the directory names are going to come directly from
the autobuilder target names so they should be consistent. I'd be fine
with adding a parameter to control it; right now it does add useful
information we need for the autobuilder, as it makes the results
more accurate (and hints to the user which autobuilder target the
change came from too).

Cheers,

Richard





^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-18  9:07         ` Richard Purdie
@ 2019-02-18  9:20           ` Yeoh, Ee Peng
  2019-02-18 10:12             ` Richard Purdie
  0 siblings, 1 reply; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-18  9:20 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core

Hi Richard,

Thank you for sharing the selftest comparison considerations! 

I agree with you that, at a high level, selftest should be independent of the HOST_DISTRO; it should compare two selftest runs even when the host distros are different. 

But when a build has multiple sets of selftest results, each with a slightly different environment (e.g. host distro), would it be better in that case to compare selftest runs with the same host distro where possible? 

Cheers,
Ee Peng 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-18  9:20           ` Yeoh, Ee Peng
@ 2019-02-18 10:12             ` Richard Purdie
  2019-02-19  1:02               ` Yeoh, Ee Peng
  0 siblings, 1 reply; 18+ messages in thread
From: Richard Purdie @ 2019-02-18 10:12 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core

On Mon, 2019-02-18 at 09:20 +0000, Yeoh, Ee Peng wrote:
> Thank you for sharing on the selftest comparison consideration! 
> 
> I agreed with you that in the high level, selftest should be
> independent of which HOST_DISTRO, it shall compared 2 selftest even
> when the host distro are different. 
> 
> But in the case that the build have multiple set of selftest each
> with slightly different environments (eg. host distro), in that case,
> will it better to compare selftest more closely if possible with same
> host distro used? 

In an ideal world, yes. In reality, trying to do that and making it
conditional would complicate the code for little "real" end difference
though?

Cheers,

Richard



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-18 10:12             ` Richard Purdie
@ 2019-02-19  1:02               ` Yeoh, Ee Peng
  0 siblings, 0 replies; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-19  1:02 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core

RP, 
Noted, thanks. 

Cheers,
Ee Peng 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-17 22:45     ` Richard Purdie
  2019-02-18  8:09       ` Yeoh, Ee Peng
@ 2019-02-20  6:27       ` Yeoh, Ee Peng
  2019-02-20 21:44         ` Richard Purdie
  1 sibling, 1 reply; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-20  6:27 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core; +Cc: Eggleton, Paul

Hi RP,

Thank you very much for all your help and inputs! 
Would you like us to take all the improvements from your branch and merge or squash them with the base patchset, then move forward with the one remaining improvement below? 

> > * Revisit and redo the way the git branch handling is happening.
> > We 
> >   really want to model how oe-build-perf-report handles git repos 
> > for
> >   comparisons:
> >   - Its able to query data from git repos without changing the 
> > current
> >     working branch, 
> >   - it can search on tag formats to find comparison data

Best regards,
Yeoh Ee Peng

-----Original Message-----
From: Richard Purdie [mailto:richard.purdie@linuxfoundation.org] 
Sent: Monday, February 18, 2019 6:46 AM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

On Sun, 2019-02-17 at 17:54 +0000, Richard Purdie wrote:
> > Despite my changes there are things that still need to be done.
> > Essential things which need to happen before this code merges:
> > 
> > * oe-git-archive is importing using the commit/branch of the current
> >   repo, not the data in the results file.

Also now fixed. I put my patches into master-next too.

With this working, I was able to run something along the lines of:

for D in $1/*; do
    resulttool store $D $2 --allow-empty done

on the autobuilder's recent results which lead to the creation of this
repository:

http://git.yoctoproject.org/cgit.cgi/yocto-testresults/


> > * Revisit and redo the way the git branch handling is happening.
> > We 
> >   really want to model how oe-build-perf-report handles git repos 
> > for
> >   comparisons:
> >   - It's able to query data from git repos without changing the 
> > current
> >     working branch, 
> >   - it can search on tag formats to find comparison data

This means we now need to make the git branch functionality of the report and regression commands compare against the above repo, so we're a step closer to getting this merged.
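
For reference, a minimal sketch of that access pattern: read results
straight from tags in the repo above without switching the working
branch. The tag pattern and the testresults.json file name are
assumptions for illustration:

import json
import subprocess

# List tags matching a pattern; 'git for-each-ref' never touches the
# working tree or the current branch.
def find_result_tags(repo, pattern="*"):
    out = subprocess.check_output(
        ["git", "-C", repo, "for-each-ref",
         "--format=%(refname:short)", "refs/tags/" + pattern],
        universal_newlines=True)
    return out.split()

# 'git show <rev>:<path>' reads a blob from the object store, so no
# checkout is needed either.
def load_results(repo, tag, path="testresults.json"):
    data = subprocess.check_output(
        ["git", "-C", repo, "show", "%s:%s" % (tag, path)],
        universal_newlines=True)
    return json.loads(data)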

Ultimately we'll auto-populate the above repo by having the autobuilder run a "store" command at the end of its runs.
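
As a sketch of that auto-population step (only the store command line
above is taken from this thread; the directory layout and helper name
are assumptions):

import subprocess
from pathlib import Path

# Store every per-run results directory into the shared git repo,
# mirroring the shell loop above.
def store_all(results_root, repo):
    for d in sorted(Path(results_root).iterdir()):
        if d.is_dir():
            subprocess.run(
                ["resulttool", "store", str(d), repo, "--allow-empty"],
                check=True)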

I have a feeling I may have broken the resulttool selftests, so that is something else which will need to be fixed before anything merges. Time for me to step away from the keyboard for a bit too.

Cheers,

Richard



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-20  6:27       ` Yeoh, Ee Peng
@ 2019-02-20 21:44         ` Richard Purdie
  2019-02-21  1:19           ` Yeoh, Ee Peng
  2019-02-21  1:24           ` Yeoh, Ee Peng
  0 siblings, 2 replies; 18+ messages in thread
From: Richard Purdie @ 2019-02-20 21:44 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core; +Cc: Eggleton, Paul

Hi Ee Peng,

On Wed, 2019-02-20 at 06:27 +0000, Yeoh, Ee Peng wrote:
> Thank you very much for all your help and inputs! 
> Would you like us to merge all the improvements from your branch, or
> squash them into the base patchset, and move forward with the one
> remaining improvement below? 

I've done some further work on this today, and the good news is I was
able to sort out the git repo handling pieces and fix the test cases.
Two of the test cases I ended up removing, as I've changed the
functionality enough that they'd need to be rewritten.

I've sent out a patch on top of your original work as well as a second
patch to move some functionality into library functions to allow us to
use it from the new code. I think this combination of patches should
now be ready to merge.

There will be fixes and improvements on top of this, e.g. I'd love to
get some html reports and graphs but those are things that come later.

The next step once this is merged is to start storing autobuilder test
result data, generating reports and regression reports automatically
from each test run.

It's great to see this all coming together!

Cheers,

Richard



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-20 21:44         ` Richard Purdie
@ 2019-02-21  1:19           ` Yeoh, Ee Peng
  2019-02-21  1:24           ` Yeoh, Ee Peng
  1 sibling, 0 replies; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-21  1:19 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core; +Cc: Eggleton, Paul

Hi RP,

Noted, thank you once again for your help and inputs! Really glad to hear that resulttool is ready! 
We shall plan for future improvements in html reports and graphs, and also look into further test case development if needed.

Cheers,
Yeoh Ee Peng 

-----Original Message-----
From: Richard Purdie [mailto:richard.purdie@linuxfoundation.org] 
Sent: Thursday, February 21, 2019 5:44 AM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Cc: Burton, Ross <ross.burton@intel.com>; Eggleton, Paul <paul.eggleton@intel.com>
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

Hi Ee Peng,

On Wed, 2019-02-20 at 06:27 +0000, Yeoh, Ee Peng wrote:
> Thank you very much for all your help and inputs! 
> Would you like us to merge all the improvements from your branch, or 
> squash them into the base patchset, and move forward with the one 
> remaining improvement below?

I've done some further work on this today, and the good news is I was able to sort out the git repo handling pieces and fix the test cases.
Two of the test cases I ended up removing, as I've changed the functionality enough that they'd need to be rewritten.

I've sent out a patch on top of your original work as well as a second patch to move some functionality into library functions to allow us to use it from the new code. I think this combination of patches should now be ready to merge.

There will be fixes and improvements on top of this, e.g. I'd love to get some html reports and graphs but those are things that come later.

The next step once this is merged is to start storing autobuilder test result data, generating reports and regression reports automatically from each test run.

It's great to see this all coming together!

Cheers,

Richard


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/2 v7] test-case-mgmt
  2019-02-20 21:44         ` Richard Purdie
  2019-02-21  1:19           ` Yeoh, Ee Peng
@ 2019-02-21  1:24           ` Yeoh, Ee Peng
  1 sibling, 0 replies; 18+ messages in thread
From: Yeoh, Ee Peng @ 2019-02-21  1:24 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core; +Cc: Eggleton, Paul

Hi RP,

Noted, thank you once again for your great help and inputs! Really glad to hear that resulttool is ready! 
We shall plan for future improvements in html reports and graphs, and also look into further test case development if needed.

Cheers,
Yeoh Ee Peng

-----Original Message-----
From: Richard Purdie [mailto:richard.purdie@linuxfoundation.org] 
Sent: Thursday, February 21, 2019 5:44 AM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Cc: Burton, Ross <ross.burton@intel.com>; Eggleton, Paul <paul.eggleton@intel.com>
Subject: Re: [OE-core] [PATCH 0/2 v7] test-case-mgmt

Hi Ee Peng,

On Wed, 2019-02-20 at 06:27 +0000, Yeoh, Ee Peng wrote:
> Thank you very much for all your help and inputs! 
> Would you like us to merge all the improvements from your branch, or 
> squash them into the base patchset, and move forward with the one 
> remaining improvement below?

I've done some further work on this today, and the good news is I was able to sort out the git repo handling pieces and fix the test cases.
Two of the test cases I ended up removing, as I've changed the functionality enough that they'd need to be rewritten.

I've sent out a patch on top of your original work as well as a second patch to move some functionality into library functions to allow us to use it from the new code. I think this combination of patches should now be ready to merge.

There will be fixes and improvements on top of this, e.g. I'd love to get some html reports and graphs but those are things that come later.

The next step once this is merged is to start storing autobuilder test result data, generating reports and regression reports automatically from each test run.

It's great to see this all coming together!

Cheers,

Richard


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2019-02-21  1:24 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-14  5:50 [PATCH 0/2 v7] test-case-mgmt Yeoh Ee Peng
2019-02-14  5:50 ` [PATCH 1/2 v7] resulttool: enable merge, store, report and regression analysis Yeoh Ee Peng
2019-02-14  5:50 ` [PATCH 2/2 v7] scripts/resulttool: enable manual execution and result creation Yeoh Ee Peng
2019-02-17 16:09 ` [PATCH 0/2 v7] test-case-mgmt Richard Purdie
2019-02-17 17:54   ` Richard Purdie
2019-02-17 22:45     ` Richard Purdie
2019-02-18  8:09       ` Yeoh, Ee Peng
2019-02-18  9:07         ` Richard Purdie
2019-02-18  9:20           ` Yeoh, Ee Peng
2019-02-18 10:12             ` Richard Purdie
2019-02-19  1:02               ` Yeoh, Ee Peng
2019-02-20  6:27       ` Yeoh, Ee Peng
2019-02-20 21:44         ` Richard Purdie
2019-02-21  1:19           ` Yeoh, Ee Peng
2019-02-21  1:24           ` Yeoh, Ee Peng
2019-02-18  1:28   ` Yeoh, Ee Peng
2019-02-18  8:33   ` Yeoh, Ee Peng
2019-02-18  9:17     ` Richard Purdie
