* [PATCH 0/3 v3] test-case-mgmt
@ 2019-01-04  6:46 Yeoh Ee Peng
  2019-01-04  6:46 ` [PATCH 1/3 v3] scripts/oe-git-archive: fix non-existent key referencing error Yeoh Ee Peng
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Yeoh Ee Peng @ 2019-01-04  6:46 UTC (permalink / raw)
  To: openembedded-core

v1:
  Faced a key error from oe-git-archive
  Undesirable behavior when storing to multiple git branches

v2:
  Include fix for oe-git-archive
  Include fix for storing results to multiple git branches
  Improve git commit messages

v3:
  Enhance the oe-git-archive fix by using an exception catch,
  improving code readability and making it easier to understand

Mazliana (1):
  scripts/test-case-mgmt: enable manual execution and result creation

Yeoh Ee Peng (2):
  scripts/oe-git-archive: fix non-existent key referencing error
  scripts/test-case-mgmt: store test result and reporting

 scripts/lib/testcasemgmt/__init__.py               |   0
 scripts/lib/testcasemgmt/gitstore.py               | 172 +++++++++++++++++++++
 scripts/lib/testcasemgmt/manualexecution.py        | 142 +++++++++++++++++
 scripts/lib/testcasemgmt/report.py                 | 136 ++++++++++++++++
 scripts/lib/testcasemgmt/store.py                  |  40 +++++
 .../template/test_report_full_text.txt             |  33 ++++
 scripts/oe-git-archive                             |  19 ++-
 scripts/test-case-mgmt                             | 105 +++++++++++++
 8 files changed, 641 insertions(+), 6 deletions(-)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt

-- 
2.7.4




* [PATCH 1/3 v3] scripts/oe-git-archive: fix non-existent key referencing error
  2019-01-04  6:46 [PATCH 0/3 v3] test-case-mgmt Yeoh Ee Peng
@ 2019-01-04  6:46 ` Yeoh Ee Peng
  2019-01-04  6:46 ` [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting Yeoh Ee Peng
  2019-01-04  6:46 ` [PATCH 3/3 v3] scripts/test-case-mgmt: enable manual execution and result creation Yeoh Ee Peng
  2 siblings, 0 replies; 7+ messages in thread
From: Yeoh Ee Peng @ 2019-01-04  6:46 UTC (permalink / raw)
  To: openembedded-core

Without the gitpython package installed, oe-git-archive fails with the
error below, where it references a key that does not exist in the
metadata object.

Traceback (most recent call last):
  File "<poky_dir>/scripts/oe-git-archive", line 271, in <module>
    sys.exit(main())
  File "<poky_dir>/scripts/oe-git-archive", line 229, in main
    'commit_count': metadata['layers']['meta']['commit_count'],
KeyError: 'commit_count'

Fix this error by catching the exception raised when referencing a
non-existent key (based on input provided by Richard Purdie).
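
A minimal doctest-style sketch of the intended behavior (the metadata
dict below is hypothetical):

    >>> metadata = {'hostname': 'builder1', 'layers': {'meta': {'branch': 'master'}}}
    >>> get_nested(metadata, ['layers', 'meta', 'branch'])
    'master'
    >>> get_nested(metadata, ['layers', 'meta', 'commit_count'])
    ''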

[YOCTO# 13082]

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 scripts/oe-git-archive | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/scripts/oe-git-archive b/scripts/oe-git-archive
index ab19cb9..913291a 100755
--- a/scripts/oe-git-archive
+++ b/scripts/oe-git-archive
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
 #
 # Helper script for committing data to git and pushing upstream
 #
@@ -208,6 +208,13 @@ def parse_args(argv):
                         help="Data to commit")
     return parser.parse_args(argv)
 
+def get_nested(d, list_of_keys):
+    try:
+        for k in list_of_keys:
+            d = d[k]
+        return d
+    except KeyError:
+        return ""
 
 def main(argv=None):
     """Script entry point"""
@@ -223,11 +230,11 @@ def main(argv=None):
 
         # Get keywords to be used in tag and branch names and messages
         metadata = metadata_from_bb()
-        keywords = {'hostname': metadata['hostname'],
-                    'branch': metadata['layers']['meta']['branch'],
-                    'commit': metadata['layers']['meta']['commit'],
-                    'commit_count': metadata['layers']['meta']['commit_count'],
-                    'machine': metadata['config']['MACHINE']}
+        keywords = {'hostname': get_nested(metadata, ['hostname']),
+                    'branch': get_nested(metadata, ['layers', 'meta', 'branch']),
+                    'commit': get_nested(metadata, ['layers', 'meta', 'commit']),
+                    'commit_count': get_nested(metadata, ['layers', 'meta', 'commit_count']),
+                    'machine': get_nested(metadata, ['config', 'MACHINE'])}
 
         # Expand strings early in order to avoid getting into inconsistent
         # state (e.g. no tag even if data was committed)
-- 
2.7.4




* [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting
  2019-01-04  6:46 [PATCH 0/3 v3] test-case-mgmt Yeoh Ee Peng
  2019-01-04  6:46 ` [PATCH 1/3 v3] scripts/oe-git-archive: fix non-existent key referencing error Yeoh Ee Peng
@ 2019-01-04  6:46 ` Yeoh Ee Peng
  2019-01-21 14:25   ` Richard Purdie
  2019-01-04  6:46 ` [PATCH 3/3 v3] scripts/test-case-mgmt: enable manual execution and result creation Yeoh Ee Peng
  2 siblings, 1 reply; 7+ messages in thread
From: Yeoh Ee Peng @ 2019-01-04  6:46 UTC (permalink / raw)
  To: openembedded-core

These scripts were developed as an alternative test case management
tool to Testopia. Using these scripts, the user can manage the
testresults.json files generated by oeqa automated tests. Using the
"store" operation, the user can store multiple groups of test results,
each in an individual git branch. Within each git branch, the user can
store multiple testresults.json files under different directories
(e.g. directories categorized by selftest-<distro>,
runtime-<image>-<machine>). Then, using the "report" operation, the
user can view a test result summary for all stored testresults.json
files, grouped by directory and test configuration.

The "report" operation expects the testresults.json file to use the
JSON format below.
{
    "<testresult_1>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
    ...
    "<testresult_n>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
}
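
As an illustration of consuming this format, here is a minimal
standalone sketch in the spirit of the "report" logic in report.py
(the file path is hypothetical):

    import json

    with open('testresults.json') as f:
        testresults = json.load(f)
    for result_id, data in testresults.items():
        # Tally the status of every test case in this result set.
        counts = {'PASSED': 0, 'FAILED': 0, 'ERROR': 0, 'SKIPPED': 0}
        for testcase, detail in data['result'].items():
            counts[detail['status']] += 1
        print(result_id, data['configuration'], counts)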

To use these scripts, first source the OE build environment, then run
the entry-point script for help:
    $ test-case-mgmt

To store test results from oeqa automated tests, execute:
    $ test-case-mgmt store <source_dir> <git_branch>
By default, test results are stored in <top_dir>/testresults.

To store test results from oeqa automated tests under a specific
sub-directory, execute:
    $ test-case-mgmt store <source_dir> <git_branch> -s <sub_directory>

To view the test report, execute:
    $ test-case-mgmt report <git_branch>

These scripts depend on scripts/oe-git-archive, which fails if the
gitpython package is not installed. Refer to [YOCTO# 13082] for more
detail.

[YOCTO# 12654]

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 scripts/lib/testcasemgmt/__init__.py               |   0
 scripts/lib/testcasemgmt/gitstore.py               | 172 +++++++++++++++++++++
 scripts/lib/testcasemgmt/report.py                 | 136 ++++++++++++++++
 scripts/lib/testcasemgmt/store.py                  |  40 +++++
 .../template/test_report_full_text.txt             |  33 ++++
 scripts/test-case-mgmt                             |  96 ++++++++++++
 6 files changed, 477 insertions(+)
 create mode 100644 scripts/lib/testcasemgmt/__init__.py
 create mode 100644 scripts/lib/testcasemgmt/gitstore.py
 create mode 100644 scripts/lib/testcasemgmt/report.py
 create mode 100644 scripts/lib/testcasemgmt/store.py
 create mode 100644 scripts/lib/testcasemgmt/template/test_report_full_text.txt
 create mode 100755 scripts/test-case-mgmt

diff --git a/scripts/lib/testcasemgmt/__init__.py b/scripts/lib/testcasemgmt/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/scripts/lib/testcasemgmt/gitstore.py b/scripts/lib/testcasemgmt/gitstore.py
new file mode 100644
index 0000000..19ff28f
--- /dev/null
+++ b/scripts/lib/testcasemgmt/gitstore.py
@@ -0,0 +1,172 @@
+# test case management tool - store test result & log to git repository
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import tempfile
+import os
+import subprocess
+import shutil
+import scriptpath
+scriptpath.add_bitbake_lib_path()
+scriptpath.add_oe_lib_path()
+from oeqa.utils.git import GitRepo, GitError
+
+class GitStore(object):
+
+    def __init__(self, git_dir, git_branch):
+        self.git_dir = git_dir
+        self.git_branch = git_branch
+
+    def _git_init(self):
+        return GitRepo(self.git_dir, is_topdir=True)
+
+    def _run_git_cmd(self, repo, cmd):
+        try:
+            output = repo.run_cmd(cmd)
+            return True, output
+        except GitError:
+            return False, None
+
+    def check_if_git_dir_exist(self, logger):
+        if not os.path.exists('%s/.git' % self.git_dir):
+            logger.debug('Could not find destination git directory: %s' % self.git_dir)
+            return False
+        logger.debug('Found destination git directory: %s' % self.git_dir)
+        return True
+
+    def checkout_git_dir(self, logger):
+        repo = self._git_init()
+        cmd = 'checkout %s' % self.git_branch
+        (status, output) = self._run_git_cmd(repo, cmd)
+        if not status:
+            logger.debug('Could not find git branch: %s' % self.git_branch)
+            return False
+        logger.debug('Found git branch: %s' % self.git_branch)
+        return status
+
+    def _check_if_need_sub_dir(self, logger, git_sub_dir):
+        if len(git_sub_dir) > 0:
+            logger.debug('Need to store into sub dir: %s' % git_sub_dir)
+            return True
+        logger.debug('No need to store into sub dir')
+        return False
+
+    def _check_if_sub_dir_exist(self, logger, git_sub_dir):
+        if os.path.exists(os.path.join(self.git_dir, git_sub_dir)):
+            logger.debug('Found existing sub directory: %s' % os.path.join(self.git_dir, git_sub_dir))
+            return True
+        logger.debug('Could not find existing sub directory: %s' % os.path.join(self.git_dir, git_sub_dir))
+        return False
+
+    def _check_if_testresults_file_exist(self, logger, file_name):
+        if os.path.exists(os.path.join(self.git_dir, file_name)):
+            logger.debug('Found existing %s file inside: %s' % (file_name, self.git_dir))
+            return True
+        logger.debug('Could not find %s file inside: %s' % (file_name, self.git_dir))
+        return False
+
+    def _check_if_need_overwrite_existing(self, logger, overwrite_result):
+        if overwrite_result:
+            logger.debug('Overwriting existing testresult')
+        else:
+            logger.error('Skipped storing test result as it already exists. '
+                         'Specify the overwrite argument if you wish to delete the existing testresult and store again.')
+        return overwrite_result
+
+    def _create_temporary_workspace_dir(self):
+        return tempfile.mkdtemp(prefix='testresultlog.')
+
+    def _remove_temporary_workspace_dir(self, workspace_dir):
+        return subprocess.run(["rm", "-rf",  workspace_dir])
+
+    def _oe_copy_files(self, logger, source_dir, destination_dir):
+        from oe.path import copytree
+        if os.path.exists(source_dir):
+            logger.debug('Copying test result from %s to %s' % (source_dir, destination_dir))
+            copytree(source_dir, destination_dir)
+        else:
+            logger.error('Could not find the source directory: %s' % source_dir)
+
+    def _copy_files(self, logger, source_dir, destination_dir, copy_ignore=None):
+        from shutil import copytree
+        if os.path.exists(source_dir):
+            logger.debug('Copying test result from %s to %s' % (source_dir, destination_dir))
+            copytree(source_dir, destination_dir, ignore=copy_ignore)
+        else:
+            logger.error('Could not find the source directory: %s' % source_dir)
+
+    def _get_commit_subject_and_body(self, git_sub_dir):
+        commit_msg_subject = 'Store %s from {hostname}' % os.path.join(self.git_dir, git_sub_dir)
+        commit_msg_body = 'git dir: %s\nsub dir list: %s\nhostname: {hostname}' % (self.git_dir, git_sub_dir)
+        return commit_msg_subject, commit_msg_body
+
+    def _store_files_to_git(self, logger, file_dir, commit_msg_subject, commit_msg_body):
+        logger.debug('Storing test result into git repository (%s) and branch (%s)'
+                     % (self.git_dir, self.git_branch))
+        return subprocess.run(["oe-git-archive",
+                               file_dir,
+                               "-g", self.git_dir,
+                               "-b", self.git_branch,
+                               "--commit-msg-subject", commit_msg_subject,
+                               "--commit-msg-body", commit_msg_body])
+
+    def _store_files_to_new_git(self, logger, source_dir, git_sub_dir):
+        logger.debug('Could not find destination git directory (%s) or git branch (%s)' %
+                     (self.git_dir, self.git_branch))
+        logger.debug('Storing files to new git or branch')
+        dest_top_dir = self._create_temporary_workspace_dir()
+        dest_sub_dir = os.path.join(dest_top_dir, git_sub_dir)
+        self._oe_copy_files(logger, source_dir, dest_sub_dir)
+        commit_msg_subject, commit_msg_body = self._get_commit_subject_and_body(git_sub_dir)
+        self._store_files_to_git(logger, dest_top_dir, commit_msg_subject, commit_msg_body)
+        self._remove_temporary_workspace_dir(dest_top_dir)
+
+    def _store_files_into_sub_dir_of_existing_git(self, logger, source_dir, git_sub_dir):
+        from shutil import ignore_patterns
+        logger.debug('Storing files to existing git with sub directory')
+        dest_ori_dir = self._create_temporary_workspace_dir()
+        dest_top_dir = os.path.join(dest_ori_dir, 'top_dir')
+        self._copy_files(logger, self.git_dir, dest_top_dir, copy_ignore=ignore_patterns('.git'))
+        dest_sub_dir = os.path.join(dest_top_dir, git_sub_dir)
+        self._oe_copy_files(logger, source_dir, dest_sub_dir)
+        commit_msg_subject, commit_msg_body = self._get_commit_subject_and_body(git_sub_dir)
+        self._store_files_to_git(logger, dest_top_dir, commit_msg_subject, commit_msg_body)
+        self._remove_temporary_workspace_dir(dest_ori_dir)
+
+    def _store_files_into_existing_git(self, logger, source_dir):
+        from shutil import ignore_patterns
+        logger.debug('Storing files to existing git without sub directory')
+        dest_ori_dir = self._create_temporary_workspace_dir()
+        dest_top_dir = os.path.join(dest_ori_dir, 'top_dir')
+        self._copy_files(logger, self.git_dir, dest_top_dir, copy_ignore=ignore_patterns('.git'))
+        self._oe_copy_files(logger, source_dir, dest_top_dir)
+        commit_msg_subject, commit_msg_body = self._get_commit_subject_and_body('')
+        self._store_files_to_git(logger, dest_top_dir, commit_msg_subject, commit_msg_body)
+        self._remove_temporary_workspace_dir(dest_ori_dir)
+
+    def store_test_result(self, logger, source_dir, git_sub_dir, overwrite_result):
+        if self.check_if_git_dir_exist(logger) and self.checkout_git_dir(logger):
+            if self._check_if_need_sub_dir(logger, git_sub_dir):
+                if self._check_if_sub_dir_exist(logger, git_sub_dir):
+                    if self._check_if_need_overwrite_existing(logger, overwrite_result):
+                        shutil.rmtree(os.path.join(self.git_dir, git_sub_dir))
+                        self._store_files_into_sub_dir_of_existing_git(logger, source_dir, git_sub_dir)
+                else:
+                    self._store_files_into_sub_dir_of_existing_git(logger, source_dir, git_sub_dir)
+            else:
+                if self._check_if_testresults_file_exist(logger, 'testresults.json'):
+                    if self._check_if_need_overwrite_existing(logger, overwrite_result):
+                        self._store_files_into_existing_git(logger, source_dir)
+                else:
+                    self._store_files_into_existing_git(logger, source_dir)
+        else:
+            self._store_files_to_new_git(logger, source_dir, git_sub_dir)
diff --git a/scripts/lib/testcasemgmt/report.py b/scripts/lib/testcasemgmt/report.py
new file mode 100644
index 0000000..7c9c440
--- /dev/null
+++ b/scripts/lib/testcasemgmt/report.py
@@ -0,0 +1,136 @@
+import os
+import glob
+import json
+from testcasemgmt.gitstore import GitStore
+
+class TextTestReport(object):
+
+    def _get_test_result_files(self, git_dir, excludes, test_result_file):
+        testresults = []
+        for root, dirs, files in os.walk(git_dir, topdown=True):
+            [dirs.remove(d) for d in list(dirs) if d in excludes]
+            for name in files:
+                if name == test_result_file:
+                    testresults.append(os.path.join(root, name))
+        return testresults
+
+    def _load_json_test_results(self, file):
+        if os.path.exists(file):
+            with open(file, "r") as f:
+                return json.load(f)
+        else:
+            return None
+
+    def _map_raw_test_result_to_predefined_list(self, testresult):
+        passed_list = ['PASSED', 'passed']
+        failed_list = ['FAILED', 'failed', 'ERROR', 'error']
+        skipped_list = ['SKIPPED', 'skipped']
+        test_result = {'passed': 0, 'failed': 0, 'skipped': 0, 'failed_testcases': []}
+
+        result = testresult["result"]
+        for testcase in result.keys():
+            test_status = result[testcase]["status"]
+            if test_status in passed_list:
+                test_result['passed'] += 1
+            elif test_status in failed_list:
+                test_result['failed'] += 1
+                test_result['failed_testcases'].append(testcase)
+            elif test_status in skipped_list:
+                test_result['skipped'] += 1
+        return test_result
+
+    def _compute_test_result_percentage(self, test_result):
+        total_tested = test_result['passed'] + test_result['failed'] + test_result['skipped']
+        test_result['passed_percent'] = 0
+        test_result['failed_percent'] = 0
+        test_result['skipped_percent'] = 0
+        if total_tested > 0:
+            test_result['passed_percent'] = format(test_result['passed']/total_tested * 100, '.2f')
+            test_result['failed_percent'] = format(test_result['failed']/total_tested * 100, '.2f')
+            test_result['skipped_percent'] = format(test_result['skipped']/total_tested * 100, '.2f')
+
+    def _convert_test_result_to_string(self, test_result):
+        test_result['passed_percent'] = str(test_result['passed_percent'])
+        test_result['failed_percent'] = str(test_result['failed_percent'])
+        test_result['skipped_percent'] = str(test_result['skipped_percent'])
+        test_result['passed'] = str(test_result['passed'])
+        test_result['failed'] = str(test_result['failed'])
+        test_result['skipped'] = str(test_result['skipped'])
+        if 'idle' in test_result:
+            test_result['idle'] = str(test_result['idle'])
+        if 'idle_percent' in test_result:
+            test_result['idle_percent'] = str(test_result['idle_percent'])
+        if 'complete' in test_result:
+            test_result['complete'] = str(test_result['complete'])
+        if 'complete_percent' in test_result:
+            test_result['complete_percent'] = str(test_result['complete_percent'])
+
+    def _compile_test_result(self, testresult):
+        test_result = self._map_raw_test_result_to_predefined_list(testresult)
+        self._compute_test_result_percentage(test_result)
+        self._convert_test_result_to_string(test_result)
+        return test_result
+
+    def _get_test_component(self, git_dir, file_dir):
+        test_component = 'None'
+        if git_dir != os.path.dirname(file_dir):
+            test_component = file_dir.replace(git_dir + '/', '')
+        return test_component
+
+    def _get_max_string_len(self, test_result_list, key, default_max_len):
+        max_len = default_max_len
+        for test_result in test_result_list:
+            value_len = len(test_result[key])
+            if value_len > max_len:
+                max_len = value_len
+        return max_len
+
+    def _render_text_test_report(self, template_file_name, test_result_list, max_len_component, max_len_config):
+        from jinja2 import Environment, FileSystemLoader
+        script_path = os.path.dirname(os.path.realpath(__file__))
+        file_loader = FileSystemLoader(script_path + '/template')
+        env = Environment(loader=file_loader, trim_blocks=True)
+        template = env.get_template(template_file_name)
+        output = template.render(test_reports=test_result_list,
+                                 max_len_component=max_len_component,
+                                 max_len_config=max_len_config)
+        print('Printing text-based test report:')
+        print(output)
+
+    def view_test_report(self, logger, git_dir):
+        test_result_list = []
+        for test_result_file in self._get_test_result_files(git_dir, ['.git'], 'testresults.json'):
+            logger.debug('Computing test result for test result file: %s' % test_result_file)
+            testresults = self._load_json_test_results(test_result_file)
+            for testresult_key in testresults.keys():
+                test_result = self._compile_test_result(testresults[testresult_key])
+                test_result['test_component'] = self._get_test_component(git_dir, test_result_file)
+                test_result['test_configuration'] = testresult_key
+                test_result['test_component_configuration'] = '%s_%s' % (test_result['test_component'],
+                                                                         test_result['test_configuration'])
+                test_result_list.append(test_result)
+        max_len_component = self._get_max_string_len(test_result_list, 'test_component', len('test_component'))
+        max_len_config = self._get_max_string_len(test_result_list, 'test_configuration', len('test_configuration'))
+        self._render_text_test_report('test_report_full_text.txt', test_result_list, max_len_component, max_len_config)
+
+def report(args, logger):
+    gitstore = GitStore(args.git_dir, args.git_branch)
+    if gitstore.check_if_git_dir_exist(logger):
+        if gitstore.checkout_git_dir(logger):
+            logger.debug('Checkout git branch: %s' % args.git_branch)
+            testreport = TextTestReport()
+            testreport.view_test_report(logger, args.git_dir)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('report', help='report test result summary',
+                                         description='report text-based test result summary from the source git '
+                                                     'directory with the given git branch',
+                                         group='report')
+    parser_build.set_defaults(func=report)
+    parser_build.add_argument('git_branch', help='git branch to be used to compute test summary report')
+    parser_build.add_argument('-d', '--git-dir', default='',
+                              help='(optional) source directory to be used as git repository '
+                                   'to compute test report where default location for source directory '
+                                   'will be <top_dir>/testresults')
diff --git a/scripts/lib/testcasemgmt/store.py b/scripts/lib/testcasemgmt/store.py
new file mode 100644
index 0000000..c80f7be
--- /dev/null
+++ b/scripts/lib/testcasemgmt/store.py
@@ -0,0 +1,40 @@
+# test case management tool - store test result
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+from testcasemgmt.gitstore import GitStore
+
+def store(args, logger):
+    gitstore = GitStore(args.git_dir, args.git_branch)
+    gitstore.store_test_result(logger, args.source_dir, args.git_sub_dir, args.overwrite_result)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('store', help='store test result files into git repository',
+                                         description='store the testresults.json file from the source directory into '
+                                                     'the destination git repository with the given git branch',
+                                         group='store')
+    parser_build.set_defaults(func=store)
+    parser_build.add_argument('source_dir',
+                              help='source directory that contains the test result files to be stored')
+    parser_build.add_argument('git_branch', help='git branch (new or existing) used to store the test result files')
+    parser_build.add_argument('-d', '--git-dir', default='',
+                              help='(optional) destination directory (new or existing) to be used as git repository '
+                                   'to store the test result files from the source directory where '
+                                   'default location for destination directory will be <top_dir>/testresults')
+    parser_build.add_argument('-s', '--git-sub-dir', default='',
+                              help='(optional) additional sub directory (new or existing) under the destination '
+                                   'directory (git-dir) used to hold the test result files; use '
+                                   'this if storing multiple test result files')
+    parser_build.add_argument('-o', '--overwrite-result', action='store_true',
+                              help='(optional) overwrite existing test result file with new file provided')
diff --git a/scripts/lib/testcasemgmt/template/test_report_full_text.txt b/scripts/lib/testcasemgmt/template/test_report_full_text.txt
new file mode 100644
index 0000000..2cec64c
--- /dev/null
+++ b/scripts/lib/testcasemgmt/template/test_report_full_text.txt
@@ -0,0 +1,33 @@
+==============================================================================================================
+Test Report (Count of passed, failed, skipped group by test_component, test_configuration)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'test_component'.ljust(max_len_component) }} | {{ 'test_configuration'.ljust(max_len_config) }} | {{ 'passed'.ljust(10) }} | {{ 'failed'.ljust(10) }} | {{ 'skipped'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_reports |sort(attribute='test_component_configuration') %}
+{{ report.test_component.ljust(max_len_component) }} | {{ report.test_configuration.ljust(max_len_config) }} | {{ report.passed.ljust(10) }} | {{ report.failed.ljust(10) }} | {{ report.skipped.ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Percent of passed, failed, skipped group by test_component, test_configuration)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'test_component'.ljust(max_len_component) }} | {{ 'test_configuration'.ljust(max_len_config) }} | {{ 'passed_%'.ljust(10) }} | {{ 'failed_%'.ljust(10) }} | {{ 'skipped_%'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_reports |sort(attribute='test_component_configuration') %}
+{{ report.test_component.ljust(max_len_component) }} | {{ report.test_configuration.ljust(max_len_config) }} | {{ report.passed_percent.ljust(10) }} | {{ report.failed_percent.ljust(10) }} | {{ report.skipped_percent.ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Failed test cases group by test_component, test_configuration)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_reports |sort(attribute='test_component_configuration') %}
+test_component | test_configuration : {{ report.test_component }} | {{ report.test_configuration }}
+{% for testcase in report.failed_testcases %}
+    {{ testcase }}
+{% endfor %}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
\ No newline at end of file
diff --git a/scripts/test-case-mgmt b/scripts/test-case-mgmt
new file mode 100755
index 0000000..0df305d
--- /dev/null
+++ b/scripts/test-case-mgmt
@@ -0,0 +1,96 @@
+#!/usr/bin/env python3
+#
+# test case management tool - store test result, report test result summary,
+# & manual test execution
+#
+# As part of the initiative to provide LITE version Test Case Management System
+# with command-line to replace Testopia.
+# test-case-mgmt script was designed as part of the helper script for below purpose:
+# 1. To store test result inside git repository
+# 2. To report text-based test result summary
+# 3. (Future) To execute manual test cases
+#
+# To look for help information.
+#    $ test-case-mgmt
+#
+# To store test result, execute the below
+#    $ test-case-mgmt store <source_dir> <git_branch>
+#
+# To report test result summary, execute the below
+#     $ test-case-mgmt report <git_branch>
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+
+import os
+import sys
+import argparse
+import logging
+script_path = os.path.dirname(os.path.realpath(__file__))
+lib_path = script_path + '/lib'
+sys.path = sys.path + [lib_path]
+import argparse_oe
+import scriptutils
+import testcasemgmt.store
+import testcasemgmt.report
+logger = scriptutils.logger_create('test-case-mgmt')
+
+def _validate_user_input_arguments(args):
+    if hasattr(args, "source_dir"):
+        if not os.path.isdir(args.source_dir):
+            logger.error('source_dir argument needs to be a directory : %s' % args.source_dir)
+            return False
+    if hasattr(args, "git_sub_dir"):
+        if '/' in args.git_sub_dir:
+            logger.error('git_sub_dir argument cannot contain / : %s' % args.git_sub_dir)
+            return False
+        if '\\' in r"%r" % args.git_sub_dir:
+            logger.error('git_sub_dir argument cannot contain \\ : %r' % args.git_sub_dir)
+            return False
+    return True
+
+def _set_default_arg_value(args):
+    if hasattr(args, "git_dir"):
+        if args.git_dir == '':
+            base_path = script_path + '/..'
+            args.git_dir = os.path.join(os.path.abspath(base_path), 'testresults')
+        logger.debug('Set git_dir argument: %s' % args.git_dir)
+
+def main():
+    parser = argparse_oe.ArgumentParser(description="OpenEmbedded test case management tool.",
+                                        epilog="Use %(prog)s <subcommand> --help to get help on a specific command")
+    parser.add_argument('-d', '--debug', help='enable debug output', action='store_true')
+    parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
+    subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
+    subparsers.required = True
+    subparsers.add_subparser_group('store', 'store test result', 200)
+    testcasemgmt.store.register_commands(subparsers)
+    subparsers.add_subparser_group('report', 'report test result summary', 100)
+    testcasemgmt.report.register_commands(subparsers)
+    args = parser.parse_args()
+    if args.debug:
+        logger.setLevel(logging.DEBUG)
+    elif args.quiet:
+        logger.setLevel(logging.ERROR)
+
+    if not _validate_user_input_arguments(args):
+        return -1
+    _set_default_arg_value(args)
+
+    try:
+        ret = args.func(args, logger)
+    except argparse_oe.ArgumentUsageError as ae:
+        parser.error_subcommand(ae.message, ae.subcommand)
+    return ret
+
+if __name__ == "__main__":
+    sys.exit(main())
-- 
2.7.4




* [PATCH 3/3 v3] scripts/test-case-mgmt: enable manual execution and result creation
  2019-01-04  6:46 [PATCH 0/3 v3] test-case-mgmt Yeoh Ee Peng
  2019-01-04  6:46 ` [PATCH 1/3 v3] scripts/oe-git-archive: fix non-existent key referencing error Yeoh Ee Peng
  2019-01-04  6:46 ` [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting Yeoh Ee Peng
@ 2019-01-04  6:46 ` Yeoh Ee Peng
  2 siblings, 0 replies; 7+ messages in thread
From: Yeoh Ee Peng @ 2019-01-04  6:46 UTC (permalink / raw)
  To: openembedded-core

From: Mazliana <mazliana.mohamad@intel.com>

Integrate the "manualexecution" operation into the test-case-mgmt
scripts. The manual execution script is a helper that runs all manual
test cases from the command line, presenting the guideline steps and
expected results for each case. The last step asks the user to enter
the execution result, choosing from passed/failed/blocked/skipped
status. The given result is written to testresults.json, including
the error log from the user input and the configuration, if any. The
JSON test result file is created using the OEQA library.
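
Based on the parsing code in manualexecution.py, the manual test case
JSON input is expected to look roughly like the schematic sketch below
(all field values are placeholders):

    [
        {
            "test": {
                "@alias": "<test_module>.<test_suite>.<test_case>",
                "execution": {
                    "1": {"action": "<step 1>", "expected_results": "<expected 1>"},
                    "2": {"action": "<step 2>", "expected_results": "<expected 2>"}
                }
            }
        }
    ]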

The configuration part is keyed in manually by the user. The system
allows the user to specify how many configurations they want to add,
each defined as a name and value pair. From a QA perspective,
"configuration" means the test environment and parameters set up
before testing can be carried out. Examples of configurations: image
used for boot up, host machine distro, poky configurations, etc.

The purpose of adding the configuration is to standardize the output
test result format between automated and manual execution.

To use these scripts, first source the OE build environment, then run
the entry-point script for help:
        $ test-case-mgmt

To execute manual test cases, execute:
        $ test-case-mgmt manualexecution <manualjsonfile>

By default, testresults.json is stored in <build_dir>/tmp/log/manual/.
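
The dumped testresults.json follows the same schema used by the
"store" and "report" operations; a sketch of one possible output
(the configuration names and values here are hypothetical):

    {
        "manual_<test_module>_<timestamp>": {
            "configuration": {
                "IMAGE": "core-image-sato",
                "STARTTIME": "<timestamp>",
                "TEST_TYPE": "<test_module>"
            },
            "result": {
                "<test_module>.<test_suite>.<test_case>": {
                    "status": "FAILED",
                    "log": "log:211 Error Bitbake"
                }
            }
        }
    }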

[YOCTO #12651]

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
---
 scripts/lib/testcasemgmt/manualexecution.py | 142 ++++++++++++++++++++++++++++
 scripts/test-case-mgmt                      |  11 ++-
 2 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py

diff --git a/scripts/lib/testcasemgmt/manualexecution.py b/scripts/lib/testcasemgmt/manualexecution.py
new file mode 100644
index 0000000..8fd378d
--- /dev/null
+++ b/scripts/lib/testcasemgmt/manualexecution.py
@@ -0,0 +1,142 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+
+class ManualTestRunner(object):
+    def __init__(self):
+        self.jdata = ''
+        self.test_module = ''
+        self.test_suite = ''
+        self.test_case = ''
+        self.configuration = ''
+        self.starttime = ''
+        self.result_id = ''
+        self.write_dir = ''
+
+    def _read_json(self, file):
+        self.jdata = json.load(open('%s' % file))
+        self.test_case = []
+        self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+        self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+        for i in range(0, len(self.jdata)):
+            self.test_case.append(self.jdata[i]['test']['@alias'].split('.', 2)[2])
+
+    def _get_input(self, config):
+        while True:
+            output = input('{} = '.format(config))
+            if re.match('^[a-zA-Z0-9_]+$', output):
+                break
+            print('Only alphanumeric and underscore are allowed. Please try again')
+        return output
+
+    def _create_config(self):
+        self.configuration = {}
+        while True:
+            try:
+                conf_total = int(input('\nPlease provide how many configurations you want to save \n'))
+                break
+            except ValueError:
+                print('Invalid input. Please provide input as a number not character.')
+        for i in range(conf_total):
+            print('---------------------------------------------')
+            print('This is configuration #%s' % (i + 1) + '. Please provide configuration name and its value')
+            print('---------------------------------------------')
+            name_conf = self._get_input('Configuration Name')
+            value_conf = self._get_input('Configuration Value')
+            print('---------------------------------------------\n')
+            self.configuration[name_conf.upper()] = value_conf
+        current_datetime = datetime.datetime.now()
+        self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = self.test_module
+
+    def _create_result_id(self):
+        self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+    def _execute_test_steps(self, test_id):
+        test_result = {}
+        testcase_id = self.test_module + '.' + self.test_suite + '.' + self.test_case[test_id]
+        total_steps = len(self.jdata[test_id]['test']['execution'].keys())
+        print('------------------------------------------------------------------------')
+        print('Executing test case: ' + self.test_case[test_id])
+        print('------------------------------------------------------------------------')
+        print('You have total ' + str(total_steps) + ' test steps to be executed.')
+        print('------------------------------------------------------------------------\n')
+
+        for step in range (1, (total_steps + 1)):
+            print('Step %s: ' % step + self.jdata[test_id]['test']['execution']['%s' % step]['action'])
+            print('Expected output: ' + self.jdata[test_id]['test']['execution']['%s' % step]['expected_results'])
+            if step == total_steps:
+                while True:
+                    try:
+                        done = input('\nPlease provide test results: (P)assed/(F)ailed/(B)locked/(S)kipped? \n')
+                        done = done.lower()
+                        if done == 'p':
+                            res = 'PASSED'
+                        elif done == 'f':
+                            res = 'FAILED'
+                            log_input = input('\nPlease enter the error and the description of the log: (Ex:log:211 Error Bitbake)\n')
+                        elif done == 'b':
+                            res = 'BLOCKED'
+                        elif done == 's':
+                            res = 'SKIPPED'
+
+                        if res == 'FAILED':
+                            test_result.update({testcase_id: {'status': '%s' % res, 'log': '%s' % log_input}})
+                        else:
+                            test_result.update({testcase_id: {'status': '%s' % res}})
+                        break
+                    except:
+                        print('Invalid input!')
+            else:
+                done = input('\nPlease press ENTER when you are done to proceed to next step.\n')
+        return test_result
+
+    def _create_write_dir(self):
+        basepath = os.environ['BUILDDIR']
+        self.write_dir = basepath + '/tmp/log/manual/'
+
+    def run_test(self, file):
+        self._read_json(file)
+        self._create_config()
+        self._create_result_id()
+        self._create_write_dir()
+        test_results = {}
+        print('\nTotal number of test cases in this test suite: ' + '%s\n' % len(self.jdata))
+        for i in range(0, len(self.jdata)):
+            test_result = self._execute_test_steps(i)
+            test_results.update(test_result)
+        return self.configuration, self.result_id, self.write_dir, test_results
+
+def manualexecution(args, logger):
+    testrunner = ManualTestRunner()
+    get_configuration, get_result_id, get_write_dir, get_test_results = testrunner.run_test(args.file)
+    resultjsonhelper = OETestResultJSONHelper()
+    resultjsonhelper.dump_testresult_file(get_write_dir, get_configuration, get_result_id,
+                                          get_test_results)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('manualexecution', help='Helper script for results populating during manual test execution.',
+                                         description='Helper script for results populating during manual test execution. You can find manual test case JSON file in meta/lib/oeqa/manual/',
+                                         group='manualexecution')
+    parser_build.set_defaults(func=manualexecution)
+    parser_build.add_argument('file', help='Specify path to manual test case JSON file. Note: Please use \"\" to encapsulate the file path.')
diff --git a/scripts/test-case-mgmt b/scripts/test-case-mgmt
index 0df305d..5c1d435 100755
--- a/scripts/test-case-mgmt
+++ b/scripts/test-case-mgmt
@@ -8,7 +8,7 @@
 # test-case-mgmt script was designed as part of the helper script for below purpose:
 # 1. To store test result inside git repository
 # 2. To report text-based test result summary
-# 3. (Future) To execute manual test cases
+# 3. To execute manual test cases
 #
 # To look for help information.
 #    $ test-case-mgmt
@@ -19,6 +19,12 @@
 # To report test result summary, execute the below
 #     $ test-case-mgmt report <git_branch>
 #
+# To execute manual test cases, execute the below
+#    $ test-case-mgmt manualexecution <manualjsonfile>
+#
+# By default, testresults.json for manualexecution is stored in <build_dir>/tmp/log/manual/
+#
+#
 # Copyright (c) 2018, Intel Corporation.
 #
 # This program is free software; you can redistribute it and/or modify it
@@ -42,6 +48,7 @@ import argparse_oe
 import scriptutils
 import testcasemgmt.store
 import testcasemgmt.report
+import testcasemgmt.manualexecution
 logger = scriptutils.logger_create('test-case-mgmt')
 
 def _validate_user_input_arguments(args):
@@ -72,6 +79,8 @@ def main():
     parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
     subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
     subparsers.required = True
+    subparsers.add_subparser_group('manualexecution', 'execute manual test cases', 300)
+    testcasemgmt.manualexecution.register_commands(subparsers)
     subparsers.add_subparser_group('store', 'store test result', 200)
     testcasemgmt.store.register_commands(subparsers)
     subparsers.add_subparser_group('report', 'report test result summary', 100)
-- 
2.7.4




* Re: [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting
  2019-01-04  6:46 ` [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting Yeoh Ee Peng
@ 2019-01-21 14:25   ` Richard Purdie
  2019-01-22  9:44     ` Yeoh, Ee Peng
       [not found]     ` <E0805CCB83E6104E80E61FD34E5788AE55DD18A4@PGSMSX110.gar.corp.intel.com>
  0 siblings, 2 replies; 7+ messages in thread
From: Richard Purdie @ 2019-01-21 14:25 UTC (permalink / raw)
  To: Yeoh Ee Peng, openembedded-core; +Cc: Paul Eggleton

On Fri, 2019-01-04 at 14:46 +0800, Yeoh Ee Peng wrote:
> These scripts were developed as an alternative testcase management
> tool to Testopia. Using these scripts, user can manage the
> testresults.json files generated by oeqa automated tests. Using the
> "store" operation, user can store multiple groups of test result each
> into individual git branch. Within each git branch, user can store
> multiple testresults.json files under different directories (eg.
> categorize directory by selftest-<distro>, runtime-<image>-
> <machine>).
> Then, using the "report" operation, user can view the test result
> summary for all available testresults.json files being stored that
> were grouped by directory and test configuration.
>
> This scripts depends on scripts/oe-git-archive where it was
> facing error if gitpython package was not installed. Refer to
> [YOCTO# 13082] for more detail.

Thanks for the patches. These are a lot more readable than the previous
versions and the code quality is much better which in turn helped
review!

I experimented with the code a bit. I'm fine with the manual test
execution piece of this, I do have some questions/concerns with the
result storage/reporting piece though.

What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test
results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per
test run? or ???
- Are branches used for each release series (master, thud, sumo etc?)
Basically, the layout we'd use to import the autobuilder results for
each master run for example remains unclear to me, or how we'd look up
the status of a given commit.

The code doesn't support comparison of two sets of test results (which
tests were added/removed? passed when previously failed? failed when
previously passed?)

The code also doesn't allow investigation of test report "subdata" like
looking at the ptest results, comparing them to previous runs, showing
the logs for passed/failed ptests.

There is also the question of json build performance data.

The idea behind this code is to give us a report which allows us to
decide on the QA state of a given set of testreport data. I'm just not
sure this patch set lets us do that, or gives us a path to allow us to
do that either.

Cheers,

Richard






* Re: [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting
  2019-01-21 14:25   ` Richard Purdie
@ 2019-01-22  9:44     ` Yeoh, Ee Peng
       [not found]     ` <E0805CCB83E6104E80E61FD34E5788AE55DD18A4@PGSMSX110.gar.corp.intel.com>
  1 sibling, 0 replies; 7+ messages in thread
From: Yeoh, Ee Peng @ 2019-01-22  9:44 UTC (permalink / raw)
  To: Richard Purdie, openembedded-core; +Cc: Paul Eggleton

Hi Richard,

After your recent sharing on pythonic style, we revised these scripts in the hope of improving code readability and ease of maintenance. New functionality was also developed following pythonic style.

The latest patches were submitted today at the URLs below.
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278240.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278238.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278239.html

Changes compared to the previous version:
1. Add new features: merging multiple testresults.json files & regression analysis for two specified testresults.json files
2. Add selftests covering the merge, store, report and regression functionality
3. Revise the code style to be more pythonic

Regarding your questions below:
1. What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? or ???
- Are branches used for each release series (master, thud, sumo etc?) Basically, the layout we'd use to import the autobuilder results for each master run for example remains unclear to me, or how we'd look up the status of a given commit.

The target layout is a specific git branch for each commit tested, where the directory layout follows the existing Autobuilder results archive. For example, assuming the store command is executed on the Autobuilder machine that holds the testresults.json files in a predefined directory structure, simply execute: $ resultstool store <source_dir> <git_branch>, where source_dir is the top directory used by the Autobuilder to archive all testresults.json files and git_branch is the QA cycle for the commit under test.

The first execution of "resultstool store" will generate a git repository under the <poky>/<build>/ directory. To update the files to be stored, simply execute $ resultstool store <source_dir> <git_branch> -d <poky>/<build>/<testresults_datetime>.

2. The code doesn't support comparison of two sets of test results (which tests were added/removed? passed when previously failed? failed when previously passed?)

Assuming the results from a particular tested commit were merged into a single file (using the existing "merge" functionality), the user can use the newly added "regression" functionality to compare the result status of two testresults.json files. Based on the configuration data for each result_id set, the comparison logic selects results with the same configuration for comparison. More advanced regression and automation can be developed from the current code base.
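
A rough sketch of that configuration-matching idea (illustrative only, not the actual resultstool code; the function names are made up):

    import json

    def load_results(path):
        with open(path) as f:
            return json.load(f)

    def regression_report(base_file, target_file):
        base, target = load_results(base_file), load_results(target_file)
        # Pair up result sets whose configurations match, then diff statuses.
        for base_id, base_data in base.items():
            for target_id, target_data in target.items():
                if base_data['configuration'] != target_data['configuration']:
                    continue
                for testcase, detail in base_data['result'].items():
                    old = detail['status']
                    new = target_data['result'].get(testcase, {}).get('status', 'REMOVED')
                    if old != new:
                        print('%s vs %s, %s: %s -> %s' % (base_id, target_id, testcase, old, new))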

3. The code also doesn't allow investigation of test report "subdata" like looking at the ptest results, comparing them to previous runs, showing the logs for passed/failed ptests.

There is also the question of json build performance data.

These are not supported as of now; they will need further enhancement.

Please let me know if you have any questions or input. Thank you very much for your sharing and help!

Thanks,
Yeoh Ee Peng 




* Re: [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting
       [not found]     ` <E0805CCB83E6104E80E61FD34E5788AE55DD18A4@PGSMSX110.gar.corp.intel.com>
@ 2019-01-22 10:19       ` Yeoh, Ee Peng
  0 siblings, 0 replies; 7+ messages in thread
From: Yeoh, Ee Peng @ 2019-01-22 10:19 UTC (permalink / raw)
  To: 'Richard Purdie',
	'openembedded-core@lists.openembedded.org'
  Cc: 'Paul Eggleton'

Sorry, I realized that I had missed including the files used by the oe-selftest that tests the store operation.
I have submitted v5 patches that add the required files for oe-selftest -r resultstooltests.

http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278243.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278244.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278245.html

-----Original Message-----
From: Yeoh, Ee Peng 
Sent: Tuesday, January 22, 2019 5:45 PM
To: Richard Purdie <richard.purdie@linuxfoundation.org>; openembedded-core@lists.openembedded.org
Cc: Burton, Ross <ross.burton@intel.com>; Paul Eggleton <paul.eggleton@linux.intel.com>
Subject: RE: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

Hi Richard,

After your recent feedback on writing pythonic code, we revised these scripts in the hope of improving code readability and ease of maintenance. New functionality was also developed following a pythonic style.

The latest patches were submitted today at the URLs below.
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278240.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278238.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278239.html

Changes compared to the previous version:
1. Added new features: merging of multiple testresults.json files, and regression analysis for two specified testresults.json files (see the merge sketch after this list)
2. Added selftests covering the merge, store, report and regression functionality
3. Revised the code style to be more pythonic
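
As a rough sketch of what the merge step amounts to, assuming each testresults.json is a flat map keyed by result_id and that result_ids are unique across the input files (merge_testresults is an illustrative name, not the actual resultstool entry point):

    import json

    def merge_testresults(paths, merged_path):
        # Combine several testresults.json files into a single map keyed by
        # result_id; later files win if a result_id appears more than once.
        merged = {}
        for path in paths:
            with open(path) as f:
                merged.update(json.load(f))
        with open(merged_path, 'w') as f:
            json.dump(merged, f, sort_keys=True, indent=4)

    merge_testresults(['a/testresults.json', 'b/testresults.json'],
                      'merged/testresults.json')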

Regarding your questions below:
1. What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? or ???
- Are branches used for each release series (master, thud, sumo etc?) Basically, the layout we'd use to import the autobuilder results for each master run for example remains unclear to me, or how we'd look up the status of a given commit.

The target layout is a dedicated git branch for each commit tested, with the directory structure based on the existing Autobuilder results archive. For example, assuming the store command is executed on the Autobuilder machine that stores the testresults.json files in their predefined directories, simply execute:

$ resultstool store <source_dir> <git_branch>

where <source_dir> is the top directory used by the Autobuilder to archive all testresults.json files, and <git_branch> identifies the QA cycle for the commit under test.

The first execution of "resultstool store" will create a git repository under the <poky>/<build>/ directory. To update the files to be stored, simply execute:

$ resultstool store <source_dir> <git_branch> -d <poky>/<build>/<testresults_datetime>
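
To make that concrete, a hypothetical store run might produce a branch layout like the following (the branch and directory names here are assumptions, following the selftest-<distro> and runtime-<image>-<machine> naming described in the patch, not output copied from the tool):

    $ resultstool store build/tmp/testresults QA-cycle-2.7-M1

    <poky>/<build>/<testresults_datetime>/   (git repository, branch QA-cycle-2.7-M1)
    |-- selftest-poky/
    |   `-- testresults.json
    |-- runtime-core-image-sato-qemux86/
    |   `-- testresults.json
    `-- runtime-core-image-sato-qemuarm/
        `-- testresults.json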

2. The code doesn't support comparison of two sets of test results (which tests were added/removed? passed when previously failed? failed when previously passed?)

Assuming the results from a particular tested commit were merged into a single file (using the existing "merge" functionality), the user can use the newly added "regression" functionality to compare the result status of two testresults.json files. Based on the configuration data for each result_id, the comparison logic pairs up results that share the same configuration. More advanced regression analysis and automation can be built on top of the current code base.

3. The code also doesn't allow investigation of test report "subdata" like looking at the ptest results, comparing them to previous runs, showing the logs for passed/failed ptests.

There is also the question of json build performance data.

These are not supported as of now and will need further enhancement.

Please let me know if you have any questions or input. Thank you very much for your feedback and help!

Thanks,
Yeoh Ee Peng 



-----Original Message-----
From: Richard Purdie [mailto:richard.purdie@linuxfoundation.org]
Sent: Monday, January 21, 2019 10:26 PM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Cc: Burton, Ross <ross.burton@intel.com>; Paul Eggleton <paul.eggleton@linux.intel.com>
Subject: Re: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

On Fri, 2019-01-04 at 14:46 +0800, Yeoh Ee Peng wrote:
> These scripts were developed as an alternative testcase management 
> tool to Testopia. Using these scripts, user can manage the 
> testresults.json files generated by oeqa automated tests. Using the 
> "store" operation, user can store multiple groups of test result each 
> into individual git branch. Within each git branch, user can store 
> multiple testresults.json files under different directories (eg.
> categorize directory by selftest-<distro>, runtime-<image>- 
> <machine>).
> Then, using the "report" operation, user can view the test result 
> summary for all available testresults.json files being stored that 
> were grouped by directory and test configuration.
>
> This scripts depends on scripts/oe-git-archive where it was facing 
> error if gitpython package was not installed. Refer to [YOCTO# 13082] 
> for more detail.

Thanks for the patches. These are a lot more readable than the previous versions and the code quality is much better, which in turn helped review!

I experimented with the code a bit. I'm fine with the manual test execution piece of this, but I do have some questions/concerns with the result storage/reporting piece.

What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? or ???
- Are branches used for each release series (master, thud, sumo etc?) Basically, the layout we'd use to import the autobuilder results for each master run for example remains unclear to me, or how we'd look up the status of a given commit.

The code doesn't support comparison of two sets of test results (which tests were added/removed? passed when previously failed? failed when previously passed?)

The code also doesn't allow investigation of test report "subdata" like looking at the ptest results, comparing them to previous runs, showing the logs for passed/failed ptests.

There is also the question of json build performance data.

The idea behind this code is to give us a report which allows us to decide on the QA state of a given set of testreport data. I'm just not sure this patch set lets us do that, or gives us a path to allow us to do that either.

Cheers,

Richard




^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2019-01-22 10:19 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-04  6:46 [PATCH 0/3 v3] test-case-mgmt Yeoh Ee Peng
2019-01-04  6:46 ` [PATCH 1/3 v3] scripts/oe-git-archive: fix non-existent key referencing error Yeoh Ee Peng
2019-01-04  6:46 ` [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting Yeoh Ee Peng
2019-01-21 14:25   ` Richard Purdie
2019-01-22  9:44     ` Yeoh, Ee Peng
     [not found]     ` <E0805CCB83E6104E80E61FD34E5788AE55DD18A4@PGSMSX110.gar.corp.intel.com>
2019-01-22 10:19       ` Yeoh, Ee Peng
2019-01-04  6:46 ` [PATCH 3/3 v3] scripts/test-case-mgmt: enable manual execution and result creation Yeoh Ee Peng
