All of lore.kernel.org
* [PATCH 1/4] oeqa/core/runner: write testresult to json files
@ 2018-10-22 10:34 Yeoh Ee Peng
  2018-10-22 10:34 ` [PATCH 2/4] oeqa/selftest/context: " Yeoh Ee Peng
                   ` (3 more replies)
  0 siblings, 4 replies; 21+ messages in thread
From: Yeoh Ee Peng @ 2018-10-22 10:34 UTC (permalink / raw)
  To: openembedded-core

As part of the solution to replace Testopia for storing test results,
OEQA needs to output test results into a single json file, which will
be stored in a git repository by the future test-case-management
tools.

The json testresult file can store more than one set of results, where
each set is uniquely identified by its result_id. A result_id such as
"runtime-qemux86-core-image-sato" denotes a runtime test run against
the qemux86 target machine on the core-image-sato image. The json
testresult file only keeps the latest test content for a given
result_id. It contains the configuration (eg. COMMIT, BRANCH, MACHINE,
IMAGE), the result (eg. PASSED, FAILED, ERROR), the test log, and the
result_id.

Since multiple instances of bitbake may try to write json testresult
to the same destination file, a lockfile alongside the results file
is used to prevent races.

Also, the library class in this patch will be reused by the future
test-case-management tools to write json testresults for manually
executed test cases.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/lib/oeqa/core/runner.py | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
index f1dd080..2243a10 100644
--- a/meta/lib/oeqa/core/runner.py
+++ b/meta/lib/oeqa/core/runner.py
@@ -6,6 +6,7 @@ import time
 import unittest
 import logging
 import re
+import json
 
 from unittest import TextTestResult as _TestResult
 from unittest import TextTestRunner as _TestRunner
@@ -119,8 +120,9 @@ class OETestResult(_TestResult):
         self.successes.append((test, None))
         super(OETestResult, self).addSuccess(test)
 
-    def logDetails(self):
+    def logDetails(self, json_file_dir=None, configuration=None, result_id=None):
         self.tc.logger.info("RESULTS:")
+        result = {}
         for case_name in self.tc._registry['cases']:
             case = self.tc._registry['cases'][case_name]
 
@@ -137,6 +139,11 @@ class OETestResult(_TestResult):
                 t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
 
             self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(), oeid, status, t))
+            result[case.id()] = {'status': status, 'log': log}
+
+        if json_file_dir:
+            tresultjsonhelper = OETestResultJSONHelper()
+            tresultjsonhelper.dump_testresult_file(result_id, result, configuration, json_file_dir)
 
 class OEListTestsResult(object):
     def wasSuccessful(self):
@@ -249,3 +256,33 @@ class OETestRunner(_TestRunner):
             self._list_tests_module(suite)
 
         return OEListTestsResult()
+
+class OETestResultJSONHelper(object):
+
+    testresult_filename = 'testresults.json'
+
+    def _get_existing_testresults_if_available(self, write_dir):
+        testresults = {}
+        file = os.path.join(write_dir, self.testresult_filename)
+        if os.path.exists(file):
+            with open(file, "r") as f:
+                testresults = json.load(f)
+        return testresults
+
+    def _create_json_testresults_string(self, test_results, result_id, test_result, configuration):
+        test_results[result_id] = {'configuration': configuration, 'result': test_result}
+        return json.dumps(test_results, sort_keys=True, indent=4)
+
+    def _write_file(self, write_dir, file_name, file_content):
+        file_path = os.path.join(write_dir, file_name)
+        with open(file_path, 'w') as the_file:
+            the_file.write(file_content)
+
+    def dump_testresult_file(self, result_id, test_result, configuration, write_dir):
+        bb.utils.mkdirhier(write_dir)
+        lf = bb.utils.lockfile(os.path.join(write_dir, 'jsontestresult.lock'))
+        test_results = self._get_existing_testresults_if_available(write_dir)
+        test_results[result_id] = {'configuration': configuration, 'result': test_result}
+        json_testresults = json.dumps(test_results, sort_keys=True, indent=4)
+        self._write_file(write_dir, self.testresult_filename, json_testresults)
+        bb.utils.unlockfile(lf)
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 2/4] oeqa/selftest/context: write testresult to json files
  2018-10-22 10:34 [PATCH 1/4] oeqa/core/runner: write testresult to json files Yeoh Ee Peng
@ 2018-10-22 10:34 ` Yeoh Ee Peng
  2018-10-22 10:34 ` [PATCH 3/4] testimage.bbclass: " Yeoh Ee Peng
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 21+ messages in thread
From: Yeoh Ee Peng @ 2018-10-22 10:34 UTC (permalink / raw)
  To: openembedded-core

As part of the solution to replace Testopia for storing test results,
OEQA selftest needs to output test results into json files, where
these json testresult files will be stored in a git repository by the
future test-case-management tools.

To let multiple instances of bitbake write json testresult to a
single testresult file in a custom directory, the user defines the
variable "OEQA_JSON_RESULT_DIR" as the custom directory for writing
json testresult.
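The directory selection amounts to a simple fallback: use
OEQA_JSON_RESULT_DIR from the bitbake data store when it is set,
otherwise an 'oeqa' directory next to the output log. A minimal sketch,
where a plain dict stands in for self.tc.td:

```python
import os

def get_json_result_dir(td, output_log):
    """Return OEQA_JSON_RESULT_DIR if set, else <log dir>/oeqa."""
    default = os.path.join(os.path.dirname(os.path.abspath(output_log)), 'oeqa')
    return td.get("OEQA_JSON_RESULT_DIR", default)
```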

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/lib/oeqa/selftest/context.py | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/meta/lib/oeqa/selftest/context.py b/meta/lib/oeqa/selftest/context.py
index c78947e..59d4b59 100644
--- a/meta/lib/oeqa/selftest/context.py
+++ b/meta/lib/oeqa/selftest/context.py
@@ -99,8 +99,8 @@ class OESelftestTestContextExecutor(OETestContextExecutor):
         return cases_paths
 
     def _process_args(self, logger, args):
-        args.output_log = '%s-results-%s.log' % (self.name,
-                time.strftime("%Y%m%d%H%M%S"))
+        args.test_start_time = time.strftime("%Y%m%d%H%M%S")
+        args.output_log = '%s-results-%s.log' % (self.name, args.test_start_time)
         args.test_data_file = None
         args.CASES_PATHS = None
 
@@ -204,6 +204,33 @@ class OESelftestTestContextExecutor(OETestContextExecutor):
         self.tc.logger.info("Running bitbake -e to test the configuration is valid/parsable")
         runCmd("bitbake -e")
 
+    def _get_json_result_dir(self, args):
+        json_result_dir = os.path.join(os.path.dirname(os.path.abspath(args.output_log)), 'oeqa')
+        if "OEQA_JSON_RESULT_DIR" in self.tc.td:
+            json_result_dir = self.tc.td["OEQA_JSON_RESULT_DIR"]
+
+        return json_result_dir
+
+    def _get_configuration(self, args):
+        import platform
+        from oeqa.utils.metadata import metadata_from_bb
+        metadata = metadata_from_bb()
+        configuration = {'TEST_TYPE': 'oeselftest',
+                        'START_TIME': args.test_start_time,
+                        'MACHINE': self.tc.td["MACHINE"],
+                        'HOST_DISTRO': platform.linux_distribution(),
+                        'HOST_NAME': metadata['hostname']}
+        layers = metadata['layers']
+        for l in layers:
+            configuration['%s_BRANCH_REV' % os.path.basename(l)] = '%s:%s' % (
+                                                                    metadata['layers'][l]['branch'],
+                                                                    metadata['layers'][l]['commit'])
+        return configuration
+
+    def _get_result_id(self, configuration):
+        distro = '_'.join(configuration['HOST_DISTRO'])
+        return '%s-%s-%s' % (configuration['TEST_TYPE'], distro, configuration['MACHINE'])
+
     def _internal_run(self, logger, args):
         self.module_paths = self._get_cases_paths(
                 self.tc_kwargs['init']['td']['BBPATH'].split(':'))
@@ -220,7 +247,10 @@ class OESelftestTestContextExecutor(OETestContextExecutor):
         else:
             self._pre_run()
             rc = self.tc.runTests(**self.tc_kwargs['run'])
-            rc.logDetails()
+            configuration = self._get_configuration(args)
+            rc.logDetails(self._get_json_result_dir(args),
+                          configuration,
+                          self._get_result_id(configuration))
             rc.logSummary(self.name)
 
         return rc
-- 
2.7.4




* [PATCH 3/4] testimage.bbclass: write testresult to json files
  2018-10-22 10:34 [PATCH 1/4] oeqa/core/runner: write testresult to json files Yeoh Ee Peng
  2018-10-22 10:34 ` [PATCH 2/4] oeqa/selftest/context: " Yeoh Ee Peng
@ 2018-10-22 10:34 ` Yeoh Ee Peng
  2018-10-22 10:34 ` [PATCH 4/4] testsdk.bbclass: " Yeoh Ee Peng
  2018-10-22 22:54 ` [PATCH 1/4] oeqa/core/runner: " Richard Purdie
  3 siblings, 0 replies; 21+ messages in thread
From: Yeoh Ee Peng @ 2018-10-22 10:34 UTC (permalink / raw)
  To: openembedded-core

As part of the solution to replace Testopia for storing test results,
OEQA testimage needs to output test results into json files, where
these json testresult files will be stored in a git repository by the
future test-case-management tools.

To let multiple instances of bitbake write json testresult to a
single testresult file in a custom directory, the user defines the
variable "OEQA_JSON_RESULT_DIR" as the custom directory for writing
json testresult.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/classes/testimage.bbclass | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/meta/classes/testimage.bbclass b/meta/classes/testimage.bbclass
index 2642a72..df91d90 100644
--- a/meta/classes/testimage.bbclass
+++ b/meta/classes/testimage.bbclass
@@ -2,7 +2,7 @@
 #
 # Released under the MIT license (see COPYING.MIT)
 
-
+inherit metadata_scm
 # testimage.bbclass enables testing of qemu images using python unittests.
 # Most of the tests are commands run on target image over ssh.
 # To use it add testimage to global inherit and call your target image with -c testimage
@@ -141,6 +141,30 @@ def testimage_sanity(d):
         bb.fatal('When TEST_TARGET is set to "simpleremote" '
                  'TEST_TARGET_IP and TEST_SERVER_IP are needed too.')
 
+def _get_testimage_configuration(d, test_type, pid, machine):
+    import platform
+    configuration = {'TEST_TYPE': test_type,
+                    'PROCESS_ID': pid,
+                    'MACHINE': machine,
+                    'IMAGE_BASENAME': d.getVar("IMAGE_BASENAME"),
+                    'IMAGE_PKGTYPE': d.getVar("IMAGE_PKGTYPE"),
+                    'HOST_DISTRO': platform.linux_distribution()}
+    layers = (d.getVar("BBLAYERS") or "").split()
+    for l in layers:
+        configuration['%s_BRANCH_REV' % os.path.basename(l)] = '%s:%s' % (base_get_metadata_git_branch(l, None).strip(),
+                                                                          base_get_metadata_git_revision(l, None))
+    return configuration
+
+def _get_testimage_json_result_dir(d, configuration):
+    json_result_dir = os.path.join(d.getVar("WORKDIR"), 'oeqa')
+    oeqa_json_result_common_dir = d.getVar("OEQA_JSON_RESULT_DIR")
+    if oeqa_json_result_common_dir:
+        json_result_dir = oeqa_json_result_common_dir
+    return json_result_dir
+
+def _get_testimage_result_id(configuration):
+    return '%s-%s-%s' % (configuration['TEST_TYPE'], configuration['IMAGE_BASENAME'], configuration['MACHINE'])
+
 def testimage_main(d):
     import os
     import json
@@ -308,7 +332,10 @@ def testimage_main(d):
     # Show results (if we have them)
     if not results:
         bb.fatal('%s - FAILED - tests were interrupted during execution' % pn, forcelog=True)
-    results.logDetails()
+    configuration = _get_testimage_configuration(d, 'runtime', os.getpid(), machine)
+    results.logDetails(_get_testimage_json_result_dir(d, configuration),
+                       configuration,
+                       _get_testimage_result_id(configuration))
     results.logSummary(pn)
     if not results.wasSuccessful():
         bb.fatal('%s - FAILED - check the task log and the ssh log' % pn, forcelog=True)
-- 
2.7.4




* [PATCH 4/4] testsdk.bbclass: write testresult to json files
  2018-10-22 10:34 [PATCH 1/4] oeqa/core/runner: write testresult to json files Yeoh Ee Peng
  2018-10-22 10:34 ` [PATCH 2/4] oeqa/selftest/context: " Yeoh Ee Peng
  2018-10-22 10:34 ` [PATCH 3/4] testimage.bbclass: " Yeoh Ee Peng
@ 2018-10-22 10:34 ` Yeoh Ee Peng
  2018-10-22 22:54 ` [PATCH 1/4] oeqa/core/runner: " Richard Purdie
  3 siblings, 0 replies; 21+ messages in thread
From: Yeoh Ee Peng @ 2018-10-22 10:34 UTC (permalink / raw)
  To: openembedded-core

As part of the solution to replace Testopia for storing test results,
the OEQA sdk and sdkext tests need to output test results into json
files, where these json testresult files will be stored in a git
repository by the future test-case-management tools.

To let multiple instances of bitbake write json testresult to a
single testresult file in a custom directory, the user defines the
variable "OEQA_JSON_RESULT_DIR" as the custom directory for writing
json testresult.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/classes/testsdk.bbclass | 36 ++++++++++++++++++++++++++++++++----
 1 file changed, 32 insertions(+), 4 deletions(-)

diff --git a/meta/classes/testsdk.bbclass b/meta/classes/testsdk.bbclass
index d3f475d..2e5f672 100644
--- a/meta/classes/testsdk.bbclass
+++ b/meta/classes/testsdk.bbclass
@@ -14,6 +14,30 @@
 #
 # where "<image-name>" is an image like core-image-sato.
 
+def _get_sdk_configuration(d, test_type, pid):
+    import platform
+    configuration = {'TEST_TYPE': test_type,
+                    'PROCESS_ID': pid,
+                    'SDK_MACHINE': d.getVar("SDKMACHINE"),
+                    'IMAGE_BASENAME': d.getVar("IMAGE_BASENAME"),
+                    'IMAGE_PKGTYPE': d.getVar("IMAGE_PKGTYPE"),
+                    'HOST_DISTRO': platform.linux_distribution()}
+    layers = (d.getVar("BBLAYERS") or "").split()
+    for l in layers:
+        configuration['%s_BRANCH_REV' % os.path.basename(l)] = '%s:%s' % (base_get_metadata_git_branch(l, None).strip(),
+                                                                          base_get_metadata_git_revision(l, None))
+    return configuration
+
+def _get_sdk_json_result_dir(d, configuration):
+    json_result_dir = os.path.join(d.getVar("WORKDIR"), 'oeqa')
+    oeqa_json_result_common_dir = d.getVar("OEQA_JSON_RESULT_DIR")
+    if oeqa_json_result_common_dir:
+        json_result_dir = oeqa_json_result_common_dir
+    return json_result_dir
+
+def _get_sdk_result_id(configuration):
+    return '%s-%s-%s' % (configuration['TEST_TYPE'], configuration['IMAGE_BASENAME'], configuration['SDK_MACHINE'])
+
 def testsdk_main(d):
     import os
     import subprocess
@@ -80,8 +104,10 @@ def testsdk_main(d):
 
         component = "%s %s" % (pn, OESDKTestContextExecutor.name)
         context_msg = "%s:%s" % (os.path.basename(tcname), os.path.basename(sdk_env))
-
-        result.logDetails()
+        configuration = _get_sdk_configuration(d, 'sdk', os.getpid())
+        result.logDetails(_get_sdk_json_result_dir(d, configuration),
+                           configuration,
+                           _get_sdk_result_id(configuration))
         result.logSummary(component, context_msg)
 
         if not result.wasSuccessful():
@@ -184,8 +210,10 @@ def testsdkext_main(d):
 
         component = "%s %s" % (pn, OESDKExtTestContextExecutor.name)
         context_msg = "%s:%s" % (os.path.basename(tcname), os.path.basename(sdk_env))
-
-        result.logDetails()
+        configuration = _get_sdk_configuration(d, 'sdkext', os.getpid())
+        result.logDetails(_get_sdk_json_result_dir(d, configuration),
+                           configuration,
+                           _get_sdk_result_id(configuration))
         result.logSummary(component, context_msg)
 
         if not result.wasSuccessful():
-- 
2.7.4




* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-22 10:34 [PATCH 1/4] oeqa/core/runner: write testresult to json files Yeoh Ee Peng
                   ` (2 preceding siblings ...)
  2018-10-22 10:34 ` [PATCH 4/4] testsdk.bbclass: " Yeoh Ee Peng
@ 2018-10-22 22:54 ` Richard Purdie
  2018-10-23  6:39   ` Yeoh, Ee Peng
  3 siblings, 1 reply; 21+ messages in thread
From: Richard Purdie @ 2018-10-22 22:54 UTC (permalink / raw)
  To: Yeoh Ee Peng, openembedded-core

On Mon, 2018-10-22 at 18:34 +0800, Yeoh Ee Peng wrote:
> As part of the solution to replace Testopia to store testresult,
> OEQA need to output testresult into single json file, where json
> testresult file will be stored in git repository by the future
> test-case-management tools.
> 
> The json testresult file will store more than one set of results,
> where each set of results was uniquely identified by the result_id.
> The result_id would be like "runtime-qemux86-core-image-sato", where
> it was a runtime test with target machine equal to qemux86 and running
> on core-image-sato image. The json testresult file will only store
> the latest test content for a given result_id. The json testresult
> file contains the configuration (eg. COMMIT, BRANCH, MACHINE, IMAGE),
> result (eg. PASSED, FAILED, ERROR), test log, and result_id.
> 
> Based on the destination json testresult file directory provided,
> it could have multiple instances of bitbake trying to write json
> testresult to a single testresult file, using locking a lockfile
> alongside the results file directory to prevent races.
> 
> Also the library class inside this patch will be reused by the future
> test-case-management tools to write json testresult for manual test
> case executed.
> 
> Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
> ---
>  meta/lib/oeqa/core/runner.py | 39 ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
> index f1dd080..2243a10 100644
> --- a/meta/lib/oeqa/core/runner.py
> +++ b/meta/lib/oeqa/core/runner.py
> @@ -6,6 +6,7 @@ import time
>  import unittest
>  import logging
>  import re
> +import json
>  
>  from unittest import TextTestResult as _TestResult
>  from unittest import TextTestRunner as _TestRunner
> @@ -119,8 +120,9 @@ class OETestResult(_TestResult):
>          self.successes.append((test, None))
>          super(OETestResult, self).addSuccess(test)
>  
> -    def logDetails(self):
> +    def logDetails(self, json_file_dir=None, configuration=None, result_id=None):
>          self.tc.logger.info("RESULTS:")
> +        result = {}
>          for case_name in self.tc._registry['cases']:
>              case = self.tc._registry['cases'][case_name]
>  
> @@ -137,6 +139,11 @@ class OETestResult(_TestResult):
>                  t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
>  
>              self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(), oeid, status, t))
> +            result[case.id()] = {'status': status, 'log': log}
> +
> +        if json_file_dir:
> +            tresultjsonhelper = OETestResultJSONHelper()
> +            tresultjsonhelper.dump_testresult_file(result_id, result, configuration, json_file_dir)
>  
>  class OEListTestsResult(object):
>      def wasSuccessful(self):
> @@ -249,3 +256,33 @@ class OETestRunner(_TestRunner):
>              self._list_tests_module(suite)
>  
>          return OEListTestsResult()
> +
> +class OETestResultJSONHelper(object):
> +
> +    testresult_filename = 'testresults.json'
> +
> +    def _get_existing_testresults_if_available(self, write_dir):
> +        testresults = {}
> +        file = os.path.join(write_dir, self.testresult_filename)
> +        if os.path.exists(file):
> +            with open(file, "r") as f:
> +                testresults = json.load(f)
> +        return testresults
> +
> +    def _create_json_testresults_string(self, test_results, result_id, test_result, configuration):
> +        test_results[result_id] = {'configuration': configuration, 'result': test_result}
> +        return json.dumps(test_results, sort_keys=True, indent=4)

I think the above function can be removed as it's no longer used
anywhere.

I've queued these patches for testing on the main autobuilder as I
think we're close and wanted to check it doesn't show issues. I still
wonder if we couldn't write out the files to a specific directory more
reliably but I think I'd need to look at the code for longer to come up
with any proposal. Right now I guess we can at least configure the
autobuilder to put the files in a common location (so it can collect
them up).

Cheers,

Richard

> +    def _write_file(self, write_dir, file_name, file_content):
> +        file_path = os.path.join(write_dir, file_name)
> +        with open(file_path, 'w') as the_file:
> +            the_file.write(file_content)
> +
> +    def dump_testresult_file(self, result_id, test_result, configuration, write_dir):
> +        bb.utils.mkdirhier(write_dir)
> +        lf = bb.utils.lockfile(os.path.join(write_dir, 'jsontestresult.lock'))
> +        test_results = self._get_existing_testresults_if_available(write_dir)
> +        test_results[result_id] = {'configuration': configuration, 'result': test_result}
> +        json_testresults = json.dumps(test_results, sort_keys=True, indent=4)
> +        self._write_file(write_dir, self.testresult_filename, json_testresults)
> +        bb.utils.unlockfile(lf)
> -- 
> 2.7.4
> 




* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-22 22:54 ` [PATCH 1/4] oeqa/core/runner: " Richard Purdie
@ 2018-10-23  6:39   ` Yeoh, Ee Peng
  2018-10-29 10:44     ` richard.purdie
  0 siblings, 1 reply; 21+ messages in thread
From: Yeoh, Ee Peng @ 2018-10-23  6:39 UTC (permalink / raw)
  To: richard.purdie, openembedded-core

Hi Richard,

I submitted the revised patches below. Sorry for the missing "version #" in the patch titles; I will include the "version #" in future patch titles.

Please let me know if you have any questions or input. Thank you very much for your attention and feedback!

Best regards,
Yeoh Ee Peng 

A) oeqa/core/runner:
	- removed the function that was no longer used
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156990.html

B) oeqa/selftest/context:
	- concatenated the list of elements from DISTRO into a single element, replacing spaces with "-"
	Eg. 'HOST_DISTRO': ('-'.join(platform.linux_distribution())).replace(' ', '-')
	- changed the result_id separator character from "-" to "_"
	Eg. '%s_%s_%s' % (configuration['TEST_TYPE'], configuration['HOST_DISTRO'], configuration['MACHINE'])
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156991.html

C) testimage.bbclass
	- removed the no-longer-used "configuration" argument from the function
	Eg. def _get_testimage_json_result_dir(d):
	- concatenated the list of elements from DISTRO into a single element, replacing spaces with "-"
	- changed the result_id separator character from "-" to "_"
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156992.html

D) testsdk.bbclass
	- removed no longer used "configuration" argument inside function
	Eg. def _get_testimage_json_result_dir(d):
	- concatenate list of element from DISTRO to single element, replace empty character
	- change result_id to separator character from "-" to "_"  
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156993.html
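The DISTRO and result_id changes described in B) through D) boil down to
two string transformations. A sketch, using a hard-coded tuple in place
of the platform.linux_distribution() return value (that function was
removed in Python 3.8):

```python
# Stand-in for platform.linux_distribution(), e.g. on Ubuntu 16.04.
linux_distribution = ('Ubuntu', '16.04', 'xenial')

# Concatenate the DISTRO tuple into one element and replace spaces.
host_distro = ('-'.join(linux_distribution)).replace(' ', '-')

# Build the result_id with "_" as the separator between its fields.
result_id = '%s_%s_%s' % ('oeselftest', host_distro, 'qemux86')
print(result_id)  # → oeselftest_Ubuntu-16.04-xenial_qemux86
```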



-----Original Message-----
From: richard.purdie@linuxfoundation.org [mailto:richard.purdie@linuxfoundation.org] 
Sent: Tuesday, October 23, 2018 6:54 AM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/4] oeqa/core/runner: write testresult to json files

On Mon, 2018-10-22 at 18:34 +0800, Yeoh Ee Peng wrote:
> As part of the solution to replace Testopia to store testresult, OEQA 
> need to output testresult into single json file, where json testresult 
> file will be stored in git repository by the future 
> test-case-management tools.
> 
> The json testresult file will store more than one set of results, 
> where each set of results was uniquely identified by the result_id.
> The result_id would be like "runtime-qemux86-core-image-sato", where 
> it was a runtime test with target machine equal to qemux86 and running 
> on core-image-sato image. The json testresult file will only store the 
> latest test content for a given result_id. The json testresult file 
> contains the configuration (eg. COMMIT, BRANCH, MACHINE, IMAGE), 
> result (eg. PASSED, FAILED, ERROR), test log, and result_id.
> 
> Based on the destination json testresult file directory provided, it 
> could have multiple instances of bitbake trying to write json 
> testresult to a single testresult file, using locking a lockfile 
> alongside the results file directory to prevent races.
> 
> Also the library class inside this patch will be reused by the future 
> test-case-management tools to write json testresult for manual test 
> case executed.
> 
> Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
> ---
>  meta/lib/oeqa/core/runner.py | 39 
> ++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/meta/lib/oeqa/core/runner.py 
> b/meta/lib/oeqa/core/runner.py index f1dd080..2243a10 100644
> --- a/meta/lib/oeqa/core/runner.py
> +++ b/meta/lib/oeqa/core/runner.py
> @@ -6,6 +6,7 @@ import time
>  import unittest
>  import logging
>  import re
> +import json
>  
>  from unittest import TextTestResult as _TestResult  from unittest 
> import TextTestRunner as _TestRunner @@ -119,8 +120,9 @@ class 
> OETestResult(_TestResult):
>          self.successes.append((test, None))
>          super(OETestResult, self).addSuccess(test)
>  
> -    def logDetails(self):
> +    def logDetails(self, json_file_dir=None, configuration=None, result_id=None):
>          self.tc.logger.info("RESULTS:")
> +        result = {}
>          for case_name in self.tc._registry['cases']:
>              case = self.tc._registry['cases'][case_name]
>  
> @@ -137,6 +139,11 @@ class OETestResult(_TestResult):
>                  t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
>  
>              self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % 
> (case.id(), oeid, status, t))
> +            result[case.id()] = {'status': status, 'log': log}
> +
> +        if json_file_dir:
> +            tresultjsonhelper = OETestResultJSONHelper()
> +            tresultjsonhelper.dump_testresult_file(result_id, result, 
> + configuration, json_file_dir)
>  
>  class OEListTestsResult(object):
>      def wasSuccessful(self):
> @@ -249,3 +256,33 @@ class OETestRunner(_TestRunner):
>              self._list_tests_module(suite)
>  
>          return OEListTestsResult()
> +
> +class OETestResultJSONHelper(object):
> +
> +    testresult_filename = 'testresults.json'
> +
> +    def _get_existing_testresults_if_available(self, write_dir):
> +        testresults = {}
> +        file = os.path.join(write_dir, self.testresult_filename)
> +        if os.path.exists(file):
> +            with open(file, "r") as f:
> +                testresults = json.load(f)
> +        return testresults
> +
> +    def _create_json_testresults_string(self, test_results, result_id, test_result, configuration):
> +        test_results[result_id] = {'configuration': configuration, 'result': test_result}
> +        return json.dumps(test_results, sort_keys=True, indent=4)

I think the above function can be removed as it's no longer used anywhere.

I've queued these patches for testing on the main autobuilder as I think we're close and wanted to check it doesn't show issues. I still wonder if we couldn't write out the files to a specific directory more reliably but I think I'd need to look at the code for longer to come up with any proposal. Right now I guess we can at least configure the autobuilder to put the files in a common location (so it can collect them up).

Cheers,

Richard

> +    def _write_file(self, write_dir, file_name, file_content):
> +        file_path = os.path.join(write_dir, file_name)
> +        with open(file_path, 'w') as the_file:
> +            the_file.write(file_content)
> +
> +    def dump_testresult_file(self, result_id, test_result, configuration, write_dir):
> +        bb.utils.mkdirhier(write_dir)
> +        lf = bb.utils.lockfile(os.path.join(write_dir, 'jsontestresult.lock'))
> +        test_results = self._get_existing_testresults_if_available(write_dir)
> +        test_results[result_id] = {'configuration': configuration, 'result': test_result}
> +        json_testresults = json.dumps(test_results, sort_keys=True, indent=4)
> +        self._write_file(write_dir, self.testresult_filename, json_testresults)
> +        bb.utils.unlockfile(lf)
> --
> 2.7.4
> 



* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-23  6:39   ` Yeoh, Ee Peng
@ 2018-10-29 10:44     ` richard.purdie
  2018-10-29 13:58       ` Richard Purdie
  0 siblings, 1 reply; 21+ messages in thread
From: richard.purdie @ 2018-10-29 10:44 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core

On Tue, 2018-10-23 at 06:39 +0000, Yeoh, Ee Peng wrote:
> I submitted the revised patches below. Sorry for the missing "version
> #" in the patch title. After this, I will add the "version #" into
> the patch title. 
> 
> Please let me know if any question or inputs. Thank you very much for
> your attention & sharing! 

Thanks for the changes. I was at a conference last week and then became
unwell. I wanted to have a further look and test the patches before
merging and I wanted a clearer head to do that.

Unfortunately I found another problem. On my build machine, it shows:

Traceback (most recent call last):
  File "/media/build1/poky/scripts/oe-selftest", line 70, in <module>
    ret = main()
  File "/media/build1/poky/scripts/oe-selftest", line 57, in main
    results = args.func(logger, args)
  File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line 289, in run
    rc = self._internal_run(logger, args)
  File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line 248, in _internal_run
    configuration = self._get_configuration(args)
  File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line 217, in _get_configuration
    metadata = metadata_from_bb()
  File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 42, in metadata_from_bb
    info_dict['layers'] = get_layers(data_dict['BBLAYERS'])
  File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 81, in get_layers
    layer_dict[layer_name] = git_rev_info(layer)
  File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 61, in git_rev_info
    from git import Repo, InvalidGitRepositoryError, NoSuchPathError
ModuleNotFoundError: No module named 'git'

That is easily fixed by installing the git module, but it does raise
some questions, in particular why we have two code paths which do the
same thing (one in metadata_scm.bbclass and one in
lib/oeqa/utils/metadata.py).

It also means we've just added new module dependencies to oe-selftest
and the other test utilities which we neither test for nor document
anywhere. Doing this so late in M4 is bad.

So this is going to need a little more thought...
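[Editor's note: one dependency-free way to get the same revision information, much as metadata_scm.bbclass effectively does, is to shell out to the git binary instead of importing the GitPython module. The sketch below is illustrative only, not the tree's actual code; the function name simply mirrors the one in the traceback.]

```python
import subprocess

def git_rev_info(path):
    # Return (branch, commit) for the git checkout at 'path', or None if
    # 'path' is not a repository or git is unavailable. Uses plain
    # subprocess calls instead of the GitPython module, so no extra
    # host dependency is introduced.
    def git(*args):
        return subprocess.check_output(('git', '-C', path) + args,
                                       stderr=subprocess.DEVNULL).decode().strip()
    try:
        branch = git('rev-parse', '--abbrev-ref', 'HEAD')
        commit = git('rev-parse', 'HEAD')
        return branch, commit
    except (OSError, subprocess.CalledProcessError):
        # git missing, or path is not inside a work tree
        return None
```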

Cheers,

Richard





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-29 10:44     ` richard.purdie
@ 2018-10-29 13:58       ` Richard Purdie
  2018-10-30  8:55         ` Yeoh, Ee Peng
  0 siblings, 1 reply; 21+ messages in thread
From: Richard Purdie @ 2018-10-29 13:58 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core

On Mon, 2018-10-29 at 10:44 +0000, richard.purdie@linuxfoundation.org
wrote:
> On Tue, 2018-10-23 at 06:39 +0000, Yeoh, Ee Peng wrote:
> > I submitted the revised patches below. Sorry for the missing
> > "version
> > #" in the patch title. After this, I will add the "version #" into
> > the patch title. 
> > 
> > Please let me know if any question or inputs. Thank you very much
> > for
> > your attention & sharing! 
> 
> Thanks for the changes. I was at a conference last week and then
> became
> unwell. I wanted to have a further look and test the patches before
> merging and I wanted a clearer head to do that.
> 
> Unfortunately I found another problem. On my build machine, it shows:
> 
> Traceback (most recent call last):
>   File "/media/build1/poky/scripts/oe-selftest", line 70, in <module>
>     ret = main()
>   File "/media/build1/poky/scripts/oe-selftest", line 57, in main
>     results = args.func(logger, args)
>   File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line
> 289, in run
>     rc = self._internal_run(logger, args)
>   File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line
> 248, in _internal_run
>     configuration = self._get_configuration(args)
>   File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line
> 217, in _get_configuration
>     metadata = metadata_from_bb()
>   File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 42,
> in metadata_from_bb
>     info_dict['layers'] = get_layers(data_dict['BBLAYERS'])
>   File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 81,
> in get_layers
>     layer_dict[layer_name] = git_rev_info(layer)
>   File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 61,
> in git_rev_info
>     from git import Repo, InvalidGitRepositoryError, NoSuchPathError
> ModuleNotFoundError: No module named 'git'
> 
> That is obviously easily fixed by installing the git module but it
> does
> raise some questions, in particular, why we have two code paths which
> do the same thing (one in metadata_scm.bbclass and one in
> lib/oeqa/utils/metadata.py).
> 
> It also means we've just added new module dependencies to oe-selftest
> and the other test utilities which we don't test for anywhere or have
> documented. Doing this last thing in M4 is bad.
> 
> So this is going to need a little more thought...

I've played with the patches today. The above can be addressed by using
the same code as we use in metadata_scm in the oeqa metadata file as a
quick fix. This all needs to be reworked in 2.7 after we sort out 2.6.
I've a patch queued. I also noticed that:

* We have pointless empty log entries in the json files
* SDKs don't record which MACHINE built them (in the unique identifier 
  or the configuration section)
* the identifiers for the configuration sections in the json files are 
  not unique and results from multiple runs were being overwritten 
  locally
* the patches call SDKMACHINE SDK_MACHINE which just confuses things
* The layer metadata config was being squashed into a single entry 
  with multiple contents, there is no good reason to do that in json, 
  just leave the fields separate
* The layer metadata was being obtained from different functions, 
  potentially leading to different processing
* The output data was not being placed in LOG_DIR
* The functions still had weird "_" prefixes

Since time is short, to fix these issues I'm going to include a set of
tweaks for the patches.

Cheers,

Richard





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-29 13:58       ` Richard Purdie
@ 2018-10-30  8:55         ` Yeoh, Ee Peng
  0 siblings, 0 replies; 21+ messages in thread
From: Yeoh, Ee Peng @ 2018-10-30  8:55 UTC (permalink / raw)
  To: richard.purdie, openembedded-core

Hi Richard,

Thanks for sharing with us your inputs for this patch. 

Regarding the weird "_" prefixes: the three new methods were initially developed in the "OESelftestTestContextExecutor" class to gather the information selftest needs to write the json test result file. Since "OESelftestTestContextExecutor" already has a public interface ("run" and "register_commands"), the "_" prefix was used at that point to mark the new methods as internal and avoid confusion with the existing public interface. When porting these methods to a bbclass, I missed the fact that the "_" prefix makes no sense there. Thank you for pointing this out; I will pay special attention to this in future.

Regarding the configuration identifiers in the json files not being unique, so that results from multiple runs were overwritten locally: this was intentional in the initial design, which targeted QA usage. In QA, multiple sets of tests may be executed for the same configuration, and some of those sets may be invalid because of the environment used; QA only wants the latest valid set. The unique key was therefore generated without a timestamp, so that the json test results file stores only the latest valid result. I hope this explains the reasoning. Now that you have explained the importance of the timestamp, we understand that the file's responsibility goes beyond QA and that it is important to keep a record of all executed tests.
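[Editor's note: for illustration, one way to keep every executed run rather than only the latest is to fold a timestamp into the identifier. The sketch below is purely hypothetical; `make_result_id` is not a function in the patches.]

```python
import time

def make_result_id(test_type, machine, image, keep_history=False):
    # Base identifier, e.g. "runtime-qemux86-core-image-sato". Appending a
    # timestamp makes each run's entry unique, so earlier results in the
    # json file are preserved instead of being overwritten.
    result_id = '%s-%s-%s' % (test_type, machine, image)
    if keep_history:
        result_id += '-%s' % time.strftime('%Y%m%d%H%M%S')
    return result_id
```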

I hope this explains the reasoning behind the submitted patch. Thank you again for sharing your inputs with us!

Best regards,
Yeoh Ee Peng 

-----Original Message-----
From: richard.purdie@linuxfoundation.org [mailto:richard.purdie@linuxfoundation.org] 
Sent: Monday, October 29, 2018 9:59 PM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/4] oeqa/core/runner: write testresult to json files

On Mon, 2018-10-29 at 10:44 +0000, richard.purdie@linuxfoundation.org
wrote:
> On Tue, 2018-10-23 at 06:39 +0000, Yeoh, Ee Peng wrote:
> > I submitted the revised patches below. Sorry for the missing 
> > "version #" in the patch title. After this, I will add the "version 
> > #" into the patch title.
> > 
> > Please let me know if any question or inputs. Thank you very much 
> > for your attention & sharing!
> 
> Thanks for the changes. I was at a conference last week and then 
> became unwell. I wanted to have a further look and test the patches 
> before merging and I wanted a clearer head to do that.
> 
> Unfortunately I found another problem. On my build machine, it shows:
> 
> Traceback (most recent call last):
>   File "/media/build1/poky/scripts/oe-selftest", line 70, in <module>
>     ret = main()
>   File "/media/build1/poky/scripts/oe-selftest", line 57, in main
>     results = args.func(logger, args)
>   File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line 
> 289, in run
>     rc = self._internal_run(logger, args)
>   File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line 
> 248, in _internal_run
>     configuration = self._get_configuration(args)
>   File "/media/build1/poky/meta/lib/oeqa/selftest/context.py", line 
> 217, in _get_configuration
>     metadata = metadata_from_bb()
>   File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 42, 
> in metadata_from_bb
>     info_dict['layers'] = get_layers(data_dict['BBLAYERS'])
>   File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 81, 
> in get_layers
>     layer_dict[layer_name] = git_rev_info(layer)
>   File "/media/build1/poky/meta/lib/oeqa/utils/metadata.py", line 61, 
> in git_rev_info
>     from git import Repo, InvalidGitRepositoryError, NoSuchPathError
> ModuleNotFoundError: No module named 'git'
> 
> That is obviously easily fixed by installing the git module but it 
> does raise some questions, in particular, why we have two code paths 
> which do the same thing (one in metadata_scm.bbclass and one in 
> lib/oeqa/utils/metadata.py).
> 
> It also means we've just added new module dependencies to oe-selftest 
> and the other test utilities which we don't test for anywhere or have 
> documented. Doing this last thing in M4 is bad.
> 
> So this is going to need a little more thought...

I've played with the patches today. The above can be addressed by using the same code as we use in metadata_scm in the oeqa metadata file as a quick fix. This all needs to be reworked in 2.7 after we sort out 2.6.
I've a patch queued. I also noticed that:

* We have pointless empty log entries in the json files
* SDKs don't record which MACHINE built them (in the unique identifier
  or the configuration section)
* the identifiers for the configuration sections in the json files are
  not unique and results from multiple runs were being overwritten
  locally
* the patches call SDKMACHINE SDK_MACHINE which just confuses things
* The layer metadata config was being squashed into a single entry
  with multiple contents, there is no good reason to do that in json,
  just leave the fields separate
* The layer metadata was being obtained from different functions,
  potentially leading to different processing
* The output data was not being placed in LOG_DIR
* The functions still had weird "_" prefixes

Since time is short, to fix these issues I'm going to include a set of tweaks for the patches.

Cheers,

Richard




^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/4] oeqa/core/runner: write testresult to json files
@ 2018-10-23  5:57 Yeoh Ee Peng
  0 siblings, 0 replies; 21+ messages in thread
From: Yeoh Ee Peng @ 2018-10-23  5:57 UTC (permalink / raw)
  To: openembedded-core

As part of the solution to replace Testopia for storing test results,
OEQA needs to output test results into a single json file, which will
be stored in a git repository by the future test-case-management tools.

The json testresult file can store more than one set of results, where
each set is uniquely identified by a result_id. A result_id such as
"runtime-qemux86-core-image-sato" denotes a runtime test with target
machine qemux86 running on the core-image-sato image. The file stores
only the latest test content for a given result_id. Each entry contains
the configuration (eg. COMMIT, BRANCH, MACHINE, IMAGE), the result
(eg. PASSED, FAILED, ERROR), the test log, and the result_id.
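[Editor's note: to make the layout described above concrete, testresults.json maps each result_id to its configuration and per-case results, roughly as below. All values are placeholders, not real results.]

```python
import json

# Illustrative shape of testresults.json: one top-level entry per
# result_id, each holding the run configuration and the per-testcase
# results (status plus captured log).
testresults = {
    "runtime-qemux86-core-image-sato": {
        "configuration": {
            "COMMIT": "abc123",          # placeholder values
            "BRANCH": "master",
            "MACHINE": "qemux86",
            "IMAGE": "core-image-sato",
        },
        "result": {
            "ptest.PtestRunnerTest.test_ptestrunner": {
                "status": "PASSED",
                "log": "",
            },
        },
    },
}
print(json.dumps(testresults, sort_keys=True, indent=4))
```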

Multiple instances of bitbake may try to write json testresult to the
same testresult file in the destination directory provided, so a
lockfile alongside the results file is used to prevent races.

The library class in this patch will also be reused by the future
test-case-management tools to write json test results for manually
executed test cases.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/lib/oeqa/core/runner.py | 35 ++++++++++++++++++++++++++++++++++-
 1 file changed, 34 insertions(+), 1 deletion(-)

diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
index f1dd080..d6d5afe 100644
--- a/meta/lib/oeqa/core/runner.py
+++ b/meta/lib/oeqa/core/runner.py
@@ -6,6 +6,7 @@ import time
 import unittest
 import logging
 import re
+import json
 
 from unittest import TextTestResult as _TestResult
 from unittest import TextTestRunner as _TestRunner
@@ -119,8 +120,9 @@ class OETestResult(_TestResult):
         self.successes.append((test, None))
         super(OETestResult, self).addSuccess(test)
 
-    def logDetails(self):
+    def logDetails(self, json_file_dir=None, configuration=None, result_id=None):
         self.tc.logger.info("RESULTS:")
+        result = {}
         for case_name in self.tc._registry['cases']:
             case = self.tc._registry['cases'][case_name]
 
@@ -137,6 +139,11 @@ class OETestResult(_TestResult):
                 t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
 
             self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(), oeid, status, t))
+            result[case.id()] = {'status': status, 'log': log}
+
+        if json_file_dir:
+            tresultjsonhelper = OETestResultJSONHelper()
+            tresultjsonhelper.dump_testresult_file(json_file_dir, configuration, result_id, result)
 
 class OEListTestsResult(object):
     def wasSuccessful(self):
@@ -249,3 +256,29 @@ class OETestRunner(_TestRunner):
             self._list_tests_module(suite)
 
         return OEListTestsResult()
+
+class OETestResultJSONHelper(object):
+
+    testresult_filename = 'testresults.json'
+
+    def _get_existing_testresults_if_available(self, write_dir):
+        testresults = {}
+        file = os.path.join(write_dir, self.testresult_filename)
+        if os.path.exists(file):
+            with open(file, "r") as f:
+                testresults = json.load(f)
+        return testresults
+
+    def _write_file(self, write_dir, file_name, file_content):
+        file_path = os.path.join(write_dir, file_name)
+        with open(file_path, 'w') as the_file:
+            the_file.write(file_content)
+
+    def dump_testresult_file(self, write_dir, configuration, result_id, test_result):
+        bb.utils.mkdirhier(write_dir)
+        lf = bb.utils.lockfile(os.path.join(write_dir, 'jsontestresult.lock'))
+        test_results = self._get_existing_testresults_if_available(write_dir)
+        test_results[result_id] = {'configuration': configuration, 'result': test_result}
+        json_testresults = json.dumps(test_results, sort_keys=True, indent=4)
+        self._write_file(write_dir, self.testresult_filename, json_testresults)
+        bb.utils.unlockfile(lf)
-- 
2.7.4
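[Editor's note: outside of bitbake, the OETestResultJSONHelper above amounts to a read-merge-write under an exclusive lock. A standard-library sketch of the same pattern, with fcntl.flock standing in for bb.utils.lockfile, illustrative only:]

```python
import fcntl
import json
import os

def dump_testresult_file(write_dir, configuration, result_id, test_result):
    # Same pattern as OETestResultJSONHelper.dump_testresult_file, using
    # only the standard library: take an exclusive lock, merge the new
    # entry with any existing results so concurrent writers do not
    # clobber each other, then rewrite the whole file.
    os.makedirs(write_dir, exist_ok=True)
    lockfile = os.path.join(write_dir, 'jsontestresult.lock')
    resultfile = os.path.join(write_dir, 'testresults.json')
    with open(lockfile, 'w') as lf:
        fcntl.flock(lf, fcntl.LOCK_EX)
        testresults = {}
        if os.path.exists(resultfile):
            with open(resultfile) as f:
                testresults = json.load(f)
        testresults[result_id] = {'configuration': configuration,
                                  'result': test_result}
        with open(resultfile, 'w') as f:
            json.dump(testresults, f, sort_keys=True, indent=4)
        fcntl.flock(lf, fcntl.LOCK_UN)
```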



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-22  9:34     ` richard.purdie
  2018-10-22  9:47       ` Yeoh, Ee Peng
@ 2018-10-22 10:53       ` Yeoh, Ee Peng
  1 sibling, 0 replies; 21+ messages in thread
From: Yeoh, Ee Peng @ 2018-10-22 10:53 UTC (permalink / raw)
  To: richard.purdie, openembedded-core

Hi Richard,

I have refactored the code to incorporate the inputs you provided and submitted the patches for your review.
Thank you very much for your attention and sharing!

http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156945.html
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156946.html
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156947.html
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156948.html

Thanks,
Yeoh Ee Peng 

-----Original Message-----
From: richard.purdie@linuxfoundation.org [mailto:richard.purdie@linuxfoundation.org] 
Sent: Monday, October 22, 2018 5:34 PM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/4] oeqa/core/runner: write testresult to json files

On Mon, 2018-10-22 at 08:59 +0000, Yeoh, Ee Peng wrote:
> Hi Richard
> 
> Current codes does load existing testresult json file if it exist, 
> then it will write the new testresult into it based on the result_id.
> > +    def _get_testresults(self, write_dir):
> > +        testresults = {}
> > +        file = os.path.join(write_dir, self.testresult_filename)
> > +        if os.path.exists(file):
> > +            with open(file, "r") as f:
> > +                testresults = json.load(f)
> > +        return testresults

I managed to miss that function and call, sorry. That should be fine. I think we may want to inline some of these functions to make things clearer.

> I did have the same thinking on if we can have a common function to 
> manage configuration and result_id or let individual test classes to 
> manage it, in the end, the thinking was configuration/result_id were 
> really responsibility of each test classes, where the json helper 
> class inside runner shall not have the knowledge or know-how on 
> configuration/result_id. Thus the decision in the end was to make json 
> helper class responsibility as simple as to consume the configuration, 
> results, result_id information provided by the test classes. Hope this 
> explain the reason behind current design.
> 
> Thank you very much for your attention & knowledge sharing! 

You could just do what amounts to getVar("LOGDIR") + "/oeqa" and have that as the default test result location?
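[Editor's note: the suggested default amounts to a small lookup with a user override. A hypothetical sketch, where getVar stands in for the bitbake datastore and OEQA_JSON_RESULT_DIR is only the variable proposed in this thread, not an existing one:]

```python
import os

def get_json_result_dir(getVar):
    # An explicit user setting (the proposed OEQA_JSON_RESULT_DIR) wins;
    # otherwise fall back to ${LOG_DIR}/oeqa as the default location.
    custom = getVar('OEQA_JSON_RESULT_DIR')
    if custom:
        return custom
    return os.path.join(getVar('LOG_DIR') or 'tmp/log', 'oeqa')
```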

Cheers,

Richard




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-22  9:34     ` richard.purdie
@ 2018-10-22  9:47       ` Yeoh, Ee Peng
  2018-10-22 10:53       ` Yeoh, Ee Peng
  1 sibling, 0 replies; 21+ messages in thread
From: Yeoh, Ee Peng @ 2018-10-22  9:47 UTC (permalink / raw)
  To: richard.purdie, openembedded-core

Hi Richard,

You are right, the current code and function names do not express clearly what they do, especially around loading the existing test results; let me refactor this to make it clearer.

Yes, let me use getVar("LOGDIR") + "/oeqa" as the default result_dir. Do you think we should add an OEQA_JSON_RESULT_DIR variable to let users define a custom result_dir?

Thanks,
Yeoh Ee Peng 

-----Original Message-----
From: richard.purdie@linuxfoundation.org [mailto:richard.purdie@linuxfoundation.org] 
Sent: Monday, October 22, 2018 5:34 PM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/4] oeqa/core/runner: write testresult to json files

On Mon, 2018-10-22 at 08:59 +0000, Yeoh, Ee Peng wrote:
> Hi Richard
> 
> Current codes does load existing testresult json file if it exist, 
> then it will write the new testresult into it based on the result_id.
> > +    def _get_testresults(self, write_dir):
> > +        testresults = {}
> > +        file = os.path.join(write_dir, self.testresult_filename)
> > +        if os.path.exists(file):
> > +            with open(file, "r") as f:
> > +                testresults = json.load(f)
> > +        return testresults

I managed to miss that function and call, sorry. That should be fine. I think we may want to inline some of these functions to make things clearer.

> I did have the same thinking on if we can have a common function to 
> manage configuration and result_id or let individual test classes to 
> manage it, in the end, the thinking was configuration/result_id were 
> really responsibility of each test classes, where the json helper 
> class inside runner shall not have the knowledge or know-how on 
> configuration/result_id. Thus the decision in the end was to make json 
> helper class responsibility as simple as to consume the configuration, 
> results, result_id information provided by the test classes. Hope this 
> explain the reason behind current design.
> 
> Thank you very much for your attention & knowledge sharing! 

You could just do what amounts to getVar("LOGDIR") + "/oeqa" and have that as the default test result location?

Cheers,

Richard




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-22  8:59   ` Yeoh, Ee Peng
@ 2018-10-22  9:34     ` richard.purdie
  2018-10-22  9:47       ` Yeoh, Ee Peng
  2018-10-22 10:53       ` Yeoh, Ee Peng
  0 siblings, 2 replies; 21+ messages in thread
From: richard.purdie @ 2018-10-22  9:34 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core

On Mon, 2018-10-22 at 08:59 +0000, Yeoh, Ee Peng wrote:
> Hi Richard
> 
> Current codes does load existing testresult json file if it exist,
> then it will write the new testresult into it based on the result_id.
> > +    def _get_testresults(self, write_dir):
> > +        testresults = {}
> > +        file = os.path.join(write_dir, self.testresult_filename)
> > +        if os.path.exists(file):
> > +            with open(file, "r") as f:
> > +                testresults = json.load(f)
> > +        return testresults

I managed to miss that function and call, sorry. That should be fine. I
think we may want to inline some of these functions to make things
clearer.

> I did have the same thinking on if we can have a common function to
> manage configuration and result_id or let individual test classes to
> manage it, in the end, the thinking was configuration/result_id were
> really responsibility of each test classes, where the json helper
> class inside runner shall not have the knowledge or know-how on
> configuration/result_id. Thus the decision in the end was to make
> json helper class responsibility as simple as to consume the
> configuration, results, result_id information provided by the test
> classes. Hope this explain the reason behind current design. 
> 
> Thank you very much for your attention & knowledge sharing! 

You could just do what amounts to getVar("LOGDIR") + "/oeqa" and have
that as the default test result location?

Cheers,

Richard





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-22  8:31 ` Richard Purdie
@ 2018-10-22  8:59   ` Yeoh, Ee Peng
  2018-10-22  9:34     ` richard.purdie
  0 siblings, 1 reply; 21+ messages in thread
From: Yeoh, Ee Peng @ 2018-10-22  8:59 UTC (permalink / raw)
  To: richard.purdie, openembedded-core

Hi Richard

The current code does load the existing testresult json file if it exists, then writes the new testresult into it based on the result_id.
> +    def _get_testresults(self, write_dir):
> +        testresults = {}
> +        file = os.path.join(write_dir, self.testresult_filename)
> +        if os.path.exists(file):
> +            with open(file, "r") as f:
> +                testresults = json.load(f)
> +        return testresults

I did have the same thought about whether a common function should manage configuration and result_id, or whether each test class should manage them. In the end, the thinking was that configuration and result_id are really the responsibility of each test class; the json helper class inside the runner should not need any knowledge of them. The decision was therefore to keep the json helper's responsibility as simple as consuming the configuration, results, and result_id provided by the test classes. I hope this explains the reasoning behind the current design.

Thank you very much for your attention & knowledge sharing! 

Thanks,
Yeoh Ee Peng

-----Original Message-----
From: richard.purdie@linuxfoundation.org [mailto:richard.purdie@linuxfoundation.org] 
Sent: Monday, October 22, 2018 4:32 PM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/4] oeqa/core/runner: write testresult to json files

Hi Ee Peng,

Thanks, this is looking good, there is still one small tweak needed below.

On Mon, 2018-10-22 at 14:54 +0800, Yeoh Ee Peng wrote:
> As part of the solution to replace Testopia to store testresult, OEQA 
> need to output testresult into single json file, where json testresult 
> file will be stored in git repository by the future 
> test-case-management tools.
> 
> The json testresult file will store more than one set of results, 
> where each set of results was uniquely identified by the result_id.
> The result_id would be like "runtime-qemux86-core-image-sato", where 
> it was a runtime test with target machine equal to qemux86 and running 
> on core-image-sato image. The json testresult file will only store the 
> latest testresult for a given result_id. The json testresult file 
> contains the configuration (eg. COMMIT, BRANCH, MACHINE, IMAGE), 
> result (eg. PASSED, FAILED, ERROR), test log, and result_id.
> 
> Based on the destination json testresult file directory provided, it 
> could have multiple instances of bitbake trying to write json 
> testresult to a single testresult file, using locking a lockfile 
> alongside the results file directory to prevent races.
> 
> Also the library class inside this patch will be reused by the future 
> test-case-management tools to write json testresult for manual test 
> case executed.
> 
> Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
> ---
>  meta/lib/oeqa/core/runner.py | 40 
> +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/meta/lib/oeqa/core/runner.py 
> b/meta/lib/oeqa/core/runner.py index f1dd080..82463cf 100644
> --- a/meta/lib/oeqa/core/runner.py
> +++ b/meta/lib/oeqa/core/runner.py
> @@ -249,3 +256,34 @@ class OETestRunner(_TestRunner):
>              self._list_tests_module(suite)
>  
>          return OEListTestsResult()
> +
> +class OETestResultJSONHelper(object):
> +
> +    testresult_filename = 'testresults.json'
> +
> +    def _get_testresults(self, write_dir):
> +        testresults = {}
> +        file = os.path.join(write_dir, self.testresult_filename)
> +        if os.path.exists(file):
> +            with open(file, "r") as f:
> +                testresults = json.load(f)
> +        return testresults
> +
> +    def _create_json_testresults_string(self, result_id, test_result, configuration, write_dir):
> +        testresults = self._get_testresults(write_dir)
> +        testresult = {'configuration': configuration,
> +                      'result': test_result}
> +        testresults[result_id] = testresult
> +        return json.dumps(testresults, sort_keys=True, indent=4)
> +
> +    def _write_file(self, write_dir, file_name, file_content):
> +        file_path = os.path.join(write_dir, file_name)
> +        with open(file_path, 'w') as the_file:
> +            the_file.write(file_content)
> +
> +    def dump_testresult_file(self, result_id, test_result, configuration, write_dir):
> +        bb.utils.mkdirhier(write_dir)
> +        lf = bb.utils.lockfile(os.path.join(write_dir, 'jsontestresult.lock'))
> +        json_testresults = self._create_json_testresults_string(result_id, test_result, configuration, write_dir)
> +        self._write_file(write_dir, self.testresult_filename, json_testresults)
> +        bb.utils.unlockfile(lf)

Before we write out the file we need to load in any existing data so we effectively append to the data. I think if we do that this patch should be ready to merge.

I did also wonder if we need a common configuration function rather than duplicating the code into each of the test classes.

Cheers,

Richard



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-22  6:54 Yeoh Ee Peng
@ 2018-10-22  8:31 ` Richard Purdie
  2018-10-22  8:59   ` Yeoh, Ee Peng
  0 siblings, 1 reply; 21+ messages in thread
From: Richard Purdie @ 2018-10-22  8:31 UTC (permalink / raw)
  To: Yeoh Ee Peng, openembedded-core

Hi Ee Peng,

Thanks, this is looking good, there is still one small tweak needed
below.

On Mon, 2018-10-22 at 14:54 +0800, Yeoh Ee Peng wrote:
> As part of the solution to replace Testopia to store testresult,
> OEQA need to output testresult into single json file, where json
> testresult file will be stored in git repository by the future
> test-case-management tools.
> 
> The json testresult file will store more than one set of results,
> where each set of results was uniquely identified by the result_id.
> The result_id would be like "runtime-qemux86-core-image-sato", where
> it was a runtime test with target machine equal to qemux86 and running
> on core-image-sato image. The json testresult file will only store
> the latest testresult for a given result_id. The json testresult
> file contains the configuration (eg. COMMIT, BRANCH, MACHINE, IMAGE),
> result (eg. PASSED, FAILED, ERROR), test log, and result_id.
> 
> Based on the destination json testresult file directory provided,
> it could have multiple instances of bitbake trying to write json
> testresult to a single testresult file, using locking a lockfile
> alongside the results file directory to prevent races.
> 
> Also the library class inside this patch will be reused by the future
> test-case-management tools to write json testresult for manual test
> case executed.
> 
> Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
> ---
>  meta/lib/oeqa/core/runner.py | 40 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
> index f1dd080..82463cf 100644
> --- a/meta/lib/oeqa/core/runner.py
> +++ b/meta/lib/oeqa/core/runner.py
> @@ -249,3 +256,34 @@ class OETestRunner(_TestRunner):
>              self._list_tests_module(suite)
>  
>          return OEListTestsResult()
> +
> +class OETestResultJSONHelper(object):
> +
> +    testresult_filename = 'testresults.json'
> +
> +    def _get_testresults(self, write_dir):
> +        testresults = {}
> +        file = os.path.join(write_dir, self.testresult_filename)
> +        if os.path.exists(file):
> +            with open(file, "r") as f:
> +                testresults = json.load(f)
> +        return testresults
> +
> +    def _create_json_testresults_string(self, result_id, test_result, configuration, write_dir):
> +        testresults = self._get_testresults(write_dir)
> +        testresult = {'configuration': configuration,
> +                      'result': test_result}
> +        testresults[result_id] = testresult
> +        return json.dumps(testresults, sort_keys=True, indent=4)
> +
> +    def _write_file(self, write_dir, file_name, file_content):
> +        file_path = os.path.join(write_dir, file_name)
> +        with open(file_path, 'w') as the_file:
> +            the_file.write(file_content)
> +
> +    def dump_testresult_file(self, result_id, test_result, configuration, write_dir):
> +        bb.utils.mkdirhier(write_dir)
> +        lf = bb.utils.lockfile(os.path.join(write_dir, 'jsontestresult.lock'))
> +        json_testresults = self._create_json_testresults_string(result_id, test_result, configuration, write_dir)
> +        self._write_file(write_dir, self.testresult_filename, json_testresults)
> +        bb.utils.unlockfile(lf)

Before we write out the file we need to load in any existing data so we
effectively append to the data. I think if we do that this patch should
be ready to merge.

I did also wonder if we need a common configuration function rather
than duplicating the code into each of the test classes.

Cheers,

Richard




^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/4] oeqa/core/runner: write testresult to json files
@ 2018-10-22  6:54 Yeoh Ee Peng
  2018-10-22  8:31 ` Richard Purdie
  0 siblings, 1 reply; 21+ messages in thread
From: Yeoh Ee Peng @ 2018-10-22  6:54 UTC (permalink / raw)
  To: openembedded-core

As part of the solution to replace Testopia to store testresult,
OEQA need to output testresult into single json file, where json
testresult file will be stored in git repository by the future
test-case-management tools.

The json testresult file will store more than one set of results,
where each set of results is uniquely identified by a result_id.
A result_id looks like "runtime-qemux86-core-image-sato", meaning a
runtime test run against the qemux86 machine on the core-image-sato
image. The json testresult file only keeps the latest testresult for
a given result_id. The json testresult file contains the configuration
(eg. COMMIT, BRANCH, MACHINE, IMAGE), result (eg. PASSED, FAILED,
ERROR), test log, and result_id.

Given the destination json testresult file directory provided,
multiple instances of bitbake may try to write json testresults
to a single testresult file, so a lockfile alongside the results
file is used to prevent races.

Also, the library class inside this patch will be reused by the future
test-case-management tools to write json testresults for manually
executed test cases.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/lib/oeqa/core/runner.py | 40 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
index f1dd080..82463cf 100644
--- a/meta/lib/oeqa/core/runner.py
+++ b/meta/lib/oeqa/core/runner.py
@@ -6,6 +6,7 @@ import time
 import unittest
 import logging
 import re
+import json
 
 from unittest import TextTestResult as _TestResult
 from unittest import TextTestRunner as _TestRunner
@@ -119,8 +120,9 @@ class OETestResult(_TestResult):
         self.successes.append((test, None))
         super(OETestResult, self).addSuccess(test)
 
-    def logDetails(self):
+    def logDetails(self, json_file_dir=None, configuration=None, result_id=None):
         self.tc.logger.info("RESULTS:")
+        result = {}
         for case_name in self.tc._registry['cases']:
             case = self.tc._registry['cases'][case_name]
 
@@ -137,6 +139,11 @@ class OETestResult(_TestResult):
                 t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
 
             self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(), oeid, status, t))
+            result[case.id()] = {'status': status, 'log': log}
+
+        if json_file_dir:
+            tresultjsonhelper = OETestResultJSONHelper()
+            tresultjsonhelper.dump_testresult_file(result_id, result, configuration, json_file_dir)
 
 class OEListTestsResult(object):
     def wasSuccessful(self):
@@ -249,3 +256,34 @@ class OETestRunner(_TestRunner):
             self._list_tests_module(suite)
 
         return OEListTestsResult()
+
+class OETestResultJSONHelper(object):
+
+    testresult_filename = 'testresults.json'
+
+    def _get_testresults(self, write_dir):
+        testresults = {}
+        file = os.path.join(write_dir, self.testresult_filename)
+        if os.path.exists(file):
+            with open(file, "r") as f:
+                testresults = json.load(f)
+        return testresults
+
+    def _create_json_testresults_string(self, result_id, test_result, configuration, write_dir):
+        testresults = self._get_testresults(write_dir)
+        testresult = {'configuration': configuration,
+                      'result': test_result}
+        testresults[result_id] = testresult
+        return json.dumps(testresults, sort_keys=True, indent=4)
+
+    def _write_file(self, write_dir, file_name, file_content):
+        file_path = os.path.join(write_dir, file_name)
+        with open(file_path, 'w') as the_file:
+            the_file.write(file_content)
+
+    def dump_testresult_file(self, result_id, test_result, configuration, write_dir):
+        bb.utils.mkdirhier(write_dir)
+        lf = bb.utils.lockfile(os.path.join(write_dir, 'jsontestresult.lock'))
+        json_testresults = self._create_json_testresults_string(result_id, test_result, configuration, write_dir)
+        self._write_file(write_dir, self.testresult_filename, json_testresults)
+        bb.utils.unlockfile(lf)
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-15  8:59     ` richard.purdie
@ 2018-10-15 10:00       ` Yeoh, Ee Peng
  0 siblings, 0 replies; 21+ messages in thread
From: Yeoh, Ee Peng @ 2018-10-15 10:00 UTC (permalink / raw)
  To: richard.purdie, openembedded-core

Hi Richard,

Thanks for explaining this in great depth; I fully understand it now. 
Currently, the environment information (eg. MACHINE, DISTRO) discovered inside OEQA is used to create the filesystem directories for each testresult json file. 
Example: MACHINE:qemux86, IMAGE: core-image-sato-sdk
└── runtime
    └── qemux86
        └── core-image-sato-sdk
            ├── logs
            ├── ping.json
            └── ssh.json

These filesystem directories representing the test environments have 2 use cases:

Use case#1: 
During the testresult json storing phase, one can input additional environment information; the extra filesystem directories will be created and appended to the existing directories, and the overall directory tree representing the entire environment will be stored in the git repository. With these environments stored in the filesystem, one will be able to view the overall test environments for all test components (eg. runtime, selftest, sdk, manual tests, etc.) by walking through the filesystem directories. 

Use case#2:
These filesystem directories representing the test environment are used when storing results in the git repository. The store program checks whether the git repository already has the environments used by the testresult json file; if it does (the filesystem directories exist), it asks the user to provide an overwrite argument in order to overwrite the testresult that was stored previously.

I agree that these environments need to be stored in the testresult json file as well. Let me work on adding this environment information to the testresult json file. 

Thanks,
Yeoh Ee Peng 

-----Original Message-----
From: richard.purdie@linuxfoundation.org [mailto:richard.purdie@linuxfoundation.org] 
Sent: Monday, October 15, 2018 5:00 PM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/4] oeqa/core/runner: write testresult to json files

Hi Ee Peng,

On Mon, 2018-10-15 at 08:42 +0000, Yeoh, Ee Peng wrote:
> Thank you very much for your inputs!
> I have completed most of the enhancements following your inputs, 
> except the one about putting the test result logs alongside the test 
> result data in the json file.
> My concerns over putting test result logs into the test result json 
> file:
> 1. Test result logs can be arbitrarily long depending on the type of 
> failure/error and testcase. Adding test result logs to the test result 
> json file will potentially make the json file hard to read, as the 
> test logs may span multiple lines.
> 2. Having the test log for each test case in a separate file allows 
> quick and easy regression checking of logs for a specific test case.  
> Putting all the test logs into one testresult json file may make 
> regression checking of logs for a specific test case harder.
> 
> I hope the concerns above justify keeping test logs separate from the 
> testresult json file. Please let me know your thoughts and inputs.

The log data and the test results are connected and belong together.
The intent here is to make files which capture the results information and direct user readability is a secondary issue, these files are not intended to be directly read by humans.

If we need a clean readable output, I'm imagining we'd have a simple processing tool which could for example just filter out the test results.

Having a single results file also makes it easier for the automated systems to collect up the results files from different autobuilders and also makes it easier to potentially merge results together.

One piece I suspect may also be missing is that we may need to record some extra information into these files too, such as the MACHINE, DISTRO, hostname of the builder that ran them and the revision of the codebase (layers) used to run the test. There is probably other information we need to record in order to make these results useful in the wider context of the project but I do believe it can and should be recorded in a single file.
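To illustrate the kind of record Richard is describing, here is a hypothetical sketch of the extra metadata alongside the results in a single file; the key names here are made up for illustration, not the ones OE-core eventually settled on:

```python
import json
import socket

def make_configuration(machine, distro, layers_revision):
    """Gather the build-environment metadata suggested above into one
    dict that can be stored next to the results themselves."""
    return {
        'MACHINE': machine,
        'DISTRO': distro,
        'HOST_NAME': socket.gethostname(),   # builder that ran the tests
        'LAYERS_REVISION': layers_revision,  # codebase revision under test
    }

# One entry in the single results file: configuration plus per-case results.
entry = {
    'configuration': make_configuration('qemux86', 'poky', 'abc123'),
    'result': {'ping.PingTest.test_ping': {'status': 'PASSED', 'log': ''}},
}
print(json.dumps(entry, sort_keys=True, indent=4))
```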

Does that help explain why I'm asking for this?

Cheers,

Richard




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-15  8:42   ` Yeoh, Ee Peng
@ 2018-10-15  8:59     ` richard.purdie
  2018-10-15 10:00       ` Yeoh, Ee Peng
  0 siblings, 1 reply; 21+ messages in thread
From: richard.purdie @ 2018-10-15  8:59 UTC (permalink / raw)
  To: Yeoh, Ee Peng, openembedded-core

Hi Ee Peng,

On Mon, 2018-10-15 at 08:42 +0000, Yeoh, Ee Peng wrote:
> Thank you very much for your inputs!
> I have completed most of the enhancements following your
> inputs, except the one about putting the test result logs alongside
> the test result data in the json file. 
> My concerns over putting test result logs into the test result
> json file:
> 1. Test result logs can be arbitrarily long depending on the type of
> failure/error and testcase. Adding test result logs to the test
> result json file will potentially make the json file hard to read, as
> the test logs may span multiple lines. 
> 2. Having the test log for each test case in a separate file allows
> quick and easy regression checking of logs for a specific test
> case.  Putting all the test logs into one testresult json
> file may make regression checking of logs for a specific
> test case harder. 
> 
> I hope the concerns above justify keeping test logs separate from the
> testresult json file. Please let me know your thoughts and inputs. 

The log data and the test results are connected and belong together.
The intent here is to make files which capture the results information
and direct user readability is a secondary issue, these files are not
intended to be directly read by humans.

If we need a clean readable output, I'm imagining we'd have a simple
processing tool which could for example just filter out the test
results.

Having a single results file also makes it easier for the automated
systems to collect up the results files from different autobuilders and
also makes it easier to potentially merge results together.

One piece I suspect may also be missing is that we may need to record
some extra information into these files too, such as the MACHINE,
DISTRO, hostname of the builder that ran them and the revision of the
codebase (layers) used to run the test. There is probably other
information we need to record in order to make these results useful in
the wider context of the project but I do believe it can and should be
recorded in a single file.

Does that help explain why I'm asking for this?

Cheers,

Richard





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-12 15:00 ` Richard Purdie
@ 2018-10-15  8:42   ` Yeoh, Ee Peng
  2018-10-15  8:59     ` richard.purdie
  0 siblings, 1 reply; 21+ messages in thread
From: Yeoh, Ee Peng @ 2018-10-15  8:42 UTC (permalink / raw)
  To: richard.purdie, openembedded-core

Hi Richard,

Thank you very much for your inputs!
I have completed most of the enhancements following your inputs, except the one about putting the test result logs alongside the test result data in the json file. 
My concerns over putting test result logs into the test result json file:
1. Test result logs can be arbitrarily long depending on the type of failure/error and testcase. Adding test result logs to the test result json file will potentially make the json file hard to read, as the test logs may span multiple lines. 
2. Having the test log for each test case in a separate file allows quick and easy regression checking of logs for a specific test case.  Putting all the test logs into one testresult json file may make regression checking of logs for a specific test case harder. 

I hope the concerns above justify keeping test logs separate from the testresult json file. Please let me know your thoughts and inputs. 

Thank you very much!

Best regards,
Yeoh Ee Peng 

[OE-core] [PATCH 1/5] oeqa/core/runner: refactor for OEQA to write json testresult
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156710.html

[OE-core] [PATCH 2/5] oeqa/core/runner: write testresult to json	files
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156711.html

[OE-core] [PATCH 3/5] oeqa/selftest/context: write testresult to	json files
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156712.html

[OE-core] [PATCH 4/5] testimage.bbclass: write testresult to json	files
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156713.html

[OE-core] [PATCH 5/5] testsdk.bbclass: write testresult to json	files
http://lists.openembedded.org/pipermail/openembedded-core/2018-October/156714.html


-----Original Message-----
From: richard.purdie@linuxfoundation.org [mailto:richard.purdie@linuxfoundation.org] 
Sent: Friday, October 12, 2018 11:00 PM
To: Yeoh, Ee Peng <ee.peng.yeoh@intel.com>; openembedded-core@lists.openembedded.org
Subject: Re: [OE-core] [PATCH 1/4] oeqa/core/runner: write testresult to json files

On Fri, 2018-10-12 at 14:33 +0800, Yeoh Ee Peng wrote:
> As part of the solution to replace Testopia to store testresult, OEQA 
> need to output testresult into json files, where these json testresult 
> files will be stored in git repository by the future 
> test-case-management tools.
> 
> Both the testresult (eg. PASSED, FAILED, ERROR) and  the test log (eg. 
> message from unit test assertion) will be created for storing.
> 
> Also the library class inside this patch will be reused by the future 
> test-case-management tools to write json testresult for manual test 
> case executed.

This code is along the right lines but we need to work on some of the coding style and I think some of this can be simplified.

> Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
> ---
>  meta/lib/oeqa/core/runner.py | 132 
> +++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 121 insertions(+), 11 deletions(-)
> 
> diff --git a/meta/lib/oeqa/core/runner.py 
> b/meta/lib/oeqa/core/runner.py index eeb625b..cc33d9c 100644
> --- a/meta/lib/oeqa/core/runner.py
> +++ b/meta/lib/oeqa/core/runner.py
> @@ -6,6 +6,8 @@ import time
>  import unittest
>  import logging
>  import re
> +import json
> +import pathlib
>  
>  from unittest import TextTestResult as _TestResult  from unittest 
> import TextTestRunner as _TestRunner @@ -44,6 +46,9 @@ class 
> OETestResult(_TestResult):
>  
>          self.tc = tc
>  
> +        self.result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
> +        self.result_desc = ['FAILED', 'ERROR', 'SKIPPED', 
> + 'EXPECTEDFAIL', 'PASSED']
> +
>      def startTest(self, test):
>          # May have been set by concurrencytest
>          if test.id() not in self.starttime:
> @@ -80,7 +85,7 @@ class OETestResult(_TestResult):
>              msg += " (skipped=%d)" % skipped
>          self.tc.logger.info(msg)
>  
> -    def _getDetailsNotPassed(self, case, type, desc):
> +    def _isTestResultContainTestCaseWithResultTypeProvided(self, case, type):
>          found = False
>  
>          for (scase, msg) in getattr(self, type):
> @@ -121,16 +126,12 @@ class OETestResult(_TestResult):
>          for case_name in self.tc._registry['cases']:
>              case = self.tc._registry['cases'][case_name]
>  
> -            result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
> -            result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL', 'PASSED']
> -
> -            fail = False
> +            found = False
>              desc = None
> -            for idx, name in enumerate(result_types):
> -                (fail, msg) = self._getDetailsNotPassed(case, result_types[idx],
> -                        result_desc[idx])
> -                if fail:
> -                    desc = result_desc[idx]
> +            for idx, name in enumerate(self.result_types):
> +                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
> +                if found:
> +                    desc = self.result_desc[idx]
>                      break
>
>              oeid = -1
> @@ -143,13 +144,43 @@ class OETestResult(_TestResult):
>              if case.id() in self.starttime and case.id() in self.endtime:
>                  t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
>  
> -            if fail:
> +            if found:
>                  self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(),
>                      oeid, desc, t))
>              else:
>                  self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(),
>                      oeid, 'UNKNOWN', t))

I think the above needs to be split into a separate patch where you can explain you're extracting the common functionality for use in other functions and cleaning up the variable names.

The function name "_isTestResultContainTestCaseWithResultTypeProvided"
is not good though (and I agree the current one is also suboptimal).
findTestResultDetails() would perhaps be a better name?




 
> +    def _get_testcase_result_and_log_dict(self):
> +        testcase_result_dict = {}
> +        testcase_log_dict = {}
> +        for case_name in self.tc._registry['cases']:
> +            case = self.tc._registry['cases'][case_name]
> +
> +            found = False
> +            desc = None
> +            test_log = ''
> +            for idx, name in enumerate(self.result_types):
> +                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
> +                if found:
> +                    desc = self.result_desc[idx]
> +                    test_log = msg
> +                    break
> +
> +            if found:
> +                testcase_result_dict[case.id()] = desc
> +                testcase_log_dict[case.id()] = test_log
> +            else:
> +                testcase_result_dict[case.id()] = "UNKNOWN"
> +        return testcase_result_dict, testcase_log_dict
> +
> +    def logDetailsInJson(self, file_dir):
> +        (testcase_result_dict, testcase_log_dict) = self._get_testcase_result_and_log_dict()
> +        if len(testcase_result_dict) > 0 and len(testcase_log_dict) > 0:
> +            tresultjsonhelper = OETestResultJSONHelper()
> +            tresultjsonhelper.dump_testresult_files(testcase_result_dict, file_dir)
> +            tresultjsonhelper.dump_log_files(testcase_log_dict, 
> + os.path.join(file_dir, 'logs'))

The above two functions may as well be combined. I suspect the code can also be simplified, for example moving the second "if found" into the first one. I'd also be tempted to combine testcase_result_dict and testcase_log_dict into one variable called something like "results", we don't need to state it's a dict. For example you could do:

results[case.id()] = (desc, test_log)


Ultimately it looks like you then process this data by sorting it by testsuite below. I'm tempted to suggest you simply do that here to start with too.
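A quick sketch of that suggestion: one "results" mapping instead of separate result/log dictionaries, then grouped by testsuite in the same pass (the case ids and messages are made-up examples):

```python
# Combine status and log into a single results mapping keyed by case id.
results = {}
cases = [
    ('selftest.bbtests.BitbakeTests.test_bb', 'PASSED', ''),
    ('selftest.bbtests.BitbakeTests.test_fail', 'FAILED', 'AssertionError: ...'),
]
for case_id, desc, test_log in cases:
    results[case_id] = (desc, test_log)

# Group by testsuite (everything before the last '.') straight away.
by_suite = {}
for case_id, (status, log) in sorted(results.items()):
    suite = case_id[:case_id.rfind('.')]
    by_suite.setdefault(suite, {})[case_id] = {'status': status, 'log': log}
```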

>  class OEListTestsResult(object):
>      def wasSuccessful(self):
>          return True
> @@ -261,3 +292,82 @@ class OETestRunner(_TestRunner):
>              self._list_tests_module(suite)
>  
>          return OEListTestsResult()
> +
> +class OETestResultJSONHelper(object):
> +
> +    def get_testsuite_from_testcase(self, testcase):
> +        testsuite = testcase[0:testcase.rfind(".")]
> +        return testsuite

def get_testsuite(self, testcase):
    return testcase[:testcase.rfind(".")]

> +
> +    def get_testmodule_from_testsuite(self, testsuite):
> +        testmodule = testsuite[0:testsuite.find(".")]
> +        return testmodule
> +
> +    def get_testsuite_testcase_dictionary(self, testcase_result_dict):
> +        testcase_list = testcase_result_dict.keys()
> +        testsuite_testcase_dict = {}
> +        for testcase in testcase_list:
> +            testsuite = self.get_testsuite_from_testcase(testcase)
> +            if testsuite in testsuite_testcase_dict:
> +                testsuite_testcase_dict[testsuite].append(testcase)
> +            else:
> +                testsuite_testcase_dict[testsuite] = [testcase]
> +        return testsuite_testcase_dict
> +
> +    def get_testmodule_testsuite_dictionary(self, testsuite_testcase_dict):
> +        testsuite_list = testsuite_testcase_dict.keys()
> +        testmodule_testsuite_dict = {}
> +        for testsuite in testsuite_list:
> +            testmodule = self.get_testmodule_from_testsuite(testsuite)
> +            if testmodule in testmodule_testsuite_dict:
> +                testmodule_testsuite_dict[testmodule].append(testsuite)
> +            else:
> +                testmodule_testsuite_dict[testmodule] = [testsuite]
> +        return testmodule_testsuite_dict
> +
> +    def _get_testcase_result(self, testcase, testcase_status_dict):
> +        if testcase in testcase_status_dict:
> +            return testcase_status_dict[testcase]
> +        return ""
> +
> +    def _create_testcase_testresult_object(self, testcase_list, testcase_result_dict):
> +        testcase_dict = {}
> +        for testcase in sorted(testcase_list):
> +            result = self._get_testcase_result(testcase, testcase_result_dict)
> +            testcase_dict[testcase] = {"testresult": result}
> +        return testcase_dict
> +
> +    def _create_json_testsuite_string(self, testsuite_list, testsuite_testcase_dict, testcase_result_dict):
> +        testsuite_object = {'testsuite': {}}
> +        testsuite_dict = testsuite_object['testsuite']
> +        for testsuite in sorted(testsuite_list):
> +            testsuite_dict[testsuite] = {'testcase': {}}
> +            testsuite_dict[testsuite]['testcase'] = self._create_testcase_testresult_object(
> +                testsuite_testcase_dict[testsuite],
> +                testcase_result_dict)
> +        return json.dumps(testsuite_object, sort_keys=True, indent=4)

This all looks too complicated for what we really need. I think/hope we can have one output json file and the original function above can construct the data in a better form so we don't need much of the above.

> +    def dump_testresult_files(self, testcase_result_dict, write_dir):
> +        if not os.path.exists(write_dir):
> +            pathlib.Path(write_dir).mkdir(parents=True, exist_ok=True)
> +        testsuite_testcase_dict = self.get_testsuite_testcase_dictionary(testcase_result_dict)
> +        testmodule_testsuite_dict = self.get_testmodule_testsuite_dictionary(testsuite_testcase_dict)
> +        for testmodule in testmodule_testsuite_dict.keys():
> +            testsuite_list = testmodule_testsuite_dict[testmodule]
> +            json_testsuite = self._create_json_testsuite_string(testsuite_list, testsuite_testcase_dict,
> +                                                                testcase_result_dict)
> +            file_name = '%s.json' % testmodule
> +            file_path = os.path.join(write_dir, file_name)
> +            with open(file_path, 'w') as the_file:
> +                the_file.write(json_testsuite)

Do we need to write a file per module? Why wouldn't we simply write all the results into a single json file?

> +    def dump_log_files(self, testcase_log_dict, write_dir):
> +        if not os.path.exists(write_dir):
> +            pathlib.Path(write_dir).mkdir(parents=True, exist_ok=True)
> +        for testcase in testcase_log_dict.keys():
> +            test_log = testcase_log_dict[testcase]
> +            if test_log is not None:
> +                file_name = '%s.log' % testcase
> +                file_path = os.path.join(write_dir, file_name)
> +                with open(file_path, 'w') as the_file:
> +                    the_file.write(test_log)

Why wouldn't we put the test result logs alongside the test result data in the json file?

Cheers,

Richard


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] oeqa/core/runner: write testresult to json files
  2018-10-12  6:33 Yeoh Ee Peng
@ 2018-10-12 15:00 ` Richard Purdie
  2018-10-15  8:42   ` Yeoh, Ee Peng
  0 siblings, 1 reply; 21+ messages in thread
From: Richard Purdie @ 2018-10-12 15:00 UTC (permalink / raw)
  To: Yeoh Ee Peng, openembedded-core

On Fri, 2018-10-12 at 14:33 +0800, Yeoh Ee Peng wrote:
> As part of the solution to replace Testopia to store testresult,
> OEQA need to output testresult into json files, where these json
> testresult files will be stored in git repository by the future
> test-case-management tools.
> 
> Both the testresult (eg. PASSED, FAILED, ERROR) and  the test log
> (eg. message from unit test assertion) will be created for storing.
> 
> Also the library class inside this patch will be reused by the future
> test-case-management tools to write json testresult for manual test
> case executed.

This code is along the right lines but we need to work on some of the
coding style and I think some of this can be simplified.

> Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
> ---
>  meta/lib/oeqa/core/runner.py | 132 +++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 121 insertions(+), 11 deletions(-)
> 
> diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
> index eeb625b..cc33d9c 100644
> --- a/meta/lib/oeqa/core/runner.py
> +++ b/meta/lib/oeqa/core/runner.py
> @@ -6,6 +6,8 @@ import time
>  import unittest
>  import logging
>  import re
> +import json
> +import pathlib
>  
>  from unittest import TextTestResult as _TestResult
>  from unittest import TextTestRunner as _TestRunner
> @@ -44,6 +46,9 @@ class OETestResult(_TestResult):
>  
>          self.tc = tc
>  
> +        self.result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
> +        self.result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL', 'PASSED']
> +
>      def startTest(self, test):
>          # May have been set by concurrencytest
>          if test.id() not in self.starttime:
> @@ -80,7 +85,7 @@ class OETestResult(_TestResult):
>              msg += " (skipped=%d)" % skipped
>          self.tc.logger.info(msg)
>  
> -    def _getDetailsNotPassed(self, case, type, desc):
> +    def _isTestResultContainTestCaseWithResultTypeProvided(self, case, type):
>          found = False
>  
>          for (scase, msg) in getattr(self, type):
> @@ -121,16 +126,12 @@ class OETestResult(_TestResult):
>          for case_name in self.tc._registry['cases']:
>              case = self.tc._registry['cases'][case_name]
>  
> -            result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
> -            result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL', 'PASSED']
> -
> -            fail = False
> +            found = False
>              desc = None
> -            for idx, name in enumerate(result_types):
> -                (fail, msg) = self._getDetailsNotPassed(case, result_types[idx],
> -                        result_desc[idx])
> -                if fail:
> -                    desc = result_desc[idx]
> +            for idx, name in enumerate(self.result_types):
> +                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
> +                if found:
> +                    desc = self.result_desc[idx]
>                      break
>
>              oeid = -1
> @@ -143,13 +144,43 @@ class OETestResult(_TestResult):
>              if case.id() in self.starttime and case.id() in self.endtime:
>                  t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
>  
> -            if fail:
> +            if found:
>                  self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(),
>                      oeid, desc, t))
>              else:
>                  self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(),
>                      oeid, 'UNKNOWN', t))

I think the above needs to be split into a separate patch where you can
explain you're extracting the common functionality for use in other
functions and cleaning up the variable names.

The function name "_isTestResultContainTestCaseWithResultTypeProvided"
is not good though (and I agree the current one is also suboptimal).
findTestResultDetails() would perhaps be a better name?




 
> +    def _get_testcase_result_and_log_dict(self):
> +        testcase_result_dict = {}
> +        testcase_log_dict = {}
> +        for case_name in self.tc._registry['cases']:
> +            case = self.tc._registry['cases'][case_name]
> +
> +            found = False
> +            desc = None
> +            test_log = ''
> +            for idx, name in enumerate(self.result_types):
> +                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
> +                if found:
> +                    desc = self.result_desc[idx]
> +                    test_log = msg
> +                    break
> +
> +            if found:
> +                testcase_result_dict[case.id()] = desc
> +                testcase_log_dict[case.id()] = test_log
> +            else:
> +                testcase_result_dict[case.id()] = "UNKNOWN"
> +        return testcase_result_dict, testcase_log_dict
> +
> +    def logDetailsInJson(self, file_dir):
> +        (testcase_result_dict, testcase_log_dict) = self._get_testcase_result_and_log_dict()
> +        if len(testcase_result_dict) > 0 and len(testcase_log_dict) > 0:
> +            tresultjsonhelper = OETestResultJSONHelper()
> +            tresultjsonhelper.dump_testresult_files(testcase_result_dict, file_dir)
> +            tresultjsonhelper.dump_log_files(testcase_log_dict, os.path.join(file_dir, 'logs'))

The above two functions may as well be combined. I suspect the code can
also be simplified, for example moving the second "if found" into the
first one. I'd also be tempted to combine testcase_result_dict and
testcase_log_dict into one variable called something like "results", we
don't need to state it's a dict. For example you could do:

results[case.id()] = (desc, test_log)


Ultimately it looks like you then process this data by sorting it by
testsuite below. I'm tempted to suggest you simply do that here to
start with too.

>  class OEListTestsResult(object):
>      def wasSuccessful(self):
>          return True
> @@ -261,3 +292,82 @@ class OETestRunner(_TestRunner):
>              self._list_tests_module(suite)
>  
>          return OEListTestsResult()
> +
> +class OETestResultJSONHelper(object):
> +
> +    def get_testsuite_from_testcase(self, testcase):
> +        testsuite = testcase[0:testcase.rfind(".")]
> +        return testsuite

def get_testsuite(self, testcase):
    return testcase[:testcase.rfind(".")]

> +
> +    def get_testmodule_from_testsuite(self, testsuite):
> +        testmodule = testsuite[0:testsuite.find(".")]
> +        return testmodule
> +
> +    def get_testsuite_testcase_dictionary(self, testcase_result_dict):
> +        testcase_list = testcase_result_dict.keys()
> +        testsuite_testcase_dict = {}
> +        for testcase in testcase_list:
> +            testsuite = self.get_testsuite_from_testcase(testcase)
> +            if testsuite in testsuite_testcase_dict:
> +                testsuite_testcase_dict[testsuite].append(testcase)
> +            else:
> +                testsuite_testcase_dict[testsuite] = [testcase]
> +        return testsuite_testcase_dict
> +
> +    def get_testmodule_testsuite_dictionary(self, testsuite_testcase_dict):
> +        testsuite_list = testsuite_testcase_dict.keys()
> +        testmodule_testsuite_dict = {}
> +        for testsuite in testsuite_list:
> +            testmodule = self.get_testmodule_from_testsuite(testsuite)
> +            if testmodule in testmodule_testsuite_dict:
> +                testmodule_testsuite_dict[testmodule].append(testsuite)
> +            else:
> +                testmodule_testsuite_dict[testmodule] = [testsuite]
> +        return testmodule_testsuite_dict
> +
> +    def _get_testcase_result(self, testcase, testcase_status_dict):
> +        if testcase in testcase_status_dict:
> +            return testcase_status_dict[testcase]
> +        return ""
> +
> +    def _create_testcase_testresult_object(self, testcase_list, testcase_result_dict):
> +        testcase_dict = {}
> +        for testcase in sorted(testcase_list):
> +            result = self._get_testcase_result(testcase, testcase_result_dict)
> +            testcase_dict[testcase] = {"testresult": result}
> +        return testcase_dict
> +
> +    def _create_json_testsuite_string(self, testsuite_list, testsuite_testcase_dict, testcase_result_dict):
> +        testsuite_object = {'testsuite': {}}
> +        testsuite_dict = testsuite_object['testsuite']
> +        for testsuite in sorted(testsuite_list):
> +            testsuite_dict[testsuite] = {'testcase': {}}
> +            testsuite_dict[testsuite]['testcase'] = self._create_testcase_testresult_object(
> +                testsuite_testcase_dict[testsuite],
> +                testcase_result_dict)
> +        return json.dumps(testsuite_object, sort_keys=True, indent=4)

This all looks too complicated for what we really need. I think/hope we
can have one output json file and the original function above can
construct the data in a better form so we don't need much of the above.

> +    def dump_testresult_files(self, testcase_result_dict, write_dir):
> +        if not os.path.exists(write_dir):
> +            pathlib.Path(write_dir).mkdir(parents=True, exist_ok=True)
> +        testsuite_testcase_dict = self.get_testsuite_testcase_dictionary(testcase_result_dict)
> +        testmodule_testsuite_dict = self.get_testmodule_testsuite_dictionary(testsuite_testcase_dict)
> +        for testmodule in testmodule_testsuite_dict.keys():
> +            testsuite_list = testmodule_testsuite_dict[testmodule]
> +            json_testsuite = self._create_json_testsuite_string(testsuite_list, testsuite_testcase_dict,
> +                                                                testcase_result_dict)
> +            file_name = '%s.json' % testmodule
> +            file_path = os.path.join(write_dir, file_name)
> +            with open(file_path, 'w') as the_file:
> +                the_file.write(json_testsuite)

Do we need to write a file per module? Why wouldn't we simply write all
the results into a single json file?

> +    def dump_log_files(self, testcase_log_dict, write_dir):
> +        if not os.path.exists(write_dir):
> +            pathlib.Path(write_dir).mkdir(parents=True, exist_ok=True)
> +        for testcase in testcase_log_dict.keys():
> +            test_log = testcase_log_dict[testcase]
> +            if test_log is not None:
> +                file_name = '%s.log' % testcase
> +                file_path = os.path.join(write_dir, file_name)
> +                with open(file_path, 'w') as the_file:
> +                    the_file.write(test_log)

Why wouldn't we put the test result logs alongside the test result data
in the json file?

Cheers,

Richard




* [PATCH 1/4] oeqa/core/runner: write testresult to json files
@ 2018-10-12  6:33 Yeoh Ee Peng
  2018-10-12 15:00 ` Richard Purdie
  0 siblings, 1 reply; 21+ messages in thread
From: Yeoh Ee Peng @ 2018-10-12  6:33 UTC (permalink / raw)
  To: openembedded-core

As part of the solution to replace Testopia to store testresult,
OEQA needs to output testresult into json files, where these json
testresult files will be stored in git repository by the future
test-case-management tools.

Both the testresult (e.g. PASSED, FAILED, ERROR) and the test log
(e.g. messages from unit test assertions) will be created for storing.

Also the library class inside this patch will be reused by the future
test-case-management tools to write json testresult for manual test
case executed.

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
---
 meta/lib/oeqa/core/runner.py | 132 +++++++++++++++++++++++++++++++++++++++----
 1 file changed, 121 insertions(+), 11 deletions(-)

diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
index eeb625b..cc33d9c 100644
--- a/meta/lib/oeqa/core/runner.py
+++ b/meta/lib/oeqa/core/runner.py
@@ -6,6 +6,8 @@ import time
 import unittest
 import logging
 import re
+import json
+import pathlib
 
 from unittest import TextTestResult as _TestResult
 from unittest import TextTestRunner as _TestRunner
@@ -44,6 +46,9 @@ class OETestResult(_TestResult):
 
         self.tc = tc
 
+        self.result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
+        self.result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL', 'PASSED']
+
     def startTest(self, test):
         # May have been set by concurrencytest
         if test.id() not in self.starttime:
@@ -80,7 +85,7 @@ class OETestResult(_TestResult):
             msg += " (skipped=%d)" % skipped
         self.tc.logger.info(msg)
 
-    def _getDetailsNotPassed(self, case, type, desc):
+    def _isTestResultContainTestCaseWithResultTypeProvided(self, case, type):
         found = False
 
         for (scase, msg) in getattr(self, type):
@@ -121,16 +126,12 @@ class OETestResult(_TestResult):
         for case_name in self.tc._registry['cases']:
             case = self.tc._registry['cases'][case_name]
 
-            result_types = ['failures', 'errors', 'skipped', 'expectedFailures', 'successes']
-            result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL', 'PASSED']
-
-            fail = False
+            found = False
             desc = None
-            for idx, name in enumerate(result_types):
-                (fail, msg) = self._getDetailsNotPassed(case, result_types[idx],
-                        result_desc[idx])
-                if fail:
-                    desc = result_desc[idx]
+            for idx, name in enumerate(self.result_types):
+                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
+                if found:
+                    desc = self.result_desc[idx]
                     break
 
             oeid = -1
@@ -143,13 +144,43 @@ class OETestResult(_TestResult):
             if case.id() in self.starttime and case.id() in self.endtime:
                 t = " (" + "{0:.2f}".format(self.endtime[case.id()] - self.starttime[case.id()]) + "s)"
 
-            if fail:
+            if found:
                 self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(),
                     oeid, desc, t))
             else:
                 self.tc.logger.info("RESULTS - %s - Testcase %s: %s%s" % (case.id(),
                     oeid, 'UNKNOWN', t))
 
+    def _get_testcase_result_and_log_dict(self):
+        testcase_result_dict = {}
+        testcase_log_dict = {}
+        for case_name in self.tc._registry['cases']:
+            case = self.tc._registry['cases'][case_name]
+
+            found = False
+            desc = None
+            test_log = ''
+            for idx, name in enumerate(self.result_types):
+                (found, msg) = self._isTestResultContainTestCaseWithResultTypeProvided(case, self.result_types[idx])
+                if found:
+                    desc = self.result_desc[idx]
+                    test_log = msg
+                    break
+
+            if found:
+                testcase_result_dict[case.id()] = desc
+                testcase_log_dict[case.id()] = test_log
+            else:
+                testcase_result_dict[case.id()] = "UNKNOWN"
+        return testcase_result_dict, testcase_log_dict
+
+    def logDetailsInJson(self, file_dir):
+        (testcase_result_dict, testcase_log_dict) = self._get_testcase_result_and_log_dict()
+        if len(testcase_result_dict) > 0 and len(testcase_log_dict) > 0:
+            tresultjsonhelper = OETestResultJSONHelper()
+            tresultjsonhelper.dump_testresult_files(testcase_result_dict, file_dir)
+            tresultjsonhelper.dump_log_files(testcase_log_dict, os.path.join(file_dir, 'logs'))
+
 class OEListTestsResult(object):
     def wasSuccessful(self):
         return True
@@ -261,3 +292,82 @@ class OETestRunner(_TestRunner):
             self._list_tests_module(suite)
 
         return OEListTestsResult()
+
+class OETestResultJSONHelper(object):
+
+    def get_testsuite_from_testcase(self, testcase):
+        testsuite = testcase[0:testcase.rfind(".")]
+        return testsuite
+
+    def get_testmodule_from_testsuite(self, testsuite):
+        testmodule = testsuite[0:testsuite.find(".")]
+        return testmodule
+
+    def get_testsuite_testcase_dictionary(self, testcase_result_dict):
+        testcase_list = testcase_result_dict.keys()
+        testsuite_testcase_dict = {}
+        for testcase in testcase_list:
+            testsuite = self.get_testsuite_from_testcase(testcase)
+            if testsuite in testsuite_testcase_dict:
+                testsuite_testcase_dict[testsuite].append(testcase)
+            else:
+                testsuite_testcase_dict[testsuite] = [testcase]
+        return testsuite_testcase_dict
+
+    def get_testmodule_testsuite_dictionary(self, testsuite_testcase_dict):
+        testsuite_list = testsuite_testcase_dict.keys()
+        testmodule_testsuite_dict = {}
+        for testsuite in testsuite_list:
+            testmodule = self.get_testmodule_from_testsuite(testsuite)
+            if testmodule in testmodule_testsuite_dict:
+                testmodule_testsuite_dict[testmodule].append(testsuite)
+            else:
+                testmodule_testsuite_dict[testmodule] = [testsuite]
+        return testmodule_testsuite_dict
+
+    def _get_testcase_result(self, testcase, testcase_status_dict):
+        if testcase in testcase_status_dict:
+            return testcase_status_dict[testcase]
+        return ""
+
+    def _create_testcase_testresult_object(self, testcase_list, testcase_result_dict):
+        testcase_dict = {}
+        for testcase in sorted(testcase_list):
+            result = self._get_testcase_result(testcase, testcase_result_dict)
+            testcase_dict[testcase] = {"testresult": result}
+        return testcase_dict
+
+    def _create_json_testsuite_string(self, testsuite_list, testsuite_testcase_dict, testcase_result_dict):
+        testsuite_object = {'testsuite': {}}
+        testsuite_dict = testsuite_object['testsuite']
+        for testsuite in sorted(testsuite_list):
+            testsuite_dict[testsuite] = {'testcase': {}}
+            testsuite_dict[testsuite]['testcase'] = self._create_testcase_testresult_object(
+                testsuite_testcase_dict[testsuite],
+                testcase_result_dict)
+        return json.dumps(testsuite_object, sort_keys=True, indent=4)
+
+    def dump_testresult_files(self, testcase_result_dict, write_dir):
+        if not os.path.exists(write_dir):
+            pathlib.Path(write_dir).mkdir(parents=True, exist_ok=True)
+        testsuite_testcase_dict = self.get_testsuite_testcase_dictionary(testcase_result_dict)
+        testmodule_testsuite_dict = self.get_testmodule_testsuite_dictionary(testsuite_testcase_dict)
+        for testmodule in testmodule_testsuite_dict.keys():
+            testsuite_list = testmodule_testsuite_dict[testmodule]
+            json_testsuite = self._create_json_testsuite_string(testsuite_list, testsuite_testcase_dict,
+                                                                testcase_result_dict)
+            file_name = '%s.json' % testmodule
+            file_path = os.path.join(write_dir, file_name)
+            with open(file_path, 'w') as the_file:
+                the_file.write(json_testsuite)
+
+    def dump_log_files(self, testcase_log_dict, write_dir):
+        if not os.path.exists(write_dir):
+            pathlib.Path(write_dir).mkdir(parents=True, exist_ok=True)
+        for testcase in testcase_log_dict.keys():
+            test_log = testcase_log_dict[testcase]
+            if test_log is not None:
+                file_name = '%s.log' % testcase
+                file_path = os.path.join(write_dir, file_name)
+                with open(file_path, 'w') as the_file:
+                    the_file.write(test_log)
-- 
2.7.4




