* [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool
@ 2018-12-06 10:59 mazliana.mohamad
  2018-12-06 10:59 ` [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function mazliana.mohamad
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: mazliana.mohamad @ 2018-12-06 10:59 UTC (permalink / raw)
  To: openembedded-core

From: Mazliana <mazliana.mohamad@intel.com>

Integrated the test-case-mgmt "store" and "report" tools with "manualexecution". Manual test execution
is an alternative to the Testopia test case management tool. This script provides only bare-minimum
functionality: the user can only execute all of the test cases that a component has.

To use these scripts, first source the OE build environment, then run the entry point script for help.
        $ test-case-mgmt

To execute manual test cases, execute the below
        $ test-case-mgmt manualexecution <manualjsonfile>

By default, testresults.json is stored in poky/<build_dir>/tmp/log/manual.
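
For illustration, a complete session might look like the following, assuming poky's standard
oe-init-build-env setup script; the JSON file name is illustrative, but the manualexecution help
text states that manual test case files live under meta/lib/oeqa/manual/:

        $ . oe-init-build-env
        $ test-case-mgmt manualexecution meta/lib/oeqa/manual/build-appliance.json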

[YOCTO #12651]

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
---
 scripts/test-case-mgmt | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/scripts/test-case-mgmt b/scripts/test-case-mgmt
index b832ee2..ef69db4 100755
--- a/scripts/test-case-mgmt
+++ b/scripts/test-case-mgmt
@@ -17,6 +17,11 @@
 # To store test result & log, execute the below
 #    $ test-case-mgmt store <source_dir> <git_branch>
 #
+# To execute manual test cases, execute the below
+#    $ test-case-mgmt manualexecution <manualjsonfile>
+#
+# By default, testresults.json for manualexecution is stored in poky/<build>/tmp/log/manual/
+#
 # Copyright (c) 2018, Intel Corporation.
 #
 # This program is free software; you can redistribute it and/or modify it
@@ -40,6 +45,7 @@ import argparse_oe
 import scriptutils
 import testcasemgmt.store
 import testcasemgmt.report
+import testcasemgmt.manualexecution
 logger = scriptutils.logger_create('test-case-mgmt')
 
 def _validate_user_input_arguments(args):
@@ -74,6 +80,8 @@ def main():
     testcasemgmt.store.register_commands(subparsers)
     subparsers.add_subparser_group('report', 'Reporting for test result & log', 200)
     testcasemgmt.report.register_commands(subparsers)
+    subparsers.add_subparser_group('manualexecution', 'Execute manual test cases', 100)
+    testcasemgmt.manualexecution.register_commands(subparsers)
     args = parser.parse_args()
     if args.debug:
         logger.setLevel(logging.DEBUG)
-- 
2.7.4




* [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
  2018-12-06 10:59 [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool mazliana.mohamad
@ 2018-12-06 10:59 ` mazliana.mohamad
  2018-12-06 12:07   ` Mohamad, Mazliana
  2018-12-06 11:33 ` ✗ patchtest: failure for "scripts/test-case-mgmt: add "m..." and 1 more Patchwork
  2018-12-06 12:08 ` FW: [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool Mohamad, Mazliana
  2 siblings, 1 reply; 6+ messages in thread
From: mazliana.mohamad @ 2018-12-06 10:59 UTC (permalink / raw)
  To: openembedded-core

From: Mazliana <mazliana.mohamad@intel.com>

Manual execution is a helper script that runs all manual test cases from the command line. The script
shows each test step and its expected results; at the end of the steps, the user is asked to enter the
result as passed, failed, blocked, or skipped. The result is stored in testresults.json, together with
any error log entered by the user and the configuration. The JSON test result file is created using the
OEQA library.
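
For illustration, the manual test case JSON consumed by the script might look like the sketch below.
The layout is inferred from the field accesses in manualexecution.py (an "@alias" that splits into
module.suite.case, plus numbered execution steps with an action and expected results); the alias and
step text are made up:

        [
            {
                "test": {
                    "@alias": "bsps-hw.bsps-hw.boot_from_usb",
                    "execution": {
                        "1": {
                            "action": "Boot the device from a USB stick.",
                            "expected_results": "The device boots successfully."
                        },
                        "2": {
                            "action": "Log in and run uname -a.",
                            "expected_results": "The kernel version is printed."
                        }
                    }
                }
            }
        ]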

The configuration section is keyed in manually by the user. The user specifies how many configuration
entries to add, then provides the name and value for each. The configuration section was added to
standardize the output test result format between automated and manual execution.
  
[YOCTO #12651]

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
---
 scripts/lib/testcasemgmt/manualexecution.py | 141 ++++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py

diff --git a/scripts/lib/testcasemgmt/manualexecution.py b/scripts/lib/testcasemgmt/manualexecution.py
new file mode 100644
index 0000000..2911c1e
--- /dev/null
+++ b/scripts/lib/testcasemgmt/manualexecution.py
@@ -0,0 +1,141 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import scriptpath
+import re
+scriptpath.add_oe_lib_path()
+scriptpath.add_bitbake_lib_path()
+import bb.utils
+from oeqa.core.runner import OETestResultJSONHelper
+
+class ManualTestRunner(object):
+    def __init__(self):
+        self.jdata = ''
+        self.testmodule = ''
+        self.testsuite = ''
+        self.testcase = ''
+        self.configuration = ''
+        self.host_distro = ''
+        self.host_name = ''
+        self.machine = ''
+        self.starttime = ''
+        self.result_id = ''
+
+    def read_json(self, file):
+        self.jdata = json.load(open('%s' % file))
+        self.testcase = []
+        self.testmodule = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+        self.testsuite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+        for i in range(0, len(self.jdata)):
+            self.testcase.append(self.jdata[i]['test']['@alias'].split('.', 2)[2])
+        return self.jdata, self.testmodule, self.testsuite, self.testcase
+    
+    def get_input(self, config):
+        while True:
+            output = input('{} = '.format(config))
+            if re.match('^[a-zA-Z0-9_]+$', output):
+                break
+            print('Only alphanumeric and underscore are allowed. Please try again')
+        return output
+
+    def create_config(self):
+        self.configuration = {}
+        while True:
+            try:
+                conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+                break
+            except ValueError:
+                print('Invalid input. Please provide input as a number not character.')
+        for i in range(conf_total):
+            print('---------------------------------------------')
+            print('This is your %s ' % (i + 1) + 'configuration. Please provide configuration name and its value')
+            print('---------------------------------------------')
+            self.name_conf = self.get_input('Configuration Name')
+            self.value_conf = self.get_input('Configuration Value')
+            print('---------------------------------------------\n')
+            self.configuration[self.name_conf.upper()] = self.value_conf
+        self.currentDT = datetime.datetime.now()
+        self.starttime = self.currentDT.strftime('%Y%m%d%H%M%S')
+        self.test_type = self.testmodule
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = self.test_type
+        return self.configuration
+
+    def create_result_id(self):
+        self.result_id = 'manual_' + self.test_type + '_' + self.starttime
+        return self.result_id
+
+    def execute_test_steps(self, testID):
+        temp = {}
+        testcaseID = self.testmodule + '.' + self.testsuite + '.' + self.testcase[testID]
+        print('------------------------------------------------------------------------')
+        print('Executing test case:' + '' '' + self.testcase[testID])
+        print('------------------------------------------------------------------------')
+        print('You have total ' + max(self.jdata[testID]['test']['execution'].keys()) + ' test steps to be executed.')
+        print('------------------------------------------------------------------------\n')
+
+        for step in range(1, int(max(self.jdata[testID]['test']['execution'].keys()) + '1')):
+            print('Step %s: ' % step + self.jdata[testID]['test']['execution']['%s' % step]['action'])
+            print('Expected output: ' + self.jdata[testID]['test']['execution']['%s' % step]['expected_results'])
+            if step == int(max(self.jdata[testID]['test']['execution'].keys())):
+                done = input('\nPlease provide test results: (P)assed/(F)ailed/(B)locked? \n')
+                break
+            else:
+                done = input('\nPlease press ENTER when you are done to proceed to next step.\n')
+                step = step + 1
+
+        if done == 'p' or done == 'P':
+            res = 'PASSED'
+        elif done == 'f' or done == 'F':
+            res = 'FAILED'
+            log_input = input('\nPlease enter the error and the description of the log: (Ex:log:211 Error Bitbake)\n')
+        elif done == 'b' or done == 'B':
+            res = 'BLOCKED'
+        else:
+            res = 'SKIPPED'
+            
+        if res == 'FAILED':
+            temp.update({testcaseID: {'status': '%s' % res, 'log': '%s' % log_input}})
+        else:
+            temp.update({testcaseID: {'status': '%s' % res}})
+        return temp
+
+def manualexecution(args, logger):
+    basepath = os.getcwd()
+    write_dir = basepath + '/tmp/log/manual/'
+    sys.path.insert(0, write_dir)
+    manualtestcasejson = ManualTestRunner()
+    manualtestcasejson.read_json(args.file)
+    test_result = {}
+    manualtestcasejson.create_config()
+    manualtestcasejson.create_result_id()
+    print('\nTotal number of test cases in this test suite: ' + '%s\n' % len(manualtestcasejson.jdata))
+    for i in range(0, len(manualtestcasejson.jdata)):
+        test_results = manualtestcasejson.execute_test_steps(i)
+        test_result.update(test_results)
+    resultjsonhelper = OETestResultJSONHelper()
+    resultjsonhelper.dump_testresult_file(write_dir, manualtestcasejson.configuration, manualtestcasejson.result_id, test_result)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('manualexecution', help='Helper script for results populating during manual test execution.',
+                                         description='Helper script for results populating during manual test execution. You can find file in meta/lib/oeqa/manual/',
+                                         group='manualexecution')
+    parser_build.set_defaults(func=manualexecution)
+    parser_build.add_argument('file', help='Specify path to manual test case JSON file. Note: Please use \"\" to encapsulate the file path.')
-- 
2.7.4




* ✗ patchtest: failure for "scripts/test-case-mgmt: add "m..." and 1 more
  2018-12-06 10:59 [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool mazliana.mohamad
  2018-12-06 10:59 ` [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function mazliana.mohamad
@ 2018-12-06 11:33 ` Patchwork
  2018-12-06 12:08 ` FW: [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool Mohamad, Mazliana
  2 siblings, 0 replies; 6+ messages in thread
From: Patchwork @ 2018-12-06 11:33 UTC (permalink / raw)
  To: mazliana.mohamad; +Cc: openembedded-core

== Series Details ==

Series: "scripts/test-case-mgmt: add "m..." and 1 more
Revision: 1
URL   : https://patchwork.openembedded.org/series/15244/
State : failure

== Summary ==


Thank you for submitting this patch series to OpenEmbedded Core. This is
an automated response. Several tests have been executed on the proposed
series by patchtest resulting in the following failures:



* Issue             Series does not apply on top of target branch [test_series_merge_on_head] 
  Suggested fix    Rebase your series on top of targeted branch
  Targeted branch  master (currently at bbe5099ba7)

* Patch            [2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
 Issue             Commit shortlog is too long [test_shortlog_length] 
  Suggested fix    Edit shortlog so that it is 90 characters or less (currently 92 characters)



If you believe any of these test results are incorrect, please reply to the
mailing list (openembedded-core@lists.openembedded.org) raising your concerns.
Otherwise we would appreciate you correcting the issues and submitting a new
version of the patchset if applicable. Please ensure you add/increment the
version number when sending the new version (i.e. [PATCH] -> [PATCH v2] ->
[PATCH v3] -> ...).

---
Guidelines:     https://www.openembedded.org/wiki/Commit_Patch_Message_Guidelines
Test framework: http://git.yoctoproject.org/cgit/cgit.cgi/patchtest
Test suite:     http://git.yoctoproject.org/cgit/cgit.cgi/patchtest-oe




* Re: [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
  2018-12-06 10:59 ` [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function mazliana.mohamad
@ 2018-12-06 12:07   ` Mohamad, Mazliana
  0 siblings, 0 replies; 6+ messages in thread
From: Mohamad, Mazliana @ 2018-12-06 12:07 UTC (permalink / raw)
  To: openembedded-core

Please ignore this patch


* FW: [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool
  2018-12-06 10:59 [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool mazliana.mohamad
  2018-12-06 10:59 ` [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function mazliana.mohamad
  2018-12-06 11:33 ` ✗ patchtest: failure for "scripts/test-case-mgmt: add "m..." and 1 more Patchwork
@ 2018-12-06 12:08 ` Mohamad, Mazliana
  2 siblings, 0 replies; 6+ messages in thread
From: Mohamad, Mazliana @ 2018-12-06 12:08 UTC (permalink / raw)
  To: openembedded-core

Please ignore this patch


* [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
  2018-12-24  8:21 [PATCH 1/2] scripts/test-case-mgmt: store test result and reporting Yeoh Ee Peng
@ 2018-12-24  8:21 ` Yeoh Ee Peng
  0 siblings, 0 replies; 6+ messages in thread
From: Yeoh Ee Peng @ 2018-12-24  8:21 UTC (permalink / raw)
  To: openembedded-core

From: Mazliana <mazliana.mohamad@intel.com>

Manual execution is a helper script that runs all manual test cases from the command line, guiding
the user through each step and its expected results. At the last step the user is asked to enter the
result; the input options are passed, failed, blocked, or skipped. The result is written to
testresults.json, including any error log from the user input and the configuration, if any. The JSON
test result file is created using the OEQA library.
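
For illustration, a resulting testresults.json entry might look like the sketch below. This assumes
OETestResultJSONHelper keys the file by the result id built in _create_result_id ('manual_' + test
module + '_' + start time) with "configuration" and "result" sections; all values are made up:

        {
            "manual_bsps-hw_20181224082100": {
                "configuration": {
                    "IMAGE": "core_image_minimal",
                    "STARTTIME": "20181224082100",
                    "TEST_TYPE": "bsps-hw"
                },
                "result": {
                    "bsps-hw.bsps-hw.boot_from_usb": {
                        "status": "FAILED",
                        "log": "log:211 Error Bitbake"
                    }
                }
            }
        }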
 
The configuration section is keyed in manually by the user. The user specifies how many configuration
entries to add and defines the required name and value pair for each. From a QA perspective,
"configuration" means the test environment and parameters used during QA setup before testing can be
carried out. Examples of configurations: the image used for boot-up, the host machine distro, poky
configurations, etc.

The purpose of adding the configuration is to standardize the output test result format between
automated and manual execution.
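
A sketch of the interactive configuration prompts, assuming the user saves a single entry; note that
_get_input accepts only alphanumerics and underscores, so for example core_image_minimal rather than
core-image-minimal:

        Please provide how many configuration you want to save
        1
        ---------------------------------------------
        This is configuration #1 . Please provide configuration name and its value
        ---------------------------------------------
        Configuration Name = IMAGE
        Configuration Value = core_image_minimal
        ---------------------------------------------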

scripts/test-case-mgmt: add "manualexecution" as a tool

Integrated the test-case-mgmt "store" and "report" tools with "manualexecution". Manual test execution
is an alternative to the Testopia test case management tool. This script provides only bare-minimum
functionality: the user can only execute all of the test cases that a component has.

To use these scripts, first source the OE build environment, then run the entry point
script for help.
        $ test-case-mgmt

To execute manual test cases, execute the below
        $ test-case-mgmt manualexecution <manualjsonfile>

By default, testresults.json is stored in <build_dir>/tmp/log/manual.

[YOCTO #12651]

Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
---
 scripts/lib/testcasemgmt/manualexecution.py | 142 ++++++++++++++++++++++++++++
 scripts/test-case-mgmt                      |  11 ++-
 2 files changed, 152 insertions(+), 1 deletion(-)
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py

diff --git a/scripts/lib/testcasemgmt/manualexecution.py b/scripts/lib/testcasemgmt/manualexecution.py
new file mode 100644
index 0000000..c6c450f
--- /dev/null
+++ b/scripts/lib/testcasemgmt/manualexecution.py
@@ -0,0 +1,142 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import re
+from oeqa.core.runner import OETestResultJSONHelper
+
+class ManualTestRunner(object):
+    def __init__(self):
+        self.jdata = ''
+        self.test_module = ''
+        self.test_suite = ''
+        self.test_case = ''
+        self.configuration = ''
+        self.starttime = ''
+        self.result_id = ''
+        self.write_dir = ''
+
+    def _read_json(self, file):
+        self.jdata = json.load(open('%s' % file))
+        self.test_case = []
+        self.test_module = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+        self.test_suite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+        for i in range(0, len(self.jdata)):
+            self.test_case.append(self.jdata[i]['test']['@alias'].split('.', 2)[2])
+    
+    def _get_input(self, config):
+        while True:
+            output = input('{} = '.format(config))
+            if re.match('^[a-zA-Z0-9_]+$', output):
+                break
+            print('Only alphanumeric and underscore are allowed. Please try again')
+        return output
+
+    def _create_config(self):
+        self.configuration = {}
+        while True:
+            try:
+                conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+                break
+            except ValueError:
+                print('Invalid input. Please provide input as a number not character.')
+        for i in range(conf_total):
+            print('---------------------------------------------')
+            print('This is configuration #%s ' % (i + 1) + '. Please provide configuration name and its value')
+            print('---------------------------------------------')
+            name_conf = self._get_input('Configuration Name')
+            value_conf = self._get_input('Configuration Value')
+            print('---------------------------------------------\n')
+            self.configuration[name_conf.upper()] = value_conf
+        current_datetime = datetime.datetime.now()
+        self.starttime = current_datetime.strftime('%Y%m%d%H%M%S')
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = self.test_module
+
+    def _create_result_id(self):
+        self.result_id = 'manual_' + self.test_module + '_' + self.starttime
+
+    def _execute_test_steps(self, test_id):
+        test_result = {}
+        testcase_id = self.test_module + '.' + self.test_suite + '.' + self.test_case[test_id]
+        total_steps = len(self.jdata[test_id]['test']['execution'].keys())
+        print('------------------------------------------------------------------------')
+        print('Executing test case:' + '' '' + self.test_case[test_id])
+        print('------------------------------------------------------------------------')
+        print('You have total ' + str(total_steps) + ' test steps to be executed.')
+        print('------------------------------------------------------------------------\n')
+        
+        for step in range (1, (total_steps + 1)):
+            print('Step %s: ' % step + self.jdata[test_id]['test']['execution']['%s' % step]['action'])
+            print('Expected output: ' + self.jdata[test_id]['test']['execution']['%s' % step]['expected_results'])
+            if step == total_steps:
+                while True:
+                    try:
+                        done = input('\nPlease provide test results: (P)assed/(F)ailed/(B)locked/(S)kipped? \n')
+                        done = done.lower()
+                        if done == 'p':
+                            res = 'PASSED'
+                        elif done == 'f':
+                            res = 'FAILED'
+                            log_input = input('\nPlease enter the error and the description of the log: (Ex:log:211 Error Bitbake)\n')
+                        elif done == 'b':
+                            res = 'BLOCKED'
+                        elif done == 's':
+                            res = 'SKIPPED'
+                          
+                        if res == 'FAILED':
+                            test_result.update({testcase_id: {'status': '%s' % res, 'log': '%s' % log_input}})
+                        else:
+                            test_result.update({testcase_id: {'status': '%s' % res}})
+                        break
+                    except:
+                        print('Invalid input!')
+            else:
+                done = input('\nPlease press ENTER when you are done to proceed to next step.\n')
+        return test_result
+
+    def _create_write_dir(self):
+        basepath = os.environ['BUILDDIR']
+        self.write_dir = basepath + '/tmp/log/manual/'
+
+    def run_test(self, file):
+        self._read_json(file)
+        self._create_config()
+        self._create_result_id()
+        self._create_write_dir()
+        test_results = {}
+        print('\nTotal number of test cases in this test suite: ' + '%s\n' % len(self.jdata))
+        for i in range(0, len(self.jdata)):
+            test_result = self._execute_test_steps(i)
+            test_results.update(test_result)
+        return self.configuration, self.result_id, self.write_dir, test_results
+
+def manualexecution(args, logger):
+    testrunner = ManualTestRunner()
+    get_configuration, get_result_id, get_write_dir, get_test_results = testrunner.run_test(args.file)
+    resultjsonhelper = OETestResultJSONHelper()
+    resultjsonhelper.dump_testresult_file(get_write_dir, get_configuration, get_result_id,
+                                          get_test_results)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('manualexecution', help='Helper script for results populating during manual test execution.',
+                                         description='Helper script for results populating during manual test execution. You can find manual test case JSON file in meta/lib/oeqa/manual/',
+                                         group='manualexecution')
+    parser_build.set_defaults(func=manualexecution)
+    parser_build.add_argument('file', help='Specify path to manual test case JSON file. Note: Please use \"\" to encapsulate the file path.')
diff --git a/scripts/test-case-mgmt b/scripts/test-case-mgmt
index 0df305d..5c1d435 100755
--- a/scripts/test-case-mgmt
+++ b/scripts/test-case-mgmt
@@ -8,7 +8,7 @@
 # test-case-mgmt script was designed as part of the helper script for below purpose:
 # 1. To store test result inside git repository
 # 2. To report text-based test result summary
-# 3. (Future) To execute manual test cases
+# 3. To execute manual test cases
 #
 # To look for help information.
 #    $ test-case-mgmt
@@ -19,6 +19,12 @@
 # To report test result summary, execute the below
 #     $ test-case-mgmt report <git_branch>
 #
+# To execute manual test cases, execute the below
+#    $ test-case-mgmt manualexecution <manualjsonfile>
+#
+# By default, testresults.json for manualexecution is stored in <build_dir>/tmp/log/manual/
+#
+#
 # Copyright (c) 2018, Intel Corporation.
 #
 # This program is free software; you can redistribute it and/or modify it
@@ -42,6 +48,7 @@ import argparse_oe
 import scriptutils
 import testcasemgmt.store
 import testcasemgmt.report
+import testcasemgmt.manualexecution
 logger = scriptutils.logger_create('test-case-mgmt')
 
 def _validate_user_input_arguments(args):
@@ -72,6 +79,8 @@ def main():
     parser.add_argument('-q', '--quiet', help='print only errors', action='store_true')
     subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
     subparsers.required = True
+    subparsers.add_subparser_group('manualexecution', 'execute manual test cases', 300)
+    testcasemgmt.manualexecution.register_commands(subparsers)
     subparsers.add_subparser_group('store', 'store test result', 200)
     testcasemgmt.store.register_commands(subparsers)
     subparsers.add_subparser_group('report', 'report test result summary', 100)
-- 
2.7.4




Thread overview: 6+ messages
2018-12-06 10:59 [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool mazliana.mohamad
2018-12-06 10:59 ` [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function mazliana.mohamad
2018-12-06 12:07   ` Mohamad, Mazliana
2018-12-06 11:33 ` ✗ patchtest: failure for "scripts/test-case-mgmt: add "m..." and 1 more Patchwork
2018-12-06 12:08 ` FW: [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool Mohamad, Mazliana
2018-12-24  8:21 [PATCH 1/2] scripts/test-case-mgmt: store test result and reporting Yeoh Ee Peng
2018-12-24  8:21 ` [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function Yeoh Ee Peng
