* [PATCH 0/9] oe-build-perf-test: use Python unittest framework
@ 2016-08-12  9:11 Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 1/9] oeqa.buildperf: rename module containing basic tests Markus Lehtonen
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

This patchset converts the recently added oe-build-perf-test script (and the
oeqa.buildperf module) to use Python's unittest framework. Unittest provides a
lot of functionality out of the box and should make future development of the
build perf tests easier. As of now, the most visible change is the different
console output of the oe-build-perf-test script.

The first patch renames the .py file containing the actual tests. The change is
cosmetic, but it makes the oeqa.buildperf module more consistent by requiring
all (possible) future test files to start with a 'test' prefix, too.

The next five patches do the actual conversion of the buildperf tests to the
unittest framework. The conversion is one larger change that is split into
multiple smaller patches to make it easier to follow. The patches depend on
each other, and all of them should be applied as a unit.

The seventh patch aligns the test statuses with how unittest expects them to
be used (test failure vs. test error).

The last two patches improve the console output of the script.


The following changes since commit 5ed0d5a7d9b051a551a6de644bf6a42b87c12471:

  dbus: backport stdint.h build fix (2016-08-10 10:45:33 +0100)

are available in the git repository at:

  git://git.openembedded.org/openembedded-core-contrib marquiz/buildperf/unittest
  http://git.openembedded.org/openembedded-core-contrib/log/?h=marquiz/buildperf/unittest


Markus Lehtonen (9):
  oeqa.buildperf: rename module containing basic tests
  oeqa.buildperf: derive BuildPerfTestCase class from unittest.TestCase
  oeqa.buildperf: add BuildPerfTestLoader class
  oeqa.buildperf: add BuildPerfTestResult class
  oeqa.buildperf: convert test cases to unittest
  oe-build-perf-test: use new unittest based framework
  oeqa.buildperf: introduce runCmd2()
  oe-build-perf-test: write logger output into file only
  oeqa.buildperf: be more verbose about failed commands

 meta/lib/oeqa/buildperf/__init__.py    |  10 +-
 meta/lib/oeqa/buildperf/base.py        | 245 +++++++++++++++++----------------
 meta/lib/oeqa/buildperf/basic_tests.py | 133 ------------------
 meta/lib/oeqa/buildperf/test_basic.py  | 121 ++++++++++++++++
 scripts/oe-build-perf-test             |  54 +++++---
 5 files changed, 294 insertions(+), 269 deletions(-)
 delete mode 100644 meta/lib/oeqa/buildperf/basic_tests.py
 create mode 100644 meta/lib/oeqa/buildperf/test_basic.py

-- 
2.6.6



^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/9] oeqa.buildperf: rename module containing basic tests
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 2/9] oeqa.buildperf: derive BuildPerfTestCase class from unittest.TestCase Markus Lehtonen
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 meta/lib/oeqa/buildperf/__init__.py    |   2 +-
 meta/lib/oeqa/buildperf/basic_tests.py | 133 ---------------------------------
 meta/lib/oeqa/buildperf/test_basic.py  | 133 +++++++++++++++++++++++++++++++++
 3 files changed, 134 insertions(+), 134 deletions(-)
 delete mode 100644 meta/lib/oeqa/buildperf/basic_tests.py
 create mode 100644 meta/lib/oeqa/buildperf/test_basic.py

diff --git a/meta/lib/oeqa/buildperf/__init__.py b/meta/lib/oeqa/buildperf/__init__.py
index ad5b37c..c816bd2 100644
--- a/meta/lib/oeqa/buildperf/__init__.py
+++ b/meta/lib/oeqa/buildperf/__init__.py
@@ -12,4 +12,4 @@
 """Build performance tests"""
 from .base import (perf_test_case, BuildPerfTest, BuildPerfTestRunner,
                    KernelDropCaches)
-from .basic_tests import *
+from .test_basic import *
diff --git a/meta/lib/oeqa/buildperf/basic_tests.py b/meta/lib/oeqa/buildperf/basic_tests.py
deleted file mode 100644
index ada5aba..0000000
--- a/meta/lib/oeqa/buildperf/basic_tests.py
+++ /dev/null
@@ -1,133 +0,0 @@
-# Copyright (c) 2016, Intel Corporation.
-#
-# This program is free software; you can redistribute it and/or modify it
-# under the terms and conditions of the GNU General Public License,
-# version 2, as published by the Free Software Foundation.
-#
-# This program is distributed in the hope it will be useful, but WITHOUT
-# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
-# more details.
-#
-"""Basic set of build performance tests"""
-import os
-import shutil
-
-from . import BuildPerfTest, perf_test_case
-from oeqa.utils.commands import get_bb_vars
-
-
-@perf_test_case
-class Test1P1(BuildPerfTest):
-    name = "test1"
-    build_target = 'core-image-sato'
-    description = "Measure wall clock of bitbake {} and size of tmp dir".format(build_target)
-
-    def _run(self):
-        self.log_cmd_output("bitbake {} -c fetchall".format(self.build_target))
-        self.rm_tmp()
-        self.rm_sstate()
-        self.rm_cache()
-        self.sync()
-        self.measure_cmd_resources(['bitbake', self.build_target], 'build',
-                                   'bitbake ' + self.build_target)
-        self.measure_disk_usage(self.bb_vars['TMPDIR'], 'tmpdir', 'tmpdir')
-        self.save_buildstats()
-
-
-@perf_test_case
-class Test1P2(BuildPerfTest):
-    name = "test12"
-    build_target = 'virtual/kernel'
-    description = "Measure bitbake {}".format(build_target)
-
-    def _run(self):
-        self.log_cmd_output("bitbake {} -c cleansstate".format(
-            self.build_target))
-        self.sync()
-        self.measure_cmd_resources(['bitbake', self.build_target], 'build',
-                                   'bitbake ' + self.build_target)
-
-
-@perf_test_case
-class Test1P3(BuildPerfTest):
-    name = "test13"
-    build_target = 'core-image-sato'
-    description = "Build {} with rm_work enabled".format(build_target)
-
-    def _run(self):
-        postfile = os.path.join(self.out_dir, 'postfile.conf')
-        with open(postfile, 'w') as fobj:
-            fobj.write('INHERIT += "rm_work"\n')
-        try:
-            self.rm_tmp()
-            self.rm_sstate()
-            self.rm_cache()
-            self.sync()
-            cmd = ['bitbake', '-R', postfile, self.build_target]
-            self.measure_cmd_resources(cmd, 'build',
-                                       'bitbake' + self.build_target)
-            self.measure_disk_usage(self.bb_vars['TMPDIR'], 'tmpdir', 'tmpdir')
-        finally:
-            os.unlink(postfile)
-        self.save_buildstats()
-
-
-@perf_test_case
-class Test2(BuildPerfTest):
-    name = "test2"
-    build_target = 'core-image-sato'
-    description = "Measure bitbake {} -c rootfs with sstate".format(build_target)
-
-    def _run(self):
-        self.rm_tmp()
-        self.rm_cache()
-        self.sync()
-        cmd = ['bitbake', self.build_target, '-c', 'rootfs']
-        self.measure_cmd_resources(cmd, 'do_rootfs', 'bitbake do_rootfs')
-
-
-@perf_test_case
-class Test3(BuildPerfTest):
-    name = "test3"
-    description = "Parsing time metrics (bitbake -p)"
-
-    def _run(self):
-        # Drop all caches and parse
-        self.rm_cache()
-        self.force_rm(os.path.join(self.bb_vars['TMPDIR'], 'cache'))
-        self.measure_cmd_resources(['bitbake', '-p'], 'parse_1',
-                                   'bitbake -p (no caches)')
-        # Drop tmp/cache
-        self.force_rm(os.path.join(self.bb_vars['TMPDIR'], 'cache'))
-        self.measure_cmd_resources(['bitbake', '-p'], 'parse_2',
-                                   'bitbake -p (no tmp/cache)')
-        # Parse with fully cached data
-        self.measure_cmd_resources(['bitbake', '-p'], 'parse_3',
-                                   'bitbake -p (cached)')
-
-
-@perf_test_case
-class Test4(BuildPerfTest):
-    name = "test4"
-    build_target = 'core-image-sato'
-    description = "eSDK metrics"
-
-    def _run(self):
-        self.log_cmd_output("bitbake {} -c do_populate_sdk_ext".format(
-            self.build_target))
-        self.bb_vars = get_bb_vars(None, self.build_target)
-        tmp_dir = self.bb_vars['TMPDIR']
-        installer = os.path.join(
-            self.bb_vars['SDK_DEPLOY'],
-            self.bb_vars['TOOLCHAINEXT_OUTPUTNAME'] + '.sh')
-        # Measure installer size
-        self.measure_disk_usage(installer, 'installer_bin', 'eSDK installer')
-        # Measure deployment time and deployed size
-        deploy_dir = os.path.join(tmp_dir, 'esdk-deploy')
-        if os.path.exists(deploy_dir):
-            shutil.rmtree(deploy_dir)
-        self.sync()
-        self.measure_cmd_resources([installer, '-y', '-d', deploy_dir],
-                                   'deploy', 'eSDK deploy')
-        self.measure_disk_usage(deploy_dir, 'deploy_dir', 'deploy dir')
diff --git a/meta/lib/oeqa/buildperf/test_basic.py b/meta/lib/oeqa/buildperf/test_basic.py
new file mode 100644
index 0000000..ada5aba
--- /dev/null
+++ b/meta/lib/oeqa/buildperf/test_basic.py
@@ -0,0 +1,133 @@
+# Copyright (c) 2016, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+"""Basic set of build performance tests"""
+import os
+import shutil
+
+from . import BuildPerfTest, perf_test_case
+from oeqa.utils.commands import get_bb_vars
+
+
+@perf_test_case
+class Test1P1(BuildPerfTest):
+    name = "test1"
+    build_target = 'core-image-sato'
+    description = "Measure wall clock of bitbake {} and size of tmp dir".format(build_target)
+
+    def _run(self):
+        self.log_cmd_output("bitbake {} -c fetchall".format(self.build_target))
+        self.rm_tmp()
+        self.rm_sstate()
+        self.rm_cache()
+        self.sync()
+        self.measure_cmd_resources(['bitbake', self.build_target], 'build',
+                                   'bitbake ' + self.build_target)
+        self.measure_disk_usage(self.bb_vars['TMPDIR'], 'tmpdir', 'tmpdir')
+        self.save_buildstats()
+
+
+@perf_test_case
+class Test1P2(BuildPerfTest):
+    name = "test12"
+    build_target = 'virtual/kernel'
+    description = "Measure bitbake {}".format(build_target)
+
+    def _run(self):
+        self.log_cmd_output("bitbake {} -c cleansstate".format(
+            self.build_target))
+        self.sync()
+        self.measure_cmd_resources(['bitbake', self.build_target], 'build',
+                                   'bitbake ' + self.build_target)
+
+
+@perf_test_case
+class Test1P3(BuildPerfTest):
+    name = "test13"
+    build_target = 'core-image-sato'
+    description = "Build {} with rm_work enabled".format(build_target)
+
+    def _run(self):
+        postfile = os.path.join(self.out_dir, 'postfile.conf')
+        with open(postfile, 'w') as fobj:
+            fobj.write('INHERIT += "rm_work"\n')
+        try:
+            self.rm_tmp()
+            self.rm_sstate()
+            self.rm_cache()
+            self.sync()
+            cmd = ['bitbake', '-R', postfile, self.build_target]
+            self.measure_cmd_resources(cmd, 'build',
+                                       'bitbake' + self.build_target)
+            self.measure_disk_usage(self.bb_vars['TMPDIR'], 'tmpdir', 'tmpdir')
+        finally:
+            os.unlink(postfile)
+        self.save_buildstats()
+
+
+@perf_test_case
+class Test2(BuildPerfTest):
+    name = "test2"
+    build_target = 'core-image-sato'
+    description = "Measure bitbake {} -c rootfs with sstate".format(build_target)
+
+    def _run(self):
+        self.rm_tmp()
+        self.rm_cache()
+        self.sync()
+        cmd = ['bitbake', self.build_target, '-c', 'rootfs']
+        self.measure_cmd_resources(cmd, 'do_rootfs', 'bitbake do_rootfs')
+
+
+@perf_test_case
+class Test3(BuildPerfTest):
+    name = "test3"
+    description = "Parsing time metrics (bitbake -p)"
+
+    def _run(self):
+        # Drop all caches and parse
+        self.rm_cache()
+        self.force_rm(os.path.join(self.bb_vars['TMPDIR'], 'cache'))
+        self.measure_cmd_resources(['bitbake', '-p'], 'parse_1',
+                                   'bitbake -p (no caches)')
+        # Drop tmp/cache
+        self.force_rm(os.path.join(self.bb_vars['TMPDIR'], 'cache'))
+        self.measure_cmd_resources(['bitbake', '-p'], 'parse_2',
+                                   'bitbake -p (no tmp/cache)')
+        # Parse with fully cached data
+        self.measure_cmd_resources(['bitbake', '-p'], 'parse_3',
+                                   'bitbake -p (cached)')
+
+
+@perf_test_case
+class Test4(BuildPerfTest):
+    name = "test4"
+    build_target = 'core-image-sato'
+    description = "eSDK metrics"
+
+    def _run(self):
+        self.log_cmd_output("bitbake {} -c do_populate_sdk_ext".format(
+            self.build_target))
+        self.bb_vars = get_bb_vars(None, self.build_target)
+        tmp_dir = self.bb_vars['TMPDIR']
+        installer = os.path.join(
+            self.bb_vars['SDK_DEPLOY'],
+            self.bb_vars['TOOLCHAINEXT_OUTPUTNAME'] + '.sh')
+        # Measure installer size
+        self.measure_disk_usage(installer, 'installer_bin', 'eSDK installer')
+        # Measure deployment time and deployed size
+        deploy_dir = os.path.join(tmp_dir, 'esdk-deploy')
+        if os.path.exists(deploy_dir):
+            shutil.rmtree(deploy_dir)
+        self.sync()
+        self.measure_cmd_resources([installer, '-y', '-d', deploy_dir],
+                                   'deploy', 'eSDK deploy')
+        self.measure_disk_usage(deploy_dir, 'deploy_dir', 'deploy dir')
-- 
2.6.6




* [PATCH 2/9] oeqa.buildperf: derive BuildPerfTestCase class from unittest.TestCase
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 1/9] oeqa.buildperf: rename module containing basic tests Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 3/9] oeqa.buildperf: add BuildPerfTestLoader class Markus Lehtonen
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

Rename BuildPerfTest to BuildPerfTestCase and derive it from the TestCase
class of the Python standard library's unittest framework. This doesn't work
with our existing test cases or test runner class, so those need to be
modified, too.

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 meta/lib/oeqa/buildperf/__init__.py |  4 ++-
 meta/lib/oeqa/buildperf/base.py     | 67 +++++++++++++++++--------------------
 2 files changed, 33 insertions(+), 38 deletions(-)

diff --git a/meta/lib/oeqa/buildperf/__init__.py b/meta/lib/oeqa/buildperf/__init__.py
index c816bd2..add3be2 100644
--- a/meta/lib/oeqa/buildperf/__init__.py
+++ b/meta/lib/oeqa/buildperf/__init__.py
@@ -10,6 +10,8 @@
 # more details.
 #
 """Build performance tests"""
-from .base import (perf_test_case, BuildPerfTest, BuildPerfTestRunner,
+from .base import (perf_test_case,
+                   BuildPerfTestCase,
+                   BuildPerfTestRunner,
                    KernelDropCaches)
 from .test_basic import *
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 527563b..5b4c37c 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -19,6 +19,7 @@ import socket
 import tempfile
 import time
 import traceback
+import unittest
 from datetime import datetime, timedelta
 
 from oeqa.utils.commands import runCmd, get_bb_vars
@@ -191,50 +192,34 @@ def perf_test_case(obj):
     return obj
 
 
-class BuildPerfTest(object):
+class BuildPerfTestCase(unittest.TestCase):
     """Base class for build performance tests"""
     SYSRES = 'sysres'
     DISKUSAGE = 'diskusage'
 
-    name = None
-    description = None
-
-    def __init__(self, out_dir):
-        self.out_dir = out_dir
-        self.results = {'name':self.name,
-                        'description': self.description,
-                        'status': 'NOTRUN',
-                        'start_time': None,
-                        'elapsed_time': None,
-                        'measurements': []}
-        if not os.path.exists(self.out_dir):
-            os.makedirs(self.out_dir)
-        if not self.name:
-            self.name = self.__class__.__name__
+    def __init__(self, *args, **kwargs):
+        super(BuildPerfTestCase, self).__init__(*args, **kwargs)
+        self.name = self._testMethodName
+        self.out_dir = None
+        self.start_time = None
+        self.elapsed_time = None
+        self.measurements = []
         self.bb_vars = get_bb_vars()
-        # TODO: remove the _failed flag when globalres.log is ditched as all
-        # failures should raise an exception
-        self._failed = False
-        self.cmd_log = os.path.join(self.out_dir, 'commands.log')
+        # TODO: remove 'times' and 'sizes' arrays when globalres support is
+        # removed
+        self.times = []
+        self.sizes = []
 
-    def run(self):
+    def run(self, *args, **kwargs):
         """Run test"""
-        self.results['status'] = 'FAILED'
-        self.results['start_time'] = datetime.now()
-        self._run()
-        self.results['elapsed_time'] = (datetime.now() -
-                                        self.results['start_time'])
-        # Test is regarded as completed if it doesn't raise an exception
-        if not self._failed:
-            self.results['status'] = 'COMPLETED'
-
-    def _run(self):
-        """Actual test payload"""
-        raise NotImplementedError
+        self.start_time = datetime.now()
+        super(BuildPerfTestCase, self).run(*args, **kwargs)
+        self.elapsed_time = datetime.now() - self.start_time
 
     def log_cmd_output(self, cmd):
         """Run a command and log its output"""
-        with open(self.cmd_log, 'a') as fobj:
+        cmd_log = os.path.join(self.out_dir, 'commands.log')
+        with open(cmd_log, 'a') as fobj:
             runCmd(cmd, stdout=fobj)
 
     def measure_cmd_resources(self, cmd, name, legend):
@@ -251,7 +236,8 @@ class BuildPerfTest(object):
 
         cmd_str = cmd if isinstance(cmd, str) else ' '.join(cmd)
         log.info("Timing command: %s", cmd_str)
-        with open(self.cmd_log, 'a') as fobj:
+        cmd_log = os.path.join(self.out_dir, 'commands.log')
+        with open(cmd_log, 'a') as fobj:
             ret, timedata = time_cmd(cmd, stdout=fobj)
         if ret.status:
             log.error("Time will be reported as 0. Command failed: %s",
@@ -266,12 +252,17 @@ class BuildPerfTest(object):
                        'name': name,
                        'legend': legend}
         measurement['values'] = {'elapsed_time': etime}
-        self.results['measurements'].append(measurement)
+        self.measurements.append(measurement)
+        e_sec = etime.total_seconds()
         nlogs = len(glob.glob(self.out_dir + '/results.log*'))
         results_log = os.path.join(self.out_dir,
                                    'results.log.{}'.format(nlogs + 1))
         with open(results_log, 'w') as fobj:
             fobj.write(timedata)
+        # Append to 'times' array for globalres log
+        self.times.append('{:d}:{:02d}:{:.2f}'.format(int(e_sec / 3600),
+                                                      int((e_sec % 3600) / 60),
+                                                       e_sec % 60))
 
     def measure_disk_usage(self, path, name, legend):
         """Estimate disk usage of a file or directory"""
@@ -289,7 +280,9 @@ class BuildPerfTest(object):
                        'name': name,
                        'legend': legend}
         measurement['values'] = {'size': size}
-        self.results['measurements'].append(measurement)
+        self.measurements.append(measurement)
+        # Append to 'sizes' array for globalres log
+        self.sizes.append(str(size))
 
     def save_buildstats(self):
         """Save buildstats"""
-- 
2.6.6




* [PATCH 3/9] oeqa.buildperf: add BuildPerfTestLoader class
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 1/9] oeqa.buildperf: rename module containing basic tests Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 2/9] oeqa.buildperf: derive BuildPerfTestCase class from unittest.TestCase Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 4/9] oeqa.buildperf: add BuildPerfTestResult class Markus Lehtonen
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 meta/lib/oeqa/buildperf/__init__.py | 1 +
 meta/lib/oeqa/buildperf/base.py     | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/meta/lib/oeqa/buildperf/__init__.py b/meta/lib/oeqa/buildperf/__init__.py
index add3be2..7e51726 100644
--- a/meta/lib/oeqa/buildperf/__init__.py
+++ b/meta/lib/oeqa/buildperf/__init__.py
@@ -12,6 +12,7 @@
 """Build performance tests"""
 from .base import (perf_test_case,
                    BuildPerfTestCase,
+                   BuildPerfTestLoader,
                    BuildPerfTestRunner,
                    KernelDropCaches)
 from .test_basic import *
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 5b4c37c..5049ba9 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -321,3 +321,8 @@ class BuildPerfTestCase(unittest.TestCase):
         os.sync()
         # Wait a bit for all the dirty blocks to be written onto disk
         time.sleep(3)
+
+
+class BuildPerfTestLoader(unittest.TestLoader):
+    """Test loader for build performance tests"""
+    sortTestMethodsUsing = None
-- 
2.6.6




* [PATCH 4/9] oeqa.buildperf: add BuildPerfTestResult class
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
                   ` (2 preceding siblings ...)
  2016-08-12  9:11 ` [PATCH 3/9] oeqa.buildperf: add BuildPerfTestLoader class Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 5/9] oeqa.buildperf: convert test cases to unittest Markus Lehtonen
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

The new class is derived from the unittest.TextTestResult class. It is
actually implemented by modifying the old BuildPerfTestRunner class which, in
turn, is replaced by a totally new, simple implementation derived from
unittest.TextTestRunner.

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 meta/lib/oeqa/buildperf/__init__.py |   4 +-
 meta/lib/oeqa/buildperf/base.py     | 150 +++++++++++++++++++-----------------
 scripts/oe-build-perf-test          |  10 +++
 3 files changed, 90 insertions(+), 74 deletions(-)

diff --git a/meta/lib/oeqa/buildperf/__init__.py b/meta/lib/oeqa/buildperf/__init__.py
index 7e51726..85abf3a 100644
--- a/meta/lib/oeqa/buildperf/__init__.py
+++ b/meta/lib/oeqa/buildperf/__init__.py
@@ -10,9 +10,9 @@
 # more details.
 #
 """Build performance tests"""
-from .base import (perf_test_case,
-                   BuildPerfTestCase,
+from .base import (BuildPerfTestCase,
                    BuildPerfTestLoader,
+                   BuildPerfTestResult,
                    BuildPerfTestRunner,
                    KernelDropCaches)
 from .test_basic import *
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 5049ba9..9711a6a 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -76,25 +76,26 @@ def time_cmd(cmd, **kwargs):
     return ret, timedata
 
 
-class BuildPerfTestRunner(object):
+class BuildPerfTestResult(unittest.TextTestResult):
     """Runner class for executing the individual tests"""
     # List of test cases to run
     test_run_queue = []
 
-    def __init__(self, out_dir):
-        self.results = {}
-        self.out_dir = os.path.abspath(out_dir)
-        if not os.path.exists(self.out_dir):
-            os.makedirs(self.out_dir)
+    def __init__(self, out_dir, *args, **kwargs):
+        super(BuildPerfTestResult, self).__init__(*args, **kwargs)
 
+        self.out_dir = out_dir
         # Get Git parameters
         try:
             self.repo = GitRepo('.')
         except GitError:
             self.repo = None
-        self.git_rev, self.git_branch = self.get_git_revision()
+        self.git_revision, self.git_branch = self.get_git_revision()
+        self.hostname = socket.gethostname()
+        self.start_time = self.elapsed_time = None
+        self.successes = []
         log.info("Using Git branch:revision %s:%s", self.git_branch,
-                 self.git_rev)
+                 self.git_revision)
 
     def get_git_revision(self):
         """Get git branch and revision under testing"""
@@ -117,79 +118,71 @@ class BuildPerfTestRunner(object):
                     branch = None
         return str(rev), str(branch)
 
-    def run_tests(self):
-        """Method that actually runs the tests"""
-        self.results['schema_version'] = 1
-        self.results['git_revision'] = self.git_rev
-        self.results['git_branch'] = self.git_branch
-        self.results['tester_host'] = socket.gethostname()
-        start_time = datetime.utcnow()
-        self.results['start_time'] = start_time
-        self.results['tests'] = {}
-
-        self.archive_build_conf()
-        for test_class in self.test_run_queue:
-            log.info("Executing test %s: %s", test_class.name,
-                     test_class.description)
-
-            test = test_class(self.out_dir)
-            try:
-                test.run()
-            except Exception:
-                # Catch all exceptions. This way e.g buggy tests won't scrap
-                # the whole test run
-                sep = '-' * 5 + ' TRACEBACK ' + '-' * 60 + '\n'
-                tb_msg = sep + traceback.format_exc() + sep
-                log.error("Test execution failed with:\n" + tb_msg)
-            self.results['tests'][test.name] = test.results
-
-        self.results['elapsed_time'] = datetime.utcnow() - start_time
-        return 0
-
-    def archive_build_conf(self):
-        """Archive build/conf to test results"""
-        src_dir = os.path.join(os.environ['BUILDDIR'], 'conf')
-        tgt_dir = os.path.join(self.out_dir, 'build', 'conf')
-        os.makedirs(os.path.dirname(tgt_dir))
-        shutil.copytree(src_dir, tgt_dir)
+    def addSuccess(self, test):
+        """Record results from successful tests"""
+        super(BuildPerfTestResult, self).addSuccess(test)
+        self.successes.append((test, None))
+
+    def startTest(self, test):
+        """Pre-test hook"""
+        test.out_dir = self.out_dir
+        log.info("Executing test %s: %s", test.name, test.shortDescription())
+        self.stream.write(datetime.now().strftime("[%Y-%m-%d %H:%M:%S] "))
+        super(BuildPerfTestResult, self).startTest(test)
+
+    def startTestRun(self):
+        """Pre-run hook"""
+        self.start_time = datetime.utcnow()
+
+    def stopTestRun(self):
+        """Post-run hook"""
+        self.elapsed_time = datetime.utcnow() - self.start_time
+
+    def all_results(self):
+        result_map = {'SUCCESS': self.successes,
+                      'FAIL': self.failures,
+                      'ERROR': self.errors,
+                      'EXP_FAIL': self.expectedFailures,
+                      'UNEXP_SUCCESS': self.unexpectedSuccesses}
+        for status, tests in result_map.items():
+            for test in tests:
+                yield (status, test)
+
 
     def update_globalres_file(self, filename):
         """Write results to globalres csv file"""
+        # Map test names to time and size columns in globalres
+        # The tuples represent index and length of times and sizes
+        # respectively
+        gr_map = {'test1': ((0, 1), (8, 1)),
+                  'test12': ((1, 1), (None, None)),
+                  'test13': ((2, 1), (9, 1)),
+                  'test2': ((3, 1), (None, None)),
+                  'test3': ((4, 3), (None, None)),
+                  'test4': ((7, 1), (10, 2))}
+
         if self.repo:
-            git_tag_rev = self.repo.run_cmd(['describe', self.git_rev])
+            git_tag_rev = self.repo.run_cmd(['describe', self.git_revision])
         else:
-            git_tag_rev = self.git_rev
-        times = []
-        sizes = []
-        for test in self.results['tests'].values():
-            for measurement in test['measurements']:
-                res_type = measurement['type']
-                values = measurement['values']
-                if res_type == BuildPerfTest.SYSRES:
-                    e_sec = values['elapsed_time'].total_seconds()
-                    times.append('{:d}:{:02d}:{:.2f}'.format(
-                        int(e_sec / 3600),
-                        int((e_sec % 3600) / 60),
-                        e_sec % 60))
-                elif res_type == BuildPerfTest.DISKUSAGE:
-                    sizes.append(str(values['size']))
-                else:
-                    log.warning("Unable to handle '%s' values in "
-                                "globalres.log", res_type)
+            git_tag_rev = self.git_revision
+
+        values = ['0'] * 12
+        for status, test in self.all_results():
+            if status not in ['SUCCESS', 'FAILURE', 'EXP_SUCCESS']:
+                continue
+            (t_ind, t_len), (s_ind, s_len) = gr_map[test.name]
+            if t_ind is not None:
+                values[t_ind:t_ind + t_len] = test.times
+            if s_ind is not None:
+                values[s_ind:s_ind + s_len] = test.sizes
 
         log.debug("Writing globalres log to %s", filename)
         with open(filename, 'a') as fobj:
-            fobj.write('{},{}:{},{},'.format(self.results['tester_host'],
-                                             self.results['git_branch'],
-                                             self.results['git_revision'],
+            fobj.write('{},{}:{},{},'.format(self.hostname,
+                                             self.git_branch,
+                                             self.git_revision,
                                              git_tag_rev))
-            fobj.write(','.join(times + sizes) + '\n')
-
-
-def perf_test_case(obj):
-    """Decorator for adding test classes"""
-    BuildPerfTestRunner.test_run_queue.append(obj)
-    return obj
+            fobj.write(','.join(values) + '\n')
 
 
 class BuildPerfTestCase(unittest.TestCase):
@@ -326,3 +319,16 @@ class BuildPerfTestCase(unittest.TestCase):
 class BuildPerfTestLoader(unittest.TestLoader):
     """Test loader for build performance tests"""
     sortTestMethodsUsing = None
+
+
+class BuildPerfTestRunner(unittest.TextTestRunner):
+    """Test runner for build performance tests"""
+    sortTestMethodsUsing = None
+
+    def __init__(self, out_dir, *args, **kwargs):
+        super(BuildPerfTestRunner, self).__init__(*args, **kwargs)
+        self.out_dir = out_dir
+
+    def _makeResult(self):
+        return BuildPerfTestResult(self.out_dir, self.stream, self.descriptions,
+                                   self.verbosity)
diff --git a/scripts/oe-build-perf-test b/scripts/oe-build-perf-test
index 996996b..8142b03 100755
--- a/scripts/oe-build-perf-test
+++ b/scripts/oe-build-perf-test
@@ -19,6 +19,7 @@ import errno
 import fcntl
 import logging
 import os
+import shutil
 import sys
 from datetime import datetime
 
@@ -78,6 +79,14 @@ def setup_file_logging(log_file):
     log.addHandler(handler)
 
 
+def archive_build_conf(out_dir):
+    """Archive build/conf to test results"""
+    src_dir = os.path.join(os.environ['BUILDDIR'], 'conf')
+    tgt_dir = os.path.join(out_dir, 'build', 'conf')
+    os.makedirs(os.path.dirname(tgt_dir))
+    shutil.copytree(src_dir, tgt_dir)
+
+
 def parse_args(argv):
     """Parse command line arguments"""
     parser = argparse.ArgumentParser(
@@ -120,6 +129,7 @@ def main(argv=None):
 
     # Run actual tests
     runner = BuildPerfTestRunner(out_dir)
+    archive_build_conf(out_dir)
     ret = runner.run_tests()
     if not ret:
         if args.globalres_file:
-- 
2.6.6
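The `_makeResult()` override added at the end of the hunk above uses the standard `TextTestRunner` hook for substituting a custom result class. A minimal sketch of the same pattern outside the oeqa tree (the `Recording*` class names here are hypothetical stand-ins):

```python
import io
import unittest

class RecordingResult(unittest.TextTestResult):
    """Result object that additionally remembers an output directory."""
    def __init__(self, out_dir, stream, descriptions, verbosity):
        super().__init__(stream, descriptions, verbosity)
        self.out_dir = out_dir

class RecordingRunner(unittest.TextTestRunner):
    def __init__(self, out_dir, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.out_dir = out_dir

    def _makeResult(self):
        # TextTestRunner.run() calls this hook to create its result object
        return RecordingResult(self.out_dir, self.stream,
                               self.descriptions, self.verbosity)

runner = RecordingRunner('/tmp/results', stream=io.StringIO())
result = runner._makeResult()
print(result.out_dir)  # -> /tmp/results
```

This is how BuildPerfTestRunner hands its `out_dir` through to BuildPerfTestResult without changing the rest of the unittest machinery.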



^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 5/9] oeqa.buildperf: convert test cases to unittest
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
                   ` (3 preceding siblings ...)
  2016-08-12  9:11 ` [PATCH 4/9] oeqa.buildperf: add BuildPerfTestResult class Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 6/9] oe-build-perf-test: use new unittest based framework Markus Lehtonen
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

This commit converts the actual tests to be compatible with the new
Python unittest-based framework.

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 meta/lib/oeqa/buildperf/test_basic.py | 50 +++++++++++++----------------------
 1 file changed, 19 insertions(+), 31 deletions(-)

diff --git a/meta/lib/oeqa/buildperf/test_basic.py b/meta/lib/oeqa/buildperf/test_basic.py
index ada5aba..b8bec6d 100644
--- a/meta/lib/oeqa/buildperf/test_basic.py
+++ b/meta/lib/oeqa/buildperf/test_basic.py
@@ -13,17 +13,15 @@
 import os
 import shutil
 
-from . import BuildPerfTest, perf_test_case
+from oeqa.buildperf import BuildPerfTestCase
 from oeqa.utils.commands import get_bb_vars
 
 
-@perf_test_case
-class Test1P1(BuildPerfTest):
-    name = "test1"
+class Test1P1(BuildPerfTestCase):
     build_target = 'core-image-sato'
-    description = "Measure wall clock of bitbake {} and size of tmp dir".format(build_target)
 
-    def _run(self):
+    def test1(self):
+        """Measure wall clock of bitbake core-image-sato and size of tmp dir"""
         self.log_cmd_output("bitbake {} -c fetchall".format(self.build_target))
         self.rm_tmp()
         self.rm_sstate()
@@ -35,13 +33,11 @@ class Test1P1(BuildPerfTest):
         self.save_buildstats()
 
 
-@perf_test_case
-class Test1P2(BuildPerfTest):
-    name = "test12"
+class Test1P2(BuildPerfTestCase):
     build_target = 'virtual/kernel'
-    description = "Measure bitbake {}".format(build_target)
 
-    def _run(self):
+    def test12(self):
+        """Measure bitbake virtual/kernel"""
         self.log_cmd_output("bitbake {} -c cleansstate".format(
             self.build_target))
         self.sync()
@@ -49,13 +45,11 @@ class Test1P2(BuildPerfTest):
                                    'bitbake ' + self.build_target)
 
 
-@perf_test_case
-class Test1P3(BuildPerfTest):
-    name = "test13"
+class Test1P3(BuildPerfTestCase):
     build_target = 'core-image-sato'
-    description = "Build {} with rm_work enabled".format(build_target)
 
-    def _run(self):
+    def test13(self):
+        """Build core-image-sato with rm_work enabled"""
         postfile = os.path.join(self.out_dir, 'postfile.conf')
         with open(postfile, 'w') as fobj:
             fobj.write('INHERIT += "rm_work"\n')
@@ -73,13 +67,11 @@ class Test1P3(BuildPerfTest):
         self.save_buildstats()
 
 
-@perf_test_case
-class Test2(BuildPerfTest):
-    name = "test2"
+class Test2(BuildPerfTestCase):
     build_target = 'core-image-sato'
-    description = "Measure bitbake {} -c rootfs with sstate".format(build_target)
 
-    def _run(self):
+    def test2(self):
+        """Measure bitbake core-image-sato -c rootfs with sstate"""
         self.rm_tmp()
         self.rm_cache()
         self.sync()
@@ -87,12 +79,10 @@ class Test2(BuildPerfTest):
         self.measure_cmd_resources(cmd, 'do_rootfs', 'bitbake do_rootfs')
 
 
-@perf_test_case
-class Test3(BuildPerfTest):
-    name = "test3"
-    description = "Parsing time metrics (bitbake -p)"
+class Test3(BuildPerfTestCase):
 
-    def _run(self):
+    def test3(self):
+        """Parsing time metrics (bitbake -p)"""
         # Drop all caches and parse
         self.rm_cache()
         self.force_rm(os.path.join(self.bb_vars['TMPDIR'], 'cache'))
@@ -107,13 +97,11 @@ class Test3(BuildPerfTest):
                                    'bitbake -p (cached)')
 
 
-@perf_test_case
-class Test4(BuildPerfTest):
-    name = "test4"
+class Test4(BuildPerfTestCase):
     build_target = 'core-image-sato'
-    description = "eSDK metrics"
 
-    def _run(self):
+    def test4(self):
+        """eSDK metrics"""
         self.log_cmd_output("bitbake {} -c do_populate_sdk_ext".format(
             self.build_target))
         self.bb_vars = get_bb_vars(None, self.build_target)
-- 
2.6.6
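The conversion above relies on two stock unittest behaviours: methods named `test*` are discovered automatically, and the first line of a method's docstring becomes the test description (replacing the old `description` class attribute). A minimal sketch, with a stand-in class mirroring the converted ones but running no real build:

```python
import unittest

class Test1P2(unittest.TestCase):
    """Stand-in mirroring the converted class; no real build is run."""
    build_target = 'virtual/kernel'

    def test12(self):
        """Measure bitbake virtual/kernel"""
        pass  # the real test times the build here

# unittest uses the method docstring as the test description
test = Test1P2('test12')
print(test.shortDescription())  # -> Measure bitbake virtual/kernel
```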




* [PATCH 6/9] oe-build-perf-test: use new unittest based framework
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
                   ` (4 preceding siblings ...)
  2016-08-12  9:11 ` [PATCH 5/9] oeqa.buildperf: convert test cases to unittest Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 7/9] oeqa.buildperf: introduce runCmd2() Markus Lehtonen
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

Convert scripts/oe-build-perf-test to be compatible with the new Python
unittest-based buildperf test framework.

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 scripts/oe-build-perf-test | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/scripts/oe-build-perf-test b/scripts/oe-build-perf-test
index 8142b03..786c715 100755
--- a/scripts/oe-build-perf-test
+++ b/scripts/oe-build-perf-test
@@ -21,12 +21,15 @@ import logging
 import os
 import shutil
 import sys
+import unittest
 from datetime import datetime
 
 sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + '/lib')
 import scriptpath
 scriptpath.add_oe_lib_path()
-from oeqa.buildperf import BuildPerfTestRunner, KernelDropCaches
+import oeqa.buildperf
+from oeqa.buildperf import (BuildPerfTestLoader, BuildPerfTestResult,
+                            BuildPerfTestRunner, KernelDropCaches)
 from oeqa.utils.commands import runCmd
 
 
@@ -123,19 +126,23 @@ def main(argv=None):
     # Check our capability to drop caches and ask pass if needed
     KernelDropCaches.check()
 
+    # Load build perf tests
+    loader = BuildPerfTestLoader()
+    suite = loader.discover(start_dir=os.path.dirname(oeqa.buildperf.__file__))
     # Set-up log file
     out_dir = args.out_dir.format(date=datetime.now().strftime('%Y%m%d%H%M%S'))
     setup_file_logging(os.path.join(out_dir, 'output.log'))
 
     # Run actual tests
-    runner = BuildPerfTestRunner(out_dir)
     archive_build_conf(out_dir)
-    ret = runner.run_tests()
-    if not ret:
+    runner = BuildPerfTestRunner(out_dir, verbosity=2)
+    result = runner.run(suite)
+    if result.wasSuccessful():
         if args.globalres_file:
-            runner.update_globalres_file(args.globalres_file)
+            result.update_globalres_file(args.globalres_file)
+        return 0
 
-    return ret
+    return 1
 
 
 if __name__ == '__main__':
-- 
2.6.6
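The loader/runner wiring the patch introduces follows the standard unittest pattern: build a suite with a loader, run it, and derive the exit code from `wasSuccessful()`. A self-contained sketch of that flow (`ExampleTest` is a hypothetical stand-in for the discovered buildperf suite):

```python
import unittest

class ExampleTest(unittest.TestCase):  # hypothetical stand-in suite
    def test_b(self):
        self.assertTrue(True)

    def test_a(self):
        self.assertTrue(True)

loader = unittest.TestLoader()
loader.sortTestMethodsUsing = None  # disable extra sorting, as BuildPerfTestLoader does
suite = loader.loadTestsFromTestCase(ExampleTest)
runner = unittest.TextTestRunner(verbosity=2)
result = runner.run(suite)

# mirror the script's exit-code handling
ret = 0 if result.wasSuccessful() else 1
```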




* [PATCH 7/9] oeqa.buildperf: introduce runCmd2()
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
                   ` (5 preceding siblings ...)
  2016-08-12  9:11 ` [PATCH 6/9] oe-build-perf-test: use new unittest based framework Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 8/9] oe-build-perf-test: write logger output into file only Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 9/9] oeqa.buildperf: be more verbose about failed commands Markus Lehtonen
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

Introduce a special runCmd() variant for the build perf tests which does
not raise an AssertionError when a command fails. This makes command
failures show up as test errors instead of test failures, reserving the
"failed" state of tests for future use, e.g. for setting thresholds on
certain measurement results.

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 meta/lib/oeqa/buildperf/__init__.py |  3 ++-
 meta/lib/oeqa/buildperf/base.py     | 15 ++++++++++-----
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/meta/lib/oeqa/buildperf/__init__.py b/meta/lib/oeqa/buildperf/__init__.py
index 85abf3a..605f429 100644
--- a/meta/lib/oeqa/buildperf/__init__.py
+++ b/meta/lib/oeqa/buildperf/__init__.py
@@ -14,5 +14,6 @@ from .base import (BuildPerfTestCase,
                    BuildPerfTestLoader,
                    BuildPerfTestResult,
                    BuildPerfTestRunner,
-                   KernelDropCaches)
+                   KernelDropCaches,
+                   runCmd2)
 from .test_basic import *
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 9711a6a..7ea3183 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -21,6 +21,7 @@ import time
 import traceback
 import unittest
 from datetime import datetime, timedelta
+from functools import partial
 
 from oeqa.utils.commands import runCmd, get_bb_vars
 from oeqa.utils.git import GitError, GitRepo
@@ -28,6 +29,10 @@ from oeqa.utils.git import GitError, GitRepo
 # Get logger for this module
 log = logging.getLogger('build-perf')
 
+# Our own version of runCmd which does not raise an AssertionError, which
+# would cause errors to be interpreted as failures
+runCmd2 = partial(runCmd, assert_error=False)
+
 
 class KernelDropCaches(object):
     """Container of the functions for dropping kernel caches"""
@@ -39,7 +44,7 @@ class KernelDropCaches(object):
         from getpass import getpass
         from locale import getdefaultlocale
         cmd = ['sudo', '-k', '-n', 'tee', '/proc/sys/vm/drop_caches']
-        ret = runCmd(cmd, ignore_status=True, data=b'0')
+        ret = runCmd2(cmd, ignore_status=True, data=b'0')
         if ret.output.startswith('sudo:'):
             pass_str = getpass(
                 "\nThe script requires sudo access to drop caches between "
@@ -59,7 +64,7 @@ class KernelDropCaches(object):
             input_data = b''
         cmd += ['tee', '/proc/sys/vm/drop_caches']
         input_data += b'3'
-        runCmd(cmd, data=input_data)
+        runCmd2(cmd, data=input_data)
 
 
 def time_cmd(cmd, **kwargs):
@@ -71,7 +76,7 @@ def time_cmd(cmd, **kwargs):
         timecmd += cmd
         # TODO: 'ignore_status' could/should be removed when globalres.log is
         # deprecated. The function would just raise an exception, instead
-        ret = runCmd(timecmd, ignore_status=True, **kwargs)
+        ret = runCmd2(timecmd, ignore_status=True, **kwargs)
         timedata = tmpf.file.read()
     return ret, timedata
 
@@ -213,7 +218,7 @@ class BuildPerfTestCase(unittest.TestCase):
         """Run a command and log its output"""
         cmd_log = os.path.join(self.out_dir, 'commands.log')
         with open(cmd_log, 'a') as fobj:
-            runCmd(cmd, stdout=fobj)
+            runCmd2(cmd, stdout=fobj)
 
     def measure_cmd_resources(self, cmd, name, legend):
         """Measure system resource usage of a command"""
@@ -261,7 +266,7 @@ class BuildPerfTestCase(unittest.TestCase):
         """Estimate disk usage of a file or directory"""
         # TODO: 'ignore_status' could/should be removed when globalres.log is
         # deprecated. The function would just raise an exception, instead
-        ret = runCmd(['du', '-s', path], ignore_status=True)
+        ret = runCmd2(['du', '-s', path], ignore_status=True)
         if ret.status:
             log.error("du failed, disk usage will be reported as 0")
             size = 0
-- 
2.6.6




* [PATCH 8/9] oe-build-perf-test: write logger output into file only
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
                   ` (6 preceding siblings ...)
  2016-08-12  9:11 ` [PATCH 7/9] oeqa.buildperf: introduce runCmd2() Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  2016-08-12  9:11 ` [PATCH 9/9] oeqa.buildperf: be more verbose about failed commands Markus Lehtonen
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

Write the output from the Python logger only into the log file. This way
the console output from the script is cleaner and not mixed with the
logger records.

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 scripts/oe-build-perf-test | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/scripts/oe-build-perf-test b/scripts/oe-build-perf-test
index 786c715..e857ca6 100755
--- a/scripts/oe-build-perf-test
+++ b/scripts/oe-build-perf-test
@@ -33,10 +33,7 @@ from oeqa.buildperf import (BuildPerfTestLoader, BuildPerfTestResult,
 from oeqa.utils.commands import runCmd
 
 
-# Set-up logging
-LOG_FORMAT = '[%(asctime)s] %(levelname)s: %(message)s'
-logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
-log = logging.getLogger()
+log = None
 
 
 def acquire_lock(lock_f):
@@ -71,15 +68,18 @@ def pre_run_sanity_check():
     return True
 
 
-def setup_file_logging(log_file):
+def setup_logging(log_file):
     """Setup logging to file"""
+    global log
+
     log_dir = os.path.dirname(log_file)
     if not os.path.exists(log_dir):
         os.makedirs(log_dir)
-    formatter = logging.Formatter(LOG_FORMAT)
-    handler = logging.FileHandler(log_file)
-    handler.setFormatter(formatter)
-    log.addHandler(handler)
+
+    log_format = '[%(asctime)s] %(levelname)s: %(message)s'
+    logging.basicConfig(level=logging.INFO, filename=log_file,
+                        format=log_format)
+    log = logging.getLogger()
 
 
 def archive_build_conf(out_dir):
@@ -112,6 +112,10 @@ def main(argv=None):
     """Script entry point"""
     args = parse_args(argv)
 
+    # Set-up logging
+    out_dir = args.out_dir.format(date=datetime.now().strftime('%Y%m%d%H%M%S'))
+    setup_logging(os.path.join(out_dir, 'output.log'))
+
     if args.debug:
         log.setLevel(logging.DEBUG)
 
@@ -129,9 +133,6 @@ def main(argv=None):
     # Load build perf tests
     loader = BuildPerfTestLoader()
     suite = loader.discover(start_dir=os.path.dirname(oeqa.buildperf.__file__))
-    # Set-up log file
-    out_dir = args.out_dir.format(date=datetime.now().strftime('%Y%m%d%H%M%S'))
-    setup_file_logging(os.path.join(out_dir, 'output.log'))
 
     # Run actual tests
     archive_build_conf(out_dir)
-- 
2.6.6




* [PATCH 9/9] oeqa.buildperf: be more verbose about failed commands
  2016-08-12  9:11 [PATCH 0/9] oe-build-perf-test: use Python unittest framework Markus Lehtonen
                   ` (7 preceding siblings ...)
  2016-08-12  9:11 ` [PATCH 8/9] oe-build-perf-test: write logger output into file only Markus Lehtonen
@ 2016-08-12  9:11 ` Markus Lehtonen
  8 siblings, 0 replies; 10+ messages in thread
From: Markus Lehtonen @ 2016-08-12  9:11 UTC (permalink / raw)
  To: openembedded-core

Log failures of commands whose output is stored.

Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
 meta/lib/oeqa/buildperf/base.py | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 7ea3183..af169e1 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -23,7 +23,7 @@ import unittest
 from datetime import datetime, timedelta
 from functools import partial
 
-from oeqa.utils.commands import runCmd, get_bb_vars
+from oeqa.utils.commands import CommandError, runCmd, get_bb_vars
 from oeqa.utils.git import GitError, GitRepo
 
 # Get logger for this module
@@ -216,9 +216,15 @@ class BuildPerfTestCase(unittest.TestCase):
 
     def log_cmd_output(self, cmd):
         """Run a command and log its output"""
+        cmd_str = cmd if isinstance(cmd, str) else ' '.join(cmd)
+        log.info("Logging command: %s", cmd_str)
         cmd_log = os.path.join(self.out_dir, 'commands.log')
-        with open(cmd_log, 'a') as fobj:
-            runCmd2(cmd, stdout=fobj)
+        try:
+            with open(cmd_log, 'a') as fobj:
+                runCmd2(cmd, stdout=fobj)
+        except CommandError as err:
+            log.error("Command failed: %s", err.retcode)
+            raise
 
     def measure_cmd_resources(self, cmd, name, legend):
         """Measure system resource usage of a command"""
-- 
2.6.6
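The pattern above — record the command, capture its output to a log file, and log the return code before re-raising — can be sketched with the standard library standing in for oeqa's runCmd2 (subprocess is only a substitute here; the real code keeps runCmd2 and CommandError):

```python
import logging
import subprocess

log = logging.getLogger('build-perf')

def log_cmd_output(cmd, cmd_log):
    """Append a command's output to cmd_log, logging any failure."""
    cmd_str = cmd if isinstance(cmd, str) else ' '.join(cmd)
    log.info("Logging command: %s", cmd_str)
    with open(cmd_log, 'a') as fobj:
        try:
            subprocess.run(cmd, stdout=fobj, stderr=subprocess.STDOUT,
                           check=True)
        except subprocess.CalledProcessError as err:
            log.error("Command failed: %s", err.returncode)
            raise
```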


