* [PATCH 00/11] oeqa.buildperf: improve test report format
@ 2017-01-19 11:12 Markus Lehtonen
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
This patchset modifies and extends the formatting of the test report produced by
the oe-build-perf-test script. It is based on my earlier "oeqa.utils.metadata:
update xml schema (v2)" patchset, which should be merged first.
The changes in this patchset can be divided into three parts:
1. Enable XML reports, in a JUnit compatible format
2. Align the JSON report format with the XML schema
3. Store test environment metadata in a separate file (JSON or XML)
[YOCTO #10590]
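As a rough illustration of parts 1 and 2, the same test entry can be rendered in both aligned formats. The field names below are illustrative of the idea, not copied from the actual schema:

```python
import json
import xml.etree.ElementTree as ET

# One test result rendered in the two aligned report formats.
# Field names are illustrative, not copied from the patches.
result = {'name': 'test1', 'status': 'SUCCESS', 'time': '123.4'}

json_report = json.dumps({'tests': {result['name']: result}}, indent=2)

# The XML side carries the same fields as testcase attributes
xml_report = ET.tostring(ET.Element('testcase', attrib=result)).decode()
print(xml_report)
```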
The following changes since commit c8bd57d7cf127b5ee315597408940870882a10e0:
oeqa.utils.metadata: include BB_NUMBER_THREADS and PARALLEL_MAKE (2017-01-13 13:55:12 +0200)
are available in the git repository at:
git://git.openembedded.org/openembedded-core-contrib marquiz/buildperf/xml
http://git.openembedded.org/openembedded-core-contrib/log/?h=marquiz/buildperf/xml
Markus Lehtonen (11):
oeqa.buildperf: prevent a crash on unexpected success
oeqa.buildperf: sync test status names with JUnit
oeqa.buildperf: include error details in json report
oe-build-perf-test: enable xml reporting
oeqa.buildperf: extend xml format to contain measurement data
oeqa.buildperf: extend xml report format with test description
oeqa.buildperf: report results in chronological order
oe-build-perf-test: save test metadata in a separate file
oe-build-perf-test: remove unused imports and fix indent
oeqa.buildperf: change sorting in json report
oeqa.buildperf: store measurements as a dict (object) in the JSON
report
meta/lib/oeqa/buildperf/base.py | 318 ++++++++++++++++++----------------------
scripts/oe-build-perf-test | 125 +++++++++++++++-
2 files changed, 261 insertions(+), 182 deletions(-)
--
2.10.2
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 01/11] oeqa.buildperf: prevent a crash on unexpected success
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 59dd025..4955914 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -137,7 +137,7 @@ class BuildPerfTestResult(unittest.TextTestResult):
def addSuccess(self, test):
"""Record results from successful tests"""
super(BuildPerfTestResult, self).addSuccess(test)
- self.successes.append((test, None))
+ self.successes.append(test)
def startTest(self, test):
"""Pre-test hook"""
@@ -165,7 +165,10 @@ class BuildPerfTestResult(unittest.TextTestResult):
'SKIPPED': self.skipped}
for status, tests in result_map.items():
for test in tests:
- yield (status, test)
+ if isinstance(test, tuple):
+ yield (status, test)
+ else:
+ yield (status, (test, None))
def update_globalres_file(self, filename):
--
2.10.2
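The fix above can be exercised in isolation: unittest stores failures as (test, message) tuples, while the simplified addSuccess() appends bare test objects, so all_results() now normalizes both shapes. A standalone sketch with invented list contents:

```python
# Stand-in result lists: unittest stores failures as (test, message)
# tuples, but the patched addSuccess() appends bare test objects.
successes = ['test1']
failures = [('test2', 'boom')]

def all_results_entries(result_map):
    """Yield (status, (test, message)) uniformly for both list shapes."""
    for status, tests in result_map.items():
        for test in tests:
            if isinstance(test, tuple):
                yield (status, test)
            else:
                yield (status, (test, None))

entries = list(all_results_entries({'SUCCESS': successes,
                                    'FAILURE': failures}))
print(entries)
```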
* [PATCH 02/11] oeqa.buildperf: sync test status names with JUnit
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Use 'failure' instead of 'fail', and 'expected' instead of 'exp', so that the
status names match JUnit terminology.
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 4955914..71f3382 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -158,10 +158,10 @@ class BuildPerfTestResult(unittest.TextTestResult):
def all_results(self):
result_map = {'SUCCESS': self.successes,
- 'FAIL': self.failures,
+ 'FAILURE': self.failures,
'ERROR': self.errors,
- 'EXP_FAIL': self.expectedFailures,
- 'UNEXP_SUCCESS': self.unexpectedSuccesses,
+ 'EXPECTED_FAILURE': self.expectedFailures,
+ 'UNEXPECTED_SUCCESS': self.unexpectedSuccesses,
'SKIPPED': self.skipped}
for status, tests in result_map.items():
for test in tests:
--
2.10.2
* [PATCH 03/11] oeqa.buildperf: include error details in json report
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
The error details typically consist of the assertion message and the exception
type, plus a traceback. For skipped tests, the reason (i.e. the skip message)
is included.
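The new `err` attribute mirrors the (type, value, traceback) tuple that unittest hands to addError()/addFailure(). A sketch of how those three fields map onto the report entries (the failing check itself is invented for illustration):

```python
import sys
import traceback

def capture_err():
    """Run a failing check and capture details like addFailure() would."""
    try:
        raise AssertionError("math is broken")
    except AssertionError:
        return sys.exc_info()  # the (type, value, traceback) tuple

err = capture_err()
report_entry = {
    'message': str(err[1]),          # assertion message
    'err_type': err[0].__name__,     # exception class name
    'err_output': ''.join(traceback.format_exception(*err)),
}
print(report_entry['err_type'], '-', report_entry['message'])
```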
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 71f3382..668e822 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -139,6 +139,21 @@ class BuildPerfTestResult(unittest.TextTestResult):
super(BuildPerfTestResult, self).addSuccess(test)
self.successes.append(test)
+ def addError(self, test, err):
+ """Record results from crashed test"""
+ test.err = err
+ super(BuildPerfTestResult, self).addError(test, err)
+
+ def addFailure(self, test, err):
+ """Record results from failed test"""
+ test.err = err
+ super(BuildPerfTestResult, self).addFailure(test, err)
+
+ def addExpectedFailure(self, test, err):
+ """Record results from expectedly failed test"""
+ test.err = err
+ super(BuildPerfTestResult, self).addExpectedFailure(test, err)
+
def startTest(self, test):
"""Pre-test hook"""
test.base_dir = self.out_dir
@@ -226,6 +241,13 @@ class BuildPerfTestResult(unittest.TextTestResult):
'cmd_log_file': os.path.relpath(test.cmd_log_file,
self.out_dir),
'measurements': test.measurements}
+ if status in ('ERROR', 'FAILURE', 'EXPECTED_FAILURE'):
+ tests[test.name]['message'] = str(test.err[1])
+ tests[test.name]['err_type'] = test.err[0].__name__
+ tests[test.name]['err_output'] = reason
+ elif reason:
+ tests[test.name]['message'] = reason
+
results['tests'] = tests
with open(os.path.join(self.out_dir, 'results.json'), 'w') as fobj:
@@ -307,6 +329,8 @@ class BuildPerfTestCase(unittest.TestCase):
self.start_time = None
self.elapsed_time = None
self.measurements = []
+ # self.err is supposed to be a tuple from sys.exc_info()
+ self.err = None
self.bb_vars = get_bb_vars()
# TODO: remove 'times' and 'sizes' arrays when globalres support is
# removed
--
2.10.2
* [PATCH 04/11] oe-build-perf-test: enable xml reporting
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Add an --xml command line option to the oe-build-perf-test script for producing
a test report in JUnit XML format instead of JSON.
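A minimal standalone sketch of the ElementTree-plus-minidom approach the patch uses to produce pretty-printed JUnit-style XML (the suite and case attribute values here are invented):

```python
import xml.etree.ElementTree as ET
from xml.dom import minidom

# Build a tiny testsuites document the way write_results_xml() does
top = ET.Element('testsuites')
suite = ET.SubElement(top, 'testsuite')
suite.set('name', 'oeqa.buildperf')
suite.set('tests', '1')
case = ET.SubElement(suite, 'testcase')
case.set('classname', 'oeqa.buildperf.test_basic.BuildPerfTest')
case.set('name', 'test1')
case.set('time', '123.4')

# minidom is only used for pretty-printing the serialized tree
pretty = minidom.parseString(ET.tostring(top, 'utf-8')).toprettyxml(indent='  ')
print(pretty)
```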
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 43 ++++++++++++++++++++++++++++++++++++++++-
scripts/oe-build-perf-test | 6 ++++++
2 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 668e822..de0ee40 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -21,10 +21,12 @@ import socket
import time
import traceback
import unittest
+import xml.etree.ElementTree as ET
from datetime import datetime, timedelta
from functools import partial
from multiprocessing import Process
from multiprocessing import SimpleQueue
+from xml.dom import minidom
import oe.path
from oeqa.utils.commands import CommandError, runCmd, get_bb_vars
@@ -169,7 +171,6 @@ class BuildPerfTestResult(unittest.TextTestResult):
def stopTestRun(self):
"""Pre-run hook"""
self.elapsed_time = datetime.utcnow() - self.start_time
- self.write_results_json()
def all_results(self):
result_map = {'SUCCESS': self.successes,
@@ -254,6 +255,46 @@ class BuildPerfTestResult(unittest.TextTestResult):
json.dump(results, fobj, indent=4, sort_keys=True,
cls=ResultsJsonEncoder)
+ def write_results_xml(self):
+ """Write test results into a JUnit XML file"""
+ top = ET.Element('testsuites')
+ suite = ET.SubElement(top, 'testsuite')
+ suite.set('name', 'oeqa.buildperf')
+ suite.set('timestamp', self.start_time.isoformat())
+ suite.set('time', str(self.elapsed_time.total_seconds()))
+ suite.set('hostname', self.hostname)
+ suite.set('failures', str(len(self.failures) + len(self.expectedFailures)))
+ suite.set('errors', str(len(self.errors)))
+ suite.set('skipped', str(len(self.skipped)))
+
+ test_cnt = 0
+ for status, (test, reason) in self.all_results():
+ testcase = ET.SubElement(suite, 'testcase')
+ testcase.set('classname', test.__module__ + '.' + test.__class__.__name__)
+ testcase.set('name', test.name)
+ testcase.set('timestamp', test.start_time.isoformat())
+ testcase.set('time', str(test.elapsed_time.total_seconds()))
+ if status in ('ERROR', 'FAILURE', 'EXP_FAILURE'):
+ if status in ('FAILURE', 'EXP_FAILURE'):
+ result = ET.SubElement(testcase, 'failure')
+ else:
+ result = ET.SubElement(testcase, 'error')
+ result.set('message', str(test.err[1]))
+ result.set('type', test.err[0].__name__)
+ result.text = reason
+ elif status == 'SKIPPED':
+ result = ET.SubElement(testcase, 'skipped')
+ result.text = reason
+ elif status not in ('SUCCESS', 'UNEXPECTED_SUCCESS'):
+ raise TypeError("BUG: invalid test status '%s'" % status)
+ test_cnt += 1
+ suite.set('tests', str(test_cnt))
+
+ # Use minidom for pretty-printing
+ dom_doc = minidom.parseString(ET.tostring(top, 'utf-8'))
+ with open(os.path.join(self.out_dir, 'results.xml'), 'w') as fobj:
+ dom_doc.writexml(fobj, addindent=' ', newl='\n', encoding='utf-8')
+ return
def git_commit_results(self, repo_path, branch=None, tag=None):
"""Commit results into a Git repository"""
diff --git a/scripts/oe-build-perf-test b/scripts/oe-build-perf-test
index 638e195..4ec9f14 100755
--- a/scripts/oe-build-perf-test
+++ b/scripts/oe-build-perf-test
@@ -131,6 +131,8 @@ def parse_args(argv):
parser.add_argument('-o', '--out-dir', default='results-{date}',
type=os.path.abspath,
help="Output directory for test results")
+ parser.add_argument('-x', '--xml', action='store_true',
+ help='Enable JUnit xml output')
parser.add_argument('--log-file',
default='{out_dir}/oe-build-perf-test.log',
help="Log file of this script")
@@ -194,6 +196,10 @@ def main(argv=None):
# Restore logger output to stderr
log.handlers[0].setLevel(log.level)
+ if args.xml:
+ result.write_results_xml()
+ else:
+ result.write_results_json()
if args.globalres_file:
result.update_globalres_file(args.globalres_file)
if args.commit_results:
--
2.10.2
* [PATCH 05/11] oeqa.buildperf: extend xml format to contain measurement data
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Make the xml report format slightly non-standard by incorporating
measurement data into it.
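The measurement data becomes child elements of <testcase>, which is the non-standard part. A sketch of building one SYSRES-style measurement element (the measurement name, legend, and rusage values are invented):

```python
import xml.etree.ElementTree as ET

# A SYSRES-style measurement becomes a child element of <testcase>
testcase = ET.Element('testcase', name='test1')
measurement = ET.SubElement(testcase, 'sysres', name='build',
                            legend='bitbake core-image-minimal')
rusage_vals = {'ru_utime': 12.3, 'ru_stime': 4.5}
# XML attribute values must be strings
attrib = dict((k, str(v)) for k, v in rusage_vals.items())
ET.SubElement(measurement, 'rusage', attrib=attrib)
xml_out = ET.tostring(testcase).decode()
print(xml_out)
```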
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index de0ee40..efbe20c 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -269,6 +269,7 @@ class BuildPerfTestResult(unittest.TextTestResult):
test_cnt = 0
for status, (test, reason) in self.all_results():
+ test_cnt += 1
testcase = ET.SubElement(suite, 'testcase')
testcase.set('classname', test.__module__ + '.' + test.__class__.__name__)
testcase.set('name', test.name)
@@ -287,7 +288,27 @@ class BuildPerfTestResult(unittest.TextTestResult):
result.text = reason
elif status not in ('SUCCESS', 'UNEXPECTED_SUCCESS'):
raise TypeError("BUG: invalid test status '%s'" % status)
- test_cnt += 1
+
+ for data in test.measurements:
+ measurement = ET.SubElement(testcase, data['type'])
+ measurement.set('name', data['name'])
+ measurement.set('legend', data['legend'])
+ vals = data['values']
+ if data['type'] == BuildPerfTestCase.SYSRES:
+ ET.SubElement(measurement, 'time',
+ timestamp=vals['start_time'].isoformat()).text = \
+ str(vals['elapsed_time'].total_seconds())
+ if 'buildstats_file' in vals:
+ ET.SubElement(measurement, 'buildstats_file').text = vals['buildstats_file']
+ attrib = dict((k, str(v)) for k, v in vals['iostat'].items())
+ ET.SubElement(measurement, 'iostat', attrib=attrib)
+ attrib = dict((k, str(v)) for k, v in vals['rusage'].items())
+ ET.SubElement(measurement, 'rusage', attrib=attrib)
+ elif data['type'] == BuildPerfTestCase.DISKUSAGE:
+ ET.SubElement(measurement, 'size').text = str(vals['size'])
+ else:
+ raise TypeError('BUG: unsupported measurement type')
+
suite.set('tests', str(test_cnt))
# Use minidom for pretty-printing
--
2.10.2
* [PATCH 06/11] oeqa.buildperf: extend xml report format with test description
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Add test description as an attribute to the <testcase> element.
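The shortDescription() override in the diff matters because unittest returns None for tests without a docstring, while ElementTree attribute values must be strings. A standalone sketch (the test case and method names are invented):

```python
import unittest

class DemoTest(unittest.TestCase):
    """Stand-in test case; only shortDescription() matters here."""

    def test_doc(self):
        """First docstring line becomes the description"""

    def test_nodoc(self):
        pass

    def shortDescription(self):
        # The patch's override: map None to "" so the value can be
        # stored as an XML attribute, which must be a string
        return super(DemoTest, self).shortDescription() or ""

desc_doc = DemoTest('test_doc').shortDescription()
desc_nodoc = DemoTest('test_nodoc').shortDescription()
print(repr(desc_doc), repr(desc_nodoc))
```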
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index efbe20c..b82476c 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -273,6 +273,7 @@ class BuildPerfTestResult(unittest.TextTestResult):
testcase = ET.SubElement(suite, 'testcase')
testcase.set('classname', test.__module__ + '.' + test.__class__.__name__)
testcase.set('name', test.name)
+ testcase.set('description', test.shortDescription())
testcase.set('timestamp', test.start_time.isoformat())
testcase.set('time', str(test.elapsed_time.total_seconds()))
if status in ('ERROR', 'FAILURE', 'EXP_FAILURE'):
@@ -407,6 +408,9 @@ class BuildPerfTestCase(unittest.TestCase):
def cmd_log_file(self):
return os.path.join(self.out_dir, 'commands.log')
+ def shortDescription(self):
+ return super(BuildPerfTestCase, self).shortDescription() or ""
+
def setUp(self):
"""Set-up fixture for each test"""
if self.build_target:
--
2.10.2
* [PATCH 07/11] oeqa.buildperf: report results in chronological order
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Write results to the report file in chronological order, instead of an
arbitrary order dependent on test statuses.
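The reworked all_results() flattens the per-status lists into (status, test, message) triplets and sorts by each test's start time. A self-contained sketch with stand-in test objects:

```python
from collections import namedtuple
from datetime import datetime

Test = namedtuple('Test', ['name', 'start_time'])

# Stand-in result lists: successes hold bare tests, the others hold
# (test, message) tuples, as in BuildPerfTestResult
successes = [Test('test3', datetime(2017, 1, 19, 11, 30))]
failures = [(Test('test1', datetime(2017, 1, 19, 11, 12)), 'boom')]

compound = [('SUCCESS', t, None) for t in successes] + \
           [('FAILURE', t, m) for t, m in failures]
ordered = sorted(compound, key=lambda info: info[1].start_time)
names = [t.name for _, t, _ in ordered]
print(names)
```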
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index b82476c..92f3e45 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -173,18 +173,13 @@ class BuildPerfTestResult(unittest.TextTestResult):
self.elapsed_time = datetime.utcnow() - self.start_time
def all_results(self):
- result_map = {'SUCCESS': self.successes,
- 'FAILURE': self.failures,
- 'ERROR': self.errors,
- 'EXPECTED_FAILURE': self.expectedFailures,
- 'UNEXPECTED_SUCCESS': self.unexpectedSuccesses,
- 'SKIPPED': self.skipped}
- for status, tests in result_map.items():
- for test in tests:
- if isinstance(test, tuple):
- yield (status, test)
- else:
- yield (status, (test, None))
+ compound = [('SUCCESS', t, None) for t in self.successes] + \
+ [('FAILURE', t, m) for t, m in self.failures] + \
+ [('ERROR', t, m) for t, m in self.errors] + \
+ [('EXPECTED_FAILURE', t, m) for t, m in self.expectedFailures] + \
+ [('UNEXPECTED_SUCCESS', t, None) for t in self.unexpectedSuccesses] + \
+ [('SKIPPED', t, m) for t, m in self.skipped]
+ return sorted(compound, key=lambda info: info[1].start_time)
def update_globalres_file(self, filename):
@@ -205,7 +200,7 @@ class BuildPerfTestResult(unittest.TextTestResult):
git_tag_rev = self.git_commit
values = ['0'] * 12
- for status, (test, msg) in self.all_results():
+ for status, test, _ in self.all_results():
if status in ['ERROR', 'SKIPPED']:
continue
(t_ind, t_len), (s_ind, s_len) = gr_map[test.name]
@@ -233,7 +228,7 @@ class BuildPerfTestResult(unittest.TextTestResult):
'elapsed_time': self.elapsed_time}
tests = {}
- for status, (test, reason) in self.all_results():
+ for status, test, reason in self.all_results():
tests[test.name] = {'name': test.name,
'description': test.shortDescription(),
'status': status,
@@ -268,7 +263,7 @@ class BuildPerfTestResult(unittest.TextTestResult):
suite.set('skipped', str(len(self.skipped)))
test_cnt = 0
- for status, (test, reason) in self.all_results():
+ for status, test, reason in self.all_results():
test_cnt += 1
testcase = ET.SubElement(suite, 'testcase')
testcase.set('classname', test.__module__ + '.' + test.__class__.__name__)
--
2.10.2
* [PATCH 08/11] oe-build-perf-test: save test metadata in a separate file
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
The patch introduces a new metadata file (.json or .xml) in the output
directory. All test metadata, e.g. Git revision information and tester
host information, is now stored there. The JSON report format changes
slightly as the metadata is no longer present in results.json.
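A sketch of the JSON branch of the new metadata writing, using a stand-in for the dict returned by oeqa.utils.metadata.metadata_from_bb() (the keys mirror those the script reads; the values are invented):

```python
import json
import os
import tempfile

# Stand-in for the dict returned by metadata_from_bb()
metadata = {
    'hostname': 'builder1',
    'layers': {'meta': {'branch': 'master',
                        'commit': 'c8bd57d',
                        'commit_count': '47000'}},
}

out_dir = tempfile.mkdtemp()
with open(os.path.join(out_dir, 'metadata.json'), 'w') as fobj:
    json.dump(metadata, fobj, indent=2)

# Round-trip to show the file is self-contained
with open(os.path.join(out_dir, 'metadata.json')) as fobj:
    loaded = json.load(fobj)
print(loaded['layers']['meta']['branch'])
```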
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 129 ----------------------------------------
scripts/oe-build-perf-test | 118 ++++++++++++++++++++++++++++++++++--
2 files changed, 114 insertions(+), 133 deletions(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 92f3e45..4027fdb 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -101,40 +101,10 @@ class BuildPerfTestResult(unittest.TextTestResult):
super(BuildPerfTestResult, self).__init__(*args, **kwargs)
self.out_dir = out_dir
- # Get Git parameters
- try:
- self.repo = GitRepo('.')
- except GitError:
- self.repo = None
- self.git_commit, self.git_commit_count, self.git_branch = \
- self.get_git_revision()
self.hostname = socket.gethostname()
self.product = os.getenv('OE_BUILDPERFTEST_PRODUCT', 'oe-core')
self.start_time = self.elapsed_time = None
self.successes = []
- log.info("Using Git branch:commit %s:%s (%s)", self.git_branch,
- self.git_commit, self.git_commit_count)
-
- def get_git_revision(self):
- """Get git branch and commit under testing"""
- commit = os.getenv('OE_BUILDPERFTEST_GIT_COMMIT')
- commit_cnt = os.getenv('OE_BUILDPERFTEST_GIT_COMMIT_COUNT')
- branch = os.getenv('OE_BUILDPERFTEST_GIT_BRANCH')
- if not self.repo and (not commit or not commit_cnt or not branch):
- log.info("The current working directory doesn't seem to be a Git "
- "repository clone. You can specify branch and commit "
- "displayed in test results with OE_BUILDPERFTEST_GIT_BRANCH, "
- "OE_BUILDPERFTEST_GIT_COMMIT and "
- "OE_BUILDPERFTEST_GIT_COMMIT_COUNT environment variables")
- else:
- if not commit:
- commit = self.repo.rev_parse('HEAD^0')
- commit_cnt = self.repo.run_cmd(['rev-list', '--count', 'HEAD^0'])
- if not branch:
- branch = self.repo.get_current_branch()
- if not branch:
- log.debug('Currently on detached HEAD')
- return str(commit), str(commit_cnt), str(branch)
def addSuccess(self, test):
"""Record results from successful tests"""
@@ -182,48 +152,9 @@ class BuildPerfTestResult(unittest.TextTestResult):
return sorted(compound, key=lambda info: info[1].start_time)
- def update_globalres_file(self, filename):
- """Write results to globalres csv file"""
- # Map test names to time and size columns in globalres
- # The tuples represent index and length of times and sizes
- # respectively
- gr_map = {'test1': ((0, 1), (8, 1)),
- 'test12': ((1, 1), (None, None)),
- 'test13': ((2, 1), (9, 1)),
- 'test2': ((3, 1), (None, None)),
- 'test3': ((4, 3), (None, None)),
- 'test4': ((7, 1), (10, 2))}
-
- if self.repo:
- git_tag_rev = self.repo.run_cmd(['describe', self.git_commit])
- else:
- git_tag_rev = self.git_commit
-
- values = ['0'] * 12
- for status, test, _ in self.all_results():
- if status in ['ERROR', 'SKIPPED']:
- continue
- (t_ind, t_len), (s_ind, s_len) = gr_map[test.name]
- if t_ind is not None:
- values[t_ind:t_ind + t_len] = test.times
- if s_ind is not None:
- values[s_ind:s_ind + s_len] = test.sizes
-
- log.debug("Writing globalres log to %s", filename)
- with open(filename, 'a') as fobj:
- fobj.write('{},{}:{},{},'.format(self.hostname,
- self.git_branch,
- self.git_commit,
- git_tag_rev))
- fobj.write(','.join(values) + '\n')
-
def write_results_json(self):
"""Write test results into a json-formatted file"""
results = {'tester_host': self.hostname,
- 'git_branch': self.git_branch,
- 'git_commit': self.git_commit,
- 'git_commit_count': self.git_commit_count,
- 'product': self.product,
'start_time': self.start_time,
'elapsed_time': self.elapsed_time}
@@ -313,66 +244,6 @@ class BuildPerfTestResult(unittest.TextTestResult):
dom_doc.writexml(fobj, addindent=' ', newl='\n', encoding='utf-8')
return
- def git_commit_results(self, repo_path, branch=None, tag=None):
- """Commit results into a Git repository"""
- repo = GitRepo(repo_path, is_topdir=True)
- if not branch:
- branch = self.git_branch
- else:
- # Replace keywords
- branch = branch.format(git_branch=self.git_branch,
- tester_host=self.hostname)
-
- log.info("Committing test results into %s %s", repo_path, branch)
- tmp_index = os.path.join(repo_path, '.git', 'index.oe-build-perf')
- try:
- # Create new commit object from the new results
- env_update = {'GIT_INDEX_FILE': tmp_index,
- 'GIT_WORK_TREE': self.out_dir}
- repo.run_cmd('add .', env_update)
- tree = repo.run_cmd('write-tree', env_update)
- parent = repo.rev_parse(branch)
- msg = "Results of {}:{}\n".format(self.git_branch, self.git_commit)
- git_cmd = ['commit-tree', tree, '-m', msg]
- if parent:
- git_cmd += ['-p', parent]
- commit = repo.run_cmd(git_cmd, env_update)
-
- # Update branch head
- git_cmd = ['update-ref', 'refs/heads/' + branch, commit]
- if parent:
- git_cmd.append(parent)
- repo.run_cmd(git_cmd)
-
- # Update current HEAD, if we're on branch 'branch'
- if repo.get_current_branch() == branch:
- log.info("Updating %s HEAD to latest commit", repo_path)
- repo.run_cmd('reset --hard')
-
- # Create (annotated) tag
- if tag:
- # Find tags matching the pattern
- tag_keywords = dict(git_branch=self.git_branch,
- git_commit=self.git_commit,
- git_commit_count=self.git_commit_count,
- tester_host=self.hostname,
- tag_num='[0-9]{1,5}')
- tag_re = re.compile(tag.format(**tag_keywords) + '$')
- tag_keywords['tag_num'] = 0
- for existing_tag in repo.run_cmd('tag').splitlines():
- if tag_re.match(existing_tag):
- tag_keywords['tag_num'] += 1
-
- tag = tag.format(**tag_keywords)
- msg = "Test run #{} of {}:{}\n".format(tag_keywords['tag_num'],
- self.git_branch,
- self.git_commit)
- repo.run_cmd(['tag', '-a', '-m', msg, tag, commit])
-
- finally:
- if os.path.exists(tmp_index):
- os.unlink(tmp_index)
-
class BuildPerfTestCase(unittest.TestCase):
"""Base class for build performance tests"""
diff --git a/scripts/oe-build-perf-test b/scripts/oe-build-perf-test
index 4ec9f14..fc4ab31 100755
--- a/scripts/oe-build-perf-test
+++ b/scripts/oe-build-perf-test
@@ -17,8 +17,10 @@
import argparse
import errno
import fcntl
+import json
import logging
import os
+import re
import shutil
import sys
import unittest
@@ -27,11 +29,13 @@ from datetime import datetime
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + '/lib')
import scriptpath
scriptpath.add_oe_lib_path()
+scriptpath.add_bitbake_lib_path()
import oeqa.buildperf
from oeqa.buildperf import (BuildPerfTestLoader, BuildPerfTestResult,
BuildPerfTestRunner, KernelDropCaches)
from oeqa.utils.commands import runCmd
from oeqa.utils.git import GitRepo, GitError
+from oeqa.utils.metadata import metadata_from_bb, write_metadata_file
# Set-up logging
@@ -115,6 +119,100 @@ def archive_build_conf(out_dir):
shutil.copytree(src_dir, tgt_dir)
+def git_commit_results(repo_dir, results_dir, branch, tag, metadata):
+ """Commit results into a Git repository"""
+ repo = GitRepo(repo_dir, is_topdir=True)
+ distro_branch = metadata['layers']['meta']['branch']
+ distro_commit = metadata['layers']['meta']['commit']
+ distro_commit_count = metadata['layers']['meta']['commit_count']
+
+ # Replace keywords
+ branch = branch.format(git_branch=distro_branch,
+ tester_host=metadata['hostname'])
+
+ log.info("Committing test results into %s %s", repo_dir, branch)
+ tmp_index = os.path.join(repo_dir, '.git', 'index.oe-build-perf')
+ try:
+ # Create new commit object from the new results
+ env_update = {'GIT_INDEX_FILE': tmp_index,
+ 'GIT_WORK_TREE': results_dir}
+ repo.run_cmd('add .', env_update)
+ tree = repo.run_cmd('write-tree', env_update)
+ parent = repo.rev_parse(branch)
+ msg = "Results of {}:{}\n".format(distro_branch, distro_commit)
+ git_cmd = ['commit-tree', tree, '-m', msg]
+ if parent:
+ git_cmd += ['-p', parent]
+ commit = repo.run_cmd(git_cmd, env_update)
+
+ # Update branch head
+ git_cmd = ['update-ref', 'refs/heads/' + branch, commit]
+ if parent:
+ git_cmd.append(parent)
+ repo.run_cmd(git_cmd)
+
+ # Update current HEAD, if we're on branch 'branch'
+ if repo.get_current_branch() == branch:
+ log.info("Updating %s HEAD to latest commit", repo_dir)
+ repo.run_cmd('reset --hard')
+
+ # Create (annotated) tag
+ if tag:
+ # Find tags matching the pattern
+ tag_keywords = dict(git_branch=distro_branch,
+ git_commit=distro_commit,
+ git_commit_count=distro_commit_count,
+ tester_host=metadata['hostname'],
+ tag_num='[0-9]{1,5}')
+ tag_re = re.compile(tag.format(**tag_keywords) + '$')
+ tag_keywords['tag_num'] = 0
+ for existing_tag in repo.run_cmd('tag').splitlines():
+ if tag_re.match(existing_tag):
+ tag_keywords['tag_num'] += 1
+
+ tag = tag.format(**tag_keywords)
+ msg = "Test run #{} of {}:{}\n".format(tag_keywords['tag_num'],
+ distro_branch,
+ distro_commit)
+ repo.run_cmd(['tag', '-a', '-m', msg, tag, commit])
+
+ finally:
+ if os.path.exists(tmp_index):
+ os.unlink(tmp_index)
+
+
+def update_globalres_file(result_obj, filename, metadata):
+ """Write results to globalres csv file"""
+ # Map test names to time and size columns in globalres
+ # The tuples represent index and length of times and sizes
+ # respectively
+ gr_map = {'test1': ((0, 1), (8, 1)),
+ 'test12': ((1, 1), (None, None)),
+ 'test13': ((2, 1), (9, 1)),
+ 'test2': ((3, 1), (None, None)),
+ 'test3': ((4, 3), (None, None)),
+ 'test4': ((7, 1), (10, 2))}
+
+ values = ['0'] * 12
+ for status, test, _ in result_obj.all_results():
+ if status in ['ERROR', 'SKIPPED']:
+ continue
+ (t_ind, t_len), (s_ind, s_len) = gr_map[test.name]
+ if t_ind is not None:
+ values[t_ind:t_ind + t_len] = test.times
+ if s_ind is not None:
+ values[s_ind:s_ind + s_len] = test.sizes
+
+ log.debug("Writing globalres log to %s", filename)
+ rev_info = metadata['layers']['meta']
+ with open(filename, 'a') as fobj:
+ fobj.write('{},{}:{},{},'.format(metadata['hostname'],
+ rev_info['branch'],
+ rev_info['commit'],
+ rev_info['commit']))
+ fobj.write(','.join(values) + '\n')
+
+
def parse_args(argv):
"""Parse command line arguments"""
parser = argparse.ArgumentParser(
@@ -183,7 +281,19 @@ def main(argv=None):
else:
suite = loader.loadTestsFromModule(oeqa.buildperf)
+ # Save test metadata
+ metadata = metadata_from_bb()
+ log.info("Testing Git revision branch:commit %s:%s (%s)",
+ metadata['layers']['meta']['branch'],
+ metadata['layers']['meta']['commit'],
+ metadata['layers']['meta']['commit_count'])
+ if args.xml:
+ write_metadata_file(os.path.join(out_dir, 'metadata.xml'), metadata)
+ else:
+ with open(os.path.join(out_dir, 'metadata.json'), 'w') as fobj:
+ json.dump(metadata, fobj, indent=2)
archive_build_conf(out_dir)
+
runner = BuildPerfTestRunner(out_dir, verbosity=2)
# Suppress logger output to stderr so that the output from unittest
@@ -201,11 +311,11 @@ def main(argv=None):
else:
result.write_results_json()
if args.globalres_file:
- result.update_globalres_file(args.globalres_file)
+ update_globalres_file(result, args.globalres_file, metadata)
if args.commit_results:
- result.git_commit_results(args.commit_results,
- args.commit_results_branch,
- args.commit_results_tag)
+ git_commit_results(args.commit_results, out_dir,
+ args.commit_results_branch, args.commit_results_tag,
+ metadata)
if result.wasSuccessful():
return 0
--
2.10.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH 09/11] oe-build-perf-test: remove unused imports and fix indent
2017-01-19 11:12 [PATCH 00/11] oeqa.buildperf: improve test report format Markus Lehtonen
` (7 preceding siblings ...)
2017-01-19 11:12 ` [PATCH 08/11] oe-build-perf-test: save test metadata in a separate file Markus Lehtonen
@ 2017-01-19 11:12 ` Markus Lehtonen
2017-01-19 11:12 ` [PATCH 10/11] oeqa.buildperf: change sorting in json report Markus Lehtonen
2017-01-19 11:12 ` [PATCH 11/11] oeqa.buildperf: store measurements as a dict (object) in the JSON report Markus Lehtonen
10 siblings, 0 replies; 12+ messages in thread
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 7 ++-----
scripts/oe-build-perf-test | 1 -
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 4027fdb..f6faedb 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -10,16 +10,13 @@
# more details.
#
"""Build performance test base classes and functionality"""
-import glob
import json
import logging
import os
import re
import resource
-import shutil
import socket
import time
-import traceback
import unittest
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta
@@ -506,5 +503,5 @@ class BuildPerfTestRunner(unittest.TextTestRunner):
self.out_dir = out_dir
def _makeResult(self):
- return BuildPerfTestResult(self.out_dir, self.stream, self.descriptions,
- self.verbosity)
+ return BuildPerfTestResult(self.out_dir, self.stream, self.descriptions,
+ self.verbosity)
diff --git a/scripts/oe-build-perf-test b/scripts/oe-build-perf-test
index fc4ab31..f3867ab 100755
--- a/scripts/oe-build-perf-test
+++ b/scripts/oe-build-perf-test
@@ -23,7 +23,6 @@ import os
import re
import shutil
import sys
-import unittest
from datetime import datetime
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + '/lib')
--
2.10.2
* [PATCH 10/11] oeqa.buildperf: change sorting in json report
2017-01-19 11:12 [PATCH 00/11] oeqa.buildperf: improve test report format Markus Lehtonen
` (8 preceding siblings ...)
2017-01-19 11:12 ` [PATCH 09/11] oe-build-perf-test: remove unused imports and fix indent Markus Lehtonen
@ 2017-01-19 11:12 ` Markus Lehtonen
2017-01-19 11:12 ` [PATCH 11/11] oeqa.buildperf: store measurements as a dict (object) in the JSON report Markus Lehtonen
10 siblings, 0 replies; 12+ messages in thread
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Use OrderedDict() instead of json.dump()'s sort_keys=True. This makes for
a more logical ordering of the values in the report.
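Not part of the patch itself — a minimal sketch of the difference the change makes. With sort_keys=True the top-level keys come out alphabetically, while an OrderedDict is serialized in insertion order, so e.g. 'tester_host' can lead the report (the field names and values below are illustrative, not taken from a real report):

```python
import json
from collections import OrderedDict

# Report fields in the order the test runner produces them.
results = OrderedDict([('tester_host', 'buildhost'),
                       ('start_time', 1484824320),
                       ('elapsed_time', 42.0)])

# sort_keys=True re-sorts alphabetically: elapsed_time, start_time, tester_host
sorted_out = json.dumps(results, sort_keys=True)

# Without it, OrderedDict insertion order is preserved: tester_host first
ordered_out = json.dumps(results)
```

Plain dicts also preserve insertion order on Python 3.7+, but OrderedDict made the intent explicit on the interpreters of the time.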
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 63 +++++++++++++++++++++--------------------
1 file changed, 32 insertions(+), 31 deletions(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index f6faedb..28c3e29 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -19,6 +19,7 @@ import socket
import time
import unittest
import xml.etree.ElementTree as ET
+from collections import OrderedDict
from datetime import datetime, timedelta
from functools import partial
from multiprocessing import Process
@@ -151,31 +152,31 @@ class BuildPerfTestResult(unittest.TextTestResult):
def write_results_json(self):
"""Write test results into a json-formatted file"""
- results = {'tester_host': self.hostname,
- 'start_time': self.start_time,
- 'elapsed_time': self.elapsed_time}
+ results = OrderedDict([('tester_host', self.hostname),
+ ('start_time', self.start_time),
+ ('elapsed_time', self.elapsed_time),
+ ('tests', OrderedDict())])
- tests = {}
for status, test, reason in self.all_results():
- tests[test.name] = {'name': test.name,
- 'description': test.shortDescription(),
- 'status': status,
- 'start_time': test.start_time,
- 'elapsed_time': test.elapsed_time,
- 'cmd_log_file': os.path.relpath(test.cmd_log_file,
- self.out_dir),
- 'measurements': test.measurements}
+ test_result = OrderedDict([('name', test.name),
+ ('description', test.shortDescription()),
+ ('status', status),
+ ('start_time', test.start_time),
+ ('elapsed_time', test.elapsed_time),
+ ('cmd_log_file', os.path.relpath(test.cmd_log_file,
+ self.out_dir)),
+ ('measurements', test.measurements)])
if status in ('ERROR', 'FAILURE', 'EXPECTED_FAILURE'):
- tests[test.name]['message'] = str(test.err[1])
- tests[test.name]['err_type'] = test.err[0].__name__
- tests[test.name]['err_output'] = reason
+ test_result['message'] = str(test.err[1])
+ test_result['err_type'] = test.err[0].__name__
+ test_result['err_output'] = reason
elif reason:
- tests[test.name]['message'] = reason
+ test_result['message'] = reason
- results['tests'] = tests
+ results['tests'][test.name] = test_result
with open(os.path.join(self.out_dir, 'results.json'), 'w') as fobj:
- json.dump(results, fobj, indent=4, sort_keys=True,
+ json.dump(results, fobj, indent=4,
cls=ResultsJsonEncoder)
def write_results_xml(self):
@@ -306,12 +307,12 @@ class BuildPerfTestCase(unittest.TestCase):
ret = runCmd2(cmd, **kwargs)
etime = datetime.now() - start_time
rusage_struct = resource.getrusage(resource.RUSAGE_CHILDREN)
- iostat = {}
+ iostat = OrderedDict()
with open('/proc/{}/io'.format(os.getpid())) as fobj:
for line in fobj.readlines():
key, val = line.split(':')
iostat[key] = int(val)
- rusage = {}
+ rusage = OrderedDict()
# Skip unused fields, (i.e. 'ru_ixrss', 'ru_idrss', 'ru_isrss',
# 'ru_nswap', 'ru_msgsnd', 'ru_msgrcv' and 'ru_nsignals')
for key in ['ru_utime', 'ru_stime', 'ru_maxrss', 'ru_minflt',
@@ -344,13 +345,13 @@ class BuildPerfTestCase(unittest.TestCase):
raise
etime = data['elapsed_time']
- measurement = {'type': self.SYSRES,
- 'name': name,
- 'legend': legend}
- measurement['values'] = {'start_time': data['start_time'],
- 'elapsed_time': etime,
- 'rusage': data['rusage'],
- 'iostat': data['iostat']}
+ measurement = OrderedDict([('type', self.SYSRES),
+ ('name', name),
+ ('legend', legend)])
+ measurement['values'] = OrderedDict([('start_time', data['start_time']),
+ ('elapsed_time', etime),
+ ('rusage', data['rusage']),
+ ('iostat', data['iostat'])])
if save_bs:
bs_file = self.save_buildstats(legend)
measurement['values']['buildstats_file'] = \
@@ -374,10 +375,10 @@ class BuildPerfTestCase(unittest.TestCase):
ret = runCmd2(cmd)
size = int(ret.output.split()[0])
log.debug("Size of %s path is %s", path, size)
- measurement = {'type': self.DISKUSAGE,
- 'name': name,
- 'legend': legend}
- measurement['values'] = {'size': size}
+ measurement = OrderedDict([('type', self.DISKUSAGE),
+ ('name', name),
+ ('legend', legend)])
+ measurement['values'] = OrderedDict([('size', size)])
self.measurements.append(measurement)
# Append to 'sizes' array for globalres log
self.sizes.append(str(size))
--
2.10.2
* [PATCH 11/11] oeqa.buildperf: store measurements as a dict (object) in the JSON report
2017-01-19 11:12 [PATCH 00/11] oeqa.buildperf: improve test report format Markus Lehtonen
` (9 preceding siblings ...)
2017-01-19 11:12 ` [PATCH 10/11] oeqa.buildperf: change sorting in json report Markus Lehtonen
@ 2017-01-19 11:12 ` Markus Lehtonen
10 siblings, 0 replies; 12+ messages in thread
From: Markus Lehtonen @ 2017-01-19 11:12 UTC (permalink / raw)
To: openembedded-core
Store measurements as a dict, instead of an array, in the JSON report.
This change makes traversing the report much easier. It also disallows
identically named measurements under one test, serving as a sanity check
for the test cases.
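A standalone sketch of the duplicate-name guard this patch adds via `_append_measurement()` (the measurement values below are made up for illustration):

```python
from collections import OrderedDict

# Measurements keyed by name, as in the JSON report after this patch.
measurements = OrderedDict()

def append_measurement(measurement):
    """Add a measurement, rejecting a second one with the same name."""
    if measurement['name'] in measurements:
        raise ValueError('BUG: two measurements with the same name')
    measurements[measurement['name']] = measurement

append_measurement({'type': 'sysres', 'name': 'build',
                    'legend': 'example build'})
try:
    # A second measurement named 'build' would silently shadow the first
    # in a dict, so it is rejected outright.
    append_measurement({'type': 'diskusage', 'name': 'build',
                        'legend': 'example size'})
    duplicate_rejected = False
except ValueError:
    duplicate_rejected = True
```

Keying by name means `report['tests'][name]['measurements'][m_name]` lookups replace linear scans over an array, at the cost of requiring unique names — which the guard enforces.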
[YOCTO #10590]
Signed-off-by: Markus Lehtonen <markus.lehtonen@linux.intel.com>
---
meta/lib/oeqa/buildperf/base.py | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/meta/lib/oeqa/buildperf/base.py b/meta/lib/oeqa/buildperf/base.py
index 28c3e29..975524c 100644
--- a/meta/lib/oeqa/buildperf/base.py
+++ b/meta/lib/oeqa/buildperf/base.py
@@ -214,7 +214,7 @@ class BuildPerfTestResult(unittest.TextTestResult):
elif status not in ('SUCCESS', 'UNEXPECTED_SUCCESS'):
raise TypeError("BUG: invalid test status '%s'" % status)
- for data in test.measurements:
+ for data in test.measurements.values():
measurement = ET.SubElement(testcase, data['type'])
measurement.set('name', data['name'])
measurement.set('legend', data['legend'])
@@ -255,7 +255,7 @@ class BuildPerfTestCase(unittest.TestCase):
self.base_dir = None
self.start_time = None
self.elapsed_time = None
- self.measurements = []
+ self.measurements = OrderedDict()
# self.err is supposed to be a tuple from sys.exc_info()
self.err = None
self.bb_vars = get_bb_vars()
@@ -298,6 +298,13 @@ class BuildPerfTestCase(unittest.TestCase):
log.error("Command failed: %s", err.retcode)
raise
+ def _append_measurement(self, measurement):
+ """Simple helper for adding measurements results"""
+ if measurement['name'] in self.measurements:
+ raise ValueError('BUG: two measurements with the same name in {}'.format(
+ self.__class__.__name__))
+ self.measurements[measurement['name']] = measurement
+
def measure_cmd_resources(self, cmd, name, legend, save_bs=False):
"""Measure system resource usage of a command"""
def _worker(data_q, cmd, **kwargs):
@@ -357,7 +364,7 @@ class BuildPerfTestCase(unittest.TestCase):
measurement['values']['buildstats_file'] = \
os.path.relpath(bs_file, self.base_dir)
- self.measurements.append(measurement)
+ self._append_measurement(measurement)
# Append to 'times' array for globalres log
e_sec = etime.total_seconds()
@@ -379,7 +386,7 @@ class BuildPerfTestCase(unittest.TestCase):
('name', name),
('legend', legend)])
measurement['values'] = OrderedDict([('size', size)])
- self.measurements.append(measurement)
+ self._append_measurement(measurement)
# Append to 'sizes' array for globalres log
self.sizes.append(str(size))
--
2.10.2