* [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego
@ 2018-12-17  1:53 Liu Wenlong
  2018-12-17  1:53 ` [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests Liu Wenlong
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Liu Wenlong @ 2018-12-17  1:53 UTC (permalink / raw)
  To: fuego

Please review and comment on the patches below.
It's okay if they cannot be merged before the 1.4 release.

Details of each commit:
The first one,
- parser: add a generic HTML table generator for Benchmark tests.
Adds another HTML table display option for Benchmark tests.

The second one,
- common: add support the test tarball from URL.
Adds support for fetching test source code from a URL.

Liu Wenlong (2):
  parser: add a generic HTML table generator for Benchmark tests
  common: add support the test tarball from URL

 engine/scripts/functions.sh                 |  38 ++++++-
 engine/scripts/parser/common.py             |  56 +++++-----
 engine/scripts/parser/prepare_chart_data.py | 161 +++++++++++++++++++++++++++-
 3 files changed, 228 insertions(+), 27 deletions(-)

-- 
2.7.4





* [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests
  2018-12-17  1:53 [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego Liu Wenlong
@ 2018-12-17  1:53 ` Liu Wenlong
  2019-01-15 21:51   ` Tim.Bird
  2018-12-17  1:53 ` [Fuego] [PATCH RFC 2/2] common: add support the test tarball from URL Liu Wenlong
  2018-12-18  2:10 ` [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego Tim.Bird
  2 siblings, 1 reply; 11+ messages in thread
From: Liu Wenlong @ 2018-12-17  1:53 UTC (permalink / raw)
  To: fuego

It can be helpful to display the results of Benchmark tests
in an HTML table.
Results for different boards, specs, and kernel versions are
placed in separate HTML tables.

Currently, Fuego supports only the 3 chart types below:
- "measure_plot"             (for Benchmark tests)
- "testcase_table"           (for Functional tests)
- "testset_summary_table"    (for Functional tests)

So, add a new HTML table type designed for Benchmark tests:
- "measure_table"

Signed-off-by: Liu Wenlong <liuwl.fnst@cn.fujitsu.com>
---
 engine/scripts/mod.js                       |   2 +-
 engine/scripts/parser/common.py             |  56 +++++-----
 engine/scripts/parser/prepare_chart_data.py | 161 +++++++++++++++++++++++++++-
 3 files changed, 193 insertions(+), 26 deletions(-)

diff --git a/engine/scripts/mod.js b/engine/scripts/mod.js
index d9074cc..96d35de 100644
--- a/engine/scripts/mod.js
+++ b/engine/scripts/mod.js
@@ -359,7 +359,7 @@ function do_all_charts(series) {
             new_plot = plot_one_chart(chart, i);
             plots.push(new_plot);
         }
-        else if (chart_type == "testcase_table" || chart_type == "testset_summary_table" ) {
+        else if (chart_type == "testcase_table" || chart_type == "testset_summary_table" || chart_type == "measure_table") {
             // create all html elements
             jQuery('.plots').append(
                 '<div class="container">' +
diff --git a/engine/scripts/parser/common.py b/engine/scripts/parser/common.py
index 99bacfc..8ef9568 100644
--- a/engine/scripts/parser/common.py
+++ b/engine/scripts/parser/common.py
@@ -209,6 +209,37 @@ def get_criterion(tguid, criteria_data, default_criterion=None):
             criterion = crit
     return criterion
 
+def data_compare(value, ref_value, op):
+    try:
+        if op == 'lt':
+            result = float(value) < float(ref_value)
+        elif op == 'le':
+            result = float(value) <= float(ref_value)
+        elif op == 'gt':
+            result = float(value) > float(ref_value)
+        elif op == 'ge':
+            result = float(value) >= float(ref_value)
+        elif op == 'eq':
+            result = float(value) == float(ref_value)
+        elif op == 'ne':
+            result = float(value) != float(ref_value)
+        elif op == 'bt':
+            ref_low, ref_high = ref_value.split(',', 1)
+            result = float(value) >= float(ref_low) and float(value) <= float(ref_high)
+        else:
+            return "ERROR"
+    except:
+        return "SKIP"
+
+    if result:
+        status = "PASS"
+    else:
+        status = "FAIL"
+
+    dprint("  result=%s" % result)
+    dprint("  status=%s" % status)
+    return status
+
 def check_measure(tguid, criteria_data, measure):
     dprint("in check_measure")
     value = measure.get('measure', None)
@@ -247,30 +278,7 @@ def check_measure(tguid, criteria_data, measure):
         eprint("criteria (%s) missing reference operator - returning SKIP" % criterion)
         return 'SKIP'
 
-    if op == 'lt':
-        result = value < float(ref_value)
-    elif op == 'le':
-        result = value <= float(ref_value)
-    elif op == 'gt':
-        result = value > float(ref_value)
-    elif op == 'ge':
-        result = value >= float(ref_value)
-    elif op == 'eq':
-        result = value == float(ref_value)
-    elif op == 'ne':
-        result = value != float(ref_value)
-    elif op == 'bt':
-        ref_low, ref_high = ref_value.split(',', 1)
-        result = value >= float(ref_low) and value <= float(ref_high)
-
-    if result:
-        status = "PASS"
-    else:
-        status = "FAIL"
-
-    dprint("  result=%s" % result)
-    dprint("  status=%s" % status)
-    return status
+    return data_compare(value, ref_value, op)
 
 def decide_status(tguid, criteria_data, child_pass_list, child_fail_list):
     dprint("in decide_status:")
diff --git a/engine/scripts/parser/prepare_chart_data.py b/engine/scripts/parser/prepare_chart_data.py
index 00fdc80..b6dd50e 100644
--- a/engine/scripts/parser/prepare_chart_data.py
+++ b/engine/scripts/parser/prepare_chart_data.py
@@ -36,7 +36,7 @@ import sys, os, re, json, collections
 from filelock import FileLock
 from operator import itemgetter
 from fuego_parser_utils import split_test_id, get_test_case
-from common import dprint, vprint, iprint, wprint, eprint
+from common import dprint, vprint, iprint, wprint, eprint, data_compare
 
 # board testname spec build_number timestamp kernel tguid ref result
 #  0       1       2     3            4        5      6    7    8
@@ -413,6 +413,163 @@ def make_measure_plots(test_name, chart_config, entries):
         chart_list.append(chart)
     return chart_list
 
+def make_measure_tables(test_name, chart_config, entries):
+    # make a table of testcase results for every testcase
+    chart_list = []
+    # the value of 'JENKINS_URL' is "http://localhost:8080/fuego/", which is not what we want.
+    jenkins_url_prefix = "/fuego"
+
+    # get a list of (board, test specs) in the data
+    # FIXTHIS - use list of test sets in chart_config, if present
+    bsp_map = {}
+    for entry in entries:
+        bsp_key = entry.board + "." + entry.spec + "." + entry.kernel
+        bsp_map[bsp_key] = ((entry.board, entry.spec, entry.kernel))
+    bsp_list = bsp_map.values()
+
+    # now make a chart for each one:
+    for board, spec, kver in bsp_list:
+        # create a series for each combination of board,spec,test,kernel,tguid
+        dprint("Making a chart for board: %s, test spec: %s, kernel: %s" \
+               % (board, spec, kver))
+        series_list = []
+        title = "%s-%s-%s (%s)" % (board, test_name, spec, kver)
+
+        # get list of test cases for this board and test spec
+        tc_entries = []
+        for entry in entries:
+            if entry.board == board and entry.spec == spec and \
+               entry.kernel == kver and entry.op != "ERROR-undefined":
+                tc_entries.append(entry)
+
+        # determine how many build numbers are represented in the data
+        # and prepare to count the values in each one
+        # offsets in the count array are:
+        #   0 = PASS, 1 = FAIL, 2 = SKIP, 3 = ERR
+        build_num_map = {}
+        for entry in tc_entries:
+            build_num_map[entry.build_number] = [0,0,0,0]
+
+        # gather the data for each row
+        result_map = {}
+        for entry in tc_entries:
+            row_key = entry.tguid
+
+            dprint("row_key=%s" % row_key)
+            if row_key not in result_map:
+                dprint("making a new row for '%s'" % row_key)
+                result_map[row_key] = {}
+
+            # add a data point (result) for this entry
+            result_map[row_key][entry.build_number] = entry.result,entry.op,entry.ref
+            # count the result
+            result = data_compare(entry.result, entry.ref, entry.op)
+            if result=="PASS":
+                build_num_map[entry.build_number][0] += 1
+            elif result=="FAIL":
+                build_num_map[entry.build_number][1] += 1
+            elif result=="ERROR":
+                build_num_map[entry.build_number][2] += 1
+            else:
+                build_num_map[entry.build_number][3] += 1
+
+        bn_list = build_num_map.keys()
+        bn_list.sort()
+
+        # FIXTHIS - should read col_limit from chart_config
+        col_limit = 10
+        col_list = bn_list[-col_limit:]
+        bn_col_count = len(col_list)
+
+        # OK, now build the table
+        html = '<table border="1" cellspacing="0">' + \
+            '<tr style="background-color:#cccccc">' + \
+            '<th colspan="' + str(bn_col_count+2) + '" align="left">' + \
+            'board: ' + board + '<br>' + \
+            'test spec: ' + spec + '<br>' + \
+            'kernel: ' + kver + '<br>' + \
+            '</th></tr>' + \
+            '<tr style="background-color:#cccccc">' + \
+            '<th rowspan="3" align="left">measure item</th>' + \
+            '<th rowspan="3" align="left">test set</th>' + \
+            '<th align="center" colspan="' + str(bn_col_count) + '">results</th>' + \
+            '</tr>' + \
+            '<tr style="background-color:#cccccc">' + \
+            '<th align="center" colspan="' + str(bn_col_count) + '">build_number</th>' + \
+            '</tr>'
+
+
+        row = '<tr style="background-color:#cccccc">'
+        for bn in col_list:
+            row += '<th>' + str(bn) + '</th>'
+        row += '</tr>'
+        html += row
+
+        # one row per test case
+        tg_list = result_map.keys()
+        tg_list.sort(cmp_alpha_num)
+
+        for tg in tg_list:
+            # break apart tguid(tc) and divide into test set and test case
+            parts = tg.split(".")
+            ts = parts[0]
+            tc = ".".join(parts[1:])
+
+            # FIXTHIS: add a column for the unit of each measure item
+            row_tc_head = '<tr><td>' + tc + '</td><td>' + ts + '</td>'
+            row_ref_head = '<tr><td>' + tc + '(ref)</td><td>' + ts + '</td>'
+            result = \
+            row_tc = \
+            row_ref = ""
+            for bn in col_list:
+                try:
+                    value,op,ref = result_map[tg][bn]
+                except:
+                    value = ""
+                result = data_compare(value, ref, op)
+                if result=="PASS":
+                    cell_attr = 'style="background-color:#ccffcc" align=\"center\"'
+                elif result=="FAIL":
+                    cell_attr = 'style="background-color:#ffcccc" align=\"center\"'
+                else:
+                    cell_attr = 'align="center"'
+                    value='-'
+
+                row_tc += ("<td %s>" % cell_attr) + value + "</td>"
+                row_ref += "<td align=\"center\">" + op  + " " + ref + "</td>"
+            row_tail = '</tr>'
+
+            # add a new line for each testcase
+            html += row_tc_head + row_tc + row_tail
+            # add a new line for the reference data of each testcase
+            html += row_ref_head + row_ref + row_tail
+
+        # now add the totals to the bottom of the table
+        row = '<tr style="background-color:#cccccc"><th colspan="' + str(bn_col_count+2) + '" align="center">Totals</th></tr>'
+        html += row
+
+        summary_str = ["pass","fail","skip","error"]
+        for i in range(4):
+            row = '<tr><th colspan="2" align="left">' + summary_str[i] + '</th>'
+            for bn in col_list:
+                try:
+                    result = build_num_map[bn][i]
+                except:
+                    result = ""
+                row += "<td>" + str(result) + "</td>"
+            row += '</tr>'
+            html += row
+        html += '</table>'
+        dprint("HTML for this table is: '%s'" % html)
+
+        chart = {
+                    "title": title,
+                    "chart_type": "measure_table",
+                    "data": html
+                }
+        chart_list.append(chart)
+    return chart_list
+
 # define a comparison function for strings that might end with numbers
 # like "test1, test2, ... test10"
 # if items end in digits, and the leading strings are the same, then
@@ -769,6 +926,8 @@ def make_chart_data(test_logdir, TESTDIR, chart_config_filename, data_lines):
     # make the requested charts
     if chart_type=="measure_plot":
         chart_list = make_measure_plots(test_name, chart_config, entries)
+    elif chart_type=="measure_table":
+        chart_list = make_measure_tables(test_name, chart_config, entries)
     elif chart_type=="testcase_table":
         chart_list = make_testcase_table(test_name, chart_config, entries)
     elif chart_type=="testset_summary_table":
-- 
2.7.4





* [Fuego] [PATCH RFC 2/2] common: add support the test tarball from URL
  2018-12-17  1:53 [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego Liu Wenlong
  2018-12-17  1:53 ` [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests Liu Wenlong
@ 2018-12-17  1:53 ` Liu Wenlong
  2019-01-15 22:24   ` Tim.Bird
  2018-12-18  2:10 ` [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego Tim.Bird
  2 siblings, 1 reply; 11+ messages in thread
From: Liu Wenlong @ 2018-12-17  1:53 UTC (permalink / raw)
  To: fuego

Currently, Fuego supports getting test source code from the following:
- a git repository
- a local tarball

Now, add support for fetching source code from a URL.
With this feature, Fuego will be able to drop the heavy test
tarballs/packages from the upstream repo in the future.
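
For example, a test could then point at an upstream release instead of
shipping the file (a sketch assuming the usual fuego_test.sh convention
of setting the tarball variable that untar receives; the URL here is
illustrative only):

    # fuego_test.sh (hypothetical fragment)
    tarball=https://example.org/releases/foo-1.0.tar.gz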

Signed-off-by: Liu Wenlong <liuwl.fnst@cn.fujitsu.com>
---
 engine/scripts/functions.sh | 38 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/engine/scripts/functions.sh b/engine/scripts/functions.sh
index b0cdb0e..6052fbe 100755
--- a/engine/scripts/functions.sh
+++ b/engine/scripts/functions.sh
@@ -79,6 +79,25 @@ function git_clone {
     git checkout $gitref
 }
 
+# FIXTHIS: add support for other protocols, e.g. ftp
+# Download a tarball from an upstream location
+# $1 (tarball_url): the upstream location of the tarball
+# $2 (tarball_des): where to store the downloaded file
+function download_tarball {
+    local tarball_url=${1}
+    local tarball_des=${2}
+
+    is_empty "$tarball_url"
+    is_empty "$tarball_des"
+
+    echo "Downloading $tarball to $tarball_des"
+    wget $tarball_url -O $tarball_des &> /dev/null
+    if [ $? -ne 0 ]; then
+        abort_job "Downloading $tarball_url failed."
+        return 1
+    fi
+}
+
 # Untars a tarball in the current folder
 # $1 (tarball): file to untar
 function untar {
@@ -86,6 +105,17 @@ function untar {
 
     is_empty "$tarball"
 
+    # Check if it is an upstream tarball.
+    if [[ $tarball == http* ]]; then
+        upName=`echo "${TESTDIR^^}"| tr '.' '_'`
+        md5sum_value=$(echo ${tarball} | md5sum)
+        md5sum_value=${md5sum_value:0:7}
+        tarball_dest=${FUEGO_RW}/buildzone/${upName}-${md5sum_value}-$(basename ${tarball})
+        [[ -f ${tarball_dest} ]] && echo "Already downloaded, skip download..." \
+                                 || download_tarball ${tarball} ${tarball_dest}
+        tarball=${tarball_dest}
+    fi
+
     echo "Unpacking $tarball"
     case ${tarball/*./} in
         gz|tgz) key=z ;;
@@ -93,10 +123,14 @@ function untar {
         tar) key= ;;
         *) echo "Unknown $tarball file format. Not unpacking."; return 1;;
     esac
-    tar ${key}xf $TEST_HOME/$tarball --strip-components=1
+    if ! is_abs_path $tarball; then
+        tarball=$TEST_HOME/$tarball
+    fi
+
+    tar ${key}xf $tarball --strip-components=1
 
     # record md5sum for possible source code updates
-    md5sum $TEST_HOME/$tarball > fuego_tarball_src_md5sum
+    md5sum $tarball > fuego_tarball_src_md5sum
 }
 
 # Unpacks/clones the test source code into the current directory.
-- 
2.7.4





* Re: [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego
  2018-12-17  1:53 [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego Liu Wenlong
  2018-12-17  1:53 ` [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests Liu Wenlong
  2018-12-17  1:53 ` [Fuego] [PATCH RFC 2/2] common: add support the test tarball from URL Liu Wenlong
@ 2018-12-18  2:10 ` Tim.Bird
  2 siblings, 0 replies; 11+ messages in thread
From: Tim.Bird @ 2018-12-18  2:10 UTC (permalink / raw)
  To: liuwl.fnst, fuego

These both sound like great features.  And, from a quick look at the patches,
they don't seem too intrusive.  However, I'm going to hold off on adding them
until after the 1.4 release (which I really, really hope to finish by the end of the
year).
 -- Tim


> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org [mailto:fuego-
> bounces@lists.linuxfoundation.org] On Behalf Of Liu Wenlong
> Sent: Sunday, December 16, 2018 5:53 PM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] [PATCH RFC 0/2] Add some useful features for Fuego
> 
> Please review and comment on the patches below.
> It's okay if they cannot be merged before the 1.4 release.
> 
> Details of each commit:
> The first one,
> - parser: add a generic HTML table generator for Benchmark tests.
> Adds another HTML table display option for Benchmark tests.
> 
> The second one,
> - common: add support the test tarball from URL.
> Adds support for fetching test source code from a URL.
> 
> Liu Wenlong (2):
>   parser: add a generic HTML table generator for Benchmark tests
>   common: add support the test tarball from URL
> 
>  engine/scripts/functions.sh                 |  38 ++++++-
>  engine/scripts/parser/common.py             |  56 +++++-----
>  engine/scripts/parser/prepare_chart_data.py | 161 +++++++++++++++++++++++++++-
>  3 files changed, 228 insertions(+), 27 deletions(-)
> 
> --
> 2.7.4
> 
> 
> 
> _______________________________________________
> Fuego mailing list
> Fuego@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/fuego


* Re: [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests
  2018-12-17  1:53 ` [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests Liu Wenlong
@ 2019-01-15 21:51   ` Tim.Bird
  2019-01-15 23:36     ` Tim.Bird
  2019-01-15 23:41     ` Tim.Bird
  0 siblings, 2 replies; 11+ messages in thread
From: Tim.Bird @ 2019-01-15 21:51 UTC (permalink / raw)
  To: liuwl.fnst, fuego

Comments inline below.

> -----Original Message-----
> From: Liu Wenlong
> 
> It can be helpful to display the results of Benchmark tests
> in an HTML table.
> Results for different boards, specs, and kernel versions are
> placed in separate HTML tables.
> 
> Currently, Fuego supports only the 3 chart types below:
> - "measure_plot"             (for Benchmark tests)
> - "testcase_table"           (for Functional tests)
> - "testset_summary_table"    (for Functional tests)
> 
> So, add a new HTML table type designed for Benchmark tests:
> - "measure_table"

Do you have an example of a test that is good to use with this chart type?

> 
> Signed-off-by: Liu Wenlong <liuwl.fnst@cn.fujitsu.com>
> ---
>  engine/scripts/mod.js                       |   2 +-
>  engine/scripts/parser/common.py             |  56 +++++-----
>  engine/scripts/parser/prepare_chart_data.py | 161 +++++++++++++++++++++++++++-
>  3 files changed, 193 insertions(+), 26 deletions(-)
> 
> diff --git a/engine/scripts/mod.js b/engine/scripts/mod.js
> index d9074cc..96d35de 100644
> --- a/engine/scripts/mod.js
> +++ b/engine/scripts/mod.js
> @@ -359,7 +359,7 @@ function do_all_charts(series) {
>              new_plot = plot_one_chart(chart, i);
>              plots.push(new_plot);
>          }
> -        else if (chart_type == "testcase_table" || chart_type ==
> "testset_summary_table" ) {
> +        else if (chart_type == "testcase_table" || chart_type ==
> "testset_summary_table" || chart_type == "measure_table") {
This line is long, but that's cosmetic.  I may fix this (or not).

>              // create all html elements
>              jQuery('.plots').append(
>                  '<div class="container">' +
> diff --git a/engine/scripts/parser/common.py
> b/engine/scripts/parser/common.py
> index 99bacfc..8ef9568 100644
> --- a/engine/scripts/parser/common.py
> +++ b/engine/scripts/parser/common.py
> @@ -209,6 +209,37 @@ def get_criterion(tguid, criteria_data,
> default_criterion=None):
>              criterion = crit
>      return criterion
> 
> +def data_compare(value, ref_value, op):
> +    try:
> +        if op == 'lt':
> +            result = float(value) < float(ref_value)
> +        elif op == 'le':
> +            result = float(value) <= float(ref_value)
> +        elif op == 'gt':
> +            result = float(value) > float(ref_value)
> +        elif op == 'ge':
> +            result = float(value) >= float(ref_value)
> +        elif op == 'eq':
> +            result = float(value) == float(ref_value)
> +        elif op == 'ne':
> +            result = float(value) != float(ref_value)
> +        elif op == 'bt':
> +            ref_low, ref_high = ref_value.split(',', 1)
> +            result = float(value) >= float(ref_low) and float(value) <=
> float(ref_high)
> +        else:
> +            return "ERROR"
> +    except:
> +        return "SKIP"
> +
> +    if result:
> +        status = "PASS"
> +    else:
> +        status = "FAIL"
> +
> +    dprint("  result=%s" % result)
> +    dprint("  status=%s" % status)
> +    return status
> +
>  def check_measure(tguid, criteria_data, measure):
>      dprint("in check_measure")
>      value = measure.get('measure', None)
> @@ -247,30 +278,7 @@ def check_measure(tguid, criteria_data, measure):
>          eprint("criteria (%s) missing reference operator - returning SKIP" %
> criterion)
>          return 'SKIP'
> 
> -    if op == 'lt':
> -        result = value < float(ref_value)
> -    elif op == 'le':
> -        result = value <= float(ref_value)
> -    elif op == 'gt':
> -        result = value > float(ref_value)
> -    elif op == 'ge':
> -        result = value >= float(ref_value)
> -    elif op == 'eq':
> -        result = value == float(ref_value)
> -    elif op == 'ne':
> -        result = value != float(ref_value)
> -    elif op == 'bt':
> -        ref_low, ref_high = ref_value.split(',', 1)
> -        result = value >= float(ref_low) and value <= float(ref_high)
> -
> -    if result:
> -        status = "PASS"
> -    else:
> -        status = "FAIL"
> -
> -    dprint("  result=%s" % result)
> -    dprint("  status=%s" % status)
> -    return status
> +    return data_compare(value, ref_value, op)
> 
>  def decide_status(tguid, criteria_data, child_pass_list, child_fail_list):
>      dprint("in decide_status:")
> diff --git a/engine/scripts/parser/prepare_chart_data.py
> b/engine/scripts/parser/prepare_chart_data.py
> index 00fdc80..b6dd50e 100644
> --- a/engine/scripts/parser/prepare_chart_data.py
> +++ b/engine/scripts/parser/prepare_chart_data.py
> @@ -36,7 +36,7 @@ import sys, os, re, json, collections
>  from filelock import FileLock
>  from operator import itemgetter
>  from fuego_parser_utils import split_test_id, get_test_case
> -from common import dprint, vprint, iprint, wprint, eprint
> +from common import dprint, vprint, iprint, wprint, eprint, data_compare
> 
>  # board testname spec build_number timestamp kernel tguid ref result
>  #  0       1       2     3            4        5      6    7    8
> @@ -413,6 +413,163 @@ def make_measure_plots(test_name,
> chart_config, entries):
>          chart_list.append(chart)
>      return chart_list
> 
> +def make_measure_tables(test_name, chart_config, entries):
> +    # make a table of testcase results for every testcase
> +    chart_list = []
> +    # the value of 'JENKINS_URL' is "http://localhost:8080/fuego/", which is not what we want.
> +    jenkins_url_prefix = "/fuego"
This could be a bit fragile, but I think we'll worry about that if we ever
change our default Jenkins setup.  It would be better to read this
from fuego.conf, in case people are using Fuego with their own
version of jenkins.

> +
> +    # get a list of (board, test specs) in the data
> +    # FIXTHIS - use list of test sets in chart_config, if present
> +    bsp_map = {}
> +    for entry in entries:
> +        bsp_key = entry.board + "." + entry.spec + "." + entry.kernel
> +        bsp_map[bsp_key] = ((entry.board, entry.spec, entry.kernel))
> +    bsp_list = bsp_map.values()
> +
> +    # now make a chart for each one:
> +    for board, spec, kver in bsp_list:
> +        # create a series for each combination of board,spec,test,kernel,tguid
> +        dprint("Making a chart for board: %s, test spec: %s, kernel: %s" \
> +               % (board, spec, kver))
> +        series_list = []
> +        title = "%s-%s-%s (%s)" % (board, test_name, spec, kver)
> +
> +        # get list of test cases for this board and test spec
> +        tc_entries = []
> +        for entry in entries:
> +            if entry.board == board and entry.spec == spec and \
> +               entry.kernel == kver and entry.op != "ERROR-undefined":
> +                tc_entries.append(entry)
> +
> +        # determine how many build numbers are represented in the data
> +        # and prepare to count the values in each one
> +        # offsets in the count array are:
> +        #   0 = PASS, 1 = FAIL, 2 = SKIP, 3 = ERR
> +        build_num_map = {}
> +        for entry in tc_entries:
> +            build_num_map[entry.build_number] = [0,0,0,0]
> +
> +        # gather the data for each row
> +        result_map = {}
> +        for entry in tc_entries:
> +            row_key = entry.tguid
> +
> +            dprint("row_key=%s" % row_key)
> +            if row_key not in result_map:
> +                dprint("making a new row for '%s'" % row_key)
> +                result_map[row_key] = {}
> +
> +            # add a data point (result) for this entry
> +            result_map[row_key][entry.build_number] =
> entry.result,entry.op,entry.ref
> +            # count the result
> +            result = data_compare(entry.result, entry.ref, entry.op)
> +            if result=="PASS":
> +                build_num_map[entry.build_number][0] += 1
> +            elif result=="FAIL":
> +                build_num_map[entry.build_number][1] += 1
> +            elif result=="ERROR":
> +                build_num_map[entry.build_number][2] += 1
> +            else:
> +                build_num_map[entry.build_number][3] += 1
> +
> +        bn_list = build_num_map.keys()
> +        bn_list.sort()
> +
> +        # FIXTHIS - should read col_limit from chart_config
> +        col_limit = 10
> +        col_list = bn_list[-col_limit:]
> +        bn_col_count = len(col_list)
> +
> +        # OK, now build the table
> +        html = '<table border="1" cellspacing="0">' + \
> +            '<tr style="background-color:#cccccc">' + \
> +            '<th colspan="' + str(bn_col_count+2) + '" align="left">' + \
> +            'board: ' + board + '<br>' + \
> +            'test spec: ' + spec + '<br>' + \
> +            'kernel: ' + kver + '<br>' + \
> +            '</th></tr>' + \
> +            '<tr style="background-color:#cccccc">' + \
> +            '<th rowspan="3" align="left">measure item</th>' + \
> +            '<th rowspan="3" align="left">test set</th>' + \
> +            '<th align="center" colspan="' + str(bn_col_count) + '">results</th>' + \
> +            '</tr>' + \
> +            '<tr style="background-color:#cccccc">' + \
> +            '<th align="center" colspan="' + str(bn_col_count) + '">build_number</th>' + \
> +            '</tr>'
> +
> +
> +        row = '<tr style="background-color:#cccccc">'
> +        for bn in col_list:
> +            row += '<th>' + str(bn) + '</th>'
> +        row += '</tr>'
> +        html += row
> +
> +        # one row per test case
> +        tg_list = result_map.keys()
> +        tg_list.sort(cmp_alpha_num)
> +
> +        for tg in tg_list:
> +            # break apart tguid(tc) and divide into test set and test case
> +            parts = tg.split(".")
> +            ts = parts[0]
> +            tc = ".".join(parts[1:])
> +
> +            # FIXTHIS: add a column for the unit of each measure item
> +            row_tc_head = '<tr><td>' + tc + '</td><td>' + ts + '</td>'
> +            row_ref_head = '<tr><td>' + tc + '(ref)</td><td>' + ts + '</td>'
> +            result = \
> +            row_tc = \
> +            row_ref = ""
> +            for bn in col_list:
> +                try:
> +                    value,op,ref = result_map[tg][bn]
> +                except:
> +                    value = ""
> +                result = data_compare(value, ref, op)
> +                if result=="PASS":
> +                    cell_attr = 'style="background-color:#ccffcc" align=\"center\"'
> +                elif result=="FAIL":
> +                    cell_attr = 'style="background-color:#ffcccc" align=\"center\"'
> +                else:
> +                    cell_attr = 'align="center"'
> +                    value='-'
> +
> +                row_tc += ("<td %s>" % cell_attr) + value + "</td>"
> +                row_ref += "<td align=\"center\">" + op  + " " + ref + "</td>"
> +            row_tail = '</tr>'
> +
> +            # add a new line for each testcase
> +            html += row_tc_head + row_tc + row_tail
> +            # add a new line for the reference data of each testcase
> +            html += row_ref_head + row_ref + row_tail
> +
> +        # now add the totals to the bottom of the table
> +        row = '<tr style="background-color:#cccccc"><th colspan="' + str(bn_col_count+2) + '" align="center">Totals</th></tr>'
> +        html += row
> +
> +        summary_str = ["pass","fail","skip","error"]
> +        for i in range(4):
> +            row = '<tr><th colspan="2" align="left">' + summary_str[i] + '</th>'
> +            for bn in col_list:
> +                try:
> +                    result = build_num_map[bn][i]
> +                except:
> +                    result = ""
> +                row += "<td>" + str(result) + "</td>"
> +            row += '</tr>'
> +            html += row
> +        html += '</table>'
> +        dprint("HTML for this table is: '%s'" % html)
> +
> +        chart = {
> +                    "title": title,
> +                    "chart_type": "measure_table",
> +                    "data": html
> +                }
> +        chart_list.append(chart)
> +    return chart_list
> +
>  # define a comparison function for strings that might end with numbers
>  # like "test1, test2, ... test10"
>  # if items end in digits, and the leading strings are the same, then
> @@ -769,6 +926,8 @@ def make_chart_data(test_logdir, TESTDIR,
> chart_config_filename, data_lines):
>      # make the requested charts
>      if chart_type=="measure_plot":
>          chart_list = make_measure_plots(test_name, chart_config, entries)
> +    elif chart_type=="measure_table":
> +        chart_list = make_measure_tables(test_name, chart_config, entries)
>      elif chart_type=="testcase_table":
>          chart_list = make_testcase_table(test_name, chart_config, entries)
>      elif chart_type=="testset_summary_table":
> --
> 2.7.4

This all looks OK to me.
Applied to my 'next' branch.

I'll try to test it with something, but a recommendation for a test to
use this with would be good.

Thanks!
 -- Tim



* Re: [Fuego] [PATCH RFC 2/2] common: add support the test tarball from URL
  2018-12-17  1:53 ` [Fuego] [PATCH RFC 2/2] common: add support the test tarball from URL Liu Wenlong
@ 2019-01-15 22:24   ` Tim.Bird
  2019-01-20  6:22     ` Liu, Wenlong
  0 siblings, 1 reply; 11+ messages in thread
From: Tim.Bird @ 2019-01-15 22:24 UTC (permalink / raw)
  To: liuwl.fnst, fuego

Looks good.
Applied in my 'next' branch.

It would be good to have an example test that uses this.
Also, it would be good to have a set of Functional.fuego_check_source tests that can
try different source download methods, and indicate success or failure (to
add to our list of Fuego self-tests).
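
Roughly something like this per download method (the test name, marker
string, and checks below are hypothetical -- just sketching the idea,
assuming the standard test_build/test_processing phases and the
log_compare helper):

    # Functional.fuego_check_source/fuego_test.sh (sketch)
    tarball=https://example.org/files/hello-1.0.tar.gz   # exercises the URL path

    function test_build {
        # the source was already unpacked into the build dir; verify it
        [ -f ./configure -o -f ./Makefile ] && echo "source OK"
    }

    function test_processing {
        log_compare "$TESTDIR" "1" "source OK" "p"
    }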

Thanks!
 -- Tim


> -----Original Message-----
> From: Liu Wenlong
> Sent: Sunday, December 16, 2018 5:53 PM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] [PATCH RFC 2/2] common: add support the test tarball from
> URL
> 
> Currently, Fuego supports getting test source code from the following:
> - a git repository
> - a local tarball
> 
> Now, add support for fetching source code from a URL.
> With this feature, Fuego will be able to drop the heavy test
> tarballs/packages from the upstream repo in the future.
> 
> Signed-off-by: Liu Wenlong <liuwl.fnst@cn.fujitsu.com>
> ---
>  engine/scripts/functions.sh | 38 ++++++++++++++++++++++++++++++++++++--
>  1 file changed, 36 insertions(+), 2 deletions(-)
> 
> diff --git a/engine/scripts/functions.sh b/engine/scripts/functions.sh
> index b0cdb0e..6052fbe 100755
> --- a/engine/scripts/functions.sh
> +++ b/engine/scripts/functions.sh
> @@ -79,6 +79,25 @@ function git_clone {
>      git checkout $gitref
>  }
> 
> +# FIXTHIS: add support for other protocols, e.g. ftp
> +# Download a tarball from an upstream location
> +# $1 (tarball_url): the upstream location of the tarball
> +# $2 (tarball_des): where to store the downloaded file
> +function download_tarball {
> +    local tarball_url=${1}
> +    local tarball_des=${2}
> +
> +    is_empty "$tarball_url"
> +    is_empty "$tarball_des"
> +
> +    echo "Downloading $tarball_url to $tarball_des"
> +    wget $tarball_url -O $tarball_des &> /dev/null
> +    if [ $? -ne 0 ]; then
> +        abort_job "Downloading $tarball_url failed."
> +        return 1
> +    fi
> +}
> +
>  # Untars a tarball in the current folder
>  # $1 (tarball): file to untar
>  function untar {
> @@ -86,6 +105,17 @@ function untar {
> 
>      is_empty "$tarball"
> 
> +    # Check if it is an upstream tarball.
> +    if [[ $tarball == http* ]]; then
> +        upName=`echo "${TESTDIR^^}"| tr '.' '_'`
> +        md5sum_value=$(echo ${tarball} | md5sum)
> +        md5sum_value=${md5sum_value:0:7}
> +        tarball_dest=${FUEGO_RW}/buildzone/${upName}-${md5sum_value}-$(basename ${tarball})
> +        [[ -f ${tarball_dest} ]] && echo "Already downloaded, skip download..." \
> +                                 || download_tarball ${tarball} ${tarball_dest}
> +        tarball=${tarball_dest}
> +    fi
> +
>      echo "Unpacking $tarball"
>      case ${tarball/*./} in
>          gz|tgz) key=z ;;
> @@ -93,10 +123,14 @@ function untar {
>          tar) key= ;;
>          *) echo "Unknown $tarball file format. Not unpacking."; return 1;;
>      esac
> -    tar ${key}xf $TEST_HOME/$tarball --strip-components=1
> +    if ! is_abs_path $tarball; then
> +        tarball=$TEST_HOME/$tarball
> +    fi
> +
> +    tar ${key}xf $tarball --strip-components=1
> 
>      # record md5sum for possible source code updates
> -    md5sum $TEST_HOME/$tarball > fuego_tarball_src_md5sum
> +    md5sum $tarball > fuego_tarball_src_md5sum
>  }
> 
>  # Unpacks/clones the test source code into the current directory.
> --
> 2.7.4
> 
> 
> 
> _______________________________________________
> Fuego mailing list
> Fuego@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/fuego


* Re: [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests
  2019-01-15 21:51   ` Tim.Bird
@ 2019-01-15 23:36     ` Tim.Bird
  2019-01-20  6:21       ` Liu, Wenlong
  2019-01-15 23:41     ` Tim.Bird
  1 sibling, 1 reply; 11+ messages in thread
From: Tim.Bird @ 2019-01-15 23:36 UTC (permalink / raw)
  To: Tim.Bird, liuwl.fnst, fuego



> -----Original Message-----
> From: Tim Bird
...
> This all looks OK to me.
> Applied to my 'next' branch.
> 
> I'll try to test it with something, but a recommendation for a test to
> use this with would be good.

OK.  I tested this with Benchmark.netpipe and found a bug.

If the 'op' argument to data_compare is 'none', the code was replacing
the value in the table with the string "-".  This caused the table to be
missing data when there was no criteria.json file for the test.

I've added a criteria.json file, and a chart_config.json file for Benchmark.netpipe.
Also, I fixed the bug, in commit 3ae0356.
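
For reference, the criteria operators are the ones data_compare()
handles (lt/le/gt/ge/eq/ne/bt, with 'bt' taking a "low,high" pair).
A minimal criteria.json sketch -- structure assumed from Fuego's
criteria file format; the tguids and values below are made up, not
the actual Benchmark.netpipe entries:

    { "schema_version": "1.0",
      "criteria": [
        { "tguid": "netpipe.latency",
          "reference": { "value": "0.1", "operator": "le" } },
        { "tguid": "netpipe.throughput",
          "reference": { "value": "50,5000", "operator": "bt" } }
      ] }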

This is added to my 'next' branch on bitbucket.  Please try it out to make sure
I haven't broken anything.

Thanks,
 -- Tim



* Re: [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests
  2019-01-15 21:51   ` Tim.Bird
  2019-01-15 23:36     ` Tim.Bird
@ 2019-01-15 23:41     ` Tim.Bird
  2019-01-20  6:21       ` Liu, Wenlong
  1 sibling, 1 reply; 11+ messages in thread
From: Tim.Bird @ 2019-01-15 23:41 UTC (permalink / raw)
  To: Tim.Bird, liuwl.fnst, fuego


Liu,

Can you please add information to the wiki page:
http://fuegotest.org/wiki/Jenkins_Visualization

for the 'measure_table' chart type?

Thanks!
 -- Tim



* Re: [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests
  2019-01-15 23:36     ` Tim.Bird
@ 2019-01-20  6:21       ` Liu, Wenlong
  0 siblings, 0 replies; 11+ messages in thread
From: Liu, Wenlong @ 2019-01-20  6:21 UTC (permalink / raw)
  To: Tim.Bird, fuego

> -----Original Message-----
> From: Tim.Bird@sony.com [mailto:Tim.Bird@sony.com]
> Sent: Wednesday, January 16, 2019 7:37 AM
> To: Tim.Bird@sony.com; Liu, Wenlong/刘 文龙 <liuwl.fnst@cn.fujitsu.com>;
> fuego@lists.linuxfoundation.org
> Subject: RE: [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table
> generator for Benchmark tests
> > -----Original Message-----
> > From: Tim Bird
> ...
> > This all looks OK to me.
> > Applied to my 'next' branch.
> >
> > I'll try to test it with something, but a recommendation for a test to
> > use this with would be good.
> 
> OK.  I tested this with Benchmark.netpipe and found a bug.
> 
> If the 'op' argument to data_compare is 'none', the code was replacing the
> value in the table with the string "-".  This caused the table to be missing
> data when there was no criteria.json file for the test.
> 
> I've added a criteria.json file, and a chart_config.json file for
> Benchmark.netpipe.
> Also, I fixed the bug, in commit 3ae0356.
> 
> This is added to my 'next' branch on bitbucket.  Please try it out to make
> sure I haven't broken anything. 

Yes, actually, the 'op' field in the raw data is necessary for this generator; otherwise the test results cannot be displayed completely.
I'm very sorry that I didn't point that out before.

Thanks for your fix.
I will try it later and report back if there are any problems.

Thanks.

Best regards
Liu

> 
> Thanks,
>  -- Tim
> 
> 





* Re: [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table generator for Benchmark tests
  2019-01-15 23:41     ` Tim.Bird
@ 2019-01-20  6:21       ` Liu, Wenlong
  0 siblings, 0 replies; 11+ messages in thread
From: Liu, Wenlong @ 2019-01-20  6:21 UTC (permalink / raw)
  To: Tim.Bird, fuego

> -----Original Message-----
> From: Tim.Bird@sony.com [mailto:Tim.Bird@sony.com]
> Sent: Wednesday, January 16, 2019 7:41 AM
> To: Tim.Bird@sony.com; Liu, Wenlong/刘 文龙 <liuwl.fnst@cn.fujitsu.com>;
> fuego@lists.linuxfoundation.org
> Subject: RE: [Fuego] [PATCH RFC 1/2] parser: add a generic HTML table
> generator for Benchmark tests
> 
> 
> Liu,
> 
> Can you please add information to the wiki page:
> http://fuegotest.org/wiki/Jenkins_Visualization
> 
> for the 'measure_table' chart type?

Sure, I will do that.
Thanks.

Best regards
Liu

> Thanks!
>  -- Tim
> 
> 





* Re: [Fuego] [PATCH RFC 2/2] common: add support the test tarball from URL
  2019-01-15 22:24   ` Tim.Bird
@ 2019-01-20  6:22     ` Liu, Wenlong
  0 siblings, 0 replies; 11+ messages in thread
From: Liu, Wenlong @ 2019-01-20  6:22 UTC (permalink / raw)
  To: Tim.Bird, fuego

> -----Original Message-----
> From: Tim.Bird@sony.com [mailto:Tim.Bird@sony.com]
> Sent: Wednesday, January 16, 2019 6:24 AM
> To: Liu, Wenlong/刘 文龙 <liuwl.fnst@cn.fujitsu.com>;
> fuego@lists.linuxfoundation.org
> Subject: RE: [Fuego] [PATCH RFC 2/2] common: add support the test tarball
> from URL
> 
> Looks good.
> Applied in my 'next' branch.
> 
> It would be good to have an example test that uses this.
> Also, it would be good to have a set of Functional.fuego_check_source tests
> that can try different source download methods, and indicate success or
> failure (to add to our list of Fuego self-tests).

OK, I will try to add such a self-test to the Fuego self-tests.
Thanks for your review.

Best regards
Liu

> Thanks!
>  -- Tim
> 
> 
> > -----Original Message-----
> > From: Liu Wenlong
> > Sent: Sunday, December 16, 2018 5:53 PM
> > To: fuego@lists.linuxfoundation.org
> > Subject: [Fuego] [PATCH RFC 2/2] common: add support the test tarball
> > from URL
> >
> > Currently, Fuego supports getting test source code from the following:
> > - a git repository
> > - a local tarball
> >
> > Now, add support for fetching source code from a URL.
> > With this feature, Fuego will be able to drop the heavy test
> > tarballs/packages from the upstream repo in the future.
> >
> > Signed-off-by: Liu Wenlong <liuwl.fnst@cn.fujitsu.com>
> > ---
> >  engine/scripts/functions.sh | 38 ++++++++++++++++++++++++++++++++++++--
> >  1 file changed, 36 insertions(+), 2 deletions(-)
> >
> > diff --git a/engine/scripts/functions.sh b/engine/scripts/functions.sh
> > index b0cdb0e..6052fbe 100755
> > --- a/engine/scripts/functions.sh
> > +++ b/engine/scripts/functions.sh
> > @@ -79,6 +79,25 @@ function git_clone {
> >      git checkout $gitref
> >  }
> >
> > +# FIXTHIS: add support for other protocols, e.g. ftp
> > +# Download a tarball from an upstream location
> > +# $1 (tarball_url): the upstream location of the tarball
> > +# $2 (tarball_des): where to store the downloaded file
> > +function download_tarball {
> > +    local tarball_url=${1}
> > +    local tarball_des=${2}
> > +
> > +    is_empty "$tarball_url"
> > +    is_empty "$tarball_des"
> > +
> > +    echo "Downloading $tarball_url to $tarball_des"
> > +    wget $tarball_url -O $tarball_des &> /dev/null
> > +    if [ $? -ne 0 ]; then
> > +        abort_job "Downloading $tarball_url failed."
> > +        return 1
> > +    fi
> > +}
> > +
> >  # Untars a tarball in the current folder
> >  # $1 (tarball): file to untar
> >  function untar {
> > @@ -86,6 +105,17 @@ function untar {
> >
> >      is_empty "$tarball"
> >
> > +    # Check if it is an upstream tarball.
> > +    if [[ $tarball == http* ]]; then
> > +        upName=`echo "${TESTDIR^^}"| tr '.' '_'`
> > +        md5sum_value=$(echo ${tarball} | md5sum)
> > +        md5sum_value=${md5sum_value:0:7}
> > +        tarball_dest=${FUEGO_RW}/buildzone/${upName}-${md5sum_value}-$(basename ${tarball})
> > +        [[ -f ${tarball_dest} ]] && echo "Already downloaded, skip download..." \
> > +                                 || download_tarball ${tarball} ${tarball_dest}
> > +        tarball=${tarball_dest}
> > +    fi
> > +
> >      echo "Unpacking $tarball"
> >      case ${tarball/*./} in
> >          gz|tgz) key=z ;;
> > @@ -93,10 +123,14 @@ function untar {
> >          tar) key= ;;
> > +        *) echo "Unknown $tarball file format. Not unpacking."; return 1;;
> >      esac
> > -    tar ${key}xf $TEST_HOME/$tarball --strip-components=1
> > +    if ! is_abs_path $tarball; then
> > +        tarball=$TEST_HOME/$tarball
> > +    fi
> > +
> > +    tar ${key}xf $tarball --strip-components=1
> >
> >      # record md5sum for possible source code updates
> > -    md5sum $TEST_HOME/$tarball > fuego_tarball_src_md5sum
> > +    md5sum $tarball > fuego_tarball_src_md5sum
> >  }
> >
> >  # Unpacks/clones the test source code into the current directory.
> > --
> > 2.7.4
> >
> >
> >
> > _______________________________________________
> > Fuego mailing list
> > Fuego@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/fuego
> 




