* [Fuego] [PATCH] dbench: adjust test materials for dbench version
@ 2018-04-04 22:42 Tim.Bird
2018-04-09 9:09 ` Daniel Sangorrin
0 siblings, 1 reply; 4+ messages in thread
From: Tim.Bird @ 2018-04-04 22:42 UTC (permalink / raw)
To: daniel.sangorrin, fuego
Daniel,
I had a host of issues with the dbench upgrade. I used the patch below
to adjust some of the test materials for the different dbench tests.
Let me know if you have feedback on this.
In hindsight, maybe splitting the tests wasn't such a great idea.
It may have been better to write a parser that could recognize
and handle both versions (3 and 4) of the test output format.
Although it appears that most systems will be using dbench
version 4.0 or above, it's not impossible that Benchmark.dbench4
could discover dbench version 3.x on a system and try to use
it, which would result in errors.
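For what it's worth, a single parser along those lines might only need a
regex loose enough for both output formats. Here is a minimal sketch; the
sample lines are illustrative approximations of dbench 3 and 4 output (not
copied from real runs), and a real Fuego parser would go through
plib.parse_log rather than plain re:

```python
import re

# dbench 3 prints "Throughput N MB/sec M procs"; dbench 4 adds client and
# latency fields after "procs". One pattern can capture the number in both.
THROUGHPUT_RE = re.compile(r'^Throughput\s+([\d.]+)\s+MB/sec\s+.*procs')

def parse_throughput(line):
    m = THROUGHPUT_RE.match(line)
    return float(m.group(1)) if m else None

print(parse_throughput('Throughput 11.5173 MB/sec 4 procs'))
print(parse_throughput(
    'Throughput 10.394 MB/sec  4 clients  4 procs  max_latency=607.225 ms'))
```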
Also, I find that using the test name to prefix the test variables
(from the spec file) is problematic in this case. Renaming a test
requires changing all the variable names. Maybe we should use
a generic prefix, like TVAR_ or something.
Also, I wasn't sure whether I should change the testcase name
(in the parser and resulting json file) to match the test name (dbench4).
I ended up doing this for dbench3, but left dbench4 alone, as I expect
it will become our de facto 'dbench' test in the future.
And I found that the name in the spec file is only used for generating
the test variable names. I tried changing it in dbench4, but then changed
my mind, and left it alone.
Something is way different between versions 3 and 4 in reporting throughput
on my systems. I'm getting pretty significant differences in results on most
of my boards (the only board whose 4.x results are consistent with 3.x is
my Renesas arm64 board).
Here is a table of results:
test     spec     board   tguid               result
------------------------------------------------------------
dbench3  default  bbb     <<< wouldn't compile >>>
dbench3  default  bbb     dbench3.Throughput  11.5173
dbench4  default  bbb     dbench.Throughput   0.0282559
dbench4  roota    bbb     dbench.Throughput   0.116864

dbench   testdir  docker  dbench.Throughput   1741.26
dbench   testdir  docker  dbench.Throughput   1831.78
dbench3  default  docker  dbench3.Throughput  1169.77
dbench4  default  docker  dbench.Throughput   10.394
dbench4  roota    docker  dbench.Throughput   10.959

dbench   default  min1    dbench.Throughput   254.03
dbench3  default  min1    dbench3.Throughput  253.83
dbench3  default  min1    dbench3.Throughput  251.732
dbench4  default  min1    dbench.Throughput   32.1764
dbench4  default  min1    dbench.Throughput   33.0374
dbench4  roota    min1    dbench.Throughput   34.0026
dbench4  roota    min1    dbench.Throughput   21.3904

dbench   default  ren1    dbench.Throughput   8.41553
dbench   default  ren1    dbench.Throughput   8.62088
dbench3  default  ren1    dbench3.Throughput  8.43887
dbench3  default  ren1    dbench3.Throughput  8.45205
dbench4  default  ren1    dbench.Throughput   7.53374
dbench4  roota    ren1    dbench.Throughput   8.03552
dbench4  roota    ren1    dbench.Throughput   7.72424

dbench   default  rpi3-1  dbench.Throughput   136.099
dbench   default  rpi3-1  dbench.Throughput   132.869
dbench3  default  rpi3-1  dbench3.Throughput  159.953
dbench4  default  rpi3-1  dbench.Throughput   39.9088
dbench4  roota    rpi3-1  dbench.Throughput   50.6633
Note that the 'roota' spec tries to mimic the settings used by
the old Benchmark.dbench test.
I'm not sure which version of the test to believe for my
throughput. But something is different enough to skew the
results by a large amount - in the case of docker by 100x.
Have you seen differences in the reported throughput?
-- Tim
------
Convert testplans to use dbench4.
Make sure dbench3 criteria, parser, and spec files use the correct test name.
Have dbench4 test.yaml file use the correct test name.
Signed-off-by: Tim Bird <tim.bird@sony.com>
---
engine/overlays/testplans/testplan_default.json | 2 +-
engine/overlays/testplans/testplan_docker.json | 2 +-
engine/overlays/testplans/testplan_lava.json | 2 +-
engine/overlays/testplans/testplan_ltsi.json | 2 +-
engine/overlays/testplans/testplan_mmc.json | 2 +-
engine/overlays/testplans/testplan_sata.json | 2 +-
engine/overlays/testplans/testplan_smoketest.json | 2 +-
engine/overlays/testplans/testplan_usbstor.json | 2 +-
engine/tests/Benchmark.dbench3/criteria.json | 4 ++--
engine/tests/Benchmark.dbench3/fuego_test.sh | 24 ++++++++++++++---------
engine/tests/Benchmark.dbench3/parser.py | 2 +-
engine/tests/Benchmark.dbench3/reference.json | 2 +-
engine/tests/Benchmark.dbench3/spec.json | 2 +-
engine/tests/Benchmark.dbench4/test.yaml | 2 +-
14 files changed, 29 insertions(+), 23 deletions(-)
diff --git a/engine/overlays/testplans/testplan_default.json b/engine/overlays/testplans/testplan_default.json
index 3127a0d..6a4f1a4 100644
--- a/engine/overlays/testplans/testplan_default.json
+++ b/engine/overlays/testplans/testplan_default.json
@@ -11,7 +11,7 @@
"testName": "Benchmark.Dhrystone"
},
{
- "testName": "Benchmark.dbench"
+ "testName": "Benchmark.dbench4"
},
{
"testName": "Benchmark.gtkperf"
diff --git a/engine/overlays/testplans/testplan_docker.json b/engine/overlays/testplans/testplan_docker.json
index 4f74576..673a26d 100644
--- a/engine/overlays/testplans/testplan_docker.json
+++ b/engine/overlays/testplans/testplan_docker.json
@@ -8,7 +8,7 @@
"spec": "100M"
},
{
- "testName": "Benchmark.dbench",
+ "testName": "Benchmark.dbench4",
"spec": "testdir"
},
{
diff --git a/engine/overlays/testplans/testplan_lava.json b/engine/overlays/testplans/testplan_lava.json
index 2d7b150..dd63784 100644
--- a/engine/overlays/testplans/testplan_lava.json
+++ b/engine/overlays/testplans/testplan_lava.json
@@ -8,7 +8,7 @@
"spec": "100M"
},
{
- "testName": "Benchmark.dbench",
+ "testName": "Benchmark.dbench4",
"spec": "testdir"
},
{
diff --git a/engine/overlays/testplans/testplan_ltsi.json b/engine/overlays/testplans/testplan_ltsi.json
index fb832ba..7197d55 100644
--- a/engine/overlays/testplans/testplan_ltsi.json
+++ b/engine/overlays/testplans/testplan_ltsi.json
@@ -8,7 +8,7 @@
"spec": "100M"
},
{
- "testName": "Benchmark.dbench"
+ "testName": "Benchmark.dbench4"
},
{
"testName": "Benchmark.hackbench"
diff --git a/engine/overlays/testplans/testplan_mmc.json b/engine/overlays/testplans/testplan_mmc.json
index 3e9bf5c..05053f8 100644
--- a/engine/overlays/testplans/testplan_mmc.json
+++ b/engine/overlays/testplans/testplan_mmc.json
@@ -34,7 +34,7 @@
"spec": "mmc"
},
{
- "testName": "Benchmark.dbench",
+ "testName": "Benchmark.dbench4",
"spec": "mmc"
}
diff --git a/engine/overlays/testplans/testplan_sata.json b/engine/overlays/testplans/testplan_sata.json
index 27b49ca..7588214 100644
--- a/engine/overlays/testplans/testplan_sata.json
+++ b/engine/overlays/testplans/testplan_sata.json
@@ -34,7 +34,7 @@
"spec": "sata"
},
{
- "testName": "Benchmark.dbench",
+ "testName": "Benchmark.dbench4",
"spec": "sata"
}
diff --git a/engine/overlays/testplans/testplan_smoketest.json b/engine/overlays/testplans/testplan_smoketest.json
index 5b583d3..e79adf8 100644
--- a/engine/overlays/testplans/testplan_smoketest.json
+++ b/engine/overlays/testplans/testplan_smoketest.json
@@ -12,7 +12,7 @@
"testName": "Benchmark.Dhrystone"
},
{
- "testName": "Benchmark.dbench"
+ "testName": "Benchmark.dbench4"
},
{
"testName": "Benchmark.hackbench"
diff --git a/engine/overlays/testplans/testplan_usbstor.json b/engine/overlays/testplans/testplan_usbstor.json
index 5ecd8ed..c67f2d8 100644
--- a/engine/overlays/testplans/testplan_usbstor.json
+++ b/engine/overlays/testplans/testplan_usbstor.json
@@ -34,7 +34,7 @@
"spec": "usb"
},
{
- "testName": "Benchmark.dbench",
+ "testName": "Benchmark.dbench4",
"spec": "usb"
}
diff --git a/engine/tests/Benchmark.dbench3/criteria.json b/engine/tests/Benchmark.dbench3/criteria.json
index c61a057..3e60d56 100644
--- a/engine/tests/Benchmark.dbench3/criteria.json
+++ b/engine/tests/Benchmark.dbench3/criteria.json
@@ -2,14 +2,14 @@
"schema_version":"1.0",
"criteria":[
{
- "tguid":"default.dbench.Throughput",
+ "tguid":"default.dbench3.Throughput",
"reference":{
"value":0,
"operator":"gt"
}
},
{
- "tguid":"default.dbench",
+ "tguid":"default.dbench3",
"min_pass":1
}
]
diff --git a/engine/tests/Benchmark.dbench3/fuego_test.sh b/engine/tests/Benchmark.dbench3/fuego_test.sh
index f7160f9..3ec4ad1 100755
--- a/engine/tests/Benchmark.dbench3/fuego_test.sh
+++ b/engine/tests/Benchmark.dbench3/fuego_test.sh
@@ -11,18 +11,24 @@ function test_deploy {
}
function test_run {
- assert_define BENCHMARK_DBENCH_MOUNT_BLOCKDEV
- assert_define BENCHMARK_DBENCH_MOUNT_POINT
- assert_define BENCHMARK_DBENCH_TIMELIMIT
- assert_define BENCHMARK_DBENCH_NPROCS
+ assert_define BENCHMARK_DBENCH3_MOUNT_BLOCKDEV
+ assert_define BENCHMARK_DBENCH3_MOUNT_POINT
+ assert_define BENCHMARK_DBENCH3_TIMELIMIT
+ assert_define BENCHMARK_DBENCH3_NPROCS
- hd_test_mount_prepare $BENCHMARK_DBENCH_MOUNT_BLOCKDEV $BENCHMARK_DBENCH_MOUNT_POINT
+ hd_test_mount_prepare $BENCHMARK_DBENCH3_MOUNT_BLOCKDEV \
+ $BENCHMARK_DBENCH3_MOUNT_POINT
- report "cd $BOARD_TESTDIR/fuego.$TESTDIR; cp client.txt $BENCHMARK_DBENCH_MOUNT_POINT/fuego.$TESTDIR; pwd; ./dbench -t $BENCHMARK_DBENCH_TIMELIMIT -D $BENCHMARK_DBENCH_MOUNT_POINT/fuego.$TESTDIR -c $BENCHMARK_DBENCH_MOUNT_POINT/fuego.$TESTDIR/client.txt $BENCHMARK_DBENCH_NPROCS; rm $BENCHMARK_DBENCH_MOUNT_POINT/fuego.$TESTDIR/client.txt"
+ report "cd $BOARD_TESTDIR/fuego.$TESTDIR; \
+ cp client.txt $BENCHMARK_DBENCH3_MOUNT_POINT/fuego.$TESTDIR; \
+ pwd; ./dbench -t $BENCHMARK_DBENCH3_TIMELIMIT \
+ -D $BENCHMARK_DBENCH3_MOUNT_POINT/fuego.$TESTDIR \
+ -c $BENCHMARK_DBENCH3_MOUNT_POINT/fuego.$TESTDIR/client.txt \
+ $BENCHMARK_DBENCH3_NPROCS; \
+ rm $BENCHMARK_DBENCH3_MOUNT_POINT/fuego.$TESTDIR/client.txt"
sleep 5
- hd_test_clean_umount $BENCHMARK_DBENCH_MOUNT_BLOCKDEV $BENCHMARK_DBENCH_MOUNT_POINT
+ hd_test_clean_umount $BENCHMARK_DBENCH3_MOUNT_BLOCKDEV \
+ $BENCHMARK_DBENCH3_MOUNT_POINT
}
-
-
diff --git a/engine/tests/Benchmark.dbench3/parser.py b/engine/tests/Benchmark.dbench3/parser.py
index b664936..0201262 100755
--- a/engine/tests/Benchmark.dbench3/parser.py
+++ b/engine/tests/Benchmark.dbench3/parser.py
@@ -12,6 +12,6 @@ regex_string = '^(Throughput)(.*)(MB/sec)(.*)(procs)$'
matches = plib.parse_log(regex_string)
if matches:
- measurements['default.dbench'] = [{"name": "Throughput", "measure" : float(matches[0][1])}]
+ measurements['default.dbench3'] = [{"name": "Throughput", "measure" : float(matches[0][1])}]
sys.exit(plib.process(measurements))
diff --git a/engine/tests/Benchmark.dbench3/reference.json b/engine/tests/Benchmark.dbench3/reference.json
index f08c750..d4715ad 100644
--- a/engine/tests/Benchmark.dbench3/reference.json
+++ b/engine/tests/Benchmark.dbench3/reference.json
@@ -4,7 +4,7 @@
"name":"default",
"test_cases":[
{
- "name":"dbench",
+ "name":"dbench3",
"measurements":[
{
"name":"Throughput",
diff --git a/engine/tests/Benchmark.dbench3/spec.json b/engine/tests/Benchmark.dbench3/spec.json
index 61dbfda..f592f01 100644
--- a/engine/tests/Benchmark.dbench3/spec.json
+++ b/engine/tests/Benchmark.dbench3/spec.json
@@ -1,5 +1,5 @@
{
- "testName": "Benchmark.dbench",
+ "testName": "Benchmark.dbench3",
"specs": {
"sata": {
"MOUNT_BLOCKDEV":"$SATA_DEV",
diff --git a/engine/tests/Benchmark.dbench4/test.yaml b/engine/tests/Benchmark.dbench4/test.yaml
index 166e235..c18ed97 100644
--- a/engine/tests/Benchmark.dbench4/test.yaml
+++ b/engine/tests/Benchmark.dbench4/test.yaml
@@ -1,5 +1,5 @@
fuego_package_version: 1
-name: Benchmark.dbench
+name: Benchmark.dbench4
description: |
Measure disk throughput for simulated netbench run.
license: GPL-3.0
--
1.9.1
* Re: [Fuego] [PATCH] dbench: adjust test materials for dbench version
2018-04-04 22:42 [Fuego] [PATCH] dbench: adjust test materials for dbench version Tim.Bird
@ 2018-04-09 9:09 ` Daniel Sangorrin
2018-04-09 17:25 ` Tim.Bird
0 siblings, 1 reply; 4+ messages in thread
From: Daniel Sangorrin @ 2018-04-09 9:09 UTC (permalink / raw)
To: Tim.Bird, fuego
Hi Tim,
> -----Original Message-----
> From: Tim.Bird@sony.com [mailto:Tim.Bird@sony.com]
> Sent: Thursday, April 5, 2018 7:43 AM
> To: daniel.sangorrin@toshiba.co.jp; fuego@lists.linuxfoundation.org
> Subject: [PATCH] dbench: adjust test materials for dbench version
>
> Daniel,
>
> I had a host of issues with the dbench upgrade. I used the patch below
> to adjust some of the test materials for the different dbench tests.
> Let me know if you have feedback on this.
Thanks for the review and the fixes as well!
> In hindsight, maybe splitting the tests wasn't such a great idea.
> It may have been better to write a parser that could recognize
> and handle both versions (3 and 4) of the test output format.
> Although it appears that most systems will be using dbench
> version 4.0 or above, it's not impossible that Benchmark.dbench4
> could discover dbench version 3.x on a system, and try to use
> it, which would result in errors.
Yes, that's true.
Another option would be to check the version and ask the user to
run Benchmark.dbench3 instead of Benchmark.dbench4. Which one
do you prefer?
> Also, I find that using the test name to prefix the test variable
> (from the spec file), is problematic in this case. To rename a test
> requires changing all the variable names. Maybe we should use
> a generic prefix, like TVAR_ or something.
Good idea. The variable names would be shorter as well.
> Also, I wasn't sure whether I should change the testcase name
> (in the parser and resulting json file) to match the test name (dbench4).
> I ended up doing this for dbench3, but left dbench4 alone, as I expect
> it will become our defacto 'dbench' test in the future.
Hmm, I am worried that a new version will come up and we will have
to change many things.
> [...]
> Have you seen differences in the reported throughput?
Yes, they are completely different on my boards too. I am not sure why; the dbench developers probably know better.
Regards,
Daniel
* Re: [Fuego] [PATCH] dbench: adjust test materials for dbench version
2018-04-09 9:09 ` Daniel Sangorrin
@ 2018-04-09 17:25 ` Tim.Bird
2018-04-10 3:11 ` Daniel Sangorrin
0 siblings, 1 reply; 4+ messages in thread
From: Tim.Bird @ 2018-04-09 17:25 UTC (permalink / raw)
To: daniel.sangorrin, fuego
> -----Original Message-----
> From: Daniel Sangorrin
> Hi Tim,
>
> > -----Original Message-----
> > From: Tim.Bird@sony.com [mailto:Tim.Bird@sony.com]
> > Sent: Thursday, April 5, 2018 7:43 AM
> > To: daniel.sangorrin@toshiba.co.jp; fuego@lists.linuxfoundation.org
> > Subject: [PATCH] dbench: adjust test materials for dbench version
> >
> > Daniel,
> >
> > I had a host of issues with the dbench upgrade. I used the patch below
> > to adjust some of the test materials for the different dbench tests.
> > Let me know if you have feedback on this.
>
> Thanks for the review and the fixes as well!
>
> > In hindsight, maybe splitting the tests wasn't such a great idea.
> > It may have been better to write a parser that could recognize
> > and handle both versions (3 and 4) of the test output format.
> > Although it appears that most systems will be using dbench
> > version 4.0 or above, it's not impossible that Benchmark.dbench4
> > could discover dbench version 3.x on a system, and try to use
> > it, which would result in errors.
>
> Yes, that's true.
> Another option would be to check the version and ask the user to
> run Benchmark.dbench3 instead of Benchmark.dbench4. Which one
> do you prefer?
I prefer checking the version and telling the user to run Benchmark.dbench3.
If we get serious about running tests that already exist on the target board,
then it could come in handy to have an example of version checking code,
so this would be nice to have.
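The version-checking logic could be as simple as the sketch below. It is in
Python for illustration only; in practice this would be a few lines of shell
in test_run, and the exact format of dbench's version banner ("dbench
version 4.00") is an assumption, not verified against any release:

```python
import re

def dbench_major_version(version_output):
    # Pull the first integer out of the (assumed) version banner.
    m = re.search(r'(\d+)', version_output)
    return int(m.group(1)) if m else 0

def choose_test(version_output):
    # Abort with a pointer to Benchmark.dbench3 when the binary is too old.
    if dbench_major_version(version_output) >= 4:
        return "ok: proceed with Benchmark.dbench4"
    return "abort: please run Benchmark.dbench3 instead"

print(choose_test("dbench version 3.04"))
print(choose_test("dbench version 4.00"))
```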
>
> > Also, I find that using the test name to prefix the test variable
> > (from the spec file), is problematic in this case. To rename a test
> > requires changing all the variable names. Maybe we should use
> > a generic prefix, like TVAR_ or something.
>
> Good idea. The variable names would be shorter as well.
It's too big a change for the 1.3 release, but I'll make a note and
we can consider it for 1.4.
>
> > Also, I wasn't sure whether I should change the testcase name
> > (in the parser and resulting json file) to match the test name (dbench4).
> > I ended up doing this for dbench3, but left dbench4 alone, as I expect
> > it will become our de facto 'dbench' test in the future.
>
> Hmm, I am worried that a new version will come up and we will have
> to change many things.
For now, I think we should just keep an eye on it. Tests don't change
very often. We'll see how often this situation comes up and determine
how we ought to deal with test version changes in general. So far, I believe
this is the first parser that has broken for us in Fuego, on a test version change.
> [...]
> > Have you seen differences in the reported throughput?
>
> Yes, they are completely different on my boards too. I am not sure why;
> the dbench developers probably know better.
Do you plan to ask them, or should I?
I'd like to know which is the "real" number.
Thanks,
-- Tim
* Re: [Fuego] [PATCH] dbench: adjust test materials for dbench version
2018-04-09 17:25 ` Tim.Bird
@ 2018-04-10 3:11 ` Daniel Sangorrin
0 siblings, 0 replies; 4+ messages in thread
From: Daniel Sangorrin @ 2018-04-10 3:11 UTC (permalink / raw)
To: Tim.Bird, fuego
> > Yes, they are completely different in my boards too. I am not sure why,
> > probably the dbench developers know better.
>
> Do you plan to ask them, or should I?
>
> I'd like to know which is the "real" number.
I opened an issue here; let's wait.
https://github.com/sahlberg/dbench/issues/6
Thanks,
Daniel