Hi Tim,
Sorry, my mistake. I cloned the fuego-core 'next' branch earlier this week and thought I had the latest code with me.
When I did a git pull just now, it pulled in fixes for the Benchmark.Stream and Functional.expat tests.
Both tests are passing now.
Thanks a lot.
Regards,
Dhinakar,
Senior Technical Manager,
2-07-526, Phoenix Building,
+91-9902007650
Samsung Research Institute, Bangalore.
--------- Original Message ---------
Sender : Bird, Timothy <Tim.Bird@sony.com>
Date : 2017-10-06 20:45 (GMT+5:30)
Title : RE: [Fuego] Benchmark.Stream fails with ValueError: invalid literal for float() on x86 64-bit target
To : Dhinakar Kalyanasundaram<dhinakar.k@samsung.com>, fuego@lists.linuxfoundation.org
> -----Original Message-----
> From: fuego-bounces@lists.linuxfoundation.org [mailto:fuego-bounces@lists.linuxfoundation.org] On Behalf Of Dhinakar Kalyanasundaram
> Sent: Friday, October 06, 2017 5:27 AM
> To: fuego@lists.linuxfoundation.org
> Subject: [Fuego] Benchmark.Stream fails with ValueError: invalid literal for float() on x86 64-bit target
>
> Hi,
>
> Benchmark.Stream fails with ValueError: invalid literal for float() on x86 64-bit target.
> The execution log has been pasted below for reference.
> Please let me know how it can be fixed. Thanks in advance.

This is fixed in my 'next' branch with the following commit:
https://bitbucket.org/tbird20d/fuego-core/commits/b175ddb18094b2639ee31272a275f79cd7d81434

Please do a 'git pull' in the appropriate fuego-core repository on your host, and let me know if it fixes it for you.
 -- Tim

> ##### doing fuego phase: build ########
> The test is already built
> Fuego test_build duration=0 seconds
> ##### doing fuego phase: deploy ########
> ##### doing fuego phase: pre_deploy ########
> ##### doing fuego phase: test_deploy ########
> ##### doing fuego phase: post_deploy ########
> ##### doing fuego phase: run ########
> WARNING: test log file parameter empty, so will use default
> -------------------------------------------------------------
> STREAM version $Revision: 5.9 $
> -------------------------------------------------------------
> This system uses 8 bytes per DOUBLE PRECISION word.
> -------------------------------------------------------------
> Array size = 2000000, Offset = 0
> Total memory required = 45.8 MB.
> Each test is run 10 times, but only
> the *best* time for each is used.
> -------------------------------------------------------------
> Printing one line per active thread....
> -------------------------------------------------------------
> Your clock granularity/precision appears to be 1 microseconds.
> Each test below will take on the order of 2795 microseconds.
> (= 2795 clock ticks)
> Increase the size of the arrays if this shows that
> you are not getting at least 20 clock ticks per test.
> -------------------------------------------------------------
> WARNING -- The above is only a rough guideline.
> For best results, please be sure you know the
> precision of your system timer.
> -------------------------------------------------------------
> Function      Rate (MB/s)   Avg time     Min time     Max time
> Copy:           8328.7451     0.0039       0.0038       0.0040
> Scale:          7978.2279     0.0040       0.0040       0.0041
> Add:            8857.6969     0.0055       0.0054       0.0057
> Triad:          9012.3368     0.0054       0.0053       0.0054
> -------------------------------------------------------------
> Solution Validates
> -------------------------------------------------------------
> ##### doing fuego phase: post_test ########
> Teardown board link
> ##### doing fuego phase: processing ########
> running python with PATH=/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
> Reading current values from /fuego-rw/logs/Benchmark.Stream/TRAV-Ethernet-x86.default.2.2/testlog.txt
> Traceback (most recent call last):
>   File "/fuego-core/engine/tests/Benchmark.Stream/parser.py", line 24, in <module>
>     sys.exit(plib.process_data(ref_section_pat, cur_dict, 's', 'Rate, MB/s'))
>   File "/fuego-core/engine/scripts/parser/common.py", line 600, in process_data
>     new_measure = {"name":measure, "measure": float(value)}
> ValueError: invalid literal for float(): 9012.3368 0.0054

This was caused by a bug in the parser.py for this test.

> ERROR: results did not satisfy the threshold
> Fuego: all test phases complete!
> Build step 'Execute shell' marked build as failure
> [description-setter] Description set: <a href="/storm/userContent/fuego.logs/Benchmark.Stream/TRAV-Ethernet-x86.default.2.2/testlog.txt">testlog</a> <a href="/storm/userContent/fuego.logs/Benchmark.Stream/TRAV-Ethernet-x86.default.2.2/run.json">run.json</a> <a href="/storm/userContent/fuego.logs/Benchmark.Stream/TRAV-Ethernet-x86.default.2.2/consolelog.txt">fuegolog</a> <a href="/storm/userContent/fuego.logs/Benchmark.Stream/TRAV-Ethernet-x86.default.2.2/devlog.txt">devlog</a> <a href="/storm/userContent/fuego.logs/Benchmark.Stream/TRAV-Ethernet-x86.default.2.2/prolog.sh">prolog.sh</a>
> Finished: FAILURE
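P.S. For anyone hitting this on an older checkout before pulling the fix: the traceback shows that the parser's regex captured more than the Rate column of the STREAM line, so float() was handed two numbers at once. The sketch below is only illustrative of that failure mode (the variable names and the first-token workaround are my own; the real fix is in the fuego-core commit linked above):

```python
# Illustrative reproduction of the Benchmark.Stream parser failure.
# A STREAM result line, as seen in the log above:
line = "Triad: 9012.3368 0.0054 0.0053 0.0054"

# If the extracted "value" spans the whole row after the colon,
# float() receives several numbers and raises ValueError, matching
# "invalid literal for float(): 9012.3368 0.0054" in the traceback.
value = line.split(":")[1].strip()
try:
    rate = float(value)
except ValueError:
    # Workaround shown here for illustration: keep only the first
    # whitespace-separated field, i.e. the Rate (MB/s) column.
    rate = float(value.split()[0])

print(rate)  # 9012.3368
```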