* [U-Boot] [PATCH] test/py: make each unit test a pytest
@ 2016-01-28 23:45 Stephen Warren
  2016-01-29  3:52 ` Simon Glass
  0 siblings, 1 reply; 7+ messages in thread
From: Stephen Warren @ 2016-01-28 23:45 UTC (permalink / raw)
  To: u-boot

From: Stephen Warren <swarren@nvidia.com>

A custom fixture named ut_subtest is implemented which is parametrized
with the names of all unit tests that the U-Boot binary supports. This
causes each U-Boot unit test to be exposed as a separate pytest. In turn,
this allows more fine-grained pass/fail counts and test selection, e.g.:

test.py --bd sandbox -k ut_dm_usb

... will run about 8 tests at present.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
---
This depends on at least my recently sent "test/py: run C-based unit tests".

 test/py/conftest.py      | 105 ++++++++++++++++++++++++++++++++++++-----------
 test/py/tests/test_ut.py |  14 +++----
 2 files changed, 86 insertions(+), 33 deletions(-)

diff --git a/test/py/conftest.py b/test/py/conftest.py
index 3e162cafcc4a..05491a2453c0 100644
--- a/test/py/conftest.py
+++ b/test/py/conftest.py
@@ -21,7 +21,9 @@ import pexpect
 import pytest
 from _pytest.runner import runtestprotocol
 import ConfigParser
+import re
 import StringIO
+import subprocess
 import sys
 
 # Globals: The HTML log file, and the connection to the U-Boot console.
@@ -189,8 +191,43 @@ def pytest_configure(config):
         import u_boot_console_exec_attach
         console = u_boot_console_exec_attach.ConsoleExecAttach(log, ubconfig)
 
-def pytest_generate_tests(metafunc):
-    """pytest hook: parameterize test functions based on custom rules.
+re_ut_test_list = re.compile(r'_u_boot_list_2_(dm|env)_test_2_\1_test_(.*)\s*$')
+def generate_ut_subtest(metafunc, fixture_name):
+    """Provide parametrization for a ut_subtest fixture.
+
+    Determines the set of unit tests built into a U-Boot binary by parsing the
+    list of symbols present in the U-Boot binary. Provides this information to
+    test functions by parameterizing their ut_subtest fixture parameter.
+
+    Args:
+        metafunc: The pytest test function.
+        fixture_name: The fixture name to test.
+
+    Returns:
+        Nothing.
+    """
+
+    # This does rely on an objdump binary, but that's quite likely to be
+    # present. This approach trivially takes care of any source or Makefile-
+    # level conditional compilation which may occur, and matches the test
+    # execution order of a plain "ut dm" command. A source-scanning approach
+    # would do neither. This approach also doesn't require access to the
+    # U-Boot source tree when running tests.
+
+    cmd = 'objdump -t "%s" | sort' % (console.config.build_dir + '/u-boot')
+    out = subprocess.check_output(cmd, shell=True)
+    vals = []
+    for l in out.splitlines():
+        m = re_ut_test_list.search(l)
+        if not m:
+            continue
+        vals.append(m.group(1) + ' ' + m.group(2))
+
+    ids = ['ut_' + s.replace(' ', '_') for s in vals]
+    metafunc.parametrize(fixture_name, vals, ids=ids)
+
+def generate_config(metafunc, fixture_name):
+    """Provide parametrization for {env,brd}__ fixtures.
 
     If a test function takes parameter(s) (fixture names) of the form brd__xxx
     or env__xxx, the brd and env configuration dictionaries are consulted to
@@ -199,6 +236,7 @@ def pytest_generate_tests(metafunc):
 
     Args:
         metafunc: The pytest test function.
+        fixture_name: The fixture name to test.
 
     Returns:
         Nothing.
@@ -208,30 +246,49 @@ def pytest_generate_tests(metafunc):
         'brd': console.config.brd,
         'env': console.config.env,
     }
+    parts = fixture_name.split('__')
+    if len(parts) < 2:
+        return
+    if parts[0] not in subconfigs:
+        return
+    subconfig = subconfigs[parts[0]]
+    vals = []
+    val = subconfig.get(fixture_name, [])
+    # If that exact name is a key in the data source:
+    if val:
+        # ... use the dict value as a single parameter value.
+        vals = (val, )
+    else:
+        # ... otherwise, see if there's a key that contains a list of
+        # values to use instead.
+        vals = subconfig.get(fixture_name + 's', [])
+    def fixture_id(index, val):
+        try:
+            return val['fixture_id']
+        except:
+            return fixture_name + str(index)
+    ids = [fixture_id(index, val) for (index, val) in enumerate(vals)]
+    metafunc.parametrize(fixture_name, vals, ids=ids)
+
+def pytest_generate_tests(metafunc):
+    """pytest hook: parameterize test functions based on custom rules.
+
+    Check each test function parameter (fixture name) to see if it is one of
+    our custom names, and if so, provide the correct parametrization for that
+    parameter.
+
+    Args:
+        metafunc: The pytest test function.
+
+    Returns:
+        Nothing.
+    """
+
     for fn in metafunc.fixturenames:
-        parts = fn.split('__')
-        if len(parts) < 2:
+        if fn == 'ut_subtest':
+            generate_ut_subtest(metafunc, fn)
             continue
-        if parts[0] not in subconfigs:
-            continue
-        subconfig = subconfigs[parts[0]]
-        vals = []
-        val = subconfig.get(fn, [])
-        # If that exact name is a key in the data source:
-        if val:
-            # ... use the dict value as a single parameter value.
-            vals = (val, )
-        else:
-            # ... otherwise, see if there's a key that contains a list of
-            # values to use instead.
-            vals = subconfig.get(fn + 's', [])
-        def fixture_id(index, val):
-            try:
-                return val["fixture_id"]
-            except:
-                return fn + str(index)
-        ids = [fixture_id(index, val) for (index, val) in enumerate(vals)]
-        metafunc.parametrize(fn, vals, ids=ids)
+        generate_config(metafunc, fn)
 
 @pytest.fixture(scope='function')
 def u_boot_console(request):
diff --git a/test/py/tests/test_ut.py b/test/py/tests/test_ut.py
index b033ca54d756..cd85b3ddc0ce 100644
--- a/test/py/tests/test_ut.py
+++ b/test/py/tests/test_ut.py
@@ -6,8 +6,8 @@ import os.path
 import pytest
 
 @pytest.mark.buildconfigspec('ut_dm')
-def test_ut_dm(u_boot_console):
-    """Execute the "ut dm" command."""
+def test_ut_dm_init(u_boot_console):
+    """Initialize data for ut dm tests."""
 
     fn = u_boot_console.config.source_dir + '/testflash.bin'
     if not os.path.exists(fn):
@@ -16,14 +16,10 @@ def test_ut_dm(u_boot_console):
         with open(fn, 'wb') as fh:
             fh.write(data)
 
-    output = u_boot_console.run_command('ut dm')
-    assert output.endswith('Failures: 0')
-
-@pytest.mark.buildconfigspec('ut_env')
-def test_ut_env(u_boot_console):
-    """Execute the "ut env" command."""
+def test_ut(u_boot_console, ut_subtest):
+    """Execute a "ut" subtest."""
 
-    output = u_boot_console.run_command('ut env')
+    output = u_boot_console.run_command('ut ' + ut_subtest)
     assert output.endswith('Failures: 0')
 
 @pytest.mark.buildconfigspec('ut_time')
-- 
2.7.0
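
To make the symbol-scanning step concrete, here is a minimal standalone
sketch of what generate_ut_subtest() derives from the objdump output.
The two sample symbol lines are hypothetical; the real set depends on
which unit tests the configuration compiles in:

    import re

    re_ut_test_list = re.compile(
        r'_u_boot_list_2_(dm|env)_test_2_\1_test_(.*)\s*$')

    # Hypothetical extract of 'objdump -t u-boot | sort' output:
    sample = (
        '000000000062c1a0 g O .u_boot_list 18'
        ' _u_boot_list_2_dm_test_2_dm_test_autobind\n'
        '000000000062c1b8 g O .u_boot_list 18'
        ' _u_boot_list_2_dm_test_2_dm_test_usb_base\n')

    vals = []
    for l in sample.splitlines():
        m = re_ut_test_list.search(l)
        if m:
            # 'dm autobind' etc.; each later runs as 'ut <val>'.
            vals.append(m.group(1) + ' ' + m.group(2))

    ids = ['ut_' + s.replace(' ', '_') for s in vals]
    # ids == ['ut_dm_autobind', 'ut_dm_usb_base'], so a selection such
    # as '-k ut_dm_usb' matches only the usb_base test above.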

* [U-Boot] [PATCH] test/py: make each unit test a pytest
  2016-01-28 23:45 [U-Boot] [PATCH] test/py: make each unit test a pytest Stephen Warren
@ 2016-01-29  3:52 ` Simon Glass
  2016-01-29  5:08   ` Stephen Warren
  0 siblings, 1 reply; 7+ messages in thread
From: Simon Glass @ 2016-01-29  3:52 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 28 January 2016 at 16:45, Stephen Warren <swarren@wwwdotorg.org> wrote:
> From: Stephen Warren <swarren@nvidia.com>
>
> A custom fixture named ut_subtest is implemented which is parametrized
> with the names of all unit tests that the U-Boot binary supports. This
> causes each U-Boot unit test to be exposed as a separate pytest. In turn,
> this allows more fine-grained pass/fail counts and test selection, e.g.:
>
> test.py --bd sandbox -k ut_dm_usb
>
> ... will run about 8 tests at present.
>
> Signed-off-by: Stephen Warren <swarren@nvidia.com>
> ---
> This depends on at least my recently sent "test/py: run C-based unit tests".
>
>  test/py/conftest.py      | 105 ++++++++++++++++++++++++++++++++++++-----------
>  test/py/tests/test_ut.py |  14 +++----
>  2 files changed, 86 insertions(+), 33 deletions(-)

This seems a bit extreme. It might be better to move the remaining
three commands under the 'ut' subcommand. Then all unit tests would be
visible from the 'ut' help...

>
> diff --git a/test/py/conftest.py b/test/py/conftest.py
> index 3e162cafcc4a..05491a2453c0 100644
> --- a/test/py/conftest.py
> +++ b/test/py/conftest.py
> @@ -21,7 +21,9 @@ import pexpect
>  import pytest
>  from _pytest.runner import runtestprotocol
>  import ConfigParser
> +import re
>  import StringIO
> +import subprocess
>  import sys
>
>  # Globals: The HTML log file, and the connection to the U-Boot console.
> @@ -189,8 +191,43 @@ def pytest_configure(config):
>          import u_boot_console_exec_attach
>          console = u_boot_console_exec_attach.ConsoleExecAttach(log, ubconfig)
>
> -def pytest_generate_tests(metafunc):
> -    """pytest hook: parameterize test functions based on custom rules.
> +re_ut_test_list = re.compile(r'_u_boot_list_2_(dm|env)_test_2_\1_test_(.*)\s*$')
> +def generate_ut_subtest(metafunc, fixture_name):
> +    """Provide parametrization for a ut_subtest fixture.
> +
> +    Determines the set of unit tests built into a U-Boot binary by parsing the
> +    list of symbols present in the U-Boot binary. Provides this information to
> +    test functions by parameterizing their ut_subtest fixture parameter.
> +
> +    Args:
> +        metafunc: The pytest test function.
> +        fixture_name: The fixture name to test.
> +
> +    Returns:
> +        Nothing.
> +    """
> +
> +    # This does rely on an objdump binary, but that's quite likely to be
> +    # present. This approach trivially takes care of any source or Makefile-
> +    # level conditional compilation which may occur, and matches the test
> +    # execution order of a plain "ut dm" command. A source-scanning approach
> +    # would do neither. This approach also doesn't require access to the
> +    # U-Boot source tree when running tests.
> +
> +    cmd = 'objdump -t "%s" | sort' % (console.config.build_dir + '/u-boot')
> +    out = subprocess.check_output(cmd, shell=True)
> +    vals = []
> +    for l in out.splitlines():
> +        m = re_ut_test_list.search(l)
> +        if not m:
> +            continue
> +        vals.append(m.group(1) + ' ' + m.group(2))
> +
> +    ids = ['ut_' + s.replace(' ', '_') for s in vals]
> +    metafunc.parametrize(fixture_name, vals, ids=ids)
> +
> +def generate_config(metafunc, fixture_name):
> +    """Provide parametrization for {env,brd}__ fixtures.
>
>      If a test function takes parameter(s) (fixture names) of the form brd__xxx
>      or env__xxx, the brd and env configuration dictionaries are consulted to
> @@ -199,6 +236,7 @@ def pytest_generate_tests(metafunc):
>
>      Args:
>          metafunc: The pytest test function.
> +        fixture_name: The fixture name to test.
>
>      Returns:
>          Nothing.
> @@ -208,30 +246,49 @@ def pytest_generate_tests(metafunc):
>          'brd': console.config.brd,
>          'env': console.config.env,
>      }
> +    parts = fixture_name.split('__')
> +    if len(parts) < 2:
> +        return
> +    if parts[0] not in subconfigs:
> +        return
> +    subconfig = subconfigs[parts[0]]
> +    vals = []
> +    val = subconfig.get(fixture_name, [])
> +    # If that exact name is a key in the data source:
> +    if val:
> +        # ... use the dict value as a single parameter value.
> +        vals = (val, )
> +    else:
> +        # ... otherwise, see if there's a key that contains a list of
> +        # values to use instead.
> +        vals = subconfig.get(fixture_name + 's', [])
> +    def fixture_id(index, val):
> +        try:
> +            return val['fixture_id']
> +        except:
> +            return fixture_name + str(index)
> +    ids = [fixture_id(index, val) for (index, val) in enumerate(vals)]
> +    metafunc.parametrize(fixture_name, vals, ids=ids)
> +
> +def pytest_generate_tests(metafunc):
> +    """pytest hook: parameterize test functions based on custom rules.
> +
> +    Check each test function parameter (fixture name) to see if it is one of
> +    our custom names, and if so, provide the correct parametrization for that
> +    parameter.
> +
> +    Args:
> +        metafunc: The pytest test function.
> +
> +    Returns:
> +        Nothing.
> +    """
> +
>      for fn in metafunc.fixturenames:
> -        parts = fn.split('__')
> -        if len(parts) < 2:
> +        if fn == 'ut_subtest':
> +            generate_ut_subtest(metafunc, fn)
>              continue
> -        if parts[0] not in subconfigs:
> -            continue
> -        subconfig = subconfigs[parts[0]]
> -        vals = []
> -        val = subconfig.get(fn, [])
> -        # If that exact name is a key in the data source:
> -        if val:
> -            # ... use the dict value as a single parameter value.
> -            vals = (val, )
> -        else:
> -            # ... otherwise, see if there's a key that contains a list of
> -            # values to use instead.
> -            vals = subconfig.get(fn + 's', [])
> -        def fixture_id(index, val):
> -            try:
> -                return val["fixture_id"]
> -            except:
> -                return fn + str(index)
> -        ids = [fixture_id(index, val) for (index, val) in enumerate(vals)]
> -        metafunc.parametrize(fn, vals, ids=ids)
> +        generate_config(metafunc, fn)
>
>  @pytest.fixture(scope='function')
>  def u_boot_console(request):
> diff --git a/test/py/tests/test_ut.py b/test/py/tests/test_ut.py
> index b033ca54d756..cd85b3ddc0ce 100644
> --- a/test/py/tests/test_ut.py
> +++ b/test/py/tests/test_ut.py
> @@ -6,8 +6,8 @@ import os.path
>  import pytest
>
>  @pytest.mark.buildconfigspec('ut_dm')
> -def test_ut_dm(u_boot_console):
> -    """Execute the "ut dm" command."""
> +def test_ut_dm_init(u_boot_console):
> +    """Initialize data for ut dm tests."""
>
>      fn = u_boot_console.config.source_dir + '/testflash.bin'
>      if not os.path.exists(fn):
> @@ -16,14 +16,10 @@ def test_ut_dm(u_boot_console):
>          with open(fn, 'wb') as fh:
>              fh.write(data)
>
> -    output = u_boot_console.run_command('ut dm')
> -    assert output.endswith('Failures: 0')
> -
> -@pytest.mark.buildconfigspec('ut_env')
> -def test_ut_env(u_boot_console):
> -    """Execute the "ut env" command."""
> +def test_ut(u_boot_console, ut_subtest):
> +    """Execute a "ut" subtest."""
>
> -    output = u_boot_console.run_command('ut env')
> +    output = u_boot_console.run_command('ut ' + ut_subtest)
>      assert output.endswith('Failures: 0')
>
>  @pytest.mark.buildconfigspec('ut_time')
> --
> 2.7.0
>

Regards,
Simon

* [U-Boot] [PATCH] test/py: make each unit test a pytest
  2016-01-29  3:52 ` Simon Glass
@ 2016-01-29  5:08   ` Stephen Warren
  2016-01-29 18:23     ` Simon Glass
  0 siblings, 1 reply; 7+ messages in thread
From: Stephen Warren @ 2016-01-29  5:08 UTC (permalink / raw)
  To: u-boot

On 01/28/2016 08:52 PM, Simon Glass wrote:
> Hi Stephen,
> 
> On 28 January 2016 at 16:45, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> From: Stephen Warren <swarren@nvidia.com>
>>
>> A custom fixture named ut_subtest is implemented which is parametrized
>> with the names of all unit tests that the U-Boot binary supports. This
>> causes each U-Boot unit test to be exposed as a separate pytest. In turn,
>> this allows more fine-grained pass/fail counts and test selection, e.g.:
>>
>> test.py --bd sandbox -k ut_dm_usb
>>
>> ... will run about 8 tests at present.
>>
>> Signed-off-by: Stephen Warren <swarren@nvidia.com>
>> ---
>> This depends on at least my recently sent "test/py: run C-based unit tests".
>>
>>  test/py/conftest.py      | 105 ++++++++++++++++++++++++++++++++++++-----------
>>  test/py/tests/test_ut.py |  14 +++----
>>  2 files changed, 86 insertions(+), 33 deletions(-)
> 
> This seems a bit extreme. It might be better to move the remaining
> three commands under the 'ut' subcommand. Then all unit tests would be
> visible from the 'ut' help...

I'm not sure what you mean by "extreme"? Do you mean you don't want each
unit test exposed as a separate pytest? I thought based on our previous
conversation that was exactly what you wanted. If not, I'm not sure what
the deficiency in the current code is; either all the dm subtests are
executed at once by a single pytest with a single overall status, or
they're each a separate pytest with individual status. Any grouping
that's in between those seems like it would be entirely arbitrary?

* [U-Boot] [PATCH] test/py: make each unit test a pytest
  2016-01-29  5:08   ` Stephen Warren
@ 2016-01-29 18:23     ` Simon Glass
  2016-01-29 18:48       ` Stephen Warren
  0 siblings, 1 reply; 7+ messages in thread
From: Simon Glass @ 2016-01-29 18:23 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 28 January 2016 at 22:08, Stephen Warren <swarren@wwwdotorg.org> wrote:
> On 01/28/2016 08:52 PM, Simon Glass wrote:
>> Hi Stephen,
>>
>> On 28 January 2016 at 16:45, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>> From: Stephen Warren <swarren@nvidia.com>
>>>
>>> A custom fixture named ut_subtest is implemented which is parametrized
>>> with the names of all unit tests that the U-Boot binary supports. This
>>> causes each U-Boot unit test to be exposed as a separate pytest. In turn,
>>> this allows more fine-grained pass/fail counts and test selection, e.g.:
>>>
>>> test.py --bd sandbox -k ut_dm_usb
>>>
>>> ... will run about 8 tests at present.
>>>
>>> Signed-off-by: Stephen Warren <swarren@nvidia.com>
>>> ---
>>> This depends on at least my recently sent "test/py: run C-based unit tests".
>>>
>>>  test/py/conftest.py      | 105 ++++++++++++++++++++++++++++++++++++-----------
>>>  test/py/tests/test_ut.py |  14 +++----
>>>  2 files changed, 86 insertions(+), 33 deletions(-)
>>
>> This seems a bit extreme. It might be better to move the remaining
>> three commands under the 'ut' subcommand. Then all unit tests would be
>> visible from the 'ut' help...
>
> I'm not sure what you mean by "extreme"? Do you mean you don't want each
> unit test exposed as a separate pytest? I thought based on our previous
> conversation that was exactly what you wanted. If not, I'm not sure what
> the deficiency in the current code is; either all the dm subtests are
> executed at once by a single pytest with a single overall status, or
> they're each a separate pytest with individual status. Any grouping
> that's in between those seems like it would be entirely arbitrary?

I mean that there might be a simpler way to find out what unit tests
are available in U-Boot rather than using objdump! Can the 'ut'
command itself report this?

Also I'd prefer to move tests to be subcommands of 'ut' before wiring
them up to the pytest stuff.

Regards,
Simon

* [U-Boot] [PATCH] test/py: make each unit test a pytest
  2016-01-29 18:23     ` Simon Glass
@ 2016-01-29 18:48       ` Stephen Warren
  2016-01-29 20:11         ` Simon Glass
  0 siblings, 1 reply; 7+ messages in thread
From: Stephen Warren @ 2016-01-29 18:48 UTC (permalink / raw)
  To: u-boot

On 01/29/2016 11:23 AM, Simon Glass wrote:
> Hi Stephen,
>
> On 28 January 2016 at 22:08, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> On 01/28/2016 08:52 PM, Simon Glass wrote:
>>> Hi Stephen,
>>>
>>> On 28 January 2016 at 16:45, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>> From: Stephen Warren <swarren@nvidia.com>
>>>>
>>>> A custom fixture named ut_subtest is implemented which is parametrized
>>>> with the names of all unit tests that the U-Boot binary supports. This
>>>> causes each U-Boot unit test to be exposed as a separate pytest. In turn,
>>>> this allows more fine-grained pass/fail counts and test selection, e.g.:
>>>>
>>>> test.py --bd sandbox -k ut_dm_usb
>>>>
>>>> ... will run about 8 tests at present.
>>>>
>>>> Signed-off-by: Stephen Warren <swarren@nvidia.com>
>>>> ---
>>>> This depends on at least my recently sent "test/py: run C-based unit tests".
>>>>
>>>>   test/py/conftest.py      | 105 ++++++++++++++++++++++++++++++++++++-----------
>>>>   test/py/tests/test_ut.py |  14 +++----
>>>>   2 files changed, 86 insertions(+), 33 deletions(-)
>>>
>>> This seems a bit extreme. It might be better to move the remaining
>>> three commands under the 'ut' subcommand. Then all unit tests would be
>>> visible from the 'ut' help...
>>
>> I'm not sure what you mean by "extreme"? Do you mean you don't want each
>> unit test exposed as a separate pytest? I thought based on our previous
>> conversation that was exactly what you wanted. If not, I'm not sure what
>> the deficiency in the current code is; either all the dm subtests are
>> executed at once by a single pytest with a single overall status, or
>> they're each a separate pytest with individual status. Any grouping
>> that's in between those seems like it would be entirely arbitrary?
>
> I mean that there might be a simpler way to find out what unit tests
> are available in U-Boot rather than using objdump! Can the 'ut'
> command itself report this?

Well, the Python code could parse the ELF binary itself... :-)

We can't parse the source code to determine the test list, since it'd be 
hard to determine which tests were actually compiled in (based on 
.config feature support), vs. which were simply written but not compiled.

Perhaps we could add a new command-line option to U-Boot that /only/ 
prints out the list of supported unit tests. That would mean executing the
U-Boot binary on the host to determine the list though, which would 
limit the approach to sandbox; it couldn't ever work if we enabled unit 
tests on real HW. objdump should work in that scenario.

Or perhaps the build process could dump out a list of enabled unit 
tests, so test/py could simply read that file. At least that would push 
the objdump usage into the build process where it's basically guaranteed 
we have an objdump binary, plus we can use $(CROSS_COMPILE)objdump which 
would be better for cross-compiled binaries...
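
For reference, the "parse the ELF binary itself" option mentioned above
could look roughly like this, using the third-party pyelftools package
(an assumption; the patch under discussion shells out to objdump
instead):

    from elftools.elf.elffile import ELFFile  # pyelftools, if available

    def list_ut_symbols(path):
        """Yield unit-test linker-list symbol names from the ELF symtab."""
        with open(path, 'rb') as f:
            elf = ELFFile(f)
            symtab = elf.get_section_by_name('.symtab')
            if symtab is None:
                return
            for sym in symtab.iter_symbols():
                if sym.name.startswith('_u_boot_list_2_'):
                    yield sym.name

That would trade the external objdump dependency for a Python package
dependency, which is presumably part of the trade-off being weighed here.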

* [U-Boot] [PATCH] test/py: make each unit test a pytest
  2016-01-29 18:48       ` Stephen Warren
@ 2016-01-29 20:11         ` Simon Glass
  2016-02-02 19:43           ` Stephen Warren
  0 siblings, 1 reply; 7+ messages in thread
From: Simon Glass @ 2016-01-29 20:11 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 29 January 2016 at 11:48, Stephen Warren <swarren@wwwdotorg.org> wrote:
> On 01/29/2016 11:23 AM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 28 January 2016 at 22:08, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>
>>> On 01/28/2016 08:52 PM, Simon Glass wrote:
>>>>
>>>> Hi Stephen,
>>>>
>>>> On 28 January 2016 at 16:45, Stephen Warren <swarren@wwwdotorg.org>
>>>> wrote:
>>>>>
>>>>> From: Stephen Warren <swarren@nvidia.com>
>>>>>
>>>>> A custom fixture named ut_subtest is implemented which is parametrized
>>>>> with the names of all unit tests that the U-Boot binary supports. This
>>>>> causes each U-Boot unit test to be exposed as a separate pytest. In
>>>>> turn,
>>>>> this allows more fine-grained pass/fail counts and test selection,
>>>>> e.g.:
>>>>>
>>>>> test.py --bd sandbox -k ut_dm_usb
>>>>>
>>>>> ... will run about 8 tests at present.
>>>>>
>>>>> Signed-off-by: Stephen Warren <swarren@nvidia.com>
>>>>> ---
>>>>> This depends on at least my recently sent "test/py: run C-based unit
>>>>> tests".
>>>>>
>>>>>   test/py/conftest.py      | 105
>>>>> ++++++++++++++++++++++++++++++++++++-----------
>>>>>   test/py/tests/test_ut.py |  14 +++----
>>>>>   2 files changed, 86 insertions(+), 33 deletions(-)
>>>>
>>>>
>>>> This seems a bit extreme. It might be better to move the remaining
>>>> three commands under the 'ut' subcommand. Then all unit tests would be
>>>> visible from the 'ut' help...
>>>
>>>
>>> I'm not sure what you mean by "extreme"? Do you mean you don't want each
>>> unit test exposed as a separate pytest? I thought based on our previous
>>> conversation that was exactly what you wanted. If not, I'm not sure what
>>> the deficiency in the current code is; either all the dm subtests are
>>> executed at once by a single pytest with a single overall status, or
>>> they're each a separate pytest with individual status. Any grouping
>>> that's in between those seems like it would be entirely arbitrary?
>>
>>
>> I mean that there might be a simpler way to find out what unit tests
>> are available in U-Boot rather than using objdump! Can the 'ut'
>> command itself report this?
>
>
> Well, the Python code could parse the ELF binary itself... :-)

Eek!

>
> We can't parse the source code to determine the test list, since it'd be
> hard to determine which tests were actually compiled in (based on .config
> feature support), vs. which were simply written but not compiled.
>
> Perhaps we could add a new command-line option to U-Boot that /only/ prints
> out the list of supported unit tests. That would mean executing the U-Boot
> binary on the host to determine the list though, which would limit the
> approach to sandbox; it couldn't ever work if we enabled unit tests on real
> HW. objdump should work in that scenario.

That was what I was thinking actually. The 'ut' command already prints
a list when given no args, but you could add 'ut list'.

I'm not quite clear how useful the 'ut' tests are on real hardware.
They are seldom enabled. Do you actually parse the U-Boot binary for
the board?

>
> Or perhaps the build process could dump out a list of enabled unit tests, so
> test/py could simply read that file. At least that would push the objdump
> usage into the build process where it's basically guaranteed we have an
> objdump binary, plus we can use $(CROSS_COMPILE)objdump which would be
> better for cross-compiled binaries...

Or do you think it would be acceptable to just have a hard-coded list
of tests and try each one?

Or maybe your current approach is better than the alternatives...

Regards,
Simon
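
For comparison, the 'ut list' idea would presumably reduce to something
like the sketch below on the test/py side. The 'ut list' subcommand and
its one-"<suite> <test>"-per-line output format are hypothetical here,
and, as discussed above, this needs a running U-Boot at collection
time, which is what ties it to sandbox:

    def generate_ut_subtest_from_console(metafunc, fixture_name):
        # Hypothetical: assumes a 'ut list' subcommand that prints one
        # '<suite> <test>' pair per line; no such subcommand exists yet.
        output = console.run_command('ut list')
        vals = [l.strip() for l in output.splitlines() if l.strip()]
        ids = ['ut_' + s.replace(' ', '_') for s in vals]
        metafunc.parametrize(fixture_name, vals, ids=ids)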

* [U-Boot] [PATCH] test/py: make each unit test a pytest
  2016-01-29 20:11         ` Simon Glass
@ 2016-02-02 19:43           ` Stephen Warren
  0 siblings, 0 replies; 7+ messages in thread
From: Stephen Warren @ 2016-02-02 19:43 UTC (permalink / raw)
  To: u-boot

On 01/29/2016 01:11 PM, Simon Glass wrote:
> Hi Stephen,
> 
> On 29 January 2016 at 11:48, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> On 01/29/2016 11:23 AM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 28 January 2016 at 22:08, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>>
>>>> On 01/28/2016 08:52 PM, Simon Glass wrote:
>>>>>
>>>>> Hi Stephen,
>>>>>
>>>>> On 28 January 2016 at 16:45, Stephen Warren <swarren@wwwdotorg.org>
>>>>> wrote:
>>>>>>
>>>>>> From: Stephen Warren <swarren@nvidia.com>
>>>>>>
>>>>>> A custom fixture named ut_subtest is implemented which is parametrized
>>>>>> with the names of all unit tests that the U-Boot binary supports. This
>>>>>> causes each U-Boot unit test to be exposed as a separate pytest. In
>>>>>> turn,
>>>>>> this allows more fine-grained pass/fail counts and test selection,
>>>>>> e.g.:
>>>>>>
>>>>>> test.py --bd sandbox -k ut_dm_usb
>>>>>>
>>>>>> ... will run about 8 tests at present.
>>>>>>
>>>>>> Signed-off-by: Stephen Warren <swarren@nvidia.com>
>>>>>> ---
>>>>>> This depends on at least my recently sent "test/py: run C-based unit
>>>>>> tests".
>>>>>>
>>>>>>   test/py/conftest.py      | 105
>>>>>> ++++++++++++++++++++++++++++++++++++-----------
>>>>>>   test/py/tests/test_ut.py |  14 +++----
>>>>>>   2 files changed, 86 insertions(+), 33 deletions(-)
>>>>>
>>>>>
>>>>> This seems a bit extreme. It might be better to move the remaining
>>>>> three commands under the 'ut' subcommand. Then all unit tests would be
>>>>> visible from the 'ut' help...
>>>>
>>>>
>>>> I'm not sure what you mean by "extreme"? Do you mean you don't want each
>>>> unit test exposed as a separate pytest? I thought based on our previous
>>>> conversation that was exactly what you wanted. If not, I'm not sure what
>>>> the deficiency in the current code is; either all the dm subtests are
>>>> executed at once by a single pytest with a single overall status, or
>>>> they're each a separate pytest with individual status. Any grouping
>>>> that's in between those seems like it would be entirely arbitrary?
>>>
>>>
>>> I mean that there might be a simpler way to find out what unit tests
>>> are available in U-Boot rather than using objdump! Can the 'ut'
>>> command itself report this?
>>
>>
>> Well, the Python code could parse the ELF binary itself... :-)
> 
> Eek!
> 
>> We can't parse the source code to determine the test list, since it'd be
>> hard to determine which tests were actually compiled in (based on .config
>> feature support), vs. which were simply written but not compiled.
>>
>> Perhaps we could add a new command-line option to U-Boot that /only/ prints
>> out the list of supported unit tests. That would mean executing the U-Boot
>> binary on the host to determine the list though, which would limit the
>> approach to sandbox; it couldn't ever work if we enabled unit tests on real
>> HW. objdump should work in that scenario.
> 
> That was what I was thinking actually. The 'ut' command already prints
> a list when given no args, but you could add 'ut list'.
> 
> I'm not quite clear how useful the 'ut' tests are on real hardware.
> They are seldom enabled. Do you actually parse the U-Boot binary for
> the board?

There's no other place in the test system that parses the U-Boot binary.

>> Or perhaps the build process could dump out a list of enabled unit tests, so
>> test/py could simply read that file. At least that would push the objdump
>> usage into the build process where it's basically guaranteed we have an
>> objdump binary, plus we can use $(CROSS_COMPILE)objdump which would be
>> better for cross-compiled binaries...
> 
> Or do you think it would be acceptable to just have a hard-coded list
> of tests and try each one?
> 
> Or maybe your current approach is better than the alternatives...

Overall, I still like the idea of simply parsing the binary best.

I see no reason why (at least some subset of) unit tests could not ever
be enabled in non-sandbox builds. So, I'd like the test system not to
assume that they won't be. This means we can't execute the binary to
find out the list of enabled tests beforehand, since determining the
list has to happen before we run any tests, and hence happen via code on
the host machine, which can't run target binaries.

One improvement I can make is to run the objdump during the build
process. As part of generating u-boot and u-boot.map, we can also call
objdump to generate u-boot.syms. This isolates use of "compiler" tools
like objdump to the Makefile, a more typical place to run them. The
parsing of u-boot.syms can be left up to the test scripts.
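
The test-side half of that would be a near drop-in replacement for the
objdump invocation in the patch, reusing its re_ut_test_list regex. A
sketch, assuming the build wrote the sorted symbol table to a
u-boot.syms file (the file name and format are hypothetical at this
point):

    def generate_ut_subtest_from_syms(metafunc, fixture_name):
        # Hypothetical: assumes the Makefile ran something like
        # '$(CROSS_COMPILE)objdump -t u-boot | sort > u-boot.syms'.
        fn = console.config.build_dir + '/u-boot.syms'
        vals = []
        with open(fn) as fh:
            for l in fh:
                m = re_ut_test_list.search(l)
                if m:
                    vals.append(m.group(1) + ' ' + m.group(2))
        ids = ['ut_' + s.replace(' ', '_') for s in vals]
        metafunc.parametrize(fixture_name, vals, ids=ids)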
