All of lore.kernel.org
 help / color / mirror / Atom feed
* [U-Boot] [PATCH] Implement pytest-based test infrastructure
@ 2015-11-15  6:53 Stephen Warren
  2015-11-19 14:45 ` Simon Glass
  0 siblings, 1 reply; 19+ messages in thread
From: Stephen Warren @ 2015-11-15  6:53 UTC (permalink / raw)
  To: u-boot

This tool aims to test U-Boot by executing U-Boot shell commands using the
console interface. A single top-level script exists to execute or attach
to the U-Boot console, run the entire script of tests against it, and
summarize the results. Advantages of this approach are:

- Testing is performed in the same way a user or script would interact
  with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself.
  It is asserted that writing test-related code in Python is simpler and
  more flexible that writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.

A few simple tests are provided as examples. Soon, we should convert as
many as possible of the other tests in test/* and test/cmd_ut.c too.

In the future, I hope to publish (out-of-tree) the hook scripts, relay
control utilities, and udev rules I will use for my own HW setup.

See README.md for more details!

Signed-off-by: Stephen Warren <swarren@wwwdotorg.org>
---
 .gitignore                           |   1 +
 test/py/README.md                    | 287 +++++++++++++++++++++++++++++++++++
 test/py/board_jetson_tk1.py          |   1 +
 test/py/board_sandbox.py             |   1 +
 test/py/board_seaboard.py            |   1 +
 test/py/conftest.py                  | 225 +++++++++++++++++++++++++++
 test/py/multiplexed_log.css          |  70 +++++++++
 test/py/multiplexed_log.py           | 172 +++++++++++++++++++++
 test/py/pytest.ini                   |   5 +
 test/py/soc_tegra124.py              |   1 +
 test/py/soc_tegra20.py               |   1 +
 test/py/test.py                      |  12 ++
 test/py/test_000_version.py          |   9 ++
 test/py/test_env.py                  |  96 ++++++++++++
 test/py/test_help.py                 |   2 +
 test/py/test_md.py                   |  12 ++
 test/py/test_sandbox_exit.py         |  15 ++
 test/py/test_unknown_cmd.py          |   4 +
 test/py/uboot_console_base.py        | 143 +++++++++++++++++
 test/py/uboot_console_exec_attach.py |  28 ++++
 test/py/uboot_console_sandbox.py     |  22 +++
 21 files changed, 1108 insertions(+)
 create mode 100644 test/py/README.md
 create mode 100644 test/py/board_jetson_tk1.py
 create mode 100644 test/py/board_sandbox.py
 create mode 100644 test/py/board_seaboard.py
 create mode 100644 test/py/conftest.py
 create mode 100644 test/py/multiplexed_log.css
 create mode 100644 test/py/multiplexed_log.py
 create mode 100644 test/py/pytest.ini
 create mode 100644 test/py/soc_tegra124.py
 create mode 100644 test/py/soc_tegra20.py
 create mode 100755 test/py/test.py
 create mode 100644 test/py/test_000_version.py
 create mode 100644 test/py/test_env.py
 create mode 100644 test/py/test_help.py
 create mode 100644 test/py/test_md.py
 create mode 100644 test/py/test_sandbox_exit.py
 create mode 100644 test/py/test_unknown_cmd.py
 create mode 100644 test/py/uboot_console_base.py
 create mode 100644 test/py/uboot_console_exec_attach.py
 create mode 100644 test/py/uboot_console_sandbox.py

diff --git a/.gitignore b/.gitignore
index 33abbd3d0783..b276b3a160bb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -20,6 +20,7 @@
 *.bin
 *.patch
 *.cfgtmp
+*.pyc
 
 # host programs on Cygwin
 *.exe
diff --git a/test/py/README.md b/test/py/README.md
new file mode 100644
index 000000000000..70104d2f3b5e
--- /dev/null
+++ b/test/py/README.md
@@ -0,0 +1,287 @@
+# U-Boot pytest suite
+
+## Introduction
+
+This tool aims to test U-Boot by executing U-Boot shell commands using the
+console interface. A single top-level script exists to execute or attach to the
+U-Boot console, run the entire script of tests against it, and summarize the
+results. Advantages of this approach are:
+
+- Testing is performed in the same way a user or script would interact with
+  U-Boot; there can be no disconnect.
+- There is no need to write or embed test-related code into U-Boot itself.
+  It is asserted that writing test-related code in Python is simpler and more
+  flexible that writing it all in C.
+- It is reasonably simple to interact with U-Boot in this way.
+
+## Requirements
+
+The test suite is implemented using pytest. Interaction with the U-Boot
+console uses pexpect. Interaction with real hardware uses the tools of your
+choice; you get to implement various "hook" scripts that are called by the
+test suite at the appropriate time.
+
+On Debian or Debian-like distributions, the following packages are required.
+Similar package names should exist in other distributions.
+
+| Package        | Version tested (Ubuntu 14.04) |
+| -------------- | ----------------------------- |
+| python         | 2.7.5-5ubuntu3                |
+| python-pytest  | 2.5.1-1                       |
+| python-pexpect | 3.1-1ubuntu0.1                |
+
+The test script supports either:
+
+- Executing a sandbox port of U-Boot on the local machine as a sub-process,
+  and interacting with it over stdin/stdout.
+- Executing external "hook" scripts to flash a U-Boot binary onto a physical
+  board, attach to the board's console stream, and reset the board. Further
+  details are described later.
+
+### Using `virtualenv` to provide requirements
+
+Older distributions (e.g. Ubuntu 10.04) may not provide all the required
+packages, or may provide versions that are too old to run the test suite. One
+can use the Python `virtualenv` script to locally install more up-to-date
+versions of the required packages without interfering with the OS installation.
+For example:
+
+```bash
+$ cd /path/to/u-boot
+$ sudo apt-get install python python-virtualenv pexpect
+$ virtualenv venv
+$ . ./venv/bin/activate
+$ pip install pytest
+```
+
+## Testing sandbox
+
+To run the testsuite on the sandbox port (U-Boot built as a native user-space
+application), simply execute:
+
+```
+./test/py/test.py --bd sandbox --build
+```
+
+The `--bd` option tells the test suite which board type is being tested. This
+lets the test suite know which features the board has, and hence exactly what
+can be tested.
+
+The `--build` option tells U-Boot to compile U-Boot. Alternatively, you may
+omit this option and build U-Boot yourself, in whatever way you choose, before
+running the test script.
+
+The test script will attach to U-Boot, execute all valid tests for the board,
+then print a summary of the test process. A complete log of the test session
+will be written to `${build_dir}/test-log.html`. This is best viewed in a web
+browser, by may be read directly as plain text, perhaps with the aid of the
+`html2text` utility.
+
+## Command-line options
+
+- `--board-type`, `--bd`, `-B` set the type of the board to be tested. For
+  example, `sandbox` or `seaboard`.
+- `--board-identity`, `--id` set the identity of the board to be tested.
+  This allows differentiation between multiple instances of the same type of
+  physical board that are attached to the same host machine. This parameter is
+  not interpreted by the test script in any way, but rather is simply passed
+  to the hook scripts described below, and may be used in any site-specific
+  way deemed necessary.
+- `--build` indicates that the test script should compile U-Boot itself
+  before running the tests. If using this option, make sure that any
+  environment variables required by the build process are already set, such as
+  `$CROSS_COMPILE`.
+- `--build-dir` sets the directory containing the compiled U-Boot binaries.
+  If omitted, this is `${source_dir}/build-${board_type}`.
+- `--result-dir` sets the directory to write results, such as log files,
+  into. If omitted, the build directory is used.
+- `--persistent-data-dir` sets the directory used to store persistent test
+  data. This is test data that may be re-used across test runs, such as file-
+  system images.
+
+`pytest` also implements a number of its own command-line options. Please see
+`pytest` documentation for complete details. Execute `py.test --version` for
+a brief summary. Note that U-Boot's test.py script passes all command-line
+arguments directly to `pytest` for processing.
+
+## Testing real hardware
+
+The tools and techniques used to interact with real hardware will vary
+radically between different host and target systems, and the whims of the user.
+For this reason, the test suite does not attempt to directly interact with real
+hardware in any way. Rather, it expects a standardized set of "hook" scripts to
+exist which implement certain actions on behalf of the test suite. This keeps
+the test suite simple and isolated from system variances unrelated to U-Boot
+features.
+
+### Hook scripts
+
+The test suite requires the following hook scripts to be executable via
+`$PATH`:
+
+#### Environment variables
+
+The following environment variables are set when running hook scripts:
+
+- `UBOOT_BOARD_TYPE` the board type being tested.
+- `UBOOT_BOARD_IDENTITY` the board identity being tested, or `na` if none was
+  specified.
+- `UBOOT_SOURCE_DIR` the U-Boot source directory.
+- `UBOOT_TEST_PY_DIR` the full path to `test/py/` in the source directory.
+- `UBOOT_BUILD_DIR` the U-Boot build directory.
+- `UBOOT_RESULT_DIR` the test result directory.
+- `UBOOT_PERSISTENT_DATA_DIR` the test peristent data directory.
+
+#### `uboot-test-console`
+
+This script provides access to the U-Boot console. The script's stdin/stdout
+should be connected to the board's console. This script should continue to run
+indefinitely, until killed. The test suite will run this script in parallel
+with all other hooks.
+
+This script may be implemented by executing e.g. `cu`, `conmux`, etc.
+
+If you are able to run U-Boot under a hardware simulator such as qemu, then
+you would likely spawn that simulator from this script. However, note that
+`uboot-test-reset` may be called multiple times per test script run, and must
+cause U-Boot to start execution from scratch each time. Hopefully your
+simulator includes a virtual reset button!
+
+#### `uboot-test-flash`
+
+Prior to running the test suite against a board, some arrangement must be made
+so that the board executes the particular U-Boot binary to be tested. Often,
+this involves writing the U-Boot binary to the board's flash ROM. The test
+suite calls this hook script for that purpose.
+
+This script should perform the entire flashing process synchronously; the
+script should only exit once flashing is complete, and a board reset will
+cause the newly flashed U-Boot binary to be executed.
+
+It is conceivable that this script will do nothing. This might be useful in
+the following cases:
+
+- Some other process has already written the desired U-Boot binary into the
+  board's flash prior to running the test suite.
+- The board allows U-Boot to be downloaded directly into RAM, and executed
+  from there. Use of this feature will reduce wear on the board's flash, so
+  may be preferable if available, and if cold boot testing of U-Boot is not
+  required. If this feature is used, the `uboot-test-reset` script should
+  peform this download, since the board could conceivably be reset multiple
+  times in a single test run.
+
+It is up to the user to determine if those situations exist, and to code this
+hook script appropriately.
+
+This script will typically be implemented by calling out to some SoC- or
+board-specific vendor flashing utility.
+
+#### `uboot-test-reset`
+
+Whenever the test suite needs to reset the target board, this script is
+executed. This is guaranteed to happen at least once, prior to executing the
+first test function. If the test script determines the remote U-Boot has
+crashed or hung, it will execute this script again to restore U-Boot to an
+operational state before running the next test function.
+
+This script will likely be implemented by communicating with some form of
+relay or electronic switch attached to the board's reset signal.
+
+The semantics of this script require that when it is executed, U-Boot will
+start running from scratch. If the U-Boot binary to be tested has been written
+to flash, pulsing the board's reset signal is likely all this script need do.
+However, in some scenarios, this script may perform other actions. For
+example, it may call out to some SoC- or board-specific vendor utility in order
+to download the U-Boot binary directly into RAM and execute it. This would
+avoid the need for `uboot-test-flash` to actually write U-Boot to flash, thus
+saving wear on the flash chip(s).
+
+### Board-type-specific configuration
+
+Each board has a different configuration and behaviour. Many of these
+differences can be automatically detected by parsing the `.config` file in the
+build directory. However, some differences can't yet be handled automatically.
+
+For each board, an optional Python module `board_${board_type}.py` may exist
+to provide board-specific information to the test script. Any global value
+defined in these modules is available for use by any test function. The data
+contained in these scripts must be purely derived from U-Boot source code.
+Hence, these configuration files are part of the U-Boot source tree too.
+
+### Execution environment configuration
+
+Each user's hardware setup may enable testing different subsets of the features
+implemented by a particular board's configuration of U-Boot. For example, a
+U-Boot configuration may support USB device mode and USB Mass Storage, but this
+can only be tested if a USB cable is connected between the board and the host
+machine running the test script.
+
+For each board, optional Python modules `boardenv_${board_type}.py` and
+`boardenv_${board_type}_${board_identity}.py` may exist to provide
+board-specific and board-identity-specific information to the test script. Any
+global value defined in these modules is available for use by any test
+function. The data contained in these is specific to a particular user's
+hardware configuration. Hence, these configuration files are not part of the
+U-Boot source tree, and should be installed outside of the source tree. Users
+should set `$PYTHONPATH` prior to running the test script to allow these
+modules to be loaded.
+
+### Configuration parameter usage
+
+The test scripts rely on the following variables being defined by the board
+module:
+
+- `ram_base` an integer indicating the address of the start of RAM. This may
+  be used by tests that read/write RAM.
+
+### Complete invocation example
+
+Assuming that you have installed the hook scripts into $HOME/ubtest/bin, and
+any required environment configuration Python modules into $HOME/ubtest/py,
+then you would likely invoke the test script as follows:
+
+If U-Boot has already been built:
+
+```bash
+PATH=$HOME/ubtest/bin:$PATH \
+    PYTHONPATH=${HOME}/ubtest/py:${PYTHONPATH} \
+    ./test/py/test.py --bd seaboard
+```
+
+If you want the test script to compile U-Boot for you too, then you likely
+need to set `$CROSS_COMPILE` to allow this, and invoke the test script as
+follow:
+
+```bash
+CROSS_COMPILE=arm-none-eabi- \
+    PATH=$HOME/ubtest/bin:$PATH \
+    PYTHONPATH=${HOME}/ubtest/py:${PYTHONPATH} \
+    ./test/py/test.py --bd seaboard --build
+```
+
+## Writing tests
+
+Please refer to the pytest documentation for details of writing pytest tests.
+Details specific to the U-Boot test suite are described below.
+
+A test fixture named `uboot_console` should be used by each test function. This
+provides the means to interact with the U-Boot console, and retrieve board and
+environment configuration information.
+
+The function `uboot_console.run_command()` executes a shell command on the
+U-Boot console, and returns all output from that command. This allows
+validation or interpretation of the command output. This function validates
+that certain strings are not seen on the U-Boot console. These include shell
+error messages and the U-Boot sign-on message (in order to detect unexpected
+board resets). See the source of `uboot_console_base.py` for a complete list of
+"bad" strings. Some test scenarios are expected to trigger these strings. Use
+`uboot_console.disable_check()` to temporarily disable checking for specific
+strings. See `test_unknown_cmd.py` for an example.
+
+Board- and board-environment configuration values may be accessed as sub-fields
+of the `uboot_console.config` object, for example
+`uboot_console.config.ram_base`.
+
+Build configuration values (from `.config`) may be accessed via the dictionary
+`uboot_console.config.buildconfig`, with keys equal to the Kconfig variable
+names.
diff --git a/test/py/board_jetson_tk1.py b/test/py/board_jetson_tk1.py
new file mode 100644
index 000000000000..3fb0753a07f2
--- /dev/null
+++ b/test/py/board_jetson_tk1.py
@@ -0,0 +1 @@
+from soc_tegra124 import *
diff --git a/test/py/board_sandbox.py b/test/py/board_sandbox.py
new file mode 100644
index 000000000000..b3ed9ec44651
--- /dev/null
+++ b/test/py/board_sandbox.py
@@ -0,0 +1 @@
+ram_base = 0
diff --git a/test/py/board_seaboard.py b/test/py/board_seaboard.py
new file mode 100644
index 000000000000..8d32b661849d
--- /dev/null
+++ b/test/py/board_seaboard.py
@@ -0,0 +1 @@
+from soc_tegra20 import *
diff --git a/test/py/conftest.py b/test/py/conftest.py
new file mode 100644
index 000000000000..4b40bdd89a60
--- /dev/null
+++ b/test/py/conftest.py
@@ -0,0 +1,225 @@
+import atexit
+import errno
+import os
+import os.path
+import pexpect
+import pytest
+from _pytest.runner import runtestprotocol
+import ConfigParser
+import StringIO
+import sys
+
+log = None
+console = None
+
+def mkdir_p(path):
+    try:
+        os.makedirs(path)
+    except OSError as exc:
+        if exc.errno == errno.EEXIST and os.path.isdir(path):
+            pass
+        else:
+            raise
+
+def pytest_addoption(parser):
+    parser.addoption("--build-dir", default=None,
+        help="U-Boot build directory (O=)")
+    parser.addoption("--result-dir", default=None,
+        help="U-Boot test result/tmp directory")
+    parser.addoption("--persistent-data-dir", default=None,
+        help="U-Boot test persistent generated data directory")
+    parser.addoption("--board-type", "--bd", "-B", default="sandbox",
+        help="U-Boot board type")
+    parser.addoption("--board-identity", "--id", default="na",
+        help="U-Boot board identity/instance")
+    parser.addoption("--build", default=False, action="store_true",
+        help="Compile U-Boot before running tests")
+
+def pytest_configure(config):
+    global log
+    global console
+    global ubconfig
+
+    test_py_dir = os.path.dirname(os.path.abspath(__file__))
+    source_dir = os.path.dirname(os.path.dirname(test_py_dir))
+
+    board_type = config.getoption("board_type")
+    board_type_fn = board_type.replace("-", "_")
+
+    board_identity = config.getoption("board_identity")
+    board_identity_fn = board_identity.replace("-", "_")
+
+    build_dir = config.getoption("build_dir")
+    if not build_dir:
+        build_dir = source_dir + "/build-" + board_type
+    mkdir_p(build_dir)
+
+    result_dir = config.getoption("result_dir")
+    if not result_dir:
+        result_dir = build_dir
+    mkdir_p(result_dir)
+
+    persistent_data_dir = config.getoption("persistent_data_dir")
+    if not persistent_data_dir:
+        persistent_data_dir = build_dir + "/persistent-data"
+    mkdir_p(persistent_data_dir)
+
+    import multiplexed_log
+    log = multiplexed_log.Logfile(result_dir + "/test-log.html")
+
+    if config.getoption("build"):
+        if build_dir != source_dir:
+            o_opt = "O=%s" % build_dir
+        else:
+            o_opt = ""
+        cmds = (
+            ["make", o_opt, "-s", board_type + "_defconfig"],
+            ["make", o_opt, "-s", "-j8"],
+        )
+        runner = log.get_runner("make", sys.stdout)
+        for cmd in cmds:
+            runner.run(cmd, cwd=source_dir)
+        runner.close()
+
+    board_type_modfn = "board_" + board_type_fn
+    ubconfig = __import__(board_type_modfn)
+
+    override_modfns = [
+        "boardenv_" + board_type_fn,
+        "boardenv_" + board_type_fn + "_" + board_identity_fn,
+    ]
+    for override_modfn in override_modfns:
+        try:
+            override_mod = __import__(override_modfn)
+        except ImportError:
+            continue
+        for (k, v) in override_mod.__dict__.iteritems():
+            if k.startswith("_"):
+                continue
+            ubconfig.__dict__[k] = v
+
+    dot_config = build_dir + "/.config"
+    if os.path.exists(dot_config):
+        with open(dot_config, "rt") as f:
+            ini_str = "[root]\n" + f.read()
+            ini_sio = StringIO.StringIO(ini_str)
+            parser = ConfigParser.RawConfigParser()
+            parser.readfp(ini_sio)
+            ubconfig.buildconfig = dict(parser.items("root"))
+    else:
+        ubconfig.buildconfig = dict()
+
+    ubconfig.test_py_dir = test_py_dir
+    ubconfig.source_dir = source_dir
+    ubconfig.build_dir = build_dir
+    ubconfig.result_dir = result_dir
+    ubconfig.persistent_data_dir = persistent_data_dir
+    ubconfig.board_type = board_type
+    ubconfig.board_identity = board_identity
+
+    env_vars = (
+        "board_type",
+        "board_identity",
+        "source_dir",
+        "test_py_dir",
+        "build_dir",
+        "result_dir",
+        "persistent_data_dir",
+    )
+    for v in env_vars:
+        os.environ["UBOOT_" + v.upper()] = getattr(ubconfig, v)
+
+    if board_type == "sandbox":
+        import uboot_console_sandbox
+        console = uboot_console_sandbox.ConsoleSandbox(log, ubconfig)
+    else:
+        import uboot_console_exec_attach
+        console = uboot_console_exec_attach.ConsoleExecAttach(log, ubconfig)
+
+ at pytest.fixture(scope="session")
+def uboot_console(request):
+    return console
+
+def cleanup():
+    if console:
+        console.close()
+    if log:
+        log.close()
+atexit.register(cleanup)
+
+def setup_boardspec(item):
+    mark = item.get_marker("boardspec")
+    if not mark:
+        return
+    required_boards = []
+    for board in mark.args:
+        if board.startswith("!"):
+            if ubconfig.board_type == board[1:]:
+                pytest.skip("board not supported")
+                return
+        else:
+            required_boards.append(board)
+    if required_boards and ubconfig.board_type not in required_boards:
+        pytest.skip("board not supported")
+
+def setup_buildconfigspec(item):
+    mark = item.get_marker("buildconfigspec")
+    if not mark:
+        return
+    for option in mark.args:
+        if not ubconfig.buildconfig.get("config_" + option.lower(), None):
+            pytest.skip(".config feature not enabled")
+
+def setup_envspec(item):
+    mark = item.get_marker("envspec")
+    if not mark:
+        return
+    for feature in mark.args:
+        if not ubconfig.__dict__.get("env__" + feature, False):
+            pytest.skip("env feature not supported")
+
+def pytest_runtest_setup(item):
+    log.start_section(item.name)
+    if console.at_prompt:
+        console.logstream.write(console.prompt, implicit=True)
+    setup_boardspec(item)
+    setup_buildconfigspec(item)
+    setup_envspec(item)
+
+def pytest_runtest_protocol(item, nextitem):
+    reports = runtestprotocol(item, nextitem=nextitem)
+    failed = None
+    skipped = None
+    for report in reports:
+        if report.outcome == "failed":
+            failed = report
+            break
+        if report.outcome == "skipped":
+            if not skipped:
+                skipped = report
+
+    try:
+        if failed:
+            msg = "FAILED:\n" + str(failed.longrepr)
+            log.status_fail(msg)
+        elif skipped:
+            msg = "SKIPPED:\n" + str(skipped.longrepr)
+            log.status_skipped(msg)
+        else:
+            log.status_pass("OK")
+    except:
+        # If something went wrong with logging, it's better to let the test
+        # process continue, which may report other exceptions that triggered
+        # the logging issue (e.g. console.log wasn't created). Hence, just
+        # squash the exception. If the test setup failed due to e.g. syntax
+        # error somewhere else, this won't be seen. However, once that issue
+        # is fixed, if this exception still exists, it will then be logged as
+        # part of the test's stdout.
+        import traceback
+        print "Exception occurred while logging runtest status:"
+        traceback.print_exc()
+        # FIXME: Can we force a test failure here?
+
+    log.end_section(item.name)
+
+    return reports
diff --git a/test/py/multiplexed_log.css b/test/py/multiplexed_log.css
new file mode 100644
index 000000000000..f0bfffe892de
--- /dev/null
+++ b/test/py/multiplexed_log.css
@@ -0,0 +1,70 @@
+body {
+    background-color: black;
+    color: #ffffff;
+}
+
+.implicit {
+    color: #808080;
+}
+
+.section {
+    border-style: solid;
+    border-color: #303030;
+    border-width: 0px 0px 0px 5px;
+    padding-left: 5px
+}
+
+.section-header {
+    background-color: #303030;
+    margin-left: -5px;
+    margin-top: 5px;
+}
+
+.section-trailer {
+    display: none;
+}
+
+.stream {
+    border-style: solid;
+    border-color: #303030;
+    border-width: 0px 0px 0px 5px;
+    padding-left: 5px
+}
+
+.stream-header {
+    background-color: #303030;
+    margin-left: -5px;
+    margin-top: 5px;
+}
+
+.stream-trailer {
+    display: none;
+}
+
+.error {
+    color: #ff0000
+}
+
+.warning {
+    color: #ffff00
+}
+
+.info {
+    color: #808080
+}
+
+.action {
+    color: #8080ff
+}
+
+.status-pass {
+    color: #00ff00
+}
+
+.status-skipped {
+    color: #ffff00
+}
+
+.status-fail {
+    color: #ff0000
+}
diff --git a/test/py/multiplexed_log.py b/test/py/multiplexed_log.py
new file mode 100644
index 000000000000..46e27de22fe5
--- /dev/null
+++ b/test/py/multiplexed_log.py
@@ -0,0 +1,172 @@
+import cgi
+import os.path
+import shutil
+import subprocess
+
+mod_dir = os.path.dirname(os.path.abspath(__file__))
+
+class LogfileStream(object):
+    def __init__(self, logfile, name, chained_file):
+        self.logfile = logfile
+        self.name = name
+        self.chained_file = chained_file
+
+    def close(self):
+        pass
+
+    def write(self, data, implicit=False):
+        self.logfile.write(self, data, implicit)
+        if self.chained_file:
+            self.chained_file.write(data)
+
+    def flush(self):
+        self.logfile.flush()
+        if self.chained_file:
+            self.chained_file.flush()
+
+class RunAndLog(object):
+    def __init__(self, logfile, name, chained_file):
+        self.logfile = logfile
+        self.name = name
+        self.chained_file = chained_file
+
+    def close(self):
+        pass
+
+    def run(self, cmd, cwd=None):
+        msg = "+" + " ".join(cmd) + "\n"
+        if self.chained_file:
+            self.chained_file.write(msg)
+        self.logfile.write(self, msg)
+
+        try:
+            p = subprocess.Popen(cmd, cwd=cwd,
+                stdin=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+            (output, stderr) = p.communicate()
+            status = p.returncode
+        except subprocess.CalledProcessError as cpe:
+            output = cpe.output
+            status = cpe.returncode
+        self.logfile.write(self, output)
+        if status:
+            if self.chained_file:
+                self.chained_file.write(output)
+            raise Exception("command failed; exit code " + str(status))
+
+class Logfile(object):
+    def __init__(self, fn):
+        self.f = open(fn, "wt")
+        self.last_stream = None
+        self.linebreak = True
+        self.blocks = []
+        shutil.copy(mod_dir + "/multiplexed_log.css", os.path.dirname(fn))
+        self.f.write("""\
+<html>
+<head>
+<link rel="stylesheet" type="text/css" href="multiplexed_log.css">
+</head>
+<body>
+<tt>
+""")
+
+    def close(self):
+        self.f.write("""\
+</tt>
+</body>
+</html>
+""")
+        self.f.close()
+
+    def _escape(self, data):
+        data = data.replace(chr(13), "")
+        data = "".join((c in self._nonprint) and ("%%%02x" % ord(c)) or
+                       c for c in data)
+        data = cgi.escape(data)
+        data = data.replace(" ", "&nbsp;")
+        self.linebreak = data[-1:-1] == "\n"
+        data = data.replace(chr(10), "<br/>\n")
+        return data
+
+    def _terminate_stream(self):
+        if not self.last_stream:
+            return
+        if not self.linebreak:
+            self.f.write("<br/>\n")
+        self.f.write("<div class=\"stream-trailer\" id=\"" +
+                     self.last_stream.name + "\">End stream: " +
+                     self.last_stream.name + "</div>\n")
+        self.f.write("</div>\n")
+        self.last_stream = None
+
+    def _note(self, note_type, msg):
+        self._terminate_stream()
+        self.f.write("<div class=\"" + note_type + "\">\n")
+        self.f.write(self._escape(msg))
+        self.f.write("<br/>\n")
+        self.f.write("</div>\n")
+        self.linebreak = True
+
+    def start_section(self, marker):
+        self._terminate_stream()
+        self.blocks.append(marker)
+        blk_path = "/".join(self.blocks)
+        self.f.write("<div class=\"section\" id=\"" + blk_path + "\">\n")
+        self.f.write("<div class=\"section-header\" id=\"" + blk_path +
+                     "\">Section: " + blk_path + "</div>\n")
+
+    def end_section(self, marker):
+        if (not self.blocks) or (marker != self.blocks[-1]):
+            raise Exception("Block nesting mismatch: \"%s\" \"%s\"" %
+                            (marker, "/".join(self.blocks)))
+        self._terminate_stream()
+        blk_path = "/".join(self.blocks)
+        self.f.write("<div class=\"section-trailer\" id=\"section-trailer-" +
+                     blk_path + "\">End section: " + blk_path + "</div>\n")
+        self.f.write("</div>\n")
+        self.blocks.pop()
+
+    def error(self, msg):
+        self._note("error", msg)
+
+    def warning(self, msg):
+        self._note("warning", msg)
+
+    def info(self, msg):
+        self._note("info", msg)
+
+    def action(self, msg):
+        self._note("action", msg)
+
+    def status_pass(self, msg):
+        self._note("status-pass", msg)
+
+    def status_skipped(self, msg):
+        self._note("status-skipped", msg)
+
+    def status_fail(self, msg):
+        self._note("status-fail", msg)
+
+    def get_stream(self, name, chained_file=None):
+        return LogfileStream(self, name, chained_file)
+
+    def get_runner(self, name, chained_file=None):
+        return RunAndLog(self, name, chained_file)
+
+    _nonprint = ("^%" + "".join(chr(c) for c in range(0, 32) if c != 10) +
+                 "".join(chr(c) for c in range(127, 256)))
+
+    def write(self, stream, data, implicit=False):
+        if stream != self.last_stream:
+            self._terminate_stream()
+            self.f.write("<div class=\"stream\" id=\"%s\">\n" % stream.name)
+            self.f.write("<div class=\"stream-header\" id=\"" + stream.name +
+                         "\">Stream: " + stream.name + "</div>\n")
+        if implicit:
+            self.f.write("<span class=\"implicit\">")
+        self.f.write(self._escape(data))
+        if implicit:
+            self.f.write("</span>")
+        self.last_stream = stream
+
+    def flush(self):
+        self.f.flush()
diff --git a/test/py/pytest.ini b/test/py/pytest.ini
new file mode 100644
index 000000000000..da0d9e553a4b
--- /dev/null
+++ b/test/py/pytest.ini
@@ -0,0 +1,5 @@
+[pytest]
+markers =
+    boardspec: U-Boot: Describes the set of boards a test can/can't run on.
+    buildconfigspec: U-Boot: Describes Kconfig/config-header constraints.
+    envspec: U-Boot: Describes execution environment constraints.
diff --git a/test/py/soc_tegra124.py b/test/py/soc_tegra124.py
new file mode 100644
index 000000000000..a7427dc53261
--- /dev/null
+++ b/test/py/soc_tegra124.py
@@ -0,0 +1 @@
+ram_base = 0x80000000
diff --git a/test/py/soc_tegra20.py b/test/py/soc_tegra20.py
new file mode 100644
index 000000000000..16e3a4966c5c
--- /dev/null
+++ b/test/py/soc_tegra20.py
@@ -0,0 +1 @@
+ram_base = 0x00000000
diff --git a/test/py/test.py b/test/py/test.py
new file mode 100755
index 000000000000..b578af48eb23
--- /dev/null
+++ b/test/py/test.py
@@ -0,0 +1,12 @@
+#!/usr/bin/env python
+
+import os
+import os.path
+import sys
+
+sys.argv.pop(0)
+
+args = ["py.test", os.path.dirname(__file__)]
+args.extend(sys.argv)
+
+os.execvp("py.test", args)
diff --git a/test/py/test_000_version.py b/test/py/test_000_version.py
new file mode 100644
index 000000000000..34822fb398e5
--- /dev/null
+++ b/test/py/test_000_version.py
@@ -0,0 +1,9 @@
+# pytest runs tests the order of their module path, which is related to the
+# filename containing the test. This file is named such that it is sorted
+# first, simply as a very basic sanity check of the functionality of the U-Boot
+# command prompt.
+
+def test_version(uboot_console):
+    with uboot_console.disable_check("main_signon"):
+        response = uboot_console.run_command("version")
+    uboot_console.validate_main_signon_in_text(response)
diff --git a/test/py/test_env.py b/test/py/test_env.py
new file mode 100644
index 000000000000..16891cd6bb15
--- /dev/null
+++ b/test/py/test_env.py
@@ -0,0 +1,96 @@
+import pytest
+
+# FIXME: This might be useful for other tests;
+# perhaps refactor it into ConsoleBase or some other state object?
+class StateTestEnv(object):
+    def __init__(self, uboot_console):
+        self.uboot_console = uboot_console
+        self.get_env()
+        self.set_var = self.get_non_existent_var()
+
+    def get_env(self):
+        response = self.uboot_console.run_command("printenv")
+        self.env = {}
+        for l in response.splitlines():
+            if not "=" in l:
+                continue
+            (var, value) = l.strip().split("=")
+            self.env[var] = value
+
+    def get_existent_var(self):
+        for var in self.env:
+            return var
+
+    def get_non_existent_var(self):
+        n = 0
+        while True:
+            var = "test_env_" + str(n)
+            if var not in self.env:
+                return var
+            n += 1
+
+ at pytest.fixture(scope="module")
+def state_test_env(uboot_console):
+    return StateTestEnv(uboot_console)
+
+def unset_var(state_test_env, var):
+    state_test_env.uboot_console.run_command("setenv " + var)
+    if var in state_test_env.env:
+        del state_test_env.env[var]
+
+def set_var(state_test_env, var, value):
+    state_test_env.uboot_console.run_command("setenv " + var + " " + value)
+    state_test_env.env[var] = value
+
+def validate_empty(state_test_env, var):
+    response = state_test_env.uboot_console.run_command("echo $" + var)
+    assert response == ""
+
+def validate_set(state_test_env, var, value):
+    response = state_test_env.uboot_console.run_command("echo $" + var)
+    assert response == value
+
+def test_env_echo_exists(state_test_env):
+    """Echo a variable that exists"""
+    var = state_test_env.get_existent_var()
+    value = state_test_env.env[var]
+    validate_set(state_test_env, var, value)
+
+def test_env_echo_non_existent(state_test_env):
+    """Echo a variable that doesn't exist"""
+    var = state_test_env.set_var
+    validate_empty(state_test_env, var)
+
+def test_env_printenv_non_existent(state_test_env):
+    """Check printenv error message"""
+    var = state_test_env.set_var
+    c = state_test_env.uboot_console
+    with c.disable_check("error_notification"):
+        response = c.run_command("printenv " + var)
+    assert(response == "## Error: \"" + var + "\" not defined")
+
+def test_env_unset_non_existent(state_test_env):
+    """Unset a nonexistent variable"""
+    var = state_test_env.get_non_existent_var()
+    unset_var(state_test_env, var)
+    validate_empty(state_test_env, var)
+
+def test_env_set_non_existent(state_test_env):
+    """Set a new variable"""
+    var = state_test_env.set_var
+    value = "foo"
+    set_var(state_test_env, var, value)
+    validate_set(state_test_env, var, value)
+
+def test_env_set_existing(state_test_env):
+    """Set an existing variable"""
+    var = state_test_env.set_var
+    value = "bar"
+    set_var(state_test_env, var, value)
+    validate_set(state_test_env, var, value)
+
+def test_env_unset_existing(state_test_env):
+    """Unset a variable"""
+    var = state_test_env.set_var
+    unset_var(state_test_env, var)
+    validate_empty(state_test_env, var)
diff --git a/test/py/test_help.py b/test/py/test_help.py
new file mode 100644
index 000000000000..c2b9ace475e1
--- /dev/null
+++ b/test/py/test_help.py
@@ -0,0 +1,2 @@
+def test_help(uboot_console):
+    uboot_console.run_command("help")
diff --git a/test/py/test_md.py b/test/py/test_md.py
new file mode 100644
index 000000000000..49cdd2685234
--- /dev/null
+++ b/test/py/test_md.py
@@ -0,0 +1,12 @@
+import pytest
+
+ at pytest.mark.buildconfigspec("cmd_memory")
+def test_md(uboot_console):
+    addr = "%08x" % uboot_console.config.ram_base
+    val = "a5f09876"
+    expected_response = addr + ": " + val
+    response = uboot_console.run_command("md " + addr + " 10")
+    assert(not (expected_response in response))
+    uboot_console.run_command("mw " + addr + " " + val)
+    response = uboot_console.run_command("md " + addr + " 10")
+    assert(expected_response in response)
diff --git a/test/py/test_sandbox_exit.py b/test/py/test_sandbox_exit.py
new file mode 100644
index 000000000000..6aefa703a965
--- /dev/null
+++ b/test/py/test_sandbox_exit.py
@@ -0,0 +1,15 @@
+import pytest
+import signal
+
+ at pytest.mark.boardspec("sandbox")
+ at pytest.mark.buildconfigspec("reset")
+def test_reset(uboot_console):
+    uboot_console.run_command("reset", False)
+    assert(uboot_console.validate_exited())
+    uboot_console.ensure_spawned()
+
+ at pytest.mark.boardspec("sandbox")
+def test_ctrlc(uboot_console):
+    uboot_console.kill(signal.SIGINT)
+    assert(uboot_console.validate_exited())
+    uboot_console.ensure_spawned()
diff --git a/test/py/test_unknown_cmd.py b/test/py/test_unknown_cmd.py
new file mode 100644
index 000000000000..19ac52cc24ce
--- /dev/null
+++ b/test/py/test_unknown_cmd.py
@@ -0,0 +1,4 @@
+def test_unknown_command(uboot_console):
+    with uboot_console.disable_check("unknown_command"):
+        response = uboot_console.run_command("non_existent_cmd")
+    assert("Unknown command 'non_existent_cmd' - try 'help'" in response)
diff --git a/test/py/uboot_console_base.py b/test/py/uboot_console_base.py
new file mode 100644
index 000000000000..dfd986860e75
--- /dev/null
+++ b/test/py/uboot_console_base.py
@@ -0,0 +1,143 @@
+import multiplexed_log
+import os
+import re
+import sys
+
+pattern_uboot_spl_signon = re.compile("(U-Boot SPL \\d{4}\\.\\d{2}-[^\r\n]*)")
+pattern_uboot_main_signon = re.compile("(U-Boot \\d{4}\\.\\d{2}-[^\r\n]*)")
+pattern_stop_autoboot_prompt = re.compile("Hit any key to stop autoboot: ")
+pattern_unknown_command = re.compile("Unknown command '.*' - try 'help'")
+pattern_error_notification = re.compile("## Error: ")
+
+class ConsoleDisableCheck(object):
+    def __init__(self, console, check_type):
+        self.console = console
+        self.check_type = check_type
+
+    def __enter__(self):
+        self.console.disable_check_count[self.check_type] += 1
+
+    def __exit__(self, extype, value, traceback):
+        self.console.disable_check_count[self.check_type] -= 1
+
+class ConsoleBase(object):
+    def __init__(self, log, config):
+        self.log = log
+        self.config = config
+
+        self.logstream = self.log.get_stream("console", sys.stdout)
+
+        # Array slice removes leading/trailing quotes
+        self.prompt = self.config.buildconfig["config_sys_prompt"][1:-1]
+        self.prompt_escaped = re.escape(self.prompt)
+        self.p = None
+        self.disable_check_count = {
+            "spl_signon": 0,
+            "main_signon": 0,
+            "unknown_command": 0,
+            "error_notification": 0,
+        }
+
+        self.at_prompt = False
+
+    def close(self):
+        if self.p:
+            self.p.close()
+        self.logstream.close()
+
+    def run_command(self, cmd, wait_for_prompt=True):
+        self.ensure_spawned()
+        bad_patterns = []
+        bad_pattern_ids = []
+        if (self.disable_check_count["spl_signon"] == 0 and
+                self.uboot_spl_signon):
+            bad_patterns.append(self.uboot_spl_signon_escaped)
+            bad_pattern_ids.append("SPL signon")
+        if self.disable_check_count["main_signon"] == 0:
+            bad_patterns.append(self.uboot_main_signon_escaped)
+            bad_pattern_ids.append("U-Boot main signon")
+        if self.disable_check_count["unknown_command"] == 0:
+            bad_patterns.append(pattern_unknown_command)
+            bad_pattern_ids.append("Unknown command")
+        if self.disable_check_count["error_notification"] == 0:
+            bad_patterns.append(pattern_error_notification)
+            bad_pattern_ids.append("Error notification")
+        try:
+            if cmd:
+                self.p.send(cmd)
+                try:
+                    m = self.p.expect([re.escape(cmd)] + bad_patterns)
+                    if m != 0:
+                        self.at_prompt = False
+                        raise Exception("Bad pattern found on console: " +
+                                        bad_pattern_ids[m - 1])
+                except Exception as ex:
+                    self.at_prompt = False
+                    print cmd
+                    self.logstream.write(cmd, implicit=True)
+                    raise
+            self.p.send("\n")
+            if not wait_for_prompt:
+                self.at_prompt = False
+                return
+            m = self.p.expect([self.prompt_escaped] + bad_patterns)
+            if m != 0:
+                self.at_prompt = False
+                raise Exception("Bad pattern found on console: " +
+                                bad_pattern_ids[m - 1])
+            self.at_prompt = True
+            return self.p.before.strip()
+        except Exception as ex:
+            self.at_prompt = False
+            self.log.error(str(ex))
+            self.cleanup_spawn()
+            raise
+
+    def ensure_spawned(self):
+        if self.p:
+            return
+        try:
+            self.at_prompt = False
+            self.log.action("Starting U-Boot")
+            self.p = self.get_spawn()
+            # Real targets can take a long time to scroll large amounts of
+            # text if LCD is enabled. This value may need tweaking in the
+            # future, possibly per-test to be optimal. This works for "help"
+            # on board "seaboard".
+            self.p.timeout = 30
+            self.p.logfile_read = self.logstream
+            if self.config.buildconfig.get("CONFIG_SPL", False) == "y":
+                self.p.expect(pattern_uboot_spl_signon)
+                self.uboot_spl_signon = self.p.after
+                self.uboot_spl_signon_escaped = re.escape(self.p.after)
+            else:
+                self.uboot_spl_signon = None
+            self.p.expect(pattern_uboot_main_signon)
+            self.uboot_main_signon = self.p.after
+            self.uboot_main_signon_escaped = re.escape(self.p.after)
+            while True:
+                match = self.p.expect([self.prompt_escaped,
+                                       pattern_stop_autoboot_prompt])
+                if match == 1:
+                    self.p.send(chr(3)) # CTRL-C
+                    continue
+                break
+            self.at_prompt = True
+        except Exception as ex:
+            self.log.error(str(ex))
+            self.cleanup_spawn()
+            raise
+
+    def cleanup_spawn(self):
+        try:
+            if self.p:
+                self.p.close()
+        except:
+            pass
+        self.p = None
+
+    def validate_main_signon_in_text(self, text):
+        assert(self.uboot_main_signon in text)
+
+    def disable_check(self, check_type):
+        return ConsoleDisableCheck(self, check_type)
diff --git a/test/py/uboot_console_exec_attach.py b/test/py/uboot_console_exec_attach.py
new file mode 100644
index 000000000000..7960d66107c3
--- /dev/null
+++ b/test/py/uboot_console_exec_attach.py
@@ -0,0 +1,28 @@
+import os
+import pexpect
+from uboot_console_base import ConsoleBase
+
+def cmdline(app, args):
+    return app + ' "' + '" "'.join(args) + '"'
+
+class ConsoleExecAttach(ConsoleBase):
+    def __init__(self, log, config):
+        super(ConsoleExecAttach, self).__init__(log, config)
+
+        self.log.action("Flashing U-Boot")
+        cmd = ["uboot-test-flash", config.board_type, config.board_identity]
+        runner = self.log.get_runner(cmd[0])
+        runner.run(cmd)
+        runner.close()
+
+    def get_spawn(self):
+        args = [self.config.board_type, self.config.board_identity]
+        s = pexpect.spawn("uboot-test-console", args=args)
+
+        self.log.action("Resetting board")
+        cmd = ["uboot-test-reset"] + args
+        runner = self.log.get_runner(cmd[0])
+        runner.run(cmd)
+        runner.close()
+
+        return s
diff --git a/test/py/uboot_console_sandbox.py b/test/py/uboot_console_sandbox.py
new file mode 100644
index 000000000000..c3aae3862ca9
--- /dev/null
+++ b/test/py/uboot_console_sandbox.py
@@ -0,0 +1,22 @@
+import os
+import pexpect
+from uboot_console_base import ConsoleBase
+
+class ConsoleSandbox(ConsoleBase):
+    def __init__(self, log, config):
+        super(ConsoleSandbox, self).__init__(log, config)
+
+    def get_spawn(self):
+        return pexpect.spawn(self.config.build_dir + "/u-boot")
+
+    def kill(self, sig):
+        self.ensure_spawned()
+        self.log.action("kill %d" % sig)
+        self.p.kill(sig)
+
+    def validate_exited(self):
+        p = self.p
+        self.p = None
+        ret = p.isalive()
+        p.close()
+        return ret
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-15  6:53 [U-Boot] [PATCH] Implement pytest-based test infrastructure Stephen Warren
@ 2015-11-19 14:45 ` Simon Glass
  2015-11-19 17:00   ` Stephen Warren
  0 siblings, 1 reply; 19+ messages in thread
From: Simon Glass @ 2015-11-19 14:45 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org> wrote:
> This tool aims to test U-Boot by executing U-Boot shell commands using the
> console interface. A single top-level script exists to execute or attach
> to the U-Boot console, run the entire script of tests against it, and
> summarize the results. Advantages of this approach are:
>
> - Testing is performed in the same way a user or script would interact
>   with U-Boot; there can be no disconnect.
> - There is no need to write or embed test-related code into U-Boot itself.
>   It is asserted that writing test-related code in Python is simpler and
>   more flexible that writing it all in C.
> - It is reasonably simple to interact with U-Boot in this way.
>
> A few simple tests are provided as examples. Soon, we should convert as
> many as possible of the other tests in test/* and test/cmd_ut.c too.

It's great to see this and thank you for putting in the effort!

It looks like a good way of doing functional tests. I still see a role
for unit tests and things like test/dm. But if we can arrange to call
all U-Boot tests (unit and functional) from one 'test.py' command that
would be a win.

I'll look more when I can get it to work - see below.

>
> In the future, I hope to publish (out-of-tree) the hook scripts, relay
> control utilities, and udev rules I will use for my own HW setup.
>
> See README.md for more details!
>
> Signed-off-by: Stephen Warren <swarren@wwwdotorg.org>
> ---
>  .gitignore                           |   1 +
>  test/py/README.md                    | 287 +++++++++++++++++++++++++++++++++++
>  test/py/board_jetson_tk1.py          |   1 +
>  test/py/board_sandbox.py             |   1 +
>  test/py/board_seaboard.py            |   1 +
>  test/py/conftest.py                  | 225 +++++++++++++++++++++++++++
>  test/py/multiplexed_log.css          |  70 +++++++++
>  test/py/multiplexed_log.py           | 172 +++++++++++++++++++++
>  test/py/pytest.ini                   |   5 +
>  test/py/soc_tegra124.py              |   1 +
>  test/py/soc_tegra20.py               |   1 +
>  test/py/test.py                      |  12 ++
>  test/py/test_000_version.py          |   9 ++
>  test/py/test_env.py                  |  96 ++++++++++++
>  test/py/test_help.py                 |   2 +
>  test/py/test_md.py                   |  12 ++
>  test/py/test_sandbox_exit.py         |  15 ++
>  test/py/test_unknown_cmd.py          |   4 +
>  test/py/uboot_console_base.py        | 143 +++++++++++++++++
>  test/py/uboot_console_exec_attach.py |  28 ++++
>  test/py/uboot_console_sandbox.py     |  22 +++
>  21 files changed, 1108 insertions(+)
>  create mode 100644 test/py/README.md
>  create mode 100644 test/py/board_jetson_tk1.py
>  create mode 100644 test/py/board_sandbox.py
>  create mode 100644 test/py/board_seaboard.py
>  create mode 100644 test/py/conftest.py
>  create mode 100644 test/py/multiplexed_log.css
>  create mode 100644 test/py/multiplexed_log.py
>  create mode 100644 test/py/pytest.ini
>  create mode 100644 test/py/soc_tegra124.py
>  create mode 100644 test/py/soc_tegra20.py
>  create mode 100755 test/py/test.py
>  create mode 100644 test/py/test_000_version.py
>  create mode 100644 test/py/test_env.py
>  create mode 100644 test/py/test_help.py
>  create mode 100644 test/py/test_md.py
>  create mode 100644 test/py/test_sandbox_exit.py
>  create mode 100644 test/py/test_unknown_cmd.py
>  create mode 100644 test/py/uboot_console_base.py
>  create mode 100644 test/py/uboot_console_exec_attach.py
>  create mode 100644 test/py/uboot_console_sandbox.py

I get this on my Ubuntu 64-bit machine (14.04.3)

$ ./test/py/test.py --bd sandbox --buildTraceback (most recent call last):
  File "./test/py/test.py", line 12, in <module>
    os.execvp("py.test", args)
  File "/usr/lib/python2.7/os.py", line 344, in execvp
    _execvpe(file, args)
  File "/usr/lib/python2.7/os.py", line 380, in _execvpe
    func(fullname, *argrest)
OSError: [Errno 2] No such file or directory

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-19 14:45 ` Simon Glass
@ 2015-11-19 17:00   ` Stephen Warren
  2015-11-19 19:09     ` Stephen Warren
  2015-11-23 23:44     ` Tom Rini
  0 siblings, 2 replies; 19+ messages in thread
From: Stephen Warren @ 2015-11-19 17:00 UTC (permalink / raw)
  To: u-boot

On 11/19/2015 07:45 AM, Simon Glass wrote:
> Hi Stephen,
>
> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> This tool aims to test U-Boot by executing U-Boot shell commands using the
>> console interface. A single top-level script exists to execute or attach
>> to the U-Boot console, run the entire script of tests against it, and
>> summarize the results. Advantages of this approach are:
>>
>> - Testing is performed in the same way a user or script would interact
>>    with U-Boot; there can be no disconnect.
>> - There is no need to write or embed test-related code into U-Boot itself.
>>    It is asserted that writing test-related code in Python is simpler and
>>    more flexible that writing it all in C.
>> - It is reasonably simple to interact with U-Boot in this way.
>>
>> A few simple tests are provided as examples. Soon, we should convert as
>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>
> It's great to see this and thank you for putting in the effort!
>
> It looks like a good way of doing functional tests. I still see a role
> for unit tests and things like test/dm. But if we can arrange to call
> all U-Boot tests (unit and functional) from one 'test.py' command that
> would be a win.
>
> I'll look more when I can get it to work - see below.
...
> I get this on my Ubuntu 64-bit machine (14.04.3)
>
> $ ./test/py/test.py --bd sandbox --buildTraceback (most recent call last):
>    File "./test/py/test.py", line 12, in <module>
>      os.execvp("py.test", args)
>    File "/usr/lib/python2.7/os.py", line 344, in execvp
>      _execvpe(file, args)
>    File "/usr/lib/python2.7/os.py", line 380, in _execvpe
>      func(fullname, *argrest)
> OSError: [Errno 2] No such file or directory

"py.test" isn't in your $PATH. Did you install it? See the following in 
test/py/README.md:

> ## Requirements
>
> The test suite is implemented using pytest. Interaction with the U-Boot
> console uses pexpect. Interaction with real hardware uses the tools of your
> choice; you get to implement various "hook" scripts that are called by the
> test suite at the appropriate time.
>
> On Debian or Debian-like distributions, the following packages are required.
> Similar package names should exist in other distributions.
>
> | Package        | Version tested (Ubuntu 14.04) |
> | -------------- | ----------------------------- |
> | python         | 2.7.5-5ubuntu3                |
> | python-pytest  | 2.5.1-1                       |
> | python-pexpect | 3.1-1ubuntu0.1                |

In the main Python code, I trapped at least one exception location and 
made it print a message about checking the docs for missing 
requirements. I can probably patch the top-level test.py to do the same.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-19 17:00   ` Stephen Warren
@ 2015-11-19 19:09     ` Stephen Warren
  2015-11-21 16:49       ` Simon Glass
  2015-11-23 23:44     ` Tom Rini
  1 sibling, 1 reply; 19+ messages in thread
From: Stephen Warren @ 2015-11-19 19:09 UTC (permalink / raw)
  To: u-boot

On 11/19/2015 10:00 AM, Stephen Warren wrote:
> On 11/19/2015 07:45 AM, Simon Glass wrote:
>> Hi Stephen,
>>
>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>> wrote:
>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>> using the
>>> console interface. A single top-level script exists to execute or attach
>>> to the U-Boot console, run the entire script of tests against it, and
>>> summarize the results. Advantages of this approach are:
>>>
>>> - Testing is performed in the same way a user or script would interact
>>>    with U-Boot; there can be no disconnect.
>>> - There is no need to write or embed test-related code into U-Boot
>>> itself.
>>>    It is asserted that writing test-related code in Python is simpler
>>> and
>>>    more flexible that writing it all in C.
>>> - It is reasonably simple to interact with U-Boot in this way.
>>>
>>> A few simple tests are provided as examples. Soon, we should convert as
>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>
>> It's great to see this and thank you for putting in the effort!
>>
>> It looks like a good way of doing functional tests. I still see a role
>> for unit tests and things like test/dm. But if we can arrange to call
>> all U-Boot tests (unit and functional) from one 'test.py' command that
>> would be a win.
>>
>> I'll look more when I can get it to work - see below.
...
> made it print a message about checking the docs for missing
> requirements. I can probably patch the top-level test.py to do the same.

I've pushed such a patch to:

git://github.com/swarren/u-boot.git tegra_dev
(the separate pytests branch has now been deleted)

There are also a variety of other patches there related to this testing 
infra-structure. I guess I'll hold off sending them to the list until 
there's been some general feedback on the patches I've already posted, 
but feel free to pull the branch down and play with it. Note that it's 
likely to get rebased as I work.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-19 19:09     ` Stephen Warren
@ 2015-11-21 16:49       ` Simon Glass
  2015-11-22 17:30         ` Stephen Warren
  0 siblings, 1 reply; 19+ messages in thread
From: Simon Glass @ 2015-11-21 16:49 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 19 November 2015 at 12:09, Stephen Warren <swarren@wwwdotorg.org> wrote:
>
> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>
>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>>> wrote:
>>>>
>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>> using the
>>>> console interface. A single top-level script exists to execute or attach
>>>> to the U-Boot console, run the entire script of tests against it, and
>>>> summarize the results. Advantages of this approach are:
>>>>
>>>> - Testing is performed in the same way a user or script would interact
>>>>    with U-Boot; there can be no disconnect.
>>>> - There is no need to write or embed test-related code into U-Boot
>>>> itself.
>>>>    It is asserted that writing test-related code in Python is simpler
>>>> and
>>>>    more flexible that writing it all in C.
>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>
>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>
>>>
>>> It's great to see this and thank you for putting in the effort!
>>>
>>> It looks like a good way of doing functional tests. I still see a role
>>> for unit tests and things like test/dm. But if we can arrange to call
>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>> would be a win.
>>>
>>> I'll look more when I can get it to work - see below.
>
> ...
>>
>> made it print a message about checking the docs for missing
>> requirements. I can probably patch the top-level test.py to do the same.
>
>
> I've pushed such a patch to:
>
> git://github.com/swarren/u-boot.git tegra_dev
> (the separate pytests branch has now been deleted)
>
> There are also a variety of other patches there related to this testing infra-structure. I guess I'll hold off sending them to the list until there's been some general feedback on the patches I've already posted, but feel free to pull the branch down and play with it. Note that it's likely to get rebased as I work.

OK I got it working thank you. It is horribly slow though - do you
know what is holding it up? For me to takes 12 seconds to run the
(very basic) tests.

Also please see dm_test_usb_tree() which uses a console buffer to
check command output. I wonder if we should use something like that
for simple unit tests, and use python for the more complicated
functional tests?

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-21 16:49       ` Simon Glass
@ 2015-11-22 17:30         ` Stephen Warren
  2015-11-24  1:45           ` Simon Glass
  0 siblings, 1 reply; 19+ messages in thread
From: Stephen Warren @ 2015-11-22 17:30 UTC (permalink / raw)
  To: u-boot

On 11/21/2015 09:49 AM, Simon Glass wrote:
> Hi Stephen,
> 
> On 19 November 2015 at 12:09, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>
>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>
>>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>>
>>>> Hi Stephen,
>>>>
>>>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>>>> wrote:
>>>>>
>>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>>> using the
>>>>> console interface. A single top-level script exists to execute or attach
>>>>> to the U-Boot console, run the entire script of tests against it, and
>>>>> summarize the results. Advantages of this approach are:
>>>>>
>>>>> - Testing is performed in the same way a user or script would interact
>>>>>    with U-Boot; there can be no disconnect.
>>>>> - There is no need to write or embed test-related code into U-Boot
>>>>> itself.
>>>>>    It is asserted that writing test-related code in Python is simpler
>>>>> and
>>>>>    more flexible that writing it all in C.
>>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>>
>>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>>
>>>>
>>>> It's great to see this and thank you for putting in the effort!
>>>>
>>>> It looks like a good way of doing functional tests. I still see a role
>>>> for unit tests and things like test/dm. But if we can arrange to call
>>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>>> would be a win.
>>>>
>>>> I'll look more when I can get it to work - see below.
>>
>> ...
>>>
>>> made it print a message about checking the docs for missing
>>> requirements. I can probably patch the top-level test.py to do the same.
>>
>>
>> I've pushed such a patch to:
>>
>> git://github.com/swarren/u-boot.git tegra_dev
>> (the separate pytests branch has now been deleted)
>>
>> There are also a variety of other patches there related to this testing infra-structure. I guess I'll hold off sending them to the list until there's been some general feedback on the patches I've already posted, but feel free to pull the branch down and play with it. Note that it's likely to get rebased as I work.
> 
> OK I got it working thank you. It is horribly slow though - do you
> know what is holding it up? For me to takes 12 seconds to run the
> (very basic) tests.

It looks like pexpect includes a default delay to simulate human
interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
and add the following somewhere soon after the assignment to self.p:

            self.p.delaybeforesend = 0

... that will more than halve the execution time. (8.3 -> 3.5s on my
5-year-old laptop).

That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
for some easy-to-use automated testing.

> Also please see dm_test_usb_tree() which uses a console buffer to
> check command output.

OK, I'll take a look.

> I wonder if we should use something like that
> for simple unit tests, and use python for the more complicated
> functional tests?

I'm not sure that's a good idea; it'd be best to settle on a single way
of executing tests so that (a) people don't have to run/implement
different kinds of tests in different ways (b) we can leverage test code
across as many tests as possible.

(Well, doing unit tests and system level tests differently might be
necessary since one calls functions and the other uses the shell "user
interface", but having multiple ways of doing e.g. system tests doesn't
seem like a good idea.)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-19 17:00   ` Stephen Warren
  2015-11-19 19:09     ` Stephen Warren
@ 2015-11-23 23:44     ` Tom Rini
  2015-11-23 23:55       ` Stephen Warren
  1 sibling, 1 reply; 19+ messages in thread
From: Tom Rini @ 2015-11-23 23:44 UTC (permalink / raw)
  To: u-boot

On Thu, Nov 19, 2015 at 10:00:32AM -0700, Stephen Warren wrote:
> On 11/19/2015 07:45 AM, Simon Glass wrote:
> >Hi Stephen,
> >
> >On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org> wrote:
> >>This tool aims to test U-Boot by executing U-Boot shell commands using the
> >>console interface. A single top-level script exists to execute or attach
> >>to the U-Boot console, run the entire script of tests against it, and
> >>summarize the results. Advantages of this approach are:
> >>
> >>- Testing is performed in the same way a user or script would interact
> >>   with U-Boot; there can be no disconnect.
> >>- There is no need to write or embed test-related code into U-Boot itself.
> >>   It is asserted that writing test-related code in Python is simpler and
> >>   more flexible that writing it all in C.
> >>- It is reasonably simple to interact with U-Boot in this way.
> >>
> >>A few simple tests are provided as examples. Soon, we should convert as
> >>many as possible of the other tests in test/* and test/cmd_ut.c too.
> >
> >It's great to see this and thank you for putting in the effort!
> >
> >It looks like a good way of doing functional tests. I still see a role
> >for unit tests and things like test/dm. But if we can arrange to call
> >all U-Boot tests (unit and functional) from one 'test.py' command that
> >would be a win.
> >
> >I'll look more when I can get it to work - see below.
> ...
> >I get this on my Ubuntu 64-bit machine (14.04.3)
> >
> >$ ./test/py/test.py --bd sandbox --buildTraceback (most recent call last):
> >   File "./test/py/test.py", line 12, in <module>
> >     os.execvp("py.test", args)
> >   File "/usr/lib/python2.7/os.py", line 344, in execvp
> >     _execvpe(file, args)
> >   File "/usr/lib/python2.7/os.py", line 380, in _execvpe
> >     func(fullname, *argrest)
> >OSError: [Errno 2] No such file or directory
> 
> "py.test" isn't in your $PATH. Did you install it? See the following
> in test/py/README.md:
> 
> >## Requirements
> >
> >The test suite is implemented using pytest. Interaction with the U-Boot
> >console uses pexpect. Interaction with real hardware uses the tools of your
> >choice; you get to implement various "hook" scripts that are called by the
> >test suite at the appropriate time.
> >
> >On Debian or Debian-like distributions, the following packages are required.
> >Similar package names should exist in other distributions.
> >
> >| Package        | Version tested (Ubuntu 14.04) |
> >| -------------- | ----------------------------- |
> >| python         | 2.7.5-5ubuntu3                |
> >| python-pytest  | 2.5.1-1                       |
> >| python-pexpect | 3.1-1ubuntu0.1                |
> 
> In the main Python code, I trapped at least one exception location
> and made it print a message about checking the docs for missing
> requirements. I can probably patch the top-level test.py to do the
> same.

Isn't there some way to inject the local to U-Boot copy of the libraries
in?  I swear I've done something like that before in python..


-- 
Tom
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: Digital signature
URL: <http://lists.denx.de/pipermail/u-boot/attachments/20151123/2dcdd578/attachment.sig>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-23 23:44     ` Tom Rini
@ 2015-11-23 23:55       ` Stephen Warren
  0 siblings, 0 replies; 19+ messages in thread
From: Stephen Warren @ 2015-11-23 23:55 UTC (permalink / raw)
  To: u-boot

On 11/23/2015 04:44 PM, Tom Rini wrote:
> On Thu, Nov 19, 2015 at 10:00:32AM -0700, Stephen Warren wrote:
...
>> See the following in test/py/README.md:
>>
>>> ## Requirements
>>>
>>> The test suite is implemented using pytest. Interaction with the U-Boot
>>> console uses pexpect. Interaction with real hardware uses the tools of your
>>> choice; you get to implement various "hook" scripts that are called by the
>>> test suite at the appropriate time.
>>>
>>> On Debian or Debian-like distributions, the following packages are required.
>>> Similar package names should exist in other distributions.
>>>
>>> | Package        | Version tested (Ubuntu 14.04) |
>>> | -------------- | ----------------------------- |
>>> | python         | 2.7.5-5ubuntu3                |
>>> | python-pytest  | 2.5.1-1                       |
>>> | python-pexpect | 3.1-1ubuntu0.1                |
>>
>> In the main Python code, I trapped at least one exception location
>> and made it print a message about checking the docs for missing
>> requirements. I can probably patch the top-level test.py to do the
>> same.
>
> Isn't there some way to inject the local to U-Boot copy of the libraries
> in?  I swear I've done something like that before in python..

It would certainly be possible to either check in the required Python 
libraries in the U-Boot source tree, or include instructions for people 
to manually create a "virtualenv" (or perhaps even automatically do this 
from test.py). However, I was hoping to avoid the need to for that since 
those options are a bit more complex than "just install these 3 packages 
and run the script". (And in fact I've already mentioned 
virtualenv-based setup instructions in the README for people which 
archaic distros).

Still, if we find that varying versions of pytest/pexpect don't work 
well, we could certainly choose one of those options.

BTW, I've created a ton of patches on top of all these that I haven't 
posted yet. See:

git://github.com/swarren/u-boot.git tegra_dev

I'm not sure if I should squash all that into a V2 of this patch, or 
just post them all as incremental fixes/enhancements?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-22 17:30         ` Stephen Warren
@ 2015-11-24  1:45           ` Simon Glass
  2015-11-24  2:18             ` Simon Glass
  2015-11-24  4:44             ` Stephen Warren
  0 siblings, 2 replies; 19+ messages in thread
From: Simon Glass @ 2015-11-24  1:45 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org> wrote:
> On 11/21/2015 09:49 AM, Simon Glass wrote:
>> Hi Stephen,
>>
>> On 19 November 2015 at 12:09, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>
>>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>>
>>>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>>>
>>>>> Hi Stephen,
>>>>>
>>>>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>>>>> wrote:
>>>>>>
>>>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>>>> using the
>>>>>> console interface. A single top-level script exists to execute or attach
>>>>>> to the U-Boot console, run the entire script of tests against it, and
>>>>>> summarize the results. Advantages of this approach are:
>>>>>>
>>>>>> - Testing is performed in the same way a user or script would interact
>>>>>>    with U-Boot; there can be no disconnect.
>>>>>> - There is no need to write or embed test-related code into U-Boot
>>>>>> itself.
>>>>>>    It is asserted that writing test-related code in Python is simpler
>>>>>> and
>>>>>>    more flexible that writing it all in C.
>>>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>>>
>>>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>>>
>>>>>
>>>>> It's great to see this and thank you for putting in the effort!
>>>>>
>>>>> It looks like a good way of doing functional tests. I still see a role
>>>>> for unit tests and things like test/dm. But if we can arrange to call
>>>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>>>> would be a win.
>>>>>
>>>>> I'll look more when I can get it to work - see below.
>>>
>>> ...
>>>>
>>>> made it print a message about checking the docs for missing
>>>> requirements. I can probably patch the top-level test.py to do the same.
>>>
>>>
>>> I've pushed such a patch to:
>>>
>>> git://github.com/swarren/u-boot.git tegra_dev
>>> (the separate pytests branch has now been deleted)
>>>
>>> There are also a variety of other patches there related to this testing infra-structure. I guess I'll hold off sending them to the list until there's been some general feedback on the patches I've already posted, but feel free to pull the branch down and play with it. Note that it's likely to get rebased as I work.
>>
>> OK I got it working thank you. It is horribly slow though - do you
>> know what is holding it up? For me to takes 12 seconds to run the
>> (very basic) tests.
>
> It looks like pexpect includes a default delay to simulate human
> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
> and add the following somewhere soon after the assignment to self.p:
>
>             self.p.delaybeforesend = 0
>
> ... that will more than halve the execution time. (8.3 -> 3.5s on my
> 5-year-old laptop).
>
> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
> for some easy-to-use automated testing.

Sure, but my reference is to the difference between a native C test
and this framework. As we add more and more tests the overhead will be
significant. If it takes 8 seconds to run the current (fairly trivial)
tests, it might take a minute to run a larger suite, and to me that is
too long (e.g. to bisect for a failing commit).

I wonder what is causing the delay?

>
>> Also please see dm_test_usb_tree() which uses a console buffer to
>> check command output.
>
> OK, I'll take a look.
>
>> I wonder if we should use something like that
>> for simple unit tests, and use python for the more complicated
>> functional tests?
>
> I'm not sure that's a good idea; it'd be best to settle on a single way
> of executing tests so that (a) people don't have to run/implement
> different kinds of tests in different ways (b) we can leverage test code
> across as many tests as possible.
>
> (Well, doing unit tests and system level tests differently might be
> necessary since one calls functions and the other uses the shell "user
> interface", but having multiple ways of doing e.g. system tests doesn't
> seem like a good idea.)

As you found with some of the tests, it is convenient/necessary to be
able to call U-Boot C functions in some tests. So I don't see this as
a one-size-fits-all solution.

I think it is perfectly reasonable for the python framework to run the
existing C tests - there is no need to rewrite them in Python. Also
for the driver model tests - we can just run the tests from some sort
of python wrapper and get the best of both worlds, right?

Please don't take this to indicate any lack of enthusiasm for what you
are doing - it's a great development and I'm sure it will help a lot!
We really need to unify all the tests so we can run them all in one
step.

I just think we should aim to have the automated tests run in a few
seconds (let's say 5-10 at the outside). We need to make sure that the
python framework will allow this even when running thousands of tests.

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-24  1:45           ` Simon Glass
@ 2015-11-24  2:18             ` Simon Glass
  2015-11-24  4:24               ` Stephen Warren
  2015-11-24  4:44             ` Stephen Warren
  1 sibling, 1 reply; 19+ messages in thread
From: Simon Glass @ 2015-11-24  2:18 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 23 November 2015 at 18:45, Simon Glass <sjg@chromium.org> wrote:
> Hi Stephen,
>
> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>> Hi Stephen,
>>>
>>> On 19 November 2015 at 12:09, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>>
>>>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>>>
>>>>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>>>>
>>>>>> Hi Stephen,
>>>>>>
>>>>>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>>>>>> wrote:
>>>>>>>
>>>>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>>>>> using the
>>>>>>> console interface. A single top-level script exists to execute or attach
>>>>>>> to the U-Boot console, run the entire script of tests against it, and
>>>>>>> summarize the results. Advantages of this approach are:
>>>>>>>
>>>>>>> - Testing is performed in the same way a user or script would interact
>>>>>>>    with U-Boot; there can be no disconnect.
>>>>>>> - There is no need to write or embed test-related code into U-Boot
>>>>>>> itself.
>>>>>>>    It is asserted that writing test-related code in Python is simpler
>>>>>>> and
>>>>>>>    more flexible that writing it all in C.
>>>>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>>>>
>>>>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>>>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>>>>
>>>>>>
>>>>>> It's great to see this and thank you for putting in the effort!
>>>>>>
>>>>>> It looks like a good way of doing functional tests. I still see a role
>>>>>> for unit tests and things like test/dm. But if we can arrange to call
>>>>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>>>>> would be a win.
>>>>>>
>>>>>> I'll look more when I can get it to work - see below.
>>>>
>>>> ...
>>>>>
>>>>> made it print a message about checking the docs for missing
>>>>> requirements. I can probably patch the top-level test.py to do the same.
>>>>
>>>>
>>>> I've pushed such a patch to:
>>>>
>>>> git://github.com/swarren/u-boot.git tegra_dev
>>>> (the separate pytests branch has now been deleted)
>>>>
>>>> There are also a variety of other patches there related to this testing infra-structure. I guess I'll hold off sending them to the list until there's been some general feedback on the patches I've already posted, but feel free to pull the branch down and play with it. Note that it's likely to get rebased as I work.
>>>
>>> OK I got it working thank you. It is horribly slow though - do you
>>> know what is holding it up? For me to takes 12 seconds to run the
>>> (very basic) tests.
>>
>> It looks like pexpect includes a default delay to simulate human
>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>> and add the following somewhere soon after the assignment to self.p:
>>
>>             self.p.delaybeforesend = 0
>>
>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>> 5-year-old laptop).
>>
>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>> for some easy-to-use automated testing.
>
> Sure, but my reference is to the difference between a native C test
> and this framework. As we add more and more tests the overhead will be
> significant. If it takes 8 seconds to run the current (fairly trivial)
> tests, it might take a minute to run a larger suite, and to me that is
> too long (e.g. to bisect for a failing commit).
>
> I wonder what is causing the delay?
>
>>
>>> Also please see dm_test_usb_tree() which uses a console buffer to
>>> check command output.
>>
>> OK, I'll take a look.
>>
>>> I wonder if we should use something like that
>>> for simple unit tests, and use python for the more complicated
>>> functional tests?
>>
>> I'm not sure that's a good idea; it'd be best to settle on a single way
>> of executing tests so that (a) people don't have to run/implement
>> different kinds of tests in different ways (b) we can leverage test code
>> across as many tests as possible.
>>
>> (Well, doing unit tests and system level tests differently might be
>> necessary since one calls functions and the other uses the shell "user
>> interface", but having multiple ways of doing e.g. system tests doesn't
>> seem like a good idea.)
>
> As you found with some of the tests, it is convenient/necessary to be
> able to call U-Boot C functions in some tests. So I don't see this as
> a one-size-fits-all solution.
>
> I think it is perfectly reasonable for the python framework to run the
> existing C tests - there is no need to rewrite them in Python. Also
> for the driver model tests - we can just run the tests from some sort
> of python wrapper and get the best of both worlds, right?
>
> Please don't take this to indicate any lack of enthusiasm for what you
> are doing - it's a great development and I'm sure it will help a lot!
> We really need to unify all the tests so we can run them all in one
> step.
>
> I just think we should aim to have the automated tests run in a few
> seconds (let's say 5-10 at the outside). We need to make sure that the
> python framework will allow this even when running thousands of tests.

BTW I would like to see if buildman can run tests automatically on
each commit. It's been a long-term goal for a while.

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-24  2:18             ` Simon Glass
@ 2015-11-24  4:24               ` Stephen Warren
  0 siblings, 0 replies; 19+ messages in thread
From: Stephen Warren @ 2015-11-24  4:24 UTC (permalink / raw)
  To: u-boot

On 11/23/2015 07:18 PM, Simon Glass wrote:
> Hi Stephen,
> 
> On 23 November 2015 at 18:45, Simon Glass <sjg@chromium.org> wrote:
>> Hi Stephen,
>>
>> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>>> Hi Stephen,
>>>>
>>>> On 19 November 2015 at 12:09, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>>>
>>>>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>>>>
>>>>>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>>>>>
>>>>>>> Hi Stephen,
>>>>>>>
>>>>>>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>>>>>> using the
>>>>>>>> console interface. A single top-level script exists to execute or attach
>>>>>>>> to the U-Boot console, run the entire script of tests against it, and
>>>>>>>> summarize the results. Advantages of this approach are:
>>>>>>>>
>>>>>>>> - Testing is performed in the same way a user or script would interact
>>>>>>>>    with U-Boot; there can be no disconnect.
>>>>>>>> - There is no need to write or embed test-related code into U-Boot
>>>>>>>> itself.
>>>>>>>>    It is asserted that writing test-related code in Python is simpler
>>>>>>>> and
>>>>>>>>    more flexible that writing it all in C.
>>>>>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>>>>>
>>>>>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>>>>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>>>>>
>>>>>>>
>>>>>>> It's great to see this and thank you for putting in the effort!
>>>>>>>
>>>>>>> It looks like a good way of doing functional tests. I still see a role
>>>>>>> for unit tests and things like test/dm. But if we can arrange to call
>>>>>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>>>>>> would be a win.
>>>>>>>
>>>>>>> I'll look more when I can get it to work - see below.
>>>>>
>>>>> ...
>>>>>>
>>>>>> made it print a message about checking the docs for missing
>>>>>> requirements. I can probably patch the top-level test.py to do the same.
>>>>>
>>>>>
>>>>> I've pushed such a patch to:
>>>>>
>>>>> git://github.com/swarren/u-boot.git tegra_dev
>>>>> (the separate pytests branch has now been deleted)
>>>>>
>>>>> There are also a variety of other patches there related to this testing infra-structure. I guess I'll hold off sending them to the list until there's been some general feedback on the patches I've already posted, but feel free to pull the branch down and play with it. Note that it's likely to get rebased as I work.
>>>>
>>>> OK I got it working thank you. It is horribly slow though - do you
>>>> know what is holding it up? For me to takes 12 seconds to run the
>>>> (very basic) tests.
>>>
>>> It looks like pexpect includes a default delay to simulate human
>>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>>> and add the following somewhere soon after the assignment to self.p:
>>>
>>>             self.p.delaybeforesend = 0
>>>
>>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>>> 5-year-old laptop).
>>>
>>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>>> for some easy-to-use automated testing.
>>
>> Sure, but my reference is to the difference between a native C test
>> and this framework. As we add more and more tests the overhead will be
>> significant. If it takes 8 seconds to run the current (fairly trivial)
>> tests, it might take a minute to run a larger suite, and to me that is
>> too long (e.g. to bisect for a failing commit).
>>
>> I wonder what is causing the delay?
>>
>>>
>>>> Also please see dm_test_usb_tree() which uses a console buffer to
>>>> check command output.
>>>
>>> OK, I'll take a look.
>>>
>>>> I wonder if we should use something like that
>>>> for simple unit tests, and use python for the more complicated
>>>> functional tests?
>>>
>>> I'm not sure that's a good idea; it'd be best to settle on a single way
>>> of executing tests so that (a) people don't have to run/implement
>>> different kinds of tests in different ways (b) we can leverage test code
>>> across as many tests as possible.
>>>
>>> (Well, doing unit tests and system level tests differently might be
>>> necessary since one calls functions and the other uses the shell "user
>>> interface", but having multiple ways of doing e.g. system tests doesn't
>>> seem like a good idea.)
>>
>> As you found with some of the tests, it is convenient/necessary to be
>> able to call U-Boot C functions in some tests. So I don't see this as
>> a one-size-fits-all solution.
>>
>> I think it is perfectly reasonable for the python framework to run the
>> existing C tests - there is no need to rewrite them in Python. Also
>> for the driver model tests - we can just run the tests from some sort
>> of python wrapper and get the best of both worlds, right?
>>
>> Please don't take this to indicate any lack of enthusiasm for what you
>> are doing - it's a great development and I'm sure it will help a lot!
>> We really need to unify all the tests so we can run them all in one
>> step.
>>
>> I just think we should aim to have the automated tests run in a few
>> seconds (let's say 5-10 at the outside). We need to make sure that the
>> python framework will allow this even when running thousands of tests.
> 
> BTW I would like to see if buildman can run tests automatically on
> each commit. It's been a long-term goal for a while.

Related, I was wondering if the test script's --build could/should rely
on buildman somehow. That might save the user having to set
CROSS_COMPILE before running test.py, assuming they'd already set up
buildman.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-24  1:45           ` Simon Glass
  2015-11-24  2:18             ` Simon Glass
@ 2015-11-24  4:44             ` Stephen Warren
  2015-11-24 19:04               ` Simon Glass
  1 sibling, 1 reply; 19+ messages in thread
From: Stephen Warren @ 2015-11-24  4:44 UTC (permalink / raw)
  To: u-boot

On 11/23/2015 06:45 PM, Simon Glass wrote:
> Hi Stephen,
> 
> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>> Hi Stephen,
>>>
>>> On 19 November 2015 at 12:09, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>>
>>>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>>>
>>>>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>>>>
>>>>>> Hi Stephen,
>>>>>>
>>>>>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>>>>>> wrote:
>>>>>>>
>>>>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>>>>> using the
>>>>>>> console interface. A single top-level script exists to execute or attach
>>>>>>> to the U-Boot console, run the entire script of tests against it, and
>>>>>>> summarize the results. Advantages of this approach are:
>>>>>>>
>>>>>>> - Testing is performed in the same way a user or script would interact
>>>>>>>    with U-Boot; there can be no disconnect.
>>>>>>> - There is no need to write or embed test-related code into U-Boot
>>>>>>> itself.
>>>>>>>    It is asserted that writing test-related code in Python is simpler
>>>>>>> and
>>>>>>>    more flexible that writing it all in C.
>>>>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>>>>
>>>>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>>>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>>>>
>>>>>>
>>>>>> It's great to see this and thank you for putting in the effort!
>>>>>>
>>>>>> It looks like a good way of doing functional tests. I still see a role
>>>>>> for unit tests and things like test/dm. But if we can arrange to call
>>>>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>>>>> would be a win.
>>>>>>
>>>>>> I'll look more when I can get it to work - see below.
>>>>
>>>> ...
>>>>>
>>>>> made it print a message about checking the docs for missing
>>>>> requirements. I can probably patch the top-level test.py to do the same.
>>>>
>>>>
>>>> I've pushed such a patch to:
>>>>
>>>> git://github.com/swarren/u-boot.git tegra_dev
>>>> (the separate pytests branch has now been deleted)
>>>>
>>>> There are also a variety of other patches there related to this testing infra-structure. I guess I'll hold off sending them to the list until there's been some general feedback on the patches I've already posted, but feel free to pull the branch down and play with it. Note that it's likely to get rebased as I work.
>>>
>>> OK I got it working thank you. It is horribly slow though - do you
>>> know what is holding it up? For me to takes 12 seconds to run the
>>> (very basic) tests.
>>
>> It looks like pexpect includes a default delay to simulate human
>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>> and add the following somewhere soon after the assignment to self.p:
>>
>>             self.p.delaybeforesend = 0
>>
>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>> 5-year-old laptop).
>>
>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>> for some easy-to-use automated testing.
> 
> Sure, but my reference is to the difference between a native C test
> and this framework. As we add more and more tests the overhead will be
> significant. If it takes 8 seconds to run the current (fairly trivial)
> tests, it might take a minute to run a larger suite, and to me that is
> too long (e.g. to bisect for a failing commit).
> 
> I wonder what is causing the delay?

I actually hope the opposite.

Most of the tests supported today are the most trivial possible tests,
i.e. they take very little CPU time on the target to implement. I would
naively expect that once we implement more interesting tests (USB Mass
Storage, USB enumeration, eMMC/SD/USB data reading, Ethernet DHCP/TFTP,
...) the command invocation overhead will rapidly become insignificant.
This certainly seems to be true for the UMS test I have locally, but who
knows whether this will be more generally true.

I put a bit of time measurement into run_command() and found that on my
system at work, for p.send("the shell command to execute") was actually
(marginally) slower on sandbox than on real HW, despite real HW being a
115200 baud serial port, and the code splitting the shell commands into
chunks that are sent and waited for synchronously to avoid overflowing
UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
seems to be non-blocking, so I don't think termios VMIN/VTIME come into
play (setting them to 0 made no difference), and the two raw modes took
the same time. I meant to look into pexpect's termios settings to see if
there was anything to tweak there, but forgot today.

I did do one experiment to compare expect (the Tcl version) and pexpect.
If I do roughly the following in both:

spawn u-boot (sandbox)
wait for prompt
100 times:
    send "echo $foo\n"
    wait for "echo $foo"
    wait for shell prompt
send "reset"
wait for "reset"
send "\n"

... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
That's a pity. Still, I'm sure as heck not going to rewrite all this in
Tcl:-( I wonder if something similar to pexpect but more targetted at
simple "interactive shell" cases would remove any of that overhead.

>>> Also please see dm_test_usb_tree() which uses a console buffer to
>>> check command output.
>>
>> OK, I'll take a look.
>>
>>> I wonder if we should use something like that
>>> for simple unit tests, and use python for the more complicated
>>> functional tests?
>>
>> I'm not sure that's a good idea; it'd be best to settle on a single way
>> of executing tests so that (a) people don't have to run/implement
>> different kinds of tests in different ways (b) we can leverage test code
>> across as many tests as possible.
>>
>> (Well, doing unit tests and system level tests differently might be
>> necessary since one calls functions and the other uses the shell "user
>> interface", but having multiple ways of doing e.g. system tests doesn't
>> seem like a good idea.)
> 
> As you found with some of the tests, it is convenient/necessary to be
> able to call U-Boot C functions in some tests. So I don't see this as
> a one-size-fits-all solution.

Yes, although I expect the split would be need-to-call-a-C-function ->
put the code in U-Boot vs. anything else in Python via the shell prompt.

> I think it is perfectly reasonable for the python framework to run the
> existing C tests

Yes.

> - there is no need to rewrite them in Python.

Probably not as an absolute mandate. Still, consistency would be nice.
One advantage of having things as individual pytests is that the status
of separate tests doesn't get aggregated; you can see that of 1000
tests, 10 failed, rather than seeing that 1000 logical tests were
executed as part of 25 pytests, and 2 of those failed, each only because
of 1 subtest with the other hundred subtests passing.

> Also
> for the driver model tests - we can just run the tests from some sort
> of python wrapper and get the best of both worlds, right?

I expect so, yes. I haven't looked at those yet.

> Please don't take this to indicate any lack of enthusiasm for what you
> are doing - it's a great development and I'm sure it will help a lot!
> We really need to unify all the tests so we can run them all in one
> step.

Thanks:-)

> I just think we should aim to have the automated tests run in a few
> seconds (let's say 5-10 at the outside). We need to make sure that the
> python framework will allow this even when running thousands of tests.

I'd be happy with something that took minutes, or longer. Given "build
all boards" takes a very long time (and I'm sure we'd like everyone to
do that, although I imagine few do), something of the same order of
magnitude might even be reasonable? Thousands of test sounds like rather
a lot; perhaps that number makes sense for tiny unit tests. I was
thinking of testing fewer larger user-visible features that generally
will have disk/network/... IO rates as the limiting factor. Perhaps one
of those tests could indeed be "run 1000 tiny C-based unit tests via a
single shell command".

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-24  4:44             ` Stephen Warren
@ 2015-11-24 19:04               ` Simon Glass
  2015-11-24 21:28                 ` Stephen Warren
  0 siblings, 1 reply; 19+ messages in thread
From: Simon Glass @ 2015-11-24 19:04 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 23 November 2015 at 21:44, Stephen Warren <swarren@wwwdotorg.org> wrote:
> On 11/23/2015 06:45 PM, Simon Glass wrote:
>> Hi Stephen,
>>
>> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>>> Hi Stephen,
>>>>
>>>> On 19 November 2015 at 12:09, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>>>
>>>>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>>>>
>>>>>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>>>>>
>>>>>>> Hi Stephen,
>>>>>>>
>>>>>>> On 14 November 2015 at 23:53, Stephen Warren <swarren@wwwdotorg.org>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>>>>>> using the
>>>>>>>> console interface. A single top-level script exists to execute or attach
>>>>>>>> to the U-Boot console, run the entire script of tests against it, and
>>>>>>>> summarize the results. Advantages of this approach are:
>>>>>>>>
>>>>>>>> - Testing is performed in the same way a user or script would interact
>>>>>>>>    with U-Boot; there can be no disconnect.
>>>>>>>> - There is no need to write or embed test-related code into U-Boot
>>>>>>>> itself.
>>>>>>>>    It is asserted that writing test-related code in Python is simpler
>>>>>>>> and
>>>>>>>>    more flexible that writing it all in C.
>>>>>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>>>>>
>>>>>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>>>>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>>>>>
>>>>>>>
>>>>>>> It's great to see this and thank you for putting in the effort!
>>>>>>>
>>>>>>> It looks like a good way of doing functional tests. I still see a role
>>>>>>> for unit tests and things like test/dm. But if we can arrange to call
>>>>>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>>>>>> would be a win.
>>>>>>>
>>>>>>> I'll look more when I can get it to work - see below.
>>>>>
>>>>> ...
>>>>>>
>>>>>> made it print a message about checking the docs for missing
>>>>>> requirements. I can probably patch the top-level test.py to do the same.
>>>>>
>>>>>
>>>>> I've pushed such a patch to:
>>>>>
>>>>> git://github.com/swarren/u-boot.git tegra_dev
>>>>> (the separate pytests branch has now been deleted)
>>>>>
>>>>> There are also a variety of other patches there related to this testing infra-structure. I guess I'll hold off sending them to the list until there's been some general feedback on the patches I've already posted, but feel free to pull the branch down and play with it. Note that it's likely to get rebased as I work.
>>>>
>>>> OK I got it working thank you. It is horribly slow though - do you
>>>> know what is holding it up? For me to takes 12 seconds to run the
>>>> (very basic) tests.
>>>
>>> It looks like pexpect includes a default delay to simulate human
>>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>>> and add the following somewhere soon after the assignment to self.p:
>>>
>>>             self.p.delaybeforesend = 0
>>>
>>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>>> 5-year-old laptop).
>>>
>>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>>> for some easy-to-use automated testing.
>>
>> Sure, but my reference is to the difference between a native C test
>> and this framework. As we add more and more tests the overhead will be
>> significant. If it takes 8 seconds to run the current (fairly trivial)
>> tests, it might take a minute to run a larger suite, and to me that is
>> too long (e.g. to bisect for a failing commit).
>>
>> I wonder what is causing the delay?
>
> I actually hope the opposite.
>
> Most of the tests supported today are the most trivial possible tests,
> i.e. they take very little CPU time on the target to implement. I would
> naively expect that once we implement more interesting tests (USB Mass
> Storage, USB enumeration, eMMC/SD/USB data reading, Ethernet DHCP/TFTP,
> ...) the command invocation overhead will rapidly become insignificant.
> This certainly seems to be true for the UMS test I have locally, but who
> knows whether this will be more generally true.

We do have a USB enumeration and storage test including data reading.
We have some simple 'ping' Ethernet tests. These run in close to no
time (they fudge the timer).

I think you are referring to tests running on real hardware. In that
case I'm sure you are right - e.g. the USB or Ethernet PHY delays will
dwarf the framework time.

I should have been clear that I am most concerned about sandbox tests
running quickly. To me that is where we have most of gain/lose.

>
> I put a bit of time measurement into run_command() and found that on my
> system at work, for p.send("the shell command to execute") was actually
> (marginally) slower on sandbox than on real HW, despite real HW being a
> 115200 baud serial port, and the code splitting the shell commands into
> chunks that are sent and waited for synchronously to avoid overflowing
> UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
> seems to be non-blocking, so I don't think termios VMIN/VTIME come into
> play (setting them to 0 made no difference), and the two raw modes took
> the same time. I meant to look into pexpect's termios settings to see if
> there was anything to tweak there, but forgot today.
>
> I did do one experiment to compare expect (the Tcl version) and pexpect.
> If I do roughly the following in both:
>
> spawn u-boot (sandbox)
> wait for prompt
> 100 times:
>     send "echo $foo\n"
>     wait for "echo $foo"
>     wait for shell prompt
> send "reset"
> wait for "reset"
> send "\n"
>
> ... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
> remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
> That's a pity. Still, I'm sure as heck not going to rewrite all this in
> Tcl:-( I wonder if something similar to pexpect but more targetted at
> simple "interactive shell" cases would remove any of that overhead.

It is possible that we should use sandbox in 'cooked' mode so that
lines an entered synchronously. The -t option might help here, or we
may need something else.

>
>>>> Also please see dm_test_usb_tree() which uses a console buffer to
>>>> check command output.
>>>
>>> OK, I'll take a look.
>>>
>>>> I wonder if we should use something like that
>>>> for simple unit tests, and use python for the more complicated
>>>> functional tests?
>>>
>>> I'm not sure that's a good idea; it'd be best to settle on a single way
>>> of executing tests so that (a) people don't have to run/implement
>>> different kinds of tests in different ways (b) we can leverage test code
>>> across as many tests as possible.
>>>
>>> (Well, doing unit tests and system level tests differently might be
>>> necessary since one calls functions and the other uses the shell "user
>>> interface", but having multiple ways of doing e.g. system tests doesn't
>>> seem like a good idea.)
>>
>> As you found with some of the tests, it is convenient/necessary to be
>> able to call U-Boot C functions in some tests. So I don't see this as
>> a one-size-fits-all solution.
>
> Yes, although I expect the split would be need-to-call-a-C-function ->
> put the code in U-Boot vs. anything else in Python via the shell prompt.
>
>> I think it is perfectly reasonable for the python framework to run the
>> existing C tests
>
> Yes.
>
>> - there is no need to rewrite them in Python.
>
> Probably not as an absolute mandate. Still, consistency would be nice.
> One advantage of having things as individual pytests is that the status
> of separate tests doesn't get aggregated; you can see that of 1000
> tests, 10 failed, rather than seeing that 1000 logical tests were
> executed as part of 25 pytests, and 2 of those failed, each only because
> of 1 subtest with the other hundred subtests passing.

Indeed. As things stand we would want the framework to 'understand'
driver model tests, and integrate the results of calling out to those,
into its own report.

>
>> Also
>> for the driver model tests - we can just run the tests from some sort
>> of python wrapper and get the best of both worlds, right?
>
> I expect so, yes. I haven't looked at those yet.
>
>> Please don't take this to indicate any lack of enthusiasm for what you
>> are doing - it's a great development and I'm sure it will help a lot!
>> We really need to unify all the tests so we can run them all in one
>> step.
>
> Thanks:-)
>
>> I just think we should aim to have the automated tests run in a few
>> seconds (let's say 5-10 at the outside). We need to make sure that the
>> python framework will allow this even when running thousands of tests.
>
> I'd be happy with something that took minutes, or longer. Given "build
> all boards" takes a very long time (and I'm sure we'd like everyone to
> do that, although I imagine few do), something of the same order of
> magnitude might even be reasonable? Thousands of test sounds like rather
> a lot; perhaps that number makes sense for tiny unit tests. I was
> thinking of testing fewer larger user-visible features that generally
> will have disk/network/... IO rates as the limiting factor. Perhaps one
> of those tests could indeed be "run 1000 tiny C-based unit tests via a
> single shell command".

We have a few hundred tests at present and our coverage is poor, so I
don't think 1000 tests is out of the question within a year or two.

Just because tests are complex does not mean they need to be slow. At
least with sandbox, even a complex test should be able to run in a few
milliseconds in most cases.

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-24 19:04               ` Simon Glass
@ 2015-11-24 21:28                 ` Stephen Warren
  2015-11-27  2:52                   ` Simon Glass
  0 siblings, 1 reply; 19+ messages in thread
From: Stephen Warren @ 2015-11-24 21:28 UTC (permalink / raw)
  To: u-boot

On 11/24/2015 12:04 PM, Simon Glass wrote:
> Hi Stephen,
>
> On 23 November 2015 at 21:44, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> On 11/23/2015 06:45 PM, Simon Glass wrote:
>>> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>> On 11/21/2015 09:49 AM, Simon Glass wrote:

>>>>> OK I got it working thank you. It is horribly slow though - do you
>>>>> know what is holding it up? For me to takes 12 seconds to run the
>>>>> (very basic) tests.
..
>> I put a bit of time measurement into run_command() and found that on my
>> system at work, for p.send("the shell command to execute") was actually
>> (marginally) slower on sandbox than on real HW, despite real HW being a
>> 115200 baud serial port, and the code splitting the shell commands into
>> chunks that are sent and waited for synchronously to avoid overflowing
>> UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
>> seems to be non-blocking, so I don't think termios VMIN/VTIME come into
>> play (setting them to 0 made no difference), and the two raw modes took
>> the same time. I meant to look into pexpect's termios settings to see if
>> there was anything to tweak there, but forgot today.
>>
>> I did do one experiment to compare expect (the Tcl version) and pexpect.
>> If I do roughly the following in both:
>>
>> spawn u-boot (sandbox)
>> wait for prompt
>> 100 times:
>>      send "echo $foo\n"
>>      wait for "echo $foo"
>>      wait for shell prompt
>> send "reset"
>> wait for "reset"
>> send "\n"
>>
>> ... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
>> remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
>> That's a pity. Still, I'm sure as heck not going to rewrite all this in
>> Tcl:-( I wonder if something similar to pexpect but more targetted at
>> simple "interactive shell" cases would remove any of that overhead.
>
> It is possible that we should use sandbox in 'cooked' mode so that
> lines an entered synchronously. The -t option might help here, or we
> may need something else.

I don't think cooked mode will work, since I believe cooked is 
line-buffered, yet when U-Boot emits the shell prompt there's no \n 
printed afterwards.

FWIW, I hacked out pexpect and replaced it with some custom code. That 
reduced by sandbox execution time from ~5.1s to ~2.3s. Execution time 
against real HW didn't seem to be affected at all. Some features like 
timeouts and complete error handling are still missing, but I don't 
think that would affect the execution time. See my github tree for the 
WIP patch.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-24 21:28                 ` Stephen Warren
@ 2015-11-27  2:52                   ` Simon Glass
  2015-11-30 17:13                     ` Stephen Warren
  0 siblings, 1 reply; 19+ messages in thread
From: Simon Glass @ 2015-11-27  2:52 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 24 November 2015 at 13:28, Stephen Warren <swarren@wwwdotorg.org> wrote:
> On 11/24/2015 12:04 PM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 23 November 2015 at 21:44, Stephen Warren <swarren@wwwdotorg.org>
>> wrote:
>>>
>>> On 11/23/2015 06:45 PM, Simon Glass wrote:
>>>>
>>>> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org>
>>>> wrote:
>>>>>
>>>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>
>
>>>>>> OK I got it working thank you. It is horribly slow though - do you
>>>>>> know what is holding it up? For me to takes 12 seconds to run the
>>>>>> (very basic) tests.
>
> ..
>
>>> I put a bit of time measurement into run_command() and found that on my
>>> system at work, for p.send("the shell command to execute") was actually
>>> (marginally) slower on sandbox than on real HW, despite real HW being a
>>> 115200 baud serial port, and the code splitting the shell commands into
>>> chunks that are sent and waited for synchronously to avoid overflowing
>>> UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
>>> seems to be non-blocking, so I don't think termios VMIN/VTIME come into
>>> play (setting them to 0 made no difference), and the two raw modes took
>>> the same time. I meant to look into pexpect's termios settings to see if
>>> there was anything to tweak there, but forgot today.
>>>
>>> I did do one experiment to compare expect (the Tcl version) and pexpect.
>>> If I do roughly the following in both:
>>>
>>> spawn u-boot (sandbox)
>>> wait for prompt
>>> 100 times:
>>>      send "echo $foo\n"
>>>      wait for "echo $foo"
>>>      wait for shell prompt
>>> send "reset"
>>> wait for "reset"
>>> send "\n"
>>>
>>> ... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
>>> remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
>>> That's a pity. Still, I'm sure as heck not going to rewrite all this in
>>> Tcl:-( I wonder if something similar to pexpect but more targetted at
>>> simple "interactive shell" cases would remove any of that overhead.
>>
>>
>> It is possible that we should use sandbox in 'cooked' mode so that
>> lines an entered synchronously. The -t option might help here, or we
>> may need something else.
>
>
> I don't think cooked mode will work, since I believe cooked is
> line-buffered, yet when U-Boot emits the shell prompt there's no \n printed
> afterwards.

Do you mean we need fflush() after writing the prompt? If so, that
should be easy to arrange. We have a similar problem with the LCD, and
added lcd_sync().

>
> FWIW, I hacked out pexpect and replaced it with some custom code. That
> reduced by sandbox execution time from ~5.1s to ~2.3s. Execution time
> against real HW didn't seem to be affected at all. Some features like
> timeouts and complete error handling are still missing, but I don't think
> that would affect the execution time. See my github tree for the WIP patch.

Interesting, that's a big improvement. I wonder if we should look at
building U-Boot with SWIG to remove all these overheads? Then the
U-Boot command line (and any other feature we want) could become a
Python class. Of course that would only work for sandbox.

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-27  2:52                   ` Simon Glass
@ 2015-11-30 17:13                     ` Stephen Warren
  2015-12-01 16:40                       ` Simon Glass
  0 siblings, 1 reply; 19+ messages in thread
From: Stephen Warren @ 2015-11-30 17:13 UTC (permalink / raw)
  To: u-boot

On 11/26/2015 07:52 PM, Simon Glass wrote:
> Hi Stephen,
>
> On 24 November 2015 at 13:28, Stephen Warren <swarren@wwwdotorg.org> wrote:
>> On 11/24/2015 12:04 PM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 23 November 2015 at 21:44, Stephen Warren <swarren@wwwdotorg.org>
>>> wrote:
>>>>
>>>> On 11/23/2015 06:45 PM, Simon Glass wrote:
>>>>>
>>>>> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org>
>>>>> wrote:
>>>>>>
>>>>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>
>>
>>>>>>> OK I got it working thank you. It is horribly slow though - do you
>>>>>>> know what is holding it up? For me to takes 12 seconds to run the
>>>>>>> (very basic) tests.
>>
>> ..
>>
>>>> I put a bit of time measurement into run_command() and found that on my
>>>> system at work, for p.send("the shell command to execute") was actually
>>>> (marginally) slower on sandbox than on real HW, despite real HW being a
>>>> 115200 baud serial port, and the code splitting the shell commands into
>>>> chunks that are sent and waited for synchronously to avoid overflowing
>>>> UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
>>>> seems to be non-blocking, so I don't think termios VMIN/VTIME come into
>>>> play (setting them to 0 made no difference), and the two raw modes took
>>>> the same time. I meant to look into pexpect's termios settings to see if
>>>> there was anything to tweak there, but forgot today.
>>>>
>>>> I did do one experiment to compare expect (the Tcl version) and pexpect.
>>>> If I do roughly the following in both:
>>>>
>>>> spawn u-boot (sandbox)
>>>> wait for prompt
>>>> 100 times:
>>>>       send "echo $foo\n"
>>>>       wait for "echo $foo"
>>>>       wait for shell prompt
>>>> send "reset"
>>>> wait for "reset"
>>>> send "\n"
>>>>
>>>> ... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
>>>> remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
>>>> That's a pity. Still, I'm sure as heck not going to rewrite all this in
>>>> Tcl:-( I wonder if something similar to pexpect but more targetted at
>>>> simple "interactive shell" cases would remove any of that overhead.
>>>
>>>
>>> It is possible that we should use sandbox in 'cooked' mode so that
>>> lines an entered synchronously. The -t option might help here, or we
>>> may need something else.
>>
>>
>> I don't think cooked mode will work, since I believe cooked is
>> line-buffered, yet when U-Boot emits the shell prompt there's no \n printed
>> afterwards.
>
> Do you mean we need fflush() after writing the prompt? If so, that
> should be easy to arrange. We have a similar problem with the LCD, and
> added lcd_sync().

Anything U-Boot does will only affect its own buffer when sending into 
the PTY.

If the test program used cooked mode for its reading side of the PTY, 
then even with fflush() on the sending side, I don't believe reading 
from the PTY would return characters until a \n appeared.

FWIW, passing "-t cooked" to U-Boot (which affects data in the other 
direction to the discussion above) (plus hacking the code to disable 
terminal-level input echoing) doesn't make any difference to the test 
timing. That's not particularly surprising, since the test program sends 
each command as a single write, so it's likely that U-Boot reads each 
command into its stdin buffers in one go anyway.

>> FWIW, I hacked out pexpect and replaced it with some custom code. That
>> reduced by sandbox execution time from ~5.1s to ~2.3s. Execution time
>> against real HW didn't seem to be affected at all. Some features like
>> timeouts and complete error handling are still missing, but I don't think
>> that would affect the execution time. See my github tree for the WIP patch.
>
> Interesting, that's a big improvement. I wonder if we should look at
> building U-Boot with SWIG to remove all these overheads? Then the
> U-Boot command line (and any other feature we want) could become a
> Python class. Of course that would only work for sandbox.

SWIG doesn't seem like a good direction; it would re-introduce different 
paths between sandbox and non-sandbox again. One of the main benefits of 
the test/py/ approach is that sandbox and real HW are treated the same.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-11-30 17:13                     ` Stephen Warren
@ 2015-12-01 16:40                       ` Simon Glass
  2015-12-01 23:24                         ` Stephen Warren
  0 siblings, 1 reply; 19+ messages in thread
From: Simon Glass @ 2015-12-01 16:40 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 30 November 2015 at 10:13, Stephen Warren <swarren@wwwdotorg.org> wrote:
>
> On 11/26/2015 07:52 PM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 24 November 2015 at 13:28, Stephen Warren <swarren@wwwdotorg.org> wrote:
>>>
>>> On 11/24/2015 12:04 PM, Simon Glass wrote:
>>>>
>>>>
>>>> Hi Stephen,
>>>>
>>>> On 23 November 2015 at 21:44, Stephen Warren <swarren@wwwdotorg.org>
>>>> wrote:
>>>>>
>>>>>
>>>>> On 11/23/2015 06:45 PM, Simon Glass wrote:
>>>>>>
>>>>>>
>>>>>> On 22 November 2015 at 10:30, Stephen Warren <swarren@wwwdotorg.org>
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>>
>>>
>>>
>>>>>>>> OK I got it working thank you. It is horribly slow though - do you
>>>>>>>> know what is holding it up? For me to takes 12 seconds to run the
>>>>>>>> (very basic) tests.
>>>
>>>
>>> ..
>>>
>>>>> I put a bit of time measurement into run_command() and found that on my
>>>>> system at work, for p.send("the shell command to execute") was actually
>>>>> (marginally) slower on sandbox than on real HW, despite real HW being a
>>>>> 115200 baud serial port, and the code splitting the shell commands into
>>>>> chunks that are sent and waited for synchronously to avoid overflowing
>>>>> UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
>>>>> seems to be non-blocking, so I don't think termios VMIN/VTIME come into
>>>>> play (setting them to 0 made no difference), and the two raw modes took
>>>>> the same time. I meant to look into pexpect's termios settings to see if
>>>>> there was anything to tweak there, but forgot today.
>>>>>
>>>>> I did do one experiment to compare expect (the Tcl version) and pexpect.
>>>>> If I do roughly the following in both:
>>>>>
>>>>> spawn u-boot (sandbox)
>>>>> wait for prompt
>>>>> 100 times:
>>>>>       send "echo $foo\n"
>>>>>       wait for "echo $foo"
>>>>>       wait for shell prompt
>>>>> send "reset"
>>>>> wait for "reset"
>>>>> send "\n"
>>>>>
>>>>> ... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
>>>>> remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
>>>>> That's a pity. Still, I'm sure as heck not going to rewrite all this in
>>>>> Tcl:-( I wonder if something similar to pexpect but more targetted at
>>>>> simple "interactive shell" cases would remove any of that overhead.
>>>>
>>>>
>>>>
>>>> It is possible that we should use sandbox in 'cooked' mode so that
>>>> lines an entered synchronously. The -t option might help here, or we
>>>> may need something else.
>>>
>>>
>>>
>>> I don't think cooked mode will work, since I believe cooked is
>>> line-buffered, yet when U-Boot emits the shell prompt there's no \n printed
>>> afterwards.
>>
>>
>> Do you mean we need fflush() after writing the prompt? If so, that
>> should be easy to arrange. We have a similar problem with the LCD, and
>> added lcd_sync().
>
>
> Anything U-Boot does will only affect its own buffer when sending into the PTY.
>
> If the test program used cooked mode for its reading side of the PTY, then even with fflush() on the sending side, I don't believe reading from the PTY would return characters until a \n appeared.

It normally works for me - do you have the PTY set up correctly?

>
> FWIW, passing "-t cooked" to U-Boot (which affects data in the other direction to the discussion above) (plus hacking the code to disable terminal-level input echoing) doesn't make any difference to the test timing. That's not particularly surprising, since the test program sends each command as a single write, so it's likely that U-Boot reads each command into its stdin buffers in one go anyway.

Yes, I'm not really sure what is going on. But we should try to avoid
unnecessary waits and delays in the test framework, and spend as much
effort as possible actually running test rather than dealing with I/O,
etc.

>
>>> FWIW, I hacked out pexpect and replaced it with some custom code. That
>>> reduced by sandbox execution time from ~5.1s to ~2.3s. Execution time
>>> against real HW didn't seem to be affected at all. Some features like
>>> timeouts and complete error handling are still missing, but I don't think
>>> that would affect the execution time. See my github tree for the WIP patch.
>>
>>
>> Interesting, that's a big improvement. I wonder if we should look at
>> building U-Boot with SWIG to remove all these overheads? Then the
>> U-Boot command line (and any other feature we want) could become a
>> Python class. Of course that would only work for sandbox.
>
>
> SWIG doesn't seem like a good direction; it would re-introduce different paths between sandbox and non-sandbox again. One of the main benefits of the test/py/ approach is that sandbox and real HW are treated the same.

At present we don't have a sensible test framework for anything other
than sandbox, so to me the main benefit is that with your setup, we
do.

The benefit of the existing sandbox tests is that they are very fast.
We could bisect for a test failure in a few minutes. I'd like to make
sure that we can still write C tests (that are called from your
framework with results integrated into it) and that the Python tests
are also fast.

How do we move this forward? Are you planing to resend the patch with
the faster approach?

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-12-01 16:40                       ` Simon Glass
@ 2015-12-01 23:24                         ` Stephen Warren
  2015-12-02 13:37                           ` Simon Glass
  0 siblings, 1 reply; 19+ messages in thread
From: Stephen Warren @ 2015-12-01 23:24 UTC (permalink / raw)
  To: u-boot

On 12/01/2015 09:40 AM, Simon Glass wrote:
...
> At present we don't have a sensible test framework for anything other
> than sandbox, so to me the main benefit is that with your setup, we
> do.
>
> The benefit of the existing sandbox tests is that they are very fast.
> We could bisect for a test failure in a few minutes. I'd like to make
> sure that we can still write C tests (that are called from your
> framework with results integrated into it) and that the Python tests
> are also fast.
>
> How do we move this forward? Are you planing to resend the patch with
> the faster approach?

I'm tempted to squash down all/most the fixes/enhancements I've made 
since posting the original into a single commit rather than sending 
follow-on enhancements, since none of it is applied yet. I can keep the 
various test implementations etc. in separate commits as a series. Does 
that seem reasonable?

I need to do some more testing/clean-up of the version that doesn't use 
pexpect. For example, I have only tested sandbox and not real HW, and 
also haven't tested (and perhaps implemented some of) the support for 
matching unexpected error messages in the console log. Still, that all 
shouldn't take too long.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [U-Boot] [PATCH] Implement pytest-based test infrastructure
  2015-12-01 23:24                         ` Stephen Warren
@ 2015-12-02 13:37                           ` Simon Glass
  0 siblings, 0 replies; 19+ messages in thread
From: Simon Glass @ 2015-12-02 13:37 UTC (permalink / raw)
  To: u-boot

Hi Stephen,

On 1 December 2015 at 16:24, Stephen Warren <swarren@wwwdotorg.org> wrote:
> On 12/01/2015 09:40 AM, Simon Glass wrote:
> ...
>>
>> At present we don't have a sensible test framework for anything other
>> than sandbox, so to me the main benefit is that with your setup, we
>> do.
>>
>> The benefit of the existing sandbox tests is that they are very fast.
>> We could bisect for a test failure in a few minutes. I'd like to make
>> sure that we can still write C tests (that are called from your
>> framework with results integrated into it) and that the Python tests
>> are also fast.
>>
>> How do we move this forward? Are you planing to resend the patch with
>> the faster approach?
>
>
> I'm tempted to squash down all/most the fixes/enhancements I've made since
> posting the original into a single commit rather than sending follow-on
> enhancements, since none of it is applied yet. I can keep the various test
> implementations etc. in separate commits as a series. Does that seem
> reasonable?

It does to me. I think ideally we should have the infrastructure in one
patch (i.e. with just a noddy/sample test). Then you can add tests in
another patch or patches.

>
> I need to do some more testing/clean-up of the version that doesn't use
> pexpect. For example, I have only tested sandbox and not real HW, and also
> haven't tested (and perhaps implemented some of) the support for matching
> unexpected error messages in the console log. Still, that all shouldn't
take
> too long.

OK sounds good.

Regards,
Simon

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2015-12-02 13:37 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-15  6:53 [U-Boot] [PATCH] Implement pytest-based test infrastructure Stephen Warren
2015-11-19 14:45 ` Simon Glass
2015-11-19 17:00   ` Stephen Warren
2015-11-19 19:09     ` Stephen Warren
2015-11-21 16:49       ` Simon Glass
2015-11-22 17:30         ` Stephen Warren
2015-11-24  1:45           ` Simon Glass
2015-11-24  2:18             ` Simon Glass
2015-11-24  4:24               ` Stephen Warren
2015-11-24  4:44             ` Stephen Warren
2015-11-24 19:04               ` Simon Glass
2015-11-24 21:28                 ` Stephen Warren
2015-11-27  2:52                   ` Simon Glass
2015-11-30 17:13                     ` Stephen Warren
2015-12-01 16:40                       ` Simon Glass
2015-12-01 23:24                         ` Stephen Warren
2015-12-02 13:37                           ` Simon Glass
2015-11-23 23:44     ` Tom Rini
2015-11-23 23:55       ` Stephen Warren

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.