* [PATCH XTF 0/4] Add monitor tests to XTF
@ 2018-12-14 13:34 Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 1/4] xtf-runner: split into logical components Petre Pircalabu
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Petre Pircalabu @ 2018-12-14 13:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Petre Pircalabu, andrew.cooper3

Extend the framework to support (simple) monitor-related tests.

Petre Pircalabu (4):
  xtf-runner: split into logical components
  xtf: Add executable test class
  xtf: Add monitor test class
  xtf: Add emul-unimpl test

 Makefile                       |   6 +-
 build/common.mk                |  22 ++-
 build/files.mk                 |   3 +
 build/gen.mk                   |  25 ++-
 build/mkinfo.py                |  84 +++++++--
 docs/all-tests.dox             |   5 +
 include/monitor/monitor.h      | 117 ++++++++++++
 monitor/Makefile               |  20 ++
 monitor/monitor.c              | 409 +++++++++++++++++++++++++++++++++++++++++
 tests/emul-unimpl/Makefile     |  15 ++
 tests/emul-unimpl/extra.cfg.in |   3 +
 tests/emul-unimpl/main.c       |  59 ++++++
 tests/emul-unimpl/monitor.c    | 310 +++++++++++++++++++++++++++++++
 xtf-runner                     | 334 ++++-----------------------------
 xtf/__init__.py                |  12 ++
 xtf/domu_test.py               | 179 ++++++++++++++++++
 xtf/exceptions.py              |   6 +
 xtf/executable_test.py         |  83 +++++++++
 xtf/logger.py                  |  23 +++
 xtf/monitor_test.py            | 132 +++++++++++++
 xtf/suite.py                   | 100 ++++++++++
 xtf/test.py                    | 139 ++++++++++++++
 xtf/utils.py                   |  17 ++
 xtf/xl_domu.py                 | 121 ++++++++++++
 24 files changed, 1900 insertions(+), 324 deletions(-)
 create mode 100644 include/monitor/monitor.h
 create mode 100644 monitor/Makefile
 create mode 100644 monitor/monitor.c
 create mode 100644 tests/emul-unimpl/Makefile
 create mode 100644 tests/emul-unimpl/extra.cfg.in
 create mode 100644 tests/emul-unimpl/main.c
 create mode 100644 tests/emul-unimpl/monitor.c
 create mode 100644 xtf/__init__.py
 create mode 100644 xtf/domu_test.py
 create mode 100644 xtf/exceptions.py
 create mode 100644 xtf/executable_test.py
 create mode 100644 xtf/logger.py
 create mode 100644 xtf/monitor_test.py
 create mode 100644 xtf/suite.py
 create mode 100644 xtf/test.py
 create mode 100644 xtf/utils.py
 create mode 100644 xtf/xl_domu.py

-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH XTF 1/4] xtf-runner: split into logical components
  2018-12-14 13:34 [PATCH XTF 0/4] Add monitor tests to XTF Petre Pircalabu
@ 2018-12-14 13:34 ` Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 2/4] xtf: Add executable test class Petre Pircalabu
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Petre Pircalabu @ 2018-12-14 13:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Petre Pircalabu, andrew.cooper3

Split the xtf-runner script file into multiple modules in order to
support multiple test types.

Features:
  - Two abstract types (TestInfo and TestInstance) which represent the
  test information (info.json) and implement the test execution,
  respectively.
    TestInfo has to implement the "all_instances" method to create the
    list of TestInstance objects.
    TestInstance has to implement the "set_up", "run" and "clean_up"
    methods.
  - TestResult - represents an XTF test result (SUCCESS, SKIP, ERROR,
  FAILURE, CRASH). The values should be kept in sync with the C code
  from report.h.
  - Dynamic test class loading. Each info.json should contain a
  "class_name" field which specifies the test info class describing the
  test. This value defaults to "xtf.domu_test.DomuTestInfo".
  - Custom test info parameters. info.json can have an "extra" field,
  implemented as a dictionary, which contains parameters specific to
  a certain test info class.
    e.g. TEST-EXTRA-INFO := arg1='--address=0x80000000 --id=4' arg2=42
  - Logger class (prints depending on the "quiet" field).
  - DomuTestInfo/DomuTestInstance. A simple test which runs a Xen DomU
  and checks the output for a specific pattern.
  - Toolstack abstraction using a wrapper class (e.g.
  xtf.xl_domu.XLDomU).
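
For illustration, a hypothetical info.json generated with these options
(test name and "extra" values invented here, not taken from the series)
might look like:

```json
{
    "name": "example-test",
    "class_name": "xtf.domu_test.DomuTestInfo",
    "category": "functional",
    "environments": ["hvm64", "pv64"],
    "variations": [],
    "extra": {
        "arg1": "--address=0x80000000 --id=4",
        "arg2": "42"
    }
}
```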

Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 build/gen.mk      |  13 ++-
 build/mkinfo.py   |  84 +++++++++++---
 xtf-runner        | 334 +++++-------------------------------------------------
 xtf/__init__.py   |  12 ++
 xtf/domu_test.py  | 179 +++++++++++++++++++++++++++++
 xtf/exceptions.py |   6 +
 xtf/logger.py     |  23 ++++
 xtf/suite.py      |  97 ++++++++++++++++
 xtf/test.py       | 139 +++++++++++++++++++++++
 xtf/xl_domu.py    | 121 ++++++++++++++++++++
 10 files changed, 687 insertions(+), 321 deletions(-)
 create mode 100644 xtf/__init__.py
 create mode 100644 xtf/domu_test.py
 create mode 100644 xtf/exceptions.py
 create mode 100644 xtf/logger.py
 create mode 100644 xtf/suite.py
 create mode 100644 xtf/test.py
 create mode 100644 xtf/xl_domu.py

diff --git a/build/gen.mk b/build/gen.mk
index 8d7a6bf..c19ca6a 100644
--- a/build/gen.mk
+++ b/build/gen.mk
@@ -27,12 +27,23 @@ else
 TEST-CFGS := $(foreach env,$(TEST-ENVS),test-$(env)-$(NAME).cfg)
 endif
 
+CLASS ?= "xtf.domu_test.DomuTestInfo"
+
 .PHONY: build
 build: $(foreach env,$(TEST-ENVS),test-$(env)-$(NAME)) $(TEST-CFGS)
 build: info.json
 
+MKINFO-OPTS := -n "$(NAME)"
+MKINFO-OPTS += -c "$(CLASS)"
+MKINFO-OPTS += -t "$(CATEGORY)"
+MKINFO-OPTS += -e "$(TEST-ENVS)"
+MKINFO-OPTS += -v "$(VARY-CFG)"
+ifneq (x$(TEST-EXTRA-INFO), x)
+MKINFO-OPTS += -x "$(TEST-EXTRA-INFO)"
+endif
+
 info.json: $(ROOT)/build/mkinfo.py FORCE
-	@$(PYTHON) $< $@.tmp "$(NAME)" "$(CATEGORY)" "$(TEST-ENVS)" "$(VARY-CFG)"
+	@$(PYTHON) $< $(MKINFO-OPTS) $@.tmp
 	@$(call move-if-changed,$@.tmp,$@)
 
 .PHONY: install install-each-env
diff --git a/build/mkinfo.py b/build/mkinfo.py
index 94891a9..afa355c 100644
--- a/build/mkinfo.py
+++ b/build/mkinfo.py
@@ -1,24 +1,74 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
+""" mkinfo.py
 
-import sys, os, json
+    Generates a test info json file.
+    The script is ran at build stage using the parameters specified
+    in the test's Makefile.
+"""
 
-# Usage: mkcfg.py $OUT $NAME $CATEGORY $ENVS $VARIATIONS
-_, out, name, cat, envs, variations = sys.argv
+import json
+import sys
+import shlex
+from   optparse import OptionParser
 
-template = {
-    "name": name,
-    "category": cat,
-    "environments": [],
-    "variations": [],
-    }
+def main():
+    """ Main entrypoint """
+    # Avoid wrapping the epilog text
+    OptionParser.format_epilog = lambda self, formatter: self.epilog
 
-if envs:
-    template["environments"] = envs.split(" ")
-if variations:
-    template["variations"] = variations.split(" ")
+    parser = OptionParser(
+        usage = "%prog [OPTIONS] out_file",
+        description = "Xen Test Framework json generation tool",
+        )
 
-open(out, "w").write(
-    json.dumps(template, indent=4, separators=(',', ': '))
-    + "\n"
-    )
+    parser.add_option("-n", "--name", action = "store",
+                      dest = "name",
+                      help = "Test name",
+                      )
+    parser.add_option("-c", "--class", action = "store",
+                      dest = "class_name",
+                      help = "Test class name",
+                      )
+    parser.add_option("-t", "--category", action = "store",
+                      dest = "cat",
+                      help = "Test category",
+                      )
+    parser.add_option("-e", "--environments", action = "store",
+                      dest = "envs",
+                      help = "Test environments (e.g hvm64, pv64 ...)",
+                      )
+    parser.add_option("-v", "--variations", action = "store",
+                      dest = "variations",
+                      help = "Test variations",
+                      )
+    parser.add_option("-x", "--extra", action = "store",
+                      dest = "extra",
+                      help = "Test specific parameters",
+                      )
+
+    opts, args = parser.parse_args()
+    template = {
+        "name": opts.name,
+        "class_name": opts.class_name,
+        "category": opts.cat,
+        "environments": [],
+        "variations": [],
+        "extra": {}
+        }
+
+    if opts.envs:
+        template["environments"] = opts.envs.split(" ")
+    if opts.variations:
+        template["variations"] = opts.variations.split(" ")
+    if opts.extra:
+        template["extra"] = dict([(e.split('=',1))
+                                 for e in shlex.split(opts.extra)])
+
+    open(args[0], "w").write(
+        json.dumps(template, indent=4, separators=(',', ': '))
+        + "\n"
+        )
+
+if __name__ == "__main__":
+    sys.exit(main())
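
The `-x`/`--extra` handling above splits the option string with shlex and
builds a dictionary from key=value pairs. That parsing step can be
sketched standalone (Python 3 syntax here, whereas the patch targets
Python 2):

```python
import shlex

def parse_extra(extra):
    """Split a TEST-EXTRA-INFO style string into a key/value dict,
    honouring shell-like quoting of the values."""
    return dict(item.split('=', 1) for item in shlex.split(extra))

# Quoted values survive as a single entry:
print(parse_extra("arg1='--address=0x80000000 --id=4' arg2=42"))
# {'arg1': '--address=0x80000000 --id=4', 'arg2': '42'}
```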
diff --git a/xtf-runner b/xtf-runner
index 172cb1d..1a4901a 100755
--- a/xtf-runner
+++ b/xtf-runner
@@ -7,154 +7,30 @@
 Currently assumes the presence and availability of the `xl` toolstack.
 """
 
-import sys, os, os.path as path
+import os
+import sys
 
 from optparse import OptionParser
-from subprocess import Popen, PIPE, call as subproc_call
+from subprocess import Popen, PIPE
 
-try:
-    import json
-except ImportError:
-    import simplejson as json
-
-# All results of a test, keep in sync with C code report.h.
-# Notes:
-#  - WARNING is not a result on its own.
-#  - CRASH isn't known to the C code, but covers all cases where a valid
-#    result was not found.
-all_results = ['SUCCESS', 'SKIP', 'ERROR', 'FAILURE', 'CRASH']
+from xtf import default_categories, non_default_categories, all_categories
+from xtf import pv_environments, hvm_environments, all_environments
+from xtf.exceptions import RunnerError
+from xtf.logger import Logger
+from xtf.suite import get_all_test_info, gather_all_test_info
+from xtf.test import TestResult
 
 # Return the exit code for different states.  Avoid using 1 and 2 because
 # python interpreter uses them -- see document for sys.exit.
 def exit_code(state):
     """ Convert a test result to an xtf-runner exit code. """
-    return { "SUCCESS": 0,
-             "SKIP":    3,
-             "ERROR":   4,
-             "FAILURE": 5,
-             "CRASH":   6,
+    return { TestResult.SUCCESS: 0,
+             TestResult.SKIP:    3,
+             TestResult.ERROR:   4,
+             TestResult.FAILURE: 5,
+             TestResult.CRASH:   6,
     }[state]
 
-# All test categories
-default_categories     = set(("functional", "xsa"))
-non_default_categories = set(("special", "utility", "in-development"))
-all_categories         = default_categories | non_default_categories
-
-# All test environments
-pv_environments        = set(("pv64", "pv32pae"))
-hvm_environments       = set(("hvm64", "hvm32pae", "hvm32pse", "hvm32"))
-all_environments       = pv_environments | hvm_environments
-
-
-class RunnerError(Exception):
-    """ Errors relating to xtf-runner itself """
-
-class TestInstance(object):
-    """ Object representing a single test. """
-
-    def __init__(self, arg):
-        """ Parse and verify 'arg' as a test instance. """
-        self.env, self.name, self.variation = parse_test_instance_string(arg)
-
-        if self.env is None:
-            raise RunnerError("No environment for '%s'" % (arg, ))
-
-        if self.variation is None and get_all_test_info()[self.name].variations:
-            raise RunnerError("Test '%s' has variations, but none specified"
-                              % (self.name, ))
-
-    def vm_name(self):
-        """ Return the VM name as `xl` expects it. """
-        return repr(self)
-
-    def cfg_path(self):
-        """ Return the path to the `xl` config file for this test. """
-        return path.join("tests", self.name, repr(self) + ".cfg")
-
-    def __repr__(self):
-        if not self.variation:
-            return "test-%s-%s" % (self.env, self.name)
-        else:
-            return "test-%s-%s~%s" % (self.env, self.name, self.variation)
-
-    def __hash__(self):
-        return hash(repr(self))
-
-    def __cmp__(self, other):
-        return cmp(repr(self), repr(other))
-
-
-class TestInfo(object):
-    """ Object representing a tests info.json, in a more convenient form. """
-
-    def __init__(self, test_json):
-        """Parse and verify 'test_json'.
-
-        May raise KeyError, TypeError or ValueError.
-        """
-
-        name = test_json["name"]
-        if not isinstance(name, basestring):
-            raise TypeError("Expected string for 'name', got '%s'"
-                            % (type(name), ))
-        self.name = name
-
-        cat = test_json["category"]
-        if not isinstance(cat, basestring):
-            raise TypeError("Expected string for 'category', got '%s'"
-                            % (type(cat), ))
-        if not cat in all_categories:
-            raise ValueError("Unknown category '%s'" % (cat, ))
-        self.cat = cat
-
-        envs = test_json["environments"]
-        if not isinstance(envs, list):
-            raise TypeError("Expected list for 'environments', got '%s'"
-                            % (type(envs), ))
-        if not envs:
-            raise ValueError("Expected at least one environment")
-        for env in envs:
-            if not env in all_environments:
-                raise ValueError("Unknown environments '%s'" % (env, ))
-        self.envs = envs
-
-        variations = test_json["variations"]
-        if not isinstance(variations, list):
-            raise TypeError("Expected list for 'variations', got '%s'"
-                            % (type(variations), ))
-        self.variations = variations
-
-    def all_instances(self, env_filter = None, vary_filter = None):
-        """Return a list of TestInstances, for each supported environment.
-        Optionally filtered by env_filter.  May return an empty list if
-        the filter doesn't match any supported environment.
-        """
-
-        if env_filter:
-            envs = set(env_filter).intersection(self.envs)
-        else:
-            envs = self.envs
-
-        if vary_filter:
-            variations = set(vary_filter).intersection(self.variations)
-        else:
-            variations = self.variations
-
-        res = []
-        if variations:
-            for env in envs:
-                for vary in variations:
-                    res.append(TestInstance("test-%s-%s~%s"
-                                            % (env, self.name, vary)))
-        else:
-            res = [ TestInstance("test-%s-%s" % (env, self.name))
-                    for env in envs ]
-        return res
-
-    def __repr__(self):
-        return "TestInfo(%s)" % (self.name, )
-
-
 def parse_test_instance_string(arg):
     """Parse a test instance string.
 
@@ -221,47 +97,6 @@ def parse_test_instance_string(arg):
 
     return env, name, variation
 
-
-# Cached test json from disk
-_all_test_info = {}
-
-def get_all_test_info():
-    """ Open and collate each info.json """
-
-    # Short circuit if already cached
-    if _all_test_info:
-        return _all_test_info
-
-    for test in os.listdir("tests"):
-
-        info_file = None
-        try:
-
-            # Ignore directories which don't have a info.json inside them
-            try:
-                info_file = open(path.join("tests", test, "info.json"))
-            except IOError:
-                continue
-
-            # Ignore tests which have bad JSON
-            try:
-                test_info = TestInfo(json.load(info_file))
-
-                if test_info.name != test:
-                    continue
-
-            except (ValueError, KeyError, TypeError):
-                continue
-
-            _all_test_info[test] = test_info
-
-        finally:
-            if info_file:
-                info_file.close()
-
-    return _all_test_info
-
-
 def tests_from_selection(cats, envs, tests):
     """Given a selection of possible categories, environment and tests, return
     all tests within the provided parameters.
@@ -433,136 +268,25 @@ def list_tests(opts):
     for sel in opts.selection:
         print sel
 
-
-def interpret_result(logline):
-    """ Interpret the final log line of a guest for a result """
-
-    if not "Test result:" in logline:
-        return "CRASH"
-
-    for res in all_results:
-        if res in logline:
-            return res
-
-    return "CRASH"
-
-
-def run_test_console(opts, test):
-    """ Run a specific, obtaining results via xenconsole """
-
-    cmd = ['xl', 'create', '-p', test.cfg_path()]
-    if not opts.quiet:
-        print "Executing '%s'" % (" ".join(cmd), )
-
-    create = Popen(cmd, stdout = PIPE, stderr = PIPE)
-    _, stderr = create.communicate()
-
-    if create.returncode:
-        if opts.quiet:
-            print "Executing '%s'" % (" ".join(cmd), )
-        print stderr
-        raise RunnerError("Failed to create VM")
-
-    cmd = ['xl', 'console', test.vm_name()]
-    if not opts.quiet:
-        print "Executing '%s'" % (" ".join(cmd), )
-
-    console = Popen(cmd, stdout = PIPE)
-
-    cmd = ['xl', 'unpause', test.vm_name()]
-    if not opts.quiet:
-        print "Executing '%s'" % (" ".join(cmd), )
-
-    rc = subproc_call(cmd)
-    if rc:
-        if opts.quiet:
-            print "Executing '%s'" % (" ".join(cmd), )
-        raise RunnerError("Failed to unpause VM")
-
-    stdout, _ = console.communicate()
-
-    if console.returncode:
-        raise RunnerError("Failed to obtain VM console")
-
-    lines = stdout.splitlines()
-
-    if lines:
-        if not opts.quiet:
-            print "\n".join(lines)
-            print ""
-
-    else:
-        return "CRASH"
-
-    return interpret_result(lines[-1])
-
-
-def run_test_logfile(opts, test):
-    """ Run a specific test, obtaining results from a logfile """
-
-    logpath = path.join(opts.logfile_dir,
-                        opts.logfile_pattern.replace("%s", str(test)))
-
-    if not opts.quiet:
-        print "Using logfile '%s'" % (logpath, )
-
-    fd = os.open(logpath, os.O_CREAT | os.O_RDONLY, 0644)
-    logfile = os.fdopen(fd)
-    logfile.seek(0, os.SEEK_END)
-
-    cmd = ['xl', 'create', '-F', test.cfg_path()]
-    if not opts.quiet:
-        print "Executing '%s'" % (" ".join(cmd), )
-
-    guest = Popen(cmd, stdout = PIPE, stderr = PIPE)
-
-    _, stderr = guest.communicate()
-
-    if guest.returncode:
-        if opts.quiet:
-            print "Executing '%s'" % (" ".join(cmd), )
-        print stderr
-        raise RunnerError("Failed to run test")
-
-    line = ""
-    for line in logfile.readlines():
-
-        line = line.rstrip()
-        if not opts.quiet:
-            print line
-
-        if "Test result:" in line:
-            print ""
-            break
-
-    logfile.close()
-
-    return interpret_result(line)
-
-
 def run_tests(opts):
     """ Run tests """
 
     tests = opts.selection
-    if not len(tests):
+    if not tests:
         raise RunnerError("No tests to run")
 
-    run_test = { "console": run_test_console,
-                 "logfile": run_test_logfile,
-    }.get(opts.results_mode, None)
-
-    if run_test is None:
-        raise RunnerError("Unknown mode '%s'" % (opts.mode, ))
-
-    rc = all_results.index('SUCCESS')
+    rc = TestResult()
     results = []
 
     for test in tests:
+        res = TestResult()
+        test.set_up(opts, res)
+        if res == TestResult.SUCCESS:
+            test.run(res)
+        test.clean_up(res)
 
-        res = run_test(opts, test)
-        res_idx = all_results.index(res)
-        if res_idx > rc:
-            rc = res_idx
+        if res > rc:
+            rc = res
 
         results.append(res)
 
@@ -571,7 +295,7 @@ def run_tests(opts):
     for test, res in zip(tests, results):
         print "%-40s %s" % (test, res)
 
-    return exit_code(all_results[rc])
+    return exit_code(rc)
 
 
 def main():
@@ -581,7 +305,7 @@ def main():
     sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1)
 
     # Normalise $CWD to the directory this script is in
-    os.chdir(path.dirname(path.abspath(sys.argv[0])))
+    os.chdir(os.path.dirname(os.path.abspath(sys.argv[0])))
 
     # Avoid wrapping the epilog text
     OptionParser.format_epilog = lambda self, formatter: self.epilog
@@ -715,12 +439,16 @@ def main():
     opts, args = parser.parse_args()
     opts.args = args
 
+    Logger().initialize(opts)
+
+    gather_all_test_info()
+
     opts.selection = interpret_selection(opts)
 
     if opts.list_tests:
         return list_tests(opts)
-    else:
-        return run_tests(opts)
+
+    return run_tests(opts)
 
 
 if __name__ == "__main__":
diff --git a/xtf/__init__.py b/xtf/__init__.py
new file mode 100644
index 0000000..889c1d5
--- /dev/null
+++ b/xtf/__init__.py
@@ -0,0 +1,12 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# All test categories
+default_categories     = set(("functional", "xsa"))
+non_default_categories = set(("special", "utility", "in-development"))
+all_categories         = default_categories | non_default_categories
+
+# All test environments
+pv_environments        = set(("pv64", "pv32pae"))
+hvm_environments       = set(("hvm64", "hvm32pae", "hvm32pse", "hvm32"))
+all_environments       = pv_environments | hvm_environments
diff --git a/xtf/domu_test.py b/xtf/domu_test.py
new file mode 100644
index 0000000..4052167
--- /dev/null
+++ b/xtf/domu_test.py
@@ -0,0 +1,179 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+Basic DomU test
+Runs a domain and checks the output for a specific pattern.
+"""
+
+import os
+import StringIO
+
+from xtf import all_environments
+from xtf.exceptions import RunnerError
+from xtf.logger import Logger
+from xtf.test import TestInstance, TestInfo, TestResult
+from xtf.xl_domu import XLDomU
+
+class DomuTestInstance(TestInstance):
+    """ Object representing a single DOMU test. """
+
+    def __init__(self, env, name, variation):
+        super(DomuTestInstance, self).__init__(name)
+
+        self.env, self.variation = env, variation
+
+        if self.env is None:
+            raise RunnerError("No environment for '%s'" % (self.name, ))
+
+        self.domu = XLDomU(self.cfg_path())
+        self.results_mode = 'console'
+        self.logpath = None
+        if not Logger().quiet:
+            self.output = StringIO.StringIO()
+        else:
+            self.output = None
+
+    def vm_name(self):
+        """ Return the VM name as `xl` expects it. """
+        return repr(self)
+
+    def cfg_path(self):
+        """ Return the path to the `xl` config file for this test. """
+        return os.path.join("tests", self.name, repr(self) + ".cfg")
+
+    def __repr__(self):
+        if self.variation:
+            return "test-%s-%s~%s" % (self.env, self.name, self.variation)
+        return "test-%s-%s" % (self.env, self.name)
+
+    def set_up(self, opts, result):
+        self.results_mode = opts.results_mode
+        if self.results_mode not in ['console', 'logfile']:
+            raise RunnerError("Unknown mode '%s'" % (opts.results_mode, ))
+
+        self.logpath = os.path.join(opts.logfile_dir,
+                          opts.logfile_pattern.replace("%s", str(self)))
+        self.domu.create()
+
+    def run(self, result):
+        """Executes the test instance"""
+        run_test = { "console": self._run_test_console,
+                     "logfile": self._run_test_logfile,
+        }.get(self.results_mode, None)
+
+        run_test(result)
+
+    def clean_up(self, result):
+        if self.output:
+            self.output.close()
+
+        # wait for completion
+        if not self.domu.cleanup():
+            result.set(TestResult.CRASH)
+
+    def _run_test_console(self, result):
+        """ Run a specific test, obtaining results via xenconsole """
+
+        console = self.domu.console(self.output)
+
+        # start the domain
+        self.domu.unpause()
+        value = console.expect(self.result_pattern())
+
+        if self.output is not None:
+            Logger().log(self.output.getvalue())
+
+        result.set(value)
+
+    def _run_test_logfile(self, result):
+        """ Run a specific test, obtaining results from a logfile """
+
+        Logger().log("Using logfile '%s'" % (self.logpath, ))
+
+        fd = os.open(self.logpath, os.O_CREAT | os.O_RDONLY, 0644)
+        logfile = os.fdopen(fd)
+        logfile.seek(0, os.SEEK_END)
+
+        self.domu.unpause()
+
+        # wait for completion
+        if not self.domu.cleanup():
+            result.set(TestResult.CRASH)
+
+        line = ""
+        for line in logfile.readlines():
+            line = line.rstrip()
+            Logger().log(line)
+
+            if "Test result:" in line:
+                print ""
+                break
+
+        logfile.close()
+
+        result.set(TestInstance.parse_result(line))
+
+
+class DomuTestInfo(TestInfo):
+    """ Object representing a tests info.json, in a more convenient form. """
+
+    def __init__(self, test_json):
+        """Parse and verify 'test_json'.
+
+        May raise KeyError, TypeError or ValueError.
+        """
+
+        super(DomuTestInfo, self).__init__(test_json)
+        self.instance_class = DomuTestInstance
+
+        envs = test_json["environments"]
+        if not isinstance(envs, list):
+            raise TypeError("Expected list for 'environments', got '%s'"
+                            % (type(envs), ))
+        if not envs:
+            raise ValueError("Expected at least one environment")
+        for env in envs:
+            if env not in all_environments:
+                raise ValueError("Unknown environments '%s'" % (env, ))
+        self.envs = envs
+
+        variations = test_json["variations"]
+        if not isinstance(variations, list):
+            raise TypeError("Expected list for 'variations', got '%s'"
+                            % (type(variations), ))
+        self.variations = variations
+
+        extra = test_json["extra"]
+        if not isinstance(extra, dict):
+            raise TypeError("Expected dict for 'extra', got '%s'"
+                            % (type(extra), ))
+        self.extra = extra
+
+    def all_instances(self, env_filter = None, vary_filter = None):
+        """Return a list of TestInstances, for each supported environment.
+        Optionally filtered by env_filter.  May return an empty list if
+        the filter doesn't match any supported environment.
+        """
+
+        if env_filter:
+            envs = set(env_filter).intersection(self.envs)
+        else:
+            envs = self.envs
+
+        if vary_filter:
+            variations = set(vary_filter).intersection(self.variations)
+        else:
+            variations = self.variations
+
+        res = []
+        if variations:
+            for env in envs:
+                for vary in variations:
+                    res.append(self.instance_class(env, self.name, vary))
+        else:
+            res = [ self.instance_class(env, self.name, None)
+                    for env in envs ]
+        return res
+
+    def __repr__(self):
+        return "%s(%s)" % (self.__class__.__name__, self.name, )
diff --git a/xtf/exceptions.py b/xtf/exceptions.py
new file mode 100644
index 0000000..26801a2
--- /dev/null
+++ b/xtf/exceptions.py
@@ -0,0 +1,6 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+class RunnerError(Exception):
+    """ Errors relating to xtf-runner itself """
+
diff --git a/xtf/logger.py b/xtf/logger.py
new file mode 100644
index 0000000..ec279e5
--- /dev/null
+++ b/xtf/logger.py
@@ -0,0 +1,23 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+class Singleton(type):
+    """Singleton meta class"""
+    _instances = {}
+    def __call__(cls, *args, **kwargs):
+        if cls not in cls._instances:
+            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
+        return cls._instances[cls]
+
+class Logger(object):
+    """Logger class for XTF."""
+    __metaclass__ = Singleton
+
+    def initialize(self, opts):
+        """Initialize logger"""
+        self.quiet = opts.quiet
+
+    def log(self, message):
+        """Display the message"""
+        if not self.quiet:
+            print message
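
The Logger above uses a metaclass-based singleton, written with the
Python 2 `__metaclass__` attribute. The same pattern in Python 3 syntax,
purely for illustration:

```python
class Singleton(type):
    """Metaclass that caches one instance per class."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Logger(metaclass=Singleton):
    """Every Logger() call returns the same object, so an option set
    once (e.g. quiet) is visible to all users of the logger."""
    quiet = False

    def log(self, message):
        if not self.quiet:
            print(message)

assert Logger() is Logger()   # one shared instance
```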
diff --git a/xtf/suite.py b/xtf/suite.py
new file mode 100644
index 0000000..ad7d30f
--- /dev/null
+++ b/xtf/suite.py
@@ -0,0 +1,97 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os, os.path as path
+import sys
+import imp
+
+try:
+    import json
+except ImportError:
+    import simplejson as json
+
+from xtf.exceptions import RunnerError
+
+# Cached test json from disk
+_all_test_info = {}
+
+def _load_module(name):
+    """Loads module dynamically"""
+    components = name.split(".")
+    module_path = sys.path
+
+    for index in xrange(len(components)):
+        module_name = components[index]
+        module = sys.modules.get(module_name)
+        if module:
+            if hasattr(module, '__path__'):
+                module_path = module.__path__
+            continue
+
+        try:
+            mod_file, filename, description = imp.find_module(module_name,
+                                                              module_path)
+            module = imp.load_module(module_name, mod_file, filename,
+                                     description)
+            if hasattr(module, '__path__'):
+                module_path = module.__path__
+        finally:
+            if mod_file:
+                mod_file.close()
+
+    return module
+
+def _load_class(name):
+    """Loads python class dynamically"""
+    components = name.split(".")
+    class_name = components[-1]
+    module = _load_module(".".join(components[:-1]))
+
+    try:
+        cls = module.__dict__[class_name]
+        return cls
+    except KeyError:
+        return None
+
+
+def get_all_test_info():
+    """ Returns all available test info instances """
+
+    if not _all_test_info:
+        raise RunnerError("No available test info")
+
+    return _all_test_info
+
+
+def gather_all_test_info():
+    """ Open and collate each info.json """
+
+    for test in os.listdir("tests"):
+
+        info_file = None
+        try:
+
+            # Ignore directories which don't have a info.json inside them
+            try:
+                info_file = open(path.join("tests", test, "info.json"))
+            except IOError:
+                continue
+
+            # Ignore tests which have bad JSON
+            try:
+                json_info = json.load(info_file)
+                test_class = _load_class(json_info["class_name"])
+                test_info = test_class(json_info)
+
+                if test_info.name != test:
+                    continue
+
+            except (ValueError, KeyError, TypeError):
+                continue
+
+            _all_test_info[test] = test_info
+
+        finally:
+            if info_file:
+                info_file.close()
+
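
The `_load_class` helper above walks the dotted path component by
component with the `imp` module (deprecated in Python 3). An equivalent
lookup can be sketched with `importlib` - not part of the patch, just
the modern counterpart:

```python
import importlib

def load_class(name):
    """Resolve a dotted 'pkg.module.ClassName' string to the class
    object, mirroring what xtf.suite._load_class does with imp."""
    module_name, _, class_name = name.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name, None)

# e.g. load_class("json.decoder.JSONDecoder") yields the class object,
# and a missing attribute yields None, like the KeyError path above.
```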
diff --git a/xtf/test.py b/xtf/test.py
new file mode 100644
index 0000000..4440b47
--- /dev/null
+++ b/xtf/test.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+"""
+Base XTF Test Classess
+"""
+import pexpect
+from   xtf import all_categories
+
+class TestResult(object):
+    """
+    Test result wrapper class
+    All results of a test, keep in sync with C code report.h.
+    Notes:
+     - WARNING is not a result on its own.
+     - CRASH isn't known to the C code, but covers all cases where a valid
+       result was not found.
+    """
+
+    SUCCESS = 'SUCCESS'
+    SKIP = 'SKIP'
+    ERROR = 'ERROR'
+    FAILURE = 'FAILURE'
+    CRASH = 'CRASH'
+
+    all_results = [SUCCESS, SKIP, ERROR, FAILURE, CRASH]
+
+    def __init__(self, value=SUCCESS):
+        self.set(value)
+
+    def __cmp__(self, other):
+        if isinstance(other, TestResult):
+            return cmp(TestResult.all_results.index(self._value),
+                   TestResult.all_results.index(repr(other)))
+        elif isinstance(other, (str, unicode)):
+            if other in TestResult.all_results:
+                return cmp(TestResult.all_results.index(self._value),
+                       TestResult.all_results.index(other))
+
+        raise ValueError
+
+    def __repr__(self):
+        return self._value
+
+    def __hash__(self):
+        return hash(repr(self))
+
+    def set(self, value):
+        """
+        Set the result from either a string value or an index.
+        If the index is out of bounds, the result is initialized
+        to CRASH.
+        """
+        if isinstance(value, (int, long)):
+            try:
+                self._value = TestResult.all_results[value]
+            except IndexError:
+                self._value = TestResult.CRASH
+        else:
+            if value in TestResult.all_results:
+                self._value = value
+            else:
+                self._value = TestResult.CRASH
+
+
+class TestInstance(object):
+    """Base class for a XTF Test Instance object"""
+
+    @staticmethod
+    def parse_result(logline):
+        """ Interpret the final log line of a guest for a result """
+
+        if "Test result:" not in logline:
+            return TestResult.CRASH
+
+        for res in TestResult.all_results:
+            if res in logline:
+                return res
+
+        return TestResult.CRASH
+
+    @staticmethod
+    def result_pattern():
+        """Returns the expected test result patterns."""
+        return ['Test result: ' + x for x in TestResult.all_results] + \
+               [pexpect.TIMEOUT, pexpect.EOF]
+
+    def __init__(self, name):
+        self.name = name
+
+    def __hash__(self):
+        return hash(repr(self))
+
+    def __cmp__(self, other):
+        return cmp(repr(self), repr(other))
+
+    def set_up(self, opts, result):
+        """Sets up the necessary resources needed to run the test."""
+        raise NotImplementedError
+
+    def run(self, result):
+        """Runs the Test Instance."""
+        raise NotImplementedError
+
+    def clean_up(self, result):
+        """Cleans up the test data."""
+        raise NotImplementedError
+
+
+class TestInfo(object):
+    """Base class for a XTF Test Info object.
+    It represents a tests info.json, in a more convenient form.
+    """
+
+    def __init__(self, test_json):
+        """Parse and verify 'test_json'.
+
+        May raise KeyError, TypeError or ValueError.
+        """
+        name = test_json["name"]
+        if not isinstance(name, basestring):
+            raise TypeError("Expected string for 'name', got '%s'"
+                            % (type(name), ))
+        self.name = name
+
+        cat = test_json["category"]
+        if not isinstance(cat, basestring):
+            raise TypeError("Expected string for 'category', got '%s'"
+                            % (type(cat), ))
+        if cat not in all_categories:
+            raise ValueError("Unknown category '%s'" % (cat, ))
+        self.cat = cat
+
+    def all_instances(self, env_filter = None, vary_filter = None):
+        """Return a list of TestInstances, one for each supported environment,
+        optionally filtered by env_filter and vary_filter.  May return an
+        empty list if the filters don't match any supported environment.
+        """
+        raise NotImplementedError
diff --git a/xtf/xl_domu.py b/xtf/xl_domu.py
new file mode 100644
index 0000000..f76dbfe
--- /dev/null
+++ b/xtf/xl_domu.py
@@ -0,0 +1,121 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+"""XL DomU class"""
+########################################################################
+# Imports
+########################################################################
+
+import imp
+import os.path
+import time
+
+from   subprocess import Popen, PIPE
+
+import pexpect
+
+from   xtf.exceptions import RunnerError
+from   xtf.logger import Logger
+
+########################################################################
+# Functions
+########################################################################
+
+def _run_cmd(args, quiet=False):
+    """Execute command using Popen"""
+    proc = Popen(args, stdout = PIPE, stderr = PIPE)
+    if not quiet:
+        Logger().log("Executing '%s'" % (" ".join(args), ))
+    stdout, stderr = proc.communicate()
+    return proc.returncode, stdout, stderr
+
+def _xl_create(xl_conf_file, paused, fg):
+    """Creates a XEN Domain using the XL toolstack"""
+    args = ['xl', 'create']
+    if paused:
+        args.append('-p')
+    if fg:
+        args.append('-F')
+    args.append(xl_conf_file)
+    ret, stdout, stderr = _run_cmd(args)
+    if ret:
+        raise RunnerError("_xl_create", ret, stdout, stderr)
+
+def _xl_dom_id(xl_dom_name):
+    """Returns the ID of a XEN domain specified by name"""
+    args = ['xl', 'domid', xl_dom_name]
+    ret, stdout, stderr = _run_cmd(args)
+    if ret:
+        raise RunnerError("_xl_dom_id", ret, stdout, stderr)
+    return long(stdout)
+
+def _xl_destroy(domid):
+    """Destroy the domain specified by domid"""
+    args = ['xl', 'destroy', str(domid)]
+    ret, stdout, stderr = _run_cmd(args)
+    if ret:
+        raise RunnerError("_xl_destroy", ret, stdout, stderr)
+
+def _xl_unpause(domid):
+    """Unpauses the domain specified by domid"""
+    args = ['xl', 'unpause', str(domid)]
+    ret, stdout, stderr = _run_cmd(args)
+    if ret:
+        raise RunnerError("_xl_unpause", ret, stdout, stderr)
+
+def _is_alive(domid):
+    """Checks if the domain is alive using xenstore."""
+    args = ['xenstore-exists', os.path.join('/local/domain', str(domid))]
+    ret = _run_cmd(args, True)[0]
+    return ret == 0
+
+
+########################################################################
+# Classes
+########################################################################
+
+class XLDomU(object):
+    """XEN DomU implementation using the XL toolstack"""
+
+    def __init__(self, conf):
+        super(XLDomU, self).__init__()
+        self.__xl_conf_file = conf
+        self.dom_id = 0
+        code = open(conf)
+        self.__config = imp.new_module(conf)
+        exec code in self.__config.__dict__
+        self.__console = None
+
+    def create(self, paused=True, fg=False):
+        """Creates the XEN domain."""
+        _xl_create(self.__xl_conf_file, paused, fg)
+        self.dom_id = _xl_dom_id(self.__config.name)
+
+    def cleanup(self, timeout=10):
+        """Destroys the domain."""
+
+        if self.dom_id == 0:
+            return True
+
+        for _ in xrange(timeout):
+            if not _is_alive(self.dom_id):
+                return True
+            time.sleep(1)
+
+        if _is_alive(self.dom_id):
+            _xl_destroy(self.dom_id)
+            self.dom_id = 0
+            return False
+
+        return True
+
+    def unpause(self):
+        """Unpauses the domain."""
+        _xl_unpause(self.dom_id)
+
+    def console(self, logfile=None):
+        """Creates the domain_console handler."""
+        if self.__console is None:
+            self.__console = pexpect.spawn('xl', ['console', str(self.dom_id)],
+                    logfile=logfile)
+        return self.__console
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH XTF 2/4] xtf: Add executable test class
  2018-12-14 13:34 [PATCH XTF 0/4] Add monitor tests to XTF Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 1/4] xtf-runner: split into logical components Petre Pircalabu
@ 2018-12-14 13:34 ` Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 3/4] xtf: Add monitor " Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 4/4] xtf: Add emul-unimpl test Petre Pircalabu
  3 siblings, 0 replies; 6+ messages in thread
From: Petre Pircalabu @ 2018-12-14 13:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Petre Pircalabu, andrew.cooper3

The Executable test class runs on the host (dom0). The class spawns a
process and searches the program's output (stdio) for a specific pattern.
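The idea can be sketched with just the standard library (the patch itself
drives the child process with pexpect; `run_and_match` and the `echo`
invocation below are illustrative helpers, not part of the patch):

```python
import re
import subprocess

def run_and_match(cmd, args, patterns):
    """Spawn cmd and return the index of the first pattern found in its
    output, or None when nothing matches (analogous to a CRASH result)."""
    proc = subprocess.run([cmd] + args, capture_output=True, text=True)
    for i, pat in enumerate(patterns):
        if re.search(pat, proc.stdout):
            return i
    return None

# A trivial "test" whose output reports SUCCESS.
idx = run_and_match("echo", ["Test result: SUCCESS"],
                    ["Test result: SUCCESS", "Test result: FAILURE"])
```

The real class additionally treats pexpect's TIMEOUT and EOF sentinels as
match candidates, so a hung or exited guest maps to a failure.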

Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 xtf/__init__.py        |  2 +-
 xtf/executable_test.py | 83 ++++++++++++++++++++++++++++++++++++++++++++++++++
 xtf/suite.py           |  5 ++-
 3 files changed, 88 insertions(+), 2 deletions(-)
 create mode 100644 xtf/executable_test.py

diff --git a/xtf/__init__.py b/xtf/__init__.py
index 889c1d5..07c269a 100644
--- a/xtf/__init__.py
+++ b/xtf/__init__.py
@@ -3,7 +3,7 @@
 
 # All test categories
 default_categories     = set(("functional", "xsa"))
-non_default_categories = set(("special", "utility", "in-development"))
+non_default_categories = set(("special", "utility", "in-development", "host"))
 all_categories         = default_categories | non_default_categories
 
 # All test environments
diff --git a/xtf/executable_test.py b/xtf/executable_test.py
new file mode 100644
index 0000000..31aa6e4
--- /dev/null
+++ b/xtf/executable_test.py
@@ -0,0 +1,83 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+"""
+Executable test classes
+
+Spawns a process and waits for a specific pattern
+"""
+
+import StringIO
+import pexpect
+
+from xtf.logger import Logger
+from xtf.test import TestInstance, TestInfo, TestResult
+
+class ExecutableTestInstance(TestInstance):
+    """Executable Test Instance"""
+    def __init__(self, name, cmd, args, pattern):
+        super(ExecutableTestInstance, self).__init__(name)
+
+        self._cmd = cmd
+        self._args = [x.encode('utf-8') for x in args]
+        self._pattern = [x.encode('utf-8') for x in pattern]
+        self._proc = None
+        self.env = "dom0"
+        self.output = StringIO.StringIO()
+
+    def __repr__(self):
+        return "test-%s-%s" %(self.env, self.name)
+
+    def wait_pattern(self, pattern):
+        """Expect the pattern given as parameter."""
+        return self._proc.expect(pattern + [pexpect.TIMEOUT, pexpect.EOF])
+
+    def set_up(self, opts, result):
+        self._proc = pexpect.spawn(self._cmd, self._args, logfile = self.output)
+        Logger().log("Executing '%s %s'" % (self._cmd, " ".join(self._args)))
+
+        if self._proc is None:
+            result.set(TestResult.ERROR)
+
+    def run(self, result):
+        """Executes the test instance"""
+        if self.wait_pattern(self._pattern) >= len(self._pattern):
+            result.set(TestResult.FAILURE)
+            return
+
+        result.set(TestResult.SUCCESS)
+
+    def clean_up(self, result):
+        if self.output:
+            Logger().log(self.output.getvalue())
+            self.output.close()
+
+class ExecutableTestInfo(TestInfo):
+    """ Object representing a tests info.json, in a more convenient form. """
+
+    def __init__(self, test_json):
+        super(ExecutableTestInfo, self).__init__(test_json)
+        self.instance_class = ExecutableTestInstance
+
+        cmd = test_json["cmd"]
+        if not isinstance(cmd, (str, unicode)):
+            raise TypeError("Expected string for 'cmd', got '%s'"
+                            % (type(cmd), ))
+        self.cmd = cmd
+
+        args = test_json["args"]
+        if not isinstance(args, list):
+            raise TypeError("Expected list for 'args', got '%s'"
+                            % (type(args), ))
+        self.args = args
+
+        pattern = test_json["pattern"]
+        if not isinstance(pattern, list):
+            raise TypeError("Expected list for 'pattern', got '%s'"
+                            % (type(pattern), ))
+        self.pattern = pattern
+
+    def all_instances(self, env_filter = None, vary_filter = None):
+        """Returns an ExecutableTestInstance object"""
+        return [self.instance_class(self.name, self.cmd, self.args,
+                                    self.pattern),]
diff --git a/xtf/suite.py b/xtf/suite.py
index ad7d30f..2e0727c 100644
--- a/xtf/suite.py
+++ b/xtf/suite.py
@@ -75,7 +75,10 @@ def gather_all_test_info():
             try:
                 info_file = open(path.join("tests", test, "info.json"))
             except IOError:
-                continue
+                try:
+                    info_file = open(path.join("tests", test, "host.json"))
+                except IOError:
+                    continue
 
             # Ignore tests which have bad JSON
             try:
-- 
2.7.4



* [PATCH XTF 3/4] xtf: Add monitor test class
  2018-12-14 13:34 [PATCH XTF 0/4] Add monitor tests to XTF Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 1/4] xtf-runner: split into logical components Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 2/4] xtf: Add executable test class Petre Pircalabu
@ 2018-12-14 13:34 ` Petre Pircalabu
  2018-12-14 13:34 ` [PATCH XTF 4/4] xtf: Add emul-unimpl test Petre Pircalabu
  3 siblings, 0 replies; 6+ messages in thread
From: Petre Pircalabu @ 2018-12-14 13:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Petre Pircalabu, andrew.cooper3

This class starts, alongside the domain, a monitor application which
opens an event channel corresponding to that domain and handles the
received requests.
Use the "monitor_args" key to pass test-specific arguments to the
monitor application. The arguments are added in the test's Makefile
using the TEST-EXTRA-INFO variable.
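As a rough illustration of the flow (the "monitor_args" key comes from
this patch, but the binary-name prefix and invocation below are
assumptions, not the runner's actual command line):

```python
import json

# Hypothetical info.json fragment for a monitor test; keys other than
# "name" and "monitor_args" would normally also be present.
INFO = '{"name": "emul-unimpl", "monitor_args": ["--address", "@main"]}'

def monitor_cmd(test_info, domid):
    # test-monitor-<name> is the binary produced by the MONITOR_build
    # rule; the domid argument and argument order are assumptions here.
    return ["./test-monitor-%s" % test_info["name"], str(domid)] + \
           list(test_info.get("monitor_args", []))

cmd = monitor_cmd(json.loads(INFO), 12)
```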

Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 Makefile                  |   6 +-
 build/common.mk           |  22 ++-
 build/files.mk            |   3 +
 build/gen.mk              |  12 ++
 docs/all-tests.dox        |   5 +
 include/monitor/monitor.h | 117 +++++++++++++
 monitor/Makefile          |  20 +++
 monitor/monitor.c         | 409 ++++++++++++++++++++++++++++++++++++++++++++++
 xtf/__init__.py           |   2 +-
 xtf/monitor_test.py       | 132 +++++++++++++++
 xtf/utils.py              |  17 ++
 11 files changed, 741 insertions(+), 4 deletions(-)
 create mode 100644 include/monitor/monitor.h
 create mode 100644 monitor/Makefile
 create mode 100644 monitor/monitor.c
 create mode 100644 xtf/monitor_test.py
 create mode 100644 xtf/utils.py

diff --git a/Makefile b/Makefile
index 15a865f..db28075 100644
--- a/Makefile
+++ b/Makefile
@@ -32,7 +32,9 @@ INSTALL_PROGRAM := $(INSTALL) -p
 OBJCOPY         := $(CROSS_COMPILE)objcopy
 PYTHON          := python
 
-export CC CPP INSTALL INSTALL_DATA INSTALL_DIR INSTALL_PROGRAM OBJCOPY PYTHON
+HOSTCC          := gcc
+
+export CC CPP INSTALL INSTALL_DATA INSTALL_DIR INSTALL_PROGRAM OBJCOPY PYTHON HOSTCC
 
 .PHONY: all
 all:
@@ -51,7 +53,7 @@ install:
 	done
 
 define all_sources
-	find include/ arch/ common/ tests/ -name "*.[hcsS]"
+	find include/ arch/ common/ tests/ monitor/ -name "*.[hcsS]"
 endef
 
 .PHONY: cscope
diff --git a/build/common.mk b/build/common.mk
index b786ddf..1ec0fa4 100644
--- a/build/common.mk
+++ b/build/common.mk
@@ -1,4 +1,4 @@
-ALL_CATEGORIES     := special functional xsa utility in-development
+ALL_CATEGORIES     := special functional xsa utility in-development monitor
 
 ALL_ENVIRONMENTS   := pv64 pv32pae hvm64 hvm32pae hvm32pse hvm32
 
@@ -35,11 +35,20 @@ COMMON_AFLAGS-x86_64 := -m64
 COMMON_CFLAGS-x86_32 := -m32
 COMMON_CFLAGS-x86_64 := -m64
 
+#HOSTCFLAGS := -Wall -Werror
+HOSTCFLAGS  :=
+HOSTLDFLAGS :=
+HOSTLDLIBS  :=
+HOSTCFLAGS  += -D__XEN_TOOLS__ -g -O3 -I$(ROOT)/include/monitor
+HOSTCFLAGS  += -DXC_WANT_COMPAT_DEVICEMODEL_API -DXC_WANT_COMPAT_MAP_FOREIGN_API
+HOSTLDLIBS  += -lxenctrl -lxenstore -lxenevtchn
+
 defcfg-pv    := $(ROOT)/config/default-pv.cfg.in
 defcfg-hvm   := $(ROOT)/config/default-hvm.cfg.in
 
 obj-perarch :=
 obj-perenv  :=
+obj-monitor :=
 include $(ROOT)/build/files.mk
 
 
@@ -90,8 +99,19 @@ DEPS-$(1) = $$(head-$(1)) \
 
 endef
 
+# Setup monitor rules
+define MONITOR_setup
+DEPS-MONITOR = \
+	$$(obj-monitor:%.o=%-monitor.o)
+
+%-monitor.o: %.c
+	$$(HOSTCC) $$(HOSTCFLAGS) -c $$< -o $$@
+endef
+
 $(foreach env,$(ALL_ENVIRONMENTS),$(eval $(call PERENV_setup,$(env))))
 
+$(eval $(call MONITOR_setup))
+
 define move-if-changed
 	if ! cmp -s $(1) $(2); then mv -f $(1) $(2); else rm -f $(1); fi
 endef
diff --git a/build/files.mk b/build/files.mk
index dfa27e4..972c797 100644
--- a/build/files.mk
+++ b/build/files.mk
@@ -54,3 +54,6 @@ $(foreach env,$(32BIT_ENVIRONMENTS),$(eval obj-$(env) += $(obj-32)))
 # 64bit specific objects
 obj-64  += $(ROOT)/arch/x86/entry_64.o
 $(foreach env,$(64BIT_ENVIRONMENTS),$(eval obj-$(env) += $(obj-64)))
+
+# Monitor common objects
+obj-monitor += $(ROOT)/monitor/monitor.o
diff --git a/build/gen.mk b/build/gen.mk
index c19ca6a..1e6773a 100644
--- a/build/gen.mk
+++ b/build/gen.mk
@@ -32,6 +32,9 @@ CLASS ?= "xtf.domu_test.DomuTestInfo"
 .PHONY: build
 build: $(foreach env,$(TEST-ENVS),test-$(env)-$(NAME)) $(TEST-CFGS)
 build: info.json
+ifeq (x$(CATEGORY),xmonitor)
+build: test-monitor-$(NAME)
+endif
 
 MKINFO-OPTS := -n "$(NAME)"
 MKINFO-OPTS += -c "$(CLASS)"
@@ -100,6 +103,15 @@ install-each-env: install-$(1) install-$(1).cfg
 endef
 $(foreach env,$(TEST-ENVS),$(eval $(call PERENV_build,$(env))))
 
+define MONITOR_build
+test-monitor-$(NAME): $(DEPS-MONITOR)
+	@echo $(obj-monitor)
+	@echo $(DEPS-MONITOR)
+	$(HOSTCC) $(HOSTLDFLAGS) $(DEPS-MONITOR) $(HOSTLDLIBS) -o $$@
+endef
+
+$(eval $(call MONITOR_build))
+
 .PHONY: clean
 clean:
 	find $(ROOT) \( -name "*.o" -o -name "*.d" \) -delete
diff --git a/docs/all-tests.dox b/docs/all-tests.dox
index 732d44c..3ee552e 100644
--- a/docs/all-tests.dox
+++ b/docs/all-tests.dox
@@ -145,4 +145,9 @@ enable BTS.
 @subpage test-nested-svm - Nested SVM tests.
 
 @subpage test-nested-vmx - Nested VT-x tests.
+
+
+@section index-monitor Monitor
+
+@subpage test-emul_unimplemented - @Test EMUL_UNIMPLEMENTED event generation
 */
diff --git a/include/monitor/monitor.h b/include/monitor/monitor.h
new file mode 100644
index 0000000..d01c259
--- /dev/null
+++ b/include/monitor/monitor.h
@@ -0,0 +1,117 @@
+/*
+ * XTF Monitor interface
+ */
+
+#ifndef XTF_MONITOR_H
+#define XTF_MONITOR_H
+
+#include <inttypes.h>
+#include <xenctrl.h>
+#include <xenevtchn.h>
+#include <xenstore.h>
+#include <xen/vm_event.h>
+
+typedef enum
+{
+    XTF_MON_LEVEL_FATAL,
+    XTF_MON_LEVEL_ERROR,
+    XTF_MON_LEVEL_WARNING,
+    XTF_MON_LEVEL_INFO,
+    XTF_MON_LEVEL_DEBUG,
+    XTF_MON_LEVEL_TRACE,
+} xtf_mon_log_level_t;
+
+/* Should be in sync with "test_status" from common/report.c */
+typedef enum {
+    XTF_MON_RUNNING, /**< Test not yet completed.       */
+    XTF_MON_SUCCESS, /**< Test was successful.          */
+    XTF_MON_SKIP,    /**< Test cannot be completed.     */
+    XTF_MON_ERROR,   /**< Issue with the test itself.   */
+    XTF_MON_FAILURE, /**< Issue with the tested matter. */
+} xtf_mon_status_t;
+
+void xtf_log(xtf_mon_log_level_t lvl, const char *fmt, ...) __attribute__((__format__(__printf__, 2, 3)));
+
+#define XTF_MON_FATAL(format...)    xtf_log(XTF_MON_LEVEL_FATAL,    format)
+#define XTF_MON_ERROR(format...)    xtf_log(XTF_MON_LEVEL_ERROR,    format)
+#define XTF_MON_WARNING(format...)  xtf_log(XTF_MON_LEVEL_WARNING,  format)
+#define XTF_MON_INFO(format...)     xtf_log(XTF_MON_LEVEL_INFO,     format)
+#define XTF_MON_DEBUG(format...)    xtf_log(XTF_MON_LEVEL_DEBUG,    format)
+#define XTF_MON_TRACE(format...)    xtf_log(XTF_MON_LEVEL_TRACE,    format)
+
+typedef struct xtf_evtchn_ops
+{
+    int (*mem_access_handler)(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp);
+    int (*singlestep_handler)(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp);
+    int (*emul_unimpl_handler)(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp);
+} xtf_evtchn_ops_t;
+
+/** XTF Event channel interface */
+typedef struct xtf_evtchn
+{
+    xenevtchn_handle *xce_handle;           /**< Event channel handle */
+    xenevtchn_port_or_error_t remote_port;  /**< Event channel remote port */
+    evtchn_port_t local_port;               /**< Event channel local port */
+    vm_event_back_ring_t back_ring;         /**< vm_event back ring */
+    void *ring_page;                        /**< Shared ring page */
+    xtf_evtchn_ops_t ops;                   /**< Test specific event callbacks */
+} xtf_evtchn_t;
+
+int add_evtchn(xtf_evtchn_t *evt, domid_t domain_id);
+xtf_evtchn_t *get_evtchn(domid_t domain_id);
+#define evtchn(domain_id) ( get_evtchn(domain_id) )
+
+/** XTF Monitor Driver */
+typedef struct xtf_monitor
+{
+    xc_interface *xch;                      /**< Control interface */
+    struct xs_handle *xsh;                  /**< XEN store handle */
+    xtf_evtchn_t *evt;                      /**< Event channel list */
+    xtf_mon_log_level_t log_lvl;            /**< Log Level */
+    xtf_mon_status_t status;                /**< Test Status */
+    int (*setup)(int, char*[]);             /**< Test specific setup */
+    int (*init)(void);                      /**< Test specific initialization */
+    int (*run)(void);                       /**< Test specific routine */
+    int (*cleanup)(void);                   /**< Test specific cleanup */
+    int (*get_result)(void);                /**< Returns the test's result */
+} xtf_monitor_t;
+
+xtf_monitor_t *get_monitor(void);
+#define monitor ( get_monitor() )
+#define xtf_xch ( monitor->xch )
+#define xtf_xsh ( monitor->xsh )
+
+#define call_helper(func, ... )         ( (func) ? func(__VA_ARGS__) : 0 )
+#define xtf_monitor_setup(argc, argv)   ( call_helper(monitor->setup, argc, argv) )
+#define xtf_monitor_init()              ( call_helper(monitor->init) )
+#define xtf_monitor_run()               ( call_helper(monitor->run) )
+#define xtf_monitor_cleanup()           ( call_helper(monitor->cleanup) )
+#define xtf_monitor_get_result()        ( call_helper(monitor->get_result) )
+
+int xtf_evtchn_init(domid_t domain_id);
+int xtf_evtchn_cleanup(domid_t domain_id);
+int xtf_evtchn_loop(domid_t domain_id);
+
+extern const char monitor_test_help[];
+
+void usage(void);
+
+extern xtf_monitor_t *xtf_monitor_instance;
+
+#define XTF_MONITOR(param) \
+static void  __attribute__((constructor)) register_monitor_##param() \
+{ \
+    xtf_monitor_instance = (xtf_monitor_t *)&param; \
+}
+
+#endif /* XTF_MONITOR_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/monitor/Makefile b/monitor/Makefile
new file mode 100644
index 0000000..64d4f8a
--- /dev/null
+++ b/monitor/Makefile
@@ -0,0 +1,20 @@
+.PHONY: all
+
+all: monitor
+
+HOSTCC ?= gcc
+
+OBJS = monitor.o
+
+#HOSTCFLAGS += -Wall -Werror
+HOSTCFLAGS += -D__XEN_TOOLS__ -g -O0
+HOSTCFLAGS += -DXC_WANT_COMPAT_DEVICEMODEL_API -DXC_WANT_COMPAT_MAP_FOREIGN_API
+
+%.o : %.c
+	$(HOSTCC) -c $(HOSTCFLAGS) $(HOSTCPPFLAGS) $< -o $@
+
+monitor: $(OBJS)
+	$(HOSTCC) -o $@ $^ -lxenctrl -lxenstore -lxenevtchn
+
+clean:
+	$(RM) $(OBJS) monitor
diff --git a/monitor/monitor.c b/monitor/monitor.c
new file mode 100644
index 0000000..943ff35
--- /dev/null
+++ b/monitor/monitor.c
@@ -0,0 +1,409 @@
+/**
+ * @file monitor/monitor.c
+ *
+ * Common functions for test specific monitor applications.
+ */
+
+#include <errno.h>
+#include <monitor.h>
+#include <poll.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+
+void xtf_log(xtf_mon_log_level_t lvl, const char *fmt, ...)
+{
+    static const char* log_level_names[] = {
+        "FATAL",
+        "ERROR",
+        "WARNING",
+        "INFO",
+        "DEBUG",
+        "TRACE",
+    };
+
+    if ( lvl <= monitor->log_lvl )
+    {
+        va_list argptr;
+
+        fprintf(stderr, "[%s]\t", log_level_names[lvl]);
+        va_start(argptr, fmt);
+        vfprintf(stderr, fmt, argptr);
+        va_end(argptr);
+    }
+}
+
+static void xtf_print_status(xtf_mon_status_t status)
+{
+    const char *xtf_mon_status_name[] =
+    {
+        "RUNNING",
+        "SUCCESS",
+        "SKIP",
+        "ERROR",
+        "FAILURE"
+    };
+
+    if ( status > XTF_MON_RUNNING && status <= XTF_MON_FAILURE )
+    {
+        printf("Test result: %s\n", xtf_mon_status_name[status]);
+    }
+}
+
+void usage(void)
+{
+    fprintf(stderr, "%s", monitor_test_help);
+}
+
+xtf_monitor_t *xtf_monitor_instance;
+xtf_monitor_t *get_monitor(void)
+{
+    return xtf_monitor_instance;
+}
+
+xtf_evtchn_t *get_evtchn(domid_t domain_id)
+{
+    (void)(domain_id);
+    return monitor->evt;
+}
+
+int add_evtchn(xtf_evtchn_t *evt, domid_t domain_id)
+{
+    (void)(domain_id);
+    monitor->evt = evt;
+    return 0;
+}
+
+int xtf_evtchn_init(domid_t domain_id)
+{
+    int rc;
+    xtf_evtchn_t *evt = evtchn(domain_id);
+
+    if ( !evt )
+    {
+        XTF_MON_ERROR("Invalid event channel\n");
+        return -EINVAL;
+    }
+
+    evt->ring_page = xc_monitor_enable(monitor->xch, domain_id,
+            &evt->remote_port);
+    if ( !evt->ring_page )
+    {
+        XTF_MON_ERROR("Error enabling monitor\n");
+        return -1;
+    }
+
+    evt->xce_handle = xenevtchn_open(NULL, 0);
+    if ( !evt->xce_handle )
+    {
+        XTF_MON_ERROR("Failed to open XEN event channel\n");
+        return -1;
+    }
+
+    rc = xenevtchn_bind_interdomain(evt->xce_handle, domain_id,
+            evt->remote_port);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Failed to bind XEN event channel\n");
+        return rc;
+    }
+    evt->local_port = rc;
+
+    /* Initialise ring */
+    SHARED_RING_INIT((vm_event_sring_t *)evt->ring_page);
+    BACK_RING_INIT(&evt->back_ring, (vm_event_sring_t *)evt->ring_page,
+            XC_PAGE_SIZE);
+
+    return 0;
+}
+
+int xtf_evtchn_cleanup(domid_t domain_id)
+{
+    int rc;
+    xtf_evtchn_t *evt = evtchn(domain_id);
+
+    if ( !evt )
+        return -EINVAL;
+
+    if ( evt->ring_page )
+        munmap(evt->ring_page, XC_PAGE_SIZE);
+
+    rc = xc_monitor_disable(monitor->xch, domain_id);
+    if ( rc != 0 )
+    {
+        XTF_MON_INFO("Error disabling monitor\n");
+        return rc;
+    }
+
+    rc = xenevtchn_unbind(evt->xce_handle, evt->local_port);
+    if ( rc != 0 )
+    {
+        XTF_MON_INFO("Failed to unbind XEN event channel\n");
+        return rc;
+    }
+
+    rc = xenevtchn_close(evt->xce_handle);
+    if ( rc != 0 )
+    {
+        XTF_MON_INFO("Failed to close XEN event channel\n");
+        return rc;
+    }
+
+    return 0;
+}
+
+static int xtf_wait_for_event(domid_t domain_id, xc_interface *xch, xenevtchn_handle *xce, unsigned long ms)
+{
+    struct pollfd fds[2] = {
+        { .fd = xenevtchn_fd(xce), .events = POLLIN | POLLERR },
+        { .fd = xs_fileno(monitor->xsh), .events = POLLIN | POLLERR },
+    };
+    int port;
+    int rc;
+
+    rc = poll(fds, 2, ms);
+
+    if ( rc < 0 )
+        return -errno;
+
+    if ( rc == 0 )
+        return 0;
+
+    if ( fds[0].revents )
+    {
+        port = xenevtchn_pending(xce);
+        if ( port == -1 )
+            return -errno;
+
+        rc = xenevtchn_unmask(xce, port);
+        if ( rc != 0 )
+            return -errno;
+
+        return port;
+    }
+
+    if ( fds[1].revents )
+    {
+        if ( !xs_is_domain_introduced(monitor->xsh, domain_id) )
+        {
+            return 1;
+        }
+
+        return 0;
+    }
+
+    return -2;  /* Error */
+}
+
+static void xtf_evtchn_get_request(xtf_evtchn_t *evt, vm_event_request_t *req)
+{
+    vm_event_back_ring_t *back_ring;
+    RING_IDX req_cons;
+
+    back_ring = &evt->back_ring;
+    req_cons = back_ring->req_cons;
+
+    /* Copy request */
+    memcpy(req, RING_GET_REQUEST(back_ring, req_cons), sizeof(*req));
+    req_cons++;
+
+    /* Update ring */
+    back_ring->req_cons = req_cons;
+    back_ring->sring->req_event = req_cons + 1;
+}
+
+static void xtf_evtchn_put_response(xtf_evtchn_t *evt, vm_event_response_t *rsp)
+{
+    vm_event_back_ring_t *back_ring;
+    RING_IDX rsp_prod;
+
+    back_ring = &evt->back_ring;
+    rsp_prod = back_ring->rsp_prod_pvt;
+
+    /* Copy response */
+    memcpy(RING_GET_RESPONSE(back_ring, rsp_prod), rsp, sizeof(*rsp));
+    rsp_prod++;
+
+    /* Update ring */
+    back_ring->rsp_prod_pvt = rsp_prod;
+    RING_PUSH_RESPONSES(back_ring);
+}
+
+int xtf_evtchn_loop(domid_t domain_id)
+{
+    vm_event_request_t req;
+    vm_event_response_t rsp;
+    int rc;
+    xtf_evtchn_t *evt = evtchn(domain_id);
+
+    if ( !evt )
+        return -EINVAL;
+
+    printf("Monitor initialization complete.\n");
+
+    for (;;)
+    {
+        rc = xtf_wait_for_event(domain_id, xtf_xch, evt->xce_handle, 100);
+        if ( rc < -1 )
+        {
+            XTF_MON_ERROR("Error getting event\n");
+            return rc;
+        }
+        else if ( rc == 1 )
+        {
+            XTF_MON_INFO("Domain %d exited\n", domain_id);
+            return 0;
+        }
+
+        while ( RING_HAS_UNCONSUMED_REQUESTS(&evt->back_ring) )
+        {
+            xtf_evtchn_get_request(evt, &req);
+
+            if ( req.version != VM_EVENT_INTERFACE_VERSION )
+            {
+                XTF_MON_ERROR("Error: vm_event interface version mismatch!\n");
+                return -1;
+            }
+
+            memset( &rsp, 0, sizeof (rsp) );
+            rsp.version = VM_EVENT_INTERFACE_VERSION;
+            rsp.vcpu_id = req.vcpu_id;
+            rsp.flags = (req.flags & VM_EVENT_FLAG_VCPU_PAUSED);
+            rsp.reason = req.reason;
+
+            rc = 0;
+
+            switch (req.reason)
+            {
+            case VM_EVENT_REASON_MEM_ACCESS:
+                XTF_MON_DEBUG("mem_access rip = %016lx gfn = %lx offset = %lx gla =%lx\n",
+                    req.data.regs.x86.rip,
+                    req.u.mem_access.gfn,
+                    req.u.mem_access.offset,
+                    req.u.mem_access.gla);
+
+                if ( evt->ops.mem_access_handler )
+                    rc = evt->ops.mem_access_handler(domain_id, &req, &rsp);
+                break;
+            case VM_EVENT_REASON_SINGLESTEP:
+                XTF_MON_DEBUG("Singlestep: rip=%016lx, vcpu %d, altp2m %u\n",
+                    req.data.regs.x86.rip,
+                    req.vcpu_id,
+                    req.altp2m_idx);
+                if ( evt->ops.singlestep_handler )
+                    rc = evt->ops.singlestep_handler(domain_id, &req, &rsp);
+                break;
+            case VM_EVENT_REASON_EMUL_UNIMPLEMENTED:
+                XTF_MON_DEBUG("Emulation unimplemented: rip=%016lx, vcpu %d:\n",
+                    req.data.regs.x86.rip,
+                    req.vcpu_id);
+                if ( evt->ops.emul_unimpl_handler )
+                    rc = evt->ops.emul_unimpl_handler(domain_id, &req, &rsp);
+                break;
+            default:
+                XTF_MON_ERROR("Unknown request id = %d\n", req.reason);
+            }
+
+            if ( rc )
+                return rc;
+
+            /* Put the response on the ring */
+            xtf_evtchn_put_response(evt, &rsp);
+        }
+        /* Tell Xen page is ready */
+        rc = xenevtchn_notify(evt->xce_handle, evt->local_port);
+
+        if ( rc != 0 )
+        {
+            XTF_MON_ERROR("Error resuming page\n");
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
+int main(int argc, char* argv[])
+{
+    int rc;
+
+    monitor->status = XTF_MON_RUNNING;
+    monitor->log_lvl = XTF_MON_LEVEL_ERROR;
+
+    /* test specific setup sequence */
+    rc = xtf_monitor_setup(argc, argv);
+    if ( rc )
+    {
+        monitor->status = XTF_MON_ERROR;
+        goto e_exit;
+    }
+
+    monitor->xch = xc_interface_open(NULL, NULL, 0);
+    if ( !monitor->xch )
+    {
+        XTF_MON_FATAL("Error initialising the xenctrl interface\n");
+        rc = -EINVAL;
+        monitor->status = XTF_MON_ERROR;
+        goto e_exit;
+    }
+
+    monitor->xsh = xs_open(XS_OPEN_READONLY);
+    if ( !monitor->xsh )
+    {
+        XTF_MON_FATAL("Error opening the Xen store\n");
+        rc = -EINVAL;
+        monitor->status = XTF_MON_ERROR;
+        goto cleanup;
+    }
+
+    if ( !xs_watch(monitor->xsh, "@releaseDomain", "RELEASE_TOKEN") )
+    {
+        XTF_MON_FATAL("Error monitoring releaseDomain\n");
+        rc = -EINVAL;
+        monitor->status = XTF_MON_ERROR;
+        goto cleanup;
+    }
+
+    /* test specific initialization sequence */
+    rc = xtf_monitor_init();
+    if ( rc )
+    {
+        monitor->status = XTF_MON_ERROR;
+        goto cleanup;
+    }
+
+    /* Run test */
+    rc = xtf_monitor_run();
+    if ( rc )
+    {
+        XTF_MON_ERROR("Error running test\n");
+        monitor->status = XTF_MON_ERROR;
+    }
+    else
+        monitor->status = xtf_monitor_get_result();
+
+cleanup:
+    /* test specific cleanup sequence */
+    xtf_monitor_cleanup();
+    if ( monitor->xsh )
+    {
+        xs_unwatch(monitor->xsh, "@releaseDomain", "RELEASE_TOKEN");
+        xs_close(monitor->xsh);
+        monitor->xsh = NULL;
+    }
+
+    xc_interface_close(monitor->xch);
+
+e_exit:
+    xtf_print_status(monitor->status);
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xtf/__init__.py b/xtf/__init__.py
index 07c269a..e405013 100644
--- a/xtf/__init__.py
+++ b/xtf/__init__.py
@@ -3,7 +3,7 @@
 
 # All test categories
 default_categories     = set(("functional", "xsa"))
-non_default_categories = set(("special", "utility", "in-development", "host"))
+non_default_categories = set(("special", "utility", "in-development", "host", "monitor"))
 all_categories         = default_categories | non_default_categories
 
 # All test environments
diff --git a/xtf/monitor_test.py b/xtf/monitor_test.py
new file mode 100644
index 0000000..b9b010e
--- /dev/null
+++ b/xtf/monitor_test.py
@@ -0,0 +1,132 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+""" Monitor test classes.
+
+    The monitor test spawns a test monitor (an event channel handler
+    application) instance and runs a DomU image which interacts with it.
+"""
+
+import os
+from   subprocess import Popen
+
+from   xtf.exceptions import RunnerError
+from   xtf.domu_test import DomuTestInstance, DomuTestInfo
+from   xtf.executable_test import ExecutableTestInstance
+from   xtf.logger import Logger
+from   xtf.test import TestResult, TestInstance
+from   xtf.utils import XTFAsyncCall
+
+class MonitorTestInstance(TestInstance):
+    """Monitor test instance"""
+
+    def __init__(self, env, name, variation, monitor_args):
+        super(MonitorTestInstance, self).__init__(name)
+
+        self.env, self.variation = env, variation
+
+        if self.env is None:
+            raise RunnerError("No environment for '%s'" % (self.name, ))
+
+        self.monitor_args = monitor_args.replace("@@VM_PATH@@", self.vm_path())
+
+        self.domu_test = None
+        self.monitor_test = None
+
+    def vm_name(self):
+        """ Return the VM name as `xl` expects it. """
+        return repr(self)
+
+    def cfg_path(self):
+        """ Return the path to the `xl` config file for this test. """
+        return os.path.join("tests", self.name, repr(self) + ".cfg")
+
+    def __repr__(self):
+        if self.variation:
+            return "test-%s-%s~%s" % (self.env, self.name, self.variation)
+        return "test-%s-%s" % (self.env, self.name)
+
+    def vm_path(self):
+        """ Return the VM path. """
+        return os.path.join("tests", self.name, repr(self))
+
+    def monitor_path(self):
+        """ Return the path to the test's monitor app if applicable. """
+        return os.path.join("tests", self.name, "test-monitor-" + self.name)
+
+    def start_monitor(self, dom_id):
+        """ Start the monitor application for the given domain id. """
+        cmd = " ".join([self.monitor_path(), self.monitor_args, str(dom_id)])
+        Logger().log("Executing '%s'" % (cmd, ))
+        return Popen(cmd, shell=True)
+
+    def set_up(self, opts, result):
+        self.domu_test = DomuTestInstance(self.env, self.name, self.variation)
+        self.domu_test.set_up(opts, result)
+        if result != TestResult.SUCCESS:
+            return
+
+        monitor_cmd = ' '.join([self.monitor_path(), self.monitor_args,
+                                str(self.domu_test.domu.dom_id)])
+
+        self.monitor_test = ExecutableTestInstance(self.name, '/bin/sh',
+                                                   ['-c', monitor_cmd], "")
+        self.monitor_test.set_up(opts, result)
+        match = self.monitor_test.wait_pattern(['Monitor initialization complete.'])
+        if match != 0:
+            result.set(TestResult.CRASH)
+
+    def run(self, result):
+        t1 = XTFAsyncCall(target=self.domu_test.run, args=(result,))
+        t2 = XTFAsyncCall(target=self.monitor_test.wait_pattern,
+                          args=(self.result_pattern(), ))
+
+        for th in (t1, t2):
+            th.start()
+
+        t1.join()
+        res = TestResult(t2.join())
+        if res > result:
+            result.set(str(res))
+
+
+    def clean_up(self, result):
+        if self.domu_test:
+            self.domu_test.clean_up(result)
+
+        if self.monitor_test:
+            self.monitor_test.clean_up(result)
+
+class MonitorTestInfo(DomuTestInfo):
+    """Monitor test info"""
+
+    def __init__(self, test_json):
+        super(MonitorTestInfo, self).__init__(test_json)
+        self.instance_class = MonitorTestInstance
+        self.monitor_args = self.extra['monitor_args']
+
+    def all_instances(self, env_filter = None, vary_filter = None):
+        """Return a list of TestInstances, for each supported environment.
+        Optionally filtered by env_filter.  May return an empty list if
+        the filter doesn't match any supported environment.
+        """
+
+        if env_filter:
+            envs = set(env_filter).intersection(self.envs)
+        else:
+            envs = self.envs
+
+        if vary_filter:
+            variations = set(vary_filter).intersection(self.variations)
+        else:
+            variations = self.variations
+
+        res = []
+        if variations:
+            for env in envs:
+                for vary in variations:
+                    res.append(self.instance_class(env, self.name, vary,
+                                                   self.monitor_args))
+        else:
+            res = [ self.instance_class(env, self.name, None, self.monitor_args)
+                    for env in envs ]
+        return res
diff --git a/xtf/utils.py b/xtf/utils.py
new file mode 100644
index 0000000..96c570b
--- /dev/null
+++ b/xtf/utils.py
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+""" XTF utils """
+
+import threading
+
+class XTFAsyncCall(threading.Thread):
+    """Thread which captures its target's return value for join()."""
+    def __init__(self, group=None, target=None, name=None, args=(), kwargs=None):
+        super(XTFAsyncCall, self).__init__(group, target, name, args,
+                                           kwargs if kwargs is not None else {})
+        self._return = None
+
+    def run(self):
+        if self._Thread__target is not None:
+            self._return = self._Thread__target(*self._Thread__args,
+                                                **self._Thread__kwargs)
+
+    def join(self, timeout=None):
+        threading.Thread.join(self, timeout)
+        return self._return
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

* [PATCH XTF 4/4] xtf: Add emul-unimpl test
  2018-12-14 13:34 [PATCH XTF 0/4] Add monitor tests to XTF Petre Pircalabu
                   ` (2 preceding siblings ...)
  2018-12-14 13:34 ` [PATCH XTF 3/4] xtf: Add monitor " Petre Pircalabu
@ 2018-12-14 13:34 ` Petre Pircalabu
  2018-12-17  9:44   ` Jan Beulich
  3 siblings, 1 reply; 6+ messages in thread
From: Petre Pircalabu @ 2018-12-14 13:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Petre Pircalabu, andrew.cooper3

Add a new test to verify that Xen can correctly handle the
X86EMUL_UNIMPLEMENTED event.

The XTF DomU test image executes an instruction not implemented by
the Xen x86 emulator (fstenv) and checks whether the execution was
successful. This instruction is the first one in a custom .text
section.

To make Xen attempt to emulate the instruction, the monitor
application changes the attributes of its page to inhibit execution.
This triggers a MEM_ACCESS request, which is answered by setting the
EMULATE flag in the response.
The emulation then fails, triggering an EMUL_UNIMPLEMENTED request,
which is handled by re-enabling execution on that specific page (via
altp2m) and single-stepping the instruction.

The test succeeds if the instruction is executed correctly.

Signed-off-by: Petre Pircalabu <ppircalabu@bitdefender.com>
---
 docs/all-tests.dox             |   2 +-
 tests/emul-unimpl/Makefile     |  15 ++
 tests/emul-unimpl/extra.cfg.in |   3 +
 tests/emul-unimpl/main.c       |  59 ++++++++
 tests/emul-unimpl/monitor.c    | 310 +++++++++++++++++++++++++++++++++++++++++
 5 files changed, 388 insertions(+), 1 deletion(-)
 create mode 100644 tests/emul-unimpl/Makefile
 create mode 100644 tests/emul-unimpl/extra.cfg.in
 create mode 100644 tests/emul-unimpl/main.c
 create mode 100644 tests/emul-unimpl/monitor.c

diff --git a/docs/all-tests.dox b/docs/all-tests.dox
index 3ee552e..b7457be 100644
--- a/docs/all-tests.dox
+++ b/docs/all-tests.dox
@@ -149,5 +149,5 @@ enable BTS.
 
 @section index-monitor Monitor
 
-@subpage test-emul_unimplemented - @Test EMUL_UNIMPLEMENTED event generation
+@subpage test-emul-unimpl - @Test EMUL_UNIMPLEMENTED event generation
 */
diff --git a/tests/emul-unimpl/Makefile b/tests/emul-unimpl/Makefile
new file mode 100644
index 0000000..5d79e42
--- /dev/null
+++ b/tests/emul-unimpl/Makefile
@@ -0,0 +1,15 @@
+include $(ROOT)/build/common.mk
+
+NAME      		:= emul-unimpl
+CATEGORY  		:= monitor
+TEST-ENVS 		:= hvm64
+CLASS	  		:= xtf.monitor_test.MonitorTestInfo
+TEST-EXTRA-INFO	:= monitor_args='--address=0x\$$(nm --defined-only @@VM_PATH@@ | grep test_fn | cut -d \  -f 1)'
+
+TEST-EXTRA-CFG := extra.cfg.in
+
+obj-perenv += main.o
+
+obj-monitor += monitor.o
+
+include $(ROOT)/build/gen.mk
diff --git a/tests/emul-unimpl/extra.cfg.in b/tests/emul-unimpl/extra.cfg.in
new file mode 100644
index 0000000..e432a0c
--- /dev/null
+++ b/tests/emul-unimpl/extra.cfg.in
@@ -0,0 +1,3 @@
+# Enable altp2m
+altp2m = "mixed"
+altp2mhvm = 1
diff --git a/tests/emul-unimpl/main.c b/tests/emul-unimpl/main.c
new file mode 100644
index 0000000..63a52cc
--- /dev/null
+++ b/tests/emul-unimpl/main.c
@@ -0,0 +1,59 @@
+/**
+ * @file tests/emul-unimpl/main.c
+ * @ref test-emul-unimpl
+ *
+ * @page test-emul-unimpl emul-unimpl
+ *
+ * @todo Docs for test-emul-unimpl
+ *
+ * @see tests/emul-unimpl/main.c
+ */
+#include <xtf.h>
+
+const char test_title[] = "Test emul-unimpl";
+
+static char fpu_env[128];
+
+static void __attribute__((section(".text.secondary"), noinline))
+test_fn(void)
+{
+    __asm__ __volatile__("fstenv %0"
+                         : "=m" (fpu_env)
+                         :
+                         : "memory");
+    __asm__ __volatile__("fwait");
+}
+
+void test_main(void)
+{
+    int i;
+
+    __asm__ __volatile__ ("pushf");
+    __asm__ __volatile__ ("cli");
+    test_fn();
+    __asm__ __volatile__ ("popf");
+
+    for ( i = 0; i < 14 ; i++ )
+    {
+        if ( fpu_env[i] != 0 )
+            break;
+    }
+
+    if ( i == 14 )
+    {
+        xtf_error(NULL);
+    }
+    else
+    {
+        xtf_success(NULL);
+    }
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tests/emul-unimpl/monitor.c b/tests/emul-unimpl/monitor.c
new file mode 100644
index 0000000..4302e51
--- /dev/null
+++ b/tests/emul-unimpl/monitor.c
@@ -0,0 +1,310 @@
+/**
+ * @file tests/emul-unimpl/monitor.c
+ */
+
+#include <errno.h>
+#include <getopt.h>
+#include <inttypes.h>
+#include <monitor.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+typedef enum
+{
+    INIT,
+    MEM_ACCESS,
+    SINGLESTEP,
+    EMUL_UNIMPL
+} emul_unimpl_state_t;
+
+typedef struct emul_unimpl_monitor
+{
+    xtf_monitor_t mon;
+    domid_t domain_id;
+    uint64_t address;
+    uint16_t altp2m_view_id;
+    unsigned long gfn;
+    emul_unimpl_state_t state;
+} emul_unimpl_monitor_t;
+
+const char monitor_test_help[] = \
+    "Usage: test-monitor-emul-unimpl [options] <domid>\n"
+    "\t -a <address>: the address where an invalid instruction will be injected\n"
+    ;
+
+static int emul_unimpl_setup(int argc, char *argv[]);
+static int emul_unimpl_init(void);
+static int emul_unimpl_run(void);
+static int emul_unimpl_cleanup(void);
+static int emul_unimpl_get_result(void);
+
+static emul_unimpl_monitor_t monitor_instance =
+{
+    .mon =
+    {
+        .setup      = emul_unimpl_setup,
+        .init       = emul_unimpl_init,
+        .run        = emul_unimpl_run,
+        .cleanup    = emul_unimpl_cleanup,
+        .get_result = emul_unimpl_get_result,
+    }
+};
+
+static int emul_unimpl_mem_access(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp);
+static int emul_unimpl_singlestep(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp);
+static int emul_unimpl_emul_unimpl(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp);
+
+static xtf_evtchn_t evtchn_instance =
+{
+    .ops =
+    {
+        .mem_access_handler     = emul_unimpl_mem_access,
+        .singlestep_handler     = emul_unimpl_singlestep,
+        .emul_unimpl_handler    = emul_unimpl_emul_unimpl,
+    }
+};
+
+static int emul_unimpl_setup(int argc, char *argv[])
+{
+    int c;
+    static struct option long_options[] = {
+        {"help",    no_argument,    0,  'h'},
+        {"address", required_argument,    0,  'a'},
+        {0, 0, 0, 0}
+    };
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return -EINVAL;
+
+    if ( argc == 1 )
+    {
+        usage();
+        return -EINVAL;
+    }
+    while ( 1 )
+    {
+        int option_index = 0;
+
+        c = getopt_long(argc, argv, "ha:", long_options, &option_index);
+        if ( c == -1 )
+            break;
+
+        switch ( c )
+        {
+            case 'h':
+                usage();
+                exit(0);
+                break;
+            case 'a':
+                pmon->address = strtoul(optarg, NULL, 0);
+                break;
+            default:
+                XTF_MON_ERROR("%s: Invalid option %s\n", argv[0], optarg);
+                return -EINVAL;
+        }
+    }
+
+    /* Validate the arguments only after the whole command line is parsed. */
+    if ( !pmon->address )
+    {
+        XTF_MON_ERROR("%s: Please specify a valid instruction injection address\n",
+                      argv[0]);
+        return -EINVAL;
+    }
+
+    if ( optind != argc - 1 )
+    {
+        XTF_MON_ERROR("%s: Please specify the domain id\n", argv[0]);
+        return -EINVAL;
+    }
+
+    pmon->domain_id = atoi(argv[optind]);
+
+    if ( pmon->domain_id <= 0 )
+    {
+        XTF_MON_ERROR("%s: Invalid domain id\n", argv[0]);
+        return -EINVAL;
+    }
+
+    pmon->state = INIT;
+
+    add_evtchn(&evtchn_instance, pmon->domain_id);
+
+    return 0;
+}
+
+static int emul_unimpl_init(void)
+{
+    int rc = 0;
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return -EINVAL;
+
+    rc = xtf_evtchn_init(pmon->domain_id);
+    if ( rc < 0 )
+        return rc;
+
+    rc = xc_domain_set_access_required(xtf_xch, pmon->domain_id, 1);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Error %d requiring a mem_access listener\n", rc);
+        return rc;
+    }
+
+    rc = xc_monitor_emul_unimplemented(xtf_xch, pmon->domain_id, 1);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Error %d enabling emul_unimplemented monitoring\n", rc);
+        return rc;
+    }
+
+    rc = xc_altp2m_set_domain_state(xtf_xch, pmon->domain_id, 1);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Error %d enabling altp2m on domain!\n", rc);
+        return rc;
+    }
+
+    rc = xc_altp2m_create_view(xtf_xch, pmon->domain_id, XENMEM_access_rwx,
+                               &pmon->altp2m_view_id);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Error %d creating altp2m view!\n", rc);
+        return rc;
+    }
+
+    pmon->gfn = xc_translate_foreign_address(xtf_xch, pmon->domain_id, 0, pmon->address);
+
+    rc = xc_altp2m_set_mem_access(xtf_xch, pmon->domain_id, pmon->altp2m_view_id,
+            pmon->gfn, XENMEM_access_rw);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Error %d setting altp2m memory access!\n", rc);
+        return rc;
+    }
+
+    rc = xc_altp2m_switch_to_view(xtf_xch, pmon->domain_id,
+                                  pmon->altp2m_view_id);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Error %d switching to altp2m view!\n", rc);
+        return rc;
+    }
+
+    rc = xc_monitor_singlestep(xtf_xch, pmon->domain_id, 1);
+    if ( rc < 0 )
+    {
+        XTF_MON_ERROR("Error %d enabling singlestep monitoring!\n", rc);
+        return rc;
+    }
+
+    return 0;
+}
+
+static int emul_unimpl_run(void)
+{
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return -EINVAL;
+
+    return xtf_evtchn_loop(pmon->domain_id);
+}
+
+static int emul_unimpl_cleanup(void)
+{
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return -EINVAL;
+
+    xc_altp2m_switch_to_view(xtf_xch, pmon->domain_id, 0);
+
+    xc_altp2m_destroy_view(xtf_xch, pmon->domain_id, pmon->altp2m_view_id);
+
+    xc_altp2m_set_domain_state(xtf_xch, pmon->domain_id, 0);
+
+    xc_monitor_singlestep(xtf_xch, pmon->domain_id, 0);
+
+    xtf_evtchn_cleanup(pmon->domain_id);
+
+    return 0;
+}
+
+static int emul_unimpl_get_result(void)
+{
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return XTF_MON_ERROR;
+
+    return (pmon->state == EMUL_UNIMPL) ? XTF_MON_SUCCESS : XTF_MON_FAILURE;
+}
+
+static int emul_unimpl_mem_access(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp)
+{
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return -EINVAL;
+
+    rsp->flags |= VM_EVENT_FLAG_EMULATE | VM_EVENT_FLAG_TOGGLE_SINGLESTEP;
+
+    if ( pmon->state == INIT )
+        pmon->state = MEM_ACCESS;
+
+    return 0;
+}
+
+static int emul_unimpl_singlestep(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp)
+{
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return -EINVAL;
+
+    rsp->flags |= VM_EVENT_FLAG_ALTERNATE_P2M | VM_EVENT_FLAG_TOGGLE_SINGLESTEP;
+    rsp->altp2m_idx = pmon->altp2m_view_id;
+
+    /* Restore the execute rights on the test page. */
+    xc_altp2m_set_mem_access(xtf_xch, pmon->domain_id, pmon->altp2m_view_id,
+        pmon->gfn, XENMEM_access_rwx);
+
+    if ( pmon->state == EMUL_UNIMPL )
+        pmon->state = SINGLESTEP;
+
+    return 0;
+}
+
+static int emul_unimpl_emul_unimpl(domid_t domain_id, vm_event_request_t *req, vm_event_response_t *rsp)
+{
+    emul_unimpl_monitor_t *pmon = (emul_unimpl_monitor_t *)monitor;
+
+    if ( !pmon )
+        return -EINVAL;
+
+    rsp->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
+    rsp->altp2m_idx = 0;
+
+    if ( pmon->state == MEM_ACCESS )
+        pmon->state = EMUL_UNIMPL;
+
+    return 0;
+}
+
+XTF_MONITOR(monitor_instance);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4



* Re: [PATCH XTF 4/4] xtf: Add emul-unimpl test
  2018-12-14 13:34 ` [PATCH XTF 4/4] xtf: Add emul-unimpl test Petre Pircalabu
@ 2018-12-17  9:44   ` Jan Beulich
  0 siblings, 0 replies; 6+ messages in thread
From: Jan Beulich @ 2018-12-17  9:44 UTC (permalink / raw)
  To: Petre Pircalabu; +Cc: Andrew Cooper, xen-devel

>>> On 14.12.18 at 14:34, <ppircalabu@bitdefender.com> wrote:
> Add a new test to verify if XEN can correctly handle the
> X86EMUL_UNIMPLEMENTED event.
> 
> The XTF DomU test image just executes a instruction not implemented by
> the XEN X86 emulator (fstenv) and checks if the execution was
> successfull. This instruction will be the first one in a custom .text
> section.

May I suggest that you use an insn which is liable to remain
unimplemented? FSTENV, together with {F,FX,X}{SAVE,RSTOR}
are at the top of my (emulator) list of items to work on. If you
pick any instruction which we _can_ reasonably implement,
chances would then be that eventually your test will fail,
preventing an osstest push. This would better be avoided from
the beginning.

Jan




Thread overview: 6+ messages
2018-12-14 13:34 [PATCH XTF 0/4] Add monitor tests to XTF Petre Pircalabu
2018-12-14 13:34 ` [PATCH XTF 1/4] xtf-runner: split into logical components Petre Pircalabu
2018-12-14 13:34 ` [PATCH XTF 2/4] xtf: Add executable test class Petre Pircalabu
2018-12-14 13:34 ` [PATCH XTF 3/4] xtf: Add monitor " Petre Pircalabu
2018-12-14 13:34 ` [PATCH XTF 4/4] xtf: Add emul-unimpl test Petre Pircalabu
2018-12-17  9:44   ` Jan Beulich
