* [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
@ 2011-02-09  1:50 Michael Goldish
  2011-02-09  2:56 ` Cleber Rosa
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Michael Goldish @ 2011-02-09  1:50 UTC (permalink / raw)
  To: autotest, kvm; +Cc: Uri Lublin

This is a reimplementation of the dict generator.  It is much faster than the
current implementation and uses very little memory.  Running time and memory
usage scale polynomially with the number of defined variants, whereas in the
current implementation they scale exponentially.
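
To illustrate what a lazy dict generator buys here, consider this toy sketch
(a hypothetical (name, children) tree, not the patch's actual Node class):
each root-to-leaf path through the variant tree yields one dict name, one at
a time, so the full cross product never has to be held in memory.

```python
# Toy sketch of lazy dict-name generation from a variant tree.
# Hypothetical structure -- not the patch's Node class.

def gen_names(node, prefix=()):
    name, children = node
    ctx = prefix + (name,) if name else prefix
    if not children:
        # Leaf: one fully qualified dict name
        yield ".".join(ctx)
    for child in children:
        for full_name in gen_names(child, ctx):
            yield full_name

# Two image formats crossed with two guests -> four dicts, generated lazily
tree = ("", [("qcow2", [("Fedora", []), ("RHEL", [])]),
             ("raw",   [("Fedora", []), ("RHEL", [])])])
```

Iterating gen_names(tree) yields "qcow2.Fedora", "qcow2.RHEL", "raw.Fedora"
and "raw.RHEL" without ever materializing the whole list.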

Instead of regular expressions in the filters, the following syntax is used:

, means OR
.. means AND
. means IMMEDIATELY-FOLLOWED-BY

Example:

only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide

means select all dicts whose names have:

(qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
(smp2 AND qcow2 AND migrate AND ide)

'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
'ide, scsi' is equivalent to 'scsi, ide'.

Filters can be used in 3 ways:
only <filter>
no <filter>
<filter>:

The last one starts a conditional block, e.g.

Fedora.14..qcow2:
    no migrate, reboot
    foo = bar
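
To make the matching semantics concrete, here is a minimal self-contained
sketch (not the patch's Filter class, which additionally supports partial
prefix matching so subtrees can be pruned early): commas are OR alternatives,
'..' separates AND terms, and each '.'-joined term must appear as an adjacent
run of components in the dict name.

```python
# Toy sketch of the filter semantics: ',' = OR, '..' = AND,
# '.' = IMMEDIATELY-FOLLOWED-BY.  Not the patch's exact code.

def parse_filter(s):
    # "qcow2..Fedora.14, RHEL.6..raw" ->
    # [[['qcow2'], ['Fedora', '14']], [['RHEL', '6'], ['raw']]]
    return [[block.split(".") for block in word.split("..")]
            for word in s.replace(",", " ").split()]

def block_matches(block, ctx):
    # The block must appear as a contiguous run of name components
    return any(ctx[i:i + len(block)] == block
               for i in range(len(ctx) - len(block) + 1))

def filter_matches(filter_str, name):
    # OR over comma-separated words, AND over '..'-separated blocks
    ctx = name.split(".")
    return any(all(block_matches(b, ctx) for b in word)
               for word in parse_filter(filter_str))
```

Against a dict named "smp2.Fedora.14.qcow2.boot", the filter
"qcow2..Fedora.14" matches while "qcow2..14.Fedora" does not, which is
exactly the (in)equivalence noted above.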

Interface changes:
- The main class is now called 'Parser' instead of 'config'.
- fork_and_parse() has been removed.  parse_file() and parse_string() should be
  used instead.
- When run as a standalone program, kvm_config.py just prints the shortnames of
  the generated dicts by default, and can optionally print the full names and
  contents of the dicts.
- By default, debug messages are not printed, but they can be enabled by
  passing debug=True to Parser's constructor, or by running kvm_config.py -v.
- The 'depend' key has been renamed to 'dep'.

Signed-off-by: Michael Goldish <mgoldish@redhat.com>
Signed-off-by: Uri Lublin <ulublin@redhat.com>
---
 client/tests/kvm/control               |   28 +-
 client/tests/kvm/control.parallel      |   12 +-
 client/tests/kvm/kvm_config.py         | 1051 ++++++++++++++------------------
 client/tests/kvm/kvm_scheduler.py      |    9 +-
 client/tests/kvm/kvm_utils.py          |    2 +-
 client/tests/kvm/tests.cfg.sample      |   13 +-
 client/tests/kvm/tests_base.cfg.sample |   46 +-
 7 files changed, 513 insertions(+), 648 deletions(-)

diff --git a/client/tests/kvm/control b/client/tests/kvm/control
index d226adf..be37678 100644
--- a/client/tests/kvm/control
+++ b/client/tests/kvm/control
@@ -35,13 +35,11 @@ str = """
 # build configuration here.  For example:
 #release_tag = 84
 """
-build_cfg = kvm_config.config()
-# As the base test config is quite large, in order to save memory, we use the
-# fork_and_parse() method, that creates another parser process and destroys it
-# at the end of the parsing, so the memory spent can be given back to the OS.
-build_cfg_path = os.path.join(kvm_test_dir, "build.cfg")
-build_cfg.fork_and_parse(build_cfg_path, str)
-if not kvm_utils.run_tests(build_cfg.get_generator(), job):
+
+parser = kvm_config.Parser()
+parser.parse_file(os.path.join(kvm_test_dir, "build.cfg"))
+parser.parse_string(str)
+if not kvm_utils.run_tests(parser.get_dicts(), job):
     logging.error("KVM build step failed, exiting.")
     sys.exit(1)
 
@@ -49,10 +47,11 @@ str = """
 # This string will be parsed after tests.cfg.  Make any desired changes to the
 # test configuration here.  For example:
 #display = sdl
-#install|setup: timeout_multiplier = 3
+#install, setup: timeout_multiplier = 3
 """
-tests_cfg = kvm_config.config()
-tests_cfg_path = os.path.join(kvm_test_dir, "tests.cfg")
+
+parser = kvm_config.Parser()
+parser.parse_file(os.path.join(kvm_test_dir, "tests.cfg"))
 
 if args:
     # We get test parameters from command line
@@ -67,11 +66,12 @@ if args:
                 str += "%s = %s\n" % (key, value)
         except IndexError:
             pass
-tests_cfg.fork_and_parse(tests_cfg_path, str)
+parser.parse_string(str)
 
-# Run the tests
-kvm_utils.run_tests(tests_cfg.get_generator(), job)
+logging.info("Selected tests:")
+for i, d in enumerate(parser.get_dicts()):
+    logging.info("Test %4d:  %s" % (i + 1, d["shortname"]))
+kvm_utils.run_tests(parser.get_dicts(), job)
 
 # Generate a nice HTML report inside the job's results dir
 kvm_utils.create_report(kvm_test_dir, job.resultdir)
-
diff --git a/client/tests/kvm/control.parallel b/client/tests/kvm/control.parallel
index ac84638..640ccf5 100644
--- a/client/tests/kvm/control.parallel
+++ b/client/tests/kvm/control.parallel
@@ -163,16 +163,15 @@ import kvm_config
 str = """
 # This string will be parsed after tests.cfg.  Make any desired changes to the
 # test configuration here.  For example:
-#install|setup: timeout_multiplier = 3
-#only fc8_quick
+#install, setup: timeout_multiplier = 3
 #display = sdl
 """
-cfg = kvm_config.config()
-filename = os.path.join(pwd, "tests.cfg")
-cfg.fork_and_parse(filename, str)
 
-tests = cfg.get_list()
+parser = kvm_config.Parser()
+parser.parse_file(os.path.join(pwd, "tests.cfg"))
+parser.parse_string(str)
 
+tests = list(parser.get_dicts())
 
 # -------------
 # Run the tests
@@ -192,7 +191,6 @@ s = kvm_scheduler.scheduler(tests, num_workers, total_cpus, total_mem, pwd)
 job.parallel([s.scheduler],
              *[(s.worker, i, job.run_test) for i in range(num_workers)])
 
-
 # create the html report in result dir
 reporter = os.path.join(pwd, 'make_html_report.py')
 html_file = os.path.join(job.resultdir,'results.html')
diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
index 13cdfe2..1b27181 100755
--- a/client/tests/kvm/kvm_config.py
+++ b/client/tests/kvm/kvm_config.py
@@ -1,18 +1,149 @@
 #!/usr/bin/python
 """
-KVM configuration file utility functions.
+KVM test configuration file parser
 
-@copyright: Red Hat 2008-2010
+@copyright: Red Hat 2008-2011
 """
 
-import logging, re, os, sys, optparse, array, traceback, cPickle
-import common
-import kvm_utils
-from autotest_lib.client.common_lib import error
-from autotest_lib.client.common_lib import logging_manager
+import re, os, sys, optparse, collections
+
+
+# Filter syntax:
+# , means OR
+# .. means AND
+# . means IMMEDIATELY-FOLLOWED-BY
+
+# Example:
+# qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
+# means match all dicts whose names have:
+# (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
+# ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
+# (smp2 AND qcow2 AND migrate AND ide)
+
+# Note:
+# 'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
+# 'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
+# 'ide, scsi' is equivalent to 'scsi, ide'.
+
+# Filters can be used in 3 ways:
+# only <filter>
+# no <filter>
+# <filter>:
+# The last one starts a conditional block.
+
+
+num_failed_cases = 5
+
+
+class Node(object):
+    def __init__(self):
+        self.name = []
+        self.dep = []
+        self.content = []
+        self.children = []
+        self.labels = set()
+        self.append_to_shortname = False
+        self.failed_cases = collections.deque()
+
+
+# Filter must inherit from object (otherwise type() won't work)
+class Filter(object):
+    def __init__(self, s):
+        self.filter = []
+        for word in s.replace(",", " ").split():
+            word = [block.split(".") for block in word.split("..")]
+            self.filter += [word]
+
+
+    def match_adjacent(self, block, ctx, ctx_set):
+        # Return how many leading elements of block match an adjacent run in ctx
+        if block[0] not in ctx_set:
+            return 0
+        if len(block) == 1:
+            return 1
+        if block[1] not in ctx_set:
+            return int(ctx[-1] == block[0])
+        k = 0
+        i = ctx.index(block[0])
+        while i < len(ctx):
+            if k > 0 and ctx[i] != block[k]:
+                i -= k - 1
+                k = 0
+            if ctx[i] == block[k]:
+                k += 1
+                if k >= len(block):
+                    break
+                if block[k] not in ctx_set:
+                    break
+            i += 1
+        return k
+
+
+    def might_match_adjacent(self, block, ctx, ctx_set, descendant_labels):
+        matched = self.match_adjacent(block, ctx, ctx_set)
+        for elem in block[matched:]:
+            if elem not in descendant_labels:
+                return False
+        return True
+
+
+    def match(self, ctx, ctx_set):
+        for word in self.filter:
+            for block in word:
+                if self.match_adjacent(block, ctx, ctx_set) != len(block):
+                    break
+            else:
+                return True
+        return False
+
+
+    def might_match(self, ctx, ctx_set, descendant_labels):
+        for word in self.filter:
+            for block in word:
+                if not self.might_match_adjacent(block, ctx, ctx_set,
+                                                 descendant_labels):
+                    break
+            else:
+                return True
+        return False
+
+
+class NoOnlyFilter(Filter):
+    def __init__(self, line):
+        Filter.__init__(self, line.split(None, 1)[1])
+        self.line = line
+
+
+class OnlyFilter(NoOnlyFilter):
+    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
+                   descendant_labels):
+        for word in self.filter:
+            for block in word:
+                if (self.match_adjacent(block, ctx, ctx_set) >
+                    self.match_adjacent(block, failed_ctx, failed_ctx_set)):
+                    return self.might_match(ctx, ctx_set, descendant_labels)
+        return False
+
 
+class NoFilter(NoOnlyFilter):
+    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
+                   descendant_labels):
+        for word in self.filter:
+            for block in word:
+                if (self.match_adjacent(block, ctx, ctx_set) <
+                    self.match_adjacent(block, failed_ctx, failed_ctx_set)):
+                    return not self.match(ctx, ctx_set)
+        return False
 
-class config:
+
+class Condition(NoFilter):
+    def __init__(self, line):
+        Filter.__init__(self, line.rstrip(":"))
+        self.line = line
+        self.content = []
+
+
+class Parser(object):
     """
     Parse an input file or string that follows the KVM Test Config File format
     and generate a list of dicts that will be later used as configuration
@@ -21,17 +152,14 @@ class config:
     @see: http://www.linux-kvm.org/page/KVM-Autotest/Test_Config_File
     """
 
-    def __init__(self, filename=None, debug=True):
+    def __init__(self, filename=None, debug=False):
         """
-        Initialize the list and optionally parse a file.
+        Initialize the parser and optionally parse a file.
 
-        @param filename: Path of the file that will be taken.
+        @param filename: Path of the file to parse.
         @param debug: Whether to turn on debugging output.
         """
-        self.list = [array.array("H", [4, 4, 4, 4])]
-        self.object_cache = []
-        self.object_cache_indices = {}
-        self.regex_cache = {}
+        self.node = Node()
         self.debug = debug
         if filename:
             self.parse_file(filename)
@@ -39,689 +167,436 @@ class config:
 
     def parse_file(self, filename):
         """
-        Parse file.  If it doesn't exist, raise an IOError.
+        Parse a file.
 
         @param filename: Path of the configuration file.
         """
-        if not os.path.exists(filename):
-            raise IOError("File %s not found" % filename)
-        str = open(filename).read()
-        self.list = self.parse(configreader(filename, str), self.list)
+        self.node = self._parse(FileReader(filename), self.node)
 
 
-    def parse_string(self, str):
+    def parse_string(self, s):
         """
         Parse a string.
 
-        @param str: String to parse.
+        @param s: String to parse.
         """
-        self.list = self.parse(configreader('<string>', str, real_file=False), self.list)
+        self.node = self._parse(StrReader(s), self.node)
 
 
-    def fork_and_parse(self, filename=None, str=None):
-        """
-        Parse a file and/or a string in a separate process to save memory.
-
-        Python likes to keep memory to itself even after the objects occupying
-        it have been destroyed.  If during a call to parse_file() or
-        parse_string() a lot of memory is used, it can only be freed by
-        terminating the process.  This function works around the problem by
-        doing the parsing in a forked process and then terminating it, freeing
-        any unneeded memory.
-
-        Note: if an exception is raised during parsing, its information will be
-        printed, and the resulting list will be empty.  The exception will not
-        be raised in the process calling this function.
-
-        @param filename: Path of file to parse (optional).
-        @param str: String to parse (optional).
-        """
-        r, w = os.pipe()
-        r, w = os.fdopen(r, "r"), os.fdopen(w, "w")
-        pid = os.fork()
-        if not pid:
-            # Child process
-            r.close()
-            try:
-                if filename:
-                    self.parse_file(filename)
-                if str:
-                    self.parse_string(str)
-            except:
-                traceback.print_exc()
-                self.list = []
-            # Convert the arrays to strings before pickling because at least
-            # some Python versions can't pickle/unpickle arrays
-            l = [a.tostring() for a in self.list]
-            cPickle.dump((l, self.object_cache), w, -1)
-            w.close()
-            os._exit(0)
-        else:
-            # Parent process
-            w.close()
-            (l, self.object_cache) = cPickle.load(r)
-            r.close()
-            os.waitpid(pid, 0)
-            self.list = []
-            for s in l:
-                a = array.array("H")
-                a.fromstring(s)
-                self.list.append(a)
-
-
-    def get_generator(self):
+    def get_dicts(self, node=None, ctx=[], content=[], shortname=[], dep=[]):
         """
         Generate dictionaries from the code parsed so far.  This should
-        probably be called after parsing something.
+        be called after parsing something.
 
         @return: A dict generator.
         """
-        for a in self.list:
-            name, shortname, depend, content = _array_get_all(a,
-                                                              self.object_cache)
-            dict = {"name": name, "shortname": shortname, "depend": depend}
-            self._apply_content_to_dict(dict, content)
-            yield dict
-
-
-    def get_list(self):
-        """
-        Generate a list of dictionaries from the code parsed so far.
-        This should probably be called after parsing something.
+        def apply_ops_to_dict(d, content):
+            for filename, linenum, s in content:
+                op_found = None
+                op_pos = len(s)
+                for op in ops:
+                    if op in s:
+                        pos = s.index(op)
+                        if pos < op_pos:
+                            op_found = op
+                            op_pos = pos
+                if not op_found:
+                    continue
+                left, value = map(str.strip, s.split(op_found, 1))
+                if value and ((value[0] == '"' and value[-1] == '"') or
+                              (value[0] == "'" and value[-1] == "'")):
+                    value = value[1:-1]
+                filters_and_key = map(str.strip, left.split(":"))
+                for f in filters_and_key[:-1]:
+                    if not Filter(f).match(ctx, ctx_set):
+                        break
+                else:
+                    key = filters_and_key[-1]
+                    ops[op_found](d, key, value)
+
+        def process_content(content, failed_filters):
+            # 1. Check that the filters in content are OK with the current
+            #    context (ctx).
+            # 2. Move the parts of content that are still relevant into
+            #    new_content and unpack conditional blocks if appropriate.
+            #    For example, if an 'only' statement fully matches ctx, it
+            #    becomes irrelevant and is not appended to new_content.
+            #    If a conditional block fully matches, its contents are
+            #    unpacked into new_content.
+            # 3. Move failed filters into failed_filters, so that next time we
+            #    reach this node or one of its ancestors, we'll check those
+            #    filters first.
+            for t in content:
+                filename, linenum, obj = t
+                if type(obj) is str:
+                    new_content.append(t)
+                    continue
+                elif type(obj) is OnlyFilter:
+                    if not obj.might_match(ctx, ctx_set, labels):
+                        self._debug("    filter did not pass: %r (%s:%s)",
+                                    obj.line, filename, linenum)
+                        failed_filters.append(t)
+                        return False
+                    elif obj.match(ctx, ctx_set):
+                        continue
+                elif type(obj) is NoFilter:
+                    if obj.match(ctx, ctx_set):
+                        self._debug("    filter did not pass: %r (%s:%s)",
+                                    obj.line, filename, linenum)
+                        failed_filters.append(t)
+                        return False
+                    elif not obj.might_match(ctx, ctx_set, labels):
+                        continue
+                elif type(obj) is Condition:
+                    if obj.match(ctx, ctx_set):
+                        self._debug("    conditional block matches: %r (%s:%s)",
+                                    obj.line, filename, linenum)
+                        # Check and unpack the content inside this Condition
+                        # object (note: the failed filters should go into
+                        # new_internal_filters because we don't expect them to
+                        # come from outside this node, even if the Condition
+                        # itself was external)
+                        if not process_content(obj.content,
+                                               new_internal_filters):
+                            failed_filters.append(t)
+                            return False
+                        continue
+                    elif not obj.might_match(ctx, ctx_set, labels):
+                        continue
+                new_content.append(t)
+            return True
+
+        def might_pass(failed_ctx,
+                       failed_ctx_set,
+                       failed_external_filters,
+                       failed_internal_filters):
+            for t in failed_external_filters:
+                if t not in content:
+                    return True
+                filename, linenum, filter = t
+                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
+                                     labels):
+                    return True
+            for t in failed_internal_filters:
+                filename, linenum, filter = t
+                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
+                                     labels):
+                    return True
+            return False
+
+        def add_failed_case():
+            node.failed_cases.appendleft((ctx, ctx_set,
+                                          new_external_filters,
+                                          new_internal_filters))
+            if len(node.failed_cases) > num_failed_cases:
+                node.failed_cases.pop()
+
+        node = node or self.node
+        # Update dep
+        for d in node.dep:
+            temp = ctx + [d]
+            dep = dep + [".".join([s for s in temp if s])]
+        # Update ctx
+        ctx = ctx + node.name
+        ctx_set = set(ctx)
+        labels = node.labels
+        # Get the current name
+        name = ".".join([s for s in ctx if s])
+        if node.name:
+            self._debug("checking out %r", name)
+        # Check previously failed filters
+        for i, failed_case in enumerate(node.failed_cases):
+            if not might_pass(*failed_case):
+                self._debug("    this subtree has failed before")
+                del node.failed_cases[i]
+                node.failed_cases.appendleft(failed_case)
+                return
+        # Check content and unpack it into new_content
+        new_content = []
+        new_external_filters = []
+        new_internal_filters = []
+        if (not process_content(node.content, new_internal_filters) or
+            not process_content(content, new_external_filters)):
+            add_failed_case()
+            return
+        # Update shortname
+        if node.append_to_shortname:
+            shortname = shortname + node.name
+        # Recurse into children
+        count = 0
+        for n in node.children:
+            for d in self.get_dicts(n, ctx, new_content, shortname, dep):
+                count += 1
+                yield d
+        # Reached leaf?
+        if not node.children:
+            self._debug("    reached leaf, returning it")
+            d = {"name": name, "dep": dep,
+                 "shortname": ".".join([s for s in shortname if s])}
+            apply_ops_to_dict(d, new_content)
+            yield d
+        # If this node did not produce any dicts, remember the failed filters
+        # of its descendants
+        elif not count:
+            new_external_filters = []
+            new_internal_filters = []
+            for n in node.children:
+                (failed_ctx,
+                 failed_ctx_set,
+                 failed_external_filters,
+                 failed_internal_filters) = n.failed_cases[0]
+                for obj in failed_internal_filters:
+                    if obj not in new_internal_filters:
+                        new_internal_filters.append(obj)
+                for obj in failed_external_filters:
+                    if obj in content:
+                        if obj not in new_external_filters:
+                            new_external_filters.append(obj)
+                    else:
+                        if obj not in new_internal_filters:
+                            new_internal_filters.append(obj)
+            add_failed_case()
 
-        @return: A list of dicts.
-        """
-        return list(self.get_generator())
 
+    def _debug(self, s, *args):
+        if self.debug:
+            s = "DEBUG: %s" % s
+            print s % args
 
-    def count(self, filter=".*"):
-        """
-        Return the number of dictionaries whose names match filter.
 
-        @param filter: A regular expression string.
-        """
-        exp = self._get_filter_regex(filter)
-        count = 0
-        for a in self.list:
-            name = _array_get_name(a, self.object_cache)
-            if exp.search(name):
-                count += 1
-        return count
+    def _warn(self, s, *args):
+        s = "WARNING: %s" % s
+        print s % args
 
 
-    def parse_variants(self, cr, list, subvariants=False, prev_indent=-1):
+    def _parse_variants(self, cr, node, prev_indent=-1):
         """
-        Read and parse lines from a configreader object until a line with an
+        Read and parse lines from a FileReader object until a line with an
         indent level lower than or equal to prev_indent is encountered.
 
-        @brief: Parse a 'variants' or 'subvariants' block from a configreader
-            object.
-        @param cr: configreader object to be parsed.
-        @param list: List of arrays to operate on.
-        @param subvariants: If True, parse in 'subvariants' mode;
-            otherwise parse in 'variants' mode.
+        @param cr: A FileReader/StrReader object.
+        @param node: A node to operate on.
         @param prev_indent: The indent level of the "parent" block.
-        @return: The resulting list of arrays.
+        @return: A node object.
         """
-        new_list = []
+        node4 = Node()
 
         while True:
-            pos = cr.tell()
-            (indented_line, line, indent) = cr.get_next_line()
-            if indent <= prev_indent:
-                cr.seek(pos)
+            line, indent, linenum = cr.get_next_line(prev_indent)
+            if not line:
                 break
 
-            # Get name and dependencies
-            (name, depend) = map(str.strip, line.lstrip("- ").split(":"))
+            name, dep = map(str.strip, line.lstrip("- ").split(":"))
 
-            # See if name should be added to the 'shortname' field
-            add_to_shortname = not name.startswith("@")
-            name = name.lstrip("@")
+            node2 = Node()
+            node2.children = [node]
+            node2.labels = node.labels
 
-            # Store name and dependencies in cache and get their indices
-            n = self._store_str(name)
-            d = self._store_str(depend)
+            node3 = self._parse(cr, node2, prev_indent=indent)
+            node3.name = name.lstrip("@").split(".")
+            node3.dep = dep.replace(",", " ").split()
+            node3.append_to_shortname = not name.startswith("@")
 
-            # Make a copy of list
-            temp_list = [a[:] for a in list]
+            node4.children += [node3]
+            node4.labels.update(node3.labels)
+            node4.labels.update(node3.name)
 
-            if subvariants:
-                # If we're parsing 'subvariants', first modify the list
-                if add_to_shortname:
-                    for a in temp_list:
-                        _array_append_to_name_shortname_depend(a, n, d)
-                else:
-                    for a in temp_list:
-                        _array_append_to_name_depend(a, n, d)
-                temp_list = self.parse(cr, temp_list, restricted=True,
-                                       prev_indent=indent)
-            else:
-                # If we're parsing 'variants', parse before modifying the list
-                if self.debug:
-                    _debug_print(indented_line,
-                                 "Entering variant '%s' "
-                                 "(variant inherits %d dicts)" %
-                                 (name, len(list)))
-                temp_list = self.parse(cr, temp_list, restricted=False,
-                                       prev_indent=indent)
-                if add_to_shortname:
-                    for a in temp_list:
-                        _array_prepend_to_name_shortname_depend(a, n, d)
-                else:
-                    for a in temp_list:
-                        _array_prepend_to_name_depend(a, n, d)
+        return node4
 
-            new_list += temp_list
 
-        return new_list
-
-
-    def parse(self, cr, list, restricted=False, prev_indent=-1):
+    def _parse(self, cr, node, prev_indent=-1):
         """
-        Read and parse lines from a configreader object until a line with an
+        Read and parse lines from a StrReader object until a line with an
         indent level lower than or equal to prev_indent is encountered.
 
-        @brief: Parse a configreader object.
-        @param cr: A configreader object.
-        @param list: A list of arrays to operate on (list is modified in
-            place and should not be used after the call).
-        @param restricted: If True, operate in restricted mode
-            (prohibit 'variants').
+        @param cr: A FileReader/StrReader object.
+        @param node: A Node or a Condition object to operate on.
         @param prev_indent: The indent level of the "parent" block.
-        @return: The resulting list of arrays.
-        @note: List is destroyed and should not be used after the call.
-            Only the returned list should be used.
+        @return: A node object.
         """
-        current_block = ""
-
         while True:
-            pos = cr.tell()
-            (indented_line, line, indent) = cr.get_next_line()
-            if indent <= prev_indent:
-                cr.seek(pos)
-                self._append_content_to_arrays(list, current_block)
+            line, indent, linenum = cr.get_next_line(prev_indent)
+            if not line:
                 break
 
-            len_list = len(list)
-
-            # Parse assignment operators (keep lines in temporary buffer)
-            if "=" in line:
-                if self.debug and not restricted:
-                    _debug_print(indented_line,
-                                 "Parsing operator (%d dicts in current "
-                                 "context)" % len_list)
-                current_block += line + "\n"
-                continue
-
-            # Flush the temporary buffer
-            self._append_content_to_arrays(list, current_block)
-            current_block = ""
-
             words = line.split()
 
-            # Parse 'no' and 'only' statements
-            if words[0] == "no" or words[0] == "only":
-                if len(words) <= 1:
-                    continue
-                filters = map(self._get_filter_regex, words[1:])
-                filtered_list = []
-                if words[0] == "no":
-                    for a in list:
-                        name = _array_get_name(a, self.object_cache)
-                        for filter in filters:
-                            if filter.search(name):
-                                break
-                        else:
-                            filtered_list.append(a)
-                if words[0] == "only":
-                    for a in list:
-                        name = _array_get_name(a, self.object_cache)
-                        for filter in filters:
-                            if filter.search(name):
-                                filtered_list.append(a)
-                                break
-                list = filtered_list
-                if self.debug and not restricted:
-                    _debug_print(indented_line,
-                                 "Parsing no/only (%d dicts in current "
-                                 "context, %d remain)" %
-                                 (len_list, len(list)))
-                continue
-
             # Parse 'variants'
             if line == "variants:":
-                # 'variants' not allowed in restricted mode
-                # (inside an exception or inside subvariants)
-                if restricted:
-                    e_msg = "Using variants in this context is not allowed"
-                    cr.raise_error(e_msg)
-                if self.debug and not restricted:
-                    _debug_print(indented_line,
-                                 "Entering variants block (%d dicts in "
-                                 "current context)" % len_list)
-                list = self.parse_variants(cr, list, subvariants=False,
-                                           prev_indent=indent)
-                continue
-
-            # Parse 'subvariants' (the block is parsed for each dict
-            # separately)
-            if line == "subvariants:":
-                if self.debug and not restricted:
-                    _debug_print(indented_line,
-                                 "Entering subvariants block (%d dicts in "
-                                 "current context)" % len_list)
-                new_list = []
-                # Remember current position
-                pos = cr.tell()
-                # Read the lines in any case
-                self.parse_variants(cr, [], subvariants=True,
-                                    prev_indent=indent)
-                # Iterate over the list...
-                for index in xrange(len(list)):
-                    # Revert to initial position in this 'subvariants' block
-                    cr.seek(pos)
-                    # Everything inside 'subvariants' should be parsed in
-                    # restricted mode
-                    new_list += self.parse_variants(cr, list[index:index+1],
-                                                    subvariants=True,
-                                                    prev_indent=indent)
-                list = new_list
+                # 'variants' is not allowed inside a conditional block
+                if isinstance(node, Condition):
+                    raise ValueError("'variants' is not allowed inside a "
+                                     "conditional block (%s:%s)" %
+                                     (cr.filename, linenum))
+                node = self._parse_variants(cr, node, prev_indent=indent)
                 continue
 
             # Parse 'include' statements
             if words[0] == "include":
-                if len(words) <= 1:
+                if len(words) < 2:
+                    self._warn("%r (%s:%s): missing parameter. What are you "
+                               "including?", line, cr.filename, linenum)
                     continue
-                if self.debug and not restricted:
-                    _debug_print(indented_line, "Entering file %s" % words[1])
-
-                cur_filename = cr.real_filename()
-                if cur_filename is None:
-                    cr.raise_error("'include' is valid only when parsing a file")
-
-                filename = os.path.join(os.path.dirname(cur_filename),
-                                        words[1])
-                if not os.path.exists(filename):
-                    cr.raise_error("Cannot include %s -- file not found" % (filename))
-
-                str = open(filename).read()
-                list = self.parse(configreader(filename, str), list, restricted)
-                if self.debug and not restricted:
-                    _debug_print("", "Leaving file %s" % words[1])
+                if not isinstance(cr, FileReader):
+                    self._warn("%r (%s:%s): cannot include because no file is "
+                               "currently open", line, cr.filename, linenum)
+                    continue
+                filename = os.path.join(os.path.dirname(cr.filename), words[1])
+                if not os.path.isfile(filename):
+                    self._warn("%r (%s:%s): file doesn't exist or is not a "
+                               "regular file", line, cr.filename, linenum)
+                    continue
+                node = self._parse(FileReader(filename), node)
+                continue
 
+            # Parse 'only' and 'no' filters
+            if words[0] in ("only", "no"):
+                if len(words) < 2:
+                    self._warn("%r (%s:%s): missing parameter", line,
+                               cr.filename, linenum)
+                    continue
+                if words[0] == "only":
+                    node.content += [(cr.filename, linenum, OnlyFilter(line))]
+                elif words[0] == "no":
+                    node.content += [(cr.filename, linenum, NoFilter(line))]
                 continue
 
-            # Parse multi-line exceptions
-            # (the block is parsed for each dict separately)
+            # Parse conditional blocks
             if line.endswith(":"):
-                if self.debug and not restricted:
-                    _debug_print(indented_line,
-                                 "Entering multi-line exception block "
-                                 "(%d dicts in current context outside "
-                                 "exception)" % len_list)
-                line = line[:-1]
-                new_list = []
-                # Remember current position
-                pos = cr.tell()
-                # Read the lines in any case
-                self.parse(cr, [], restricted=True, prev_indent=indent)
-                # Iterate over the list...
-                exp = self._get_filter_regex(line)
-                for index in xrange(len(list)):
-                    name = _array_get_name(list[index], self.object_cache)
-                    if exp.search(name):
-                        # Revert to initial position in this exception block
-                        cr.seek(pos)
-                        # Everything inside an exception should be parsed in
-                        # restricted mode
-                        new_list += self.parse(cr, list[index:index+1],
-                                               restricted=True,
-                                               prev_indent=indent)
-                    else:
-                        new_list.append(list[index])
-                list = new_list
+                cond = Condition(line)
+                self._parse(cr, cond, prev_indent=indent)
+                node.content += [(cr.filename, linenum, cond)]
                 continue
 
-        return list
+            node.content += [(cr.filename, linenum, line)]
+            continue
 
-
-    def _get_filter_regex(self, filter):
-        """
-        Return a regex object corresponding to a given filter string.
-
-        All regular expressions given to the parser are passed through this
-        function first.  Its purpose is to make them more specific and better
-        suited to match dictionary names: it forces simple expressions to match
-        only between dots or at the beginning or end of a string.  For example,
-        the filter 'foo' will match 'foo.bar' but not 'foobar'.
-        """
-        try:
-            return self.regex_cache[filter]
-        except KeyError:
-            exp = re.compile(r"(\.|^)(%s)(\.|$)" % filter)
-            self.regex_cache[filter] = exp
-            return exp
-
-
-    def _store_str(self, str):
-        """
-        Store str in the internal object cache, if it isn't already there, and
-        return its identifying index.
-
-        @param str: String to store.
-        @return: The index of str in the object cache.
-        """
-        try:
-            return self.object_cache_indices[str]
-        except KeyError:
-            self.object_cache.append(str)
-            index = len(self.object_cache) - 1
-            self.object_cache_indices[str] = index
-            return index
-
-
-    def _append_content_to_arrays(self, list, content):
-        """
-        Append content (config code containing assignment operations) to a list
-        of arrays.
-
-        @param list: List of arrays to operate on.
-        @param content: String containing assignment operations.
-        """
-        if content:
-            str_index = self._store_str(content)
-            for a in list:
-                _array_append_to_content(a, str_index)
-
-
-    def _apply_content_to_dict(self, dict, content):
-        """
-        Apply the operations in content (config code containing assignment
-        operations) to a dict.
-
-        @param dict: Dictionary to operate on.  Must have 'name' key.
-        @param content: String containing assignment operations.
-        """
-        for line in content.splitlines():
-            op_found = None
-            op_pos = len(line)
-            for op in ops:
-                pos = line.find(op)
-                if pos >= 0 and pos < op_pos:
-                    op_found = op
-                    op_pos = pos
-            if not op_found:
-                continue
-            (left, value) = map(str.strip, line.split(op_found, 1))
-            if value and ((value[0] == '"' and value[-1] == '"') or
-                          (value[0] == "'" and value[-1] == "'")):
-                value = value[1:-1]
-            filters_and_key = map(str.strip, left.split(":"))
-            filters = filters_and_key[:-1]
-            key = filters_and_key[-1]
-            for filter in filters:
-                exp = self._get_filter_regex(filter)
-                if not exp.search(dict["name"]):
-                    break
-            else:
-                ops[op_found](dict, key, value)
+        return node
 
 
 # Assignment operators
 
-def _op_set(dict, key, value):
-    dict[key] = value
+def _op_set(d, key, value):
+    d[key] = value
 
 
-def _op_append(dict, key, value):
-    dict[key] = dict.get(key, "") + value
+def _op_append(d, key, value):
+    d[key] = d.get(key, "") + value
 
 
-def _op_prepend(dict, key, value):
-    dict[key] = value + dict.get(key, "")
+def _op_prepend(d, key, value):
+    d[key] = value + d.get(key, "")
 
 
-def _op_regex_set(dict, exp, value):
+def _op_regex_set(d, exp, value):
     exp = re.compile("^(%s)$" % exp)
-    for key in dict:
+    for key in d:
         if exp.match(key):
-            dict[key] = value
+            d[key] = value
 
 
-def _op_regex_append(dict, exp, value):
+def _op_regex_append(d, exp, value):
     exp = re.compile("^(%s)$" % exp)
-    for key in dict:
+    for key in d:
         if exp.match(key):
-            dict[key] += value
+            d[key] += value
 
 
-def _op_regex_prepend(dict, exp, value):
+def _op_regex_prepend(d, exp, value):
     exp = re.compile("^(%s)$" % exp)
-    for key in dict:
+    for key in d:
         if exp.match(key):
-            dict[key] = value + dict[key]
-
+            d[key] = value + d[key]
 
-ops = {
-    "=": _op_set,
-    "+=": _op_append,
-    "<=": _op_prepend,
-    "?=": _op_regex_set,
-    "?+=": _op_regex_append,
-    "?<=": _op_regex_prepend,
-}
-
-
-# Misc functions
-
-def _debug_print(str1, str2=""):
-    """
-    Nicely print two strings and an arrow.
 
-    @param str1: First string.
-    @param str2: Second string.
-    """
-    if str2:
-        str = "%-50s ---> %s" % (str1, str2)
-    else:
-        str = str1
-    logging.debug(str)
+ops = {"=": _op_set,
+       "+=": _op_append,
+       "<=": _op_prepend,
+       "?=": _op_regex_set,
+       "?+=": _op_regex_append,
+       "?<=": _op_regex_prepend}
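[The operator table above maps each assignment token to a handler; a self-contained sketch of the same handlers (reproducing a subset of the functions from this patch) shows how they compose:]

```python
import re

# Subset of the assignment operators defined in the patch above.
def _op_set(d, key, value):
    d[key] = value

def _op_append(d, key, value):
    d[key] = d.get(key, "") + value

def _op_prepend(d, key, value):
    d[key] = value + d.get(key, "")

def _op_regex_set(d, exp, value):
    # '?=' sets every existing key that fully matches the regex.
    exp = re.compile("^(%s)$" % exp)
    for key in d:
        if exp.match(key):
            d[key] = value

ops = {"=": _op_set, "+=": _op_append, "<=": _op_prepend, "?=": _op_regex_set}

d = {"foo": "bar"}
ops["+="](d, "foo", "baz")   # foo -> "barbaz"
ops["<="](d, "foo", "pre-")  # foo -> "pre-barbaz"
ops["?="](d, "f.*", "new")   # regex-set: key "foo" matches -> "new"
print(d)
```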
 
 
-# configreader
+# StrReader and FileReader
 
-class configreader:
+class StrReader(object):
     """
-    Preprocess an input string and provide file-like services.
-    This is intended as a replacement for the file and StringIO classes,
-    whose readline() and/or seek() methods seem to be slow.
+    Preprocess an input string for easy reading.
     """
-
-    def __init__(self, filename, str, real_file=True):
+    def __init__(self, s):
         """
         Initialize the reader.
 
-        @param filename: the filename we're parsing
-        @param str: The string to parse.
-        @param real_file: Indicates if filename represents a real file. Defaults to True.
+        @param s: The string to parse.
         """
-        self.filename = filename
-        self.is_real_file = real_file
-        self.line_index = 0
-        self.lines = []
-        self.real_number = []
-        for num, line in enumerate(str.splitlines()):
+        self.filename = "<string>"
+        self._lines = []
+        self._line_index = 0
+        for linenum, line in enumerate(s.splitlines()):
             line = line.rstrip().expandtabs()
-            stripped_line = line.strip()
+            stripped_line = line.lstrip()
             indent = len(line) - len(stripped_line)
             if (not stripped_line
                 or stripped_line.startswith("#")
                 or stripped_line.startswith("//")):
                 continue
-            self.lines.append((line, stripped_line, indent))
-            self.real_number.append(num + 1)
-
-
-    def real_filename(self):
-        """Returns the filename we're reading, in case it is a real file
-
-        @returns the filename we are parsing, or None in case we're not parsing a real file
-        """
-        if self.is_real_file:
-            return self.filename
-
-    def get_next_line(self):
-        """
-        Get the next non-empty, non-comment line in the string.
+            self._lines.append((stripped_line, indent, linenum + 1))
 
-        @param file: File like object.
-        @return: (line, stripped_line, indent), where indent is the line's
-            indent level or -1 if no line is available.
-        """
-        try:
-            if self.line_index < len(self.lines):
-                return self.lines[self.line_index]
-            else:
-                return (None, None, -1)
-        finally:
-            self.line_index += 1
 
-
-    def tell(self):
-        """
-        Return the current line index.
-        """
-        return self.line_index
-
-
-    def seek(self, index):
-        """
-        Set the current line index.
+    def get_next_line(self, prev_indent):
         """
-        self.line_index = index
+        Get the next non-empty, non-comment line in the string, whose
+        indentation level is higher than prev_indent.
 
-    def raise_error(self, msg):
-        """Raise an error related to the last line returned by get_next_line()
+        @param prev_indent: The indentation level of the previous block.
+        @return: (line, indent, linenum), where indent is the line's
+            indentation level.  If no line is available, (None, -1, -1) is
+            returned.
         """
-        if self.line_index == 0: # nothing was read. shouldn't happen, but...
-            line_id = 'BEGIN'
-        elif self.line_index >= len(self.lines): # past EOF
-            line_id = 'EOF'
-        else:
-            # line_index is the _next_ line. get the previous one
-            line_id = str(self.real_number[self.line_index-1])
-        raise error.AutotestError("%s:%s: %s" % (self.filename, line_id, msg))
-
-
-# Array structure:
-# ----------------
-# The first 4 elements contain the indices of the 4 segments.
-# a[0] -- Index of beginning of 'name' segment (always 4).
-# a[1] -- Index of beginning of 'shortname' segment.
-# a[2] -- Index of beginning of 'depend' segment.
-# a[3] -- Index of beginning of 'content' segment.
-# The next elements in the array comprise the aforementioned segments:
-# The 'name' segment begins with a[a[0]] and ends with a[a[1]-1].
-# The 'shortname' segment begins with a[a[1]] and ends with a[a[2]-1].
-# The 'depend' segment begins with a[a[2]] and ends with a[a[3]-1].
-# The 'content' segment begins with a[a[3]] and ends at the end of the array.
-
-# The following functions append/prepend to various segments of an array.
-
-def _array_append_to_name_shortname_depend(a, name, depend):
-    a.insert(a[1], name)
-    a.insert(a[2] + 1, name)
-    a.insert(a[3] + 2, depend)
-    a[1] += 1
-    a[2] += 2
-    a[3] += 3
-
-
-def _array_prepend_to_name_shortname_depend(a, name, depend):
-    a[1] += 1
-    a[2] += 2
-    a[3] += 3
-    a.insert(a[0], name)
-    a.insert(a[1], name)
-    a.insert(a[2], depend)
-
+        if self._line_index >= len(self._lines):
+            return None, -1, -1
+        line, indent, linenum = self._lines[self._line_index]
+        if indent <= prev_indent:
+            return None, -1, -1
+        self._line_index += 1
+        return line, indent, linenum
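[A quick usage sketch of the reader above (the class is reproduced here verbatim so the example runs standalone): comments and blank lines are dropped at construction time, and get_next_line() stops at the end of the current indentation block.]

```python
class StrReader(object):
    """Preprocess an input string for easy reading (as in the patch above)."""
    def __init__(self, s):
        self.filename = "<string>"
        self._lines = []
        self._line_index = 0
        for linenum, line in enumerate(s.splitlines()):
            line = line.rstrip().expandtabs()
            stripped_line = line.lstrip()
            indent = len(line) - len(stripped_line)
            # Skip empty lines and comments.
            if (not stripped_line
                or stripped_line.startswith("#")
                or stripped_line.startswith("//")):
                continue
            self._lines.append((stripped_line, indent, linenum + 1))

    def get_next_line(self, prev_indent):
        # Return (None, -1, -1) at EOF or when the block at prev_indent ends.
        if self._line_index >= len(self._lines):
            return None, -1, -1
        line, indent, linenum = self._lines[self._line_index]
        if indent <= prev_indent:
            return None, -1, -1
        self._line_index += 1
        return line, indent, linenum

cr = StrReader("key = val\n# comment\n    nested = 1\n")
print(cr.get_next_line(-1))  # ('key = val', 0, 1)
print(cr.get_next_line(0))   # ('nested = 1', 4, 3)
print(cr.get_next_line(4))   # (None, -1, -1) -- block ended
```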
 
-def _array_append_to_name_depend(a, name, depend):
-    a.insert(a[1], name)
-    a.insert(a[3] + 1, depend)
-    a[1] += 1
-    a[2] += 1
-    a[3] += 2
 
-
-def _array_prepend_to_name_depend(a, name, depend):
-    a[1] += 1
-    a[2] += 1
-    a[3] += 2
-    a.insert(a[0], name)
-    a.insert(a[2], depend)
-
-
-def _array_append_to_content(a, content):
-    a.append(content)
-
-
-def _array_get_name(a, object_cache):
-    """
-    Return the name of a dictionary represented by a given array.
-
-    @param a: Array representing a dictionary.
-    @param object_cache: A list of strings referenced by elements in the array.
+class FileReader(StrReader):
     """
-    return ".".join([object_cache[i] for i in a[a[0]:a[1]]])
-
-
-def _array_get_all(a, object_cache):
+    Preprocess an input file for easy reading.
     """
-    Return a 4-tuple containing all the data stored in a given array, in a
-    format that is easy to turn into an actual dictionary.
+    def __init__(self, filename):
+        """
+        Initialize the reader.
 
-    @param a: Array representing a dictionary.
-    @param object_cache: A list of strings referenced by elements in the array.
-    @return: A 4-tuple: (name, shortname, depend, content), in which all
-        members are strings except depend which is a list of strings.
-    """
-    name = ".".join([object_cache[i] for i in a[a[0]:a[1]]])
-    shortname = ".".join([object_cache[i] for i in a[a[1]:a[2]]])
-    content = "".join([object_cache[i] for i in a[a[3]:]])
-    depend = []
-    prefix = ""
-    for n, d in zip(a[a[0]:a[1]], a[a[2]:a[3]]):
-        for dep in object_cache[d].split():
-            depend.append(prefix + dep)
-        prefix += object_cache[n] + "."
-    return name, shortname, depend, content
+        @param filename: The name of the input file.
+        """
+        StrReader.__init__(self, open(filename).read())
+        self.filename = filename
 
 
 if __name__ == "__main__":
-    parser = optparse.OptionParser("usage: %prog [options] [filename]")
-    parser.add_option('--verbose', dest="debug", action='store_true',
-                      help='include debug messages in console output')
+    parser = optparse.OptionParser("usage: %prog [options] <filename>")
+    parser.add_option("-v", "--verbose", dest="debug", action="store_true",
+                      help="include debug messages in console output")
+    parser.add_option("-f", "--fullname", dest="fullname", action="store_true",
+                      help="show full dict names instead of short names")
+    parser.add_option("-c", "--contents", dest="contents", action="store_true",
+                      help="show dict contents")
 
     options, args = parser.parse_args()
-    debug = options.debug
-    if args:
-        filenames = args
-    else:
-        filenames = [os.path.join(os.path.dirname(sys.argv[0]), "tests.cfg")]
-
-    # Here we configure the stand alone program to use the autotest
-    # logging system.
-    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig(),
-                                      verbose=debug)
-    cfg = config(debug=debug)
-    for fn in filenames:
-        cfg.parse_file(fn)
-    dicts = cfg.get_generator()
-    for i, dict in enumerate(dicts):
-        print "Dictionary #%d:" % (i)
-        keys = dict.keys()
-        keys.sort()
-        for key in keys:
-            print "    %s = %s" % (key, dict[key])
+    if not args:
+        parser.error("filename required")
+
+    c = Parser(args[0], debug=options.debug)
+    for i, d in enumerate(c.get_dicts()):
+        if options.fullname:
+            print "dict %4d:  %s" % (i + 1, d["name"])
+        else:
+            print "dict %4d:  %s" % (i + 1, d["shortname"])
+        if options.contents:
+            keys = d.keys()
+            keys.sort()
+            for key in keys:
+                print "    %s = %s" % (key, d[key])
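[The changelog's filter syntax (',' means OR, '..' means AND, '.' means IMMEDIATELY-FOLLOWED-BY, matched against dot-separated dict names) can be sketched as follows. This is an illustration of the semantics only, not the patch's actual OnlyFilter/NoFilter implementation:]

```python
def match_filter(filter_str, name):
    """Return True if the dot-separated dict name matches the filter.

    ','  separates OR terms; '..' separates AND terms within an OR term;
    '.'  joins words that must appear consecutively in the name.
    """
    words = name.split(".")

    def term_matches(term):
        seq = term.strip().split(".")
        # The '.'-joined sequence must appear as consecutive name words.
        return any(words[i:i + len(seq)] == seq
                   for i in range(len(words) - len(seq) + 1))

    for or_term in filter_str.split(","):
        if all(term_matches(t) for t in or_term.split("..")):
            return True
    return False

name = "smp2.qcow2.Fedora.14.migrate"
print(match_filter("qcow2..Fedora.14", name))  # True
print(match_filter("qcow2..14.Fedora", name))  # False: '14.Fedora' not consecutive
print(match_filter("ide, scsi", name))         # False: neither word present
```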
diff --git a/client/tests/kvm/kvm_scheduler.py b/client/tests/kvm/kvm_scheduler.py
index 95282e4..b96bb32 100644
--- a/client/tests/kvm/kvm_scheduler.py
+++ b/client/tests/kvm/kvm_scheduler.py
@@ -63,7 +63,6 @@ class scheduler:
                 test_index = int(cmd[1])
                 test = self.tests[test_index].copy()
                 test.update(self_dict)
-                test = kvm_utils.get_sub_pool(test, index, self.num_workers)
                 test_iterations = int(test.get("iterations", 1))
                 status = run_test_func("kvm", params=test,
                                        tag=test.get("shortname"),
@@ -129,7 +128,7 @@ class scheduler:
                     # If the test failed, mark all dependent tests as "failed" too
                     if not status:
                         for i, other_test in enumerate(self.tests):
-                            for dep in other_test.get("depend", []):
+                            for dep in other_test.get("dep", []):
                                 if dep in test["name"]:
                                     test_status[i] = "fail"
 
@@ -154,7 +153,7 @@ class scheduler:
                         continue
                     # Make sure the test's dependencies are satisfied
                     dependencies_satisfied = True
-                    for dep in test["depend"]:
+                    for dep in test["dep"]:
                         dependencies = [j for j, t in enumerate(self.tests)
                                         if dep in t["name"]]
                         bad_status_deps = [j for j in dependencies
@@ -200,14 +199,14 @@ class scheduler:
                     used_mem[worker] = test_used_mem
                     # Assign all related tests to this worker
                     for j, other_test in enumerate(self.tests):
-                        for other_dep in other_test["depend"]:
+                        for other_dep in other_test["dep"]:
                             # All tests that depend on this test
                             if other_dep in test["name"]:
                                 test_worker[j] = worker
                                 break
                             # ... and all tests that share a dependency
                             # with this test
-                            for dep in test["depend"]:
+                            for dep in test["dep"]:
                                 if dep in other_dep or other_dep in dep:
                                     test_worker[j] = worker
                                     break
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 44ebb88..9e25a0a 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -1101,7 +1101,7 @@ def run_tests(test_list, job):
         if dict.get("skip") == "yes":
             continue
         dependencies_satisfied = True
-        for dep in dict.get("depend"):
+        for dep in dict.get("dep"):
             for test_name in status_dict.keys():
                 if not dep in test_name:
                     continue
diff --git a/client/tests/kvm/tests.cfg.sample b/client/tests/kvm/tests.cfg.sample
index bde7aba..4b3b965 100644
--- a/client/tests/kvm/tests.cfg.sample
+++ b/client/tests/kvm/tests.cfg.sample
@@ -18,10 +18,9 @@ include cdkeys.cfg
 image_name(_.*)? ?<= /tmp/kvm_autotest_root/images/
 cdrom(_.*)? ?<= /tmp/kvm_autotest_root/
 floppy ?<= /tmp/kvm_autotest_root/
-Linux:
-    unattended_install:
-        kernel ?<= /tmp/kvm_autotest_root/
-        initrd ?<= /tmp/kvm_autotest_root/
+Linux..unattended_install:
+    kernel ?<= /tmp/kvm_autotest_root/
+    initrd ?<= /tmp/kvm_autotest_root/
 
 # Here are the test sets variants. The variant 'qemu_kvm_windows_quick' is
 # fully commented, the following ones have comments only on noteworthy points
@@ -49,7 +48,7 @@ variants:
         # Operating system choice
         only Win7.64
         # Subtest choice. You can modify that line to add more subtests
-        only unattended_install.cdrom boot shutdown
+        only unattended_install.cdrom, boot, shutdown
 
     # Runs qemu, f14 64 bit guest OS, install, boot, shutdown
     - @qemu_f14_quick:
@@ -65,7 +64,7 @@ variants:
         only no_pci_assignable
         only smallpages
         only Fedora.14.64
-        only unattended_install.cdrom boot shutdown
+        only unattended_install.cdrom, boot, shutdown
         # qemu needs -enable-kvm on the cmdline
         extra_params += ' -enable-kvm'
 
@@ -81,7 +80,7 @@ variants:
         only no_pci_assignable
         only smallpages
         only Fedora.14.64
-        only unattended_install.cdrom boot shutdown
+        only unattended_install.cdrom, boot, shutdown
 
 # You may provide information about the DTM server for WHQL tests here:
 #whql:
diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index 80362db..e65bed2 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -1722,8 +1722,8 @@ variants:
 
     # Windows section
     - @Windows:
-        no autotest linux_s3 vlan ioquit unattended_install.(url|nfs|remote_ks)
-        no jumbo nicdriver_unload nic_promisc multicast mac_change ethtool clock_getres
+        no autotest, linux_s3, vlan, ioquit, unattended_install.url, unattended_install.nfs, unattended_install.remote_ks
+        no jumbo, nicdriver_unload, nic_promisc, multicast, mac_change, ethtool, clock_getres
 
         shutdown_command = shutdown /s /f /t 0
         reboot_command = shutdown /r /f /t 0
@@ -1747,7 +1747,7 @@ variants:
         mem_chk_cmd = wmic memphysical
         mem_chk_cur_cmd = wmic memphysical
 
-        unattended_install.cdrom|whql.support_vm_install:
+        unattended_install.cdrom, whql.support_vm_install:
             timeout = 7200
             finish_program = deps/finish.exe
             cdroms += " winutils"
@@ -1857,7 +1857,7 @@ variants:
                             steps = WinXP-32.steps
                         setup:
                             steps = WinXP-32-rss.steps
-                        unattended_install.cdrom|whql.support_vm_install:
+                        unattended_install.cdrom, whql.support_vm_install:
                             cdrom_cd1 = isos/windows/WindowsXP-sp2-vlk.iso
                             md5sum_cd1 = 743450644b1d9fe97b3cf379e22dceb0
                             md5sum_1m_cd1 = b473bf75af2d1269fec8958cf0202bfd
@@ -1890,7 +1890,7 @@ variants:
                             steps = WinXP-64.steps
                         setup:
                             steps = WinXP-64-rss.steps
-                        unattended_install.cdrom|whql.support_vm_install:
+                        unattended_install.cdrom, whql.support_vm_install:
                             cdrom_cd1 = isos/windows/WindowsXP-64.iso
                             md5sum_cd1 = 8d3f007ec9c2060cec8a50ee7d7dc512
                             md5sum_1m_cd1 = e812363ff427effc512b7801ee70e513
@@ -1928,7 +1928,7 @@ variants:
                             steps = Win2003-32.steps
                         setup:
                             steps = Win2003-32-rss.steps
-                        unattended_install.cdrom|whql.support_vm_install:
+                        unattended_install.cdrom, whql.support_vm_install:
                             cdrom_cd1 = isos/windows/Windows2003_r2_VLK.iso
                             md5sum_cd1 = 03e921e9b4214773c21a39f5c3f42ef7
                             md5sum_1m_cd1 = 37c2fdec15ac4ec16aa10fdfdb338aa3
@@ -1960,7 +1960,7 @@ variants:
                             steps = Win2003-64.steps
                         setup:
                             steps = Win2003-64-rss.steps
-                        unattended_install.cdrom|whql.support_vm_install:
+                        unattended_install.cdrom, whql.support_vm_install:
                             cdrom_cd1 = isos/windows/Windows2003-x64.iso
                             md5sum_cd1 = 5703f87c9fd77d28c05ffadd3354dbbd
                             md5sum_1m_cd1 = 439393c384116aa09e08a0ad047dcea8
@@ -2008,7 +2008,7 @@ variants:
                                     steps = Win-Vista-32.steps
                                 setup:
                                     steps = WinVista-32-rss.steps
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/WindowsVista-32.iso
                                     md5sum_cd1 = 1008f323d5170c8e614e52ccb85c0491
                                     md5sum_1m_cd1 = c724e9695da483bc0fd59e426eaefc72
@@ -2025,7 +2025,7 @@ variants:
 
                             - sp2:
                                 image_name += -sp2-32
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/en_windows_vista_with_sp2_x86_dvd_342266.iso
                                     md5sum_cd1 = 19ca90a425667812977bab6f4ce24175
                                     md5sum_1m_cd1 = 89c15020e0e6125be19acf7a2e5dc614
@@ -2059,7 +2059,7 @@ variants:
                                     steps = Win-Vista-64.steps
                                 setup:
                                     steps = WinVista-64-rss.steps
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/WindowsVista-64.iso
                                     md5sum_cd1 = 11e2010d857fffc47813295e6be6d58d
                                     md5sum_1m_cd1 = 0947bcd5390546139e25f25217d6f165
@@ -2076,7 +2076,7 @@ variants:
 
                             - sp2:
                                 image_name += -sp2-64
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/en_windows_vista_sp2_x64_dvd_342267.iso
                                     md5sum_cd1 = a1c024d7abaf34bac3368e88efbc2574
                                     md5sum_1m_cd1 = 3d84911a80f3df71d1026f7adedc2181
@@ -2112,7 +2112,7 @@ variants:
                                     steps = Win2008-32.steps
                                 setup:
                                     steps = Win2008-32-rss.steps
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/Windows2008-x86.iso
                                     md5sum=0bfca49f0164de0a8eba236ced47007d
                                     md5sum_1m=07d7f5006393f74dc76e6e2e943e2440
@@ -2127,7 +2127,7 @@ variants:
 
                             - sp2:
                                 image_name += -sp2-32
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/en_windows_server_2008_datacenter_enterprise_standard_sp2_x86_dvd_342333.iso
                                     md5sum_cd1 = b9201aeb6eef04a3c573d036a8780bdf
                                     md5sum_1m_cd1 = b7a9d42e55ea1e85105a3a6ad4da8e04
@@ -2156,7 +2156,7 @@ variants:
                                     passwd = 1q2w3eP
                                 setup:
                                     steps = Win2008-64-rss.steps
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/Windows2008-x64.iso
                                     md5sum=27c58cdb3d620f28c36333a5552f271c
                                     md5sum_1m=efdcc11d485a1ef9afa739cb8e0ca766
@@ -2171,7 +2171,7 @@ variants:
 
                             - sp2:
                                 image_name += -sp2-64
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/en_windows_server_2008_datacenter_enterprise_standard_sp2_x64_dvd_342336.iso
                                     md5sum_cd1 = e94943ef484035b3288d8db69599a6b5
                                     md5sum_1m_cd1 = ee55506823d0efffb5532ddd88a8e47b
@@ -2188,7 +2188,7 @@ variants:
 
                             - r2:
                                 image_name += -r2-64
-                                unattended_install.cdrom|whql.support_vm_install:
+                                unattended_install.cdrom, whql.support_vm_install:
                                     cdrom_cd1 = isos/windows/en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
                                     md5sum_cd1 = 0207ef392c60efdda92071b0559ca0f9
                                     md5sum_1m_cd1 = a5a22ce25008bd7109f6d830d627e3ed
@@ -2216,7 +2216,7 @@ variants:
                 variants:
                     - 32:
                         image_name += -32
-                        unattended_install.cdrom|whql.support_vm_install:
+                        unattended_install.cdrom, whql.support_vm_install:
                             cdrom_cd1 = isos/windows/en_windows_7_ultimate_x86_dvd_x15-65921.iso
                             md5sum_cd1 = d0b8b407e8a3d4b75ee9c10147266b89
                             md5sum_1m_cd1 = 2b0c2c22b1ae95065db08686bf83af93
@@ -2249,7 +2249,7 @@ variants:
                             steps = Win7-64.steps
                         setup:
                             steps = Win7-64-rss.steps
-                        unattended_install.cdrom|whql.support_vm_install:
+                        unattended_install.cdrom, whql.support_vm_install:
                             cdrom_cd1 = isos/windows/en_windows_7_ultimate_x64_dvd_x15-65922.iso
                             md5sum_cd1 = f43d22e4fb07bf617d573acd8785c028
                             md5sum_1m_cd1 = b44d8cf99dbed2a5cb02765db8dfd48f
@@ -2329,7 +2329,7 @@ variants:
                 md5sum_cd1 = 9fae22f2666369968a76ef59e9a81ced
 
 
-whql.support_vm_install|whql.client_install.support_vm:
+whql.support_vm_install, whql.client_install.support_vm:
     image_name += -supportvm
 
 
@@ -2352,7 +2352,7 @@ variants:
         drive_format=virtio
 
 
-virtio_net|virtio_blk|e1000|balloon_check:
+virtio_net, virtio_blk, e1000, balloon_check:
     only Fedora.11 Fedora.12 Fedora.13 Fedora.14 RHEL.5 RHEL.6 OpenSUSE.11 SLES.11 Ubuntu-8.10-server
     # only WinXP Win2003 Win2008 WinVista Win7 Fedora.11 Fedora.12 Fedora.13 Fedora.14 RHEL.5 RHEL.6 OpenSUSE.11 SLES.11 Ubuntu-8.10-server
 
@@ -2365,15 +2365,9 @@ variants:
         check_image = yes
     - vmdk:
         no ioquit
-        only Fedora Ubuntu Windows
-        only smp2
-        only rtl8139
         image_format = vmdk
     - raw:
         no ioquit
-        only Fedora Ubuntu Windows
-        only smp2
-        only rtl8139
         image_format = raw
 
 
-- 
1.7.3.4

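[Editorial note] The filter semantics described in the patch ( ',' = OR, '..' = AND, '.' = IMMEDIATELY-FOLLOWED-BY) can be illustrated with a self-contained sketch adapted from the patch's Filter class. This is a simplified illustration of the matching algorithm as posted, not the authoritative implementation, and it may not track later revisions of kvm_config.py:

```python
# Sketch of the new filter matching, adapted from the Filter class
# in this patch.  A filter string is parsed into OR-ed "words"; each
# word is a list of AND-ed "blocks"; each block is a list of labels
# that must appear adjacently, in order, in a dict name (the context).

class Filter(object):
    def __init__(self, s):
        self.filter = [[block.split(".") for block in word.split("..")]
                       for word in s.replace(",", " ").split()]

    def match_adjacent(self, block, ctx, ctx_set):
        # Count how many leading labels of 'block' appear adjacently,
        # in order, somewhere in the context list 'ctx'.
        if block[0] not in ctx_set:
            return 0
        if len(block) == 1:
            return 1
        if block[1] not in ctx_set:
            return int(ctx[-1] == block[0])
        k = 0
        i = ctx.index(block[0])
        while i < len(ctx):
            if k > 0 and ctx[i] != block[k]:
                i -= k - 1
                k = 0
            if ctx[i] == block[k]:
                k += 1
                if k >= len(block):
                    break
                if block[k] not in ctx_set:
                    break
            i += 1
        return k

    def match(self, ctx, ctx_set):
        # A filter matches if any word matches; a word matches only if
        # every one of its blocks matches adjacently and completely.
        for word in self.filter:
            for block in word:
                if self.match_adjacent(block, ctx, ctx_set) != len(block):
                    break
            else:
                return True
        return False


f = Filter("qcow2..Fedora.14, RHEL.6..raw")
ctx = "Fedora.14.qcow2.migrate".split(".")
assert f.match(ctx, set(ctx))       # qcow2 AND (Fedora followed by 14)
ctx = "qcow2.14.Fedora.migrate".split(".")
assert not f.match(ctx, set(ctx))   # '14.Fedora' is the wrong order
ctx = "RHEL.6.raw.boot".split(".")
assert f.match(ctx, set(ctx))       # second OR alternative matches
```

This also demonstrates the equivalences stated in the patch description: word order within a filter is irrelevant ('ide, scsi' == 'scsi, ide'), while label order within a '.' block is significant.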

* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09  1:50 [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py Michael Goldish
@ 2011-02-09  2:56 ` Cleber Rosa
  2011-02-09  9:28 ` Avi Kivity
  2011-02-09 16:06 ` Ryan Harper
  2 siblings, 0 replies; 19+ messages in thread
From: Cleber Rosa @ 2011-02-09  2:56 UTC (permalink / raw)
  To: Michael Goldish; +Cc: autotest, Uri Lublin, kvm

Top posting to make the congratulations reach you sooner: this was very 
much anticipated and very much appreciated!

Saving 2 minutes on each test job run is great! But going from 2 minutes 
to (almost) nil on every config experimentation is amazing!

only kudos..congrats..cheers

On 02/08/2011 11:50 PM, Michael Goldish wrote:
> This is a reimplementation of the dict generator.  It is much faster than the
> current implementation and uses a very small amount of memory.  Running time
> and memory usage scale polynomially with the number of defined variants,
> compared to exponentially in the current implementation.
>
> Instead of regular expressions in the filters, the following syntax is used:
>
> , means OR
> .. means AND
> . means IMMEDIATELY-FOLLOWED-BY
>
> Example:
>
> only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
>
> means select all dicts whose names have:
>
> (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
> ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
> (smp2 AND qcow2 AND migrate AND ide)
>
> 'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
> 'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
> 'ide, scsi' is equivalent to 'scsi, ide'.
>
> Filters can be used in 3 ways:
> only <filter>
> no <filter>
> <filter>:
>
> The last one starts a conditional block, e.g.
>
> Fedora.14..qcow2:
>      no migrate, reboot
>      foo = bar
>
> Interface changes:
> - The main class is now called 'Parser' instead of 'config'.
> - fork_and_parse() has been removed.  parse_file() and parse_string() should be
>    used instead.
> - When run as a standalone program, kvm_config.py just prints the shortnames of
>    the generated dicts by default, and can optionally print the full names and
>    contents of the dicts.
> - By default, debug messages are not printed, but they can be enabled by
>    passing debug=True to Parser's constructor, or by running kvm_config.py -v.
> - The 'depend' key has been renamed to 'dep'.
>
> Signed-off-by: Michael Goldish <mgoldish@redhat.com>
> Signed-off-by: Uri Lublin <ulublin@redhat.com>
> ---
>   client/tests/kvm/control               |   28 +-
>   client/tests/kvm/control.parallel      |   12 +-
>   client/tests/kvm/kvm_config.py         | 1051 ++++++++++++++------------------
>   client/tests/kvm/kvm_scheduler.py      |    9 +-
>   client/tests/kvm/kvm_utils.py          |    2 +-
>   client/tests/kvm/tests.cfg.sample      |   13 +-
>   client/tests/kvm/tests_base.cfg.sample |   46 +-
>   7 files changed, 513 insertions(+), 648 deletions(-)
>
> diff --git a/client/tests/kvm/control b/client/tests/kvm/control
> index d226adf..be37678 100644
> --- a/client/tests/kvm/control
> +++ b/client/tests/kvm/control
> @@ -35,13 +35,11 @@ str = """
>   # build configuration here.  For example:
>   #release_tag = 84
>   """
> -build_cfg = kvm_config.config()
> -# As the base test config is quite large, in order to save memory, we use the
> -# fork_and_parse() method, that creates another parser process and destroys it
> -# at the end of the parsing, so the memory spent can be given back to the OS.
> -build_cfg_path = os.path.join(kvm_test_dir, "build.cfg")
> -build_cfg.fork_and_parse(build_cfg_path, str)
> -if not kvm_utils.run_tests(build_cfg.get_generator(), job):
> +
> +parser = kvm_config.Parser()
> +parser.parse_file(os.path.join(kvm_test_dir, "build.cfg"))
> +parser.parse_string(str)
> +if not kvm_utils.run_tests(parser.get_dicts(), job):
>       logging.error("KVM build step failed, exiting.")
>       sys.exit(1)
>
> @@ -49,10 +47,11 @@ str = """
>   # This string will be parsed after tests.cfg.  Make any desired changes to the
>   # test configuration here.  For example:
>   #display = sdl
> -#install|setup: timeout_multiplier = 3
> +#install, setup: timeout_multiplier = 3
>   """
> -tests_cfg = kvm_config.config()
> -tests_cfg_path = os.path.join(kvm_test_dir, "tests.cfg")
> +
> +parser = kvm_config.Parser()
> +parser.parse_file(os.path.join(kvm_test_dir, "tests.cfg"))
>
>   if args:
>       # We get test parameters from command line
> @@ -67,11 +66,12 @@ if args:
>                   str += "%s = %s\n" % (key, value)
>           except IndexError:
>               pass
> -tests_cfg.fork_and_parse(tests_cfg_path, str)
> +parser.parse_string(str)
>
> -# Run the tests
> -kvm_utils.run_tests(tests_cfg.get_generator(), job)
> +logging.info("Selected tests:")
> +for i, d in enumerate(parser.get_dicts()):
> +    logging.info("Test %4d:  %s" % (i + 1, d["shortname"]))
> +kvm_utils.run_tests(parser.get_dicts(), job)
>
>   # Generate a nice HTML report inside the job's results dir
>   kvm_utils.create_report(kvm_test_dir, job.resultdir)
> -
> diff --git a/client/tests/kvm/control.parallel b/client/tests/kvm/control.parallel
> index ac84638..640ccf5 100644
> --- a/client/tests/kvm/control.parallel
> +++ b/client/tests/kvm/control.parallel
> @@ -163,16 +163,15 @@ import kvm_config
>   str = """
>   # This string will be parsed after tests.cfg.  Make any desired changes to the
>   # test configuration here.  For example:
> -#install|setup: timeout_multiplier = 3
> -#only fc8_quick
> +#install, setup: timeout_multiplier = 3
>   #display = sdl
>   """
> -cfg = kvm_config.config()
> -filename = os.path.join(pwd, "tests.cfg")
> -cfg.fork_and_parse(filename, str)
>
> -tests = cfg.get_list()
> +parser = kvm_config.Parser()
> +parser.parse_file(os.path.join(pwd, "tests.cfg"))
> +parser.parse_string(str)
>
> +tests = list(parser.get_dicts())
>
>   # -------------
>   # Run the tests
> @@ -192,7 +191,6 @@ s = kvm_scheduler.scheduler(tests, num_workers, total_cpus, total_mem, pwd)
>   job.parallel([s.scheduler],
>                *[(s.worker, i, job.run_test) for i in range(num_workers)])
>
> -
>   # create the html report in result dir
>   reporter = os.path.join(pwd, 'make_html_report.py')
>   html_file = os.path.join(job.resultdir,'results.html')
> diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
> index 13cdfe2..1b27181 100755
> --- a/client/tests/kvm/kvm_config.py
> +++ b/client/tests/kvm/kvm_config.py
> @@ -1,18 +1,149 @@
>   #!/usr/bin/python
>   """
> -KVM configuration file utility functions.
> +KVM test configuration file parser
>
> -@copyright: Red Hat 2008-2010
> +@copyright: Red Hat 2008-2011
>   """
>
> -import logging, re, os, sys, optparse, array, traceback, cPickle
> -import common
> -import kvm_utils
> -from autotest_lib.client.common_lib import error
> -from autotest_lib.client.common_lib import logging_manager
> +import re, os, sys, optparse, collections
> +
> +
> +# Filter syntax:
> +# , means OR
> +# .. means AND
> +# . means IMMEDIATELY-FOLLOWED-BY
> +
> +# Example:
> +# qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> +# means match all dicts whose names have:
> +# (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
> +# ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
> +# (smp2 AND qcow2 AND migrate AND ide)
> +
> +# Note:
> +# 'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
> +# 'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
> +# 'ide, scsi' is equivalent to 'scsi, ide'.
> +
> +# Filters can be used in 3 ways:
> +# only <filter>
> +# no <filter>
> +# <filter>:
> +# The last one starts a conditional block.
> +
> +
> +num_failed_cases = 5
> +
> +
> +class Node(object):
> +    def __init__(self):
> +        self.name = []
> +        self.dep = []
> +        self.content = []
> +        self.children = []
> +        self.labels = set()
> +        self.append_to_shortname = False
> +        self.failed_cases = collections.deque()
> +
> +
> +# Filter must inherit from object (otherwise type() won't work)
> +class Filter(object):
> +    def __init__(self, s):
> +        self.filter = []
> +        for word in s.replace(",", " ").split():
> +            word = [block.split(".") for block in word.split("..")]
> +            self.filter += [word]
> +
> +
> +    def match_adjacent(self, block, ctx, ctx_set):
> +        # TODO: explain what this function does
> +        if block[0] not in ctx_set:
> +            return 0
> +        if len(block) == 1:
> +            return 1
> +        if block[1] not in ctx_set:
> +            return int(ctx[-1] == block[0])
> +        k = 0
> +        i = ctx.index(block[0])
> +        while i < len(ctx):
> +            if k > 0 and ctx[i] != block[k]:
> +                i -= k - 1
> +                k = 0
> +            if ctx[i] == block[k]:
> +                k += 1
> +                if k >= len(block):
> +                    break
> +                if block[k] not in ctx_set:
> +                    break
> +            i += 1
> +        return k
> +
> +
> +    def might_match_adjacent(self, block, ctx, ctx_set, descendant_labels):
> +        matched = self.match_adjacent(block, ctx, ctx_set)
> +        for elem in block[matched:]:
> +            if elem not in descendant_labels:
> +                return False
> +        return True
> +
> +
> +    def match(self, ctx, ctx_set):
> +        for word in self.filter:
> +            for block in word:
> +                if self.match_adjacent(block, ctx, ctx_set) != len(block):
> +                    break
> +            else:
> +                return True
> +        return False
> +
> +
> +    def might_match(self, ctx, ctx_set, descendant_labels):
> +        for word in self.filter:
> +            for block in word:
> +                if not self.might_match_adjacent(block, ctx, ctx_set,
> +                                                 descendant_labels):
> +                    break
> +            else:
> +                return True
> +        return False
> +
> +
> +class NoOnlyFilter(Filter):
> +    def __init__(self, line):
> +        Filter.__init__(self, line.split(None, 1)[1])
> +        self.line = line
> +
> +
> +class OnlyFilter(NoOnlyFilter):
> +    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                   descendant_labels):
> +        for word in self.filter:
> +            for block in word:
> +                if (self.match_adjacent(block, ctx, ctx_set) >
> +                    self.match_adjacent(block, failed_ctx, failed_ctx_set)):
> +                    return self.might_match(ctx, ctx_set, descendant_labels)
> +        return False
> +
>
> +class NoFilter(NoOnlyFilter):
> +    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                   descendant_labels):
> +        for word in self.filter:
> +            for block in word:
> +                if (self.match_adjacent(block, ctx, ctx_set) <
> +                    self.match_adjacent(block, failed_ctx, failed_ctx_set)):
> +                    return not self.match(ctx, ctx_set)
> +        return False
>
> -class config:
> +
> +class Condition(NoFilter):
> +    def __init__(self, line):
> +        Filter.__init__(self, line.rstrip(":"))
> +        self.line = line
> +        self.content = []
> +
> +
> +class Parser(object):
>       """
>       Parse an input file or string that follows the KVM Test Config File format
>       and generate a list of dicts that will be later used as configuration
> @@ -21,17 +152,14 @@ class config:
>       @see: http://www.linux-kvm.org/page/KVM-Autotest/Test_Config_File
>       """
>
> -    def __init__(self, filename=None, debug=True):
> +    def __init__(self, filename=None, debug=False):
>           """
> -        Initialize the list and optionally parse a file.
> +        Initialize the parser and optionally parse a file.
>
> -        @param filename: Path of the file that will be taken.
> +        @param filename: Path of the file to parse.
>           @param debug: Whether to turn on debugging output.
>           """
> -        self.list = [array.array("H", [4, 4, 4, 4])]
> -        self.object_cache = []
> -        self.object_cache_indices = {}
> -        self.regex_cache = {}
> +        self.node = Node()
>           self.debug = debug
>           if filename:
>               self.parse_file(filename)
> @@ -39,689 +167,436 @@ class config:
>
>       def parse_file(self, filename):
>           """
> -        Parse file.  If it doesn't exist, raise an IOError.
> +        Parse a file.
>
>           @param filename: Path of the configuration file.
>           """
> -        if not os.path.exists(filename):
> -            raise IOError("File %s not found" % filename)
> -        str = open(filename).read()
> -        self.list = self.parse(configreader(filename, str), self.list)
> +        self.node = self._parse(FileReader(filename), self.node)
>
>
> -    def parse_string(self, str):
> +    def parse_string(self, s):
>           """
>           Parse a string.
>
> -        @param str: String to parse.
> +        @param s: String to parse.
>           """
> -        self.list = self.parse(configreader('<string>', str, real_file=False), self.list)
> +        self.node = self._parse(StrReader(s), self.node)
>
>
> -    def fork_and_parse(self, filename=None, str=None):
> -        """
> -        Parse a file and/or a string in a separate process to save memory.
> -
> -        Python likes to keep memory to itself even after the objects occupying
> -        it have been destroyed.  If during a call to parse_file() or
> -        parse_string() a lot of memory is used, it can only be freed by
> -        terminating the process.  This function works around the problem by
> -        doing the parsing in a forked process and then terminating it, freeing
> -        any unneeded memory.
> -
> -        Note: if an exception is raised during parsing, its information will be
> -        printed, and the resulting list will be empty.  The exception will not
> -        be raised in the process calling this function.
> -
> -        @param filename: Path of file to parse (optional).
> -        @param str: String to parse (optional).
> -        """
> -        r, w = os.pipe()
> -        r, w = os.fdopen(r, "r"), os.fdopen(w, "w")
> -        pid = os.fork()
> -        if not pid:
> -            # Child process
> -            r.close()
> -            try:
> -                if filename:
> -                    self.parse_file(filename)
> -                if str:
> -                    self.parse_string(str)
> -            except:
> -                traceback.print_exc()
> -                self.list = []
> -            # Convert the arrays to strings before pickling because at least
> -            # some Python versions can't pickle/unpickle arrays
> -            l = [a.tostring() for a in self.list]
> -            cPickle.dump((l, self.object_cache), w, -1)
> -            w.close()
> -            os._exit(0)
> -        else:
> -            # Parent process
> -            w.close()
> -            (l, self.object_cache) = cPickle.load(r)
> -            r.close()
> -            os.waitpid(pid, 0)
> -            self.list = []
> -            for s in l:
> -                a = array.array("H")
> -                a.fromstring(s)
> -                self.list.append(a)
> -
> -
> -    def get_generator(self):
> +    def get_dicts(self, node=None, ctx=[], content=[], shortname=[], dep=[]):
>           """
>           Generate dictionaries from the code parsed so far.  This should
> -        probably be called after parsing something.
> +        be called after parsing something.
>
>           @return: A dict generator.
>           """
> -        for a in self.list:
> -            name, shortname, depend, content = _array_get_all(a,
> -                                                              self.object_cache)
> -            dict = {"name": name, "shortname": shortname, "depend": depend}
> -            self._apply_content_to_dict(dict, content)
> -            yield dict
> -
> -
> -    def get_list(self):
> -        """
> -        Generate a list of dictionaries from the code parsed so far.
> -        This should probably be called after parsing something.
> +        def apply_ops_to_dict(d, content):
> +            for filename, linenum, s in content:
> +                op_found = None
> +                op_pos = len(s)
> +                for op in ops:
> +                    if op in s:
> +                        pos = s.index(op)
> +                        if pos < op_pos:
> +                            op_found = op
> +                            op_pos = pos
> +                if not op_found:
> +                    continue
> +                left, value = map(str.strip, s.split(op_found, 1))
> +                if value and ((value[0] == '"' and value[-1] == '"') or
> +                              (value[0] == "'" and value[-1] == "'")):
> +                    value = value[1:-1]
> +                filters_and_key = map(str.strip, left.split(":"))
> +                for f in filters_and_key[:-1]:
> +                    if not Filter(f).match(ctx, ctx_set):
> +                        break
> +                else:
> +                    key = filters_and_key[-1]
> +                    ops[op_found](d, key, value)
> +
> +        def process_content(content, failed_filters):
> +            # 1. Check that the filters in content are OK with the current
> +            #    context (ctx).
> +            # 2. Move the parts of content that are still relevant into
> +            #    new_content and unpack conditional blocks if appropriate.
> +            #    For example, if an 'only' statement fully matches ctx, it
> +            #    becomes irrelevant and is not appended to new_content.
> +            #    If a conditional block fully matches, its contents are
> +            #    unpacked into new_content.
> +            # 3. Move failed filters into failed_filters, so that next time we
> +            #    reach this node or one of its ancestors, we'll check those
> +            #    filters first.
> +            for t in content:
> +                filename, linenum, obj = t
> +                if type(obj) is str:
> +                    new_content.append(t)
> +                    continue
> +                elif type(obj) is OnlyFilter:
> +                    if not obj.might_match(ctx, ctx_set, labels):
> +                        self._debug("    filter did not pass: %r (%s:%s)",
> +                                    obj.line, filename, linenum)
> +                        failed_filters.append(t)
> +                        return False
> +                    elif obj.match(ctx, ctx_set):
> +                        continue
> +                elif type(obj) is NoFilter:
> +                    if obj.match(ctx, ctx_set):
> +                        self._debug("    filter did not pass: %r (%s:%s)",
> +                                    obj.line, filename, linenum)
> +                        failed_filters.append(t)
> +                        return False
> +                    elif not obj.might_match(ctx, ctx_set, labels):
> +                        continue
> +                elif type(obj) is Condition:
> +                    if obj.match(ctx, ctx_set):
> +                        self._debug("    conditional block matches: %r (%s:%s)",
> +                                    obj.line, filename, linenum)
> +                        # Check and unpack the content inside this Condition
> +                        # object (note: the failed filters should go into
> +                        # new_internal_filters because we don't expect them to
> +                        # come from outside this node, even if the Condition
> +                        # itself was external)
> +                        if not process_content(obj.content,
> +                                               new_internal_filters):
> +                            failed_filters.append(t)
> +                            return False
> +                        continue
> +                    elif not obj.might_match(ctx, ctx_set, labels):
> +                        continue
> +                new_content.append(t)
> +            return True
> +
> +        def might_pass(failed_ctx,
> +                       failed_ctx_set,
> +                       failed_external_filters,
> +                       failed_internal_filters):
> +            for t in failed_external_filters:
> +                if t not in content:
> +                    return True
> +                filename, linenum, filter = t
> +                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                                     labels):
> +                    return True
> +            for t in failed_internal_filters:
> +                filename, linenum, filter = t
> +                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                                     labels):
> +                    return True
> +            return False
> +
> +        def add_failed_case():
> +            node.failed_cases.appendleft((ctx, ctx_set,
> +                                          new_external_filters,
> +                                          new_internal_filters))
> +            if len(node.failed_cases) > num_failed_cases:
> +                node.failed_cases.pop()
> +
> +        node = node or self.node
> +        # Update dep
> +        for d in node.dep:
> +            temp = ctx + [d]
> +            dep = dep + [".".join([s for s in temp if s])]
> +        # Update ctx
> +        ctx = ctx + node.name
> +        ctx_set = set(ctx)
> +        labels = node.labels
> +        # Get the current name
> +        name = ".".join([s for s in ctx if s])
> +        if node.name:
> +            self._debug("checking out %r", name)
> +        # Check previously failed filters
> +        for i, failed_case in enumerate(node.failed_cases):
> +            if not might_pass(*failed_case):
> +                self._debug("    this subtree has failed before")
> +                del node.failed_cases[i]
> +                node.failed_cases.appendleft(failed_case)
> +                return
> +        # Check content and unpack it into new_content
> +        new_content = []
> +        new_external_filters = []
> +        new_internal_filters = []
> +        if (not process_content(node.content, new_internal_filters) or
> +            not process_content(content, new_external_filters)):
> +            add_failed_case()
> +            return
> +        # Update shortname
> +        if node.append_to_shortname:
> +            shortname = shortname + node.name
> +        # Recurse into children
> +        count = 0
> +        for n in node.children:
> +            for d in self.get_dicts(n, ctx, new_content, shortname, dep):
> +                count += 1
> +                yield d
> +        # Reached leaf?
> +        if not node.children:
> +            self._debug("    reached leaf, returning it")
> +            d = {"name": name, "dep": dep,
> +                 "shortname": ".".join([s for s in shortname if s])}
> +            apply_ops_to_dict(d, new_content)
> +            yield d
> +        # If this node did not produce any dicts, remember the failed filters
> +        # of its descendants
> +        elif not count:
> +            new_external_filters = []
> +            new_internal_filters = []
> +            for n in node.children:
> +                (failed_ctx,
> +                 failed_ctx_set,
> +                 failed_external_filters,
> +                 failed_internal_filters) = n.failed_cases[0]
> +                for obj in failed_internal_filters:
> +                    if obj not in new_internal_filters:
> +                        new_internal_filters.append(obj)
> +                for obj in failed_external_filters:
> +                    if obj in content:
> +                        if obj not in new_external_filters:
> +                            new_external_filters.append(obj)
> +                    else:
> +                        if obj not in new_internal_filters:
> +                            new_internal_filters.append(obj)
> +            add_failed_case()
>
> -        @return: A list of dicts.
> -        """
> -        return list(self.get_generator())
>
> +    def _debug(self, s, *args):
> +        if self.debug:
> +            s = "DEBUG: %s" % s
> +            print s % args
>
> -    def count(self, filter=".*"):
> -        """
> -        Return the number of dictionaries whose names match filter.
>
> -        @param filter: A regular expression string.
> -        """
> -        exp = self._get_filter_regex(filter)
> -        count = 0
> -        for a in self.list:
> -            name = _array_get_name(a, self.object_cache)
> -            if exp.search(name):
> -                count += 1
> -        return count
> +    def _warn(self, s, *args):
> +        s = "WARNING: %s" % s
> +        print s % args
>
>
> -    def parse_variants(self, cr, list, subvariants=False, prev_indent=-1):
> +    def _parse_variants(self, cr, node, prev_indent=-1):
>           """
> -        Read and parse lines from a configreader object until a line with an
> +        Read and parse lines from a FileReader object until a line with an
>           indent level lower than or equal to prev_indent is encountered.
>
> -        @brief: Parse a 'variants' or 'subvariants' block from a configreader
> -            object.
> -        @param cr: configreader object to be parsed.
> -        @param list: List of arrays to operate on.
> -        @param subvariants: If True, parse in 'subvariants' mode;
> -            otherwise parse in 'variants' mode.
> +        @param cr: A FileReader/StrReader object.
> +        @param node: A node to operate on.
>           @param prev_indent: The indent level of the "parent" block.
> -        @return: The resulting list of arrays.
> +        @return: A node object.
>           """
> -        new_list = []
> +        node4 = Node()
>
>           while True:
> -            pos = cr.tell()
> -            (indented_line, line, indent) = cr.get_next_line()
> -            if indent <= prev_indent:
> -                cr.seek(pos)
> +            line, indent, linenum = cr.get_next_line(prev_indent)
> +            if not line:
>                   break
>
> -            # Get name and dependencies
> -            (name, depend) = map(str.strip, line.lstrip("- ").split(":"))
> +            name, dep = map(str.strip, line.lstrip("- ").split(":"))
>
> -            # See if name should be added to the 'shortname' field
> -            add_to_shortname = not name.startswith("@")
> -            name = name.lstrip("@")
> +            node2 = Node()
> +            node2.children = [node]
> +            node2.labels = node.labels
>
> -            # Store name and dependencies in cache and get their indices
> -            n = self._store_str(name)
> -            d = self._store_str(depend)
> +            node3 = self._parse(cr, node2, prev_indent=indent)
> +            node3.name = name.lstrip("@").split(".")
> +            node3.dep = dep.replace(",", " ").split()
> +            node3.append_to_shortname = not name.startswith("@")
>
> -            # Make a copy of list
> -            temp_list = [a[:] for a in list]
> +            node4.children += [node3]
> +            node4.labels.update(node3.labels)
> +            node4.labels.update(node3.name)
>
> -            if subvariants:
> -                # If we're parsing 'subvariants', first modify the list
> -                if add_to_shortname:
> -                    for a in temp_list:
> -                        _array_append_to_name_shortname_depend(a, n, d)
> -                else:
> -                    for a in temp_list:
> -                        _array_append_to_name_depend(a, n, d)
> -                temp_list = self.parse(cr, temp_list, restricted=True,
> -                                       prev_indent=indent)
> -            else:
> -                # If we're parsing 'variants', parse before modifying the list
> -                if self.debug:
> -                    _debug_print(indented_line,
> -                                 "Entering variant '%s' "
> -                                 "(variant inherits %d dicts)" %
> -                                 (name, len(list)))
> -                temp_list = self.parse(cr, temp_list, restricted=False,
> -                                       prev_indent=indent)
> -                if add_to_shortname:
> -                    for a in temp_list:
> -                        _array_prepend_to_name_shortname_depend(a, n, d)
> -                else:
> -                    for a in temp_list:
> -                        _array_prepend_to_name_depend(a, n, d)
> +        return node4
>
> -            new_list += temp_list
>
> -        return new_list
> -
> -
> -    def parse(self, cr, list, restricted=False, prev_indent=-1):
> +    def _parse(self, cr, node, prev_indent=-1):
>           """
> -        Read and parse lines from a configreader object until a line with an
> +        Read and parse lines from a StrReader object until a line with an
>           indent level lower than or equal to prev_indent is encountered.
>
> -        @brief: Parse a configreader object.
> -        @param cr: A configreader object.
> -        @param list: A list of arrays to operate on (list is modified in
> -            place and should not be used after the call).
> -        @param restricted: If True, operate in restricted mode
> -            (prohibit 'variants').
> +        @param cr: A FileReader/StrReader object.
> +        @param node: A Node or a Condition object to operate on.
>           @param prev_indent: The indent level of the "parent" block.
> -        @return: The resulting list of arrays.
> -        @note: List is destroyed and should not be used after the call.
> -            Only the returned list should be used.
> +        @return: A node object.
>           """
> -        current_block = ""
> -
>           while True:
> -            pos = cr.tell()
> -            (indented_line, line, indent) = cr.get_next_line()
> -            if indent <= prev_indent:
> -                cr.seek(pos)
> -                self._append_content_to_arrays(list, current_block)
> +            line, indent, linenum = cr.get_next_line(prev_indent)
> +            if not line:
>                   break
>
> -            len_list = len(list)
> -
> -            # Parse assignment operators (keep lines in temporary buffer)
> -            if "=" in line:
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Parsing operator (%d dicts in current "
> -                                 "context)" % len_list)
> -                current_block += line + "\n"
> -                continue
> -
> -            # Flush the temporary buffer
> -            self._append_content_to_arrays(list, current_block)
> -            current_block = ""
> -
>               words = line.split()
>
> -            # Parse 'no' and 'only' statements
> -            if words[0] == "no" or words[0] == "only":
> -                if len(words) <= 1:
> -                    continue
> -                filters = map(self._get_filter_regex, words[1:])
> -                filtered_list = []
> -                if words[0] == "no":
> -                    for a in list:
> -                        name = _array_get_name(a, self.object_cache)
> -                        for filter in filters:
> -                            if filter.search(name):
> -                                break
> -                        else:
> -                            filtered_list.append(a)
> -                if words[0] == "only":
> -                    for a in list:
> -                        name = _array_get_name(a, self.object_cache)
> -                        for filter in filters:
> -                            if filter.search(name):
> -                                filtered_list.append(a)
> -                                break
> -                list = filtered_list
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Parsing no/only (%d dicts in current "
> -                                 "context, %d remain)" %
> -                                 (len_list, len(list)))
> -                continue
> -
>               # Parse 'variants'
>               if line == "variants:":
> -                # 'variants' not allowed in restricted mode
> -                # (inside an exception or inside subvariants)
> -                if restricted:
> -                    e_msg = "Using variants in this context is not allowed"
> -                    cr.raise_error(e_msg)
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Entering variants block (%d dicts in "
> -                                 "current context)" % len_list)
> -                list = self.parse_variants(cr, list, subvariants=False,
> -                                           prev_indent=indent)
> -                continue
> -
> -            # Parse 'subvariants' (the block is parsed for each dict
> -            # separately)
> -            if line == "subvariants:":
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Entering subvariants block (%d dicts in "
> -                                 "current context)" % len_list)
> -                new_list = []
> -                # Remember current position
> -                pos = cr.tell()
> -                # Read the lines in any case
> -                self.parse_variants(cr, [], subvariants=True,
> -                                    prev_indent=indent)
> -                # Iterate over the list...
> -                for index in xrange(len(list)):
> -                    # Revert to initial position in this 'subvariants' block
> -                    cr.seek(pos)
> -                    # Everything inside 'subvariants' should be parsed in
> -                    # restricted mode
> -                    new_list += self.parse_variants(cr, list[index:index+1],
> -                                                    subvariants=True,
> -                                                    prev_indent=indent)
> -                list = new_list
> +                # 'variants' is not allowed inside a conditional block
> +                if isinstance(node, Condition):
> +                    raise ValueError("'variants' is not allowed inside a "
> +                                     "conditional block (%s:%s)" %
> +                                     (cr.filename, linenum))
> +                node = self._parse_variants(cr, node, prev_indent=indent)
>                   continue
>
>               # Parse 'include' statements
>               if words[0] == "include":
> -                if len(words) <= 1:
> +                if len(words) < 2:
> +                    self._warn("%r (%s:%s): missing parameter. What are you "
> +                               "including?", line, cr.filename, linenum)
>                       continue
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line, "Entering file %s" % words[1])
> -
> -                cur_filename = cr.real_filename()
> -                if cur_filename is None:
> -                    cr.raise_error("'include' is valid only when parsing a file")
> -
> -                filename = os.path.join(os.path.dirname(cur_filename),
> -                                        words[1])
> -                if not os.path.exists(filename):
> -                    cr.raise_error("Cannot include %s -- file not found" % (filename))
> -
> -                str = open(filename).read()
> -                list = self.parse(configreader(filename, str), list, restricted)
> -                if self.debug and not restricted:
> -                    _debug_print("", "Leaving file %s" % words[1])
> +                if not isinstance(cr, FileReader):
> +                    self._warn("%r (%s:%s): cannot include because no file is "
> +                               "currently open", line, cr.filename, linenum)
> +                    continue
> +                filename = os.path.join(os.path.dirname(cr.filename), words[1])
> +                if not os.path.isfile(filename):
> +                    self._warn("%r (%s:%s): file doesn't exist or is not a "
> +                               "regular file", line, cr.filename, linenum)
> +                    continue
> +                node = self._parse(FileReader(filename), node)
> +                continue
>
> +            # Parse 'only' and 'no' filters
> +            if words[0] in ("only", "no"):
> +                if len(words) < 2:
> +                    self._warn("%r (%s:%s): missing parameter", line,
> +                               cr.filename, linenum)
> +                    continue
> +                if words[0] == "only":
> +                    node.content += [(cr.filename, linenum, OnlyFilter(line))]
> +                elif words[0] == "no":
> +                    node.content += [(cr.filename, linenum, NoFilter(line))]
>                   continue
>
> -            # Parse multi-line exceptions
> -            # (the block is parsed for each dict separately)
> +            # Parse conditional blocks
>               if line.endswith(":"):
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Entering multi-line exception block "
> -                                 "(%d dicts in current context outside "
> -                                 "exception)" % len_list)
> -                line = line[:-1]
> -                new_list = []
> -                # Remember current position
> -                pos = cr.tell()
> -                # Read the lines in any case
> -                self.parse(cr, [], restricted=True, prev_indent=indent)
> -                # Iterate over the list...
> -                exp = self._get_filter_regex(line)
> -                for index in xrange(len(list)):
> -                    name = _array_get_name(list[index], self.object_cache)
> -                    if exp.search(name):
> -                        # Revert to initial position in this exception block
> -                        cr.seek(pos)
> -                        # Everything inside an exception should be parsed in
> -                        # restricted mode
> -                        new_list += self.parse(cr, list[index:index+1],
> -                                               restricted=True,
> -                                               prev_indent=indent)
> -                    else:
> -                        new_list.append(list[index])
> -                list = new_list
> +                cond = Condition(line)
> +                self._parse(cr, cond, prev_indent=indent)
> +                node.content += [(cr.filename, linenum, cond)]
>                   continue
>
> -        return list
> +            node.content += [(cr.filename, linenum, line)]
> +            continue
>
> -
> -    def _get_filter_regex(self, filter):
> -        """
> -        Return a regex object corresponding to a given filter string.
> -
> -        All regular expressions given to the parser are passed through this
> -        function first.  Its purpose is to make them more specific and better
> -        suited to match dictionary names: it forces simple expressions to match
> -        only between dots or at the beginning or end of a string.  For example,
> -        the filter 'foo' will match 'foo.bar' but not 'foobar'.
> -        """
> -        try:
> -            return self.regex_cache[filter]
> -        except KeyError:
> -            exp = re.compile(r"(\.|^)(%s)(\.|$)" % filter)
> -            self.regex_cache[filter] = exp
> -            return exp
> -
> -
> -    def _store_str(self, str):
> -        """
> -        Store str in the internal object cache, if it isn't already there, and
> -        return its identifying index.
> -
> -        @param str: String to store.
> -        @return: The index of str in the object cache.
> -        """
> -        try:
> -            return self.object_cache_indices[str]
> -        except KeyError:
> -            self.object_cache.append(str)
> -            index = len(self.object_cache) - 1
> -            self.object_cache_indices[str] = index
> -            return index
> -
> -
> -    def _append_content_to_arrays(self, list, content):
> -        """
> -        Append content (config code containing assignment operations) to a list
> -        of arrays.
> -
> -        @param list: List of arrays to operate on.
> -        @param content: String containing assignment operations.
> -        """
> -        if content:
> -            str_index = self._store_str(content)
> -            for a in list:
> -                _array_append_to_content(a, str_index)
> -
> -
> -    def _apply_content_to_dict(self, dict, content):
> -        """
> -        Apply the operations in content (config code containing assignment
> -        operations) to a dict.
> -
> -        @param dict: Dictionary to operate on.  Must have 'name' key.
> -        @param content: String containing assignment operations.
> -        """
> -        for line in content.splitlines():
> -            op_found = None
> -            op_pos = len(line)
> -            for op in ops:
> -                pos = line.find(op)
> -                if pos >= 0 and pos < op_pos:
> -                    op_found = op
> -                    op_pos = pos
> -            if not op_found:
> -                continue
> -            (left, value) = map(str.strip, line.split(op_found, 1))
> -            if value and ((value[0] == '"' and value[-1] == '"') or
> -                          (value[0] == "'" and value[-1] == "'")):
> -                value = value[1:-1]
> -            filters_and_key = map(str.strip, left.split(":"))
> -            filters = filters_and_key[:-1]
> -            key = filters_and_key[-1]
> -            for filter in filters:
> -                exp = self._get_filter_regex(filter)
> -                if not exp.search(dict["name"]):
> -                    break
> -            else:
> -                ops[op_found](dict, key, value)
> +        return node
>
>
>   # Assignment operators
>
> -def _op_set(dict, key, value):
> -    dict[key] = value
> +def _op_set(d, key, value):
> +    d[key] = value
>
>
> -def _op_append(dict, key, value):
> -    dict[key] = dict.get(key, "") + value
> +def _op_append(d, key, value):
> +    d[key] = d.get(key, "") + value
>
>
> -def _op_prepend(dict, key, value):
> -    dict[key] = value + dict.get(key, "")
> +def _op_prepend(d, key, value):
> +    d[key] = value + d.get(key, "")
>
>
> -def _op_regex_set(dict, exp, value):
> +def _op_regex_set(d, exp, value):
>       exp = re.compile("^(%s)$" % exp)
> -    for key in dict:
> +    for key in d:
>           if exp.match(key):
> -            dict[key] = value
> +            d[key] = value
>
>
> -def _op_regex_append(dict, exp, value):
> +def _op_regex_append(d, exp, value):
>       exp = re.compile("^(%s)$" % exp)
> -    for key in dict:
> +    for key in d:
>           if exp.match(key):
> -            dict[key] += value
> +            d[key] += value
>
>
> -def _op_regex_prepend(dict, exp, value):
> +def _op_regex_prepend(d, exp, value):
>       exp = re.compile("^(%s)$" % exp)
> -    for key in dict:
> +    for key in d:
>           if exp.match(key):
> -            dict[key] = value + dict[key]
> -
> +            d[key] = value + d[key]
>
> -ops = {
> -    "=": _op_set,
> -    "+=": _op_append,
> -    "<=": _op_prepend,
> -    "?=": _op_regex_set,
> -    "?+=": _op_regex_append,
> -    "?<=": _op_regex_prepend,
> -}
> -
> -
> -# Misc functions
> -
> -def _debug_print(str1, str2=""):
> -    """
> -    Nicely print two strings and an arrow.
>
> -    @param str1: First string.
> -    @param str2: Second string.
> -    """
> -    if str2:
> -        str = "%-50s --->  %s" % (str1, str2)
> -    else:
> -        str = str1
> -    logging.debug(str)
> +ops = {"=": _op_set,
> +       "+=": _op_append,
> +       "<=": _op_prepend,
> +       "?=": _op_regex_set,
> +       "?+=": _op_regex_append,
> +       "?<=": _op_regex_prepend}
>
>
> -# configreader
> +# StrReader and FileReader
>
> -class configreader:
> +class StrReader(object):
>       """
> -    Preprocess an input string and provide file-like services.
> -    This is intended as a replacement for the file and StringIO classes,
> -    whose readline() and/or seek() methods seem to be slow.
> +    Preprocess an input string for easy reading.
>       """
> -
> -    def __init__(self, filename, str, real_file=True):
> +    def __init__(self, s):
>           """
>           Initialize the reader.
>
> -        @param filename: the filename we're parsing
> -        @param str: The string to parse.
> -        @param real_file: Indicates if filename represents a real file. Defaults to True.
> +        @param s: The string to parse.
>           """
> -        self.filename = filename
> -        self.is_real_file = real_file
> -        self.line_index = 0
> -        self.lines = []
> -        self.real_number = []
> -        for num, line in enumerate(str.splitlines()):
> +        self.filename = "<string>"
> +        self._lines = []
> +        self._line_index = 0
> +        for linenum, line in enumerate(s.splitlines()):
>               line = line.rstrip().expandtabs()
> -            stripped_line = line.strip()
> +            stripped_line = line.lstrip()
>               indent = len(line) - len(stripped_line)
>               if (not stripped_line
>                   or stripped_line.startswith("#")
>                   or stripped_line.startswith("//")):
>                   continue
> -            self.lines.append((line, stripped_line, indent))
> -            self.real_number.append(num + 1)
> -
> -
> -    def real_filename(self):
> -        """Returns the filename we're reading, in case it is a real file
> -
> -        @returns the filename we are parsing, or None in case we're not parsing a real file
> -        """
> -        if self.is_real_file:
> -            return self.filename
> -
> -    def get_next_line(self):
> -        """
> -        Get the next non-empty, non-comment line in the string.
> +            self._lines.append((stripped_line, indent, linenum + 1))
>
> -        @param file: File like object.
> -        @return: (line, stripped_line, indent), where indent is the line's
> -            indent level or -1 if no line is available.
> -        """
> -        try:
> -            if self.line_index < len(self.lines):
> -                return self.lines[self.line_index]
> -            else:
> -                return (None, None, -1)
> -        finally:
> -            self.line_index += 1
>
> -
> -    def tell(self):
> -        """
> -        Return the current line index.
> -        """
> -        return self.line_index
> -
> -
> -    def seek(self, index):
> -        """
> -        Set the current line index.
> +    def get_next_line(self, prev_indent):
>           """
> -        self.line_index = index
> +        Get the next non-empty, non-comment line in the string, whose
> +        indentation level is higher than prev_indent.
>
> -    def raise_error(self, msg):
> -        """Raise an error related to the last line returned by get_next_line()
> +        @param prev_indent: The indentation level of the previous block.
> +        @return: (line, indent, linenum), where indent is the line's
> +            indentation level.  If no line is available, (None, -1, -1) is
> +            returned.
>           """
> -        if self.line_index == 0: # nothing was read. shouldn't happen, but...
> -            line_id = 'BEGIN'
> -        elif self.line_index >= len(self.lines): # past EOF
> -            line_id = 'EOF'
> -        else:
> -            # line_index is the _next_ line. get the previous one
> -            line_id = str(self.real_number[self.line_index-1])
> -        raise error.AutotestError("%s:%s: %s" % (self.filename, line_id, msg))
> -
> -
> -# Array structure:
> -# ----------------
> -# The first 4 elements contain the indices of the 4 segments.
> -# a[0] -- Index of beginning of 'name' segment (always 4).
> -# a[1] -- Index of beginning of 'shortname' segment.
> -# a[2] -- Index of beginning of 'depend' segment.
> -# a[3] -- Index of beginning of 'content' segment.
> -# The next elements in the array comprise the aforementioned segments:
> -# The 'name' segment begins with a[a[0]] and ends with a[a[1]-1].
> -# The 'shortname' segment begins with a[a[1]] and ends with a[a[2]-1].
> -# The 'depend' segment begins with a[a[2]] and ends with a[a[3]-1].
> -# The 'content' segment begins with a[a[3]] and ends at the end of the array.
> -
> -# The following functions append/prepend to various segments of an array.
> -
> -def _array_append_to_name_shortname_depend(a, name, depend):
> -    a.insert(a[1], name)
> -    a.insert(a[2] + 1, name)
> -    a.insert(a[3] + 2, depend)
> -    a[1] += 1
> -    a[2] += 2
> -    a[3] += 3
> -
> -
> -def _array_prepend_to_name_shortname_depend(a, name, depend):
> -    a[1] += 1
> -    a[2] += 2
> -    a[3] += 3
> -    a.insert(a[0], name)
> -    a.insert(a[1], name)
> -    a.insert(a[2], depend)
> -
> +        if self._line_index >= len(self._lines):
> +            return None, -1, -1
> +        line, indent, linenum = self._lines[self._line_index]
> +        if indent <= prev_indent:
> +            return None, -1, -1
> +        self._line_index += 1
> +        return line, indent, linenum
>
> -def _array_append_to_name_depend(a, name, depend):
> -    a.insert(a[1], name)
> -    a.insert(a[3] + 1, depend)
> -    a[1] += 1
> -    a[2] += 1
> -    a[3] += 2
>
> -
> -def _array_prepend_to_name_depend(a, name, depend):
> -    a[1] += 1
> -    a[2] += 1
> -    a[3] += 2
> -    a.insert(a[0], name)
> -    a.insert(a[2], depend)
> -
> -
> -def _array_append_to_content(a, content):
> -    a.append(content)
> -
> -
> -def _array_get_name(a, object_cache):
> -    """
> -    Return the name of a dictionary represented by a given array.
> -
> -    @param a: Array representing a dictionary.
> -    @param object_cache: A list of strings referenced by elements in the array.
> +class FileReader(StrReader):
>       """
> -    return ".".join([object_cache[i] for i in a[a[0]:a[1]]])
> -
> -
> -def _array_get_all(a, object_cache):
> +    Preprocess an input file for easy reading.
>       """
> -    Return a 4-tuple containing all the data stored in a given array, in a
> -    format that is easy to turn into an actual dictionary.
> +    def __init__(self, filename):
> +        """
> +        Initialize the reader.
>
> -    @param a: Array representing a dictionary.
> -    @param object_cache: A list of strings referenced by elements in the array.
> -    @return: A 4-tuple: (name, shortname, depend, content), in which all
> -        members are strings except depend which is a list of strings.
> -    """
> -    name = ".".join([object_cache[i] for i in a[a[0]:a[1]]])
> -    shortname = ".".join([object_cache[i] for i in a[a[1]:a[2]]])
> -    content = "".join([object_cache[i] for i in a[a[3]:]])
> -    depend = []
> -    prefix = ""
> -    for n, d in zip(a[a[0]:a[1]], a[a[2]:a[3]]):
> -        for dep in object_cache[d].split():
> -            depend.append(prefix + dep)
> -        prefix += object_cache[n] + "."
> -    return name, shortname, depend, content
> +        @param filename: The name of the input file.
> +        """
> +        StrReader.__init__(self, open(filename).read())
> +        self.filename = filename
>
>
>   if __name__ == "__main__":
> -    parser = optparse.OptionParser("usage: %prog [options] [filename]")
> -    parser.add_option('--verbose', dest="debug", action='store_true',
> -                      help='include debug messages in console output')
> +    parser = optparse.OptionParser("usage: %prog [options] <filename>")
> +    parser.add_option("-v", "--verbose", dest="debug", action="store_true",
> +                      help="include debug messages in console output")
> +    parser.add_option("-f", "--fullname", dest="fullname", action="store_true",
> +                      help="show full dict names instead of short names")
> +    parser.add_option("-c", "--contents", dest="contents", action="store_true",
> +                      help="show dict contents")
>
>       options, args = parser.parse_args()
> -    debug = options.debug
> -    if args:
> -        filenames = args
> -    else:
> -        filenames = [os.path.join(os.path.dirname(sys.argv[0]), "tests.cfg")]
> -
> -    # Here we configure the stand alone program to use the autotest
> -    # logging system.
> -    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig(),
> -                                      verbose=debug)
> -    cfg = config(debug=debug)
> -    for fn in filenames:
> -        cfg.parse_file(fn)
> -    dicts = cfg.get_generator()
> -    for i, dict in enumerate(dicts):
> -        print "Dictionary #%d:" % (i)
> -        keys = dict.keys()
> -        keys.sort()
> -        for key in keys:
> -            print "    %s = %s" % (key, dict[key])
> +    if not args:
> +        parser.error("filename required")
> +
> +    c = Parser(args[0], debug=options.debug)
> +    for i, d in enumerate(c.get_dicts()):
> +        if options.fullname:
> +            print "dict %4d:  %s" % (i + 1, d["name"])
> +        else:
> +            print "dict %4d:  %s" % (i + 1, d["shortname"])
> +        if options.contents:
> +            keys = d.keys()
> +            keys.sort()
> +            for key in keys:
> +                print "    %s = %s" % (key, d[key])
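For readers following the new filter syntax, the matching rule can be sketched in a few lines of standalone Python. This is only an illustration of the semantics described in the changelog (`,` means OR, `..` means AND, `.` means IMMEDIATELY-FOLLOWED-BY), not the parser's actual implementation; the function names are made up:

```python
def matches(name, filter_str):
    """Check whether a dot-separated dict name satisfies a filter.

    ','  separates alternatives (OR)
    '..' separates terms that must all match (AND)
    '.'  joins words that must appear consecutively in the name
    """
    parts = name.split(".")
    for clause in filter_str.split(","):
        terms = clause.strip().split("..")
        if all(contains_run(parts, t.split(".")) for t in terms):
            return True
    return False


def contains_run(parts, words):
    """True if 'words' appears as a contiguous run inside 'parts'."""
    n = len(words)
    return any(parts[i:i + n] == words for i in range(len(parts) - n + 1))
```

With this sketch, 'qcow2..Fedora.14' matches 'smp2.qcow2.Fedora.14.migrate' but not 'qcow2.14.Fedora', mirroring the equivalence rules listed above.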
> diff --git a/client/tests/kvm/kvm_scheduler.py b/client/tests/kvm/kvm_scheduler.py
> index 95282e4..b96bb32 100644
> --- a/client/tests/kvm/kvm_scheduler.py
> +++ b/client/tests/kvm/kvm_scheduler.py
> @@ -63,7 +63,6 @@ class scheduler:
>                   test_index = int(cmd[1])
>                   test = self.tests[test_index].copy()
>                   test.update(self_dict)
> -                test = kvm_utils.get_sub_pool(test, index, self.num_workers)
>                   test_iterations = int(test.get("iterations", 1))
>                   status = run_test_func("kvm", params=test,
>                                          tag=test.get("shortname"),
> @@ -129,7 +128,7 @@ class scheduler:
>                       # If the test failed, mark all dependent tests as "failed" too
>                       if not status:
>                           for i, other_test in enumerate(self.tests):
> -                            for dep in other_test.get("depend", []):
> +                            for dep in other_test.get("dep", []):
>                                   if dep in test["name"]:
>                                       test_status[i] = "fail"
>
> @@ -154,7 +153,7 @@ class scheduler:
>                           continue
>                       # Make sure the test's dependencies are satisfied
>                       dependencies_satisfied = True
> -                    for dep in test["depend"]:
> +                    for dep in test["dep"]:
>                           dependencies = [j for j, t in enumerate(self.tests)
>                                           if dep in t["name"]]
>                           bad_status_deps = [j for j in dependencies
> @@ -200,14 +199,14 @@ class scheduler:
>                       used_mem[worker] = test_used_mem
>                       # Assign all related tests to this worker
>                       for j, other_test in enumerate(self.tests):
> -                        for other_dep in other_test["depend"]:
> +                        for other_dep in other_test["dep"]:
>                               # All tests that depend on this test
>                               if other_dep in test["name"]:
>                                   test_worker[j] = worker
>                                   break
>                               # ... and all tests that share a dependency
>                               # with this test
> -                            for dep in test["depend"]:
> +                            for dep in test["dep"]:
>                                   if dep in other_dep or other_dep in dep:
>                                       test_worker[j] = worker
>                                       break
> diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
> index 44ebb88..9e25a0a 100644
> --- a/client/tests/kvm/kvm_utils.py
> +++ b/client/tests/kvm/kvm_utils.py
> @@ -1101,7 +1101,7 @@ def run_tests(test_list, job):
>           if dict.get("skip") == "yes":
>               continue
>           dependencies_satisfied = True
> -        for dep in dict.get("depend"):
> +        for dep in dict.get("dep"):
>               for test_name in status_dict.keys():
>                   if not dep in test_name:
>                       continue
> diff --git a/client/tests/kvm/tests.cfg.sample b/client/tests/kvm/tests.cfg.sample
> index bde7aba..4b3b965 100644
> --- a/client/tests/kvm/tests.cfg.sample
> +++ b/client/tests/kvm/tests.cfg.sample
> @@ -18,10 +18,9 @@ include cdkeys.cfg
>   image_name(_.*)? ?<= /tmp/kvm_autotest_root/images/
>   cdrom(_.*)? ?<= /tmp/kvm_autotest_root/
>   floppy ?<= /tmp/kvm_autotest_root/
> -Linux:
> -    unattended_install:
> -        kernel ?<= /tmp/kvm_autotest_root/
> -        initrd ?<= /tmp/kvm_autotest_root/
> +Linux..unattended_install:
> +    kernel ?<= /tmp/kvm_autotest_root/
> +    initrd ?<= /tmp/kvm_autotest_root/
>
>   # Here are the test sets variants. The variant 'qemu_kvm_windows_quick' is
>   # fully commented, the following ones have comments only on noteworthy points
> @@ -49,7 +48,7 @@ variants:
>           # Operating system choice
>           only Win7.64
>           # Subtest choice. You can modify that line to add more subtests
> -        only unattended_install.cdrom boot shutdown
> +        only unattended_install.cdrom, boot, shutdown
>
>       # Runs qemu, f14 64 bit guest OS, install, boot, shutdown
>       - @qemu_f14_quick:
> @@ -65,7 +64,7 @@ variants:
>           only no_pci_assignable
>           only smallpages
>           only Fedora.14.64
> -        only unattended_install.cdrom boot shutdown
> +        only unattended_install.cdrom, boot, shutdown
>           # qemu needs -enable-kvm on the cmdline
>           extra_params += ' -enable-kvm'
>
> @@ -81,7 +80,7 @@ variants:
>           only no_pci_assignable
>           only smallpages
>           only Fedora.14.64
> -        only unattended_install.cdrom boot shutdown
> +        only unattended_install.cdrom, boot, shutdown
>
>   # You may provide information about the DTM server for WHQL tests here:
>   #whql:
> diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
> index 80362db..e65bed2 100644
> --- a/client/tests/kvm/tests_base.cfg.sample
> +++ b/client/tests/kvm/tests_base.cfg.sample
> @@ -1722,8 +1722,8 @@ variants:
>
>       # Windows section
>       - @Windows:
> -        no autotest linux_s3 vlan ioquit unattended_install.(url|nfs|remote_ks)
> -        no jumbo nicdriver_unload nic_promisc multicast mac_change ethtool clock_getres
> +        no autotest, linux_s3, vlan, ioquit, unattended_install.url, unattended_install.nfs, unattended_install.remote_ks
> +        no jumbo, nicdriver_unload, nic_promisc, multicast, mac_change, ethtool, clock_getres
>
>           shutdown_command = shutdown /s /f /t 0
>           reboot_command = shutdown /r /f /t 0
> @@ -1747,7 +1747,7 @@ variants:
>           mem_chk_cmd = wmic memphysical
>           mem_chk_cur_cmd = wmic memphysical
>
> -        unattended_install.cdrom|whql.support_vm_install:
> +        unattended_install.cdrom, whql.support_vm_install:
>               timeout = 7200
>               finish_program = deps/finish.exe
>               cdroms += " winutils"
> @@ -1857,7 +1857,7 @@ variants:
>                               steps = WinXP-32.steps
>                           setup:
>                               steps = WinXP-32-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                               cdrom_cd1 = isos/windows/WindowsXP-sp2-vlk.iso
>                               md5sum_cd1 = 743450644b1d9fe97b3cf379e22dceb0
>                               md5sum_1m_cd1 = b473bf75af2d1269fec8958cf0202bfd
> @@ -1890,7 +1890,7 @@ variants:
>                               steps = WinXP-64.steps
>                           setup:
>                               steps = WinXP-64-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                               cdrom_cd1 = isos/windows/WindowsXP-64.iso
>                               md5sum_cd1 = 8d3f007ec9c2060cec8a50ee7d7dc512
>                               md5sum_1m_cd1 = e812363ff427effc512b7801ee70e513
> @@ -1928,7 +1928,7 @@ variants:
>                               steps = Win2003-32.steps
>                           setup:
>                               steps = Win2003-32-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                               cdrom_cd1 = isos/windows/Windows2003_r2_VLK.iso
>                               md5sum_cd1 = 03e921e9b4214773c21a39f5c3f42ef7
>                               md5sum_1m_cd1 = 37c2fdec15ac4ec16aa10fdfdb338aa3
> @@ -1960,7 +1960,7 @@ variants:
>                               steps = Win2003-64.steps
>                           setup:
>                               steps = Win2003-64-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                               cdrom_cd1 = isos/windows/Windows2003-x64.iso
>                               md5sum_cd1 = 5703f87c9fd77d28c05ffadd3354dbbd
>                               md5sum_1m_cd1 = 439393c384116aa09e08a0ad047dcea8
> @@ -2008,7 +2008,7 @@ variants:
>                                       steps = Win-Vista-32.steps
>                                   setup:
>                                       steps = WinVista-32-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/WindowsVista-32.iso
>                                       md5sum_cd1 = 1008f323d5170c8e614e52ccb85c0491
>                                       md5sum_1m_cd1 = c724e9695da483bc0fd59e426eaefc72
> @@ -2025,7 +2025,7 @@ variants:
>
>                               - sp2:
>                                   image_name += -sp2-32
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/en_windows_vista_with_sp2_x86_dvd_342266.iso
>                                       md5sum_cd1 = 19ca90a425667812977bab6f4ce24175
>                                       md5sum_1m_cd1 = 89c15020e0e6125be19acf7a2e5dc614
> @@ -2059,7 +2059,7 @@ variants:
>                                       steps = Win-Vista-64.steps
>                                   setup:
>                                       steps = WinVista-64-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/WindowsVista-64.iso
>                                       md5sum_cd1 = 11e2010d857fffc47813295e6be6d58d
>                                       md5sum_1m_cd1 = 0947bcd5390546139e25f25217d6f165
> @@ -2076,7 +2076,7 @@ variants:
>
>                               - sp2:
>                                   image_name += -sp2-64
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/en_windows_vista_sp2_x64_dvd_342267.iso
>                                       md5sum_cd1 = a1c024d7abaf34bac3368e88efbc2574
>                                       md5sum_1m_cd1 = 3d84911a80f3df71d1026f7adedc2181
> @@ -2112,7 +2112,7 @@ variants:
>                                       steps = Win2008-32.steps
>                                   setup:
>                                       steps = Win2008-32-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/Windows2008-x86.iso
>                                       md5sum=0bfca49f0164de0a8eba236ced47007d
>                                       md5sum_1m=07d7f5006393f74dc76e6e2e943e2440
> @@ -2127,7 +2127,7 @@ variants:
>
>                               - sp2:
>                                   image_name += -sp2-32
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/en_windows_server_2008_datacenter_enterprise_standard_sp2_x86_dvd_342333.iso
>                                       md5sum_cd1 = b9201aeb6eef04a3c573d036a8780bdf
>                                       md5sum_1m_cd1 = b7a9d42e55ea1e85105a3a6ad4da8e04
> @@ -2156,7 +2156,7 @@ variants:
>                                       passwd = 1q2w3eP
>                                   setup:
>                                       steps = Win2008-64-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/Windows2008-x64.iso
>                                       md5sum=27c58cdb3d620f28c36333a5552f271c
>                                       md5sum_1m=efdcc11d485a1ef9afa739cb8e0ca766
> @@ -2171,7 +2171,7 @@ variants:
>
>                               - sp2:
>                                   image_name += -sp2-64
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/en_windows_server_2008_datacenter_enterprise_standard_sp2_x64_dvd_342336.iso
>                                       md5sum_cd1 = e94943ef484035b3288d8db69599a6b5
>                                       md5sum_1m_cd1 = ee55506823d0efffb5532ddd88a8e47b
> @@ -2188,7 +2188,7 @@ variants:
>
>                               - r2:
>                                   image_name += -r2-64
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                       cdrom_cd1 = isos/windows/en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>                                       md5sum_cd1 = 0207ef392c60efdda92071b0559ca0f9
>                                       md5sum_1m_cd1 = a5a22ce25008bd7109f6d830d627e3ed
> @@ -2216,7 +2216,7 @@ variants:
>                   variants:
>                       - 32:
>                           image_name += -32
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                               cdrom_cd1 = isos/windows/en_windows_7_ultimate_x86_dvd_x15-65921.iso
>                               md5sum_cd1 = d0b8b407e8a3d4b75ee9c10147266b89
>                               md5sum_1m_cd1 = 2b0c2c22b1ae95065db08686bf83af93
> @@ -2249,7 +2249,7 @@ variants:
>                               steps = Win7-64.steps
>                           setup:
>                               steps = Win7-64-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                               cdrom_cd1 = isos/windows/en_windows_7_ultimate_x64_dvd_x15-65922.iso
>                               md5sum_cd1 = f43d22e4fb07bf617d573acd8785c028
>                               md5sum_1m_cd1 = b44d8cf99dbed2a5cb02765db8dfd48f
> @@ -2329,7 +2329,7 @@ variants:
>                   md5sum_cd1 = 9fae22f2666369968a76ef59e9a81ced
>
>
> -whql.support_vm_install|whql.client_install.support_vm:
> +whql.support_vm_install, whql.client_install.support_vm:
>       image_name += -supportvm
>
>
> @@ -2352,7 +2352,7 @@ variants:
>           drive_format=virtio
>
>
> -virtio_net|virtio_blk|e1000|balloon_check:
> +virtio_net, virtio_blk, e1000, balloon_check:
>       only Fedora.11 Fedora.12 Fedora.13 Fedora.14 RHEL.5 RHEL.6 OpenSUSE.11 SLES.11 Ubuntu-8.10-server
>       # only WinXP Win2003 Win2008 WinVista Win7 Fedora.11 Fedora.12 Fedora.13 Fedora.14 RHEL.5 RHEL.6 OpenSUSE.11 SLES.11 Ubuntu-8.10-server
>
> @@ -2365,15 +2365,9 @@ variants:
>           check_image = yes
>       - vmdk:
>           no ioquit
> -        only Fedora Ubuntu Windows
> -        only smp2
> -        only rtl8139
>           image_format = vmdk
>       - raw:
>           no ioquit
> -        only Fedora Ubuntu Windows
> -        only smp2
> -        only rtl8139
>           image_format = raw
>
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09  1:50 [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py Michael Goldish
  2011-02-09  2:56 ` Cleber Rosa
@ 2011-02-09  9:28 ` Avi Kivity
  2011-02-09 10:07   ` Michael Goldish
  2011-02-10  1:18   ` Amos Kong
  2011-02-09 16:06 ` Ryan Harper
  2 siblings, 2 replies; 19+ messages in thread
From: Avi Kivity @ 2011-02-09  9:28 UTC (permalink / raw)
  To: Michael Goldish; +Cc: autotest, kvm, Uri Lublin

On 02/09/2011 03:50 AM, Michael Goldish wrote:
> This is a reimplementation of the dict generator.  It is much faster than the
> current implementation and uses a very small amount of memory.  Running time
> and memory usage scale polynomially with the number of defined variants,
> compared to exponentially in the current implementation.
>
> Instead of regular expressions in the filters, the following syntax is used:
>
> , means OR
> .. means AND
> . means IMMEDIATELY-FOLLOWED-BY
>
> Example:
>
> only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
>


Is it not possible to keep the old syntax?  Breaking people's scripts is 
bad.

-- 
error compiling committee.c: too many arguments to function



* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09  9:28 ` Avi Kivity
@ 2011-02-09 10:07   ` Michael Goldish
  2011-02-09 10:19     ` Avi Kivity
  2011-02-10  1:18   ` Amos Kong
  1 sibling, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2011-02-09 10:07 UTC (permalink / raw)
  To: Avi Kivity; +Cc: autotest, Uri Lublin, kvm

On 02/09/2011 11:28 AM, Avi Kivity wrote:
> On 02/09/2011 03:50 AM, Michael Goldish wrote:
>> This is a reimplementation of the dict generator.  It is much faster
>> than the
>> current implementation and uses a very small amount of memory. 
>> Running time
>> and memory usage scale polynomially with the number of defined variants,
>> compared to exponentially in the current implementation.
>>
>> Instead of regular expressions in the filters, the following syntax is
>> used:
>>
>> , means OR
>> .. means AND
>> . means IMMEDIATELY-FOLLOWED-BY
>>
>> Example:
>>
>> only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
>>
> 
> 
> Is it not possible to keep the old syntax?  Breaking people's scripts is
> bad.

No, because the old syntax uses regexps and there's no clean way to
prune tree branches early if those are supported.
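
To illustrate the point (a minimal sketch, not the patch's actual algorithm -- it ignores the adjacency handling done by match_adjacent): with structured filters you can ask "could this filter still match somewhere below this node?", which is what makes early pruning possible.  An arbitrary regex offers no such query.

```python
# Hedged sketch: a filter is a list of label blocks (AND-ed), each block a
# list of labels.  Given the labels on the current path and the set of
# labels reachable in descendants, we can decide whether the filter could
# still be satisfied; if not, the whole branch can be pruned immediately.
def might_match(blocks, ctx_labels, descendant_labels):
    # every label of every block must already be on the path
    # or still reachable below it
    return all(label in ctx_labels or label in descendant_labels
               for block in blocks for label in block)

f = [["RHEL", "6"], ["raw"]]  # roughly "RHEL.6..raw"
might_match(f, {"RHEL", "6"}, {"raw", "qcow2"})    # keep this branch
might_match(f, {"Fedora", "14"}, {"raw", "qcow2"})  # prune this branch
```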

For users who have their own tests_base.cfg (if there are any), we may
have to keep the old parser as an alternative, or detect the presence of
an incompatible cfg file and warn about it.  Does that sound like a good
idea?


* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09 10:07   ` Michael Goldish
@ 2011-02-09 10:19     ` Avi Kivity
  0 siblings, 0 replies; 19+ messages in thread
From: Avi Kivity @ 2011-02-09 10:19 UTC (permalink / raw)
  To: Michael Goldish; +Cc: autotest, kvm, Uri Lublin

On 02/09/2011 12:07 PM, Michael Goldish wrote:
> On 02/09/2011 11:28 AM, Avi Kivity wrote:
> >  On 02/09/2011 03:50 AM, Michael Goldish wrote:
> >>  This is a reimplementation of the dict generator.  It is much faster
> >>  than the
> >>  current implementation and uses a very small amount of memory.
> >>  Running time
> >>  and memory usage scale polynomially with the number of defined variants,
> >>  compared to exponentially in the current implementation.
> >>
> >>  Instead of regular expressions in the filters, the following syntax is
> >>  used:
> >>
> >>  , means OR
> >>  .. means AND
> >>  . means IMMEDIATELY-FOLLOWED-BY
> >>
> >>  Example:
> >>
> >>  only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> >>
> >
> >
> >  Is it not possible to keep the old syntax?  Breaking people's scripts is
> >  bad.
>
> No, because the old syntax uses regexps and there's no clean way to
> prune tree branches early if those are supported.
>

Ok.

> For users who have their own tests_base.cfg (if there are any), we may
> have to keep the old parser as an alternative, or detect the presence of
> an incompatible cfg file and warn about it.  Does that sound like a good
> idea?

No.  It increases the maintenance burden and user confusion.

-- 
error compiling committee.c: too many arguments to function



* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09  1:50 [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py Michael Goldish
  2011-02-09  2:56 ` Cleber Rosa
  2011-02-09  9:28 ` Avi Kivity
@ 2011-02-09 16:06 ` Ryan Harper
  2011-02-09 16:21   ` Eduardo Habkost
  2 siblings, 1 reply; 19+ messages in thread
From: Ryan Harper @ 2011-02-09 16:06 UTC (permalink / raw)
  To: Michael Goldish; +Cc: autotest, kvm, Uri Lublin

* Michael Goldish <mgoldish@redhat.com> [2011-02-08 19:51]:
> This is a reimplementation of the dict generator.  It is much faster than the
> current implementation and uses a very small amount of memory.  Running time
> and memory usage scale polynomially with the number of defined variants,
> compared to exponentially in the current implementation.

Thanks for looking at this.  I've run into these problems myself when
running a complicated configuration on a low-memory system.

> 
> Instead of regular expressions in the filters, the following syntax is used:
> 
> , means OR
> .. means AND
> . means IMMEDIATELY-FOLLOWED-BY

Is there any reason we can't use | for OR and & for AND?  I know this
is just nitpicking, but it certainly reads more easily and doesn't need a
translation.  AFAICT, the implementation just uses .split(), so the
delimiters aren't critical.

> 
> Example:
> 
> only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> 
> means select all dicts whose names have:
> 
> (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
> ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
> (smp2 AND qcow2 AND migrate AND ide)

  >>> config = "qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide"
  >>> config
  'qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide'
  >>> config.split("|")
  ['qcow2&Fedora.14', 'RHEL.6&raw&boot', 'smp2&qcow2&migrate&ide']



> 
> 'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
> 'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
> 'ide, scsi' is equivalent to 'scsi, ide'.
> 
> Filters can be used in 3 ways:
> only <filter>
> no <filter>
> <filter>:
> 
> The last one starts a conditional block, e.g.
> 
> Fedora.14..qcow2:
>     no migrate, reboot
>     foo = bar
> 
> Interface changes:
> - The main class is now called 'Parser' instead of 'config'.
> - fork_and_parse() has been removed.  parse_file() and parse_string() should be
>   used instead.
> - When run as a standalone program, kvm_config.py just prints the shortnames of
>   the generated dicts by default, and can optionally print the full names and
>   contents of the dicts.
> - By default, debug messages are not printed, but they can be enabled by
>   passing debug=True to Parser's constructor, or by running kvm_config.py -v.
> - The 'depend' key has been renamed to 'dep'.
> 
> Signed-off-by: Michael Goldish <mgoldish@redhat.com>
> Signed-off-by: Uri Lublin <ulublin@redhat.com>
> ---
>  client/tests/kvm/control               |   28 +-
>  client/tests/kvm/control.parallel      |   12 +-
>  client/tests/kvm/kvm_config.py         | 1051 ++++++++++++++------------------
>  client/tests/kvm/kvm_scheduler.py      |    9 +-
>  client/tests/kvm/kvm_utils.py          |    2 +-
>  client/tests/kvm/tests.cfg.sample      |   13 +-
>  client/tests/kvm/tests_base.cfg.sample |   46 +-
>  7 files changed, 513 insertions(+), 648 deletions(-)
> 
> diff --git a/client/tests/kvm/control b/client/tests/kvm/control
> index d226adf..be37678 100644
> --- a/client/tests/kvm/control
> +++ b/client/tests/kvm/control
> @@ -35,13 +35,11 @@ str = """
>  # build configuration here.  For example:
>  #release_tag = 84
>  """
> -build_cfg = kvm_config.config()
> -# As the base test config is quite large, in order to save memory, we use the
> -# fork_and_parse() method, that creates another parser process and destroys it
> -# at the end of the parsing, so the memory spent can be given back to the OS.
> -build_cfg_path = os.path.join(kvm_test_dir, "build.cfg")
> -build_cfg.fork_and_parse(build_cfg_path, str)
> -if not kvm_utils.run_tests(build_cfg.get_generator(), job):
> +
> +parser = kvm_config.Parser()
> +parser.parse_file(os.path.join(kvm_test_dir, "build.cfg"))
> +parser.parse_string(str)
> +if not kvm_utils.run_tests(parser.get_dicts(), job):
>      logging.error("KVM build step failed, exiting.")
>      sys.exit(1)
> 
> @@ -49,10 +47,11 @@ str = """
>  # This string will be parsed after tests.cfg.  Make any desired changes to the
>  # test configuration here.  For example:
>  #display = sdl
> -#install|setup: timeout_multiplier = 3
> +#install, setup: timeout_multiplier = 3
>  """
> -tests_cfg = kvm_config.config()
> -tests_cfg_path = os.path.join(kvm_test_dir, "tests.cfg")
> +
> +parser = kvm_config.Parser()
> +parser.parse_file(os.path.join(kvm_test_dir, "tests.cfg"))
> 
>  if args:
>      # We get test parameters from command line
> @@ -67,11 +66,12 @@ if args:
>                  str += "%s = %s\n" % (key, value)
>          except IndexError:
>              pass
> -tests_cfg.fork_and_parse(tests_cfg_path, str)
> +parser.parse_string(str)
> 
> -# Run the tests
> -kvm_utils.run_tests(tests_cfg.get_generator(), job)
> +logging.info("Selected tests:")
> +for i, d in enumerate(parser.get_dicts()):
> +    logging.info("Test %4d:  %s" % (i + 1, d["shortname"]))
> +kvm_utils.run_tests(parser.get_dicts(), job)
> 
>  # Generate a nice HTML report inside the job's results dir
>  kvm_utils.create_report(kvm_test_dir, job.resultdir)
> -
> diff --git a/client/tests/kvm/control.parallel b/client/tests/kvm/control.parallel
> index ac84638..640ccf5 100644
> --- a/client/tests/kvm/control.parallel
> +++ b/client/tests/kvm/control.parallel
> @@ -163,16 +163,15 @@ import kvm_config
>  str = """
>  # This string will be parsed after tests.cfg.  Make any desired changes to the
>  # test configuration here.  For example:
> -#install|setup: timeout_multiplier = 3
> -#only fc8_quick
> +#install, setup: timeout_multiplier = 3
>  #display = sdl
>  """
> -cfg = kvm_config.config()
> -filename = os.path.join(pwd, "tests.cfg")
> -cfg.fork_and_parse(filename, str)
> 
> -tests = cfg.get_list()
> +parser = kvm_config.Parser()
> +parser.parse_file(os.path.join(pwd, "tests.cfg"))
> +parser.parse_string(str)
> 
> +tests = list(parser.get_dicts())
> 
>  # -------------
>  # Run the tests
> @@ -192,7 +191,6 @@ s = kvm_scheduler.scheduler(tests, num_workers, total_cpus, total_mem, pwd)
>  job.parallel([s.scheduler],
>               *[(s.worker, i, job.run_test) for i in range(num_workers)])
> 
> -
>  # create the html report in result dir
>  reporter = os.path.join(pwd, 'make_html_report.py')
>  html_file = os.path.join(job.resultdir,'results.html')
> diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
> index 13cdfe2..1b27181 100755
> --- a/client/tests/kvm/kvm_config.py
> +++ b/client/tests/kvm/kvm_config.py
> @@ -1,18 +1,149 @@
>  #!/usr/bin/python
>  """
> -KVM configuration file utility functions.
> +KVM test configuration file parser
> 
> -@copyright: Red Hat 2008-2010
> +@copyright: Red Hat 2008-2011
>  """
> 
> -import logging, re, os, sys, optparse, array, traceback, cPickle
> -import common
> -import kvm_utils
> -from autotest_lib.client.common_lib import error
> -from autotest_lib.client.common_lib import logging_manager
> +import re, os, sys, optparse, collections
> +
> +
> +# Filter syntax:
> +# , means OR
> +# .. means AND
> +# . means IMMEDIATELY-FOLLOWED-BY
> +
> +# Example:
> +# qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> +# means match all dicts whose names have:
> +# (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
> +# ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
> +# (smp2 AND qcow2 AND migrate AND ide)
> +
> +# Note:
> +# 'qcow2..Fedora.14' is equivalent to 'Fedora.14..qcow2'.
> +# 'qcow2..Fedora.14' is not equivalent to 'qcow2..14.Fedora'.
> +# 'ide, scsi' is equivalent to 'scsi, ide'.
> +
> +# Filters can be used in 3 ways:
> +# only <filter>
> +# no <filter>
> +# <filter>:
> +# The last one starts a conditional block.
> +
> +
> +num_failed_cases = 5
> +
> +
> +class Node(object):
> +    def __init__(self):
> +        self.name = []
> +        self.dep = []
> +        self.content = []
> +        self.children = []
> +        self.labels = set()
> +        self.append_to_shortname = False
> +        self.failed_cases = collections.deque()
> +
> +
> +# Filter must inherit from object (otherwise type() won't work)
> +class Filter(object):
> +    def __init__(self, s):
> +        self.filter = []
> +        for word in s.replace(",", " ").split():
> +            word = [block.split(".") for block in word.split("..")]
> +            self.filter += [word]
> +
> +
> +    def match_adjacent(self, block, ctx, ctx_set):
> +        # TODO: explain what this function does
> +        if block[0] not in ctx_set:
> +            return 0
> +        if len(block) == 1:
> +            return 1
> +        if block[1] not in ctx_set:
> +            return int(ctx[-1] == block[0])
> +        k = 0
> +        i = ctx.index(block[0])
> +        while i < len(ctx):
> +            if k > 0 and ctx[i] != block[k]:
> +                i -= k - 1
> +                k = 0
> +            if ctx[i] == block[k]:
> +                k += 1
> +                if k >= len(block):
> +                    break
> +                if block[k] not in ctx_set:
> +                    break
> +            i += 1
> +        return k
> +
> +
> +    def might_match_adjacent(self, block, ctx, ctx_set, descendant_labels):
> +        matched = self.match_adjacent(block, ctx, ctx_set)
> +        for elem in block[matched:]:
> +            if elem not in descendant_labels:
> +                return False
> +        return True
> +
> +
> +    def match(self, ctx, ctx_set):
> +        for word in self.filter:
> +            for block in word:
> +                if self.match_adjacent(block, ctx, ctx_set) != len(block):
> +                    break
> +            else:
> +                return True
> +        return False
> +
> +
> +    def might_match(self, ctx, ctx_set, descendant_labels):
> +        for word in self.filter:
> +            for block in word:
> +                if not self.might_match_adjacent(block, ctx, ctx_set,
> +                                                 descendant_labels):
> +                    break
> +            else:
> +                return True
> +        return False
> +
> +
> +class NoOnlyFilter(Filter):
> +    def __init__(self, line):
> +        Filter.__init__(self, line.split(None, 1)[1])
> +        self.line = line
> +
> +
> +class OnlyFilter(NoOnlyFilter):
> +    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                   descendant_labels):
> +        for word in self.filter:
> +            for block in word:
> +                if (self.match_adjacent(block, ctx, ctx_set) >
> +                    self.match_adjacent(block, failed_ctx, failed_ctx_set)):
> +                    return self.might_match(ctx, ctx_set, descendant_labels)
> +        return False
> +
> 
> +class NoFilter(NoOnlyFilter):
> +    def might_pass(self, failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                   descendant_labels):
> +        for word in self.filter:
> +            for block in word:
> +                if (self.match_adjacent(block, ctx, ctx_set) <
> +                    self.match_adjacent(block, failed_ctx, failed_ctx_set)):
> +                    return not self.match(ctx, ctx_set)
> +        return False
> 
> -class config:
> +
> +class Condition(NoFilter):
> +    def __init__(self, line):
> +        Filter.__init__(self, line.rstrip(":"))
> +        self.line = line
> +        self.content = []
> +
> +
> +class Parser(object):
>      """
>      Parse an input file or string that follows the KVM Test Config File format
>      and generate a list of dicts that will be later used as configuration
> @@ -21,17 +152,14 @@ class config:
>      @see: http://www.linux-kvm.org/page/KVM-Autotest/Test_Config_File
>      """
> 
> -    def __init__(self, filename=None, debug=True):
> +    def __init__(self, filename=None, debug=False):
>          """
> -        Initialize the list and optionally parse a file.
> +        Initialize the parser and optionally parse a file.
> 
> -        @param filename: Path of the file that will be taken.
> +        @param filename: Path of the file to parse.
>          @param debug: Whether to turn on debugging output.
>          """
> -        self.list = [array.array("H", [4, 4, 4, 4])]
> -        self.object_cache = []
> -        self.object_cache_indices = {}
> -        self.regex_cache = {}
> +        self.node = Node()
>          self.debug = debug
>          if filename:
>              self.parse_file(filename)
> @@ -39,689 +167,436 @@ class config:
> 
>      def parse_file(self, filename):
>          """
> -        Parse file.  If it doesn't exist, raise an IOError.
> +        Parse a file.
> 
>          @param filename: Path of the configuration file.
>          """
> -        if not os.path.exists(filename):
> -            raise IOError("File %s not found" % filename)
> -        str = open(filename).read()
> -        self.list = self.parse(configreader(filename, str), self.list)
> +        self.node = self._parse(FileReader(filename), self.node)
> 
> 
> -    def parse_string(self, str):
> +    def parse_string(self, s):
>          """
>          Parse a string.
> 
> -        @param str: String to parse.
> +        @param s: String to parse.
>          """
> -        self.list = self.parse(configreader('<string>', str, real_file=False), self.list)
> +        self.node = self._parse(StrReader(s), self.node)
> 
> 
> -    def fork_and_parse(self, filename=None, str=None):
> -        """
> -        Parse a file and/or a string in a separate process to save memory.
> -
> -        Python likes to keep memory to itself even after the objects occupying
> -        it have been destroyed.  If during a call to parse_file() or
> -        parse_string() a lot of memory is used, it can only be freed by
> -        terminating the process.  This function works around the problem by
> -        doing the parsing in a forked process and then terminating it, freeing
> -        any unneeded memory.
> -
> -        Note: if an exception is raised during parsing, its information will be
> -        printed, and the resulting list will be empty.  The exception will not
> -        be raised in the process calling this function.
> -
> -        @param filename: Path of file to parse (optional).
> -        @param str: String to parse (optional).
> -        """
> -        r, w = os.pipe()
> -        r, w = os.fdopen(r, "r"), os.fdopen(w, "w")
> -        pid = os.fork()
> -        if not pid:
> -            # Child process
> -            r.close()
> -            try:
> -                if filename:
> -                    self.parse_file(filename)
> -                if str:
> -                    self.parse_string(str)
> -            except:
> -                traceback.print_exc()
> -                self.list = []
> -            # Convert the arrays to strings before pickling because at least
> -            # some Python versions can't pickle/unpickle arrays
> -            l = [a.tostring() for a in self.list]
> -            cPickle.dump((l, self.object_cache), w, -1)
> -            w.close()
> -            os._exit(0)
> -        else:
> -            # Parent process
> -            w.close()
> -            (l, self.object_cache) = cPickle.load(r)
> -            r.close()
> -            os.waitpid(pid, 0)
> -            self.list = []
> -            for s in l:
> -                a = array.array("H")
> -                a.fromstring(s)
> -                self.list.append(a)
> -
> -
> -    def get_generator(self):
> +    def get_dicts(self, node=None, ctx=[], content=[], shortname=[], dep=[]):
>          """
>          Generate dictionaries from the code parsed so far.  This should
> -        probably be called after parsing something.
> +        be called after parsing something.
> 
>          @return: A dict generator.
>          """
> -        for a in self.list:
> -            name, shortname, depend, content = _array_get_all(a,
> -                                                              self.object_cache)
> -            dict = {"name": name, "shortname": shortname, "depend": depend}
> -            self._apply_content_to_dict(dict, content)
> -            yield dict
> -
> -
> -    def get_list(self):
> -        """
> -        Generate a list of dictionaries from the code parsed so far.
> -        This should probably be called after parsing something.
> +        def apply_ops_to_dict(d, content):
> +            for filename, linenum, s in content:
> +                op_found = None
> +                op_pos = len(s)
> +                for op in ops:
> +                    if op in s:
> +                        pos = s.index(op)
> +                        if pos < op_pos:
> +                            op_found = op
> +                            op_pos = pos
> +                if not op_found:
> +                    continue
> +                left, value = map(str.strip, s.split(op_found, 1))
> +                if value and ((value[0] == '"' and value[-1] == '"') or
> +                              (value[0] == "'" and value[-1] == "'")):
> +                    value = value[1:-1]
> +                filters_and_key = map(str.strip, left.split(":"))
> +                for f in filters_and_key[:-1]:
> +                    if not Filter(f).match(ctx, ctx_set):
> +                        break
> +                else:
> +                    key = filters_and_key[-1]
> +                    ops[op_found](d, key, value)
> +
> +        def process_content(content, failed_filters):
> +            # 1. Check that the filters in content are OK with the current
> +            #    context (ctx).
> +            # 2. Move the parts of content that are still relevant into
> +            #    new_content and unpack conditional blocks if appropriate.
> +            #    For example, if an 'only' statement fully matches ctx, it
> +            #    becomes irrelevant and is not appended to new_content.
> +            #    If a conditional block fully matches, its contents are
> +            #    unpacked into new_content.
> +            # 3. Move failed filters into failed_filters, so that next time we
> +            #    reach this node or one of its ancestors, we'll check those
> +            #    filters first.
> +            for t in content:
> +                filename, linenum, obj = t
> +                if type(obj) is str:
> +                    new_content.append(t)
> +                    continue
> +                elif type(obj) is OnlyFilter:
> +                    if not obj.might_match(ctx, ctx_set, labels):
> +                        self._debug("    filter did not pass: %r (%s:%s)",
> +                                    obj.line, filename, linenum)
> +                        failed_filters.append(t)
> +                        return False
> +                    elif obj.match(ctx, ctx_set):
> +                        continue
> +                elif type(obj) is NoFilter:
> +                    if obj.match(ctx, ctx_set):
> +                        self._debug("    filter did not pass: %r (%s:%s)",
> +                                    obj.line, filename, linenum)
> +                        failed_filters.append(t)
> +                        return False
> +                    elif not obj.might_match(ctx, ctx_set, labels):
> +                        continue
> +                elif type(obj) is Condition:
> +                    if obj.match(ctx, ctx_set):
> +                        self._debug("    conditional block matches: %r (%s:%s)",
> +                                    obj.line, filename, linenum)
> +                        # Check and unpack the content inside this Condition
> +                        # object (note: the failed filters should go into
> +                        # new_internal_filters because we don't expect them to
> +                        # come from outside this node, even if the Condition
> +                        # itself was external)
> +                        if not process_content(obj.content,
> +                                               new_internal_filters):
> +                            failed_filters.append(t)
> +                            return False
> +                        continue
> +                    elif not obj.might_match(ctx, ctx_set, labels):
> +                        continue
> +                new_content.append(t)
> +            return True
> +
> +        def might_pass(failed_ctx,
> +                       failed_ctx_set,
> +                       failed_external_filters,
> +                       failed_internal_filters):
> +            for t in failed_external_filters:
> +                if t not in content:
> +                    return True
> +                filename, linenum, filter = t
> +                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                                     labels):
> +                    return True
> +            for t in failed_internal_filters:
> +                filename, linenum, filter = t
> +                if filter.might_pass(failed_ctx, failed_ctx_set, ctx, ctx_set,
> +                                     labels):
> +                    return True
> +            return False
> +
> +        def add_failed_case():
> +            node.failed_cases.appendleft((ctx, ctx_set,
> +                                          new_external_filters,
> +                                          new_internal_filters))
> +            if len(node.failed_cases) > num_failed_cases:
> +                node.failed_cases.pop()
> +
> +        node = node or self.node
> +        # Update dep
> +        for d in node.dep:
> +            temp = ctx + [d]
> +            dep = dep + [".".join([s for s in temp if s])]
> +        # Update ctx
> +        ctx = ctx + node.name
> +        ctx_set = set(ctx)
> +        labels = node.labels
> +        # Get the current name
> +        name = ".".join([s for s in ctx if s])
> +        if node.name:
> +            self._debug("checking out %r", name)
> +        # Check previously failed filters
> +        for i, failed_case in enumerate(node.failed_cases):
> +            if not might_pass(*failed_case):
> +                self._debug("    this subtree has failed before")
> +                del node.failed_cases[i]
> +                node.failed_cases.appendleft(failed_case)
> +                return
> +        # Check content and unpack it into new_content
> +        new_content = []
> +        new_external_filters = []
> +        new_internal_filters = []
> +        if (not process_content(node.content, new_internal_filters) or
> +            not process_content(content, new_external_filters)):
> +            add_failed_case()
> +            return
> +        # Update shortname
> +        if node.append_to_shortname:
> +            shortname = shortname + node.name
> +        # Recurse into children
> +        count = 0
> +        for n in node.children:
> +            for d in self.get_dicts(n, ctx, new_content, shortname, dep):
> +                count += 1
> +                yield d
> +        # Reached leaf?
> +        if not node.children:
> +            self._debug("    reached leaf, returning it")
> +            d = {"name": name, "dep": dep,
> +                 "shortname": ".".join([s for s in shortname if s])}
> +            apply_ops_to_dict(d, new_content)
> +            yield d
> +        # If this node did not produce any dicts, remember the failed filters
> +        # of its descendants
> +        elif not count:
> +            new_external_filters = []
> +            new_internal_filters = []
> +            for n in node.children:
> +                (failed_ctx,
> +                 failed_ctx_set,
> +                 failed_external_filters,
> +                 failed_internal_filters) = n.failed_cases[0]
> +                for obj in failed_internal_filters:
> +                    if obj not in new_internal_filters:
> +                        new_internal_filters.append(obj)
> +                for obj in failed_external_filters:
> +                    if obj in content:
> +                        if obj not in new_external_filters:
> +                            new_external_filters.append(obj)
> +                    else:
> +                        if obj not in new_internal_filters:
> +                            new_internal_filters.append(obj)
> +            add_failed_case()
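The failed_cases handling above is a bounded move-to-front cache: the newest failure is pushed to the front (so it is checked first on the next visit to this node), and the oldest entry is dropped once the bound is exceeded. A self-contained sketch of just that mechanism (num_failed_cases is an assumed module-level constant; the patch defines the real one elsewhere):

```python
from collections import deque

num_failed_cases = 5       # assumed bound, for illustration
failed_cases = deque()

def add_failed_case(case):
    # Newest failure goes to the front; evict the oldest past the bound.
    failed_cases.appendleft(case)
    if len(failed_cases) > num_failed_cases:
        failed_cases.pop()
```

Combined with the move-to-front in the "this subtree has failed before" branch, frequently-hit failures stay near the head of the deque.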
> 
> -        @return: A list of dicts.
> -        """
> -        return list(self.get_generator())
> 
> +    def _debug(self, s, *args):
> +        if self.debug:
> +            s = "DEBUG: %s" % s
> +            print s % args
> 
> -    def count(self, filter=".*"):
> -        """
> -        Return the number of dictionaries whose names match filter.
> 
> -        @param filter: A regular expression string.
> -        """
> -        exp = self._get_filter_regex(filter)
> -        count = 0
> -        for a in self.list:
> -            name = _array_get_name(a, self.object_cache)
> -            if exp.search(name):
> -                count += 1
> -        return count
> +    def _warn(self, s, *args):
> +        s = "WARNING: %s" % s
> +        print s % args
> 
> 
> -    def parse_variants(self, cr, list, subvariants=False, prev_indent=-1):
> +    def _parse_variants(self, cr, node, prev_indent=-1):
>          """
> -        Read and parse lines from a configreader object until a line with an
> +        Read and parse lines from a FileReader/StrReader object until a line with an
>          indent level lower than or equal to prev_indent is encountered.
> 
> -        @brief: Parse a 'variants' or 'subvariants' block from a configreader
> -            object.
> -        @param cr: configreader object to be parsed.
> -        @param list: List of arrays to operate on.
> -        @param subvariants: If True, parse in 'subvariants' mode;
> -            otherwise parse in 'variants' mode.
> +        @param cr: A FileReader/StrReader object.
> +        @param node: A node to operate on.
>          @param prev_indent: The indent level of the "parent" block.
> -        @return: The resulting list of arrays.
> +        @return: A node object.
>          """
> -        new_list = []
> +        node4 = Node()
> 
>          while True:
> -            pos = cr.tell()
> -            (indented_line, line, indent) = cr.get_next_line()
> -            if indent <= prev_indent:
> -                cr.seek(pos)
> +            line, indent, linenum = cr.get_next_line(prev_indent)
> +            if not line:
>                  break
> 
> -            # Get name and dependencies
> -            (name, depend) = map(str.strip, line.lstrip("- ").split(":"))
> +            name, dep = map(str.strip, line.lstrip("- ").split(":"))
> 
> -            # See if name should be added to the 'shortname' field
> -            add_to_shortname = not name.startswith("@")
> -            name = name.lstrip("@")
> +            node2 = Node()
> +            node2.children = [node]
> +            node2.labels = node.labels
> 
> -            # Store name and dependencies in cache and get their indices
> -            n = self._store_str(name)
> -            d = self._store_str(depend)
> +            node3 = self._parse(cr, node2, prev_indent=indent)
> +            node3.name = name.lstrip("@").split(".")
> +            node3.dep = dep.replace(",", " ").split()
> +            node3.append_to_shortname = not name.startswith("@")
> 
> -            # Make a copy of list
> -            temp_list = [a[:] for a in list]
> +            node4.children += [node3]
> +            node4.labels.update(node3.labels)
> +            node4.labels.update(node3.name)
> 
> -            if subvariants:
> -                # If we're parsing 'subvariants', first modify the list
> -                if add_to_shortname:
> -                    for a in temp_list:
> -                        _array_append_to_name_shortname_depend(a, n, d)
> -                else:
> -                    for a in temp_list:
> -                        _array_append_to_name_depend(a, n, d)
> -                temp_list = self.parse(cr, temp_list, restricted=True,
> -                                       prev_indent=indent)
> -            else:
> -                # If we're parsing 'variants', parse before modifying the list
> -                if self.debug:
> -                    _debug_print(indented_line,
> -                                 "Entering variant '%s' "
> -                                 "(variant inherits %d dicts)" %
> -                                 (name, len(list)))
> -                temp_list = self.parse(cr, temp_list, restricted=False,
> -                                       prev_indent=indent)
> -                if add_to_shortname:
> -                    for a in temp_list:
> -                        _array_prepend_to_name_shortname_depend(a, n, d)
> -                else:
> -                    for a in temp_list:
> -                        _array_prepend_to_name_depend(a, n, d)
> +        return node4
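The variant-header handling above packs three decisions into a few lines: the "@" prefix suppresses the shortname, the name may itself contain dots (contributing several context labels), and dependencies may be comma- or space-separated. A small illustrative helper (not part of the patch) showing just that parsing:

```python
def parse_variant_header(line):
    # "- @Fedora.14: base, net" -> name parts, dependency list,
    # and whether the name should be appended to the shortname.
    name, dep = map(str.strip, line.lstrip("- ").split(":"))
    return (name.lstrip("@").split("."),      # name parts
            dep.replace(",", " ").split(),    # dependency list
            not name.startswith("@"))         # append_to_shortname
```

This is why 'Fedora.14' in a filter can match the two adjacent labels produced by a single variant named "Fedora.14".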
> 
> -            new_list += temp_list
> 
> -        return new_list
> -
> -
> -    def parse(self, cr, list, restricted=False, prev_indent=-1):
> +    def _parse(self, cr, node, prev_indent=-1):
>          """
> -        Read and parse lines from a configreader object until a line with an
> +        Read and parse lines from a FileReader/StrReader object until a line with an
>          indent level lower than or equal to prev_indent is encountered.
> 
> -        @brief: Parse a configreader object.
> -        @param cr: A configreader object.
> -        @param list: A list of arrays to operate on (list is modified in
> -            place and should not be used after the call).
> -        @param restricted: If True, operate in restricted mode
> -            (prohibit 'variants').
> +        @param cr: A FileReader/StrReader object.
> +        @param node: A Node or a Condition object to operate on.
>          @param prev_indent: The indent level of the "parent" block.
> -        @return: The resulting list of arrays.
> -        @note: List is destroyed and should not be used after the call.
> -            Only the returned list should be used.
> +        @return: A node object.
>          """
> -        current_block = ""
> -
>          while True:
> -            pos = cr.tell()
> -            (indented_line, line, indent) = cr.get_next_line()
> -            if indent <= prev_indent:
> -                cr.seek(pos)
> -                self._append_content_to_arrays(list, current_block)
> +            line, indent, linenum = cr.get_next_line(prev_indent)
> +            if not line:
>                  break
> 
> -            len_list = len(list)
> -
> -            # Parse assignment operators (keep lines in temporary buffer)
> -            if "=" in line:
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Parsing operator (%d dicts in current "
> -                                 "context)" % len_list)
> -                current_block += line + "\n"
> -                continue
> -
> -            # Flush the temporary buffer
> -            self._append_content_to_arrays(list, current_block)
> -            current_block = ""
> -
>              words = line.split()
> 
> -            # Parse 'no' and 'only' statements
> -            if words[0] == "no" or words[0] == "only":
> -                if len(words) <= 1:
> -                    continue
> -                filters = map(self._get_filter_regex, words[1:])
> -                filtered_list = []
> -                if words[0] == "no":
> -                    for a in list:
> -                        name = _array_get_name(a, self.object_cache)
> -                        for filter in filters:
> -                            if filter.search(name):
> -                                break
> -                        else:
> -                            filtered_list.append(a)
> -                if words[0] == "only":
> -                    for a in list:
> -                        name = _array_get_name(a, self.object_cache)
> -                        for filter in filters:
> -                            if filter.search(name):
> -                                filtered_list.append(a)
> -                                break
> -                list = filtered_list
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Parsing no/only (%d dicts in current "
> -                                 "context, %d remain)" %
> -                                 (len_list, len(list)))
> -                continue
> -
>              # Parse 'variants'
>              if line == "variants:":
> -                # 'variants' not allowed in restricted mode
> -                # (inside an exception or inside subvariants)
> -                if restricted:
> -                    e_msg = "Using variants in this context is not allowed"
> -                    cr.raise_error(e_msg)
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Entering variants block (%d dicts in "
> -                                 "current context)" % len_list)
> -                list = self.parse_variants(cr, list, subvariants=False,
> -                                           prev_indent=indent)
> -                continue
> -
> -            # Parse 'subvariants' (the block is parsed for each dict
> -            # separately)
> -            if line == "subvariants:":
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Entering subvariants block (%d dicts in "
> -                                 "current context)" % len_list)
> -                new_list = []
> -                # Remember current position
> -                pos = cr.tell()
> -                # Read the lines in any case
> -                self.parse_variants(cr, [], subvariants=True,
> -                                    prev_indent=indent)
> -                # Iterate over the list...
> -                for index in xrange(len(list)):
> -                    # Revert to initial position in this 'subvariants' block
> -                    cr.seek(pos)
> -                    # Everything inside 'subvariants' should be parsed in
> -                    # restricted mode
> -                    new_list += self.parse_variants(cr, list[index:index+1],
> -                                                    subvariants=True,
> -                                                    prev_indent=indent)
> -                list = new_list
> +                # 'variants' is not allowed inside a conditional block
> +                if isinstance(node, Condition):
> +                    raise ValueError("'variants' is not allowed inside a "
> +                                     "conditional block (%s:%s)" %
> +                                     (cr.filename, linenum))
> +                node = self._parse_variants(cr, node, prev_indent=indent)
>                  continue
> 
>              # Parse 'include' statements
>              if words[0] == "include":
> -                if len(words) <= 1:
> +                if len(words) < 2:
> +                    self._warn("%r (%s:%s): missing parameter. What are you "
> +                               "including?", line, cr.filename, linenum)
>                      continue
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line, "Entering file %s" % words[1])
> -
> -                cur_filename = cr.real_filename()
> -                if cur_filename is None:
> -                    cr.raise_error("'include' is valid only when parsing a file")
> -
> -                filename = os.path.join(os.path.dirname(cur_filename),
> -                                        words[1])
> -                if not os.path.exists(filename):
> -                    cr.raise_error("Cannot include %s -- file not found" % (filename))
> -
> -                str = open(filename).read()
> -                list = self.parse(configreader(filename, str), list, restricted)
> -                if self.debug and not restricted:
> -                    _debug_print("", "Leaving file %s" % words[1])
> +                if not isinstance(cr, FileReader):
> +                    self._warn("%r (%s:%s): cannot include because no file is "
> +                               "currently open", line, cr.filename, linenum)
> +                    continue
> +                filename = os.path.join(os.path.dirname(cr.filename), words[1])
> +                if not os.path.isfile(filename):
> +                    self._warn("%r (%s:%s): file doesn't exist or is not a "
> +                               "regular file", line, cr.filename, linenum)
> +                    continue
> +                node = self._parse(FileReader(filename), node)
> +                continue
> 
> +            # Parse 'only' and 'no' filters
> +            if words[0] in ("only", "no"):
> +                if len(words) < 2:
> +                    self._warn("%r (%s:%s): missing parameter", line,
> +                               cr.filename, linenum)
> +                    continue
> +                if words[0] == "only":
> +                    node.content += [(cr.filename, linenum, OnlyFilter(line))]
> +                elif words[0] == "no":
> +                    node.content += [(cr.filename, linenum, NoFilter(line))]
>                  continue
> 
> -            # Parse multi-line exceptions
> -            # (the block is parsed for each dict separately)
> +            # Parse conditional blocks
>              if line.endswith(":"):
> -                if self.debug and not restricted:
> -                    _debug_print(indented_line,
> -                                 "Entering multi-line exception block "
> -                                 "(%d dicts in current context outside "
> -                                 "exception)" % len_list)
> -                line = line[:-1]
> -                new_list = []
> -                # Remember current position
> -                pos = cr.tell()
> -                # Read the lines in any case
> -                self.parse(cr, [], restricted=True, prev_indent=indent)
> -                # Iterate over the list...
> -                exp = self._get_filter_regex(line)
> -                for index in xrange(len(list)):
> -                    name = _array_get_name(list[index], self.object_cache)
> -                    if exp.search(name):
> -                        # Revert to initial position in this exception block
> -                        cr.seek(pos)
> -                        # Everything inside an exception should be parsed in
> -                        # restricted mode
> -                        new_list += self.parse(cr, list[index:index+1],
> -                                               restricted=True,
> -                                               prev_indent=indent)
> -                    else:
> -                        new_list.append(list[index])
> -                list = new_list
> +                cond = Condition(line)
> +                self._parse(cr, cond, prev_indent=indent)
> +                node.content += [(cr.filename, linenum, cond)]
>                  continue
> 
> -        return list
> +            node.content += [(cr.filename, linenum, line)]
> +            continue
> 
> -
> -    def _get_filter_regex(self, filter):
> -        """
> -        Return a regex object corresponding to a given filter string.
> -
> -        All regular expressions given to the parser are passed through this
> -        function first.  Its purpose is to make them more specific and better
> -        suited to match dictionary names: it forces simple expressions to match
> -        only between dots or at the beginning or end of a string.  For example,
> -        the filter 'foo' will match 'foo.bar' but not 'foobar'.
> -        """
> -        try:
> -            return self.regex_cache[filter]
> -        except KeyError:
> -            exp = re.compile(r"(\.|^)(%s)(\.|$)" % filter)
> -            self.regex_cache[filter] = exp
> -            return exp
> -
> -
> -    def _store_str(self, str):
> -        """
> -        Store str in the internal object cache, if it isn't already there, and
> -        return its identifying index.
> -
> -        @param str: String to store.
> -        @return: The index of str in the object cache.
> -        """
> -        try:
> -            return self.object_cache_indices[str]
> -        except KeyError:
> -            self.object_cache.append(str)
> -            index = len(self.object_cache) - 1
> -            self.object_cache_indices[str] = index
> -            return index
> -
> -
> -    def _append_content_to_arrays(self, list, content):
> -        """
> -        Append content (config code containing assignment operations) to a list
> -        of arrays.
> -
> -        @param list: List of arrays to operate on.
> -        @param content: String containing assignment operations.
> -        """
> -        if content:
> -            str_index = self._store_str(content)
> -            for a in list:
> -                _array_append_to_content(a, str_index)
> -
> -
> -    def _apply_content_to_dict(self, dict, content):
> -        """
> -        Apply the operations in content (config code containing assignment
> -        operations) to a dict.
> -
> -        @param dict: Dictionary to operate on.  Must have 'name' key.
> -        @param content: String containing assignment operations.
> -        """
> -        for line in content.splitlines():
> -            op_found = None
> -            op_pos = len(line)
> -            for op in ops:
> -                pos = line.find(op)
> -                if pos >= 0 and pos < op_pos:
> -                    op_found = op
> -                    op_pos = pos
> -            if not op_found:
> -                continue
> -            (left, value) = map(str.strip, line.split(op_found, 1))
> -            if value and ((value[0] == '"' and value[-1] == '"') or
> -                          (value[0] == "'" and value[-1] == "'")):
> -                value = value[1:-1]
> -            filters_and_key = map(str.strip, left.split(":"))
> -            filters = filters_and_key[:-1]
> -            key = filters_and_key[-1]
> -            for filter in filters:
> -                exp = self._get_filter_regex(filter)
> -                if not exp.search(dict["name"]):
> -                    break
> -            else:
> -                ops[op_found](dict, key, value)
> +        return node
> 
> 
>  # Assignment operators
> 
> -def _op_set(dict, key, value):
> -    dict[key] = value
> +def _op_set(d, key, value):
> +    d[key] = value
> 
> 
> -def _op_append(dict, key, value):
> -    dict[key] = dict.get(key, "") + value
> +def _op_append(d, key, value):
> +    d[key] = d.get(key, "") + value
> 
> 
> -def _op_prepend(dict, key, value):
> -    dict[key] = value + dict.get(key, "")
> +def _op_prepend(d, key, value):
> +    d[key] = value + d.get(key, "")
> 
> 
> -def _op_regex_set(dict, exp, value):
> +def _op_regex_set(d, exp, value):
>      exp = re.compile("^(%s)$" % exp)
> -    for key in dict:
> +    for key in d:
>          if exp.match(key):
> -            dict[key] = value
> +            d[key] = value
> 
> 
> -def _op_regex_append(dict, exp, value):
> +def _op_regex_append(d, exp, value):
>      exp = re.compile("^(%s)$" % exp)
> -    for key in dict:
> +    for key in d:
>          if exp.match(key):
> -            dict[key] += value
> +            d[key] += value
> 
> 
> -def _op_regex_prepend(dict, exp, value):
> +def _op_regex_prepend(d, exp, value):
>      exp = re.compile("^(%s)$" % exp)
> -    for key in dict:
> +    for key in d:
>          if exp.match(key):
> -            dict[key] = value + dict[key]
> -
> +            d[key] = value + d[key]
> 
> -ops = {
> -    "=": _op_set,
> -    "+=": _op_append,
> -    "<=": _op_prepend,
> -    "?=": _op_regex_set,
> -    "?+=": _op_regex_append,
> -    "?<=": _op_regex_prepend,
> -}
> -
> -
> -# Misc functions
> -
> -def _debug_print(str1, str2=""):
> -    """
> -    Nicely print two strings and an arrow.
> 
> -    @param str1: First string.
> -    @param str2: Second string.
> -    """
> -    if str2:
> -        str = "%-50s ---> %s" % (str1, str2)
> -    else:
> -        str = str1
> -    logging.debug(str)
> +ops = {"=": _op_set,
> +       "+=": _op_append,
> +       "<=": _op_prepend,
> +       "?=": _op_regex_set,
> +       "?+=": _op_regex_append,
> +       "?<=": _op_regex_prepend}
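For anyone testing the operator semantics in isolation, here is a Python 3 rendering of a subset of the ops table above (assumption: behavior is unchanged from the patch, which targets Python 2; the "?" variants treat the key as an anchored regex over existing keys):

```python
import re

def _op_set(d, key, value):
    d[key] = value

def _op_append(d, key, value):
    d[key] = d.get(key, "") + value

def _op_prepend(d, key, value):
    d[key] = value + d.get(key, "")

def _op_regex_set(d, exp, value):
    # Anchored match: the pattern must cover the whole key.
    exp = re.compile("^(%s)$" % exp)
    for key in d:
        if exp.match(key):
            d[key] = value

ops = {"=": _op_set, "+=": _op_append, "<=": _op_prepend, "?=": _op_regex_set}
```

Note that "?=" only rewrites keys that already exist; it never creates new ones.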
> 
> 
> -# configreader
> +# StrReader and FileReader
> 
> -class configreader:
> +class StrReader(object):
>      """
> -    Preprocess an input string and provide file-like services.
> -    This is intended as a replacement for the file and StringIO classes,
> -    whose readline() and/or seek() methods seem to be slow.
> +    Preprocess an input string for easy reading.
>      """
> -
> -    def __init__(self, filename, str, real_file=True):
> +    def __init__(self, s):
>          """
>          Initialize the reader.
> 
> -        @param filename: the filename we're parsing
> -        @param str: The string to parse.
> -        @param real_file: Indicates if filename represents a real file. Defaults to True.
> +        @param s: The string to parse.
>          """
> -        self.filename = filename
> -        self.is_real_file = real_file
> -        self.line_index = 0
> -        self.lines = []
> -        self.real_number = []
> -        for num, line in enumerate(str.splitlines()):
> +        self.filename = "<string>"
> +        self._lines = []
> +        self._line_index = 0
> +        for linenum, line in enumerate(s.splitlines()):
>              line = line.rstrip().expandtabs()
> -            stripped_line = line.strip()
> +            stripped_line = line.lstrip()
>              indent = len(line) - len(stripped_line)
>              if (not stripped_line
>                  or stripped_line.startswith("#")
>                  or stripped_line.startswith("//")):
>                  continue
> -            self.lines.append((line, stripped_line, indent))
> -            self.real_number.append(num + 1)
> -
> -
> -    def real_filename(self):
> -        """Returns the filename we're reading, in case it is a real file
> -
> -        @returns the filename we are parsing, or None in case we're not parsing a real file
> -        """
> -        if self.is_real_file:
> -            return self.filename
> -
> -    def get_next_line(self):
> -        """
> -        Get the next non-empty, non-comment line in the string.
> +            self._lines.append((stripped_line, indent, linenum + 1))
> 
> -        @param file: File like object.
> -        @return: (line, stripped_line, indent), where indent is the line's
> -            indent level or -1 if no line is available.
> -        """
> -        try:
> -            if self.line_index < len(self.lines):
> -                return self.lines[self.line_index]
> -            else:
> -                return (None, None, -1)
> -        finally:
> -            self.line_index += 1
> 
> -
> -    def tell(self):
> -        """
> -        Return the current line index.
> -        """
> -        return self.line_index
> -
> -
> -    def seek(self, index):
> -        """
> -        Set the current line index.
> +    def get_next_line(self, prev_indent):
>          """
> -        self.line_index = index
> +        Get the next non-empty, non-comment line in the string, whose
> +        indentation level is higher than prev_indent.
> 
> -    def raise_error(self, msg):
> -        """Raise an error related to the last line returned by get_next_line()
> +        @param prev_indent: The indentation level of the previous block.
> +        @return: (line, indent, linenum), where indent is the line's
> +            indentation level.  If no line is available, (None, -1, -1) is
> +            returned.
>          """
> -        if self.line_index == 0: # nothing was read. shouldn't happen, but...
> -            line_id = 'BEGIN'
> -        elif self.line_index >= len(self.lines): # past EOF
> -            line_id = 'EOF'
> -        else:
> -            # line_index is the _next_ line. get the previous one
> -            line_id = str(self.real_number[self.line_index-1])
> -        raise error.AutotestError("%s:%s: %s" % (self.filename, line_id, msg))
> -
> -
> -# Array structure:
> -# ----------------
> -# The first 4 elements contain the indices of the 4 segments.
> -# a[0] -- Index of beginning of 'name' segment (always 4).
> -# a[1] -- Index of beginning of 'shortname' segment.
> -# a[2] -- Index of beginning of 'depend' segment.
> -# a[3] -- Index of beginning of 'content' segment.
> -# The next elements in the array comprise the aforementioned segments:
> -# The 'name' segment begins with a[a[0]] and ends with a[a[1]-1].
> -# The 'shortname' segment begins with a[a[1]] and ends with a[a[2]-1].
> -# The 'depend' segment begins with a[a[2]] and ends with a[a[3]-1].
> -# The 'content' segment begins with a[a[3]] and ends at the end of the array.
> -
> -# The following functions append/prepend to various segments of an array.
> -
> -def _array_append_to_name_shortname_depend(a, name, depend):
> -    a.insert(a[1], name)
> -    a.insert(a[2] + 1, name)
> -    a.insert(a[3] + 2, depend)
> -    a[1] += 1
> -    a[2] += 2
> -    a[3] += 3
> -
> -
> -def _array_prepend_to_name_shortname_depend(a, name, depend):
> -    a[1] += 1
> -    a[2] += 2
> -    a[3] += 3
> -    a.insert(a[0], name)
> -    a.insert(a[1], name)
> -    a.insert(a[2], depend)
> -
> +        if self._line_index >= len(self._lines):
> +            return None, -1, -1
> +        line, indent, linenum = self._lines[self._line_index]
> +        if indent <= prev_indent:
> +            return None, -1, -1
> +        self._line_index += 1
> +        return line, indent, linenum
> 
> -def _array_append_to_name_depend(a, name, depend):
> -    a.insert(a[1], name)
> -    a.insert(a[3] + 1, depend)
> -    a[1] += 1
> -    a[2] += 1
> -    a[3] += 2
> 
> -
> -def _array_prepend_to_name_depend(a, name, depend):
> -    a[1] += 1
> -    a[2] += 1
> -    a[3] += 2
> -    a.insert(a[0], name)
> -    a.insert(a[2], depend)
> -
> -
> -def _array_append_to_content(a, content):
> -    a.append(content)
> -
> -
> -def _array_get_name(a, object_cache):
> -    """
> -    Return the name of a dictionary represented by a given array.
> -
> -    @param a: Array representing a dictionary.
> -    @param object_cache: A list of strings referenced by elements in the array.
> +class FileReader(StrReader):
>      """
> -    return ".".join([object_cache[i] for i in a[a[0]:a[1]]])
> -
> -
> -def _array_get_all(a, object_cache):
> +    Preprocess an input file for easy reading.
>      """
> -    Return a 4-tuple containing all the data stored in a given array, in a
> -    format that is easy to turn into an actual dictionary.
> +    def __init__(self, filename):
> +        """
> +        Initialize the reader.
> 
> -    @param a: Array representing a dictionary.
> -    @param object_cache: A list of strings referenced by elements in the array.
> -    @return: A 4-tuple: (name, shortname, depend, content), in which all
> -        members are strings except depend which is a list of strings.
> -    """
> -    name = ".".join([object_cache[i] for i in a[a[0]:a[1]]])
> -    shortname = ".".join([object_cache[i] for i in a[a[1]:a[2]]])
> -    content = "".join([object_cache[i] for i in a[a[3]:]])
> -    depend = []
> -    prefix = ""
> -    for n, d in zip(a[a[0]:a[1]], a[a[2]:a[3]]):
> -        for dep in object_cache[d].split():
> -            depend.append(prefix + dep)
> -        prefix += object_cache[n] + "."
> -    return name, shortname, depend, content
> +        @param filename: The name of the input file.
> +        """
> +        StrReader.__init__(self, open(filename).read())
> +        self.filename = filename
> 
> 
>  if __name__ == "__main__":
> -    parser = optparse.OptionParser("usage: %prog [options] [filename]")
> -    parser.add_option('--verbose', dest="debug", action='store_true',
> -                      help='include debug messages in console output')
> +    parser = optparse.OptionParser("usage: %prog [options] <filename>")
> +    parser.add_option("-v", "--verbose", dest="debug", action="store_true",
> +                      help="include debug messages in console output")
> +    parser.add_option("-f", "--fullname", dest="fullname", action="store_true",
> +                      help="show full dict names instead of short names")
> +    parser.add_option("-c", "--contents", dest="contents", action="store_true",
> +                      help="show dict contents")
> 
>      options, args = parser.parse_args()
> -    debug = options.debug
> -    if args:
> -        filenames = args
> -    else:
> -        filenames = [os.path.join(os.path.dirname(sys.argv[0]), "tests.cfg")]
> -
> -    # Here we configure the stand alone program to use the autotest
> -    # logging system.
> -    logging_manager.configure_logging(kvm_utils.KvmLoggingConfig(),
> -                                      verbose=debug)
> -    cfg = config(debug=debug)
> -    for fn in filenames:
> -        cfg.parse_file(fn)
> -    dicts = cfg.get_generator()
> -    for i, dict in enumerate(dicts):
> -        print "Dictionary #%d:" % (i)
> -        keys = dict.keys()
> -        keys.sort()
> -        for key in keys:
> -            print "    %s = %s" % (key, dict[key])
> +    if not args:
> +        parser.error("filename required")
> +
> +    c = Parser(args[0], debug=options.debug)
> +    for i, d in enumerate(c.get_dicts()):
> +        if options.fullname:
> +            print "dict %4d:  %s" % (i + 1, d["name"])
> +        else:
> +            print "dict %4d:  %s" % (i + 1, d["shortname"])
> +        if options.contents:
> +            keys = d.keys()
> +            keys.sort()
> +            for key in keys:
> +                print "    %s = %s" % (key, d[key])
> diff --git a/client/tests/kvm/kvm_scheduler.py b/client/tests/kvm/kvm_scheduler.py
> index 95282e4..b96bb32 100644
> --- a/client/tests/kvm/kvm_scheduler.py
> +++ b/client/tests/kvm/kvm_scheduler.py
> @@ -63,7 +63,6 @@ class scheduler:
>                  test_index = int(cmd[1])
>                  test = self.tests[test_index].copy()
>                  test.update(self_dict)
> -                test = kvm_utils.get_sub_pool(test, index, self.num_workers)
>                  test_iterations = int(test.get("iterations", 1))
>                  status = run_test_func("kvm", params=test,
>                                         tag=test.get("shortname"),
> @@ -129,7 +128,7 @@ class scheduler:
>                      # If the test failed, mark all dependent tests as "failed" too
>                      if not status:
>                          for i, other_test in enumerate(self.tests):
> -                            for dep in other_test.get("depend", []):
> +                            for dep in other_test.get("dep", []):
>                                  if dep in test["name"]:
>                                      test_status[i] = "fail"
> 
> @@ -154,7 +153,7 @@ class scheduler:
>                          continue
>                      # Make sure the test's dependencies are satisfied
>                      dependencies_satisfied = True
> -                    for dep in test["depend"]:
> +                    for dep in test["dep"]:
>                          dependencies = [j for j, t in enumerate(self.tests)
>                                          if dep in t["name"]]
>                          bad_status_deps = [j for j in dependencies
> @@ -200,14 +199,14 @@ class scheduler:
>                      used_mem[worker] = test_used_mem
>                      # Assign all related tests to this worker
>                      for j, other_test in enumerate(self.tests):
> -                        for other_dep in other_test["depend"]:
> +                        for other_dep in other_test["dep"]:
>                              # All tests that depend on this test
>                              if other_dep in test["name"]:
>                                  test_worker[j] = worker
>                                  break
>                              # ... and all tests that share a dependency
>                              # with this test
> -                            for dep in test["depend"]:
> +                            for dep in test["dep"]:
>                                  if dep in other_dep or other_dep in dep:
>                                      test_worker[j] = worker
>                                      break
> diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
> index 44ebb88..9e25a0a 100644
> --- a/client/tests/kvm/kvm_utils.py
> +++ b/client/tests/kvm/kvm_utils.py
> @@ -1101,7 +1101,7 @@ def run_tests(test_list, job):
>          if dict.get("skip") == "yes":
>              continue
>          dependencies_satisfied = True
> -        for dep in dict.get("depend"):
> +        for dep in dict.get("dep"):
>              for test_name in status_dict.keys():
>                  if not dep in test_name:
>                      continue
> diff --git a/client/tests/kvm/tests.cfg.sample b/client/tests/kvm/tests.cfg.sample
> index bde7aba..4b3b965 100644
> --- a/client/tests/kvm/tests.cfg.sample
> +++ b/client/tests/kvm/tests.cfg.sample
> @@ -18,10 +18,9 @@ include cdkeys.cfg
>  image_name(_.*)? ?<= /tmp/kvm_autotest_root/images/
>  cdrom(_.*)? ?<= /tmp/kvm_autotest_root/
>  floppy ?<= /tmp/kvm_autotest_root/
> -Linux:
> -    unattended_install:
> -        kernel ?<= /tmp/kvm_autotest_root/
> -        initrd ?<= /tmp/kvm_autotest_root/
> +Linux..unattended_install:
> +    kernel ?<= /tmp/kvm_autotest_root/
> +    initrd ?<= /tmp/kvm_autotest_root/
> 
>  # Here are the test sets variants. The variant 'qemu_kvm_windows_quick' is
>  # fully commented, the following ones have comments only on noteworthy points
> @@ -49,7 +48,7 @@ variants:
>          # Operating system choice
>          only Win7.64
>          # Subtest choice. You can modify that line to add more subtests
> -        only unattended_install.cdrom boot shutdown
> +        only unattended_install.cdrom, boot, shutdown
> 
>      # Runs qemu, f14 64 bit guest OS, install, boot, shutdown
>      - @qemu_f14_quick:
> @@ -65,7 +64,7 @@ variants:
>          only no_pci_assignable
>          only smallpages
>          only Fedora.14.64
> -        only unattended_install.cdrom boot shutdown
> +        only unattended_install.cdrom, boot, shutdown
>          # qemu needs -enable-kvm on the cmdline
>          extra_params += ' -enable-kvm'
> 
> @@ -81,7 +80,7 @@ variants:
>          only no_pci_assignable
>          only smallpages
>          only Fedora.14.64
> -        only unattended_install.cdrom boot shutdown
> +        only unattended_install.cdrom, boot, shutdown
> 
>  # You may provide information about the DTM server for WHQL tests here:
>  #whql:
> diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
> index 80362db..e65bed2 100644
> --- a/client/tests/kvm/tests_base.cfg.sample
> +++ b/client/tests/kvm/tests_base.cfg.sample
> @@ -1722,8 +1722,8 @@ variants:
> 
>      # Windows section
>      - @Windows:
> -        no autotest linux_s3 vlan ioquit unattended_install.(url|nfs|remote_ks)
> -        no jumbo nicdriver_unload nic_promisc multicast mac_change ethtool clock_getres
> +        no autotest, linux_s3, vlan, ioquit, unattended_install.url, unattended_install.nfs, unattended_install.remote_ks
> +        no jumbo, nicdriver_unload, nic_promisc, multicast, mac_change, ethtool, clock_getres
> 
>          shutdown_command = shutdown /s /f /t 0
>          reboot_command = shutdown /r /f /t 0
> @@ -1747,7 +1747,7 @@ variants:
>          mem_chk_cmd = wmic memphysical
>          mem_chk_cur_cmd = wmic memphysical
> 
> -        unattended_install.cdrom|whql.support_vm_install:
> +        unattended_install.cdrom, whql.support_vm_install:
>              timeout = 7200
>              finish_program = deps/finish.exe
>              cdroms += " winutils"
> @@ -1857,7 +1857,7 @@ variants:
>                              steps = WinXP-32.steps
>                          setup:
>                              steps = WinXP-32-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                              cdrom_cd1 = isos/windows/WindowsXP-sp2-vlk.iso
>                              md5sum_cd1 = 743450644b1d9fe97b3cf379e22dceb0
>                              md5sum_1m_cd1 = b473bf75af2d1269fec8958cf0202bfd
> @@ -1890,7 +1890,7 @@ variants:
>                              steps = WinXP-64.steps
>                          setup:
>                              steps = WinXP-64-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                              cdrom_cd1 = isos/windows/WindowsXP-64.iso
>                              md5sum_cd1 = 8d3f007ec9c2060cec8a50ee7d7dc512
>                              md5sum_1m_cd1 = e812363ff427effc512b7801ee70e513
> @@ -1928,7 +1928,7 @@ variants:
>                              steps = Win2003-32.steps
>                          setup:
>                              steps = Win2003-32-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                              cdrom_cd1 = isos/windows/Windows2003_r2_VLK.iso
>                              md5sum_cd1 = 03e921e9b4214773c21a39f5c3f42ef7
>                              md5sum_1m_cd1 = 37c2fdec15ac4ec16aa10fdfdb338aa3
> @@ -1960,7 +1960,7 @@ variants:
>                              steps = Win2003-64.steps
>                          setup:
>                              steps = Win2003-64-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                              cdrom_cd1 = isos/windows/Windows2003-x64.iso
>                              md5sum_cd1 = 5703f87c9fd77d28c05ffadd3354dbbd
>                              md5sum_1m_cd1 = 439393c384116aa09e08a0ad047dcea8
> @@ -2008,7 +2008,7 @@ variants:
>                                      steps = Win-Vista-32.steps
>                                  setup:
>                                      steps = WinVista-32-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/WindowsVista-32.iso
>                                      md5sum_cd1 = 1008f323d5170c8e614e52ccb85c0491
>                                      md5sum_1m_cd1 = c724e9695da483bc0fd59e426eaefc72
> @@ -2025,7 +2025,7 @@ variants:
> 
>                              - sp2:
>                                  image_name += -sp2-32
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/en_windows_vista_with_sp2_x86_dvd_342266.iso
>                                      md5sum_cd1 = 19ca90a425667812977bab6f4ce24175
>                                      md5sum_1m_cd1 = 89c15020e0e6125be19acf7a2e5dc614
> @@ -2059,7 +2059,7 @@ variants:
>                                      steps = Win-Vista-64.steps
>                                  setup:
>                                      steps = WinVista-64-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/WindowsVista-64.iso
>                                      md5sum_cd1 = 11e2010d857fffc47813295e6be6d58d
>                                      md5sum_1m_cd1 = 0947bcd5390546139e25f25217d6f165
> @@ -2076,7 +2076,7 @@ variants:
> 
>                              - sp2:
>                                  image_name += -sp2-64
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/en_windows_vista_sp2_x64_dvd_342267.iso
>                                      md5sum_cd1 = a1c024d7abaf34bac3368e88efbc2574
>                                      md5sum_1m_cd1 = 3d84911a80f3df71d1026f7adedc2181
> @@ -2112,7 +2112,7 @@ variants:
>                                      steps = Win2008-32.steps
>                                  setup:
>                                      steps = Win2008-32-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/Windows2008-x86.iso
>                                      md5sum=0bfca49f0164de0a8eba236ced47007d
>                                      md5sum_1m=07d7f5006393f74dc76e6e2e943e2440
> @@ -2127,7 +2127,7 @@ variants:
> 
>                              - sp2:
>                                  image_name += -sp2-32
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/en_windows_server_2008_datacenter_enterprise_standard_sp2_x86_dvd_342333.iso
>                                      md5sum_cd1 = b9201aeb6eef04a3c573d036a8780bdf
>                                      md5sum_1m_cd1 = b7a9d42e55ea1e85105a3a6ad4da8e04
> @@ -2156,7 +2156,7 @@ variants:
>                                      passwd = 1q2w3eP
>                                  setup:
>                                      steps = Win2008-64-rss.steps
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/Windows2008-x64.iso
>                                      md5sum=27c58cdb3d620f28c36333a5552f271c
>                                      md5sum_1m=efdcc11d485a1ef9afa739cb8e0ca766
> @@ -2171,7 +2171,7 @@ variants:
> 
>                              - sp2:
>                                  image_name += -sp2-64
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/en_windows_server_2008_datacenter_enterprise_standard_sp2_x64_dvd_342336.iso
>                                      md5sum_cd1 = e94943ef484035b3288d8db69599a6b5
>                                      md5sum_1m_cd1 = ee55506823d0efffb5532ddd88a8e47b
> @@ -2188,7 +2188,7 @@ variants:
> 
>                              - r2:
>                                  image_name += -r2-64
> -                                unattended_install.cdrom|whql.support_vm_install:
> +                                unattended_install.cdrom, whql.support_vm_install:
>                                      cdrom_cd1 = isos/windows/en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>                                      md5sum_cd1 = 0207ef392c60efdda92071b0559ca0f9
>                                      md5sum_1m_cd1 = a5a22ce25008bd7109f6d830d627e3ed
> @@ -2216,7 +2216,7 @@ variants:
>                  variants:
>                      - 32:
>                          image_name += -32
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                              cdrom_cd1 = isos/windows/en_windows_7_ultimate_x86_dvd_x15-65921.iso
>                              md5sum_cd1 = d0b8b407e8a3d4b75ee9c10147266b89
>                              md5sum_1m_cd1 = 2b0c2c22b1ae95065db08686bf83af93
> @@ -2249,7 +2249,7 @@ variants:
>                              steps = Win7-64.steps
>                          setup:
>                              steps = Win7-64-rss.steps
> -                        unattended_install.cdrom|whql.support_vm_install:
> +                        unattended_install.cdrom, whql.support_vm_install:
>                              cdrom_cd1 = isos/windows/en_windows_7_ultimate_x64_dvd_x15-65922.iso
>                              md5sum_cd1 = f43d22e4fb07bf617d573acd8785c028
>                              md5sum_1m_cd1 = b44d8cf99dbed2a5cb02765db8dfd48f
> @@ -2329,7 +2329,7 @@ variants:
>                  md5sum_cd1 = 9fae22f2666369968a76ef59e9a81ced
> 
> 
> -whql.support_vm_install|whql.client_install.support_vm:
> +whql.support_vm_install, whql.client_install.support_vm:
>      image_name += -supportvm
> 
> 
> @@ -2352,7 +2352,7 @@ variants:
>          drive_format=virtio
> 
> 
> -virtio_net|virtio_blk|e1000|balloon_check:
> +virtio_net, virtio_blk, e1000, balloon_check:
>      only Fedora.11 Fedora.12 Fedora.13 Fedora.14 RHEL.5 RHEL.6 OpenSUSE.11 SLES.11 Ubuntu-8.10-server
>      # only WinXP Win2003 Win2008 WinVista Win7 Fedora.11 Fedora.12 Fedora.13 Fedora.14 RHEL.5 RHEL.6 OpenSUSE.11 SLES.11 Ubuntu-8.10-server
> 
> @@ -2365,15 +2365,9 @@ variants:
>          check_image = yes
>      - vmdk:
>          no ioquit
> -        only Fedora Ubuntu Windows
> -        only smp2
> -        only rtl8139
>          image_format = vmdk
>      - raw:
>          no ioquit
> -        only Fedora Ubuntu Windows
> -        only smp2
> -        only rtl8139
>          image_format = raw
> 
> 
> -- 
> 1.7.3.4
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09 16:06 ` Ryan Harper
@ 2011-02-09 16:21   ` Eduardo Habkost
  2011-02-09 23:31     ` [Autotest] " Ryan Harper
  0 siblings, 1 reply; 19+ messages in thread
From: Eduardo Habkost @ 2011-02-09 16:21 UTC (permalink / raw)
  To: Ryan Harper; +Cc: autotest, Uri Lublin, kvm

On Wed, Feb 09, 2011 at 10:06:03AM -0600, Ryan Harper wrote:
> > 
> > Instead of regular expressions in the filters, the following syntax is used:
> > 
> > , means OR
> > .. means AND
> > . means IMMEDIATELY-FOLLOWED-BY
> 
> Is there any reason we can't use | for or, and & for AND?  I know this
> is just nit picking, but, it certainly reads easier and doesn't need a
> translation.  AFAICT, in the implementation, we're just using .split(),
> so, I think the delimiters aren't critical.

I think the main reason is that " " also means "OR" today (as we use
.split() and I guess we don't want to diverge too much from the previous
format), and having C-like operators that don't allow spaces would lead
to confusion. e.g. I am sure somebody would try to write
"foo & bar | baz" eventually--how would we interpret that?

> 
> > 
> > Example:
> > 
> > only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> > 
> > means select all dicts whose names have:
> > 
> > (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
> > ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
> > (smp2 AND qcow2 AND migrate AND ide)
> 
>   >>> config = "qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide"
>   >>> config
>   'qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide'
>   >>> config.split("|")
>   ['qcow2&Fedora.14', 'RHEL.6&raw&boot', 'smp2&qcow2&migrate&ide']

What bothers me about the examples above is the absence of spaces, which
makes it not very readable to my eyes.

-- 
Eduardo


* Re: [Autotest] [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09 16:21   ` Eduardo Habkost
@ 2011-02-09 23:31     ` Ryan Harper
  2011-02-10  9:14       ` Michael Goldish
  0 siblings, 1 reply; 19+ messages in thread
From: Ryan Harper @ 2011-02-09 23:31 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: Ryan Harper, Michael Goldish, autotest, Uri Lublin, kvm

* Eduardo Habkost <ehabkost@redhat.com> [2011-02-09 10:22]:
> On Wed, Feb 09, 2011 at 10:06:03AM -0600, Ryan Harper wrote:
> > > 
> > > Instead of regular expressions in the filters, the following syntax is used:
> > > 
> > > , means OR
> > > .. means AND
> > > . means IMMEDIATELY-FOLLOWED-BY
> > 
> > Is there any reason we can't use | for or, and & for AND?  I know this
> > is just nit picking, but, it certainly reads easier and doesn't need a
> > translation.  AFAICT, in the implementation, we're just using .split(),
> > so, I think the delimiters aren't critical.
> 
> I think the main reason is that " " also means "OR" today (as we use
> .split() and I guess we don't want to diverge too much from the previous
> format), and having C-like operators that don't allow spaces would lead
> to confusion. e.g. I am sure somebody would try to write
> "foo & bar | baz" eventually--how would we interpret that?

isn't the comma taking the place of " " as OR? Are you keeping both?

".." looks like a mistake to me where one meant to put "."

I'd suggest ignoring " " as an OR operator; then, as with most operations,
you need either parens or order-of-operation precedence to interpret
foo & bar | baz.


> 
> > 
> > > 
> > > Example:
> > > 
> > > only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> > > 
> > > means select all dicts whose names have:
> > > 
> > > (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
> > > ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
> > > (smp2 AND qcow2 AND migrate AND ide)
> > 
> >   >>> config = "qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide"
> >   >>> config
> >   'qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide'
> >   >>> config.split("|")
> >   ['qcow2&Fedora.14', 'RHEL.6&raw&boot', 'smp2&qcow2&migrate&ide']
> 
> What bothers me about the examples above is the absence of spaces, which
> makes it not very readable to my eyes.

I don't disagree, but the . and .. I don't find very readable either and
I need a look-up table to distinguish , from .. and . and " ".  The
logical operators are well known and recognized.


> 
> -- 
> Eduardo

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com


* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09  9:28 ` Avi Kivity
  2011-02-09 10:07   ` Michael Goldish
@ 2011-02-10  1:18   ` Amos Kong
  2011-02-10 12:42     ` Lucas Meneghel Rodrigues
  1 sibling, 1 reply; 19+ messages in thread
From: Amos Kong @ 2011-02-10  1:18 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Michael Goldish, autotest, kvm, Uri Lublin

On Wed, Feb 09, 2011 at 11:28:56AM +0200, Avi Kivity wrote:
> On 02/09/2011 03:50 AM, Michael Goldish wrote:
> >This is a reimplementation of the dict generator.  It is much faster than the
> >current implementation and uses a very small amount of memory.  Running time
> >and memory usage scale polynomially with the number of defined variants,
> >compared to exponentially in the current implementation.
> >
> >Instead of regular expressions in the filters, the following syntax is used:
> >
> >, means OR
> >.. means AND
> >. means IMMEDIATELY-FOLLOWED-BY
> >
> >Example:
> >
> >only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> >
> 
> 
> Is it not possible to keep the old syntax?  Breaking people's
> scripts is bad.

we only need to convert the configuration files; it's not too complex
 
> -- 
> error compiling committee.c: too many arguments to function
> 


* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-09 23:31     ` [Autotest] " Ryan Harper
@ 2011-02-10  9:14       ` Michael Goldish
  2011-02-10 10:34         ` [Autotest] " Avi Kivity
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2011-02-10  9:14 UTC (permalink / raw)
  To: Ryan Harper; +Cc: autotest, Uri Lublin, Eduardo Habkost, kvm

On 02/10/2011 01:31 AM, Ryan Harper wrote:
> * Eduardo Habkost <ehabkost@redhat.com> [2011-02-09 10:22]:
>> On Wed, Feb 09, 2011 at 10:06:03AM -0600, Ryan Harper wrote:
>>>>
>>>> Instead of regular expressions in the filters, the following syntax is used:
>>>>
>>>> , means OR
>>>> .. means AND
>>>> . means IMMEDIATELY-FOLLOWED-BY
>>>
>>> Is there any reason we can't use | for or, and & for AND?  I know this
>>> is just nit picking, but, it certainly reads easier and doesn't need a
>>> translation.  AFAICT, in the implementation, we're just using .split(),
>>> so, I think the delimiters aren't critical.
>>
>> I think the main reason is that " " also means "OR" today (as we use
>> .split() and I guess we don't want to diverge too much from the previous
>> format), and having C-like operators that don't allow spaces would lead
>> to confusion. e.g. I am sure somebody would try to write
>> "foo & bar | baz" eventually--how would we interpret that?
> 
> isn't the comma taking the place of " " as OR? Are you keeping both?

We're keeping both because that allows for some degree of backward
compatibility.  The new syntax is backward compatible with simple
scripts that don't use any regexp operators like .* and |.
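For illustration, a rough sketch (hypothetical -- not the actual
kvm_config.py code) of how both separators can coexist: splitting first
on commas and then on whitespace makes "a b" and "a, b" equivalent OR
lists, which is what preserves the old simple filters.

```python
# Hypothetical sketch -- NOT the real kvm_config.py parser.  Both ","
# and bare whitespace separate OR terms, ".." separates AND words, and
# "." separates name components that must appear consecutively.
def parse_filter(filter_str):
    or_terms = []
    for part in filter_str.split(","):
        or_terms.extend(part.split())   # whitespace also acts as OR
    # each OR term -> list of AND words -> list of name components
    return [[word.split(".") for word in term.split("..")]
            for term in or_terms]

print(parse_filter("qcow2..Fedora.14, RHEL.6..raw..boot"))
# -> [[['qcow2'], ['Fedora', '14']], [['RHEL', '6'], ['raw'], ['boot']]]
```

Under this scheme "ide scsi" and "ide, scsi" parse identically, which is
the backward compatibility referred to above.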

> ".." looks like a mistake to me where one meant to put "."
> 
> I'd suggest ignoring " " as an OR operator; then, as with most operations,
> you need either parens or order-of-operation precedence to interpret
> foo & bar | baz.
> 
> 
>>
>>>
>>>>
>>>> Example:
>>>>
>>>> only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
>>>>
>>>> means select all dicts whose names have:
>>>>
>>>> (qcow2 AND (Fedora IMMEDIATELY-FOLLOWED-BY 14)) OR
>>>> ((RHEL IMMEDIATELY-FOLLOWED-BY 6) AND raw AND boot) OR
>>>> (smp2 AND qcow2 AND migrate AND ide)
>>>
>>>   >>> config = "qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide"
>>>   >>> config
>>>   'qcow2&Fedora.14|RHEL.6&raw&boot|smp2&qcow2&migrate&ide'
>>>   >>> config.split("|")
>>>   ['qcow2&Fedora.14', 'RHEL.6&raw&boot', 'smp2&qcow2&migrate&ide']
>>
>> What bothers me about the examples above is the absence of spaces, which
>> makes it not very readable to my eyes.
> 
> I don't disagree, but the . and .. I don't find very readable either and
> I need a look-up table to distinguish , from .. and . and " ".  The
> logical operators are well known and recognized.

I thought , was intuitive enough:

only boot, reboot, migrate

seems pretty nice to me.  I also thought '.' was obvious because it
appears in test names, which leaves us with '..':

only Fedora..boot

I thought this could be read as "I don't care what's between Fedora and
boot".  Does that make any sense to you?

Either way, if we use | and &, I don't think we'll support parentheses
because that would greatly complicate the code.
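To make the semantics above concrete, a minimal matching sketch
(hypothetical code, not the real implementation): each AND word must
occur as a consecutive run of name components, but the AND words
themselves may appear anywhere in the name, in any order.

```python
# Hypothetical sketch of the filter semantics -- not the actual
# kvm_config.py code.  "," = OR, ".." = AND, "." = consecutive.
def contains_run(components, run):
    # True if 'run' appears as a consecutive slice of 'components'
    n = len(run)
    return any(components[i:i + n] == run
               for i in range(len(components) - n + 1))

def name_matches(name, filter_str):
    components = name.split(".")
    for or_term in filter_str.split(","):
        words = [w.strip() for w in or_term.split("..")]
        if all(contains_run(components, w.split(".")) for w in words):
            return True
    return False

print(name_matches("Fedora.9.32.boot", "Fedora..boot"))    # True
print(name_matches("Windows.XP.32.boot", "Fedora..boot"))  # False
```

Note that "Fedora.14" still only matches the consecutive pair, so
"qcow2..Fedora.14" is not equivalent to "qcow2..14.Fedora", as stated in
the patch description.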


* Re: [Autotest] [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10  9:14       ` Michael Goldish
@ 2011-02-10 10:34         ` Avi Kivity
  2011-02-10 10:46           ` Michael Goldish
  0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2011-02-10 10:34 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Ryan Harper, Eduardo Habkost, autotest, Uri Lublin, kvm

On 02/10/2011 11:14 AM, Michael Goldish wrote:
> only Fedora..boot
>

So this would include Fedora.9.32.boot and Fedora.9.64.boot, but exclude 
Windows.XP.32.boot or Fedora.9.32.migrate?  seems reasonable.

-- 
error compiling committee.c: too many arguments to function



* Re: [Autotest] [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10 10:34         ` [Autotest] " Avi Kivity
@ 2011-02-10 10:46           ` Michael Goldish
  2011-02-10 10:47             ` Avi Kivity
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2011-02-10 10:46 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Ryan Harper, Eduardo Habkost, autotest, Uri Lublin, kvm

On 02/10/2011 12:34 PM, Avi Kivity wrote:
> On 02/10/2011 11:14 AM, Michael Goldish wrote:
>> only Fedora..boot
>>
> 
> So this would include Fedora.9.32.boot and Fedora.9.64.boot, but exclude
> Windows.XP.32.boot or Fedora.9.32.migrate?  seems reasonable.

Correct, and it would also include boot.Fedora.9.32 and
boot.9.32.Fedora, if there were such things.


* Re: [Autotest] [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10 10:46           ` Michael Goldish
@ 2011-02-10 10:47             ` Avi Kivity
  2011-02-10 10:55               ` Michael Goldish
  0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2011-02-10 10:47 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Ryan Harper, Eduardo Habkost, autotest, Uri Lublin, kvm

On 02/10/2011 12:46 PM, Michael Goldish wrote:
> On 02/10/2011 12:34 PM, Avi Kivity wrote:
> >  On 02/10/2011 11:14 AM, Michael Goldish wrote:
> >>  only Fedora..boot
> >>
> >
> >  So this would include Fedora.9.32.boot and Fedora.9.64.boot, but exclude
> >  Windows.XP.32.boot or Fedora.9.32.migrate?  seems reasonable.
>
> Correct, and it would also include boot.Fedora.9.32 and
> boot.9.32.Fedora, if there were such things.

That's counterintuitive and requires careful planning.

-- 
error compiling committee.c: too many arguments to function


* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10 10:47             ` Avi Kivity
@ 2011-02-10 10:55               ` Michael Goldish
  2011-02-10 10:57                 ` [Autotest] " Michael Goldish
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2011-02-10 10:55 UTC (permalink / raw)
  To: Avi Kivity; +Cc: autotest, Uri Lublin, Eduardo Habkost, kvm

On 02/10/2011 12:47 PM, Avi Kivity wrote:
> On 02/10/2011 12:46 PM, Michael Goldish wrote:
>> On 02/10/2011 12:34 PM, Avi Kivity wrote:
>> >  On 02/10/2011 11:14 AM, Michael Goldish wrote:
>> >>  only Fedora..boot
>> >>
>> >
>> >  So this would include Fedora.9.32.boot and Fedora.9.64.boot, but
>> exclude
>> >  Windows.XP.32.boot or Fedora.9.32.migrate?  seems reasonable.
>>
>> Correct, and it would also include boot.Fedora.9.32 and
>> boot.9.32.Fedora, if there were such things.
> 
> That's counterintuitive and requires careful planning.

I can't easily think of a case where this might cause confusion.  The
purpose of this is to allow people to write:

only qcow2..raw..rtl8139

without having to remember the order in which those were defined in
tests_base.cfg.


* Re: [Autotest] [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10 10:55               ` Michael Goldish
@ 2011-02-10 10:57                 ` Michael Goldish
  2011-02-10 11:03                   ` Avi Kivity
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Goldish @ 2011-02-10 10:57 UTC (permalink / raw)
  To: Avi Kivity; +Cc: autotest, Uri Lublin, Eduardo Habkost, kvm

On 02/10/2011 12:55 PM, Michael Goldish wrote:
> On 02/10/2011 12:47 PM, Avi Kivity wrote:
>> On 02/10/2011 12:46 PM, Michael Goldish wrote:
>>> On 02/10/2011 12:34 PM, Avi Kivity wrote:
>>>>  On 02/10/2011 11:14 AM, Michael Goldish wrote:
>>>>>  only Fedora..boot
>>>>>
>>>>
>>>>  So this would include Fedora.9.32.boot and Fedora.9.64.boot, but
>>> exclude
>>>>  Windows.XP.32.boot or Fedora.9.32.migrate?  seems reasonable.
>>>
>>> Correct, and it would also include boot.Fedora.9.32 and
>>> boot.9.32.Fedora, if there were such things.
>>
>> That's counterintuitive and requires careful planning.
> 
> I can't easily think of a case where this might cause confusion.  The
> purpose of this is to allow people to write:
> 
> only qcow2..raw..rtl8139
> 
> without having to remember the order in which those were defined in
> tests_base.cfg.

Sorry, I meant something like

only qcow2..hugepages..rtl8139

Obviously qcow2 and raw can't coexist.


* Re: [Autotest] [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10 10:57                 ` [Autotest] " Michael Goldish
@ 2011-02-10 11:03                   ` Avi Kivity
  2011-02-10 11:46                     ` Michael Goldish
  2011-02-10 13:45                     ` Eduardo Habkost
  0 siblings, 2 replies; 19+ messages in thread
From: Avi Kivity @ 2011-02-10 11:03 UTC (permalink / raw)
  To: Michael Goldish; +Cc: autotest, Uri Lublin, Eduardo Habkost, kvm

On 02/10/2011 12:57 PM, Michael Goldish wrote:
> >
> >  I can't easily think of a case where this might cause confusion.  The
> >  purpose of this is to allow people to write:
> >
> >  only qcow2..raw..rtl8139
> >
> >  without having to remember the order in which those were defined in
> >  tests_base.cfg.
>
> Sorry, I meant something like
>
> only qcow2..hugepages..rtl8139
>
> Obviously qcow2 and raw can't coexist.

The config files describe a cartesian product, in which order matters.

[A B C] x [1 2] generates [A1 A2 B1 B2 C1 C2]; no confusion here if you 
specify A..1

however

[A B C] x [A B] generates [AA AB BA BB CA CB]; A..B is ambiguous

we might require that keywords be unique.
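
The products described here can be reproduced directly with itertools.product; the snippet below (illustrative only) shows why reused keywords make bare filter terms ambiguous:

```python
from itertools import product

# [A B C] x [1 2] -> A.1 A.2 B.1 B.2 C.1 C.2: every keyword is unique,
# so a term like 'A..1' picks out exactly one generated name.
names = ['.'.join(combo) for combo in product('ABC', '12')]

# [A B C] x [A B] reuses keywords across axes: both 'A.B' and 'B.A'
# are generated, so the unordered term 'A..B' matches both of them.
reused = ['.'.join(combo) for combo in product('ABC', 'AB')]
```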

-- 
error compiling committee.c: too many arguments to function


* Re: [Autotest] [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10 11:03                   ` Avi Kivity
@ 2011-02-10 11:46                     ` Michael Goldish
  2011-02-10 13:45                     ` Eduardo Habkost
  1 sibling, 0 replies; 19+ messages in thread
From: Michael Goldish @ 2011-02-10 11:46 UTC (permalink / raw)
  To: Avi Kivity; +Cc: autotest, Uri Lublin, Eduardo Habkost, kvm

On 02/10/2011 01:03 PM, Avi Kivity wrote:
> On 02/10/2011 12:57 PM, Michael Goldish wrote:
>> >
>> >  I can't easily think of a case where this might cause confusion.  The
>> >  purpose of this is to allow people to write:
>> >
>> >  only qcow2..raw..rtl8139
>> >
>> >  without having to remember the order in which those were defined in
>> >  tests_base.cfg.
>>
>> Sorry, I meant something like
>>
>> only qcow2..hugepages..rtl8139
>>
>> Obviously qcow2 and raw can't coexist.
> 
> The config files describe a cartesian product, in which order matters.
> 
> [A B C] x [1 2] generates [A1 A2 B1 B2 C1 C2]; no confusion here if you
> specify A..1
> 
> however
> 
> [A B C] x [A B] generates [AA AB BA BB CA CB]; A..B is ambiguous

This is a bad idea anyway:

[A B C] x [A B] x [install boot migrate]

'only A..install' is ambiguous regardless of whether we match in-order
or not.

> we might require that keywords be unique.

Ambiguity can be resolved by prefixing a name with its immediate parent.
If we have Fedora.9.32 and Fedora.9.64, and some test 'foo' has both a
32-bit and a 64-bit version, then the following isn't ambiguous:

only Fedora.9.32..foo.32

If we require that keywords be unique, such combinations will not be
possible.  The same applies to RHEL.3..sometest.3.
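
The parent-prefix trick reduces to checking for a contiguous run of name components. A self-contained sketch, with a hypothetical helper name rather than the real parser's:

```python
def contains_run(name, term):
    """True if the '.'-joined 'term' appears as a contiguous run of
    components in the dot-separated test 'name'."""
    parts, words = name.split('.'), term.split('.')
    n = len(words)
    return any(parts[i:i + n] == words
               for i in range(len(parts) - n + 1))

# 'Fedora.9.32' and 'foo.32' each pin '32' to its immediate parent,
# so a filter like 'Fedora.9.32..foo.32' stays unambiguous even
# though the bare keyword '32' appears on two different axes.
```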


* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10  1:18   ` Amos Kong
@ 2011-02-10 12:42     ` Lucas Meneghel Rodrigues
  0 siblings, 0 replies; 19+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-02-10 12:42 UTC (permalink / raw)
  To: Amos Kong; +Cc: Avi Kivity, Michael Goldish, autotest, kvm, Uri Lublin

On Thu, 2011-02-10 at 09:18 +0800, Amos Kong wrote:
> On Wed, Feb 09, 2011 at 11:28:56AM +0200, Avi Kivity wrote:
> > On 02/09/2011 03:50 AM, Michael Goldish wrote:
> > >This is a reimplementation of the dict generator.  It is much faster than the
> > >current implementation and uses a very small amount of memory.  Running time
> > >and memory usage scale polynomially with the number of defined variants,
> > >compared to exponentially in the current implementation.
> > >
> > >Instead of regular expressions in the filters, the following syntax is used:
> > >
> > >, means OR
> > >.. means AND
> > >. means IMMEDIATELY-FOLLOWED-BY
> > >
> > >Example:
> > >
> > >only qcow2..Fedora.14, RHEL.6..raw..boot, smp2..qcow2..migrate..ide
> > >
> > 
> > 
> > Is it not possible to keep the old syntax?  Breaking people's
> > scripts is bad.
> 
> we only need to convert the config files; it's not too complex

Yes, the benefits of the new format outweigh the inconveniences. As for
my opinion on the operator, .. is sufficiently clear and expressive to
do most of what we need to do with configuration anyway.


* Re: [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py
  2011-02-10 11:03                   ` Avi Kivity
  2011-02-10 11:46                     ` Michael Goldish
@ 2011-02-10 13:45                     ` Eduardo Habkost
  1 sibling, 0 replies; 19+ messages in thread
From: Eduardo Habkost @ 2011-02-10 13:45 UTC (permalink / raw)
  To: Avi Kivity; +Cc: autotest, Uri Lublin, kvm

On Thu, Feb 10, 2011 at 01:03:53PM +0200, Avi Kivity wrote:
> On 02/10/2011 12:57 PM, Michael Goldish wrote:
> >>
> >>  I can't easily think of a case where this might cause confusion.  The
> >>  purpose of this is to allow people to write:
> >>
> >>  only qcow2..raw..rtl8139
> >>
> >>  without having to remember the order in which those were defined in
> >>  tests_base.cfg.
> >
> >Sorry, I meant something like
> >
> >only qcow2..hugepages..rtl8139
> >
> >Obviously qcow2 and raw can't coexist.
> 
> The config files describe a cartesian product, in which order matters.

Mathematically speaking, the ordering in the result is different, but BA
and AB are often equivalent for the user.

In many situations, people don't care in which order (as an example)
"qcow" and "ide" are defined on the base config, they just want to
exclude the combination of "qcow" and "ide".

> 
> [A B C] x [1 2] generates [A1 A2 B1 B2 C1 C2]; no confusion here if
> you specify A..1
> 
> however
> 
> [A B C] x [A B] generates [AA AB BA BB CA CB]; A..B is ambiguous

If you do the above and reuse keywords, "A" is also ambiguous, "B" is
also ambiguous. "A..B" being ambiguous is a consequence of "A" and "B"
being ambiguous. If you don't want to be ambiguous, just use "A.B" or
"B.A".

> 
> we might require that keywords be unique.

I wouldn't be against that. At least for the use cases I see, people
have been assuming that keywords are unique on most "only" and "no"
statements.

-- 
Eduardo


end of thread, other threads:[~2011-02-10 13:45 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-02-09  1:50 [KVM-AUTOTEST PATCH] KVM test: refactor kvm_config.py Michael Goldish
2011-02-09  2:56 ` Cleber Rosa
2011-02-09  9:28 ` Avi Kivity
2011-02-09 10:07   ` Michael Goldish
2011-02-09 10:19     ` Avi Kivity
2011-02-10  1:18   ` Amos Kong
2011-02-10 12:42     ` Lucas Meneghel Rodrigues
2011-02-09 16:06 ` Ryan Harper
2011-02-09 16:21   ` Eduardo Habkost
2011-02-09 23:31     ` [Autotest] " Ryan Harper
2011-02-10  9:14       ` Michael Goldish
2011-02-10 10:34         ` [Autotest] " Avi Kivity
2011-02-10 10:46           ` Michael Goldish
2011-02-10 10:47             ` Avi Kivity
2011-02-10 10:55               ` Michael Goldish
2011-02-10 10:57                 ` [Autotest] " Michael Goldish
2011-02-10 11:03                   ` Avi Kivity
2011-02-10 11:46                     ` Michael Goldish
2011-02-10 13:45                     ` Eduardo Habkost
