* Add ability for client part to start autotest like server part
@ 2011-08-26  7:12 Jiří Župka
  2011-08-26  7:12 ` [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to client part of autotest Jiří Župka
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Jiří Župka @ 2011-08-26  7:12 UTC (permalink / raw)
  To: kvm-autotest, kvm, autotest, lmr, ldoktor, akong; +Cc: jzupka

This patch series was created because the client part of autotest
has started to be used like the server part, and a lot of tests
(multicast, netperf) could be unified into a single test if it were
possible to start already-written tests on a virtual machine from
the client part of autotest.

The series gives the client part of autotest the ability to start
autotest on a remote system over the network, the same way the
server part does. More info is in the last patch of the series.
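
For illustration, a control file run by the client could then drive
a guest roughly like this. This is only a sketch: it assumes the
hosts package keeps its create_host() factory after patch 2/3, and
the guest address is made up.

    # hypothetical client-side control file
    from autotest_lib.client.common_lib import autotest, hosts

    guest = hosts.create_host('192.168.122.10')  # the virtual machine
    at = autotest.Autotest(guest)
    at.install()             # push the autotest client to the guest
    at.run_test('netperf2')  # reuse an existing client test on it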

[AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to
[AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side
[AUTOTEST][PATCH 3/3] autotest: Client/server part unification.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to client part of autotest.
  2011-08-26  7:12 Add ability for client part to start autotest like server part Jiří Župka
@ 2011-08-26  7:12 ` Jiří Župka
  2011-08-26  7:12 ` [AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side to client side Jiří Župka
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Jiří Župka @ 2011-08-26  7:12 UTC (permalink / raw)
  To: kvm-autotest, kvm, autotest, lmr, ldoktor, akong

This patch is part of a series that allows starting autotest tests
on another system from the client part of autotest.
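
The hunk header below leaves server/autotest.py with a single line,
presumably a compatibility import, so existing server-side callers
should only see the module move. A minimal sketch of the expected
import change (the shim line itself is an assumption; it is not
shown in this excerpt):

    # before: server-only implementation
    from autotest_lib.server import autotest
    # after: shared implementation in common_lib
    from autotest_lib.client.common_lib import autotest

    at = autotest.Autotest(host)  # same BaseAutotest API as before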

Signed-off-by: Jiří Župka <jzupka@redhat.com>
---
 client/common_lib/autotest.py           | 1080 ++++++++++++++++++++++++++++++
 client/common_lib/base_utils.py         |   99 +++-
 client/common_lib/installable_object.py |   38 ++
 client/common_lib/prebuild.py           |   63 ++
 server/autotest.py                      | 1082 +------------------------------
 server/autotest_unittest.py             |   24 +-
 server/base_utils.py                    |   99 +---
 server/git.py                           |    3 +-
 server/installable_object.py            |   38 --
 server/kernel.py                        |    2 +-
 server/prebuild.py                      |   63 --
 11 files changed, 1297 insertions(+), 1294 deletions(-)
 create mode 100644 client/common_lib/autotest.py
 create mode 100644 client/common_lib/installable_object.py
 create mode 100644 client/common_lib/prebuild.py
 delete mode 100644 server/installable_object.py
 delete mode 100644 server/prebuild.py

diff --git a/client/common_lib/autotest.py b/client/common_lib/autotest.py
new file mode 100644
index 0000000..b103fb3
--- /dev/null
+++ b/client/common_lib/autotest.py
@@ -0,0 +1,1080 @@
+# Copyright 2007 Google Inc. Released under the GPL v2
+
+import re, os, sys, traceback, subprocess, time, pickle, glob, tempfile
+import logging, getpass
+from autotest_lib.client.common_lib import installable_object, prebuild, utils
+from autotest_lib.client.common_lib import base_job, log, error, autotemp
+from autotest_lib.client.common_lib import global_config, packages
+from autotest_lib.client.common_lib import utils as client_utils
+
+AUTOTEST_SVN = 'svn://test.kernel.org/autotest/trunk/client'
+AUTOTEST_HTTP = 'http://test.kernel.org/svn/autotest/trunk/client'
+
+# Timeouts for powering down and up respectively
+HALT_TIME = 300
+BOOT_TIME = 1800
+CRASH_RECOVERY_TIME = 9000
+
+
+get_value = global_config.global_config.get_config_value
+autoserv_prebuild = get_value('AUTOSERV', 'enable_server_prebuild',
+                              type=bool, default=False)
+
+
+class AutodirNotFoundError(Exception):
+    """No Autotest installation could be found."""
+
+
+class BaseAutotest(installable_object.InstallableObject):
+    """
+    This class represents the Autotest program.
+
+    Autotest is used to run tests automatically and collect the results.
+    It also supports profilers.
+
+    Implementation details:
+    This is a leaf class in an abstract class hierarchy; it must
+    implement the unimplemented methods in parent classes.
+    """
+
+    def __init__(self, host=None):
+        self.host = host
+        self.got = False
+        self.installed = False
+        self.serverdir = utils.get_server_dir()
+        super(BaseAutotest, self).__init__()
+
+
+    install_in_tmpdir = False
+    @classmethod
+    def set_install_in_tmpdir(cls, flag):
+        """ Sets a flag that controls whether or not Autotest should by
+        default be installed in a "standard" directory (e.g.
+        /home/autotest, /usr/local/autotest) or a temporary directory. """
+        cls.install_in_tmpdir = flag
+
+
+    @classmethod
+    def get_client_autodir_paths(cls, host):
+        return global_config.global_config.get_config_value(
+                'AUTOSERV', 'client_autodir_paths', type=list)
+
+
+    @classmethod
+    def get_installed_autodir(cls, host):
+        """
+        Find where the Autotest client is installed on the host.
+        @returns an absolute path to an installed Autotest client root.
+        @raises AutodirNotFoundError if no Autotest installation can be found.
+        """
+        autodir = host.get_autodir()
+        if autodir:
+            logging.debug('Using existing host autodir: %s', autodir)
+            return autodir
+
+        for path in Autotest.get_client_autodir_paths(host):
+            try:
+                autotest_binary = os.path.join(path, 'bin', 'autotest')
+                host.run('test -x %s' % utils.sh_escape(autotest_binary))
+                host.run('test -w %s' % utils.sh_escape(path))
+                logging.debug('Found existing autodir at %s', path)
+                return path
+            except error.AutoservRunError:
+                logging.debug('%s does not exist on %s', autotest_binary,
+                              host.hostname)
+        raise AutodirNotFoundError
+
+
+    @classmethod
+    def get_install_dir(cls, host):
+        """
+        Determines the location where autotest should be installed on
+        host. If self.install_in_tmpdir is set, it will return a unique
+        temporary directory that autotest can be installed in. Otherwise, looks
+        for an existing installation to use; if none is found, looks for a
+        usable directory in the global config client_autodir_paths.
+        """
+        try:
+            install_dir = cls.get_installed_autodir(host)
+        except AutodirNotFoundError:
+            install_dir = cls._find_installable_dir(host)
+
+        if cls.install_in_tmpdir:
+            return host.get_tmp_dir(parent=install_dir)
+        return install_dir
+
+
+    @classmethod
+    def _find_installable_dir(cls, host):
+        client_autodir_paths = cls.get_client_autodir_paths(host)
+        for path in client_autodir_paths:
+            try:
+                host.run('mkdir -p %s' % utils.sh_escape(path))
+                host.run('test -w %s' % utils.sh_escape(path))
+                return path
+            except error.AutoservRunError:
+                logging.debug('Failed to create %s', path)
+        raise error.AutoservInstallError(
+                'Unable to find a place to install Autotest; tried %s' %
+                ', '.join(client_autodir_paths))
+
+
+    def get_fetch_location(self):
+        c = global_config.global_config
+        repos = c.get_config_value("PACKAGES", 'fetch_location', type=list,
+                                   default=[])
+        repos.reverse()
+        return repos
+
+
+    def install(self, host=None, autodir=None):
+        self._install(host=host, autodir=autodir)
+
+
+    def install_full_client(self, host=None, autodir=None):
+        self._install(host=host, autodir=autodir, use_autoserv=False,
+                      use_packaging=False)
+
+
+    def install_no_autoserv(self, host=None, autodir=None):
+        self._install(host=host, autodir=autodir, use_autoserv=False)
+
+
+    def _install_using_packaging(self, host, autodir):
+        repos = self.get_fetch_location()
+        if not repos:
+            raise error.PackageInstallError("No repos to install an "
+                                            "autotest client from")
+        pkgmgr = packages.PackageManager(autodir, hostname=host.hostname,
+                                         repo_urls=repos,
+                                         do_locking=False,
+                                         run_function=host.run,
+                                         run_function_dargs=dict(timeout=600))
+        # The packages dir is used to store all the packages that are
+        # fetched on that client (tests, deps, etc., as well as the
+        # client itself).
+        pkg_dir = os.path.join(autodir, 'packages')
+        # clean up the autodir except for the packages directory
+        host.run('cd %s && ls | grep -v "^packages$"'
+                 ' | xargs rm -rf && rm -rf .[^.]*' % autodir)
+        pkgmgr.install_pkg('autotest', 'client', pkg_dir, autodir,
+                           preserve_install_dir=True)
+        self.installed = True
+
+
+    def _install_using_send_file(self, host, autodir):
+        dirs_to_exclude = set(["tests", "site_tests", "deps", "profilers"])
+        light_files = [os.path.join(self.source_material, f)
+                       for f in os.listdir(self.source_material)
+                       if f not in dirs_to_exclude]
+        host.send_file(light_files, autodir, delete_dest=True)
+
+        # create empty dirs for all the stuff we excluded
+        commands = []
+        for path in dirs_to_exclude:
+            abs_path = os.path.join(autodir, path)
+            abs_path = utils.sh_escape(abs_path)
+            commands.append("mkdir -p '%s'" % abs_path)
+            commands.append("touch '%s'/__init__.py" % abs_path)
+        host.run(';'.join(commands))
+
+
+    def _install(self, host=None, autodir=None, use_autoserv=True,
+                 use_packaging=True):
+        """
+        Install autotest.  If get() was not called previously, an
+        attempt will be made to install from the autotest svn
+        repository.
+
+        @param host A Host instance on which autotest will be installed
+        @param autodir Location on the remote host to install to
+        @param use_autoserv Enable install modes that depend on the client
+            running with the autoserv harness
+        @param use_packaging Enable install modes that use the packaging system
+
+        @exception AutoservError if a tarball was not specified and
+            the target host does not have svn installed in its path
+        """
+        if not host:
+            host = self.host
+        if not self.got:
+            self.get()
+        host.wait_up(timeout=30)
+        host.setup()
+        logging.info("Installing autotest on %s", host.hostname)
+
+        # set up the autotest directory on the remote machine
+        if not autodir:
+            autodir = self.get_install_dir(host)
+        logging.info('Using installation dir %s', autodir)
+        host.set_autodir(autodir)
+        host.run('mkdir -p %s' % utils.sh_escape(autodir))
+
+        # make sure there are no files in $AUTODIR/results
+        results_path = os.path.join(autodir, 'results')
+        host.run('rm -rf %s/*' % utils.sh_escape(results_path),
+                 ignore_status=True)
+
+        # Fetch the autotest client from the nearest repository
+        if use_packaging:
+            try:
+                self._install_using_packaging(host, autodir)
+                return
+            except (error.PackageInstallError, error.AutoservRunError,
+                    global_config.ConfigError), e:
+                logging.info("Could not install autotest using the packaging "
+                             "system: %s. Trying other methods",  e)
+
+        # try to install from file or directory
+        if self.source_material:
+            c = global_config.global_config
+            supports_autoserv_packaging = c.get_config_value(
+                "PACKAGES", "serve_packages_from_autoserv", type=bool)
+            # Copy autotest recursively
+            if supports_autoserv_packaging and use_autoserv:
+                self._install_using_send_file(host, autodir)
+            else:
+                host.send_file(self.source_material, autodir, delete_dest=True)
+            logging.info("Installation of autotest completed")
+            self.installed = True
+            return
+
+        # if that fails try to install using svn
+        if utils.run('which svn').exit_status:
+            raise error.AutoservError('svn not found on target machine: %s' %
+                                      host.hostname)
+        try:
+            host.run('svn checkout %s %s' % (AUTOTEST_SVN, autodir))
+        except error.AutoservRunError, e:
+            host.run('svn checkout %s %s' % (AUTOTEST_HTTP, autodir))
+        logging.info("Installation of autotest completed")
+        self.installed = True
+
+
+    def uninstall(self, host=None):
+        """
+        Uninstall (i.e. delete) autotest. Removes the autotest client install
+        from the specified host.
+
+        @param host: a Host instance from which the client will be removed
+        """
+        if not self.installed:
+            return
+        if not host:
+            host = self.host
+        autodir = host.get_autodir()
+        if not autodir:
+            return
+
+        # perform the actual uninstall
+        host.run("rm -rf %s" % utils.sh_escape(autodir), ignore_status=True)
+        host.set_autodir(None)
+        self.installed = False
+
+
+    def get(self, location=None):
+        if not location:
+            location = os.path.join(self.serverdir, '../client')
+            location = os.path.abspath(location)
+        # If a previous run has left output in our client directory, it
+        # can cause problems. Try giving it a quick clean first.
+        cwd = os.getcwd()
+        os.chdir(location)
+        try:
+            utils.system('tools/make_clean', ignore_status=True)
+        finally:
+            os.chdir(cwd)
+        super(BaseAutotest, self).get(location)
+        self.got = True
+
+
+    def run(self, control_file, results_dir='.', host=None, timeout=None,
+            tag=None, parallel_flag=False, background=False,
+            client_disconnect_timeout=1800):
+        """
+        Run an autotest job on the remote machine.
+
+        @param control_file: An open file-like-obj of the control file.
+        @param results_dir: A str path where the results should be stored
+                on the local filesystem.
+        @param host: A Host instance on which the control file should
+                be run.
+        @param timeout: Maximum number of seconds to wait for the run or None.
+        @param tag: Tag name for the client side instance of autotest.
+        @param parallel_flag: Flag set when multiple jobs are run at the
+                same time.
+        @param background: Indicates that the client should be launched as
+                a background job; the code calling run will be responsible
+                for monitoring the client and collecting the results.
+        @param client_disconnect_timeout: Seconds to wait for the remote host
+                to come back after a reboot.  [default: 30 minutes]
+
+        @raises AutotestRunError: If there is a problem executing
+                the control file.
+        """
+        host = self._get_host_and_setup(host)
+        results_dir = os.path.abspath(results_dir)
+
+        if tag:
+            results_dir = os.path.join(results_dir, tag)
+
+        atrun = _Run(host, results_dir, tag, parallel_flag, background)
+        self._do_run(control_file, results_dir, host, atrun, timeout,
+                     client_disconnect_timeout)
+
+
+    def _get_host_and_setup(self, host):
+        if not host:
+            host = self.host
+        if not self.installed:
+            self.install(host)
+
+        host.wait_up(timeout=30)
+        return host
+
+
+    def _do_run(self, control_file, results_dir, host, atrun, timeout,
+                client_disconnect_timeout):
+        try:
+            atrun.verify_machine()
+        except:
+            logging.error("Verify failed on %s. Reinstalling autotest",
+                          host.hostname)
+            self.install(host)
+        atrun.verify_machine()
+        debug = os.path.join(results_dir, 'debug')
+        try:
+            os.makedirs(debug)
+        except Exception:
+            pass
+
+        delete_file_list = [atrun.remote_control_file,
+                            atrun.remote_control_file + '.state',
+                            atrun.manual_control_file,
+                            atrun.manual_control_file + '.state']
+        cmd = ';'.join('rm -f ' + control for control in delete_file_list)
+        host.run(cmd, ignore_status=True)
+
+        tmppath = utils.get(control_file)
+
+        # build up the initialization prologue for the control file
+        prologue_lines = []
+
+        # Add the additional user arguments
+        prologue_lines.append("args = %r\n" % self.job.args)
+
+        # If the packaging system is being used, add the repository list.
+        repos = None
+        try:
+            repos = self.get_fetch_location()
+            pkgmgr = packages.PackageManager('autotest', hostname=host.hostname,
+                                             repo_urls=repos)
+            prologue_lines.append('job.add_repository(%s)\n' % repos)
+        except global_config.ConfigError, e:
+            # If repos is defined packaging is enabled so log the error
+            if repos:
+                logging.error(e)
+
+        # on full-size installs, turn on any profilers the server is using
+        if not atrun.background:
+            running_profilers = host.job.profilers.add_log.iteritems()
+            for profiler, (args, dargs) in running_profilers:
+                call_args = [repr(profiler)]
+                call_args += [repr(arg) for arg in args]
+                call_args += ["%s=%r" % item for item in dargs.iteritems()]
+                prologue_lines.append("job.profilers.add(%s)\n"
+                                      % ", ".join(call_args))
+        cfile = "".join(prologue_lines)
+
+        cfile += open(tmppath).read()
+        open(tmppath, "w").write(cfile)
+
+        # Create and copy state file to remote_control_file + '.state'
+        state_file = host.job.preprocess_client_state()
+        host.send_file(state_file, atrun.remote_control_file + '.init.state')
+        os.remove(state_file)
+
+        # Copy control_file to remote_control_file on the host
+        host.send_file(tmppath, atrun.remote_control_file)
+        if os.path.abspath(tmppath) != os.path.abspath(control_file):
+            os.remove(tmppath)
+
+        atrun.execute_control(
+                timeout=timeout,
+                client_disconnect_timeout=client_disconnect_timeout)
+
+
+    def run_timed_test(self, test_name, results_dir='.', host=None,
+                       timeout=None, *args, **dargs):
+        """
+        Assemble a tiny little control file to just run one test,
+        and run it as an autotest client-side test
+        """
+        if not host:
+            host = self.host
+        if not self.installed:
+            self.install(host)
+        opts = ["%s=%s" % (o[0], repr(o[1])) for o in dargs.items()]
+        cmd = ", ".join([repr(test_name)] + map(repr, args) + opts)
+        control = "job.run_test(%s)\n" % cmd
+        self.run(control, results_dir, host, timeout=timeout)
+
+
+    def run_test(self, test_name, results_dir='.', host=None, *args, **dargs):
+        self.run_timed_test(test_name, results_dir, host, timeout=None,
+                            *args, **dargs)
+
+
+class _BaseRun(object):
+    """
+    Represents a run of autotest control file.  This class maintains
+    all the state necessary as an autotest control file is executed.
+
+    It is not intended to be used directly, rather control files
+    should be run using the run method in Autotest.
+    """
+    def __init__(self, host, results_dir, tag, parallel_flag, background):
+        self.host = host
+        self.results_dir = results_dir
+        self.env = host.env
+        self.tag = tag
+        self.parallel_flag = parallel_flag
+        self.background = background
+        self.autodir = Autotest.get_installed_autodir(self.host)
+        control = os.path.join(self.autodir, 'control')
+        if tag:
+            control += '.' + tag
+        self.manual_control_file = control
+        self.remote_control_file = control + '.autoserv'
+        self.config_file = os.path.join(self.autodir, 'global_config.ini')
+
+
+    def verify_machine(self):
+        binary = os.path.join(self.autodir, 'bin/autotest')
+        try:
+            self.host.run('ls %s > /dev/null 2>&1' % binary)
+        except:
+            raise error.AutoservInstallError(
+                "Autotest does not appear to be installed")
+
+        if not self.parallel_flag:
+            tmpdir = os.path.join(self.autodir, 'tmp')
+            download = os.path.join(self.autodir, 'tests/download')
+            self.host.run('umount %s' % tmpdir, ignore_status=True)
+            self.host.run('umount %s' % download, ignore_status=True)
+
+
+    def get_base_cmd_args(self, section):
+        args = ['--verbose']
+        if section > 0:
+            args.append('-c')
+        if self.tag:
+            args.append('-t %s' % self.tag)
+        if self.host.job.use_external_logging():
+            args.append('-l')
+        if self.host.hostname:
+            args.append('--hostname=%s' % self.host.hostname)
+        args.append('--user=%s' % self.host.job.user)
+
+        args.append(self.remote_control_file)
+        return args
+
+
+    def get_background_cmd(self, section):
+        cmd = ['nohup', os.path.join(self.autodir, 'bin/autotest_client')]
+        cmd += self.get_base_cmd_args(section)
+        cmd += ['>/dev/null', '2>/dev/null', '&']
+        return ' '.join(cmd)
+
+
+    def get_daemon_cmd(self, section, monitor_dir):
+        cmd = ['nohup', os.path.join(self.autodir, 'bin/autotestd'),
+               monitor_dir, '-H autoserv']
+        cmd += self.get_base_cmd_args(section)
+        cmd += ['>/dev/null', '2>/dev/null', '&']
+        return ' '.join(cmd)
+
+
+    def get_monitor_cmd(self, monitor_dir, stdout_read, stderr_read):
+        cmd = [os.path.join(self.autodir, 'bin', 'autotestd_monitor'),
+               monitor_dir, str(stdout_read), str(stderr_read)]
+        return ' '.join(cmd)
+
+
+    def get_client_log(self):
+        """Find what the "next" client.* prefix should be
+
+        @returns A string of the form client.INTEGER that should be prefixed
+            to all client debug log files.
+        """
+        max_digit = -1
+        debug_dir = os.path.join(self.results_dir, 'debug')
+        client_logs = glob.glob(os.path.join(debug_dir, 'client.*.*'))
+        for log in client_logs:
+            _, number, _ = log.split('.', 2)
+            if number.isdigit():
+                max_digit = max(max_digit, int(number))
+        return 'client.%d' % (max_digit + 1)
+
+
+    def copy_client_config_file(self, client_log_prefix=None):
+        """
+        Create and copy the client config file based on the server config.
+
+        @param client_log_prefix: Optional prefix to prepend to log files.
+        """
+        client_config_file = self._create_client_config_file(client_log_prefix)
+        self.host.send_file(client_config_file, self.config_file)
+        os.remove(client_config_file)
+
+
+    def _create_client_config_file(self, client_log_prefix=None):
+        """
+        Create a temporary file with the [CLIENT] section configuration values
+        taken from the server global_config.ini.
+
+        @param client_log_prefix: Optional prefix to prepend to log files.
+
+        @return: Path of the temporary file generated.
+        """
+        config = global_config.global_config.get_section_values('CLIENT')
+        if client_log_prefix:
+            config.set('CLIENT', 'default_logging_name', client_log_prefix)
+        return self._create_aux_file(config.write)
+
+
+    def _create_aux_file(self, func, *args):
+        """
+        Creates a temporary file and writes content to it according to a
+        content creation function. The file object is appended to *args, which
+        is then passed to the content creation function
+
+        @param func: Function that will be used to write content to the
+                temporary file.
+        @param *args: List of parameters that func takes.
+        @return: Path to the temporary file that was created.
+        """
+        fd, path = tempfile.mkstemp(dir=self.host.job.tmpdir)
+        aux_file = os.fdopen(fd, "w")
+        try:
+            list_args = list(args)
+            list_args.append(aux_file)
+            func(*list_args)
+        finally:
+            aux_file.close()
+        return path
+
+
+    @staticmethod
+    def is_client_job_finished(last_line):
+        return bool(re.match(r'^END .*\t----\t----\t.*$', last_line))
+
+
+    @staticmethod
+    def is_client_job_rebooting(last_line):
+        return bool(re.match(r'^\t*GOOD\t----\treboot\.start.*$', last_line))
+
+
+    def log_unexpected_abort(self, stderr_redirector):
+        stderr_redirector.flush_all_buffers()
+        msg = "Autotest client terminated unexpectedly"
+        self.host.job.record("END ABORT", None, None, msg)
+
+
+    def _execute_in_background(self, section, timeout):
+        full_cmd = self.get_background_cmd(section)
+        devnull = open(os.devnull, "w")
+
+        self.copy_client_config_file(self.get_client_log())
+
+        self.host.job.push_execution_context(self.results_dir)
+        try:
+            result = self.host.run(full_cmd, ignore_status=True,
+                                   timeout=timeout,
+                                   stdout_tee=devnull,
+                                   stderr_tee=devnull)
+        finally:
+            self.host.job.pop_execution_context()
+
+        return result
+
+
+    @staticmethod
+    def _strip_stderr_prologue(stderr):
+        """Strips the 'standard' prologue that get pre-pended to every
+        remote command and returns the text that was actually written to
+        stderr by the remote command."""
+        stderr_lines = stderr.split("\n")[1:]
+        if not stderr_lines:
+            return ""
+        elif stderr_lines[0].startswith("NOTE: autotestd_monitor"):
+            del stderr_lines[0]
+        return "\n".join(stderr_lines)
+
+
+    def _execute_daemon(self, section, timeout, stderr_redirector,
+                        client_disconnect_timeout):
+        monitor_dir = self.host.get_tmp_dir()
+        daemon_cmd = self.get_daemon_cmd(section, monitor_dir)
+
+        # grab the location for the server-side client log file
+        client_log_prefix = self.get_client_log()
+        client_log_path = os.path.join(self.results_dir, 'debug',
+                                       client_log_prefix + '.log')
+        client_log = open(client_log_path, 'w', 0)
+        self.copy_client_config_file(client_log_prefix)
+
+        stdout_read = stderr_read = 0
+        self.host.job.push_execution_context(self.results_dir)
+        try:
+            self.host.run(daemon_cmd, ignore_status=True, timeout=timeout)
+            disconnect_warnings = []
+            while True:
+                monitor_cmd = self.get_monitor_cmd(monitor_dir, stdout_read,
+                                                   stderr_read)
+                try:
+                    result = self.host.run(monitor_cmd, ignore_status=True,
+                                           timeout=timeout,
+                                           stdout_tee=client_log,
+                                           stderr_tee=stderr_redirector)
+                except error.AutoservRunError, e:
+                    result = e.result_obj
+                    result.exit_status = None
+                    disconnect_warnings.append(e.description)
+
+                    stderr_redirector.log_warning(
+                        "Autotest client was disconnected: %s" % e.description,
+                        "NETWORK")
+                except error.AutoservSSHTimeout:
+                    result = utils.CmdResult(monitor_cmd, "", "", None, 0)
+                    stderr_redirector.log_warning(
+                        "Attempt to connect to Autotest client timed out",
+                        "NETWORK")
+
+                stdout_read += len(result.stdout)
+                stderr_read += len(self._strip_stderr_prologue(result.stderr))
+
+                if result.exit_status is not None:
+                    return result
+                elif not self.host.wait_up(client_disconnect_timeout):
+                    raise error.AutoservSSHTimeout(
+                        "client was disconnected, reconnect timed out")
+        finally:
+            client_log.close()
+            self.host.job.pop_execution_context()
+
+
+    def execute_section(self, section, timeout, stderr_redirector,
+                        client_disconnect_timeout):
+        logging.info("Executing %s/bin/autotest %s/control phase %d",
+                     self.autodir, self.autodir, section)
+
+        if self.background:
+            result = self._execute_in_background(section, timeout)
+        else:
+            result = self._execute_daemon(section, timeout, stderr_redirector,
+                                          client_disconnect_timeout)
+
+        last_line = stderr_redirector.last_line
+
+        # check if we failed hard enough to warrant an exception
+        if result.exit_status == 1:
+            err = error.AutotestRunError("client job was aborted")
+        elif not self.background and not result.stderr:
+            err = error.AutotestRunError(
+                "execute_section %s failed to return anything\n"
+                "stdout:%s\n" % (section, result.stdout))
+        else:
+            err = None
+
+        # log something if the client failed AND never finished logging
+        if err and not self.is_client_job_finished(last_line):
+            self.log_unexpected_abort(stderr_redirector)
+
+        if err:
+            raise err
+        else:
+            return stderr_redirector.last_line
+
+
+    def _wait_for_reboot(self, old_boot_id):
+        logging.info("Client is rebooting")
+        logging.info("Waiting for client to halt")
+        if not self.host.wait_down(HALT_TIME, old_boot_id=old_boot_id):
+            err = "%s failed to shutdown after %d"
+            err %= (self.host.hostname, HALT_TIME)
+            raise error.AutotestRunError(err)
+        logging.info("Client down, waiting for restart")
+        if not self.host.wait_up(BOOT_TIME):
+            # since the reboot failed, hard reset the machine once
+            # (if possible) before failing this control file
+            warning = "%s did not come back up, hard resetting"
+            warning %= self.host.hostname
+            logging.warning(warning)
+            try:
+                self.host.hardreset(wait=False)
+            except (AttributeError, error.AutoservUnsupportedError):
+                warning = "Hard reset unsupported on %s"
+                warning %= self.host.hostname
+                logging.warning(warning)
+            raise error.AutotestRunError("%s failed to boot after %ds" %
+                                         (self.host.hostname, BOOT_TIME))
+        self.host.reboot_followup()
+
+
+    def execute_control(self, timeout=None, client_disconnect_timeout=None):
+        if not self.background:
+            collector = log_collector(self.host, self.tag, self.results_dir)
+            hostname = self.host.hostname
+            remote_results = collector.client_results_dir
+            local_results = collector.server_results_dir
+            self.host.job.add_client_log(hostname, remote_results,
+                                         local_results)
+            job_record_context = self.host.job.get_record_context()
+
+        section = 0
+        start_time = time.time()
+
+        logger = client_logger(self.host, self.tag, self.results_dir)
+        try:
+            while not timeout or time.time() < start_time + timeout:
+                if timeout:
+                    section_timeout = start_time + timeout - time.time()
+                else:
+                    section_timeout = None
+                boot_id = self.host.get_boot_id()
+                last = self.execute_section(section, section_timeout,
+                                            logger, client_disconnect_timeout)
+                if self.background:
+                    return
+                section += 1
+                if self.is_client_job_finished(last):
+                    logging.info("Client complete")
+                    return
+                elif self.is_client_job_rebooting(last):
+                    try:
+                        self._wait_for_reboot(boot_id)
+                    except error.AutotestRunError, e:
+                        self.host.job.record("ABORT", None, "reboot", str(e))
+                        self.host.job.record("END ABORT", None, None, str(e))
+                        raise
+                    continue
+
+                # if we reach here, something unexpected happened
+                self.log_unexpected_abort(logger)
+
+                # give the client machine a chance to recover from a crash
+                self.host.wait_up(CRASH_RECOVERY_TIME)
+                msg = ("Aborting - unexpected final status message from "
+                       "client on %s: %s\n") % (self.host.hostname, last)
+                raise error.AutotestRunError(msg)
+        finally:
+            logger.close()
+            if not self.background:
+                collector.collect_client_job_results()
+                collector.remove_redundant_client_logs()
+                state_file = os.path.basename(self.remote_control_file
+                                              + '.state')
+                state_path = os.path.join(self.results_dir, state_file)
+                self.host.job.postprocess_client_state(state_path)
+                self.host.job.remove_client_log(hostname, remote_results,
+                                                local_results)
+                job_record_context.restore()
+
+        # should only get here if we timed out
+        assert timeout
+        raise error.AutotestTimeoutError()
+
+
+class log_collector(object):
+    def __init__(self, host, client_tag, results_dir):
+        self.host = host
+        if not client_tag:
+            client_tag = "default"
+        self.client_results_dir = os.path.join(host.get_autodir(), "results",
+                                               client_tag)
+        self.server_results_dir = results_dir
+
+
+    def collect_client_job_results(self):
+        """ A method that collects all the current results of a running
+        client job into the results dir. By default does nothing as no
+        client job is running, but when running a client job you can override
+        this with something that will actually do something. """
+
+        # make an effort to wait for the machine to come up
+        try:
+            self.host.wait_up(timeout=30)
+        except error.AutoservError:
+            # don't worry about any errors, we'll try and
+            # get the results anyway
+            pass
+
+        # Copy all dirs in default to results_dir
+        try:
+            self.host.get_file(self.client_results_dir + '/',
+                               self.server_results_dir, preserve_symlinks=True)
+        except Exception:
+            # well, don't stop running just because we couldn't get logs
+            e_msg = "Unexpected error copying test result logs, continuing ..."
+            logging.error(e_msg)
+            traceback.print_exc(file=sys.stdout)
+
+
+    def remove_redundant_client_logs(self):
+        """Remove client.*.log files in favour of client.*.DEBUG files."""
+        debug_dir = os.path.join(self.server_results_dir, 'debug')
+        debug_files = [f for f in os.listdir(debug_dir)
+                       if re.search(r'^client\.\d+\.DEBUG$', f)]
+        for debug_file in debug_files:
+            log_file = debug_file.replace('DEBUG', 'log')
+            log_file = os.path.join(debug_dir, log_file)
+            if os.path.exists(log_file):
+                os.remove(log_file)
+
+
+# a file-like object for catching stderr from an autotest client and
+# extracting status logs from it
+class client_logger(object):
+    """Partial file object to write to both stdout and
+    the status log file.  We only implement those methods
+    utils.run() actually calls.
+    """
+    status_parser = re.compile(r"^AUTOTEST_STATUS:([^:]*):(.*)$")
+    test_complete_parser = re.compile(r"^AUTOTEST_TEST_COMPLETE:(.*)$")
+    fetch_package_parser = re.compile(
+        r"^AUTOTEST_FETCH_PACKAGE:([^:]*):([^:]*):(.*)$")
+    extract_indent = re.compile(r"^(\t*).*$")
+    extract_timestamp = re.compile(r".*\ttimestamp=(\d+)\t.*$")
+
+    def __init__(self, host, tag, server_results_dir):
+        self.host = host
+        self.job = host.job
+        self.log_collector = log_collector(host, tag, server_results_dir)
+        self.leftover = ""
+        self.last_line = ""
+        self.logs = {}
+
+
+    def _process_log_dict(self, log_dict):
+        log_list = log_dict.pop("logs", [])
+        for key in sorted(log_dict.iterkeys()):
+            log_list += self._process_log_dict(log_dict.pop(key))
+        return log_list
+
+
+    def _process_logs(self):
+        """Go through the accumulated logs in self.log and print them
+        out to stdout and the status log. Note that this processes
+        logs in an ordering where:
+
+        1) logs to different tags are never interleaved
+        2) logs to x.y come before logs to x.y.z for all z
+        3) logs to x.y come before x.z whenever y < z
+
+        Note that this will in general not be the same as the
+        chronological ordering of the logs. However, if a chronological
+        ordering is desired, it can be reconstructed from the
+        status log by looking at timestamp lines."""
+        log_list = self._process_log_dict(self.logs)
+        for entry in log_list:
+            self.job.record_entry(entry, log_in_subdir=False)
+        if log_list:
+            self.last_line = log_list[-1].render()
+
+
+    def _process_quoted_line(self, tag, line):
+        """Process a line quoted with an AUTOTEST_STATUS flag. If the
+        tag is blank then we want to push out all the data we've been
+        building up in self.logs, and then the newest line. If the
+        tag is not blank, then push the line into the logs for handling
+        later."""
+        entry = base_job.status_log_entry.parse(line)
+        if entry is None:
+            return  # the line contains no status lines
+        if tag == "":
+            self._process_logs()
+            self.job.record_entry(entry, log_in_subdir=False)
+            self.last_line = line
+        else:
+            tag_parts = [int(x) for x in tag.split(".")]
+            log_dict = self.logs
+            for part in tag_parts:
+                log_dict = log_dict.setdefault(part, {})
+            log_list = log_dict.setdefault("logs", [])
+            log_list.append(entry)
+
+
+    def _process_info_line(self, line):
+        """Check if line is an INFO line, and if it is, interpret any control
+        messages (e.g. enabling/disabling warnings) that it may contain."""
+        match = re.search(r"^\t*INFO\t----\t----(.*)\t[^\t]*$", line)
+        if not match:
+            return   # not an INFO line
+        for field in match.group(1).split('\t'):
+            if field.startswith("warnings.enable="):
+                func = self.job.warning_manager.enable_warnings
+            elif field.startswith("warnings.disable="):
+                func = self.job.warning_manager.disable_warnings
+            else:
+                continue
+            warning_type = field.split("=", 1)[1]
+            func(warning_type)
+
+
+    def _process_line(self, line):
+        """Write out a line of data to the appropriate stream. Status
+        lines sent by autotest will be prepended with
+        "AUTOTEST_STATUS", and all other lines are ssh error
+        messages."""
+        status_match = self.status_parser.search(line)
+        test_complete_match = self.test_complete_parser.search(line)
+        fetch_package_match = self.fetch_package_parser.search(line)
+        if status_match:
+            tag, line = status_match.groups()
+            self._process_info_line(line)
+            self._process_quoted_line(tag, line)
+        elif test_complete_match:
+            self._process_logs()
+            fifo_path, = test_complete_match.groups()
+            try:
+                self.log_collector.collect_client_job_results()
+                self.host.run("echo A > %s" % fifo_path)
+            except Exception:
+                msg = "Post-test log collection failed, continuing anyway"
+                logging.exception(msg)
+        elif fetch_package_match:
+            pkg_name, dest_path, fifo_path = fetch_package_match.groups()
+            serve_packages = global_config.global_config.get_config_value(
+                "PACKAGES", "serve_packages_from_autoserv", type=bool)
+            if serve_packages and pkg_name.endswith(".tar.bz2"):
+                try:
+                    self._send_tarball(pkg_name, dest_path)
+                except Exception:
+                    msg = "Package tarball creation failed, continuing anyway"
+                    logging.exception(msg)
+            try:
+                self.host.run("echo B > %s" % fifo_path)
+            except Exception:
+                msg = "Package tarball installation failed, continuing anyway"
+                logging.exception(msg)
+        else:
+            logging.info(line)
+
+
+    def _send_tarball(self, pkg_name, remote_dest):
+        name, pkg_type = self.job.pkgmgr.parse_tarball_name(pkg_name)
+        src_dirs = []
+        if pkg_type == 'test':
+            for test_dir in ['site_tests', 'tests']:
+                src_dir = os.path.join(self.job.clientdir, test_dir, name)
+                if os.path.exists(src_dir):
+                    src_dirs += [src_dir]
+                    if autoserv_prebuild:
+                        prebuild.setup(self.job.clientdir, src_dir)
+                    break
+        elif pkg_type == 'profiler':
+            src_dirs += [os.path.join(self.job.clientdir, 'profilers', name)]
+            if autoserv_prebuild:
+                prebuild.setup(self.job.clientdir, src_dirs[0])
+        elif pkg_type == 'dep':
+            src_dirs += [os.path.join(self.job.clientdir, 'deps', name)]
+        elif pkg_type == 'client':
+            return  # you must already have a client to hit this anyway
+        else:
+            return  # no other types are supported
+
+        # iterate over src_dirs until we find one that exists, then tar it
+        for src_dir in src_dirs:
+            if os.path.exists(src_dir):
+                try:
+                    logging.info('Bundling %s into %s', src_dir, pkg_name)
+                    temp_dir = autotemp.tempdir(unique_id='autoserv-packager',
+                                                dir=self.job.tmpdir)
+                    tarball_path = self.job.pkgmgr.tar_package(
+                        pkg_name, src_dir, temp_dir.name, " .")
+                    self.host.send_file(tarball_path, remote_dest)
+                finally:
+                    temp_dir.clean()
+                return
+
+
+    def log_warning(self, msg, warning_type):
+        """Injects a WARN message into the current status logging stream."""
+        timestamp = int(time.time())
+        if self.job.warning_manager.is_valid(timestamp, warning_type):
+            self.job.record('WARN', None, None, msg)
+
+
+    def write(self, data):
+        # now start processing the existing buffer and the new data
+        data = self.leftover + data
+        lines = data.split('\n')
+        processed_lines = 0
+        try:
+            # process all the buffered data except the last line
+            # ignore the last line since we may not have all of it yet
+            for line in lines[:-1]:
+                self._process_line(line)
+                processed_lines += 1
+        finally:
+            # save any unprocessed lines for future processing
+            self.leftover = '\n'.join(lines[processed_lines:])
+
+
+    def flush(self):
+        sys.stdout.flush()
+
+
+    def flush_all_buffers(self):
+        if self.leftover:
+            self._process_line(self.leftover)
+            self.leftover = ""
+        self._process_logs()
+        self.flush()
+
+
+    def close(self):
+        self.flush_all_buffers()
+
+
+SiteAutotest = client_utils.import_site_class(
+    __file__, "autotest_lib.server.site_autotest", "SiteAutotest",
+    BaseAutotest)
+
+
+_SiteRun = client_utils.import_site_class(
+    __file__, "autotest_lib.server.site_autotest", "_SiteRun", _BaseRun)
+
+
+class Autotest(SiteAutotest):
+    pass
+
+
+class _Run(_SiteRun):
+    pass
+
+
+class AutotestHostMixin(object):
+    """A generic mixin to add a run_test method to classes, which will allow
+    you to run an autotest client test on a machine directly."""
+
+    # for testing purposes
+    _Autotest = Autotest
+
+    def run_test(self, test_name, **dargs):
+        """Run an autotest client test on the host.
+
+        @param test_name: The name of the client test.
+        @param dargs: Keyword arguments to pass to the test.
+
+        @returns: True if the test passes, False otherwise."""
+        at = self._Autotest()
+        control_file = ('result = job.run_test(%s)\n'
+                        'job.set_state("test_result", result)\n')
+        test_args = [repr(test_name)]
+        test_args += ['%s=%r' % (k, v) for k, v in dargs.iteritems()]
+        control_file %= ', '.join(test_args)
+        at.run(control_file, host=self)
+        return at.job.get_state('test_result', default=False)
diff --git a/client/common_lib/base_utils.py b/client/common_lib/base_utils.py
index ad19e80..26b9fb5 100644
--- a/client/common_lib/base_utils.py
+++ b/client/common_lib/base_utils.py
@@ -2,8 +2,8 @@
 # Copyright 2008 Google Inc. Released under the GPL v2
 
 import os, pickle, random, re, resource, select, shutil, signal, StringIO
-import socket, struct, subprocess, sys, time, textwrap, urlparse
-import warnings, smtplib, logging, urllib2
+import socket, struct, subprocess, sys, time, textwrap, urlparse, tempfile
+import warnings, smtplib, logging, urllib2, types
 from threading import Thread, Event
 try:
     import hashlib
@@ -1343,6 +1343,101 @@ class run_randomly:
             fn(*args, **dargs)
 
 
+def get_server_dir():
+    path = os.path.dirname(sys.modules['autotest_lib.server.utils'].__file__)
+    return os.path.abspath(path)
+
+
+# A dictionary of pid and a list of tmpdirs for that pid
+__tmp_dirs = {}
+
+
+def clean_tmp_dirs():
+    """Erase temporary directories that were created by the get_tmp_dir()
+    function and that are still present.
+    """
+    pid = os.getpid()
+    if pid not in __tmp_dirs:
+        return
+    for dir in __tmp_dirs[pid]:
+        try:
+            shutil.rmtree(dir)
+        except OSError, e:
+            if e.errno == 2:
+                pass
+    __tmp_dirs[pid] = []
+
+
+def get_tmp_dir():
+    """Return the pathname of a directory on the host suitable
+    for temporary file storage.
+
+    The directory and its content will be deleted automatically
+    at the end of the program execution if they are still present.
+    """
+    dir_name = tempfile.mkdtemp(prefix="autoserv-")
+    pid = os.getpid()
+    if pid not in __tmp_dirs:
+        __tmp_dirs[pid] = []
+    __tmp_dirs[pid].append(dir_name)
+    return dir_name
+
+
+def get(location, local_copy=False):
+    """Get a file or directory to a local temporary directory.
+
+    Args:
+            location: the source of the material to get. This source may
+                    be one of:
+                    * a local file or directory
+                    * a URL (http or ftp)
+                    * a python file-like object
+
+    Returns:
+            The location of the file or directory where the requested
+            content was saved. This will be contained in a temporary
+            directory on the local host. If the material to get was a
+            directory, the location will contain a trailing '/'
+    """
+    tmpdir = get_tmp_dir()
+
+    # location is a file-like object
+    if hasattr(location, "read"):
+        tmpfile = os.path.join(tmpdir, "file")
+        tmpfileobj = file(tmpfile, 'w')
+        shutil.copyfileobj(location, tmpfileobj)
+        tmpfileobj.close()
+        return tmpfile
+
+    if isinstance(location, types.StringTypes):
+        # location is a URL
+        if location.startswith('http') or location.startswith('ftp'):
+            tmpfile = os.path.join(tmpdir, os.path.basename(location))
+            urlretrieve(location, tmpfile)
+            return tmpfile
+        # location is a local path
+        elif os.path.exists(os.path.abspath(location)):
+            if not local_copy:
+                if os.path.isdir(location):
+                    return location.rstrip('/') + '/'
+                else:
+                    return location
+            tmpfile = os.path.join(tmpdir, os.path.basename(location))
+            if os.path.isdir(location):
+                tmpfile += '/'
+                shutil.copytree(location, tmpfile, symlinks=True)
+                return tmpfile
+            shutil.copyfile(location, tmpfile)
+            return tmpfile
+        # location is just a string, dump it to a file
+        else:
+            tmpfd, tmpfile = tempfile.mkstemp(dir=tmpdir)
+            tmpfileobj = os.fdopen(tmpfd, 'w')
+            tmpfileobj.write(location)
+            tmpfileobj.close()
+            return tmpfile
+
+
 def import_site_module(path, module, dummy=None, modulefile=None):
     """
     Try to import the site specific module if it exists.
diff --git a/client/common_lib/installable_object.py b/client/common_lib/installable_object.py
new file mode 100644
index 0000000..e0afc52
--- /dev/null
+++ b/client/common_lib/installable_object.py
@@ -0,0 +1,38 @@
+from autotest_lib.client.common_lib import utils
+
+
+class InstallableObject(object):
+    """
+    This class represents a software package that can be installed on
+    a Host.
+
+    Implementation details:
+    This is an abstract class, leaf subclasses must implement the methods
+    listed here. You must not instantiate this class but should
+    instantiate one of those leaf subclasses.
+    """
+
+    source_material = None
+
+    def __init__(self):
+        super(InstallableObject, self).__init__()
+
+
+    def get(self, location):
+        """
+        Get the source material required to install the object.
+
+        Through the utils.get() function, the argument passed will be
+        saved in a temporary location on the LocalHost. That location
+        is saved in the source_material attribute.
+
+        Args:
+                location: the path to the source material. This path
+                        may be of any type that the utils.get()
+                        function will accept.
+        """
+        self.source_material = utils.get(location)
+
+
+    def install(self, host):
+        pass
diff --git a/client/common_lib/prebuild.py b/client/common_lib/prebuild.py
new file mode 100644
index 0000000..b13fd7b
--- /dev/null
+++ b/client/common_lib/prebuild.py
@@ -0,0 +1,63 @@
+# Copyright 2010 Google Inc. Released under the GPL v2
+#
+# Eric Li <ericli@google.com>
+
+import logging, os, pickle, re, sys
+import common
+from autotest_lib.client.bin import setup_job as client_setup_job
+
+
+def touch_init(parent_dir, child_dir):
+    """
+    Touch an __init__.py file in every directory from parent_dir down to
+    child_dir, so client tests can be loaded as Python modules. Assumes
+    child_dir is a subdirectory of parent_dir.
+    """
+
+    if not child_dir.startswith(parent_dir):
+        logging.error('%s is not a subdirectory of %s' % (child_dir,
+                                                          parent_dir))
+        return
+    sub_parent_dirs = parent_dir.split(os.path.sep)
+    sub_child_dirs = child_dir.split(os.path.sep)
+    for sub_dir in sub_child_dirs[len(sub_parent_dirs):]:
+        sub_parent_dirs.append(sub_dir)
+        path = os.path.sep.join(sub_parent_dirs)
+        init_py = os.path.join(path, '__init__.py')
+        open(init_py, 'a').close()
+
+
+def init_test(testdir):
+    """
+    Instantiate a client test object from a given test directory.
+
+    @param testdir The test directory.
+    @returns A test object, or None if instantiation failed.
+    """
+
+    class options:
+        tag = ''
+        verbose = None
+        cont = False
+        harness = 'autoserv'
+        hostname = None
+        user = None
+        log = True
+    return client_setup_job.init_test(options, testdir)
+
+
+def setup(autotest_client_dir, client_test_dir):
+    """
+    Set up the prebuild of a client test.
+
+    @param autotest_client_dir: The autotest/client base directory.
+    @param client_test_dir: The actual test directory under client.
+    """
+
+    os.environ['AUTODIR'] = autotest_client_dir
+    touch_init(autotest_client_dir, client_test_dir)
+
+    # instantiate a client_test instance.
+    client_test = init_test(client_test_dir)
+    client_setup_job.setup_test(client_test)
diff --git a/server/autotest.py b/server/autotest.py
index 495dcc2..2b6755f 100644
--- a/server/autotest.py
+++ b/server/autotest.py
@@ -1,1081 +1 @@
-# Copyright 2007 Google Inc. Released under the GPL v2
-
-import re, os, sys, traceback, subprocess, time, pickle, glob, tempfile
-import logging, getpass
-from autotest_lib.server import installable_object, prebuild, utils
-from autotest_lib.client.common_lib import base_job, log, error, autotemp
-from autotest_lib.client.common_lib import global_config, packages
-from autotest_lib.client.common_lib import utils as client_utils
-
-AUTOTEST_SVN = 'svn://test.kernel.org/autotest/trunk/client'
-AUTOTEST_HTTP = 'http://test.kernel.org/svn/autotest/trunk/client'
-
-
-get_value = global_config.global_config.get_config_value
-autoserv_prebuild = get_value('AUTOSERV', 'enable_server_prebuild',
-                              type=bool, default=False)
-
-
-class AutodirNotFoundError(Exception):
-    """No Autotest installation could be found."""
-
-
-class BaseAutotest(installable_object.InstallableObject):
-    """
-    This class represents the Autotest program.
-
-    Autotest is used to run tests automatically and collect the results.
-    It also supports profilers.
-
-    Implementation details:
-    This is a leaf class in an abstract class hierarchy, it must
-    implement the unimplemented methods in parent classes.
-    """
-
-    def __init__(self, host=None):
-        self.host = host
-        self.got = False
-        self.installed = False
-        self.serverdir = utils.get_server_dir()
-        super(BaseAutotest, self).__init__()
-
-
-    install_in_tmpdir = False
-    @classmethod
-    def set_install_in_tmpdir(cls, flag):
-        """ Sets a flag that controls whether or not Autotest should by
-        default be installed in a "standard" directory (e.g.
-        /home/autotest, /usr/local/autotest) or a temporary directory. """
-        cls.install_in_tmpdir = flag
-
-
-    @classmethod
-    def get_client_autodir_paths(cls, host):
-        return global_config.global_config.get_config_value(
-                'AUTOSERV', 'client_autodir_paths', type=list)
-
-
-    @classmethod
-    def get_installed_autodir(cls, host):
-        """
-        Find where the Autotest client is installed on the host.
-        @returns an absolute path to an installed Autotest client root.
-        @raises AutodirNotFoundError if no Autotest installation can be found.
-        """
-        autodir = host.get_autodir()
-        if autodir:
-            logging.debug('Using existing host autodir: %s', autodir)
-            return autodir
-
-        for path in Autotest.get_client_autodir_paths(host):
-            try:
-                autotest_binary = os.path.join(path, 'bin', 'autotest')
-                host.run('test -x %s' % utils.sh_escape(autotest_binary))
-                host.run('test -w %s' % utils.sh_escape(path))
-                logging.debug('Found existing autodir at %s', path)
-                return path
-            except error.AutoservRunError:
-                logging.debug('%s does not exist on %s', autotest_binary,
-                              host.hostname)
-        raise AutodirNotFoundError
-
-
-    @classmethod
-    def get_install_dir(cls, host):
-        """
-        Determines the location where autotest should be installed on the
-        host. If self.install_in_tmpdir is set, it will return a unique
-        temporary directory that autotest can be installed in. Otherwise, looks
-        for an existing installation to use; if none is found, looks for a
-        usable directory in the global config client_autodir_paths.
-        """
-        try:
-            install_dir = cls.get_installed_autodir(host)
-        except AutodirNotFoundError:
-            install_dir = cls._find_installable_dir(host)
-
-        if cls.install_in_tmpdir:
-            return host.get_tmp_dir(parent=install_dir)
-        return install_dir
-
-
-    @classmethod
-    def _find_installable_dir(cls, host):
-        client_autodir_paths = cls.get_client_autodir_paths(host)
-        for path in client_autodir_paths:
-            try:
-                host.run('mkdir -p %s' % utils.sh_escape(path))
-                host.run('test -w %s' % utils.sh_escape(path))
-                return path
-            except error.AutoservRunError:
-                logging.debug('Failed to create %s', path)
-        raise error.AutoservInstallError(
-                'Unable to find a place to install Autotest; tried %s' %
-                ', '.join(client_autodir_paths))
-
-
-    def get_fetch_location(self):
-        c = global_config.global_config
-        repos = c.get_config_value("PACKAGES", 'fetch_location', type=list,
-                                   default=[])
-        repos.reverse()
-        return repos
-
-
-    def install(self, host=None, autodir=None):
-        self._install(host=host, autodir=autodir)
-
-
-    def install_full_client(self, host=None, autodir=None):
-        self._install(host=host, autodir=autodir, use_autoserv=False,
-                      use_packaging=False)
-
-
-    def install_no_autoserv(self, host=None, autodir=None):
-        self._install(host=host, autodir=autodir, use_autoserv=False)
-
-
-    def _install_using_packaging(self, host, autodir):
-        repos = self.get_fetch_location()
-        if not repos:
-            raise error.PackageInstallError("No repos to install an "
-                                            "autotest client from")
-        pkgmgr = packages.PackageManager(autodir, hostname=host.hostname,
-                                         repo_urls=repos,
-                                         do_locking=False,
-                                         run_function=host.run,
-                                         run_function_dargs=dict(timeout=600))
-        # The packages dir is used to store all the packages that
-        # are fetched on that client (tests, deps, etc., in addition
-        # to the client itself).
-        pkg_dir = os.path.join(autodir, 'packages')
-        # clean up the autodir except for the packages directory
-        host.run('cd %s && ls | grep -v "^packages$"'
-                 ' | xargs rm -rf && rm -rf .[^.]*' % autodir)
-        pkgmgr.install_pkg('autotest', 'client', pkg_dir, autodir,
-                           preserve_install_dir=True)
-        self.installed = True
-
-
-    def _install_using_send_file(self, host, autodir):
-        dirs_to_exclude = set(["tests", "site_tests", "deps", "profilers"])
-        light_files = [os.path.join(self.source_material, f)
-                       for f in os.listdir(self.source_material)
-                       if f not in dirs_to_exclude]
-        host.send_file(light_files, autodir, delete_dest=True)
-
-        # create empty dirs for all the stuff we excluded
-        commands = []
-        for path in dirs_to_exclude:
-            abs_path = os.path.join(autodir, path)
-            abs_path = utils.sh_escape(abs_path)
-            commands.append("mkdir -p '%s'" % abs_path)
-            commands.append("touch '%s'/__init__.py" % abs_path)
-        host.run(';'.join(commands))
-
-
-    def _install(self, host=None, autodir=None, use_autoserv=True,
-                 use_packaging=True):
-        """
-        Install autotest.  If get() was not called previously, an
-        attempt will be made to install from the autotest svn
-        repository.
-
-        @param host A Host instance on which autotest will be installed
-        @param autodir Location on the remote host to install to
-        @param use_autoserv Enable install modes that depend on the client
-            running with the autoserv harness
-        @param use_packaging Enable install modes that use the packaging system
-
-        @exception AutoservError if a tarball was not specified and
-            the target host does not have svn installed in its path
-        """
-        if not host:
-            host = self.host
-        if not self.got:
-            self.get()
-        host.wait_up(timeout=30)
-        host.setup()
-        logging.info("Installing autotest on %s", host.hostname)
-
-        # set up the autotest directory on the remote machine
-        if not autodir:
-            autodir = self.get_install_dir(host)
-        logging.info('Using installation dir %s', autodir)
-        host.set_autodir(autodir)
-        host.run('mkdir -p %s' % utils.sh_escape(autodir))
-
-        # make sure there are no files in $AUTODIR/results
-        results_path = os.path.join(autodir, 'results')
-        host.run('rm -rf %s/*' % utils.sh_escape(results_path),
-                 ignore_status=True)
-
-        # Fetch the autotest client from the nearest repository
-        if use_packaging:
-            try:
-                self._install_using_packaging(host, autodir)
-                return
-            except (error.PackageInstallError, error.AutoservRunError,
-                    global_config.ConfigError), e:
-                logging.info("Could not install autotest using the packaging "
-                             "system: %s. Trying other methods", e)
-
-        # try to install from file or directory
-        if self.source_material:
-            c = global_config.global_config
-            supports_autoserv_packaging = c.get_config_value(
-                "PACKAGES", "serve_packages_from_autoserv", type=bool)
-            # Copy autotest recursively
-            if supports_autoserv_packaging and use_autoserv:
-                self._install_using_send_file(host, autodir)
-            else:
-                host.send_file(self.source_material, autodir, delete_dest=True)
-            logging.info("Installation of autotest completed")
-            self.installed = True
-            return
-
-        # if that fails try to install using svn
-        if utils.run('which svn').exit_status:
-            raise error.AutoservError('svn not found on target machine: %s' %
-                                      host.name)
-        try:
-            host.run('svn checkout %s %s' % (AUTOTEST_SVN, autodir))
-        except error.AutoservRunError, e:
-            host.run('svn checkout %s %s' % (AUTOTEST_HTTP, autodir))
-        logging.info("Installation of autotest completed")
-        self.installed = True
-
-
-    def uninstall(self, host=None):
-        """
-        Uninstall (i.e. delete) autotest. Removes the autotest client install
-        from the specified host.
-
-        @param host: a Host instance from which the client will be removed
-        """
-        if not self.installed:
-            return
-        if not host:
-            host = self.host
-        autodir = host.get_autodir()
-        if not autodir:
-            return
-
-        # perform the actual uninstall
-        host.run("rm -rf %s" % utils.sh_escape(autodir), ignore_status=True)
-        host.set_autodir(None)
-        self.installed = False
-
-
-    def get(self, location=None):
-        if not location:
-            location = os.path.join(self.serverdir, '../client')
-            location = os.path.abspath(location)
-        # If there's stuff run on our client directory already, it
-        # can cause problems. Try giving it a quick clean first.
-        cwd = os.getcwd()
-        os.chdir(location)
-        try:
-            utils.system('tools/make_clean', ignore_status=True)
-        finally:
-            os.chdir(cwd)
-        super(BaseAutotest, self).get(location)
-        self.got = True
-
-
-    def run(self, control_file, results_dir='.', host=None, timeout=None,
-            tag=None, parallel_flag=False, background=False,
-            client_disconnect_timeout=None):
-        """
-        Run an autotest job on the remote machine.
-
-        @param control_file: An open file-like-obj of the control file.
-        @param results_dir: A str path where the results should be stored
-                on the local filesystem.
-        @param host: A Host instance on which the control file should
-                be run.
-        @param timeout: Maximum number of seconds to wait for the run or None.
-        @param tag: Tag name for the client side instance of autotest.
-        @param parallel_flag: Flag set when multiple jobs are run at the
-                same time.
-        @param background: Indicates that the client should be launched as
-                a background job; the code calling run will be responsible
-                for monitoring the client and collecting the results.
-        @param client_disconnect_timeout: Seconds to wait for the remote host
-                to come back after a reboot. Defaults to the host setting for
-                DEFAULT_REBOOT_TIMEOUT.
-
-        @raises AutotestRunError: If there is a problem executing
-                the control file.
-        """
-        host = self._get_host_and_setup(host)
-        results_dir = os.path.abspath(results_dir)
-
-        if client_disconnect_timeout is None:
-            client_disconnect_timeout = host.DEFAULT_REBOOT_TIMEOUT
-
-        if tag:
-            results_dir = os.path.join(results_dir, tag)
-
-        atrun = _Run(host, results_dir, tag, parallel_flag, background)
-        self._do_run(control_file, results_dir, host, atrun, timeout,
-                     client_disconnect_timeout)
-
-
-    def _get_host_and_setup(self, host):
-        if not host:
-            host = self.host
-        if not self.installed:
-            self.install(host)
-
-        host.wait_up(timeout=30)
-        return host
-
-
-    def _do_run(self, control_file, results_dir, host, atrun, timeout,
-                client_disconnect_timeout):
-        try:
-            atrun.verify_machine()
-        except:
-            logging.error("Verify failed on %s. Reinstalling autotest",
-                          host.hostname)
-            self.install(host)
-        atrun.verify_machine()
-        debug = os.path.join(results_dir, 'debug')
-        try:
-            os.makedirs(debug)
-        except Exception:
-            pass
-
-        delete_file_list = [atrun.remote_control_file,
-                            atrun.remote_control_file + '.state',
-                            atrun.manual_control_file,
-                            atrun.manual_control_file + '.state']
-        cmd = ';'.join('rm -f ' + control for control in delete_file_list)
-        host.run(cmd, ignore_status=True)
-
-        tmppath = utils.get(control_file)
-
-        # build up the initialization prologue for the control file
-        prologue_lines = []
-
-        # Add the additional user arguments
-        prologue_lines.append("args = %r\n" % self.job.args)
-
-        # If the packaging system is being used, add the repository list.
-        repos = None
-        try:
-            repos = self.get_fetch_location()
-            pkgmgr = packages.PackageManager('autotest', hostname=host.hostname,
-                                             repo_urls=repos)
-            prologue_lines.append('job.add_repository(%s)\n' % repos)
-        except global_config.ConfigError, e:
-            # If repos is defined packaging is enabled so log the error
-            if repos:
-                logging.error(e)
-
-        # on full-size installs, turn on any profilers the server is using
-        if not atrun.background:
-            running_profilers = host.job.profilers.add_log.iteritems()
-            for profiler, (args, dargs) in running_profilers:
-                call_args = [repr(profiler)]
-                call_args += [repr(arg) for arg in args]
-                call_args += ["%s=%r" % item for item in dargs.iteritems()]
-                prologue_lines.append("job.profilers.add(%s)\n"
-                                      % ", ".join(call_args))
-        cfile = "".join(prologue_lines)
-
-        cfile += open(tmppath).read()
-        open(tmppath, "w").write(cfile)
-
-        # Create and copy state file to remote_control_file + '.state'
-        state_file = host.job.preprocess_client_state()
-        host.send_file(state_file, atrun.remote_control_file + '.init.state')
-        os.remove(state_file)
-
-        # Copy control_file to remote_control_file on the host
-        host.send_file(tmppath, atrun.remote_control_file)
-        if os.path.abspath(tmppath) != os.path.abspath(control_file):
-            os.remove(tmppath)
-
-        atrun.execute_control(
-                timeout=timeout,
-                client_disconnect_timeout=client_disconnect_timeout)
-
-
-    def run_timed_test(self, test_name, results_dir='.', host=None,
-                       timeout=None, *args, **dargs):
-        """
-        Assemble a tiny little control file to just run one test,
-        and run it as an autotest client-side test
-        """
-        if not host:
-            host = self.host
-        if not self.installed:
-            self.install(host)
-        opts = ["%s=%s" % (o[0], repr(o[1])) for o in dargs.items()]
-        cmd = ", ".join([repr(test_name)] + map(repr, args) + opts)
-        control = "job.run_test(%s)\n" % cmd
-        self.run(control, results_dir, host, timeout=timeout)
-
-
-    def run_test(self, test_name, results_dir='.', host=None, *args, **dargs):
-        self.run_timed_test(test_name, results_dir, host, timeout=None,
-                            *args, **dargs)
-
-
-class _BaseRun(object):
-    """
-    Represents a run of an autotest control file.  This class maintains
-    all the state necessary as an autotest control file is executed.
-
-    It is not intended to be used directly, rather control files
-    should be run using the run method in Autotest.
-    """
-    def __init__(self, host, results_dir, tag, parallel_flag, background):
-        self.host = host
-        self.results_dir = results_dir
-        self.env = host.env
-        self.tag = tag
-        self.parallel_flag = parallel_flag
-        self.background = background
-        self.autodir = Autotest.get_installed_autodir(self.host)
-        control = os.path.join(self.autodir, 'control')
-        if tag:
-            control += '.' + tag
-        self.manual_control_file = control
-        self.remote_control_file = control + '.autoserv'
-        self.config_file = os.path.join(self.autodir, 'global_config.ini')
-
-
-    def verify_machine(self):
-        binary = os.path.join(self.autodir, 'bin/autotest')
-        try:
-            self.host.run('ls %s > /dev/null 2>&1' % binary)
-        except:
-            raise error.AutoservInstallError(
-                "Autotest does not appear to be installed")
-
-        if not self.parallel_flag:
-            tmpdir = os.path.join(self.autodir, 'tmp')
-            download = os.path.join(self.autodir, 'tests/download')
-            self.host.run('umount %s' % tmpdir, ignore_status=True)
-            self.host.run('umount %s' % download, ignore_status=True)
-
-
-    def get_base_cmd_args(self, section):
-        args = ['--verbose']
-        if section > 0:
-            args.append('-c')
-        if self.tag:
-            args.append('-t %s' % self.tag)
-        if self.host.job.use_external_logging():
-            args.append('-l')
-        if self.host.hostname:
-            args.append('--hostname=%s' % self.host.hostname)
-        args.append('--user=%s' % self.host.job.user)
-
-        args.append(self.remote_control_file)
-        return args
-
-
-    def get_background_cmd(self, section):
-        cmd = ['nohup', os.path.join(self.autodir, 'bin/autotest_client')]
-        cmd += self.get_base_cmd_args(section)
-        cmd += ['>/dev/null', '2>/dev/null', '&']
-        return ' '.join(cmd)
-
-
-    def get_daemon_cmd(self, section, monitor_dir):
-        cmd = ['nohup', os.path.join(self.autodir, 'bin/autotestd'),
-               monitor_dir, '-H autoserv']
-        cmd += self.get_base_cmd_args(section)
-        cmd += ['>/dev/null', '2>/dev/null', '&']
-        return ' '.join(cmd)
-
-
-    def get_monitor_cmd(self, monitor_dir, stdout_read, stderr_read):
-        cmd = [os.path.join(self.autodir, 'bin', 'autotestd_monitor'),
-               monitor_dir, str(stdout_read), str(stderr_read)]
-        return ' '.join(cmd)
-
-
-    def get_client_log(self):
-        """Find what the "next" client.* prefix should be
-
-        @returns A string of the form client.INTEGER that should be prefixed
-            to all client debug log files.
-        """
-        max_digit = -1
-        debug_dir = os.path.join(self.results_dir, 'debug')
-        client_logs = glob.glob(os.path.join(debug_dir, 'client.*.*'))
-        for log in client_logs:
-            _, number, _ = log.split('.', 2)
-            if number.isdigit():
-                max_digit = max(max_digit, int(number))
-        return 'client.%d' % (max_digit + 1)
-
-
-    def copy_client_config_file(self, client_log_prefix=None):
-        """
-        Create and copy the client config file based on the server config.
-
-        @param client_log_prefix: Optional prefix to prepend to log files.
-        """
-        client_config_file = self._create_client_config_file(client_log_prefix)
-        self.host.send_file(client_config_file, self.config_file)
-        os.remove(client_config_file)
-
-
-    def _create_client_config_file(self, client_log_prefix=None):
-        """
-        Create a temporary file with the [CLIENT] section configuration values
-        taken from the server global_config.ini.
-
-        @param client_log_prefix: Optional prefix to prepend to log files.
-
-        @return: Path of the temporary file generated.
-        """
-        config = global_config.global_config.get_section_values('CLIENT')
-        if client_log_prefix:
-            config.set('CLIENT', 'default_logging_name', client_log_prefix)
-        return self._create_aux_file(config.write)
-
-
-    def _create_aux_file(self, func, *args):
-        """
-        Creates a temporary file and writes content to it according to a
-        content creation function. The file object is appended to *args, which
-        is then passed to the content creation function
-
-        @param func: Function that will be used to write content to the
-                temporary file.
-        @param *args: List of parameters that func takes.
-        @return: Path to the temporary file that was created.
-        """
-        fd, path = tempfile.mkstemp(dir=self.host.job.tmpdir)
-        aux_file = os.fdopen(fd, "w")
-        try:
-            list_args = list(args)
-            list_args.append(aux_file)
-            func(*list_args)
-        finally:
-            aux_file.close()
-        return path
-
-
-    @staticmethod
-    def is_client_job_finished(last_line):
-        return bool(re.match(r'^END .*\t----\t----\t.*$', last_line))
-
-
-    @staticmethod
-    def is_client_job_rebooting(last_line):
-        return bool(re.match(r'^\t*GOOD\t----\treboot\.start.*$', last_line))
-
-
-    def log_unexpected_abort(self, stderr_redirector):
-        stderr_redirector.flush_all_buffers()
-        msg = "Autotest client terminated unexpectedly"
-        self.host.job.record("END ABORT", None, None, msg)
-
-
-    def _execute_in_background(self, section, timeout):
-        full_cmd = self.get_background_cmd(section)
-        devnull = open(os.devnull, "w")
-
-        self.copy_client_config_file(self.get_client_log())
-
-        self.host.job.push_execution_context(self.results_dir)
-        try:
-            result = self.host.run(full_cmd, ignore_status=True,
-                                   timeout=timeout,
-                                   stdout_tee=devnull,
-                                   stderr_tee=devnull)
-        finally:
-            self.host.job.pop_execution_context()
-
-        return result
-
-
-    @staticmethod
-    def _strip_stderr_prologue(stderr):
-        """Strips the 'standard' prologue that get pre-pended to every
-        remote command and returns the text that was actually written to
-        stderr by the remote command."""
-        stderr_lines = stderr.split("\n")[1:]
-        if not stderr_lines:
-            return ""
-        elif stderr_lines[0].startswith("NOTE: autotestd_monitor"):
-            del stderr_lines[0]
-        return "\n".join(stderr_lines)
-
-
-    def _execute_daemon(self, section, timeout, stderr_redirector,
-                        client_disconnect_timeout):
-        monitor_dir = self.host.get_tmp_dir()
-        daemon_cmd = self.get_daemon_cmd(section, monitor_dir)
-
-        # grab the location for the server-side client log file
-        client_log_prefix = self.get_client_log()
-        client_log_path = os.path.join(self.results_dir, 'debug',
-                                       client_log_prefix + '.log')
-        client_log = open(client_log_path, 'w', 0)
-        self.copy_client_config_file(client_log_prefix)
-
-        stdout_read = stderr_read = 0
-        self.host.job.push_execution_context(self.results_dir)
-        try:
-            self.host.run(daemon_cmd, ignore_status=True, timeout=timeout)
-            disconnect_warnings = []
-            while True:
-                monitor_cmd = self.get_monitor_cmd(monitor_dir, stdout_read,
-                                                   stderr_read)
-                try:
-                    result = self.host.run(monitor_cmd, ignore_status=True,
-                                           timeout=timeout,
-                                           stdout_tee=client_log,
-                                           stderr_tee=stderr_redirector)
-                except error.AutoservRunError, e:
-                    result = e.result_obj
-                    result.exit_status = None
-                    disconnect_warnings.append(e.description)
-
-                    stderr_redirector.log_warning(
-                        "Autotest client was disconnected: %s" % e.description,
-                        "NETWORK")
-                except error.AutoservSSHTimeout:
-                    result = utils.CmdResult(monitor_cmd, "", "", None, 0)
-                    stderr_redirector.log_warning(
-                        "Attempt to connect to Autotest client timed out",
-                        "NETWORK")
-
-                stdout_read += len(result.stdout)
-                stderr_read += len(self._strip_stderr_prologue(result.stderr))
-
-                if result.exit_status is not None:
-                    return result
-                elif not self.host.wait_up(client_disconnect_timeout):
-                    raise error.AutoservSSHTimeout(
-                        "client was disconnected, reconnect timed out")
-        finally:
-            client_log.close()
-            self.host.job.pop_execution_context()
-
-
-    def execute_section(self, section, timeout, stderr_redirector,
-                        client_disconnect_timeout):
-        logging.info("Executing %s/bin/autotest %s/control phase %d",
-                     self.autodir, self.autodir, section)
-
-        if self.background:
-            result = self._execute_in_background(section, timeout)
-        else:
-            result = self._execute_daemon(section, timeout, stderr_redirector,
-                                          client_disconnect_timeout)
-
-        last_line = stderr_redirector.last_line
-
-        # check if we failed hard enough to warrant an exception
-        if result.exit_status == 1:
-            err = error.AutotestRunError("client job was aborted")
-        elif not self.background and not result.stderr:
-            err = error.AutotestRunError(
-                "execute_section %s failed to return anything\n"
-                "stdout:%s\n" % (section, result.stdout))
-        else:
-            err = None
-
-        # log something if the client failed AND never finished logging
-        if err and not self.is_client_job_finished(last_line):
-            self.log_unexpected_abort(stderr_redirector)
-
-        if err:
-            raise err
-        else:
-            return stderr_redirector.last_line
-
-
-    def _wait_for_reboot(self, old_boot_id):
-        logging.info("Client is rebooting")
-        logging.info("Waiting for client to halt")
-        if not self.host.wait_down(self.host.WAIT_DOWN_REBOOT_TIMEOUT,
-                                   old_boot_id=old_boot_id):
-            err = "%s failed to shutdown after %d"
-            err %= (self.host.hostname, self.host.WAIT_DOWN_REBOOT_TIMEOUT)
-            raise error.AutotestRunError(err)
-        logging.info("Client down, waiting for restart")
-        if not self.host.wait_up(self.host.DEFAULT_REBOOT_TIMEOUT):
-            # since reboot failed
-            # hardreset the machine once if possible
-            # before failing this control file
-            warning = "%s did not come back up, hard resetting"
-            warning %= self.host.hostname
-            logging.warning(warning)
-            try:
-                self.host.hardreset(wait=False)
-            except (AttributeError, error.AutoservUnsupportedError):
-                warning = "Hard reset unsupported on %s"
-                warning %= self.host.hostname
-                logging.warning(warning)
-            raise error.AutotestRunError("%s failed to boot after %ds" %
-                                         (self.host.hostname,
-                                          self.host.DEFAULT_REBOOT_TIMEOUT))
-        self.host.reboot_followup()
-
-
-    def execute_control(self, timeout=None, client_disconnect_timeout=None):
-        if not self.background:
-            collector = log_collector(self.host, self.tag, self.results_dir)
-            hostname = self.host.hostname
-            remote_results = collector.client_results_dir
-            local_results = collector.server_results_dir
-            self.host.job.add_client_log(hostname, remote_results,
-                                         local_results)
-            job_record_context = self.host.job.get_record_context()
-
-        section = 0
-        start_time = time.time()
-
-        logger = client_logger(self.host, self.tag, self.results_dir)
-        try:
-            while not timeout or time.time() < start_time + timeout:
-                if timeout:
-                    section_timeout = start_time + timeout - time.time()
-                else:
-                    section_timeout = None
-                boot_id = self.host.get_boot_id()
-                last = self.execute_section(section, section_timeout,
-                                            logger, client_disconnect_timeout)
-                if self.background:
-                    return
-                section += 1
-                if self.is_client_job_finished(last):
-                    logging.info("Client complete")
-                    return
-                elif self.is_client_job_rebooting(last):
-                    try:
-                        self._wait_for_reboot(boot_id)
-                    except error.AutotestRunError, e:
-                        self.host.job.record("ABORT", None, "reboot", str(e))
-                        self.host.job.record("END ABORT", None, None, str(e))
-                        raise
-                    continue
-
-                # if we reach here, something unexpected happened
-                self.log_unexpected_abort(logger)
-
-                # give the client machine a chance to recover from a crash
-                self.host.wait_up(self.host.HOURS_TO_WAIT_FOR_RECOVERY * 3600)
-                msg = ("Aborting - unexpected final status message from "
-                       "client on %s: %s\n") % (self.host.hostname, last)
-                raise error.AutotestRunError(msg)
-        finally:
-            logger.close()
-            if not self.background:
-                collector.collect_client_job_results()
-                collector.remove_redundant_client_logs()
-                state_file = os.path.basename(self.remote_control_file
-                                              + '.state')
-                state_path = os.path.join(self.results_dir, state_file)
-                self.host.job.postprocess_client_state(state_path)
-                self.host.job.remove_client_log(hostname, remote_results,
-                                                local_results)
-                job_record_context.restore()
-
-        # should only get here if we timed out
-        assert timeout
-        raise error.AutotestTimeoutError()
-
-
-class log_collector(object):
-    def __init__(self, host, client_tag, results_dir):
-        self.host = host
-        if not client_tag:
-            client_tag = "default"
-        self.client_results_dir = os.path.join(host.get_autodir(), "results",
-                                               client_tag)
-        self.server_results_dir = results_dir
-
-
-    def collect_client_job_results(self):
-        """ A method that collects all the current results of a running
-        client job into the results dir. By default does nothing as no
-        client job is running, but when running a client job you can override
-        this with something that will actually do something. """
-
-        # make an effort to wait for the machine to come up
-        try:
-            self.host.wait_up(timeout=30)
-        except error.AutoservError:
-            # don't worry about any errors, we'll try and
-            # get the results anyway
-            pass
-
-        # Copy all dirs in default to results_dir
-        try:
-            self.host.get_file(self.client_results_dir + '/',
-                               self.server_results_dir, preserve_symlinks=True)
-        except Exception:
-            # well, don't stop running just because we couldn't get logs
-            e_msg = "Unexpected error copying test result logs, continuing ..."
-            logging.error(e_msg)
-            traceback.print_exc(file=sys.stdout)
-
-
-    def remove_redundant_client_logs(self):
-        """Remove client.*.log files in favour of client.*.DEBUG files."""
-        debug_dir = os.path.join(self.server_results_dir, 'debug')
-        debug_files = [f for f in os.listdir(debug_dir)
-                       if re.search(r'^client\.\d+\.DEBUG$', f)]
-        for debug_file in debug_files:
-            log_file = debug_file.replace('DEBUG', 'log')
-            log_file = os.path.join(debug_dir, log_file)
-            if os.path.exists(log_file):
-                os.remove(log_file)
-
-
-# a file-like object for catching stderr from an autotest client and
-# extracting status logs from it
-class client_logger(object):
-    """Partial file object to write to both stdout and
-    the status log file.  We only implement those methods
-    utils.run() actually calls.
-    """
-    status_parser = re.compile(r"^AUTOTEST_STATUS:([^:]*):(.*)$")
-    test_complete_parser = re.compile(r"^AUTOTEST_TEST_COMPLETE:(.*)$")
-    fetch_package_parser = re.compile(
-        r"^AUTOTEST_FETCH_PACKAGE:([^:]*):([^:]*):(.*)$")
-    extract_indent = re.compile(r"^(\t*).*$")
-    extract_timestamp = re.compile(r".*\ttimestamp=(\d+)\t.*$")
-
-    def __init__(self, host, tag, server_results_dir):
-        self.host = host
-        self.job = host.job
-        self.log_collector = log_collector(host, tag, server_results_dir)
-        self.leftover = ""
-        self.last_line = ""
-        self.logs = {}
-
-
-    def _process_log_dict(self, log_dict):
-        log_list = log_dict.pop("logs", [])
-        for key in sorted(log_dict.iterkeys()):
-            log_list += self._process_log_dict(log_dict.pop(key))
-        return log_list
-
-
-    def _process_logs(self):
-        """Go through the accumulated logs in self.log and print them
-        out to stdout and the status log. Note that this processes
-        logs in an ordering where:
-
-        1) logs to different tags are never interleaved
-        2) logs to x.y come before logs to x.y.z for all z
-        3) logs to x.y come before x.z whenever y < z
-
-        Note that this will in general not be the same as the
-        chronological ordering of the logs. However, if a chronological
-        ordering is desired that one can be reconstructed from the
-        status log by looking at timestamp lines."""
-        log_list = self._process_log_dict(self.logs)
-        for entry in log_list:
-            self.job.record_entry(entry, log_in_subdir=False)
-        if log_list:
-            self.last_line = log_list[-1].render()
-
-
-    def _process_quoted_line(self, tag, line):
-        """Process a line quoted with an AUTOTEST_STATUS flag. If the
-        tag is blank then we want to push out all the data we've been
-        building up in self.logs, and then the newest line. If the
-        tag is not blank, then push the line into the logs for handling
-        later."""
-        entry = base_job.status_log_entry.parse(line)
-        if entry is None:
-            return  # the line contains no status lines
-        if tag == "":
-            self._process_logs()
-            self.job.record_entry(entry, log_in_subdir=False)
-            self.last_line = line
-        else:
-            tag_parts = [int(x) for x in tag.split(".")]
-            log_dict = self.logs
-            for part in tag_parts:
-                log_dict = log_dict.setdefault(part, {})
-            log_list = log_dict.setdefault("logs", [])
-            log_list.append(entry)
-
-
-    def _process_info_line(self, line):
-        """Check if line is an INFO line, and if it is, interpret any control
-        messages (e.g. enabling/disabling warnings) that it may contain."""
-        match = re.search(r"^\t*INFO\t----\t----(.*)\t[^\t]*$", line)
-        if not match:
-            return   # not an INFO line
-        for field in match.group(1).split('\t'):
-            if field.startswith("warnings.enable="):
-                func = self.job.warning_manager.enable_warnings
-            elif field.startswith("warnings.disable="):
-                func = self.job.warning_manager.disable_warnings
-            else:
-                continue
-            warning_type = field.split("=", 1)[1]
-            func(warning_type)
-
-
-    def _process_line(self, line):
-        """Write out a line of data to the appropriate stream. Status
-        lines sent by autotest will be prepended with
-        "AUTOTEST_STATUS", and all other lines are ssh error
-        messages."""
-        status_match = self.status_parser.search(line)
-        test_complete_match = self.test_complete_parser.search(line)
-        fetch_package_match = self.fetch_package_parser.search(line)
-        if status_match:
-            tag, line = status_match.groups()
-            self._process_info_line(line)
-            self._process_quoted_line(tag, line)
-        elif test_complete_match:
-            self._process_logs()
-            fifo_path, = test_complete_match.groups()
-            try:
-                self.log_collector.collect_client_job_results()
-                self.host.run("echo A > %s" % fifo_path)
-            except Exception:
-                msg = "Post-test log collection failed, continuing anyway"
-                logging.exception(msg)
-        elif fetch_package_match:
-            pkg_name, dest_path, fifo_path = fetch_package_match.groups()
-            serve_packages = global_config.global_config.get_config_value(
-                "PACKAGES", "serve_packages_from_autoserv", type=bool)
-            if serve_packages and pkg_name.endswith(".tar.bz2"):
-                try:
-                    self._send_tarball(pkg_name, dest_path)
-                except Exception:
-                    msg = "Package tarball creation failed, continuing anyway"
-                    logging.exception(msg)
-            try:
-                self.host.run("echo B > %s" % fifo_path)
-            except Exception:
-                msg = "Package tarball installation failed, continuing anyway"
-                logging.exception(msg)
-        else:
-            logging.info(line)
-
-
-    def _send_tarball(self, pkg_name, remote_dest):
-        name, pkg_type = self.job.pkgmgr.parse_tarball_name(pkg_name)
-        src_dirs = []
-        if pkg_type == 'test':
-            for test_dir in ['site_tests', 'tests']:
-                src_dir = os.path.join(self.job.clientdir, test_dir, name)
-                if os.path.exists(src_dir):
-                    src_dirs += [src_dir]
-                    if autoserv_prebuild:
-                        prebuild.setup(self.job.clientdir, src_dir)
-                    break
-        elif pkg_type == 'profiler':
-            src_dirs += [os.path.join(self.job.clientdir, 'profilers', name)]
-            if autoserv_prebuild:
-                prebuild.setup(self.job.clientdir, src_dir)
-        elif pkg_type == 'dep':
-            src_dirs += [os.path.join(self.job.clientdir, 'deps', name)]
-        elif pkg_type == 'client':
-            return  # you must already have a client to hit this anyway
-        else:
-            return  # no other types are supported
-
-        # iterate over src_dirs until we find one that exists, then tar it
-        for src_dir in src_dirs:
-            if os.path.exists(src_dir):
-                try:
-                    logging.info('Bundling %s into %s', src_dir, pkg_name)
-                    temp_dir = autotemp.tempdir(unique_id='autoserv-packager',
-                                                dir=self.job.tmpdir)
-                    tarball_path = self.job.pkgmgr.tar_package(
-                        pkg_name, src_dir, temp_dir.name, " .")
-                    self.host.send_file(tarball_path, remote_dest)
-                finally:
-                    temp_dir.clean()
-                return
-
-
-    def log_warning(self, msg, warning_type):
-        """Injects a WARN message into the current status logging stream."""
-        timestamp = int(time.time())
-        if self.job.warning_manager.is_valid(timestamp, warning_type):
-            self.job.record('WARN', None, None, msg)
-
-
-    def write(self, data):
-        # now start processing the existing buffer and the new data
-        data = self.leftover + data
-        lines = data.split('\n')
-        processed_lines = 0
-        try:
-            # process all the buffered data except the last line
-            # ignore the last line since we may not have all of it yet
-            for line in lines[:-1]:
-                self._process_line(line)
-                processed_lines += 1
-        finally:
-            # save any unprocessed lines for future processing
-            self.leftover = '\n'.join(lines[processed_lines:])
-
-
-    def flush(self):
-        sys.stdout.flush()
-
-
-    def flush_all_buffers(self):
-        if self.leftover:
-            self._process_line(self.leftover)
-            self.leftover = ""
-        self._process_logs()
-        self.flush()
-
-
-    def close(self):
-        self.flush_all_buffers()
-
-
-SiteAutotest = client_utils.import_site_class(
-    __file__, "autotest_lib.server.site_autotest", "SiteAutotest",
-    BaseAutotest)
-
-
-_SiteRun = client_utils.import_site_class(
-    __file__, "autotest_lib.server.site_autotest", "_SiteRun", _BaseRun)
-
-
-class Autotest(SiteAutotest):
-    pass
-
-
-class _Run(_SiteRun):
-    pass
-
-
-class AutotestHostMixin(object):
-    """A generic mixin to add a run_test method to classes, which will allow
-    you to run an autotest client test on a machine directly."""
-
-    # for testing purposes
-    _Autotest = Autotest
-
-    def run_test(self, test_name, **dargs):
-        """Run an autotest client test on the host.
-
-        @param test_name: The name of the client test.
-        @param dargs: Keyword arguments to pass to the test.
-
-        @returns: True if the test passes, False otherwise."""
-        at = self._Autotest()
-        control_file = ('result = job.run_test(%s)\n'
-                        'job.set_state("test_result", result)\n')
-        test_args = [repr(test_name)]
-        test_args += ['%s=%r' % (k, v) for k, v in dargs.iteritems()]
-        control_file %= ', '.join(test_args)
-        at.run(control_file, host=self)
-        return at.job.get_state('test_result', default=False)
+from autotest_lib.client.common_lib.autotest import *
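
With server/autotest.py reduced to the wildcard re-export above, both import
paths should keep resolving to the same objects. A quick sanity check,
assuming autotest_lib is importable:

    from autotest_lib.server import autotest as server_autotest
    from autotest_lib.client.common_lib import autotest as client_autotest

    # The server module now just re-exports the client-side definitions.
    assert server_autotest.Autotest is client_autotest.Autotest
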
diff --git a/server/autotest_unittest.py b/server/autotest_unittest.py
index 2851970..a865780 100755
--- a/server/autotest_unittest.py
+++ b/server/autotest_unittest.py
@@ -5,10 +5,10 @@ __author__ = "raphtee@google.com (Travis Miller)"
 import unittest, os, tempfile, logging
 
 import common
-from autotest_lib.server import autotest, utils, hosts, server_job, profilers
+from autotest_lib.server import utils, hosts, server_job, profilers
 from autotest_lib.client.bin import sysinfo
 from autotest_lib.client.common_lib import utils as client_utils, packages
-from autotest_lib.client.common_lib import error
+from autotest_lib.client.common_lib import error, autotest
 from autotest_lib.client.common_lib.test_utils import mock
 
 
@@ -35,12 +35,12 @@ class TestBaseAutotest(unittest.TestCase):
         self.host.job.record = lambda *args: None
 
         # stubs
-        self.god.stub_function(utils, "get_server_dir")
-        self.god.stub_function(utils, "run")
-        self.god.stub_function(utils, "get")
-        self.god.stub_function(utils, "read_keyval")
-        self.god.stub_function(utils, "write_keyval")
-        self.god.stub_function(utils, "system")
+        self.god.stub_function(client_utils, "get_server_dir")
+        self.god.stub_function(client_utils, "run")
+        self.god.stub_function(client_utils, "get")
+        self.god.stub_function(client_utils, "read_keyval")
+        self.god.stub_function(client_utils, "write_keyval")
+        self.god.stub_function(client_utils, "system")
         self.god.stub_function(tempfile, "mkstemp")
         self.god.stub_function(tempfile, "mktemp")
         self.god.stub_function(os, "getcwd")
@@ -67,7 +67,7 @@ class TestBaseAutotest(unittest.TestCase):
         self.serverdir = "serverdir"
 
         # record
-        utils.get_server_dir.expect_call().and_return(self.serverdir)
+        client_utils.get_server_dir.expect_call().and_return(self.serverdir)
 
         # create the autotest object
         self.base_autotest = autotest.BaseAutotest(self.host)
@@ -93,9 +93,9 @@ class TestBaseAutotest(unittest.TestCase):
         # record
         os.getcwd.expect_call().and_return('cwd')
         os.chdir.expect_call(os.path.join(self.serverdir, '../client'))
-        utils.system.expect_call('tools/make_clean', ignore_status=True)
+        client_utils.system.expect_call('tools/make_clean', ignore_status=True)
         os.chdir.expect_call('cwd')
-        utils.get.expect_call(os.path.join(self.serverdir,
+        client_utils.get.expect_call(os.path.join(self.serverdir,
             '../client')).and_return('source_material')
 
         self.host.wait_up.expect_call(timeout=30)
@@ -201,7 +201,7 @@ class TestBaseAutotest(unittest.TestCase):
         cmd = ';'.join('rm -f ' + control for control in delete_file_list)
         self.host.run.expect_call(cmd, ignore_status=True)
 
-        utils.get.expect_call(control).and_return("temp")
+        client_utils.get.expect_call(control).and_return("temp")
 
         c = autotest.global_config.global_config
         c.get_config_value.expect_call("PACKAGES",
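
The unittest changes above keep the record/replay pattern of autotest's mock
library, only retargeting the stubs at client_utils. Roughly (check_playback()
is assumed from the same mock library):

    from autotest_lib.client.common_lib.test_utils import mock
    from autotest_lib.client.common_lib import utils as client_utils

    god = mock.mock_god()
    god.stub_function(client_utils, "get_server_dir")

    # Record the expected call, exercise it, then verify the playback.
    client_utils.get_server_dir.expect_call().and_return("serverdir")
    assert client_utils.get_server_dir() == "serverdir"
    god.check_playback()
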
diff --git a/server/base_utils.py b/server/base_utils.py
index 1c58609..4a16b29 100644
--- a/server/base_utils.py
+++ b/server/base_utils.py
@@ -8,16 +8,12 @@ DO NOT import this file directly - it is mixed in by server/utils.py,
 import that instead
 """
 
-import atexit, os, re, shutil, textwrap, sys, tempfile, types
+import atexit, os, re, shutil, textwrap, sys
 
 from autotest_lib.client.common_lib import barrier, utils
 from autotest_lib.server import subcommand
 
 
-# A dictionary of pid and a list of tmpdirs for that pid
-__tmp_dirs = {}
-
-
 def scp_remote_escape(filename):
     """
     Escape special characters from a filename so that it can be passed
@@ -47,92 +43,8 @@ def scp_remote_escape(filename):
     return utils.sh_escape("".join(new_name))
 
 
-def get(location, local_copy = False):
-    """Get a file or directory to a local temporary directory.
-
-    Args:
-            location: the source of the material to get. This source may
-                    be one of:
-                    * a local file or directory
-                    * a URL (http or ftp)
-                    * a python file-like object
-
-    Returns:
-            The location of the file or directory where the requested
-            content was saved. This will be contained in a temporary
-            directory on the local host. If the material to get was a
-            directory, the location will contain a trailing '/'
-    """
-    tmpdir = get_tmp_dir()
-
-    # location is a file-like object
-    if hasattr(location, "read"):
-        tmpfile = os.path.join(tmpdir, "file")
-        tmpfileobj = file(tmpfile, 'w')
-        shutil.copyfileobj(location, tmpfileobj)
-        tmpfileobj.close()
-        return tmpfile
-
-    if isinstance(location, types.StringTypes):
-        # location is a URL
-        if location.startswith('http') or location.startswith('ftp'):
-            tmpfile = os.path.join(tmpdir, os.path.basename(location))
-            utils.urlretrieve(location, tmpfile)
-            return tmpfile
-        # location is a local path
-        elif os.path.exists(os.path.abspath(location)):
-            if not local_copy:
-                if os.path.isdir(location):
-                    return location.rstrip('/') + '/'
-                else:
-                    return location
-            tmpfile = os.path.join(tmpdir, os.path.basename(location))
-            if os.path.isdir(location):
-                tmpfile += '/'
-                shutil.copytree(location, tmpfile, symlinks=True)
-                return tmpfile
-            shutil.copyfile(location, tmpfile)
-            return tmpfile
-        # location is just a string, dump it to a file
-        else:
-            tmpfd, tmpfile = tempfile.mkstemp(dir=tmpdir)
-            tmpfileobj = os.fdopen(tmpfd, 'w')
-            tmpfileobj.write(location)
-            tmpfileobj.close()
-            return tmpfile
-
-
-def get_tmp_dir():
-    """Return the pathname of a directory on the host suitable
-    for temporary file storage.
-
-    The directory and its content will be deleted automatically
-    at the end of the program execution if they are still present.
-    """
-    dir_name = tempfile.mkdtemp(prefix="autoserv-")
-    pid = os.getpid()
-    if not pid in __tmp_dirs:
-        __tmp_dirs[pid] = []
-    __tmp_dirs[pid].append(dir_name)
-    return dir_name
-
-
-def __clean_tmp_dirs():
-    """Erase temporary directories that were created by the get_tmp_dir()
-    function and that are still present.
-    """
-    pid = os.getpid()
-    if pid not in __tmp_dirs:
-        return
-    for dir in __tmp_dirs[pid]:
-        try:
-            shutil.rmtree(dir)
-        except OSError, e:
-            if e.errno == 2:
-                pass
-    __tmp_dirs[pid] = []
-atexit.register(__clean_tmp_dirs)
-subcommand.subcommand.register_join_hook(lambda _: __clean_tmp_dirs())
+atexit.register(utils.clean_tmp_dirs)
+subcommand.subcommand.register_join_hook(lambda _: utils.clean_tmp_dirs())
 
 
 def unarchive(host, source_material):
@@ -175,11 +87,6 @@ def unarchive(host, source_material):
     return source_material
 
 
-def get_server_dir():
-    path = os.path.dirname(sys.modules['autotest_lib.server.utils'].__file__)
-    return os.path.abspath(path)
-
-
 def find_pid(command):
     for line in utils.system_output('ps -eo pid,cmd').rstrip().split('\n'):
         (pid, cmd) = line.split(None, 1)
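
The temporary-directory bookkeeping deleted here is not lost; the two
registration lines above hand cleanup over to the client-side utils. Assuming
get_tmp_dir() and clean_tmp_dirs() moved into client utils unchanged, usage
stays the same:

    import os
    from autotest_lib.client.common_lib import utils

    tmp = utils.get_tmp_dir()          # tracked per-pid by client utils
    open(os.path.join(tmp, 'scratch'), 'w').close()
    utils.clean_tmp_dirs()             # removes every tracked directory
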
diff --git a/server/git.py b/server/git.py
index 0428b80..cd32d5b 100644
--- a/server/git.py
+++ b/server/git.py
@@ -7,8 +7,9 @@ This module defines a class for handling building from git repos
 
 import os, warnings, logging
 from autotest_lib.client.common_lib import error, revision_control
+from autotest_lib.client.common_lib import installable_object
 from autotest_lib.client.bin import os_dep
-from autotest_lib.server import utils, installable_object
+from autotest_lib.server import utils
 
 
 class InstallableGitRepo(installable_object.InstallableObject):
diff --git a/server/installable_object.py b/server/installable_object.py
deleted file mode 100644
index bfb8a47..0000000
--- a/server/installable_object.py
+++ /dev/null
@@ -1,38 +0,0 @@
-from autotest_lib.server import utils
-
-
-class InstallableObject(object):
-    """
-    This class represents a software package that can be installed on
-    a Host.
-
-    Implementation details:
-    This is an abstract class; leaf subclasses must implement the methods
-    listed here. You must not instantiate this class but should
-    instantiate one of those leaf subclasses.
-    """
-
-    source_material= None
-
-    def __init__(self):
-        super(InstallableObject, self).__init__()
-
-
-    def get(self, location):
-        """
-        Get the source material required to install the object.
-
-        Through the utils.get() function, the argument passed will be
-        saved in a temporary location on the LocalHost. That location
-        is saved in the source_material attribute.
-
-        Args:
-                location: the path to the source material. This path
-                        may be of any type that the utils.get()
-                        function will accept.
-        """
-        self.source_material= utils.get(location)
-
-
-    def install(self, host):
-        pass
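
Since InstallableObject moves verbatim to client/common_lib, a hypothetical
leaf subclass under the new import path would look like:

    from autotest_lib.client.common_lib import installable_object

    class ExampleTool(installable_object.InstallableObject):
        """Hypothetical installable; get() is inherited and fills
        self.source_material via utils.get()."""

        def install(self, host):
            # Push the fetched material onto the target host.
            host.send_file(self.source_material, '/usr/local/example_tool')
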
diff --git a/server/kernel.py b/server/kernel.py
index 8329caa..d21da40 100644
--- a/server/kernel.py
+++ b/server/kernel.py
@@ -7,7 +7,7 @@ This module defines the Kernel class
 """
 
 
-from autotest_lib.server import installable_object
+from autotest_lib.client.common_lib import installable_object
 
 
 class Kernel(installable_object.InstallableObject):
diff --git a/server/prebuild.py b/server/prebuild.py
deleted file mode 100644
index b13fd7b..0000000
--- a/server/prebuild.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright 2010 Google Inc. Released under the GPL v2
-#
-# Eric Li <ericli@google.com>
-
-import logging, os, pickle, re, sys
-import common
-from autotest_lib.client.bin import setup_job as client_setup_job
-
-
-def touch_init(parent_dir, child_dir):
-    """
-    Touch an __init__.py file in every directory from parent_dir down to
-    child_dir.
-
-    This lets client tests be loaded as Python modules. Assumes child_dir
-    is a subdirectory of parent_dir.
-    """
-
-    if not child_dir.startswith(parent_dir):
-        logging.error('%s is not a subdirectory of %s' % (child_dir,
-                                                          parent_dir))
-        return
-    sub_parent_dirs = parent_dir.split(os.path.sep)
-    sub_child_dirs = child_dir.split(os.path.sep)
-    for sub_dir in sub_child_dirs[len(sub_parent_dirs):]:
-        sub_parent_dirs.append(sub_dir)
-        path = os.path.sep.join(sub_parent_dirs)
-        init_py = os.path.join(path, '__init__.py')
-        open(init_py, 'a').close()
-
-
-def init_test(testdir):
-    """
-    Instantiate a client test object from a given test directory.
-
-    @param testdir The test directory.
-    @returns A test object or None if failed to instantiate.
-    """
-
-    class options:
-        tag = ''
-        verbose = None
-        cont = False
-        harness = 'autoserv'
-        hostname = None
-        user = None
-        log = True
-    return client_setup_job.init_test(options, testdir)
-
-
-def setup(autotest_client_dir, client_test_dir):
-    """
-    Setup prebuild of a client test.
-
-    @param autotest_client_dir: The autotest/client base directory.
-    @param client_test_dir: The actual test directory under client.
-    """
-
-    os.environ['AUTODIR'] = autotest_client_dir
-    touch_init(autotest_client_dir, client_test_dir)
-
-    # instantiate a client_test instance.
-    client_test = init_test(client_test_dir)
-    client_setup_job.setup_test(client_test)
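
For clarity, touch_init() above creates one __init__.py per directory level
between its two arguments; with illustrative paths:

    # touch_init('/at/client', '/at/client/tests/foo/src') creates:
    #   /at/client/tests/__init__.py
    #   /at/client/tests/foo/__init__.py
    #   /at/client/tests/foo/src/__init__.py
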
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side to client side.
  2011-08-26  7:12 Add ability client part starts autotest like server part Jiří Župka
  2011-08-26  7:12 ` [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to client part of autotest Jiří Župka
@ 2011-08-26  7:12 ` Jiří Župka
  2011-08-26  7:12 ` [AUTOTEST][PATCH 3/3] autotest: Client/server part unification Jiří Župka
  2011-08-26 16:22 ` Add ability client part starts autotest like server part Lucas Meneghel Rodrigues
  3 siblings, 0 replies; 6+ messages in thread
From: Jiří Župka @ 2011-08-26  7:12 UTC (permalink / raw)
  To: kvm-autotest, kvm, autotest, lmr, ldoktor, akong

Signed-off-by: Jiří Župka <jzupka@redhat.com>
---
 client/bin/local_host.py                           |    6 +-
 client/common_lib/base_hosts/__init__.py           |    6 +
 client/common_lib/base_hosts/base_classes.py       |  709 ++++++++++++++++++++
 .../common_lib/base_hosts/base_classes_unittest.py |   43 ++
 client/common_lib/base_hosts/common.py             |    8 +
 client/common_lib/base_packages.py                 |    4 +-
 client/common_lib/base_utils.py                    |   92 +++
 client/common_lib/hosts/__init__.py                |   25 +-
 client/common_lib/hosts/abstract_ssh.py            |  608 +++++++++++++++++
 client/common_lib/hosts/base_classes.py            |  682 +------------------
 client/common_lib/hosts/base_classes_unittest.py   |  103 +++-
 client/common_lib/hosts/bootloader.py              |   67 ++
 client/common_lib/hosts/bootloader_unittest.py     |   97 +++
 client/common_lib/hosts/factory.py                 |   83 +++
 client/common_lib/hosts/guest.py                   |   70 ++
 client/common_lib/hosts/kvm_guest.py               |   46 ++
 client/common_lib/hosts/logfile_monitor.py         |  289 ++++++++
 client/common_lib/hosts/monitors/common.py         |    7 +
 client/common_lib/hosts/monitors/console.py        |   88 +++
 client/common_lib/hosts/monitors/console_patterns  |   71 ++
 .../hosts/monitors/console_patterns_test.py        |   53 ++
 client/common_lib/hosts/monitors/followfiles.py    |   27 +
 client/common_lib/hosts/monitors/monitors_util.py  |  379 +++++++++++
 .../hosts/monitors/monitors_util_unittest.py       |  177 +++++
 client/common_lib/hosts/netconsole.py              |  160 +++++
 client/common_lib/hosts/paramiko_host.py           |  310 +++++++++
 client/common_lib/hosts/remote.py                  |  272 ++++++++
 client/common_lib/hosts/remote_unittest.py         |   16 +
 client/common_lib/hosts/serial.py                  |  183 +++++
 client/common_lib/hosts/site_factory.py            |    4 +
 client/common_lib/hosts/ssh_host.py                |  245 +++++++
 client/common_lib/subcommand.py                    |  263 ++++++++
 client/common_lib/subcommand_unittest.py           |  443 ++++++++++++
 scheduler/drone_utility.py                         |    2 +-
 scheduler/drones_unittest.py                       |    2 +-
 server/autotest_unittest.py                        |    4 +-
 server/base_utils.py                               |   95 +---
 server/deb_kernel_unittest.py                      |    6 +-
 server/hosts/__init__.py                           |   32 -
 server/hosts/abstract_ssh.py                       |  608 -----------------
 server/hosts/base_classes.py                       |   80 ---
 server/hosts/base_classes_unittest.py              |  112 ---
 server/hosts/bootloader.py                         |   67 --
 server/hosts/bootloader_unittest.py                |   97 ---
 server/hosts/common.py                             |    8 -
 server/hosts/factory.py                            |   83 ---
 server/hosts/guest.py                              |   70 --
 server/hosts/kvm_guest.py                          |   46 --
 server/hosts/logfile_monitor.py                    |  290 --------
 server/hosts/monitors/common.py                    |    8 -
 server/hosts/monitors/console.py                   |   88 ---
 server/hosts/monitors/console_patterns             |   71 --
 server/hosts/monitors/console_patterns_test.py     |   53 --
 server/hosts/monitors/followfiles.py               |   27 -
 server/hosts/monitors/monitors_util.py             |  379 -----------
 server/hosts/monitors/monitors_util_unittest.py    |  177 -----
 server/hosts/netconsole.py                         |  160 -----
 server/hosts/paramiko_host.py                      |  310 ---------
 server/hosts/remote.py                             |  272 --------
 server/hosts/remote_unittest.py                    |   16 -
 server/hosts/serial.py                             |  184 -----
 server/hosts/site_factory.py                       |    4 -
 server/hosts/ssh_host.py                           |  245 -------
 server/kvm.py                                      |    4 +-
 server/profilers.py                                |    4 +-
 server/rpm_kernel_unittest.py                      |    6 +-
 server/server_job.py                               |   13 +-
 server/source_kernel_unittest.py                   |    3 +-
 server/subcommand.py                               |  263 --------
 server/subcommand_unittest.py                      |  443 ------------
 server/test.py                                     |    3 +-
 server/tests/iperf/iperf.py                        |    3 +-
 server/tests/netperf2/netperf2.py                  |    4 +-
 server/tests/netpipe/netpipe.py                    |    4 +-
 server/tests/reinstall/control                     |    2 +-
 75 files changed, 4984 insertions(+), 5000 deletions(-)
 create mode 100644 client/common_lib/base_hosts/__init__.py
 create mode 100644 client/common_lib/base_hosts/base_classes.py
 create mode 100755 client/common_lib/base_hosts/base_classes_unittest.py
 create mode 100644 client/common_lib/base_hosts/common.py
 create mode 100644 client/common_lib/hosts/abstract_ssh.py
 create mode 100644 client/common_lib/hosts/bootloader.py
 create mode 100755 client/common_lib/hosts/bootloader_unittest.py
 create mode 100644 client/common_lib/hosts/factory.py
 create mode 100644 client/common_lib/hosts/guest.py
 create mode 100644 client/common_lib/hosts/kvm_guest.py
 create mode 100644 client/common_lib/hosts/logfile_monitor.py
 create mode 100644 client/common_lib/hosts/monitors/__init__.py
 create mode 100644 client/common_lib/hosts/monitors/common.py
 create mode 100755 client/common_lib/hosts/monitors/console.py
 create mode 100644 client/common_lib/hosts/monitors/console_patterns
 create mode 100755 client/common_lib/hosts/monitors/console_patterns_test.py
 create mode 100755 client/common_lib/hosts/monitors/followfiles.py
 create mode 100644 client/common_lib/hosts/monitors/monitors_util.py
 create mode 100755 client/common_lib/hosts/monitors/monitors_util_unittest.py
 create mode 100644 client/common_lib/hosts/netconsole.py
 create mode 100644 client/common_lib/hosts/paramiko_host.py
 create mode 100644 client/common_lib/hosts/remote.py
 create mode 100755 client/common_lib/hosts/remote_unittest.py
 create mode 100644 client/common_lib/hosts/serial.py
 create mode 100644 client/common_lib/hosts/site_factory.py
 create mode 100644 client/common_lib/hosts/ssh_host.py
 create mode 100644 client/common_lib/subcommand.py
 create mode 100755 client/common_lib/subcommand_unittest.py
 delete mode 100644 server/hosts/__init__.py
 delete mode 100644 server/hosts/abstract_ssh.py
 delete mode 100644 server/hosts/base_classes.py
 delete mode 100755 server/hosts/base_classes_unittest.py
 delete mode 100644 server/hosts/bootloader.py
 delete mode 100755 server/hosts/bootloader_unittest.py
 delete mode 100644 server/hosts/common.py
 delete mode 100644 server/hosts/factory.py
 delete mode 100644 server/hosts/guest.py
 delete mode 100644 server/hosts/kvm_guest.py
 delete mode 100644 server/hosts/logfile_monitor.py
 delete mode 100644 server/hosts/monitors/__init__.py
 delete mode 100644 server/hosts/monitors/common.py
 delete mode 100755 server/hosts/monitors/console.py
 delete mode 100644 server/hosts/monitors/console_patterns
 delete mode 100755 server/hosts/monitors/console_patterns_test.py
 delete mode 100755 server/hosts/monitors/followfiles.py
 delete mode 100644 server/hosts/monitors/monitors_util.py
 delete mode 100755 server/hosts/monitors/monitors_util_unittest.py
 delete mode 100644 server/hosts/netconsole.py
 delete mode 100644 server/hosts/paramiko_host.py
 delete mode 100644 server/hosts/remote.py
 delete mode 100755 server/hosts/remote_unittest.py
 delete mode 100644 server/hosts/serial.py
 delete mode 100644 server/hosts/site_factory.py
 delete mode 100644 server/hosts/ssh_host.py
 delete mode 100644 server/subcommand.py
 delete mode 100755 server/subcommand_unittest.py

diff --git a/client/bin/local_host.py b/client/bin/local_host.py
index 377ea14..542b7a4 100644
--- a/client/bin/local_host.py
+++ b/client/bin/local_host.py
@@ -5,10 +5,10 @@ This file contains the implementation of a host object for the local machine.
 """
 
 import glob, os, platform
-from autotest_lib.client.common_lib import hosts, error
+from autotest_lib.client.common_lib import error, base_hosts
 from autotest_lib.client.bin import utils
 
-class LocalHost(hosts.Host):
+class LocalHost(base_hosts.Host):
     def _initialize(self, hostname=None, bootloader=None, *args, **dargs):
         super(LocalHost, self)._initialize(*args, **dargs)
 
@@ -29,7 +29,7 @@ class LocalHost(hosts.Host):
             stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS,
             stdin=None, args=()):
         """
-        @see common_lib.hosts.Host.run()
+        @see common_lib.base_hosts.Host.run()
         """
         try:
             result = utils.run(
diff --git a/client/common_lib/base_hosts/__init__.py b/client/common_lib/base_hosts/__init__.py
new file mode 100644
index 0000000..c2b42ca
--- /dev/null
+++ b/client/common_lib/base_hosts/__init__.py
@@ -0,0 +1,6 @@
+from autotest_lib.client.common_lib import utils
+import base_classes
+
+Host = utils.import_site_class(
+    __file__, "autotest_lib.client.common_lib.base_hosts.site_host", "SiteHost",
+    base_classes.Host)
\ No newline at end of file
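
The new base_hosts/__init__.py resolves Host through utils.import_site_class, the hook that lets a site drop in its own SiteHost subclass without patching core code. Roughly, the helper tries the site-specific module first and falls back to the stock class; a minimal sketch of that fallback (the names and signature below are illustrative, not the real implementation):

    def import_site_class(module, classname, baseclass):
        # try the site-specific module first...
        try:
            mod = __import__(module, {}, {}, [classname])
            return getattr(mod, classname)
        except ImportError:
            # ...and fall back to the stock class when no site
            # customization is installed
            return baseclass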
diff --git a/client/common_lib/base_hosts/base_classes.py b/client/common_lib/base_hosts/base_classes.py
new file mode 100644
index 0000000..b267e79
--- /dev/null
+++ b/client/common_lib/base_hosts/base_classes.py
@@ -0,0 +1,709 @@
+# Copyright 2009 Google Inc. Released under the GPL v2
+
+"""
+This module defines the base classes for the Host hierarchy.
+
+Implementation details:
+You should import the "hosts" package instead of importing each type of host.
+
+        Host: a machine on which you can run programs
+"""
+
+__author__ = """
+mbligh@google.com (Martin J. Bligh),
+poirier@google.com (Benjamin Poirier),
+stutsman@google.com (Ryan Stutsman)
+"""
+
+import cPickle, cStringIO, logging, os, re, time
+
+from autotest_lib.client.common_lib import global_config, error, utils
+from autotest_lib.client.common_lib import host_protections
+from autotest_lib.client.bin import partition
+
+
+class Host(object):
+    """
+    This class represents a machine on which you can run programs.
+
+    It may be a local machine, the one autoserv is running on, a remote
+    machine or a virtual machine.
+
+    Implementation details:
+    This is an abstract class, leaf subclasses must implement the methods
+    listed here. You must not instantiate this class but should
+    instantiate one of those leaf subclasses.
+
+    When overriding methods that raise NotImplementedError, the leaf class
+    is fully responsible for the implementation and should not chain calls
+    to super. When overriding methods that are a NOP in Host, the subclass
+    should chain calls to super(). The criteria for fitting a new method into
+    one category or the other should be:
+        1. If two separate generic implementations could reasonably be
+           concatenated, then the abstract implementation should pass and
+           subclasses should chain calls to super.
+        2. If only one class could reasonably perform the stated function
+           (e.g. two separate run() implementations cannot both be executed)
+           then the method should raise NotImplementedError in Host, and
+           the implementor should NOT chain calls to super, to ensure that
+           only one implementation ever gets executed.
+    """
+
+    job = None
+    DEFAULT_REBOOT_TIMEOUT = 1800
+    WAIT_DOWN_REBOOT_TIMEOUT = 840
+    WAIT_DOWN_REBOOT_WARNING = 540
+    HOURS_TO_WAIT_FOR_RECOVERY = 2.5
+    # the number of hardware repair requests that need to happen before we
+    # actually send machines to hardware repair
+    HARDWARE_REPAIR_REQUEST_THRESHOLD = 4
+
+
+    def __init__(self, *args, **dargs):
+        self._initialize(*args, **dargs)
+
+
+    def _initialize(self, *args, **dargs):
+        self._already_repaired = []
+        self._removed_files = False
+
+
+    def close(self):
+        pass
+
+
+    def setup(self):
+        pass
+
+
+    def run(self, command, timeout=3600, ignore_status=False,
+            stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS,
+            stdin=None, args=()):
+        """
+        Run a command on this host.
+
+        @param command: the command line string
+        @param timeout: time limit in seconds before attempting to
+                kill the running process. The run() function
+                will take a few seconds longer than 'timeout'
+                to complete if it has to kill the process.
+        @param ignore_status: do not raise an exception, no matter
+                what the exit code of the command is.
+        @param stdout_tee/stderr_tee: where to tee the stdout/stderr
+        @param stdin: stdin to pass (a string) to the executed command
+        @param args: sequence of strings to pass as arguments to command by
+                quoting them in " and escaping their contents if necessary
+
+        @return a utils.CmdResult object
+
+        @raises AutotestHostRunError: the exit code of the command execution
+                was not 0 and ignore_status was not enabled
+        """
+        raise NotImplementedError('Run not implemented!')
+
+
+    def run_output(self, command, *args, **dargs):
+        return self.run(command, *args, **dargs).stdout.rstrip()
+
+
+    def reboot(self):
+        raise NotImplementedError('Reboot not implemented!')
+
+
+    def sysrq_reboot(self):
+        raise NotImplementedError('Sysrq reboot not implemented!')
+
+
+    def reboot_setup(self, *args, **dargs):
+        pass
+
+
+    def reboot_followup(self, *args, **dargs):
+        pass
+
+
+    def get_file(self, source, dest, delete_dest=False):
+        raise NotImplementedError('Get file not implemented!')
+
+
+    def send_file(self, source, dest, delete_dest=False):
+        raise NotImplementedError('Send file not implemented!')
+
+
+    def get_tmp_dir(self):
+        raise NotImplementedError('Get temp dir not implemented!')
+
+
+    def is_up(self):
+        raise NotImplementedError('Is up not implemented!')
+
+
+    def is_shutting_down(self):
+        """ Indicates is a machine is currently shutting down. """
+        # runlevel() may not be available, so wrap it in try block.
+        try:
+            runlevel = int(self.run("runlevel").stdout.strip().split()[1])
+            return runlevel in (0, 6)
+        except:
+            return False
+
+
+    def get_wait_up_processes(self):
+        """ Gets the list of local processes to wait for in wait_up. """
+        get_config = global_config.global_config.get_config_value
+        proc_list = get_config("HOSTS", "wait_up_processes",
+                               default="").strip()
+        processes = set(p.strip() for p in proc_list.split(","))
+        processes.discard("")
+        return processes
+
+
+    def get_boot_id(self, timeout=60):
+        """ Get a unique ID associated with the current boot.
+
+        Should return a string with the semantics such that two separate
+        calls to Host.get_boot_id() return the same string if the host did
+        not reboot between the two calls, and two different strings if it
+        has rebooted at least once between the two calls.
+
+        @param timeout The number of seconds to wait before timing out.
+
+        @return A string unique to this boot or None if not available."""
+        BOOT_ID_FILE = '/proc/sys/kernel/random/boot_id'
+        NO_ID_MSG = 'no boot_id available'
+        cmd = 'if [ -f %r ]; then cat %r; else echo %r; fi' % (
+                BOOT_ID_FILE, BOOT_ID_FILE, NO_ID_MSG)
+        boot_id = self.run(cmd, timeout=timeout).stdout.strip()
+        if boot_id == NO_ID_MSG:
+            return None
+        return boot_id
+
+
+    def wait_up(self, timeout=None):
+        raise NotImplementedError('Wait up not implemented!')
+
+
+    def wait_down(self, timeout=None, warning_timer=None, old_boot_id=None):
+        raise NotImplementedError('Wait down not implemented!')
+
+
+    def wait_for_restart(self, timeout=DEFAULT_REBOOT_TIMEOUT,
+                         log_failure=True, old_boot_id=None, **dargs):
+        """ Wait for the host to come back from a reboot. This is a generic
+        implementation based entirely on wait_up and wait_down. """
+        if not self.wait_down(timeout=self.WAIT_DOWN_REBOOT_TIMEOUT,
+                              warning_timer=self.WAIT_DOWN_REBOOT_WARNING,
+                              old_boot_id=old_boot_id):
+            if log_failure:
+                self.record("ABORT", None, "reboot.verify", "shut down failed")
+            raise error.AutoservShutdownError("Host did not shut down")
+
+        self.wait_up(timeout)
+        time.sleep(2)    # this is needed for complete reliability
+        if self.wait_up(timeout):
+            self.record("GOOD", None, "reboot.verify")
+            self.reboot_followup(**dargs)
+        else:
+            self.record("ABORT", None, "reboot.verify",
+                        "Host did not return from reboot")
+            raise error.AutoservRebootError("Host did not return from reboot")
+
+
+    def verify(self):
+        self.verify_hardware()
+        self.verify_connectivity()
+        self.verify_software()
+
+
+    def verify_hardware(self):
+        pass
+
+
+    def verify_connectivity(self):
+        pass
+
+
+    def verify_software(self):
+        pass
+
+
+    def check_diskspace(self, path, gb):
+        """Raises an error if path does not have at least gb GB free.
+
+        @param path The path to check for free disk space.
+        @param gb A floating point number to compare with a granularity
+            of 1 MB.
+
+        1000 based SI units are used.
+
+        @raises AutoservDiskFullHostError if path has less than gb GB free.
+        """
+        one_mb = 10**6  # Bytes (SI unit).
+        mb_per_gb = 1000.0
+        logging.info('Checking for >= %s GB of space under %s on machine %s',
+                     gb, path, self.hostname)
+        df = self.run('df -PB %d %s | tail -1' % (one_mb, path)).stdout.split()
+        free_space_gb = int(df[3])/mb_per_gb
+        if free_space_gb < gb:
+            raise error.AutoservDiskFullHostError(path, gb, free_space_gb)
+        else:
+            logging.info('Found %s GB >= %s GB of space under %s on machine %s',
+                free_space_gb, gb, path, self.hostname)
+
+
+    def get_open_func(self, use_cache=True):
+        """
+        Defines and returns a function that may be used instead of built-in
+        open() to open and read files. The returned function is implemented
+        by using self.run('cat <file>') and may cache the results for the same
+        filename.
+
+        @param use_cache Cache results of self.run('cat <filename>') for the
+            same filename
+
+        @return a function that can be used instead of built-in open()
+        """
+        cached_files = {}
+
+        def open_func(filename):
+            if not use_cache or filename not in cached_files:
+                output = self.run('cat \'%s\'' % filename,
+                                  stdout_tee=open('/dev/null', 'w')).stdout
+                fd = cStringIO.StringIO(output)
+
+                if not use_cache:
+                    return fd
+
+                cached_files[filename] = fd
+            else:
+                cached_files[filename].seek(0)
+
+            return cached_files[filename]
+
+        return open_func
+
+
+    def check_partitions(self, root_part, filter_func=None):
+        """ Compare the contents of /proc/partitions with those of
+        /proc/mounts and raise exception in case unmounted partitions are found
+
+        root_part: in Linux /proc/mounts will never directly mention the root
+        partition as being mounted on /; instead it says that /dev/root is
+        mounted on /. This argument is therefore required to filter the
+        root_part out of the partitions checked for being mounted
+
+        filter_func: unary predicate for additional filtering out of
+        partitions required to be mounted
+
+        Raise: error.AutoservHostError if an unfiltered unmounted partition is found
+        """
+
+        print 'Checking if non-swap partitions are mounted...'
+
+        unmounted = partition.get_unmounted_partition_list(root_part,
+            filter_func=filter_func, open_func=self.get_open_func())
+        if unmounted:
+            raise error.AutoservNotMountedHostError(
+                'Found unmounted partitions: %s' %
+                [part.device for part in unmounted])
+
+
+    def _repair_wait_for_reboot(self):
+        TIMEOUT = int(self.HOURS_TO_WAIT_FOR_RECOVERY * 3600)
+        if self.is_shutting_down():
+            logging.info('Host is shutting down, waiting for a restart')
+            self.wait_for_restart(TIMEOUT)
+        else:
+            self.wait_up(TIMEOUT)
+
+
+    def _get_mountpoint(self, path):
+        """Given a "path" get the mount point of the filesystem containing
+        that path."""
+        code = ('import os\n'
+                # sanitize the path and resolve symlinks
+                'path = os.path.realpath(%r)\n'
+                "while path != '/' and not os.path.ismount(path):\n"
+                '    path, _ = os.path.split(path)\n'
+                'print path\n') % path
+        return self.run('python -c "%s"' % code,
+                        stdout_tee=open(os.devnull, 'w')).stdout.rstrip()
+
+
+    def erase_dir_contents(self, path, ignore_status=True, timeout=3600):
+        """Empty a given directory path contents."""
+        rm_cmd = 'find "%s" -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf'
+        self.run(rm_cmd % path, ignore_status=ignore_status, timeout=timeout)
+        self._removed_files = True
+
+
+    def repair_full_disk(self, mountpoint):
+        # it's safe to remove /tmp and /var/tmp, site specific overrides may
+        # want to remove some other places too
+        if mountpoint == self._get_mountpoint('/tmp'):
+            self.erase_dir_contents('/tmp')
+
+        if mountpoint == self._get_mountpoint('/var/tmp'):
+            self.erase_dir_contents('/var/tmp')
+
+
+    def _call_repair_func(self, err, func, *args, **dargs):
+        for old_call in self._already_repaired:
+            if old_call == (func, args, dargs):
+                # re-raising the original exception because surrounding
+                # error handling may want to try other ways to fix it
+                logging.warn('Already done this (%s) repair procedure, '
+                             're-raising the original exception.', func)
+                raise err
+
+        try:
+            func(*args, **dargs)
+        except (error.AutoservHardwareRepairRequestedError,
+                error.AutoservHardwareRepairRequiredError):
+            # let these special exceptions propagate
+            raise
+        except error.AutoservError:
+            logging.exception('Repair failed but continuing in case it managed'
+                              ' to repair enough')
+
+        self._already_repaired.append((func, args, dargs))
+
+
+    def repair_filesystem_only(self):
+        """perform file system repairs only"""
+        while True:
+            # try to repair specific problems
+            try:
+                logging.info('Running verify to find failures to repair...')
+                self.verify()
+                if self._removed_files:
+                    logging.info('Removed files, rebooting to release the'
+                                 ' inodes')
+                    self.reboot()
+                return # verify succeeded, then repair succeeded
+            except error.AutoservHostIsShuttingDownError, err:
+                logging.exception('verify failed')
+                self._call_repair_func(err, self._repair_wait_for_reboot)
+            except error.AutoservDiskFullHostError, err:
+                logging.exception('verify failed')
+                self._call_repair_func(err, self.repair_full_disk,
+                                       self._get_mountpoint(err.path))
+
+
+    def repair_software_only(self):
+        """perform software repairs only"""
+        while True:
+            try:
+                self.repair_filesystem_only()
+                break
+            except (error.AutoservSshPingHostError, error.AutoservSSHTimeout,
+                    error.AutoservSshPermissionDeniedError,
+                    error.AutoservDiskFullHostError), err:
+                logging.exception('verify failed')
+                logging.info('Trying to reinstall the machine')
+                self._call_repair_func(err, self.machine_install)
+
+
+    def repair_full(self):
+        hardware_repair_requests = 0
+        while True:
+            try:
+                self.repair_software_only()
+                break
+            except error.AutoservHardwareRepairRequiredError, err:
+                logging.exception('software repair failed, '
+                                  'hardware repair requested')
+                hardware_repair_requests += 1
+                try_hardware_repair = (hardware_repair_requests >=
+                                       self.HARDWARE_REPAIR_REQUEST_THRESHOLD)
+                if try_hardware_repair:
+                    logging.info('hardware repair requested %d times, '
+                                 'trying hardware repair',
+                                 hardware_repair_requests)
+                    self._call_repair_func(err, self.request_hardware_repair)
+                else:
+                    logging.info('hardware repair requested %d times, '
+                                 'trying software repair again',
+                                 hardware_repair_requests)
+            except error.AutoservHardwareHostError, err:
+                logging.exception('verify failed')
+                # software repair failed, try hardware repair
+                logging.info('Hardware problem found, '
+                             'requesting hardware repairs')
+                self._call_repair_func(err, self.request_hardware_repair)
+
+
+    def repair_with_protection(self, protection_level):
+        """Perform the maximal amount of repair within the specified
+        protection level.
+
+        @param protection_level: the protection level to use for limiting
+                                 repairs, a host_protections.Protection
+        """
+        protection = host_protections.Protection
+        if protection_level == protection.DO_NOT_REPAIR:
+            logging.info('Protection is "Do not repair" so just verifying')
+            self.verify()
+        elif protection_level == protection.REPAIR_FILESYSTEM_ONLY:
+            logging.info('Attempting filesystem-only repair')
+            self.repair_filesystem_only()
+        elif protection_level == protection.REPAIR_SOFTWARE_ONLY:
+            logging.info('Attempting software repair only')
+            self.repair_software_only()
+        elif protection_level == protection.NO_PROTECTION:
+            logging.info('Attempting full repair')
+            self.repair_full()
+        else:
+            raise NotImplementedError('Unknown host protection level %s'
+                                      % protection_level)
+
+
+    def disable_ipfilters(self):
+        """Allow all network packets in and out of the host."""
+        self.run('iptables-save > /tmp/iptable-rules')
+        self.run('iptables -P INPUT ACCEPT')
+        self.run('iptables -P FORWARD ACCEPT')
+        self.run('iptables -P OUTPUT ACCEPT')
+
+
+    def enable_ipfilters(self):
+        """Re-enable the IP filters disabled from disable_ipfilters()"""
+        if self.path_exists('/tmp/iptable-rules'):
+            self.run('iptables-restore < /tmp/iptable-rules')
+
+
+    def cleanup(self):
+        pass
+
+
+    def machine_install(self):
+        raise NotImplementedError('Machine install not implemented!')
+
+
+    def install(self, installableObject):
+        installableObject.install(self)
+
+
+    def get_autodir(self):
+        raise NotImplementedError('Get autodir not implemented!')
+
+
+    def set_autodir(self):
+        raise NotImplementedError('Set autodir not implemented!')
+
+
+    def start_loggers(self):
+        """ Called to start continuous host logging. """
+        pass
+
+
+    def stop_loggers(self):
+        """ Called to stop continuous host logging. """
+        pass
+
+
+    # some extra methods simplify the retrieval of information about the
+    # Host machine, with generic implementations based on run(). subclasses
+    # should feel free to override these if they can provide better
+    # implementations for their specific Host types
+
+    def get_num_cpu(self):
+        """ Get the number of CPUs in the host according to /proc/cpuinfo. """
+        proc_cpuinfo = self.run('cat /proc/cpuinfo',
+                                stdout_tee=open(os.devnull, 'w')).stdout
+        cpus = 0
+        for line in proc_cpuinfo.splitlines():
+            if line.startswith('processor'):
+                cpus += 1
+        return cpus
+
+
+    def get_arch(self):
+        """ Get the hardware architecture of the remote machine. """
+        arch = self.run('/bin/uname -m').stdout.rstrip()
+        if re.match(r'i\d86$', arch):
+            arch = 'i386'
+        return arch
+
+
+    def get_kernel_ver(self):
+        """ Get the kernel version of the remote machine. """
+        return self.run('/bin/uname -r').stdout.rstrip()
+
+
+    def get_cmdline(self):
+        """ Get the kernel command line of the remote machine. """
+        return self.run('cat /proc/cmdline').stdout.rstrip()
+
+
+    def get_meminfo(self):
+        """ Get the kernel memory info (/proc/meminfo) of the remote machine
+        and return a dictionary mapping the various statistics. """
+        meminfo_dict = {}
+        meminfo = self.run('cat /proc/meminfo').stdout.splitlines()
+        for key, val in (line.split(':', 1) for line in meminfo):
+            meminfo_dict[key.strip()] = val.strip()
+        return meminfo_dict
+
+
+    def path_exists(self, path):
+        """ Determine if path exists on the remote machine. """
+        result = self.run('ls "%s" > /dev/null' % utils.sh_escape(path),
+                          ignore_status=True)
+        return result.exit_status == 0
+
+
+    # some extra helpers for doing job-related operations
+
+    def record(self, *args, **dargs):
+        """ Helper method for recording status logs against Host.job that
+        silently becomes a NOP if Host.job is not available. The args and
+        dargs are passed on to Host.job.record unchanged. """
+        if self.job:
+            self.job.record(*args, **dargs)
+
+
+    def log_kernel(self):
+        """ Helper method for logging kernel information into the status logs.
+        Intended for cases where the "current" kernel is not really defined
+        and we want to explicitly log it. Does nothing if this host isn't
+        actually associated with a job. """
+        if self.job:
+            kernel = self.get_kernel_ver()
+            self.job.record("INFO", None, None,
+                            optional_fields={"kernel": kernel})
+
+
+    def log_reboot(self, reboot_func):
+        """ Decorator for wrapping a reboot in a group for status
+        logging purposes. The reboot_func parameter should be an actual
+        function that carries out the reboot.
+        """
+        if self.job and not hasattr(self, "RUNNING_LOG_REBOOT"):
+            self.RUNNING_LOG_REBOOT = True
+            try:
+                self.job.run_reboot(reboot_func, self.get_kernel_ver)
+            finally:
+                del self.RUNNING_LOG_REBOOT
+        else:
+            reboot_func()
+
+
+    def request_hardware_repair(self):
+        """ Should somehow request (send a mail?) for hardware repairs on
+        this machine. The implementation can either return by raising the
+        special error.AutoservHardwareRepairRequestedError exception or can
+        try to wait until the machine is repaired and then return normally.
+        """
+        raise NotImplementedError("request_hardware_repair not implemented")
+
+
+    def list_files_glob(self, glob):
+        """
+        Get a list of files on a remote host given a glob pattern path.
+        """
+        SCRIPT = ("python -c 'import cPickle, glob, sys;"
+                  "cPickle.dump(glob.glob(sys.argv[1]), sys.stdout, 0)'")
+        output = self.run(SCRIPT, args=(glob,), stdout_tee=None,
+                          timeout=60).stdout
+        return cPickle.loads(output)
+
+
+    def symlink_closure(self, paths):
+        """
+        Given a sequence of path strings, return the set of all paths that
+        can be reached from the initial set by following symlinks.
+
+        @param paths: sequence of path strings.
+        @return: a sequence of path strings that are all the unique paths that
+                can be reached from the given ones after following symlinks.
+        """
+        SCRIPT = ("python -c 'import cPickle, os, sys\n"
+                  "paths = cPickle.load(sys.stdin)\n"
+                  "closure = {}\n"
+                  "while paths:\n"
+                  "    path = paths.keys()[0]\n"
+                  "    del paths[path]\n"
+                  "    if not os.path.exists(path):\n"
+                  "        continue\n"
+                  "    closure[path] = None\n"
+                  "    if os.path.islink(path):\n"
+                  "        link_to = os.path.join(os.path.dirname(path),\n"
+                  "                               os.readlink(path))\n"
+                  "        if link_to not in closure.keys():\n"
+                  "            paths[link_to] = None\n"
+                  "cPickle.dump(closure.keys(), sys.stdout, 0)'")
+        input_data = cPickle.dumps(dict((path, None) for path in paths), 0)
+        output = self.run(SCRIPT, stdout_tee=None, stdin=input_data,
+                          timeout=60).stdout
+        return cPickle.loads(output)
+
+
+    def cleanup_kernels(self, boot_dir='/boot'):
+        """
+        Remove any kernel image and associated files (vmlinux, system.map,
+        modules) for any image found in the boot directory that is not
+        referenced by entries in the bootloader configuration.
+
+        @param boot_dir: boot directory path string, default '/boot'
+        """
+        # find all the vmlinuz images referenced by the bootloader
+        vmlinuz_prefix = os.path.join(boot_dir, 'vmlinuz-')
+        boot_info = self.bootloader.get_entries()
+        used_kernver = [boot['kernel'][len(vmlinuz_prefix):]
+                        for boot in boot_info.itervalues()]
+
+        # find all the unused vmlinuz images in /boot
+        all_vmlinuz = self.list_files_glob(vmlinuz_prefix + '*')
+        used_vmlinuz = self.symlink_closure(vmlinuz_prefix + kernver
+                                            for kernver in used_kernver)
+        unused_vmlinuz = set(all_vmlinuz) - set(used_vmlinuz)
+
+        # find all the unused vmlinux images in /boot
+        vmlinux_prefix = os.path.join(boot_dir, 'vmlinux-')
+        all_vmlinux = self.list_files_glob(vmlinux_prefix + '*')
+        used_vmlinux = self.symlink_closure(vmlinux_prefix + kernver
+                                            for kernver in used_kernver)
+        unused_vmlinux = set(all_vmlinux) - set(used_vmlinux)
+
+        # find all the unused System.map files in /boot
+        systemmap_prefix = os.path.join(boot_dir, 'System.map-')
+        all_system_map = self.list_files_glob(systemmap_prefix + '*')
+        used_system_map = self.symlink_closure(
+            systemmap_prefix + kernver for kernver in used_kernver)
+        unused_system_map = set(all_system_map) - set(used_system_map)
+
+        # find all the module directories associated with unused kernels
+        modules_prefix = '/lib/modules/'
+        all_moddirs = [dir for dir in self.list_files_glob(modules_prefix + '*')
+                       if re.match(modules_prefix + r'\d+\.\d+\.\d+.*', dir)]
+        used_moddirs = self.symlink_closure(modules_prefix + kernver
+                                            for kernver in used_kernver)
+        unused_moddirs = set(all_moddirs) - set(used_moddirs)
+
+        # remove all the vmlinuz files we don't use
+        # TODO: if needed this should become package manager agnostic
+        for vmlinuz in unused_vmlinuz:
+            # try and get an rpm package name
+            rpm = self.run('rpm -qf', args=(vmlinuz,),
+                           ignore_status=True, timeout=120)
+            if rpm.exit_status == 0:
+                packages = set(line.strip() for line in
+                               rpm.stdout.splitlines())
+                # if we found some package names, try to remove them
+                for package in packages:
+                    self.run('rpm -e', args=(package,),
+                             ignore_status=True, timeout=120)
+            # remove the image files anyway, even if rpm didn't
+            self.run('rm -f', args=(vmlinuz,),
+                     ignore_status=True, timeout=120)
+
+        # remove all the vmlinux and System.map files left over
+        for f in (unused_vmlinux | unused_system_map):
+            self.run('rm -f', args=(f,),
+                     ignore_status=True, timeout=120)
+
+        # remove all unused module directories
+        # the regex match should keep us safe from removing the wrong files
+        for moddir in unused_moddirs:
+            self.run('rm -fr', args=(moddir,), ignore_status=True)
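
To make the override policy spelled out in the Host docstring concrete, here is a minimal sketch of a leaf subclass (LoopbackHost is hypothetical, and for brevity its run() returns raw stdout where the real API returns a utils.CmdResult):

    import subprocess
    from autotest_lib.client.common_lib.base_hosts import base_classes

    class LoopbackHost(base_classes.Host):
        """Toy leaf Host that runs every command on the local machine."""

        def _initialize(self, *args, **dargs):
            # _initialize is a NOP-style hook in Host, so chain to super
            # before adding any state of our own.
            super(LoopbackHost, self)._initialize(*args, **dargs)
            self.hostname = 'localhost'

        def run(self, command, ignore_status=False, **dargs):
            # run() raises NotImplementedError in Host, so we do NOT
            # chain to super: this must be the only implementation.
            proc = subprocess.Popen(command, shell=True,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            stdout, stderr = proc.communicate()
            if proc.returncode != 0 and not ignore_status:
                raise Exception('command failed: %s' % command)
            return stdout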
diff --git a/client/common_lib/base_hosts/base_classes_unittest.py b/client/common_lib/base_hosts/base_classes_unittest.py
new file mode 100755
index 0000000..10e5bfb
--- /dev/null
+++ b/client/common_lib/base_hosts/base_classes_unittest.py
@@ -0,0 +1,43 @@
+#!/usr/bin/python
+
+import unittest
+import common
+
+from autotest_lib.client.common_lib import error, utils
+from autotest_lib.client.common_lib.test_utils import mock
+from autotest_lib.client.common_lib.base_hosts import base_classes
+
+
+class test_host_class(unittest.TestCase):
+    def setUp(self):
+        self.god = mock.mock_god()
+
+
+    def tearDown(self):
+        self.god.unstub_all()
+
+
+    def test_run_output_notimplemented(self):
+        host = base_classes.Host()
+        self.assertRaises(NotImplementedError, host.run_output, "fake command")
+
+
+    def test_check_diskspace(self):
+        self.god.stub_function(base_classes.Host, 'run')
+        host = base_classes.Host()
+        host.hostname = 'unittest-host'
+        test_df_tail = ('/dev/sda1                    1061       939'
+                        '       123      89% /')
+        fake_cmd_status = utils.CmdResult(exit_status=0, stdout=test_df_tail)
+        host.run.expect_call('df -PB 1000000 /foo | tail -1').and_return(
+                fake_cmd_status)
+        self.assertRaises(error.AutoservDiskFullHostError,
+                          host.check_diskspace, '/foo', 0.2)
+        host.run.expect_call('df -PB 1000000 /foo | tail -1').and_return(
+                fake_cmd_status)
+        host.check_diskspace('/foo', 0.1)
+        self.god.check_playback()
+
+
+if __name__ == "__main__":
+    unittest.main()
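
For reference, the thresholds in test_check_diskspace follow directly from check_diskspace()'s SI-unit arithmetic: df -PB 1000000 reports 1 MB blocks, and the fake df line claims 123 of them free:

    free_space_gb = 123 / 1000.0    # == 0.123 GB
    assert free_space_gb < 0.2      # so the first call must raise
    assert free_space_gb >= 0.1     # and the second call must pass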
diff --git a/client/common_lib/base_hosts/common.py b/client/common_lib/base_hosts/common.py
new file mode 100644
index 0000000..ce78b85
--- /dev/null
+++ b/client/common_lib/base_hosts/common.py
@@ -0,0 +1,8 @@
+import os, sys
+dirname = os.path.dirname(sys.modules[__name__].__file__)
+client_dir = os.path.abspath(os.path.join(dirname, "..", ".."))
+sys.path.insert(0, client_dir)
+import setup_modules
+sys.path.pop(0)
+setup_modules.setup(base_path=client_dir,
+                    root_module_name="autotest_lib.client")
diff --git a/client/common_lib/base_packages.py b/client/common_lib/base_packages.py
index cf71949..c4287e3 100644
--- a/client/common_lib/base_packages.py
+++ b/client/common_lib/base_packages.py
@@ -307,7 +307,7 @@ class BasePackageManager(object):
         '''
         Clean up custom upload/download areas
         '''
-        from autotest_lib.server import subcommand
+        from autotest_lib.client.common_lib import subcommand
         if not custom_repos:
             # Not all package types necessarily require or allow custom repos
             try:
@@ -466,7 +466,7 @@ class BasePackageManager(object):
 
     def upload_pkg(self, pkg_path, upload_path=None, update_checksum=False,
                    timeout=300):
-        from autotest_lib.server import subcommand
+        from autotest_lib.client.common_lib import subcommand
         if upload_path:
             upload_path_list = [upload_path]
             self.upkeep(upload_path_list)
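
Both hunks above only retarget the subcommand import from the server tree to common_lib; the call sites stay unchanged. As a hedged sketch of how packaging code typically drives that module (parallel_simple is assumed here; the exact signature may differ between versions):

    from autotest_lib.client.common_lib import subcommand

    def upload_to(repo):
        print 'uploading to %s' % repo

    # fork one subprocess per repository and wait for all of them
    subcommand.parallel_simple(upload_to, ['repo-a', 'repo-b'])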
diff --git a/client/common_lib/base_utils.py b/client/common_lib/base_utils.py
index 26b9fb5..9ed5286 100644
--- a/client/common_lib/base_utils.py
+++ b/client/common_lib/base_utils.py
@@ -1343,11 +1343,103 @@ class run_randomly:
             fn(*args, **dargs)
 
 
+def scp_remote_escape(filename):
+    """
+    Escape special characters from a filename so that it can be passed
+    to scp (within double quotes) as a remote file.
+
+    Double quoting has to be used with scp for remote files: the filename
+    is quoted once for the local shell and once more for the remote one.
+    Note that scp does not support newlines in filenames.
+
+    Args:
+            filename: the filename string to escape.
+
+    Returns:
+            The escaped filename string. The required enclosing double
+            quotes are NOT added and so should be added at some point by
+            the caller.
+    """
+    escape_chars= r' !"$&' "'" r'()*,:;<=>?[\]^`{|}'
+
+    new_name= []
+    for char in filename:
+        if char in escape_chars:
+            new_name.append("\\%s" % (char,))
+        else:
+            new_name.append(char)
+
+    return sh_escape("".join(new_name))
+
+
+def get_public_key():
+    """
+    Return a valid string ssh public key for the user executing autoserv or
+    autotest. If there's no DSA or RSA public key, create a DSA keypair with
+    ssh-keygen and return it.
+    """
+
+    ssh_conf_path = os.path.expanduser('~/.ssh')
+
+    dsa_public_key_path = os.path.join(ssh_conf_path, 'id_dsa.pub')
+    dsa_private_key_path = os.path.join(ssh_conf_path, 'id_dsa')
+
+    rsa_public_key_path = os.path.join(ssh_conf_path, 'id_rsa.pub')
+    rsa_private_key_path = os.path.join(ssh_conf_path, 'id_rsa')
+
+    has_dsa_keypair = os.path.isfile(dsa_public_key_path) and \
+        os.path.isfile(dsa_private_key_path)
+    has_rsa_keypair = os.path.isfile(rsa_public_key_path) and \
+        os.path.isfile(rsa_private_key_path)
+
+    if has_dsa_keypair:
+        print 'DSA keypair found, using it'
+        public_key_path = dsa_public_key_path
+
+    elif has_rsa_keypair:
+        print 'RSA keypair found, using it'
+        public_key_path = rsa_public_key_path
+
+    else:
+        print 'Neither RSA nor DSA keypair found, creating DSA ssh key pair'
+        utils.system('ssh-keygen -t dsa -q -N "" -f %s' % dsa_private_key_path)
+        public_key_path = dsa_public_key_path
+
+    public_key = open(public_key_path, 'r')
+    public_key_str = public_key.read()
+    public_key.close()
+
+    return public_key_str
+
+
 def get_server_dir():
     path = os.path.dirname(sys.modules['autotest_lib.server.utils'].__file__)
     return os.path.abspath(path)
 
 
+def parse_machine(machine, user='root', password='', port=22):
+    """
+    Parse the machine string user:pass@host:port and return it separately,
+    if the machine string is not complete, use the default parameters
+    when appropriate.
+    """
+
+    if '@' in machine:
+        user, machine = machine.split('@', 1)
+
+    if ':' in user:
+        user, password = user.split(':', 1)
+
+    if ':' in machine:
+        machine, port = machine.split(':', 1)
+        port = int(port)
+
+    if not machine or not user:
+        raise ValueError
+
+    return machine, user, password, port
+
+
 # A dictionary of pid and a list of tmpdirs for that pid
 __tmp_dirs = {}
 
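
parse_machine() above can be traced by hand; with an illustrative machine string (host name and credentials here are made up) it behaves like this:

    from autotest_lib.client.common_lib import base_utils

    # full form: user:password@host:port
    print base_utils.parse_machine('jzupka:secret@vm1.example.com:2222')
    # -> ('vm1.example.com', 'jzupka', 'secret', 2222)

    # partial forms fall back to the defaults (root, '', 22)
    print base_utils.parse_machine('vm1.example.com')
    # -> ('vm1.example.com', 'root', '', 22)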
diff --git a/client/common_lib/hosts/__init__.py b/client/common_lib/hosts/__init__.py
index e065821..c13cf49 100644
--- a/client/common_lib/hosts/__init__.py
+++ b/client/common_lib/hosts/__init__.py
@@ -6,9 +6,24 @@ Implementation details:
 You should 'import hosts' instead of importing every available host module.
 """
 
-from autotest_lib.client.common_lib import utils
-import base_classes
+from base_classes import Host
+from remote import RemoteHost
+try:
+    from site_host import SiteHost
+except ImportError, e:
+    pass
 
-Host = utils.import_site_class(
-    __file__, "autotest_lib.client.common_lib.hosts.site_host", "SiteHost",
-    base_classes.Host)
+# host implementation classes
+from ssh_host import SSHHost
+from guest import Guest
+from kvm_guest import KVMGuest
+
+# extra logger classes
+from serial import SerialHost
+from netconsole import NetconsoleHost
+
+# bootloader classes
+from bootloader import Bootloader
+
+# factory function
+from factory import create_host
\ No newline at end of file
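
With these re-exports in place, client-side code can obtain a concrete host the same way server code always has. A minimal usage sketch, assuming the create_host factory keeps its usual one-argument form (the host name is illustrative):

    from autotest_lib.client.common_lib import hosts

    # picks a concrete Host implementation (ssh-based by default)
    host = hosts.create_host('vm1.example.com')
    print host.run('uname -r').stdout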
diff --git a/client/common_lib/hosts/abstract_ssh.py b/client/common_lib/hosts/abstract_ssh.py
new file mode 100644
index 0000000..929191e
--- /dev/null
+++ b/client/common_lib/hosts/abstract_ssh.py
@@ -0,0 +1,608 @@
+import os, time, types, socket, shutil, glob, logging, traceback
+from autotest_lib.client.common_lib import autotemp, error, logging_manager
+from autotest_lib.client.common_lib import utils, autotest
+from autotest_lib.client.common_lib.hosts import remote
+from autotest_lib.client.common_lib.global_config import global_config
+
+
+get_value = global_config.get_config_value
+enable_master_ssh = get_value('AUTOSERV', 'enable_master_ssh', type=bool,
+                              default=False)
+
+
+def _make_ssh_cmd_default(user="root", port=22, opts='', hosts_file='/dev/null',
+                          connect_timeout=30, alive_interval=300):
+    base_command = ("/usr/bin/ssh -a -x %s -o StrictHostKeyChecking=no "
+                    "-o UserKnownHostsFile=%s -o BatchMode=yes "
+                    "-o ConnectTimeout=%d -o ServerAliveInterval=%d "
+                    "-l %s -p %d")
+    assert isinstance(connect_timeout, (int, long))
+    assert connect_timeout > 0 # can't disable the timeout
+    return base_command % (opts, hosts_file, connect_timeout,
+                           alive_interval, user, port)
+
+
+make_ssh_command = utils.import_site_function(
+    __file__, "autotest_lib.server.hosts.site_host", "make_ssh_command",
+    _make_ssh_cmd_default)
+
+
+# import site specific Host class
+SiteHost = utils.import_site_class(
+    __file__, "autotest_lib.client.common_lib.hosts.site_host", "SiteHost",
+    remote.RemoteHost)
+
+
+class AbstractSSHHost(SiteHost):
+    """
+    This class represents a generic implementation of most of the
+    framework necessary for controlling a host via ssh. It implements
+    almost all of the abstract Host methods, except for the core
+    Host.run method.
+    """
+
+    def _initialize(self, hostname, user="root", port=22, password="",
+                    *args, **dargs):
+        super(AbstractSSHHost, self)._initialize(hostname=hostname,
+                                                 *args, **dargs)
+        self.ip = socket.getaddrinfo(self.hostname, None)[0][4][0]
+        self.user = user
+        self.port = port
+        self.password = password
+        self._use_rsync = None
+        self.known_hosts_file = os.tmpfile()
+        known_hosts_fd = self.known_hosts_file.fileno()
+        self.known_hosts_fd = '/dev/fd/%s' % known_hosts_fd
+
+        """
+        Master SSH connection background job, socket temp directory and socket
+        control path option. If master-SSH is enabled, these fields will be
+        initialized by start_master_ssh when a new SSH connection is initiated.
+        """
+        self.master_ssh_job = None
+        self.master_ssh_tempdir = None
+        self.master_ssh_option = ''
+
+
+    def use_rsync(self):
+        if self._use_rsync is not None:
+            return self._use_rsync
+
+        # Check if rsync is available on the remote host. If it's not,
+        # don't try to use it for any future file transfers.
+        self._use_rsync = self._check_rsync()
+        if not self._use_rsync:
+            logging.warn("rsync not available on remote host %s -- disabled",
+                         self.hostname)
+        return self._use_rsync
+
+
+    def _check_rsync(self):
+        """
+        Check if rsync is available on the remote host.
+        """
+        try:
+            self.run("rsync --version", stdout_tee=None, stderr_tee=None)
+        except error.AutoservRunError:
+            return False
+        return True
+
+
+    def _encode_remote_paths(self, paths, escape=True):
+        """
+        Given a list of file paths, encodes it as a single remote path, in
+        the style used by rsync and scp.
+        """
+        if escape:
+            paths = [utils.scp_remote_escape(path) for path in paths]
+        return '%s@%s:"%s"' % (self.user, self.hostname, " ".join(paths))
+
+
+    def _make_rsync_cmd(self, sources, dest, delete_dest, preserve_symlinks):
+        """
+        Given a list of source paths and a destination path, produces the
+        appropriate rsync command for copying them. Remote paths must be
+        pre-encoded.
+        """
+        ssh_cmd = make_ssh_command(user=self.user, port=self.port,
+                                   opts=self.master_ssh_option,
+                                   hosts_file=self.known_hosts_fd)
+        if delete_dest:
+            delete_flag = "--delete"
+        else:
+            delete_flag = ""
+        if preserve_symlinks:
+            symlink_flag = ""
+        else:
+            symlink_flag = "-L"
+        command = "rsync %s %s --timeout=1800 --rsh='%s' -az %s %s"
+        return command % (symlink_flag, delete_flag, ssh_cmd,
+                          " ".join(sources), dest)
+
+
+    def _make_ssh_cmd(self, cmd):
+        """
+        Create a base ssh command string for the host which can be used
+        to run commands directly on the machine
+        """
+        base_cmd = make_ssh_command(user=self.user, port=self.port,
+                                    opts=self.master_ssh_option,
+                                    hosts_file=self.known_hosts_fd)
+
+        return '%s %s "%s"' % (base_cmd, self.hostname, utils.sh_escape(cmd))
+
+    def _make_scp_cmd(self, sources, dest):
+        """
+        Given a list of source paths and a destination path, produces the
+        appropriate scp command for encoding it. Remote paths must be
+        pre-encoded.
+        """
+        command = ("scp -rq %s -o StrictHostKeyChecking=no "
+                   "-o UserKnownHostsFile=%s -P %d %s '%s'")
+        return command % (self.master_ssh_option, self.known_hosts_fd,
+                          self.port, " ".join(sources), dest)
+
+
+    def _make_rsync_compatible_globs(self, path, is_local):
+        """
+        Given an rsync-style path, returns a list of globbed paths
+        that will hopefully provide equivalent behaviour for scp. Does not
+        support the full range of rsync pattern matching behaviour, only that
+        exposed in the get/send_file interface (trailing slashes).
+
+        The is_local param is a flag indicating whether the paths should be
+        interpreted as local or remote paths.
+        """
+
+        # non-trailing slash paths should just work
+        if len(path) == 0 or path[-1] != "/":
+            return [path]
+
+        # make a function to test if a pattern matches any files
+        if is_local:
+            def glob_matches_files(path, pattern):
+                return len(glob.glob(path + pattern)) > 0
+        else:
+            def glob_matches_files(path, pattern):
+                result = self.run("ls \"%s\"%s" % (utils.sh_escape(path),
+                                                   pattern),
+                                  stdout_tee=None, ignore_status=True)
+                return result.exit_status == 0
+
+        # take a set of globs that cover all files, and see which are needed
+        patterns = ["*", ".[!.]*"]
+        patterns = [p for p in patterns if glob_matches_files(path, p)]
+
+        # convert them into a set of paths suitable for the commandline
+        if is_local:
+            return ["\"%s\"%s" % (utils.sh_escape(path), pattern)
+                    for pattern in patterns]
+        else:
+            return [utils.scp_remote_escape(path) + pattern
+                    for pattern in patterns]
+
+
+    def _make_rsync_compatible_source(self, source, is_local):
+        """
+        Applies the same logic as _make_rsync_compatible_globs, but
+        applies it to an entire list of sources, producing a new list of
+        sources, properly quoted.
+        """
+        return sum((self._make_rsync_compatible_globs(path, is_local)
+                    for path in source), [])
+
+
+    def _set_umask_perms(self, dest):
+        """
+        Given a destination file/dir (recursively) set the permissions on
+        all the files and directories to the max allowed by running umask.
+        """
+
+        # now this looks strange but I haven't found a way in Python to _just_
+        # get the umask, apparently the only option is to try to set it
+        umask = os.umask(0)
+        os.umask(umask)
+
+        max_privs = 0777 & ~umask
+
+        def set_file_privs(filename):
+            file_stat = os.stat(filename)
+
+            file_privs = max_privs
+            # if the original file permissions do not have at least one
+            # executable bit then do not set it anywhere
+            if not file_stat.st_mode & 0111:
+                file_privs &= ~0111
+
+            os.chmod(filename, file_privs)
+
+        # try a bottom-up walk so changes on directory permissions won't cut
+        # our access to the files/directories inside it
+        for root, dirs, files in os.walk(dest, topdown=False):
+            # when setting the privileges we emulate the chmod "X" behaviour
+            # that sets to execute only if it is a directory or any of the
+            # owner/group/other already has execute right
+            for dirname in dirs:
+                os.chmod(os.path.join(root, dirname), max_privs)
+
+            for filename in files:
+                set_file_privs(os.path.join(root, filename))
+
+
+        # now set privs for the dest itself
+        if os.path.isdir(dest):
+            os.chmod(dest, max_privs)
+        else:
+            set_file_privs(dest)
+
+
+    def get_file(self, source, dest, delete_dest=False, preserve_perm=True,
+                 preserve_symlinks=False):
+        """
+        Copy files from the remote host to a local path.
+
+        Directories will be copied recursively.
+        If a source component is a directory with a trailing slash,
+        the content of the directory will be copied, otherwise, the
+        directory itself and its content will be copied. This
+        behavior is similar to that of the program 'rsync'.
+
+        Args:
+                source: either
+                        1) a single file or directory, as a string
+                        2) a list of one or more (possibly mixed)
+                                files or directories
+                dest: a file or a directory (if source contains a
+                        directory or more than one element, you must
+                        supply a directory dest)
+                delete_dest: if this is true, the command will also clear
+                             out any old files at dest that are not in the
+                             source
+                preserve_perm: tells get_file() to try to preserve the sources
+                               permissions on files and dirs
+                preserve_symlinks: try to preserve symlinks instead of
+                                   transforming them into files/dirs on copy
+
+        Raises:
+                AutoservRunError: the scp command failed
+        """
+
+        # Start a master SSH connection if necessary.
+        self.start_master_ssh()
+
+        if isinstance(source, basestring):
+            source = [source]
+        dest = os.path.abspath(dest)
+
+        # If rsync is disabled or fails, try scp.
+        try_scp = True
+        if self.use_rsync():
+            try:
+                remote_source = self._encode_remote_paths(source)
+                local_dest = utils.sh_escape(dest)
+                rsync = self._make_rsync_cmd([remote_source], local_dest,
+                                             delete_dest, preserve_symlinks)
+                utils.run(rsync)
+                try_scp = False
+            except error.CmdError, e:
+                logging.warn("trying scp, rsync failed: %s" % e)
+
+        if try_scp:
+            # scp has no equivalent to --delete, just drop the entire dest dir
+            if delete_dest and os.path.isdir(dest):
+                shutil.rmtree(dest)
+                os.mkdir(dest)
+
+            remote_source = self._make_rsync_compatible_source(source, False)
+            if remote_source:
+                # _make_rsync_compatible_source() already did the escaping
+                remote_source = self._encode_remote_paths(remote_source,
+                                                          escape=False)
+                local_dest = utils.sh_escape(dest)
+                scp = self._make_scp_cmd([remote_source], local_dest)
+                try:
+                    utils.run(scp)
+                except error.CmdError, e:
+                    raise error.AutoservRunError(e.args[0], e.args[1])
+
+        if not preserve_perm:
+            # we have no way to tell scp to not try to preserve the
+            # permissions so set them after copy instead.
+            # for rsync we could use "--no-p --chmod=ugo=rwX" but those
+            # options are only in very recent rsync versions
+            self._set_umask_perms(dest)
+
+
+    def send_file(self, source, dest, delete_dest=False,
+                  preserve_symlinks=False):
+        """
+        Copy files from a local path to the remote host.
+
+        Directories will be copied recursively.
+        If a source component is a directory with a trailing slash,
+        the contents of the directory will be copied; otherwise, the
+        directory itself and its contents will be copied. This
+        behavior is similar to that of the program 'rsync'.
+
+        Args:
+                source: either
+                        1) a single file or directory, as a string
+                        2) a list of one or more (possibly mixed)
+                                files or directories
+                dest: a file or a directory (if source contains a
+                        directory or more than one element, you must
+                        supply a directory dest)
+                delete_dest: if this is true, the command will also clear
+                             out any old files at dest that are not in the
+                             source
+                preserve_symlinks: controls if symlinks on the source will be
+                    copied as such on the destination or transformed into the
+                    referenced file/directory
+
+        Raises:
+                AutoservRunError: the scp command failed
+        """
+
+        # Start a master SSH connection if necessary.
+        self.start_master_ssh()
+
+        if isinstance(source, basestring):
+            source = [source]
+        remote_dest = self._encode_remote_paths([dest])
+
+        # If rsync is disabled or fails, try scp.
+        try_scp = True
+        if self.use_rsync():
+            try:
+                local_sources = [utils.sh_escape(path) for path in source]
+                rsync = self._make_rsync_cmd(local_sources, remote_dest,
+                                             delete_dest, preserve_symlinks)
+                utils.run(rsync)
+                try_scp = False
+            except error.CmdError, e:
+                logging.warn("trying scp, rsync failed: %s" % e)
+
+        if try_scp:
+            # scp has no equivalent to --delete, just drop the entire dest dir
+            if delete_dest:
+                is_dir = self.run("ls -d %s/" % dest,
+                                  ignore_status=True).exit_status == 0
+                if is_dir:
+                    cmd = "rm -rf %s && mkdir %s"
+                    cmd %= (dest, dest)
+                    self.run(cmd)
+
+            local_sources = self._make_rsync_compatible_source(source, True)
+            if local_sources:
+                scp = self._make_scp_cmd(local_sources, remote_dest)
+                try:
+                    utils.run(scp)
+                except error.CmdError, e:
+                    raise error.AutoservRunError(e.args[0], e.args[1])
+
+
+    def ssh_ping(self, timeout=60):
+        try:
+            self.run("true", timeout=timeout, connect_timeout=timeout)
+        except error.AutoservSSHTimeout:
+            msg = "Host (ssh) verify timed out (timeout = %d)" % timeout
+            raise error.AutoservSSHTimeout(msg)
+        except error.AutoservSshPermissionDeniedError:
+            # let AutoservSshPermissionDeniedError be visible to the callers
+            raise
+        except error.AutoservRunError, e:
+            # convert the generic AutoservRunError into something more
+            # specific for this context
+            raise error.AutoservSshPingHostError(e.description + '\n' +
+                                                 repr(e.result_obj))
+
+
+    def is_up(self):
+        """
+        Check if the remote host is up.
+
+        @returns True if the remote host is up, False otherwise
+        """
+        try:
+            self.ssh_ping()
+        except error.AutoservError:
+            return False
+        else:
+            return True
+
+
+    def wait_up(self, timeout=None):
+        """
+        Wait until the remote host is up or the timeout expires.
+
+        In practice, it will wait until an ssh connection to the remote
+        host can be established and getty is running.
+
+        @param timeout time limit in seconds before returning even
+            if the host is not up.
+
+        @returns True if the host was found to be up, False otherwise
+        """
+        if timeout:
+            end_time = time.time() + timeout
+
+        while not timeout or time.time() < end_time:
+            if self.is_up():
+                try:
+                    if self.are_wait_up_processes_up():
+                        logging.debug('Host %s is now up', self.hostname)
+                        return True
+                except error.AutoservError:
+                    pass
+            time.sleep(1)
+
+        logging.debug('Host %s is still down after waiting %d seconds',
+                      self.hostname, int(timeout + time.time() - end_time))
+        return False
+
+
+    def wait_down(self, timeout=None, warning_timer=None, old_boot_id=None):
+        """
+        Wait until the remote host is down or the timeout expires.
+
+        If old_boot_id is provided, this will wait until either the machine
+        is unpingable or self.get_boot_id() returns a value different from
+        old_boot_id. If the boot_id value has changed then the function
+        returns true under the assumption that the machine has shut down
+        and has now already come back up.
+
+        If old_boot_id is None, the method assumes the machine has not
+        shut down until it becomes unreachable.
+
+        @param timeout Time limit in seconds before returning even
+            if the host is still up.
+        @param warning_timer Time limit in seconds that will generate
+            a warning if the host is not down yet.
+        @param old_boot_id A string containing the result of self.get_boot_id()
+            prior to the host being told to shut down. Can be None if this is
+            not available.
+
+        @returns True if the host was found to be down, False otherwise
+        """
+        #TODO: there is currently no way to distinguish between knowing
+        #TODO: boot_id was unsupported and not knowing the boot_id.
+        current_time = time.time()
+        if timeout:
+            end_time = current_time + timeout
+
+        if warning_timer:
+            warn_time = current_time + warning_timer
+
+        if old_boot_id is not None:
+            logging.debug('Host %s pre-shutdown boot_id is %s',
+                          self.hostname, old_boot_id)
+
+        while not timeout or current_time < end_time:
+            try:
+                new_boot_id = self.get_boot_id()
+            except error.AutoservError:
+                logging.debug('Host %s is now unreachable over ssh, is down',
+                              self.hostname)
+                return True
+            else:
+                # if the machine is up but the boot_id value has changed from
+                # old boot id, then we can assume the machine has gone down
+                # and then already come back up
+                if old_boot_id is not None and old_boot_id != new_boot_id:
+                    logging.debug('Host %s now has boot_id %s and so must '
+                                  'have rebooted', self.hostname, new_boot_id)
+                    return True
+
+            if warning_timer and current_time > warn_time:
+                self.record("WARN", None, "shutdown",
+                            "Shutdown took longer than %ds" % warning_timer)
+                # Print the warning only once.
+                warning_timer = None
+                # If the machine is stuck switching runlevels,
+                # this may cause it to finally reboot.
+                self.run('kill -HUP 1', ignore_status=True)
+
+            time.sleep(1)
+            current_time = time.time()
+
+        return False
+
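Together with get_boot_id(), the two wait methods above give the usual
reboot-verification sequence. A hedged sketch of that flow (the host object
and timeout values are illustrative; the exceptions come from the error
module used throughout this patch):

    boot_id = host.get_boot_id()
    host.run('(sleep 1; reboot) </dev/null >/dev/null 2>&1 &')
    if not host.wait_down(timeout=840, warning_timer=540,
                          old_boot_id=boot_id):
        raise error.AutoservShutdownError('Host did not shut down')
    if not host.wait_up(timeout=1800):
        raise error.AutoservRebootError('Host did not return from reboot')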
+
+    # tunable constants for the verify & repair code
+    AUTOTEST_GB_DISKSPACE_REQUIRED = get_value("SERVER",
+                                               "gb_diskspace_required",
+                                               type=int,
+                                               default=20)
+
+
+    def verify_connectivity(self):
+        super(AbstractSSHHost, self).verify_connectivity()
+
+        logging.info('Pinging host ' + self.hostname)
+        self.ssh_ping()
+        logging.info("Host (ssh) %s is alive", self.hostname)
+
+        if self.is_shutting_down():
+            raise error.AutoservHostIsShuttingDownError("Host is shutting down")
+
+
+    def verify_software(self):
+        super(AbstractSSHHost, self).verify_software()
+        try:
+            self.check_diskspace(autotest.Autotest.get_install_dir(self),
+                                 self.AUTOTEST_GB_DISKSPACE_REQUIRED)
+        except error.AutoservHostError:
+            raise           # only want to raise if it's a space issue
+        except autotest.AutodirNotFoundError:
+            # autotest dir may not exist, etc. ignore
+            logging.debug('autodir space check exception, this is probably '
+                          'safe to ignore\n' + traceback.format_exc())
+
+
+    def close(self):
+        super(AbstractSSHHost, self).close()
+        self._cleanup_master_ssh()
+        self.known_hosts_file.close()
+
+
+    def _cleanup_master_ssh(self):
+        """
+        Release all resources (process, temporary directory) used by an active
+        master SSH connection.
+        """
+        # If a master SSH connection is running, kill it.
+        if self.master_ssh_job is not None:
+            utils.nuke_subprocess(self.master_ssh_job.sp)
+            self.master_ssh_job = None
+
+        # Remove the temporary directory for the master SSH socket.
+        if self.master_ssh_tempdir is not None:
+            self.master_ssh_tempdir.clean()
+            self.master_ssh_tempdir = None
+            self.master_ssh_option = ''
+
+
+    def start_master_ssh(self):
+        """
+        Called whenever a slave SSH connection needs to be initiated (e.g., by
+        run, rsync, scp). If master SSH support is enabled and a master SSH
+        connection is not active already, start a new one in the background.
+        Also, clean up any zombie master SSH connections (e.g., dead due to
+        reboot).
+        """
+        if not enable_master_ssh:
+            return
+
+        # If a previously started master SSH connection is not running
+        # anymore, it needs to be cleaned up and then restarted.
+        if self.master_ssh_job is not None:
+            if self.master_ssh_job.sp.poll() is not None:
+                logging.info("Master ssh connection to %s is down.",
+                             self.hostname)
+                self._cleanup_master_ssh()
+
+        # Start a new master SSH connection.
+        if self.master_ssh_job is None:
+            # Create a shared socket in a temp location.
+            self.master_ssh_tempdir = autotemp.tempdir(unique_id='ssh-master')
+            self.master_ssh_option = ("-o ControlPath=%s/socket" %
+                                      self.master_ssh_tempdir.name)
+
+            # Start the master SSH connection in the background.
+            master_cmd = self.ssh_command(options="-N -o ControlMaster=yes")
+            logging.info("Starting master ssh connection '%s'" % master_cmd)
+            self.master_ssh_job = utils.BgJob(master_cmd)
+
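At the OpenSSH level, the master/slave split above is plain
ControlMaster/ControlPath connection sharing; roughly (the command lines are
illustrative, not part of the patch):

    # master, kept alive in the background by utils.BgJob:
    #   ssh -N -o ControlMaster=yes -o ControlPath=<tempdir>/socket <host>
    # every subsequent run/rsync/scp reuses the shared socket:
    #   ssh -o ControlPath=<tempdir>/socket <host> '<command>'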
+
+    def clear_known_hosts(self):
+        """Clears out the temporary ssh known_hosts file.
+
+        This is useful if the test SSHes to the machine, then reinstalls it,
+        then SSHes to it again.  It can be called after the reinstall to
+        reduce the spam in the logs.
+        """
+        logging.info("Clearing known hosts for host '%s', file '%s'.",
+                     self.hostname, self.known_hosts_fd)
+        # Clear out the file by opening it for writing and then closing.
+        fh = open(self.known_hosts_fd, "w")
+        fh.close()
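
A short usage sketch for clear_known_hosts(), assuming a host class that also
implements machine_install() (the base class leaves it NotImplementedError):

    host.machine_install()      # reinstall changes the machine's host key
    host.clear_known_hosts()    # drop the stale key to avoid ssh warnings
    host.wait_up(timeout=1800)  # reconnect cleanly to the fresh install
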
diff --git a/client/common_lib/hosts/base_classes.py b/client/common_lib/hosts/base_classes.py
index 68cabe8..cf73b10 100644
--- a/client/common_lib/hosts/base_classes.py
+++ b/client/common_lib/hosts/base_classes.py
@@ -1,12 +1,14 @@
-# Copyright 2009 Google Inc. Released under the GPL v2
+#
+# Copyright 2007 Google Inc. Released under the GPL v2
 
 """
-This module defines the base classes for the Host hierarchy.
+This module defines the base classes for the server Host hierarchy.
 
 Implementation details:
 You should import the "hosts" package instead of importing each type of host.
 
         Host: a machine on which you can run programs
+        RemoteHost: a remote machine on which you can run programs
 """
 
 __author__ = """
@@ -15,14 +17,13 @@ poirier@google.com (Benjamin Poirier),
 stutsman@google.com (Ryan Stutsman)
 """
 
-import cPickle, cStringIO, logging, os, re, time
+import os
 
-from autotest_lib.client.common_lib import global_config, error, utils
-from autotest_lib.client.common_lib import host_protections
-from autotest_lib.client.bin import partition
+from autotest_lib.client.common_lib import base_hosts, utils
+from autotest_lib.client.common_lib.hosts import bootloader
 
 
-class Host(object):
+class Host(base_hosts.Host):
     """
     This class represents a machine on which you can run programs.
 
@@ -49,665 +50,30 @@ class Host(object):
            only one implementation ever gets executed.
     """
 
-    job = None
-    DEFAULT_REBOOT_TIMEOUT = global_config.global_config.get_config_value(
-        "HOSTS", "default_reboot_timeout", type=int, default=1800)
-    WAIT_DOWN_REBOOT_TIMEOUT = global_config.global_config.get_config_value(
-        "HOSTS", "wait_down_reboot_timeout", type=int, default=840)
-    WAIT_DOWN_REBOOT_WARNING = global_config.global_config.get_config_value(
-        "HOSTS", "wait_down_reboot_warning", type=int, default=540)
-    HOURS_TO_WAIT_FOR_RECOVERY = global_config.global_config.get_config_value(
-        "HOSTS", "hours_to_wait_for_recovery", type=float, default=2.5)
-    # the number of hardware repair requests that need to happen before we
-    # actually send machines to hardware repair
-    HARDWARE_REPAIR_REQUEST_THRESHOLD = 4
+    bootloader = None
 
 
     def __init__(self, *args, **dargs):
-        self._initialize(*args, **dargs)
+        super(Host, self).__init__(*args, **dargs)
 
-
-    def _initialize(self, *args, **dargs):
-        self._already_repaired = []
-        self._removed_files = False
-
-
-    def close(self):
-        pass
-
-
-    def setup(self):
-        pass
-
-
-    def run(self, command, timeout=3600, ignore_status=False,
-            stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS,
-            stdin=None, args=()):
-        """
-        Run a command on this host.
-
-        @param command: the command line string
-        @param timeout: time limit in seconds before attempting to
-                kill the running process. The run() function
-                will take a few seconds longer than 'timeout'
-                to complete if it has to kill the process.
-        @param ignore_status: do not raise an exception, no matter
-                what the exit code of the command is.
-        @param stdout_tee/stderr_tee: where to tee the stdout/stderr
-        @param stdin: stdin to pass (a string) to the executed command
-        @param args: sequence of strings to pass as arguments to command by
-                quoting them in " and escaping their contents if necessary
-
-        @return a utils.CmdResult object
-
-        @raises AutotestHostRunError: the exit code of the command execution
-                was not 0 and ignore_status was not enabled
-        """
-        raise NotImplementedError('Run not implemented!')
-
-
-    def run_output(self, command, *args, **dargs):
-        return self.run(command, *args, **dargs).stdout.rstrip()
-
-
-    def reboot(self):
-        raise NotImplementedError('Reboot not implemented!')
-
-
-    def sysrq_reboot(self):
-        raise NotImplementedError('Sysrq reboot not implemented!')
-
-
-    def reboot_setup(self, *args, **dargs):
-        pass
-
-
-    def reboot_followup(self, *args, **dargs):
-        pass
-
-
-    def get_file(self, source, dest, delete_dest=False):
-        raise NotImplementedError('Get file not implemented!')
-
-
-    def send_file(self, source, dest, delete_dest=False):
-        raise NotImplementedError('Send file not implemented!')
-
-
-    def get_tmp_dir(self):
-        raise NotImplementedError('Get temp dir not implemented!')
-
-
-    def is_up(self):
-        raise NotImplementedError('Is up not implemented!')
-
-
-    def is_shutting_down(self):
-        """ Indicates is a machine is currently shutting down. """
-        # runlevel() may not be available, so wrap it in try block.
-        try:
-            runlevel = int(self.run("runlevel").stdout.strip().split()[1])
-            return runlevel in (0, 6)
-        except:
-            return False
-
-
-    def get_wait_up_processes(self):
-        """ Gets the list of local processes to wait for in wait_up. """
-        get_config = global_config.global_config.get_config_value
-        proc_list = get_config("HOSTS", "wait_up_processes",
-                               default="").strip()
-        processes = set(p.strip() for p in proc_list.split(","))
-        processes.discard("")
-        return processes
-
-
-    def get_boot_id(self, timeout=60):
-        """ Get a unique ID associated with the current boot.
-
-        Should return a string with the semantics such that two separate
-        calls to Host.get_boot_id() return the same string if the host did
-        not reboot between the two calls, and two different strings if it
-        has rebooted at least once between the two calls.
-
-        @param timeout The number of seconds to wait before timing out.
-
-        @return A string unique to this boot or None if not available."""
-        BOOT_ID_FILE = '/proc/sys/kernel/random/boot_id'
-        NO_ID_MSG = 'no boot_id available'
-        cmd = 'if [ -f %r ]; then cat %r; else echo %r; fi' % (
-                BOOT_ID_FILE, BOOT_ID_FILE, NO_ID_MSG)
-        boot_id = self.run(cmd, timeout=timeout).stdout.strip()
-        if boot_id == NO_ID_MSG:
-            return None
-        return boot_id
-
-
-    def wait_up(self, timeout=None):
-        raise NotImplementedError('Wait up not implemented!')
-
-
-    def wait_down(self, timeout=None, warning_timer=None, old_boot_id=None):
-        raise NotImplementedError('Wait down not implemented!')
-
-
-    def wait_for_restart(self, timeout=DEFAULT_REBOOT_TIMEOUT,
-                         down_timeout=WAIT_DOWN_REBOOT_TIMEOUT,
-                         down_warning=WAIT_DOWN_REBOOT_WARNING,
-                         log_failure=True, old_boot_id=None, **dargs):
-        """ Wait for the host to come back from a reboot. This is a generic
-        implementation based entirely on wait_up and wait_down. """
-        if not self.wait_down(timeout=down_timeout,
-                              warning_timer=down_warning,
-                              old_boot_id=old_boot_id):
-            if log_failure:
-                self.record("ABORT", None, "reboot.verify", "shut down failed")
-            raise error.AutoservShutdownError("Host did not shut down")
-
-        if self.wait_up(timeout):
-            self.record("GOOD", None, "reboot.verify")
-            self.reboot_followup(**dargs)
-        else:
-            self.record("ABORT", None, "reboot.verify",
-                        "Host did not return from reboot")
-            raise error.AutoservRebootError("Host did not return from reboot")
-
-
-    def verify(self):
-        self.verify_hardware()
-        self.verify_connectivity()
-        self.verify_software()
-
-
-    def verify_hardware(self):
-        pass
-
-
-    def verify_connectivity(self):
-        pass
-
-
-    def verify_software(self):
-        pass
-
-
-    def check_diskspace(self, path, gb):
-        """Raises an error if path does not have at least gb GB free.
-
-        @param path The path to check for free disk space.
-        @param gb A floating point number to compare with a granularity
-            of 1 MB.
-
-        1000 based SI units are used.
-
-        @raises AutoservDiskFullHostError if path has less than gb GB free.
-        """
-        one_mb = 10 ** 6  # Bytes (SI unit).
-        mb_per_gb = 1000.0
-        logging.info('Checking for >= %s GB of space under %s on machine %s',
-                     gb, path, self.hostname)
-        df = self.run('df -PB %d %s | tail -1' % (one_mb, path)).stdout.split()
-        free_space_gb = int(df[3]) / mb_per_gb
-        if free_space_gb < gb:
-            raise error.AutoservDiskFullHostError(path, gb, free_space_gb)
-        else:
-            logging.info('Found %s GB >= %s GB of space under %s on machine %s',
-                free_space_gb, gb, path, self.hostname)
-
-
-    def get_open_func(self, use_cache=True):
-        """
-        Defines and returns a function that may be used instead of built-in
-        open() to open and read files. The returned function is implemented
-        by using self.run('cat <file>') and may cache the results for the same
-        filename.
-
-        @param use_cache Cache results of self.run('cat <filename>') for the
-            same filename
-
-        @return a function that can be used instead of built-in open()
-        """
-        cached_files = {}
-
-        def open_func(filename):
-            if not use_cache or filename not in cached_files:
-                output = self.run('cat \'%s\'' % filename,
-                                  stdout_tee=open('/dev/null', 'w')).stdout
-                fd = cStringIO.StringIO(output)
-
-                if not use_cache:
-                    return fd
-
-                cached_files[filename] = fd
-            else:
-                cached_files[filename].seek(0)
-
-            return cached_files[filename]
-
-        return open_func
-
-
-    def check_partitions(self, root_part, filter_func=None):
-        """ Compare the contents of /proc/partitions with those of
-        /proc/mounts and raise exception in case unmounted partitions are found
-
-        root_part: in Linux /proc/mounts will never directly mention the root
-        partition as being mounted on / instead it will say that /dev/root is
-        mounted on /. Thus require this argument to filter out the root_part
-        from the ones checked to be mounted
-
-        filter_func: unary predicate for additional filtering out of
-        partitions required to be mounted
-
-        Raise: error.AutoservHostError if unfiltered unmounted partition found
-        """
-
-        print 'Checking if non-swap partitions are mounted...'
-
-        unmounted = partition.get_unmounted_partition_list(root_part,
-            filter_func=filter_func, open_func=self.get_open_func())
-        if unmounted:
-            raise error.AutoservNotMountedHostError(
-                'Found unmounted partitions: %s' %
-                [part.device for part in unmounted])
-
-
-    def _repair_wait_for_reboot(self):
-        TIMEOUT = int(self.HOURS_TO_WAIT_FOR_RECOVERY * 3600)
-        if self.is_shutting_down():
-            logging.info('Host is shutting down, waiting for a restart')
-            self.wait_for_restart(TIMEOUT)
-        else:
-            self.wait_up(TIMEOUT)
-
-
-    def _get_mountpoint(self, path):
-        """Given a "path" get the mount point of the filesystem containing
-        that path."""
-        code = ('import os\n'
-                # sanitize the path and resolve symlinks
-                'path = os.path.realpath(%r)\n'
-                "while path != '/' and not os.path.ismount(path):\n"
-                '    path, _ = os.path.split(path)\n'
-                'print path\n') % path
-        return self.run('python -c "%s"' % code,
-                        stdout_tee=open(os.devnull, 'w')).stdout.rstrip()
-
-
-    def erase_dir_contents(self, path, ignore_status=True, timeout=3600):
-        """Empty a given directory path contents."""
-        rm_cmd = 'find "%s" -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf'
-        self.run(rm_cmd % path, ignore_status=ignore_status, timeout=timeout)
-        self._removed_files = True
-
-
-    def repair_full_disk(self, mountpoint):
-        # it's safe to remove /tmp and /var/tmp, site specific overrides may
-        # want to remove some other places too
-        if mountpoint == self._get_mountpoint('/tmp'):
-            self.erase_dir_contents('/tmp')
-
-        if mountpoint == self._get_mountpoint('/var/tmp'):
-            self.erase_dir_contents('/var/tmp')
-
-
-    def _call_repair_func(self, err, func, *args, **dargs):
-        for old_call in self._already_repaired:
-            if old_call == (func, args, dargs):
-                # re-raising the original exception because surrounding
-                # error handling may want to try other ways to fix it
-                logging.warn('Already done this (%s) repair procedure, '
-                             're-raising the original exception.', func)
-                raise err
-
-        try:
-            func(*args, **dargs)
-        except (error.AutoservHardwareRepairRequestedError,
-                error.AutoservHardwareRepairRequiredError):
-            # let these special exceptions propagate
-            raise
-        except error.AutoservError:
-            logging.exception('Repair failed but continuing in case it managed'
-                              ' to repair enough')
-
-        self._already_repaired.append((func, args, dargs))
-
-
-    def repair_filesystem_only(self):
-        """perform file system repairs only"""
-        while True:
-            # try to repair specific problems
-            try:
-                logging.info('Running verify to find failures to repair...')
-                self.verify()
-                if self._removed_files:
-                    logging.info('Removed files, rebooting to release the'
-                                 ' inodes')
-                    self.reboot()
-                return # verify succeeded, then repair succeeded
-            except error.AutoservHostIsShuttingDownError, err:
-                logging.exception('verify failed')
-                self._call_repair_func(err, self._repair_wait_for_reboot)
-            except error.AutoservDiskFullHostError, err:
-                logging.exception('verify failed')
-                self._call_repair_func(err, self.repair_full_disk,
-                                       self._get_mountpoint(err.path))
-
-
-    def repair_software_only(self):
-        """perform software repairs only"""
-        while True:
-            try:
-                self.repair_filesystem_only()
-                break
-            except (error.AutoservSshPingHostError, error.AutoservSSHTimeout,
-                    error.AutoservSshPermissionDeniedError,
-                    error.AutoservDiskFullHostError), err:
-                logging.exception('verify failed')
-                logging.info('Trying to reinstall the machine')
-                self._call_repair_func(err, self.machine_install)
-
-
-    def repair_full(self):
-        hardware_repair_requests = 0
-        while True:
-            try:
-                self.repair_software_only()
-                break
-            except error.AutoservHardwareRepairRequiredError, err:
-                logging.exception('software repair failed, '
-                                  'hardware repair requested')
-                hardware_repair_requests += 1
-                try_hardware_repair = (hardware_repair_requests >=
-                                       self.HARDWARE_REPAIR_REQUEST_THRESHOLD)
-                if try_hardware_repair:
-                    logging.info('hardware repair requested %d times, '
-                                 'trying hardware repair',
-                                 hardware_repair_requests)
-                    self._call_repair_func(err, self.request_hardware_repair)
-                else:
-                    logging.info('hardware repair requested %d times, '
-                                 'trying software repair again',
-                                 hardware_repair_requests)
-            except error.AutoservHardwareHostError, err:
-                logging.exception('verify failed')
-                # software repair failed, try hardware repair
-                logging.info('Hardware problem found, '
-                             'requesting hardware repairs')
-                self._call_repair_func(err, self.request_hardware_repair)
-
-
-    def repair_with_protection(self, protection_level):
-        """Perform the maximal amount of repair within the specified
-        protection level.
-
-        @param protection_level: the protection level to use for limiting
-                                 repairs, a host_protections.Protection
-        """
-        protection = host_protections.Protection
-        if protection_level == protection.DO_NOT_REPAIR:
-            logging.info('Protection is "Do not repair" so just verifying')
-            self.verify()
-        elif protection_level == protection.REPAIR_FILESYSTEM_ONLY:
-            logging.info('Attempting filesystem-only repair')
-            self.repair_filesystem_only()
-        elif protection_level == protection.REPAIR_SOFTWARE_ONLY:
-            logging.info('Attempting software repair only')
-            self.repair_software_only()
-        elif protection_level == protection.NO_PROTECTION:
-            logging.info('Attempting full repair')
-            self.repair_full()
-        else:
-            raise NotImplementedError('Unknown host protection level %s'
-                                      % protection_level)
-
-
-    def disable_ipfilters(self):
-        """Allow all network packets in and out of the host."""
-        self.run('iptables-save > /tmp/iptable-rules')
-        self.run('iptables -P INPUT ACCEPT')
-        self.run('iptables -P FORWARD ACCEPT')
-        self.run('iptables -P OUTPUT ACCEPT')
-
-
-    def enable_ipfilters(self):
-        """Re-enable the IP filters disabled from disable_ipfilters()"""
-        if self.path_exists('/tmp/iptable-rules'):
-            self.run('iptables-restore < /tmp/iptable-rules')
-
-
-    def cleanup(self):
-        pass
-
-
-    def machine_install(self):
-        raise NotImplementedError('Machine install not implemented!')
-
-
-    def install(self, installableObject):
-        installableObject.install(self)
-
-
-    def get_autodir(self):
-        raise NotImplementedError('Get autodir not implemented!')
-
-
-    def set_autodir(self):
-        raise NotImplementedError('Set autodir not implemented!')
-
-
-    def start_loggers(self):
-        """ Called to start continuous host logging. """
-        pass
-
-
-    def stop_loggers(self):
-        """ Called to stop continuous host logging. """
-        pass
-
-
-    # some extra methods simplify the retrieval of information about the
-    # Host machine, with generic implementations based on run(). subclasses
-    # should feel free to override these if they can provide better
-    # implementations for their specific Host types
-
-    def get_num_cpu(self):
-        """ Get the number of CPUs in the host according to /proc/cpuinfo. """
-        proc_cpuinfo = self.run('cat /proc/cpuinfo',
-                                stdout_tee=open(os.devnull, 'w')).stdout
-        cpus = 0
-        for line in proc_cpuinfo.splitlines():
-            if line.startswith('processor'):
-                cpus += 1
-        return cpus
-
-
-    def get_arch(self):
-        """ Get the hardware architecture of the remote machine. """
-        arch = self.run('/bin/uname -m').stdout.rstrip()
-        if re.match(r'i\d86$', arch):
-            arch = 'i386'
-        return arch
-
-
-    def get_kernel_ver(self):
-        """ Get the kernel version of the remote machine. """
-        return self.run('/bin/uname -r').stdout.rstrip()
-
-
-    def get_cmdline(self):
-        """ Get the kernel command line of the remote machine. """
-        return self.run('cat /proc/cmdline').stdout.rstrip()
-
-
-    def get_meminfo(self):
-        """ Get the kernel memory info (/proc/meminfo) of the remote machine
-        and return a dictionary mapping the various statistics. """
-        meminfo_dict = {}
-        meminfo = self.run('cat /proc/meminfo').stdout.splitlines()
-        for key, val in (line.split(':', 1) for line in meminfo):
-            meminfo_dict[key.strip()] = val.strip()
-        return meminfo_dict
-
-
-    def path_exists(self, path):
-        """ Determine if path exists on the remote machine. """
-        result = self.run('ls "%s" > /dev/null' % utils.sh_escape(path),
-                          ignore_status=True)
-        return result.exit_status == 0
-
-
-    # some extra helpers for doing job-related operations
-
-    def record(self, *args, **dargs):
-        """ Helper method for recording status logs against Host.job that
-        silently becomes a NOP if Host.job is not available. The args and
-        dargs are passed on to Host.job.record unchanged. """
+        self.start_loggers()
         if self.job:
-            self.job.record(*args, **dargs)
+            self.job.hosts.add(self)
 
 
-    def log_kernel(self):
-        """ Helper method for logging kernel information into the status logs.
-        Intended for cases where the "current" kernel is not really defined
-        and we want to explicitly log it. Does nothing if this host isn't
-        actually associated with a job. """
-        if self.job:
-            kernel = self.get_kernel_ver()
-            self.job.record("INFO", None, None,
-                            optional_fields={"kernel": kernel})
-
-
-    def log_reboot(self, reboot_func):
-        """ Decorator for wrapping a reboot in a group for status
-        logging purposes. The reboot_func parameter should be an actual
-        function that carries out the reboot.
-        """
-        if self.job and not hasattr(self, "RUNNING_LOG_REBOOT"):
-            self.RUNNING_LOG_REBOOT = True
-            try:
-                self.job.run_reboot(reboot_func, self.get_kernel_ver)
-            finally:
-                del self.RUNNING_LOG_REBOOT
-        else:
-            reboot_func()
-
-
-    def request_hardware_repair(self):
-        """ Should somehow request (send a mail?) for hardware repairs on
-        this machine. The implementation can either return by raising the
-        special error.AutoservHardwareRepairRequestedError exception or can
-        try to wait until the machine is repaired and then return normally.
-        """
-        raise NotImplementedError("request_hardware_repair not implemented")
-
-
-    def list_files_glob(self, glob):
-        """
-        Get a list of files on a remote host given a glob pattern path.
-        """
-        SCRIPT = ("python -c 'import cPickle, glob, sys;"
-                  "cPickle.dump(glob.glob(sys.argv[1]), sys.stdout, 0)'")
-        output = self.run(SCRIPT, args=(glob,), stdout_tee=None,
-                          timeout=60).stdout
-        return cPickle.loads(output)
-
-
-    def symlink_closure(self, paths):
-        """
-        Given a sequence of path strings, return the set of all paths that
-        can be reached from the initial set by following symlinks.
+    def _initialize(self, target_file_owner=None,
+                    *args, **dargs):
+        super(Host, self)._initialize(*args, **dargs)
 
-        @param paths: sequence of path strings.
-        @return: a sequence of path strings that are all the unique paths that
-                can be reached from the given ones after following symlinks.
-        """
-        SCRIPT = ("python -c 'import cPickle, os, sys\n"
-                  "paths = cPickle.load(sys.stdin)\n"
-                  "closure = {}\n"
-                  "while paths:\n"
-                  "    path = paths.keys()[0]\n"
-                  "    del paths[path]\n"
-                  "    if not os.path.exists(path):\n"
-                  "        continue\n"
-                  "    closure[path] = None\n"
-                  "    if os.path.islink(path):\n"
-                  "        link_to = os.path.join(os.path.dirname(path),\n"
-                  "                               os.readlink(path))\n"
-                  "        if link_to not in closure.keys():\n"
-                  "            paths[link_to] = None\n"
-                  "cPickle.dump(closure.keys(), sys.stdout, 0)'")
-        input_data = cPickle.dumps(dict((path, None) for path in paths), 0)
-        output = self.run(SCRIPT, stdout_tee=None, stdin=input_data,
-                          timeout=60).stdout
-        return cPickle.loads(output)
+        self.serverdir = utils.get_server_dir()
+        self.monitordir = os.path.join(os.path.dirname(__file__), "monitors")
+        self.bootloader = bootloader.Bootloader(self)
+        self.env = {}
+        self.target_file_owner = target_file_owner
 
 
-    def cleanup_kernels(self, boot_dir='/boot'):
-        """
-        Remove any kernel image and associated files (vmlinux, system.map,
-        modules) for any image found in the boot directory that is not
-        referenced by entries in the bootloader configuration.
-
-        @param boot_dir: boot directory path string, default '/boot'
-        """
-        # find all the vmlinuz images referenced by the bootloader
-        vmlinuz_prefix = os.path.join(boot_dir, 'vmlinuz-')
-        boot_info = self.bootloader.get_entries()
-        used_kernver = [boot['kernel'][len(vmlinuz_prefix):]
-                        for boot in boot_info.itervalues()]
-
-        # find all the unused vmlinuz images in /boot
-        all_vmlinuz = self.list_files_glob(vmlinuz_prefix + '*')
-        used_vmlinuz = self.symlink_closure(vmlinuz_prefix + kernver
-                                            for kernver in used_kernver)
-        unused_vmlinuz = set(all_vmlinuz) - set(used_vmlinuz)
-
-        # find all the unused vmlinux images in /boot
-        vmlinux_prefix = os.path.join(boot_dir, 'vmlinux-')
-        all_vmlinux = self.list_files_glob(vmlinux_prefix + '*')
-        used_vmlinux = self.symlink_closure(vmlinux_prefix + kernver
-                                            for kernver in used_kernver)
-        unused_vmlinux = set(all_vmlinux) - set(used_vmlinux)
-
-        # find all the unused System.map files in /boot
-        systemmap_prefix = os.path.join(boot_dir, 'System.map-')
-        all_system_map = self.list_files_glob(systemmap_prefix + '*')
-        used_system_map = self.symlink_closure(
-            systemmap_prefix + kernver for kernver in used_kernver)
-        unused_system_map = set(all_system_map) - set(used_system_map)
-
-        # find all the module directories associated with unused kernels
-        modules_prefix = '/lib/modules/'
-        all_moddirs = [dir for dir in self.list_files_glob(modules_prefix + '*')
-                       if re.match(modules_prefix + r'\d+\.\d+\.\d+.*', dir)]
-        used_moddirs = self.symlink_closure(modules_prefix + kernver
-                                            for kernver in used_kernver)
-        unused_moddirs = set(all_moddirs) - set(used_moddirs)
-
-        # remove all the vmlinuz files we don't use
-        # TODO: if needed this should become package manager agnostic
-        for vmlinuz in unused_vmlinuz:
-            # try and get an rpm package name
-            rpm = self.run('rpm -qf', args=(vmlinuz,),
-                           ignore_status=True, timeout=120)
-            if rpm.exit_status == 0:
-                packages = set(line.strip() for line in
-                               rpm.stdout.splitlines())
-                # if we found some package names, try to remove them
-                for package in packages:
-                    self.run('rpm -e', args=(package,),
-                             ignore_status=True, timeout=120)
-            # remove the image files anyway, even if rpm didn't
-            self.run('rm -f', args=(vmlinuz,),
-                     ignore_status=True, timeout=120)
-
-        # remove all the vmlinux and System.map files left over
-        for f in (unused_vmlinux | unused_system_map):
-            self.run('rm -f', args=(f,),
-                     ignore_status=True, timeout=120)
+    def close(self):
+        super(Host, self).close()
 
-        # remove all unused module directories
-        # the regex match should keep us safe from removing the wrong files
-        for moddir in unused_moddirs:
-            self.run('rm -fr', args=(moddir,), ignore_status=True)
+        if self.job:
+            self.job.hosts.discard(self)
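
The net effect of the slimmed-down class is that construction and teardown
now bracket a job-level registry: __init__ adds the host to job.hosts and
close() discards it. A hedged sketch, assuming the factory added later in
this patch (the hostname is illustrative):

    host = hosts.create_host('192.168.122.10')   # ends up in job.hosts
    try:
        host.run('uname -r')
    finally:
        host.close()                             # dropped from job.hosts again
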
diff --git a/client/common_lib/hosts/base_classes_unittest.py b/client/common_lib/hosts/base_classes_unittest.py
index bb8b85f..2d0ff05 100755
--- a/client/common_lib/hosts/base_classes_unittest.py
+++ b/client/common_lib/hosts/base_classes_unittest.py
@@ -3,39 +3,108 @@
 import unittest
 import common
 
-from autotest_lib.client.common_lib import error, utils
+from autotest_lib.client.common_lib import global_config
 from autotest_lib.client.common_lib.test_utils import mock
-from autotest_lib.client.common_lib.hosts import base_classes
+from autotest_lib.client.common_lib import utils
+from autotest_lib.client.common_lib.hosts import base_classes, bootloader
 
 
 class test_host_class(unittest.TestCase):
     def setUp(self):
         self.god = mock.mock_god()
+        # stub out get_server_dir, global_config.get_config_value
+        self.god.stub_with(utils, "get_server_dir",
+                           lambda: "/unittest/server")
+        self.god.stub_function(global_config.global_config,
+                               "get_config_value")
+        # stub out the bootloader
+        self.real_bootloader = bootloader.Bootloader
+        bootloader.Bootloader = lambda arg: object()
 
 
     def tearDown(self):
         self.god.unstub_all()
+        bootloader.Bootloader = self.real_bootloader
 
 
-    def test_run_output_notimplemented(self):
+    def test_init(self):
+        self.god.stub_function(utils, "get_server_dir")
+        host = base_classes.Host.__new__(base_classes.Host)
+        bootloader.Bootloader = \
+                self.god.create_mock_class_obj(self.real_bootloader,
+                                               "Bootloader")
+        # overwrite this attribute as it's irrelevant for these tests
+        # and may cause problems with construction of the mock
+        bootloader.Bootloader.boottool_path = None
+        # set up the recording
+        utils.get_server_dir.expect_call().and_return("/unittest/server")
+        bootloader.Bootloader.expect_new(host)
+        # run the actual test
+        host.__init__()
+        self.god.check_playback()
+
+
+    def test_install(self):
+        host = base_classes.Host()
+        # create a dummy installable class
+        class installable(object):
+            def install(self, host):
+                pass
+        installableObj = self.god.create_mock_class(installable,
+                                                    "installableObj")
+        installableObj.install.expect_call(host)
+        # run the actual test
+        host.install(installableObj)
+        self.god.check_playback()
+
+
+    def test_get_wait_up_empty(self):
+        global_config.global_config.get_config_value.expect_call(
+            "HOSTS", "wait_up_processes", default="").and_return("")
+
+        host = base_classes.Host()
+        self.assertEquals(host.get_wait_up_processes(), set())
+        self.god.check_playback()
+
+
+    def test_get_wait_up_ignores_whitespace(self):
+        global_config.global_config.get_config_value.expect_call(
+            "HOSTS", "wait_up_processes", default="").and_return("  ")
+
         host = base_classes.Host()
-        self.assertRaises(NotImplementedError, host.run_output, "fake command")
+        self.assertEquals(host.get_wait_up_processes(), set())
+        self.god.check_playback()
+
+
+    def test_get_wait_up_single_process(self):
+        global_config.global_config.get_config_value.expect_call(
+            "HOSTS", "wait_up_processes", default="").and_return("proc1")
+
+        host = base_classes.Host()
+        self.assertEquals(host.get_wait_up_processes(),
+                          set(["proc1"]))
+        self.god.check_playback()
+
+
+    def test_get_wait_up_multiple_process(self):
+        global_config.global_config.get_config_value.expect_call(
+            "HOSTS", "wait_up_processes", default="").and_return(
+            "proc1,proc2,proc3")
+
+        host = base_classes.Host()
+        self.assertEquals(host.get_wait_up_processes(),
+                          set(["proc1", "proc2", "proc3"]))
+        self.god.check_playback()
+
 
+    def test_get_wait_up_drops_duplicates(self):
+        global_config.global_config.get_config_value.expect_call(
+            "HOSTS", "wait_up_processes", default="").and_return(
+            "proc1,proc2,proc1")
 
-    def test_check_diskspace(self):
-        self.god.stub_function(base_classes.Host, 'run')
         host = base_classes.Host()
-        host.hostname = 'unittest-host'
-        test_df_tail = ('/dev/sda1                    1061       939'
-                        '       123      89% /')
-        fake_cmd_status = utils.CmdResult(exit_status=0, stdout=test_df_tail)
-        host.run.expect_call('df -PB 1000000 /foo | tail -1').and_return(
-                fake_cmd_status)
-        self.assertRaises(error.AutoservDiskFullHostError,
-                          host.check_diskspace, '/foo', 0.2)
-        host.run.expect_call('df -PB 1000000 /foo | tail -1').and_return(
-                fake_cmd_status)
-        host.check_diskspace('/foo', 0.1)
+        self.assertEquals(host.get_wait_up_processes(),
+                          set(["proc1", "proc2"]))
         self.god.check_playback()
 
 
diff --git a/client/common_lib/hosts/bootloader.py b/client/common_lib/hosts/bootloader.py
new file mode 100644
index 0000000..bcb417a
--- /dev/null
+++ b/client/common_lib/hosts/bootloader.py
@@ -0,0 +1,67 @@
+#
+# Copyright 2007 Google Inc. Released under the GPL v2
+
+"""
+This module defines the Bootloader class.
+
+        Bootloader: a program to boot Kernels on a Host.
+"""
+
+import os, weakref
+from autotest_lib.client.common_lib import error, boottool
+from autotest_lib.client.common_lib import utils
+
+BOOTTOOL_SRC = '../client/tools/boottool'  # Get it from autotest client
+
+
+class Bootloader(boottool.boottool):
+    """
+    This class gives access to a host's bootloader services.
+
+    It can be used to add a kernel to the list of kernels that can be
+    booted by a bootloader. It can also make sure that this kernel will
+    be the one chosen at next reboot.
+    """
+
+    def __init__(self, host):
+        super(Bootloader, self).__init__()
+        self._host = weakref.ref(host)
+        self._boottool_path = None
+
+
+    def set_default(self, index):
+        if self._host().job:
+            self._host().job.last_boot_tag = None
+        super(Bootloader, self).set_default(index)
+
+
+    def boot_once(self, title):
+        if self._host().job:
+            self._host().job.last_boot_tag = title
+
+        super(Bootloader, self).boot_once(title)
+
+
+    def _install_boottool(self):
+        if self._host() is None:
+            raise error.AutoservError(
+                "Host does not exist anymore")
+        tmpdir = self._host().get_tmp_dir()
+        self._host().send_file(os.path.abspath(os.path.join(
+                utils.get_server_dir(), BOOTTOOL_SRC)), tmpdir)
+        self._boottool_path = os.path.join(tmpdir,
+                os.path.basename(BOOTTOOL_SRC))
+
+
+    def _get_boottool_path(self):
+        if not self._boottool_path:
+            self._install_boottool()
+        return self._boottool_path
+
+
+    def _run_boottool(self, *options):
+        cmd = self._get_boottool_path()
+        # FIXME: add unsafe options strings sequence to host.run() parameters
+        for option in options:
+            cmd += ' "%s"' % utils.sh_escape(option)
+        return self._host().run(cmd).stdout
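
Bootloader deliberately holds only a weak reference to its host, so the
host -> bootloader -> host cycle cannot keep either object alive;
_install_boottool()'s None check above is the other half of that contract.
A self-contained sketch of the idiom:

    import weakref

    class Helper(object):
        def __init__(self, owner):
            self._owner = weakref.ref(owner)   # does not keep owner alive

        def use(self):
            owner = self._owner()              # None once owner is collected
            if owner is None:
                raise RuntimeError('owner does not exist anymore')
            return owner
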
diff --git a/client/common_lib/hosts/bootloader_unittest.py b/client/common_lib/hosts/bootloader_unittest.py
new file mode 100755
index 0000000..67a97fd
--- /dev/null
+++ b/client/common_lib/hosts/bootloader_unittest.py
@@ -0,0 +1,97 @@
+#!/usr/bin/python
+
+import unittest, os
+import common
+
+from autotest_lib.client.common_lib.test_utils import mock
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.common_lib import utils, hosts
+from autotest_lib.client.common_lib.hosts import bootloader
+
+
+class test_bootloader(unittest.TestCase):
+    def setUp(self):
+        self.god = mock.mock_god()
+
+        # mock out get_server_dir
+        self.god.stub_function(utils, "get_server_dir")
+
+
+    def tearDown(self):
+        self.god.unstub_all()
+
+
+    def create_mock_host(self):
+        # useful for building disposable RemoteHost mocks
+        return self.god.create_mock_class(hosts.RemoteHost, "host")
+
+
+    def create_install_boottool_mock(self, loader, dst_dir):
+        mock_install_boottool = \
+                self.god.create_mock_function("_install_boottool")
+        def install_boottool():
+            loader._boottool_path = dst_dir
+            mock_install_boottool()
+        loader._install_boottool = install_boottool
+        return mock_install_boottool
+
+
+    def test_install_fails_without_host(self):
+        host = self.create_mock_host()
+        loader = bootloader.Bootloader(host)
+        del host
+        self.assertRaises(error.AutoservError, loader._install_boottool)
+
+
+    def test_installs_to_tmpdir(self):
+        TMPDIR = "/unittest/tmp"
+        SERVERDIR = "/unittest/server"
+        BOOTTOOL_SRC = os.path.join(SERVERDIR, bootloader.BOOTTOOL_SRC)
+        BOOTTOOL_SRC = os.path.abspath(BOOTTOOL_SRC)
+        BOOTTOOL_DST = os.path.join(TMPDIR, "boottool")
+        # set up the recording
+        host = self.create_mock_host()
+        host.get_tmp_dir.expect_call().and_return(TMPDIR)
+        utils.get_server_dir.expect_call().and_return(SERVERDIR)
+        host.send_file.expect_call(BOOTTOOL_SRC, TMPDIR)
+        # run the test
+        loader = bootloader.Bootloader(host)
+        loader._install_boottool()
+        # assert the playback is correct
+        self.god.check_playback()
+        # assert the final dest is correct
+        self.assertEquals(loader._boottool_path, BOOTTOOL_DST)
+
+
+    def test_get_path_automatically_installs(self):
+        BOOTTOOL_DST = "/unittest/tmp/boottool"
+        host = self.create_mock_host()
+        loader = bootloader.Bootloader(host)
+        # mock out loader.install_boottool
+        mock_install = \
+                self.create_install_boottool_mock(loader, BOOTTOOL_DST)
+        # set up the recording
+        mock_install.expect_call()
+        # run the test
+        self.assertEquals(loader._get_boottool_path(), BOOTTOOL_DST)
+        self.god.check_playback()
+
+
+    def test_install_is_only_called_once(self):
+        BOOTTOOL_DST = "/unittest/tmp/boottool"
+        host = self.create_mock_host()
+        loader = bootloader.Bootloader(host)
+        # mock out loader.install_boottool
+        mock_install = \
+                self.create_install_boottool_mock(loader, BOOTTOOL_DST)
+        # set up the recording
+        mock_install.expect_call()
+        # run the test
+        self.assertEquals(loader._get_boottool_path(), BOOTTOOL_DST)
+        self.god.check_playback()
+        self.assertEquals(loader._get_boottool_path(), BOOTTOOL_DST)
+        self.god.check_playback()
+
+
+if __name__ == "__main__":
+    unittest.main()
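
These tests all use mock_god's record-then-playback style: expectations are
recorded with expect_call()/and_return(), the code under test runs against
the mock, and check_playback() verifies the recorded and actual calls
matched. Schematically (SomeClass and code_under_test are placeholders):

    god = mock.mock_god()
    obj = god.create_mock_class(SomeClass, 'obj')
    obj.ping.expect_call('arg').and_return('pong')   # record
    result = code_under_test(obj)                    # playback
    god.check_playback()                             # verify
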
diff --git a/client/common_lib/hosts/factory.py b/client/common_lib/hosts/factory.py
new file mode 100644
index 0000000..af0859f
--- /dev/null
+++ b/client/common_lib/hosts/factory.py
@@ -0,0 +1,83 @@
+from autotest_lib.client.common_lib import utils, error, global_config
+from autotest_lib.client.common_lib import autotest, utils as server_utils
+from autotest_lib.client.common_lib.hosts import site_factory, ssh_host, serial
+from autotest_lib.client.common_lib.hosts import logfile_monitor
+
+DEFAULT_FOLLOW_PATH = '/var/log/kern.log'
+DEFAULT_PATTERNS_PATH = 'console_patterns'
+SSH_ENGINE = global_config.global_config.get_config_value('AUTOSERV',
+                                                          'ssh_engine',
+                                                          type=str)
+
+# for tracking which hostnames have already had job_start called
+_started_hostnames = set()
+
+def create_host(
+    hostname, auto_monitor=True, follow_paths=None, pattern_paths=None,
+    netconsole=False, **args):
+    # by default assume we're using SSH support
+    if SSH_ENGINE == 'paramiko':
+        from autotest_lib.client.common_lib.hosts import paramiko_host
+        classes = [paramiko_host.ParamikoHost]
+    elif SSH_ENGINE == 'raw_ssh':
+        classes = [ssh_host.SSHHost]
+    else:
+        raise error.AutoservError("Unknown SSH engine %s. Please verify the "
+                                  "value of the configuration key 'ssh_engine' "
+                                  "in autotest's global_config.ini file." %
+                                  SSH_ENGINE)
+
+    # by default mix in run_test support
+    classes.append(autotest.AutotestHostMixin)
+
+    # if the user really wants to use netconsole, let them
+    if netconsole:
+        classes.append(netconsole.NetconsoleHost)
+
+    if auto_monitor:
+        # use serial console support if it's available
+        conmux_args = {}
+        for key in ("conmux_server", "conmux_attach"):
+            if key in args:
+                conmux_args[key] = args[key]
+        if serial.SerialHost.host_is_supported(hostname, **conmux_args):
+            classes.append(serial.SerialHost)
+        else:
+            # no serial available, fall back to direct dmesg logging
+            if follow_paths is None:
+                follow_paths = [DEFAULT_FOLLOW_PATH]
+            else:
+                follow_paths = list(follow_paths) + [DEFAULT_FOLLOW_PATH]
+
+            if pattern_paths is None:
+                pattern_paths = [DEFAULT_PATTERNS_PATH]
+            else:
+                pattern_paths = (
+                    list(pattern_paths) + [DEFAULT_PATTERNS_PATH])
+
+            logfile_monitor_class = logfile_monitor.NewLogfileMonitorMixin(
+                follow_paths, pattern_paths)
+            classes.append(logfile_monitor_class)
+
+    elif follow_paths:
+        logfile_monitor_class = logfile_monitor.NewLogfileMonitorMixin(
+            follow_paths, pattern_paths)
+        classes.append(logfile_monitor_class)
+
+    # do any site-specific processing of the classes list
+    site_factory.postprocess_classes(classes, hostname,
+                                     auto_monitor=auto_monitor, **args)
+
+    hostname, args['user'], args['password'], args['port'] = \
+            server_utils.parse_machine(hostname, ssh_user, ssh_pass, ssh_port)
+
+    # create a custom host class for this machine and return an instance of it
+    host_class = type("%s_host" % hostname, tuple(classes), {})
+    host_instance = host_class(hostname, **args)
+
+    # call job_start if this is the first time this host is being used
+    if hostname not in _started_hostnames:
+        host_instance.job_start()
+        _started_hostnames.add(hostname)
+
+    return host_instance
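
create_host() composes a per-machine class at runtime with
type(name, bases, dict), so mixins such as AutotestHostMixin, SerialHost or
the logfile monitor stack through the MRO. A minimal standalone sketch of
that composition trick:

    class SSHBase(object):
        def run(self, cmd):
            return 'ran: %s' % cmd

    class LoggingMixin(object):
        def run(self, cmd):
            print 'about to run %s' % cmd
            return super(LoggingMixin, self).run(cmd)

    # same trick as create_host(): compose a class, then instantiate it
    host_class = type('example_host', (LoggingMixin, SSHBase), {})
    print host_class().run('uptime')
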
diff --git a/client/common_lib/hosts/guest.py b/client/common_lib/hosts/guest.py
new file mode 100644
index 0000000..2d43022
--- /dev/null
+++ b/client/common_lib/hosts/guest.py
@@ -0,0 +1,70 @@
+#
+# Copyright 2007 Google Inc. Released under the GPL v2
+
+"""
+This module defines the Guest class in the Host hierarchy.
+
+Implementation details:
+You should import the "hosts" package instead of importing each type of host.
+
+        Guest: a virtual machine on which you can run programs
+"""
+
+__author__ = """
+mbligh@google.com (Martin J. Bligh),
+poirier@google.com (Benjamin Poirier),
+stutsman@google.com (Ryan Stutsman)
+"""
+
+
+from autotest_lib.client.common_lib.hosts import ssh_host
+
+
+class Guest(ssh_host.SSHHost):
+    """
+    This class represents a virtual machine on which you can run
+    programs.
+
+    It is not the machine autoserv is running on.
+
+    Implementation details:
+    This is an abstract class, leaf subclasses must implement the methods
+    listed here and in parent classes which have no implementation. They
+    may reimplement methods which already have an implementation. You
+    must not instantiate this class but should instantiate one of those
+    leaf subclasses.
+    """
+
+    controlling_hypervisor = None
+
+
+    def _initialize(self, controlling_hypervisor, *args, **dargs):
+        """
+        Construct a Guest object
+
+        Args:
+                controlling_hypervisor: Hypervisor object that is
+                        responsible for the creation and management of
+                        this guest
+        """
+        hostname = controlling_hypervisor.new_guest()
+        super(Guest, self)._initialize(hostname, *args, **dargs)
+        self.controlling_hypervisor = controlling_hypervisor
+
+
+    def __del__(self):
+        """
+        Destroy a Guest object
+        """
+        super(Guest, self).__del__()
+        self.controlling_hypervisor.delete_guest(self.hostname)
+
+
+    def hardreset(self, timeout=600, wait=True):
+        """
+        Perform a "hardreset" of the guest.
+
+        It is restarted through the hypervisor. That will restart it
+        even if the guest is otherwise inaccessible through ssh.
+        """
+        return self.controlling_hypervisor.reset_guest(self.hostname)
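
The hypervisor contract Guest relies on is small: new_guest(),
delete_guest() and reset_guest(). A hypothetical stub, just to make the
interface explicit (the returned address is illustrative):

    class StubHypervisor(object):
        def new_guest(self):
            # return the hostname/IP of a freshly created VM
            return "192.168.122.50"

        def delete_guest(self, hostname):
            print "destroying %s" % hostname

        def reset_guest(self, hostname):
            print "hard-resetting %s" % hostname
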
diff --git a/client/common_lib/hosts/kvm_guest.py b/client/common_lib/hosts/kvm_guest.py
new file mode 100644
index 0000000..c17bb98
--- /dev/null
+++ b/client/common_lib/hosts/kvm_guest.py
@@ -0,0 +1,46 @@
+#
+# Copyright 2007 Google Inc. Released under the GPL v2
+
+"""
+This module defines the Host class.
+
+Implementation details:
+You should import the "hosts" package instead of importing each type of host.
+
+        KVMGuest: a KVM virtual machine on which you can run programs
+"""
+
+__author__ = """
+mbligh@google.com (Martin J. Bligh),
+poirier@google.com (Benjamin Poirier),
+stutsman@google.com (Ryan Stutsman)
+"""
+
+
+import guest
+
+
+class KVMGuest(guest.Guest):
+    """This class represents a KVM virtual machine on which you can run
+    programs.
+
+    Implementation details:
+    This is a leaf class in an abstract class hierarchy, it must
+    implement the unimplemented methods in parent classes.
+    """
+
+    def _initialize(self, controlling_hypervisor, qemu_options, *args, **dargs):
+        """
+        Construct a KVMGuest object
+
+        Args:
+                controlling_hypervisor: hypervisor object that is
+                        responsible for the creation and management of
+                        this guest
+                qemu_options: options to pass to qemu, these should be
+                        appropriately shell escaped, if need be.
+        """
+        hostname = controlling_hypervisor.new_guest(qemu_options)
+        # bypass Guest's _initialize (it would call new_guest() again)
+        super(guest.Guest, self)._initialize(hostname, *args, **dargs)
+        self.controlling_hypervisor = controlling_hypervisor
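
Hypothetical usage, assuming a hypervisor whose new_guest() accepts the
qemu options string (extending the stub sketched under guest.py above):

    hypervisor = QemuHypervisor()                 # assumed implementation
    g = kvm_guest.KVMGuest(hypervisor, "-m 512")  # options, shell-escaped
    g.run("uname -a")                             # inherited from SSHHost
    g.hardreset()                                 # routed via the hypervisor
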
diff --git a/client/common_lib/hosts/logfile_monitor.py b/client/common_lib/hosts/logfile_monitor.py
new file mode 100644
index 0000000..acb0543
--- /dev/null
+++ b/client/common_lib/hosts/logfile_monitor.py
@@ -0,0 +1,289 @@
+import logging, os, sys, subprocess, tempfile, traceback
+import time
+
+from autotest_lib.client.common_lib import utils
+from autotest_lib.client.common_lib.hosts import abstract_ssh, monitors
+
+MONITORDIR = monitors.__path__[0]
+SUPPORTED_PYTHON_VERS = ('2.4', '2.5', '2.6')
+DEFAULT_PYTHON = '/usr/bin/python'
+
+
+class Error(Exception):
+    pass
+
+
+class InvalidPatternsPathError(Error):
+    """An invalid patterns_path was specified."""
+
+
+class InvalidConfigurationError(Error):
+    """An invalid configuration was specified."""
+
+
+class FollowFilesLaunchError(Error):
+    """Error occurred launching followfiles remotely."""
+
+
+def list_remote_pythons(host):
+    """List out installed pythons on host."""
+    result = host.run('ls /usr/bin/python[0-9]*')
+    return result.stdout.splitlines()
+
+
+def select_supported_python(installed_pythons):
+    """Select a supported python from a list"""
+    for python in installed_pythons:
+        if python[-3:] in SUPPORTED_PYTHON_VERS:
+            return python
+
+
+def copy_monitordir(host):
+    """Copy over monitordir to a tmpdir on the remote host."""
+    tmp_dir = host.get_tmp_dir()
+    host.send_file(MONITORDIR, tmp_dir)
+    return os.path.join(tmp_dir, 'monitors')
+
+
+def launch_remote_followfiles(host, lastlines_dirpath, follow_paths):
+    """Launch followfiles.py remotely on follow_paths."""
+    logging.info('Launching followfiles on target: %s, %s, %s',
+                 host.hostname, lastlines_dirpath, str(follow_paths))
+
+    # First make sure a supported Python is on host
+    installed_pythons = list_remote_pythons(host)
+    supported_python = select_supported_python(installed_pythons)
+    if not supported_python:
+        if DEFAULT_PYTHON in installed_pythons:
+            logging.info('No versioned Python binary found, '
+                         'defaulting to: %s', DEFAULT_PYTHON)
+            supported_python = DEFAULT_PYTHON
+        else:
+            raise FollowFilesLaunchError('No supported Python on host.')
+
+    remote_monitordir = copy_monitordir(host)
+    remote_script_path = os.path.join(remote_monitordir, 'followfiles.py')
+
+    followfiles_cmd = '%s %s --lastlines_dirpath=%s %s' % (
+        supported_python, remote_script_path,
+        lastlines_dirpath, ' '.join(follow_paths))
+
+    remote_ff_proc = subprocess.Popen(host._make_ssh_cmd(followfiles_cmd),
+                                      stdin=open(os.devnull, 'r'),
+                                      stdout=subprocess.PIPE, shell=True)
+
+
+    # Give it enough time to crash if it's going to (it shouldn't).
+    time.sleep(5)
+    doa = remote_ff_proc.poll()
+    if doa:
+        raise FollowFilesLaunchError('ssh command crashed.')
+
+    return remote_ff_proc
+
+
+def resolve_patterns_path(patterns_path):
+    """Resolve patterns_path to existing absolute local path or raise.
+
+    As a convenience we allow users to specify a non-absolute patterns_path.
+    However, these need to be resolved before they are passed down
+    to console.py.
+
+    For now we expect non-absolute ones to be relative to MONITORDIR.
+    """
+    if os.path.isabs(patterns_path):
+        if os.path.exists(patterns_path):
+            return patterns_path
+        else:
+            raise InvalidPatternsPathError('Absolute path does not exist.')
+    else:
+        patterns_path = os.path.join(MONITORDIR, patterns_path)
+        if os.path.exists(patterns_path):
+            return patterns_path
+        else:
+            raise InvalidPatternsPathError('Relative path does not exist.')
+
+
+def launch_local_console(
+        input_stream, console_log_path, pattern_paths=None):
+    """Launch console.py locally.
+
+    This will process the output from followfiles and
+    fire warning messages per configuration in pattern_paths.
+    """
+    r, w = os.pipe()
+    local_script_path = os.path.join(MONITORDIR, 'console.py')
+    console_cmd = [sys.executable, local_script_path]
+    if pattern_paths:
+        console_cmd.append('--pattern_paths=%s' % ','.join(pattern_paths))
+
+    console_cmd += [console_log_path, str(w)]
+
+    # Setup warning stream before we actually launch
+    warning_stream = os.fdopen(r, 'r', 0)
+
+    devnull_w = open(os.devnull, 'w')
+    # Launch console.py locally
+    console_proc = subprocess.Popen(
+        console_cmd, stdin=input_stream,
+        stdout=devnull_w, stderr=devnull_w)
+    os.close(w)
+    return console_proc, warning_stream
+
+
+def _log_and_ignore_exceptions(f):
+    """Decorator: automatically log exception during a method call.
+    """
+    def wrapped(self, *args, **dargs):
+        try:
+            return f(self, *args, **dargs)
+        except Exception, e:
+            print "LogfileMonitor.%s failed with exception %s" % (f.__name__, e)
+            print "Exception ignored:"
+            traceback.print_exc(file=sys.stdout)
+    wrapped.__name__ = f.__name__
+    wrapped.__doc__ = f.__doc__
+    wrapped.__dict__.update(f.__dict__)
+    return wrapped
+
+
+class LogfileMonitorMixin(abstract_ssh.AbstractSSHHost):
+    """This can monitor one or more remote files using tail.
+
+    This class and its counterpart script, monitors/followfiles.py,
+    add most functionality one would need to launch and monitor
+    remote tail processes on self.hostname.
+
+    This can be used by subclassing normally or by calling
+    NewLogfileMonitorMixin (below).
+
+    It is configured via two class attributes:
+        follow_paths: Remote paths to monitor
+        pattern_paths: Local paths to alert pattern definition files.
+    """
+    follow_paths = ()
+    pattern_paths = ()
+
+    def _initialize(self, console_log=None, *args, **dargs):
+        super(LogfileMonitorMixin, self)._initialize(*args, **dargs)
+
+        self._lastlines_dirpath = None
+        self._console_proc = None
+        self._console_log = console_log or 'logfile_monitor.log'
+
+
+    def reboot_followup(self, *args, **dargs):
+        super(LogfileMonitorMixin, self).reboot_followup(*args, **dargs)
+        self.__stop_loggers()
+        self.__start_loggers()
+
+
+    def start_loggers(self):
+        super(LogfileMonitorMixin, self).start_loggers()
+        self.__start_loggers()
+
+
+    def remote_path_exists(self, remote_path):
+        """Return True if remote_path exists, False otherwise."""
+        return not self.run(
+            'ls %s' % remote_path, ignore_status=True).exit_status
+
+
+    def check_remote_paths(self, remote_paths):
+        """Return list of remote_paths that currently exist."""
+        return [
+            path for path in remote_paths if self.remote_path_exists(path)]
+
+
+    @_log_and_ignore_exceptions
+    def __start_loggers(self):
+        """Start multifile monitoring logger.
+
+        Launch monitors/followfiles.py on the target and hook its output
+        to monitors/console.py locally.
+        """
+        # Check if follow_paths exist, in the case that one doesn't
+        # emit a warning and proceed.
+        follow_paths_set = set(self.follow_paths)
+        existing = self.check_remote_paths(follow_paths_set)
+        missing = follow_paths_set.difference(existing)
+        if missing:
+            # Log warning that we are missing expected remote paths.
+            logging.warn('Target %s is missing expected remote paths: %s',
+                         self.hostname, ', '.join(missing))
+
+        # If none of them exist just return (for now).
+        if not existing:
+            return
+
+        # Create a new lastlines_dirpath on the remote host if not already set.
+        if not self._lastlines_dirpath:
+            self._lastlines_dirpath = self.get_tmp_dir(parent='/var/tmp')
+
+        # Launch followfiles on target
+        try:
+            self._followfiles_proc = launch_remote_followfiles(
+                self, self._lastlines_dirpath, existing)
+        except FollowFilesLaunchError:
+            # We're hosed, there is no point in proceeding.
+            logging.fatal('Failed to launch followfiles on target,'
+                          ' aborting logfile monitoring: %s', self.hostname)
+            if self.job:
+                # Put a warning in the status.log
+                self.job.record(
+                    'WARN', None, 'logfile.monitor',
+                    'followfiles launch failed')
+            return
+
+        # Ensure we have sane pattern_paths before launching console.py
+        sane_pattern_paths = []
+        for patterns_path in set(self.pattern_paths):
+            try:
+                patterns_path = resolve_patterns_path(patterns_path)
+            except InvalidPatternsPathError, e:
+                logging.warn('Specified patterns_path is invalid: %s, %s',
+                             patterns_path, str(e))
+            else:
+                sane_pattern_paths.append(patterns_path)
+
+        # Launch console.py locally, pass in output stream from followfiles.
+        self._console_proc, self._logfile_warning_stream = \
+            launch_local_console(
+                self._followfiles_proc.stdout, self._console_log,
+                sane_pattern_paths)
+
+        if self.job:
+            self.job.warning_loggers.add(self._logfile_warning_stream)
+
+
+    def stop_loggers(self):
+        super(LogfileMonitorMixin, self).stop_loggers()
+        self.__stop_loggers()
+
+
+    @_log_and_ignore_exceptions
+    def __stop_loggers(self):
+        if self._console_proc:
+            utils.nuke_subprocess(self._console_proc)
+            utils.nuke_subprocess(self._followfiles_proc)
+            self._console_proc = self._followfiles_proc = None
+            if self.job:
+                self.job.warning_loggers.discard(self._logfile_warning_stream)
+            self._logfile_warning_stream.close()
+
+
+def NewLogfileMonitorMixin(follow_paths, pattern_paths=None):
+    """Create a custom in-memory subclass of LogfileMonitorMixin.
+
+    Args:
+      follow_paths: list; Remote paths to tail.
+      pattern_paths: list; Local alert pattern definition files.
+    """
+    if not follow_paths:
+        raise InvalidConfigurationError
+
+    return type(
+        'LogfileMonitorMixin%d' % id(follow_paths),
+        (LogfileMonitorMixin,),
+        {'follow_paths': follow_paths,
+         'pattern_paths': pattern_paths or ()})
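
A sketch of how the generated mixin is meant to be combined with an SSH
host class, mirroring what factory.create_host() does; the hostname and
paths are illustrative:

    monitor_cls = logfile_monitor.NewLogfileMonitorMixin(
        follow_paths=["/var/log/messages"],
        pattern_paths=["console_patterns"])
    host_cls = type("monitored_host", (ssh_host.SSHHost, monitor_cls), {})
    host = host_cls("remote.example.com")
    host.start_loggers()  # followfiles.py on the target, console.py locally
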
diff --git a/client/common_lib/hosts/monitors/__init__.py b/client/common_lib/hosts/monitors/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/client/common_lib/hosts/monitors/common.py b/client/common_lib/hosts/monitors/common.py
new file mode 100644
index 0000000..e0c5ccf
--- /dev/null
+++ b/client/common_lib/hosts/monitors/common.py
@@ -0,0 +1,8 @@
+import os, sys
+dirname = os.path.dirname(sys.modules[__name__].__file__)
+client_dir = os.path.abspath(os.path.join(dirname, "..", "..", ".."))
+autotest_dir = os.path.abspath(os.path.join(client_dir, ".."))
+sys.path.insert(0, client_dir)
+import setup_modules
+sys.path.pop(0)
+setup_modules.setup(base_path=autotest_dir, root_module_name="autotest_lib")
diff --git a/client/common_lib/hosts/monitors/console.py b/client/common_lib/hosts/monitors/console.py
new file mode 100755
index 0000000..c516f9f
--- /dev/null
+++ b/client/common_lib/hosts/monitors/console.py
@@ -0,0 +1,88 @@
+#!/usr/bin/python
+#
+# Script for translating console output (from STDIN) into Autotest
+# warning messages.
+
+import gzip, optparse, os, signal, sys, time
+import common
+from autotest_lib.client.common_lib.hosts.monitors import monitors_util
+
+PATTERNS_PATH = os.path.join(os.path.dirname(__file__), 'console_patterns')
+
+usage = 'usage: %prog [options] logfile_name warn_fd'
+parser = optparse.OptionParser(usage=usage)
+parser.add_option(
+    '-t', '--log_timestamp_format',
+    default='[%Y-%m-%d %H:%M:%S]',
+    help='Timestamp format for log messages')
+parser.add_option(
+    '-p', '--pattern_paths',
+    default=PATTERNS_PATH,
+    help='Path to alert hook patterns file')
+
+
+def _open_logfile(logfile_base_name):
+    """Opens an output file using the given name.
+
+    A timestamp and compression is added to the name.
+
+    @param logfile_base_name - The log file path without a compression suffix.
+    @returns An open file-like object.  Its close method must be called before
+            exiting or data may be lost due to internal buffering.
+    """
+    timestamp = int(time.time())
+    while True:
+        logfile_name = '%s.%d-%d.gz' % (logfile_base_name,
+                                        timestamp, os.getpid())
+        if not os.path.exists(logfile_name):
+            break
+        timestamp += 1
+    logfile = gzip.GzipFile(logfile_name, 'w')
+    return logfile
+
+
+def _set_logfile_close_signal_handler(logfile):
+    """Setup a signal handler to explicitly call logfile.close() and exit.
+
+    Because we are writing a compressed file we need to make sure we properly
+    close it to flush our internal buffer on exit. logfile_monitor.py sends us
+    a SIGTERM and waits 5 seconds before sending a SIGKILL, so we have
+    plenty of time to do this.
+
+    @param logfile - An open file object to be closed on SIGTERM.
+    """
+    def _on_signal_close_logfile_before_exit(unused_signal_no, unused_frame):
+        logfile.close()
+        sys.exit(1)
+    signal.signal(signal.SIGTERM, _on_signal_close_logfile_before_exit)
+
+
+def _unset_signal_handler():
+    signal.signal(signal.SIGTERM, signal.SIG_DFL)
+
+
+def main():
+    (options, args) = parser.parse_args()
+    if len(args) != 2:
+        parser.print_help()
+        sys.exit(1)
+
+    logfile = _open_logfile(args[0])
+    warnfile = os.fdopen(int(args[1]), 'w', 0)
+    # For now we aggregate all the alert_hooks.
+    alert_hooks = []
+    for patterns_path in options.pattern_paths.split(','):
+        alert_hooks.extend(monitors_util.build_alert_hooks_from_path(
+                patterns_path, warnfile))
+
+    _set_logfile_close_signal_handler(logfile)
+    try:
+        monitors_util.process_input(
+            sys.stdin, logfile, options.log_timestamp_format, alert_hooks)
+    finally:
+        logfile.close()
+        _unset_signal_handler()
+
+
+if __name__ == '__main__':
+    main()
diff --git a/client/common_lib/hosts/monitors/console_patterns b/client/common_lib/hosts/monitors/console_patterns
new file mode 100644
index 0000000..72bf557
--- /dev/null
+++ b/client/common_lib/hosts/monitors/console_patterns
@@ -0,0 +1,71 @@
+BUG
+^.*Kernel panic ?(.*)
+machine panic'd (%s)
+
+BUG
+^.*Oops ?(.*)
+machine Oops'd (%s)
+
+BUG
+^.*kdb>
+machine dropped to kdb (see console)
+
+BUG
+^.*Open Firmware exception handle entered from non-OF code
+machine took an open firmware exception (see console)
+
+BUG
+^.*(BUG:.*)
+%s
+
+BUG
+^.*(kernel BUG .*)
+%s
+
+OOM
+^.*(invoked oom-killer:.*)
+%s
+
+BUG
+^(.*CommandAbort.*)
+%s
+
+LOCKDEP
+^.*(possible circular locking dependency detected.*)
+%s
+
+LOCKDEP
+^.*(unsafe lock order detected.*)
+%s
+
+LOCKDEP
+^.*(possible recursive locking detected.*)
+%s
+
+LOCKDEP
+^.*(inconsistent lock state.*)
+%s
+
+LOCKDEP
+^.*(possible irq lock inversion dependency detected.*)
+%s
+
+LOCKDEP
+^.*(bad unlock balance detected.*)
+%s
+
+LOCKDEP
+^.*(bad contention detected.*)
+%s
+
+LOCKDEP
+^.*(held lock freed.*)
+%s
+
+LOCKDEP
+^.*(lock held at task exit time.*)
+%s
+
+LOCKDEP
+^.*(lock held when returning to user space.*)
+%s
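
Each entry above is <msgtype>, <regex>, <alert template> separated by a
blank line; when the regex matches a console line the emitted warning is
(template % match.groups()). A worked example against the first entry:

    import re
    regex = re.compile(r"^.*Kernel panic ?(.*)")
    m = re.match(regex, "<0>Kernel panic - not syncing: Fatal exception")
    print "machine panic'd (%s)" % m.groups()
    # -> machine panic'd (- not syncing: Fatal exception)
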
diff --git a/client/common_lib/hosts/monitors/console_patterns_test.py b/client/common_lib/hosts/monitors/console_patterns_test.py
new file mode 100755
index 0000000..c9e50a5
--- /dev/null
+++ b/client/common_lib/hosts/monitors/console_patterns_test.py
@@ -0,0 +1,53 @@
+#!/usr/bin/python
+
+import common
+import cStringIO, os, unittest
+from autotest_lib.client.common_lib.hosts.monitors import monitors_util
+
+class _MockWarnFile(object):
+    def __init__(self):
+        self.warnings = []
+
+
+    def write(self, data):
+        if data == '\n':
+            return
+        timestamp, type, message = data.split('\t')
+        self.warnings.append((type, message))
+
+
+class ConsolePatternsTestCase(unittest.TestCase):
+    def setUp(self):
+        self._warnfile = _MockWarnFile()
+        patterns_path = os.path.join(os.path.dirname(__file__),
+                                     'console_patterns')
+        self._alert_hooks = monitors_util.build_alert_hooks_from_path(
+                patterns_path, self._warnfile)
+        self._logfile = cStringIO.StringIO()
+
+
+    def _process_line(self, line):
+        input_file = cStringIO.StringIO(line + '\n')
+        monitors_util.process_input(input_file, self._logfile,
+                                    alert_hooks=self._alert_hooks)
+
+
+    def _assert_warning_fired(self, type, message):
+        key = (type, message)
+        self.assert_(key in self._warnfile.warnings,
+                     'Warning %s not found in: %s' % (key,
+                                                      self._warnfile.warnings))
+
+
+    def _assert_no_warnings_fired(self):
+        self.assertEquals(self._warnfile.warnings, [])
+
+
+class ConsolePatternsTest(ConsolePatternsTestCase):
+    def test_oops(self):
+        self._process_line('<0>Oops: 0002 [1] SMP ')
+        self._assert_warning_fired('BUG', "machine Oops'd (: 0002 [1] SMP)")
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/client/common_lib/hosts/monitors/followfiles.py b/client/common_lib/hosts/monitors/followfiles.py
new file mode 100755
index 0000000..f2ad4d9
--- /dev/null
+++ b/client/common_lib/hosts/monitors/followfiles.py
@@ -0,0 +1,27 @@
+#!/usr/bin/python
+#
+# Script for tailing one to many logfiles and merging their output.
+
+import optparse, os, signal, sys
+
+import monitors_util
+
+usage = 'usage: %prog [options] follow_path ...'
+parser = optparse.OptionParser(usage=usage)
+parser.add_option(
+    '-l', '--lastlines_dirpath',
+    help='Path to store/read last line data to/from.')
+
+
+def main():
+    (options, follow_paths) = parser.parse_args()
+    if len(follow_paths) < 1:
+        parser.print_help()
+        sys.exit(1)
+
+    monitors_util.follow_files(
+        follow_paths, sys.stdout, options.lastlines_dirpath)
+
+
+if __name__ == '__main__':
+    main()
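
followfiles.py is meant to be chained into console.py; a rough local
sketch of the pipeline that launch_remote_followfiles() and
launch_local_console() wire up over ssh (paths illustrative):

    import os, subprocess, sys

    r, w = os.pipe()
    ff = subprocess.Popen(
        [sys.executable, "followfiles.py", "/var/log/messages"],
        stdout=subprocess.PIPE)
    console = subprocess.Popen(
        [sys.executable, "console.py", "messages.log", str(w)],
        stdin=ff.stdout)
    warnings = os.fdopen(r, "r", 0)   # warning lines arrive here
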
diff --git a/client/common_lib/hosts/monitors/monitors_util.py b/client/common_lib/hosts/monitors/monitors_util.py
new file mode 100644
index 0000000..3c3afcc
--- /dev/null
+++ b/client/common_lib/hosts/monitors/monitors_util.py
@@ -0,0 +1,379 @@
+# Shared utility functions across monitors scripts.
+
+import fcntl, os, re, select, signal, subprocess, sys, time
+
+TERM_MSG = 'Console connection unexpectedly lost. Terminating monitor.'
+
+
+class Error(Exception):
+    pass
+
+
+class InvalidTimestampFormat(Error):
+    pass
+
+
+def prepend_timestamp(msg, format):
+    """Prepend timestamp to a message in a standard way.
+
+    Args:
+      msg: str; Message to prepend timestamp to.
+      format: str or callable; Either format string that
+          can be passed to time.strftime or a callable
+          that will generate the timestamp string.
+
+    Returns: str; 'timestamp\tmsg'
+    """
+    if type(format) is str:
+        timestamp = time.strftime(format, time.localtime())
+    elif callable(format):
+        timestamp = str(format())
+    else:
+        raise InvalidTimestampFormat
+
+    return '%s\t%s' % (timestamp, msg)
+
+
+def write_logline(logfile, msg, timestamp_format=None):
+    """Write msg, possibly prepended with a timestamp, as a terminated line.
+
+    Args:
+      logfile: file; File object to .write() msg to.
+      msg: str; Message to write.
+      timestamp_format: str or callable; If specified will
+          be passed into prepend_timestamp along with msg.
+    """
+    msg = msg.rstrip('\n')
+    if timestamp_format:
+        msg = prepend_timestamp(msg, timestamp_format)
+    logfile.write(msg + '\n')
+
+
+def make_alert(warnfile, msg_type, msg_template, timestamp_format=None):
+    """Create an alert generation function that writes to warnfile.
+
+    Args:
+      warnfile: file; File object to write msg's to.
+      msg_type: str; String describing the message type
+      msg_template: str; String template that function params
+          are passed through.
+      timestamp_format: str or callable; If specified will
+          be passed into prepend_timestamp along with msg.
+
+    Returns: function with a signature of (*params);
+        The format for a warning used here is:
+            %(timestamp)d\t%(msg_type)s\t%(status)s\n
+    """
+    if timestamp_format is None:
+        timestamp_format = lambda: int(time.time())
+
+    def alert(*params):
+        formatted_msg = msg_type + "\t" + msg_template % params
+        timestamped_msg = prepend_timestamp(formatted_msg, timestamp_format)
+        print >> warnfile, timestamped_msg
+    return alert
+
+
+def _assert_is_all_blank_lines(lines, source_file):
+    if sum(len(line.strip()) for line in lines) > 0:
+        raise ValueError('warning patterns are not separated by blank lines '
+                         'in %s' % source_file)
+
+
+def _read_overrides(overrides_file):
+    """
+    Read pattern overrides from overrides_file, which may be None.  Overrides
+    files are expected to have the format:
+    <old regex> <newline> <new regex> <newline> <newline>
+            old regex = a regex from the patterns file
+            new regex = the regex to replace it
+    Lines beginning with # are ignored.
+
+    Returns a dict mapping old regexes to their replacements.
+    """
+    if not overrides_file:
+        return {}
+    overrides_lines = [line for line in overrides_file.readlines()
+                       if not line.startswith('#')]
+    overrides_pairs = zip(overrides_lines[0::3], overrides_lines[1::3])
+    _assert_is_all_blank_lines(overrides_lines[2::3], overrides_file)
+    return dict(overrides_pairs)
+
+
+def build_alert_hooks(patterns_file, warnfile, overrides_file=None):
+    """Parse data in patterns file and transform into alert_hook list.
+
+    Args:
+      patterns_file: file; File to read alert pattern definitions from.
+      warnfile: file; File to configure alert function to write warning to.
+
+    Returns:
+      list; Regex to alert function mapping.
+          [(regex, alert_function), ...]
+    """
+    pattern_lines = patterns_file.readlines()
+    # expected pattern format:
+    # <msgtype> <newline> <regex> <newline> <alert> <newline> <newline>
+    #   msgtype = a string categorizing the type of the message - used for
+    #             enabling/disabling specific categories of warnings
+    #   regex   = a python regular expression
+    #   alert   = a string describing the alert message
+    #             if the regex matches the line, this displayed warning will
+    #             be the result of (alert % match.groups())
+    patterns = zip(pattern_lines[0::4], pattern_lines[1::4],
+                   pattern_lines[2::4])
+    _assert_is_all_blank_lines(pattern_lines[3::4], patterns_file)
+
+    overrides_map = _read_overrides(overrides_file)
+
+    hooks = []
+    for msgtype, regex, alert in patterns:
+        regex = overrides_map.get(regex, regex)
+        regex = re.compile(regex.rstrip('\n'))
+        alert_function = make_alert(warnfile, msgtype.rstrip('\n'),
+                                    alert.rstrip('\n'))
+        hooks.append((regex, alert_function))
+    return hooks
+
+
+def build_alert_hooks_from_path(patterns_path, warnfile):
+    """
+    Same as build_alert_hooks, but accepts a path to a patterns file and
+    automatically finds the corresponding site overrides file if one exists.
+    """
+    dirname, basename = os.path.split(patterns_path)
+    site_overrides_basename = 'site_' + basename + '_overrides'
+    site_overrides_path = os.path.join(dirname, site_overrides_basename)
+    site_overrides_file = None
+    patterns_file = open(patterns_path)
+    try:
+        if os.path.exists(site_overrides_path):
+            site_overrides_file = open(site_overrides_path)
+        try:
+            return build_alert_hooks(patterns_file, warnfile,
+                                     overrides_file=site_overrides_file)
+        finally:
+            if site_overrides_file:
+                site_overrides_file.close()
+    finally:
+        patterns_file.close()
+
+
+def process_input(
+    input, logfile, log_timestamp_format=None, alert_hooks=()):
+    """Continuously read lines from input stream and:
+
+    - Write them to log, possibly prefixed by timestamp.
+    - Watch for alert patterns.
+
+    Args:
+      input: file; Stream to read from.
+      logfile: file; Log file to write to
+      log_timestamp_format: str; Format to use for timestamping entries.
+          No timestamp is added if None.
+      alert_hooks: list; Generated from build_alert_hooks.
+          [(regex, alert_function), ...]
+    """
+    while True:
+        line = input.readline()
+        if len(line) == 0:
+            # this should only happen if the remote console unexpectedly
+            # goes away. terminate this process so that we don't spin
+            # forever doing 0-length reads off of input
+            write_logline(logfile, TERM_MSG, log_timestamp_format)
+            break
+
+        if line == '\n':
+            # If it's just an empty line we discard and continue.
+            continue
+
+        write_logline(logfile, line, log_timestamp_format)
+
+        for regex, callback in alert_hooks:
+            match = re.match(regex, line.strip())
+            if match:
+                callback(*match.groups())
+
+
+def lookup_lastlines(lastlines_dirpath, path):
+    """Retrieve last lines seen for path.
+
+    Open corresponding lastline file for path
+    If there isn't one or isn't a match return None
+
+    Args:
+      lastlines_dirpath: str; Dirpath to store lastlines files to.
+      path: str; Filepath to source file that lastlines came from.
+
+    Returns:
+      int; Reverse line number where the last seen lines were found
+      - Or -
+      None; Otherwise
+    """
+    underscored = path.replace('/', '_')
+    try:
+        lastlines_file = open(os.path.join(lastlines_dirpath, underscored))
+    except (OSError, IOError):
+        return
+
+    lastlines = lastlines_file.read()
+    lastlines_file.close()
+    os.remove(lastlines_file.name)
+    if not lastlines:
+        return
+
+    try:
+        target_file = open(path)
+    except (OSError, IOError):
+        return
+
+    # Load it all in for now
+    target_data = target_file.read()
+    target_file.close()
+    # Get start loc in the target_data string, scanning from right
+    loc = target_data.rfind(lastlines)
+    if loc == -1:
+        return
+
+    # Then translate this into a reverse line number
+    # (count newlines that occur afterward)
+    reverse_lineno = target_data.count('\n', loc + len(lastlines))
+    return reverse_lineno
+
+
+def write_lastlines_file(lastlines_dirpath, path, data):
+    """Write data to lastlines file for path.
+
+    Args:
+      lastlines_dirpath: str; Dirpath to store lastlines files to.
+      path: str; Filepath to source file that data comes from.
+      data: str;
+
+    Returns:
+      str; Filepath that lastline data was written to.
+    """
+    underscored = path.replace('/', '_')
+    dest_path = os.path.join(lastlines_dirpath, underscored)
+    open(dest_path, 'w').write(data)
+    return dest_path
+
+
+def nonblocking(pipe):
+    """Set python file object to nonblocking mode.
+
+    This allows us to take advantage of pipe.read()
+    where we don't have to specify a buflen.
+    Cuts down on a few lines we'd have to maintain.
+
+    Args:
+      pipe: file; File object to modify
+
+    Returns: pipe
+    """
+    flags = fcntl.fcntl(pipe, fcntl.F_GETFL)
+    fcntl.fcntl(pipe, fcntl.F_SETFL, flags | os.O_NONBLOCK)
+    return pipe
+
+
+def launch_tails(follow_paths, lastlines_dirpath=None):
+    """Launch a tail process for each follow_path.
+
+    Args:
+      follow_paths: list;
+      lastlines_dirpath: str;
+
+    Returns:
+      tuple; (procs, pipes) or
+          ({path: subprocess.Popen, ...}, {file: path, ...})
+    """
+    if lastlines_dirpath and not os.path.exists(lastlines_dirpath):
+        os.makedirs(lastlines_dirpath)
+
+    tail_cmd = ('/usr/bin/tail', '--retry', '--follow=name')
+    procs = {}  # path -> tail_proc
+    pipes = {}  # tail_proc.stdout -> path
+    for path in follow_paths:
+        cmd = list(tail_cmd)
+        if lastlines_dirpath:
+            reverse_lineno = lookup_lastlines(lastlines_dirpath, path)
+            if reverse_lineno is None:
+                reverse_lineno = 1
+            cmd.append('--lines=%d' % reverse_lineno)
+
+        cmd.append(path)
+        tail_proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
+        procs[path] = tail_proc
+        pipes[nonblocking(tail_proc.stdout)] = path
+
+    return procs, pipes
+
+
+def poll_tail_pipes(pipes, lastlines_dirpath=None, waitsecs=5):
+    """Wait on tail pipes for new data for waitsecs, return any new lines.
+
+    Args:
+      pipes: dict; {subprocess.Popen: follow_path, ...}
+      lastlines_dirpath: str; Path to write lastlines to.
+      waitsecs: int; Timeout to pass to select
+
+    Returns:
+      tuple; (lines, bad_pipes) or ([line, ...], [subprocess.Popen, ...])
+    """
+    lines = []
+    bad_pipes = []
+    # Block until at least one is ready to read or waitsecs elapses
+    ready, _, _ = select.select(pipes.keys(), (), (), waitsecs)
+    for fi in ready:
+        path = pipes[fi]
+        data = fi.read()
+        if len(data) == 0:
+            # If no data, process is probably dead, add to bad_pipes
+            bad_pipes.append(fi)
+            continue
+
+        if lastlines_dirpath:
+            # Overwrite the lastlines file for this source path
+            # Probably just want to write the last 1-3 lines.
+            write_lastlines_file(lastlines_dirpath, path, data)
+
+        for line in data.splitlines():
+            lines.append('[%s]\t%s\n' % (path, line))
+
+    return lines, bad_pipes
+
+
+def snuff(subprocs):
+    """Helper for killing off remaining live subprocesses.
+
+    Args:
+      subprocs: list; [subprocess.Popen, ...]
+    """
+    for proc in subprocs:
+        if proc.poll() is None:
+            os.kill(proc.pid, signal.SIGKILL)
+            proc.wait()
+
+
+def follow_files(follow_paths, outstream, lastlines_dirpath=None, waitsecs=5):
+    """Launch tail on a set of files and merge their output into outstream.
+
+    Args:
+      follow_paths: list; Local paths to launch tail on.
+      outstream: file; Output stream to write aggregated lines to.
+      lastlines_dirpath: Local dirpath to record last lines seen in.
+      waitsecs: int; Timeout for poll_tail_pipes.
+    """
+    procs, pipes = launch_tails(follow_paths, lastlines_dirpath)
+    while pipes:
+        lines, bad_pipes = poll_tail_pipes(pipes, lastlines_dirpath, waitsecs)
+        for bad in bad_pipes:
+            pipes.pop(bad)
+
+        try:
+            outstream.writelines(['\n'] + lines)
+            outstream.flush()
+        except (IOError, OSError), e:
+            # Something is wrong. Stop looping.
+            break
+
+    snuff(procs.values())
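
A round trip through the two central helpers, mirroring the unit tests
below (the pattern and input line are illustrative):

    import StringIO
    import monitors_util

    patterns = StringIO.StringIO("BUG\n^.*(kernel BUG .*)\n%s\n")
    warnfile = StringIO.StringIO()
    hooks = monitors_util.build_alert_hooks(patterns, warnfile)
    log = StringIO.StringIO()
    monitors_util.process_input(
        StringIO.StringIO("kernel BUG at mm/slab.c:123\n"),
        log, alert_hooks=hooks)
    # log now holds the line plus TERM_MSG; warnfile the timestamped alert
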
diff --git a/client/common_lib/hosts/monitors/monitors_util_unittest.py b/client/common_lib/hosts/monitors/monitors_util_unittest.py
new file mode 100755
index 0000000..b7a19d7
--- /dev/null
+++ b/client/common_lib/hosts/monitors/monitors_util_unittest.py
@@ -0,0 +1,177 @@
+#!/usr/bin/python
+
+import fcntl, os, signal, subprocess, StringIO
+import tempfile, textwrap, time, unittest
+import monitors_util
+
+
+def InlineStringIO(text):
+    return StringIO.StringIO(textwrap.dedent(text).strip())
+
+
+class WriteLoglineTestCase(unittest.TestCase):
+    def setUp(self):
+        self.time_tuple = (2008, 10, 31, 18, 58, 17, 4, 305, 1)
+        self.format = '[%Y-%m-%d %H:%M:%S]'
+        self.formatted_time_tuple = '[2008-10-31 18:58:17]'
+        self.msg = 'testing testing'
+
+        # Stub out time.localtime()
+        self.orig_localtime = time.localtime
+        time.localtime = lambda: self.time_tuple
+
+
+    def tearDown(self):
+        time.localtime = self.orig_localtime
+
+
+    def test_prepend_timestamp(self):
+        timestamped = monitors_util.prepend_timestamp(
+            self.msg, self.format)
+        self.assertEquals(
+            '%s\t%s' % (self.formatted_time_tuple, self.msg), timestamped)
+
+
+    def test_write_logline_with_timestamp(self):
+        logfile = StringIO.StringIO()
+        monitors_util.write_logline(logfile, self.msg, self.format)
+        logfile.seek(0)
+        written = logfile.read()
+        self.assertEquals(
+            '%s\t%s\n' % (self.formatted_time_tuple, self.msg), written)
+
+
+    def test_write_logline_without_timestamp(self):
+        logfile = StringIO.StringIO()
+        monitors_util.write_logline(logfile, self.msg)
+        logfile.seek(0)
+        written = logfile.read()
+        self.assertEquals(
+            '%s\n' % self.msg, written)
+
+
+class AlertHooksTestCase(unittest.TestCase):
+    def setUp(self):
+        self.msg_template = 'alert yay %s haha %s'
+        self.params = ('foo', 'bar')
+        self.epoch_seconds = 1225501829.9300611
+        # Stub out time.time
+        self.orig_time = time.time
+        time.time = lambda: self.epoch_seconds
+
+
+    def tearDown(self):
+        time.time = self.orig_time
+
+
+    def test_make_alert(self):
+        warnfile = StringIO.StringIO()
+        alert = monitors_util.make_alert(warnfile, "MSGTYPE",
+                                         self.msg_template)
+        alert(*self.params)
+        warnfile.seek(0)
+        written = warnfile.read()
+        ts = str(int(self.epoch_seconds))
+        expected = '%s\tMSGTYPE\t%s\n' % (ts, self.msg_template % self.params)
+        self.assertEquals(expected, written)
+
+
+    def test_build_alert_hooks(self):
+        warnfile = StringIO.StringIO()
+        patterns_file = InlineStringIO("""
+            BUG
+            ^.*Kernel panic ?(.*)
+            machine panic'd (%s)
+
+            BUG
+            ^.*Oops ?(.*)
+            machine Oops'd (%s)
+            """)
+        hooks = monitors_util.build_alert_hooks(patterns_file, warnfile)
+        self.assertEquals(len(hooks), 2)
+
+
+class ProcessInputTestCase(unittest.TestCase):
+    def test_process_input_simple(self):
+        input = InlineStringIO("""
+            woo yay
+            this is a line
+            booya
+            """)
+        logfile = StringIO.StringIO()
+        monitors_util.process_input(input, logfile)
+        input.seek(0)
+        logfile.seek(0)
+
+        self.assertEquals(
+            '%s\n%s\n' % (input.read(), monitors_util.TERM_MSG),
+            logfile.read())
+
+
+class FollowFilesTestCase(unittest.TestCase):
+    def setUp(self):
+        self.logfile_dirpath = tempfile.mkdtemp()
+        self.logfile_path = os.path.join(self.logfile_dirpath, 'messages')
+        self.firstline = 'bip\n'
+        self.lastline_seen = 'wooo\n'
+        self.line_after_lastline_seen = 'yeah\n'
+        self.lastline = 'pow\n'
+
+        self.logfile = open(self.logfile_path, 'w')
+        self.logfile.write(self.firstline)
+        self.logfile.write(self.lastline_seen)
+        self.logfile.write(self.line_after_lastline_seen)  # 3
+        self.logfile.write('man\n')   # 2
+        self.logfile.write(self.lastline)   # 1
+        self.logfile.close()
+
+        self.lastlines_dirpath = tempfile.mkdtemp()
+        monitors_util.write_lastlines_file(
+            self.lastlines_dirpath, self.logfile_path, self.lastline_seen)
+
+
+    def test_lookup_lastlines(self):
+        reverse_lineno = monitors_util.lookup_lastlines(
+            self.lastlines_dirpath, self.logfile_path)
+        self.assertEquals(reverse_lineno, 3)
+
+
+    def test_nonblocking(self):
+        po = subprocess.Popen('echo', stdout=subprocess.PIPE)
+        flags = fcntl.fcntl(po.stdout, fcntl.F_GETFL)
+        self.assertEquals(flags, 0)
+        monitors_util.nonblocking(po.stdout)
+        flags = fcntl.fcntl(po.stdout, fcntl.F_GETFL)
+        self.assertEquals(flags, 2048)
+        po.wait()
+
+
+    def test_follow_files_nostate(self):
+        follow_paths = [self.logfile_path]
+        lastlines_dirpath = tempfile.mkdtemp()
+        procs, pipes = monitors_util.launch_tails(
+            follow_paths, lastlines_dirpath)
+        lines, bad_pipes = monitors_util.poll_tail_pipes(
+            pipes, lastlines_dirpath)
+        first_shouldmatch = '[%s]\t%s' % (
+            self.logfile_path, self.lastline)
+        self.assertEquals(lines[0], first_shouldmatch)
+        monitors_util.snuff(procs.values())
+
+
+    def test_follow_files(self):
+        follow_paths = [self.logfile_path]
+        procs, pipes = monitors_util.launch_tails(
+            follow_paths, self.lastlines_dirpath)
+        lines, bad_pipes = monitors_util.poll_tail_pipes(
+            pipes, self.lastlines_dirpath)
+        first_shouldmatch = '[%s]\t%s' % (
+            self.logfile_path, self.line_after_lastline_seen)
+        self.assertEquals(lines[0], first_shouldmatch)
+        monitors_util.snuff(procs.values())
+        last_shouldmatch = '[%s]\t%s' % (self.logfile_path, self.lastline)
+        self.assertEquals(lines[-1], last_shouldmatch)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/client/common_lib/hosts/netconsole.py b/client/common_lib/hosts/netconsole.py
new file mode 100644
index 0000000..104ad17
--- /dev/null
+++ b/client/common_lib/hosts/netconsole.py
@@ -0,0 +1,160 @@
+import os, re, sys, subprocess, socket
+
+from autotest_lib.client.common_lib import utils, error
+from autotest_lib.client.common_lib.hosts import remote
+
+
+class NetconsoleHost(remote.RemoteHost):
+    def _initialize(self, console_log="netconsole.log", *args, **dargs):
+        super(NetconsoleHost, self)._initialize(*args, **dargs)
+
+        self.__logger = None
+        self.__console_log = console_log
+
+        # get a socket for us to listen on
+        self.__socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+        self.__socket.bind(('', 0))
+        self.__port = self.__socket.getsockname()[1]
+
+
+    @classmethod
+    def host_is_supported(cls, run_func):
+        local_ip = socket.gethostbyname(socket.gethostname())
+        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+        s.settimeout(1)
+        s.bind((local_ip, 0))
+        local_port = s.getsockname()[1]
+        send_udp_packet = (
+            """python -c "from socket import *; """
+            """s = socket(AF_INET, SOCK_DGRAM); """
+            """s.sendto('ping', ('%s', %d))" """ % (local_ip, local_port))
+        run_func(send_udp_packet)
+        try:
+            msg = s.recv(4)
+        except Exception:
+            supported = False
+        else:
+            supported = (msg == "ping")
+        s.close()
+        return supported
+
+
+    def start_loggers(self):
+        super(NetconsoleHost, self).start_loggers()
+
+        if not self.__console_log:
+            return
+
+        self.__netconsole_params = self.__determine_netconsole_params()
+        if self.__netconsole_params is None:
+            return
+
+        r, w = os.pipe()
+        script_path = os.path.join(self.monitordir, "console.py")
+        cmd = [sys.executable, script_path, self.__console_log, str(w)]
+
+        self.__warning_stream = os.fdopen(r, "r", 0)
+        if self.job:
+            self.job.warning_loggers.add(self.__warning_stream)
+
+        stdin = self.__socket.fileno()
+        stdout = stderr = open(os.devnull, "w")
+        self.__logger = subprocess.Popen(cmd, stdin=stdin, stdout=stdout,
+                                         stderr=stderr)
+        os.close(w)
+
+        self.__unload_netconsole_module()
+        self.__load_netconsole_module()
+
+
+    def stop_loggers(self):
+        super(NetconsoleHost, self).stop_loggers()
+
+        if self.__logger:
+            utils.nuke_subprocess(self.__logger)
+            self.__logger = None
+            if self.job:
+                self.job.warning_loggers.discard(self.__warning_stream)
+            self.__warning_stream.close()
+
+
+    def reboot_setup(self, *args, **dargs):
+        super(NetconsoleHost, self).reboot_setup(*args, **dargs)
+
+        if self.__netconsole_params is not None:
+            label = dargs.get("label", None)
+            if not label:
+                label = self.bootloader.get_default_title()
+            args = "debug " + self.__netconsole_params
+            self.bootloader.add_args(label, args)
+        self.__unload_netconsole_module()
+
+
+    def reboot_followup(self, *args, **dargs):
+        super(NetconsoleHost, self).reboot_followup(*args, **dargs)
+        self.__load_netconsole_module()
+
+
+    def __determine_netconsole_params(self):
+        """
+        Connect to the remote machine and determine the values to use for the
+        required netconsole parameters.
+        """
+        # determine the IP addresses of the local and remote machine
+        # PROBLEM: on machines with multiple IPs this may not make any sense
+        # It also doesn't work with IPv6
+        remote_ip = socket.gethostbyname(self.hostname)
+        local_ip = socket.gethostbyname(socket.gethostname())
+
+        # Get the gateway of the remote machine
+        try:
+            traceroute = self.run('traceroute -n %s' % local_ip)
+        except error.AutoservRunError:
+            return
+        first_node = traceroute.stdout.split("\n")[0]
+        match = re.search(r'\s+((\d+\.){3}\d+)\s+', first_node)
+        if match:
+            router_ip = match.group(1)
+        else:
+            return
+
+        # Look up the MAC address of the gateway
+        try:
+            self.run('ping -c 1 %s' % router_ip)
+            arp = self.run('arp -n -a %s' % router_ip)
+        except error.AutoservRunError:
+            return
+        match = re.search(r'\s+(([0-9A-F]{2}:){5}[0-9A-F]{2})\s+', arp.stdout)
+        if match:
+            gateway_mac = match.group(1)
+        else:
+            return None
+
+        return 'netconsole=@%s/,%s@%s/%s' % (remote_ip, self.__port, local_ip,
+                                             gateway_mac)
+
+
+    def __load_netconsole_module(self):
+        """
+        Make a best effort to load the netconsole module.
+
+        Note that loading the module can fail even when the remote machine is
+        working correctly if netconsole is already compiled into the kernel
+        and started.
+        """
+        if self.__netconsole_params is None:
+            return
+
+        try:
+            self.run('dmesg -n 8')
+            self.run('modprobe netconsole %s' % self.__netconsole_params)
+        except error.AutoservRunError, e:
+            # if it fails there isn't much we can do, just keep going
+            print "ERROR occured while loading netconsole: %s" % e
+
+
+    def __unload_netconsole_module(self):
+        try:
+            self.run('modprobe -r netconsole')
+        except error.AutoservRunError:
+            pass
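
For reference, the parameter string built by
__determine_netconsole_params() follows the standard netconsole module
format; with illustrative values:

    remote_ip, port, local_ip = "10.0.0.2", 6666, "10.0.0.1"
    gateway_mac = "00:11:22:33:44:55"
    params = 'netconsole=@%s/,%s@%s/%s' % (remote_ip, port, local_ip,
                                           gateway_mac)
    # -> netconsole=@10.0.0.2/,6666@10.0.0.1/00:11:22:33:44:55
    # i.e. log from the target's 10.0.0.2 to our listener on 10.0.0.1:6666
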
diff --git a/client/common_lib/hosts/paramiko_host.py b/client/common_lib/hosts/paramiko_host.py
new file mode 100644
index 0000000..dca3f5b
--- /dev/null
+++ b/client/common_lib/hosts/paramiko_host.py
@@ -0,0 +1,310 @@
+import os, sys, time, signal, socket, re, fnmatch, logging, threading
+import paramiko
+
+from autotest_lib.client.common_lib import utils, error, global_config
+from autotest_lib.client.common_lib import subcommand
+from autotest_lib.client.common_lib.hosts import abstract_ssh
+
+
+class ParamikoHost(abstract_ssh.AbstractSSHHost):
+    KEEPALIVE_TIMEOUT_SECONDS = 30
+    CONNECT_TIMEOUT_SECONDS = 30
+    CONNECT_TIMEOUT_RETRIES = 3
+    BUFFSIZE = 2**16
+
+    def _initialize(self, hostname, *args, **dargs):
+        super(ParamikoHost, self)._initialize(hostname=hostname, *args, **dargs)
+
+        # paramiko is very noisy, tone down the logging
+        paramiko.util.log_to_file("/dev/null", paramiko.util.ERROR)
+
+        self.keys = self.get_user_keys(hostname)
+        self.pid = None
+
+
+    @staticmethod
+    def _load_key(path):
+        """Given a path to a private key file, load the appropriate keyfile.
+
+        Tries to load the file as both an RSAKey and a DSAKey. If the file
+        cannot be loaded as either type, returns None."""
+        try:
+            return paramiko.DSSKey.from_private_key_file(path)
+        except paramiko.SSHException:
+            try:
+                return paramiko.RSAKey.from_private_key_file(path)
+            except paramiko.SSHException:
+                return None
+
+
+    @staticmethod
+    def _parse_config_line(line):
+        """Given an ssh config line, return a (key, value) tuple for the
+        config value listed in the line, or (None, None)"""
+        match = re.match(r"\s*(\w+)\s*=?(.*)\n", line)
+        if match:
+            return match.groups()
+        else:
+            return None, None
+
+
+    @staticmethod
+    def get_user_keys(hostname):
+        """Returns a mapping of path -> paramiko.PKey entries available for
+        this user. Keys are found in the default locations (~/.ssh/id_[d|r]sa)
+        as well as any IdentityFile entries in the standard ssh config files.
+        """
+        raw_identity_files = ["~/.ssh/id_dsa", "~/.ssh/id_rsa"]
+        for config_path in ("/etc/ssh/ssh_config", "~/.ssh/config"):
+            config_path = os.path.expanduser(config_path)
+            if not os.path.exists(config_path):
+                continue
+            host_pattern = "*"
+            config_lines = open(config_path).readlines()
+            for line in config_lines:
+                key, value = ParamikoHost._parse_config_line(line)
+                if key == "Host":
+                    host_pattern = value
+                elif (key == "IdentityFile"
+                      and fnmatch.fnmatch(hostname, host_pattern)):
+                    raw_identity_files.append(value)
+
+        # drop any files that use percent-escapes; we don't support them
+        identity_files = []
+        UNSUPPORTED_ESCAPES = ["%d", "%u", "%l", "%h", "%r"]
+        for path in raw_identity_files:
+            # skip this path if it uses % escapes
+            if sum((escape in path) for escape in UNSUPPORTED_ESCAPES):
+                continue
+            path = os.path.expanduser(path)
+            if os.path.exists(path):
+                identity_files.append(path)
+
+        # load up all the keys that we can and return them
+        user_keys = {}
+        for path in identity_files:
+            key = ParamikoHost._load_key(path)
+            if key:
+                user_keys[path] = key
+
+        # load up all the ssh agent keys
+        use_sshagent = global_config.global_config.get_config_value(
+            'AUTOSERV', 'use_sshagent_with_paramiko', type=bool)
+        if use_sshagent:
+            ssh_agent = paramiko.Agent()
+            for i, key in enumerate(ssh_agent.get_keys()):
+                user_keys['agent-key-%d' % i] = key
+
+        return user_keys
+
+
+    def _check_transport_error(self, transport):
+        error = transport.get_exception()
+        if error:
+            transport.close()
+            raise error
+
+
+    def _connect_socket(self):
+        """Return a socket for use in instantiating a paramiko transport. Does
+        not have to be a literal socket, it can be anything that the
+        paramiko.Transport constructor accepts."""
+        return self.hostname, self.port
+
+
+    def _connect_transport(self, pkey):
+        for _ in xrange(self.CONNECT_TIMEOUT_RETRIES):
+            transport = paramiko.Transport(self._connect_socket())
+            completed = threading.Event()
+            transport.start_client(completed)
+            completed.wait(self.CONNECT_TIMEOUT_SECONDS)
+            if completed.isSet():
+                self._check_transport_error(transport)
+                completed.clear()
+                transport.auth_publickey(self.user, pkey, completed)
+                completed.wait(self.CONNECT_TIMEOUT_SECONDS)
+                if completed.isSet():
+                    self._check_transport_error(transport)
+                    if not transport.is_authenticated():
+                        transport.close()
+                        raise paramiko.AuthenticationException()
+                    return transport
+            logging.warn("SSH negotiation (%s:%d) timed out, retrying",
+                         self.hostname, self.port)
+            # HACK: we can't count on transport.join not hanging now, either
+            transport.join = lambda: None
+            transport.close()
+        logging.error("SSH negotation (%s:%d) has timed out %s times, "
+                      "giving up", self.hostname, self.port,
+                      self.CONNECT_TIMEOUT_RETRIES)
+        raise error.AutoservSSHTimeout("SSH negotiation timed out")
+
+
+    def _init_transport(self):
+        for path, key in self.keys.iteritems():
+            try:
+                logging.debug("Connecting with %s", path)
+                transport = self._connect_transport(key)
+                transport.set_keepalive(self.KEEPALIVE_TIMEOUT_SECONDS)
+                self.transport = transport
+                self.pid = os.getpid()
+                return
+            except paramiko.AuthenticationException:
+                logging.debug("Authentication failure")
+        else:
+            raise error.AutoservSshPermissionDeniedError(
+                "Permission denied using all keys available to ParamikoHost",
+                utils.CmdResult())
+
+
+    def _open_channel(self, timeout):
+        start_time = time.time()
+        if os.getpid() != self.pid:
+            if self.pid is not None:
+                # HACK: paramiko tries to join() on its worker thread
+                # and this just hangs on linux after a fork()
+                self.transport.join = lambda: None
+                self.transport.atfork()
+                join_hook = lambda cmd: self._close_transport()
+                subcommand.subcommand.register_join_hook(join_hook)
+                logging.debug("Reopening SSH connection after a process fork")
+            self._init_transport()
+
+        channel = None
+        try:
+            channel = self.transport.open_session()
+        except (socket.error, paramiko.SSHException, EOFError), e:
+            logging.warn("Exception occured while opening session: %s", e)
+            if time.time() - start_time >= timeout:
+                raise error.AutoservSSHTimeout("ssh failed: %s" % e)
+
+        if not channel:
+            # we couldn't get a channel; re-initing transport should fix that
+            try:
+                self.transport.close()
+            except Exception, e:
+                logging.debug("paramiko.Transport.close failed with %s", e)
+            self._init_transport()
+            return self.transport.open_session()
+        else:
+            return channel
+
+
+    def _close_transport(self):
+        if os.getpid() == self.pid:
+            self.transport.close()
+
+
+    def close(self):
+        super(ParamikoHost, self).close()
+        self._close_transport()
+
+
+    @classmethod
+    def _exhaust_stream(cls, tee, output_list, recvfunc):
+        while True:
+            try:
+                output_list.append(recvfunc(cls.BUFFSIZE))
+            except socket.timeout:
+                return
+            tee.write(output_list[-1])
+            if not output_list[-1]:
+                return
+
+
+    @classmethod
+    def __send_stdin(cls, channel, stdin):
+        if not stdin or not channel.send_ready():
+            # nothing more to send or just no space to send now
+            return
+
+        sent = channel.send(stdin[:cls.BUFFSIZE])
+        if not sent:
+            logging.warn('Could not send a single stdin byte.')
+        else:
+            stdin = stdin[sent:]
+            if not stdin:
+                # no more stdin input, close output direction
+                channel.shutdown_write()
+        return stdin
+
+
+    def run(self, command, timeout=3600, ignore_status=False,
+            stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS,
+            connect_timeout=30, stdin=None, verbose=True, args=()):
+        """
+        Run a command on the remote host.
+        @see common_lib.hosts.host.run()
+
+        @param connect_timeout: connection timeout (in seconds)
+        @param stdin: string; stdin to feed to the executed command
+        @param verbose: log the commands
+
+        @raises AutoservRunError: if the command failed
+        @raises AutoservSSHTimeout: ssh connection has timed out
+        """
+
+        stdout = utils.get_stream_tee_file(
+                stdout_tee, utils.DEFAULT_STDOUT_LEVEL,
+                prefix=utils.STDOUT_PREFIX)
+        stderr = utils.get_stream_tee_file(
+                stderr_tee, utils.get_stderr_level(ignore_status),
+                prefix=utils.STDERR_PREFIX)
+
+        for arg in args:
+            command += ' "%s"' % utils.sh_escape(arg)
+
+        if verbose:
+            logging.debug("Running (ssh-paramiko) '%s'" % command)
+
+        # start up the command
+        start_time = time.time()
+        try:
+            channel = self._open_channel(timeout)
+            channel.exec_command(command)
+        except (socket.error, paramiko.SSHException, EOFError), e:
+            # This has to match the string from paramiko *exactly*.
+            if str(e) != 'Channel closed.':
+                raise error.AutoservSSHTimeout("ssh failed: %s" % e)
+
+        # pull in all the stdout, stderr until the command terminates
+        raw_stdout, raw_stderr = [], []
+        timed_out = False
+        while not channel.exit_status_ready():
+            if channel.recv_ready():
+                raw_stdout.append(channel.recv(self.BUFFSIZE))
+                stdout.write(raw_stdout[-1])
+            if channel.recv_stderr_ready():
+                raw_stderr.append(channel.recv_stderr(self.BUFFSIZE))
+                stderr.write(raw_stderr[-1])
+            if timeout and time.time() - start_time > timeout:
+                timed_out = True
+                break
+            stdin = self.__send_stdin(channel, stdin)
+            time.sleep(1)
+
+        if timed_out:
+            exit_status = -signal.SIGTERM
+        else:
+            exit_status = channel.recv_exit_status()
+        channel.settimeout(10)
+        self._exhaust_stream(stdout, raw_stdout, channel.recv)
+        self._exhaust_stream(stderr, raw_stderr, channel.recv_stderr)
+        channel.close()
+        duration = time.time() - start_time
+
+        # create the appropriate results
+        stdout = "".join(raw_stdout)
+        stderr = "".join(raw_stderr)
+        result = utils.CmdResult(command, stdout, stderr, exit_status,
+                                 duration)
+        if exit_status == -signal.SIGHUP:
+            msg = "ssh connection unexpectedly terminated"
+            raise error.AutoservRunError(msg, result)
+        if timed_out:
+            logging.warn('Paramiko command timed out after %s sec: %s', timeout,
+                         command)
+            raise error.AutoservRunError("command timed out", result)
+        if not ignore_status and exit_status:
+            raise error.AutoservRunError(command, result)
+        return result
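
To make the paramiko-backed run() above concrete, here is a minimal usage
sketch; the hostname and command are hypothetical, and it assumes a factory
(such as hosts.create_host) that returns a host using this implementation:

    # Sketch only: run a command with a timeout, tolerating a non-zero exit.
    host = hosts.create_host("test-machine.example.com")
    result = host.run("uname -r", timeout=60, ignore_status=True)
    print result.exit_status, result.stdout.strip()
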
diff --git a/client/common_lib/hosts/remote.py b/client/common_lib/hosts/remote.py
new file mode 100644
index 0000000..cbf1356
--- /dev/null
+++ b/client/common_lib/hosts/remote.py
@@ -0,0 +1,272 @@
+"""This class defines the Remote host class, mixing in the SiteHost class
+if it is available."""
+
+import os, logging, urllib
+from autotest_lib.client.common_lib import error
+from autotest_lib.server import utils
+from autotest_lib.client.common_lib.hosts import base_classes, bootloader
+
+
+class RemoteHost(base_classes.Host):
+    """
+    This class represents a remote machine on which you can run
+    programs.
+
+    It may be accessed through a network, a serial line, ...
+    It is not the machine autoserv is running on.
+
+    Implementation details:
+    This is an abstract class, leaf subclasses must implement the methods
+    listed here and in parent classes which have no implementation. They
+    may reimplement methods which already have an implementation. You
+    must not instantiate this class but should instantiate one of those
+    leaf subclasses.
+    """
+
+    DEFAULT_REBOOT_TIMEOUT = base_classes.Host.DEFAULT_REBOOT_TIMEOUT
+    LAST_BOOT_TAG = object()
+    DEFAULT_HALT_TIMEOUT = 2 * 60
+
+    VAR_LOG_MESSAGES_COPY_PATH = "/var/tmp/messages.autotest_start"
+
+    def _initialize(self, hostname, autodir=None, *args, **dargs):
+        super(RemoteHost, self)._initialize(*args, **dargs)
+
+        self.hostname = hostname
+        self.autodir = autodir
+        self.tmp_dirs = []
+
+
+    def __repr__(self):
+        return "<remote host: %s>" % self.hostname
+
+
+    def close(self):
+        super(RemoteHost, self).close()
+        self.stop_loggers()
+
+        if hasattr(self, 'tmp_dirs'):
+            for dir in self.tmp_dirs:
+                try:
+                    self.run('rm -rf "%s"' % (utils.sh_escape(dir)))
+                except error.AutoservRunError:
+                    pass
+
+
+    def job_start(self):
+        """
+        Abstract method, called the first time a remote host object
+        is created for a specific host after a job starts.
+
+        This method depends on the create_host factory being used to
+        construct your host object. If you directly construct host objects
+        you will need to call this method yourself (and enforce the
+        single-call rule).
+        """
+        try:
+            self.run('rm -f %s' % self.VAR_LOG_MESSAGES_COPY_PATH)
+            self.run('cp /var/log/messages %s' %
+                     self.VAR_LOG_MESSAGES_COPY_PATH)
+        except Exception, e:
+            # Non-fatal error
+            logging.info('Failed to copy /var/log/messages at startup: %s', e)
+
+
+    def get_autodir(self):
+        return self.autodir
+
+
+    def set_autodir(self, autodir):
+        """
+        This method is called to make the host object aware of where
+        autotest is installed. Called in server/autotest.py after a
+        successful install.
+        """
+        self.autodir = autodir
+
+
+    def sysrq_reboot(self):
+        self.run('echo b > /proc/sysrq-trigger &')
+
+
+    def halt(self, timeout=DEFAULT_HALT_TIMEOUT, wait=True):
+        self.run('/sbin/halt')
+        if wait:
+            self.wait_down(timeout=timeout)
+
+
+    def reboot(self, timeout=DEFAULT_REBOOT_TIMEOUT, label=LAST_BOOT_TAG,
+               kernel_args=None, wait=True, fastsync=False,
+               reboot_cmd=None, **dargs):
+        """
+        Reboot the remote host.
+
+        Args:
+                timeout - How long to wait for the reboot.
+                label - The label we should boot into.  If None, we will
+                        boot into the default kernel.  If it's LAST_BOOT_TAG,
+                        we'll boot into whichever kernel was .boot'ed last
+                        (or the default kernel if we haven't .boot'ed in this
+                        job).  If it's something else, we'll boot into that.
+                wait - Should we wait to see if the machine comes back up.
+                fastsync - Don't wait for the sync to complete, just start one
+                        and move on. This is for cases where rebooting promptly
+                        is more important than data integrity and/or the
+                        machine may have disks that cause sync to never return.
+                reboot_cmd - Reboot command to execute.
+        """
+        if self.job:
+            if label == self.LAST_BOOT_TAG:
+                label = self.job.last_boot_tag
+            else:
+                self.job.last_boot_tag = label
+
+        self.reboot_setup(label=label, kernel_args=kernel_args, **dargs)
+
+        if label or kernel_args:
+            if not label:
+                label = self.bootloader.get_default_title()
+            self.bootloader.boot_once(label)
+            if kernel_args:
+                self.bootloader.add_args(label, kernel_args)
+
+        # define a function for the reboot and run it in a group
+        print "Reboot: initiating reboot"
+        def reboot():
+            self.record("GOOD", None, "reboot.start")
+            try:
+                current_boot_id = self.get_boot_id()
+
+                # sync before starting the reboot, so that a long sync during
+                # shutdown isn't timed out by wait_down's short timeout
+                if not fastsync:
+                    self.run('sync; sync', timeout=timeout, ignore_status=True)
+
+                if reboot_cmd:
+                    self.run(reboot_cmd)
+                else:
+                    # Try several methods of rebooting in increasing harshness.
+                    self.run('(('
+                             ' sync &'
+                             ' sleep 5; reboot &'
+                             ' sleep 60; reboot -f &'
+                             ' sleep 10; reboot -nf &'
+                             ' sleep 10; telinit 6 &'
+                             ') </dev/null >/dev/null 2>&1 &)')
+            except error.AutoservRunError:
+                self.record("ABORT", None, "reboot.start",
+                              "reboot command failed")
+                raise
+            if wait:
+                self.wait_for_restart(timeout, old_boot_id=current_boot_id,
+                                      **dargs)
+
+        # if this is a full reboot-and-wait, run the reboot inside a group
+        if wait:
+            self.log_reboot(reboot)
+        else:
+            reboot()
+
+
+    def reboot_followup(self, *args, **dargs):
+        super(RemoteHost, self).reboot_followup(*args, **dargs)
+        if self.job:
+            self.job.profilers.handle_reboot(self)
+
+
+    def wait_for_restart(self, timeout=DEFAULT_REBOOT_TIMEOUT, **dargs):
+        """
+        Wait for the host to come back from a reboot. This wraps the
+        generic wait_for_restart implementation in a reboot group.
+        """
+        def reboot_func():
+            super(RemoteHost, self).wait_for_restart(timeout=timeout, **dargs)
+        self.log_reboot(reboot_func)
+
+
+    def cleanup(self):
+        super(RemoteHost, self).cleanup()
+        self.reboot()
+
+
+    def get_tmp_dir(self, parent='/tmp'):
+        """
+        Return the pathname of a directory on the host suitable
+        for temporary file storage.
+
+        The directory and its content will be deleted automatically
+        on the destruction of the Host object that was used to obtain
+        it.
+        """
+        self.run("mkdir -p %s" % parent)
+        template = os.path.join(parent, 'autoserv-XXXXXX')
+        dir_name = self.run("mktemp -d %s" % template).stdout.rstrip()
+        self.tmp_dirs.append(dir_name)
+        return dir_name
+
+
+    def get_platform_label(self):
+        """
+        Return the platform label, or None if platform label is not set.
+        """
+
+        if self.job:
+            keyval_path = os.path.join(self.job.resultdir, 'host_keyvals',
+                                       self.hostname)
+            keyvals = utils.read_keyval(keyval_path)
+            return keyvals.get('platform', None)
+        else:
+            return None
+
+
+    def get_all_labels(self):
+        """
+        Return all labels, or an empty list if no labels are set.
+        """
+        if self.job:
+            keyval_path = os.path.join(self.job.resultdir, 'host_keyvals',
+                                       self.hostname)
+            keyvals = utils.read_keyval(keyval_path)
+            all_labels = keyvals.get('labels', '')
+            if all_labels:
+                all_labels = all_labels.split(',')
+                return [urllib.unquote(label) for label in all_labels]
+        return []
+
+
+    def delete_tmp_dir(self, tmpdir):
+        """
+        Delete the given temporary directory on the remote machine.
+        """
+        self.run('rm -rf "%s"' % utils.sh_escape(tmpdir), ignore_status=True)
+        self.tmp_dirs.remove(tmpdir)
+
+
+    def check_uptime(self):
+        """
+        Check that uptime is available and monotonically increasing.
+        """
+        if not self.is_up():
+            raise error.AutoservHostError('Client does not appear to be up')
+        result = self.run("/bin/cat /proc/uptime", 30)
+        return result.stdout.strip().split()[0]
+
+
+    def are_wait_up_processes_up(self):
+        """
+        Checks if any HOSTS waitup processes are running yet on the
+        remote host.
+
+        Returns True if any of the waitup processes are running, False
+        otherwise.
+        """
+        processes = self.get_wait_up_processes()
+        if len(processes) == 0:
+            return True # wait up processes aren't being used
+        for procname in processes:
+            exit_status = self.run("{ ps -e || ps; } | grep '%s'" % procname,
+                                   ignore_status=True).exit_status
+            if exit_status == 0:
+                return True
+        return False
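
As a rough illustration of the get_tmp_dir()/delete_tmp_dir() contract
described above (hostname hypothetical; RemoteHost itself is abstract, so a
concrete subclass such as SSHHost would actually be instantiated):

    host = hosts.create_host("test-machine.example.com")
    tmp = host.get_tmp_dir()      # mktemp -d under /tmp on the remote machine
    host.run('touch "%s/marker"' % utils.sh_escape(tmp))
    host.delete_tmp_dir(tmp)      # explicit cleanup; close() would also
                                  # remove any tmp dirs still registered
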
diff --git a/client/common_lib/hosts/remote_unittest.py b/client/common_lib/hosts/remote_unittest.py
new file mode 100755
index 0000000..03ddc77
--- /dev/null
+++ b/client/common_lib/hosts/remote_unittest.py
@@ -0,0 +1,16 @@
+#!/usr/bin/python
+
+import unittest
+import common
+
+from autotest_lib.client.common_lib.hosts import remote
+
+
+class test_remote_host(unittest.TestCase):
+    def test_has_hostname(self):
+        host = remote.RemoteHost("myhost")
+        self.assertEqual(host.hostname, "myhost")
+
+
+if __name__ == "__main__":
+    unittest.main()
diff --git a/client/common_lib/hosts/serial.py b/client/common_lib/hosts/serial.py
new file mode 100644
index 0000000..3fc26a6
--- /dev/null
+++ b/client/common_lib/hosts/serial.py
@@ -0,0 +1,183 @@
+import os, sys, subprocess, logging
+
+from autotest_lib.client.common_lib import utils, error
+from autotest_lib.client.common_lib.hosts import remote
+
+
+SiteHost = utils.import_site_class(
+    __file__, "autotest_lib.client.common_lib.hosts.site_host", "SiteHost",
+    remote.RemoteHost)
+
+
+class SerialHost(SiteHost):
+    DEFAULT_REBOOT_TIMEOUT = SiteHost.DEFAULT_REBOOT_TIMEOUT
+
+    def _initialize(self, conmux_server=None, conmux_attach=None,
+                    console_log="console.log", *args, **dargs):
+        super(SerialHost, self)._initialize(*args, **dargs)
+
+        self.__logger = None
+        self.__console_log = console_log
+
+        self.conmux_server = conmux_server
+        self.conmux_attach = self._get_conmux_attach(conmux_attach)
+
+
+    @classmethod
+    def _get_conmux_attach(cls, conmux_attach=None):
+        if conmux_attach:
+            return conmux_attach
+
+        # assume we're using the conmux-attach provided with autotest
+        server_dir = utils.get_server_dir()
+        path = os.path.join(server_dir, "..", "conmux", "conmux-attach")
+        path = os.path.abspath(path)
+        return path
+
+
+    @staticmethod
+    def _get_conmux_hostname(hostname, conmux_server):
+        if conmux_server:
+            return "%s/%s" % (conmux_server, hostname)
+        else:
+            return hostname
+
+
+    def get_conmux_hostname(self):
+        return self._get_conmux_hostname(self.hostname, self.conmux_server)
+
+
+    @classmethod
+    def host_is_supported(cls, hostname, conmux_server=None,
+                          conmux_attach=None):
+        """ Returns a boolean indicating if the remote host with "hostname"
+        supports use as a SerialHost """
+        conmux_attach = cls._get_conmux_attach(conmux_attach)
+        conmux_hostname = cls._get_conmux_hostname(hostname, conmux_server)
+        cmd = "%s %s echo 2> /dev/null" % (conmux_attach, conmux_hostname)
+        try:
+            result = utils.run(cmd, ignore_status=True, timeout=10)
+            return result.exit_status == 0
+        except error.CmdError:
+            logging.warning("Timed out while trying to attach to conmux")
+
+        return False
+
+
+    def start_loggers(self):
+        super(SerialHost, self).start_loggers()
+
+        if self.__console_log is None:
+            return
+
+        if not self.conmux_attach or not os.path.exists(self.conmux_attach):
+            return
+
+        r, w = os.pipe()
+        script_path = os.path.join(self.monitordir, 'console.py')
+        cmd = [self.conmux_attach, self.get_conmux_hostname(),
+               '%s %s %s %d' % (sys.executable, script_path,
+                                self.__console_log, w)]
+
+        self.__warning_stream = os.fdopen(r, 'r', 0)
+        if self.job:
+            self.job.warning_loggers.add(self.__warning_stream)
+
+        stdout = stderr = open(os.devnull, 'w')
+        self.__logger = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
+        os.close(w)
+
+
+    def stop_loggers(self):
+        super(SerialHost, self).stop_loggers()
+
+        if self.__logger:
+            utils.nuke_subprocess(self.__logger)
+            self.__logger = None
+            if self.job:
+                self.job.warning_loggers.discard(self.__warning_stream)
+            self.__warning_stream.close()
+
+
+    def run_conmux(self, cmd):
+        """
+        Send a command to the conmux session
+        """
+        if not self.conmux_attach or not os.path.exists(self.conmux_attach):
+            return False
+        cmd = '%s %s echo %s 2> /dev/null' % (self.conmux_attach,
+                                              self.get_conmux_hostname(),
+                                              cmd)
+        result = utils.system(cmd, ignore_status=True)
+        return result == 0
+
+
+    def hardreset(self, timeout=DEFAULT_REBOOT_TIMEOUT, wait=True,
+                  conmux_command='hardreset', num_attempts=1, halt=False,
+                  **wait_for_restart_kwargs):
+        """
+        Reach out and slap the box in the power switch.
+        @param conmux_command: The command to run via the conmux interface
+        @param timeout: time limit in seconds before the machine is
+                        considered unreachable
+        @param wait: Whether or not to wait for the machine to reboot
+        @param num_attempts: Number of times to attempt the hard reset,
+                             raising an error on the last attempt.
+        @param halt: Halts the machine before hard resetting.
+        @param **wait_for_restart_kwargs: keyword arguments passed to
+                wait_for_restart()
+        """
+        conmux_command = "'~$%s'" % conmux_command
+
+        # if the machine is up, grab the old boot id, otherwise use a dummy
+        # string and NOT None to ensure that wait_down always returns True,
+        # even if the machine comes back up before it's called
+        try:
+            old_boot_id = self.get_boot_id()
+        except error.AutoservSSHTimeout:
+            old_boot_id = 'unknown boot_id prior to SerialHost.hardreset'
+
+        def reboot():
+            if halt:
+                self.halt()
+            if not self.run_conmux(conmux_command):
+                self.record("ABORT", None, "reboot.start",
+                            "hard reset unavailable")
+                raise error.AutoservUnsupportedError(
+                    'Hard reset unavailable')
+            self.record("GOOD", None, "reboot.start", "hard reset")
+            if wait:
+                warning_msg = ('Serial console failed to respond to hard reset '
+                               'attempt (%s/%s)')
+                for attempt in xrange(num_attempts-1):
+                    try:
+                        self.wait_for_restart(timeout, log_failure=False,
+                                              old_boot_id=old_boot_id,
+                                              **wait_for_restart_kwargs)
+                    except error.AutoservShutdownError:
+                        logging.warning(warning_msg, attempt+1, num_attempts)
+                        # re-send the hard reset command
+                        self.run_conmux(conmux_command)
+                    else:
+                        break
+                else:
+                    # Run on num_attempts=1 or last retry
+                    try:
+                        self.wait_for_restart(timeout,
+                                              old_boot_id=old_boot_id,
+                                              **wait_for_restart_kwargs)
+                    except error.AutoservShutdownError:
+                        logging.warning(warning_msg, num_attempts, num_attempts)
+                        msg = "Host did not shutdown"
+                        raise error.AutoservShutdownError(msg)
+
+        if self.job:
+            self.job.disable_warnings("POWER_FAILURE")
+        try:
+            if wait:
+                self.log_reboot(reboot)
+            else:
+                reboot()
+        finally:
+            if self.job:
+                self.job.enable_warnings("POWER_FAILURE")
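
A hedged sketch of driving the conmux-based hard reset above; the hostname
and conmux server are placeholders:

    if SerialHost.host_is_supported("test-machine",
                                    conmux_server="conmux.example.com"):
        host = hosts.create_host("test-machine",
                                 conmux_server="conmux.example.com")
        # halt first, then power-cycle, re-sending the reset once on timeout
        host.hardreset(num_attempts=2, halt=True)
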
diff --git a/client/common_lib/hosts/site_factory.py b/client/common_lib/hosts/site_factory.py
new file mode 100644
index 0000000..59f7053
--- /dev/null
+++ b/client/common_lib/hosts/site_factory.py
@@ -0,0 +1,4 @@
+def postprocess_classes(classes, hostname, **args):
+    # by default, do nothing
+    # insert site-specific processing of the class list here
+    pass
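
As a purely illustrative example, a site install could override this hook to
adjust the detected class list (the domain check below is hypothetical):

    def postprocess_classes(classes, hostname, **args):
        # hypothetical site policy: lab machines always get a serial console
        if hostname.endswith(".lab.example.com"):
            classes.append(serial.SerialHost)
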
diff --git a/client/common_lib/hosts/ssh_host.py b/client/common_lib/hosts/ssh_host.py
new file mode 100644
index 0000000..ee71901
--- /dev/null
+++ b/client/common_lib/hosts/ssh_host.py
@@ -0,0 +1,245 @@
+#
+# Copyright 2007 Google Inc. Released under the GPL v2
+
+"""
+This module defines the SSHHost class.
+
+Implementation details:
+You should import the "hosts" package instead of importing each type of host.
+
+        SSHHost: a remote machine with ssh access
+"""
+
+import sys, re, traceback, logging
+from autotest_lib.client.common_lib import error, pxssh
+from autotest_lib.client.common_lib import utils
+from autotest_lib.client.common_lib.hosts import abstract_ssh
+
+
+class SSHHost(abstract_ssh.AbstractSSHHost):
+    """
+    This class represents a remote machine controlled through an ssh
+    session on which you can run programs.
+
+    It is not the machine autoserv is running on. The machine must be
+    configured for password-less login, for example through public key
+    authentication.
+
+    It includes support for controlling the machine through a serial
+    console on which you can run programs. If such a serial console is
+    set up on the machine then capabilities such as hard reset and
+    bootstrap monitoring are available. If the machine does not have a
+    serial console available then ordinary SSH-based commands will
+    still be available, but attempts to use extensions such as
+    console logging or hard reset will fail silently.
+
+    Implementation details:
+    This is a leaf class in an abstract class hierarchy, it must
+    implement the unimplemented methods in parent classes.
+    """
+
+    def _initialize(self, hostname, *args, **dargs):
+        """
+        Construct an SSHHost object
+
+        Args:
+                hostname: network hostname or address of remote machine
+        """
+        super(SSHHost, self)._initialize(hostname=hostname, *args, **dargs)
+        self.setup_ssh()
+
+
+    def ssh_command(self, connect_timeout=30, options='', alive_interval=300):
+        """
+        Construct an ssh command with proper args for this host.
+        """
+        options = "%s %s" % (options, self.master_ssh_option)
+        base_cmd = abstract_ssh.make_ssh_command(user=self.user, port=self.port,
+                                                opts=options,
+                                                hosts_file=self.known_hosts_fd,
+                                                connect_timeout=connect_timeout,
+                                                alive_interval=alive_interval)
+        return "%s %s" % (base_cmd, self.hostname)
+
+
+    def _run(self, command, timeout, ignore_status, stdout, stderr,
+             connect_timeout, env, options, stdin, args):
+        """Helper function for run()."""
+        ssh_cmd = self.ssh_command(connect_timeout, options)
+        if not env.strip():
+            env = ""
+        else:
+            env = "export %s;" % env
+        for arg in args:
+            command += ' "%s"' % utils.sh_escape(arg)
+        full_cmd = '%s "%s %s"' % (ssh_cmd, env, utils.sh_escape(command))
+        result = utils.run(full_cmd, timeout, True, stdout, stderr,
+                           verbose=False, stdin=stdin,
+                           stderr_is_expected=ignore_status)
+
+        # The error messages will show up in band (indistinguishable
+        # from stuff sent through the SSH connection), so we have the
+        # remote computer echo the message "Connected." before running
+        # any command.  Since the following 2 errors have to do with
+        # connecting, it's safe to do these checks.
+        if result.exit_status == 255:
+            if re.search(r'^ssh: connect to host .* port .*: '
+                         r'Connection timed out\r$', result.stderr):
+                raise error.AutoservSSHTimeout("ssh timed out", result)
+            if "Permission denied." in result.stderr:
+                msg = "ssh permission denied"
+                raise error.AutoservSshPermissionDeniedError(msg, result)
+
+        if not ignore_status and result.exit_status > 0:
+            raise error.AutoservRunError("command execution error", result)
+
+        return result
+
+
+    def run(self, command, timeout=3600, ignore_status=False,
+            stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS,
+            connect_timeout=30, options='', stdin=None, verbose=True, args=()):
+        """
+        Run a command on the remote host.
+        @see common_lib.hosts.host.run()
+
+        @param connect_timeout: connection timeout (in seconds)
+        @param options: string with additional ssh command options
+        @param verbose: log the commands
+
+        @raises AutoservRunError: if the command failed
+        @raises AutoservSSHTimeout: ssh connection has timed out
+        """
+        if verbose:
+            logging.debug("Running (ssh) '%s'" % command)
+
+        # Start a master SSH connection if necessary.
+        self.start_master_ssh()
+
+        env = " ".join("=".join(pair) for pair in self.env.iteritems())
+        try:
+            return self._run(command, timeout, ignore_status, stdout_tee,
+                             stderr_tee, connect_timeout, env, options,
+                             stdin, args)
+        except error.CmdError, cmderr:
+            # We get a CmdError here only if the command timed out.
+            # Catch that and stuff it into AutoservRunError and raise it.
+            raise error.AutoservRunError(cmderr.args[0], cmderr.args[1])
+
+
+    def run_short(self, command, **kwargs):
+        """
+        Calls the run() command with a short default timeout.
+
+        Args:
+                Takes the same arguments as does run(),
+                with the exception of the timeout argument which
+                here is fixed at 60 seconds.
+                It returns the result of run.
+        """
+        return self.run(command, timeout=60, **kwargs)
+
+
+    def run_grep(self, command, timeout=30, ignore_status=False,
+                             stdout_ok_regexp=None, stdout_err_regexp=None,
+                             stderr_ok_regexp=None, stderr_err_regexp=None,
+                             connect_timeout=30):
+        """
+        Run a command on the remote host and look for regexp
+        in stdout or stderr to determine if the command was
+        successful or not.
+
+        Args:
+                command: the command line string
+                timeout: time limit in seconds before attempting to
+                        kill the running process. The run() function
+                        will take a few seconds longer than 'timeout'
+                        to complete if it has to kill the process.
+                ignore_status: do not raise an exception, no matter
+                        what the exit code of the command is.
+                stdout_ok_regexp: regexp that should be in stdout
+                        if the command was successful.
+                stdout_err_regexp: regexp that should be in stdout
+                        if the command failed.
+                stderr_ok_regexp: regexp that should be in stderr
+                        if the command was successful.
+                stderr_err_regexp: regexp that should be in stderr
+                        if the command failed.
+
+        Returns:
+                None if the command was successful; raises an exception
+                otherwise.
+
+        Raises:
+                AutoservRunError:
+                - if the exit code of the command execution was not 0,
+                - if stderr_err_regexp is found in stderr,
+                - if stdout_err_regexp is found in stdout,
+                - if stderr_ok_regexp is not found in stderr,
+                - if stdout_ok_regexp is not found in stdout.
+        """
+
+        # We ignore the status, because we will handle it at the end.
+        result = self.run(command, timeout, ignore_status=True,
+                          connect_timeout=connect_timeout,
+                          stderr_is_expected=ignore_status)
+
+        # Look for the patterns, in order
+        for (regexp, stream) in ((stderr_err_regexp, result.stderr),
+                                 (stdout_err_regexp, result.stdout)):
+            if regexp and stream:
+                err_re = re.compile(regexp)
+                if err_re.search(stream):
+                    raise error.AutoservRunError(
+                        '%s failed, found error pattern: "%s"' % (command,
+                                                                regexp), result)
+
+        for (regexp, stream) in ((stderr_ok_regexp, result.stderr),
+                                 (stdout_ok_regexp, result.stdout)):
+            if regexp and stream:
+                ok_re = re.compile(regexp)
+                if ok_re.search(stream):
+                    return
+
+        if not ignore_status and result.exit_status > 0:
+            raise error.AutoservRunError("command execution error", result)
+
+
+    def setup_ssh_key(self):
+        logging.debug('Performing SSH key setup on %s:%d as %s.' %
+                      (self.hostname, self.port, self.user))
+
+        try:
+            host = pxssh.pxssh()
+            host.login(self.hostname, self.user, self.password,
+                        port=self.port)
+            public_key = utils.get_public_key()
+
+            host.sendline('mkdir -p ~/.ssh')
+            host.prompt()
+            host.sendline('chmod 700 ~/.ssh')
+            host.prompt()
+            host.sendline("echo '%s' >> ~/.ssh/authorized_keys; " %
+                            public_key)
+            host.prompt()
+            host.sendline('chmod 600 ~/.ssh/authorized_keys')
+            host.prompt()
+            host.logout()
+
+            logging.debug('SSH key setup complete.')
+
+        except:
+            logging.debug('SSH key setup has failed.')
+            try:
+                host.logout()
+            except:
+                pass
+
+
+    def setup_ssh(self):
+        if self.password:
+            try:
+                self.ssh_ping()
+            except error.AutoservSshPingHostError:
+                self.setup_ssh_key()
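
To illustrate the run_grep() precedence rules documented above, a small
sketch (the command and patterns are illustrative only):

    host = hosts.create_host("test-machine.example.com")
    # Returns quietly when "PASS" appears on stdout; raises AutoservRunError
    # first if "FAIL" appears, per the error-pattern-first ordering above.
    host.run_grep("run_selftest --quick",
                  stdout_ok_regexp="PASS",
                  stdout_err_regexp="FAIL")
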
diff --git a/client/common_lib/subcommand.py b/client/common_lib/subcommand.py
new file mode 100644
index 0000000..8aa2d96
--- /dev/null
+++ b/client/common_lib/subcommand.py
@@ -0,0 +1,263 @@
+__author__ = """Copyright Andy Whitcroft, Martin J. Bligh - 2006, 2007"""
+
+import sys, os, subprocess, time, signal, cPickle, logging
+
+from autotest_lib.client.common_lib import error, utils
+
+
+# entry points that use subcommand must set this to their logging manager
+# to get log redirection for subcommands
+logging_manager_object = None
+
+
+def parallel(tasklist, timeout=None, return_results=False):
+    """
+    Run a set of predefined subcommands in parallel.
+
+    @param tasklist: A list of subcommand instances to execute.
+    @param timeout: Number of seconds after which the commands should
+            time out.
+    @param return_results: If True, a list of the results/exceptions from the
+            tasks is returned instead of an AutoservError being raised on any
+            error.  [default: False]
+    """
+    run_error = False
+    for task in tasklist:
+        task.fork_start()
+
+    remaining_timeout = None
+    if timeout:
+        endtime = time.time() + timeout
+
+    results = []
+    for task in tasklist:
+        if timeout:
+            remaining_timeout = max(endtime - time.time(), 1)
+        try:
+            status = task.fork_waitfor(timeout=remaining_timeout)
+        except error.AutoservSubcommandError:
+            run_error = True
+        else:
+            if status != 0:
+                run_error = True
+
+        results.append(cPickle.load(task.result_pickle))
+        task.result_pickle.close()
+
+    if return_results:
+        return results
+    elif run_error:
+        message = 'One or more subcommands failed:\n'
+        for task, result in zip(tasklist, results):
+            message += 'task: %s returned/raised: %r\n' % (task, result)
+        raise error.AutoservError(message)
+
+
+def parallel_simple(function, arglist, log=True, timeout=None,
+                    return_results=False):
+    """
+    Each element in arglist is used to create a subcommand object, where
+    that arg is used both as a subdir name and as the single argument to
+    pass to "function".
+
+    We create a subcommand object for each element in the list,
+    then execute those subcommand objects in parallel.
+
+    NOTE: As an optimization, if len(arglist) == 1 a subcommand is not used.
+
+    @param function: A callable to run in parallel once per arg in arglist.
+    @param arglist: A list of single arguments to be used one per subcommand;
+            typically a list of machine names.
+    @param log: If True, output will be written to a subdirectory named
+            after each subcommand's arg.
+    @param timeout: Number of seconds after which the commands should
+            time out.
+    @param return_results: If True, a list of the results/exceptions from the
+            function called on each arg is returned instead of an
+            AutoservError being raised on any error.  [default: False]
+
+    @returns None or a list of results/exceptions.
+    """
+    if not arglist:
+        logging.warn('parallel_simple was called with an empty arglist, '
+                     'did you forget to pass in a list of machines?')
+    # Bypass the multithreading if only one machine.
+    if len(arglist) == 1:
+        arg = arglist[0]
+        if return_results:
+            try:
+                result = function(arg)
+            except Exception, e:
+                return [e]
+            return [result]
+        else:
+            function(arg)
+            return
+
+    subcommands = []
+    for arg in arglist:
+        args = [arg]
+        if log:
+            subdir = str(arg)
+        else:
+            subdir = None
+        subcommands.append(subcommand(function, args, subdir))
+    return parallel(subcommands, timeout, return_results=return_results)
+
+
+class subcommand(object):
+    fork_hooks, join_hooks = [], []
+
+    def __init__(self, func, args, subdir=None):
+        # func(args) - the subcommand to run
+        # subdir     - the subdirectory to log results in
+        if subdir:
+            self.subdir = os.path.abspath(subdir)
+            if not os.path.exists(self.subdir):
+                os.mkdir(self.subdir)
+            self.debug = os.path.join(self.subdir, 'debug')
+            if not os.path.exists(self.debug):
+                os.mkdir(self.debug)
+        else:
+            self.subdir = None
+            self.debug = None
+
+        self.func = func
+        self.args = args
+        self.lambda_function = lambda: func(*args)
+        self.pid = None
+        self.returncode = None
+
+
+    def __str__(self):
+        return ('subcommand(func=%s, args=%s, subdir=%s)' %
+                (self.func, self.args, self.subdir))
+
+
+    @classmethod
+    def register_fork_hook(cls, hook):
+        """ Register a function to be called from the child process after
+        forking. """
+        cls.fork_hooks.append(hook)
+
+
+    @classmethod
+    def register_join_hook(cls, hook):
+        """ Register a function to be called when from the child process
+        just before the child process terminates (joins to the parent). """
+        cls.join_hooks.append(hook)
+
+
+    def redirect_output(self):
+        if self.subdir and logging_manager_object:
+            tag = os.path.basename(self.subdir)
+            logging_manager_object.tee_redirect_debug_dir(self.debug, tag=tag)
+
+
+    def fork_start(self):
+        sys.stdout.flush()
+        sys.stderr.flush()
+        r, w = os.pipe()
+        self.returncode = None
+        self.pid = os.fork()
+
+        if self.pid:                            # I am the parent
+            os.close(w)
+            self.result_pickle = os.fdopen(r, 'r')
+            return
+        else:
+            os.close(r)
+
+        # We are the child from this point on. Never return.
+        signal.signal(signal.SIGTERM, signal.SIG_DFL) # clear handler
+        if self.subdir:
+            os.chdir(self.subdir)
+        self.redirect_output()
+
+        try:
+            for hook in self.fork_hooks:
+                hook(self)
+            result = self.lambda_function()
+            os.write(w, cPickle.dumps(result, cPickle.HIGHEST_PROTOCOL))
+            exit_code = 0
+        except Exception, e:
+            logging.exception('function failed')
+            exit_code = 1
+            os.write(w, cPickle.dumps(e, cPickle.HIGHEST_PROTOCOL))
+
+        os.close(w)
+
+        try:
+            for hook in self.join_hooks:
+                hook(self)
+        finally:
+            sys.stdout.flush()
+            sys.stderr.flush()
+            os._exit(exit_code)
+
+
+    def _handle_exitstatus(self, sts):
+        """
+        This is partially borrowed from subprocess.Popen.
+        """
+        if os.WIFSIGNALED(sts):
+            self.returncode = -os.WTERMSIG(sts)
+        elif os.WIFEXITED(sts):
+            self.returncode = os.WEXITSTATUS(sts)
+        else:
+            # Should never happen
+            raise RuntimeError("Unknown child exit status!")
+
+        if self.returncode != 0:
+            print "subcommand failed pid %d" % self.pid
+            print "%s" % (self.func,)
+            print "rc=%d" % self.returncode
+            print
+            if self.debug:
+                stderr_file = os.path.join(self.debug, 'autoserv.stderr')
+                if os.path.exists(stderr_file):
+                    for line in open(stderr_file).readlines():
+                        print line,
+            print "\n--------------------------------------------\n"
+            raise error.AutoservSubcommandError(self.func, self.returncode)
+
+
+    def poll(self):
+        """
+        This is borrowed from subprocess.Popen.
+        """
+        if self.returncode is None:
+            try:
+                pid, sts = os.waitpid(self.pid, os.WNOHANG)
+                if pid == self.pid:
+                    self._handle_exitstatus(sts)
+            except os.error:
+                pass
+        return self.returncode
+
+
+    def wait(self):
+        """
+        This is borrowed from subprocess.Popen.
+        """
+        if self.returncode is None:
+            pid, sts = os.waitpid(self.pid, 0)
+            self._handle_exitstatus(sts)
+        return self.returncode
+
+
+    def fork_waitfor(self, timeout=None):
+        if not timeout:
+            return self.wait()
+        else:
+            end_time = time.time() + timeout
+            while time.time() <= end_time:
+                returncode = self.poll()
+                if returncode is not None:
+                    return returncode
+                time.sleep(1)
+
+            utils.nuke_pid(self.pid)
+            print "subcommand failed pid %d" % self.pid
+            print "%s" % (self.func,)
+            print "timeout after %ds" % timeout
+            print
+            return None
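
A short sketch of the parallel_simple() flow above; the machine names are
placeholders and verify() stands in for any per-machine callable:

    def verify(machine):
        host = hosts.create_host(machine)
        host.run("true")

    machines = ["host1.example.com", "host2.example.com"]
    # one forked subcommand per machine, each logging under ./<machine>/debug
    subcommand.parallel_simple(verify, machines, timeout=600)
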
diff --git a/client/common_lib/subcommand_unittest.py b/client/common_lib/subcommand_unittest.py
new file mode 100755
index 0000000..5a2e521
--- /dev/null
+++ b/client/common_lib/subcommand_unittest.py
@@ -0,0 +1,443 @@
+#!/usr/bin/python
+# Copyright 2009 Google Inc. Released under the GPL v2
+
+import unittest
+
+import common
+from autotest_lib.client.common_lib.test_utils import mock
+from autotest_lib.client.common_lib import subcommand
+
+
+def _create_subcommand(func, args):
+    # to avoid __init__
+    class wrapper(subcommand.subcommand):
+        def __init__(self, func, args):
+            self.func = func
+            self.args = args
+            self.subdir = None
+            self.debug = None
+            self.pid = None
+            self.returncode = None
+            self.lambda_function = lambda: func(*args)
+
+    return wrapper(func, args)
+
+
+class subcommand_test(unittest.TestCase):
+    def setUp(self):
+        self.god = mock.mock_god()
+
+
+    def tearDown(self):
+        self.god.unstub_all()
+        # cleanup the hooks
+        subcommand.subcommand.fork_hooks = []
+        subcommand.subcommand.join_hooks = []
+
+
+    def test_create(self):
+        def check_attributes(cmd, func, args, subdir=None, debug=None,
+                             pid=None, returncode=None, fork_hooks=[],
+                             join_hooks=[]):
+            self.assertEquals(cmd.func, func)
+            self.assertEquals(cmd.args, args)
+            self.assertEquals(cmd.subdir, subdir)
+            self.assertEquals(cmd.debug, debug)
+            self.assertEquals(cmd.pid, pid)
+            self.assertEquals(cmd.returncode, returncode)
+            self.assertEquals(cmd.fork_hooks, fork_hooks)
+            self.assertEquals(cmd.join_hooks, join_hooks)
+
+        def func(arg1, arg2):
+            pass
+
+        cmd = subcommand.subcommand(func, (2, 3))
+        check_attributes(cmd, func, (2, 3))
+        self.god.check_playback()
+
+        self.god.stub_function(subcommand.os.path, 'abspath')
+        self.god.stub_function(subcommand.os.path, 'exists')
+        self.god.stub_function(subcommand.os, 'mkdir')
+
+        subcommand.os.path.abspath.expect_call('dir').and_return('/foo/dir')
+        subcommand.os.path.exists.expect_call('/foo/dir').and_return(False)
+        subcommand.os.mkdir.expect_call('/foo/dir')
+
+        (subcommand.os.path.exists.expect_call('/foo/dir/debug')
+                .and_return(False))
+        subcommand.os.mkdir.expect_call('/foo/dir/debug')
+
+        cmd = subcommand.subcommand(func, (2, 3), subdir='dir')
+        check_attributes(cmd, func, (2, 3), subdir='/foo/dir',
+                         debug='/foo/dir/debug')
+        self.god.check_playback()
+
+
+    def _setup_fork_start_parent(self):
+        self.god.stub_function(subcommand.os, 'fork')
+
+        subcommand.os.fork.expect_call().and_return(1000)
+        func = self.god.create_mock_function('func')
+        cmd = _create_subcommand(func, [])
+        cmd.fork_start()
+
+        return cmd
+
+
+    def test_fork_start_parent(self):
+        cmd = self._setup_fork_start_parent()
+
+        self.assertEquals(cmd.pid, 1000)
+        self.god.check_playback()
+
+
+    def _setup_fork_start_child(self):
+        self.god.stub_function(subcommand.os, 'pipe')
+        self.god.stub_function(subcommand.os, 'fork')
+        self.god.stub_function(subcommand.os, 'close')
+        self.god.stub_function(subcommand.os, 'write')
+        self.god.stub_function(subcommand.cPickle, 'dumps')
+        self.god.stub_function(subcommand.os, '_exit')
+
+
+    def test_fork_start_child(self):
+        self._setup_fork_start_child()
+
+        func = self.god.create_mock_function('func')
+        fork_hook = self.god.create_mock_function('fork_hook')
+        join_hook = self.god.create_mock_function('join_hook')
+
+        subcommand.subcommand.register_fork_hook(fork_hook)
+        subcommand.subcommand.register_join_hook(join_hook)
+        cmd = _create_subcommand(func, (1, 2))
+
+        subcommand.os.pipe.expect_call().and_return((10, 20))
+        subcommand.os.fork.expect_call().and_return(0)
+        subcommand.os.close.expect_call(10)
+        fork_hook.expect_call(cmd)
+        func.expect_call(1, 2).and_return(True)
+        subcommand.cPickle.dumps.expect_call(True,
+                subcommand.cPickle.HIGHEST_PROTOCOL).and_return('True')
+        subcommand.os.write.expect_call(20, 'True')
+        subcommand.os.close.expect_call(20)
+        join_hook.expect_call(cmd)
+        subcommand.os._exit.expect_call(0)
+
+        cmd.fork_start()
+        self.god.check_playback()
+
+
+    def test_fork_start_child_error(self):
+        self._setup_fork_start_child()
+        self.god.stub_function(subcommand.logging, 'exception')
+
+        func = self.god.create_mock_function('func')
+        cmd = _create_subcommand(func, (1, 2))
+        error = Exception('some error')
+
+        subcommand.os.pipe.expect_call().and_return((10, 20))
+        subcommand.os.fork.expect_call().and_return(0)
+        subcommand.os.close.expect_call(10)
+        func.expect_call(1, 2).and_raises(error)
+        subcommand.logging.exception.expect_call('function failed')
+        subcommand.cPickle.dumps.expect_call(error,
+                subcommand.cPickle.HIGHEST_PROTOCOL).and_return('error')
+        subcommand.os.write.expect_call(20, 'error')
+        subcommand.os.close.expect_call(20)
+        subcommand.os._exit.expect_call(1)
+
+        cmd.fork_start()
+        self.god.check_playback()
+
+
+    def _setup_poll(self):
+        cmd = self._setup_fork_start_parent()
+        self.god.stub_function(subcommand.os, 'waitpid')
+        return cmd
+
+
+    def test_poll_running(self):
+        cmd = self._setup_poll()
+
+        (subcommand.os.waitpid.expect_call(1000, subcommand.os.WNOHANG)
+                .and_raises(subcommand.os.error('waitpid')))
+        self.assertEquals(cmd.poll(), None)
+        self.god.check_playback()
+
+
+    def test_poll_finished_success(self):
+        cmd = self._setup_poll()
+
+        (subcommand.os.waitpid.expect_call(1000, subcommand.os.WNOHANG)
+                .and_return((1000, 0)))
+        self.assertEquals(cmd.poll(), 0)
+        self.god.check_playback()
+
+
+    def test_poll_finished_failure(self):
+        cmd = self._setup_poll()
+        self.god.stub_function(cmd, '_handle_exitstatus')
+
+        (subcommand.os.waitpid.expect_call(1000, subcommand.os.WNOHANG)
+                .and_return((1000, 10)))
+        cmd._handle_exitstatus.expect_call(10).and_raises(Exception('fail'))
+
+        self.assertRaises(Exception, cmd.poll)
+        self.god.check_playback()
+
+
+    def test_wait_success(self):
+        cmd = self._setup_poll()
+
+        (subcommand.os.waitpid.expect_call(1000, 0)
+                .and_return((1000, 0)))
+
+        self.assertEquals(cmd.wait(), 0)
+        self.god.check_playback()
+
+
+    def test_wait_failure(self):
+        cmd = self._setup_poll()
+        self.god.stub_function(cmd, '_handle_exitstatus')
+
+        (subcommand.os.waitpid.expect_call(1000, 0)
+                .and_return((1000, 10)))
+
+        cmd._handle_exitstatus.expect_call(10).and_raises(Exception('fail'))
+        self.assertRaises(Exception, cmd.wait)
+        self.god.check_playback()
+
+
+    def _setup_fork_waitfor(self):
+        cmd = self._setup_fork_start_parent()
+        self.god.stub_function(cmd, 'wait')
+        self.god.stub_function(cmd, 'poll')
+        self.god.stub_function(subcommand.time, 'time')
+        self.god.stub_function(subcommand.time, 'sleep')
+        self.god.stub_function(subcommand.utils, 'nuke_pid')
+
+        return cmd
+
+
+    def test_fork_waitfor_no_timeout(self):
+        cmd = self._setup_fork_waitfor()
+
+        cmd.wait.expect_call().and_return(0)
+
+        self.assertEquals(cmd.fork_waitfor(), 0)
+        self.god.check_playback()
+
+
+    def test_fork_waitfor_success(self):
+        cmd = self._setup_fork_waitfor()
+        self.god.stub_function(cmd, 'wait')
+        timeout = 10
+
+        subcommand.time.time.expect_call().and_return(1)
+        for i in xrange(timeout):
+            subcommand.time.time.expect_call().and_return(i + 1)
+            cmd.poll.expect_call().and_return(None)
+            subcommand.time.sleep.expect_call(1)
+        subcommand.time.time.expect_call().and_return(i + 2)
+        cmd.poll.expect_call().and_return(0)
+
+        self.assertEquals(cmd.fork_waitfor(timeout=timeout), 0)
+        self.god.check_playback()
+
+
+    def test_fork_waitfor_failure(self):
+        cmd = self._setup_fork_waitfor()
+        self.god.stub_function(cmd, 'wait')
+        timeout = 10
+
+        subcommand.time.time.expect_call().and_return(1)
+        for i in xrange(timeout):
+            subcommand.time.time.expect_call().and_return(i + 1)
+            cmd.poll.expect_call().and_return(None)
+            subcommand.time.sleep.expect_call(1)
+        subcommand.time.time.expect_call().and_return(i + 3)
+        subcommand.utils.nuke_pid.expect_call(cmd.pid)
+
+        self.assertEquals(cmd.fork_waitfor(timeout=timeout), None)
+        self.god.check_playback()
+
+
+class parallel_test(unittest.TestCase):
+    def setUp(self):
+        self.god = mock.mock_god()
+        self.god.stub_function(subcommand.cPickle, 'load')
+
+
+    def tearDown(self):
+        self.god.unstub_all()
+
+
+    def _get_cmd(self, func, args):
+        cmd = _create_subcommand(func, args)
+        cmd.result_pickle = self.god.create_mock_class(file, 'file')
+        return self.god.create_mock_class(cmd, 'subcommand')
+
+
+    def _get_tasklist(self):
+        return [self._get_cmd(lambda x: x * 2, (3,)),
+                self._get_cmd(lambda: None, [])]
+
+
+    def _setup_common(self):
+        tasklist = self._get_tasklist()
+
+        for task in tasklist:
+            task.fork_start.expect_call()
+
+        return tasklist
+
+
+    def test_success(self):
+        tasklist = self._setup_common()
+
+        for task in tasklist:
+            task.fork_waitfor.expect_call(timeout=None).and_return(0)
+            (subcommand.cPickle.load.expect_call(task.result_pickle)
+                    .and_return(6))
+            task.result_pickle.close.expect_call()
+
+        subcommand.parallel(tasklist)
+        self.god.check_playback()
+
+
+    def test_failure(self):
+        tasklist = self._setup_common()
+
+        for task in tasklist:
+            task.fork_waitfor.expect_call(timeout=None).and_return(1)
+            (subcommand.cPickle.load.expect_call(task.result_pickle)
+                    .and_return(6))
+            task.result_pickle.close.expect_call()
+
+        self.assertRaises(subcommand.error.AutoservError, subcommand.parallel,
+                          tasklist)
+        self.god.check_playback()
+
+
+    def test_timeout(self):
+        self.god.stub_function(subcommand.time, 'time')
+
+        tasklist = self._setup_common()
+        timeout = 10
+
+        subcommand.time.time.expect_call().and_return(1)
+
+        for task in tasklist:
+            subcommand.time.time.expect_call().and_return(1)
+            task.fork_waitfor.expect_call(timeout=timeout).and_return(None)
+            (subcommand.cPickle.load.expect_call(task.result_pickle)
+                    .and_return(6))
+            task.result_pickle.close.expect_call()
+
+        self.assertRaises(subcommand.error.AutoservError, subcommand.parallel,
+                          tasklist, timeout=timeout)
+        self.god.check_playback()
+
+
+    def test_return_results(self):
+        tasklist = self._setup_common()
+
+        tasklist[0].fork_waitfor.expect_call(timeout=None).and_return(0)
+        (subcommand.cPickle.load.expect_call(tasklist[0].result_pickle)
+                .and_return(6))
+        tasklist[0].result_pickle.close.expect_call()
+
+        error = Exception('fail')
+        tasklist[1].fork_waitfor.expect_call(timeout=None).and_return(1)
+        (subcommand.cPickle.load.expect_call(tasklist[1].result_pickle)
+                .and_return(error))
+        tasklist[1].result_pickle.close.expect_call()
+
+        self.assertEquals(subcommand.parallel(tasklist, return_results=True),
+                          [6, error])
+        self.god.check_playback()
+
+
+class test_parallel_simple(unittest.TestCase):
+    def setUp(self):
+        self.god = mock.mock_god()
+        self.god.stub_function(subcommand, 'parallel')
+        ctor = self.god.create_mock_function('subcommand')
+        self.god.stub_with(subcommand, 'subcommand', ctor)
+
+
+    def tearDown(self):
+        self.god.unstub_all()
+
+
+    def test_simple_success(self):
+        func = self.god.create_mock_function('func')
+
+        func.expect_call(3)
+
+        subcommand.parallel_simple(func, (3,))
+        self.god.check_playback()
+
+
+    def test_simple_failure(self):
+        func = self.god.create_mock_function('func')
+
+        error = Exception('fail')
+        func.expect_call(3).and_raises(error)
+
+        self.assertRaises(Exception, subcommand.parallel_simple, func, (3,))
+        self.god.check_playback()
+
+
+    def test_simple_return_value(self):
+        func = self.god.create_mock_function('func')
+
+        result = 1000
+        func.expect_call(3).and_return(result)
+
+        self.assertEquals(subcommand.parallel_simple(func, (3,),
+                                                     return_results=True),
+                          [result])
+        self.god.check_playback()
+
+
+    def _setup_many(self, count, log):
+        func = self.god.create_mock_function('func')
+
+        args = []
+        cmds = []
+        for i in xrange(count):
+            arg = i + 1
+            args.append(arg)
+
+            if log:
+                subdir = str(arg)
+            else:
+                subdir = None
+
+            cmd = object()
+            cmds.append(cmd)
+
+            (subcommand.subcommand.expect_call(func, [arg], subdir)
+                    .and_return(cmd))
+
+        subcommand.parallel.expect_call(cmds, None, return_results=False)
+        return func, args
+
+
+    def test_passthrough(self):
+        func, args = self._setup_many(4, True)
+
+        subcommand.parallel_simple(func, args)
+        self.god.check_playback()
+
+
+    def test_nolog(self):
+        func, args = self._setup_many(3, False)
+
+        subcommand.parallel_simple(func, args, log=False)
+        self.god.check_playback()
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/scheduler/drone_utility.py b/scheduler/drone_utility.py
index c84a033..6de5618 100755
--- a/scheduler/drone_utility.py
+++ b/scheduler/drone_utility.py
@@ -4,7 +4,7 @@ import pickle, subprocess, os, shutil, socket, sys, time, signal, getpass
 import datetime, traceback, tempfile, itertools, logging
 import common
 from autotest_lib.client.common_lib import utils, global_config, error
-from autotest_lib.server import hosts, subcommand
+from autotest_lib.client.common_lib import hosts, subcommand
 from autotest_lib.scheduler import email_manager, scheduler_config
 
 # An environment variable we add to the environment to enable us to
diff --git a/scheduler/drones_unittest.py b/scheduler/drones_unittest.py
index 0713d52..8181690 100755
--- a/scheduler/drones_unittest.py
+++ b/scheduler/drones_unittest.py
@@ -8,7 +8,7 @@ import common
 from autotest_lib.client.common_lib import utils
 from autotest_lib.client.common_lib.test_utils import mock, unittest
 from autotest_lib.scheduler import drones
-from autotest_lib.server.hosts import ssh_host
+from autotest_lib.client.common_lib.hosts import ssh_host
 
 
 class RemoteDroneTest(unittest.TestCase):
diff --git a/server/autotest_unittest.py b/server/autotest_unittest.py
index a865780..78d0dec 100755
--- a/server/autotest_unittest.py
+++ b/server/autotest_unittest.py
@@ -5,10 +5,10 @@ __author__ = "raphtee@google.com (Travis Miller)"
 import unittest, os, tempfile, logging
 
 import common
-from autotest_lib.server import utils, hosts, server_job, profilers
+from autotest_lib.server import utils, server_job, profilers
 from autotest_lib.client.bin import sysinfo
 from autotest_lib.client.common_lib import utils as client_utils, packages
-from autotest_lib.client.common_lib import error, autotest
+from autotest_lib.client.common_lib import error, autotest, hosts
 from autotest_lib.client.common_lib.test_utils import mock
 
 
diff --git a/server/base_utils.py b/server/base_utils.py
index 4a16b29..972d200 100644
--- a/server/base_utils.py
+++ b/server/base_utils.py
@@ -10,37 +10,7 @@ import that instead
 
 import atexit, os, re, shutil, textwrap, sys
 
-from autotest_lib.client.common_lib import barrier, utils
-from autotest_lib.server import subcommand
-
-
-def scp_remote_escape(filename):
-    """
-    Escape special characters from a filename so that it can be passed
-    to scp (within double quotes) as a remote file.
-
-    Bis-quoting has to be used with scp for remote files, "bis-quoting"
-    as in quoting x 2
-    scp does not support a newline in the filename
-
-    Args:
-            filename: the filename string to escape.
-
-    Returns:
-            The escaped filename string. The required enclosing double
-            quotes are NOT added and so should be added at some point by
-            the caller.
-    """
-    escape_chars = r' !"$&' "'" r'()*,:;<=>?[\]^`{|}'
-
-    new_name = []
-    for char in filename:
-        if char in escape_chars:
-            new_name.append("\\%s" % (char,))
-        else:
-            new_name.append(char)
-
-    return utils.sh_escape("".join(new_name))
+from autotest_lib.client.common_lib import barrier, utils, subcommand
 
 
 atexit.register(utils.clean_tmp_dirs)
@@ -151,69 +121,6 @@ def form_ntuples_from_machines(machines, n=2, mapping_func=default_mappings):
     return (ntuples, failures)
 
 
-def parse_machine(machine, user='root', password='', port=22):
-    """
-    Parse the machine string user:pass@host:port and return it separately,
-    if the machine string is not complete, use the default parameters
-    when appropriate.
-    """
-
-    if '@' in machine:
-        user, machine = machine.split('@', 1)
-
-    if ':' in user:
-        user, password = user.split(':', 1)
-
-    if ':' in machine:
-        machine, port = machine.split(':', 1)
-        port = int(port)
-
-    if not machine or not user:
-        raise ValueError
-
-    return machine, user, password, port
-
-
-def get_public_key():
-    """
-    Return a valid string ssh public key for the user executing autoserv or
-    autotest. If there's no DSA or RSA public key, create a DSA keypair with
-    ssh-keygen and return it.
-    """
-
-    ssh_conf_path = os.path.expanduser('~/.ssh')
-
-    dsa_public_key_path = os.path.join(ssh_conf_path, 'id_dsa.pub')
-    dsa_private_key_path = os.path.join(ssh_conf_path, 'id_dsa')
-
-    rsa_public_key_path = os.path.join(ssh_conf_path, 'id_rsa.pub')
-    rsa_private_key_path = os.path.join(ssh_conf_path, 'id_rsa')
-
-    has_dsa_keypair = os.path.isfile(dsa_public_key_path) and \
-        os.path.isfile(dsa_private_key_path)
-    has_rsa_keypair = os.path.isfile(rsa_public_key_path) and \
-        os.path.isfile(rsa_private_key_path)
-
-    if has_dsa_keypair:
-        print 'DSA keypair found, using it'
-        public_key_path = dsa_public_key_path
-
-    elif has_rsa_keypair:
-        print 'RSA keypair found, using it'
-        public_key_path = rsa_public_key_path
-
-    else:
-        print 'Neither RSA nor DSA keypair found, creating DSA ssh key pair'
-        utils.system('ssh-keygen -t dsa -q -N "" -f %s' % dsa_private_key_path)
-        public_key_path = dsa_public_key_path
-
-    public_key = open(public_key_path, 'r')
-    public_key_str = public_key.read()
-    public_key.close()
-
-    return public_key_str
-
-
 def get_sync_control_file(control, host_name, host_num,
                           instance, num_jobs, port_base=63100):
     """
diff --git a/server/deb_kernel_unittest.py b/server/deb_kernel_unittest.py
index 2768b7d..1506de7 100755
--- a/server/deb_kernel_unittest.py
+++ b/server/deb_kernel_unittest.py
@@ -3,9 +3,9 @@
 import unittest, os
 import common
 from autotest_lib.client.common_lib.test_utils import mock
-from autotest_lib.client.common_lib import utils as common_utils
-from autotest_lib.server import deb_kernel, utils, hosts
-from autotest_lib.server.hosts import bootloader
+from autotest_lib.client.common_lib import utils as common_utils, hosts
+from autotest_lib.server import deb_kernel, utils
+from autotest_lib.client.common_lib.hosts import bootloader
 
 
 class TestDebKernel(unittest.TestCase):
diff --git a/server/hosts/__init__.py b/server/hosts/__init__.py
deleted file mode 100644
index 2d90332..0000000
--- a/server/hosts/__init__.py
+++ /dev/null
@@ -1,32 +0,0 @@
-#
-# Copyright 2007 Google Inc. Released under the GPL v2
-
-"""This is a convenience module to import all available types of hosts.
-
-Implementation details:
-You should 'import hosts' instead of importing every available host module.
-"""
-
-
-# host abstract classes
-from base_classes import Host
-from remote import RemoteHost
-try:
-    from site_host import SiteHost
-except ImportError, e:
-    pass
-
-# host implementation classes
-from ssh_host import SSHHost
-from guest import Guest
-from kvm_guest import KVMGuest
-
-# extra logger classes
-from serial import SerialHost
-from netconsole import NetconsoleHost
-
-# bootloader classes
-from bootloader import Bootloader
-
-# factory function
-from factory import create_host
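
Callers keep the same convenience import after the move; only the package
path changes, as the scheduler diffs above already show ('web1' below is a
made-up hostname):

    # old:  from autotest_lib.server import hosts
    from autotest_lib.client.common_lib import hosts

    host = hosts.create_host('web1')    # call sites stay untouched
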
diff --git a/server/hosts/abstract_ssh.py b/server/hosts/abstract_ssh.py
deleted file mode 100644
index 0f61391..0000000
--- a/server/hosts/abstract_ssh.py
+++ /dev/null
@@ -1,608 +0,0 @@
-import os, time, types, socket, shutil, glob, logging, traceback
-from autotest_lib.client.common_lib import autotemp, error, logging_manager
-from autotest_lib.server import utils, autotest
-from autotest_lib.server.hosts import remote
-from autotest_lib.client.common_lib.global_config import global_config
-
-
-get_value = global_config.get_config_value
-enable_master_ssh = get_value('AUTOSERV', 'enable_master_ssh', type=bool,
-                              default=False)
-
-
-def _make_ssh_cmd_default(user="root", port=22, opts='', hosts_file='/dev/null',
-                          connect_timeout=30, alive_interval=300):
-    base_command = ("/usr/bin/ssh -a -x %s -o StrictHostKeyChecking=no "
-                    "-o UserKnownHostsFile=%s -o BatchMode=yes "
-                    "-o ConnectTimeout=%d -o ServerAliveInterval=%d "
-                    "-l %s -p %d")
-    assert isinstance(connect_timeout, (int, long))
-    assert connect_timeout > 0 # can't disable the timeout
-    return base_command % (opts, hosts_file, connect_timeout,
-                           alive_interval, user, port)
-
-
-make_ssh_command = utils.import_site_function(
-    __file__, "autotest_lib.server.hosts.site_host", "make_ssh_command",
-    _make_ssh_cmd_default)
-
-
-# import site specific Host class
-SiteHost = utils.import_site_class(
-    __file__, "autotest_lib.server.hosts.site_host", "SiteHost",
-    remote.RemoteHost)
-
-
-class AbstractSSHHost(SiteHost):
-    """
-    This class represents a generic implementation of most of the
-    framework necessary for controlling a host via ssh. It implements
-    almost all of the abstract Host methods, except for the core
-    Host.run method.
-    """
-
-    def _initialize(self, hostname, user="root", port=22, password="",
-                    *args, **dargs):
-        super(AbstractSSHHost, self)._initialize(hostname=hostname,
-                                                 *args, **dargs)
-        self.ip = socket.getaddrinfo(self.hostname, None)[0][4][0]
-        self.user = user
-        self.port = port
-        self.password = password
-        self._use_rsync = None
-        self.known_hosts_file = os.tmpfile()
-        known_hosts_fd = self.known_hosts_file.fileno()
-        self.known_hosts_fd = '/dev/fd/%s' % known_hosts_fd
-
-        """
-        Master SSH connection background job, socket temp directory and socket
-        control path option. If master-SSH is enabled, these fields will be
-        initialized by start_master_ssh when a new SSH connection is initiated.
-        """
-        self.master_ssh_job = None
-        self.master_ssh_tempdir = None
-        self.master_ssh_option = ''
-
-
-    def use_rsync(self):
-        if self._use_rsync is not None:
-            return self._use_rsync
-
-        # Check if rsync is available on the remote host. If it's not,
-        # don't try to use it for any future file transfers.
-        self._use_rsync = self._check_rsync()
-        if not self._use_rsync:
-            logging.warn("rsync not available on remote host %s -- disabled",
-                         self.hostname)
-        return self._use_rsync
-
-
-    def _check_rsync(self):
-        """
-        Check if rsync is available on the remote host.
-        """
-        try:
-            self.run("rsync --version", stdout_tee=None, stderr_tee=None)
-        except error.AutoservRunError:
-            return False
-        return True
-
-
-    def _encode_remote_paths(self, paths, escape=True):
-        """
-        Given a list of file paths, encodes it as a single remote path, in
-        the style used by rsync and scp.
-        """
-        if escape:
-            paths = [utils.scp_remote_escape(path) for path in paths]
-        return '%s@%s:"%s"' % (self.user, self.hostname, " ".join(paths))
-
-
-    def _make_rsync_cmd(self, sources, dest, delete_dest, preserve_symlinks):
-        """
-        Given a list of source paths and a destination path, produces the
-        appropriate rsync command for copying them. Remote paths must be
-        pre-encoded.
-        """
-        ssh_cmd = make_ssh_command(user=self.user, port=self.port,
-                                   opts=self.master_ssh_option,
-                                   hosts_file=self.known_hosts_fd)
-        if delete_dest:
-            delete_flag = "--delete"
-        else:
-            delete_flag = ""
-        if preserve_symlinks:
-            symlink_flag = ""
-        else:
-            symlink_flag = "-L"
-        command = "rsync %s %s --timeout=1800 --rsh='%s' -az %s %s"
-        return command % (symlink_flag, delete_flag, ssh_cmd,
-                          " ".join(sources), dest)
-
-
-    def _make_ssh_cmd(self, cmd):
-        """
-        Create a base ssh command string for the host which can be used
-        to run commands directly on the machine
-        """
-        base_cmd = make_ssh_command(user=self.user, port=self.port,
-                                    opts=self.master_ssh_option,
-                                    hosts_file=self.known_hosts_fd)
-
-        return '%s %s "%s"' % (base_cmd, self.hostname, utils.sh_escape(cmd))
-
-    def _make_scp_cmd(self, sources, dest):
-        """
-        Given a list of source paths and a destination path, produces the
-        appropriate scp command for copying them. Remote paths must be
-        pre-encoded.
-        """
-        command = ("scp -rq %s -o StrictHostKeyChecking=no "
-                   "-o UserKnownHostsFile=%s -P %d %s '%s'")
-        return command % (self.master_ssh_option, self.known_hosts_fd,
-                          self.port, " ".join(sources), dest)
-
-
-    def _make_rsync_compatible_globs(self, path, is_local):
-        """
-        Given an rsync-style path, returns a list of globbed paths
-        that will hopefully provide equivalent behaviour for scp. Does not
-        support the full range of rsync pattern matching behaviour, only that
-        exposed in the get/send_file interface (trailing slashes).
-
-        The is_local param is a flag indicating whether the paths should be
-        interpreted as local or remote paths.
-        """
-
-        # non-trailing slash paths should just work
-        if len(path) == 0 or path[-1] != "/":
-            return [path]
-
-        # make a function to test if a pattern matches any files
-        if is_local:
-            def glob_matches_files(path, pattern):
-                return len(glob.glob(path + pattern)) > 0
-        else:
-            def glob_matches_files(path, pattern):
-                result = self.run("ls \"%s\"%s" % (utils.sh_escape(path),
-                                                   pattern),
-                                  stdout_tee=None, ignore_status=True)
-                return result.exit_status == 0
-
-        # take a set of globs that cover all files, and see which are needed
-        patterns = ["*", ".[!.]*"]
-        patterns = [p for p in patterns if glob_matches_files(path, p)]
-
-        # convert them into a set of paths suitable for the commandline
-        if is_local:
-            return ["\"%s\"%s" % (utils.sh_escape(path), pattern)
-                    for pattern in patterns]
-        else:
-            return [utils.scp_remote_escape(path) + pattern
-                    for pattern in patterns]
-
-
-    def _make_rsync_compatible_source(self, source, is_local):
-        """
-        Applies the same logic as _make_rsync_compatible_globs, but
-        applies it to an entire list of sources, producing a new list of
-        sources, properly quoted.
-        """
-        return sum((self._make_rsync_compatible_globs(path, is_local)
-                    for path in source), [])
-
-
-    def _set_umask_perms(self, dest):
-        """
-        Given a destination file/dir, recursively set the permissions on
-        all the files and directories to the max allowed by the current umask.
-        """
-
-        # now this looks strange, but I haven't found a way in Python to _just_
-        # get the umask; apparently the only option is to try to set it
-        umask = os.umask(0)
-        os.umask(umask)
-
-        max_privs = 0777 & ~umask
-
-        def set_file_privs(filename):
-            file_stat = os.stat(filename)
-
-            file_privs = max_privs
-            # if the original file permissions do not have at least one
-            # executable bit then do not set it anywhere
-            if not file_stat.st_mode & 0111:
-                file_privs &= ~0111
-
-            os.chmod(filename, file_privs)
-
-        # try a bottom-up walk so changes on directory permissions won't cut
-        # our access to the files/directories inside it
-        for root, dirs, files in os.walk(dest, topdown=False):
-            # when setting the privileges we emulate the chmod "X" behaviour
-            # that sets to execute only if it is a directory or any of the
-            # owner/group/other already has execute right
-            for dirname in dirs:
-                os.chmod(os.path.join(root, dirname), max_privs)
-
-            for filename in files:
-                set_file_privs(os.path.join(root, filename))
-
-
-        # now set privs for the dest itself
-        if os.path.isdir(dest):
-            os.chmod(dest, max_privs)
-        else:
-            set_file_privs(dest)
-
-
-    def get_file(self, source, dest, delete_dest=False, preserve_perm=True,
-                 preserve_symlinks=False):
-        """
-        Copy files from the remote host to a local path.
-
-        Directories will be copied recursively.
-        If a source component is a directory with a trailing slash,
-        the content of the directory will be copied, otherwise, the
-        directory itself and its content will be copied. This
-        behavior is similar to that of the program 'rsync'.
-
-        Args:
-                source: either
-                        1) a single file or directory, as a string
-                        2) a list of one or more (possibly mixed)
-                                files or directories
-                dest: a file or a directory (if source contains a
-                        directory or more than one element, you must
-                        supply a directory dest)
-                delete_dest: if this is true, the command will also clear
-                             out any old files at dest that are not in the
-                             source
-                preserve_perm: tells get_file() to try to preserve the sources
-                               permissions on files and dirs
-                preserve_symlinks: try to preserve symlinks instead of
-                                   transforming them into files/dirs on copy
-
-        Raises:
-                AutoservRunError: the scp command failed
-        """
-
-        # Start a master SSH connection if necessary.
-        self.start_master_ssh()
-
-        if isinstance(source, basestring):
-            source = [source]
-        dest = os.path.abspath(dest)
-
-        # If rsync is disabled or fails, try scp.
-        try_scp = True
-        if self.use_rsync():
-            try:
-                remote_source = self._encode_remote_paths(source)
-                local_dest = utils.sh_escape(dest)
-                rsync = self._make_rsync_cmd([remote_source], local_dest,
-                                             delete_dest, preserve_symlinks)
-                utils.run(rsync)
-                try_scp = False
-            except error.CmdError, e:
-                logging.warn("trying scp, rsync failed: %s" % e)
-
-        if try_scp:
-            # scp has no equivalent to --delete, just drop the entire dest dir
-            if delete_dest and os.path.isdir(dest):
-                shutil.rmtree(dest)
-                os.mkdir(dest)
-
-            remote_source = self._make_rsync_compatible_source(source, False)
-            if remote_source:
-                # _make_rsync_compatible_source() already did the escaping
-                remote_source = self._encode_remote_paths(remote_source,
-                                                          escape=False)
-                local_dest = utils.sh_escape(dest)
-                scp = self._make_scp_cmd([remote_source], local_dest)
-                try:
-                    utils.run(scp)
-                except error.CmdError, e:
-                    raise error.AutoservRunError(e.args[0], e.args[1])
-
-        if not preserve_perm:
-            # we have no way to tell scp to not try to preserve the
-            # permissions so set them after copy instead.
-            # for rsync we could use "--no-p --chmod=ugo=rwX" but those
-            # options are only in very recent rsync versions
-            self._set_umask_perms(dest)
-
-
-    def send_file(self, source, dest, delete_dest=False,
-                  preserve_symlinks=False):
-        """
-        Copy files from a local path to the remote host.
-
-        Directories will be copied recursively.
-        If a source component is a directory with a trailing slash,
-        the content of the directory will be copied, otherwise, the
-        directory itself and its content will be copied. This
-        behavior is similar to that of the program 'rsync'.
-
-        Args:
-                source: either
-                        1) a single file or directory, as a string
-                        2) a list of one or more (possibly mixed)
-                                files or directories
-                dest: a file or a directory (if source contains a
-                        directory or more than one element, you must
-                        supply a directory dest)
-                delete_dest: if this is true, the command will also clear
-                             out any old files at dest that are not in the
-                             source
-                preserve_symlinks: controls if symlinks on the source will be
-                    copied as such on the destination or transformed into the
-                    referenced file/directory
-
-        Raises:
-                AutoservRunError: the scp command failed
-        """
-
-        # Start a master SSH connection if necessary.
-        self.start_master_ssh()
-
-        if isinstance(source, basestring):
-            source = [source]
-        remote_dest = self._encode_remote_paths([dest])
-
-        # If rsync is disabled or fails, try scp.
-        try_scp = True
-        if self.use_rsync():
-            try:
-                local_sources = [utils.sh_escape(path) for path in source]
-                rsync = self._make_rsync_cmd(local_sources, remote_dest,
-                                             delete_dest, preserve_symlinks)
-                utils.run(rsync)
-                try_scp = False
-            except error.CmdError, e:
-                logging.warn("trying scp, rsync failed: %s" % e)
-
-        if try_scp:
-            # scp has no equivalent to --delete, just drop the entire dest dir
-            if delete_dest:
-                is_dir = self.run("ls -d %s/" % dest,
-                                  ignore_status=True).exit_status == 0
-                if is_dir:
-                    cmd = "rm -rf %s && mkdir %s"
-                    cmd %= (dest, dest)
-                    self.run(cmd)
-
-            local_sources = self._make_rsync_compatible_source(source, True)
-            if local_sources:
-                scp = self._make_scp_cmd(local_sources, remote_dest)
-                try:
-                    utils.run(scp)
-                except error.CmdError, e:
-                    raise error.AutoservRunError(e.args[0], e.args[1])
-
-
-    def ssh_ping(self, timeout=60):
-        try:
-            self.run("true", timeout=timeout, connect_timeout=timeout)
-        except error.AutoservSSHTimeout:
-            msg = "Host (ssh) verify timed out (timeout = %d)" % timeout
-            raise error.AutoservSSHTimeout(msg)
-        except error.AutoservSshPermissionDeniedError:
-            #let AutoservSshPermissionDeniedError be visible to the callers
-            raise
-        except error.AutoservRunError, e:
-            # convert the generic AutoservRunError into something more
-            # specific for this context
-            raise error.AutoservSshPingHostError(e.description + '\n' +
-                                                 repr(e.result_obj))
-
-
-    def is_up(self):
-        """
-        Check if the remote host is up.
-
-        @returns True if the remote host is up, False otherwise
-        """
-        try:
-            self.ssh_ping()
-        except error.AutoservError:
-            return False
-        else:
-            return True
-
-
-    def wait_up(self, timeout=None):
-        """
-        Wait until the remote host is up or the timeout expires.
-
-        In fact, it will wait until an ssh connection to the remote
-        host can be established and getty is running.
-
-        @param timeout time limit in seconds before returning even
-            if the host is not up.
-
-        @returns True if the host was found to be up, False otherwise
-        """
-        if timeout:
-            end_time = time.time() + timeout
-
-        while not timeout or time.time() < end_time:
-            if self.is_up():
-                try:
-                    if self.are_wait_up_processes_up():
-                        logging.debug('Host %s is now up', self.hostname)
-                        return True
-                except error.AutoservError:
-                    pass
-            time.sleep(1)
-
-        logging.debug('Host %s is still down after waiting %d seconds',
-                      self.hostname, int(timeout + time.time() - end_time))
-        return False
-
-
-    def wait_down(self, timeout=None, warning_timer=None, old_boot_id=None):
-        """
-        Wait until the remote host is down or the timeout expires.
-
-        If old_boot_id is provided, this will wait until either the machine
-        is unpingable or self.get_boot_id() returns a value different from
-        old_boot_id. If the boot_id value has changed then the function
-        returns true under the assumption that the machine has shut down
-        and has now already come back up.
-
-        If old_boot_id is None, the machine is assumed not to have shut
-        down until it becomes unreachable.
-
-        @param timeout Time limit in seconds before returning even
-            if the host is still up.
-        @param warning_timer Time limit in seconds that will generate
-            a warning if the host is not down yet.
-        @param old_boot_id A string containing the result of self.get_boot_id()
-            prior to the host being told to shut down. Can be None if this is
-            not available.
-
-        @returns True if the host was found to be down, False otherwise
-        """
-        #TODO: there is currently no way to distinguish between knowing
-        #TODO: boot_id was unsupported and not knowing the boot_id.
-        current_time = time.time()
-        if timeout:
-            end_time = current_time + timeout
-
-        if warning_timer:
-            warn_time = current_time + warning_timer
-
-        if old_boot_id is not None:
-            logging.debug('Host %s pre-shutdown boot_id is %s',
-                          self.hostname, old_boot_id)
-
-        while not timeout or current_time < end_time:
-            try:
-                new_boot_id = self.get_boot_id()
-            except error.AutoservError:
-                logging.debug('Host %s is now unreachable over ssh, is down',
-                              self.hostname)
-                return True
-            else:
-                # if the machine is up but the boot_id value has changed from
-                # old boot id, then we can assume the machine has gone down
-                # and then already come back up
-                if old_boot_id is not None and old_boot_id != new_boot_id:
-                    logging.debug('Host %s now has boot_id %s and so must '
-                                  'have rebooted', self.hostname, new_boot_id)
-                    return True
-
-            if warning_timer and current_time > warn_time:
-                self.record("WARN", None, "shutdown",
-                            "Shutdown took longer than %ds" % warning_timer)
-                # Print the warning only once.
-                warning_timer = None
-                # If a machine is stuck switching runlevels,
-                # this may cause the machine to reboot.
-                self.run('kill -HUP 1', ignore_status=True)
-
-            time.sleep(1)
-            current_time = time.time()
-
-        return False
-
-
-    # tunable constants for the verify & repair code
-    AUTOTEST_GB_DISKSPACE_REQUIRED = get_value("SERVER",
-                                               "gb_diskspace_required",
-                                               type=int,
-                                               default=20)
-
-
-    def verify_connectivity(self):
-        super(AbstractSSHHost, self).verify_connectivity()
-
-        logging.info('Pinging host ' + self.hostname)
-        self.ssh_ping()
-        logging.info("Host (ssh) %s is alive", self.hostname)
-
-        if self.is_shutting_down():
-            raise error.AutoservHostIsShuttingDownError("Host is shutting down")
-
-
-    def verify_software(self):
-        super(AbstractSSHHost, self).verify_software()
-        try:
-            self.check_diskspace(autotest.Autotest.get_install_dir(self),
-                                 self.AUTOTEST_GB_DISKSPACE_REQUIRED)
-        except error.AutoservHostError:
-            raise           # only want to raise if it's a space issue
-        except autotest.AutodirNotFoundError:
-            # autotest dir may not exist, etc. ignore
-            logging.debug('autodir space check exception, this is probably '
-                          'safe to ignore\n' + traceback.format_exc())
-
-
-    def close(self):
-        super(AbstractSSHHost, self).close()
-        self._cleanup_master_ssh()
-        self.known_hosts_file.close()
-
-
-    def _cleanup_master_ssh(self):
-        """
-        Release all resources (process, temporary directory) used by an active
-        master SSH connection.
-        """
-        # If a master SSH connection is running, kill it.
-        if self.master_ssh_job is not None:
-            utils.nuke_subprocess(self.master_ssh_job.sp)
-            self.master_ssh_job = None
-
-        # Remove the temporary directory for the master SSH socket.
-        if self.master_ssh_tempdir is not None:
-            self.master_ssh_tempdir.clean()
-            self.master_ssh_tempdir = None
-            self.master_ssh_option = ''
-
-
-    def start_master_ssh(self):
-        """
-        Called whenever a slave SSH connection needs to be initiated (e.g., by
-        run, rsync, scp). If master SSH support is enabled and a master SSH
-        connection is not active already, start a new one in the background.
-        Also, cleanup any zombie master SSH connections (e.g., dead due to
-        reboot).
-        """
-        if not enable_master_ssh:
-            return
-
-        # If a previously started master SSH connection is not running
-        # anymore, it needs to be cleaned up and then restarted.
-        if self.master_ssh_job is not None:
-            if self.master_ssh_job.sp.poll() is not None:
-                logging.info("Master ssh connection to %s is down.",
-                             self.hostname)
-                self._cleanup_master_ssh()
-
-        # Start a new master SSH connection.
-        if self.master_ssh_job is None:
-            # Create a shared socket in a temp location.
-            self.master_ssh_tempdir = autotemp.tempdir(unique_id='ssh-master')
-            self.master_ssh_option = ("-o ControlPath=%s/socket" %
-                                      self.master_ssh_tempdir.name)
-
-            # Start the master SSH connection in the background.
-            master_cmd = self.ssh_command(options="-N -o ControlMaster=yes")
-            logging.info("Starting master ssh connection '%s'" % master_cmd)
-            self.master_ssh_job = utils.BgJob(master_cmd)
-
-
-    def clear_known_hosts(self):
-        """Clears out the temporary ssh known_hosts file.
-
-        This is useful if the test SSHes to the machine, then reinstalls it,
-        then SSHes to it again.  It can be called after the reinstall to
-        reduce the spam in the logs.
-        """
-        logging.info("Clearing known hosts for host '%s', file '%s'.",
-                     self.hostname, self.known_hosts_fd)
-        # Clear out the file by opening it for writing and then closing.
-        fh = open(self.known_hosts_fd, "w")
-        fh.close()
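
The transfer strategy of get_file()/send_file() above reduces to "try
rsync first, fall back to scp on failure". A condensed, self-contained
sketch of that control flow (the command strings are placeholders, not
the real commands built by _make_rsync_cmd/_make_scp_cmd):

    import subprocess

    def transfer(rsync_cmd, scp_cmd):
        # Prefer rsync: it preserves permissions and supports --delete.
        if subprocess.call(rsync_cmd, shell=True) == 0:
            return
        # rsync unavailable or failed: retry the copy with plain scp.
        if subprocess.call(scp_cmd, shell=True) != 0:
            raise RuntimeError('both rsync and scp failed')
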
diff --git a/server/hosts/base_classes.py b/server/hosts/base_classes.py
deleted file mode 100644
index 0759d5f..0000000
--- a/server/hosts/base_classes.py
+++ /dev/null
@@ -1,80 +0,0 @@
-#
-# Copyright 2007 Google Inc. Released under the GPL v2
-
-"""
-This module defines the base classes for the server Host hierarchy.
-
-Implementation details:
-You should import the "hosts" package instead of importing each type of host.
-
-        Host: a machine on which you can run programs
-        RemoteHost: a remote machine on which you can run programs
-"""
-
-__author__ = """
-mbligh@google.com (Martin J. Bligh),
-poirier@google.com (Benjamin Poirier),
-stutsman@google.com (Ryan Stutsman)
-"""
-
-import os
-
-from autotest_lib.client.common_lib import hosts
-from autotest_lib.server import utils
-from autotest_lib.server.hosts import bootloader
-
-
-class Host(hosts.Host):
-    """
-    This class represents a machine on which you can run programs.
-
-    It may be a local machine, the one autoserv is running on, a remote
-    machine or a virtual machine.
-
-    Implementation details:
-    This is an abstract class, leaf subclasses must implement the methods
-    listed here. You must not instantiate this class but should
-    instantiate one of those leaf subclasses.
-
-    When overriding methods that raise NotImplementedError, the leaf class
-    is fully responsible for the implementation and should not chain calls
-    to super. When overriding methods that are a NOP in Host, the subclass
-    should chain calls to super(). The criteria for fitting a new method into
-    one category or the other should be:
-        1. If two separate generic implementations could reasonably be
-           concatenated, then the abstract implementation should pass and
-           subclasses should chain calls to super.
-        2. If only one class could reasonably perform the stated function
-           (e.g. two separate run() implementations cannot both be executed)
-           then the method should raise NotImplementedError in Host, and
-           the implementor should NOT chain calls to super, to ensure that
-           only one implementation ever gets executed.
-    """
-
-    bootloader = None
-
-
-    def __init__(self, *args, **dargs):
-        super(Host, self).__init__(*args, **dargs)
-
-        self.start_loggers()
-        if self.job:
-            self.job.hosts.add(self)
-
-
-    def _initialize(self, target_file_owner=None,
-                    *args, **dargs):
-        super(Host, self)._initialize(*args, **dargs)
-
-        self.serverdir = utils.get_server_dir()
-        self.monitordir = os.path.join(os.path.dirname(__file__), "monitors")
-        self.bootloader = bootloader.Bootloader(self)
-        self.env = {}
-        self.target_file_owner = target_file_owner
-
-
-    def close(self):
-        super(Host, self).close()
-
-        if self.job:
-            self.job.hosts.discard(self)
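
The two overriding rules from the Host docstring above, shown on a toy
class hierarchy (all names here are invented for the example):

    class Base(object):
        def close(self):
            pass    # a NOP here, so subclasses chain to super (rule 1)

        def run(self, cmd):
            # exactly one implementation may ever execute (rule 2)
            raise NotImplementedError('leaf classes must implement run()')

    class Leaf(Base):
        def close(self):
            # leaf-specific cleanup first, then chain up
            super(Leaf, self).close()

        def run(self, cmd):
            return 'ran: %s' % cmd    # deliberately no super() call
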
diff --git a/server/hosts/base_classes_unittest.py b/server/hosts/base_classes_unittest.py
deleted file mode 100755
index f76bb67..0000000
--- a/server/hosts/base_classes_unittest.py
+++ /dev/null
@@ -1,112 +0,0 @@
-#!/usr/bin/python
-
-import unittest
-import common
-
-from autotest_lib.client.common_lib import global_config
-from autotest_lib.client.common_lib.test_utils import mock
-from autotest_lib.server import utils
-from autotest_lib.server.hosts import base_classes, bootloader
-
-
-class test_host_class(unittest.TestCase):
-    def setUp(self):
-        self.god = mock.mock_god()
-        # stub out get_server_dir, global_config.get_config_value
-        self.god.stub_with(utils, "get_server_dir",
-                           lambda: "/unittest/server")
-        self.god.stub_function(global_config.global_config,
-                               "get_config_value")
-        # stub out the bootloader
-        self.real_bootloader = bootloader.Bootloader
-        bootloader.Bootloader = lambda arg: object()
-
-
-    def tearDown(self):
-        self.god.unstub_all()
-        bootloader.Bootloader = self.real_bootloader
-
-
-    def test_init(self):
-        self.god.stub_function(utils, "get_server_dir")
-        host = base_classes.Host.__new__(base_classes.Host)
-        bootloader.Bootloader = \
-                self.god.create_mock_class_obj(self.real_bootloader,
-                                               "Bootloader")
-        # overwrite this attribute as it's irrelevant for these tests
-        # and may cause problems with construction of the mock
-        bootloader.Bootloader.boottool_path = None
-        # set up the recording
-        utils.get_server_dir.expect_call().and_return("/unittest/server")
-        bootloader.Bootloader.expect_new(host)
-        # run the actual test
-        host.__init__()
-        self.god.check_playback()
-
-
-    def test_install(self):
-        host = base_classes.Host()
-        # create a dummy installable class
-        class installable(object):
-            def install(self, host):
-                pass
-        installableObj = self.god.create_mock_class(installable,
-                                                    "installableObj")
-        installableObj.install.expect_call(host)
-        # run the actual test
-        host.install(installableObj)
-        self.god.check_playback()
-
-
-    def test_get_wait_up_empty(self):
-        global_config.global_config.get_config_value.expect_call(
-            "HOSTS", "wait_up_processes", default="").and_return("")
-
-        host = base_classes.Host()
-        self.assertEquals(host.get_wait_up_processes(), set())
-        self.god.check_playback()
-
-
-    def test_get_wait_up_ignores_whitespace(self):
-        global_config.global_config.get_config_value.expect_call(
-            "HOSTS", "wait_up_processes", default="").and_return("  ")
-
-        host = base_classes.Host()
-        self.assertEquals(host.get_wait_up_processes(), set())
-        self.god.check_playback()
-
-
-    def test_get_wait_up_single_process(self):
-        global_config.global_config.get_config_value.expect_call(
-            "HOSTS", "wait_up_processes", default="").and_return("proc1")
-
-        host = base_classes.Host()
-        self.assertEquals(host.get_wait_up_processes(),
-                          set(["proc1"]))
-        self.god.check_playback()
-
-
-    def test_get_wait_up_multiple_process(self):
-        global_config.global_config.get_config_value.expect_call(
-            "HOSTS", "wait_up_processes", default="").and_return(
-            "proc1,proc2,proc3")
-
-        host = base_classes.Host()
-        self.assertEquals(host.get_wait_up_processes(),
-                          set(["proc1", "proc2", "proc3"]))
-        self.god.check_playback()
-
-
-    def test_get_wait_up_drops_duplicates(self):
-        global_config.global_config.get_config_value.expect_call(
-            "HOSTS", "wait_up_processes", default="").and_return(
-            "proc1,proc2,proc1")
-
-        host = base_classes.Host()
-        self.assertEquals(host.get_wait_up_processes(),
-                          set(["proc1", "proc2"]))
-        self.god.check_playback()
-
-
-if __name__ == "__main__":
-    unittest.main()
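
These tests all follow the record/playback idiom of the bundled mock
framework; in miniature (assuming the usual test_utils import):

    from autotest_lib.client.common_lib.test_utils import mock

    god = mock.mock_god()
    func = god.create_mock_function('func')
    func.expect_call(1).and_return('one')    # record the expectation
    assert func(1) == 'one'                  # playback: the actual call
    god.check_playback()                     # fail if calls did not match
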
diff --git a/server/hosts/bootloader.py b/server/hosts/bootloader.py
deleted file mode 100644
index df055c8..0000000
--- a/server/hosts/bootloader.py
+++ /dev/null
@@ -1,67 +0,0 @@
-#
-# Copyright 2007 Google Inc. Released under the GPL v2
-
-"""
-This module defines the Bootloader class.
-
-        Bootloader: a program to boot Kernels on a Host.
-"""
-
-import os, weakref
-from autotest_lib.client.common_lib import error, boottool
-from autotest_lib.server import utils
-
-BOOTTOOL_SRC = '../client/tools/boottool'  # Get it from autotest client
-
-
-class Bootloader(boottool.boottool):
-    """
-    This class gives access to a host's bootloader services.
-
-    It can be used to add a kernel to the list of kernels that can be
-    booted by a bootloader. It can also make sure that this kernel will
-    be the one chosen at next reboot.
-    """
-
-    def __init__(self, host):
-        super(Bootloader, self).__init__()
-        self._host = weakref.ref(host)
-        self._boottool_path = None
-
-
-    def set_default(self, index):
-        if self._host().job:
-            self._host().job.last_boot_tag = None
-        super(Bootloader, self).set_default(index)
-
-
-    def boot_once(self, title):
-        if self._host().job:
-            self._host().job.last_boot_tag = title
-
-        super(Bootloader, self).boot_once(title)
-
-
-    def _install_boottool(self):
-        if self._host() is None:
-            raise error.AutoservError(
-                "Host does not exist anymore")
-        tmpdir = self._host().get_tmp_dir()
-        self._host().send_file(os.path.abspath(os.path.join(
-                utils.get_server_dir(), BOOTTOOL_SRC)), tmpdir)
-        self._boottool_path= os.path.join(tmpdir,
-                os.path.basename(BOOTTOOL_SRC))
-
-
-    def _get_boottool_path(self):
-        if not self._boottool_path:
-            self._install_boottool()
-        return self._boottool_path
-
-
-    def _run_boottool(self, *options):
-        cmd = self._get_boottool_path()
-        # FIXME: add unsafe options strings sequence to host.run() parameters
-        for option in options:
-            cmd += ' "%s"' % utils.sh_escape(option)
-        return self._host().run(cmd).stdout
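
The _get_boottool_path()/_install_boottool() pair above is a lazy
install-and-cache pattern; stripped to its core (the path below is a
stand-in for the real remote tmpdir copy):

    class LazyTool(object):
        def __init__(self):
            self._path = None

        def _install(self):
            # stand-in for pushing boottool to a remote tmpdir
            self._path = '/tmp/boottool'

        def get_path(self):
            if not self._path:    # install at most once, on first use
                self._install()
            return self._path
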
diff --git a/server/hosts/bootloader_unittest.py b/server/hosts/bootloader_unittest.py
deleted file mode 100755
index 0315ce5..0000000
--- a/server/hosts/bootloader_unittest.py
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/usr/bin/python
-
-import unittest, os
-import common
-
-from autotest_lib.client.common_lib.test_utils import mock
-from autotest_lib.client.common_lib import error
-from autotest_lib.server import utils, hosts
-from autotest_lib.server.hosts import bootloader
-
-
-class test_bootloader(unittest.TestCase):
-    def setUp(self):
-        self.god = mock.mock_god()
-
-        # mock out get_server_dir
-        self.god.stub_function(utils, "get_server_dir")
-
-
-    def tearDown(self):
-        self.god.unstub_all()
-
-
-    def create_mock_host(self):
-        # useful for building disposable RemoteHost mocks
-        return self.god.create_mock_class(hosts.RemoteHost, "host")
-
-
-    def create_install_boottool_mock(self, loader, dst_dir):
-        mock_install_boottool = \
-                self.god.create_mock_function("_install_boottool")
-        def install_boottool():
-            loader._boottool_path = dst_dir
-            mock_install_boottool()
-        loader._install_boottool = install_boottool
-        return mock_install_boottool
-
-
-    def test_install_fails_without_host(self):
-        host = self.create_mock_host()
-        loader = bootloader.Bootloader(host)
-        del host
-        self.assertRaises(error.AutoservError, loader._install_boottool)
-
-
-    def test_installs_to_tmpdir(self):
-        TMPDIR = "/unittest/tmp"
-        SERVERDIR = "/unittest/server"
-        BOOTTOOL_SRC = os.path.join(SERVERDIR, bootloader.BOOTTOOL_SRC)
-        BOOTTOOL_SRC = os.path.abspath(BOOTTOOL_SRC)
-        BOOTTOOL_DST = os.path.join(TMPDIR, "boottool")
-        # set up the recording
-        host = self.create_mock_host()
-        host.get_tmp_dir.expect_call().and_return(TMPDIR)
-        utils.get_server_dir.expect_call().and_return(SERVERDIR)
-        host.send_file.expect_call(BOOTTOOL_SRC, TMPDIR)
-        # run the test
-        loader = bootloader.Bootloader(host)
-        loader._install_boottool()
-        # assert the playback is correct
-        self.god.check_playback()
-        # assert the final dest is correct
-        self.assertEquals(loader._boottool_path, BOOTTOOL_DST)
-
-
-    def test_get_path_automatically_installs(self):
-        BOOTTOOL_DST = "/unittest/tmp/boottool"
-        host = self.create_mock_host()
-        loader = bootloader.Bootloader(host)
-        # mock out loader.install_boottool
-        mock_install = \
-                self.create_install_boottool_mock(loader, BOOTTOOL_DST)
-        # set up the recording
-        mock_install.expect_call()
-        # run the test
-        self.assertEquals(loader._get_boottool_path(), BOOTTOOL_DST)
-        self.god.check_playback()
-
-
-    def test_install_is_only_called_once(self):
-        BOOTTOOL_DST = "/unittest/tmp/boottool"
-        host = self.create_mock_host()
-        loader = bootloader.Bootloader(host)
-        # mock out loader.install_boottool
-        mock_install = \
-                self.create_install_boottool_mock(loader, BOOTTOOL_DST)
-        # set up the recording
-        mock_install.expect_call()
-        # run the test
-        self.assertEquals(loader._get_boottool_path(), BOOTTOOL_DST)
-        self.god.check_playback()
-        self.assertEquals(loader._get_boottool_path(), BOOTTOOL_DST)
-        self.god.check_playback()
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/server/hosts/common.py b/server/hosts/common.py
deleted file mode 100644
index 41607e1..0000000
--- a/server/hosts/common.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import os, sys
-dirname = os.path.dirname(sys.modules[__name__].__file__)
-autotest_dir = os.path.abspath(os.path.join(dirname, "..", ".."))
-client_dir = os.path.join(autotest_dir, "client")
-sys.path.insert(0, client_dir)
-import setup_modules
-sys.path.pop(0)
-setup_modules.setup(base_path=autotest_dir, root_module_name="autotest_lib")
diff --git a/server/hosts/factory.py b/server/hosts/factory.py
deleted file mode 100644
index 7a2a724..0000000
--- a/server/hosts/factory.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from autotest_lib.client.common_lib import utils, error, global_config
-from autotest_lib.server import autotest, utils as server_utils
-from autotest_lib.server.hosts import site_factory, ssh_host, serial
-from autotest_lib.server.hosts import logfile_monitor
-
-DEFAULT_FOLLOW_PATH = '/var/log/kern.log'
-DEFAULT_PATTERNS_PATH = 'console_patterns'
-SSH_ENGINE = global_config.global_config.get_config_value('AUTOSERV',
-                                                          'ssh_engine',
-                                                          type=str)
-
-# for tracking which hostnames have already had job_start called
-_started_hostnames = set()
-
-def create_host(
-    hostname, auto_monitor=True, follow_paths=None, pattern_paths=None,
-    netconsole=False, **args):
-    # by default assume we're using SSH support
-    if SSH_ENGINE == 'paramiko':
-        from autotest_lib.server.hosts import paramiko_host
-        classes = [paramiko_host.ParamikoHost]
-    elif SSH_ENGINE == 'raw_ssh':
-        classes = [ssh_host.SSHHost]
-    else:
-        raise error.AutoServError("Unknown SSH engine %s. Please verify the "
-                                  "value of the configuration key 'ssh_engine' "
-                                  "on autotest's global_config.ini file." %
-                                  SSH_ENGINE)
-
-    # by default mix in run_test support
-    classes.append(autotest.AutotestHostMixin)
-
-    # if the user really wants to use netconsole, let them
-    if netconsole:
-        classes.append(netconsole.NetconsoleHost)
-
-    if auto_monitor:
-        # use serial console support if it's available
-        conmux_args = {}
-        for key in ("conmux_server", "conmux_attach"):
-            if key in args:
-                conmux_args[key] = args[key]
-        if serial.SerialHost.host_is_supported(hostname, **conmux_args):
-            classes.append(serial.SerialHost)
-        else:
-            # no serial available, fall back to direct dmesg logging
-            if follow_paths is None:
-                follow_paths = [DEFAULT_FOLLOW_PATH]
-            else:
-                follow_paths = list(follow_paths) + [DEFAULT_FOLLOW_PATH]
-
-            if pattern_paths is None:
-                pattern_paths = [DEFAULT_PATTERNS_PATH]
-            else:
-                pattern_paths = (
-                    list(pattern_paths) + [DEFAULT_PATTERNS_PATH])
-
-            logfile_monitor_class = logfile_monitor.NewLogfileMonitorMixin(
-                follow_paths, pattern_paths)
-            classes.append(logfile_monitor_class)
-
-    elif follow_paths:
-        logfile_monitor_class = logfile_monitor.NewLogfileMonitorMixin(
-            follow_paths, pattern_paths)
-        classes.append(logfile_monitor_class)
-
-    # do any site-specific processing of the classes list
-    site_factory.postprocess_classes(classes, hostname,
-                                     auto_monitor=auto_monitor, **args)
-
-    hostname, args['user'], args['password'], args['port'] = \
-            server_utils.parse_machine(hostname, ssh_user, ssh_pass, ssh_port)
-
-    # create a custom host class for this machine and return an instance of it
-    host_class = type("%s_host" % hostname, tuple(classes), {})
-    host_instance = host_class(hostname, **args)
-
-    # call job_start if this is the first time this host is being used
-    if hostname not in _started_hostnames:
-        host_instance.job_start()
-        _started_hostnames.add(hostname)
-
-    return host_instance
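
create_host() composes a host class per machine at runtime; the core
trick, isolated (the mixin names are invented for the example):

    class SSHMixin(object):
        def run(self, cmd):
            return 'ssh: %s' % cmd

    class MonitorMixin(object):
        def start_loggers(self):
            return 'monitoring started'

    classes = [SSHMixin, MonitorMixin]
    # same pattern as the factory above: build a class, then instantiate
    host_class = type('web1_host', tuple(classes), {})
    host = host_class()
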
diff --git a/server/hosts/guest.py b/server/hosts/guest.py
deleted file mode 100644
index bd57e9d..0000000
--- a/server/hosts/guest.py
+++ /dev/null
@@ -1,70 +0,0 @@
-#
-# Copyright 2007 Google Inc. Released under the GPL v2
-
-"""
-This module defines the Guest class in the Host hierarchy.
-
-Implementation details:
-You should import the "hosts" package instead of importing each type of host.
-
-        Guest: a virtual machine on which you can run programs
-"""
-
-__author__ = """
-mbligh@google.com (Martin J. Bligh),
-poirier@google.com (Benjamin Poirier),
-stutsman@google.com (Ryan Stutsman)
-"""
-
-
-from autotest_lib.server.hosts import ssh_host
-
-
-class Guest(ssh_host.SSHHost):
-    """
-    This class represents a virtual machine on which you can run
-    programs.
-
-    It is not the machine autoserv is running on.
-
-    Implementation details:
-    This is an abstract class, leaf subclasses must implement the methods
-    listed here and in parent classes which have no implementation. They
-    may reimplement methods which already have an implementation. You
-    must not instantiate this class but should instantiate one of those
-    leaf subclasses.
-    """
-
-    controlling_hypervisor = None
-
-
-    def _initialize(self, controlling_hypervisor, *args, **dargs):
-        """
-        Construct a Guest object
-
-        Args:
-                controlling_hypervisor: Hypervisor object that is
-                        responsible for the creation and management of
-                        this guest
-        """
-        hostname = controlling_hypervisor.new_guest()
-        super(Guest, self)._initialize(hostname, *args, **dargs)
-        self.controlling_hypervisor = controlling_hypervisor
-
-
-    def __del__(self):
-        """
-        Destroy a Guest object
-        """
-        super(Guest, self).__del__()
-        self.controlling_hypervisor.delete_guest(self.hostname)
-
-
-    def hardreset(self, timeout=600, wait=True):
-        """
-        Perform a "hardreset" of the guest.
-
-        It is restarted through the hypervisor. That will restart it
-        even if the guest is otherwise inaccessible through ssh.
-        """
-        return self.controlling_hypervisor.reset_guest(self.hostname)
diff --git a/server/hosts/kvm_guest.py b/server/hosts/kvm_guest.py
deleted file mode 100644
index c17bb98..0000000
--- a/server/hosts/kvm_guest.py
+++ /dev/null
@@ -1,46 +0,0 @@
-#
-# Copyright 2007 Google Inc. Released under the GPL v2
-
-"""
-This module defines the KVMGuest class.
-
-Implementation details:
-You should import the "hosts" package instead of importing each type of host.
-
-        KVMGuest: a KVM virtual machine on which you can run programs
-"""
-
-__author__ = """
-mbligh@google.com (Martin J. Bligh),
-poirier@google.com (Benjamin Poirier),
-stutsman@google.com (Ryan Stutsman)
-"""
-
-
-import guest
-
-
-class KVMGuest(guest.Guest):
-    """This class represents a KVM virtual machine on which you can run
-    programs.
-
-    Implementation details:
-    This is a leaf class in an abstract class hierarchy, it must
-    implement the unimplemented methods in parent classes.
-    """
-
-    def _initialize(self, controlling_hypervisor, qemu_options, *args, **dargs):
-        """
-        Construct a KVMGuest object
-
-        Args:
-                controlling_hypervisor: hypervisor object that is
-                        responsible for the creation and management of
-                        this guest
-                qemu_options: options to pass to qemu, these should be
-                        appropriately shell escaped, if need be.
-        """
-        hostname = controlling_hypervisor.new_guest(qemu_options)
-        # bypass Guest's __init__
-        super(KVMGuest, self)._initialize(hostname, *args, **dargs)
-        self.controlling_hypervisor = controlling_hypervisor
diff --git a/server/hosts/logfile_monitor.py b/server/hosts/logfile_monitor.py
deleted file mode 100644
index e5990f3..0000000
--- a/server/hosts/logfile_monitor.py
+++ /dev/null
@@ -1,290 +0,0 @@
-import logging, os, sys, subprocess, tempfile, traceback
-import time
-
-from autotest_lib.client.common_lib import utils
-from autotest_lib.server import utils as server_utils
-from autotest_lib.server.hosts import abstract_ssh, monitors
-
-MONITORDIR = monitors.__path__[0]
-SUPPORTED_PYTHON_VERS = ('2.4', '2.5', '2.6')
-DEFAULT_PYTHON = '/usr/bin/python'
-
-
-class Error(Exception):
-    pass
-
-
-class InvalidPatternsPathError(Error):
-    """An invalid patterns_path was specified."""
-
-
-class InvalidConfigurationError(Error):
-    """An invalid configuration was specified."""
-
-
-class FollowFilesLaunchError(Error):
-    """Error occurred launching followfiles remotely."""
-
-
-def list_remote_pythons(host):
-    """List out installed pythons on host."""
-    result = host.run('ls /usr/bin/python[0-9]*')
-    return result.stdout.splitlines()
-
-
-def select_supported_python(installed_pythons):
-    """Select a supported python from a list"""
-    for python in installed_pythons:
-        if python[-3:] in SUPPORTED_PYTHON_VERS:
-            return python
-
-
-def copy_monitordir(host):
-    """Copy over monitordir to a tmpdir on the remote host."""
-    tmp_dir = host.get_tmp_dir()
-    host.send_file(MONITORDIR, tmp_dir)
-    return os.path.join(tmp_dir, 'monitors')
-
-
-def launch_remote_followfiles(host, lastlines_dirpath, follow_paths):
-    """Launch followfiles.py remotely on follow_paths."""
-    logging.info('Launching followfiles on target: %s, %s, %s',
-                 host.hostname, lastlines_dirpath, str(follow_paths))
-
-    # First make sure a supported Python is on host
-    installed_pythons = list_remote_pythons(host)
-    supported_python = select_supported_python(installed_pythons)
-    if not supported_python:
-        if DEFAULT_PYTHON in installed_pythons:
-            logging.info('No versioned Python binary found, '
-                         'defaulting to: %s', DEFAULT_PYTHON)
-            supported_python = DEFAULT_PYTHON
-        else:
-            raise FollowFilesLaunchError('No supported Python on host.')
-
-    remote_monitordir = copy_monitordir(host)
-    remote_script_path = os.path.join(remote_monitordir, 'followfiles.py')
-
-    followfiles_cmd = '%s %s --lastlines_dirpath=%s %s' % (
-        supported_python, remote_script_path,
-        lastlines_dirpath, ' '.join(follow_paths))
-
-    remote_ff_proc = subprocess.Popen(host._make_ssh_cmd(followfiles_cmd),
-                                      stdin=open(os.devnull, 'r'),
-                                      stdout=subprocess.PIPE, shell=True)
-
-
-    # Give it enough time to crash if it's going to (it shouldn't).
-    time.sleep(5)
-    doa = remote_ff_proc.poll()
-    if doa:
-        raise FollowFilesLaunchError('ssh command crashed.')
-
-    return remote_ff_proc
-
-
-def resolve_patterns_path(patterns_path):
-    """Resolve patterns_path to existing absolute local path or raise.
-
-    As a convenience we allow users to specify a non-absolute patterns_path.
-    However, these need to be resolved before they are passed down
-    to console.py.
-
-    For now we expect non-absolute ones to be in self.monitordir.
-    """
-    if os.path.isabs(patterns_path):
-        if os.path.exists(patterns_path):
-            return patterns_path
-        else:
-            raise InvalidPatternsPathError('Absolute path does not exist.')
-    else:
-        patterns_path = os.path.join(MONITORDIR, patterns_path)
-        if os.path.exists(patterns_path):
-            return patterns_path
-        else:
-            raise InvalidPatternsPathError('Relative path does not exist.')
-
-
-def launch_local_console(
-        input_stream, console_log_path, pattern_paths=None):
-    """Launch console.py locally.
-
-    This will process the output from followfiles and
-    fire warning messages per configuration in pattern_paths.
-    """
-    r, w = os.pipe()
-    local_script_path = os.path.join(MONITORDIR, 'console.py')
-    console_cmd = [sys.executable, local_script_path]
-    if pattern_paths:
-        console_cmd.append('--pattern_paths=%s' % ','.join(pattern_paths))
-
-    console_cmd += [console_log_path, str(w)]
-
-    # Setup warning stream before we actually launch
-    warning_stream = os.fdopen(r, 'r', 0)
-
-    devnull_w = open(os.devnull, 'w')
-    # Launch console.py locally
-    console_proc = subprocess.Popen(
-        console_cmd, stdin=input_stream,
-        stdout=devnull_w, stderr=devnull_w)
-    os.close(w)
-    return console_proc, warning_stream
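
Taken together, the two launch helpers above build this pipeline: a
remote tail feeds a local pattern-matcher over a pipe. Schematically
(host, path and the grep stand-in for monitors/console.py are all
placeholders):

    import subprocess

    # remote side: follow the interesting log over ssh
    follow = subprocess.Popen('ssh web1 tail -F /var/log/kern.log',
                              shell=True, stdout=subprocess.PIPE)
    # local side: scan the stream and surface matching lines as warnings
    console = subprocess.Popen(['grep', '--line-buffered', 'WARNING'],
                               stdin=follow.stdout)
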
-
-
-def _log_and_ignore_exceptions(f):
-    """Decorator: automatically log exception during a method call.
-    """
-    def wrapped(self, *args, **dargs):
-        try:
-            return f(self, *args, **dargs)
-        except Exception, e:
-            print "LogfileMonitor.%s failed with exception %s" % (f.__name__, e)
-            print "Exception ignored:"
-            traceback.print_exc(file=sys.stdout)
-    wrapped.__name__ = f.__name__
-    wrapped.__doc__ = f.__doc__
-    wrapped.__dict__.update(f.__dict__)
-    return wrapped
-
-
-class LogfileMonitorMixin(abstract_ssh.AbstractSSHHost):
-    """This can monitor one or more remote files using tail.
-
-    This class and its counterpart script, monitors/followfiles.py,
-    add most functionality one would need to launch and monitor
-    remote tail processes on self.hostname.
-
-    This can be used by subclassing normally or by calling
-    NewLogfileMonitorMixin (below).
-
-    It is configured via two class attributes:
-        follow_paths: Remote paths to monitor
-        pattern_paths: Local paths to alert pattern definition files.
-    """
-    follow_paths = ()
-    pattern_paths = ()
-
-    def _initialize(self, console_log=None, *args, **dargs):
-        super(LogfileMonitorMixin, self)._initialize(*args, **dargs)
-
-        self._lastlines_dirpath = None
-        self._console_proc = None
-        self._console_log = console_log or 'logfile_monitor.log'
-
-
-    def reboot_followup(self, *args, **dargs):
-        super(LogfileMonitorMixin, self).reboot_followup(*args, **dargs)
-        self.__stop_loggers()
-        self.__start_loggers()
-
-
-    def start_loggers(self):
-        super(LogfileMonitorMixin, self).start_loggers()
-        self.__start_loggers()
-
-
-    def remote_path_exists(self, remote_path):
-        """Return True if remote_path exists, False otherwise."""
-        return not self.run(
-            'ls %s' % remote_path, ignore_status=True).exit_status
-
-
-    def check_remote_paths(self, remote_paths):
-        """Return list of remote_paths that currently exist."""
-        return [
-            path for path in remote_paths if self.remote_path_exists(path)]
-
-
-    @_log_and_ignore_exceptions
-    def __start_loggers(self):
-        """Start multifile monitoring logger.
-
-        Launch monitors/followfiles.py on the target and hook its output
-        to monitors/console.py locally.
-        """
-        # Check that the follow_paths exist; for any that don't,
-        # emit a warning and proceed.
-        follow_paths_set = set(self.follow_paths)
-        existing = self.check_remote_paths(follow_paths_set)
-        missing = follow_paths_set.difference(existing)
-        if missing:
-            # Log warning that we are missing expected remote paths.
-            logging.warn('Target %s is missing expected remote paths: %s',
-                         self.hostname, ', '.join(missing))
-
-        # If none of them exist just return (for now).
-        if not existing:
-            return
-
-        # Create a new lastlines_dirpath on the remote host if not already set.
-        if not self._lastlines_dirpath:
-            self._lastlines_dirpath = self.get_tmp_dir(parent='/var/tmp')
-
-        # Launch followfiles on target
-        try:
-            self._followfiles_proc = launch_remote_followfiles(
-                self, self._lastlines_dirpath, existing)
-        except FollowFilesLaunchError:
-            # We're hosed, there is no point in proceeding.
-            logging.fatal('Failed to launch followfiles on target,'
-                          ' aborting logfile monitoring: %s', self.hostname)
-            if self.job:
-                # Put a warning in the status.log
-                self.job.record(
-                    'WARN', None, 'logfile.monitor',
-                    'followfiles launch failed')
-            return
-
-        # Ensure we have sane pattern_paths before launching console.py
-        sane_pattern_paths = []
-        for patterns_path in set(self.pattern_paths):
-            try:
-                patterns_path = resolve_patterns_path(patterns_path)
-            except InvalidPatternsPathError, e:
-                logging.warn('Specified patterns_path is invalid: %s, %s',
-                             patterns_path, str(e))
-            else:
-                sane_pattern_paths.append(patterns_path)
-
-        # Launch console.py locally, pass in output stream from followfiles.
-        self._console_proc, self._logfile_warning_stream = \
-            launch_local_console(
-                self._followfiles_proc.stdout, self._console_log,
-                sane_pattern_paths)
-
-        if self.job:
-            self.job.warning_loggers.add(self._logfile_warning_stream)
-
-
-    def stop_loggers(self):
-        super(LogfileMonitorMixin, self).stop_loggers()
-        self.__stop_loggers()
-
-
-    @_log_and_ignore_exceptions
-    def __stop_loggers(self):
-        if self._console_proc:
-            utils.nuke_subprocess(self._console_proc)
-            utils.nuke_subprocess(self._followfiles_proc)
-            self._console_proc = self._followfiles_proc = None
-            if self.job:
-                self.job.warning_loggers.discard(self._logfile_warning_stream)
-            self._logfile_warning_stream.close()
-
-
-def NewLogfileMonitorMixin(follow_paths, pattern_paths=None):
-    """Create a custom in-memory subclass of LogfileMonitorMixin.
-
-    Args:
-      follow_paths: list; Remote paths to tail.
-      pattern_paths: list; Local alert pattern definition files.
-    """
-    if not follow_paths:
-        raise InvalidConfigurationError
-
-    return type(
-        'LogfileMonitorMixin%d' % id(follow_paths),
-        (LogfileMonitorMixin,),
-        {'follow_paths': follow_paths,
-         'pattern_paths': pattern_paths or ()})
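
A hypothetical usage sketch of the factory above (the hosts.SSHHost base is an
assumption for illustration, not part of this hunk):

    from autotest_lib.server import hosts

    MessagesMonitor = NewLogfileMonitorMixin(
        follow_paths=('/var/log/messages',),
        pattern_paths=('console_patterns',))

    class MonitoredSSHHost(MessagesMonitor, hosts.SSHHost):
        pass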
diff --git a/server/hosts/monitors/__init__.py b/server/hosts/monitors/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/server/hosts/monitors/common.py b/server/hosts/monitors/common.py
deleted file mode 100644
index c505ee4..0000000
--- a/server/hosts/monitors/common.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import os, sys
-dirname = os.path.dirname(sys.modules[__name__].__file__)
-autotest_dir = os.path.abspath(os.path.join(dirname, "..", "..", ".."))
-client_dir = os.path.join(autotest_dir, "client")
-sys.path.insert(0, client_dir)
-import setup_modules
-sys.path.pop(0)
-setup_modules.setup(base_path=autotest_dir, root_module_name="autotest_lib")
diff --git a/server/hosts/monitors/console.py b/server/hosts/monitors/console.py
deleted file mode 100755
index c516f9f..0000000
--- a/server/hosts/monitors/console.py
+++ /dev/null
@@ -1,88 +0,0 @@
-#!/usr/bin/python
-#
-# Script for translating console output (from STDIN) into Autotest
-# warning messages.
-
-import gzip, optparse, os, signal, sys, time
-import common
-from autotest_lib.server.hosts.monitors import monitors_util
-
-PATTERNS_PATH = os.path.join(os.path.dirname(__file__), 'console_patterns')
-
-usage = 'usage: %prog [options] logfile_name warn_fd'
-parser = optparse.OptionParser(usage=usage)
-parser.add_option(
-    '-t', '--log_timestamp_format',
-    default='[%Y-%m-%d %H:%M:%S]',
-    help='Timestamp format for log messages')
-parser.add_option(
-    '-p', '--pattern_paths',
-    default=PATTERNS_PATH,
-    help='Path to alert hook patterns file')
-
-
-def _open_logfile(logfile_base_name):
-    """Opens an output file using the given name.
-
-    A timestamp and a compression suffix are added to the name.
-
-    @param logfile_base_name - The log file path without a compression suffix.
-    @returns An open file like object.  Its close method must be called before
-            exiting or data may be lost due to internal buffering.
-    """
-    timestamp = int(time.time())
-    while True:
-        logfile_name = '%s.%d-%d.gz' % (logfile_base_name,
-                                        timestamp, os.getpid())
-        if not os.path.exists(logfile_name):
-            break
-        timestamp += 1
-    logfile = gzip.GzipFile(logfile_name, 'w')
-    return logfile
-
-
-def _set_logfile_close_signal_handler(logfile):
-    """Setup a signal handler to explicitly call logfile.close() and exit.
-
-    Because we are writing a compressed file we need to make sure we properly
-    close to flush our internal buffer on exit. logfile_monitor.py sends us
-    a SIGTERM and waits 5 seconds before sending a SIGKILL, so we have
-    plenty of time to do this.
-
-    @param logfile - An open file object to be closed on SIGTERM.
-    """
-    def _on_signal_close_logfile_before_exit(unused_signal_no, unused_frame):
-        logfile.close()
-        os._exit(1)  # os.exit() does not exist; hard-exit after the close
-    signal.signal(signal.SIGTERM, _on_signal_close_logfile_before_exit)
-
-
-def _unset_signal_handler():
-    signal.signal(signal.SIGTERM, signal.SIG_DFL)
-
-
-def main():
-    (options, args) = parser.parse_args()
-    if len(args) != 2:
-        parser.print_help()
-        sys.exit(1)
-
-    logfile = _open_logfile(args[0])
-    warnfile = os.fdopen(int(args[1]), 'w', 0)
-    # For now we aggregate all the alert_hooks.
-    alert_hooks = []
-    for patterns_path in options.pattern_paths.split(','):
-        alert_hooks.extend(monitors_util.build_alert_hooks_from_path(
-                patterns_path, warnfile))
-
-    _set_logfile_close_signal_handler(logfile)
-    try:
-        monitors_util.process_input(
-            sys.stdin, logfile, options.log_timestamp_format, alert_hooks)
-    finally:
-        logfile.close()
-        _unset_signal_handler()
-
-
-if __name__ == '__main__':
-    main()
diff --git a/server/hosts/monitors/console_patterns b/server/hosts/monitors/console_patterns
deleted file mode 100644
index 72bf557..0000000
--- a/server/hosts/monitors/console_patterns
+++ /dev/null
@@ -1,71 +0,0 @@
-BUG
-^.*Kernel panic ?(.*)
-machine panic'd (%s)
-
-BUG
-^.*Oops ?(.*)
-machine Oops'd (%s)
-
-BUG
-^.*kdb>
-machine dropped to kdb (see console)
-
-BUG
-^.*Open Firmware exception handle entered from non-OF code
-machine took an open firmware exception (see console)
-
-BUG
-^.*(BUG:.*)
-%s
-
-BUG
-^.*(kernel BUG .*)
-%s
-
-OOM
-^.*(invoked oom-killer:.*)
-%s
-
-BUG
-^(.*CommandAbort.*)
-%s
-
-LOCKDEP
-^.*(possible circular locking dependency detected.*)
-%s
-
-LOCKDEP
-^.*(unsafe lock order detected.*)
-%s
-
-LOCKDEP
-^.*(possible recursive locking detected.*)
-%s
-
-LOCKDEP
-^.*(inconsistent lock state.*)
-%s
-
-LOCKDEP
-^.*(possible irq lock inversion dependency detected.*)
-%s
-
-LOCKDEP
-^.*(bad unlock balance detected.*)
-%s
-
-LOCKDEP
-^.*(bad contention detected.*)
-%s
-
-LOCKDEP
-^.*(held lock freed.*)
-%s
-
-LOCKDEP
-^.*(lock held at task exit time.*)
-%s
-
-LOCKDEP
-^.*(lock held when returning to user space.*)
-%s
diff --git a/server/hosts/monitors/console_patterns_test.py b/server/hosts/monitors/console_patterns_test.py
deleted file mode 100755
index 55e3758..0000000
--- a/server/hosts/monitors/console_patterns_test.py
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/python
-
-import common
-import cStringIO, os, unittest
-from autotest_lib.server.hosts.monitors import monitors_util
-
-class _MockWarnFile(object):
-    def __init__(self):
-        self.warnings = []
-
-
-    def write(self, data):
-        if data == '\n':
-            return
-        timestamp, type, message = data.split('\t')
-        self.warnings.append((type, message))
-
-
-class ConsolePatternsTestCase(unittest.TestCase):
-    def setUp(self):
-        self._warnfile = _MockWarnFile()
-        patterns_path = os.path.join(os.path.dirname(__file__),
-                                     'console_patterns')
-        self._alert_hooks = monitors_util.build_alert_hooks_from_path(
-                patterns_path, self._warnfile)
-        self._logfile = cStringIO.StringIO()
-
-
-    def _process_line(self, line):
-        input_file = cStringIO.StringIO(line + '\n')
-        monitors_util.process_input(input_file, self._logfile,
-                                    alert_hooks=self._alert_hooks)
-
-
-    def _assert_warning_fired(self, type, message):
-        key = (type, message)
-        self.assert_(key in self._warnfile.warnings,
-                     'Warning %s not found in: %s' % (key,
-                                                      self._warnfile.warnings))
-
-
-    def _assert_no_warnings_fired(self):
-        self.assertEquals(self._warnfile.warnings, [])
-
-
-class ConsolePatternsTest(ConsolePatternsTestCase):
-    def test_oops(self):
-        self._process_line('<0>Oops: 0002 [1] SMP ')
-        self._assert_warning_fired('BUG', "machine Oops'd (: 0002 [1] SMP)")
-
-
-if __name__ == '__main__':
-    unittest.main()
diff --git a/server/hosts/monitors/followfiles.py b/server/hosts/monitors/followfiles.py
deleted file mode 100755
index f2ad4d9..0000000
--- a/server/hosts/monitors/followfiles.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/python
-#
-# Script for tailing one to many logfiles and merging their output.
-
-import optparse, os, signal, sys
-
-import monitors_util
-
-usage = 'usage: %prog [options] follow_path ...'
-parser = optparse.OptionParser(usage=usage)
-parser.add_option(
-    '-l', '--lastlines_dirpath',
-    help='Path to store/read last line data to/from.')
-
-
-def main():
-    (options, follow_paths) = parser.parse_args()
-    if len(follow_paths) < 1:
-        parser.print_help()
-        sys.exit(1)
-
-    monitors_util.follow_files(
-        follow_paths, sys.stdout, options.lastlines_dirpath)
-
-
-if __name__ == '__main__':
-    main()
diff --git a/server/hosts/monitors/monitors_util.py b/server/hosts/monitors/monitors_util.py
deleted file mode 100644
index 3c3afcc..0000000
--- a/server/hosts/monitors/monitors_util.py
+++ /dev/null
@@ -1,379 +0,0 @@
-# Shared utility functions across monitors scripts.
-
-import fcntl, os, re, select, signal, subprocess, sys, time
-
-TERM_MSG = 'Console connection unexpectedly lost. Terminating monitor.'
-
-
-class Error(Exception):
-    pass
-
-
-class InvalidTimestampFormat(Error):
-    pass
-
-
-def prepend_timestamp(msg, format):
-    """Prepend timestamp to a message in a standard way.
-
-    Args:
-      msg: str; Message to prepend timestamp to.
-      format: str or callable; Either format string that
-          can be passed to time.strftime or a callable
-          that will generate the timestamp string.
-
-    Returns: str; 'timestamp\tmsg'
-    """
-    if type(format) is str:
-        timestamp = time.strftime(format, time.localtime())
-    elif callable(format):
-        timestamp = str(format())
-    else:
-        raise InvalidTimestampFormat
-
-    return '%s\t%s' % (timestamp, msg)
-
-
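Concretely (values taken from the unittests further down in this patch):

    # prepend_timestamp('testing testing', '[%Y-%m-%d %H:%M:%S]')
    #     -> '[2008-10-31 18:58:17]\ttesting testing'
    # prepend_timestamp('testing testing', lambda: int(time.time()))
    #     -> '1225501829\ttesting testing'
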
-def write_logline(logfile, msg, timestamp_format=None):
-    """Write msg, possibly prepended with a timestamp, as a terminated line.
-
-    Args:
-      logfile: file; File object to .write() msg to.
-      msg: str; Message to write.
-      timestamp_format: str or callable; If specified will
-          be passed into prepend_timestamp along with msg.
-    """
-    msg = msg.rstrip('\n')
-    if timestamp_format:
-        msg = prepend_timestamp(msg, timestamp_format)
-    logfile.write(msg + '\n')
-
-
-def make_alert(warnfile, msg_type, msg_template, timestamp_format=None):
-    """Create an alert generation function that writes to warnfile.
-
-    Args:
-      warnfile: file; File object to write msg's to.
-      msg_type: str; String describing the message type
-      msg_template: str; String template that function params
-          are passed through.
-      timestamp_format: str or callable; If specified will
-          be passed into prepend_timestamp along with msg.
-
-    Returns: function with a signature of (*params);
-        The format for a warning used here is:
-            %(timestamp)d\t%(msg_type)s\t%(status)s\n
-    """
-    if timestamp_format is None:
-        timestamp_format = lambda: int(time.time())
-
-    def alert(*params):
-        formatted_msg = msg_type + "\t" + msg_template % params
-        timestamped_msg = prepend_timestamp(formatted_msg, timestamp_format)
-        print >> warnfile, timestamped_msg
-    return alert
-
-
-def _assert_is_all_blank_lines(lines, source_file):
-    if sum(len(line.strip()) for line in lines) > 0:
-        raise ValueError('warning patterns are not separated by blank lines '
-                         'in %s' % source_file)
-
-
-def _read_overrides(overrides_file):
-    """
-    Read pattern overrides from overrides_file, which may be None.  Overrides
-    files are expected to have the format:
-    <old regex> <newline> <new regex> <newline> <newline>
-            old regex = a regex from the patterns file
-            new regex = the regex to replace it
-    Lines beginning with # are ignored.
-
-    Returns a dict mapping old regexes to their replacements.
-    """
-    if not overrides_file:
-        return {}
-    overrides_lines = [line for line in overrides_file.readlines()
-                       if not line.startswith('#')]
-    overrides_pairs = zip(overrides_lines[0::3], overrides_lines[1::3])
-    _assert_is_all_blank_lines(overrides_lines[2::3], overrides_file)
-    return dict(overrides_pairs)
-
-
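A sketch of an overrides file the parser above accepts (the regexes are
illustrative):

    # site_console_patterns_overrides:
    #
    #   # loosen the stock Oops pattern on this site
    #   ^.*Oops ?(.*)
    #   ^.*(Oops[^:]*:.*)
    #
    # The returned dict maps the raw lines, newlines included, which is why
    # build_alert_hooks() below looks regexes up before rstrip'ing them.
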
-def build_alert_hooks(patterns_file, warnfile, overrides_file=None):
-    """Parse data in patterns file and transform into alert_hook list.
-
-    Args:
-      patterns_file: file; File to read alert pattern definitions from.
-      warnfile: file; File to configure alert function to write warning to.
-
-    Returns:
-      list; Regex to alert function mapping.
-          [(regex, alert_function), ...]
-    """
-    pattern_lines = patterns_file.readlines()
-    # expected pattern format:
-    # <msgtype> <newline> <regex> <newline> <alert> <newline> <newline>
-    #   msgtype = a string categorizing the type of the message - used for
-    #             enabling/disabling specific categories of warnings
-    #   regex   = a python regular expression
-    #   alert   = a string describing the alert message
-    #             if the regex matches the line, this displayed warning will
-    #             be the result of (alert % match.groups())
-    patterns = zip(pattern_lines[0::4], pattern_lines[1::4],
-                   pattern_lines[2::4])
-    _assert_is_all_blank_lines(pattern_lines[3::4], patterns_file)
-
-    overrides_map = _read_overrides(overrides_file)
-
-    hooks = []
-    for msgtype, regex, alert in patterns:
-        regex = overrides_map.get(regex, regex)
-        regex = re.compile(regex.rstrip('\n'))
-        alert_function = make_alert(warnfile, msgtype.rstrip('\n'),
-                                    alert.rstrip('\n'))
-        hooks.append((regex, alert_function))
-    return hooks
-
-
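A minimal sketch of one stanza flowing through build_alert_hooks(); this
mirrors ConsolePatternsTest further down in the patch:

    import StringIO

    warnfile = StringIO.StringIO()
    patterns = StringIO.StringIO(
        "BUG\n"
        "^.*Oops ?(.*)\n"
        "machine Oops'd (%s)\n"
        "\n")
    hooks = build_alert_hooks(patterns, warnfile)
    regex, alert = hooks[0]
    match = regex.match('<0>Oops: 0002 [1] SMP')
    if match:
        alert(*match.groups())
    # warnfile now holds "<epoch>\tBUG\tmachine Oops'd (: 0002 [1] SMP)\n"
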
-def build_alert_hooks_from_path(patterns_path, warnfile):
-    """
-    Same as build_alert_hooks, but accepts a path to a patterns file and
-    automatically finds the corresponding site overrides file if one exists.
-    """
-    dirname, basename = os.path.split(patterns_path)
-    site_overrides_basename = 'site_' + basename + '_overrides'
-    site_overrides_path = os.path.join(dirname, site_overrides_basename)
-    site_overrides_file = None
-    patterns_file = open(patterns_path)
-    try:
-        if os.path.exists(site_overrides_path):
-            site_overrides_file = open(site_overrides_path)
-        try:
-            return build_alert_hooks(patterns_file, warnfile,
-                                     overrides_file=site_overrides_file)
-        finally:
-            if site_overrides_file:
-                site_overrides_file.close()
-    finally:
-        patterns_file.close()
-
-
-def process_input(
-    input, logfile, log_timestamp_format=None, alert_hooks=()):
-    """Continuously read lines from input stream and:
-
-    - Write them to log, possibly prefixed by timestamp.
-    - Watch for alert patterns.
-
-    Args:
-      input: file; Stream to read from.
-      logfile: file; Log file to write to
-      log_timestamp_format: str; Format to use for timestamping entries.
-          No timestamp is added if None.
-      alert_hooks: list; Generated from build_alert_hooks.
-          [(regex, alert_function), ...]
-    """
-    while True:
-        line = input.readline()
-        if len(line) == 0:
-            # this should only happen if the remote console unexpectedly
-            # goes away. terminate this process so that we don't spin
-            # forever doing 0-length reads off of input
-            write_logline(logfile, TERM_MSG, log_timestamp_format)
-            break
-
-        if line == '\n':
-            # If it's just an empty line we discard and continue.
-            continue
-
-        write_logline(logfile, line, log_timestamp_format)
-
-        for regex, callback in alert_hooks:
-            match = re.match(regex, line.strip())
-            if match:
-                callback(*match.groups())
-
-
-def lookup_lastlines(lastlines_dirpath, path):
-    """Retrieve last lines seen for path.
-
-    Opens the corresponding lastlines file for path.
-    If there isn't one, or its contents no longer match, returns None.
-
-    Args:
-      lastlines_dirpath: str; Dirpath where lastlines files are stored.
-      path: str; Filepath to source file that lastlines came from.
-
-    Returns:
-      int; Reverse line number (lines from the end of the file) to
-          resume tailing from, if the last lines seen were found
-      - Or -
-      None; Otherwise
-    """
-    underscored = path.replace('/', '_')
-    try:
-        lastlines_file = open(os.path.join(lastlines_dirpath, underscored))
-    except (OSError, IOError):
-        return
-
-    lastlines = lastlines_file.read()
-    lastlines_file.close()
-    os.remove(lastlines_file.name)
-    if not lastlines:
-        return
-
-    try:
-        target_file = open(path)
-    except (OSError, IOError):
-        return
-
-    # Load it all in for now
-    target_data = target_file.read()
-    target_file.close()
-    # Get start loc in the target_data string, scanning from right
-    loc = target_data.rfind(lastlines)
-    if loc == -1:
-        return
-
-    # Then translate this into a reverse line number
-    # (count newlines that occur afterward)
-    reverse_lineno = target_data.count('\n', loc + len(lastlines))
-    return reverse_lineno
-
-
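A worked example of the reverse-line computation above (this is exactly the
fixture used by FollowFilesTestCase later in this patch):

    # source file:   'bip\nwooo\nyeah\nman\npow\n'
    # lastlines:     'wooo\n'
    # rfind() locates 'wooo\n'; three '\n's follow it (after 'yeah', 'man'
    # and 'pow'), so lookup_lastlines() returns 3 and tail resumes at 'yeah'.
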
-def write_lastlines_file(lastlines_dirpath, path, data):
-    """Write data to lastlines file for path.
-
-    Args:
-      lastlines_dirpath: str; Dirpath to store lastlines files to.
-      path: str; Filepath to source file that data comes from.
-      data: str;
-
-    Returns:
-      str; Filepath that lastline data was written to.
-    """
-    underscored = path.replace('/', '_')
-    dest_path = os.path.join(lastlines_dirpath, underscored)
-    open(dest_path, 'w').write(data)
-    return dest_path
-
-
-def nonblocking(pipe):
-    """Set python file object to nonblocking mode.
-
-    This allows us to take advantage of pipe.read()
-    where we don't have to specify a buflen.
-    Cuts down on a few lines we'd have to maintain.
-
-    Args:
-      pipe: file; File object to modify
-
-    Returns: pipe
-    """
-    flags = fcntl.fcntl(pipe, fcntl.F_GETFL)
-    fcntl.fcntl(pipe, fcntl.F_SETFL, flags | os.O_NONBLOCK)
-    return pipe
-
-
-def launch_tails(follow_paths, lastlines_dirpath=None):
-    """Launch a tail process for each follow_path.
-
-    Args:
-      follow_paths: list;
-      lastlines_dirpath: str;
-
-    Returns:
-      tuple; (procs, pipes) or
-          ({path: subprocess.Popen, ...}, {file: path, ...})
-    """
-    if lastlines_dirpath and not os.path.exists(lastlines_dirpath):
-        os.makedirs(lastlines_dirpath)
-
-    tail_cmd = ('/usr/bin/tail', '--retry', '--follow=name')
-    procs = {}  # path -> tail_proc
-    pipes = {}  # tail_proc.stdout -> path
-    for path in follow_paths:
-        cmd = list(tail_cmd)
-        if lastlines_dirpath:
-            reverse_lineno = lookup_lastlines(lastlines_dirpath, path)
-            if reverse_lineno is None:
-                reverse_lineno = 1
-            cmd.append('--lines=%d' % reverse_lineno)
-
-        cmd.append(path)
-        tail_proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
-        procs[path] = tail_proc
-        pipes[nonblocking(tail_proc.stdout)] = path
-
-    return procs, pipes
-
-
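With the fixture above (a recorded lastlines entry resolving to 3), the
spawned command comes out as:

    #   /usr/bin/tail --retry --follow=name --lines=3 /var/log/messages
    # and with no lastlines state it falls back to --lines=1.
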
-def poll_tail_pipes(pipes, lastlines_dirpath=None, waitsecs=5):
-    """Wait on tail pipes for new data for waitsecs, return any new lines.
-
-    Args:
-      pipes: dict; {tail_proc.stdout: follow_path, ...}
-      lastlines_dirpath: str; Path to write lastlines to.
-      waitsecs: int; Timeout to pass to select
-
-    Returns:
-      tuple; (lines, bad_pipes) or ([line, ...], [tail_proc.stdout, ...])
-    """
-    lines = []
-    bad_pipes = []
-    # Block until at least one is ready to read or waitsecs elapses
-    ready, _, _ = select.select(pipes.keys(), (), (), waitsecs)
-    for fi in ready:
-        path = pipes[fi]
-        data = fi.read()
-        if len(data) == 0:
-            # If no data, process is probably dead, add to bad_pipes
-            bad_pipes.append(fi)
-            continue
-
-        if lastlines_dirpath:
-            # Overwrite the lastlines file for this source path
-            # Probably just want to write the last 1-3 lines.
-            write_lastlines_file(lastlines_dirpath, path, data)
-
-        for line in data.splitlines():
-            lines.append('[%s]\t%s\n' % (path, line))
-
-    return lines, bad_pipes
-
-
-def snuff(subprocs):
-    """Helper for killing off remaining live subprocesses.
-
-    Args:
-      subprocs: list; [subprocess.Popen, ...]
-    """
-    for proc in subprocs:
-        if proc.poll() is None:
-            os.kill(proc.pid, signal.SIGKILL)
-            proc.wait()
-
-
-def follow_files(follow_paths, outstream, lastlines_dirpath=None, waitsecs=5):
-    """Launch tail on a set of files and merge their output into outstream.
-
-    Args:
-      follow_paths: list; Local paths to launch tail on.
-      outstream: file; Output stream to write aggregated lines to.
-      lastlines_dirpath: Local dirpath to record last lines seen in.
-      waitsecs: int; Timeout for poll_tail_pipes.
-    """
-    procs, pipes = launch_tails(follow_paths, lastlines_dirpath)
-    while pipes:
-        lines, bad_pipes = poll_tail_pipes(pipes, lastlines_dirpath, waitsecs)
-        for bad in bad_pipes:
-            pipes.pop(bad)
-
-        try:
-            outstream.writelines(['\n'] + lines)
-            outstream.flush()
-        except (IOError, OSError), e:
-            # Something is wrong. Stop looping.
-            break
-
-    snuff(procs.values())
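
A minimal driver sketch for the entry point above (paths hypothetical; this is
essentially what main() in monitors/followfiles.py amounts to):

    import sys

    follow_files(['/var/log/messages', '/var/log/kern.log'],
                 sys.stdout, lastlines_dirpath='/var/tmp/lastlines')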
diff --git a/server/hosts/monitors/monitors_util_unittest.py b/server/hosts/monitors/monitors_util_unittest.py
deleted file mode 100755
index b7a19d7..0000000
--- a/server/hosts/monitors/monitors_util_unittest.py
+++ /dev/null
@@ -1,177 +0,0 @@
-#!/usr/bin/python
-
-import fcntl, os, signal, subprocess, StringIO
-import tempfile, textwrap, time, unittest
-import monitors_util
-
-
-def InlineStringIO(text):
-    return StringIO.StringIO(textwrap.dedent(text).strip())
-
-
-class WriteLoglineTestCase(unittest.TestCase):
-    def setUp(self):
-        self.time_tuple = (2008, 10, 31, 18, 58, 17, 4, 305, 1)
-        self.format = '[%Y-%m-%d %H:%M:%S]'
-        self.formatted_time_tuple = '[2008-10-31 18:58:17]'
-        self.msg = 'testing testing'
-
-        # Stub out time.localtime()
-        self.orig_localtime = time.localtime
-        time.localtime = lambda: self.time_tuple
-
-
-    def tearDown(self):
-        time.localtime = self.orig_localtime
-
-
-    def test_prepend_timestamp(self):
-        timestamped = monitors_util.prepend_timestamp(
-            self.msg, self.format)
-        self.assertEquals(
-            '%s\t%s' % (self.formatted_time_tuple, self.msg), timestamped)
-
-
-    def test_write_logline_with_timestamp(self):
-        logfile = StringIO.StringIO()
-        monitors_util.write_logline(logfile, self.msg, self.format)
-        logfile.seek(0)
-        written = logfile.read()
-        self.assertEquals(
-            '%s\t%s\n' % (self.formatted_time_tuple, self.msg), written)
-
-
-    def test_write_logline_without_timestamp(self):
-        logfile = StringIO.StringIO()
-        monitors_util.write_logline(logfile, self.msg)
-        logfile.seek(0)
-        written = logfile.read()
-        self.assertEquals(
-            '%s\n' % self.msg, written)
-
-
-class AlertHooksTestCase(unittest.TestCase):
-    def setUp(self):
-        self.msg_template = 'alert yay %s haha %s'
-        self.params = ('foo', 'bar')
-        self.epoch_seconds = 1225501829.9300611
-        # Stub out time.time
-        self.orig_time = time.time
-        time.time = lambda: self.epoch_seconds
-
-
-    def tearDown(self):
-        time.time = self.orig_time
-
-
-    def test_make_alert(self):
-        warnfile = StringIO.StringIO()
-        alert = monitors_util.make_alert(warnfile, "MSGTYPE",
-                                         self.msg_template)
-        alert(*self.params)
-        warnfile.seek(0)
-        written = warnfile.read()
-        ts = str(int(self.epoch_seconds))
-        expected = '%s\tMSGTYPE\t%s\n' % (ts, self.msg_template % self.params)
-        self.assertEquals(expected, written)
-
-
-    def test_build_alert_hooks(self):
-        warnfile = StringIO.StringIO()
-        patterns_file = InlineStringIO("""
-            BUG
-            ^.*Kernel panic ?(.*)
-            machine panic'd (%s)
-
-            BUG
-            ^.*Oops ?(.*)
-            machine Oops'd (%s)
-            """)
-        hooks = monitors_util.build_alert_hooks(patterns_file, warnfile)
-        self.assertEquals(len(hooks), 2)
-
-
-class ProcessInputTestCase(unittest.TestCase):
-    def test_process_input_simple(self):
-        input = InlineStringIO("""
-            woo yay
-            this is a line
-            booya
-            """)
-        logfile = StringIO.StringIO()
-        monitors_util.process_input(input, logfile)
-        input.seek(0)
-        logfile.seek(0)
-
-        self.assertEquals(
-            '%s\n%s\n' % (input.read(), monitors_util.TERM_MSG),
-            logfile.read())
-
-
-class FollowFilesTestCase(unittest.TestCase):
-    def setUp(self):
-        self.logfile_dirpath = tempfile.mkdtemp()
-        self.logfile_path = os.path.join(self.logfile_dirpath, 'messages')
-        self.firstline = 'bip\n'
-        self.lastline_seen = 'wooo\n'
-        self.line_after_lastline_seen = 'yeah\n'
-        self.lastline = 'pow\n'
-
-        self.logfile = open(self.logfile_path, 'w')
-        self.logfile.write(self.firstline)
-        self.logfile.write(self.lastline_seen)
-        self.logfile.write(self.line_after_lastline_seen)  # 3
-        self.logfile.write('man\n')   # 2
-        self.logfile.write(self.lastline)   # 1
-        self.logfile.close()
-
-        self.lastlines_dirpath = tempfile.mkdtemp()
-        monitors_util.write_lastlines_file(
-            self.lastlines_dirpath, self.logfile_path, self.lastline_seen)
-
-
-    def test_lookup_lastlines(self):
-        reverse_lineno = monitors_util.lookup_lastlines(
-            self.lastlines_dirpath, self.logfile_path)
-        self.assertEquals(reverse_lineno, 3)
-
-
-    def test_nonblocking(self):
-        po = subprocess.Popen('echo', stdout=subprocess.PIPE)
-        flags = fcntl.fcntl(po.stdout, fcntl.F_GETFL)
-        self.assertEquals(flags, 0)
-        monitors_util.nonblocking(po.stdout)
-        flags = fcntl.fcntl(po.stdout, fcntl.F_GETFL)
-        self.assertEquals(flags, 2048)
-        po.wait()
-
-
-    def test_follow_files_nostate(self):
-        follow_paths = [self.logfile_path]
-        lastlines_dirpath = tempfile.mkdtemp()
-        procs, pipes = monitors_util.launch_tails(
-            follow_paths, lastlines_dirpath)
-        lines, bad_pipes = monitors_util.poll_tail_pipes(
-            pipes, lastlines_dirpath)
-        first_shouldmatch = '[%s]\t%s' % (
-            self.logfile_path, self.lastline)
-        self.assertEquals(lines[0], first_shouldmatch)
-        monitors_util.snuff(procs.values())
-
-
-    def test_follow_files(self):
-        follow_paths = [self.logfile_path]
-        procs, pipes = monitors_util.launch_tails(
-            follow_paths, self.lastlines_dirpath)
-        lines, bad_pipes = monitors_util.poll_tail_pipes(
-            pipes, self.lastlines_dirpath)
-        first_shouldmatch = '[%s]\t%s' % (
-            self.logfile_path, self.line_after_lastline_seen)
-        self.assertEquals(lines[0], first_shouldmatch)
-        monitors_util.snuff(procs.values())
-        last_shouldmatch = '[%s]\t%s' % (self.logfile_path, self.lastline)
-        self.assertEquals(lines[-1], last_shouldmatch)
-
-
-if __name__ == '__main__':
-    unittest.main()
diff --git a/server/hosts/netconsole.py b/server/hosts/netconsole.py
deleted file mode 100644
index f8e45b6..0000000
--- a/server/hosts/netconsole.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import os, re, sys, subprocess, socket
-
-from autotest_lib.client.common_lib import utils, error
-from autotest_lib.server.hosts import remote
-
-
-class NetconsoleHost(remote.RemoteHost):
-    def _initialize(self, console_log="netconsole.log", *args, **dargs):
-        super(NetconsoleHost, self)._initialize(*args, **dargs)
-
-        self.__logger = None
-        self.__console_log = console_log
-
-        # get a socket for us to listen on
-        self.__socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
-        self.__socket.bind(('', 0))
-        self.__port = self.__socket.getsockname()[1]
-
-
-    @classmethod
-    def host_is_supported(cls, run_func):
-        local_ip = socket.gethostbyname(socket.gethostname())
-        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
-        s.settimeout(1)
-        s.bind((local_ip, 0))
-        local_port = s.getsockname()[1]
-        send_udp_packet = (
-            """python -c "from socket import *; """
-            """s = socket(AF_INET, SOCK_DGRAM); """
-            """s.sendto('ping', ('%s', %d))" """ % (local_ip, local_port))
-        run_func(send_udp_packet)
-        try:
-            msg = s.recv(4)
-        except Exception:
-            supported = False
-        else:
-            supported = (msg == "ping")
-        s.close()
-        return supported
-
-
-    def start_loggers(self):
-        super(NetconsoleHost, self).start_loggers()
-
-        if not self.__console_log:
-            return
-
-        self.__netconsole_params = self.__determine_netconsole_params()
-        if self.__netconsole_params is None:
-            return
-
-        r, w = os.pipe()
-        script_path = os.path.join(self.monitordir, "console.py")
-        cmd = [sys.executable, script_path, self.__console_log, str(w)]
-
-        self.__warning_stream = os.fdopen(r, "r", 0)
-        if self.job:
-            self.job.warning_loggers.add(self.__warning_stream)
-
-        stdin = self.__socket.fileno()
-        stdout = stderr = open(os.devnull, "w")
-        self.__logger = subprocess.Popen(cmd, stdin=stdin, stdout=stdout,
-                                         stderr=stderr)
-        os.close(w)
-
-        self.__unload_netconsole_module()
-        self.__load_netconsole_module()
-
-
-    def stop_loggers(self):
-        super(NetconsoleHost, self).stop_loggers()
-
-        if self.__logger:
-            utils.nuke_subprocess(self.__logger)
-            self.__logger = None
-            if self.job:
-                self.job.warning_loggers.discard(self.__warning_stream)
-            self.__warning_stream.close()
-
-
-    def reboot_setup(self, *args, **dargs):
-        super(NetconsoleHost, self).reboot_setup(*args, **dargs)
-
-        if self.__netconsole_params is not None:
-            label = dargs.get("label", None)
-            if not label:
-                label = self.bootloader.get_default_title()
-            args = "debug " + self.__netconsole_params
-            self.bootloader.add_args(label, args)
-        self.__unload_netconsole_module()
-
-
-    def reboot_followup(self, *args, **dargs):
-        super(NetconsoleHost, self).reboot_followup(*args, **dargs)
-        self.__load_netconsole_module()
-
-
-    def __determine_netconsole_params(self):
-        """
-        Connect to the remote machine and determine the values to use for the
-        required netconsole parameters.
-        """
-        # determine the IP addresses of the local and remote machine
-        # PROBLEM: on machines with multiple IPs this may not make any sense
-        # It also doesn't work with IPv6
-        remote_ip = socket.gethostbyname(self.hostname)
-        local_ip = socket.gethostbyname(socket.gethostname())
-
-        # Get the gateway of the remote machine
-        try:
-            traceroute = self.run('traceroute -n %s' % local_ip)
-        except error.AutoservRunError:
-            return
-        first_node = traceroute.stdout.split("\n")[0]
-        match = re.search(r'\s+((\d+\.){3}\d+)\s+', first_node)
-        if match:
-            router_ip = match.group(1)
-        else:
-            return
-
-        # Look up the MAC address of the gateway
-        try:
-            self.run('ping -c 1 %s' % router_ip)
-            arp = self.run('arp -n -a %s' % router_ip)
-        except error.AutoservRunError:
-            return
-        match = re.search(r'\s+(([0-9A-F]{2}:){5}[0-9A-F]{2})\s+', arp.stdout)
-        if match:
-            gateway_mac = match.group(1)
-        else:
-            return None
-
-        return 'netconsole=@%s/,%s@%s/%s' % (remote_ip, self.__port, local_ip,
-                                             gateway_mac)
-
-
-    def __load_netconsole_module(self):
-        """
-        Make a best effort to load the netconsole module.
-
-        Note that loading the module can fail even when the remote machine is
-        working correctly if netconsole is already compiled into the kernel
-        and started.
-        """
-        if self.__netconsole_params is None:
-            return
-
-        try:
-            self.run('dmesg -n 8')
-            self.run('modprobe netconsole %s' % self.__netconsole_params)
-        except error.AutoservRunError, e:
-            # if it fails there isn't much we can do, just keep going
-            print "ERROR occured while loading netconsole: %s" % e
-
-
-    def __unload_netconsole_module(self):
-        try:
-            self.run('modprobe -r netconsole')
-        except error.AutoservRunError:
-            pass
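
The module parameter string built by __determine_netconsole_params() expands
to, for example (addresses made up):

    #   netconsole=@10.0.0.2/,37203@10.0.0.1/00:16:3E:AA:BB:CC
    # i.e. log from the target (10.0.0.2, any source port) to UDP port 37203
    # on the autoserv machine (10.0.0.1), addressed to the gateway's MAC.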
diff --git a/server/hosts/paramiko_host.py b/server/hosts/paramiko_host.py
deleted file mode 100644
index d68b15e..0000000
--- a/server/hosts/paramiko_host.py
+++ /dev/null
@@ -1,310 +0,0 @@
-import os, sys, time, signal, socket, re, fnmatch, logging, threading
-import paramiko
-
-from autotest_lib.client.common_lib import utils, error, global_config
-from autotest_lib.server import subcommand
-from autotest_lib.server.hosts import abstract_ssh
-
-
-class ParamikoHost(abstract_ssh.AbstractSSHHost):
-    KEEPALIVE_TIMEOUT_SECONDS = 30
-    CONNECT_TIMEOUT_SECONDS = 30
-    CONNECT_TIMEOUT_RETRIES = 3
-    BUFFSIZE = 2**16
-
-    def _initialize(self, hostname, *args, **dargs):
-        super(ParamikoHost, self)._initialize(hostname=hostname, *args, **dargs)
-
-        # paramiko is very noisy, tone down the logging
-        paramiko.util.log_to_file("/dev/null", paramiko.util.ERROR)
-
-        self.keys = self.get_user_keys(hostname)
-        self.pid = None
-
-
-    @staticmethod
-    def _load_key(path):
-        """Given a path to a private key file, load the appropriate keyfile.
-
-        Tries to load the file as both an RSAKey and a DSAKey. If the file
-        cannot be loaded as either type, returns None."""
-        try:
-            return paramiko.DSSKey.from_private_key_file(path)
-        except paramiko.SSHException:
-            try:
-                return paramiko.RSAKey.from_private_key_file(path)
-            except paramiko.SSHException:
-                return None
-
-
-    @staticmethod
-    def _parse_config_line(line):
-        """Given an ssh config line, return a (key, value) tuple for the
-        config value listed in the line, or (None, None)"""
-        match = re.match(r"\s*(\w+)\s*=?(.*)\n", line)
-        if match:
-            return match.groups()
-        else:
-            return None, None
-
-
-    @staticmethod
-    def get_user_keys(hostname):
-        """Returns a mapping of path -> paramiko.PKey entries available for
-        this user. Keys are found in the default locations (~/.ssh/id_[d|r]sa)
-        as well as any IdentityFile entries in the standard ssh config files.
-        """
-        raw_identity_files = ["~/.ssh/id_dsa", "~/.ssh/id_rsa"]
-        for config_path in ("/etc/ssh/ssh_config", "~/.ssh/config"):
-            config_path = os.path.expanduser(config_path)
-            if not os.path.exists(config_path):
-                continue
-            host_pattern = "*"
-            config_lines = open(config_path).readlines()
-            for line in config_lines:
-                key, value = ParamikoHost._parse_config_line(line)
-                if key == "Host":
-                    host_pattern = value
-                elif (key == "IdentityFile"
-                      and fnmatch.fnmatch(hostname, host_pattern)):
-                    raw_identity_files.append(value)
-
-        # drop any files that use percent-escapes; we don't support them
-        identity_files = []
-        UNSUPPORTED_ESCAPES = ["%d", "%u", "%l", "%h", "%r"]
-        for path in raw_identity_files:
-            # skip this path if it uses % escapes
-            if sum((escape in path) for escape in UNSUPPORTED_ESCAPES):
-                continue
-            path = os.path.expanduser(path)
-            if os.path.exists(path):
-                identity_files.append(path)
-
-        # load up all the keys that we can and return them
-        user_keys = {}
-        for path in identity_files:
-            key = ParamikoHost._load_key(path)
-            if key:
-                user_keys[path] = key
-
-        # load up all the ssh agent keys
-        use_sshagent = global_config.global_config.get_config_value(
-            'AUTOSERV', 'use_sshagent_with_paramiko', type=bool)
-        if use_sshagent:
-            ssh_agent = paramiko.Agent()
-            for i, key in enumerate(ssh_agent.get_keys()):
-                user_keys['agent-key-%d' % i] = key
-
-        return user_keys
-
-
-    def _check_transport_error(self, transport):
-        error = transport.get_exception()
-        if error:
-            transport.close()
-            raise error
-
-
-    def _connect_socket(self):
-        """Return a socket for use in instantiating a paramiko transport. Does
-        not have to be a literal socket, it can be anything that the
-        paramiko.Transport constructor accepts."""
-        return self.hostname, self.port
-
-
-    def _connect_transport(self, pkey):
-        for _ in xrange(self.CONNECT_TIMEOUT_RETRIES):
-            transport = paramiko.Transport(self._connect_socket())
-            completed = threading.Event()
-            transport.start_client(completed)
-            completed.wait(self.CONNECT_TIMEOUT_SECONDS)
-            if completed.isSet():
-                self._check_transport_error(transport)
-                completed.clear()
-                transport.auth_publickey(self.user, pkey, completed)
-                completed.wait(self.CONNECT_TIMEOUT_SECONDS)
-                if completed.isSet():
-                    self._check_transport_error(transport)
-                    if not transport.is_authenticated():
-                        transport.close()
-                        raise paramiko.AuthenticationException()
-                    return transport
-            logging.warn("SSH negotiation (%s:%d) timed out, retrying",
-                         self.hostname, self.port)
-            # HACK: we can't count on transport.join not hanging now, either
-            transport.join = lambda: None
-            transport.close()
-        logging.error("SSH negotation (%s:%d) has timed out %s times, "
-                      "giving up", self.hostname, self.port,
-                      self.CONNECT_TIMEOUT_RETRIES)
-        raise error.AutoservSSHTimeout("SSH negotiation timed out")
-
-
-    def _init_transport(self):
-        for path, key in self.keys.iteritems():
-            try:
-                logging.debug("Connecting with %s", path)
-                transport = self._connect_transport(key)
-                transport.set_keepalive(self.KEEPALIVE_TIMEOUT_SECONDS)
-                self.transport = transport
-                self.pid = os.getpid()
-                return
-            except paramiko.AuthenticationException:
-                logging.debug("Authentication failure")
-        else:
-            raise error.AutoservSshPermissionDeniedError(
-                "Permission denied using all keys available to ParamikoHost",
-                utils.CmdResult())
-
-
-    def _open_channel(self, timeout):
-        start_time = time.time()
-        if os.getpid() != self.pid:
-            if self.pid is not None:
-                # HACK: paramiko tries to join() on its worker thread
-                # and this just hangs on linux after a fork()
-                self.transport.join = lambda: None
-                self.transport.atfork()
-                join_hook = lambda cmd: self._close_transport()
-                subcommand.subcommand.register_join_hook(join_hook)
-                logging.debug("Reopening SSH connection after a process fork")
-            self._init_transport()
-
-        channel = None
-        try:
-            channel = self.transport.open_session()
-        except (socket.error, paramiko.SSHException, EOFError), e:
-            logging.warn("Exception occured while opening session: %s", e)
-            if time.time() - start_time >= timeout:
-                raise error.AutoservSSHTimeout("ssh failed: %s" % e)
-
-        if not channel:
-            # we couldn't get a channel; re-initing transport should fix that
-            try:
-                self.transport.close()
-            except Exception, e:
-                logging.debug("paramiko.Transport.close failed with %s", e)
-            self._init_transport()
-            return self.transport.open_session()
-        else:
-            return channel
-
-
-    def _close_transport(self):
-        if os.getpid() == self.pid:
-            self.transport.close()
-
-
-    def close(self):
-        super(ParamikoHost, self).close()
-        self._close_transport()
-
-
-    @classmethod
-    def _exhaust_stream(cls, tee, output_list, recvfunc):
-        while True:
-            try:
-                output_list.append(recvfunc(cls.BUFFSIZE))
-            except socket.timeout:
-                return
-            tee.write(output_list[-1])
-            if not output_list[-1]:
-                return
-
-
-    @classmethod
-    def __send_stdin(cls, channel, stdin):
-        if not stdin or not channel.send_ready():
-            # nothing more to send or just no space to send now
-            return
-
-        sent = channel.send(stdin[:cls.BUFFSIZE])
-        if not sent:
-            logging.warn('Could not send a single stdin byte.')
-        else:
-            stdin = stdin[sent:]
-            if not stdin:
-                # no more stdin input, close output direction
-                channel.shutdown_write()
-        return stdin
-
-
-    def run(self, command, timeout=3600, ignore_status=False,
-            stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS,
-            connect_timeout=30, stdin=None, verbose=True, args=()):
-        """
-        Run a command on the remote host.
-        @see common_lib.hosts.host.run()
-
-        @param connect_timeout: connection timeout (in seconds)
-        @param verbose: log the commands
-
-        @raises AutoservRunError: if the command failed
-        @raises AutoservSSHTimeout: ssh connection has timed out
-        """
-
-        stdout = utils.get_stream_tee_file(
-                stdout_tee, utils.DEFAULT_STDOUT_LEVEL,
-                prefix=utils.STDOUT_PREFIX)
-        stderr = utils.get_stream_tee_file(
-                stderr_tee, utils.get_stderr_level(ignore_status),
-                prefix=utils.STDERR_PREFIX)
-
-        for arg in args:
-            command += ' "%s"' % utils.sh_escape(arg)
-
-        if verbose:
-            logging.debug("Running (ssh-paramiko) '%s'" % command)
-
-        # start up the command
-        start_time = time.time()
-        try:
-            channel = self._open_channel(timeout)
-            channel.exec_command(command)
-        except (socket.error, paramiko.SSHException, EOFError), e:
-            # This has to match the string from paramiko *exactly*.
-            if str(e) != 'Channel closed.':
-                raise error.AutoservSSHTimeout("ssh failed: %s" % e)
-
-        # pull in all the stdout, stderr until the command terminates
-        raw_stdout, raw_stderr = [], []
-        timed_out = False
-        while not channel.exit_status_ready():
-            if channel.recv_ready():
-                raw_stdout.append(channel.recv(self.BUFFSIZE))
-                stdout.write(raw_stdout[-1])
-            if channel.recv_stderr_ready():
-                raw_stderr.append(channel.recv_stderr(self.BUFFSIZE))
-                stderr.write(raw_stderr[-1])
-            if timeout and time.time() - start_time > timeout:
-                timed_out = True
-                break
-            stdin = self.__send_stdin(channel, stdin)
-            time.sleep(1)
-
-        if timed_out:
-            exit_status = -signal.SIGTERM
-        else:
-            exit_status = channel.recv_exit_status()
-        channel.settimeout(10)
-        self._exhaust_stream(stdout, raw_stdout, channel.recv)
-        self._exhaust_stream(stderr, raw_stderr, channel.recv_stderr)
-        channel.close()
-        duration = time.time() - start_time
-
-        # create the appropriate results
-        stdout = "".join(raw_stdout)
-        stderr = "".join(raw_stderr)
-        result = utils.CmdResult(command, stdout, stderr, exit_status,
-                                 duration)
-        if exit_status == -signal.SIGHUP:
-            msg = "ssh connection unexpectedly terminated"
-            raise error.AutoservRunError(msg, result)
-        if timed_out:
-            logging.warn('Paramiko command timed out after %s sec: %s', timeout,
-                         command)
-            raise error.AutoservRunError("command timed out", result)
-        if not ignore_status and exit_status:
-            raise error.AutoservRunError(command, result)
-        return result
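
A usage sketch (direct construction mirrors the RemoteHost unittest below; the
hostname is made up):

    host = ParamikoHost('testbox')
    result = host.run('uname -r', timeout=60)
    print result.stdout.strip()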
diff --git a/server/hosts/remote.py b/server/hosts/remote.py
deleted file mode 100644
index d1b4b46..0000000
--- a/server/hosts/remote.py
+++ /dev/null
@@ -1,272 +0,0 @@
-"""This class defines the Remote host class, mixing in the SiteHost class
-if it is available."""
-
-import os, logging, urllib
-from autotest_lib.client.common_lib import error
-from autotest_lib.server import utils
-from autotest_lib.server.hosts import base_classes, bootloader
-
-
-class RemoteHost(base_classes.Host):
-    """
-    This class represents a remote machine on which you can run
-    programs.
-
-    It may be accessed through a network, a serial line, ...
-    It is not the machine autoserv is running on.
-
-    Implementation details:
-    This is an abstract class, leaf subclasses must implement the methods
-    listed here and in parent classes which have no implementation. They
-    may reimplement methods which already have an implementation. You
-    must not instantiate this class but should instantiate one of those
-    leaf subclasses.
-    """
-
-    DEFAULT_REBOOT_TIMEOUT = base_classes.Host.DEFAULT_REBOOT_TIMEOUT
-    LAST_BOOT_TAG = object()
-    DEFAULT_HALT_TIMEOUT = 2 * 60
-
-    VAR_LOG_MESSAGES_COPY_PATH = "/var/tmp/messages.autotest_start"
-
-    def _initialize(self, hostname, autodir=None, *args, **dargs):
-        super(RemoteHost, self)._initialize(*args, **dargs)
-
-        self.hostname = hostname
-        self.autodir = autodir
-        self.tmp_dirs = []
-
-
-    def __repr__(self):
-        return "<remote host: %s>" % self.hostname
-
-
-    def close(self):
-        super(RemoteHost, self).close()
-        self.stop_loggers()
-
-        if hasattr(self, 'tmp_dirs'):
-            for dir in self.tmp_dirs:
-                try:
-                    self.run('rm -rf "%s"' % (utils.sh_escape(dir)))
-                except error.AutoservRunError:
-                    pass
-
-
-    def job_start(self):
-        """
-        Hook method, called the first time a remote host object
-        is created for a specific host after a job starts.
-
-        This method depends on the create_host factory being used to
-        construct your host object. If you directly construct host objects
-        you will need to call this method yourself (and enforce the
-        single-call rule).
-        """
-        try:
-            self.run('rm -f %s' % self.VAR_LOG_MESSAGES_COPY_PATH)
-            self.run('cp /var/log/messages %s' %
-                     self.VAR_LOG_MESSAGES_COPY_PATH)
-        except Exception, e:
-            # Non-fatal error
-            logging.info('Failed to copy /var/log/messages at startup: %s', e)
-
-
-    def get_autodir(self):
-        return self.autodir
-
-
-    def set_autodir(self, autodir):
-        """
-        This method is called to make the host object aware of the
-        where autotest is installed. Called in server/autotest.py
-        after a successful install
-        """
-        self.autodir = autodir
-
-
-    def sysrq_reboot(self):
-        self.run('echo b > /proc/sysrq-trigger &')
-
-
-    def halt(self, timeout=DEFAULT_HALT_TIMEOUT, wait=True):
-        self.run('/sbin/halt')
-        if wait:
-            self.wait_down(timeout=timeout)
-
-
-    def reboot(self, timeout=DEFAULT_REBOOT_TIMEOUT, label=LAST_BOOT_TAG,
-               kernel_args=None, wait=True, fastsync=False,
-               reboot_cmd=None, **dargs):
-        """
-        Reboot the remote host.
-
-        Args:
-                timeout - How long to wait for the reboot.
-                label - The label we should boot into.  If None, we will
-                        boot into the default kernel.  If it's LAST_BOOT_TAG,
-                        we'll boot into whichever kernel was .boot'ed last
-                        (or the default kernel if we haven't .boot'ed in this
-                        job).  Anything else is used as the label to boot.
-                wait - Should we wait to see if the machine comes back up.
-                fastsync - Don't wait for the sync to complete, just start one
-                        and move on. This is for cases where rebooting promptly
-                        is more important than data integrity and/or the
-                        machine may have disks that cause sync to never return.
-                reboot_cmd - Reboot command to execute.
-        """
-        if self.job:
-            if label == self.LAST_BOOT_TAG:
-                label = self.job.last_boot_tag
-            else:
-                self.job.last_boot_tag = label
-
-        self.reboot_setup(label=label, kernel_args=kernel_args, **dargs)
-
-        if label or kernel_args:
-            if not label:
-                label = self.bootloader.get_default_title()
-            self.bootloader.boot_once(label)
-            if kernel_args:
-                self.bootloader.add_args(label, kernel_args)
-
-        # define a function for the reboot and run it in a group
-        print "Reboot: initiating reboot"
-        def reboot():
-            self.record("GOOD", None, "reboot.start")
-            try:
-                current_boot_id = self.get_boot_id()
-
-                # sync before starting the reboot, so that a long sync during
-                # shutdown isn't timed out by wait_down's short timeout
-                if not fastsync:
-                    self.run('sync; sync', timeout=timeout, ignore_status=True)
-
-                if reboot_cmd:
-                    self.run(reboot_cmd)
-                else:
-                    # Try several methods of rebooting in increasing harshness.
-                    self.run('(('
-                             ' sync &'
-                             ' sleep 5; reboot &'
-                             ' sleep 60; reboot -f &'
-                             ' sleep 10; reboot -nf &'
-                             ' sleep 10; telinit 6 &'
-                             ') </dev/null >/dev/null 2>&1 &)')
-            except error.AutoservRunError:
-                self.record("ABORT", None, "reboot.start",
-                              "reboot command failed")
-                raise
-            if wait:
-                self.wait_for_restart(timeout, old_boot_id=current_boot_id,
-                                      **dargs)
-
-        # if this is a full reboot-and-wait, run the reboot inside a group
-        if wait:
-            self.log_reboot(reboot)
-        else:
-            reboot()
-
-
-    def reboot_followup(self, *args, **dargs):
-        super(RemoteHost, self).reboot_followup(*args, **dargs)
-        if self.job:
-            self.job.profilers.handle_reboot(self)
-
-
-    def wait_for_restart(self, timeout=DEFAULT_REBOOT_TIMEOUT, **dargs):
-        """
-        Wait for the host to come back from a reboot. This wraps the
-        generic wait_for_restart implementation in a reboot group.
-        """
-        def reboot_func():
-            super(RemoteHost, self).wait_for_restart(timeout=timeout, **dargs)
-        self.log_reboot(reboot_func)
-
-
-    def cleanup(self):
-        super(RemoteHost, self).cleanup()
-        self.reboot()
-
-
-    def get_tmp_dir(self, parent='/tmp'):
-        """
-        Return the pathname of a directory on the host suitable
-        for temporary file storage.
-
-        The directory and its content will be deleted automatically
-        on the destruction of the Host object that was used to obtain
-        it.
-        """
-        self.run("mkdir -p %s" % parent)
-        template = os.path.join(parent, 'autoserv-XXXXXX')
-        dir_name = self.run("mktemp -d %s" % template).stdout.rstrip()
-        self.tmp_dirs.append(dir_name)
-        return dir_name
-
-
-    def get_platform_label(self):
-        """
-        Return the platform label, or None if platform label is not set.
-        """
-
-        if self.job:
-            keyval_path = os.path.join(self.job.resultdir, 'host_keyvals',
-                                       self.hostname)
-            keyvals = utils.read_keyval(keyval_path)
-            return keyvals.get('platform', None)
-        else:
-            return None
-
-
-    def get_all_labels(self):
-        """
-        Return all labels, or an empty list if no labels are set.
-        """
-        if self.job:
-            keyval_path = os.path.join(self.job.resultdir, 'host_keyvals',
-                                       self.hostname)
-            keyvals = utils.read_keyval(keyval_path)
-            all_labels = keyvals.get('labels', '')
-            if all_labels:
-                all_labels = all_labels.split(',')
-                return [urllib.unquote(label) for label in all_labels]
-        return []
-
-
-    def delete_tmp_dir(self, tmpdir):
-        """
-        Delete the given temporary directory on the remote machine.
-        """
-        self.run('rm -rf "%s"' % utils.sh_escape(tmpdir), ignore_status=True)
-        self.tmp_dirs.remove(tmpdir)
-
-
-    def check_uptime(self):
-        """
-        Check that uptime is available and monotonically increasing.
-        """
-        if not self.is_up():
-            raise error.AutoservHostError('Client does not appear to be up')
-        result = self.run("/bin/cat /proc/uptime", 30)
-        return result.stdout.strip().split()[0]
-
-
-    def are_wait_up_processes_up(self):
-        """
-        Checks if any HOSTS waitup processes are running yet on the
-        remote host.
-
-        Returns True if any of the waitup processes are running, False
-        otherwise.
-        """
-        processes = self.get_wait_up_processes()
-        if len(processes) == 0:
-            return True # wait up processes aren't being used
-        for procname in processes:
-            exit_status = self.run("{ ps -e || ps; } | grep '%s'" % procname,
-                                   ignore_status=True).exit_status
-            if exit_status == 0:
-                return True
-        return False
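
For context, the RemoteHost API removed above reappears under
client/common_lib/hosts (see patch 2/3). A minimal usage sketch under the
post-series import path; the hostname is hypothetical:

    from autotest_lib.client.common_lib import hosts

    host = hosts.create_host("192.168.122.130")  # builds a RemoteHost subclass
    host.run("uname -r")                         # run a command over ssh
    tmpdir = host.get_tmp_dir()                  # deleted when the Host is destroyed
    host.reboot(wait=True)                       # reboot and wait for restart
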
diff --git a/server/hosts/remote_unittest.py b/server/hosts/remote_unittest.py
deleted file mode 100755
index b1f5c9c..0000000
--- a/server/hosts/remote_unittest.py
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/python
-
-import unittest
-import common
-
-from autotest_lib.server.hosts import remote
-
-
-class test_remote_host(unittest.TestCase):
-    def test_has_hostname(self):
-        host = remote.RemoteHost("myhost")
-        self.assertEqual(host.hostname, "myhost")
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/server/hosts/serial.py b/server/hosts/serial.py
deleted file mode 100644
index d363cb7..0000000
--- a/server/hosts/serial.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import os, sys, subprocess, logging
-
-from autotest_lib.client.common_lib import utils, error
-from autotest_lib.server import utils as server_utils
-from autotest_lib.server.hosts import remote
-
-
-SiteHost = utils.import_site_class(
-    __file__, "autotest_lib.server.hosts.site_host", "SiteHost",
-    remote.RemoteHost)
-
-
-class SerialHost(SiteHost):
-    DEFAULT_REBOOT_TIMEOUT = SiteHost.DEFAULT_REBOOT_TIMEOUT
-
-    def _initialize(self, conmux_server=None, conmux_attach=None,
-                    console_log="console.log", *args, **dargs):
-        super(SerialHost, self)._initialize(*args, **dargs)
-
-        self.__logger = None
-        self.__console_log = console_log
-
-        self.conmux_server = conmux_server
-        self.conmux_attach = self._get_conmux_attach(conmux_attach)
-
-
-    @classmethod
-    def _get_conmux_attach(cls, conmux_attach=None):
-        if conmux_attach:
-            return conmux_attach
-
-        # assume we're using the conmux-attach provided with autotest
-        server_dir = server_utils.get_server_dir()
-        path = os.path.join(server_dir, "..", "conmux", "conmux-attach")
-        path = os.path.abspath(path)
-        return path
-
-
-    @staticmethod
-    def _get_conmux_hostname(hostname, conmux_server):
-        if conmux_server:
-            return "%s/%s" % (conmux_server, hostname)
-        else:
-            return hostname
-
-
-    def get_conmux_hostname(self):
-        return self._get_conmux_hostname(self.hostname, self.conmux_server)
-
-
-    @classmethod
-    def host_is_supported(cls, hostname, conmux_server=None,
-                          conmux_attach=None):
-        """ Returns a boolean indicating if the remote host with "hostname"
-        supports use as a SerialHost """
-        conmux_attach = cls._get_conmux_attach(conmux_attach)
-        conmux_hostname = cls._get_conmux_hostname(hostname, conmux_server)
-        cmd = "%s %s echo 2> /dev/null" % (conmux_attach, conmux_hostname)
-        try:
-            result = utils.run(cmd, ignore_status=True, timeout=10)
-            return result.exit_status == 0
-        except error.CmdError:
-            logging.warning("Timed out while trying to attach to conmux")
-
-        return False
-
-
-    def start_loggers(self):
-        super(SerialHost, self).start_loggers()
-
-        if self.__console_log is None:
-            return
-
-        if not self.conmux_attach or not os.path.exists(self.conmux_attach):
-            return
-
-        r, w = os.pipe()
-        script_path = os.path.join(self.monitordir, 'console.py')
-        cmd = [self.conmux_attach, self.get_conmux_hostname(),
-               '%s %s %s %d' % (sys.executable, script_path,
-                                self.__console_log, w)]
-
-        self.__warning_stream = os.fdopen(r, 'r', 0)
-        if self.job:
-            self.job.warning_loggers.add(self.__warning_stream)
-
-        stdout = stderr = open(os.devnull, 'w')
-        self.__logger = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
-        os.close(w)
-
-
-    def stop_loggers(self):
-        super(SerialHost, self).stop_loggers()
-
-        if self.__logger:
-            utils.nuke_subprocess(self.__logger)
-            self.__logger = None
-            if self.job:
-                self.job.warning_loggers.discard(self.__warning_stream)
-            self.__warning_stream.close()
-
-
-    def run_conmux(self, cmd):
-        """
-        Send a command to the conmux session
-        """
-        if not self.conmux_attach or not os.path.exists(self.conmux_attach):
-            return False
-        cmd = '%s %s echo %s 2> /dev/null' % (self.conmux_attach,
-                                              self.get_conmux_hostname(),
-                                              cmd)
-        result = utils.system(cmd, ignore_status=True)
-        return result == 0
-
-
-    def hardreset(self, timeout=DEFAULT_REBOOT_TIMEOUT, wait=True,
-                  conmux_command='hardreset', num_attempts=1, halt=False,
-                  **wait_for_restart_kwargs):
-        """
-        Reach out and slap the box in the power switch.
-        @param conmux_command: The command to run via the conmux interface
-        @param timeout: time limit in seconds before the machine is
-                        considered unreachable
-        @param wait: Whether or not to wait for the machine to reboot
-        @param num_attempts: Number of times to attempt hard reset, erroring
-                             on the last attempt.
-        @param halt: Halts the machine before hardresetting.
-        @param wait_for_restart_kwargs: keyword arguments passed to
-                wait_for_restart()
-        """
-        conmux_command = "'~$%s'" % conmux_command
-
-        # if the machine is up, grab the old boot id, otherwise use a dummy
-        # string and NOT None to ensure that wait_down always returns True,
-        # even if the machine comes back up before it's called
-        try:
-            old_boot_id = self.get_boot_id()
-        except error.AutoservSSHTimeout:
-            old_boot_id = 'unknown boot_id prior to SerialHost.hardreset'
-
-        def reboot():
-            if halt:
-                self.halt()
-            if not self.run_conmux(conmux_command):
-                self.record("ABORT", None, "reboot.start",
-                            "hard reset unavailable")
-                raise error.AutoservUnsupportedError(
-                    'Hard reset unavailable')
-            self.record("GOOD", None, "reboot.start", "hard reset")
-            if wait:
-                warning_msg = ('Serial console failed to respond to hard reset '
-                               'attempt (%s/%s)')
-                for attempt in xrange(num_attempts-1):
-                    try:
-                        self.wait_for_restart(timeout, log_failure=False,
-                                              old_boot_id=old_boot_id,
-                                              **wait_for_restart_kwargs)
-                    except error.AutoservShutdownError:
-                        logging.warning(warning_msg, attempt+1, num_attempts)
-                        # re-send the hard reset command
-                        self.run_conmux(conmux_command)
-                    else:
-                        break
-                else:
-                    # Run on num_attempts=1 or last retry
-                    try:
-                        self.wait_for_restart(timeout,
-                                              old_boot_id=old_boot_id,
-                                              **wait_for_restart_kwargs)
-                    except error.AutoservShutdownError:
-                        logging.warning(warning_msg, num_attempts, num_attempts)
-                        msg = "Host did not shutdown"
-                        raise error.AutoservShutdownError(msg)
-
-        if self.job:
-            self.job.disable_warnings("POWER_FAILURE")
-        try:
-            if wait:
-                self.log_reboot(reboot)
-            else:
-                reboot()
-        finally:
-            if self.job:
-                self.job.enable_warnings("POWER_FAILURE")
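
A short sketch of the hardreset() flow above, assuming a conmux-backed setup
and that SerialHost can be constructed directly with these keywords (the
hostname and conmux server are hypothetical):

    host = SerialHost(hostname="myhost", conmux_server="conmux.example.com")
    if SerialHost.host_is_supported("myhost",
                                    conmux_server="conmux.example.com"):
        # power-cycle the box, retrying and erroring on the last attempt
        host.hardreset(num_attempts=3, wait=True)
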
diff --git a/server/hosts/site_factory.py b/server/hosts/site_factory.py
deleted file mode 100644
index 59f7053..0000000
--- a/server/hosts/site_factory.py
+++ /dev/null
@@ -1,4 +0,0 @@
-def postprocess_classes(classes, hostname, **args):
-    # by default, do nothing
-    # insert site-specific processing of the class list here
-    pass
diff --git a/server/hosts/ssh_host.py b/server/hosts/ssh_host.py
deleted file mode 100644
index 2f5a080..0000000
--- a/server/hosts/ssh_host.py
+++ /dev/null
@@ -1,245 +0,0 @@
-#
-# Copyright 2007 Google Inc. Released under the GPL v2
-
-"""
-This module defines the SSHHost class.
-
-Implementation details:
-You should import the "hosts" package instead of importing each type of host.
-
-        SSHHost: a remote machine with ssh access
-"""
-
-import sys, re, traceback, logging
-from autotest_lib.client.common_lib import error, pxssh
-from autotest_lib.server import utils
-from autotest_lib.server.hosts import abstract_ssh
-
-
-class SSHHost(abstract_ssh.AbstractSSHHost):
-    """
-    This class represents a remote machine controlled through an ssh
-    session on which you can run programs.
-
-    It is not the machine autoserv is running on. The machine must be
-    configured for password-less login, for example through public key
-    authentication.
-
-    It includes support for controlling the machine through a serial
-    console on which you can run programs. If such a serial console is
-    set up on the machine then capabilities such as hard reset and
-    bootstrap monitoring are available. If the machine does not have a
-    serial console available then ordinary SSH-based commands will
-    still be available, but attempts to use extensions such as
-    console logging or hard reset will fail silently.
-
-    Implementation details:
-    This is a leaf class in an abstract class hierarchy; it must
-    implement the unimplemented methods in parent classes.
-    """
-
-    def _initialize(self, hostname, *args, **dargs):
-        """
-        Construct a SSHHost object
-
-        Args:
-                hostname: network hostname or address of remote machine
-        """
-        super(SSHHost, self)._initialize(hostname=hostname, *args, **dargs)
-        self.setup_ssh()
-
-
-    def ssh_command(self, connect_timeout=30, options='', alive_interval=300):
-        """
-        Construct an ssh command with proper args for this host.
-        """
-        options = "%s %s" % (options, self.master_ssh_option)
-        base_cmd = abstract_ssh.make_ssh_command(user=self.user, port=self.port,
-                                                opts=options,
-                                                hosts_file=self.known_hosts_fd,
-                                                connect_timeout=connect_timeout,
-                                                alive_interval=alive_interval)
-        return "%s %s" % (base_cmd, self.hostname)
-
-
-    def _run(self, command, timeout, ignore_status, stdout, stderr,
-             connect_timeout, env, options, stdin, args):
-        """Helper function for run()."""
-        ssh_cmd = self.ssh_command(connect_timeout, options)
-        if not env.strip():
-            env = ""
-        else:
-            env = "export %s;" % env
-        for arg in args:
-            command += ' "%s"' % utils.sh_escape(arg)
-        full_cmd = '%s "%s %s"' % (ssh_cmd, env, utils.sh_escape(command))
-        result = utils.run(full_cmd, timeout, True, stdout, stderr,
-                           verbose=False, stdin=stdin,
-                           stderr_is_expected=ignore_status)
-
-        # The error messages will show up in band (indistinguishable
-        # from stuff sent through the SSH connection), so we have the
-        # remote computer echo the message "Connected." before running
-        # any command.  Since the following 2 errors have to do with
-        # connecting, it's safe to do these checks.
-        if result.exit_status == 255:
-            if re.search(r'^ssh: connect to host .* port .*: '
-                         r'Connection timed out\r$', result.stderr):
-                raise error.AutoservSSHTimeout("ssh timed out", result)
-            if "Permission denied." in result.stderr:
-                msg = "ssh permission denied"
-                raise error.AutoservSshPermissionDeniedError(msg, result)
-
-        if not ignore_status and result.exit_status > 0:
-            raise error.AutoservRunError("command execution error", result)
-
-        return result
-
-
-    def run(self, command, timeout=3600, ignore_status=False,
-            stdout_tee=utils.TEE_TO_LOGS, stderr_tee=utils.TEE_TO_LOGS,
-            connect_timeout=30, options='', stdin=None, verbose=True, args=()):
-        """
-        Run a command on the remote host.
-        @see common_lib.hosts.host.run()
-
-        @param connect_timeout: connection timeout (in seconds)
-        @param options: string with additional ssh command options
-        @param verbose: log the commands
-
-        @raises AutoservRunError: if the command failed
-        @raises AutoservSSHTimeout: ssh connection has timed out
-        """
-        if verbose:
-            logging.debug("Running (ssh) '%s'" % command)
-
-        # Start a master SSH connection if necessary.
-        self.start_master_ssh()
-
-        env = " ".join("=".join(pair) for pair in self.env.iteritems())
-        try:
-            return self._run(command, timeout, ignore_status, stdout_tee,
-                             stderr_tee, connect_timeout, env, options,
-                             stdin, args)
-        except error.CmdError, cmderr:
-            # We get a CmdError here only if there is timeout of that command.
-            # Catch that and stuff it into AutoservRunError and raise it.
-            raise error.AutoservRunError(cmderr.args[0], cmderr.args[1])
-
-
-    def run_short(self, command, **kwargs):
-        """
-        Calls the run() command with a short default timeout.
-
-        Args:
-                Takes the same arguments as run(), with the
-                exception of the timeout argument, which here
-                is fixed at 60 seconds.
-                It returns the result of run().
-        """
-        return self.run(command, timeout=60, **kwargs)
-
-
-    def run_grep(self, command, timeout=30, ignore_status=False,
-                             stdout_ok_regexp=None, stdout_err_regexp=None,
-                             stderr_ok_regexp=None, stderr_err_regexp=None,
-                             connect_timeout=30):
-        """
-        Run a command on the remote host and look for regexp
-        in stdout or stderr to determine if the command was
-        successful or not.
-
-        Args:
-                command: the command line string
-                timeout: time limit in seconds before attempting to
-                        kill the running process. The run() function
-                        will take a few seconds longer than 'timeout'
-                        to complete if it has to kill the process.
-                ignore_status: do not raise an exception, no matter
-                        what the exit code of the command is.
-                stdout_ok_regexp: regexp that should be in stdout
-                        if the command was successful.
-                stdout_err_regexp: regexp that should be in stdout
-                        if the command failed.
-                stderr_ok_regexp: regexp that should be in stderr
-                        if the command was successful.
-                stderr_err_regexp: regexp that should be in stderr
-                        if the command failed.
-
-        Returns:
-                if the command was successful; raises an exception
-                otherwise.
-
-        Raises:
-                AutoservRunError:
-                - If the exit code of the command execution was not 0.
-                - If stderr_err_regexp is found in stderr.
-                - If stdout_err_regexp is found in stdout.
-                - If stderr_ok_regexp is not found in stderr.
-                - If stdout_ok_regexp is not found in stdout.
-        """
-
-        # We ignore the status, because we will handle it at the end.
-        result = self.run(command, timeout, ignore_status=True,
-                          connect_timeout=connect_timeout,
-                          stderr_is_expected=ignore_status)
-
-        # Look for the patterns, in order
-        for (regexp, stream) in ((stderr_err_regexp, result.stderr),
-                                 (stdout_err_regexp, result.stdout)):
-            if regexp and stream:
-                err_re = re.compile(regexp)
-                if err_re.search(stream):
-                    raise error.AutoservRunError(
-                        '%s failed, found error pattern: "%s"' % (command,
-                                                                regexp), result)
-
-        for (regexp, stream) in ((stderr_ok_regexp, result.stderr),
-                                 (stdout_ok_regexp, result.stdout)):
-            if regexp and stream:
-                ok_re = re.compile(regexp)
-                if ok_re.search(stream):
-                    return
-
-        if not ignore_status and result.exit_status > 0:
-            raise error.AutoservRunError("command execution error", result)
-
-
-    def setup_ssh_key(self):
-        logging.debug('Performing SSH key setup on %s:%d as %s.' %
-                      (self.hostname, self.port, self.user))
-
-        try:
-            host = pxssh.pxssh()
-            host.login(self.hostname, self.user, self.password,
-                        port=self.port)
-            public_key = utils.get_public_key()
-
-            host.sendline('mkdir -p ~/.ssh')
-            host.prompt()
-            host.sendline('chmod 700 ~/.ssh')
-            host.prompt()
-            host.sendline("echo '%s' >> ~/.ssh/authorized_keys; " %
-                            public_key)
-            host.prompt()
-            host.sendline('chmod 600 ~/.ssh/authorized_keys')
-            host.prompt()
-            host.logout()
-
-            logging.debug('SSH key setup complete.')
-
-        except:
-            logging.debug('SSH key setup has failed.')
-            try:
-                host.logout()
-            except:
-                pass
-
-
-    def setup_ssh(self):
-        if self.password:
-            try:
-                self.ssh_ping()
-            except error.AutoservSshPingHostError:
-                self.setup_ssh_key()
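
To illustrate the run()/run_grep() semantics documented above, a minimal
sketch (the command strings and patterns are hypothetical):

    host = SSHHost("myhost")
    result = host.run("cat /proc/uptime", timeout=30)  # returns a result object
    host.run_grep("dmesg",
                  stderr_err_regexp=r"Oops",           # raises if this matches
                  stdout_ok_regexp=r"Linux version")   # raises if this is absent
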
diff --git a/server/kvm.py b/server/kvm.py
index eddf767..724847d 100644
--- a/server/kvm.py
+++ b/server/kvm.py
@@ -15,8 +15,8 @@ stutsman@google.com (Ryan Stutsman)
 
 import os
 
-from autotest_lib.client.common_lib import error
-from autotest_lib.server import hypervisor, utils, hosts
+from autotest_lib.client.common_lib import error, hosts
+from autotest_lib.server import hypervisor, utils
 
 
 _qemu_ifup_script= """\
diff --git a/server/profilers.py b/server/profilers.py
index 9c42637..26adb89 100644
--- a/server/profilers.py
+++ b/server/profilers.py
@@ -1,8 +1,8 @@
 import os, shutil, tempfile, logging
 
 import common
-from autotest_lib.client.common_lib import utils, error, profiler_manager
-from autotest_lib.server import profiler, autotest, standalone_profiler, hosts
+from autotest_lib.client.common_lib import utils, error, profiler_manager, hosts
+from autotest_lib.server import profiler, autotest, standalone_profiler
 
 
 PROFILER_TMPDIR = '/tmp/profilers'
diff --git a/server/rpm_kernel_unittest.py b/server/rpm_kernel_unittest.py
index bec0ea5..6506c08 100755
--- a/server/rpm_kernel_unittest.py
+++ b/server/rpm_kernel_unittest.py
@@ -2,10 +2,10 @@
 
 import unittest, os
 import common
-from autotest_lib.client.common_lib import utils as common_utils
+from autotest_lib.client.common_lib import utils as common_utils, hosts
 from autotest_lib.client.common_lib.test_utils import mock
-from autotest_lib.server import rpm_kernel, utils, hosts
-from autotest_lib.server.hosts import bootloader
+from autotest_lib.server import rpm_kernel, utils
+from autotest_lib.client.common_lib.hosts import bootloader
 
 
 class TestRpmKernel(unittest.TestCase):
diff --git a/server/server_job.py b/server/server_job.py
index 40dcd5b..e3ffbc8 100644
--- a/server/server_job.py
+++ b/server/server_job.py
@@ -11,9 +11,9 @@ import traceback, shutil, warnings, fcntl, pickle, logging, itertools, errno
 from autotest_lib.client.bin import sysinfo
 from autotest_lib.client.common_lib import base_job
 from autotest_lib.client.common_lib import error, log, utils, packages
-from autotest_lib.client.common_lib import logging_manager
-from autotest_lib.server import test, subcommand, profilers
-from autotest_lib.server.hosts import abstract_ssh
+from autotest_lib.client.common_lib import logging_manager, subcommand
+from autotest_lib.server import test, profilers
+from autotest_lib.client.common_lib.hosts import abstract_ssh
 from autotest_lib.tko import db as tko_db, status_lib, utils as tko_utils
 
 
@@ -964,11 +964,12 @@ class base_server_job(base_job.base_job):
         # the front of the control script.
         namespace.update(os=os, sys=sys, logging=logging)
         _import_names('autotest_lib.server',
-                ('hosts', 'autotest', 'kvm', 'git', 'standalone_profiler',
+                ('autotest', 'kvm', 'git', 'standalone_profiler',
                  'source_kernel', 'rpm_kernel', 'deb_kernel', 'git_kernel'))
-        _import_names('autotest_lib.server.subcommand',
+        _import_names('autotest_lib.client.common_lib', ('hosts',))
+        _import_names('autotest_lib.client.common_lib.subcommand',
                       ('parallel', 'parallel_simple', 'subcommand'))
-        _import_names('autotest_lib.server.utils',
+        _import_names('autotest_lib.client.common_lib.utils',
                       ('run', 'get_tmp_dir', 'sh_escape', 'parse_machine'))
         _import_names('autotest_lib.client.common_lib.error')
         _import_names('autotest_lib.client.common_lib.barrier', ('barrier',))
diff --git a/server/source_kernel_unittest.py b/server/source_kernel_unittest.py
index 6771b29..83c66d1 100755
--- a/server/source_kernel_unittest.py
+++ b/server/source_kernel_unittest.py
@@ -3,7 +3,8 @@
 import unittest
 import common
 from autotest_lib.client.common_lib.test_utils import mock
-from autotest_lib.server import source_kernel, autotest, hosts
+from autotest_lib.client.common_lib import hosts
+from autotest_lib.server import source_kernel, autotest
 
 
 class TestSourceKernel(unittest.TestCase):
diff --git a/server/subcommand.py b/server/subcommand.py
deleted file mode 100644
index 8aa2d96..0000000
--- a/server/subcommand.py
+++ /dev/null
@@ -1,263 +0,0 @@
-__author__ = """Copyright Andy Whitcroft, Martin J. Bligh - 2006, 2007"""
-
-import sys, os, subprocess, time, signal, cPickle, logging
-
-from autotest_lib.client.common_lib import error, utils
-
-
-# entry points that use subcommand must set this to their logging manager
-# to get log redirection for subcommands
-logging_manager_object = None
-
-
-def parallel(tasklist, timeout=None, return_results=False):
-    """
-    Run a set of predefined subcommands in parallel.
-
-    @param tasklist: A list of subcommand instances to execute.
-    @param timeout: Number of seconds after which the commands should timeout.
-    @param return_results: If True, instead of an AutoservError being raised
-            on any error, a list of the results/exceptions from the tasks is
-            returned.  [default: False]
-    """
-    run_error = False
-    for task in tasklist:
-        task.fork_start()
-
-    remaining_timeout = None
-    if timeout:
-        endtime = time.time() + timeout
-
-    results = []
-    for task in tasklist:
-        if timeout:
-            remaining_timeout = max(endtime - time.time(), 1)
-        try:
-            status = task.fork_waitfor(timeout=remaining_timeout)
-        except error.AutoservSubcommandError:
-            run_error = True
-        else:
-            if status != 0:
-                run_error = True
-
-        results.append(cPickle.load(task.result_pickle))
-        task.result_pickle.close()
-
-    if return_results:
-        return results
-    elif run_error:
-        message = 'One or more subcommands failed:\n'
-        for task, result in zip(tasklist, results):
-            message += 'task: %s returned/raised: %r\n' % (task, result)
-        raise error.AutoservError(message)
-
-
-def parallel_simple(function, arglist, log=True, timeout=None,
-                    return_results=False):
-    """
-    Each element in the arglist is used to create a subcommand object,
-    where that arg is used both as a subdir name and as a single argument
-    to pass to "function".
-
-    We create a subcommand object for each element in the list,
-    then execute those subcommand objects in parallel.
-
-    NOTE: As an optimization, if len(arglist) == 1 a subcommand is not used.
-
-    @param function: A callable to run in parallel once per arg in arglist.
-    @param arglist: A list of single arguments to be used one per subcommand;
-            typically a list of machine names.
-    @param log: If True, output will be written to a subdirectory
-            named after each subcommand's arg.
-    @param timeout: Number of seconds after which the commands should timeout.
-    @param return_results: If True, instead of an AutoservError being raised
-            on any error, a list of the results/exceptions from the function
-            called on each arg is returned.  [default: False]
-
-    @returns None or a list of results/exceptions.
-    """
-    if not arglist:
-        logging.warn('parallel_simple was called with an empty arglist, '
-                     'did you forget to pass in a list of machines?')
-    # Bypass the multithreading if only one machine.
-    if len(arglist) == 1:
-        arg = arglist[0]
-        if return_results:
-            try:
-                result = function(arg)
-            except Exception, e:
-                return [e]
-            return [result]
-        else:
-            function(arg)
-            return
-
-    subcommands = []
-    for arg in arglist:
-        args = [arg]
-        if log:
-            subdir = str(arg)
-        else:
-            subdir = None
-        subcommands.append(subcommand(function, args, subdir))
-    return parallel(subcommands, timeout, return_results=return_results)
-
-
-class subcommand(object):
-    fork_hooks, join_hooks = [], []
-
-    def __init__(self, func, args, subdir = None):
-        # func(args) - the subcommand to run
-        # subdir     - the subdirectory to log results in
-        if subdir:
-            self.subdir = os.path.abspath(subdir)
-            if not os.path.exists(self.subdir):
-                os.mkdir(self.subdir)
-            self.debug = os.path.join(self.subdir, 'debug')
-            if not os.path.exists(self.debug):
-                os.mkdir(self.debug)
-        else:
-            self.subdir = None
-            self.debug = None
-
-        self.func = func
-        self.args = args
-        self.lambda_function = lambda: func(*args)
-        self.pid = None
-        self.returncode = None
-
-
-    def __str__(self):
-        return str('subcommand(func=%s,  args=%s, subdir=%s)' %
-                   (self.func, self.args, self.subdir))
-
-
-    @classmethod
-    def register_fork_hook(cls, hook):
-        """ Register a function to be called from the child process after
-        forking. """
-        cls.fork_hooks.append(hook)
-
-
-    @classmethod
-    def register_join_hook(cls, hook):
-        """ Register a function to be called when from the child process
-        just before the child process terminates (joins to the parent). """
-        cls.join_hooks.append(hook)
-
-
-    def redirect_output(self):
-        if self.subdir and logging_manager_object:
-            tag = os.path.basename(self.subdir)
-            logging_manager_object.tee_redirect_debug_dir(self.debug, tag=tag)
-
-
-    def fork_start(self):
-        sys.stdout.flush()
-        sys.stderr.flush()
-        r, w = os.pipe()
-        self.returncode = None
-        self.pid = os.fork()
-
-        if self.pid:                            # I am the parent
-            os.close(w)
-            self.result_pickle = os.fdopen(r, 'r')
-            return
-        else:
-            os.close(r)
-
-        # We are the child from this point on. Never return.
-        signal.signal(signal.SIGTERM, signal.SIG_DFL) # clear handler
-        if self.subdir:
-            os.chdir(self.subdir)
-        self.redirect_output()
-
-        try:
-            for hook in self.fork_hooks:
-                hook(self)
-            result = self.lambda_function()
-            os.write(w, cPickle.dumps(result, cPickle.HIGHEST_PROTOCOL))
-            exit_code = 0
-        except Exception, e:
-            logging.exception('function failed')
-            exit_code = 1
-            os.write(w, cPickle.dumps(e, cPickle.HIGHEST_PROTOCOL))
-
-        os.close(w)
-
-        try:
-            for hook in self.join_hooks:
-                hook(self)
-        finally:
-            sys.stdout.flush()
-            sys.stderr.flush()
-            os._exit(exit_code)
-
-
-    def _handle_exitstatus(self, sts):
-        """
-        This is partially borrowed from subprocess.Popen.
-        """
-        if os.WIFSIGNALED(sts):
-            self.returncode = -os.WTERMSIG(sts)
-        elif os.WIFEXITED(sts):
-            self.returncode = os.WEXITSTATUS(sts)
-        else:
-            # Should never happen
-            raise RuntimeError("Unknown child exit status!")
-
-        if self.returncode != 0:
-            print "subcommand failed pid %d" % self.pid
-            print "%s" % (self.func,)
-            print "rc=%d" % self.returncode
-            print
-            if self.debug:
-                stderr_file = os.path.join(self.debug, 'autoserv.stderr')
-                if os.path.exists(stderr_file):
-                    for line in open(stderr_file).readlines():
-                        print line,
-            print "\n--------------------------------------------\n"
-            raise error.AutoservSubcommandError(self.func, self.returncode)
-
-
-    def poll(self):
-        """
-        This is borrowed from subprocess.Popen.
-        """
-        if self.returncode is None:
-            try:
-                pid, sts = os.waitpid(self.pid, os.WNOHANG)
-                if pid == self.pid:
-                    self._handle_exitstatus(sts)
-            except os.error:
-                pass
-        return self.returncode
-
-
-    def wait(self):
-        """
-        This is borrowed from subprocess.Popen.
-        """
-        if self.returncode is None:
-            pid, sts = os.waitpid(self.pid, 0)
-            self._handle_exitstatus(sts)
-        return self.returncode
-
-
-    def fork_waitfor(self, timeout=None):
-        if not timeout:
-            return self.wait()
-        else:
-            end_time = time.time() + timeout
-            while time.time() <= end_time:
-                returncode = self.poll()
-                if returncode is not None:
-                    return returncode
-                time.sleep(1)
-
-            utils.nuke_pid(self.pid)
-            print "subcommand failed pid %d" % self.pid
-            print "%s" % (self.func,)
-            print "timeout after %ds" % timeout
-            print
-            return None
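
A minimal sketch of the parallel_simple() API relocated above (the helper
function and machine names are hypothetical):

    def check_kernel(machine):
        host = hosts.create_host(machine)
        host.run("uname -r")

    # one forked subcommand per machine, logging into per-machine subdirectories
    subcommand.parallel_simple(check_kernel, ["host1", "host2"])
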
diff --git a/server/subcommand_unittest.py b/server/subcommand_unittest.py
deleted file mode 100755
index d8e8b68..0000000
--- a/server/subcommand_unittest.py
+++ /dev/null
@@ -1,443 +0,0 @@
-#!/usr/bin/python
-# Copyright 2009 Google Inc. Released under the GPL v2
-
-import unittest
-
-import common
-from autotest_lib.client.common_lib.test_utils import mock
-from autotest_lib.server import subcommand
-
-
-def _create_subcommand(func, args):
-    # to avoid __init__
-    class wrapper(subcommand.subcommand):
-        def __init__(self, func, args):
-            self.func = func
-            self.args = args
-            self.subdir = None
-            self.debug = None
-            self.pid = None
-            self.returncode = None
-            self.lambda_function = lambda: func(*args)
-
-    return wrapper(func, args)
-
-
-class subcommand_test(unittest.TestCase):
-    def setUp(self):
-        self.god = mock.mock_god()
-
-
-    def tearDown(self):
-        self.god.unstub_all()
-        # cleanup the hooks
-        subcommand.subcommand.fork_hooks = []
-        subcommand.subcommand.join_hooks = []
-
-
-    def test_create(self):
-        def check_attributes(cmd, func, args, subdir=None, debug=None,
-                             pid=None, returncode=None, fork_hooks=[],
-                             join_hooks=[]):
-            self.assertEquals(cmd.func, func)
-            self.assertEquals(cmd.args, args)
-            self.assertEquals(cmd.subdir, subdir)
-            self.assertEquals(cmd.debug, debug)
-            self.assertEquals(cmd.pid, pid)
-            self.assertEquals(cmd.returncode, returncode)
-            self.assertEquals(cmd.fork_hooks, fork_hooks)
-            self.assertEquals(cmd.join_hooks, join_hooks)
-
-        def func(arg1, arg2):
-            pass
-
-        cmd = subcommand.subcommand(func, (2, 3))
-        check_attributes(cmd, func, (2, 3))
-        self.god.check_playback()
-
-        self.god.stub_function(subcommand.os.path, 'abspath')
-        self.god.stub_function(subcommand.os.path, 'exists')
-        self.god.stub_function(subcommand.os, 'mkdir')
-
-        subcommand.os.path.abspath.expect_call('dir').and_return('/foo/dir')
-        subcommand.os.path.exists.expect_call('/foo/dir').and_return(False)
-        subcommand.os.mkdir.expect_call('/foo/dir')
-
-        (subcommand.os.path.exists.expect_call('/foo/dir/debug')
-                .and_return(False))
-        subcommand.os.mkdir.expect_call('/foo/dir/debug')
-
-        cmd = subcommand.subcommand(func, (2, 3), subdir='dir')
-        check_attributes(cmd, func, (2, 3), subdir='/foo/dir',
-                         debug='/foo/dir/debug')
-        self.god.check_playback()
-
-
-    def _setup_fork_start_parent(self):
-        self.god.stub_function(subcommand.os, 'fork')
-
-        subcommand.os.fork.expect_call().and_return(1000)
-        func = self.god.create_mock_function('func')
-        cmd = _create_subcommand(func, [])
-        cmd.fork_start()
-
-        return cmd
-
-
-    def test_fork_start_parent(self):
-        cmd = self._setup_fork_start_parent()
-
-        self.assertEquals(cmd.pid, 1000)
-        self.god.check_playback()
-
-
-    def _setup_fork_start_child(self):
-        self.god.stub_function(subcommand.os, 'pipe')
-        self.god.stub_function(subcommand.os, 'fork')
-        self.god.stub_function(subcommand.os, 'close')
-        self.god.stub_function(subcommand.os, 'write')
-        self.god.stub_function(subcommand.cPickle, 'dumps')
-        self.god.stub_function(subcommand.os, '_exit')
-
-
-    def test_fork_start_child(self):
-        self._setup_fork_start_child()
-
-        func = self.god.create_mock_function('func')
-        fork_hook = self.god.create_mock_function('fork_hook')
-        join_hook = self.god.create_mock_function('join_hook')
-
-        subcommand.subcommand.register_fork_hook(fork_hook)
-        subcommand.subcommand.register_join_hook(join_hook)
-        cmd = _create_subcommand(func, (1, 2))
-
-        subcommand.os.pipe.expect_call().and_return((10, 20))
-        subcommand.os.fork.expect_call().and_return(0)
-        subcommand.os.close.expect_call(10)
-        fork_hook.expect_call(cmd)
-        func.expect_call(1, 2).and_return(True)
-        subcommand.cPickle.dumps.expect_call(True,
-                subcommand.cPickle.HIGHEST_PROTOCOL).and_return('True')
-        subcommand.os.write.expect_call(20, 'True')
-        subcommand.os.close.expect_call(20)
-        join_hook.expect_call(cmd)
-        subcommand.os._exit.expect_call(0)
-
-        cmd.fork_start()
-        self.god.check_playback()
-
-
-    def test_fork_start_child_error(self):
-        self._setup_fork_start_child()
-        self.god.stub_function(subcommand.logging, 'exception')
-
-        func = self.god.create_mock_function('func')
-        cmd = _create_subcommand(func, (1, 2))
-        error = Exception('some error')
-
-        subcommand.os.pipe.expect_call().and_return((10, 20))
-        subcommand.os.fork.expect_call().and_return(0)
-        subcommand.os.close.expect_call(10)
-        func.expect_call(1, 2).and_raises(error)
-        subcommand.logging.exception.expect_call('function failed')
-        subcommand.cPickle.dumps.expect_call(error,
-                subcommand.cPickle.HIGHEST_PROTOCOL).and_return('error')
-        subcommand.os.write.expect_call(20, 'error')
-        subcommand.os.close.expect_call(20)
-        subcommand.os._exit.expect_call(1)
-
-        cmd.fork_start()
-        self.god.check_playback()
-
-
-    def _setup_poll(self):
-        cmd = self._setup_fork_start_parent()
-        self.god.stub_function(subcommand.os, 'waitpid')
-        return cmd
-
-
-    def test_poll_running(self):
-        cmd = self._setup_poll()
-
-        (subcommand.os.waitpid.expect_call(1000, subcommand.os.WNOHANG)
-                .and_raises(subcommand.os.error('waitpid')))
-        self.assertEquals(cmd.poll(), None)
-        self.god.check_playback()
-
-
-    def test_poll_finished_success(self):
-        cmd = self._setup_poll()
-
-        (subcommand.os.waitpid.expect_call(1000, subcommand.os.WNOHANG)
-                .and_return((1000, 0)))
-        self.assertEquals(cmd.poll(), 0)
-        self.god.check_playback()
-
-
-    def test_poll_finished_failure(self):
-        cmd = self._setup_poll()
-        self.god.stub_function(cmd, '_handle_exitstatus')
-
-        (subcommand.os.waitpid.expect_call(1000, subcommand.os.WNOHANG)
-                .and_return((1000, 10)))
-        cmd._handle_exitstatus.expect_call(10).and_raises(Exception('fail'))
-
-        self.assertRaises(Exception, cmd.poll)
-        self.god.check_playback()
-
-
-    def test_wait_success(self):
-        cmd = self._setup_poll()
-
-        (subcommand.os.waitpid.expect_call(1000, 0)
-                .and_return((1000, 0)))
-
-        self.assertEquals(cmd.wait(), 0)
-        self.god.check_playback()
-
-
-    def test_wait_failure(self):
-        cmd = self._setup_poll()
-        self.god.stub_function(cmd, '_handle_exitstatus')
-
-        (subcommand.os.waitpid.expect_call(1000, 0)
-                .and_return((1000, 10)))
-
-        cmd._handle_exitstatus.expect_call(10).and_raises(Exception('fail'))
-        self.assertRaises(Exception, cmd.wait)
-        self.god.check_playback()
-
-
-    def _setup_fork_waitfor(self):
-        cmd = self._setup_fork_start_parent()
-        self.god.stub_function(cmd, 'wait')
-        self.god.stub_function(cmd, 'poll')
-        self.god.stub_function(subcommand.time, 'time')
-        self.god.stub_function(subcommand.time, 'sleep')
-        self.god.stub_function(subcommand.utils, 'nuke_pid')
-
-        return cmd
-
-
-    def test_fork_waitfor_no_timeout(self):
-        cmd = self._setup_fork_waitfor()
-
-        cmd.wait.expect_call().and_return(0)
-
-        self.assertEquals(cmd.fork_waitfor(), 0)
-        self.god.check_playback()
-
-
-    def test_fork_waitfor_success(self):
-        cmd = self._setup_fork_waitfor()
-        self.god.stub_function(cmd, 'wait')
-        timeout = 10
-
-        subcommand.time.time.expect_call().and_return(1)
-        for i in xrange(timeout):
-            subcommand.time.time.expect_call().and_return(i + 1)
-            cmd.poll.expect_call().and_return(None)
-            subcommand.time.sleep.expect_call(1)
-        subcommand.time.time.expect_call().and_return(i + 2)
-        cmd.poll.expect_call().and_return(0)
-
-        self.assertEquals(cmd.fork_waitfor(timeout=timeout), 0)
-        self.god.check_playback()
-
-
-    def test_fork_waitfor_failure(self):
-        cmd = self._setup_fork_waitfor()
-        self.god.stub_function(cmd, 'wait')
-        timeout = 10
-
-        subcommand.time.time.expect_call().and_return(1)
-        for i in xrange(timeout):
-            subcommand.time.time.expect_call().and_return(i + 1)
-            cmd.poll.expect_call().and_return(None)
-            subcommand.time.sleep.expect_call(1)
-        subcommand.time.time.expect_call().and_return(i + 3)
-        subcommand.utils.nuke_pid.expect_call(cmd.pid)
-
-        self.assertEquals(cmd.fork_waitfor(timeout=timeout), None)
-        self.god.check_playback()
-
-
-class parallel_test(unittest.TestCase):
-    def setUp(self):
-        self.god = mock.mock_god()
-        self.god.stub_function(subcommand.cPickle, 'load')
-
-
-    def tearDown(self):
-        self.god.unstub_all()
-
-
-    def _get_cmd(self, func, args):
-        cmd = _create_subcommand(func, args)
-        cmd.result_pickle = self.god.create_mock_class(file, 'file')
-        return self.god.create_mock_class(cmd, 'subcommand')
-
-
-    def _get_tasklist(self):
-        return [self._get_cmd(lambda x: x * 2, (3,)),
-                self._get_cmd(lambda: None, [])]
-
-
-    def _setup_common(self):
-        tasklist = self._get_tasklist()
-
-        for task in tasklist:
-            task.fork_start.expect_call()
-
-        return tasklist
-
-
-    def test_success(self):
-        tasklist = self._setup_common()
-
-        for task in tasklist:
-            task.fork_waitfor.expect_call(timeout=None).and_return(0)
-            (subcommand.cPickle.load.expect_call(task.result_pickle)
-                    .and_return(6))
-            task.result_pickle.close.expect_call()
-
-        subcommand.parallel(tasklist)
-        self.god.check_playback()
-
-
-    def test_failure(self):
-        tasklist = self._setup_common()
-
-        for task in tasklist:
-            task.fork_waitfor.expect_call(timeout=None).and_return(1)
-            (subcommand.cPickle.load.expect_call(task.result_pickle)
-                    .and_return(6))
-            task.result_pickle.close.expect_call()
-
-        self.assertRaises(subcommand.error.AutoservError, subcommand.parallel,
-                          tasklist)
-        self.god.check_playback()
-
-
-    def test_timeout(self):
-        self.god.stub_function(subcommand.time, 'time')
-
-        tasklist = self._setup_common()
-        timeout = 10
-
-        subcommand.time.time.expect_call().and_return(1)
-
-        for task in tasklist:
-            subcommand.time.time.expect_call().and_return(1)
-            task.fork_waitfor.expect_call(timeout=timeout).and_return(None)
-            (subcommand.cPickle.load.expect_call(task.result_pickle)
-                    .and_return(6))
-            task.result_pickle.close.expect_call()
-
-        self.assertRaises(subcommand.error.AutoservError, subcommand.parallel,
-                          tasklist, timeout=timeout)
-        self.god.check_playback()
-
-
-    def test_return_results(self):
-        tasklist = self._setup_common()
-
-        tasklist[0].fork_waitfor.expect_call(timeout=None).and_return(0)
-        (subcommand.cPickle.load.expect_call(tasklist[0].result_pickle)
-                .and_return(6))
-        tasklist[0].result_pickle.close.expect_call()
-
-        error = Exception('fail')
-        tasklist[1].fork_waitfor.expect_call(timeout=None).and_return(1)
-        (subcommand.cPickle.load.expect_call(tasklist[1].result_pickle)
-                .and_return(error))
-        tasklist[1].result_pickle.close.expect_call()
-
-        self.assertEquals(subcommand.parallel(tasklist, return_results=True),
-                          [6, error])
-        self.god.check_playback()
-
-
-class test_parallel_simple(unittest.TestCase):
-    def setUp(self):
-        self.god = mock.mock_god()
-        self.god.stub_function(subcommand, 'parallel')
-        ctor = self.god.create_mock_function('subcommand')
-        self.god.stub_with(subcommand, 'subcommand', ctor)
-
-
-    def tearDown(self):
-        self.god.unstub_all()
-
-
-    def test_simple_success(self):
-        func = self.god.create_mock_function('func')
-
-        func.expect_call(3)
-
-        subcommand.parallel_simple(func, (3,))
-        self.god.check_playback()
-
-
-    def test_simple_failure(self):
-        func = self.god.create_mock_function('func')
-
-        error = Exception('fail')
-        func.expect_call(3).and_raises(error)
-
-        self.assertRaises(Exception, subcommand.parallel_simple, func, (3,))
-        self.god.check_playback()
-
-
-    def test_simple_return_value(self):
-        func = self.god.create_mock_function('func')
-
-        result = 1000
-        func.expect_call(3).and_return(result)
-
-        self.assertEquals(subcommand.parallel_simple(func, (3,),
-                                                     return_results=True),
-                          [result])
-        self.god.check_playback()
-
-
-    def _setup_many(self, count, log):
-        func = self.god.create_mock_function('func')
-
-        args = []
-        cmds = []
-        for i in xrange(count):
-            arg = i + 1
-            args.append(arg)
-
-            if log:
-                subdir = str(arg)
-            else:
-                subdir = None
-
-            cmd = object()
-            cmds.append(cmd)
-
-            (subcommand.subcommand.expect_call(func, [arg], subdir)
-                    .and_return(cmd))
-
-        subcommand.parallel.expect_call(cmds, None, return_results=False)
-        return func, args
-
-
-    def test_passthrough(self):
-        func, args = self._setup_many(4, True)
-
-        subcommand.parallel_simple(func, args)
-        self.god.check_playback()
-
-
-    def test_nolog(self):
-        func, args = self._setup_many(3, False)
-
-        subcommand.parallel_simple(func, args, log=False)
-        self.god.check_playback()
-
-
-if __name__ == '__main__':
-    unittest.main()
diff --git a/server/test.py b/server/test.py
index 8fdba9d..0f2d78f 100644
--- a/server/test.py
+++ b/server/test.py
@@ -86,7 +86,8 @@ class _sysinfo_logger(object):
 
     def _install(self):
         if not self.host:
-            from autotest_lib.server import hosts, autotest
+            from autotest_lib.client.common_lib import hosts
+            from autotest_lib.server import autotest
             self.host = hosts.create_host(self.job.machines[0],
                                           auto_monitor=False)
             try:
diff --git a/server/tests/iperf/iperf.py b/server/tests/iperf/iperf.py
index 4529a70..d452118 100644
--- a/server/tests/iperf/iperf.py
+++ b/server/tests/iperf/iperf.py
@@ -1,4 +1,5 @@
-from autotest_lib.server import autotest, hosts, subcommand, test
+from autotest_lib.client.common_lib import subcommand, hosts
+from autotest_lib.server import autotest, test
 from autotest_lib.server import utils
 
 class iperf(test.test):
diff --git a/server/tests/netperf2/netperf2.py b/server/tests/netperf2/netperf2.py
index 108dab8..a4531b3 100644
--- a/server/tests/netperf2/netperf2.py
+++ b/server/tests/netperf2/netperf2.py
@@ -1,5 +1,5 @@
-from autotest_lib.server import autotest, hosts, subcommand, test
-from autotest_lib.server import utils
+from autotest_lib.client.common_lib import subcommand, hosts
+from autotest_lib.server import utils, autotest, test
 
 class netperf2(test.test):
     version = 2
diff --git a/server/tests/netpipe/netpipe.py b/server/tests/netpipe/netpipe.py
index 3b3bdc7..abeb40e 100644
--- a/server/tests/netpipe/netpipe.py
+++ b/server/tests/netpipe/netpipe.py
@@ -1,5 +1,5 @@
-from autotest_lib.server import autotest, hosts, subcommand, test
-from autotest_lib.server import utils
+from autotest_lib.client.common_lib import subcommand, hosts
+from autotest_lib.server import utils, autotest, test
 
 class netpipe(test.test):
     version = 2
diff --git a/server/tests/reinstall/control b/server/tests/reinstall/control
index 9b6c646..42e7dce 100644
--- a/server/tests/reinstall/control
+++ b/server/tests/reinstall/control
@@ -7,7 +7,7 @@ TEST_TYPE = "server"
 RUN_VERIFY = False
 DOC = """\
 This will re-install a machine, using the code in
-autotest_lib.server.hosts.Host.machine_install()."""
+autotest_lib.client.common_lib.hosts.Host.machine_install()."""
 
 def run(machine):
     host = hosts.create_host(machine, initialize=False)
-- 
1.7.4.4

_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [AUTOTEST][PATCH 3/3] autotest: Client/server part unification.
  2011-08-26  7:12 Add ability client part starts autotest like server part Jiří Župka
  2011-08-26  7:12 ` [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to client part of autotest Jiří Župka
  2011-08-26  7:12 ` [AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side to client side Jiří Župka
@ 2011-08-26  7:12 ` Jiří Župka
  2011-08-26 16:22 ` Add ability client part starts autotest like server part Lucas Meneghel Rodrigues
  3 siblings, 0 replies; 6+ messages in thread
From: Jiří Župka @ 2011-08-26  7:12 UTC (permalink / raw)
  To: kvm-autotest, kvm, autotest, lmr, ldoktor, akong; +Cc: jzupka

Add the ability to start autotest from the client part of autotest on other
systems over the network, the way the server part does.

This patch adds the ability to start autotest tests on another system over the
network, the same way server_job does in the server part of autotest. It
removes the need to write some tests multiple times for different environments
(virt tests, client-part tests, etc.).

Usage:
    class subtest(test.test):
        version = 1

        def run_once(self, test_name, test_args):
            self.job.extend_to_server_job()
            guest = hosts.create_host("192.168.122.130")
            guest2 = hosts.create_host("192.168.122.88")
            at = autotest.Autotest(guest)
            at2 = autotest.Autotest(guest2)

            template = ''.join(["job.run_test('sleeptest', tag='%s', ",
                                "iterations=%d)"])
            guest_control = template % ("test", 10)
            guest_control2 = template % ("test", 1)

            def a1():
                at2.run(guest_control, guest2.hostname, background=False,
                        tag="one")

            def a2():
                at.run(guest_control2, guest.hostname, background=True,
                       tag="two")

    1) To start two independent tests (join the threads later, as sketched
       below):
         t = virt_utils.Thread(self.job.parallel, [[a2]])
         t2 = virt_utils.Thread(self.job.parallel, [[a1]])

    2) To start two tests in parallel without waiting for them to finish:
         t = virt_utils.Thread(self.job.parallel, [[a2], [a1]])

    3) To start two tests in parallel and wait for them to finish:
         self.job.parallel([a2], [a1])
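
For the thread-based variants, the caller joins the threads when the results
are needed. A minimal sketch, assuming virt_utils.Thread follows the usual
threading.Thread start()/join() interface:

         t = virt_utils.Thread(self.job.parallel, [[a2]])
         t.start()
         # ... do other work while the remote autotest run proceeds ...
         t.join()   # wait for the background run to complete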

There is a small problem with keeping compatibility between the standard
client-part job and the server part: the indentation of logs. This is solved
by starting autotest through self.job.parallel.

There is also a problem with starting autotest on the same system where
another client is already running, because client/job cleans its directory
aggressively and erases the data of all jobs. This problem can be resolved
easily.

Signed-off-by: Jiří Župka <jzupka@redhat.com>
---
 client/bin/job.py                            |  143 +++++++++++++++++++++++++-
 client/bin/profilers.py                      |    5 +
 client/common_lib/autotest.py                |   42 ++++----
 client/common_lib/base_hosts/__init__.py     |   10 ++-
 client/common_lib/base_hosts/base_classes.py |   24 +++--
 client/common_lib/base_job.py                |  117 +++++++++++++++++++++-
 client/common_lib/hosts/monitors/console.py  |    2 +-
 server/autotest_unittest.py                  |    2 +
 server/server_job.py                         |   75 --------------
 server/tests/netperf2/netperf2.py            |    2 +
 10 files changed, 314 insertions(+), 108 deletions(-)

diff --git a/client/bin/job.py b/client/bin/job.py
index 1abdbcd..97bc32b 100644
--- a/client/bin/job.py
+++ b/client/bin/job.py
@@ -8,6 +8,7 @@ Copyright Andy Whitcroft, Martin J. Bligh 2006
 import copy, os, platform, re, shutil, sys, time, traceback, types, glob
 import logging, getpass, errno, weakref
 import cPickle as pickle
+import tempfile, fcntl
 from autotest_lib.client.bin import client_logging_config
 from autotest_lib.client.bin import utils, parallel, kernel, xen
 from autotest_lib.client.bin import profilers, boottool, harness
@@ -71,6 +72,17 @@ class status_indenter(base_job.status_indenter):
         self.job._record_indent -= 1
 
 
+    def get_context(self):
+        """Returns a context object for use by job.get_record_context."""
+        class context(object):
+            def __init__(self, indenter, indent):
+                self._indenter = indenter
+                self._indent = indent
+            def restore(self):
+                self._indenter._indent = self._indent
+        return context(self, self.job._record_indent)
+
+
 class base_client_job(base_job.base_job):
     """The client-side concrete implementation of base_job.
 
@@ -120,6 +132,56 @@ class base_client_job(base_job.base_job):
             raise
 
 
+    def use_external_logging(self):
+        """
+        Return True if external logging should be used.
+        """
+        return False
+
+
+    def extend_to_server_job(self, ssh_user="root", ssh_pass="", ssh_port=22,
+                             only_collect_crashinfo=False):
+        """
+        Extend the client job so that it can work as the server part.
+        """
+        self._uncollected_log_file = None
+        created_uncollected_logs = False
+        if self.resultdir:
+            if only_collect_crashinfo:
+                # if this is a crashinfo-only run, and there were no existing
+                # uncollected logs, just bail out early
+                logging.info("No existing uncollected logs, "
+                             "skipping crashinfo collection")
+            else:
+                self._uncollected_log_file = os.path.join(self.resultdir,
+                                                          'uncollected_logs')
+                log_file = open(self._uncollected_log_file, "w")
+                pickle.dump([], log_file)
+                log_file.close()
+                created_uncollected_logs = True
+
+        from autotest_lib.client.common_lib import hosts
+        from autotest_lib.client.common_lib import autotest
+        hosts.factory.ssh_user = ssh_user
+        hosts.factory.ssh_port = ssh_port
+        hosts.factory.ssh_pass = ssh_pass
+        hosts.Host.job = self
+        autotest.Autotest.job = self
+
+        if self.resultdir:
+            os.chdir(self.resultdir)
+            # touch status.log so that the parser knows a job is running here
+            #open(self.get_status_log_path(), 'a').close()
+            self.enable_external_logging()
+
+        fd, sub_job_filepath = tempfile.mkstemp(dir=self.tmpdir)
+        os.close(fd)
+        self._sub_state = base_job.job_state()
+        self._sub_state.set_backing_file(sub_job_filepath)
+
+        self._sub_state.set('autotests', 'count', 0)
+
+
     @classmethod
     def _get_environ_autodir(cls):
         return os.environ['AUTODIR']
@@ -162,6 +224,7 @@ class base_client_job(base_job.base_job):
         As of now self.record() needs self.resultdir, self._group_level,
         self.harness and of course self._logger.
         """
+        # TODO: Fix the aggressive cleanup that deletes all debugdir files.
         if not options.cont:
             self._cleanup_debugdir_files()
             self._cleanup_results_dir()
@@ -198,8 +261,9 @@ class base_client_job(base_job.base_job):
             self.harness.test_status(rendered_entry, msg_tag)
             # send the entry to stdout, if it's enabled
             logging.info(rendered_entry)
+        self._indenter = status_indenter(self)
         self._logger = base_job.status_logger(
-            self, status_indenter(self), record_hook=client_job_record_hook,
+            self, self._indenter, record_hook=client_job_record_hook,
             tap_writer=self._tap)
 
     def _post_record_init(self, control, options, drop_caches,
@@ -1201,6 +1265,83 @@ class base_client_job(base_job.base_job):
         self._state.set('client', 'sysinfo', state)
 
 
+    def preprocess_client_state(self):
+        """
+        Produce a state file for initializing the state of a client job.
+
+        Creates a new client state file with all the current server state, as
+        well as some pre-set client state.
+
+        @returns The path of the file the state was written into.
+        """
+        # initialize the sysinfo state
+        def group_func():
+            if self._state.has_namespace('client-s'):
+                self._sub_state.set('autotests','count',
+                                self._sub_state.get('autotests','count')+1)
+            else:
+                self._state.rename_namespace("client", "client-s")
+                self._sub_state.set('autotests','count',1)
+
+        self._state.atomic(group_func)
+
+        self._state.set('client', 'sysinfo', self.sysinfo.serialize())
+
+        # dump the state out to a tempfile
+        fd, file_path = tempfile.mkstemp(dir=self.tmpdir)
+        os.close(fd)
+
+        # write_to_file doesn't need locking, we exclusively own file_path
+        self._state.write_to_file(file_path)
+        return file_path
+
+
+    def postprocess_client_state(self, state_path):
+        """
+        Update the state of this job with the state from a client job.
+
+        Updates the state of the server side of a job with the final state
+        of a client job that was run. Updates the non-client-specific state,
+        pulls in some specific bits from the client-specific state, and then
+        discards the rest. Removes the state file afterwards.
+
+        @param state_path: A path to the state file from the client.
+        """
+        # update the on-disk state
+        try:
+            self._state.read_from_file(state_path)
+            os.remove(state_path)
+        except OSError, e:
+            # ignore file-not-found errors
+            if e.errno != errno.ENOENT:
+                raise
+            else:
+                logging.debug('Client state file %s not found', state_path)
+
+        # update the sysinfo state
+        if self._state.has('client', 'sysinfo'):
+            self.sysinfo.deserialize(self._state.get('client', 'sysinfo'))
+
+        # drop all the client-specific state
+        self._state.discard_namespace('client')
+
+
+    def clean_state(self):
+        """
+        Repair client namespace after sub client job ends.
+        """
+        def group_func():
+            if self._state.has_namespace('client-s'):
+                if self._sub_state.get('autotests','count') > 1:
+                    self._sub_state.set('autotests','count',
+                                    self._sub_state.get('autotests','count')-1)
+                else:
+                    if self._state.has_namespace('client'):
+                        self._state.discard_namespace('client')
+                    self._state.rename_namespace('client-s', 'client')
+        self._sub_state.atomic(group_func)
+
+
 class disk_usage_monitor:
     def __init__(self, logging_func, device, max_mb_per_hour):
         self.func = logging_func
diff --git a/client/bin/profilers.py b/client/bin/profilers.py
index df152d9..d3b1556 100644
--- a/client/bin/profilers.py
+++ b/client/bin/profilers.py
@@ -5,7 +5,12 @@ from autotest_lib.client.common_lib import utils, error, profiler_manager
 
 
 class profilers(profiler_manager.profiler_manager):
+    def __init__(self, job):
+        super(profilers, self).__init__(job)
+        self.add_log = {}
+
     def load_profiler(self, profiler, args, dargs):
+        self.add_log[profiler] = (args, dargs)
         prof_dir = os.path.join(self.job.autodir, "profilers", profiler)
 
         try:
diff --git a/client/common_lib/autotest.py b/client/common_lib/autotest.py
index b103fb3..01ef65b 100644
--- a/client/common_lib/autotest.py
+++ b/client/common_lib/autotest.py
@@ -7,14 +7,9 @@ from autotest_lib.client.common_lib import base_job, log, error, autotemp
 from autotest_lib.client.common_lib import global_config, packages
 from autotest_lib.client.common_lib import utils as client_utils
 
-AUTOTEST_SVN  = 'svn://test.kernel.org/autotest/trunk/client'
+AUTOTEST_SVN = 'svn://test.kernel.org/autotest/trunk/client'
 AUTOTEST_HTTP = 'http://test.kernel.org/svn/autotest/trunk/client'
 
-# Timeouts for powering down and up respectively
-HALT_TIME = 300
-BOOT_TIME = 1800
-CRASH_RECOVERY_TIME = 9000
-
 
 get_value = global_config.global_config.get_config_value
 autoserv_prebuild = get_value('AUTOSERV', 'enable_server_prebuild',
@@ -37,7 +32,7 @@ class BaseAutotest(installable_object.InstallableObject):
     implement the unimplemented methods in parent classes.
     """
 
-    def __init__(self, host = None):
+    def __init__(self, host=None):
         self.host = host
         self.got = False
         self.installed = False
@@ -223,7 +218,7 @@ class BaseAutotest(installable_object.InstallableObject):
             except (error.PackageInstallError, error.AutoservRunError,
                     global_config.ConfigError), e:
                 logging.info("Could not install autotest using the packaging "
-                             "system: %s. Trying other methods",  e)
+                             "system: %s. Trying other methods", e)
 
         # try to install from file or directory
         if self.source_material:
@@ -272,7 +267,7 @@ class BaseAutotest(installable_object.InstallableObject):
         self.installed = False
 
 
-    def get(self, location = None):
+    def get(self, location=None):
         if not location:
             location = os.path.join(self.serverdir, '../client')
             location = os.path.abspath(location)
@@ -290,7 +285,7 @@ class BaseAutotest(installable_object.InstallableObject):
 
     def run(self, control_file, results_dir='.', host=None, timeout=None,
             tag=None, parallel_flag=False, background=False,
-            client_disconnect_timeout=1800):
+            client_disconnect_timeout=None):
         """
         Run an autotest job on the remote machine.
 
@@ -307,7 +302,8 @@ class BaseAutotest(installable_object.InstallableObject):
                 a background job; the code calling run will be responsible
                 for monitoring the client and collecting the results.
         @param client_disconnect_timeout: Seconds to wait for the remote host
-                to come back after a reboot.  [default: 30 minutes]
+                to come back after a reboot. Defaults to the host setting for
+                DEFAULT_REBOOT_TIMEOUT.
 
         @raises AutotestRunError: If there is a problem executing
                 the control file.
@@ -315,6 +311,9 @@ class BaseAutotest(installable_object.InstallableObject):
         host = self._get_host_and_setup(host)
         results_dir = os.path.abspath(results_dir)
 
+        if client_disconnect_timeout is None:
+            client_disconnect_timeout = host.DEFAULT_REBOOT_TIMEOUT
+
         if tag:
             results_dir = os.path.join(results_dir, tag)
 
@@ -399,9 +398,12 @@ class BaseAutotest(installable_object.InstallableObject):
         if os.path.abspath(tmppath) != os.path.abspath(control_file):
             os.remove(tmppath)
 
-        atrun.execute_control(
-                timeout=timeout,
-                client_disconnect_timeout=client_disconnect_timeout)
+        try:
+            atrun.execute_control(
+                    timeout=timeout,
+                    client_disconnect_timeout=client_disconnect_timeout)
+        finally:
+            host.job.clean_state()
 
 
     def run_timed_test(self, test_name, results_dir='.', host=None,
@@ -700,12 +702,13 @@ class _BaseRun(object):
     def _wait_for_reboot(self, old_boot_id):
         logging.info("Client is rebooting")
         logging.info("Waiting for client to halt")
-        if not self.host.wait_down(HALT_TIME, old_boot_id=old_boot_id):
+        if not self.host.wait_down(self.host.WAIT_DOWN_REBOOT_TIMEOUT,
+                                   old_boot_id=old_boot_id):
             err = "%s failed to shutdown after %d"
-            err %= (self.host.hostname, HALT_TIME)
+            err %= (self.host.hostname, self.host.WAIT_DOWN_REBOOT_TIMEOUT)
             raise error.AutotestRunError(err)
         logging.info("Client down, waiting for restart")
-        if not self.host.wait_up(BOOT_TIME):
+        if not self.host.wait_up(self.host.DEFAULT_REBOOT_TIMEOUT):
             # since reboot failed
             # hardreset the machine once if possible
             # before failing this control file
@@ -719,7 +722,8 @@ class _BaseRun(object):
                 warning %= self.host.hostname
                 logging.warning(warning)
             raise error.AutotestRunError("%s failed to boot after %ds" %
-                                         (self.host.hostname, BOOT_TIME))
+                                         (self.host.hostname,
+                                          self.host.DEFAULT_REBOOT_TIMEOUT))
         self.host.reboot_followup()
 
 
@@ -765,7 +769,7 @@ class _BaseRun(object):
                 self.log_unexpected_abort(logger)
 
                 # give the client machine a chance to recover from a crash
-                self.host.wait_up(CRASH_RECOVERY_TIME)
+                self.host.wait_up(self.host.HOURS_TO_WAIT_FOR_RECOVERY * 3600)
                 msg = ("Aborting - unexpected final status message from "
                        "client on %s: %s\n") % (self.host.hostname, last)
                 raise error.AutotestRunError(msg)
diff --git a/client/common_lib/base_hosts/__init__.py b/client/common_lib/base_hosts/__init__.py
index c2b42ca..c7ef409 100644
--- a/client/common_lib/base_hosts/__init__.py
+++ b/client/common_lib/base_hosts/__init__.py
@@ -1,6 +1,14 @@
+# Copyright 2009 Google Inc. Released under the GPL v2
+
+"""This is a convenience module to import all available types of hosts.
+
+Implementation details:
+You should 'import hosts' instead of importing every available host module.
+"""
+
 from autotest_lib.client.common_lib import utils
 import base_classes
 
 Host = utils.import_site_class(
     __file__, "autotest_lib.client.common_lib.base_hosts.site_host", "SiteHost",
-    base_classes.Host)
\ No newline at end of file
+    base_classes.Host)
diff --git a/client/common_lib/base_hosts/base_classes.py b/client/common_lib/base_hosts/base_classes.py
index b267e79..68cabe8 100644
--- a/client/common_lib/base_hosts/base_classes.py
+++ b/client/common_lib/base_hosts/base_classes.py
@@ -50,10 +50,14 @@ class Host(object):
     """
 
     job = None
-    DEFAULT_REBOOT_TIMEOUT = 1800
-    WAIT_DOWN_REBOOT_TIMEOUT = 840
-    WAIT_DOWN_REBOOT_WARNING = 540
-    HOURS_TO_WAIT_FOR_RECOVERY = 2.5
+    DEFAULT_REBOOT_TIMEOUT = global_config.global_config.get_config_value(
+        "HOSTS", "default_reboot_timeout", type=int, default=1800)
+    WAIT_DOWN_REBOOT_TIMEOUT = global_config.global_config.get_config_value(
+        "HOSTS", "wait_down_reboot_timeout", type=int, default=840)
+    WAIT_DOWN_REBOOT_WARNING = global_config.global_config.get_config_value(
+        "HOSTS", "wait_down_reboot_warning", type=int, default=540)
+    HOURS_TO_WAIT_FOR_RECOVERY = global_config.global_config.get_config_value(
+        "HOSTS", "hours_to_wait_for_recovery", type=float, default=2.5)
     # the number of hardware repair requests that need to happen before we
     # actually send machines to hardware repair
     HARDWARE_REPAIR_REQUEST_THRESHOLD = 4
@@ -188,18 +192,18 @@ class Host(object):
 
 
     def wait_for_restart(self, timeout=DEFAULT_REBOOT_TIMEOUT,
+                         down_timeout=WAIT_DOWN_REBOOT_TIMEOUT,
+                         down_warning=WAIT_DOWN_REBOOT_WARNING,
                          log_failure=True, old_boot_id=None, **dargs):
         """ Wait for the host to come back from a reboot. This is a generic
         implementation based entirely on wait_up and wait_down. """
-        if not self.wait_down(timeout=self.WAIT_DOWN_REBOOT_TIMEOUT,
-                              warning_timer=self.WAIT_DOWN_REBOOT_WARNING,
+        if not self.wait_down(timeout=down_timeout,
+                              warning_timer=down_warning,
                               old_boot_id=old_boot_id):
             if log_failure:
                 self.record("ABORT", None, "reboot.verify", "shut down failed")
             raise error.AutoservShutdownError("Host did not shut down")
 
-        self.wait_up(timeout)
-        time.sleep(2)    # this is needed for complete reliability
         if self.wait_up(timeout):
             self.record("GOOD", None, "reboot.verify")
             self.reboot_followup(**dargs)
@@ -238,12 +242,12 @@ class Host(object):
 
         @raises AutoservDiskFullHostError if path has less than gb GB free.
         """
-        one_mb = 10**6  # Bytes (SI unit).
+        one_mb = 10 ** 6  # Bytes (SI unit).
         mb_per_gb = 1000.0
         logging.info('Checking for >= %s GB of space under %s on machine %s',
                      gb, path, self.hostname)
         df = self.run('df -PB %d %s | tail -1' % (one_mb, path)).stdout.split()
-        free_space_gb = int(df[3])/mb_per_gb
+        free_space_gb = int(df[3]) / mb_per_gb
         if free_space_gb < gb:
             raise error.AutoservDiskFullHostError(path, gb, free_space_gb)
         else:
diff --git a/client/common_lib/base_job.py b/client/common_lib/base_job.py
index eef9efc..300203e 100644
--- a/client/common_lib/base_job.py
+++ b/client/common_lib/base_job.py
@@ -348,6 +348,24 @@ class job_state(object):
 
 
     @with_backing_file
+    def rename_namespace(self, namespace, new_namespace):
+        """Saves the value given with the provided name.
+
+        This operation must be atomic.
+
+        @param namespace: The namespace that the property should be stored in.
+        @param new_namespace: The name the value should be saved with.
+        """
+        if namespace in self._state:
+            self._state[new_namespace] = self._state[namespace]
+            del self._state[namespace]
+            logging.debug('Namespace %s rename to %s', namespace,
+                          new_namespace)
+        elif not namespace in self._state:
+            raise KeyError('No namespace %s in namespaces' % (namespace))
+
+
+    @with_backing_file
     def has(self, namespace, name):
         """Return a boolean indicating if namespace.name is defined.
 
@@ -361,6 +379,17 @@ class job_state(object):
 
 
     @with_backing_file
+    def has_namespace(self, namespace):
+        """Return a boolean indicating if namespace.name is defined.
+
+        @param namespace: The namespace to check for a definition.
+
+        @return: True if the namespace defined False otherwise.
+        """
+        return namespace in self._state
+
+
+    @with_backing_file
     def discard(self, namespace, name):
         """If namespace.name is a defined value, deletes it.
 
@@ -389,6 +418,13 @@ class job_state(object):
         logging.debug('Persistent state %s.* deleted', namespace)
 
 
+    @with_backing_file
+    def atomic(self, func, *args, **kargs):
+        """Use state like synchronization tool between process.
+        """
+        return func(*args, **kargs)
+
+
     @staticmethod
     def property_factory(state_attribute, property_attribute, default,
                          namespace='global_properties'):
@@ -933,7 +969,7 @@ class base_job(object):
             Returns a status_logger instance for recording job status logs.
     """
 
-   # capture the dependency on several helper classes with factories
+    # capture the dependency on several helper classes with factories
     _job_directory = job_directory
     _job_state = job_state
 
@@ -1208,3 +1244,82 @@ class base_job(object):
                 logs should be written into the subdirectory status log file.
         """
         self._get_status_logger().record_entry(entry, log_in_subdir)
+
+
+    def clean_state(self):
+        pass
+
+
+    def _update_uncollected_logs_list(self, update_func):
+        """Updates the uncollected logs list in a multi-process safe manner.
+
+        @param update_func - a function that updates the list of uncollected
+            logs. Should take one parameter, the list to be updated.
+        """
+        if self._uncollected_log_file:
+            log_file = open(self._uncollected_log_file, "r+")
+            fcntl.flock(log_file, fcntl.LOCK_EX)
+            try:
+                uncollected_logs = pickle.load(log_file)
+                update_func(uncollected_logs)
+                log_file.seek(0)
+                log_file.truncate()
+                pickle.dump(uncollected_logs, log_file)
+                log_file.flush()
+            finally:
+                fcntl.flock(log_file, fcntl.LOCK_UN)
+                log_file.close()
+
+
+    def add_client_log(self, hostname, remote_path, local_path):
+        """Adds a new set of client logs to the list of uncollected logs,
+        to allow for future log recovery.
+
+        @param host - the hostname of the machine holding the logs
+        @param remote_path - the directory on the remote machine holding logs
+        @param local_path - the local directory to copy the logs into
+        """
+        def update_func(logs_list):
+            logs_list.append((hostname, remote_path, local_path))
+        self._update_uncollected_logs_list(update_func)
+
+
+    def remove_client_log(self, hostname, remote_path, local_path):
+        """Removes a set of client logs from the list of uncollected logs,
+        to allow for future log recovery.
+
+        @param host - the hostname of the machine holding the logs
+        @param remote_path - the directory on the remote machine holding logs
+        @param local_path - the local directory to copy the logs into
+        """
+        def update_func(logs_list):
+            logs_list.remove((hostname, remote_path, local_path))
+        self._update_uncollected_logs_list(update_func)
+
+
+    def get_client_logs(self):
+        """Retrieves the list of uncollected logs, if it exists.
+
+        @returns A list of (host, remote_path, local_path) tuples. Returns
+                 an empty list if no uncollected logs file exists.
+        """
+        log_exists = (self._uncollected_log_file and
+                      os.path.exists(self._uncollected_log_file))
+        if log_exists:
+            return pickle.load(open(self._uncollected_log_file))
+        else:
+            return []
+
+
+    def get_record_context(self):
+        """Returns an object representing the current job.record context.
+
+        The object returned is an opaque object with a 0-arg restore method
+        which can be called to restore the job.record context (i.e. indentation)
+        to the current level. The intention is that it should be used when
+        something external which generates job.record calls (e.g. an autotest
+        client) can fail catastrophically and the server job record state
+        needs to be reset to its original "known good" state.
+
+        @return: A context object with a 0-arg restore() method."""
+        return self._indenter.get_context()
diff --git a/client/common_lib/hosts/monitors/console.py b/client/common_lib/hosts/monitors/console.py
index c516f9f..60e561f 100755
--- a/client/common_lib/hosts/monitors/console.py
+++ b/client/common_lib/hosts/monitors/console.py
@@ -5,7 +5,7 @@
 
 import gzip, optparse, os, signal, sys, time
 import common
-from autotest_lib.server.hosts.monitors import monitors_util
+from autotest_lib.client.common_lib.hosts.monitors import monitors_util
 
 PATTERNS_PATH = os.path.join(os.path.dirname(__file__), 'console_patterns')
 
diff --git a/server/autotest_unittest.py b/server/autotest_unittest.py
index 78d0dec..1f038b4 100755
--- a/server/autotest_unittest.py
+++ b/server/autotest_unittest.py
@@ -234,6 +234,8 @@ class TestBaseAutotest(unittest.TestCase):
         run_obj.execute_control.expect_call(timeout=30,
                                             client_disconnect_timeout=1800)
 
+        self.host.job.clean_state.expect_call()
+
         # run and check output
         self.base_autotest.run(control, timeout=30)
         self.god.check_playback()
diff --git a/server/server_job.py b/server/server_job.py
index e3ffbc8..7da0cf0 100644
--- a/server/server_job.py
+++ b/server/server_job.py
@@ -748,20 +748,6 @@ class base_server_job(base_job.base_job):
         return subdirectory
 
 
-    def get_record_context(self):
-        """Returns an object representing the current job.record context.
-
-        The object returned is an opaque object with a 0-arg restore method
-        which can be called to restore the job.record context (i.e. indentation)
-        to the current level. The intention is that it should be used when
-        something external which generate job.record calls (e.g. an autotest
-        client) can fail catastrophically and the server job record state
-        needs to be reset to its original "known good" state.
-
-        @return: A context object with a 0-arg restore() method."""
-        return self._indenter.get_context()
-
-
     def record_summary(self, status_code, test_name, reason='', attributes=None,
                        distinguishing_attributes=(), child_test_ids=None):
         """Record a summary test result.
@@ -837,67 +823,6 @@ class base_server_job(base_job.base_job):
             return None
 
 
-    def _update_uncollected_logs_list(self, update_func):
-        """Updates the uncollected logs list in a multi-process safe manner.
-
-        @param update_func - a function that updates the list of uncollected
-            logs. Should take one parameter, the list to be updated.
-        """
-        if self._uncollected_log_file:
-            log_file = open(self._uncollected_log_file, "r+")
-            fcntl.flock(log_file, fcntl.LOCK_EX)
-        try:
-            uncollected_logs = pickle.load(log_file)
-            update_func(uncollected_logs)
-            log_file.seek(0)
-            log_file.truncate()
-            pickle.dump(uncollected_logs, log_file)
-            log_file.flush()
-        finally:
-            fcntl.flock(log_file, fcntl.LOCK_UN)
-            log_file.close()
-
-
-    def add_client_log(self, hostname, remote_path, local_path):
-        """Adds a new set of client logs to the list of uncollected logs,
-        to allow for future log recovery.
-
-        @param host - the hostname of the machine holding the logs
-        @param remote_path - the directory on the remote machine holding logs
-        @param local_path - the local directory to copy the logs into
-        """
-        def update_func(logs_list):
-            logs_list.append((hostname, remote_path, local_path))
-        self._update_uncollected_logs_list(update_func)
-
-
-    def remove_client_log(self, hostname, remote_path, local_path):
-        """Removes a set of client logs from the list of uncollected logs,
-        to allow for future log recovery.
-
-        @param host - the hostname of the machine holding the logs
-        @param remote_path - the directory on the remote machine holding logs
-        @param local_path - the local directory to copy the logs into
-        """
-        def update_func(logs_list):
-            logs_list.remove((hostname, remote_path, local_path))
-        self._update_uncollected_logs_list(update_func)
-
-
-    def get_client_logs(self):
-        """Retrieves the list of uncollected logs, if it exists.
-
-        @returns A list of (host, remote_path, local_path) tuples. Returns
-                 an empty list if no uncollected logs file exists.
-        """
-        log_exists = (self._uncollected_log_file and
-                      os.path.exists(self._uncollected_log_file))
-        if log_exists:
-            return pickle.load(open(self._uncollected_log_file))
-        else:
-            return []
-
-
     def _fill_server_control_namespace(self, namespace, protect=True):
         """
         Prepare a namespace to be used when executing server control files.
diff --git a/server/tests/netperf2/netperf2.py b/server/tests/netperf2/netperf2.py
index a4531b3..ac16453 100644
--- a/server/tests/netperf2/netperf2.py
+++ b/server/tests/netperf2/netperf2.py
@@ -1,5 +1,7 @@
 from autotest_lib.client.common_lib import subcommand, hosts
 from autotest_lib.server import utils, autotest, test
+from autotest_lib.client.common_lib import error
+import time as btime
 
 class netperf2(test.test):
     version = 2
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: Add ability client part starts autotest like server part
  2011-08-26  7:12 Add ability client part starts autotest like server part Jiří Župka
                   ` (2 preceding siblings ...)
  2011-08-26  7:12 ` [AUTOTEST][PATCH 3/3] autotest: Client/server part unification Jiří Župka
@ 2011-08-26 16:22 ` Lucas Meneghel Rodrigues
  2011-09-05  8:12   ` Jiri Zupka
  3 siblings, 1 reply; 6+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-08-26 16:22 UTC (permalink / raw)
  To: Jiří Župka; +Cc: kvm, autotest, kvm-autotest

On 08/26/2011 04:12 AM, Jiří Župka wrote:
> This patch series was created because client part of autotest
> started to be used like server part and there are lot of tests
> which can be unified to one test (multicast, netperf) if there
> will be able to start already done tests from client part of
> autotest on virtual machine.
>
> The patch series adds autotest client part ability for start
> autotest on remote system over network like server part of autotest.
> More info is in last patch from patch series.

Wow, awesome stuff Jiri! Congrats!

I know mbligh wanted something along these lines, and I think it makes 
perfect sense. However, we need to do some careful review, unit testing 
and testing of these changes. Also, we need to get the documentation in shape.

Also, it is a good opportunity to ask our downstream parties and 
contributors whether they like this unification proposal or not. I am 
copying some people who might be interested in taking a look and 
expressing their concerns.

My idea is to stage your changes in one of my personal git repos on 
github (branch merge-server-client) and iterate from there.

Cheers,

Lucas

> [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to
> [AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side
> [AUTOTEST][PATCH 3/3] autotest: Client/server part unification.

_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Add ability client part starts autotest like server part
  2011-08-26 16:22 ` Add ability client part starts autotest like server part Lucas Meneghel Rodrigues
@ 2011-09-05  8:12   ` Jiri Zupka
  0 siblings, 0 replies; 6+ messages in thread
From: Jiri Zupka @ 2011-09-05  8:12 UTC (permalink / raw)
  To: Lucas Meneghel Rodrigues
  Cc: kvm-autotest, kvm, autotest, ldoktor, akong, gps, jadmanski, Dale Curtis



----- Original Message -----
> On 08/26/2011 04:12 AM, Jiří Župka wrote:
> > This patch series was created because client part of autotest
> > started to be used like server part and there are lot of tests
> > which can be unified to one test (multicast, netperf) if there
> > will be able to start already done tests from client part of
> > autotest on virtual machine.
> >
> > The patch series adds autotest client part ability for start
> > autotest on remote system over network like server part of autotest.
> > More info is in last patch from patch series.
> 
> Wow, awesome stuff Jiri! Congrats!

Thank you :-)
> 
> I know mbligh wanted something along these lines, and I think it makes
> perfect sense. However, we need to do some careful review, unit testing
> and testing of these changes. 

What kind of unittesting do you mean? Adding new unittest modules for the changes in autotest?

> Also, we need to get the documentation in shape.

What kind of documentation do you have in mind? 

> 
> Also, it is a good opportunity to ask our downstream parties and
> contributors whether they like this unification proposal or not. I am
> copying some people who might be interested in taking a look and
> expressing their concerns.
> 
> My idea is to stage your changes in one of my personal git repos on
> github (branch merge-server-client) and iterate from there.

Yes, it is a good idea. Could you send me a link to this branch?

> 
> Cheers,
> 
> Lucas
> 
> > [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to
> > [AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side
> > [AUTOTEST][PATCH 3/3] autotest: Client/server part unification.

Regards,
  Jiří Župka

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2011-09-05  8:12 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-08-26  7:12 Add ability client part starts autotest like server part Jiří Župka
2011-08-26  7:12 ` [AUTOTEST][PATCH 1/3] autotest: Move autotest.py from server part to client part of autotest Jiří Župka
2011-08-26  7:12 ` [AUTOTEST][PATCH 2/3] autotest: Move hosts package from server side to client side Jiří Župka
2011-08-26  7:12 ` [AUTOTEST][PATCH 3/3] autotest: Client/server part unification Jiří Župka
2011-08-26 16:22 ` Add ability client part starts autotest like server part Lucas Meneghel Rodrigues
2011-09-05  8:12   ` Jiri Zupka
