* [PATCH] KVM test: KSM overcommit test v4
@ 2010-02-22 20:29 Lucas Meneghel Rodrigues
  2010-02-25 20:14 ` Lucas Meneghel Rodrigues
  0 siblings, 1 reply; 2+ messages in thread
From: Lucas Meneghel Rodrigues @ 2010-02-22 20:29 UTC (permalink / raw)
  To: autotest; +Cc: kvm

This is an implementation of KSM testing. The basic idea behind the
test is to start guests and copy the script allocator.py to them. Once
executed, the script accepts commands on its main loop.

The script lets us fill the guests' memory pages according to given
patterns, so a large portion of guest memory becomes exactly the same
and KSM can merge it.

Then we can easily split the memory again by filling the memory of
some guests with other values, and verify how KSM behaves after each
operation.
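
For context, the test gauges how far KSM has merged by polling the
shared pages counter of each qemu process (see get_shared_meminfo() in
the kvm_vm.py hunk below). A minimal sketch of that measurement, where
'pid' stands for the qemu PID the test records via -pidfile, and 4KB
pages are assumed as in the patch:

   # the third field of /proc/<pid>/statm is the shared page count
   shm_pages = int(open("/proc/%d/statm" % pid).readline().split()[2])
   shm_mb = shm_pages * 4 / 1024   # pages -> MB, assuming 4KB pages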

 Test:
  a] serial
   1) initialize, merge all mem into a single page
   2) separate the first guest's mem
   3) separate the remaining guests' mem until all mem is filled
   4) kill all guests except for the last one
   5) check if the mem of the last guest is ok
   6) kill the last guest

  b] parallel
   1) initialize, merge all mem into a single page
   2) separate the guest's mem
   3) verify the guest's mem
   4) merge the mem back into one block
   5) verify the guest's mem
   6) separate the guest's mem in the last 96B of each page
   7) check if the mem is all right
   8) kill the guest

 allocator.py (client side script)
   After it starts, it waits for commands, which it executes on the
   client (guest) side. The MemFill class implements commands to fill
   and check memory and to report errors back to the host.
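
   For illustration, a typical exchange with allocator.py over a guest
   session looks roughly like this (these are the commands the test
   below sends into its loop; the size and keys are example values):

     python /tmp/allocator.py       # started over ssh; prints "PASS: Start"
     mem = MemFill(1024, 66, 123)   # 1024MB region, static value 66, random seed 123
     mem.value_fill()               # "PASS: Mem value fill" - pages identical, KSM can merge
     mem.static_random_fill()       # "PASS: filling duration = ... ms" - pages diverge again
     mem.static_random_verify()     # "PASS: Random series verification"
     die()                          # quit the allocator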

Notes:
Jiri and Lukáš, please verify this last version; we are close
to committing this test, at last.

Changelog:

v4 - Cleanup and bug fixing for the test, after a good round of
testing:
  * Moved the host_reserve_memory and guest_reserve_memory settings
to the config file, in order to have more flexibility (see the sample
config snippet after this changelog). The 256MB default memory
reserve for guests was making guests run out of memory and trigger
the kernel OOM killer, frequently killing the ssh daemon and ruining
the test on a bare bones Fedora 12 guest.
  * Fixed up debug and info messages in general to be clearer and
more consistent
  * The parallel test had 2 repeated operations, mem_fill(value)
and another mem_fill(value). From the logging messages, it
became clear the author meant a mem_fill(value) followed by a
mem_check(value)
  * Made MemFill.value_check() accept a value, suitable for
checking memory contents that were not filled with the default
static value
  * Factored allocator.py operations into a helper function, saving
a good deal of code.
  * General code cleanup and coding style fixes
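
For reference, the main tunables this version adds to
tests_base.cfg.sample (shown in the hunk below; the values are just
the sample defaults):

    - ksm_overcommit:
        ksm_swap = yes
        ksm_overcommit_ratio = 3
        ksm_parallel_ratio = 4
        ksm_host_reserve = 512
        ksm_guest_reserve = 1024
        variants:
            - ksm_serial:
                ksm_mode = "serial"
            - ksm_parallel:
                ksm_mode = "parallel"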

Signed-off-by: Jiri Zupka <jzupka@redhat.com>
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
 client/tests/kvm/kvm_test_utils.py       |   36 ++-
 client/tests/kvm/kvm_utils.py            |   16 +
 client/tests/kvm/kvm_vm.py               |   17 +
 client/tests/kvm/scripts/allocator.py    |  234 +++++++++++++
 client/tests/kvm/tests/ksm_overcommit.py |  559 ++++++++++++++++++++++++++++++
 client/tests/kvm/tests_base.cfg.sample   |   23 ++
 6 files changed, 884 insertions(+), 1 deletions(-)
 create mode 100644 client/tests/kvm/scripts/allocator.py
 create mode 100644 client/tests/kvm/tests/ksm_overcommit.py

diff --git a/client/tests/kvm/kvm_test_utils.py b/client/tests/kvm/kvm_test_utils.py
index 02ec0cf..7d96d6e 100644
--- a/client/tests/kvm/kvm_test_utils.py
+++ b/client/tests/kvm/kvm_test_utils.py
@@ -22,7 +22,8 @@ More specifically:
 """
 
 import time, os, logging, re, commands
-from autotest_lib.client.common_lib import utils, error
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
 import kvm_utils, kvm_vm, kvm_subprocess
 
 
@@ -203,3 +204,36 @@ def get_time(session, time_command, time_filter_re, time_format):
     s = re.findall(time_filter_re, s)[0]
     guest_time = time.mktime(time.strptime(s, time_format))
     return (host_time, guest_time)
+
+
+def get_memory_info(lvms):
+    """
+    Get memory information from host and guests in format:
+    Host: memfree = XXXM; Guests memsh = {XXX,XXX,...}
+
+    @params lvms: List of VM objects
+    @return: String with memory info report
+    """
+    if not isinstance(lvms, list):
+        raise error.TestError("Invalid list passed to get_stat: %s " % lvms)
+
+    try:
+        meminfo = "Host: memfree = "
+        meminfo += str(int(utils.freememtotal()) / 1024) + "M; "
+        meminfo += "swapfree = "
+        mf = int(utils.read_from_meminfo("SwapFree")) / 1024
+        meminfo += str(mf) + "M; "
+    except Exception, e:
+        raise error.TestFail("Could not fetch host free memory info, "
+                             "reason: %s" % e)
+
+    meminfo += "Guests memsh = {"
+    for vm in lvms:
+        shm = vm.get_shared_meminfo()
+        if shm is None:
+            raise error.TestError("Could not get shared meminfo from "
+                                  "VM %s" % vm)
+        meminfo += "%dM; " % shm
+    meminfo = meminfo[0:-2] + "}"
+
+    return meminfo
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index e155951..4565dc1 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -696,6 +696,22 @@ def generate_random_string(length):
     return str
 
 
+def generate_tmp_file_name(file, ext=None, dir='/tmp/'):
+    """
+    Returns a temporary file name. The file is not created.
+    """
+    while True:
+        file_name = (file + '-' + time.strftime("%Y%m%d-%H%M%S-") +
+                     generate_random_string(4))
+        if ext:
+            file_name += '.' + ext
+        file_name = os.path.join(dir, file_name)
+        if not os.path.exists(file_name):
+            break
+
+    return file_name
+
+
 def format_str_for_message(str):
     """
     Format str so that it can be appended to a message.
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index c166ac9..921414d 100755
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -748,6 +748,23 @@ class VM:
         return self.process.get_pid()
 
 
+    def get_shared_meminfo(self):
+        """
+        Returns the VM's shared memory information.
+
+        @return: Shared memory used by VM (MB)
+        """
+        if self.is_dead():
+            logging.error("Could not get shared memory info from dead VM.")
+            return None
+
+        cmd = "cat /proc/%d/statm" % self.params.get('pid_' + self.name)
+        shm = int(os.popen(cmd).readline().split()[2])
+        # statm reports sizes in pages; translate to MB (assumes 4KB pages)
+        shm = shm * 4 / 1024
+        return shm
+
+
     def remote_login(self, nic_index=0, timeout=10):
         """
         Log into the guest via SSH/Telnet/Netcat.
diff --git a/client/tests/kvm/scripts/allocator.py b/client/tests/kvm/scripts/allocator.py
new file mode 100644
index 0000000..1036893
--- /dev/null
+++ b/client/tests/kvm/scripts/allocator.py
@@ -0,0 +1,234 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+"""
+Auxiliary script used to allocate memory on guests.
+
+@copyright: 2008-2009 Red Hat Inc.
+@author: Jiri Zupka (jzupka@redhat.com)
+"""
+
+
+import os, array, sys, struct, random, copy, inspect, tempfile, datetime
+
+PAGE_SIZE = 4096 # machine page size
+
+
+class MemFill(object):
+    """
+    Fills guest memory according to certain patterns.
+    """
+    def __init__(self, mem, static_value, random_key):
+        """
+        Constructor of MemFill class.
+
+        @param mem: Amount of test memory in MB.
+        @param static_value: Value used to fill all memory.
+        @param random_key: Seed of the random series used to fill memory.
+        """
+        if (static_value < 0 or static_value > 255):
+            print ("FAIL: Initialization static value"
+                   "can be only in range (0..255)")
+            return
+
+        self.tmpdp = tempfile.mkdtemp()
+        ret_code = os.system("mount -o size=%dM tmpfs %s -t tmpfs" %
+                             ((mem + 25), self.tmpdp))
+        if ret_code != 0:
+            if os.getuid() != 0:
+                print ("FAIL: Unable to mount tmpfs "
+                       "(likely cause: you are not root)")
+            else:
+                print "FAIL: Unable to mount tmpfs"
+        else:
+            self.f = tempfile.TemporaryFile(prefix='mem', dir=self.tmpdp)
+            self.allocate_by = 'L'
+            self.npages = (mem * 1024 * 1024) / PAGE_SIZE
+            self.random_key = random_key
+            self.static_value = static_value
+            print "PASS: Initialization"
+
+
+    def __del__(self):
+        if os.path.ismount(self.tmpdp):
+            self.f.close()
+            os.system("umount %s" % (self.tmpdp))
+
+
+    def compare_page(self, original, inmem):
+        """
+        Compare pages of memory and print the differences found.
+
+        @param original: Data that was expected to be in memory.
+        @param inmem: Data in memory.
+        """
+        for ip in range(PAGE_SIZE / original.itemsize):
+            if (not original[ip] == inmem[ip]): # find which item is wrong
+                originalp = array.array("B")
+                inmemp = array.array("B")
+                originalp.fromstring(original[ip:ip+1].tostring())
+                inmemp.fromstring(inmem[ip:ip+1].tostring())
+                for ib in range(len(originalp)): # find wrong byte in item
+                    if not (originalp[ib] == inmemp[ib]):
+                        position = (self.f.tell() - PAGE_SIZE + ip *
+                                    original.itemsize + ib)
+                        print ("Mem error on position %d wanted 0x%Lx and is "
+                               "0x%Lx" % (position, originalp[ib], inmemp[ib]))
+
+
+    def value_page(self, value):
+        """
+        Create page filled by value.
+
+        @param value: Byte value (0..255) we want to fill the page with.
+        @return: Array of bytes of size PAGE_SIZE.
+        """
+        a = array.array("B")
+        for i in range(PAGE_SIZE / a.itemsize):
+            try:
+                a.append(value)
+            except:
+                print "FAIL: Value can be only in range (0..255)"
+        return a
+
+
+    def random_page(self, seed):
+        """
+        Create page filled by static random series.
+
+        @param seed: Seed of random series.
+        @return: Static random array series.
+        """
+        random.seed(seed)
+        a = array.array(self.allocate_by)
+        for i in range(PAGE_SIZE / a.itemsize):
+            a.append(random.randrange(0, sys.maxint))
+        return a
+
+
+    def value_fill(self, value=None):
+        """
+        Fill memory page by page, with value generated with value_page.
+
+        @param value: Parameter to be passed to value_page. None to just use
+                what's on the attribute static_value.
+        """
+        self.f.seek(0)
+        if value is None:
+            value = self.static_value
+        page = self.value_page(value)
+        for pages in range(self.npages):
+            page.tofile(self.f)
+        print "PASS: Mem value fill"
+
+
+    def value_check(self, value=None):
+        """
+        Check memory to see if data is correct.
+
+        @param value: Parameter to be passed to value_page. None to just use
+                what's on the attribute static_value.
+        @return: if data in memory is correct return PASS
+                else print some wrong data and return FAIL
+        """
+        self.f.seek(0)
+        e = 2
+        failure = False
+        if value is None:
+            value = self.static_value
+        page = self.value_page(value)
+        for pages in range(self.npages):
+            pf = array.array("B")
+            pf.fromfile(self.f, PAGE_SIZE / pf.itemsize)
+            if not (page == pf):
+                failure = True
+                self.compare_page(page, pf)
+                e = e - 1
+                if e == 0:
+                    break
+        if failure:
+            print "FAIL: value verification"
+        else:
+            print "PASS: value verification"
+
+
+    def static_random_fill(self, n_bytes_on_end=PAGE_SIZE):
+        """
+        Fill memory by page with static random series with added special value
+        on random place in pages.
+
+        @param n_bytes_on_end: How many bytes at the end of the page can be changed.
+        @return: PASS.
+        """
+        self.f.seek(0)
+        page = self.random_page(self.random_key)
+        random.seed(self.random_key)
+        p = copy.copy(page)
+
+        t_start = datetime.datetime.now()
+        for pages in range(self.npages):
+            rand = random.randint(((PAGE_SIZE / page.itemsize) - 1) -
+                                  (n_bytes_on_end / page.itemsize),
+                                  (PAGE_SIZE/page.itemsize) - 1)
+            p[rand] = pages
+            p.tofile(self.f)
+            p[rand] = page[rand]
+
+        t_end = datetime.datetime.now()
+        delta = t_end - t_start
+        milisec = delta.microseconds / 1e3 + delta.seconds * 1e3
+        print "PASS: filling duration = %Ld ms" % milisec
+
+
+    def static_random_verify(self, n_bytes_on_end=PAGE_SIZE):
+        """
+        Check memory to see if it contains correct contents.
+
+        @return: if data in memory is correct return PASS
+                else print some wrong data and return FAIL.
+        """
+        self.f.seek(0)
+        e = 2
+        page = self.random_page(self.random_key)
+        random.seed(self.random_key)
+        p = copy.copy(page)
+        failure = False
+        for pages in range(self.npages):
+            rand = random.randint(((PAGE_SIZE/page.itemsize) - 1) -
+                                  (n_bytes_on_end/page.itemsize),
+                                  (PAGE_SIZE/page.itemsize) - 1)
+            p[rand] = pages
+            pf = array.array(self.allocate_by)
+            pf.fromfile(self.f, PAGE_SIZE / pf.itemsize)
+            if not (p == pf):
+                failure = True
+                self.compare_page(p, pf)
+                e = e - 1
+                if e == 0:
+                    break
+            p[rand] = page[rand]
+        if failure:
+            print "FAIL: Random series verification"
+        else:
+            print "PASS: Random series verification"
+
+
+def die():
+    """
+    Quit allocator.
+    """
+    exit(0)
+
+
+def main():
+    """
+    Main (infinite) loop of allocator.
+    """
+    print "PASS: Start"
+    end = False
+    while not end:
+        str = raw_input()
+        exec str
+
+
+if __name__ == "__main__":
+    main()
diff --git a/client/tests/kvm/tests/ksm_overcommit.py b/client/tests/kvm/tests/ksm_overcommit.py
new file mode 100644
index 0000000..2dd46c4
--- /dev/null
+++ b/client/tests/kvm/tests/ksm_overcommit.py
@@ -0,0 +1,559 @@
+import logging, time, random, string, math, os, tempfile
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+import kvm_subprocess, kvm_test_utils, kvm_utils, kvm_preprocessing
+
+
+def run_ksm_overcommit(test, params, env):
+    """
+    Test how KSM (Kernel Shared Memory) acts when more than the physical memory
+    is used. In the second part we also test how KVM handles a situation when
+    the host runs out of memory (it is expected to pause the guest system, wait
+    until some process returns memory and bring the guest back to life).
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+
+    def _start_allocator(vm, session, timeout):
+        """
+        Execute allocator.py on a guest, wait until it is initialized.
+
+        @param vm: VM object.
+        @param session: Remote session to a VM object.
+        @param timeout: Timeout that will be used to verify if allocator.py
+                started properly.
+        """
+        logging.debug("Starting allocator.py on guest %s", vm.name)
+        session.sendline("python /tmp/allocator.py")
+        (match, data) = session.read_until_last_line_matches(["PASS:", "FAIL:"],
+                                                             timeout)
+        if match == 1 or match is None:
+            raise error.TestFail("Command allocator.py on guest %s failed.\n"
+                                 "return code: %s\n output:\n%s" %
+                                 (vm.name, match, data))
+
+
+    def _execute_allocator(command, vm, session, timeout):
+        """
+        Execute a given command on allocator.py main loop, indicating the vm
+        the command was executed on.
+
+        @param command: Command that will be executed.
+        @param vm: VM object.
+        @param session: Remote session to VM object.
+        @param timeout: Timeout used to verify expected output.
+
+        @return: Tuple (match index, data)
+        """
+        logging.debug("Executing '%s' on allocator.py loop, vm: %s, timeout: %s",
+                      command, vm.name, timeout)
+        session.sendline(command)
+        (match, data) = session.read_until_last_line_matches(["PASS:","FAIL:"],
+                                                             timeout)
+        if match == 1 or match is None:
+            raise error.TestFail("Failed to execute '%s' on allocator.py, "
+                                 "vm: %s, output:\n%s" %
+                                 (command, vm.name, data))
+        return (match, data)
+
+
+    def initialize_guests():
+        """
+        Initialize guests (fill their memories with specified patterns).
+        """
+        logging.info("Phase 1: filling guest memory pages")
+        for session in lsessions:
+            vm = lvms[lsessions.index(session)]
+
+            logging.debug("Turning off swap on vm %s" % vm.name)
+            ret = session.get_command_status("swapoff -a", timeout=300)
+            if ret is None or ret:
+                raise error.TestFail("Failed to swapoff on VM %s" % vm.name)
+
+            # Start the allocator
+            _start_allocator(vm, session, 60 * perf_ratio)
+
+        # Execute allocator on guests
+        for i in range(0, vmsc):
+            vm = lvms[i]
+
+            a_cmd = "mem = MemFill(%d, %s, %s)" % (ksm_size, skeys[i], dkeys[i])
+            _execute_allocator(a_cmd, vm, lsessions[i], 60 * perf_ratio)
+
+            a_cmd = "mem.value_fill(%d)" % skeys[0]
+            _execute_allocator(a_cmd, vm, lsessions[i], 120 * perf_ratio)
+
+            # Let allocator.py do its job
+            # (until shared mem reaches expected value)
+            shm = 0
+            i = 0
+            logging.debug("Target shared meminfo for guest %s: %s", vm.name,
+                          ksm_size)
+            while shm < ksm_size:
+                if i > 64:
+                    logging.debug(kvm_test_utils.get_memory_info(lvms))
+                    raise error.TestError("SHM didn't merge the memory until "
+                                          "the DL on guest: %s" % vm.name)
+                st = ksm_size / 200 * perf_ratio
+                logging.debug("Waiting %ds before proceeding..." % st)
+                time.sleep(st)
+                shm = vm.get_shared_meminfo()
+                logging.debug("Shared meminfo for guest %s after "
+                              "iteration %s: %s", vm.name, i, shm)
+                i += 1
+
+        # Keep some reserve
+        rt = ksm_size / 200 * perf_ratio
+        logging.debug("Waiting %ds before proceeding...", rt)
+        time.sleep(rt)
+
+        logging.debug(kvm_test_utils.get_memory_info(lvms))
+        logging.info("Phase 1: PASS")
+
+
+    def separate_first_guest():
+        """
+        Separate memory of the first guest by generating special random series
+        """
+        logging.info("Phase 2: Split the pages on the first guest")
+
+        a_cmd = "mem.static_random_fill()"
+        (match, data) = _execute_allocator(a_cmd, lvms[0], lsessions[0],
+                                           120 * perf_ratio)
+
+        r_msg = data.splitlines()[-1]
+        logging.debug("Return message of static_random_fill: %s", r_msg)
+        out = int(r_msg.split()[4])
+        logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s", ksm_size, out,
+                     (ksm_size * 1000 / out))
+        logging.debug(kvm_test_utils.get_memory_info(lvms))
+        logging.debug("Phase 2: PASS")
+
+
+    def split_guest():
+        """
+        Sequential split of pages on guests up to memory limit
+        """
+        logging.info("Phase 3a: Sequential split of pages on guests up to "
+                     "memory limit")
+        last_vm = 0
+        session = None
+        vm = None
+        for i in range(1, vmsc):
+            vm = lvms[i]
+            session = lsessions[i]
+            a_cmd = "mem.static_random_fill()"
+            logging.debug("Executing %s on allocator.py loop, vm: %s",
+                          a_cmd, vm.name)
+            session.sendline(a_cmd)
+
+            out = ""
+            try:
+                logging.debug("Watching host memory while filling vm %s memory",
+                              vm.name)
+                while not out.startswith("PASS") and not out.startswith("FAIL"):
+                    free_mem = int(utils.read_from_meminfo("MemFree"))
+                    if (ksm_swap):
+                        free_mem = (free_mem +
+                                    int(utils.read_from_meminfo("SwapFree")))
+                    logging.debug("Free memory on host: %d" % (free_mem))
+
+                    # We need to keep some memory for python to run.
+                    if (free_mem < 64000) or (ksm_swap and
+                                              free_mem < (450000 * perf_ratio)):
+                        vm.send_monitor_cmd('stop')
+                        for j in range(0, i):
+                            lvms[j].destroy(gracefully = False)
+                        time.sleep(20)
+                        vm.send_monitor_cmd('c')
+                        logging.debug("Only %s free memory, killing %d guests" %
+                                      (free_mem, (i-1)))
+                        last_vm = i
+                        break
+                    out = session.read_nonblocking(0.1)
+                    time.sleep(2)
+            except OSError, (err):
+                logging.debug("Only %s host free memory, killing %d guests" %
+                              (free_mem, (i - 1)))
+                logging.debug("Stopping %s", vm.name)
+                vm.send_monitor_cmd('stop')
+                for j in range(0, i):
+                    logging.debug("Destroying %s", lvms[j].name)
+                    lvms[j].destroy(gracefully = False)
+                time.sleep(20)
+                vm.send_monitor_cmd('c')
+                last_vm = i
+
+            if last_vm != 0:
+                break
+            logging.debug("Memory filled for guest %s" % (vm.name))
+
+        logging.info("Phase 3a: PASS")
+
+        logging.info("Phase 3b: Check if memory in max loading guest is right")
+        for i in range(last_vm + 1, vmsc):
+            lsessions[i].close()
+            if i == (vmsc - 1):
+                logging.debug(kvm_test_utils.get_memory_info([lvms[i]]))
+            logging.debug("Destroying guest %s" % lvms[i].name)
+            lvms[i].destroy(gracefully = False)
+
+        # Verify last machine with randomly generated memory
+        a_cmd = "mem.static_random_verify()"
+        _execute_allocator(a_cmd, lvms[last_vm], session,
+                           (mem / 200 * 50 * perf_ratio))
+        logging.debug(kvm_test_utils.get_memory_info([lvms[last_vm]]))
+
+        (status, data) = lsessions[i].get_command_status_output("die()", 20)
+        lvms[last_vm].destroy(gracefully = False)
+        logging.info("Phase 3b: PASS")
+
+
+    def split_parallel():
+        """
+        Parallel page splitting
+        """
+        logging.info("Phase 1: parallel page splitting")
+        # We have to wait until the allocator is finished (it waits 5 seconds
+        # to clean the socket).
+
+        session = lsessions[0]
+        vm = lvms[0]
+        for i in range(1, max_alloc):
+            lsessions.append(kvm_utils.wait_for(vm.remote_login, 360, 0, 2))
+            if not lsessions[i]:
+                raise error.TestFail("Could not log into guest %s" %
+                                     vm.name)
+
+        ret = session.get_command_status("swapoff -a", timeout=300)
+        if ret != 0:
+            raise error.TestFail("Failed to turn off swap on %s" % vm.name)
+
+        for i in range(0, max_alloc):
+            # Start the allocator
+            _start_allocator(vm, lsessions[i], 60 * perf_ratio)
+
+        logging.info("Phase 1: PASS")
+
+        logging.info("Phase 2a: Simultaneous merging")
+        logging.debug("Memory used by allocator on guests = %dMB" %
+                     (ksm_size / max_alloc))
+
+        for i in range(0, max_alloc):
+            a_cmd = "mem = MemFill(%d, %s, %s)" % ((ksm_size / max_alloc),
+                                                   skeys[i], dkeys[i])
+            _execute_allocator(a_cmd, vm, lsessions[i], 60 * perf_ratio)
+
+            a_cmd = "mem.value_fill(%d)" % (skeys[0])
+            _execute_allocator(a_cmd, vm, lsessions[i], 90 * perf_ratio)
+
+        # Wait until allocator.py merges the pages (shared mem reaches ksm_size)
+        shm = 0
+        i = 0
+        logging.debug("Target shared memory size: %s", ksm_size)
+        while shm < ksm_size:
+            if i > 64:
+                logging.debug(kvm_test_utils.get_memory_info(lvms))
+                raise error.TestError("SHM didn't merge the memory until DL")
+            wt = ksm_size / 200 * perf_ratio
+            logging.debug("Waiting %ds before proceed...", wt)
+            time.sleep(wt)
+            shm = vm.get_shared_meminfo()
+            logging.debug("Shared meminfo after attempt %s: %s", i, shm)
+            i += 1
+
+        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.info("Phase 2a: PASS")
+
+        logging.info("Phase 2b: Simultaneous spliting")
+        # Actual splitting
+        for i in range(0, max_alloc):
+            a_cmd = "mem.static_random_fill()"
+            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
+                                               90 * perf_ratio)
+
+            data = data.splitlines()[-1]
+            logging.debug(data)
+            out = int(data.split()[4])
+            logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s" %
+                         ((ksm_size / max_alloc), out,
+                          (ksm_size * 1000 / out / max_alloc)))
+        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.info("Phase 2b: PASS")
+
+        logging.info("Phase 2c: Simultaneous verification")
+        for i in range(0, max_alloc):
+            a_cmd = "mem.static_random_verify()"
+            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
+                                               (mem / 200 * 50 * perf_ratio))
+        logging.info("Phase 2c: PASS")
+
+        logging.info("Phase 2d: Simultaneous merging")
+        # Actual merging (refill pages with the static value)
+        for i in range(0, max_alloc):
+            a_cmd = "mem.value_fill(%d)" % skeys[0]
+            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
+                                               120 * perf_ratio)
+        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.info("Phase 2d: PASS")
+
+        logging.info("Phase 2e: Simultaneous verification")
+        for i in range(0, max_alloc):
+            a_cmd = "mem.value_check(%d)" % skeys[0]
+            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
+                                               (mem / 200 * 50 * perf_ratio))
+        logging.info("Phase 2e: PASS")
+
+        logging.info("Phase 2f: Simultaneous spliting last 96B")
+        for i in range(0, max_alloc):
+            a_cmd = "mem.static_random_fill(96)"
+            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
+                                               60 * perf_ratio)
+
+            data = data.splitlines()[-1]
+            out = int(data.split()[4])
+            logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s",
+                         ksm_size/max_alloc, out,
+                         (ksm_size * 1000 / out / max_alloc))
+
+        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.info("Phase 2f: PASS")
+
+        logging.info("Phase 2g: Simultaneous verification last 96B")
+        for i in range(0, max_alloc):
+            a_cmd = "mem.static_random_verify(96)"
+            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
+                                               (mem / 200 * 50 * perf_ratio))
+        logging.debug(kvm_test_utils.get_memory_info([vm]))
+        logging.info("Phase 2g: PASS")
+
+        logging.debug("Cleaning up...")
+        for i in range(0, max_alloc):
+            lsessions[i].get_command_status_output("die()", 20)
+        session.close()
+        vm.destroy(gracefully = False)
+
+
+    # Main test code
+    logging.info("Starting phase 0: Initialization")
+    # host_reserve: mem reserve kept for the host system to run
+    host_reserve = int(params.get("ksm_host_reserve", 512))
+    # guest_reserve: mem reserve kept to avoid the guest OS killing processes
+    guest_reserve = int(params.get("ksm_guest_reserve", 1024))
+    logging.debug("Memory reserved for host to run: %d", host_reserve)
+    logging.debug("Memory reserved for guest to run: %d", guest_reserve)
+
+    max_vms = int(params.get("max_vms", 2))
+    overcommit = float(params.get("ksm_overcommit_ratio", 2.0))
+    max_alloc = int(params.get("ksm_parallel_ratio", 1))
+
+    # vmsc: count of all used VMs
+    vmsc = int(overcommit) + 1
+    vmsc = max(vmsc, max_vms)
+
+    if (params['ksm_mode'] == "serial"):
+        max_alloc = vmsc
+
+    host_mem = (int(utils.memtotal()) / 1024 - host_reserve)
+
+    ksm_swap = False
+    if params.get("ksm_swap") == "yes":
+        ksm_swap = True
+
+    # Performance ratio
+    perf_ratio = params.get("ksm_perf_ratio")
+    if perf_ratio:
+        perf_ratio = float(perf_ratio)
+    else:
+        perf_ratio = 1
+
+    if (params['ksm_mode'] == "parallel"):
+        vmsc = 1
+        overcommit = 1
+        mem = host_mem
+        # 32bit system adjustment
+        if not params['image_name'].endswith("64"):
+            logging.debug("Probably i386 guest architecture, "
+                          "max allocator mem = 2G")
+            # Guest can have more than 2G but
+            # kvm mem + 1MB (allocator itself) can't
+            if (host_mem > 3100):
+                mem = 3100
+
+        if os.popen("uname -i").readline().startswith("i386"):
+            logging.debug("Host is i386 architecture, max guest mem is 2G")
+            # Guest system with qemu overhead (64M) can't have more than 2G
+            if mem > 3100 - 64:
+                mem = 3100 - 64
+
+    else:
+        # mem: Memory of the guest systems. Maximum must be less than
+        # host's physical ram
+        mem = int(overcommit * host_mem / vmsc)
+
+        # 32bit system adjustment
+        if not params['image_name'].endswith("64"):
+            logging.debug("Probably i386 guest architecture, "
+                          "max allocator mem = 2G")
+            # Guest can have more than 2G but
+            # kvm mem + 1MB (allocator itself) can't
+            if mem - guest_reserve - 1 > 3100:
+                vmsc = int(math.ceil((host_mem * overcommit) /
+                                     (3100 + guest_reserve)))
+                mem = int(math.floor(host_mem * overcommit / vmsc))
+
+        if os.popen("uname -i").readline().startswith("i386"):
+            logging.debug("Host is i386 architecture, max guest mem is 2G")
+            # Guest system with qemu overhead (64M) can't have more than 2G
+            if mem > 3100 - 64:
+                vmsc = int(math.ceil((host_mem * overcommit) /
+                                     (3100 - 64.0)))
+                mem = int(math.floor(host_mem * overcommit / vmsc))
+
+    logging.debug("Checking KSM status...")
+    ksm_flag = 0
+    for line in os.popen('ksmctl info').readlines():
+        if line.startswith('flags'):
+            ksm_flag = int(line.split(' ')[1].split(',')[0])
+    if int(ksm_flag) != 1:
+        logging.info("KSM module is not loaded! Trying to load module and "
+                     "start ksmctl...")
+        try:
+            utils.run("modprobe ksm")
+            utils.run("ksmctl start 5000 100")
+        except error.CmdError, e:
+            raise error.TestFail("Failed to load KSM: %s" % e)
+    logging.debug("KSM module loaded and ksmctl started")
+
+    swap = int(utils.read_from_meminfo("SwapTotal")) / 1024
+
+    logging.debug("Overcommit = %f", overcommit)
+    logging.debug("True overcommit = %f ", (float(vmsc * mem) /
+                                            float(host_mem)))
+    logging.debug("Host memory = %dM", host_mem)
+    logging.debug("Guest memory = %dM", mem)
+    logging.debug("Using swap = %s", ksm_swap)
+    logging.debug("Swap = %dM", swap)
+    logging.debug("max_vms = %d", max_vms)
+    logging.debug("Count of all used VMs = %d", vmsc)
+    logging.debug("Performance_ratio = %f", perf_ratio)
+
+    # Generate unique keys for random series
+    skeys = []
+    dkeys = []
+    for i in range(0, max(vmsc, max_alloc)):
+        key = random.randrange(0, 255)
+        while key in skeys:
+            key = random.randrange(0, 255)
+        skeys.append(key)
+
+        key = random.randrange(0, 999)
+        while key in dkeys:
+            key = random.randrange(0, 999)
+        dkeys.append(key)
+
+    logging.debug("skeys: %s" % skeys)
+    logging.debug("dkeys: %s" % dkeys)
+
+    lvms = []
+    lsessions = []
+
+    # As we don't know the number and memory amount of VMs in advance,
+    # we need to specify and create them here (FIXME: not a nice thing)
+    vm_name = params.get("main_vm")
+    params['mem'] = mem
+    params['vms'] = vm_name
+    # Associate pidfile name
+    params['pid_' + vm_name] = kvm_utils.generate_tmp_file_name(vm_name,
+                                                                'pid')
+    if not params.get('extra_params'):
+        params['extra_params'] = ' '
+    params['extra_params_' + vm_name] = params.get('extra_params')
+    params['extra_params_' + vm_name] += (" -pidfile %s" %
+                                          (params.get('pid_' + vm_name)))
+    params['extra_params'] = params.get('extra_params_'+vm_name)
+
+    # ksm_size: amount of memory used by allocator
+    ksm_size = mem - guest_reserve
+    logging.debug("Memory used by allocator on guests = %dM" % (ksm_size))
+
+    # Creating the first guest
+    kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
+    lvms.append(kvm_utils.env_get_vm(env, vm_name))
+    if not lvms[0]:
+        raise error.TestError("VM object not found in environment")
+    if not lvms[0].is_alive():
+        raise error.TestError("VM seems to be dead; Test requires a living "
+                              "VM")
+
+    logging.debug("Booting first guest %s", lvms[0].name)
+
+    lsessions.append(kvm_utils.wait_for(lvms[0].remote_login, 360, 0, 2))
+    if not lsessions[0]:
+        raise error.TestFail("Could not log into first guest")
+    # Associate vm PID
+    try:
+        tmp = open(params.get('pid_' + vm_name), 'r')
+        params['pid_' + vm_name] = int(tmp.readline())
+    except:
+        raise error.TestFail("Could not get PID of %s" % (vm_name))
+
+    # Creating other guest systems
+    for i in range(1, vmsc):
+        vm_name = "vm" + str(i + 1)
+        params['pid_' + vm_name] = kvm_utils.generate_tmp_file_name(vm_name,
+                                                                    'pid')
+        params['extra_params_' + vm_name] = params.get('extra_params')
+        params['extra_params_' + vm_name] += (" -pidfile %s" %
+                                             (params.get('pid_' + vm_name)))
+        params['extra_params'] = params.get('extra_params_' + vm_name)
+
+        # Last VM is later used to run more allocators simultaneously
+        lvms.append(lvms[0].clone(vm_name, params))
+        kvm_utils.env_register_vm(env, vm_name, lvms[i])
+        params['vms'] += " " + vm_name
+
+        logging.debug("Booting guest %s" % lvms[i].name)
+        if not lvms[i].create():
+            raise error.TestFail("Cannot create VM %s" % lvms[i].name)
+        if not lvms[i].is_alive():
+            raise error.TestError("VM %s seems to be dead; Test requires a"
+                                  "living VM" % lvms[i].name)
+
+        lsessions.append(kvm_utils.wait_for(lvms[i].remote_login, 360, 0, 2))
+        if not lsessions[i]:
+            raise error.TestFail("Could not log into guest %s" %
+                                 lvms[i].name)
+        try:
+            tmp = open(params.get('pid_' + vm_name), 'r')
+            params['pid_' + vm_name] = int(tmp.readline())
+        except:
+            raise error.TestFail("Could not get PID of %s" % (vm_name))
+
+    # Let guests rest a little bit :-)
+    st = vmsc * 2 * perf_ratio
+    logging.debug("Waiting %ds before proceed", st)
+    time.sleep(vmsc * 2 * perf_ratio)
+    logging.debug(kvm_test_utils.get_memory_info(lvms))
+
+    # Copy allocator.py into guests
+    pwd = os.path.join(os.environ['AUTODIR'],'tests/kvm')
+    vksmd_src = os.path.join(pwd, "scripts/allocator.py")
+    dst_dir = "/tmp"
+    for vm in lvms:
+        if not vm.copy_files_to(vksmd_src, dst_dir):
+            raise error.TestFail("copy_files_to failed %s" % vm.name)
+    logging.info("Phase 0: PASS")
+
+    if params['ksm_mode'] == "parallel":
+        logging.info("Starting KSM test parallel mode")
+        split_parallel()
+        logging.info("KSM test parallel mode: PASS")
+    elif params['ksm_mode'] == "serial":
+        logging.info("Starting KSM test serial mode")
+        initialize_guests()
+        separate_first_guest()
+        split_guest()
+        logging.info("KSM test serial mode: PASS")
diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index e9fdd05..4516ed0 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -255,6 +255,28 @@ variants:
         type = physical_resources_check
         catch_uuid_cmd = dmidecode | awk -F: '/UUID/ {print $2}'
 
+    - ksm_overcommit:
+        # Don't preprocess any vms as we need to change their params
+        vms = ''
+        image_snapshot = yes
+        kill_vm_gracefully = no
+        type = ksm_overcommit
+        # Make host use swap (a value of 'no' will turn off host swap)
+        ksm_swap = yes
+        no hugepages
+        # Overcommit ratio of host memory
+        ksm_overcommit_ratio = 3
+        # Maximum number of parallel allocator runs per machine
+        ksm_parallel_ratio = 4
+        # Host memory reserve
+        ksm_host_reserve = 512
+        ksm_guest_reserve = 1024
+        variants:
+            - ksm_serial:
+                ksm_mode = "serial"
+            - ksm_parallel:
+                ksm_mode = "parallel"
+
     # system_powerdown, system_reset and shutdown *must* be the last ones
     # defined (in this order), since the effect of such tests can leave
     # the VM on a bad state.
@@ -278,6 +300,7 @@ variants:
         kill_vm_gracefully = no
     # Do not define test variants below shutdown
 
+
 # NICs
 variants:
     - @rtl8139:
-- 
1.6.6.1

_______________________________________________
Autotest mailing list
Autotest@test.kernel.org
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest


* Re: [PATCH] KVM test: KSM overcommit test v4
  2010-02-22 20:29 [PATCH] KVM test: KSM overcommit test v4 Lucas Meneghel Rodrigues
@ 2010-02-25 20:14 ` Lucas Meneghel Rodrigues
  0 siblings, 0 replies; 2+ messages in thread
From: Lucas Meneghel Rodrigues @ 2010-02-25 20:14 UTC (permalink / raw)
  To: autotest; +Cc: kvm

FYI, test applied:

http://autotest.kernel.org/changeset/4264

On Mon, Feb 22, 2010 at 5:29 PM, Lucas Meneghel Rodrigues
<lmr@redhat.com> wrote:
> This is an implementation of KSM testing. The basic idea behind the
> test is to start guests, copy the script allocator.py to them. Once
> executed, the process accepts input commands on its main loop.
>
> The script will allow to fill up memory pages of the guests, according
> to patterns, this way it's possible to have a large portion of guests
> memory exactly the same, so KSM can do memory merge.
>
> Then we can easily split memory by filling memory of some guests with
> other values, and verify how KSM behaves according to each operation.
>
>  Test :
>  a] serial
>   1) initialize, merge all mem to single page
>   2) separate first guset mem
>   3) separate rest of guest up to fill all mem
>   4) kill all guests except for the last
>   5) check if mem of last guest is ok
>   6) kill guest
>
>  b] parallel
>   1) initialize, merge all mem to single page
>   2) separate mem of guest
>   3) verification of guest mem
>   4) merge mem to one block
>   5) verification of guests mem
>   6) separate mem of guests by 96B
>   7) check if mem is all right
>   8) kill guest
>
>  allocator.py (client side script)
>   After start they wait for command witch they make in client side.
>   mem_fill class implement commands to fill, check mem and return
>   error to host.
>
> Notes:
> Jiri and Lukáš, please verify this last version, we are close
> to commiting this test, at last.
>
> Changelog:
>
> v4 - Cleanup and bugfixing for the test, after a good round of
> testing:
>  * Moved the host_reserve_memory and guest_reserve_memory to
> the config file, in order to have more flexibility. The 256MB
> default memory reserve for guests was making guests to run out
> of memory and trigger the kernel OOM killer, frequently killing
> the ssh daemon and ruining the test, on a bare bones Fedora 12
> guest.
>  * Fixed up debug and info messages in general to be more clear
> and consistent
>  * The parallel test had 2 repeated operations, mem_fill(value)
> and another mem_fill(value). From the logging messages, it
> became clear the author meant a mem_fill(value) followed by a
> mem_check(value)
>  * Made MemFill.check_value() to accept a value, suitable for
> testing memory contents that were not filled by the default
> static value
>  * Factored allocator.py operations to a function, saved up a
> good deal of code.
>  * General code cleanup and coding style
>
> Signed-off-by: Jiri Zupka <jzupka@redhat.com>
> Signed-off-by: Lukáš Doktor<ldoktor@redhat.com>
> Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
> ---
>  client/tests/kvm/kvm_test_utils.py       |   36 ++-
>  client/tests/kvm/kvm_utils.py            |   16 +
>  client/tests/kvm/kvm_vm.py               |   17 +
>  client/tests/kvm/scripts/allocator.py    |  234 +++++++++++++
>  client/tests/kvm/tests/ksm_overcommit.py |  559 ++++++++++++++++++++++++++++++
>  client/tests/kvm/tests_base.cfg.sample   |   23 ++
>  6 files changed, 884 insertions(+), 1 deletions(-)
>  create mode 100644 client/tests/kvm/scripts/allocator.py
>  create mode 100644 client/tests/kvm/tests/ksm_overcommit.py
>
> diff --git a/client/tests/kvm/kvm_test_utils.py b/client/tests/kvm/kvm_test_utils.py
> index 02ec0cf..7d96d6e 100644
> --- a/client/tests/kvm/kvm_test_utils.py
> +++ b/client/tests/kvm/kvm_test_utils.py
> @@ -22,7 +22,8 @@ More specifically:
>  """
>
>  import time, os, logging, re, commands
> -from autotest_lib.client.common_lib import utils, error
> +from autotest_lib.client.common_lib import error
> +from autotest_lib.client.bin import utils
>  import kvm_utils, kvm_vm, kvm_subprocess
>
>
> @@ -203,3 +204,36 @@ def get_time(session, time_command, time_filter_re, time_format):
>     s = re.findall(time_filter_re, s)[0]
>     guest_time = time.mktime(time.strptime(s, time_format))
>     return (host_time, guest_time)
> +
> +
> +def get_memory_info(lvms):
> +    """
> +    Get memory information from host and guests in format:
> +    Host: memfree = XXXM; Guests memsh = {XXX,XXX,...}
> +
> +    @params lvms: List of VM objects
> +    @return: String with memory info report
> +    """
> +    if not isinstance(lvms, list):
> +        raise error.TestError("Invalid list passed to get_stat: %s " % lvms)
> +
> +    try:
> +        meminfo = "Host: memfree = "
> +        meminfo += str(int(utils.freememtotal()) / 1024) + "M; "
> +        meminfo += "swapfree = "
> +        mf = int(utils.read_from_meminfo("SwapFree")) / 1024
> +        meminfo += str(mf) + "M; "
> +    except Exception, e:
> +        raise error.TestFail("Could not fetch host free memory info, "
> +                             "reason: %s" % e)
> +
> +    meminfo += "Guests memsh = {"
> +    for vm in lvms:
> +        shm = vm.get_shared_meminfo()
> +        if shm is None:
> +            raise error.TestError("Could not get shared meminfo from "
> +                                  "VM %s" % vm)
> +        meminfo += "%dM; " % shm
> +    meminfo = meminfo[0:-2] + "}"
> +
> +    return meminfo
> diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
> index e155951..4565dc1 100644
> --- a/client/tests/kvm/kvm_utils.py
> +++ b/client/tests/kvm/kvm_utils.py
> @@ -696,6 +696,22 @@ def generate_random_string(length):
>     return str
>
>
> +def generate_tmp_file_name(file, ext=None, dir='/tmp/'):
> +    """
> +    Returns a temporary file name. The file is not created.
> +    """
> +    while True:
> +        file_name = (file + '-' + time.strftime("%Y%m%d-%H%M%S-") +
> +                     generate_random_string(4))
> +        if ext:
> +            file_name += '.' + ext
> +        file_name = os.path.join(dir, file_name)
> +        if not os.path.exists(file_name):
> +            break
> +
> +    return file_name
> +
> +
>  def format_str_for_message(str):
>     """
>     Format str so that it can be appended to a message.
> diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
> index c166ac9..921414d 100755
> --- a/client/tests/kvm/kvm_vm.py
> +++ b/client/tests/kvm/kvm_vm.py
> @@ -748,6 +748,23 @@ class VM:
>         return self.process.get_pid()
>
>
> +    def get_shared_meminfo(self):
> +        """
> +        Returns the VM's shared memory information.
> +
> +        @return: Shared memory used by VM (MB)
> +        """
> +        if self.is_dead():
> +            logging.error("Could not get shared memory info from dead VM.")
> +            return None
> +
> +        cmd = "cat /proc/%d/statm" % self.params.get('pid_' + self.name)
> +        shm = int(os.popen(cmd).readline().split()[2])
> +        # statm stores informations in pages, translate it to MB
> +        shm = shm * 4 / 1024
> +        return shm
> +
> +
>     def remote_login(self, nic_index=0, timeout=10):
>         """
>         Log into the guest via SSH/Telnet/Netcat.
> diff --git a/client/tests/kvm/scripts/allocator.py b/client/tests/kvm/scripts/allocator.py
> new file mode 100644
> index 0000000..1036893
> --- /dev/null
> +++ b/client/tests/kvm/scripts/allocator.py
> @@ -0,0 +1,234 @@
> +#!/usr/bin/python
> +# -*- coding: utf-8 -*-
> +"""
> +Auxiliary script used to allocate memory on guests.
> +
> +@copyright: 2008-2009 Red Hat Inc.
> +@author: Jiri Zupka (jzupka@redhat.com)
> +"""
> +
> +
> +import os, array, sys, struct, random, copy, inspect, tempfile, datetime
> +
> +PAGE_SIZE = 4096 # machine page size
> +
> +
> +class MemFill(object):
> +    """
> +    Fills guest memory according to certain patterns.
> +    """
> +    def __init__(self, mem, static_value, random_key):
> +        """
> +        Constructor of MemFill class.
> +
> +        @param mem: Amount of test memory in MB.
> +        @param random_key: Seed of random series used for fill up memory.
> +        @param static_value: Value used to fill all memory.
> +        """
> +        if (static_value < 0 or static_value > 255):
> +            print ("FAIL: Initialization static value"
> +                   "can be only in range (0..255)")
> +            return
> +
> +        self.tmpdp = tempfile.mkdtemp()
> +        ret_code = os.system("mount -o size=%dM tmpfs %s -t tmpfs" %
> +                             ((mem + 25), self.tmpdp))
> +        if ret_code != 0:
> +            if os.getuid() != 0:
> +                print ("FAIL: Unable to mount tmpfs "
> +                       "(likely cause: you are not root)")
> +            else:
> +                print "FAIL: Unable to mount tmpfs"
> +        else:
> +            self.f = tempfile.TemporaryFile(prefix='mem', dir=self.tmpdp)
> +            self.allocate_by = 'L'
> +            self.npages = (mem * 1024 * 1024) / PAGE_SIZE
> +            self.random_key = random_key
> +            self.static_value = static_value
> +            print "PASS: Initialization"
> +
> +
> +    def __del__(self):
> +        if os.path.ismount(self.tmpdp):
> +            self.f.close()
> +            os.system("umount %s" % (self.tmpdp))
> +
> +
> +    def compare_page(self, original, inmem):
> +        """
> +        Compare pages of memory and print the differences found.
> +
> +        @param original: Data that was expected to be in memory.
> +        @param inmem: Data in memory.
> +        """
> +        for ip in range(PAGE_SIZE / original.itemsize):
> +            if (not original[ip] == inmem[ip]): # find which item is wrong
> +                originalp = array.array("B")
> +                inmemp = array.array("B")
> +                originalp.fromstring(original[ip:ip+1].tostring())
> +                inmemp.fromstring(inmem[ip:ip+1].tostring())
> +                for ib in range(len(originalp)): # find wrong byte in item
> +                    if not (originalp[ib] == inmemp[ib]):
> +                        position = (self.f.tell() - PAGE_SIZE + ip *
> +                                    original.itemsize + ib)
> +                        print ("Mem error on position %d wanted 0x%Lx and is "
> +                               "0x%Lx" % (position, originalp[ib], inmemp[ib]))
> +
> +
> +    def value_page(self, value):
> +        """
> +        Create page filled by value.
> +
> +        @param value: String we want to fill the page with.
> +        @return: return array of bytes size PAGE_SIZE.
> +        """
> +        a = array.array("B")
> +        for i in range(PAGE_SIZE / a.itemsize):
> +            try:
> +                a.append(value)
> +            except:
> +                print "FAIL: Value can be only in range (0..255)"
> +        return a
> +
> +
> +    def random_page(self, seed):
> +        """
> +        Create page filled by static random series.
> +
> +        @param seed: Seed of random series.
> +        @return: Static random array series.
> +        """
> +        random.seed(seed)
> +        a = array.array(self.allocate_by)
> +        for i in range(PAGE_SIZE / a.itemsize):
> +            a.append(random.randrange(0, sys.maxint))
> +        return a
> +
> +
> +    def value_fill(self, value=None):
> +        """
> +        Fill memory page by page, with value generated with value_page.
> +
> +        @param value: Parameter to be passed to value_page. None to just use
> +                what's on the attribute static_value.
> +        """
> +        self.f.seek(0)
> +        if value is None:
> +            value = self.static_value
> +        page = self.value_page(value)
> +        for pages in range(self.npages):
> +            page.tofile(self.f)
> +        print "PASS: Mem value fill"
> +
> +
> +    def value_check(self, value=None):
> +        """
> +        Check memory to see if data is correct.
> +
> +        @param value: Parameter to be passed to value_page. None to just use
> +                what's on the attribute static_value.
> +        @return: if data in memory is correct return PASS
> +                else print some wrong data and return FAIL
> +        """
> +        self.f.seek(0)
> +        e = 2
> +        failure = False
> +        if value is None:
> +            value = self.static_value
> +        page = self.value_page(value)
> +        for pages in range(self.npages):
> +            pf = array.array("B")
> +            pf.fromfile(self.f, PAGE_SIZE / pf.itemsize)
> +            if not (page == pf):
> +                failure = True
> +                self.compare_page(page, pf)
> +                e = e - 1
> +                if e == 0:
> +                    break
> +        if failure:
> +            print "FAIL: value verification"
> +        else:
> +            print "PASS: value verification"
> +
> +
> +    def static_random_fill(self, n_bytes_on_end=PAGE_SIZE):
> +        """
> +        Fill memory by page with static random series with added special value
> +        on random place in pages.
> +
> +        @param n_bytes_on_end: how many bytes on the end of page can be changed.
> +        @return: PASS.
> +        """
> +        self.f.seek(0)
> +        page = self.random_page(self.random_key)
> +        random.seed(self.random_key)
> +        p = copy.copy(page)
> +
> +        t_start = datetime.datetime.now()
> +        for pages in range(self.npages):
> +            rand = random.randint(((PAGE_SIZE / page.itemsize) - 1) -
> +                                  (n_bytes_on_end / page.itemsize),
> +                                  (PAGE_SIZE/page.itemsize) - 1)
> +            p[rand] = pages
> +            p.tofile(self.f)
> +            p[rand] = page[rand]
> +
> +        t_end = datetime.datetime.now()
> +        delta = t_end - t_start
> +        milisec = delta.microseconds / 1e3 + delta.seconds * 1e3
> +        print "PASS: filling duration = %Ld ms" % milisec
> +
> +
> +    def static_random_verify(self, n_bytes_on_end=PAGE_SIZE):
> +        """
> +        Check memory to see if it contains correct contents.
> +
> +        @return: if data in memory is correct return PASS
> +                else print some wrong data and return FAIL.
> +        """
> +        self.f.seek(0)
> +        e = 2
> +        page = self.random_page(self.random_key)
> +        random.seed(self.random_key)
> +        p = copy.copy(page)
> +        failure = False
> +        for pages in range(self.npages):
> +            rand = random.randint(((PAGE_SIZE/page.itemsize) - 1) -
> +                                  (n_bytes_on_end/page.itemsize),
> +                                  (PAGE_SIZE/page.itemsize) - 1)
> +            p[rand] = pages
> +            pf = array.array(self.allocate_by)
> +            pf.fromfile(self.f, PAGE_SIZE / pf.itemsize)
> +            if not (p == pf):
> +                failure = True
> +                self.compare_page(p, pf)
> +                e = e - 1
> +                if e == 0:
> +                    break
> +            p[rand] = page[rand]
> +        if failure:
> +            print "FAIL: Random series verification"
> +        else:
> +            print "PASS: Random series verification"
> +
> +
> +def die():
> +    """
> +    Quit allocator.
> +    """
> +    exit(0)
> +
> +
> +def main():
> +    """
> +    Main (infinite) loop of allocator.
> +    """
> +    print "PASS: Start"
> +    end = False
> +    while not end:
> +        str = raw_input()
> +        exec str
> +
> +
> +if __name__ == "__main__":
> +    main()
> diff --git a/client/tests/kvm/tests/ksm_overcommit.py b/client/tests/kvm/tests/ksm_overcommit.py
> new file mode 100644
> index 0000000..2dd46c4
> --- /dev/null
> +++ b/client/tests/kvm/tests/ksm_overcommit.py
> @@ -0,0 +1,559 @@
> +import logging, time, random, string, math, os, tempfile
> +from autotest_lib.client.common_lib import error
> +from autotest_lib.client.bin import utils
> +import kvm_subprocess, kvm_test_utils, kvm_utils, kvm_preprocessing
> +
> +
> +def run_ksm_overcommit(test, params, env):
> +    """
> +    Test how KSM (Kernel Shared Memory) act when more than physical memory is
> +    used. In second part we also test how KVM handles a situation when the host
> +    runs out of memory (it is expected to pause the guest system, wait until
> +    some process returns memory and bring the guest back to life)
> +
> +    @param test: kvm test object.
> +    @param params: Dictionary with test parameters.
> +    @param env: Dictionary with the test wnvironment.
> +    """
> +
> +    def _start_allocator(vm, session, timeout):
> +        """
> +        Execute allocator.py on a guest, wait until it is initialized.
> +
> +        @param vm: VM object.
> +        @param session: Remote session to a VM object.
> +        @param timeout: Timeout that will be used to verify if allocator.py
> +                started properly.
> +        """
> +        logging.debug("Starting allocator.py on guest %s", vm.name)
> +        session.sendline("python /tmp/allocator.py")
> +        (match, data) = session.read_until_last_line_matches(["PASS:", "FAIL:"],
> +                                                             timeout)
> +        if match == 1 or match is None:
> +            raise error.TestFail("Command allocator.py on guest %s failed.\n"
> +                                 "return code: %s\n output:\n%s" %
> +                                 (vm.name, match, data))
> +
> +
> +    def _execute_allocator(command, vm, session, timeout):
> +        """
> +        Execute a given command on allocator.py main loop, indicating the vm
> +        the command was executed on.
> +
> +        @param command: Command that will be executed.
> +        @param vm: VM object.
> +        @param session: Remote session to VM object.
> +        @param timeout: Timeout used to verify expected output.
> +
> +        @return: Tuple (match index, data)
> +        """
> +        logging.debug("Executing '%s' on allocator.py loop, vm: %s, timeout: %s",
> +                      command, vm.name, timeout)
> +        session.sendline(command)
> +        (match, data) = session.read_until_last_line_matches(["PASS:","FAIL:"],
> +                                                             timeout)
> +        if match == 1 or match is None:
> +            raise error.TestFail("Failed to execute '%s' on allocator.py, "
> +                                 "vm: %s, output:\n%s" %
> +                                 (command, vm.name, data))
> +        return (match, data)
> +
> +
> +    def initialize_guests():
> +        """
> +        Initialize guests (fill their memories with specified patterns).
> +        """
> +        logging.info("Phase 1: filling guest memory pages")
> +        for session in lsessions:
> +            vm = lvms[lsessions.index(session)]
> +
> +            logging.debug("Turning off swap on vm %s" % vm.name)
> +            ret = session.get_command_status("swapoff -a", timeout=300)
> +            if ret is None or ret:
> +                raise error.TestFail("Failed to swapoff on VM %s" % vm.name)
> +
> +            # Start the allocator
> +            _start_allocator(vm, session, 60 * perf_ratio)
> +
> +        # Execute allocator on guests
> +        for i in range(0, vmsc):
> +            vm = lvms[i]
> +
> +            a_cmd = "mem = MemFill(%d, %s, %s)" % (ksm_size, skeys[i], dkeys[i])
> +            _execute_allocator(a_cmd, vm, lsessions[i], 60 * perf_ratio)
> +
> +            a_cmd = "mem.value_fill(%d)" % skeys[0]
> +            _execute_allocator(a_cmd, vm, lsessions[i], 120 * perf_ratio)
> +
> +            # Let allocator.py do its job
> +            # (until shared mem reaches expected value)
> +            shm = 0
> +            i = 0
> +            logging.debug("Target shared meminfo for guest %s: %s", vm.name,
> +                          ksm_size)
> +            while shm < ksm_size:
> +                if i > 64:
> +                    logging.debug(kvm_test_utils.get_memory_info(lvms))
> +                    raise error.TestError("SHM didn't merge the memory until "
> +                                          "the DL on guest: %s" % vm.name)
> +                st = ksm_size / 200 * perf_ratio
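> +                # Sleep roughly 1 second per 200 MB of allocator memory,
> +                # scaled by perf_ratio, between polls of the shared meminfo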
> +                logging.debug("Waiting %ds before proceeding..." % st)
> +                time.sleep(st)
> +                shm = vm.get_shared_meminfo()
> +                logging.debug("Shared meminfo for guest %s after "
> +                              "iteration %s: %s", vm.name, i, shm)
> +                i += 1
> +
> +        # Keep some reserve
> +        rt = ksm_size / 200 * perf_ratio
> +        logging.debug("Waiting %ds before proceeding...", rt)
> +        time.sleep(rt)
> +
> +        logging.debug(kvm_test_utils.get_memory_info(lvms))
> +        logging.info("Phase 1: PASS")
> +
> +
> +    def separate_first_guest():
> +        """
> +        Separate memory of the first guest by generating special random series
> +        """
> +        logging.info("Phase 2: Split the pages on the first guest")
> +
> +        a_cmd = "mem.static_random_fill()"
> +        (match, data) = _execute_allocator(a_cmd, lvms[0], lsessions[0],
> +                                           120 * perf_ratio)
> +
> +        r_msg = data.splitlines()[-1]
> +        logging.debug("Return message of static_random_fill: %s", r_msg)
> +        out = int(r_msg.split()[4])
> +        logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s", ksm_size, out,
> +                     (ksm_size * 1000 / out))
> +        logging.debug(kvm_test_utils.get_memory_info(lvms))
> +        logging.debug("Phase 2: PASS")
> +
> +
> +    def split_guest():
> +        """
> +        Sequential split of pages on guests up to memory limit
> +        """
> +        logging.info("Phase 3a: Sequential split of pages on guests up to "
> +                     "memory limit")
> +        last_vm = 0
> +        session = None
> +        vm = None
> +        for i in range(1, vmsc):
> +            vm = lvms[i]
> +            session = lsessions[i]
> +            a_cmd = "mem.static_random_fill()"
> +            logging.debug("Executing %s on allocator.py loop, vm: %s",
> +                          a_cmd, vm.name)
> +            session.sendline(a_cmd)
> +
> +            out = ""
> +            try:
> +                logging.debug("Watching host memory while filling vm %s memory",
> +                              vm.name)
> +                while not out.startswith("PASS") and not out.startswith("FAIL"):
> +                    free_mem = int(utils.read_from_meminfo("MemFree"))
> +                    if (ksm_swap):
> +                        free_mem = (free_mem +
> +                                    int(utils.read_from_meminfo("SwapFree")))
> +                    logging.debug("Free memory on host: %d" % (free_mem))
> +
> +                    # We need to keep some memory for python to run.
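> +                    # MemFree/SwapFree are in kB, so the limits below are
> +                    # roughly 64 MB without swap and 450 MB * perf_ratio
> +                    # with swap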
> +                    if (free_mem < 64000) or (ksm_swap and
> +                                              free_mem < (450000 * perf_ratio)):
> +                        vm.send_monitor_cmd('stop')
> +                        for j in range(0, i):
> +                            lvms[j].destroy(gracefully = False)
> +                        time.sleep(20)
> +                        vm.send_monitor_cmd('c')
> +                        logging.debug("Only %s free memory, killing %d guests" %
> +                                      (free_mem, (i-1)))
> +                        last_vm = i
> +                        break
> +                    out = session.read_nonblocking(0.1)
> +                    time.sleep(2)
> +            except OSError, (err):
> +                logging.debug("Only %s host free memory, killing %d guests" %
> +                              (free_mem, (i - 1)))
> +                logging.debug("Stopping %s", vm.name)
> +                vm.send_monitor_cmd('stop')
> +                for j in range(0, i):
> +                    logging.debug("Destroying %s", lvms[j].name)
> +                    lvms[j].destroy(gracefully = False)
> +                time.sleep(20)
> +                vm.send_monitor_cmd('c')
> +                last_vm = i
> +
> +            if last_vm != 0:
> +                break
> +            logging.debug("Memory filled for guest %s" % (vm.name))
> +
> +        logging.info("Phase 3a: PASS")
> +
> +        logging.info("Phase 3b: Check if memory in max loading guest is right")
> +        for i in range(last_vm + 1, vmsc):
> +            lsessions[i].close()
> +            if i == (vmsc - 1):
> +                logging.debug(kvm_test_utils.get_memory_info([lvms[i]]))
> +            logging.debug("Destroying guest %s" % lvms[i].name)
> +            lvms[i].destroy(gracefully = False)
> +
> +        # Verify last machine with randomly generated memory
> +        a_cmd = "mem.static_random_verify()"
> +        _execute_allocator(a_cmd, lvms[last_vm], session,
> +                           (mem / 200 * 50 * perf_ratio))
> +        logging.debug(kvm_test_utils.get_memory_info([lvms[last_vm]]))
> +
> +        lsessions[last_vm].get_command_status_output("die()", 20)
> +        lvms[last_vm].destroy(gracefully = False)
> +        logging.info("Phase 3b: PASS")
> +
> +
> +    def split_parallel():
> +        """
> +        Parallel page splitting.
> +        """
> +        logging.info("Phase 1: parallel page splitting")
> +        # We have to wait until the allocator finishes (it waits 5 seconds
> +        # to clean up the socket)
> +
> +        session = lsessions[0]
> +        vm = lvms[0]
> +        for i in range(1, max_alloc):
> +            lsessions.append(kvm_utils.wait_for(vm.remote_login, 360, 0, 2))
> +            if not lsessions[i]:
> +                raise error.TestFail("Could not log into guest %s" %
> +                                     vm.name)
> +
> +        ret = session.get_command_status("swapoff -a", timeout=300)
> +        if ret != 0:
> +            raise error.TestFail("Failed to turn off swap on %s" % vm.name)
> +
> +        for i in range(0, max_alloc):
> +            # Start the allocator
> +            _start_allocator(vm, lsessions[i], 60 * perf_ratio)
> +
> +        logging.info("Phase 1: PASS")
> +
> +        logging.info("Phase 2a: Simultaneous merging")
> +        logging.debug("Memory used by allocator on guests = %dMB" %
> +                     (ksm_size / max_alloc))
> +
> +        for i in range(0, max_alloc):
> +            a_cmd = "mem = MemFill(%d, %s, %s)" % ((ksm_size / max_alloc),
> +                                                   skeys[i], dkeys[i])
> +            _execute_allocator(a_cmd, vm, lsessions[i], 60 * perf_ratio)
> +
> +            a_cmd = "mem.value_fill(%d)" % (skeys[0])
> +            _execute_allocator(a_cmd, vm, lsessions[i], 90 * perf_ratio)
> +
> +        # Wait until KSM merges the pages (shared memory reaches ksm_size)
> +        shm = 0
> +        i = 0
> +        logging.debug("Target shared memory size: %s", ksm_size)
> +        while shm < ksm_size:
> +            if i > 64:
> +                logging.debug(kvm_test_utils.get_memory_info(lvms))
> +                raise error.TestError("SHM didn't merge the memory until DL")
> +            wt = ksm_size / 200 * perf_ratio
> +            logging.debug("Waiting %ds before proceed...", wt)
> +            time.sleep(wt)
> +            shm = vm.get_shared_meminfo()
> +            logging.debug("Shared meminfo after attempt %s: %s", i, shm)
> +            i += 1
> +
> +        logging.debug(kvm_test_utils.get_memory_info([vm]))
> +        logging.info("Phase 2a: PASS")
> +
> +        logging.info("Phase 2b: Simultaneous spliting")
> +        # Actual splitting
> +        for i in range(0, max_alloc):
> +            a_cmd = "mem.static_random_fill()"
> +            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
> +                                               90 * perf_ratio)
> +
> +            data = data.splitlines()[-1]
> +            logging.debug(data)
> +            out = int(data.split()[4])
> +            logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s" %
> +                         ((ksm_size / max_alloc), out,
> +                          (ksm_size * 1000 / out / max_alloc)))
> +        logging.debug(kvm_test_utils.get_memory_info([vm]))
> +        logging.info("Phase 2b: PASS")
> +
> +        logging.info("Phase 2c: Simultaneous verification")
> +        for i in range(0, max_alloc):
> +            a_cmd = "mem.static_random_verify()"
> +            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
> +                                               (mem / 200 * 50 * perf_ratio))
> +        logging.info("Phase 2c: PASS")
> +
> +        logging.info("Phase 2d: Simultaneous merging")
> +        # Actual merging
> +        for i in range(0, max_alloc):
> +            a_cmd = "mem.value_fill(%d)" % skeys[0]
> +            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
> +                                               120 * perf_ratio)
> +        logging.debug(kvm_test_utils.get_memory_info([vm]))
> +        logging.info("Phase 2d: PASS")
> +
> +        logging.info("Phase 2e: Simultaneous verification")
> +        for i in range(0, max_alloc):
> +            a_cmd = "mem.value_check(%d)" % skeys[0]
> +            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
> +                                               (mem / 200 * 50 * perf_ratio))
> +        logging.info("Phase 2e: PASS")
> +
> +        logging.info("Phase 2f: Simultaneous spliting last 96B")
> +        for i in range(0, max_alloc):
> +            a_cmd = "mem.static_random_fill(96)"
> +            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
> +                                               60 * perf_ratio)
> +
> +            data = data.splitlines()[-1]
> +            out = int(data.split()[4])
> +            logging.debug("Performance: %dMB * 1000 / %dms = %dMB/s",
> +                         ksm_size/max_alloc, out,
> +                         (ksm_size * 1000 / out / max_alloc))
> +
> +        logging.debug(kvm_test_utils.get_memory_info([vm]))
> +        logging.info("Phase 2f: PASS")
> +
> +        logging.info("Phase 2g: Simultaneous verification last 96B")
> +        for i in range(0, max_alloc):
> +            a_cmd = "mem.static_random_verify(96)"
> +            (match, data) = _execute_allocator(a_cmd, vm, lsessions[i],
> +                                               (mem / 200 * 50 * perf_ratio))
> +        logging.debug(kvm_test_utils.get_memory_info([vm]))
> +        logging.info("Phase 2g: PASS")
> +
> +        logging.debug("Cleaning up...")
> +        for i in range(0, max_alloc):
> +            lsessions[i].get_command_status_output("die()", 20)
> +        session.close()
> +        vm.destroy(gracefully = False)
> +
> +
> +    # Main test code
> +    logging.info("Starting phase 0: Initialization")
> +    # host_reserve: mem reserve kept for the host system to run
> +    host_reserve = int(params.get("ksm_host_reserve", 512))
> +    # guest_reserve: mem reserve kept so the guest OS does not start killing
> +    # processes (OOM killer)
> +    guest_reserve = int(params.get("ksm_guest_reserve", 1024))
> +    logging.debug("Memory reserved for host to run: %d", host_reserve)
> +    logging.debug("Memory reserved for guest to run: %d", guest_reserve)
> +
> +    max_vms = int(params.get("max_vms", 2))
> +    overcommit = float(params.get("ksm_overcommit_ratio", 2.0))
> +    max_alloc = int(params.get("ksm_parallel_ratio", 1))
> +
> +    # vmsc: count of all used VMs
> +    vmsc = int(overcommit) + 1
> +    vmsc = max(vmsc, max_vms)
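> +    # Example (hypothetical values): ksm_overcommit_ratio = 2.0 and
> +    # max_vms = 2 give vmsc = max(int(2.0) + 1, 2) = 3 guests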
> +
> +    if (params['ksm_mode'] == "serial"):
> +        max_alloc = vmsc
> +
> +    host_mem = (int(utils.memtotal()) / 1024 - host_reserve)
> +
> +    ksm_swap = False
> +    if params.get("ksm_swap") == "yes":
> +        ksm_swap = True
> +
> +    # Performance ratio
> +    perf_ratio = params.get("ksm_perf_ratio")
> +    if perf_ratio:
> +        perf_ratio = float(perf_ratio)
> +    else:
> +        perf_ratio = 1
> +
> +    if (params['ksm_mode'] == "parallel"):
> +        vmsc = 1
> +        overcommit = 1
> +        mem = host_mem
> +        # 32bit system adjustment
> +        if not params['image_name'].endswith("64"):
> +            logging.debug("Probably i386 guest architecture, "
> +                          "max allocator mem = 2G")
> +            # Guest can have more than 2G but
> +            # kvm mem + 1MB (allocator itself) can't
> +            if (host_mem > 3100):
> +                mem = 3100
> +
> +        if os.popen("uname -i").readline().startswith("i386"):
> +            logging.debug("Host is i386 architecture, max guest mem is 2G")
> +            # Guest system with qemu overhead (64M) can't have more than 2G
> +            if mem > 3100 - 64:
> +                mem = 3100 - 64
> +
> +    else:
> +        # mem: Memory of the guest systems. Maximum must be less than
> +        # host's physical ram
> +        mem = int(overcommit * host_mem / vmsc)
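> +        # Example (hypothetical values): host_mem = 4096 MB, overcommit = 2.0
> +        # and vmsc = 3 give mem = int(2.0 * 4096 / 3) = 2730 MB per guest,
> +        # i.e. about 8 GB of guest RAM on a 4 GB host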
> +
> +        # 32bit system adjustment
> +        if not params['image_name'].endswith("64"):
> +            logging.debug("Probably i386 guest architecture, "
> +                          "max allocator mem = 2G")
> +            # Guest can have more than 2G but
> +            # kvm mem + 1MB (allocator itself) can't
> +            if mem - guest_reserve - 1 > 3100:
> +                vmsc = int(math.ceil((host_mem * overcommit) /
> +                                     (3100 + guest_reserve)))
> +                mem = int(math.floor(host_mem * overcommit / vmsc))
> +
> +        if os.popen("uname -i").readline().startswith("i386"):
> +            logging.debug("Host is i386 architecture, max guest mem is 2G")
> +            # Guest system with qemu overhead (64M) can't have more than 2G
> +            if mem > 3100 - 64:
> +                vmsc = int(math.ceil((host_mem * overcommit) /
> +                                     (3100 - 64.0)))
> +                mem = int(math.floor(host_mem * overcommit / vmsc))
> +
> +    logging.debug("Checking KSM status...")
> +    ksm_flag = 0
> +    for line in os.popen('ksmctl info').readlines():
> +        if line.startswith('flags'):
> +            ksm_flag = int(line.split(' ')[1].split(',')[0])
> +    if int(ksm_flag) != 1:
> +        logging.info("KSM module is not loaded! Trying to load module and "
> +                     "start ksmctl...")
> +        try:
> +            utils.run("modprobe ksm")
> +            utils.run("ksmctl start 5000 100")
> +        except error.CmdError, e:
> +            raise error.TestFail("Failed to load KSM: %s" % e)
> +    logging.debug("KSM module loaded and ksmctl started")
> +
> +    swap = int(utils.read_from_meminfo("SwapTotal")) / 1024
> +
> +    logging.debug("Overcommit = %f", overcommit)
> +    logging.debug("True overcommit = %f ", (float(vmsc * mem) /
> +                                            float(host_mem)))
> +    logging.debug("Host memory = %dM", host_mem)
> +    logging.debug("Guest memory = %dM", mem)
> +    logging.debug("Using swap = %s", ksm_swap)
> +    logging.debug("Swap = %dM", swap)
> +    logging.debug("max_vms = %d", max_vms)
> +    logging.debug("Count of all used VMs = %d", vmsc)
> +    logging.debug("Performance_ratio = %f", perf_ratio)
> +
> +    # Generate unique keys for random series
> +    skeys = []
> +    dkeys = []
> +    for i in range(0, max(vmsc, max_alloc)):
> +        key = random.randrange(0, 255)
> +        while key in skeys:
> +            key = random.randrange(0, 255)
> +        skeys.append(key)
> +
> +        key = random.randrange(0, 999)
> +        while key in dkeys:
> +            key = random.randrange(0, 999)
> +        dkeys.append(key)
> +
> +    logging.debug("skeys: %s" % skeys)
> +    logging.debug("dkeys: %s" % dkeys)
> +
> +    lvms = []
> +    lsessions = []
> +
> +    # As we don't know the number and memory amount of VMs in advance,
> +    # we need to specify and create them here (FIXME: not a nice thing)
> +    vm_name = params.get("main_vm")
> +    params['mem'] = mem
> +    params['vms'] = vm_name
> +    # Associate pidfile name
> +    params['pid_' + vm_name] = kvm_utils.generate_tmp_file_name(vm_name,
> +                                                                'pid')
> +    if not params.get('extra_params'):
> +        params['extra_params'] = ' '
> +    params['extra_params_' + vm_name] = params.get('extra_params')
> +    params['extra_params_' + vm_name] += (" -pidfile %s" %
> +                                          (params.get('pid_' + vm_name)))
> +    params['extra_params'] = params.get('extra_params_'+vm_name)
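> +    # qemu writes its PID into this file; it is read back once the guest
> +    # boots (presumably for per-VM memory accounting)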
> +
> +    # ksm_size: amount of memory used by allocator
> +    ksm_size = mem - guest_reserve
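> +    # Continuing the hypothetical serial-mode numbers above: mem = 2730 MB
> +    # and ksm_guest_reserve = 1024 MB leave ksm_size = 1706 MB per guest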
> +    logging.debug("Memory used by allocator on guests = %dM" % (ksm_size))
> +
> +    # Creating the first guest
> +    kvm_preprocessing.preprocess_vm(test, params, env, vm_name)
> +    lvms.append(kvm_utils.env_get_vm(env, vm_name))
> +    if not lvms[0]:
> +        raise error.TestError("VM object not found in environment")
> +    if not lvms[0].is_alive():
> +        raise error.TestError("VM seems to be dead; Test requires a living "
> +                              "VM")
> +
> +    logging.debug("Booting first guest %s", lvms[0].name)
> +
> +    lsessions.append(kvm_utils.wait_for(lvms[0].remote_login, 360, 0, 2))
> +    if not lsessions[0]:
> +        raise error.TestFail("Could not log into first guest")
> +    # Associate vm PID
> +    try:
> +        tmp = open(params.get('pid_' + vm_name), 'r')
> +        params['pid_' + vm_name] = int(tmp.readline())
> +    except:
> +        raise error.TestFail("Could not get PID of %s" % (vm_name))
> +
> +    # Creating other guest systems
> +    for i in range(1, vmsc):
> +        vm_name = "vm" + str(i + 1)
> +        params['pid_' + vm_name] = kvm_utils.generate_tmp_file_name(vm_name,
> +                                                                    'pid')
> +        params['extra_params_' + vm_name] = params.get('extra_params')
> +        params['extra_params_' + vm_name] += (" -pidfile %s" %
> +                                             (params.get('pid_' + vm_name)))
> +        params['extra_params'] = params.get('extra_params_' + vm_name)
> +
> +        # Last VM is later used to run more allocators simultaneously
> +        lvms.append(lvms[0].clone(vm_name, params))
> +        kvm_utils.env_register_vm(env, vm_name, lvms[i])
> +        params['vms'] += " " + vm_name
> +
> +        logging.debug("Booting guest %s" % lvms[i].name)
> +        if not lvms[i].create():
> +            raise error.TestFail("Cannot create VM %s" % lvms[i].name)
> +        if not lvms[i].is_alive():
> +            raise error.TestError("VM %s seems to be dead; Test requires a"
> +                                  "living VM" % lvms[i].name)
> +
> +        lsessions.append(kvm_utils.wait_for(lvms[i].remote_login, 360, 0, 2))
> +        if not lsessions[i]:
> +            raise error.TestFail("Could not log into guest %s" %
> +                                 lvms[i].name)
> +        try:
> +            tmp = open(params.get('pid_' + vm_name), 'r')
> +            params['pid_' + vm_name] = int(tmp.readline())
> +        except:
> +            raise error.TestFail("Could not get PID of %s" % (vm_name))
> +
> +    # Let guests rest a little bit :-)
> +    st = vmsc * 2 * perf_ratio
> +    logging.debug("Waiting %ds before proceed", st)
> +    time.sleep(vmsc * 2 * perf_ratio)
> +    logging.debug(kvm_test_utils.get_memory_info(lvms))
> +
> +    # Copy allocator.py into guests
> +    pwd = os.path.join(os.environ['AUTODIR'],'tests/kvm')
> +    vksmd_src = os.path.join(pwd, "scripts/allocator.py")
> +    dst_dir = "/tmp"
> +    for vm in lvms:
> +        if not vm.copy_files_to(vksmd_src, dst_dir):
> +            raise error.TestFail("copy_files_to failed %s" % vm.name)
> +    logging.info("Phase 0: PASS")
> +
> +    if params['ksm_mode'] == "parallel":
> +        logging.info("Starting KSM test parallel mode")
> +        split_parallel()
> +        logging.info("KSM test parallel mode: PASS")
> +    elif params['ksm_mode'] == "serial":
> +        logging.info("Starting KSM test serial mode")
> +        initialize_guests()
> +        separate_first_guest()
> +        split_guest()
> +        logging.info("KSM test serial mode: PASS")
> diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
> index e9fdd05..4516ed0 100644
> --- a/client/tests/kvm/tests_base.cfg.sample
> +++ b/client/tests/kvm/tests_base.cfg.sample
> @@ -255,6 +255,28 @@ variants:
>         type = physical_resources_check
>         catch_uuid_cmd = dmidecode | awk -F: '/UUID/ {print $2}'
>
> +    - ksm_overcommit:
> +        # Don't preprocess any vms as we need to change their params
> +        vms = ''
> +        image_snapshot = yes
> +        kill_vm_gracefully = no
> +        type = ksm_overcommit
> +        # Make host use swap (a value of 'no' will turn off host swap)
> +        ksm_swap = yes
> +        no hugepages
> +        # Overcommit ratio of host memory
> +        ksm_overcommit_ratio = 3
> +        # Max number of parallel allocator runs per machine
> +        ksm_parallel_ratio = 4
> +        # Host memory reserve
> +        ksm_host_reserve = 512
> +        ksm_guest_reserve = 1024
> +        variants:
> +            - ksm_serial:
> +                ksm_mode = "serial"
> +            - ksm_parallel:
> +                ksm_mode = "parallel"
> +
>     # system_powerdown, system_reset and shutdown *must* be the last ones
>     # defined (in this order), since the effect of such tests can leave
>     # the VM on a bad state.
> @@ -278,6 +300,7 @@ variants:
>         kill_vm_gracefully = no
>     # Do not define test variants below shutdown
>
> +
>  # NICs
>  variants:
>     - @rtl8139:
> --
> 1.6.6.1
>



-- 
Lucas
