* [OSSTEST PATCH v3 0/3] Test case for cpupools
@ 2015-10-03  0:39 Dario Faggioli
  2015-10-03  0:39 ` [OSSTEST PATCH v3 1/3] ts-cpupools: new test script Dario Faggioli
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Dario Faggioli @ 2015-10-03  0:39 UTC (permalink / raw)
  To: xen-devel

Hey,

This is v3 of my cpupools OSSTest test case. I know, quite a bit of time has
passed... sorry for that, I've been sidetracked.  For reference, v2 was here:

  http://xen-devel.narkive.com/x8VxgO3c/osstest-patch-v2-0-2-test-case-for-cpupools

Since then, I reworked the patch quite a bit. Of course, I did consider and
address the comments I got with v2.

The test case tries to create one cpupool, and then plays a bit with it, e.g.,
by moving pCPUs and domains around.  It does so for all the schedulers we
support, i.e., it creates one cpupool with one scheduler, plays with it as said
above, then destroys it and moves on to the next scheduler, and so on.

A git branch is available here:

  git://xenbits.xen.org/people/dariof/osstest.git  tests/cpupools-v3
  http://xenbits.xen.org/gitweb/?p=people/dariof/osstest.git;a=shortlog;h=refs/heads/tests/cpupools-v3

There are some host-related considerations. This test case requires a host
with at least 2 pCPUs. v2 failed the test if that was not the case; now I just
skip pretty much everything, without reporting failure.  In any case, while
reviewing v2, IanC said:

 "The proper fix would be a property in the hostdb which was used to
  constrain which hosts the jobs containing this test could run on. (e.g.
  we have pcipassthrough-nic).

  Maybe this way is OK until we find we are commissioning a machine with a
  single CPU, at which point this failure will seem pretty obvious. Ian?"

I do like this. Actually, I had already started doing something like this for
other reasons, and I am happy to keep working on making it happen, but I'd
need some help, or at least some pointers.

I've never interacted with the hostdb, so I'd appreciate pointers on where to
look.

What I drafted looks to me like exactly that kind of host property, but aimed
at standalone mode, and I did it as in the attached patch... Any thoughts on
this?

What is also missing, both for standalone-mode config-file-defined properties
and for hostdb ones, is the logic for telling the host allocator to consider
such properties when choosing the host to be used for a job. I'll go study how
to do it, but if anyone feels the irresistible need to advise, feel free to go
ahead... :-)

Thanks and Regards,
Dario
---
Dario Faggioli (3):
      ts-cpupools: new test script
      Testing cpupools: recipe for it and job definition
      ts-logs-capture: include some cpupools info in the captured logs.

 make-flight     |   12 +++++
 sg-run-job      |    7 +++
 ts-cpupools     |  121 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 ts-logs-capture |    2 +
 4 files changed, 142 insertions(+)
 create mode 100755 ts-cpupools
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [OSSTEST PATCH v3 1/3] ts-cpupools: new test script
  2015-10-03  0:39 [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
@ 2015-10-03  0:39 ` Dario Faggioli
  2015-10-08 16:38   ` Ian Campbell
  2015-10-03  0:39 ` [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition Dario Faggioli
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Dario Faggioli @ 2015-10-03  0:39 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, Ian Jackson, Ian Campbell

for smoke testing cpupools a bit. It tries to partition
a live host into two cpupools, trying out the following 3
schedulers for the new cpupool (one after the other):
 credit, credit2 and RTDS.

It also tries migrating a domain to the new cpupool
and then back to Pool-0.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v2:
 * reorganized internal subroutines;
 * avoid failing (just don't run the test) if we find
   only 1 pCPU on the host;
 * use 'map' and 'grep' in place of foreach loops, as
   suggested during review;
 * fix the check for the default cpupool configuration
   to be in place when starting the test, as identified
   during review;
 * fix quoting for the name of the cpupool, as identified
   during review;
 * use target_cmd_root(), instead of target_cmd_output_root(),
   as requested during review;
 * check for the toolstack to be xl and only xl, as
   requested during review.
---
 ts-cpupools |  121 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 121 insertions(+)
 create mode 100755 ts-cpupools

diff --git a/ts-cpupools b/ts-cpupools
new file mode 100755
index 0000000..7fe9a27
--- /dev/null
+++ b/ts-cpupools
@@ -0,0 +1,121 @@
+#!/usr/bin/perl -w
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2009-2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+use strict qw(vars);
+use DBI;
+use Osstest;
+use Osstest::TestSupport;
+
+tsreadconfig();
+
+our ($ho,$gho) = ts_get_host_guest(@ARGV);
+
+our $nr_cpus;
+our $default_pool= "Pool-0";
+our @schedulers= ("credit","credit2","rtds");
+our @cpulist;
+
+# Figure out the number of pCPUs of the host. We need to know that for
+# deciding with what pCPUs we'll create the test pool.
+sub check_cpus () {
+  my $xlinfo= target_cmd_output_root($ho, "xl info");
+  $xlinfo =~ /nr_cpus\s*:\s([0-9]*)/;
+  $nr_cpus= $1;
+  logm("Found $nr_cpus pCPUs");
+  logm("$nr_cpus is yoo few pCPUs for testing cpupools");
+}
+
+# At the beginning:
+#  * only 1 pool must exist,
+#  * it must be the default pool.
+sub check () {
+  my $cppinfo= target_cmd_output_root($ho, "xl cpupool-list");
+  my $nr_cpupools= $cppinfo =~ tr/\n//;
+
+  logm("Found $nr_cpupools cpupools");
+  die "There already is more than one cpu pool!" if $nr_cpupools > 1;
+  die "Non-default cpupool configuration detected"
+      unless $cppinfo =~ /$default_pool/;
+
+  die "This test is meant for xl only"
+      if toolstack($ho)->{Name} ne "xl";
+}
+
+# Odd pCPUs will end up in out test pool
+sub prep_cpulist () {
+  @cpulist = grep { $_ % 2 } (0..$nr_cpus);
+  logm("Using the following cpus fo the test pool: @cpulist");
+}
+
+sub prep_pool ($) {
+  my ($sched)= @_;
+  my @cpustr;
+
+  my @cpustr= map { $_ == -1 ? "[ " : $_ == $#cpulist+1 ? " ]" :
+      "\"$cpulist[$_]\"," } (-1 .. $#cpulist+1);
+
+  target_putfilecontents_stash($ho,100,<<"END","/etc/xen/cpupool-test-$sched");
+name = \"cpupool-test-$sched\"
+sched = \"$sched\"
+cpus = @cpustr
+END
+}
+
+# For each cpupool:
+#  * create it
+#  * rename it
+#  * move a domain in it
+#  * move back a domain out of it
+#  * add back the pcpus from it to the default pool
+#  * destroy it
+sub run ($) {
+  my ($sched)= @_;
+
+  foreach my $cpu (@cpulist) {
+    target_cmd_root($ho,"xl cpupool-cpu-remove $default_pool $cpu");
+  }
+  target_cmd_root($ho, "xl cpupool-list -c");
+  target_cmd_root($ho, "xl cpupool-create /etc/xen/cpupool-test-$sched");
+  target_cmd_root($ho, "xl cpupool-rename cpupool-test-$sched cpupool-test");
+  target_cmd_root($ho, "xl cpupool-list -c");
+
+  target_cmd_root($ho, "xl cpupool-migrate $gho->{Name} cpupool-test");
+  target_cmd_root($ho, "xl cpupool-list");
+  target_cmd_root($ho, "xl vcpu-list");
+
+  target_cmd_root($ho, "xl cpupool-migrate $gho->{Name} Pool-0");
+  target_cmd_root($ho, "xl cpupool-list");
+
+  foreach my $cpu (@cpulist) {
+    target_cmd_root($ho,"xl cpupool-cpu-remove cpupool-test $cpu");
+    target_cmd_root($ho,"xl cpupool-cpu-add $default_pool $cpu");
+  }
+  target_cmd_output_root($ho, "xl cpupool-list -c");
+
+  target_cmd_root($ho, "xl cpupool-destroy cpupool-test");
+  target_cmd_root($ho, "xl cpupool-list");
+}
+
+check();
+check_cpus();
+if ($nr_cpus > 1) {
+  prep_cpulist();
+  foreach my $s (@schedulers) {
+    prep_pool("$s");
+    run("$s");
+  }
+}


* [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition
  2015-10-03  0:39 [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
  2015-10-03  0:39 ` [OSSTEST PATCH v3 1/3] ts-cpupools: new test script Dario Faggioli
@ 2015-10-03  0:39 ` Dario Faggioli
  2015-10-09 14:34   ` Ian Campbell
  2015-10-03  0:39 ` [OSSTEST PATCH v3 3/3] ts-logs-capture: include some cpupools info in the captured logs Dario Faggioli
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Dario Faggioli @ 2015-10-03  0:39 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, Ian Jackson, Ian Campbell

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v2:
 * restrict test generation to xl only.

Changes from v1:
 * added invocation to ts-guest-stop in the recipe to kill
   leak-check complaints (which went unnoticed during v1
   testing, sorry)
 * moved the test before the "ARM cutoff", and remove the
   per-arch filtering, so that the test can run on ARM
   hardware too
---
 make-flight |   12 ++++++++++++
 sg-run-job  |    7 +++++++
 2 files changed, 19 insertions(+)

diff --git a/make-flight b/make-flight
index 8c75a9c..d27a02c 100755
--- a/make-flight
+++ b/make-flight
@@ -373,6 +373,16 @@ do_multivcpu_tests () {
                     $debian_runvars all_hostflags=$most_hostflags
 }
 
+do_cpupools_tests () {
+  if [ x$toolstack != xxl -a $xenarch != $dom0arch ]; then
+    return
+  fi
+
+  job_create_test test-$xenarch$kern-$dom0arch-xl-cpupools            \
+                    test-cpupools xl $xenarch $dom0arch               \
+                    $debian_runvars all_hostflags=$most_hostflags
+}
+
 do_passthrough_tests () {
   if [ $xenarch != amd64 -o $dom0arch != amd64 -o "$kern" != "" ]; then
     return
@@ -498,6 +508,8 @@ test_matrix_do_one () {
   do_rtds_tests
   do_credit2_tests
 
+  do_cpupools_tests
+
   # No further arm tests at the moment
   if [ $dom0arch = armhf ]; then
       return
diff --git a/sg-run-job b/sg-run-job
index 66145b8..ea48a03 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -296,6 +296,13 @@ proc run-job/test-debianhvm {} {
     test-guest debianhvm
 }
 
+proc need-hosts/test-cpupools {} { return host }
+proc run-job/test-cpupools {} {
+    install-guest-debian
+    run-ts . = ts-cpupools + host debian
+    run-ts . = ts-guest-stop + host debian
+}
+
 proc setup-test-pair {} {
     run-ts . =              ts-debian-install      dst_host
     run-ts . =              ts-debian-fixup        dst_host          + debian


* [OSSTEST PATCH v3 3/3] ts-logs-capture: include some cpupools info in the captured logs.
  2015-10-03  0:39 [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
  2015-10-03  0:39 ` [OSSTEST PATCH v3 1/3] ts-cpupools: new test script Dario Faggioli
  2015-10-03  0:39 ` [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition Dario Faggioli
@ 2015-10-03  0:39 ` Dario Faggioli
  2015-10-09 14:36   ` Ian Campbell
  2015-10-03  0:45 ` [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
  2015-10-08 15:20 ` Ian Campbell
  4 siblings, 1 reply; 9+ messages in thread
From: Dario Faggioli @ 2015-10-03  0:39 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, Ian Jackson, Ian Campbell

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v2:
 * new patch, the introduction of which was suggested
   during review.
---
 ts-logs-capture |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/ts-logs-capture b/ts-logs-capture
index b99b1db..b1e7012 100755
--- a/ts-logs-capture
+++ b/ts-logs-capture
@@ -186,6 +186,8 @@ sub fetch_logs_host () {
          'cat /proc/cpuinfo',
          'xl list',
          'xl vcpu-list',
+         'xl cpupool-list',
+         'xl cpupool-list -c',
          'xm list',
          'xm list --long',
          'xenstore-ls -fp',


* Re: [OSSTEST PATCH v3 0/3] Test case for cpupools
  2015-10-03  0:39 [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
                   ` (2 preceding siblings ...)
  2015-10-03  0:39 ` [OSSTEST PATCH v3 3/3] ts-logs-capture: include some cpupools info in the captured logs Dario Faggioli
@ 2015-10-03  0:45 ` Dario Faggioli
  2015-10-08 15:20 ` Ian Campbell
  4 siblings, 0 replies; 9+ messages in thread
From: Dario Faggioli @ 2015-10-03  0:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, Juergen Gross, Ian Campbell


[-- Attachment #1.1.1: Type: text/plain, Size: 659 bytes --]

Cc-ing people I wanted to Cc in the first place, and...

On Sat, 2015-10-03 at 02:39 +0200, Dario Faggioli wrote:
> Hey,
>
> [...]
>
> What I drafted looks to me like exactly that kind of host property,
> but aimed at standalone mode, and I did it as in the attached
> patch... Any thoughts on this?
> 
Actually attaching the patch! :-)

Sorry for the mess,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.1.2: hwspecs-host-props --]
[-- Type: text/plain, Size: 7698 bytes --]

commit ee4834c6d7200ee23cfc9756bdfd28916b0884c9
Author: Dario Faggioli <raistlin@linux.it>
Date:   Thu Oct 30 18:10:21 2014 +0100

    Osstest/TestSupport.pm: read hosts' hardware characteristics
    
    if defined, in the form of host properties. In standalone
    mode, that should happen via the config file.
    
    Methods are introduced to read those host properties or,
    if they are not defined, to fetch the information by
    querying the host directly.
    
    The host properties always take precedence. This means
    that, if they're defined, no command is run on the host,
    and the values stored in the properties are used.
    
    This commit also introduces a simple bash script that,
    if run on the host, retrieves and prints such host
    hardware properties, for convenience and/or testing.
    
    Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
    Cc: Wei Liu <wei.liu2@citrix.com>
    Cc: Ian Campbell <Ian.Campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 7cc5be6..251668a 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -58,7 +58,7 @@ BEGIN {
                       target_put_guest_image target_editfile
                       target_editfile_root target_file_exists
                       target_catfile_stash target_catfile_root_stash
-                      target_run_apt
+                      target_file_contains target_run_apt
                       target_install_packages target_install_packages_norec
                       target_jobdir target_extract_jobdistpath_subdir
                       target_extract_jobdistpath target_guest_lv_name
@@ -67,6 +67,7 @@ BEGIN {
                       contents_make_cpio file_simple_write_contents
 
                       selecthost get_hostflags get_host_property
+                      get_host_cpus get_host_numanodes get_host_memory
                       get_host_native_linux_console
                       power_state power_cycle power_cycle_time
                       serial_fetch_logs
@@ -519,6 +520,15 @@ sub target_file_exists ($$) {
     die "$rfile $out ?";
 }
 
+sub target_file_contains ($$$) {
+  my ($ho,$rfile,$filecont) = @_;
+  return 0 unless target_file_exists($ho,$rfile);
+  my $out= target_cmd_output($ho, "grep $filecont $rfile");
+  return 1 if ($out ne "");
+  return 0 if ($out eq "");
+  die "$rfile $filecont $out ?";
+}
+
 sub teditfileex {
     my $user= shift @_;
     my $code= pop @_;
@@ -866,6 +876,11 @@ sub selecthost ($) {
     }
     $ho->{Ip}= $ho->{IpStatic};
 
+    #----- HW specs -----
+    $ho->{Cpus} = get_host_property($ho,'cpus');
+    $ho->{Memory} = get_host_property($ho,'memory');
+    $ho->{Nodes} = get_host_property($ho,'nodes');
+
     #----- tftp -----
 
     my $tftpscope = get_host_property($ho, 'TftpScope', $c{TftpDefaultScope});
@@ -937,6 +952,66 @@ sub get_host_method_object ($$$) {
     return $mo;
 }
 
+sub get_host_cpus ($) {
+    my ($ho) = @_;
+
+    # Let's first try if there's an host property defined;
+    # if no, we'll "ask" the host directly.
+    my $cpus= get_host_property($ho,'cpus',undef);
+    return $cpus if defined $cpus;
+
+    # Is the host running Dom0 or baremetal?
+    if (target_file_contains($ho,"/proc/xen/capabilities","control_d")) {
+        $cpus= target_cmd_output_root($ho,
+            "xl info | grep ^nr_cpus | awk '{print \$3}'");
+    } else {
+        $cpus= target_cmd_output_root($ho,
+            "cat /proc/cpuinfo | grep '^processor' | wc -l");
+    }
+
+    return $cpus;
+}
+
+sub get_host_numanodes ($) {
+    my ($ho) = @_;
+
+    # Let's first try if there's an host property defined;
+    # if no, we'll "ask" the host directly.
+    my $nodes= get_host_property($ho,'nodes',undef);
+    return $nodes if defined $nodes;
+
+    # Is the host running Dom0 or baremetal?
+    if (target_file_contains($ho,"/proc/xen/capabilities","control_d")) {
+        $nodes= target_cmd_output_root($ho,
+            "xl info | grep ^nr_nodes | awk '{print \$3}'");
+    } else {
+        $nodes= target_cmd_output_root($ho,
+            "which numactl && numactl --hardware | grep ^available: | awk '{print \$2}'");
+    }
+
+    return $nodes;
+}
+
+sub get_host_memory ($) {
+    my ($ho) = @_;
+
+    # Let's first try if there's an host property defined;
+    # if no, we'll "ask" the host directly.
+    my $mem= get_host_property($ho,'memory',undef);
+    return $mem if defined $mem;
+
+    # Is the host running Dom0 or baremetal?
+    if (target_file_contains($ho,"/proc/xen/capabilities","control_d")) {
+        $mem= target_cmd_output_root($ho,
+            "xl info | grep ^total_memory | awk '{print \$3}'");
+    } else {
+        $mem= target_cmd_output_root($ho,
+            "free -m | grep ^Mem: | awk '{print \$2}'");
+    }
+
+    return $mem;
+}
+
 #---------- stashed files ----------
 
 sub open_unique_stashfile ($) {
diff --git a/README b/README
index 45d1498..b3880b5 100644
--- a/README
+++ b/README
@@ -343,6 +343,21 @@ HostProp_<testbox>_TftpScope
    Defines the Tftp scope (i.e. subnet) where this host resides. See
    "TftpFoo_<scope> and TftpFoo" below.
 
+HostProp_<testbox>_Cpus
+   Tells how many physical CPUs the testbox has. If this is defined,
+   no further investigation is performed to figure out such information
+   and the value provided here is considered reliable and consumed.
+
+HostProp_<testbox>_Memory
+   Tells how much physical memory the testbox has. If this is defined,
+   no further investigation is performed to figure out such information
+   and the value provided here is considered reliable and consumed.
+
+HostProp_<testbox>_Nodes
+   Tells how many NUMA nodes the testbox has. If this is defined,
+   no further investigation is performed to figure out such information
+   and the value provided here is considered reliable and consumed.
+
 HostFlags_<testbox>
    Defines a set of flags for the host. Flags is a list separated by
    whitespace, comma or semi-colon. A flag can be unset by prepending
diff --git a/mg-host-hw-specs b/mg-host-hw-specs
new file mode 100755
index 0000000..a47d72d
--- /dev/null
+++ b/mg-host-hw-specs
@@ -0,0 +1,35 @@
+#!/bin/bash
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2009-2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -e
+
+# Is the host running Dom0 or baremetal?
+if [[ -e /proc/xen/capabilities ]] && \
+   [[ `grep ^control_d$ /proc/xen/capabilities` == "control_d" ]]; then
+  cpus=`xl info | grep ^nr_cpus | awk '{print \$3}'`
+  memory=`xl info | grep ^total_memory | awk '{print \$3}'`
+  nodes=`xl info | grep ^nr_nodes | awk '{print \$3}'`
+else
+  cpus=`cat /proc/cpuinfo | grep "^processor" | wc -l`
+  memory=`free -m | grep ^Mem: | awk '{print $2}'`
+  nodes="?"
+  if [[ `which numactl` != "" ]] && [ -x `which numactl` ]; then
+    nodes=`numactl --hardware | grep ^available: | awk '{print $2}'`
+  fi
+fi
+
+echo >&2 "cpus=$cpus / memory=$memory / nodes=$nodes"

[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [OSSTEST PATCH v3 0/3] Test case for cpupools
  2015-10-03  0:39 [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
                   ` (3 preceding siblings ...)
  2015-10-03  0:45 ` [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
@ 2015-10-08 15:20 ` Ian Campbell
  4 siblings, 0 replies; 9+ messages in thread
From: Ian Campbell @ 2015-10-08 15:20 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel, Ian Jackson

On Sat, 2015-10-03 at 02:39 +0200, Dario Faggioli wrote:
> There are some host-related considerations. This test case requires a
> host with at least 2 pCPUs. v2 failed the test if that was not the
> case; now I just skip pretty much everything, without reporting
> failure.  In any case, while reviewing v2, IanC said:
> 
>  "The proper fix would be a property in the hostdb which was used to
>   constrain which hosts the jobs containing this test could run on. (e.g.
>   we have pcipassthrough-nic).
> 
>   Maybe this way is OK until we find we are commissioning a machine with a
>   single CPU, at which point this failure will seem pretty obvious. Ian?"
> 
> I do like this. Actually, I had already started doing something like
> this for other reasons, and I am happy to keep working on making it
> happen, but I'd need some help, or at least some pointers.
> 
> I've never interacted with the hostdb, so I'd appreciate pointers on
> where to look.

It's basically a (host,key)=>(value) database table, in standalone mode it
is HostProp_<host>_<key> in your cfg (as you've found) and in Executive
mode it is an actual database table.

From the PoV of the code, though, you just use get_host_property and don't
care where it comes from.
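
The "property first, with a default" lookup that get_host_property provides can be sketched as follows. This is a plain-shell illustration only: the real routine is Perl and reads the standalone cfg or the database, and the testbox name "albana" is made up.

```shell
# Hypothetical emulation of get_host_property(host, key, default):
# return the configured HostProp value if present, else the default.
get_host_property() {
  key="HostProp_$1_$2"
  val=$(eval "echo \${$key:-}")
  echo "${val:-$3}"
}

# Set a property as if it came from a standalone-mode config file.
HostProp_albana_cpus=8

get_host_property albana cpus 0    # the configured property wins
get_host_property albana nodes 1   # prop unset, so the default is used
```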

> What I drafted looks to me like exactly that kind of host property,
> but aimed at standalone mode, and I did it as in the attached patch...
> Any thoughts on this?

Personally I don't think this "look in the db and fallback to asking the
host" behaviour is desirable. I think this information should always be in
the host db.

That's not to say you couldn't also have a helper script which queries the
hosts and provides the output in a form which is easy to massage into the
db. That could even be in the form of a specialised setup flight with jobs
which would install the host and query it to produce (or even update) the
necessary db contents (probably not live updates, just in a convenient form
to put into the db).
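
For instance, the one-line summary printed by the attached mg-host-hw-specs draft could be massaged into property lines with something like this (the hostname "albana" and the exact HostProp spelling are illustrative):

```shell
# Turn "cpus=8 / memory=16045 / nodes=1" (the draft helper's output)
# into one HostProp line per hardware characteristic.
echo "cpus=8 / memory=16045 / nodes=1" \
  | tr -d ' ' | tr '/' '\n' \
  | sed 's/^\([a-z]*\)=\(.*\)$/HostProp_albana_\1 \2/'
```

This prints three lines, HostProp_albana_cpus 8 and so on, ready to paste into a standalone cfg or adapt for the db.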

> What is also missing, both for standalone-mode config-file-defined
> properties and for hostdb ones, is the logic for telling the host
> allocator to consider such properties when choosing the host to be
> used for a job. I'll go study how to do it, but if anyone feels the
> irresistible need to advise, feel free to go ahead... :-)

Until recently the Executive allocator could only make decisions based on
host "flags", which is a separate table of boolean host properties. The
host allocator matches all_hostflags plus <ident>_hostflags (where <ident>
is "host" or "src_host" etc.).

e.g. in a typical flight:

    test-armhf-armhf-xl-xsm all_hostflags  arch-armhf,arch-xen-armhf,suite-jessie,purpose-test                                                
    build-amd64             host_hostflags share-build-jessie-amd64,arch-amd64,suite-jessie,purpose-build                         

So for build-amd64 the chosen host has to have those flags. For the test-*
ones any hosts involved in the test have to have those flags. It's not
test-* vs build-* which determines which you use, just a coincidence here.
If e.g. a migration test had differing requirements for source and
destination then src_host_hostflags might differ from dst_host_hostflags
and all_hostflags would be common.
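
The matching itself amounts to a subset check: every flag in the job's hostflags list must be present on the candidate host. A sketch of that check (illustrative only, not the allocator's actual code):

```shell
# Does a host (space-separated flag set) satisfy a comma-separated
# hostflags list? Succeeds only if every required flag is present.
host_has_flags() {
  host_flags=" $1 "
  for f in $(printf '%s' "$2" | tr ',' ' '); do
    case "$host_flags" in
      *" $f "*) ;;      # this flag is present, keep checking
      *) return 1 ;;    # a required flag is missing: host rejected
    esac
  done
  return 0
}

host_has_flags "arch-armhf arch-xen-armhf suite-jessie purpose-test" \
               "arch-armhf,purpose-test" && echo "host qualifies"
```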

Until recently all you could do was simple matching of those boolean
properties (there are actually some magic ones, like equiv-1, please ignore
those).

Recently though I added support for evaluating hosts based on their host
props as well as flags. See 9f33933526d3 "Add support for selecting
resources based on their properties." which added one particular comparator
in Osstest::ResourceCondition::PropMinVer as a way to limit things to hosts
which were supported by a given version of Linux (e.g. requisite drivers
and such). 

So for example in a linux-4.1 flight we now have:

    test-armhf-armhf-xl-vhd all_hostflags arch-armhf,arch-xen-armhf,suite-jessie,purpose-test,PropMinVer:LinuxKernelMin:4.1                    

Where the last bit causes the host's LinuxKernelMin property to be compared
against 4.1. Any host whose LinuxKernelMin host prop is newer than 4.1 won't
run linux-4.1 tests. In this case absence of the prop means "no specific
requirement", but other comparators may differ.

It would be quite easy to add a PropMin module which similarly required
that some host prop existed and had a particular minimum value. Then you
could use "PropMin:Cpus:2" to ensure you get a host with a Cpus hostprop of
at least 2.
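
Such a "PropMin:<prop>:<min>" comparison would boil down to something like this sketch (hypothetical; note it deliberately differs from PropMinVer, where an absent prop means "no requirement"):

```shell
# PropMin-style check: the property must exist and be numerically >= min.
prop_min_ok() {
  val="$1"   # the host's prop value, "" if the prop is absent
  min="$2"
  [ -n "$val" ] && [ "$val" -ge "$min" ]
}

prop_min_ok 8 2 && echo "host with Cpus=8 passes PropMin:Cpus:2"
```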

There is no allocator in Standalone mode; you have to provide a host which
meets the requirements yourself, and if you don't, things might break.

Clear as mud?

Ian.


* Re: [OSSTEST PATCH v3 1/3] ts-cpupools: new test script
  2015-10-03  0:39 ` [OSSTEST PATCH v3 1/3] ts-cpupools: new test script Dario Faggioli
@ 2015-10-08 16:38   ` Ian Campbell
  0 siblings, 0 replies; 9+ messages in thread
From: Ian Campbell @ 2015-10-08 16:38 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel; +Cc: Juergen Gross, Ian Jackson

On Sat, 2015-10-03 at 02:39 +0200, Dario Faggioli wrote:
> Copyright (C) 2009-2014 Citrix Inc.

Year.

> +our $default_pool= "Pool-0";
> +our @schedulers= ("credit","credit2","rtds");

I think @schedulers probably ought to come from a runvar (comma-separated).
Consider testing cpupools on 4.4 (which didn't have rtds) or Xen 7.2 which
has the xyzzy scheduler for example.

I'm less sure about $default_pool; I think that one is probably pretty
inherent and not worth generalising?

> +our @cpulist;
> +
> +# Figure out the number of pCPUs of the host. We need to know that for
> +# deciding with what pCPUs we'll create the test pool.
> +sub check_cpus () {
> +  my $xlinfo= target_cmd_output_root($ho, "xl info");
> +  $xlinfo =~ /nr_cpus\s*:\s([0-9]*)/;
> +  $nr_cpus= $1;
> +  logm("Found $nr_cpus pCPUs");
> +  logm("$nr_cpus is yoo few pCPUs for testing cpupools");

"too" and apparently no actual condition check?

But based on discussion on 0/3 I'm hoping this check will go away, or maybe
it will become a die.

> +}
> +
> +# At the beginning:
> +#  * only 1 pool must exist,
> +#  * it must be the default pool.
> +sub check () {
> +  my $cppinfo= target_cmd_output_root($ho, "xl cpupool-list");
> +  my $nr_cpupools= $cppinfo =~ tr/\n//;

The output of "xl cpupool-list" is 
----
Name               CPUs   Sched     Active   Domain count
Pool-0               8    credit       y          4
----

Is $nr_cpupools not therefore 2 when there is a single pool? (2 "\n", one
after the header, one after the data)

> +
> +  logm("Found $nr_cpupools cpupools");
> +  die "There already is more than one cpu pool!" if $nr_cpupools > 1;
> +  die "Non-default cpupool configuration detected"
> +      unless $cppinfo =~ /$default_pool/;

This won't barf on e.g. "Pool-01". Some use of \b might help.

> +
> +  die "This test is meant for xl only"
> +      if toolstack($ho)->{Name} ne "xl";
> +}
> +
> +# Odd pCPUs will end up in out test pool

s/out/our/

> +sub prep_cpulist () {
> +  @cpulist = grep { $_ % 2 } (0..$nr_cpus);
> +  logm("Using the following cpus fo the test pool: @cpulist");

s/fo/for/

> +}
> +
> +sub prep_pool ($) {
> +  my ($sched)= @_;
> +  my @cpustr;
> +
> +  my @cpustr= map { $_ == -1 ? "[ " : $_ == $#cpulist+1 ? " ]" :
> +      "\"$cpulist[$_]\"," } (-1 .. $#cpulist+1);

I think I would write
	my @cpustr = ("[ ".$#cpulist+1." ]");
        push @cpustr, map { "\"$cpulist[$_]\"," } (0 .. $#cpulist+1);
at which point I would realise that the push was something like:
        push @cpustr, map { "\"$_\"," } @cpulist;

I'd also do the "," bit using 
    my $cpustr = join ",", @cpustr;

otherwise you get a trailing "," which you may not want.

(Disclaimer: I'm not 100% sure what output string you are trying to make
here).
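
The join-based construction does avoid the trailing comma; the same idea expressed in shell, assuming the goal is a cpus = [ "1", "3", "5" ] style line for the cpupool config file:

```shell
cpulist="1 3 5"   # e.g. the odd pCPUs picked for the test pool

# Quote each cpu followed by ", ", relying on word splitting of
# $cpulist, then strip the one trailing separator.
quoted=$(printf '"%s", ' $cpulist)
echo "cpus = [ ${quoted%, } ]"
```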

> +
> +  target_putfilecontents_stash($ho,100,<<"END","/etc/xen/cpupool-test
> -$sched");
> +name = \"cpupool-test-$sched\"
> +sched = \"$sched\"

Do the quotes really need escaping in this context? I wouldn't have
expected so.

> +cpus = @cpustr
> +END
> +}
> +
> +
> +check();
> +check_cpus();
> +if ($nr_cpus > 1) {

This will go away I hope.

> +  prep_cpulist();
> +  foreach my $s (@schedulers) {
> +    prep_pool("$s");
> +    run("$s");

I think you just want $s, not "$s" in both places. $s is already a string.

> +  }
> +}
> 


* Re: [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition
  2015-10-03  0:39 ` [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition Dario Faggioli
@ 2015-10-09 14:34   ` Ian Campbell
  0 siblings, 0 replies; 9+ messages in thread
From: Ian Campbell @ 2015-10-09 14:34 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel; +Cc: Juergen Gross, Ian Jackson

On Sat, 2015-10-03 at 02:39 +0200, Dario Faggioli wrote:
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>

This looks correct to me as it stands, but I think it will be impacted by
the changes relating to host flags for numbers of cpus etc.

> ---
> Changes from v2:
>  * restrict test generation to xl only.
> 
> Changes from v1:
>  * added invocation to ts-guest-stop in the recipe to kill
>    leak-check complaints (which went unnoticed during v1
>    testing, sorry)
>  * moved the test before the "ARM cutoff", and remove the
>    per-arch filtering, so that the test can run on ARM
>    hardware too
> ---
>  make-flight |   12 ++++++++++++
>  sg-run-job  |    7 +++++++
>  2 files changed, 19 insertions(+)
> 
> diff --git a/make-flight b/make-flight
> index 8c75a9c..d27a02c 100755
> --- a/make-flight
> +++ b/make-flight
> @@ -373,6 +373,16 @@ do_multivcpu_tests () {
>                      $debian_runvars all_hostflags=$most_hostflags
>  }
>  
> +do_cpupools_tests () {
> +  if [ x$toolstack != xxl -a $xenarch != $dom0arch ]; then
> +    return
> +  fi
> +
> +  job_create_test test-$xenarch$kern-$dom0arch-xl-cpupools            \
> +                    test-cpupools xl $xenarch $dom0arch               \
> +                    $debian_runvars all_hostflags=$most_hostflags
> +}
> +
>  do_passthrough_tests () {
>    if [ $xenarch != amd64 -o $dom0arch != amd64 -o "$kern" != "" ]; then
>      return
> @@ -498,6 +508,8 @@ test_matrix_do_one () {
>    do_rtds_tests
>    do_credit2_tests
>  
> +  do_cpupools_tests
> +
>    # No further arm tests at the moment
>    if [ $dom0arch = armhf ]; then
>        return
> diff --git a/sg-run-job b/sg-run-job
> index 66145b8..ea48a03 100755
> --- a/sg-run-job
> +++ b/sg-run-job
> @@ -296,6 +296,13 @@ proc run-job/test-debianhvm {} {
>      test-guest debianhvm
>  }
>  
> +proc need-hosts/test-cpupools {} { return host }
> +proc run-job/test-cpupools {} {
> +    install-guest-debian
> +    run-ts . = ts-cpupools + host debian
> +    run-ts . = ts-guest-stop + host debian
> +}
> +
>  proc setup-test-pair {} {
>      run-ts . =              ts-debian-install      dst_host
>      run-ts . =              ts-debian-fixup        dst_host          +
> debian
> 


* Re: [OSSTEST PATCH v3 3/3] ts-logs-capture: include some cpupools info in the captured logs.
  2015-10-03  0:39 ` [OSSTEST PATCH v3 3/3] ts-logs-capture: include some cpupools info in the captured logs Dario Faggioli
@ 2015-10-09 14:36   ` Ian Campbell
  0 siblings, 0 replies; 9+ messages in thread
From: Ian Campbell @ 2015-10-09 14:36 UTC (permalink / raw)
  To: Dario Faggioli, xen-devel; +Cc: Juergen Gross, Ian Jackson

On Sat, 2015-10-03 at 02:39 +0200, Dario Faggioli wrote:
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

There's probably no need for this to wait for the rest.

> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
> Changes from v2:
>  * new patch, the introduction of which was suggested
>    during review.
> ---
>  ts-logs-capture |    2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/ts-logs-capture b/ts-logs-capture
> index b99b1db..b1e7012 100755
> --- a/ts-logs-capture
> +++ b/ts-logs-capture
> @@ -186,6 +186,8 @@ sub fetch_logs_host () {
>           'cat /proc/cpuinfo',
>           'xl list',
>           'xl vcpu-list',
> +         'xl cpupool-list',
> +         'xl cpupool-list -c',
>           'xm list',
>           'xm list --long',
>           'xenstore-ls -fp',
> 


end of thread, other threads:[~2015-10-09 14:36 UTC | newest]

Thread overview: 9+ messages
2015-10-03  0:39 [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
2015-10-03  0:39 ` [OSSTEST PATCH v3 1/3] ts-cpupools: new test script Dario Faggioli
2015-10-08 16:38   ` Ian Campbell
2015-10-03  0:39 ` [OSSTEST PATCH v3 2/3] Testing cpupools: recipe for it and job definition Dario Faggioli
2015-10-09 14:34   ` Ian Campbell
2015-10-03  0:39 ` [OSSTEST PATCH v3 3/3] ts-logs-capture: include some cpupools info in the captured logs Dario Faggioli
2015-10-09 14:36   ` Ian Campbell
2015-10-03  0:45 ` [OSSTEST PATCH v3 0/3] Test case for cpupools Dario Faggioli
2015-10-08 15:20 ` Ian Campbell
