* [PATCH v10 00/11] Series short description
@ 2014-06-20 16:18 Dario Faggioli
  2014-06-20 16:18 ` [PATCH v10 01/11] libxc/libxl: bump library SONAMEs Dario Faggioli
                   ` (11 more replies)
  0 siblings, 12 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:18 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

Hello,

v10 of the soft affinity (toolstack side) series.

I restructured the series as Ian suggested: patch "libxl/xl: deprecate the
build_info->cpumap field" was created to only host the deprecation of cpumap.
The xl changes that this requires (and the improvement that it allows) live in
their own patch ("xl: move away from the (deprecated) use of cpumap for hard
affinity").

I also added "Change the libxl default for b_info->{cpu,node}map to "not
allocated"". It is the right thing to do and it affects some of the patches in
this series, which is why I am including it here instead of submitting it as a
separate patch.

So, the patches that need attention (and Ack-s! :-P) are:

 10/11 libxl: libxl/xl: make it possible to specify soft-affinity in domain config file
 09/11 xl: move the vcpu affinity parsing in a function
 08/11 xl: move away from the (deprecated) use of cpumap for hard affinity
 07/11 libxl/xl: deprecate the build_info->cpumap field
 06/11 libxl/xl: push VCPU affinity pinning down to libxl
 05/11 libxl: Change default for b_info->{cpu,node}map to "not allocated"

while the others already have everything they need to go in.

Regards,
Dario
---
Dario Faggioli (10):
      libxc/libxl: bump library SONAMEs
      libxc: get and set soft and hard affinity
      libxl: get and set soft affinity
      xl: enable getting and setting soft affinity
      libxl: Change default for b_info->{cpu,node}map to "not allocated"
      libxl/xl: deprecate the build_info->cpumap field
      xl: move away from the (deprecated) use of cpumap for hard affinity
      xl: move the vcpu affinity parsing in a function
      libxl/xl: make it possible to specify soft-affinity in domain config file
      libxl: automatic NUMA placement affects soft affinity

Wei Liu (1):
      libxl/xl: push VCPU affinity pinning down to libxl


 docs/man/xl.cfg.pod.5                |   50 ++++---
 docs/man/xl.pod.1                    |   32 +++-
 docs/misc/xl-numa-placement.markdown |   14 ++
 tools/libxc/Makefile                 |    2 
 tools/libxc/xc_domain.c              |   68 ++++++---
 tools/libxc/xenctrl.h                |   55 +++++++
 tools/libxl/Makefile                 |    2 
 tools/libxl/libxl.c                  |   97 +++++++++++--
 tools/libxl/libxl.h                  |   48 ++++++
 tools/libxl/libxl_create.c           |   12 --
 tools/libxl/libxl_dom.c              |  105 ++++++++++++--
 tools/libxl/libxl_types.idl          |   13 +-
 tools/libxl/libxl_utils.h            |   25 +++
 tools/libxl/xl_cmdimpl.c             |  259 ++++++++++++++++++----------------
 tools/libxl/xl_cmdtable.c            |    2 
 tools/ocaml/libs/xc/xenctrl_stubs.c  |    8 +
 tools/python/xen/lowlevel/xc/xc.c    |    6 +
 17 files changed, 582 insertions(+), 216 deletions(-)

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH v10 01/11] libxc/libxl: bump library SONAMEs
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
@ 2014-06-20 16:18 ` Dario Faggioli
  2014-06-20 16:19 ` [PATCH v10 02/11] libxc: get and set soft and hard affinity Dario Faggioli
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:18 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

The following two patches break both the libxc and libxl ABI and
API, so we had better bump the MAJORs.

Of course, for libxl, proper measures are taken (in the
relevant patch) in order to guarantee API stability.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxc/Makefile |    2 +-
 tools/libxl/Makefile |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 63be3d3..f77677c 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -1,7 +1,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR    = 4.4
+MAJOR    = 4.5
 MINOR    = 0
 
 CTRL_SRCS-y       :=
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 7fc42c8..8dd8c95 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -5,7 +5,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR = 4.4
+MAJOR = 4.5
 MINOR = 0
 
 XLUMAJOR = 4.3


* [PATCH v10 02/11] libxc: get and set soft and hard affinity
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
  2014-06-20 16:18 ` [PATCH v10 01/11] libxc/libxl: bump library SONAMEs Dario Faggioli
@ 2014-06-20 16:19 ` Dario Faggioli
  2014-06-20 16:19 ` [PATCH v10 03/11] libxl: get and set soft affinity Dario Faggioli
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

by using the flag and the new cpumap arguments introduced in
the parameters of the DOMCTL_{get,set}_vcpuaffinity hypercalls.

Now, both xc_vcpu_setaffinity() and xc_vcpu_getaffinity() have
a new flag parameter, to specify whether the user wants to
set/get hard affinity, soft affinity or both. They also have
two cpumap parameters instead of only one. This way, it is
possible to set/get both hard and soft affinity at the same
time (and, in case of set, each one to its own value).

In xc_vcpu_setaffinity(), the cpumaps are IN/OUT parameters,
as they are for the corresponding arguments of the
DOMCTL_set_vcpuaffinity hypercall. What Xen puts there is the
effective hard and soft affinity, i.e., what Xen will actually
use for scheduling.

In-tree callers are also fixed to cope with the new interface.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes from v4:
 * update toward the new hypercall interface;
 * migrate to hypercall BOUNCEs instead of BUFFERs, as
   suggested during (v3) review;

Changes from v2:
 * better cleanup logic in _vcpu_setaffinity() (regarding
   xc_hypercall_buffer_{alloc,free}()), as suggested during
   review;
 * make it more evident that DOMCTL_setvcpuaffinity has an out
   parameter, by calling it ecpumap_out, and by improving the
   comment about it;
 * change the interface and have xc_vcpu_[sg]etaffinity() so
   that they take the new parameters (flags and ecpumap_out) and
   fix the in-tree callers.
---
 tools/libxc/xc_domain.c             |   68 +++++++++++++++++++++++------------
 tools/libxc/xenctrl.h               |   55 +++++++++++++++++++++++++++-
 tools/libxl/libxl.c                 |    6 ++-
 tools/ocaml/libs/xc/xenctrl_stubs.c |    8 +++-
 tools/python/xen/lowlevel/xc/xc.c   |    6 ++-
 5 files changed, 111 insertions(+), 32 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index ef470a5..b2eb6b2 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -202,10 +202,15 @@ int xc_domain_node_getaffinity(xc_interface *xch,
 int xc_vcpu_setaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap)
+                        xc_cpumap_t cpumap_hard_inout,
+                        xc_cpumap_t cpumap_soft_inout,
+                        uint32_t flags)
 {
     DECLARE_DOMCTL;
-    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_hard_inout, 0,
+                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_soft_inout, 0,
+                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int ret = -1;
     int cpusize;
 
@@ -213,32 +218,37 @@ int xc_vcpu_setaffinity(xc_interface *xch,
     if (cpusize <= 0)
     {
         PERROR("Could not get number of cpus");
-        goto out;
+        return -1;
     }
 
-    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
-    if ( local == NULL )
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_hard_inout, cpusize);
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_soft_inout, cpusize);
+
+    if ( xc_hypercall_bounce_pre(xch, cpumap_hard_inout) ||
+         xc_hypercall_bounce_pre(xch, cpumap_soft_inout) )
     {
-        PERROR("Could not allocate memory for setvcpuaffinity domctl hypercall");
+        PERROR("Could not allocate hcall buffers for DOMCTL_setvcpuaffinity");
         goto out;
     }
 
     domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
     domctl.domain = (domid_t)domid;
     domctl.u.vcpuaffinity.vcpu = vcpu;
-    domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
-
-    memcpy(local, cpumap, cpusize);
-
-    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap, local);
+    domctl.u.vcpuaffinity.flags = flags;
 
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap,
+                         cpumap_hard_inout);
     domctl.u.vcpuaffinity.cpumap_hard.nr_bits = cpusize * 8;
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_soft.bitmap,
+                         cpumap_soft_inout);
+    domctl.u.vcpuaffinity.cpumap_soft.nr_bits = cpusize * 8;
 
     ret = do_domctl(xch, &domctl);
 
-    xc_hypercall_buffer_free(xch, local);
-
  out:
+    xc_hypercall_bounce_post(xch, cpumap_hard_inout);
+    xc_hypercall_bounce_post(xch, cpumap_soft_inout);
+
     return ret;
 }
 
@@ -246,10 +256,13 @@ int xc_vcpu_setaffinity(xc_interface *xch,
 int xc_vcpu_getaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap)
+                        xc_cpumap_t cpumap_hard,
+                        xc_cpumap_t cpumap_soft,
+                        uint32_t flags)
 {
     DECLARE_DOMCTL;
-    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_hard, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_soft, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
     int ret = -1;
     int cpusize;
 
@@ -257,30 +270,37 @@ int xc_vcpu_getaffinity(xc_interface *xch,
     if (cpusize <= 0)
     {
         PERROR("Could not get number of cpus");
-        goto out;
+        return -1;
     }
 
-    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
-    if (local == NULL)
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_hard, cpusize);
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_soft, cpusize);
+
+    if ( xc_hypercall_bounce_pre(xch, cpumap_hard) ||
+         xc_hypercall_bounce_pre(xch, cpumap_soft) )
     {
-        PERROR("Could not allocate memory for getvcpuaffinity domctl hypercall");
+        PERROR("Could not allocate hcall buffers for DOMCTL_getvcpuaffinity");
         goto out;
     }
 
     domctl.cmd = XEN_DOMCTL_getvcpuaffinity;
     domctl.domain = (domid_t)domid;
     domctl.u.vcpuaffinity.vcpu = vcpu;
-    domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
+    domctl.u.vcpuaffinity.flags = flags;
 
-    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap, local);
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap,
+                         cpumap_hard);
     domctl.u.vcpuaffinity.cpumap_hard.nr_bits = cpusize * 8;
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_soft.bitmap,
+                         cpumap_soft);
+    domctl.u.vcpuaffinity.cpumap_soft.nr_bits = cpusize * 8;
 
     ret = do_domctl(xch, &domctl);
 
-    memcpy(cpumap, local, cpusize);
+ out:
+    xc_hypercall_bounce_post(xch, cpumap_hard);
+    xc_hypercall_bounce_post(xch, cpumap_soft);
 
-    xc_hypercall_buffer_free(xch, local);
-out:
     return ret;
 }
 
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index b55d857..0249534 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -600,14 +600,65 @@ int xc_domain_node_getaffinity(xc_interface *xch,
                                uint32_t domind,
                                xc_nodemap_t nodemap);
 
+/**
+ * This function specifies the CPU affinity for a vcpu.
+ *
+ * There are two kinds of affinity. Soft affinity is the set of CPUs on
+ * which a vcpu prefers to run; hard affinity is the set of CPUs on which
+ * a vcpu is allowed to run. If flags contains XEN_VCPUAFFINITY_SOFT, the
+ * soft affinity is set to what cpumap_soft_inout contains. If flags
+ * contains XEN_VCPUAFFINITY_HARD, the hard affinity is set to what
+ * cpumap_hard_inout contains. Both flags can be set at the same time, in
+ * which case both affinities are set to what the respective parameters contain.
+ *
+ * The function also returns the effective hard and/or soft affinity, still
+ * via the cpumap_soft_inout and cpumap_hard_inout parameters. Effective
+ * affinity is, in case of soft affinity, the intersection of soft affinity,
+ * hard affinity and the cpupool's online CPUs for the domain, and is returned
+ * in cpumap_soft_inout, if XEN_VCPUAFFINITY_SOFT is set in flags. In case of
+ * hard affinity, it is the intersection between hard affinity and the
+ * cpupool's online CPUs, and is returned in cpumap_hard_inout, if
+ * XEN_VCPUAFFINITY_HARD is set in flags. If both flags are set, both soft
+ * and hard affinity are returned in the respective parameter.
+ *
+ * We report it back because the effective affinity is what the Xen
+ * scheduler will actually use; the caller can thus check whether it
+ * matches, or is at least good enough for, its purposes.
+ *
+ * @param xch a handle to an open hypervisor interface.
+ * @param domid the id of the domain to which the vcpu belongs
+ * @param vcpu the vcpu id within the domain
+ * @param cpumap_hard_inout specifies(/returns) the (effective) hard affinity
+ * @param cpumap_soft_inout specifies(/returns) the (effective) soft affinity
+ * @param flags what we want to set
+ */
 int xc_vcpu_setaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap);
+                        xc_cpumap_t cpumap_hard_inout,
+                        xc_cpumap_t cpumap_soft_inout,
+                        uint32_t flags);
+
+/**
+ * This function retrieves hard and soft CPU affinity of a vcpu,
+ * depending on what flags are set.
+ *
+ * Soft affinity is returned in cpumap_soft if XEN_VCPUAFFINITY_SOFT is set.
+ * Hard affinity is returned in cpumap_hard if XEN_VCPUAFFINITY_HARD is set.
+ *
+ * @param xch a handle to an open hypervisor interface.
+ * @param domid the id of the domain to which the vcpu belongs
+ * @param vcpu the vcpu id within the domain
+ * @param cpumap_hard is where hard affinity is returned
+ * @param cpumap_soft is where soft affinity is returned
+ * @param flags what we want to get
+ */
 int xc_vcpu_getaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap);
+                        xc_cpumap_t cpumap_hard,
+                        xc_cpumap_t cpumap_soft,
+                        uint32_t flags);
 
 
 /**
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 62e251a..43aadea 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4611,7 +4611,8 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
             goto err;
         }
         if (xc_vcpu_getaffinity(ctx->xch, domid, *nr_vcpus_out,
-                                ptr->cpumap.map) == -1) {
+                                ptr->cpumap.map, NULL,
+                                XEN_VCPUAFFINITY_HARD) == -1) {
             LOGE(ERROR, "getting vcpu affinity");
             goto err;
         }
@@ -4635,7 +4636,8 @@ err:
 int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
                            libxl_bitmap *cpumap)
 {
-    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid, cpumap->map)) {
+    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid, cpumap->map, NULL,
+                            XEN_VCPUAFFINITY_HARD)) {
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "setting vcpu affinity");
         return ERROR_FAIL;
     }
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index ff29b47..f0810eb 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -438,7 +438,9 @@ CAMLprim value stub_xc_vcpu_setaffinity(value xch, value domid,
 			c_cpumap[i/8] |= 1 << (i&7);
 	}
 	retval = xc_vcpu_setaffinity(_H(xch), _D(domid),
-	                             Int_val(vcpu), c_cpumap);
+				     Int_val(vcpu),
+				     c_cpumap, NULL,
+				     XEN_VCPUAFFINITY_HARD);
 	free(c_cpumap);
 
 	if (retval < 0)
@@ -460,7 +462,9 @@ CAMLprim value stub_xc_vcpu_getaffinity(value xch, value domid,
 		failwith_xc(_H(xch));
 
 	retval = xc_vcpu_getaffinity(_H(xch), _D(domid),
-	                             Int_val(vcpu), c_cpumap);
+				     Int_val(vcpu),
+				     c_cpumap, NULL,
+				     XEN_VCPUAFFINITY_HARD);
 	if (retval < 0) {
 		free(c_cpumap);
 		failwith_xc(_H(xch));
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index cb34446..54e8799 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -256,7 +256,8 @@ static PyObject *pyxc_vcpu_setaffinity(XcObject *self,
         }
     }
   
-    if ( xc_vcpu_setaffinity(self->xc_handle, dom, vcpu, cpumap) != 0 )
+    if ( xc_vcpu_setaffinity(self->xc_handle, dom, vcpu, cpumap,
+                             NULL, XEN_VCPUAFFINITY_HARD) != 0 )
     {
         free(cpumap);
         return pyxc_error_to_exception(self->xc_handle);
@@ -403,7 +404,8 @@ static PyObject *pyxc_vcpu_getinfo(XcObject *self,
     if(cpumap == NULL)
         return pyxc_error_to_exception(self->xc_handle);
 
-    rc = xc_vcpu_getaffinity(self->xc_handle, dom, vcpu, cpumap);
+    rc = xc_vcpu_getaffinity(self->xc_handle, dom, vcpu, cpumap,
+                             NULL, XEN_VCPUAFFINITY_HARD);
     if ( rc < 0 )
     {
         free(cpumap);


* [PATCH v10 03/11] libxl: get and set soft affinity
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
  2014-06-20 16:18 ` [PATCH v10 01/11] libxc/libxl: bump library SONAMEs Dario Faggioli
  2014-06-20 16:19 ` [PATCH v10 02/11] libxc: get and set soft and hard affinity Dario Faggioli
@ 2014-06-20 16:19 ` Dario Faggioli
  2014-06-20 16:19 ` [PATCH v10 04/11] xl: enable getting and setting " Dario Faggioli
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

Make space for a new cpumap, called cpumap_soft, in
libxl_vcpuinfo, for retrieving soft affinity, and amend the
relevant API accordingly.

libxl_set_vcpuaffinity() now takes two cpumaps, one for hard
and one for soft affinity (LIBXL_API_VERSION is exploited to
retain source level backward compatibility). Either of the
two cpumaps can be NULL, in which case only the affinity
corresponding to the non-NULL cpumap will be affected.

Getting soft affinity happens indirectly (see, e.g.,
`xl vcpu-list'), as is already the case for hard affinity.

This commit also introduces some logic to check whether the
affinity which will be used by Xen to schedule the vCPU(s)
actually matches the cpumaps provided. In fact, we want to
allow every possible combination of hard and soft affinity
to be set, but we warn the user about particularly weird
situations (e.g., hard and soft being disjoint sets
of pCPUs).

This change also updates the error handling for calls to
libxl_set_vcpuaffinity() in xl, as the return value can now
be any libxl error code, not just -1.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes from v7
 * Added 0x40500 to the list of valid libxl versions;
 * killed the cpumap_soft field from libxl_build_info.
   That is not necessary, as subsequent patches will
   introduce and alternative way to set soft affinity.
 * fix a typo.

Changes from v6:
 * fix a typo and update LIBXL_API_VERSION appropriately,
   as requested during review;
 * libxl_bitmap_equal() rewritten in such a way that it now
   automatically deals with bitmaps of different sizes. Bits
   not present in one of the maps are just considered to be
   0, as suggested during review;
 * there is no need for libxl_bitmap_bit_valid() any longer,
   so killed it;
 * inside libxl, retrieve and set both hard and soft affinity
   atomically, in just one call (to xc_vcpu_getaffinity() and
   xc_vcpu_setaffinity(), respectively), as requested during
   review.
 * update the changelog about changing the error handling for
   the call to libxl_set_vcpuaffinity() in xl.

Changes from v4:
 * get rid of inline stubs inside the LIBXL_API_VERSION_XXX
   block and just use define, as suggested during review
 * adapt to the new xc interface;
 * avoid leaking cpumap_soft in libxl_list_vcpu on error, as
   requested during review;
 * fix bogus `return 0' in libxl_set_vcpuaffinity, as
   requested during review;
 * clarify the comment for LIBXL_HAVE_SOFT_AFFINITY, as
   suggested during review;
 * renamed libxl_bitmap_valid() to libxl_bitmap_bit_valid(),
   as suggested during review.

Changes from v3:
 * only introduce one LIBXL_HAVE_ symbol for soft affinity,
   as requested during review;
 * use LIBXL_API_VERSION instead of having multiple version
   of the same function, as suggested during review;
 * use libxl_get_nr_cpus() rather than libxl_get_cputopology(),
   as suggested during review;
 * use LOGE() instead of LIBXL__LOG_ERRNO(), as requested
   during review;
 * kill the flags and use just one _set_vcpuaffinity()
   function with two cpumaps, allowing either of them to
   be NULL, as suggested during review;
 * avoid overflowing the bitmaps in libxl_bitmap_equal(),
   as suggested during review.

Changes from v2:
 * interface completely redesigned, as discussed during
   review.
---
 tools/libxl/libxl.c         |   99 ++++++++++++++++++++++++++++++++++++++-----
 tools/libxl/libxl.h         |   24 ++++++++++
 tools/libxl/libxl_dom.c     |    3 +
 tools/libxl/libxl_types.idl |    3 +
 tools/libxl/libxl_utils.h   |   25 +++++++++++
 tools/libxl/xl_cmdimpl.c    |    6 +--
 6 files changed, 141 insertions(+), 19 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 43aadea..16cede8 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4606,13 +4606,17 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
         libxl_bitmap_init(&ptr->cpumap);
         if (libxl_cpu_bitmap_alloc(ctx, &ptr->cpumap, 0))
             goto err;
+        libxl_bitmap_init(&ptr->cpumap_soft);
+        if (libxl_cpu_bitmap_alloc(ctx, &ptr->cpumap_soft, 0))
+            goto err;
         if (xc_vcpu_getinfo(ctx->xch, domid, *nr_vcpus_out, &vcpuinfo) == -1) {
             LOGE(ERROR, "getting vcpu info");
             goto err;
         }
+
         if (xc_vcpu_getaffinity(ctx->xch, domid, *nr_vcpus_out,
-                                ptr->cpumap.map, NULL,
-                                XEN_VCPUAFFINITY_HARD) == -1) {
+                                ptr->cpumap.map, ptr->cpumap_soft.map,
+                                XEN_VCPUAFFINITY_SOFT|XEN_VCPUAFFINITY_HARD) == -1) {
             LOGE(ERROR, "getting vcpu affinity");
             goto err;
         }
@@ -4628,34 +4632,105 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
 
 err:
     libxl_bitmap_dispose(&ptr->cpumap);
+    libxl_bitmap_dispose(&ptr->cpumap_soft);
     free(ret);
     GC_FREE;
     return NULL;
 }
 
 int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
-                           libxl_bitmap *cpumap)
+                           const libxl_bitmap *cpumap_hard,
+                           const libxl_bitmap *cpumap_soft)
 {
-    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid, cpumap->map, NULL,
-                            XEN_VCPUAFFINITY_HARD)) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "setting vcpu affinity");
-        return ERROR_FAIL;
+    GC_INIT(ctx);
+    libxl_bitmap hard, soft;
+    int rc, flags = 0;
+
+    libxl_bitmap_init(&hard);
+    libxl_bitmap_init(&soft);
+
+    if (!cpumap_hard && !cpumap_soft) {
+        rc = ERROR_INVAL;
+        goto out;
     }
-    return 0;
+
+    /*
+     * Xen wants writable hard and/or soft cpumaps, to put back in them
+     * the effective hard and/or soft affinity that will be used.
+     */
+    if (cpumap_hard) {
+        rc = libxl_cpu_bitmap_alloc(ctx, &hard, 0);
+        if (rc)
+            goto out;
+
+        libxl_bitmap_copy(ctx, &hard, cpumap_hard);
+        flags = XEN_VCPUAFFINITY_HARD;
+    }
+    if (cpumap_soft) {
+        rc = libxl_cpu_bitmap_alloc(ctx, &soft, 0);
+        if (rc)
+            goto out;
+
+        libxl_bitmap_copy(ctx, &soft, cpumap_soft);
+        flags |= XEN_VCPUAFFINITY_SOFT;
+    }
+
+    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid,
+                            cpumap_hard ? hard.map : NULL,
+                            cpumap_soft ? soft.map : NULL,
+                            flags)) {
+        LOGE(ERROR, "setting vcpu affinity");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    /*
+     * Let's check the results. Hard affinity will never be empty, but it
+     * is possible that Xen will use something different from what we asked
+     * for various reasons. If that's the case, report it.
+     */
+    if (cpumap_hard &&
+        !libxl_bitmap_equal(cpumap_hard, &hard, 0))
+        LOG(DEBUG, "New hard affinity for vcpu %d has unreachable cpus",
+        vcpuid);
+    /*
+     * Soft affinity can both be different from what asked and empty. Check
+     * for (and report) both.
+     */
+    if (cpumap_soft) {
+        if (!libxl_bitmap_equal(cpumap_soft, &soft, 0))
+            LOG(DEBUG, "New soft affinity for vcpu %d has unreachable cpus",
+                vcpuid);
+        if (libxl_bitmap_is_empty(&soft))
+            LOG(WARN, "all cpus in soft affinity of vcpu %d are unreachable."
+                " Only hard affinity will be considered for scheduling",
+                vcpuid);
+    }
+
+    rc = 0;
+ out:
+    libxl_bitmap_dispose(&hard);
+    libxl_bitmap_dispose(&soft);
+    GC_FREE;
+    return rc;
 }
 
 int libxl_set_vcpuaffinity_all(libxl_ctx *ctx, uint32_t domid,
-                               unsigned int max_vcpus, libxl_bitmap *cpumap)
+                               unsigned int max_vcpus,
+                               const libxl_bitmap *cpumap_hard,
+                               const libxl_bitmap *cpumap_soft)
 {
+    GC_INIT(ctx);
     int i, rc = 0;
 
     for (i = 0; i < max_vcpus; i++) {
-        if (libxl_set_vcpuaffinity(ctx, domid, i, cpumap)) {
-            LIBXL__LOG(ctx, LIBXL__LOG_WARNING,
-                       "failed to set affinity for %d", i);
+        if (libxl_set_vcpuaffinity(ctx, domid, i, cpumap_hard, cpumap_soft)) {
+            LOG(WARN, "failed to set affinity for %d", i);
             rc = ERROR_FAIL;
         }
     }
+
+    GC_FREE;
     return rc;
 }
 
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 69ceac8..a8477c9 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -104,6 +104,13 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY indicates that a 'cpumap_soft'
+ * field (of libxl_bitmap type) is present in libxl_vcpuinfo,
+ * containing the soft affinity of a vcpu.
+ */
+#define LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY 1
+
+/*
  * LIBXL_HAVE_DEVICE_DISK_DIRECT_IO_SAFE indicates that a
  * 'direct_io_safe' field (of boolean type) is present in
  * libxl_device_disk.
@@ -1136,9 +1143,22 @@ int libxl_userdata_retrieve(libxl_ctx *ctx, uint32_t domid,
 
 int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo);
 int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
-                           libxl_bitmap *cpumap);
+                           const libxl_bitmap *cpumap_hard,
+                           const libxl_bitmap *cpumap_soft);
 int libxl_set_vcpuaffinity_all(libxl_ctx *ctx, uint32_t domid,
-                               unsigned int max_vcpus, libxl_bitmap *cpumap);
+                               unsigned int max_vcpus,
+                               const libxl_bitmap *cpumap_hard,
+                               const libxl_bitmap *cpumap_soft);
+
+#if defined (LIBXL_API_VERSION) && LIBXL_API_VERSION < 0x040500
+
+#define libxl_set_vcpuaffinity(ctx, domid, vcpuid, map) \
+    libxl_set_vcpuaffinity((ctx), (domid), (vcpuid), (map), NULL)
+#define libxl_set_vcpuaffinity_all(ctx, domid, max_vcpus, map) \
+    libxl_set_vcpuaffinity_all((ctx), (domid), (max_vcpus), (map), NULL)
+
+#endif
+
 int libxl_domain_set_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
                                   libxl_bitmap *nodemap);
 int libxl_domain_get_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 661999c..a90a8d5 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -261,7 +261,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
             return rc;
     }
     libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
-    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
+    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
+                               &info->cpumap, NULL);
 
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
         LIBXL_MAXMEM_CONSTANT) < 0) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 1018142..37df854 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -520,7 +520,8 @@ libxl_vcpuinfo = Struct("vcpuinfo", [
     ("blocked", bool),
     ("running", bool),
     ("vcpu_time", uint64), # total vcpu time ran (ns)
-    ("cpumap", libxl_bitmap), # current cpu's affinities
+    ("cpumap", libxl_bitmap), # current hard cpu affinity
+    ("cpumap_soft", libxl_bitmap), # current soft cpu affinity
     ], dir=DIR_OUT)
 
 libxl_physinfo = Struct("physinfo", [
diff --git a/tools/libxl/libxl_utils.h b/tools/libxl/libxl_utils.h
index 8bfb81b..13b42a1 100644
--- a/tools/libxl/libxl_utils.h
+++ b/tools/libxl/libxl_utils.h
@@ -101,6 +101,31 @@ static inline int libxl_bitmap_cpu_valid(libxl_bitmap *bitmap, int bit)
 #define libxl_for_each_set_bit(v, m) for (v = 0; v < (m).size * 8; v++) \
                                              if (libxl_bitmap_test(&(m), v))
 
+/*
+ * Compares two bitmaps bit by bit, up to nr_bits or, if nr_bits is 0, up
+ * to the size of the largest bitmap. If the sizes do not match, bits past
+ * the end of the shorter bitmap are considered to be 0, which matches the
+ * semantics and implementation of libxl_bitmap_test().
+ *
+ * So, basically, [0,1,0] and [0,1] are considered equal, while [0,1,1] and
+ * [0,1] are different.
+ */
+static inline int libxl_bitmap_equal(const libxl_bitmap *ba,
+                                     const libxl_bitmap *bb,
+                                     int nr_bits)
+{
+    int i;
+
+    if (nr_bits == 0)
+        nr_bits = ba->size > bb->size ? ba->size * 8 : bb->size * 8;
+
+    for (i = 0; i < nr_bits; i++) {
+        if (libxl_bitmap_test(ba, i) != libxl_bitmap_test(bb, i))
+            return 0;
+    }
+    return 1;
+}
+
 int libxl_cpu_bitmap_alloc(libxl_ctx *ctx, libxl_bitmap *cpumap, int max_cpus);
 int libxl_node_bitmap_alloc(libxl_ctx *ctx, libxl_bitmap *nodemap,
                             int max_nodes);
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index be041f2..4d5e0d5 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -2198,7 +2198,7 @@ start:
             } else {
                 libxl_bitmap_set_any(&vcpu_cpumap);
             }
-            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap)) {
+            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap, NULL)) {
                 fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
                 libxl_bitmap_dispose(&vcpu_cpumap);
                 free(vcpu_to_pcpu);
@@ -4622,7 +4622,7 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
     }
 
     if (vcpuid != -1) {
-        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, &cpumap) == -1) {
+        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, &cpumap, NULL)) {
             fprintf(stderr, "Could not set affinity for vcpu `%u'.\n", vcpuid);
             goto out;
         }
@@ -4634,7 +4634,7 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
         }
         for (i = 0; i < nb_vcpu; i++) {
             if (libxl_set_vcpuaffinity(ctx, domid, vcpuinfo[i].vcpuid,
-                                       &cpumap) == -1) {
+                                       &cpumap, NULL)) {
                 fprintf(stderr, "libxl_set_vcpuaffinity failed"
                                 " on vcpu `%u'.\n", vcpuinfo[i].vcpuid);
             }

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v10 04/11] xl: enable getting and setting soft affinity
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (2 preceding siblings ...)
  2014-06-20 16:19 ` [PATCH v10 03/11] libxl: get and set soft affinity Dario Faggioli
@ 2014-06-20 16:19 ` Dario Faggioli
  2014-06-27 12:33   ` Ian Campbell
  2014-06-20 16:19 ` [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated" Dario Faggioli
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

Getting happens via `xl vcpu-list', which now looks like this:

 # xl vcpu-list -s
 Name       ID VCPU CPU State Time(s) Affinity (Hard / Soft)
 Domain-0   0     0  11  -b-     5.4  8-15 / all
 Domain-0   0     1  11  -b-     1.0  8-15 / all
 Domain-0   0    14  13  -b-     1.4  8-15 / all
 Domain-0   0    15   8  -b-     1.6  8-15 / all
 vm-test    3     0   4  -b-     2.5  0-12 / 0-7
 vm-test    3     1   0  -b-     3.2  0-12 / 0-7

Setting happens by specifying two pCPU masks to the `xl vcpu-pin'
command: the first one is the hard affinity, the second the soft
affinity. If only one mask is specified, only hard affinity is
affected. To change only soft affinity, '-' can be passed as the
hard affinity mask parameter, which leaves it untouched.

The xl manual page is updated accordingly.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes from v7:
 * 'const char *' used instead of #define/#undef for string
   constants, as requested during review;
 * more consistently use the 'hard' and 'soft' pointers rather
   than the actual cpumap variables, as suggested during review;
 * bitmap holding hard affinity renamed from 'cpumap' to
   'cpumap_hard'.

Changes from v6:
 * removed a remnant from previous iteration, when `vcpu-pin'
   was taking a parameter;
 * tried to simplify parsing inside of `xl vcpu-pin', as suggested
   during review. Could not take the exact approach described in the
   emails, but still did my best for making this more clear and easy
   to read;
 * fixed a few typos and leftovers.

Changes from v5:
 * change command line interface for 'vcpu-pin', as suggested during
   review.

Changes from v4:
 * fix and rephrased the manual entry, as suggested during review;
 * more refactoring to remove some leftover special casing, as
   suggested during review.

Changes from v3:
 * fix typos in doc, rephrased the help message and changed
   the title of the column for hard/soft affinity, as suggested
   during review.

Changes from v2:
 * this patch folds what in v2 were patches 13 and 14;
 * `xl vcpu-pin' always shows both hard and soft affinity,
   without the need of passing '-s'.
---
 docs/man/xl.pod.1         |   32 +++++++++++---
 tools/libxl/xl_cmdimpl.c  |  101 ++++++++++++++++++++++++++++++---------------
 tools/libxl/xl_cmdtable.c |    2 -
 3 files changed, 92 insertions(+), 43 deletions(-)

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index 30bd4bf..9d1c2a5 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -651,16 +651,32 @@ after B<vcpu-set>, go to B<SEE ALSO> section for information.
 Lists VCPU information for a specific domain.  If no domain is
 specified, VCPU information for all domains will be provided.
 
-=item B<vcpu-pin> I<domain-id> I<vcpu> I<cpus>
+=item B<vcpu-pin> I<domain-id> I<vcpu> I<cpus hard> I<cpus soft>
 
-Pins the VCPU to only run on the specific CPUs.  The keyword
-B<all> can be used to apply the I<cpus> list to all VCPUs in the
-domain.
+Set hard and soft affinity for a I<vcpu> of I<domain-id>. Normally VCPUs
+can float between available CPUs whenever Xen deems a different run state
+is appropriate.
 
-Normally VCPUs can float between available CPUs whenever Xen deems a
-different run state is appropriate.  Pinning can be used to restrict
-this, by ensuring certain VCPUs can only run on certain physical
-CPUs.
+Hard affinity can be used to restrict this, by ensuring certain VCPUs
+can only run on certain physical CPUs. Soft affinity specifies a I<preferred>
+set of CPUs. Soft affinity needs special support in the scheduler, which is
+only provided in credit1.
+
+The keyword B<all> can be used to apply the hard and soft affinity masks to
+all the VCPUs in the domain. The symbol '-' can be used to leave either
+hard or soft affinity alone.
+
+For example:
+
+ xl vcpu-pin 0 3 - 6-9
+
+will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and 9,
+leaving its hard affinity untouched. On the other hand:
+
+ xl vcpu-pin 0 3 3,4 6-9
+
+will set both hard and soft affinity, the former to pCPUs 3 and 4, the
+latter to pCPUs 6,7,8, and 9.
 
 =item B<vm-list>
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4d5e0d5..c639c78 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -656,7 +656,7 @@ static int update_cpumap_range(const char *str, libxl_bitmap *cpumap)
 * single cpus or as entire NUMA nodes) and turns it into the
  * corresponding libxl_bitmap (in cpumap).
  */
-static int vcpupin_parse(char *cpu, libxl_bitmap *cpumap)
+static int vcpupin_parse(const char *cpu, libxl_bitmap *cpumap)
 {
     char *ptr, *saveptr = NULL;
     int rc = 0;
@@ -4506,8 +4506,10 @@ static void print_vcpuinfo(uint32_t tdomid,
     }
     /*      TIM */
     printf("%9.1f  ", ((float)vcpuinfo->vcpu_time / 1e9));
-    /* CPU AFFINITY */
+    /* CPU HARD AND SOFT AFFINITY */
     print_bitmap(vcpuinfo->cpumap.map, nr_cpus, stdout);
+    printf(" / ");
+    print_bitmap(vcpuinfo->cpumap_soft.map, nr_cpus, stdout);
     printf("\n");
 }
 
@@ -4542,7 +4544,8 @@ static void vcpulist(int argc, char **argv)
     }
 
     printf("%-32s %5s %5s %5s %5s %9s %s\n",
-           "Name", "ID", "VCPU", "CPU", "State", "Time(s)", "CPU Affinity");
+           "Name", "ID", "VCPU", "CPU", "State", "Time(s)",
+           "Affinity (Hard / Soft)");
     if (!argc) {
         if (!(dominfo = libxl_list_domain(ctx, &nb_domain))) {
             fprintf(stderr, "libxl_list_domain failed.\n");
@@ -4575,17 +4578,29 @@ int main_vcpulist(int argc, char **argv)
     return 0;
 }
 
-static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
+int main_vcpupin(int argc, char **argv)
 {
     libxl_vcpuinfo *vcpuinfo;
-    libxl_bitmap cpumap;
-
-    uint32_t vcpuid;
+    libxl_bitmap cpumap_hard, cpumap_soft;
+    libxl_bitmap *soft = &cpumap_soft, *hard = &cpumap_hard;
+    uint32_t vcpuid, domid;
+    const char *vcpu, *hard_str, *soft_str;
     char *endptr;
-    int i, nb_cpu, nb_vcpu, rc = -1;
+    int opt, nb_cpu, nb_vcpu, rc = -1;
+
+    libxl_bitmap_init(&cpumap_hard);
+    libxl_bitmap_init(&cpumap_soft);
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
+        /* No options */
+    }
 
-    libxl_bitmap_init(&cpumap);
+    domid = find_domain(argv[optind]);
+    vcpu = argv[optind+1];
+    hard_str = argv[optind+2];
+    soft_str = (argc > optind+3) ? argv[optind+3] : NULL;
 
+    /* Figure out which vCPU we are dealing with */
     vcpuid = strtoul(vcpu, &endptr, 10);
     if (vcpu == endptr) {
         if (strcmp(vcpu, "all")) {
@@ -4595,10 +4610,35 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
         vcpuid = -1;
     }
 
-    if (libxl_cpu_bitmap_alloc(ctx, &cpumap, 0))
+    if (libxl_cpu_bitmap_alloc(ctx, &cpumap_hard, 0) ||
+        libxl_cpu_bitmap_alloc(ctx, &cpumap_soft, 0))
         goto out;
 
-    if (vcpupin_parse(cpu, &cpumap))
+    /*
+     * Syntax is: xl vcpu-pin <domid> <vcpu> <hard> <soft>
+     * We want to handle all the following cases ('-' means
+     * "leave it alone"):
+     *  xl vcpu-pin 0 3 3,4
+     *  xl vcpu-pin 0 3 3,4 -
+     *  xl vcpu-pin 0 3 - 6-9
+     *  xl vcpu-pin 0 3 3,4 6-9
+     */
+
+    /*
+     * Hard affinity is always present. However, if it's "-", all we need
+     * is passing a NULL pointer to the libxl_set_vcpuaffinity() call below.
+     */
+    if (!strcmp(hard_str, "-"))
+        hard = NULL;
+    else if (vcpupin_parse(hard_str, hard))
+        goto out;
+    /*
+     * Soft affinity is handled similarly. Only difference: we also want
+     * to pass NULL to libxl_set_vcpuaffinity() if it is not specified.
+     */
+    if (argc <= optind+3 || !strcmp(soft_str, "-"))
+        soft = NULL;
+    else if (vcpupin_parse(soft_str, soft))
         goto out;
 
     if (dryrun_only) {
@@ -4609,7 +4649,14 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
         }
 
         fprintf(stdout, "cpumap: ");
-        print_bitmap(cpumap.map, nb_cpu, stdout);
+        if (hard)
+            print_bitmap(hard->map, nb_cpu, stdout);
+        else
+            fprintf(stdout, "-");
+        if (soft) {
+            fprintf(stdout, " ");
+            print_bitmap(soft->map, nb_cpu, stdout);
+        }
         fprintf(stdout, "\n");
 
         if (ferror(stdout) || fflush(stdout)) {
@@ -4622,43 +4669,29 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
     }
 
     if (vcpuid != -1) {
-        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, &cpumap, NULL)) {
-            fprintf(stderr, "Could not set affinity for vcpu `%u'.\n", vcpuid);
+        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, hard, soft)) {
+            fprintf(stderr, "Could not set affinity for vcpu `%u'.\n",
+                    vcpuid);
             goto out;
         }
     }
     else {
-        if (!(vcpuinfo = libxl_list_vcpu(ctx, domid, &nb_vcpu, &i))) {
+        if (!(vcpuinfo = libxl_list_vcpu(ctx, domid, &nb_vcpu, &nb_cpu))) {
             fprintf(stderr, "libxl_list_vcpu failed.\n");
             goto out;
         }
-        for (i = 0; i < nb_vcpu; i++) {
-            if (libxl_set_vcpuaffinity(ctx, domid, vcpuinfo[i].vcpuid,
-                                       &cpumap, NULL)) {
-                fprintf(stderr, "libxl_set_vcpuaffinity failed"
-                                " on vcpu `%u'.\n", vcpuinfo[i].vcpuid);
-            }
-        }
+        if (libxl_set_vcpuaffinity_all(ctx, domid, nb_vcpu, hard, soft))
+            fprintf(stderr, "Could not set affinity.\n");
         libxl_vcpuinfo_list_free(vcpuinfo, nb_vcpu);
     }
 
     rc = 0;
  out:
-    libxl_bitmap_dispose(&cpumap);
+    libxl_bitmap_dispose(&cpumap_soft);
+    libxl_bitmap_dispose(&cpumap_hard);
     return rc;
 }
 
-int main_vcpupin(int argc, char **argv)
-{
-    int opt;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
-        /* No options */
-    }
-
-    return vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
-}
-
 static void vcpuset(uint32_t domid, const char* nr_vcpus, int check_host)
 {
     char *endptr;
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index 4279b9f..7b7fa92 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -218,7 +218,7 @@ struct cmd_spec cmd_table[] = {
     { "vcpu-pin",
       &main_vcpupin, 1, 1,
       "Set which CPUs a VCPU can use",
-      "<Domain> <VCPU|all> <CPUs|all>",
+      "<Domain> <VCPU|all> <Hard affinity|-|all> <Soft affinity|-|all>",
     },
     { "vcpu-set",
       &main_vcpuset, 0, 1,

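For reviewers, the argument-handling rule described in the commit message and implemented in main_vcpupin() ("-" or a missing argument means "leave that affinity alone", i.e. pass NULL to libxl_set_vcpuaffinity()) can be sketched in isolation. `affinity_arg` is a hypothetical helper for illustration, not part of the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Mirrors the vcpu-pin argument handling: return NULL when the affinity
 * should be left alone ("-" was given, or the optional soft argument was
 * omitted), otherwise return the string that would be fed to
 * vcpupin_parse() to build a cpumap.
 */
static const char *affinity_arg(const char *arg)
{
    if (arg == NULL || strcmp(arg, "-") == 0)
        return NULL;
    return arg;
}
```

So `xl vcpu-pin 0 3 - 6-9` ends up calling libxl with (NULL, soft-map), and `xl vcpu-pin 0 3 3,4` with (hard-map, NULL), matching the four cases listed in the code comment.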
^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated"
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (3 preceding siblings ...)
  2014-06-20 16:19 ` [PATCH v10 04/11] xl: enable getting and setting " Dario Faggioli
@ 2014-06-20 16:19 ` Dario Faggioli
  2014-06-27 10:44   ` Ian Campbell
  2014-06-20 16:19 ` [PATCH v10 06/11] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

by avoiding allocating them in libxl__domain_build_info_setdefault.
In fact, back in 7e449837 ("libxl: provide _init and _setdefault for
libxl_domain_build_info") and a5d30c23 ("libxl: allow for explicitly
specifying node-affinity"), it was decided that the default for these
fields was for them to be allocated and filled.

That is now causing problems whenever we have to figure out whether
the caller is using one of those fields. In fact, when we see
a full bitmap, is it just the default value, or does the user
want it that way?

Since that kind of knowledge has become important, change the default
to be "bitmap not allocated". It then becomes easy to know whether a
libxl caller is using one of the fields, just by checking whether the
bitmap is actually there with a non-zero size.

This is very important for the following patches introducing new ways
of specifying hard and soft affinity. It also allows us to improve
the checks around NUMA automatic placement, during domain creation
(and that bit is done in this very patch).

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v9:
 * new patch. Doing something like this was discussed on xen-devel,
   during v9 and (I think) v7 review.
---
 tools/libxl/libxl_create.c |   12 ------------
 tools/libxl/libxl_dom.c    |   35 +++++++++++++++++++++++------------
 2 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index d6b8a29..8cc3ca5 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -187,20 +187,8 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     } else if (b_info->avail_vcpus.size > HVM_MAX_VCPUS)
         return ERROR_FAIL;
 
-    if (!b_info->cpumap.size) {
-        if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
-            return ERROR_FAIL;
-        libxl_bitmap_set_any(&b_info->cpumap);
-    }
-
     libxl_defbool_setdefault(&b_info->numa_placement, true);
 
-    if (!b_info->nodemap.size) {
-        if (libxl_node_bitmap_alloc(CTX, &b_info->nodemap, 0))
-            return ERROR_FAIL;
-        libxl_bitmap_set_any(&b_info->nodemap);
-    }
-
     if (b_info->max_memkb == LIBXL_MEMKB_DEFAULT)
         b_info->max_memkb = 32 * 1024;
     if (b_info->target_memkb == LIBXL_MEMKB_DEFAULT)
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index a90a8d5..5d8a0be 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -241,28 +241,39 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     }
 
     /*
-     * Check if the domain has any CPU affinity. If not, try to build
-     * up one. In case numa_place_domain() find at least a suitable
-     * candidate, it will affect info->nodemap accordingly; if it
-     * does not, it just leaves it as it is. This means (unless
-     * some weird error manifests) the subsequent call to
-     * libxl_domain_set_nodeaffinity() will do the actual placement,
-     * whatever that turns out to be.
+     * Check if the domain has any CPU or node affinity already. If not, try
+     * to build up the latter via automatic NUMA placement. In fact, in case
+     * numa_place_domain() manages to find a placement, info->nodemap is
+     * updated accordingly; if it does not, info->nodemap is just left
+     * alone. It is then the subsequent call to
+     * libxl_domain_set_nodeaffinity() that enacts the actual placement.
      */
     if (libxl_defbool_val(info->numa_placement)) {
-        if (!libxl_bitmap_is_full(&info->cpumap)) {
+        if (info->cpumap.size) {
             LOG(ERROR, "Can run NUMA placement only if no vcpu "
-                       "affinity is specified");
+                       "affinity is specified explicitly");
             return ERROR_INVAL;
         }
+        if (info->nodemap.size) {
+            LOG(ERROR, "Can run NUMA placement only if the domain does not "
+                       "have any NUMA node affinity set already");
+            return ERROR_INVAL;
+        }
+
+        rc = libxl_node_bitmap_alloc(ctx, &info->nodemap, 0);
+        if (rc)
+            return rc;
+        libxl_bitmap_set_any(&info->nodemap);
 
         rc = numa_place_domain(gc, domid, info);
         if (rc)
             return rc;
     }
-    libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
-    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
-                               &info->cpumap, NULL);
+    if (info->nodemap.size)
+        libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
+    if (info->cpumap.size)
+        libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
+                                   &info->cpumap, NULL);
 
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
         LIBXL_MAXMEM_CONSTANT) < 0) {

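The effect of this patch (an unallocated bitmap, i.e. one with size 0, now means "the caller never touched this field", so automatic NUMA placement may run) can be modelled with a toy struct. `toy_map` and `numa_placement_allowed` are hypothetical illustrations of the check in libxl__build_pre(), not libxl code:

```c
#include <assert.h>

/* Toy stand-in for libxl_bitmap: only the size matters here. */
struct toy_map {
    int size;   /* 0 == never allocated == not set by the caller */
};

/*
 * With the new default, automatic placement is allowed only when the
 * caller set neither a CPU affinity nor a node affinity explicitly,
 * mirroring the two ERROR_INVAL checks added to libxl__build_pre().
 */
static int numa_placement_allowed(const struct toy_map *cpumap,
                                  const struct toy_map *nodemap)
{
    return cpumap->size == 0 && nodemap->size == 0;
}
```

Before this patch the default was a full, allocated bitmap, so the same question had to be answered with the ambiguous "is the bitmap full?" test the commit message complains about.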
^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v10 06/11] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (4 preceding siblings ...)
  2014-06-20 16:19 ` [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated" Dario Faggioli
@ 2014-06-20 16:19 ` Dario Faggioli
  2014-06-27 10:55   ` Ian Campbell
  2014-06-20 16:19 ` [PATCH v10 07/11] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

From: Wei Liu <wei.liu2@citrix.com>

This patch introduces an array of libxl_bitmap called "vcpu_hard_affinity"
in libxl IDL to preserve VCPU to PCPU mapping. This is necessary for libxl
to preserve all information to construct a domain.

The array accommodates at most max_vcpus elements, each containing the
affinity of the respective VCPU. If fewer than max_vcpus bitmaps are
present, the VCPUs associated with the missing elements will just stay with
their default affinity (they'll be free to execute on every PCPU).

In case both this new field and the already existing cpumap field are
used, the content of the array will override what's set in cpumap. (In
xl, we make sure that this never happens, by using only one of the
two at any given time.)

The proper macro to mark the API change (called
LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS) is added, but commented out.
It will be uncommented by the patch in the series that completes the
process, by adding the "vcpu_soft_affinity" array. This is because, after
all, these two fields are being added sort-of together, and are very
similar, in both meaning and usage, so it makes sense for them to
share the same marker.

This patch was originally part of Wei's series about pushing as much
information as possible on domain configuration in libxl, rather than
xl. See here, for more details:
  http://lists.xen.org/archives/html/xen-devel/2014-06/msg01026.html
  http://lists.xen.org/archives/html/xen-devel/2014-06/msg01031.html

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v9:
 * update the LIBXL_HAVE_xxx macro so that it can account for both
   vcpu_hard_affinity array, introduced in this patch, and
   vcpu_soft_affinity array, introduced in another patch of the
   series, as suggested during review;
 * update the LIBXL_HAVE_xxx macro comment accordingly. Also,
   added the part explaining what happens if both cpumap and
   vcpu_hard_affinity are specified, as requested during review;
 * in xl code, avoid preallocating the whole vcpu_hard_affinity array.
   That is not necessary: vcpus not present in the array will
   stay with their default affinity, no need to pass any empty,
   or full, or whatever cpumap for them;
 * still in xl, while dealing with a list ("cpus = [2, 3]"), it
   is not necessary to mess with cpumap, so kill any reference to
   that.

Changes from v8:
 * typo in the changelog.

Changes from v7:
 This patch is new in the series, and it is coming from Wei's series
 about pushing domain config information down to libxl. It is being
 incorporated in this series to reduce as much as possible the
 inter-dependencies between the two patch series, i.e., for ease
 of development and review.
---
 tools/libxl/libxl.h         |   25 ++++++++++++++
 tools/libxl/libxl_dom.c     |   17 ++++++++-
 tools/libxl/libxl_types.idl |    1 +
 tools/libxl/xl_cmdimpl.c    |   79 ++++++++++---------------------------------
 4 files changed, 60 insertions(+), 62 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a8477c9..d63cd11 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -340,6 +340,31 @@ typedef struct libxl__ctx libxl_ctx;
 #endif
 
 /*
+ * LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS
+ *
+ * If this is defined, then the libxl_domain_build_info structure will
+ * contain two arrays of libxl_bitmap-s, with all the necessary information
+ * to set the hard affinity (vcpu_hard_affinity) and the soft affinity
+ * (vcpu_soft_affinity) of the VCPUs.
+ *
+ * Note that, if the vcpu_hard_affinity array is used, libxl will ignore
+ * the content of the cpumap field of libxl_domain_build_info. That is to
+ * say, if the array is allocated and used by the caller, it is it and
+ * only it that determines the hard affinity of the domain's VCPUs.
+ *
+ * The number of libxl_bitmap-s in the arrays should be equal to the
+ * maximum number of VCPUs of the domain. If there only are N elements in
+ * an array, with N smaller the the maximum number of VCPUs, the hard or
+ * soft affinity (depending on which array we are talking about) will be
+ * set only for the first N VCPUs. The other VCPUs will just have affinity,
+ * both hard and soft, with all the host PCPUs.
+ * Each bitmap should be big enough to accommodate the maximum number of
+ * PCPUs of the host.
+ */
+/* to be uncommented when soft array added */
+/* #define LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS 1 */
+
+/*
  * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
  *
  * If this is defined, then the libxl_domain_build_info structure will
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 5d8a0be..7a60ee2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -249,7 +249,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
      * libxl_domain_set_nodeaffinity() that enacts the actual placement.
      */
     if (libxl_defbool_val(info->numa_placement)) {
-        if (info->cpumap.size) {
+        if (info->cpumap.size || info->num_vcpu_hard_affinity) {
             LOG(ERROR, "Can run NUMA placement only if no vcpu "
                        "affinity is specified explicitly");
             return ERROR_INVAL;
@@ -271,10 +271,23 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     }
     if (info->nodemap.size)
         libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
-    if (info->cpumap.size)
+    /* As mentioned in libxl.h, vcpu_hard_affinity takes precedence */
+    if (info->num_vcpu_hard_affinity) {
+        int i;
+
+        for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
+            if (libxl_set_vcpuaffinity(ctx, domid, i,
+                                       &info->vcpu_hard_affinity[i],
+                                       NULL)) {
+                LOG(ERROR, "setting affinity failed on vcpu `%d'", i);
+                return ERROR_FAIL;
+            }
+        }
+    } else if (info->cpumap.size)
         libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                                    &info->cpumap, NULL);
 
+
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
         LIBXL_MAXMEM_CONSTANT) < 0) {
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 37df854..5607ea7 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -303,6 +303,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("avail_vcpus",     libxl_bitmap),
     ("cpumap",          libxl_bitmap),
     ("nodemap",         libxl_bitmap),
+    ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
     ("numa_placement",  libxl_defbool),
     ("tsc_mode",        libxl_tsc_mode),
     ("max_memkb",       MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c639c78..f2f5fb2 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -88,9 +88,6 @@ xlchild children[child_max];
 static const char *common_domname;
 static int fd_lock = -1;
 
-/* Stash for specific vcpu to pcpu mappping */
-static int *vcpu_to_pcpu;
-
 static const char savefileheader_magic[32]=
     "Xen saved domain, xl format\n \0 \r";
 
@@ -703,7 +700,7 @@ static void parse_config_data(const char *config_source,
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms;
     XLU_ConfigList *ioports, *irqs, *iomem;
-    int num_ioports, num_irqs, num_iomem;
+    int num_ioports, num_irqs, num_iomem, num_cpus;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
@@ -800,43 +797,32 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxvcpus", &l, 0))
         b_info->max_vcpus = l;
 
-    if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1)) {
-        int n_cpus = 0;
+    if (!xlu_cfg_get_list (config, "cpus", &cpus, &num_cpus, 1)) {
+        int j = 0;
 
-        if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
-            fprintf(stderr, "Unable to allocate cpumap\n");
-            exit(1);
-        }
+        /* Silently ignore values corresponding to non-existent vcpus */
+        if (num_cpus > b_info->max_vcpus)
+            num_cpus = b_info->max_vcpus;
 
-        /* Prepare the array for single vcpu to pcpu mappings */
-        vcpu_to_pcpu = xmalloc(sizeof(int) * b_info->max_vcpus);
-        memset(vcpu_to_pcpu, -1, sizeof(int) * b_info->max_vcpus);
+        b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
 
-        /*
-         * Idea here is to let libxl think all the domain's vcpus
-         * have cpu affinity with all the pcpus on the list.
-         * It is then us, here in xl, that matches each single vcpu
-         * to its pcpu (and that's why we need to stash such info in
-         * the vcpu_to_pcpu array now) after the domain has been created.
-         * Doing it like this saves the burden of passing to libxl
-         * some big array hosting the single mappings. Also, using
-         * the cpumap derived from the list ensures memory is being
-         * allocated on the proper nodes anyway.
-         */
-        libxl_bitmap_set_none(&b_info->cpumap);
-        while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
+        while ((buf = xlu_cfg_get_listitem(cpus, j)) != NULL && j < num_cpus) {
             i = atoi(buf);
-            if (!libxl_bitmap_cpu_valid(&b_info->cpumap, i)) {
-                fprintf(stderr, "cpu %d illegal\n", i);
+
+            libxl_bitmap_init(&b_info->vcpu_hard_affinity[j]);
+            if (libxl_cpu_bitmap_alloc(ctx,
+                                       &b_info->vcpu_hard_affinity[j], 0)) {
+                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", j);
                 exit(1);
             }
-            libxl_bitmap_set(&b_info->cpumap, i);
-            if (n_cpus < b_info->max_vcpus)
-                vcpu_to_pcpu[n_cpus] = i;
-            n_cpus++;
+            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[j]);
+            libxl_bitmap_set(&b_info->vcpu_hard_affinity[j], i);
+
+            j++;
         }
+        b_info->num_vcpu_hard_affinity = num_cpus;
 
-        /* We have a cpumap, disable automatic placement */
+        /* We have a list of cpumaps, disable automatic placement */
         libxl_defbool_set(&b_info->numa_placement, false);
     }
     else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
@@ -2183,33 +2169,6 @@ start:
     if ( ret )
         goto error_out;
 
-    /* If single vcpu to pcpu mapping was requested, honour it */
-    if (vcpu_to_pcpu) {
-        libxl_bitmap vcpu_cpumap;
-
-        ret = libxl_cpu_bitmap_alloc(ctx, &vcpu_cpumap, 0);
-        if (ret)
-            goto error_out;
-        for (i = 0; i < d_config.b_info.max_vcpus; i++) {
-
-            if (vcpu_to_pcpu[i] != -1) {
-                libxl_bitmap_set_none(&vcpu_cpumap);
-                libxl_bitmap_set(&vcpu_cpumap, vcpu_to_pcpu[i]);
-            } else {
-                libxl_bitmap_set_any(&vcpu_cpumap);
-            }
-            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap, NULL)) {
-                fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
-                libxl_bitmap_dispose(&vcpu_cpumap);
-                free(vcpu_to_pcpu);
-                ret = ERROR_FAIL;
-                goto error_out;
-            }
-        }
-        libxl_bitmap_dispose(&vcpu_cpumap);
-        free(vcpu_to_pcpu); vcpu_to_pcpu = NULL;
-    }
-
     ret = libxl_userdata_store(ctx, domid, "xl",
                                     config_data, config_len);
     if (ret) {

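The precedence rule documented in the libxl.h comment above (a populated vcpu_hard_affinity array wins over the legacy cpumap, which in turn only matters if allocated) can be sketched as a small decision function. `pick_affinity_source` and the enum are hypothetical, for illustration only:

```c
#include <assert.h>

/* Which source of hard affinity libxl__build_pre() would act on. */
enum affinity_source {
    AFFINITY_NONE,    /* neither field set: VCPUs keep default affinity */
    AFFINITY_ARRAY,   /* per-VCPU vcpu_hard_affinity array */
    AFFINITY_CPUMAP   /* single legacy cpumap applied to all VCPUs */
};

/*
 * Mirrors the "if (info->num_vcpu_hard_affinity) ... else if
 * (info->cpumap.size)" chain added to libxl__build_pre(): the array,
 * when present, takes precedence over cpumap.
 */
static enum affinity_source pick_affinity_source(int num_vcpu_hard_affinity,
                                                 int cpumap_size)
{
    if (num_vcpu_hard_affinity)
        return AFFINITY_ARRAY;
    if (cpumap_size)
        return AFFINITY_CPUMAP;
    return AFFINITY_NONE;
}
```

This also shows why the next patch can deprecate cpumap without breaking old callers: an application that only sets cpumap still falls through to the AFFINITY_CPUMAP branch.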
^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v10 07/11] libxl/xl: deprecate the build_info->cpumap field
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (5 preceding siblings ...)
  2014-06-20 16:19 ` [PATCH v10 06/11] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
@ 2014-06-20 16:19 ` Dario Faggioli
  2014-06-27 10:57   ` Ian Campbell
  2014-06-20 16:19 ` [PATCH v10 08/11] xl: move away from the (deprecated) use of cpumap for hard affinity Dario Faggioli
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

as, thanks to the previous change ("libxl/xl: push VCPU affinity
pinning down to libxl"), we now have an array of libxl_bitmap-s
that can be used to pass to libxl the (hard) affinity of each
vcpu of the domain.

The cpumap field, used to provide libxl with just one cpumap
to be applied to all vcpus, is therefore no longer needed. In
fact, it duplicates information and can lead to confusion.

With this change, we make it even clearer that the preferred
way of specifying VCPU affinity is, from now on, the
vcpu_hard_affinity array. To do so, mark the cpumap field as
DEPRECATED. It is still there, and it is still honoured, for
backward compatibility reasons, but it should no longer be used.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v9:
 * get rid of the xl part, which now lives in its separate
   patch, as requested during review;
 * updated the comment in libxl_types.idl, mentioning what
   happens if both cpumap and vcpu_hard_affinity are used,
   as requested during review.

Changes from v8:
 * don't get rid of b_info->cpumap handling, so old apps
   continue to work, as requested during review;
 * changelog and code comments updated accordingly.
---
 tools/libxl/libxl_dom.c     |    9 ++++++++-
 tools/libxl/libxl_types.idl |    8 +++++++-
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 7a60ee2..81e77c0 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -283,9 +283,16 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
                 return ERROR_FAIL;
             }
         }
-    } else if (info->cpumap.size)
+    } else if (info->cpumap.size) {
+        /*
+         * Although info->cpumap is DEPRECATED, we still want old
+         * applications that may be using it to continue working.
+         */
+        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
+                  "Please, use the vcpu_hard_affinity array instead");
         libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                                    &info->cpumap, NULL);
+    }
 
 
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 5607ea7..477b2a6 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -301,7 +301,13 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
 libxl_domain_build_info = Struct("domain_build_info",[
     ("max_vcpus",       integer),
     ("avail_vcpus",     libxl_bitmap),
-    ("cpumap",          libxl_bitmap),
+    ("cpumap",          libxl_bitmap), # DEPRECATED!
+    # The cpumap field above has been deprecated by the introduction of the
+    # vcpu_hard_affinity array. It is not removed and it is still honoured, for
+    # API stability and backward compatibility reasons, but should not be used
+    # any longer. The vcpu_hard_affinity array is what should be used instead.
+    # As per libxl.h, if both cpumap and vcpu_hard_affinity are used, the latter
+    # will prevail.
     ("nodemap",         libxl_bitmap),
     ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
     ("numa_placement",  libxl_defbool),

* [PATCH v10 08/11] xl: move away from the (deprecated) use of cpumap for hard affinity
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (6 preceding siblings ...)
  2014-06-20 16:19 ` [PATCH v10 07/11] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
@ 2014-06-20 16:19 ` Dario Faggioli
  2014-06-27 11:00   ` Ian Campbell
  2014-06-20 16:20 ` [PATCH v10 09/11] xl: move the vcpu affinity parsing in a function Dario Faggioli
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

and start using the vcpu_hard_affinity array instead. This comes
with a few bonuses:

 - allows us to unify the parsing of the two ways VCPU affinity
   is specified in the domain config file (i.e., cpus="1,3,10-15"
   and cpus=[2, 4, 8]);

 - unifying the parsing makes it possible to do things like this:

      cpus = ["3-4", "2-6"]

   which was not possible before. What it means is that VCPU 0
   must be pinned to PCPUs 3,4 and VCPU 1 to PCPUs 2,3,4,5,6.
   Before this change, in fact, the list variant (cpus=[xx, yy])
   supported only single values. (Of course, the old [2, 3] syntax
   continues to work, although, without the '"' quotes, it is not
   possible to specify ranges.)

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v9:
 * new patch, basically containing the xl bits of what was the
   cpumap deprecation patch in v9.
---
 docs/man/xl.cfg.pod.5    |    8 ++++----
 tools/libxl/xl_cmdimpl.c |   47 ++++++++++++++++++++++++----------------------
 2 files changed, 29 insertions(+), 26 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index c087cbc..af48622 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
 results in all the vcpus of the guest running on all the cpus on the
 host, except for the cpus belonging to the host NUMA node 1.
 
-=item ["2", "3"] (or [2, 3])
+=item ["2", "3-8,^5"]
 
-To ask for specific vcpu mapping. That means (in this example), vcpu #0
-of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
-run on cpu #3 of the host.
+To ask for specific vcpu mapping. That means (in this example), vcpu 0
+of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
+run on cpus 3,4,6,7,8 of the host.
 
 =back
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index f2f5fb2..06478a8 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -656,14 +656,16 @@ static int update_cpumap_range(const char *str, libxl_bitmap *cpumap)
 static int vcpupin_parse(const char *cpu, libxl_bitmap *cpumap)
 {
     char *ptr, *saveptr = NULL;
+    char *buf = strdup(cpu);
     int rc = 0;
 
-    for (ptr = strtok_r(cpu, ",", &saveptr); ptr;
+    for (ptr = strtok_r(buf, ",", &saveptr); ptr;
          ptr = strtok_r(NULL, ",", &saveptr)) {
         rc = update_cpumap_range(ptr, cpumap);
         if (rc)
             break;
     }
+    free(buf);
 
     return rc;
 }
@@ -797,17 +799,31 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxvcpus", &l, 0))
         b_info->max_vcpus = l;
 
-    if (!xlu_cfg_get_list (config, "cpus", &cpus, &num_cpus, 1)) {
+    buf = NULL; num_cpus = 0;
+    if (!xlu_cfg_get_list (config, "cpus", &cpus, &num_cpus, 1) ||
+        !xlu_cfg_get_string (config, "cpus", &buf, 0)) {
+        const char *buf2 = NULL; //XXX Trick the compiler!!!
         int j = 0;
 
+        /*
+         * If we are here, and buf is !NULL, we're dealing with a string. What
+         * we do in this case is parse it, and put the result in _all_ (up to
+         * b_info->max_vcpus) the elements of the vcpu affinity array.
+         *
+         * If buf is NULL, we have a list, and what we do is putting in the
+         * i-eth element of the vcpu affinity array the result of the parsing
+         * of the i-eth entry of the list. If there are more vcpus than
+         * entries, it is fine to just not touch the last array elements.
+         */
+
         /* Silently ignore values corresponding to non existing vcpus */
-        if (num_cpus > b_info->max_vcpus)
+        if (num_cpus > b_info->max_vcpus || buf)
             num_cpus = b_info->max_vcpus;
 
         b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
 
-        while ((buf = xlu_cfg_get_listitem(cpus, j)) != NULL && j < num_cpus) {
-            i = atoi(buf);
+        while ((buf || (buf2 = xlu_cfg_get_listitem(cpus, j)) != NULL) &&
+               j < num_cpus) {
 
             libxl_bitmap_init(&b_info->vcpu_hard_affinity[j]);
             if (libxl_cpu_bitmap_alloc(ctx,
@@ -815,8 +831,10 @@ static void parse_config_data(const char *config_source,
                 fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", j);
                 exit(1);
             }
-            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[j]);
-            libxl_bitmap_set(&b_info->vcpu_hard_affinity[j], i);
+
+            if (vcpupin_parse(buf ? buf : buf2,
+                              &b_info->vcpu_hard_affinity[j]))
+                exit(1);
 
             j++;
         }
@@ -825,21 +843,6 @@ static void parse_config_data(const char *config_source,
         /* We have a list of cpumaps, disable automatic placement */
         libxl_defbool_set(&b_info->numa_placement, false);
     }
-    else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
-        char *buf2 = strdup(buf);
-
-        if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
-            fprintf(stderr, "Unable to allocate cpumap\n");
-            exit(1);
-        }
-
-        libxl_bitmap_set_none(&b_info->cpumap);
-        if (vcpupin_parse(buf2, &b_info->cpumap))
-            exit(1);
-        free(buf2);
-
-        libxl_defbool_set(&b_info->numa_placement, false);
-    }
 
     if (!xlu_cfg_get_long (config, "memory", &l, 0)) {
         b_info->max_memkb = l * 1024;

* [PATCH v10 09/11] xl: move the vcpu affinity parsing in a function
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (7 preceding siblings ...)
  2014-06-20 16:19 ` [PATCH v10 08/11] xl: move away from the (deprecated) use of cpumap for hard affinity Dario Faggioli
@ 2014-06-20 16:20 ` Dario Faggioli
  2014-06-20 16:20 ` [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file Dario Faggioli
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:20 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

so that the same code can also be used for parsing the
soft affinity, which is introduced in the next patch.

This is pure code motion, so no functional change intended,
and it is being done as a separate patch for the sake of
separating code motion from actual changes, i.e., for
facilitating reviews.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v9:
 * this is still only code motion but, since the code being
   moved has changed quite a bit, I am not retaining the
   Acked-by that IanC provided for it.
---
 tools/libxl/xl_cmdimpl.c |   89 ++++++++++++++++++++++++----------------------
 1 file changed, 47 insertions(+), 42 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 06478a8..8c94745 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -692,6 +692,51 @@ static void parse_top_level_sdl_options(XLU_Config *config,
     xlu_cfg_replace_string (config, "xauthority", &sdl->xauthority, 0);
 }
 
+static void parse_vcpu_affinity(XLU_Config *config, XLU_ConfigList *cpus,
+                                libxl_domain_build_info *b_info,
+                                const char *buf, int num_cpus)
+{
+    const char *buf2 = NULL; //XXX Trick the compiler!!!
+    int j = 0;
+
+    /*
+     * If we are here, and buf is !NULL, we're dealing with a string. What
+     * we do in this case is parse it, and put the result in _all_ (up to
+     * b_info->max_vcpus) the elements of the vcpu affinity array.
+     *
+     * If buf is NULL, we have a list, and what we do is putting in the
+     * i-eth element of the vcpu affinity array the result of the parsing
+     * of the i-eth entry of the list. If there are more vcpus than
+     * entries, it is fine to just not touch the last array elements.
+     */
+
+    /* Silently ignore values corresponding to non existing vcpus */
+    if (num_cpus > b_info->max_vcpus || buf)
+        num_cpus = b_info->max_vcpus;
+
+    b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
+
+    while ((buf || (buf2 = xlu_cfg_get_listitem(cpus, j)) != NULL) &&
+           j < num_cpus) {
+
+        libxl_bitmap_init(&b_info->vcpu_hard_affinity[j]);
+        if (libxl_cpu_bitmap_alloc(ctx,
+                                       &b_info->vcpu_hard_affinity[j], 0)) {
+                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", j);
+                exit(1);
+            }
+
+        if (vcpupin_parse(buf ? buf : buf2, &b_info->vcpu_hard_affinity[j]))
+            exit(1);
+
+        j++;
+    }
+    b_info->num_vcpu_hard_affinity = num_cpus;
+
+    /* We have a list of cpumaps, disable automatic placement */
+    libxl_defbool_set(&b_info->numa_placement, false);
+}
+
 static void parse_config_data(const char *config_source,
                               const char *config_data,
                               int config_len,
@@ -801,48 +846,8 @@ static void parse_config_data(const char *config_source,
 
     buf = NULL; num_cpus = 0;
     if (!xlu_cfg_get_list (config, "cpus", &cpus, &num_cpus, 1) ||
-        !xlu_cfg_get_string (config, "cpus", &buf, 0)) {
-        const char *buf2 = NULL; //XXX Trick the compiler!!!
-        int j = 0;
-
-        /*
-         * If we are here, and buf is !NULL, we're dealing with a string. What
-         * we do in this case is parse it, and put the result in _all_ (up to
-         * b_info->max_vcpus) the elements of the vcpu affinity array.
-         *
-         * If buf is NULL, we have a list, and what we do is putting in the
-         * i-eth element of the vcpu affinity array the result of the parsing
-         * of the i-eth entry of the list. If there are more vcpus than
-         * entries, it is fine to just not touch the last array elements.
-         */
-
-        /* Silently ignore values corresponding to non existing vcpus */
-        if (num_cpus > b_info->max_vcpus || buf)
-            num_cpus = b_info->max_vcpus;
-
-        b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
-
-        while ((buf || (buf2 = xlu_cfg_get_listitem(cpus, j)) != NULL) &&
-               j < num_cpus) {
-
-            libxl_bitmap_init(&b_info->vcpu_hard_affinity[j]);
-            if (libxl_cpu_bitmap_alloc(ctx,
-                                       &b_info->vcpu_hard_affinity[j], 0)) {
-                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", j);
-                exit(1);
-            }
-
-            if (vcpupin_parse(buf ? buf : buf2,
-                              &b_info->vcpu_hard_affinity[j]))
-                exit(1);
-
-            j++;
-        }
-        b_info->num_vcpu_hard_affinity = num_cpus;
-
-        /* We have a list of cpumaps, disable automatic placement */
-        libxl_defbool_set(&b_info->numa_placement, false);
-    }
+        !xlu_cfg_get_string (config, "cpus", &buf, 0))
+        parse_vcpu_affinity(config, cpus, b_info, buf, num_cpus);
 
     if (!xlu_cfg_get_long (config, "memory", &l, 0)) {
         b_info->max_memkb = l * 1024;

* [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (8 preceding siblings ...)
  2014-06-20 16:20 ` [PATCH v10 09/11] xl: move the vcpu affinity parsing in a function Dario Faggioli
@ 2014-06-20 16:20 ` Dario Faggioli
  2014-06-27 12:47   ` Ian Campbell
  2014-06-20 16:20 ` [PATCH v10 11/11] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
  2014-06-27 13:32 ` [PATCH v10 00/11] Series short description Ian Campbell
  11 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:20 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

To do so, we add the vcpu_soft_affinity array to build_info, and
treat it much like vcpu_hard_affinity. The new config option is
called "cpus_soft".

Note that the vcpu_hard_affinity array, introduced in a previous
patch, and the vcpu_soft_affinity array, introduced here, share
the same LIBXL_HAVE_xxx macro, in libxl.h. That is called
LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS, and was introduced
together with vcpu_hard_affinity, but only inside a comment.
In this change, we uncomment it, and hence properly define it.
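
As a usage sketch (hypothetical values, following the syntax this series
introduces), hard and soft affinity can then be combined in a domain
config file like:

```
vcpus     = 2
cpus      = ["3-4", "2-6"]   # hard affinity: vcpu 0 on 3,4; vcpu 1 on 2-6
cpus_soft = "0-3"            # soft affinity, applied to all vcpus
```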

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v9:
 * patch reworked again, due to changes in the preceding ones
   in the series. The structure is similar, it's still based
   on adding some indirection, so that the same code can be
    used to parse and enact both hard and soft affinity, but
   the code did change, I'm afraid.

Changes from v8:
 * fix a typo in the LIBXL_HAVE_xxx macro name.

Changes from v7:
 * WARNING: this patch underwent quite a fundamental rework,
   given it's now building on top of Wei's "push vcpu affinity
   to libxl" patch. That's why I think it should be re-reviewed
   almost from scratch (sorry! :-P), and that's why I did not
   add IanC's ack, although he provided it to the v7 version
   of it.

Changes from v6:
 * update and improve the changelog.

Changes from v4:
 * fix typos and rephrase docs, as suggested during review;
 * more refactoring, i.e., more addressing factor of potential
   common code, as requested during review.

Changes from v3:
 * fix typos and language issues in docs and comments, as
   suggested during review;
 * common code to soft and hard affinity parsing factored
   together, as requested during review.

Changes from v2:
 * use the new libxl API. Although the implementation changed
   only a little bit, I removed IanJ's Acked-by, although I am
   here saying that he did provide it, as requested.
---
 docs/man/xl.cfg.pod.5       |   23 ++++++++++++++++++++---
 tools/libxl/libxl.h         |    3 +--
 tools/libxl/libxl_dom.c     |   31 +++++++++++++++++++++++++------
 tools/libxl/libxl_types.idl |    1 +
 tools/libxl/xl_cmdimpl.c    |   39 ++++++++++++++++++++++++++++++---------
 5 files changed, 77 insertions(+), 20 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index af48622..25a4ff7 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -152,19 +152,36 @@ run on cpus 3,4,6,7,8 of the host.
 =back
 
 If this option is not specified, no vcpu to cpu pinning is established,
-and the vcpus of the guest can run on all the cpus of the host.
+and the vcpus of the guest can run on all the cpus of the host. If this
+option is specified, the intersection of the vcpu pinning mask, provided
+here, and the soft affinity mask, provided via B<cpus\_soft=> (if any),
+is utilized to compute the domain node-affinity, for driving memory
+allocations.
 
 If we are on a NUMA machine (i.e., if the host has more than one NUMA
 node) and this option is not specified, libxl automatically tries to
 place the guest on the least possible number of nodes. That, however,
 will not affect vcpu pinning, so the guest will still be able to run on
-all the cpus, it will just prefer the ones from the node it has been
-placed on. A heuristic approach is used for choosing the best node (or
+all the cpus. A heuristic approach is used for choosing the best node (or
 set of nodes), with the goals of maximizing performance for the guest
 and, at the same time, achieving efficient utilization of host cpus
 and memory. See F<docs/misc/xl-numa-placement.markdown> for more
 details.
 
+=item B<cpus_soft="CPU-LIST">
+
+Exactly as B<cpus=>, but specifies soft affinity, rather than pinning
+(hard affinity). When using the credit scheduler, this means what cpus
+the vcpus of the domain prefer.
+
+A C<CPU-LIST> is specified exactly as above, for B<cpus=>.
+
+If this option is not specified, the vcpus of the guest will not have
+any preference regarding on what cpu to run. If this option is specified,
+the intersection of the soft affinity mask, provided here, and the vcpu
+pinning, provided via B<cpus=> (if any), is utilized to compute the
+domain node-affinity, for driving memory allocations.
+
 =back
 
 =head3 CPU Scheduling
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index d63cd11..e0720f1 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -361,8 +361,7 @@ typedef struct libxl__ctx libxl_ctx;
  * Each bitmap should be big enough to accommodate the maximum number of
  * PCPUs of the host.
  */
-/* to be uncommented when soft array added */
-/* #define LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS 1 */
+#define LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS 1
 
 /*
  * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 81e77c0..076573a 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -272,18 +272,37 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     if (info->nodemap.size)
         libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
     /* As mentioned in libxl.h, vcpu_hard_array takes precedence */
-    if (info->num_vcpu_hard_affinity) {
-        int i;
+    if (info->num_vcpu_hard_affinity || info->num_vcpu_soft_affinity) {
+        libxl_bitmap *hard_affinity, *soft_affinity;
+        int i, n_vcpus;
+
+        n_vcpus = info->num_vcpu_hard_affinity > info->num_vcpu_soft_affinity ?
+            info->num_vcpu_hard_affinity : info->num_vcpu_soft_affinity;
+
+        for (i = 0; i < n_vcpus; i++) {
+            /*
+             * Prepare hard and soft affinity pointers in a way that allows
+             * us to issue only one call to libxl_set_vcpuaffinity(), setting,
+             * for each vcpu, both hard and soft affinity "atomically".
+             */
+            hard_affinity = NULL;
+            if (info->num_vcpu_hard_affinity &&
+                i < info->num_vcpu_hard_affinity)
+                hard_affinity = &info->vcpu_hard_affinity[i];
+
+            soft_affinity = NULL;
+            if (info->num_vcpu_soft_affinity &&
+                i < info->num_vcpu_soft_affinity)
+                soft_affinity = &info->vcpu_soft_affinity[i];
 
-        for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
             if (libxl_set_vcpuaffinity(ctx, domid, i,
-                                       &info->vcpu_hard_affinity[i],
-                                       NULL)) {
+                                       hard_affinity, soft_affinity)) {
                 LOG(ERROR, "setting affinity failed on vcpu `%d'", i);
                 return ERROR_FAIL;
             }
         }
-    } else if (info->cpumap.size) {
+    }
+    if (info->cpumap.size && !info->num_vcpu_hard_affinity) {
         /*
          * Although info->cpumap is DEPRECATED, we still want old
          * applications that may be using it to continue working.
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 477b2a6..18355d9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -310,6 +310,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # will prevail.
     ("nodemap",         libxl_bitmap),
     ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
+    ("vcpu_soft_affinity", Array(libxl_bitmap, "num_vcpu_soft_affinity")),
     ("numa_placement",  libxl_defbool),
     ("tsc_mode",        libxl_tsc_mode),
     ("max_memkb",       MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 8c94745..462da05 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -694,9 +694,10 @@ static void parse_top_level_sdl_options(XLU_Config *config,
 
 static void parse_vcpu_affinity(XLU_Config *config, XLU_ConfigList *cpus,
                                 libxl_domain_build_info *b_info,
-                                const char *buf, int num_cpus)
+                                const char *buf, int num_cpus, bool is_hard)
 {
-    const char *buf2 = NULL; //XXX Trick the compiler!!!
+    const char *buf2 = NULL; /* initialize to trick gcc >= 4.9.0. See below */
+    libxl_bitmap *vcpu_affinity_array;
     int j = 0;
 
     /*
@@ -714,24 +715,37 @@ static void parse_vcpu_affinity(XLU_Config *config, XLU_ConfigList *cpus,
     if (num_cpus > b_info->max_vcpus || buf)
         num_cpus = b_info->max_vcpus;
 
-    b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
+    if (is_hard) {
+        b_info->num_vcpu_hard_affinity = num_cpus;
+        b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
+        vcpu_affinity_array = b_info->vcpu_hard_affinity;
+    } else {
+        b_info->num_vcpu_soft_affinity = num_cpus;
+        b_info->vcpu_soft_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
+        vcpu_affinity_array = b_info->vcpu_soft_affinity;
+    }
 
     while ((buf || (buf2 = xlu_cfg_get_listitem(cpus, j)) != NULL) &&
            j < num_cpus) {
 
-        libxl_bitmap_init(&b_info->vcpu_hard_affinity[j]);
-        if (libxl_cpu_bitmap_alloc(ctx,
-                                       &b_info->vcpu_hard_affinity[j], 0)) {
+        libxl_bitmap_init(&vcpu_affinity_array[j]);
+        if (libxl_cpu_bitmap_alloc(ctx, &vcpu_affinity_array[j], 0)) {
                 fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", j);
                 exit(1);
             }
 
-        if (vcpupin_parse(buf ? buf : buf2, &b_info->vcpu_hard_affinity[j]))
+        /*
+         * If buf2 is not initialized above, gcc >= 4.9.0 complains with
+         * a '[-Werror=maybe-uninitialized]'. That can't happen, though, since
+         * we use buf2 only if buf is NULL, and if we are here and buf is NULL,
+         * it means buf2 is non-NULL, and contains the j-eth element of the
+         * list, as per the condition of the while().
+         * */
+        if (vcpupin_parse(buf ? buf : buf2, &vcpu_affinity_array[j]))
             exit(1);
 
         j++;
     }
-    b_info->num_vcpu_hard_affinity = num_cpus;
 
     /* We have a list of cpumaps, disable automatic placement */
     libxl_defbool_set(&b_info->numa_placement, false);
@@ -847,7 +861,14 @@ static void parse_config_data(const char *config_source,
     buf = NULL; num_cpus = 0;
     if (!xlu_cfg_get_list (config, "cpus", &cpus, &num_cpus, 1) ||
         !xlu_cfg_get_string (config, "cpus", &buf, 0))
-        parse_vcpu_affinity(config, cpus, b_info, buf, num_cpus);
+        parse_vcpu_affinity(config, cpus, b_info, buf,
+                            num_cpus, /* is_hard */ true);
+
+    buf = NULL; num_cpus = 0;
+    if (!xlu_cfg_get_list (config, "cpus_soft", &cpus, &num_cpus, 1) ||
+        !xlu_cfg_get_string (config, "cpus_soft", &buf, 0))
+        parse_vcpu_affinity(config, cpus, b_info, buf,
+                            num_cpus, /* is_hard */ false);
 
     if (!xlu_cfg_get_long (config, "memory", &l, 0)) {
         b_info->max_memkb = l * 1024;

* [PATCH v10 11/11] libxl: automatic NUMA placement affects soft affinity
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (9 preceding siblings ...)
  2014-06-20 16:20 ` [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file Dario Faggioli
@ 2014-06-20 16:20 ` Dario Faggioli
  2014-06-27 13:32 ` [PATCH v10 00/11] Series short description Ian Campbell
  11 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-20 16:20 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew.Cooper3, Wei Liu, Ian.Campbell, Ian.Jackson

vCPU soft affinity and NUMA-aware scheduling do not have
to be related. However, soft affinity is how NUMA-aware
scheduling is actually implemented, and therefore, by default,
the results of automatic NUMA placement (at VM creation time)
are also used to set the soft affinity of all the vCPUs of
the domain.

Of course, this only happens if automatic NUMA placement is
enabled and actually takes place (for instance, if the user
does not specify any hard or soft affinity in the xl config
file).

This also takes care of the vice versa, i.e., not triggering
automatic placement if the config file specifies either a
hard (the check for which was already there) or a soft (the
check for which is introduced by this commit) affinity.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes from v8:
 * fixed a trivial memory leak (of a libxl_bitmap) introduced
   in v8 itself.

Changes from v7:
 * the patch has been reworked, following the major changes
   in the preceding ones. However, such rework involves
   --I think-- only straightforward code transformation,
   so I'm actually retaining the Ack-s this time.

Changes from v3:
 * rephrase comments and docs, as suggestd during review.
---
 docs/man/xl.cfg.pod.5                |   21 ++++++++++----------
 docs/misc/xl-numa-placement.markdown |   14 +++++++++++--
 tools/libxl/libxl_dom.c              |   36 +++++++++++++++++++++++++++++++---
 3 files changed, 56 insertions(+), 15 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 25a4ff7..fe6e56d 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -158,16 +158,6 @@ here, and the soft affinity mask, provided via B<cpus\_soft=> (if any),
 is utilized to compute the domain node-affinity, for driving memory
 allocations.
 
-If we are on a NUMA machine (i.e., if the host has more than one NUMA
-node) and this option is not specified, libxl automatically tries to
-place the guest on the least possible number of nodes. That, however,
-will not affect vcpu pinning, so the guest will still be able to run on
-all the cpus. A heuristic approach is used for choosing the best node (or
-set of nodes), with the goals of maximizing performance for the guest
-and, at the same time, achieving efficient utilization of host cpus
-and memory. See F<docs/misc/xl-numa-placement.markdown> for more
-details.
-
 =item B<cpus_soft="CPU-LIST">
 
 Exactly as B<cpus=>, but specifies soft affinity, rather than pinning
@@ -182,6 +172,17 @@ the intersection of the soft affinity mask, provided here, and the vcpu
 pinning, provided via B<cpus=> (if any), is utilized to compute the
 domain node-affinity, for driving memory allocations.
 
+If this option is not specified (and B<cpus=> is not specified either),
+libxl automatically tries to place the guest on the least possible
+number of nodes. A heuristic approach is used for choosing the best
+node (or set of nodes), with the goal of maximizing performance for
+the guest and, at the same time, achieving efficient utilization of
+host cpus and memory. In that case, the soft affinity of all the vcpus
+of the domain will be set to the pcpus belonging to the NUMA nodes
+chosen during placement.
+
+For more details, see F<docs/misc/xl-numa-placement.markdown>.
+
 =back
 
 =head3 CPU Scheduling
diff --git a/docs/misc/xl-numa-placement.markdown b/docs/misc/xl-numa-placement.markdown
index 9d64eae..f863492 100644
--- a/docs/misc/xl-numa-placement.markdown
+++ b/docs/misc/xl-numa-placement.markdown
@@ -126,10 +126,20 @@ or Xen won't be able to guarantee the locality for their memory accesses.
 That, of course, also mean the vCPUs of the domain will only be able to
 execute on those same pCPUs.
 
+It is also possible to have a "cpus\_soft=" option in the xl config file,
+to specify the soft affinity for all the vCPUs of the domain. This affects
+the NUMA placement in the following way:
+
+ * if only "cpus\_soft=" is present, the VM's node-affinity will be equal
+   to the nodes to which the pCPUs in the soft affinity mask belong;
+ * if both "cpus\_soft=" and "cpus=" are present, the VM's node-affinity
+   will be equal to the nodes to which the pCPUs present in both hard and
+   soft affinity belong.
+
 ### Placing the guest automatically ###
 
-If no "cpus=" option is specified in the config file, libxl tries
-to figure out on its own on which node(s) the domain could fit best.
+If neither "cpus=" nor "cpus\_soft=" are present in the config file, libxl
+tries to figure out on its own on which node(s) the domain could fit best.
 If it finds one (some), the domain's node affinity gets set to there,
 and both memory allocations and NUMA aware scheduling (for the credit
 scheduler and starting from Xen 4.3) will comply with it. Starting from
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 076573a..7facc15 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -247,11 +247,20 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
      * updated accordingly; if it does not manage, inf->nodemap is just left
      * alone. It is then the the subsequent call to
      * libxl_domain_set_nodeaffinity() that enacts the actual placement.
+     *
+     * As far as scheduling is concerned, we achieve NUMA-aware scheduling
+     * by having the results of placement affect the soft affinity of all
+     * the vcpus of the domain. Of course, we want that iff placement is
+     * enabled and actually happens, so we only change info->cpumap_soft to
+     * reflect the placement result if that is the case.
      */
     if (libxl_defbool_val(info->numa_placement)) {
-        if (info->cpumap.size || info->num_vcpu_hard_affinity) {
+        libxl_bitmap cpumap_soft;
+
+        if (info->cpumap.size ||
+            info->num_vcpu_hard_affinity || info->num_vcpu_soft_affinity) {
             LOG(ERROR, "Can run NUMA placement only if no vcpu "
-                       "affinity is specified explicitly");
+                       "(hard or soft) affinity is specified explicitly");
             return ERROR_INVAL;
         }
         if (info->nodemap.size) {
@@ -265,9 +274,30 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
             return rc;
         libxl_bitmap_set_any(&info->nodemap);
 
-        rc = numa_place_domain(gc, domid, info);
+        rc = libxl_cpu_bitmap_alloc(ctx, &cpumap_soft, 0);
         if (rc)
             return rc;
+
+        rc = numa_place_domain(gc, domid, info);
+        if (rc) {
+            libxl_bitmap_dispose(&cpumap_soft);
+            return rc;
+        }
+
+        /*
+         * All we need to do now is convert the result of automatic
+         * placement from nodemap to cpumap, and then use such cpumap as
+         * the soft affinity for all the vcpus of the domain.
+         *
+         * When calling libxl_set_vcpuaffinity_all(), it is ok to use NULL
+         * as hard affinity, as we know we don't have one, or we wouldn't
+         * be here.
+         */
+        libxl_nodemap_to_cpumap(ctx, &info->nodemap, &cpumap_soft);
+        libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
+                                   NULL, &cpumap_soft);
+
+        libxl_bitmap_dispose(&cpumap_soft);
     }
     if (info->nodemap.size)
         libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
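[Editor's note: conceptually, the libxl_nodemap_to_cpumap() step in the hunk above expands a set of NUMA nodes into the set of pcpus they contain, and that cpumap then becomes the soft affinity of every vcpu. A standalone sketch, with bitmaps modeled as uint64_t and a cpu_to_node mapping invented for the example:]

```c
#include <stdint.h>

/*
 * Sketch of what the nodemap -> cpumap conversion above does
 * conceptually: every pcpu whose node is in the nodemap ends up in the
 * resulting cpumap. Types and the cpu_to_node[] mapping are invented
 * for the example; this is not the libxl implementation.
 */
static uint64_t nodemap_to_cpumap(uint64_t nodemap,
                                  const int *cpu_to_node, int ncpus)
{
    uint64_t cpumap = 0;
    int cpu;

    for (cpu = 0; cpu < ncpus; cpu++)
        if (nodemap & (UINT64_C(1) << cpu_to_node[cpu]))
            cpumap |= UINT64_C(1) << cpu;
    return cpumap;
}
```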

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated"
  2014-06-20 16:19 ` [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated" Dario Faggioli
@ 2014-06-27 10:44   ` Ian Campbell
  2014-06-27 12:07     ` Dario Faggioli
  0 siblings, 1 reply; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 10:44 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel

On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> by avoiding allocating them in libxl__domain_build_info_setdefault.
> In fact, back in 7e449837 ("libxl: provide _init and _setdefault for
> libxl_domain_build_info") and a5d30c23 ("libxl: allow for explicitly
> specifying node-affinity"), it was decided that the default for these
> fields was for them to be allocted and filled.

"allocated"

> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> +     * updated accordingly; if it does not manage, inf->nodemap is just left

info-> ?

> +     * alone. It is then the the subsequent call to
> +     * libxl_domain_set_nodeaffinity() that enacts the actual placement.
>       */
>      if (libxl_defbool_val(info->numa_placement)) {
> -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> +        if (info->cpumap.size) {

Not for now but a helper for this might be nice one day.

>              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> -                       "affinity is specified");
> +                       "affinity is specified explicitly");

OOI, this says "Can run", I think what it really means is "I will run"?
No change needed just curious.

I'll fix the two typos on commit, assuming nothing in the remainder of
the series requires a resend.

Ian.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 06/11] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-20 16:19 ` [PATCH v10 06/11] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
@ 2014-06-27 10:55   ` Ian Campbell
  0 siblings, 0 replies; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 10:55 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel

On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> This patch introduces an array of libxl_bitmap called "vcpu_hard_affinity"
> in libxl IDL to preserve VCPU to PCPU mapping. This is necessary for libxl
> to preserve all information to construct a domain.
> 
> The array accommodates at most max_vcpus elements, each containing the
> affinity of the respective VCPU. If less than max_vcpus bitmaps are
> present, the VCPUs associated to the missing elements will just stay with
> their default affinity (they'll be free to execute on every PCPU).
> 
> In case both this new field, and the already existing cpumap field are
> used, the content of the array will override what's set in cpumap. (In
> xl, we make sure that this never happens, by using only one of the
> two at any given time.)
> 
> The proper macro to mark the API change (called
> LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS) is added but it is commented.
> It will be uncommented by the patch in the series that completes the
> process, by adding the "vcpu_soft_affinity" array. This is because, after
> all, these two fields are being added sort-of together, and are very
> very similar, in both meaning and usage, so it makes sense for them to
> share the same marker.
> 
> This patch was originally part of Wei's series about pushing as much
> information as possible on domain configuration in libxl, rather than
> xl. See here, for more details:
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01026.html
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01031.html
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>
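[Editor's note: the per-vcpu array semantics described in the quoted commit message can be sketched as follows. The helper is hypothetical, not the libxl API; affinity bitmaps are modeled as uint64_t.]

```c
#include <stdint.h>

/*
 * Hypothetical sketch of the semantics above: the array holds at most
 * max_vcpus entries; vcpus with no corresponding entry keep the
 * default affinity, i.e. they are free to run on every pcpu.
 */
static void apply_hard_affinity(uint64_t *out, int max_vcpus,
                                const uint64_t *affinity, int n_entries)
{
    int v;

    for (v = 0; v < max_vcpus; v++)
        out[v] = (v < n_entries) ? affinity[v] : ~UINT64_C(0);
}
```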

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 07/11] libxl/xl: deprecate the build_info->cpumap field
  2014-06-20 16:19 ` [PATCH v10 07/11] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
@ 2014-06-27 10:57   ` Ian Campbell
  2014-06-27 12:08     ` Dario Faggioli
  0 siblings, 1 reply; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 10:57 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel

On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 7a60ee2..81e77c0 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -283,9 +283,16 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>                  return ERROR_FAIL;
>              }
>          }
> -    } else if (info->cpumap.size)
> +    } else if (info->cpumap.size) {
> +        /*
> +         * Although info->cpumap is DEPRECATED, we still want old
> +         * applications that may be using it to continue working.
> +         */
> +        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
> +                  "Please, use the vcpu_hard_affinity array instead");

I think this logging is going too far. It is perfectly valid to be using
this field if the application is interested in compatibility with Xen
4.4 and earlier.

>          libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
>                                     &info->cpumap, NULL);
> +    }
>  
> 
>      if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 5607ea7..477b2a6 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -301,7 +301,13 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>  libxl_domain_build_info = Struct("domain_build_info",[
>      ("max_vcpus",       integer),
>      ("avail_vcpus",     libxl_bitmap),
> -    ("cpumap",          libxl_bitmap),
> +    ("cpumap",          libxl_bitmap), # DEPRECATED!
> +    # The cpumap field above has been deprecated by the introduction of the
> +    # vcpu_hard_affinity array. It is not removed and it is still honoured, for
> +    # API stability and backward compatibility reasons, but should not be used
> +    # any longer. The vcpu_hard_affinity array is what should be used instead.
> +    # As per libxl.h, if both cpumap and vcpu_hard_affinity are used, the latter
> +    # will prevail.

Likewise I think here. Perhaps "DEPRECATED: Use vcpu_hard_affinity"
would be OK. But I'm more inclined to suggest we just drop this entire
patch. After all we can effectively never get rid of this field and
people are entitled to keep using it.
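[Editor's note: the precedence rule under discussion (the new array, when present, prevails over the legacy cpumap, which is still honoured for compatibility) can be sketched as below. Names and types are invented for the example; this is not the actual libxl code.]

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Hypothetical sketch of the compatibility rule discussed above: if
 * the new per-vcpu array is populated it prevails; otherwise the
 * legacy whole-domain cpumap, if set, is applied to every vcpu.
 */
static uint64_t effective_hard_affinity(int vcpu,
                                        const uint64_t *vcpu_affinity,
                                        int n_entries,
                                        const uint64_t *legacy_cpumap)
{
    if (n_entries > 0)                 /* new field prevails */
        return vcpu < n_entries ? vcpu_affinity[vcpu] : ~UINT64_C(0);
    if (legacy_cpumap)                 /* deprecated, still honoured */
        return *legacy_cpumap;
    return ~UINT64_C(0);               /* default: all pcpus */
}
```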

Ian.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 08/11] xl: move away from the (deprecated) use of cpumap for hard affinity
  2014-06-20 16:19 ` [PATCH v10 08/11] xl: move away from the (deprecated) use of cpumap for hard affinity Dario Faggioli
@ 2014-06-27 11:00   ` Ian Campbell
  2014-06-27 12:24     ` Dario Faggioli
  0 siblings, 1 reply; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 11:00 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel

On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> +        const char *buf2 = NULL; //XXX Trick the compiler!!!

This comment needs to be removed one way or another...

I'm unsure about the "= NULL" since I'm not sure what it is trying to
achieve.

Ian.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated"
  2014-06-27 10:44   ` Ian Campbell
@ 2014-06-27 12:07     ` Dario Faggioli
  2014-06-27 12:44       ` Ian Campbell
  0 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-27 12:07 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson


[-- Attachment #1.1: Type: text/plain, Size: 2429 bytes --]

On ven, 2014-06-27 at 11:44 +0100, Ian Campbell wrote:
> On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> > by avoiding allocating them in libxl__domain_build_info_setdefault.
> > In fact, back in 7e449837 ("libxl: provide _init and _setdefault for
> > libxl_domain_build_info") and a5d30c23 ("libxl: allow for explicitly
> > specifying node-affinity"), it was decided that the default for these
> > fields was for them to be allocted and filled.
> 
> "allocated"
> 
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> > +     * updated accordingly; if it does not manage, inf->nodemap is just left
> 
> info-> ?
> 
Indeed.

And I'll fix these and resend, since it looks like I need to :-)

> > +     * alone. It is then the the subsequent call to
> > +     * libxl_domain_set_nodeaffinity() that enacts the actual placement.
> >       */
> >      if (libxl_defbool_val(info->numa_placement)) {
> > -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> > +        if (info->cpumap.size) {
> 
> Not for now but a helper for this might be nice one day.
> 
Noted.

> >              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> > -                       "affinity is specified");
> > +                       "affinity is specified explicitly");
> 
> OOI, this says "Can run", I think what it really means is "I will run"?
> No change needed just curious.
> 
"Can run" is a bad choice of words. The idea is that, if the user
specified any of those fields, NUMA node affinity will be based on that,
so we don't want automatic placement to take place. So, yes, something
like "NUMA placement won't run" or similar would have been more
appropriate.

Anyway, I've got an unrelated patch touching this code, which I'll
submit as soon as this one will be in. If you're fine, I'll leave it
alone in the repost of this series, and amend it there.

> I'll fix the two typos on commit, assuming nothing in the remainder of
> the series requires a resend.
> 
As said, I'm resending, so I'll also take care of the typos.

Thanks,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 07/11] libxl/xl: deprecate the build_info->cpumap field
  2014-06-27 10:57   ` Ian Campbell
@ 2014-06-27 12:08     ` Dario Faggioli
  0 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-27 12:08 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson



On ven, 2014-06-27 at 11:57 +0100, Ian Campbell wrote:

> > --- a/tools/libxl/libxl_types.idl
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -301,7 +301,13 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
> >  libxl_domain_build_info = Struct("domain_build_info",[
> >      ("max_vcpus",       integer),
> >      ("avail_vcpus",     libxl_bitmap),
> > -    ("cpumap",          libxl_bitmap),
> > +    ("cpumap",          libxl_bitmap), # DEPRECATED!
> > +    # The cpumap field above has been deprecated by the introduction of the
> > +    # vcpu_hard_affinity array. It is not removed and it is still honoured, for
> > +    # API stability and backward compatibility reasons, but should not be used
> > +    # any longer. The vcpu_hard_affinity array is what should be used instead.
> > +    # As per libxl.h, if both cpumap and vcpu_hard_affinity are used, the latter
> > +    # will prevail.
> 
> Likewise I think here. Perhaps "DEPRECATED: Use vcpu_hard_affinity"
> would be OK. But I'm more inclined to suggest we just drop this entire
> patch. After all we can effectively never get rid of this field and
> people are entitled to keep using it.
> 
Understood. And agreed. I'll get rid of this patch for v11, and change
the commit messages of the other patches in the series that refer to
this deprecation.

Thanks,
Dario

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 08/11] xl: move away from the (deprecated) use of cpumap for hard affinity
  2014-06-27 11:00   ` Ian Campbell
@ 2014-06-27 12:24     ` Dario Faggioli
  0 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-27 12:24 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson



On ven, 2014-06-27 at 12:00 +0100, Ian Campbell wrote:
> On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> > +        const char *buf2 = NULL; //XXX Trick the compiler!!!
> 
> This comment needs to be removed one way or another...
> 
Wow, sorry. I could have sworn I removed it before posting! :-(

> I'm unsure about the "= NULL" since I'm not sure what it is trying to
> achieve.
> 
Ok, Now I've seen what happened. I removed the comment from the wrong
patch, and then did not realize the fact that it made it into the series
in two other patches. Will fix this.

Anyway, the explanation of why I think I need the =NULL is in patch 10,
when the comment is actually replaced with something more sensible.

It would be wonderful if you could at least have a look at that hunk,
and tell me whether you think the reasoning behind suppressing the
warning is sound, and the explanation clear enough.

Thanks in advance,
Dario

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 04/11] xl: enable getting and setting soft affinity
  2014-06-20 16:19 ` [PATCH v10 04/11] xl: enable getting and setting " Dario Faggioli
@ 2014-06-27 12:33   ` Ian Campbell
  2014-06-27 13:18     ` Dario Faggioli
  0 siblings, 1 reply; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 12:33 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel

On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> Getting happens via `xl vcpu-list', which now looks like this:

With this I'm seeing:
        
        xl_cmdimpl.c: In function ‘vcpupin_parse’:
        xl_cmdimpl.c:661:5: error: passing argument 1 of ‘__strtok_r_1c’ discards ‘const’ qualifier from pointer target type [-Werror]
        In file included from /usr/include/string.h:637:0,
                         from xl_cmdimpl.c:21:
        
Ian.



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated"
  2014-06-27 12:07     ` Dario Faggioli
@ 2014-06-27 12:44       ` Ian Campbell
  0 siblings, 0 replies; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 12:44 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson

On Fri, 2014-06-27 at 14:07 +0200, Dario Faggioli wrote:
> On ven, 2014-06-27 at 11:44 +0100, Ian Campbell wrote:
> > On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> > > by avoiding allocating them in libxl__domain_build_info_setdefault.
> > > In fact, back in 7e449837 ("libxl: provide _init and _setdefault for
> > > libxl_domain_build_info") and a5d30c23 ("libxl: allow for explicitly
> > > specifying node-affinity"), it was decided that the default for these
> > > fields was for them to be allocted and filled.
> > 
> > "allocated"
> > 
> > > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > > +     * updated accordingly; if it does not manage, inf->nodemap is just left
> > 
> > info-> ?
> > 
> Indeed.
> 
> And I'll fix these and resend, since it looks like I need to :-)

I'm currently running my commit tests on a branch which includes:

194e718 libxl: Change default for b_info->{cpu, node}map to "not alloc
871b43a libxl: get and set soft affinity
6cc89d3 libxc: get and set soft and hard affinity
d172e6b libxc/libxl: bump library SONAMEs

So no need to resend this one. I'll send the usual "applied" mail once
those tests pass.

> > >              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> > > -                       "affinity is specified");
> > > +                       "affinity is specified explicitly");
> > 
> > OOI, this says "Can run", I think what it really means is "I will run"?
> > No change needed just curious.
> > 
> "Can run" is a bad choice of words. The idea is that, if the user
> specified any of those fields, NUMA node affinity will be based on that,
> so we don't want automatic placement to take place. So, yes, something
> like "NUMA placement won't run" or similar would have been more
> appropriate.
> 
> Anyway, I've got an unrelated patch touching this code, which I'll
> submit as soon as this one will be in. If you're fine, I'll leave it
> alone in the repost of this series, and amend it there.

Sure that's fine.

Ian.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file
  2014-06-20 16:20 ` [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file Dario Faggioli
@ 2014-06-27 12:47   ` Ian Campbell
  2014-06-27 13:07     ` Dario Faggioli
  2014-06-27 16:41     ` Dario Faggioli
  0 siblings, 2 replies; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 12:47 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel

On Fri, 2014-06-20 at 18:20 +0200, Dario Faggioli wrote:
> +        /*
> +         * If buf2 is not initialized above, gcc >= 4.9.0 complains with
> +         * a '[-Werror=maybe-uninitialized]'. That can't happen, though, since
> +         * we use buf2 only if buf is NULL, and if we are here and buf is NULL,
> +         * it means buf2 is non-NULL, and contains the j-eth element of the
> +         * list, as per the condition of the while().

You state that buf2 is non-NULL only if buf is NULL (buf2 != NULL => buf
== NULL). But the logic here is based on the converse (buf == NULL =>
buf2 != NULL).

I haven't read the code carefully to decide if you actually meant "we
use buf2 only iff buf is NULL".

A comment about a workaround such as this belongs next to the
workaround, i.e. next to the spurious looking = NULL. No need to then
back reference it.
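[Editor's note: the workaround being discussed follows a common pattern: a variable is assigned on one branch and read only on paths where that branch is known to have run, which gcc >= 4.9 cannot always prove, triggering -Wmaybe-uninitialized. A minimal standalone illustration; names are invented and unrelated to the xl code.]

```c
#include <stddef.h>

/*
 * 'b' is written only when use_b is true, and read only when use_b is
 * true, so it is never read uninitialized; still, some gcc versions
 * cannot prove this and warn under -Wmaybe-uninitialized. The explicit
 * "= NULL" silences the warning without changing behaviour.
 */
static const char *pick(const char *a, int use_b)
{
    const char *b = NULL;   /* workaround for the false positive */

    if (use_b)
        b = "from-b";

    return use_b ? b : a;
}
```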

Ian.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file
  2014-06-27 12:47   ` Ian Campbell
@ 2014-06-27 13:07     ` Dario Faggioli
  2014-06-27 16:41     ` Dario Faggioli
  1 sibling, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-27 13:07 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson



On ven, 2014-06-27 at 13:47 +0100, Ian Campbell wrote:
> On Fri, 2014-06-20 at 18:20 +0200, Dario Faggioli wrote:
> > +        /*
> > +         * If buf2 is not initialized above, gcc >= 4.9.0 complains with
> > +         * a '[-Werror=maybe-uninitialized]'. That can't happen, though, since
> > +         * we use buf2 only if buf is NULL, and if we are here and buf is NULL,
> > +         * it means buf2 is non-NULL, and contains the j-eth element of the
> > +         * list, as per the condition of the while().
> 
> You state that buf2 is non NULL only if buf is NULL (buf2 != NULL => buf
> == NULL). But the logic here is based on the inverse (buf == NULL =>
> buf2 != NULL).
> 
> I haven't read the code carefully to decide if you actually meant "we
> use buf2 only iff buf is NULL".
> 
Ok, I'll improve the comment for v11...

> A comment about a workaround such as this belongs next to the
> workaround, i.e. next to the spurious looking = NULL. No need to then
> back reference it.
> 
... and I will put it close to there.

Thanks,
Dario


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 04/11] xl: enable getting and setting soft affinity
  2014-06-27 12:33   ` Ian Campbell
@ 2014-06-27 13:18     ` Dario Faggioli
  2014-06-27 13:22       ` Ian Campbell
  0 siblings, 1 reply; 28+ messages in thread
From: Dario Faggioli @ 2014-06-27 13:18 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson



On ven, 2014-06-27 at 13:33 +0100, Ian Campbell wrote:
> On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> > Getting happens via `xl vcpu-list', which now looks like this:
> 
> With this I'm seeing:
>         
>         xl_cmdimpl.c: In function ‘vcpupin_parse’:
>         xl_cmdimpl.c:661:5: error: passing argument 1 of ‘__strtok_r_1c’ discards ‘const’ qualifier from pointer target type [-Werror]
>         In file included from /usr/include/string.h:637:0,
>                          from xl_cmdimpl.c:21:
>
Do you? You mean with patches 1-to-4 (this one) applied? That's weird,
as I just performed a clean build of that scenario to verify this, and
it ended without errors for me.

It's even more strange as the patch contains the hunk below, which is
specifically meant to avoid what you report... :-O

--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -656,7 +656,7 @@ static int update_cpumap_range(const char *str,
libxl_bitmap *cpumap)
  * single cpus or as entire NUMA nodes) and turns it into the
  * corresponding libxl_bitmap (in cpumap).
  */
-static int vcpupin_parse(char *cpu, libxl_bitmap *cpumap)
+static int vcpupin_parse(const char *cpu, libxl_bitmap *cpumap)
 {
     char *ptr, *saveptr = NULL;
     int rc = 0;

Anyway, to be sure, I just compile tested the full series, patch after
patch, and that also went well.

I'm not sure what's going on!

Dario


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 04/11] xl: enable getting and setting soft affinity
  2014-06-27 13:18     ` Dario Faggioli
@ 2014-06-27 13:22       ` Ian Campbell
  2014-06-27 13:45         ` Dario Faggioli
  0 siblings, 1 reply; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 13:22 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson

On Fri, 2014-06-27 at 15:18 +0200, Dario Faggioli wrote:
> On ven, 2014-06-27 at 13:33 +0100, Ian Campbell wrote:
> > On Fri, 2014-06-20 at 18:19 +0200, Dario Faggioli wrote:
> > > Getting happens via `xl vcpu-list', which now looks like this:
> > 
> > With this I'm seeing:
> >         
> >         xl_cmdimpl.c: In function ‘vcpupin_parse’:
> >         xl_cmdimpl.c:661:5: error: passing argument 1 of ‘__strtok_r_1c’ discards ‘const’ qualifier from pointer target type [-Werror]
> >         In file included from /usr/include/string.h:637:0,
> >                          from xl_cmdimpl.c:21:
> >
> Do you? You mean with patches 1-to-4 (this one) applied?

Correct.

>  That's weird,
> as I just performed a clean build of that scenario to verify this, and
> it ended without errors for me.
> 
> It's even more strange as the patch contains the hunk below, which is
> specifically meant to avoid what you report... :-O

But strtok_r takes a non-const char *, so when you pass the now const
cpu to it you get exactly this error, don't you?

> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -656,7 +656,7 @@ static int update_cpumap_range(const char *str,
> libxl_bitmap *cpumap)
> >   * single cpus or as entire NUMA nodes) and turns it into the
>   * corresponding libxl_bitmap (in cpumap).
>   */
> -static int vcpupin_parse(char *cpu, libxl_bitmap *cpumap)
> +static int vcpupin_parse(const char *cpu, libxl_bitmap *cpumap)
>  {
>      char *ptr, *saveptr = NULL;
>      int rc = 0;
> 
> Anyway, to be sure, I just compile tested the full series, patch after
> patch, and that also went well.
> 
> I'm not sure what's going on!

My compiler is stricter than yours?

I'm building on Debian Wheezy, specifically on the machine "cosworth"
here in Cambridge. You should have access...

Ian.



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v10 00/11] Series short description
  2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
                   ` (10 preceding siblings ...)
  2014-06-20 16:20 ` [PATCH v10 11/11] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
@ 2014-06-27 13:32 ` Ian Campbell
  11 siblings, 0 replies; 28+ messages in thread
From: Ian Campbell @ 2014-06-27 13:32 UTC (permalink / raw)
  To: Dario Faggioli; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel

On Fri, 2014-06-20 at 18:18 +0200, Dario Faggioli wrote:
>       libxc/libxl: bump library SONAMEs
>       libxc: get and set soft and hard affinity
>       libxl: get and set soft affinity
>       libxl: Change default for b_info->{cpu,node}map to "not allocated"

Applied these ones.

Tried and failed to apply (YHM, didn't compile):
>       xl: enable getting and setting soft affinity

Didn't apply these yet (comments):
>       libxl/xl: deprecate the build_info->cpumap field
>       xl: move away from the (deprecated) use of cpumap for hard affinity
>       xl: move the vcpu affinity parsing in a function
>       libxl/xl: make it possible to specify soft-affinity in domain config file
>       libxl: automatic NUMA placement affects soft affinity
> 
> Wei Liu (1):

Applied this:
>       libxl/xl: push VCPU affinity pinning down to libxl

Ian.


* Re: [PATCH v10 04/11] xl: enable getting and setting soft affinity
  2014-06-27 13:22       ` Ian Campbell
@ 2014-06-27 13:45         ` Dario Faggioli
  0 siblings, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-27 13:45 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Andrew.Cooper3, Ian.Jackson, Wei Liu, xen-devel



On ven, 2014-06-27 at 14:22 +0100, Ian Campbell wrote:
> On Fri, 2014-06-27 at 15:18 +0200, Dario Faggioli wrote:
> 
> >  That's weird,
> > as I just performed a clean build of that scenario to verify this, and
> > it ended without errors for me.
> > 
> > It's even more strange as the patch contains the hunk below, which is
> > specifically meant to avoid what you report... :-O
> 
> But strtok_r takes a non-const char *, so when you pass the now const
> cpu to it you get exactly this error, don't you?
> 
Right.

> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -656,7 +656,7 @@ static int update_cpumap_range(const char *str,
> > libxl_bitmap *cpumap)
> >   * single cpus or as entire NUMA nodes) and turns it into the
> >   * corresponding libxl_bitmap (in cpumap).
> >   */
> > -static int vcpupin_parse(char *cpu, libxl_bitmap *cpumap)
> > +static int vcpupin_parse(const char *cpu, libxl_bitmap *cpumap)
> >  {
> >      char *ptr, *saveptr = NULL;
> >      int rc = 0;
> > 
Yeah, I was looking at this hunk 'backward', mind-tricked by the fact
that it compiles cleanly for me here. :-/

> > Anyway, to be sure, I just compile tested the full series, patch after
> > patch, and that also went well.
> > 
> > I'm not sure what's going on!
> 
> My compiler is stricter than yours?
> 
> I'm building on Debian Wheezy, specifically on the machine "cosworth"
> here in Cambridge. You should have access...
> 
Well, it does look stricter, which I find weird for issues like
this one. I'm building on Debian Sid, with gcc 4.9, which has elsewhere
proved to be stricter (see the =NULL in this series, as well as the
blktap2 issue) rather than looser!

Anyway, I'll change this and try with Wheezy's compiler before
resubmitting.

Thanks,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




* Re: [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file
  2014-06-27 12:47   ` Ian Campbell
  2014-06-27 13:07     ` Dario Faggioli
@ 2014-06-27 16:41     ` Dario Faggioli
  1 sibling, 0 replies; 28+ messages in thread
From: Dario Faggioli @ 2014-06-27 16:41 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Andrew.Cooper3, xen-devel, Wei Liu, Ian.Jackson



On ven, 2014-06-27 at 13:47 +0100, Ian Campbell wrote:
> On Fri, 2014-06-20 at 18:20 +0200, Dario Faggioli wrote:
> > +        /*
> > +         * If buf2 is not initialized above, gcc >= 4.9.0 complains with
> > +         * a '[-Werror=maybe-uninitialized]'. That can't happen, though, since
> > +         * we use buf2 only if buf is NULL, and if we are here and buf is NULL,
> > +         * it means buf2 is non-NULL, and contains the j-th element of the
> > +         * list, as per the condition of the while().
> 
> You state that buf2 is non NULL only if buf is NULL (buf2 != NULL => buf
> == NULL). But the logic here is based on the inverse (buf == NULL =>
> buf2 != NULL).
> 
> I haven't read the code carefully to decide if you actually meant "we
> use buf2 only iff buf is NULL".
> 
BTW, in v11, I changed the code just enough that it no longer needs a
second buffer, nor this kind of spurious initialization.

I am convinced it was logically ok, but did not like it much myself.
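[Editor's note: a minimal sketch of the pattern being discussed, with
hypothetical names rather than the actual xl code. The invariant is
that buf2 is only read when buf stayed NULL, and the loop guarantees
buf2 was assigned in that case (for a non-empty list), but gcc >= 4.9
cannot prove it and emits -Wmaybe-uninitialized unless buf2 carries an
initializer:]

```c
#include <stddef.h>
#include <string.h>

/* Return the matching element if found, otherwise the last element
 * scanned. Without "= NULL" on buf2, gcc 4.9 warns that buf2 may be
 * used uninitialized, even though the final read is only reached when
 * the loop body ran and assigned it (assuming list is non-empty). */
static const char *last_or_match(const char *list[], const char *want)
{
    const char *buf = NULL;
    const char *buf2 = NULL;   /* the "spurious" initialization */
    int j = 0;

    while (list[j]) {
        if (strcmp(list[j], want) == 0) {
            buf = list[j];     /* found a match, stop scanning */
            break;
        }
        buf2 = list[j];        /* remember the j-th element */
        j++;
    }
    /* if buf is NULL here, the loop assigned buf2 on every iteration */
    return buf ? buf : buf2;
}
```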

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




end of thread, other threads:[~2014-06-27 16:41 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-20 16:18 [PATCH v10 00/11] Series short description Dario Faggioli
2014-06-20 16:18 ` [PATCH v10 01/11] libxc/libxl: bump library SONAMEs Dario Faggioli
2014-06-20 16:19 ` [PATCH v10 02/11] libxc: get and set soft and hard affinity Dario Faggioli
2014-06-20 16:19 ` [PATCH v10 03/11] libxl: get and set soft affinity Dario Faggioli
2014-06-20 16:19 ` [PATCH v10 04/11] xl: enable getting and setting " Dario Faggioli
2014-06-27 12:33   ` Ian Campbell
2014-06-27 13:18     ` Dario Faggioli
2014-06-27 13:22       ` Ian Campbell
2014-06-27 13:45         ` Dario Faggioli
2014-06-20 16:19 ` [PATCH v10 05/11] libxl: Change default for b_info->{cpu, node}map to "not allocated" Dario Faggioli
2014-06-27 10:44   ` Ian Campbell
2014-06-27 12:07     ` Dario Faggioli
2014-06-27 12:44       ` Ian Campbell
2014-06-20 16:19 ` [PATCH v10 06/11] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
2014-06-27 10:55   ` Ian Campbell
2014-06-20 16:19 ` [PATCH v10 07/11] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
2014-06-27 10:57   ` Ian Campbell
2014-06-27 12:08     ` Dario Faggioli
2014-06-20 16:19 ` [PATCH v10 08/11] xl: move away from the (deprecated) use of cpumap for hard affinity Dario Faggioli
2014-06-27 11:00   ` Ian Campbell
2014-06-27 12:24     ` Dario Faggioli
2014-06-20 16:20 ` [PATCH v10 09/11] xl: move the vcpu affinity parsing in a function Dario Faggioli
2014-06-20 16:20 ` [PATCH v10 10/11] libxl/xl: make it possible to specify soft-affinity in domain config file Dario Faggioli
2014-06-27 12:47   ` Ian Campbell
2014-06-27 13:07     ` Dario Faggioli
2014-06-27 16:41     ` Dario Faggioli
2014-06-20 16:20 ` [PATCH v10 11/11] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
2014-06-27 13:32 ` [PATCH v10 00/11] Series short description Ian Campbell
