* [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side)
@ 2014-06-18 14:27 Dario Faggioli
  2014-06-18 14:27 ` [PATCH v9 1/9] libxc/libxl: bump library SONAMEs Dario Faggioli
                   ` (8 more replies)
  0 siblings, 9 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

Hi!

Here comes v9, which includes only the tools side of the series, as Jan
already applied the hypervisor part during v8 (thanks!).

The only change from v8 is that, while deprecating the cpumap field of
libxl_build_info, I am still honouring its content in libxl, to maintain
compatibility with old applications, as pointed out by Ian and Wei.

Patches still in need of review and acks are the ones marked with '*' in
the summary below.

This is available in the form of a git tree here:
 git://xenbits.xen.org/people/dariof/xen.git numa/per-vcpu-affinity-v9

Regards,
Dario
---
Dario Faggioli (8):
      libxc/libxl: bump library SONAMEs
      libxc: get and set soft and hard affinity
      libxl: get and set soft affinity
   *  xl: enable getting and setting soft affinity
   *  libxl/xl: deprecate the build_info->cpumap field
   *  xl: move the vcpu affinity parsing in a function
   *  libxl/xl: enable for specifying soft-affinity in the config file
      libxl: automatic NUMA placement affects soft affinity

Wei Liu (1):
   *  libxl/xl: push VCPU affinity pinning down to libxl


 docs/man/xl.cfg.pod.5                |   50 +++++--
 docs/man/xl.pod.1                    |   32 +++-
 docs/misc/xl-numa-placement.markdown |   14 ++
 tools/libxc/Makefile                 |    2 
 tools/libxc/xc_domain.c              |   68 ++++++---
 tools/libxc/xenctrl.h                |   55 +++++++-
 tools/libxl/Makefile                 |    2 
 tools/libxl/libxl.c                  |   97 ++++++++++++-
 tools/libxl/libxl.h                  |   55 +++++++-
 tools/libxl/libxl_dom.c              |   63 ++++++++-
 tools/libxl/libxl_types.idl          |   12 +-
 tools/libxl/libxl_utils.h            |   25 +++
 tools/libxl/xl_cmdimpl.c             |  246 ++++++++++++++++++----------------
 tools/libxl/xl_cmdtable.c            |    2 
 tools/ocaml/libs/xc/xenctrl_stubs.c  |    8 +
 tools/python/xen/lowlevel/xc/xc.c    |    6 +
 16 files changed, 541 insertions(+), 196 deletions(-)

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v9 1/9] libxc/libxl: bump library SONAMEs
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
@ 2014-06-18 14:27 ` Dario Faggioli
  2014-06-18 14:27 ` [PATCH v9 2/9] libxc: get and set soft and hard affinity Dario Faggioli
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

The following two patches break both the libxc and libxl ABI and
API, so we had better bump the MAJORs.

Of course, for libxl, proper measures are taken (in the
relevant patch) in order to guarantee API stability.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxc/Makefile |    2 +-
 tools/libxl/Makefile |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 215101d..a688e10 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -1,7 +1,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR    = 4.4
+MAJOR    = 4.5
 MINOR    = 0
 
 CTRL_SRCS-y       :=
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 4cfa275..1bf9358 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -5,7 +5,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR = 4.4
+MAJOR = 4.5
 MINOR = 0
 
 XLUMAJOR = 4.3


* [PATCH v9 2/9] libxc: get and set soft and hard affinity
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
  2014-06-18 14:27 ` [PATCH v9 1/9] libxc/libxl: bump library SONAMEs Dario Faggioli
@ 2014-06-18 14:27 ` Dario Faggioli
  2014-06-18 14:28 ` [PATCH v9 3/9] libxl: get and set soft affinity Dario Faggioli
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

by using the flag and the new cpumap arguments introduced in
the parameters of the DOMCTL_{get,set}_vcpuaffinity hypercalls.

Now, both xc_vcpu_setaffinity() and xc_vcpu_getaffinity() have
a new flag parameter, to specify whether the user wants to
set/get hard affinity, soft affinity or both. They also have
two cpumap parameters instead of only one. This way, it is
possible to set/get both hard and soft affinity at the same
time (and, in case of set, each one to its own value).

In xc_vcpu_setaffinity(), the cpumaps are IN/OUT parameters,
as is the case for the corresponding arguments of the
DOMCTL_set_vcpuaffinity hypercall. What Xen puts there is the
effective hard and soft affinity, that is, what Xen will
actually use for scheduling.

In-tree callers are also fixed to cope with the new interface.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes from v4:
 * update toward the new hypercall interface;
 * migrate to hypercall BOUNCEs instead of BUFFERs, as
   suggested during (v3) review;

Changes from v2:
 * better cleanup logic in _vcpu_setaffinity() (regarding
   xc_hypercall_buffer_{alloc,free}()), as suggested during
   review;
 * make it more evident that DOMCTL_setvcpuaffinity has an out
   parameter, by calling it ecpumap_out, and improving the
   comment about that;
 * change the interface and have xc_vcpu_[sg]etaffinity() so
   that they take the new parameters (flags and ecpumap_out) and
   fix the in tree callers.
---
 tools/libxc/xc_domain.c             |   68 +++++++++++++++++++++++------------
 tools/libxc/xenctrl.h               |   55 +++++++++++++++++++++++++++-
 tools/libxl/libxl.c                 |    6 ++-
 tools/ocaml/libs/xc/xenctrl_stubs.c |    8 +++-
 tools/python/xen/lowlevel/xc/xc.c   |    6 ++-
 5 files changed, 111 insertions(+), 32 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 26edbaf..b438cf6 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -202,10 +202,15 @@ int xc_domain_node_getaffinity(xc_interface *xch,
 int xc_vcpu_setaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap)
+                        xc_cpumap_t cpumap_hard_inout,
+                        xc_cpumap_t cpumap_soft_inout,
+                        uint32_t flags)
 {
     DECLARE_DOMCTL;
-    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_hard_inout, 0,
+                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_soft_inout, 0,
+                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int ret = -1;
     int cpusize;
 
@@ -213,32 +218,37 @@ int xc_vcpu_setaffinity(xc_interface *xch,
     if (cpusize <= 0)
     {
         PERROR("Could not get number of cpus");
-        goto out;
+        return -1;
     }
 
-    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
-    if ( local == NULL )
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_hard_inout, cpusize);
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_soft_inout, cpusize);
+
+    if ( xc_hypercall_bounce_pre(xch, cpumap_hard_inout) ||
+         xc_hypercall_bounce_pre(xch, cpumap_soft_inout) )
     {
-        PERROR("Could not allocate memory for setvcpuaffinity domctl hypercall");
+        PERROR("Could not allocate hcall buffers for DOMCTL_setvcpuaffinity");
         goto out;
     }
 
     domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
     domctl.domain = (domid_t)domid;
     domctl.u.vcpuaffinity.vcpu = vcpu;
-    domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
-
-    memcpy(local, cpumap, cpusize);
-
-    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap, local);
+    domctl.u.vcpuaffinity.flags = flags;
 
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap,
+                         cpumap_hard_inout);
     domctl.u.vcpuaffinity.cpumap_hard.nr_bits = cpusize * 8;
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_soft.bitmap,
+                         cpumap_soft_inout);
+    domctl.u.vcpuaffinity.cpumap_soft.nr_bits = cpusize * 8;
 
     ret = do_domctl(xch, &domctl);
 
-    xc_hypercall_buffer_free(xch, local);
-
  out:
+    xc_hypercall_bounce_post(xch, cpumap_hard_inout);
+    xc_hypercall_bounce_post(xch, cpumap_soft_inout);
+
     return ret;
 }
 
@@ -246,10 +256,13 @@ int xc_vcpu_setaffinity(xc_interface *xch,
 int xc_vcpu_getaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap)
+                        xc_cpumap_t cpumap_hard,
+                        xc_cpumap_t cpumap_soft,
+                        uint32_t flags)
 {
     DECLARE_DOMCTL;
-    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_hard, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(cpumap_soft, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
     int ret = -1;
     int cpusize;
 
@@ -257,30 +270,37 @@ int xc_vcpu_getaffinity(xc_interface *xch,
     if (cpusize <= 0)
     {
         PERROR("Could not get number of cpus");
-        goto out;
+        return -1;
     }
 
-    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
-    if (local == NULL)
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_hard, cpusize);
+    HYPERCALL_BOUNCE_SET_SIZE(cpumap_soft, cpusize);
+
+    if ( xc_hypercall_bounce_pre(xch, cpumap_hard) ||
+         xc_hypercall_bounce_pre(xch, cpumap_soft) )
     {
-        PERROR("Could not allocate memory for getvcpuaffinity domctl hypercall");
+        PERROR("Could not allocate hcall buffers for DOMCTL_getvcpuaffinity");
         goto out;
     }
 
     domctl.cmd = XEN_DOMCTL_getvcpuaffinity;
     domctl.domain = (domid_t)domid;
     domctl.u.vcpuaffinity.vcpu = vcpu;
-    domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
+    domctl.u.vcpuaffinity.flags = flags;
 
-    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap, local);
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_hard.bitmap,
+                         cpumap_hard);
     domctl.u.vcpuaffinity.cpumap_hard.nr_bits = cpusize * 8;
+    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap_soft.bitmap,
+                         cpumap_soft);
+    domctl.u.vcpuaffinity.cpumap_soft.nr_bits = cpusize * 8;
 
     ret = do_domctl(xch, &domctl);
 
-    memcpy(cpumap, local, cpusize);
+ out:
+    xc_hypercall_bounce_post(xch, cpumap_hard);
+    xc_hypercall_bounce_post(xch, cpumap_soft);
 
-    xc_hypercall_buffer_free(xch, local);
-out:
     return ret;
 }
 
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index b55d857..0249534 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -600,14 +600,65 @@ int xc_domain_node_getaffinity(xc_interface *xch,
                                uint32_t domind,
                                xc_nodemap_t nodemap);
 
+/**
+ * This function specifies the CPU affinity for a vcpu.
+ *
+ * There are two kinds of affinity. Soft affinity is on what CPUs a vcpu
+ * prefers to run. Hard affinity is on what CPUs a vcpu is allowed to run.
+ * If flags contains XEN_VCPUAFFINITY_SOFT, the soft affinity is set to
+ * what cpumap_soft_inout contains. If flags contains XEN_VCPUAFFINITY_HARD,
+ * the hard affinity is set to what cpumap_hard_inout contains. Both flags
+ * can be set at the same time, in which case both soft and hard affinity are
+ * set to what the respective parameter contains.
+ *
+ * The function also returns the effective hard and/or soft affinity, still
+ * via the cpumap_soft_inout and cpumap_hard_inout parameters. Effective
+ * affinity is, in case of soft affinity, the intersection of soft affinity,
+ * hard affinity and the cpupool's online CPUs for the domain, and is returned
+ * in cpumap_soft_inout, if XEN_VCPUAFFINITY_SOFT is set in flags. In case of
+ * hard affinity, it is the intersection between hard affinity and the
+ * cpupool's online CPUs, and is returned in cpumap_hard_inout, if
+ * XEN_VCPUAFFINITY_HARD is set in flags. If both flags are set, both soft
+ * and hard affinity are returned in the respective parameter.
+ *
+ * We report the effective affinity back because it is what the Xen
+ * scheduler will actually use, thus allowing the caller to check whether
+ * it matches, or is at least good enough for, their purposes.
+ *
+ * @param xch a handle to an open hypervisor interface.
+ * @param domid the id of the domain to which the vcpu belongs
+ * @param vcpu the vcpu id within the domain
+ * @param cpumap_hard_inout specifies(/returns) the (effective) hard affinity
+ * @param cpumap_soft_inout specifies(/returns) the (effective) soft affinity
+ * @param flags what we want to set
+ */
 int xc_vcpu_setaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap);
+                        xc_cpumap_t cpumap_hard_inout,
+                        xc_cpumap_t cpumap_soft_inout,
+                        uint32_t flags);
+
+/**
+ * This function retrieves hard and soft CPU affinity of a vcpu,
+ * depending on what flags are set.
+ *
+ * Soft affinity is returned in cpumap_soft if XEN_VCPUAFFINITY_SOFT is set.
+ * Hard affinity is returned in cpumap_hard if XEN_VCPUAFFINITY_HARD is set.
+ *
+ * @param xch a handle to an open hypervisor interface.
+ * @param domid the id of the domain to which the vcpu belongs
+ * @param vcpu the vcpu id within the domain
+ * @param cpumap_hard is where hard affinity is returned
+ * @param cpumap_soft is where soft affinity is returned
+ * @param flags what we want to get
+ */
 int xc_vcpu_getaffinity(xc_interface *xch,
                         uint32_t domid,
                         int vcpu,
-                        xc_cpumap_t cpumap);
+                        xc_cpumap_t cpumap_hard,
+                        xc_cpumap_t cpumap_soft,
+                        uint32_t flags);
 
 
 /**
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 9054c3b..d4cd4a9 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4597,7 +4597,8 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
             goto err;
         }
         if (xc_vcpu_getaffinity(ctx->xch, domid, *nr_vcpus_out,
-                                ptr->cpumap.map) == -1) {
+                                ptr->cpumap.map, NULL,
+                                XEN_VCPUAFFINITY_HARD) == -1) {
             LOGE(ERROR, "getting vcpu affinity");
             goto err;
         }
@@ -4621,7 +4622,8 @@ err:
 int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
                            libxl_bitmap *cpumap)
 {
-    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid, cpumap->map)) {
+    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid, cpumap->map, NULL,
+                            XEN_VCPUAFFINITY_HARD)) {
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "setting vcpu affinity");
         return ERROR_FAIL;
     }
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index ff29b47..f0810eb 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -438,7 +438,9 @@ CAMLprim value stub_xc_vcpu_setaffinity(value xch, value domid,
 			c_cpumap[i/8] |= 1 << (i&7);
 	}
 	retval = xc_vcpu_setaffinity(_H(xch), _D(domid),
-	                             Int_val(vcpu), c_cpumap);
+				     Int_val(vcpu),
+				     c_cpumap, NULL,
+				     XEN_VCPUAFFINITY_HARD);
 	free(c_cpumap);
 
 	if (retval < 0)
@@ -460,7 +462,9 @@ CAMLprim value stub_xc_vcpu_getaffinity(value xch, value domid,
 		failwith_xc(_H(xch));
 
 	retval = xc_vcpu_getaffinity(_H(xch), _D(domid),
-	                             Int_val(vcpu), c_cpumap);
+				     Int_val(vcpu),
+				     c_cpumap, NULL,
+				     XEN_VCPUAFFINITY_HARD);
 	if (retval < 0) {
 		free(c_cpumap);
 		failwith_xc(_H(xch));
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index cb34446..54e8799 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -256,7 +256,8 @@ static PyObject *pyxc_vcpu_setaffinity(XcObject *self,
         }
     }
   
-    if ( xc_vcpu_setaffinity(self->xc_handle, dom, vcpu, cpumap) != 0 )
+    if ( xc_vcpu_setaffinity(self->xc_handle, dom, vcpu, cpumap,
+                             NULL, XEN_VCPUAFFINITY_HARD) != 0 )
     {
         free(cpumap);
         return pyxc_error_to_exception(self->xc_handle);
@@ -403,7 +404,8 @@ static PyObject *pyxc_vcpu_getinfo(XcObject *self,
     if(cpumap == NULL)
         return pyxc_error_to_exception(self->xc_handle);
 
-    rc = xc_vcpu_getaffinity(self->xc_handle, dom, vcpu, cpumap);
+    rc = xc_vcpu_getaffinity(self->xc_handle, dom, vcpu, cpumap,
+                             NULL, XEN_VCPUAFFINITY_HARD);
     if ( rc < 0 )
     {
         free(cpumap);


* [PATCH v9 3/9] libxl: get and set soft affinity
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
  2014-06-18 14:27 ` [PATCH v9 1/9] libxc/libxl: bump library SONAMEs Dario Faggioli
  2014-06-18 14:27 ` [PATCH v9 2/9] libxc: get and set soft and hard affinity Dario Faggioli
@ 2014-06-18 14:28 ` Dario Faggioli
  2014-06-18 14:28 ` [PATCH v9 4/9] xl: enable getting and setting " Dario Faggioli
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

Make room for a new cpumap in libxl_vcpuinfo, called cpumap_soft,
for retrieving soft affinity, and amend the relevant API
accordingly.

libxl_set_vcpuaffinity() now takes two cpumaps, one for hard
and one for soft affinity (LIBXL_API_VERSION is exploited to
retain source level backward compatibility). Either of the
two cpumaps can be NULL, in which case only the affinity
corresponding to the non-NULL cpumap is affected.

Getting soft affinity happens indirectly (see, e.g.,
`xl vcpu-list'), as is already the case for hard affinity.

This commit also introduces some logic to check whether the
affinity which will be used by Xen to schedule the vCPU(s)
actually matches the cpumaps provided. In fact, we want to
allow every possible combination of hard and soft affinity
to be set, but we warn the user about particularly weird
situations (e.g., hard and soft affinity being disjoint
sets of pCPUs).

This change also updates the error handling for calls to
libxl_set_vcpuaffinity() in xl, as such calls can now return
any libxl error code, not just -1.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes from v7:
 * Added 0x040500 to the list of valid libxl API versions;
 * killed the cpumap_soft field from libxl_build_info.
   That is not necessary, as subsequent patches will
   introduce an alternative way to set soft affinity;
 * fixed a typo.

Changes from v6:
 * fix a typo and update LIBXL_API_VERSION appropriately,
   as requested during review;
 * libxl_bitmap_equal() rewritten in such a way that it now
   automatically deals with bitmaps of different sizes. Bits
   non present in one of the maps are just considered to be
   0, as suggested during review;
 * there is no need for libxl_bitmap_bit_valid() any longer,
   so killed it;
 * inside libxl, retrieve and set both hard and soft affinity
   atomically, in just one call (to xc_vcpu_getaffinity() and
   xc_vcpu_setaffinity(), respectively), as requested during
   review.
 * update the changelog about changing the error handling for
   the call to libxl_set_vcpuaffinity() in xl.

Changes from v4:
 * get rid of inline stubs inside the LIBXL_API_VERSION_XXX
   block and just use define, as suggested during review
 * adapt to the new xc interface;
 * avoid leaking cpumap_soft in libxl_list_vcpu on error, as
   requested during review;
 * fix bogus `return 0' in libxl_set_vcpuaffinity, as
   requested during review;
 * clarify the comment for LIBXL_HAVE_SOFT_AFFINITY, as
   suggested during review;
 * renamed libxl_bitmap_valid() to libxl_bitmap_bit_valid(),
   as suggested during review.

Changes from v3:
 * only introduce one LIBXL_HAVE_ symbol for soft affinity,
   as requested during review;
 * use LIBXL_API_VERSION instead of having multiple version
   of the same function, as suggested during review;
 * use libxl_get_nr_cpus() rather than libxl_get_cputopology(),
   as suggested during review;
 * use LOGE() instead of LIBXL__LOG_ERRNO(), as requested
   during review;
 * kill the flags and use just one _set_vcpuaffinity()
   function with two cpumaps, allowing either of them to
   be NULL, as suggested during review;
 * avoid overflowing the bitmaps in libxl_bitmap_equal(),
   as suggested during review.

Changes from v2:
 * interface completely redesigned, as discussed during
   review.
---
 tools/libxl/libxl.c         |   99 ++++++++++++++++++++++++++++++++++++++-----
 tools/libxl/libxl.h         |   26 ++++++++++-
 tools/libxl/libxl_dom.c     |    3 +
 tools/libxl/libxl_types.idl |    3 +
 tools/libxl/libxl_utils.h   |   25 +++++++++++
 tools/libxl/xl_cmdimpl.c    |    6 +--
 6 files changed, 142 insertions(+), 20 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index d4cd4a9..9e9adaf 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4592,13 +4592,17 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
         libxl_bitmap_init(&ptr->cpumap);
         if (libxl_cpu_bitmap_alloc(ctx, &ptr->cpumap, 0))
             goto err;
+        libxl_bitmap_init(&ptr->cpumap_soft);
+        if (libxl_cpu_bitmap_alloc(ctx, &ptr->cpumap_soft, 0))
+            goto err;
         if (xc_vcpu_getinfo(ctx->xch, domid, *nr_vcpus_out, &vcpuinfo) == -1) {
             LOGE(ERROR, "getting vcpu info");
             goto err;
         }
+
         if (xc_vcpu_getaffinity(ctx->xch, domid, *nr_vcpus_out,
-                                ptr->cpumap.map, NULL,
-                                XEN_VCPUAFFINITY_HARD) == -1) {
+                                ptr->cpumap.map, ptr->cpumap_soft.map,
+                                XEN_VCPUAFFINITY_SOFT|XEN_VCPUAFFINITY_HARD) == -1) {
             LOGE(ERROR, "getting vcpu affinity");
             goto err;
         }
@@ -4614,34 +4618,105 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
 
 err:
     libxl_bitmap_dispose(&ptr->cpumap);
+    libxl_bitmap_dispose(&ptr->cpumap_soft);
     free(ret);
     GC_FREE;
     return NULL;
 }
 
 int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
-                           libxl_bitmap *cpumap)
+                           const libxl_bitmap *cpumap_hard,
+                           const libxl_bitmap *cpumap_soft)
 {
-    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid, cpumap->map, NULL,
-                            XEN_VCPUAFFINITY_HARD)) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "setting vcpu affinity");
-        return ERROR_FAIL;
+    GC_INIT(ctx);
+    libxl_bitmap hard, soft;
+    int rc, flags = 0;
+
+    libxl_bitmap_init(&hard);
+    libxl_bitmap_init(&soft);
+
+    if (!cpumap_hard && !cpumap_soft) {
+        rc = ERROR_INVAL;
+        goto out;
     }
-    return 0;
+
+    /*
+     * Xen wants writable hard and/or soft cpumaps, to put back in them
+     * the effective hard and/or soft affinity that will be used.
+     */
+    if (cpumap_hard) {
+        rc = libxl_cpu_bitmap_alloc(ctx, &hard, 0);
+        if (rc)
+            goto out;
+
+        libxl_bitmap_copy(ctx, &hard, cpumap_hard);
+        flags = XEN_VCPUAFFINITY_HARD;
+    }
+    if (cpumap_soft) {
+        rc = libxl_cpu_bitmap_alloc(ctx, &soft, 0);
+        if (rc)
+            goto out;
+
+        libxl_bitmap_copy(ctx, &soft, cpumap_soft);
+        flags |= XEN_VCPUAFFINITY_SOFT;
+    }
+
+    if (xc_vcpu_setaffinity(ctx->xch, domid, vcpuid,
+                            cpumap_hard ? hard.map : NULL,
+                            cpumap_soft ? soft.map : NULL,
+                            flags)) {
+        LOGE(ERROR, "setting vcpu affinity");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    /*
+     * Let's check the results. Hard affinity will never be empty, but it
+     * is possible that Xen will use something different from what we
+     * asked for, for various reasons. If that's the case, report it.
+     */
+    if (cpumap_hard &&
+        !libxl_bitmap_equal(cpumap_hard, &hard, 0))
+        LOG(DEBUG, "New hard affinity for vcpu %d has unreachable cpus",
+        vcpuid);
+    /*
+     * Soft affinity can be both different from what was asked for and
+     * empty. Check for (and report) both.
+     */
+    if (cpumap_soft) {
+        if (!libxl_bitmap_equal(cpumap_soft, &soft, 0))
+            LOG(DEBUG, "New soft affinity for vcpu %d has unreachable cpus",
+                vcpuid);
+        if (libxl_bitmap_is_empty(&soft))
+            LOG(WARN, "all cpus in soft affinity of vcpu %d are unreachable."
+                " Only hard affinity will be considered for scheduling",
+                vcpuid);
+    }
+
+    rc = 0;
+ out:
+    libxl_bitmap_dispose(&hard);
+    libxl_bitmap_dispose(&soft);
+    GC_FREE;
+    return rc;
 }
 
 int libxl_set_vcpuaffinity_all(libxl_ctx *ctx, uint32_t domid,
-                               unsigned int max_vcpus, libxl_bitmap *cpumap)
+                               unsigned int max_vcpus,
+                               const libxl_bitmap *cpumap_hard,
+                               const libxl_bitmap *cpumap_soft)
 {
+    GC_INIT(ctx);
     int i, rc = 0;
 
     for (i = 0; i < max_vcpus; i++) {
-        if (libxl_set_vcpuaffinity(ctx, domid, i, cpumap)) {
-            LIBXL__LOG(ctx, LIBXL__LOG_WARNING,
-                       "failed to set affinity for %d", i);
+        if (libxl_set_vcpuaffinity(ctx, domid, i, cpumap_hard, cpumap_soft)) {
+            LOG(WARN, "failed to set affinity for %d", i);
             rc = ERROR_FAIL;
         }
     }
+
+    GC_FREE;
     return rc;
 }
 
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 17b8a7b..c2b143b 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -104,6 +104,13 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY indicates that a 'cpumap_soft'
+ * field (of libxl_bitmap type) is present in libxl_vcpuinfo,
+ * containing the soft affinity of a vcpu.
+ */
+#define LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY 1
+
+/*
  * LIBXL_HAVE_DEVICE_DISK_DIRECT_IO_SAFE indicates that a
  * 'direct_io_safe' field (of boolean type) is present in
  * libxl_device_disk.
@@ -325,7 +332,7 @@
 /* API compatibility. */
 #ifdef LIBXL_API_VERSION
 #if LIBXL_API_VERSION != 0x040200 && LIBXL_API_VERSION != 0x040300 && \
-    LIBXL_API_VERSION != 0x040400
+    LIBXL_API_VERSION != 0x040400 && LIBXL_API_VERSION != 0x040500
 #error Unknown LIBXL_API_VERSION
 #endif
 #endif
@@ -1108,9 +1115,22 @@ int libxl_userdata_retrieve(libxl_ctx *ctx, uint32_t domid,
 
 int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo);
 int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
-                           libxl_bitmap *cpumap);
+                           const libxl_bitmap *cpumap_hard,
+                           const libxl_bitmap *cpumap_soft);
 int libxl_set_vcpuaffinity_all(libxl_ctx *ctx, uint32_t domid,
-                               unsigned int max_vcpus, libxl_bitmap *cpumap);
+                               unsigned int max_vcpus,
+                               const libxl_bitmap *cpumap_hard,
+                               const libxl_bitmap *cpumap_soft);
+
+#if defined (LIBXL_API_VERSION) && LIBXL_API_VERSION < 0x040500
+
+#define libxl_set_vcpuaffinity(ctx, domid, vcpuid, map) \
+    libxl_set_vcpuaffinity((ctx), (domid), (vcpuid), (map), NULL)
+#define libxl_set_vcpuaffinity_all(ctx, domid, max_vcpus, map) \
+    libxl_set_vcpuaffinity_all((ctx), (domid), (max_vcpus), (map), NULL)
+
+#endif
+
 int libxl_domain_set_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
                                   libxl_bitmap *nodemap);
 int libxl_domain_get_nodeaffinity(libxl_ctx *ctx, uint32_t domid,
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 661999c..a90a8d5 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -261,7 +261,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
             return rc;
     }
     libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
-    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
+    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
+                               &info->cpumap, NULL);
 
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
         LIBXL_MAXMEM_CONSTANT) < 0) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index f0f6e34..2da58e2 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -514,7 +514,8 @@ libxl_vcpuinfo = Struct("vcpuinfo", [
     ("blocked", bool),
     ("running", bool),
     ("vcpu_time", uint64), # total vcpu time ran (ns)
-    ("cpumap", libxl_bitmap), # current cpu's affinities
+    ("cpumap", libxl_bitmap), # current hard cpu affinity
+    ("cpumap_soft", libxl_bitmap), # current soft cpu affinity
     ], dir=DIR_OUT)
 
 libxl_physinfo = Struct("physinfo", [
diff --git a/tools/libxl/libxl_utils.h b/tools/libxl/libxl_utils.h
index 8bfb81b..13b42a1 100644
--- a/tools/libxl/libxl_utils.h
+++ b/tools/libxl/libxl_utils.h
@@ -101,6 +101,31 @@ static inline int libxl_bitmap_cpu_valid(libxl_bitmap *bitmap, int bit)
 #define libxl_for_each_set_bit(v, m) for (v = 0; v < (m).size * 8; v++) \
                                              if (libxl_bitmap_test(&(m), v))
 
+/*
+ * Compares two bitmaps bit by bit, up to nr_bits or, if nr_bits is 0, up
+ * to the size of the largest bitmap. If the sizes do not match, bits past
+ * the end of a bitmap are considered to be 0, which matches the semantics
+ * and implementation of libxl_bitmap_test().
+ *
+ * So, basically, [0,1,0] and [0,1] are considered equal, while [0,1,1] and
+ * [0,1] are different.
+ */
+static inline int libxl_bitmap_equal(const libxl_bitmap *ba,
+                                     const libxl_bitmap *bb,
+                                     int nr_bits)
+{
+    int i;
+
+    if (nr_bits == 0)
+        nr_bits = ba->size > bb->size ? ba->size * 8 : bb->size * 8;
+
+    for (i = 0; i < nr_bits; i++) {
+        if (libxl_bitmap_test(ba, i) != libxl_bitmap_test(bb, i))
+            return 0;
+    }
+    return 1;
+}
+
 int libxl_cpu_bitmap_alloc(libxl_ctx *ctx, libxl_bitmap *cpumap, int max_cpus);
 int libxl_node_bitmap_alloc(libxl_ctx *ctx, libxl_bitmap *nodemap,
                             int max_nodes);
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 64a1c77..860ec04 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -2232,7 +2232,7 @@ start:
             } else {
                 libxl_bitmap_set_any(&vcpu_cpumap);
             }
-            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap)) {
+            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap, NULL)) {
                 fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
                 libxl_bitmap_dispose(&vcpu_cpumap);
                 free(vcpu_to_pcpu);
@@ -4663,7 +4663,7 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
     }
 
     if (vcpuid != -1) {
-        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, &cpumap) == -1) {
+        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, &cpumap, NULL)) {
             fprintf(stderr, "Could not set affinity for vcpu `%u'.\n", vcpuid);
             goto out;
         }
@@ -4675,7 +4675,7 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
         }
         for (i = 0; i < nb_vcpu; i++) {
             if (libxl_set_vcpuaffinity(ctx, domid, vcpuinfo[i].vcpuid,
-                                       &cpumap) == -1) {
+                                       &cpumap, NULL)) {
                 fprintf(stderr, "libxl_set_vcpuaffinity failed"
                                 " on vcpu `%u'.\n", vcpuinfo[i].vcpuid);
             }

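The padding semantics of libxl_bitmap_equal() above can be illustrated with a
small standalone model. This is a hedged sketch, not libxl code: toy_bitmap,
toy_bitmap_test and toy_bitmap_equal are illustrative stand-ins that mimic the
stated behaviour (bits past the end of the shorter map read as 0):

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for libxl_bitmap: a byte array plus its used length. */
struct toy_bitmap {
    uint8_t map[8];
    int size; /* bytes actually in use */
};

/* Out-of-range bits read as 0, mirroring libxl_bitmap_test(). */
static int toy_bitmap_test(const struct toy_bitmap *b, int bit)
{
    if (bit >= b->size * 8)
        return 0;
    return (b->map[bit / 8] >> (bit % 8)) & 1;
}

/* Same comparison rule as the libxl_bitmap_equal() above: compare up to
 * nr_bits, or up to the larger map's width when nr_bits is 0. */
static int toy_bitmap_equal(const struct toy_bitmap *ba,
                            const struct toy_bitmap *bb, int nr_bits)
{
    int i;

    if (nr_bits == 0)
        nr_bits = (ba->size > bb->size ? ba->size : bb->size) * 8;

    for (i = 0; i < nr_bits; i++)
        if (toy_bitmap_test(ba, i) != toy_bitmap_test(bb, i))
            return 0;
    return 1;
}
```

With this, [0,1] and [0,1,0] compare equal while [0,1] and [0,1,1] do not,
matching the comment in the patch.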
^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v9 4/9] xl: enable getting and setting soft affinity
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
                   ` (2 preceding siblings ...)
  2014-06-18 14:28 ` [PATCH v9 3/9] libxl: get and set soft affinity Dario Faggioli
@ 2014-06-18 14:28 ` Dario Faggioli
  2014-06-18 15:37   ` Ian Campbell
  2014-06-18 14:28 ` [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

Getting happens via `xl vcpu-list', which now looks like this:

 # xl vcpu-list -s
 Name       ID VCPU CPU State Time(s) Affinity (Hard / Soft)
 Domain-0   0     0  11  -b-     5.4  8-15 / all
 Domain-0   0     1  11  -b-     1.0  8-15 / all
 Domain-0   0    14  13  -b-     1.4  8-15 / all
 Domain-0   0    15   8  -b-     1.6  8-15 / all
 vm-test    3     0   4  -b-     2.5  0-12 / 0-7
 vm-test    3     1   0  -b-     3.2  0-12 / 0-7

Setting happens by specifying two pCPU masks to the `xl vcpu-pin'
command: the first one is the hard affinity, the second the soft
affinity. If only one mask is specified, only the hard affinity
is affected. To change only the soft affinity, '-' can be used
as the hard affinity mask parameter, which leaves it untouched.

xl manual page is updated accordingly.
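
The '-' handling described above can be sketched in isolation. This is a toy
model, not the actual xl code: pin_args and resolve_pin_args are hypothetical
names, and the strings returned stand for masks that would then be parsed by
vcpupin_parse() (a NULL means "pass NULL to libxl_set_vcpuaffinity(), i.e.
leave that affinity alone"):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct pin_args {
    const char *hard;   /* NULL => leave hard affinity alone */
    const char *soft;   /* NULL => leave soft affinity alone */
};

/* hard_str is always present on the command line; soft_str may be NULL
 * when the fourth argument was omitted. "-" means "do not touch". */
static struct pin_args resolve_pin_args(const char *hard_str,
                                        const char *soft_str)
{
    struct pin_args r = { NULL, NULL };

    if (strcmp(hard_str, "-") != 0)
        r.hard = hard_str;                    /* e.g. "3,4" or "all" */
    if (soft_str != NULL && strcmp(soft_str, "-") != 0)
        r.soft = soft_str;                    /* e.g. "6-9" or "all" */
    return r;
}
```

This covers all four invocation shapes from the patch: `3,4`, `3,4 -`,
`- 6-9` and `3,4 6-9`.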

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v7:
 * 'const char *' used instead of #define/#undef for string
   constants, as requested during review;
 * more consistently use the 'hard' and 'soft' pointers rather
   than the actual cpumap variables, as suggested during review;
 * bitmap holding hard affinity renamed from 'cpumap' to
   'cpumap_hard'.

Changes from v6:
 * removed a remnant from previous iteration, when `vcpu-pin'
   was taking a parameter;
 * tried to simplify parsing inside of `xl vcpu-pin', as suggested
   during review. Could not take the exact approach described in the
   emails, but still did my best for making this more clear and easy
   to read;
 * fixed a few typos and leftovers.

Changes from v5:
 * change command line interface for 'vcpu-pin', as suggested during
   review.

Changes from v4:
 * fix and rephrased the manual entry, as suggested during review;
 * more refactoring to remove some leftover special casing, as
   suggested during review.

Changes from v3:
 * fix typos in doc, rephrased the help message and changed
   the title of the column for hard/soft affinity, as suggested
   during review.

Changes from v2:
 * this patch folds what in v2 were patches 13 and 14;
 * `xl vcpu-pin' always shows both hard and soft affinity,
   without the need of passing '-s'.
---
 docs/man/xl.pod.1         |   32 +++++++++++---
 tools/libxl/xl_cmdimpl.c  |  101 ++++++++++++++++++++++++++++++---------------
 tools/libxl/xl_cmdtable.c |    2 -
 3 files changed, 92 insertions(+), 43 deletions(-)

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index 30bd4bf..9d1c2a5 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -651,16 +651,32 @@ after B<vcpu-set>, go to B<SEE ALSO> section for information.
 Lists VCPU information for a specific domain.  If no domain is
 specified, VCPU information for all domains will be provided.
 
-=item B<vcpu-pin> I<domain-id> I<vcpu> I<cpus>
+=item B<vcpu-pin> I<domain-id> I<vcpu> I<cpus hard> I<cpus soft>
 
-Pins the VCPU to only run on the specific CPUs.  The keyword
-B<all> can be used to apply the I<cpus> list to all VCPUs in the
-domain.
+Set hard and soft affinity for a I<vcpu> of I<domain-id>. Normally VCPUs
+can float between available CPUs whenever Xen deems a different run state
+is appropriate.
 
-Normally VCPUs can float between available CPUs whenever Xen deems a
-different run state is appropriate.  Pinning can be used to restrict
-this, by ensuring certain VCPUs can only run on certain physical
-CPUs.
+Hard affinity can be used to restrict this, by ensuring certain VCPUs
+can only run on certain physical CPUs. Soft affinity specifies a I<preferred>
+set of CPUs. Soft affinity needs special support in the scheduler, which is
+only provided in credit1.
+
+The keyword B<all> can be used to apply the hard and soft affinity masks to
+all the VCPUs in the domain. The symbol '-' can be used to leave either
+hard or soft affinity alone.
+
+For example:
+
+ xl vcpu-pin 0 3 - 6-9
+
+will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and 9,
+leaving its hard affinity untouched. On the other hand:
+
+ xl vcpu-pin 0 3 3,4 6-9
+
+will set both hard and soft affinity, the former to pCPUs 3 and 4, the
+latter to pCPUs 6,7,8, and 9.
 
 =item B<vm-list>
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 860ec04..516a4d7 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -656,7 +656,7 @@ static int update_cpumap_range(const char *str, libxl_bitmap *cpumap)
 * single cpus or as entire NUMA nodes) and turns it into the
  * corresponding libxl_bitmap (in cpumap).
  */
-static int vcpupin_parse(char *cpu, libxl_bitmap *cpumap)
+static int vcpupin_parse(const char *cpu, libxl_bitmap *cpumap)
 {
     char *ptr, *saveptr = NULL;
     int rc = 0;
@@ -4547,8 +4547,10 @@ static void print_vcpuinfo(uint32_t tdomid,
     }
     /*      TIM */
     printf("%9.1f  ", ((float)vcpuinfo->vcpu_time / 1e9));
-    /* CPU AFFINITY */
+    /* CPU HARD AND SOFT AFFINITY */
     print_bitmap(vcpuinfo->cpumap.map, nr_cpus, stdout);
+    printf(" / ");
+    print_bitmap(vcpuinfo->cpumap_soft.map, nr_cpus, stdout);
     printf("\n");
 }
 
@@ -4583,7 +4585,8 @@ static void vcpulist(int argc, char **argv)
     }
 
     printf("%-32s %5s %5s %5s %5s %9s %s\n",
-           "Name", "ID", "VCPU", "CPU", "State", "Time(s)", "CPU Affinity");
+           "Name", "ID", "VCPU", "CPU", "State", "Time(s)",
+           "Affinity (Hard / Soft)");
     if (!argc) {
         if (!(dominfo = libxl_list_domain(ctx, &nb_domain))) {
             fprintf(stderr, "libxl_list_domain failed.\n");
@@ -4616,17 +4619,29 @@ int main_vcpulist(int argc, char **argv)
     return 0;
 }
 
-static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
+int main_vcpupin(int argc, char **argv)
 {
     libxl_vcpuinfo *vcpuinfo;
-    libxl_bitmap cpumap;
-
-    uint32_t vcpuid;
+    libxl_bitmap cpumap_hard, cpumap_soft;
+    libxl_bitmap *soft = &cpumap_soft, *hard = &cpumap_hard;
+    uint32_t vcpuid, domid;
+    const char *vcpu, *hard_str, *soft_str;
     char *endptr;
-    int i, nb_cpu, nb_vcpu, rc = -1;
+    int opt, nb_cpu, nb_vcpu, rc = -1;
+
+    libxl_bitmap_init(&cpumap_hard);
+    libxl_bitmap_init(&cpumap_soft);
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
+        /* No options */
+    }
 
-    libxl_bitmap_init(&cpumap);
+    domid = find_domain(argv[optind]);
+    vcpu = argv[optind+1];
+    hard_str = argv[optind+2];
+    soft_str = (argc > optind+3) ? argv[optind+3] : NULL;
 
+    /* Figure out which vCPU we are dealing with */
     vcpuid = strtoul(vcpu, &endptr, 10);
     if (vcpu == endptr) {
         if (strcmp(vcpu, "all")) {
@@ -4636,10 +4651,35 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
         vcpuid = -1;
     }
 
-    if (libxl_cpu_bitmap_alloc(ctx, &cpumap, 0))
+    if (libxl_cpu_bitmap_alloc(ctx, &cpumap_hard, 0) ||
+        libxl_cpu_bitmap_alloc(ctx, &cpumap_soft, 0))
         goto out;
 
-    if (vcpupin_parse(cpu, &cpumap))
+    /*
+     * Syntax is: xl vcpu-pin <domid> <vcpu> <hard> <soft>
+     * We want to handle all the following cases ('-' means
+     * "leave it alone"):
+     *  xl vcpu-pin 0 3 3,4
+     *  xl vcpu-pin 0 3 3,4 -
+     *  xl vcpu-pin 0 3 - 6-9
+     *  xl vcpu-pin 0 3 3,4 6-9
+     */
+
+    /*
+     * Hard affinity is always present. However, if it's "-", all we need
+     * is passing a NULL pointer to the libxl_set_vcpuaffinity() call below.
+     */
+    if (!strcmp(hard_str, "-"))
+        hard = NULL;
+    else if (vcpupin_parse(hard_str, hard))
+        goto out;
+    /*
+     * Soft affinity is handled similarly. Only difference: we also want
+     * to pass NULL to libxl_set_vcpuaffinity() if it is not specified.
+     */
+    if (argc <= optind+3 || !strcmp(soft_str, "-"))
+        soft = NULL;
+    else if (vcpupin_parse(soft_str, soft))
         goto out;
 
     if (dryrun_only) {
@@ -4650,7 +4690,14 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
         }
 
         fprintf(stdout, "cpumap: ");
-        print_bitmap(cpumap.map, nb_cpu, stdout);
+        if (hard)
+            print_bitmap(hard->map, nb_cpu, stdout);
+        else
+            fprintf(stdout, "-");
+        if (soft) {
+            fprintf(stdout, " ");
+            print_bitmap(soft->map, nb_cpu, stdout);
+        }
         fprintf(stdout, "\n");
 
         if (ferror(stdout) || fflush(stdout)) {
@@ -4663,43 +4710,29 @@ static int vcpupin(uint32_t domid, const char *vcpu, char *cpu)
     }
 
     if (vcpuid != -1) {
-        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, &cpumap, NULL)) {
-            fprintf(stderr, "Could not set affinity for vcpu `%u'.\n", vcpuid);
+        if (libxl_set_vcpuaffinity(ctx, domid, vcpuid, hard, soft)) {
+            fprintf(stderr, "Could not set affinity for vcpu `%u'.\n",
+                    vcpuid);
             goto out;
         }
     }
     else {
-        if (!(vcpuinfo = libxl_list_vcpu(ctx, domid, &nb_vcpu, &i))) {
+        if (!(vcpuinfo = libxl_list_vcpu(ctx, domid, &nb_vcpu, &nb_cpu))) {
             fprintf(stderr, "libxl_list_vcpu failed.\n");
             goto out;
         }
-        for (i = 0; i < nb_vcpu; i++) {
-            if (libxl_set_vcpuaffinity(ctx, domid, vcpuinfo[i].vcpuid,
-                                       &cpumap, NULL)) {
-                fprintf(stderr, "libxl_set_vcpuaffinity failed"
-                                " on vcpu `%u'.\n", vcpuinfo[i].vcpuid);
-            }
-        }
+        if (libxl_set_vcpuaffinity_all(ctx, domid, nb_vcpu, hard, soft))
+            fprintf(stderr, "Could not set affinity.\n");
         libxl_vcpuinfo_list_free(vcpuinfo, nb_vcpu);
     }
 
     rc = 0;
  out:
-    libxl_bitmap_dispose(&cpumap);
+    libxl_bitmap_dispose(&cpumap_soft);
+    libxl_bitmap_dispose(&cpumap_hard);
     return rc;
 }
 
-int main_vcpupin(int argc, char **argv)
-{
-    int opt;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "vcpu-pin", 3) {
-        /* No options */
-    }
-
-    return vcpupin(find_domain(argv[optind]), argv[optind+1] , argv[optind+2]);
-}
-
 static void vcpuset(uint32_t domid, const char* nr_vcpus, int check_host)
 {
     char *endptr;
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index 4279b9f..7b7fa92 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -218,7 +218,7 @@ struct cmd_spec cmd_table[] = {
     { "vcpu-pin",
       &main_vcpupin, 1, 1,
       "Set which CPUs a VCPU can use",
-      "<Domain> <VCPU|all> <CPUs|all>",
+      "<Domain> <VCPU|all> <Hard affinity|-|all> <Soft affinity|-|all>",
     },
     { "vcpu-set",
       &main_vcpuset, 0, 1,


* [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
                   ` (3 preceding siblings ...)
  2014-06-18 14:28 ` [PATCH v9 4/9] xl: enable getting and setting " Dario Faggioli
@ 2014-06-18 14:28 ` Dario Faggioli
  2014-06-18 15:44   ` Ian Campbell
  2014-06-20  8:17   ` Dario Faggioli
  2014-06-18 14:28 ` [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

From: Wei Liu <wei.liu2@citrix.com>

This patch introduces an array of libxl_bitmap called "vcpu_hard_affinity"
in libxl IDL to preserve VCPU to PCPU mapping. This is necessary for libxl
to preserve all information to construct a domain.

Also define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY in libxl.h to
mark the change in API.

This patch was originally part of Wei's series about pushing as much
information as possible on domain configuration in libxl, rather than
xl. See here, for more details:
  http://lists.xen.org/archives/html/xen-devel/2014-06/msg01026.html
  http://lists.xen.org/archives/html/xen-devel/2014-06/msg01031.html
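
The array layout introduced here (one bitmap per vCPU, each bitmap sized for
the host's pCPUs) can be sketched as follows. This is an illustrative toy
under stated assumptions, not the libxl types or API: toy_map and
alloc_hard_affinity are hypothetical names, and "pin to all pCPUs" is the
default the xl side uses for unspecified vCPUs:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct toy_map {
    unsigned char *map;
    int nbytes;
};

/* One map per vCPU (max_vcpus entries), each wide enough for max_pcpus
 * bits, defaulting to "all pCPUs allowed" (every bit set; a real
 * implementation would also zero the bits beyond max_pcpus). */
static struct toy_map *alloc_hard_affinity(int max_vcpus, int max_pcpus)
{
    int i, nbytes = (max_pcpus + 7) / 8;
    struct toy_map *arr = calloc(max_vcpus, sizeof(*arr));

    if (!arr)
        return NULL;
    for (i = 0; i < max_vcpus; i++) {
        arr[i].map = calloc(1, nbytes);
        if (!arr[i].map)
            return NULL; /* a real implementation would unwind here */
        arr[i].nbytes = nbytes;
        memset(arr[i].map, 0xff, nbytes); /* default: all pCPUs */
    }
    return arr;
}
```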

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v8:
 * fixed a typo in the changelog.

Changes from v7:
 This patch is new in the series, and it is coming from Wei's series
 about pushing domain config information down to libxl. It is being
 incorporated in this series to reduce as much as possible the
 inter-dependencies between the two patch series, i.e., for ease
 of development and review.
---
 tools/libxl/libxl.h         |   15 ++++++++++
 tools/libxl/libxl_dom.c     |   14 +++++++++
 tools/libxl/libxl_types.idl |    1 +
 tools/libxl/xl_cmdimpl.c    |   65 ++++++++++++-------------------------------
 4 files changed, 48 insertions(+), 47 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index c2b143b..663abe2 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -338,6 +338,21 @@
 #endif
 
 /*
+ * LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY
+ *
+ * If this is defined, then libxl_domain_build_info structure will
+ * contain vcpu_hard_affinity, an array of libxl_bitmap that contains
+ * the necessary information to set the hard affinity of each VCPU to
+ * a set of PCPUs. Libxl will try to pin VCPUs to PCPUs according to
+ * this list.
+ *
+ * The number of libxl_bitmap in the array equals the maximum number
+ * of VCPUs. The size of each bitmap is computed based on the maximum
+ * number of PCPUs.
+ */
+#define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY 1
+
+/*
  * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
  *
  * If this is defined, then the libxl_domain_build_info structure will
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index a90a8d5..484ad84 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -264,6 +264,20 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                                &info->cpumap, NULL);
 
+    /* If we have the vcpu hard affinity list, honour it */
+    if (info->num_vcpu_hard_affinity) {
+        int i;
+
+        for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
+            if (libxl_set_vcpuaffinity(ctx, domid, i,
+                                       &info->vcpu_hard_affinity[i],
+                                       NULL)) {
+                LOG(ERROR, "setting affinity failed on vcpu `%d'", i);
+                return ERROR_FAIL;
+            }
+        }
+    }
+
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
         LIBXL_MAXMEM_CONSTANT) < 0) {
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 2da58e2..05978d7 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -299,6 +299,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("avail_vcpus",     libxl_bitmap),
     ("cpumap",          libxl_bitmap),
     ("nodemap",         libxl_bitmap),
+    ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
     ("numa_placement",  libxl_defbool),
     ("tsc_mode",        libxl_tsc_mode),
     ("max_memkb",       MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 516a4d7..ac603c8 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -88,9 +88,6 @@ xlchild children[child_max];
 static const char *common_domname;
 static int fd_lock = -1;
 
-/* Stash for specific vcpu to pcpu mappping */
-static int *vcpu_to_pcpu;
-
 static const char savefileheader_magic[32]=
     "Xen saved domain, xl format\n \0 \r";
 
@@ -825,6 +822,7 @@ static void parse_config_data(const char *config_source,
         b_info->max_vcpus = l;
 
     if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1)) {
+        b_info->num_vcpu_hard_affinity = b_info->max_vcpus;
         int n_cpus = 0;
 
         if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
@@ -832,21 +830,19 @@ static void parse_config_data(const char *config_source,
             exit(1);
         }
 
-        /* Prepare the array for single vcpu to pcpu mappings */
-        vcpu_to_pcpu = xmalloc(sizeof(int) * b_info->max_vcpus);
-        memset(vcpu_to_pcpu, -1, sizeof(int) * b_info->max_vcpus);
+        b_info->vcpu_hard_affinity =
+            xmalloc(b_info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));
+
+        for (i = 0; i < b_info->num_vcpu_hard_affinity; i++) {
+            libxl_bitmap_init(&b_info->vcpu_hard_affinity[i]);
+            if (libxl_cpu_bitmap_alloc(ctx,
+                                       &b_info->vcpu_hard_affinity[i], 0)) {
+                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
+                exit(1);
+            }
+            libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
+        }
 
-        /*
-         * Idea here is to let libxl think all the domain's vcpus
-         * have cpu affinity with all the pcpus on the list.
-         * It is then us, here in xl, that matches each single vcpu
-         * to its pcpu (and that's why we need to stash such info in
-         * the vcpu_to_pcpu array now) after the domain has been created.
-         * Doing it like this saves the burden of passing to libxl
-         * some big array hosting the single mappings. Also, using
-         * the cpumap derived from the list ensures memory is being
-         * allocated on the proper nodes anyway.
-         */
         libxl_bitmap_set_none(&b_info->cpumap);
         while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
             i = atoi(buf);
@@ -855,12 +851,14 @@ static void parse_config_data(const char *config_source,
                 exit(1);
             }
             libxl_bitmap_set(&b_info->cpumap, i);
-            if (n_cpus < b_info->max_vcpus)
-                vcpu_to_pcpu[n_cpus] = i;
+            if (n_cpus < b_info->max_vcpus) {
+                libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[n_cpus]);
+                libxl_bitmap_set(&b_info->vcpu_hard_affinity[n_cpus], i);
+            }
             n_cpus++;
         }
 
-        /* We have a cpumap, disable automatic placement */
+        /* We have a list of cpumaps, disable automatic placement */
         libxl_defbool_set(&b_info->numa_placement, false);
     }
     else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
@@ -2217,33 +2215,6 @@ start:
     if ( ret )
         goto error_out;
 
-    /* If single vcpu to pcpu mapping was requested, honour it */
-    if (vcpu_to_pcpu) {
-        libxl_bitmap vcpu_cpumap;
-
-        ret = libxl_cpu_bitmap_alloc(ctx, &vcpu_cpumap, 0);
-        if (ret)
-            goto error_out;
-        for (i = 0; i < d_config.b_info.max_vcpus; i++) {
-
-            if (vcpu_to_pcpu[i] != -1) {
-                libxl_bitmap_set_none(&vcpu_cpumap);
-                libxl_bitmap_set(&vcpu_cpumap, vcpu_to_pcpu[i]);
-            } else {
-                libxl_bitmap_set_any(&vcpu_cpumap);
-            }
-            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap, NULL)) {
-                fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
-                libxl_bitmap_dispose(&vcpu_cpumap);
-                free(vcpu_to_pcpu);
-                ret = ERROR_FAIL;
-                goto error_out;
-            }
-        }
-        libxl_bitmap_dispose(&vcpu_cpumap);
-        free(vcpu_to_pcpu); vcpu_to_pcpu = NULL;
-    }
-
     ret = libxl_userdata_store(ctx, domid, "xl",
                                     config_data, config_len);
     if (ret) {


* [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
                   ` (4 preceding siblings ...)
  2014-06-18 14:28 ` [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
@ 2014-06-18 14:28 ` Dario Faggioli
  2014-06-18 15:09   ` Wei Liu
  2014-06-18 15:53   ` Ian Campbell
  2014-06-18 14:28 ` [PATCH v9 7/9] xl: move the vcpu affinity parsing in a function Dario Faggioli
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

as, thanks to the previous change ("libxl/xl: push VCPU affinity
pinning down to libxl"), we now have an array of libxl_bitmap-s
that can be used to transfer to libxl the vcpu (hard) affinity
of each vcpu of the domain. Therefore, the cpumap field is no
longer necessary: if we want all the vcpus to have the same
affinity, we just put it in all the elements of the array.

This makes the libxl code simpler and easier to understand
and maintain (only one place where to read the affinity), and
does not complicate things much on the xl side, that is why
we go for it.

Another benefit is that, by unifying the parsing (at the xl
level) and the place where the information is consumed and the
affinity are actually set (at the libxl level), it becomes
possible to do things like:

  cpus = ["3-4", "2-6"]

meaning we want vcpu 0 to be pinned to pcpu 3,4 and vcpu 1 to
be pinned to pcpu 2,3,4,5,6. Before this change, in fact, the
list variant (["xx", "yy"]) supported only single values.
(Of course, the old [2, 3] syntax, i.e. without the quotes, continues
to work, although it's not possible to specify ranges with it.)

IN SUMMARY, although it is still there, and it is still honoured,
for backward compatibility reasons, the cpumap field in build_info
should not be used any longer. The vcpu_hard_affinity array is
what should be used instead.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v8:
 * don't get rid of b_info->cpumap handling, so old apps
   continue to work, as requested during review;
 * changelog and code comments updated accordingly.
---
 docs/man/xl.cfg.pod.5       |    8 +++---
 tools/libxl/libxl_dom.c     |   10 ++++++-
 tools/libxl/libxl_types.idl |    7 ++++-
 tools/libxl/xl_cmdimpl.c    |   61 +++++++++++++++++--------------------------
 4 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index c087cbc..af48622 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
 results in all the vcpus of the guest running on all the cpus on the
 host, except for the cpus belonging to the host NUMA node 1.
 
-=item ["2", "3"] (or [2, 3])
+=item ["2", "3-8,^5"]
 
-To ask for specific vcpu mapping. That means (in this example), vcpu #0
-of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
-run on cpu #3 of the host.
+To ask for specific vcpu mapping. That means (in this example), vcpu 0
+of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
+run on cpus 3,4,6,7,8 of the host.
 
 =back
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 484ad84..b22b41e 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -250,7 +250,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
      * whatever that turns out to be.
      */
     if (libxl_defbool_val(info->numa_placement)) {
-        if (!libxl_bitmap_is_full(&info->cpumap)) {
+        if (!libxl_bitmap_is_full(&info->cpumap) ||
+            info->num_vcpu_hard_affinity) {
             LOG(ERROR, "Can run NUMA placement only if no vcpu "
                        "affinity is specified");
             return ERROR_INVAL;
@@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
             return rc;
     }
     libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
+    /*
+     * info->cpumap is DEPRECATED, but we still want old applications
+     * that may be using it to continue working.
+     */
+    if (!libxl_bitmap_is_full(&info->cpumap))
+        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
+                  "Please, use the vcpu_hard_affinity array instead");
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                                &info->cpumap, NULL);
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 05978d7..0b3e4e9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -297,7 +297,12 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
 libxl_domain_build_info = Struct("domain_build_info",[
     ("max_vcpus",       integer),
     ("avail_vcpus",     libxl_bitmap),
-    ("cpumap",          libxl_bitmap),
+    ("cpumap",          libxl_bitmap), # DEPRECATED!
+    # The cpumap field above has been deprecated by the introduction of the
+    # vcpu_hard_affinity array. It is not removed and it is still honoured, for
+    # API stability and backward compatibility reasons, but should not be used
+    # any longer. The vcpu_hard_affinity array is what should be used instead,
+    # to set the hard affinity of the various vCPUs.
     ("nodemap",         libxl_bitmap),
     ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
     ("numa_placement",  libxl_defbool),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index ac603c8..d9e235e 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -656,14 +656,16 @@ static int update_cpumap_range(const char *str, libxl_bitmap *cpumap)
 static int vcpupin_parse(const char *cpu, libxl_bitmap *cpumap)
 {
     char *ptr, *saveptr = NULL;
+    char *buf = strdup(cpu);
     int rc = 0;
 
-    for (ptr = strtok_r(cpu, ",", &saveptr); ptr;
+    for (ptr = strtok_r(buf, ",", &saveptr); ptr;
          ptr = strtok_r(NULL, ",", &saveptr)) {
         rc = update_cpumap_range(ptr, cpumap);
         if (rc)
             break;
     }
+    free(buf);
 
     return rc;
 }
@@ -821,14 +823,11 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxvcpus", &l, 0))
         b_info->max_vcpus = l;
 
-    if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1)) {
+    buf = NULL;
+    if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1) ||
+        !xlu_cfg_get_string (config, "cpus", &buf, 0)) {
         b_info->num_vcpu_hard_affinity = b_info->max_vcpus;
-        int n_cpus = 0;
-
-        if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
-            fprintf(stderr, "Unable to allocate cpumap\n");
-            exit(1);
-        }
+        const char *buf2;
 
         b_info->vcpu_hard_affinity =
             xmalloc(b_info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));
@@ -840,42 +839,30 @@ static void parse_config_data(const char *config_source,
                 fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
                 exit(1);
             }
-            libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
+            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[i]);
         }
 
-        libxl_bitmap_set_none(&b_info->cpumap);
-        while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
-            i = atoi(buf);
-            if (!libxl_bitmap_cpu_valid(&b_info->cpumap, i)) {
-                fprintf(stderr, "cpu %d illegal\n", i);
-                exit(1);
-            }
-            libxl_bitmap_set(&b_info->cpumap, i);
-            if (n_cpus < b_info->max_vcpus) {
-                libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[n_cpus]);
-                libxl_bitmap_set(&b_info->vcpu_hard_affinity[n_cpus], i);
-            }
-            n_cpus++;
+        /*
+         * When buf is !NULL, we've been passed a string, and what we do
+         * is parse it and put the result in all the entries of the vcpu
+         * affinity array. If it's NULL, what we have is a list, and what
+         * we put in each entry of the vcpu affinity array is the result of
+         * the parsing of each element of the list (if there are more
+         * vcpus than elements, the missing ones have their affinity masks
+         * completely full).
+         */
+        for (i = 0; i < b_info->num_vcpu_hard_affinity; i++) {
+            if (buf || ((buf2 = xlu_cfg_get_listitem(cpus, i)) != NULL)) {
+                if (vcpupin_parse(buf ? buf : buf2,
+                                  &b_info->vcpu_hard_affinity[i]))
+                    exit(1);
+            } else
+                libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
         }
 
         /* We have a list of cpumaps, disable automatic placement */
         libxl_defbool_set(&b_info->numa_placement, false);
     }
-    else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
-        char *buf2 = strdup(buf);
-
-        if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
-            fprintf(stderr, "Unable to allocate cpumap\n");
-            exit(1);
-        }
-
-        libxl_bitmap_set_none(&b_info->cpumap);
-        if (vcpupin_parse(buf2, &b_info->cpumap))
-            exit(1);
-        free(buf2);
-
-        libxl_defbool_set(&b_info->numa_placement, false);
-    }
 
     if (!xlu_cfg_get_long (config, "memory", &l, 0)) {
         b_info->max_memkb = l * 1024;


* [PATCH v9 7/9] xl: move the vcpu affinity parsing in a function
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
                   ` (5 preceding siblings ...)
  2014-06-18 14:28 ` [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
@ 2014-06-18 14:28 ` Dario Faggioli
  2014-06-18 15:55   ` Ian Campbell
  2014-06-18 14:28 ` [PATCH v9 8/9] libxl/xl: enable for specifying soft-affinity in the config file Dario Faggioli
  2014-06-18 14:28 ` [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
  8 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

so that the same code can be shared for parsing the soft
affinity, which is introduced in the next patch.

This is pure code motion, so no functional change intended,
and it is being done as a separate patch for the sake of
separating code motion from actual changes, i.e., for
facilitating reviews.
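As a reading aid, the string-vs-list semantics that the moved code preserves can be modelled in a small stand-alone C sketch. This is illustrative only: plain uint64_t masks stand in for libxl_bitmap, and fill_affinity() is a hypothetical name, not part of libxl.

```c
#include <stdint.h>
#include <stddef.h>

#define FULL_MASK UINT64_MAX

/* Hypothetical model of the semantics parse_vcpu_affinity() keeps:
 * a single mask applies to every vCPU (the cpus="..." string form),
 * while a list applies element by element (the cpus=[...] form), and
 * vCPUs beyond the end of the list get a completely full mask. */
static void fill_affinity(uint64_t *out, size_t num_vcpus,
                          const uint64_t *single_mask,
                          const uint64_t *list, size_t list_len)
{
    for (size_t i = 0; i < num_vcpus; i++) {
        if (single_mask)
            out[i] = *single_mask;   /* string form: same mask for all */
        else if (i < list_len)
            out[i] = list[i];        /* list form: per-vCPU mask */
        else
            out[i] = FULL_MASK;      /* more vCPUs than list items */
    }
}
```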

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
 tools/libxl/xl_cmdimpl.c |   81 ++++++++++++++++++++++++----------------------
 1 file changed, 43 insertions(+), 38 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d9e235e..d08a149 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -692,6 +692,47 @@ static void parse_top_level_sdl_options(XLU_Config *config,
     xlu_cfg_replace_string (config, "xauthority", &sdl->xauthority, 0);
 }
 
+static void parse_vcpu_affinity(XLU_Config *config, XLU_ConfigList *cpus,
+                                const char *buf, libxl_domain_build_info *info)
+{
+    info->num_vcpu_hard_affinity = info->max_vcpus;
+    const char *buf2;
+    int i;
+
+    info->vcpu_hard_affinity =
+        xmalloc(info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));
+
+    for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
+        libxl_bitmap_init(&info->vcpu_hard_affinity[i]);
+        if (libxl_cpu_bitmap_alloc(ctx, &info->vcpu_hard_affinity[i], 0)) {
+            fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
+            exit(1);
+        }
+        libxl_bitmap_set_none(&info->vcpu_hard_affinity[i]);
+    }
+
+    /*
+     * When buf is !NULL, we've been passed a string, and what we do
+     * is parse it and put the result in all the entries of the vcpu
+     * affinity array. If it's NULL, what we have is a list, and what
+     * we put in each entry of the vcpu affinity array is the result of
+     * the parsing of each element of the list (if there are more
+     * vcpus than elements, the missing ones have their affinity masks
+     * completely full).
+     */
+    for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
+        if (buf || ((buf2 = xlu_cfg_get_listitem(cpus, i)) != NULL)) {
+            if (vcpupin_parse(buf ? buf : buf2,
+                              &info->vcpu_hard_affinity[i]))
+                exit(1);
+        } else
+            libxl_bitmap_set_any(&info->vcpu_hard_affinity[i]);
+    }
+
+    /* We have a list of cpumaps, disable automatic placement */
+    libxl_defbool_set(&info->numa_placement, false);
+}
+
 static void parse_config_data(const char *config_source,
                               const char *config_data,
                               int config_len,
@@ -825,44 +866,8 @@ static void parse_config_data(const char *config_source,
 
     buf = NULL;
     if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1) ||
-        !xlu_cfg_get_string (config, "cpus", &buf, 0)) {
-        b_info->num_vcpu_hard_affinity = b_info->max_vcpus;
-        const char *buf2;
-
-        b_info->vcpu_hard_affinity =
-            xmalloc(b_info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));
-
-        for (i = 0; i < b_info->num_vcpu_hard_affinity; i++) {
-            libxl_bitmap_init(&b_info->vcpu_hard_affinity[i]);
-            if (libxl_cpu_bitmap_alloc(ctx,
-                                       &b_info->vcpu_hard_affinity[i], 0)) {
-                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
-                exit(1);
-            }
-            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[i]);
-        }
-
-        /*
-         * When buf is !NULL, we've been passed a string, and what we do
-         * is parse it and put the result in all the entries of the vcpu
-         * affinity array. If it's NULL, what we have is a list, and what
-         * we put in each entry of the vcpu affinity array is the result of
-         * the parsing of each element of the list (if there are more
-         * vcpus than elements, the missing ones have their affinity masks
-         * completely full).
-         */
-        for (i = 0; i < b_info->num_vcpu_hard_affinity; i++) {
-            if (buf || ((buf2 = xlu_cfg_get_listitem(cpus, i)) != NULL)) {
-                if (vcpupin_parse(buf ? buf : buf2,
-                                  &b_info->vcpu_hard_affinity[i]))
-                    exit(1);
-            } else
-                libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
-        }
-
-        /* We have a list of cpumaps, disable automatic placement */
-        libxl_defbool_set(&b_info->numa_placement, false);
-    }
+        !xlu_cfg_get_string (config, "cpus", &buf, 0))
+        parse_vcpu_affinity(config, cpus, buf, b_info);
 
     if (!xlu_cfg_get_long (config, "memory", &l, 0)) {
         b_info->max_memkb = l * 1024;


* [PATCH v9 8/9] libxl/xl: enable for specifying soft-affinity in the config file
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
                   ` (6 preceding siblings ...)
  2014-06-18 14:28 ` [PATCH v9 7/9] xl: move the vcpu affinity parsing in a function Dario Faggioli
@ 2014-06-18 14:28 ` Dario Faggioli
  2014-06-18 16:00   ` Ian Campbell
  2014-06-18 14:28 ` [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
  8 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

in a similar way to how hard-affinity is specified (i.e.,
exactly how plain vcpu-affinity was being specified before
this change).

To do so, we add a vcpu_soft_affinity array to build_info,
and treat it much like vcpu_hard_affinity. The new config
option is called "cpus_soft".
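The interplay between the two masks described here (and in the xl.cfg changes below) is that the intersection of hard and soft affinity is mapped to NUMA nodes to drive memory allocation. A simplified sketch under stated assumptions: uint64_t masks replace libxl_bitmap, and pcpu_to_node is an assumed host topology table, not a libxl structure.

```c
#include <stdint.h>

/* Simplified model: the intersection of the hard (cpus=) and soft
 * (cpus_soft=) pCPU masks is mapped to the set of NUMA nodes those
 * pCPUs belong to; that node set is the domain node-affinity. */
static uint64_t node_affinity(uint64_t hard, uint64_t soft,
                              const int *pcpu_to_node, int num_pcpus)
{
    uint64_t both = hard & soft, nodes = 0;
    int cpu;

    for (cpu = 0; cpu < num_pcpus; cpu++)
        if (both & ((uint64_t)1 << cpu))
            nodes |= (uint64_t)1 << pcpu_to_node[cpu];
    return nodes;
}
```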

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Changes from v8:
 * fix a typo in the LIBXL_HAVE_xxx macro name.

Changes from v7:
 * WARNING: this patch underwent quite a fundamental rework,
   given it's now building on top of Wei's "push vcpu affinity
   to libxl" patch. That's why I think it should be re-reviewed
   almost from scratch (sorry! :-P), and that's why I did not
   add IanC's ack, although he provided it to the v7 version
   of it.

Changes from v6:
 * update and improve the changelog.

Changes from v4:
 * fix typos and rephrase docs, as suggested during review;
 * more refactoring, i.e., factoring out more of the potential
   common code, as requested during review.

Changes from v3:
 * fix typos and language issues in docs and comments, as
   suggested during review;
 * common code to soft and hard affinity parsing factored
   together, as requested during review.

Changes from v2:
 * use the new libxl API. Although the implementation changed
   only a little bit, I removed IanJ's Acked-by, although I am
   here saying that he did provide it, as requested.
---
 docs/man/xl.cfg.pod.5       |   23 ++++++++++++++++++++---
 tools/libxl/libxl.h         |   14 ++++++++++++++
 tools/libxl/libxl_dom.c     |   18 ++++++++++++------
 tools/libxl/libxl_types.idl |    1 +
 tools/libxl/xl_cmdimpl.c    |   40 +++++++++++++++++++++++++++-------------
 5 files changed, 74 insertions(+), 22 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index af48622..25a4ff7 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -152,19 +152,36 @@ run on cpus 3,4,6,7,8 of the host.
 =back
 
 If this option is not specified, no vcpu to cpu pinning is established,
-and the vcpus of the guest can run on all the cpus of the host.
+and the vcpus of the guest can run on all the cpus of the host. If this
+option is specified, the intersection of the vcpu pinning mask, provided
+here, and the soft affinity mask, provided via B<cpus\_soft=> (if any),
+is utilized to compute the domain node-affinity, for driving memory
+allocations.
 
 If we are on a NUMA machine (i.e., if the host has more than one NUMA
 node) and this option is not specified, libxl automatically tries to
 place the guest on the least possible number of nodes. That, however,
 will not affect vcpu pinning, so the guest will still be able to run on
-all the cpus, it will just prefer the ones from the node it has been
-placed on. A heuristic approach is used for choosing the best node (or
+all the cpus. A heuristic approach is used for choosing the best node (or
 set of nodes), with the goals of maximizing performance for the guest
 and, at the same time, achieving efficient utilization of host cpus
 and memory. See F<docs/misc/xl-numa-placement.markdown> for more
 details.
 
+=item B<cpus_soft="CPU-LIST">
+
+Exactly as B<cpus=>, but specifies soft affinity, rather than pinning
+(hard affinity). When using the credit scheduler, this means the cpus
+on which the vcpus of the domain prefer to run.
+
+A C<CPU-LIST> is specified exactly as above, for B<cpus=>.
+
+If this option is not specified, the vcpus of the guest will not have
+any preference regarding on what cpu to run. If this option is specified,
+the intersection of the soft affinity mask, provided here, and the vcpu
+pinning, provided via B<cpus=> (if any), is utilized to compute the
+domain node-affinity, for driving memory allocations.
+
 =back
 
 =head3 CPU Scheduling
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 663abe2..59dc3ea 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -353,6 +353,20 @@
 #define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY 1
 
 /*
+ * LIBXL_HAVE_BUILDINFO_VCPU_SOFT_AFFINITY_ARRAY
+ *
+ * If this is defined, then libxl_domain_build_info structure will
+ * contain vcpu_soft_affinity, an array of libxl_bitmap that contains
+ * the necessary information to set the soft affinity of each VCPU to
+ * a set of PCPUs.
+ *
+ * The number of libxl_bitmap in the array is equal to the maximum number
+ * of VCPUs. The size of each bitmap is computed based on the maximum
+ * number of PCPUs.
+ */
+#define LIBXL_HAVE_BUILDINFO_VCPU_SOFT_AFFINITY_ARRAY 1
+
+/*
  * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
  *
  * If this is defined, then the libxl_domain_build_info structure will
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index b22b41e..4a52da1 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -272,14 +272,20 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                                &info->cpumap, NULL);
 
-    /* If we have the vcpu hard affinity list, honour it */
-    if (info->num_vcpu_hard_affinity) {
+    /* If we have the vcpu hard and/or soft affinity list, honour it/them */
+    if (info->num_vcpu_hard_affinity || info->num_vcpu_soft_affinity) {
         int i;
 
-        for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
-            if (libxl_set_vcpuaffinity(ctx, domid, i,
-                                       &info->vcpu_hard_affinity[i],
-                                       NULL)) {
+        for (i = 0; i < info->max_vcpus; i++) {
+            libxl_bitmap *hard_affinity, *soft_affinity;
+
+            hard_affinity = info->num_vcpu_hard_affinity ?
+                &info->vcpu_hard_affinity[i] : NULL;
+            soft_affinity = d_config->b_info.num_vcpu_soft_affinity ?
+                &info->vcpu_soft_affinity[i] : NULL;
+
+            if (libxl_set_vcpuaffinity(ctx, domid, i, hard_affinity,
+                                       soft_affinity)) {
                 LOG(ERROR, "setting affinity failed on vcpu `%d'", i);
                 return ERROR_FAIL;
             }
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 0b3e4e9..35e36b0 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -305,6 +305,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # to set the hard affinity of the various vCPUs.
     ("nodemap",         libxl_bitmap),
     ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
+    ("vcpu_soft_affinity", Array(libxl_bitmap, "num_vcpu_soft_affinity")),
     ("numa_placement",  libxl_defbool),
     ("tsc_mode",        libxl_tsc_mode),
     ("max_memkb",       MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d08a149..d0a8557 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -693,22 +693,32 @@ static void parse_top_level_sdl_options(XLU_Config *config,
 }
 
 static void parse_vcpu_affinity(XLU_Config *config, XLU_ConfigList *cpus,
-                                const char *buf, libxl_domain_build_info *info)
+                                const char *buf, libxl_domain_build_info *info,
+                                bool is_hard)
 {
-    info->num_vcpu_hard_affinity = info->max_vcpus;
     const char *buf2;
+    libxl_bitmap *vcpu_affinity_array;
     int i;
 
-    info->vcpu_hard_affinity =
-        xmalloc(info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));
+    if (is_hard) {
+        info->num_vcpu_hard_affinity = info->max_vcpus;
+        info->vcpu_hard_affinity =
+            xmalloc(info->max_vcpus * sizeof(libxl_bitmap));
+        vcpu_affinity_array = info->vcpu_hard_affinity;
+    } else {
+        info->num_vcpu_soft_affinity = info->max_vcpus;
+        info->vcpu_soft_affinity =
+            xmalloc(info->max_vcpus * sizeof(libxl_bitmap));
+        vcpu_affinity_array = info->vcpu_soft_affinity;
+    }
 
-    for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
-        libxl_bitmap_init(&info->vcpu_hard_affinity[i]);
-        if (libxl_cpu_bitmap_alloc(ctx, &info->vcpu_hard_affinity[i], 0)) {
+    for (i = 0; i < info->max_vcpus; i++) {
+        libxl_bitmap_init(&vcpu_affinity_array[i]);
+        if (libxl_cpu_bitmap_alloc(ctx, &vcpu_affinity_array[i], 0)) {
             fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
             exit(1);
         }
-        libxl_bitmap_set_none(&info->vcpu_hard_affinity[i]);
+        libxl_bitmap_set_none(&vcpu_affinity_array[i]);
     }
 
     /*
@@ -720,13 +730,12 @@ static void parse_vcpu_affinity(XLU_Config *config, XLU_ConfigList *cpus,
      * vcpus than elements, the missing ones have their affinity masks
      * completely full).
      */
-    for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
+    for (i = 0; i < info->max_vcpus; i++) {
         if (buf || ((buf2 = xlu_cfg_get_listitem(cpus, i)) != NULL)) {
-            if (vcpupin_parse(buf ? buf : buf2,
-                              &info->vcpu_hard_affinity[i]))
+            if (vcpupin_parse(buf ? buf : buf2, &vcpu_affinity_array[i]))
                 exit(1);
         } else
-            libxl_bitmap_set_any(&info->vcpu_hard_affinity[i]);
+            libxl_bitmap_set_any(&vcpu_affinity_array[i]);
     }
 
     /* We have a list of cpumaps, disable automatic placement */
@@ -867,7 +876,12 @@ static void parse_config_data(const char *config_source,
     buf = NULL;
     if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1) ||
         !xlu_cfg_get_string (config, "cpus", &buf, 0))
-        parse_vcpu_affinity(config, cpus, buf, b_info);
+        parse_vcpu_affinity(config, cpus, buf, b_info, /* is_hard */ true);
+
+    buf = NULL;
+    if (!xlu_cfg_get_list (config, "cpus_soft", &cpus, 0, 1) ||
+        !xlu_cfg_get_string (config, "cpus_soft", &buf, 0))
+        parse_vcpu_affinity(config, cpus, buf, b_info, /* is_hard */ false);
 
     if (!xlu_cfg_get_long (config, "memory", &l, 0)) {
         b_info->max_memkb = l * 1024;


* [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity
  2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
                   ` (7 preceding siblings ...)
  2014-06-18 14:28 ` [PATCH v9 8/9] libxl/xl: enable for specifying soft-affinity in the config file Dario Faggioli
@ 2014-06-18 14:28 ` Dario Faggioli
  2014-06-19 17:33   ` Dario Faggioli
  8 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 14:28 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3

vCPU soft affinity and NUMA-aware scheduling do not have
to be related. However, soft affinity is how NUMA-aware
scheduling is actually implemented, and therefore, by default,
the results of automatic NUMA placement (at VM creation time)
are also used to set the soft affinity of all the vCPUs of
the domain.

Of course, this only happens if automatic NUMA placement is
enabled and actually takes place (for instance, if the user
does not specify any hard or soft affinity in the xl config
file).

This also takes care of the vice versa, i.e., automatic
placement is not triggered if the config file specifies either
a hard (the check for which was already there) or a soft (the
check for which is introduced by this commit) affinity.
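The nodemap-to-cpumap step the patch performs (via libxl_nodemap_to_cpumap() in the diff below) amounts to taking the union of the pCPUs belonging to the nodes chosen by placement. A stand-alone sketch, with uint64_t masks in place of libxl_bitmap and an assumed pcpu_to_node topology table:

```c
#include <stdint.h>

/* The soft affinity mask derived from automatic placement is the
 * union of all pCPUs belonging to the chosen NUMA nodes. */
static uint64_t nodemap_to_cpumap(uint64_t nodemap,
                                  const int *pcpu_to_node, int num_pcpus)
{
    uint64_t cpumap = 0;
    int cpu;

    for (cpu = 0; cpu < num_pcpus; cpu++)
        if (nodemap & ((uint64_t)1 << pcpu_to_node[cpu]))
            cpumap |= (uint64_t)1 << cpu;
    return cpumap;
}
```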

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
Changes from v8:
 * fixed a trivial memory leak (of a libxl_bitmap) introduced
   in v8 itself.

Changes from v7:
 * the patch has been reworked, following the major changes
   in the preceding ones. However, such rework involves
   --I think-- only straightforward code transformation,
   so I'm actually retaining the Ack-s this time.

Changes from v3:
 * rephrase comments and docs, as suggested during review.
---
 docs/man/xl.cfg.pod.5                |   21 +++++++++++----------
 docs/misc/xl-numa-placement.markdown |   14 ++++++++++++--
 tools/libxl/libxl_dom.c              |   32 +++++++++++++++++++++++++++++---
 3 files changed, 52 insertions(+), 15 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 25a4ff7..fe6e56d 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -158,16 +158,6 @@ here, and the soft affinity mask, provided via B<cpus\_soft=> (if any),
 is utilized to compute the domain node-affinity, for driving memory
 allocations.
 
-If we are on a NUMA machine (i.e., if the host has more than one NUMA
-node) and this option is not specified, libxl automatically tries to
-place the guest on the least possible number of nodes. That, however,
-will not affect vcpu pinning, so the guest will still be able to run on
-all the cpus. A heuristic approach is used for choosing the best node (or
-set of nodes), with the goals of maximizing performance for the guest
-and, at the same time, achieving efficient utilization of host cpus
-and memory. See F<docs/misc/xl-numa-placement.markdown> for more
-details.
-
 =item B<cpus_soft="CPU-LIST">
 
 Exactly as B<cpus=>, but specifies soft affinity, rather than pinning
@@ -182,6 +172,17 @@ the intersection of the soft affinity mask, provided here, and the vcpu
 pinning, provided via B<cpus=> (if any), is utilized to compute the
 domain node-affinity, for driving memory allocations.
 
+If this option is not specified (and B<cpus=> is not specified either),
+libxl automatically tries to place the guest on the least possible
+number of nodes. A heuristic approach is used for choosing the best
+node (or set of nodes), with the goal of maximizing performance for
+the guest and, at the same time, achieving efficient utilization of
+host cpus and memory. In that case, the soft affinity of all the vcpus
+of the domain will be set to the pcpus belonging to the NUMA nodes
+chosen during placement.
+
+For more details, see F<docs/misc/xl-numa-placement.markdown>.
+
 =back
 
 =head3 CPU Scheduling
diff --git a/docs/misc/xl-numa-placement.markdown b/docs/misc/xl-numa-placement.markdown
index 9d64eae..f863492 100644
--- a/docs/misc/xl-numa-placement.markdown
+++ b/docs/misc/xl-numa-placement.markdown
@@ -126,10 +126,20 @@ or Xen won't be able to guarantee the locality for their memory accesses.
 That, of course, also mean the vCPUs of the domain will only be able to
 execute on those same pCPUs.
 
+It is also possible to have a "cpus\_soft=" option in the xl config file,
+to specify the soft affinity for all the vCPUs of the domain. This affects
+the NUMA placement in the following way:
+
+ * if only "cpus\_soft=" is present, the VM's node-affinity will be equal
+   to the nodes to which the pCPUs in the soft affinity mask belong;
+ * if both "cpus\_soft=" and "cpus=" are present, the VM's node-affinity
+   will be equal to the nodes to which the pCPUs present both in hard and
+   soft affinity belong.
+
 ### Placing the guest automatically ###
 
-If no "cpus=" option is specified in the config file, libxl tries
-to figure out on its own on which node(s) the domain could fit best.
+If neither "cpus=" nor "cpus\_soft=" are present in the config file, libxl
+tries to figure out on its own on which node(s) the domain could fit best.
 If it finds one (some), the domain's node affinity get set to there,
 and both memory allocations and NUMA aware scheduling (for the credit
 scheduler and starting from Xen 4.3) will comply with it. Starting from
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 4a52da1..58f46e3 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -248,18 +248,44 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
      * some weird error manifests) the subsequent call to
      * libxl_domain_set_nodeaffinity() will do the actual placement,
      * whatever that turns out to be.
+     *
+     * As far as scheduling is concerned, we achieve NUMA-aware scheduling
+     * by having the results of placement affect the soft affinity of all
+     * the vcpus of the domain. Of course, we want that iff placement is
+     * enabled and actually happens, so we only change info->cpumap_soft to
+     * reflect the placement result if that is the case
      */
     if (libxl_defbool_val(info->numa_placement)) {
+        libxl_bitmap cpumap_soft;
+
+        /* We require both hard and soft affinity not to be set */
         if (!libxl_bitmap_is_full(&info->cpumap) ||
-            info->num_vcpu_hard_affinity) {
+            info->num_vcpu_hard_affinity || info->num_vcpu_soft_affinity) {
             LOG(ERROR, "Can run NUMA placement only if no vcpu "
-                       "affinity is specified");
+                       "(hard or soft) affinity is specified");
             return ERROR_INVAL;
         }
 
-        rc = numa_place_domain(gc, domid, info);
+        rc = libxl_cpu_bitmap_alloc(ctx, &cpumap_soft, 0);
         if (rc)
             return rc;
+
+        rc = numa_place_domain(gc, domid, info);
+        if (rc) {
+            libxl_bitmap_dispose(&cpumap_soft);
+            return rc;
+        }
+
+        /*
+         * All we need to do now is converting the result of automatic
+         * placement from nodemap to cpumap, and then use such cpumap as
+         * the soft affinity for all the vcpus of the domain.
+         */
+        libxl_nodemap_to_cpumap(ctx, &info->nodemap, &cpumap_soft);
+        libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
+                                   NULL, &cpumap_soft);
+
+        libxl_bitmap_dispose(&cpumap_soft);
     }
     libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
     /*


* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 14:28 ` [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
@ 2014-06-18 15:09   ` Wei Liu
  2014-06-18 16:40     ` Dario Faggioli
  2014-06-18 15:53   ` Ian Campbell
  1 sibling, 1 reply; 34+ messages in thread
From: Wei Liu @ 2014-06-18 15:09 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Wei Liu, Ian.Campbell, Andrew.Cooper3, George.Dunlap, xen-devel,
	Ian.Jackson

On Wed, Jun 18, 2014 at 04:28:25PM +0200, Dario Faggioli wrote:
[...]
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 484ad84..b22b41e 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -250,7 +250,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>       * whatever that turns out to be.
>       */
>      if (libxl_defbool_val(info->numa_placement)) {
> -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> +        if (!libxl_bitmap_is_full(&info->cpumap) ||
> +            info->num_vcpu_hard_affinity) {

Not related to this patch, I'm wondering why you used
libxl_bitmap_is_full in the first place. (And I admit I missed this spot
when I wrote my patch, sorry)

>              LOG(ERROR, "Can run NUMA placement only if no vcpu "
>                         "affinity is specified");
>              return ERROR_INVAL;
> @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>              return rc;
>      }
>      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> +    /*
> +     * info->cpumap is DEPRECATED, but we still want old applications
> +     * that may be using it to continue working.
> +     */
> +    if (!libxl_bitmap_is_full(&info->cpumap))
> +        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
> +                  "Please, use the vcpu_hard_affinity array instead");

Shouldn't it be !libxl_bitmap_is_empty?

Wei.


* Re: [PATCH v9 4/9] xl: enable getting and setting soft affinity
  2014-06-18 14:28 ` [PATCH v9 4/9] xl: enable getting and setting " Dario Faggioli
@ 2014-06-18 15:37   ` Ian Campbell
  0 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2014-06-18 15:37 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel

On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> +    /*
> +     * Soft affinity is handled similarly. Only difference: we also want
> +     * to pass NULL to libxl_set_vcpuaffinity() if it is not specified.
> +     */
> +    if (argc <= optind+3 || !strcmp(soft_str, "-"))

soft_str == NULL might have been better, given you've already got the
argc test there, but regardless:

Acked-by: Ian Campbell <ian.campbell@citrix.com>


* Re: [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-18 14:28 ` [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
@ 2014-06-18 15:44   ` Ian Campbell
  2014-06-18 16:32     ` Dario Faggioli
  2014-06-20  8:17   ` Dario Faggioli
  1 sibling, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2014-06-18 15:44 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel

On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> This patch introduces an array of libxl_bitmap called "vcpu_hard_affinity"
> in libxl IDL to preserve VCPU to PCPU mapping. This is necessary for libxl
> to preserve all information to construct a domain.
> 
> Also define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY in libxl.h to
> mark the change in API.
> 
> This patch was originally part of Wei's series about pushing as much
> information as possible on domain configuration in libxl, rather than
> xl. See here, for more details:
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01026.html
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01031.html
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Changes from v8:
>  * typo in the changelog.
> 
> Changes from v7:
>  This patch is new in the series, and it is coming from Wei's series
>  about pushing domain config information down to libxl. It is being
>  incorporated in this series to reduce as much as possible the
>  inter-dependencies between the two patch series, i.e., for ease
>  of development and review.
> ---
>  tools/libxl/libxl.h         |   15 ++++++++++
>  tools/libxl/libxl_dom.c     |   14 +++++++++
>  tools/libxl/libxl_types.idl |    1 +
>  tools/libxl/xl_cmdimpl.c    |   65 ++++++++++++-------------------------------
>  4 files changed, 48 insertions(+), 47 deletions(-)
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index c2b143b..663abe2 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -338,6 +338,21 @@
>  #endif
>  
>  /*
> + * LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY
> + *
> + * If this is defined, then libxl_domain_build_info structure will
> + * contain vcpu_hard_affinity, an array of libxl_bitmap that contains
> + * the necessary information to set the hard affinity of each VCPU to
> + * a set of PCPUs. Libxl will try to pin VCPUs to PCPUs according to
> + * this list.

Something needs to describe somewhere how this relates to the existing
cpumap field. I think they are sort of unioned (i.e. cpumap is applied
then this new thing can override it)?

Is that the most desirable semantics, it seems potentially confusing to
me. Perhaps cpumap should be ignored if this array is of non-zero size?

> + * The number of libxl_bitmap in the array equals to the maximum number
> + * of VCPUs. The size of each bitmap is computed basing on the maximum
> + * number of PCPUs.

These are all things which the caller is expect to arrange by making
appropriately sized allocations, not inherent properties of the API, I
think.

So "the number of libxl_bitmap in the array *should* be equal". "The
size of each bitmap should ..." etc.

Ian.


* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 14:28 ` [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
  2014-06-18 15:09   ` Wei Liu
@ 2014-06-18 15:53   ` Ian Campbell
  2014-06-18 16:26     ` Dario Faggioli
  1 sibling, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2014-06-18 15:53 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel

On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> as, thanks to previous change ("libxl/xl: push VCPU affinity
> pinning down to libxl"), we now have an array of libxl_bitmap-s
> that can be used to transfer to libxl the vcpu (hard) affinity
> of each vcpu of the domain. Therefore, the cpumap field is no
> longer necessary: if we want all the vcpus to have the same
> affinity, we just put it in all the elements of the array.
> 
> This makes the libxl code simpler and easier to understand
> and maintain (only one place where to read the affinity), and
> does not complicate things much on the xl side, that is why
> we go for it.
> 
> Another benefit is that, by unifying the parsing (at the xl
> level) and the place where the information is consumed and the
> affinity are actually set (at the libxl level), it becomes
> possible to do things like:
> 
>   cpus = ["3-4", "2-6"]
> 
> meaning we want vcpu 0 to be pinned to pcpu 3,4 and vcpu 1 to
> be pinned to pcpu 2,3,4,5,6. Before this change, in fact, the
> list variant (["xx", "yy"]) supported only single values.
> (Of course, the old [2, 3] syntax, i.e., with no '"', continues to work,
> although it's not possible to specify ranges with it.)
> 
> IN SUMMARY, although it is still there, and it is still honoured,
> for backward compatibility reasons, the cpumap field in build_info
> should not be used any longer. The vcpu_hard_affinity array is
> what should be used instead.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Changes from v8:
>  * don't get rid of b_info->cpumap handling, so old apps
>    continue to work, as requested during review;
>  * changelog and code comments updated accordingly.
> ---
>  docs/man/xl.cfg.pod.5       |    8 +++---
>  tools/libxl/libxl_dom.c     |   10 ++++++-
>  tools/libxl/libxl_types.idl |    7 ++++-
>  tools/libxl/xl_cmdimpl.c    |   61 +++++++++++++++++--------------------------
>  4 files changed, 43 insertions(+), 43 deletions(-)
> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index c087cbc..af48622 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
>  results in all the vcpus of the guest running on all the cpus on the
>  host, except for the cpus belonging to the host NUMA node 1.
>  
> -=item ["2", "3"] (or [2, 3])
> +=item ["2", "3-8,^5"]
>  
> -To ask for specific vcpu mapping. That means (in this example), vcpu #0
> -of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
> -run on cpu #3 of the host.
> +To ask for specific vcpu mapping. That means (in this example), vcpu 0
> +of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
> +run on cpus 3,4,6,7,8 of the host.

Why is deprecating a field in the libxl API changing the xl
configuration file syntax?

> @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>              return rc;
>      }
>      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> +    /*
> +     * info->cpumap is DEPRECATED, but we still want old applications
> +     * that may be using it to continue working.
> +     */
> +    if (!libxl_bitmap_is_full(&info->cpumap))

The caller is expected to initialise this unused field to a non-default
state? That doesn't sound right. Did you mean !is_empty?

TBH I think you'd be better off just silently ignoring cpumap if the new
thing is set.

Or maybe converting the cpumap into the new array so the rest of the
libxl internals only needs to deal with one.

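[A standalone sketch of the conversion being suggested here, with a plain 64-bit mask standing in for libxl_bitmap; the helper name is illustrative, not actual libxl code:]

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified model: each affinity mask is one 64-bit word, bit n == pcpu n. */

/*
 * If the caller set the deprecated single cpumap but not the per-vcpu
 * array, expand the cpumap into one entry per vcpu, so the rest of the
 * code only ever has to look at the array.
 */
static uint64_t *convert_cpumap_to_array(uint64_t cpumap, int max_vcpus)
{
    uint64_t *arr = malloc(sizeof(*arr) * max_vcpus);
    if (!arr)
        return NULL;
    for (int i = 0; i < max_vcpus; i++)
        arr[i] = cpumap;   /* every vcpu gets the same hard affinity */
    return arr;
}
```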
> +        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
> +                  "Please, use the vcpu_hard_affinity array instead");
>      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
>                                 &info->cpumap, NULL);
>  
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 05978d7..0b3e4e9 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -297,7 +297,12 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>  libxl_domain_build_info = Struct("domain_build_info",[
>      ("max_vcpus",       integer),
>      ("avail_vcpus",     libxl_bitmap),
> -    ("cpumap",          libxl_bitmap),
> +    ("cpumap",          libxl_bitmap), # DEPRECATED!
> +    # The cpumap field above has been deprecated by the introduction of the
> +    # vcpu_hard_affinity array. It is not removed and it is still honoured, for
> +    # API stability and backward compatibility reasons, but should not be used
> +    # any longer. The vcpu_hard_affinity array is what should be used instead,
> +    # to set the hard affinity of the various vCPUs.

This comment needs to talk about the precedence between the two fields
in the event that both are present.

WRT the structure of the series: All of the libxl deprecation stuff here
could be squashed into the previous patch which added the new field.
That would make more sense since otherwise you have a middle state where
both fields are present and valid and it is ill defined what is what.

All the xl stuff could then come next as a "move away from deprecated
interface" patch.

As it is each patch seems to do half of each thing. I'm not entirely
sure what the intermediate state is supposed to be.

> @@ -840,42 +839,30 @@ static void parse_config_data(const char *config_source,
>                  fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
>                  exit(1);
>              }
> -            libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
> +            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[i]);

What do these sorts of changes have to do with the deprecation of
another field?

It looks to me like the previous patch has just done something wrong and
you are fixing it here for some reason.


Ian.


* Re: [PATCH v9 7/9] xl: move the vcpu affinity parsing in a function
  2014-06-18 14:28 ` [PATCH v9 7/9] xl: move the vcpu affinity parsing in a function Dario Faggioli
@ 2014-06-18 15:55   ` Ian Campbell
  0 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2014-06-18 15:55 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel

On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> so that the same code can be shared for the parsing of the
> soft affinity, being introduced in the next patch.
> 
> This is pure code motion, so no functional change intended,
> and it is being done as a separate patch for the sake of
> separating code motion from actual changes, i.e., for
> facilitating reviews.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

On the understanding that it is just code motion:

Acked-by: Ian Campbell <ian.campbell@citrix.com>


* Re: [PATCH v9 8/9] libxl/xl: enable for specifying soft-affinity in the config file
  2014-06-18 14:28 ` [PATCH v9 8/9] libxl/xl: enable for specifying soft-affinity in the config file Dario Faggioli
@ 2014-06-18 16:00   ` Ian Campbell
  2014-06-18 17:02     ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2014-06-18 16:00 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel

On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:

$subject starts "enable for specifying". I think a word must be missing.

> in a similar way to how hard-affinity is specified (i.e.,
> exactly how plain vcpu-affinity was being specified before
> this change).
> 
> To do so, we add a vcpu_soft_affinity array to build_info,
> and treat it much like vcpu_hard_affinity. The new config
> option is called "cpus_soft".

Not any more.

>  /*
> + * LIBXL_HAVE_BUILDINFO_VCPU_SOFT_AFFINITY_ARRAY

You could probably combine this #define with the HARD ones, after all
they arrived at the same time.

i.e. in the hard affinity patch:
/* to be uncommented when soft array added */
/* #define LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS */
and then uncomment it here.

At the very least they could share a doc comment.

> + * If this is defined, then libxl_domain_build_info structure will
> + * contain vcpu_soft_affinity, an array of libxl_bitmap that contains
> + * the necessary information to set the soft affinity of each VCPU to
> + * a set of PCPUs.
> + *
> + * The number of libxl_bitmap in the array equals to the maximum number
> + * of VCPUs. The size of each bitmap is computed basing on the maximum
> + * number of PCPUs.

Like with the previous version this should be worded in terms of what is
expected of the caller.

> + */
> +#define LIBXL_HAVE_BUILDINFO_VCPU_SOFT_AFFINITY_ARRAY 1
> +
> +/*
>   * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
>   *
>   * If this is defined, then the libxl_domain_build_info structure will
> @@ -720,13 +730,12 @@ static void parse_vcpu_affinity(XLU_Config *config, XLU_ConfigList *cpus,
>       * vcpus than elements, the missing ones have their affinity masks
>       * completely full).
>       */
> -    for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
> +    for (i = 0; i < info->max_vcpus; i++) {

Would have been better to just start this way.


* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 15:53   ` Ian Campbell
@ 2014-06-18 16:26     ` Dario Faggioli
  2014-06-18 16:44       ` Ian Campbell
  0 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 16:26 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 6402 bytes --]

On mer, 2014-06-18 at 16:53 +0100, Ian Campbell wrote:
> On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:

> > diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> > index c087cbc..af48622 100644
> > --- a/docs/man/xl.cfg.pod.5
> > +++ b/docs/man/xl.cfg.pod.5
> > @@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
> >  results in all the vcpus of the guest running on all the cpus on the
> >  host, except for the cpus belonging to the host NUMA node 1.
> >  
> > -=item ["2", "3"] (or [2, 3])
> > +=item ["2", "3-8,^5"]
> >  
> > -To ask for specific vcpu mapping. That means (in this example), vcpu #0
> > -of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
> > -run on cpu #3 of the host.
> > +To ask for specific vcpu mapping. That means (in this example), vcpu 0
> > +of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
> > +run on cpus 3,4,6,7,8 of the host.
> 
> Why is deprecating a field in the libxl API changing the xl
> configuration file syntax?
> 
Because, as a consequence of xl filling the array instead of just
setting b_info->cpumap, we get to use the same parsing function
(vcpupin_parse()) for all the elements of the array itself, in both
the following cases:

 cpus="1-4,7"

(which sort of becomes equivalent to ["1-4,7", "1-4,7"])

and:

 cpus=["3", "4"]

Up to the previous patch, the latter case was handled by requiring the
elements of the list to be integers. Here we start using vcpupin_parse()
on them, which is more powerful and allows the elements to be ranges.

Anyway, I'll see if I can split this patch in two, as you suggest below,
to make things more evident.

> > @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >              return rc;
> >      }
> >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> > +    /*
> > +     * info->cpumap is DEPRECATED, but we still want old applications
> > +     * that may be using it to continue working.
> > +     */
> > +    if (!libxl_bitmap_is_full(&info->cpumap))
> 
> The caller is expected to initialise this unused field to a non-default
> state? That doesn't sound right. Did you mean !is_empty?
> 
Nope. The default for this is to be full, so what I'm checking is really
that it stayed default. See libxl__domain_build_info_setdefault():

    ...
    if (!b_info->cpumap.size) {
        if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
            return ERROR_FAIL;
        libxl_bitmap_set_any(&b_info->cpumap);
    }
    ...

Can I change this? If I can, I think the best would be to remove the
allocation from libxl__domain_build_info_setdefault(), so that all the
checks could become something like `if(cpumap.size)'.

But does no longer allocating the bitmap qualify as an incompatible API
change? If it does, I think we're stuck with checking whether or not it's
full, to see if the user is doing something with it or not.
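[The two conventions under discussion can be sketched with a simplified stand-in for libxl_bitmap; toy code, not the libxl API:]

```c
#include <stdint.h>

/* Simplified stand-in for libxl_bitmap: size == 0 means "never allocated". */
struct bitmap {
    int size;        /* bytes allocated; 0 if the caller left it alone */
    uint64_t bits;   /* toy payload: one word of cpu bits */
};

/* Convention used by the series: setdefault fills the map completely,
 * so "still full" means "the caller did not touch it". */
static int caller_set_cpumap_v1(const struct bitmap *m)
{
    return m->bits != UINT64_MAX;     /* i.e. !libxl_bitmap_is_full() */
}

/* The alternative floated here: don't allocate in setdefault at all;
 * then a non-zero size proves the caller supplied a map. */
static int caller_set_cpumap_v2(const struct bitmap *m)
{
    return m->size != 0;
}
```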

> TBH I think you'd be better off just silently ignoring cpumap if the new
> thing is set.
> 
This, I can try.

> Or maybe converting the cpumap into the new array so the rest of the
> libxl internals only needs to deal with one.
> 
Converting it to the new array where? AFAICT, this is the only place
where cpumap is used, so I don't think it's worth converting it.
Ignoring it if the array is specified is a better solution, I think.

> > +        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
> > +                  "Please, use the vcpu_hard_affinity array instead");
> >      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
> >                                 &info->cpumap, NULL);
> >  
> > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > index 05978d7..0b3e4e9 100644
> > --- a/tools/libxl/libxl_types.idl
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -297,7 +297,12 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
> >  libxl_domain_build_info = Struct("domain_build_info",[
> >      ("max_vcpus",       integer),
> >      ("avail_vcpus",     libxl_bitmap),
> > -    ("cpumap",          libxl_bitmap),
> > +    ("cpumap",          libxl_bitmap), # DEPRECATED!
> > +    # The cpumap field above has been deprecated by the introduction of the
> > +    # vcpu_hard_affinity array. It is not removed and it is still honoured, for
> > +    # API stability and backward compatibility reasons, but should not be used
> > +    # any longer. The vcpu_hard_affinity array is what should be used instead,
> > +    # to set the hard affinity of the various vCPUs.
> 
> This comment needs to talk about the precedence between the two fields
> in the event that both are present.
> 
Right. I'll add a word about it.

> WRT the structure of the series: All of the libxl deprecation stuff here
> could be squashed into the previous patch which added the new field.
> That would make more sense since otherwise you have a middle state where
> both fields are present and valid and it is ill defined what is what.
> 
> All the xl stuff could then come next as a "move away from deprecated
> interface" patch.
> 
> As it is each patch seems to do half of each thing. I'm not entirely
> sure what the intermediate state is supposed to be.
> 
> > @@ -840,42 +839,30 @@ static void parse_config_data(const char *config_source,
> >                  fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
> >                  exit(1);
> >              }
> > -            libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
> > +            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[i]);
> 
> What do these sorts of changes have to do with the deprecation of
> another field?
> 
> It looks to me like the previous patch has just done something wrong and
> you are fixing it here for some reason.
> 
It's not like that, but I think I see what you mean. I'll try to
restructure the series so it does not give this impression.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-18 15:44   ` Ian Campbell
@ 2014-06-18 16:32     ` Dario Faggioli
  2014-06-18 16:46       ` Ian Campbell
  0 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 16:32 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel



On mer, 2014-06-18 at 16:44 +0100, Ian Campbell wrote:
> On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:

> > diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> > index c2b143b..663abe2 100644
> > --- a/tools/libxl/libxl.h
> > +++ b/tools/libxl/libxl.h
> > @@ -338,6 +338,21 @@
> >  #endif
> >  
> >  /*
> > + * LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY
> > + *
> > + * If this is defined, then libxl_domain_build_info structure will
> > + * contain vcpu_hard_affinity, an array of libxl_bitmap that contains
> > + * the necessary information to set the hard affinity of each VCPU to
> > + * a set of PCPUs. Libxl will try to pin VCPUs to PCPUs according to
> > + * this list.
> 
> Something needs to describe somewhere how this relates to the existing
> cpumap field. I think they are sort of unioned (i.e. cpumap is applied
> then this new thing can override it)?
> 
Ok, I'll add a note about this here too.

> Is that the most desirable semantics, it seems potentially confusing to
> me. Perhaps cpumap should be ignored if this array is of non-zero size?
> 
As I said in the other reply, yes, I'll see if I can do that.

> > + * The number of libxl_bitmap in the array equals to the maximum number
> > + * of VCPUs. The size of each bitmap is computed basing on the maximum
> > + * number of PCPUs.
> 
> These are all things which the caller is expect to arrange by making
> appropriately sized allocations, not inherent properties of the API, I
> think.
> 
> So "the number of libxl_bitmap in the array *should* be equal". "The
> size of each bitmap should ..." etc.
> 
Will fix. I will also add a check that this is actually what we get from
the caller in libxl_dom.c (where the arrays are consumed).

Thanks and Regards,
Dario





* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 15:09   ` Wei Liu
@ 2014-06-18 16:40     ` Dario Faggioli
  2014-06-19 11:06       ` Wei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 16:40 UTC (permalink / raw)
  To: Wei Liu
  Cc: Ian.Jackson, George.Dunlap, Andrew.Cooper3, Ian.Campbell, xen-devel



On mer, 2014-06-18 at 16:09 +0100, Wei Liu wrote:
> On Wed, Jun 18, 2014 at 04:28:25PM +0200, Dario Faggioli wrote:
> [...]
> > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > index 484ad84..b22b41e 100644
> > --- a/tools/libxl/libxl_dom.c
> > +++ b/tools/libxl/libxl_dom.c
> > @@ -250,7 +250,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >       * whatever that turns out to be.
> >       */
> >      if (libxl_defbool_val(info->numa_placement)) {
> > -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> > +        if (!libxl_bitmap_is_full(&info->cpumap) ||
> > +            info->num_vcpu_hard_affinity) {
> 
> Not related to this patch, I'm wondering why you used
> libxl_bitmap_is_full in the first place. (And I admit I missed this spot
> when I wrote my patch, sorry)
> 
Why I used _is_full() in the first place instead of doing what? As said
to Ian in another email of this thread, the default set in
libxl__build_info_set_defaults() for info->cpumap is "mask full", so the
idea there is to check whether a cpumap is being specified by the
caller, or if the default value is still there.

I remember this being discussed quite thoroughly at the time, and it
seemed the best option. Re-thinking about it right now, it probably
would have been better to just leave it alone (I mean, not even allocate
the bitmap!) in _set_defaults(), and check for it having been allocated
by the caller here (with `if(cpumap.size)').

I'm not sure we can change it like that without it being considered an
API compatibility breakage... If we can, I'm all for it (if you
remember, I was trying to do exactly that in v8).

> >              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> >                         "affinity is specified");
> >              return ERROR_INVAL;
> > @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >              return rc;
> >      }
> >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> > +    /*
> > +     * info->cpumap is DEPRECATED, but we still want old applications
> > +     * that may be using it to continue working.
> > +     */
> > +    if (!libxl_bitmap_is_full(&info->cpumap))
> > +        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
> > +                  "Please, use the vcpu_hard_affinity array instead");
> 
> Shouldn't it be !libxl_bitmap_is_empty?
> 
Nope (see above).

Thanks and Regards,
Dario





* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 16:26     ` Dario Faggioli
@ 2014-06-18 16:44       ` Ian Campbell
  2014-06-18 17:11         ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2014-06-18 16:44 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel

On Wed, 2014-06-18 at 18:26 +0200, Dario Faggioli wrote:
> On mer, 2014-06-18 at 16:53 +0100, Ian Campbell wrote:
> > On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> 
> > > diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> > > index c087cbc..af48622 100644
> > > --- a/docs/man/xl.cfg.pod.5
> > > +++ b/docs/man/xl.cfg.pod.5
> > > @@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
> > >  results in all the vcpus of the guest running on all the cpus on the
> > >  host, except for the cpus belonging to the host NUMA node 1.
> > >  
> > > -=item ["2", "3"] (or [2, 3])
> > > +=item ["2", "3-8,^5"]
> > >  
> > > -To ask for specific vcpu mapping. That means (in this example), vcpu #0
> > > -of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
> > > -run on cpu #3 of the host.
> > > +To ask for specific vcpu mapping. That means (in this example), vcpu 0
> > > +of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
> > > +run on cpus 3,4,6,7,8 of the host.
> > 
> > Why is deprecating a field in the libxl API changing the xl
> > configuration file syntax?
> > 
> Because what happens as a consequence of, in xl, filling the array
> instead of just setting b_info->cpumap, is that we get to use the same
> parsing function (vcpupin_parse()) for all the elements of the array
> itself, in both the following cases:
> 
>  cpus="1-4,7"
> 
> (which sort of becomes equivalent to ["1-4,7", "1-4,7"])
> 
> and:
> 
>  cpus=["3", "4"]
> 
> Up to the previous patch, the latter case was handled asking the
> elements of the list to be integers. Here we start using vcpupin_parse()
> on them, and that is more powerful, and allows the elements to be
> ranges.
> 
> Anyway, I'll see if I can split this patch in two, as you suggest below,
> to make things more evident.

Thanks.

> 
> > > @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> > >              return rc;
> > >      }
> > >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> > > +    /*
> > > +     * info->cpumap is DEPRECATED, but we still want old applications
> > > +     * that may be using it to continue working.
> > > +     */
> > > +    if (!libxl_bitmap_is_full(&info->cpumap))
> > 
> > The caller is expected to initialise this unused field to a non-default
> > state? That doesn't sound right. Did you mean !is_empty?
> > 
> Nope. The default for this is to be full, so what I'm checking is really
> that it stayed default. See libxl__domain_build_info_setdefault():
> 
>     ...
>     if (!b_info->cpumap.size) {
>         if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
>             return ERROR_FAIL;
>         libxl_bitmap_set_any(&b_info->cpumap);
>     }
>     ...
> 
> Can I change this? If I can, I think the best would be to remove the
> allocation from libxl__domain_build_info_setdefault(), so that all the
> checks could become something like `if(cpumap.size)'.

I agree.

> But does stop allocating the bitmap qualifies as an incompatible API
> change?

I don't think so, do you think it might for some reason?

>  If yes, I think we're stuck with checking whether or not it's
> full, to see if the user is doing something with it or not.
> 
> > TBH I think you'd be better off just silently ignoring cpumap if the new
> > thing is set.
> > 
> This, I can try.
> 
> > Or maybe converting the cpumap into the new array so the rest of the
> > libxl internals only needs to deal with one.
> > 
> Converting it to the new array where? AFAICT, this is the only place
> where cpumap is used, so I don't think it's worth converting it.

e.g. in setdefaults:
   if (cpumap.size && arraything == NULL)
       allocate an array and fill it from cpumap

but if it is only used once then I guess it doesn't matter.

Ian.


* Re: [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-18 16:32     ` Dario Faggioli
@ 2014-06-18 16:46       ` Ian Campbell
  2014-06-18 17:00         ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2014-06-18 16:46 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel

On Wed, 2014-06-18 at 18:32 +0200, Dario Faggioli wrote:

> > > + * The number of libxl_bitmap in the array equals to the maximum number
> > > + * of VCPUs. The size of each bitmap is computed basing on the maximum
> > > + * number of PCPUs.
> > 
> > These are all things which the caller is expect to arrange by making
> > appropriately sized allocations, not inherent properties of the API, I
> > think.
> > 
> > So "the number of libxl_bitmap in the array *should* be equal". "The
> > size of each bitmap should ..." etc.
> > 
> Will fix. I will also add a check that this is actually what we get from
> the caller in libxl_dom.c (where the arrays are consumed).

Or define what will happen if the array is too short or the bitmaps not
sized appropriately.

I think it would be fine to only pin the first N vcpus for which an array
entry is present and leave the rest floating, and likewise to simply
assume any pcpus past the end of the bitmap are == 0.
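[That tolerant semantics could be sketched like this, with 64-bit masks standing in for libxl_bitmap; illustrative only:]

```c
#include <stdint.h>

/*
 * Toy model of the tolerant semantics described above: given a
 * (possibly short) array of per-vcpu hard-affinity masks, pin only the
 * vcpus that have an entry and leave the rest floating, i.e. fully
 * permissive.
 */
static void apply_hard_affinity(uint64_t *out, int max_vcpus,
                                const uint64_t *affinity, int num_entries)
{
    for (int v = 0; v < max_vcpus; v++) {
        if (v < num_entries)
            out[v] = affinity[v];    /* entry present: pin as requested */
        else
            out[v] = UINT64_MAX;     /* no entry: vcpu floats everywhere */
    }
}
```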

Ian.


* Re: [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-18 16:46       ` Ian Campbell
@ 2014-06-18 17:00         ` Dario Faggioli
  0 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 17:00 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel



On mer, 2014-06-18 at 17:46 +0100, Ian Campbell wrote:
> On Wed, 2014-06-18 at 18:32 +0200, Dario Faggioli wrote:
> 
> > > > + * The number of libxl_bitmap in the array equals to the maximum number
> > > > + * of VCPUs. The size of each bitmap is computed basing on the maximum
> > > > + * number of PCPUs.
> > > 
> > > These are all things which the caller is expect to arrange by making
> > > appropriately sized allocations, not inherent properties of the API, I
> > > think.
> > > 
> > > So "the number of libxl_bitmap in the array *should* be equal". "The
> > > size of each bitmap should ..." etc.
> > > 
> > Will fix. I will also add a check that this is actually what we get from
> > the caller in libxl_dom.c (where the arrays are consumed).
> 
> > Or define what will happen if the array is too short or the bitmaps not
> > sized appropriately.
> 
> > I think it would be fine to only pin the first N vcpus for which an array
> > entry is present and leave the rest floating, and likewise to simply
> > assume any pcpus past the end of the bitmap are == 0.
> 
Sure. It's a bit trickier, then, to deal with both arrays (hard and soft)
at the same time, but it's probably worth a try.

Thanks and Regards,
Dario





* Re: [PATCH v9 8/9] libxl/xl: enable for specifying soft-affinity in the config file
  2014-06-18 16:00   ` Ian Campbell
@ 2014-06-18 17:02     ` Dario Faggioli
  0 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 17:02 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel



On mer, 2014-06-18 at 17:00 +0100, Ian Campbell wrote:
> On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> 
> $subject starts "enable for specifying". I think a word must be missing.
> 
I think I just meant something like "make it possible to specify...".
I'll word it like that.

> > in a similar way to how hard-affinity is specified (i.e.,
> > exactly how plain vcpu-affinity was being specified before
> > this change).
> > 
> > To do so, we add a vcpu_soft_affinity array to build_info,
> > and treat it much like vcpu_hard_affinity. The new config
> > option is called "cpus_soft".
> 
> Not any more.
> 
Ehm... is it not?

> >  /*
> > + * LIBXL_HAVE_BUILDINFO_VCPU_SOFT_AFFINITY_ARRAY
> 
> You could probably combine this #define with the HARD ones, after all
> they arrived at the same time.
> 
> i.e. in the hard affinity patch:
> /* to be uncommented when soft array added */
> /* #define LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS */
> and then uncomment it here.
> 
Ok, I'll do like this.

> At the very least they could share a doc comment.
> 
Right, although I think I'd be at least amending the comment in this
patch.

Thanks and Regards,
Dario





* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 16:44       ` Ian Campbell
@ 2014-06-18 17:11         ` Dario Faggioli
  2014-06-19  9:07           ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-18 17:11 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ian.Jackson, Andrew.Cooper3, Wei Liu, George.Dunlap, xen-devel



On Wed, 2014-06-18 at 17:44 +0100, Ian Campbell wrote:
> On Wed, 2014-06-18 at 18:26 +0200, Dario Faggioli wrote:
> > On mer, 2014-06-18 at 16:53 +0100, Ian Campbell wrote:
> > > On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:

> > > > @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> > > >              return rc;
> > > >      }
> > > >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> > > > +    /*
> > > > +     * info->cpumap is DEPRECATED, but we still want old applications
> > > > +     * that may be using it to continue working.
> > > > +     */
> > > > +    if (!libxl_bitmap_is_full(&info->cpumap))
> > > 
> > > The caller is expected to initialise this unused field to a non-default
> > > state? That doesn't sound right. Did you mean !is_empty?
> > > 
> > Nope. The default for this is to be full, so what I'm checking is really
> > that it stayed default. See libxl__domain_build_info_setdefault():
> > 
> >     ...
> >     if (!b_info->cpumap.size) {
> >         if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
> >             return ERROR_FAIL;
> >         libxl_bitmap_set_any(&b_info->cpumap);
> >     }
> >     ...
> > 
> > Can I change this? If I can, I think the best would be to remove the
> > allocation from libxl__domain_build_info_setdefault(), so that all the
> > checks could become something like `if(cpumap.size)'.
> 
> I agree.
> 
> > But does stopping the allocation of the bitmap qualify as an incompatible
> > API change?
> 
> I don't think so, do you think it might for some reason?
> 
Not sure... Might an existing application rely on the fact that this is
already being allocated? I thought it might, but perhaps I'm
misunderstanding when exactly _setdefaults() is supposed to be called.

If existing apps do as xl does, then changing the default as described
above is not an issue. In fact, in xl, _setdefaults() is called (via
freemem()->libxl_domain_need_memory()) after the config file has already
been parsed, meaning the user has to allocate cpumap themselves if they
want to use it. Is this the intended usage? If yes, I'll happily get
rid of that initializer!

Thanks and Regards,
Dario


* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 17:11         ` Dario Faggioli
@ 2014-06-19  9:07           ` Dario Faggioli
  0 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-19  9:07 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Ian.Jackson, xen-devel, Wei Liu, George.Dunlap, Andrew.Cooper3


[-- Attachment #1.1: Type: text/plain, Size: 1503 bytes --]

On Wed, 2014-06-18 at 19:11 +0200, Dario Faggioli wrote:
> On mer, 2014-06-18 at 17:44 +0100, Ian Campbell wrote:
> > On Wed, 2014-06-18 at 18:26 +0200, Dario Faggioli wrote:

> > > But does stop allocating the bitmap qualifies as an incompatible API
> > > change?
> > 
> > I don't think so, do you think it might for some reason?
> > 
> Not sure... May an existing application rely on the fact that this is
> being allocated already? I thought it may, but perhaps I'm
> misunderstanding when exactly _setdefaults() is supposed to be called.
> 
> If existing apps do as xl, then it's not an issue to change the default
> as said above. In fact, in xl, _setdefaults() is called (via
> freemem()->libxl_domain_need_memory()) after the config file has been
> parsed already, meaning the user has to allocate cpumap himself if he
> wants to use it. Is this the intended usage? If yes, I'll happily get
> rid of that initializer!
> 
I re-checked the code and re-thought about this issue, and I'm convinced
it actually is a non-issue.

So, yes, I'll change how we handle cpumap (and while at it, nodemap too)
in libxl__build_info_setdefaults(). I'll add a patch to that effect in
v10 of this series.

Regards,
Dario


* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-18 16:40     ` Dario Faggioli
@ 2014-06-19 11:06       ` Wei Liu
  2014-06-19 14:00         ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Wei Liu @ 2014-06-19 11:06 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Wei Liu, Ian.Campbell, Andrew.Cooper3, George.Dunlap, xen-devel,
	Ian.Jackson

On Wed, Jun 18, 2014 at 06:40:30PM +0200, Dario Faggioli wrote:
> On mer, 2014-06-18 at 16:09 +0100, Wei Liu wrote:
> > On Wed, Jun 18, 2014 at 04:28:25PM +0200, Dario Faggioli wrote:
> > [...]
> > > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > > index 484ad84..b22b41e 100644
> > > --- a/tools/libxl/libxl_dom.c
> > > +++ b/tools/libxl/libxl_dom.c
> > > @@ -250,7 +250,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> > >       * whatever that turns out to be.
> > >       */
> > >      if (libxl_defbool_val(info->numa_placement)) {
> > > -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> > > +        if (!libxl_bitmap_is_full(&info->cpumap) ||
> > > +            info->num_vcpu_hard_affinity) {
> > 
> > Not related to this patch, I'm wondering why you used
> > libxl_bitmap_is_full in the first place. (And I admit I missed this spot
> > when I wrote my patch, sorry)
> > 
> Why I used _is_full() in the first place instead of doing what? As said
> to Ian in another email of this thread, the default set in
> libxl__build_info_set_defaults() for info->cpumap is "mask full", so the
> idea there is to check whether a cpumap is being specified by the
> caller, or if the default value is still there.
> 
> I remember this being discussed quite thoroughly at the time, and it
> seemed the best option. Re-thinking about it right now, it probably
> would have been better to just leave it alone (I mean, not even allocate
> the bitmap!) in _set_defaults(), and check for it having been allocated
> by the caller here (with `if(cpumap.size)').
> 
> I'm not sure we can change it like that without it being considered an
> API compatibility breakage... If we can, I'm all for it (if you
> remember, I was trying to do just that in v8).
> 

I think these are library-internal details. As long as the application
sees the same behavior, we are fine.

In v8, setting the default value and honoring cpumap were both removed,
so that was wrong. If you choose not to set the full map and then check
the size later, I think that's also OK. That implies changing the
default value from a full bitmap to an empty one, but it's all
library-internal implementation. The application still sees the same
behavior.

> > >              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> > >                         "affinity is specified");
> > >              return ERROR_INVAL;
> > > @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> > >              return rc;
> > >      }
> > >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> > > +    /*
> > > +     * info->cpumap is DEPRECATED, but we still want old applications
> > > +     * that may be using it to continue working.
> > > +     */
> > > +    if (!libxl_bitmap_is_full(&info->cpumap))
> > > +        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
> > > +                  "Please, use the vcpu_hard_affinity array instead");
> > 
> > Shouldn't it be !libxl_bitmap_is_empty?
> > 
> Nope (see above).
> 

OK, so you don't use an empty bitmap as the default value. It is
probably a bit hard to WARN at this point, though, because you cannot
distinguish whether a "full" bitmap is the default value or a
user-specified value. You could probably do it in
libxl__domain_build_info_setdefault() if you stick with the full-bitmap
approach?

Wei.



* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-19 11:06       ` Wei Liu
@ 2014-06-19 14:00         ` Dario Faggioli
  2014-06-19 14:18           ` Wei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-19 14:00 UTC (permalink / raw)
  To: Wei Liu
  Cc: Andrew.Cooper3, Ian.Jackson, xen-devel, Ian.Campbell, George.Dunlap


[-- Attachment #1.1: Type: text/plain, Size: 2628 bytes --]

On Thu, 2014-06-19 at 12:06 +0100, Wei Liu wrote:
> On Wed, Jun 18, 2014 at 06:40:30PM +0200, Dario Faggioli wrote:

> > I'm not sure we can change it like that, without it being considered an
> > API compatibility breakage.... If we can, I'm all for it (if you
> > remember, was trying to do right that in v8).
> > 
> 
> I think this is library internal details. As long as the application
> sees the same behavior then we are fine.
> 
I've looked at the code more closely, understood better how
*_setdefaults() works, and have come to the same determination.

> In V8 setting the default value and honoring cpumap are both removed so
> that's wrong. 
>
Sure. And in fact, what I want --and am doing for v10-- is to kill the
initializer in libxl__build_info_setdefaults() (as was being done in
v8) while still honoring the map if it is allocated and used by the
caller (as you and Ian suggested, and as it was in v9).

> If you choose to not set the full map then check the size
> later I think it's also OK. That implies making the default value from a
> full bitmap to an empty bitmap, but it's all library internal
> implementation. The application still sees the same behavior.
> 
Not an empty bitmap: a 'not-even-allocated bitmap' (which may be what
you meant already, but you know, one better be sure :-P ).

> > > Shouldn't it be !libxl_bitmap_is_empty?
> > > 
> > Nope (see above).
> > 
> 
> OK. If you don't use empty bitmap as default value. It is probably a bit
> hard to WARN at this point because you cannot distinguish whether a
> "full" bitmap is the "default value" or "user specified value". Probably
> you can do it in libxl__domain_build_info_setdefault if you stick with
> the full bitmap approach?
> 
That is exactly why I want to back off from the "full bitmap as default"
thing! :-) It was a very bad choice (to which, I must say, covering my
head with ashes, I contributed myself... but, in my defense, those were
my very early Xen days... :-/)

And the problem is not even just this new WARN: it's already there in
the code, and there are already decisions being made based on it (as
you've seen yourself!), all of which suffer from that same issue you are
raising here.

Let me state this once more: I'm very happy to be steering away from
this. :-)

Thanks and Regards,
Dario


* Re: [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field
  2014-06-19 14:00         ` Dario Faggioli
@ 2014-06-19 14:18           ` Wei Liu
  0 siblings, 0 replies; 34+ messages in thread
From: Wei Liu @ 2014-06-19 14:18 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Wei Liu, Ian.Campbell, Andrew.Cooper3, George.Dunlap, xen-devel,
	Ian.Jackson

On Thu, Jun 19, 2014 at 04:00:55PM +0200, Dario Faggioli wrote:
> On gio, 2014-06-19 at 12:06 +0100, Wei Liu wrote:
> > On Wed, Jun 18, 2014 at 06:40:30PM +0200, Dario Faggioli wrote:
> 
> > > I'm not sure we can change it like that, without it being considered an
> > > API compatibility breakage.... If we can, I'm all for it (if you
> > > remember, was trying to do right that in v8).
> > > 
> > 
> > I think this is library internal details. As long as the application
> > sees the same behavior then we are fine.
> > 
> I've looked at the code more closely, understood better how
> *_setdefaults() works, and have come to the same determination.
> 
> > In V8 setting the default value and honoring cpumap are both removed so
> > that's wrong. 
> >
> Sure. And in fact, what I want --and am doing for v10-- is to kill the
> initializer in libxl__build_info_setdefaults() (as it was being done in
> v8) while keeping the honoring, if the map is allocated and used by the
> caller (as you and Ian suggested, and as it was in v9).
> 

I think that will do.

> > If you choose to not set the full map then check the size
> > later I think it's also OK. That implies making the default value from a
> > full bitmap to an empty bitmap, but it's all library internal
> > implementation. The application still sees the same behavior.
> > 
> Not an empty bitmap: a 'not-even-allocated bitmap' (which may be what
> you meant already, but you know, one better be sure :-P ).
> 

Yes, that's what I meant. :-)

Wei.


* Re: [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity
  2014-06-18 14:28 ` [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
@ 2014-06-19 17:33   ` Dario Faggioli
  2014-06-19 17:42     ` Wei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-19 17:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3


[-- Attachment #1.1: Type: text/plain, Size: 2077 bytes --]

BTW, I've got a question about libxl cleanup/garbage collection, etc.

So, in the code below:

On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:

> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 4a52da1..58f46e3 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -248,18 +248,44 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>       * some weird error manifests) the subsequent call to
>       * libxl_domain_set_nodeaffinity() will do the actual placement,
>       * whatever that turns out to be.
>
>      if (libxl_defbool_val(info->numa_placement)) {
> +        libxl_bitmap cpumap_soft;
> +
> +        /* We require both hard and soft affinity not to be set */
>          if (!libxl_bitmap_is_full(&info->cpumap) ||
> -            info->num_vcpu_hard_affinity) {
> +            info->num_vcpu_hard_affinity || info->num_vcpu_soft_affinity) {
>              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> -                       "affinity is specified");
> +                       "(hard or soft) affinity is specified");
>              return ERROR_INVAL;
>          }
>  
> -        rc = numa_place_domain(gc, domid, info);
> +        rc = libxl_cpu_bitmap_alloc(ctx, &cpumap_soft, 0);
>          if (rc)
>              return rc;
> +
> +        rc = numa_place_domain(gc, domid, info);
> +        if (rc) {
> +            libxl_bitmap_dispose(&cpumap_soft);
> +            return rc;
> +        }
> +
>
Do I need this call to libxl_bitmap_dispose(), to avoid leaking what
libxl_cpu_bitmap_alloc() allocated upon return, or does libxl clean this
up for me? :-)

As you can see, I'm talking about the case where I'm inside libxl
(i.e., within libxl__build_pre()).

Thanks and Regards,
Dario


* Re: [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity
  2014-06-19 17:33   ` Dario Faggioli
@ 2014-06-19 17:42     ` Wei Liu
  2014-06-20  7:18       ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Wei Liu @ 2014-06-19 17:42 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Wei Liu, Ian.Campbell, Andrew.Cooper3, George.Dunlap, xen-devel,
	Ian.Jackson

On Thu, Jun 19, 2014 at 07:33:50PM +0200, Dario Faggioli wrote:
[...]
> > -        rc = numa_place_domain(gc, domid, info);
> > +        rc = libxl_cpu_bitmap_alloc(ctx, &cpumap_soft, 0);
> >          if (rc)
> >              return rc;
> > +
> > +        rc = numa_place_domain(gc, domid, info);
> > +        if (rc) {
> > +            libxl_bitmap_dispose(&cpumap_soft);
> > +            return rc;
> > +        }
> > +
> >
> Do I need this call to libxl_bitmap_dispose(), to avoid leaking what
> libxl_cpu_bitmap_alloc() did upon return, or does libxl cleans this up
> for me? :-)
> 
> As you can see, I'm talking about the case where I'm inside libxl (i.e.,
> within libxl__build_pre() )
> 

Yes, you need to call _dispose. The bitmap is allocated with NOGC so
libxl won't free that for you.

And I think there are other occurrences of this pattern in libxl as well.

Wei.


* Re: [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity
  2014-06-19 17:42     ` Wei Liu
@ 2014-06-20  7:18       ` Dario Faggioli
  0 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-20  7:18 UTC (permalink / raw)
  To: Wei Liu
  Cc: Ian.Jackson, George.Dunlap, Andrew.Cooper3, Ian.Campbell, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1344 bytes --]

On Thu, 2014-06-19 at 18:42 +0100, Wei Liu wrote:
> On Thu, Jun 19, 2014 at 07:33:50PM +0200, Dario Faggioli wrote:
> [...]
> > > -        rc = numa_place_domain(gc, domid, info);
> > > +        rc = libxl_cpu_bitmap_alloc(ctx, &cpumap_soft, 0);
> > >          if (rc)
> > >              return rc;
> > > +
> > > +        rc = numa_place_domain(gc, domid, info);
> > > +        if (rc) {
> > > +            libxl_bitmap_dispose(&cpumap_soft);
> > > +            return rc;
> > > +        }
> > > +
> > >
> > Do I need this call to libxl_bitmap_dispose(), to avoid leaking what
> > libxl_cpu_bitmap_alloc() did upon return, or does libxl cleans this up
> > for me? :-)
> > 
> > As you can see, I'm talking about the case where I'm inside libxl (i.e.,
> > within libxl__build_pre() )
> > 
> 
> Yes, you need to call _dispose. The bitmap is allocated with NOGC so
> libxl won't free that for you.
> 
Ah, right, found it! Also, I re-read the memory management stuff in
libxl.h, and it makes sense now.

Thanks (and sorry for the stupid question).

Dario


* Re: [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-18 14:28 ` [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
  2014-06-18 15:44   ` Ian Campbell
@ 2014-06-20  8:17   ` Dario Faggioli
  2014-06-20  8:58     ` Wei Liu
  1 sibling, 1 reply; 34+ messages in thread
From: Dario Faggioli @ 2014-06-20  8:17 UTC (permalink / raw)
  To: xen-devel
  Cc: Ian.Jackson, George.Dunlap, Wei Liu, Ian.Campbell, Andrew.Cooper3


[-- Attachment #1.1.1: Type: text/plain, Size: 10022 bytes --]

On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
Oh and, Wei, one more thing about this patch.

While preparing v10, I happened to make quite a few changes to the
patch itself. It still accomplishes its goal, and it's still based on
yours, of course, but it does look a bit different now.

I'm attaching it to this email; let me know what you want me to do about
authorship. I'm totally fine with keeping the `From: Wei Liu' tag, but
I'll keep it only if you also are... I don't want you to be blamed for
my bugs! :-P

To get an idea of what happened, have a look at the 'Changes from v9'
section. It's nothing too mind-blowing at all, but the code does look
different.

[sorry for the kind-of top-posting. I'm leaving the content of the
original e-mail here, so that one can easily compare the old (here in
the message body) and new (attached) look of the patch]

Regards,
Dario

> This patch introduces an array of libxl_bitmap called "vcpu_hard_affinity"
> in libxl IDL to preserve VCPU to PCPU mapping. This is necessary for libxl
> to preserve all information to construct a domain.
> 
> Also define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY in libxl.h to
> mark the change in API.
> 
> This patch was originally part of Wei's series about pushing as much
> information as possible on domain configuration in libxl, rather than
> xl. See here, for more details:
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01026.html
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01031.html
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Changes from v8:
>  * typo in the changelog.
> 
> Changes from v7:
>  This patch is new in the series, and it is coming from Wei's series
>  about pushing domain config information down to libxl. It is being
>  incorporated in this series to reduce as much as possible the
>  inter-dependencies between the two patch series, i.e., for ease
>  of development and review.
> ---
>  tools/libxl/libxl.h         |   15 ++++++++++
>  tools/libxl/libxl_dom.c     |   14 +++++++++
>  tools/libxl/libxl_types.idl |    1 +
>  tools/libxl/xl_cmdimpl.c    |   65 ++++++++++++-------------------------------
>  4 files changed, 48 insertions(+), 47 deletions(-)
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index c2b143b..663abe2 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -338,6 +338,21 @@
>  #endif
>  
>  /*
> + * LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY
> + *
> + * If this is defined, then libxl_domain_build_info structure will
> + * contain vcpu_hard_affinity, an array of libxl_bitmap that contains
> + * the necessary information to set the hard affinity of each VCPU to
> + * a set of PCPUs. Libxl will try to pin VCPUs to PCPUs according to
> + * this list.
> + *
> + * The number of libxl_bitmap in the array equals to the maximum number
> + * of VCPUs. The size of each bitmap is computed basing on the maximum
> + * number of PCPUs.
> + */
> +#define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY 1
> +
> +/*
>   * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
>   *
>   * If this is defined, then the libxl_domain_build_info structure will
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index a90a8d5..484ad84 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -264,6 +264,20 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
>                                 &info->cpumap, NULL);
>  
> +    /* If we have the vcpu hard affinity list, honour it */
> +    if (info->num_vcpu_hard_affinity) {
> +        int i;
> +
> +        for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
> +            if (libxl_set_vcpuaffinity(ctx, domid, i,
> +                                       &info->vcpu_hard_affinity[i],
> +                                       NULL)) {
> +                LOG(ERROR, "setting affinity failed on vcpu `%d'", i);
> +                return ERROR_FAIL;
> +            }
> +        }
> +    }
> +
>      if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
>          LIBXL_MAXMEM_CONSTANT) < 0) {
>          LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 2da58e2..05978d7 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -299,6 +299,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>      ("avail_vcpus",     libxl_bitmap),
>      ("cpumap",          libxl_bitmap),
>      ("nodemap",         libxl_bitmap),
> +    ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
>      ("numa_placement",  libxl_defbool),
>      ("tsc_mode",        libxl_tsc_mode),
>      ("max_memkb",       MemKB),
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 516a4d7..ac603c8 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -88,9 +88,6 @@ xlchild children[child_max];
>  static const char *common_domname;
>  static int fd_lock = -1;
>  
> -/* Stash for specific vcpu to pcpu mappping */
> -static int *vcpu_to_pcpu;
> -
>  static const char savefileheader_magic[32]=
>      "Xen saved domain, xl format\n \0 \r";
>  
> @@ -825,6 +822,7 @@ static void parse_config_data(const char *config_source,
>          b_info->max_vcpus = l;
>  
>      if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1)) {
> +        b_info->num_vcpu_hard_affinity = b_info->max_vcpus;
>          int n_cpus = 0;
>  
>          if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
> @@ -832,21 +830,19 @@ static void parse_config_data(const char *config_source,
>              exit(1);
>          }
>  
> -        /* Prepare the array for single vcpu to pcpu mappings */
> -        vcpu_to_pcpu = xmalloc(sizeof(int) * b_info->max_vcpus);
> -        memset(vcpu_to_pcpu, -1, sizeof(int) * b_info->max_vcpus);
> +        b_info->vcpu_hard_affinity =
> +            xmalloc(b_info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));
> +
> +        for (i = 0; i < b_info->num_vcpu_hard_affinity; i++) {
> +            libxl_bitmap_init(&b_info->vcpu_hard_affinity[i]);
> +            if (libxl_cpu_bitmap_alloc(ctx,
> +                                       &b_info->vcpu_hard_affinity[i], 0)) {
> +                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
> +                exit(1);
> +            }
> +            libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
> +        }
>  
> -        /*
> -         * Idea here is to let libxl think all the domain's vcpus
> -         * have cpu affinity with all the pcpus on the list.
> -         * It is then us, here in xl, that matches each single vcpu
> -         * to its pcpu (and that's why we need to stash such info in
> -         * the vcpu_to_pcpu array now) after the domain has been created.
> -         * Doing it like this saves the burden of passing to libxl
> -         * some big array hosting the single mappings. Also, using
> -         * the cpumap derived from the list ensures memory is being
> -         * allocated on the proper nodes anyway.
> -         */
>          libxl_bitmap_set_none(&b_info->cpumap);
>          while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
>              i = atoi(buf);
> @@ -855,12 +851,14 @@ static void parse_config_data(const char *config_source,
>                  exit(1);
>              }
>              libxl_bitmap_set(&b_info->cpumap, i);
> -            if (n_cpus < b_info->max_vcpus)
> -                vcpu_to_pcpu[n_cpus] = i;
> +            if (n_cpus < b_info->max_vcpus) {
> +                libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[n_cpus]);
> +                libxl_bitmap_set(&b_info->vcpu_hard_affinity[n_cpus], i);
> +            }
>              n_cpus++;
>          }
>  
> -        /* We have a cpumap, disable automatic placement */
> +        /* We have a list of cpumaps, disable automatic placement */
>          libxl_defbool_set(&b_info->numa_placement, false);
>      }
>      else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
> @@ -2217,33 +2215,6 @@ start:
>      if ( ret )
>          goto error_out;
>  
> -    /* If single vcpu to pcpu mapping was requested, honour it */
> -    if (vcpu_to_pcpu) {
> -        libxl_bitmap vcpu_cpumap;
> -
> -        ret = libxl_cpu_bitmap_alloc(ctx, &vcpu_cpumap, 0);
> -        if (ret)
> -            goto error_out;
> -        for (i = 0; i < d_config.b_info.max_vcpus; i++) {
> -
> -            if (vcpu_to_pcpu[i] != -1) {
> -                libxl_bitmap_set_none(&vcpu_cpumap);
> -                libxl_bitmap_set(&vcpu_cpumap, vcpu_to_pcpu[i]);
> -            } else {
> -                libxl_bitmap_set_any(&vcpu_cpumap);
> -            }
> -            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap, NULL)) {
> -                fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
> -                libxl_bitmap_dispose(&vcpu_cpumap);
> -                free(vcpu_to_pcpu);
> -                ret = ERROR_FAIL;
> -                goto error_out;
> -            }
> -        }
> -        libxl_bitmap_dispose(&vcpu_cpumap);
> -        free(vcpu_to_pcpu); vcpu_to_pcpu = NULL;
> -    }
> -
>      ret = libxl_userdata_store(ctx, domid, "xl",
>                                      config_data, config_len);
>      if (ret) {
> 



[-- Attachment #1.1.2: libxl-xl-vcpu-1to1-maping-in-libxl_from-Wei.patch --]
[-- Type: text/x-patch, Size: 12024 bytes --]

commit ac8c06d07721053ebcd4115ad30a6fa06cfa4086
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Wed Jun 11 10:52:20 2014 +0200

    libxl/xl: push VCPU affinity pinning down to libxl
    
    This patch introduces an array of libxl_bitmap called "vcpu_hard_affinity"
    in libxl IDL to preserve VCPU to PCPU mapping. This is necessary for libxl
    to preserve all information to construct a domain.
    
    The array accommodates at most max_vcpus elements, each containing the
    affinity of the respective VCPU. If fewer than max_vcpus bitmaps are
    present, the VCPUs associated with the missing elements will just stay
    with their default affinity (they'll be free to execute on every PCPU).
    
    In case both this new field and the already existing cpumap field are
    used, the content of the array will override what is set in cpumap. (In
    xl, we make sure that this never happens, by using only one of the two
    at any given time.)
    
    The proper macro to mark the API change (called
    LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS) is added, but commented out.
    It will be uncommented by the patch in the series that completes the
    process by adding the "vcpu_soft_affinity" array. This is because the
    two fields are being added essentially together, and are very similar
    in both meaning and usage, so it makes sense for them to share the
    same marker.
    
    This patch was originally part of Wei's series about pushing as much
    domain configuration information as possible down into libxl, rather
    than keeping it in xl. See here for more details:
      http://lists.xen.org/archives/html/xen-devel/2014-06/msg01026.html
      http://lists.xen.org/archives/html/xen-devel/2014-06/msg01031.html
    
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
    ---
    Changes from v9:
     * update the LIBXL_HAVE_xxx macro so that it can account for both
       vcpu_hard_affinity array, introduced in this patch, and
       vcpu_soft_affinity array, introduced in another patch of the
       series, as suggested during review;
     * update the LIBXL_HAVE_xxx macro comment accordingly. Also,
       reword it as requested during review;
     * in xl code, avoid preallocating the whole vcpu_hard_affinity array.
       That is not necessary: vcpus not present in the array will
       stay with their default affinity, no need to pass any empty,
       or full, or whatever cpumap for them;
     * still in xl, while dealing with a list ("cpus = [2, 3]"), it
       is not necessary to mess with cpumap, so kill any reference to
       that.
    
    Changes from v8:
     * typo in the changelog.
    
    Changes from v7:
     This patch is new in the series; it comes from Wei's series about
     pushing domain config information down to libxl. It is being
     incorporated in this series to minimise the inter-dependencies
     between the two patch series, i.e., for ease of development and
     review.
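
As a concrete illustration of the list semantics described above (a
hypothetical guest configuration; the pcpu numbers are arbitrary), an xl
config like the following pins vcpu 0 to pcpu 2 and vcpu 1 to pcpu 3,
while vcpus 2 and 3 keep their default affinity:

```
# hypothetical 4-vcpu guest
maxvcpus = 4
cpus = [2, 3]   # vcpu0 -> pcpu2, vcpu1 -> pcpu3; vcpu2/vcpu3 stay unpinned
```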

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a8477c9..d63cd11 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -340,6 +340,31 @@ typedef struct libxl__ctx libxl_ctx;
 #endif
 
 /*
+ * LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS
+ *
+ * If this is defined, then the libxl_domain_build_info structure will
+ * contain two arrays of libxl_bitmap-s, with all the necessary information
+ * to set the hard affinity (vcpu_hard_affinity) and the soft affinity
+ * (vcpu_soft_affinity) of the VCPUs.
+ *
+ * Note that, if the vcpu_hard_affinity array is used, libxl will ignore
+ * the content of the cpumap field of libxl_domain_build_info. That is to
+ * say, if the array is allocated and used by the caller, it and it
+ * alone determines the hard affinity of the domain's VCPUs.
+ *
+ * The number of libxl_bitmap-s in the arrays should be equal to the
+ * maximum number of VCPUs of the domain. If there are only N elements in
+ * an array, with N smaller than the maximum number of VCPUs, the hard or
+ * soft affinity (depending on which array we are talking about) will be
+ * set only for the first N VCPUs. The other VCPUs will just have affinity,
+ * both hard and soft, with all the host PCPUs.
+ * Each bitmap should be big enough to accommodate the maximum number of
+ * PCPUs of the host.
+ */
+/* to be uncommented when soft array added */
+/* #define LIBXL_HAVE_BUILDINFO_VCPU_AFFINITY_ARRAYS 1 */
+
+/*
  * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
  *
  * If this is defined, then the libxl_domain_build_info structure will
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 5d8a0be..7a60ee2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -249,7 +249,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
      * libxl_domain_set_nodeaffinity() that enacts the actual placement.
      */
     if (libxl_defbool_val(info->numa_placement)) {
-        if (info->cpumap.size) {
+        if (info->cpumap.size || info->num_vcpu_hard_affinity) {
             LOG(ERROR, "Can run NUMA placement only if no vcpu "
                        "affinity is specified explicitly");
             return ERROR_INVAL;
@@ -271,10 +271,23 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     }
     if (info->nodemap.size)
         libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
-    if (info->cpumap.size)
+    /* As mentioned in libxl.h, vcpu_hard_affinity takes precedence */
+    if (info->num_vcpu_hard_affinity) {
+        int i;
+
+        for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
+            if (libxl_set_vcpuaffinity(ctx, domid, i,
+                                       &info->vcpu_hard_affinity[i],
+                                       NULL)) {
+                LOG(ERROR, "setting affinity failed on vcpu `%d'", i);
+                return ERROR_FAIL;
+            }
+        }
+    } else if (info->cpumap.size)
         libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
                                    &info->cpumap, NULL);
 
+
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
         LIBXL_MAXMEM_CONSTANT) < 0) {
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 37df854..5607ea7 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -303,6 +303,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("avail_vcpus",     libxl_bitmap),
     ("cpumap",          libxl_bitmap),
     ("nodemap",         libxl_bitmap),
+    ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
     ("numa_placement",  libxl_defbool),
     ("tsc_mode",        libxl_tsc_mode),
     ("max_memkb",       MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c639c78..f2f5fb2 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -88,9 +88,6 @@ xlchild children[child_max];
 static const char *common_domname;
 static int fd_lock = -1;
 
-/* Stash for specific vcpu to pcpu mappping */
-static int *vcpu_to_pcpu;
-
 static const char savefileheader_magic[32]=
     "Xen saved domain, xl format\n \0 \r";
 
@@ -703,7 +700,7 @@ static void parse_config_data(const char *config_source,
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms;
     XLU_ConfigList *ioports, *irqs, *iomem;
-    int num_ioports, num_irqs, num_iomem;
+    int num_ioports, num_irqs, num_iomem, num_cpus;
     int pci_power_mgmt = 0;
     int pci_msitranslate = 0;
     int pci_permissive = 0;
@@ -800,43 +797,32 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxvcpus", &l, 0))
         b_info->max_vcpus = l;
 
-    if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1)) {
-        int n_cpus = 0;
+    if (!xlu_cfg_get_list (config, "cpus", &cpus, &num_cpus, 1)) {
+        int j = 0;
 
-        if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
-            fprintf(stderr, "Unable to allocate cpumap\n");
-            exit(1);
-        }
+        /* Silently ignore values corresponding to non-existent vcpus */
+        if (num_cpus > b_info->max_vcpus)
+            num_cpus = b_info->max_vcpus;
 
-        /* Prepare the array for single vcpu to pcpu mappings */
-        vcpu_to_pcpu = xmalloc(sizeof(int) * b_info->max_vcpus);
-        memset(vcpu_to_pcpu, -1, sizeof(int) * b_info->max_vcpus);
+        b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
 
-        /*
-         * Idea here is to let libxl think all the domain's vcpus
-         * have cpu affinity with all the pcpus on the list.
-         * It is then us, here in xl, that matches each single vcpu
-         * to its pcpu (and that's why we need to stash such info in
-         * the vcpu_to_pcpu array now) after the domain has been created.
-         * Doing it like this saves the burden of passing to libxl
-         * some big array hosting the single mappings. Also, using
-         * the cpumap derived from the list ensures memory is being
-         * allocated on the proper nodes anyway.
-         */
-        libxl_bitmap_set_none(&b_info->cpumap);
-        while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
+        while ((buf = xlu_cfg_get_listitem(cpus, j)) != NULL && j < num_cpus) {
             i = atoi(buf);
-            if (!libxl_bitmap_cpu_valid(&b_info->cpumap, i)) {
-                fprintf(stderr, "cpu %d illegal\n", i);
+
+            libxl_bitmap_init(&b_info->vcpu_hard_affinity[j]);
+            if (libxl_cpu_bitmap_alloc(ctx,
+                                       &b_info->vcpu_hard_affinity[j], 0)) {
+                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", j);
                 exit(1);
             }
-            libxl_bitmap_set(&b_info->cpumap, i);
-            if (n_cpus < b_info->max_vcpus)
-                vcpu_to_pcpu[n_cpus] = i;
-            n_cpus++;
+            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[j]);
+            libxl_bitmap_set(&b_info->vcpu_hard_affinity[j], i);
+
+            j++;
         }
+        b_info->num_vcpu_hard_affinity = num_cpus;
 
-        /* We have a cpumap, disable automatic placement */
+        /* We have a list of cpumaps, disable automatic placement */
         libxl_defbool_set(&b_info->numa_placement, false);
     }
     else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
@@ -2183,33 +2169,6 @@ start:
     if ( ret )
         goto error_out;
 
-    /* If single vcpu to pcpu mapping was requested, honour it */
-    if (vcpu_to_pcpu) {
-        libxl_bitmap vcpu_cpumap;
-
-        ret = libxl_cpu_bitmap_alloc(ctx, &vcpu_cpumap, 0);
-        if (ret)
-            goto error_out;
-        for (i = 0; i < d_config.b_info.max_vcpus; i++) {
-
-            if (vcpu_to_pcpu[i] != -1) {
-                libxl_bitmap_set_none(&vcpu_cpumap);
-                libxl_bitmap_set(&vcpu_cpumap, vcpu_to_pcpu[i]);
-            } else {
-                libxl_bitmap_set_any(&vcpu_cpumap);
-            }
-            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap, NULL)) {
-                fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
-                libxl_bitmap_dispose(&vcpu_cpumap);
-                free(vcpu_to_pcpu);
-                ret = ERROR_FAIL;
-                goto error_out;
-            }
-        }
-        libxl_bitmap_dispose(&vcpu_cpumap);
-        free(vcpu_to_pcpu); vcpu_to_pcpu = NULL;
-    }
-
     ret = libxl_userdata_store(ctx, domid, "xl",
                                     config_data, config_len);
     if (ret) {

[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-20  8:17   ` Dario Faggioli
@ 2014-06-20  8:58     ` Wei Liu
  2014-06-20 12:01       ` Dario Faggioli
  0 siblings, 1 reply; 34+ messages in thread
From: Wei Liu @ 2014-06-20  8:58 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Wei Liu, Ian.Campbell, Andrew.Cooper3, George.Dunlap, xen-devel,
	Ian.Jackson

On Fri, Jun 20, 2014 at 10:17:17AM +0200, Dario Faggioli wrote:
> On mer, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > 
> Oh and, Wei, one more thing about this patch.
> 
> While preparing v10, I happened to make quite a few changes to the
> patch itself. It still accomplishes its goal, and it's still based on
> yours, of course, but it does look a bit different from it now.
> 
> I'm attaching it to this email, let me know what you want me to do about
> authorship. I'm totally fine with keeping the `From: Wei Liu' tag, but
> I'll keep it only if you also are... I don't want you to be blamed for
> my bugs! :-P
> 

The "From:" tag doesn't matter that much IMO. What matters is the SoB.
If this patch is still based on mine (as far as I can tell it still is),
please keep my SoB, otherwise just drop it. I'm happy with whatever you
do with the "From:" tag.

> [sorry for the kind-of top-posting. I'm leaving the content of the
> original e-mail here, so that one can easily compare the old (here in
> the message body) and new (attached) look of the patch]
> 

The patch looks good to me.

Wei.


* Re: [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl
  2014-06-20  8:58     ` Wei Liu
@ 2014-06-20 12:01       ` Dario Faggioli
  0 siblings, 0 replies; 34+ messages in thread
From: Dario Faggioli @ 2014-06-20 12:01 UTC (permalink / raw)
  To: Wei Liu
  Cc: Andrew.Cooper3, Ian.Jackson, xen-devel, Ian.Campbell, George.Dunlap



On ven, 2014-06-20 at 09:58 +0100, Wei Liu wrote:
> On Fri, Jun 20, 2014 at 10:17:17AM +0200, Dario Faggioli wrote:
> > On mer, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> > > From: Wei Liu <wei.liu2@citrix.com>
> > > 
> > Oh and, Wei, one more thing about this patch.
> > 
> > While preparing v10, I happened to make quite a few changes to the
> > patch itself. It still accomplishes its goal, and it's still based on
> > yours, of course, but it does look a bit different from it now.
> > 
> > I'm attaching it to this email, let me know what you want me to do about
> > authorship. I'm totally fine with keeping the `From: Wei Liu' tag, but
> > I'll keep it only if you also are... I don't want you to be blamed for
> > my bugs! :-P
> > 
> 
> The "From:" tag doesn't matter that much IMO. What matters is the SoB.
> If this patch is still based on mine (as far as I can tell it still is),
> please keep my SoB, otherwise just drop it. I'm happy with whatever you
> do with the "From:" tag.
> 
Well, it's what tells who the author (e.g., as per `git log') of the
patch is, so I think it does matter. The SoB has more to do with the
developer's certificate of origin:
http://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches#Signing_off_a_patch

Or at least that is how I see it.

Anyway, since you say it's fine with you, I'll stick with putting your
name in 'From:'.

> > [sorry for the kind-of top-posting. I'm leaving the content of the
> > original e-mail here, so that one can easily compare the old (here in
> > the message body) and new (attached) look of the patch]
> > 
> 
> The patch looks good to me.
> 
Thanks for looking at it. :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




end of thread, other threads:[~2014-06-20 12:01 UTC | newest]

Thread overview: 34+ messages
2014-06-18 14:27 [PATCH v9 0/9] Implement vcpu soft affinity for credit1 (toolstack side) Dario Faggioli
2014-06-18 14:27 ` [PATCH v9 1/9] libxc/libxl: bump library SONAMEs Dario Faggioli
2014-06-18 14:27 ` [PATCH v9 2/9] libxc: get and set soft and hard affinity Dario Faggioli
2014-06-18 14:28 ` [PATCH v9 3/9] libxl: get and set soft affinity Dario Faggioli
2014-06-18 14:28 ` [PATCH v9 4/9] xl: enable getting and setting " Dario Faggioli
2014-06-18 15:37   ` Ian Campbell
2014-06-18 14:28 ` [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl Dario Faggioli
2014-06-18 15:44   ` Ian Campbell
2014-06-18 16:32     ` Dario Faggioli
2014-06-18 16:46       ` Ian Campbell
2014-06-18 17:00         ` Dario Faggioli
2014-06-20  8:17   ` Dario Faggioli
2014-06-20  8:58     ` Wei Liu
2014-06-20 12:01       ` Dario Faggioli
2014-06-18 14:28 ` [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field Dario Faggioli
2014-06-18 15:09   ` Wei Liu
2014-06-18 16:40     ` Dario Faggioli
2014-06-19 11:06       ` Wei Liu
2014-06-19 14:00         ` Dario Faggioli
2014-06-19 14:18           ` Wei Liu
2014-06-18 15:53   ` Ian Campbell
2014-06-18 16:26     ` Dario Faggioli
2014-06-18 16:44       ` Ian Campbell
2014-06-18 17:11         ` Dario Faggioli
2014-06-19  9:07           ` Dario Faggioli
2014-06-18 14:28 ` [PATCH v9 7/9] xl: move the vcpu affinity parsing in a function Dario Faggioli
2014-06-18 15:55   ` Ian Campbell
2014-06-18 14:28 ` [PATCH v9 8/9] libxl/xl: enable for specifying soft-affinity in the config file Dario Faggioli
2014-06-18 16:00   ` Ian Campbell
2014-06-18 17:02     ` Dario Faggioli
2014-06-18 14:28 ` [PATCH v9 9/9] libxl: automatic NUMA placement affects soft affinity Dario Faggioli
2014-06-19 17:33   ` Dario Faggioli
2014-06-19 17:42     ` Wei Liu
2014-06-20  7:18       ` Dario Faggioli
