* [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
@ 2015-05-07 17:05 Chong Li
  2015-05-07 17:05 ` [PATCH v1 1/4] xen: enabling " Chong Li
                   ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: Chong Li @ 2015-05-07 17:05 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, george.dunlap, dario.faggioli, mengxu,
	jbeulich, lichong659, dgolomb

[Goal]
The current xl sched-rtds tool can only set all VCPUs of a domain to the same
parameters, although the scheduler supports per-VCPU parameters. This patchset
enables the xl sched-rtds tool to configure the VCPUs of a domain with
different parameters.

Such per-VCPU settings can be used in many scenarios. For example, based on Dario's statement in our previous discussion (http://lists.xen.org/archives/html/xen-devel/2014-09/msg00423.html), if there are two real-time applications with different timing requirements running in a multi-VCPU guest domain, it is beneficial to pin these two applications to two separate VCPUs with different scheduling parameters.

What this patchset includes is a wanted and planned feature for the RTDS scheduler (http://wiki.xenproject.org/wiki/RTDS-Based-Scheduler) in Xen 4.6. The interface design of the xl sched-rtds tool is based on Meng's previous discussion with Dario, George and Wei (http://lists.xen.org/archives/html/xen-devel/2015-02/msg02606.html). Basically, there are three main changes (see the sketch after this list for the resulting call flow):

1) in xl, we create an array that records all VCPUs whose parameters are about to change;

2) in libxl, we receive the array and call different xc functions to handle it;

3) in xen and libxc, we use XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo (introduced by this patchset) as the hypercalls for the per-VCPU get/set operations.
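
To make the call flow concrete, here is a minimal, illustrative sketch (not
itself part of the series; the function name and values are made up) of how a
toolstack-side caller would drive the new libxc interface from patch 2/4:

    #include <xenctrl.h>

    /* Set per-VCPU RTDS parameters for two VCPUs of a domain. */
    static int example_set_params(xc_interface *xch, uint32_t domid)
    {
        struct xen_domctl_sched_rtds_params params[2] = {
            { .period = 100, .budget = 50,  .index = 0 },
            { .period = 300, .budget = 150, .index = 1 },
        };

        /* Issues XEN_DOMCTL_SCHEDOP_putvcpuinfo under the hood. */
        return xc_sched_rtds_vcpu_set(xch, domid, params, 2);
    }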


[Usage]
With this patchset in use, the xl sched-rtds tool can:

1) show the budget and period of each VCPU of each domain, by using the "xl sched-rtds" command. Example output:

# xl sched-rtds
Cpupool Pool-0: sched=RTDS
Name                                ID VCPU    Period    Budget
Domain-0                             0    0     10000      4000
vm1                                  1    0      1000       500
vm1                                  1    1      2000      1000
vm2                                  2    0      4000      4000
vm2                                  2    1     10000      4000


2) show the budget and period of each VCPU of a specific domain, by using,
e.g., the "xl sched-rtds -d vm1" command. The output would look like:

# xl sched-rtds -d vm1
Name                                ID VCPU    Period    Budget
vm1                                  1    0      1000       500
vm1                                  1    1      2000      1000


3) set the budget and period of each VCPU of a specific domain, by using,
e.g., the "xl sched-rtds -d vm1 -v 0 -p 100 -b 50" command (where "-v 0"
specifies the VCPU with ID=0). The resulting parameters would be:

# xl sched-rtds -d vm1
Name                                ID VCPU    Period    Budget
vm1                                  1    0       100        50
vm1                                  1    1      2000      1000


Users can also set the budget and period of multiple VCPUs of a specific domain
with a single command, e.g., "xl sched-rtds -d vm1 -v 0 -p 200 -b 100 -v 1 -p 300 -b 150".
The resulting parameters would be:

# xl sched-rtds -d vm1
Name                                ID VCPU    Period    Budget
vm1                                  1    0       200       100
vm1                                  1    1       300       150

4) Users can still set the per-domain parameters (the previous xl sched-rtds tool
already supported this), e.g., "xl sched-rtds -d vm1 -p 500 -b 250". The
resulting parameters would be:

# xl sched-rtds -d vm1
Name                                ID VCPU    Period    Budget
vm1                                  1    0       500       250
vm1                                  1    1       500       250



Chong Li (4):
  xen: enabling XL to set per-VCPU parameters of a domain for RTDS
    scheduler
  libxc: enabling XL to set per-VCPU parameters of a domain for RTDS
    scheduler
  libxl: enabling XL to set per-VCPU parameters of a domain for RTDS
    scheduler
  xl: enabling XL to set per-VCPU parameters of a domain for RTDS
    scheduler

 tools/libxc/include/xenctrl.h |   9 +++
 tools/libxc/xc_rt.c           |  52 +++++++++++++++
 tools/libxl/libxl.c           | 143 ++++++++++++++++++++++++++++++++----------
 tools/libxl/libxl.h           |   6 ++
 tools/libxl/libxl_types.idl   |  11 ++++
 tools/libxl/xl_cmdimpl.c      | 131 +++++++++++++++++++++++++++++---------
 xen/common/sched_rt.c         |  64 +++++++++++++++++++
 xen/common/schedule.c         |   4 +-
 xen/include/public/domctl.h   |  22 ++++++-
 9 files changed, 376 insertions(+), 66 deletions(-)

-- 
1.9.1

* [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler Chong Li
@ 2015-05-07 17:05 ` Chong Li
  2015-05-08  7:49   ` Jan Beulich
  2015-05-11 13:11   ` Dario Faggioli
  2015-05-07 17:05 ` [PATCH v1 2/4] libxc: " Chong Li
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 25+ messages in thread
From: Chong Li @ 2015-05-07 17:05 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	mengxu, jbeulich, lichong659, dgolomb

Add two hypercalls (XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo) to get/set a domain's
per-VCPU parameters. The hypercalls are handled in function rt_dom_cntl.

Add an array pointer in struct xen_domctl_sched_rtds (a member of the union in
struct xen_domctl_scheduler_op), which is used for transferring data between the
toolstack and the hypervisor.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>
---
 xen/common/sched_rt.c       | 64 +++++++++++++++++++++++++++++++++++++++++++++
 xen/common/schedule.c       |  4 ++-
 xen/include/public/domctl.h | 22 +++++++++++++---
 3 files changed, 86 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 7c39a9e..9add5a4 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1085,6 +1085,9 @@ rt_dom_cntl(
     struct list_head *iter;
     unsigned long flags;
     int rc = 0;
+    xen_domctl_sched_rtds_params_t *local_sched;
+    int vcpu_index=0;
+    int i;
 
     switch ( op->cmd )
     {
@@ -1110,6 +1113,67 @@ rt_dom_cntl(
         }
         spin_unlock_irqrestore(&prv->lock, flags);
         break;
+    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
+        op->u.rtds.nr_vcpus = 0;
+        spin_lock_irqsave(&prv->lock, flags);
+        list_for_each( iter, &sdom->vcpu )
+            vcpu_index++;
+        spin_unlock_irqrestore(&prv->lock, flags);
+        op->u.rtds.nr_vcpus = vcpu_index;
+        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
+                vcpu_index);
+        if( local_sched == NULL )
+        {
+            return -ENOMEM;
+        }
+        vcpu_index = 0;
+        spin_lock_irqsave(&prv->lock, flags);
+        list_for_each( iter, &sdom->vcpu )
+        {
+            struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);
+
+            local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
+            local_sched[vcpu_index].period = svc->period / MICROSECS(1);
+            local_sched[vcpu_index].index = vcpu_index;
+            vcpu_index++;
+        }
+        spin_unlock_irqrestore(&prv->lock, flags);
+        copy_to_guest(op->u.rtds.vcpus, local_sched, vcpu_index);
+        xfree(local_sched);
+        rc = 0;
+        break;
+    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
+        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
+                op->u.rtds.nr_vcpus);
+        if( local_sched == NULL )
+        {
+            return -ENOMEM;
+        }
+        copy_from_guest(local_sched, op->u.rtds.vcpus, op->u.rtds.nr_vcpus);
+
+        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
+        {
+            vcpu_index = 0;
+            spin_lock_irqsave(&prv->lock, flags);
+            list_for_each( iter, &sdom->vcpu )
+            {
+                struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);
+                if ( local_sched[i].index == vcpu_index )
+                {
+                    if ( local_sched[i].period <= 0 || local_sched[i].budget <= 0 )
+                         return -EINVAL;
+
+                    svc->period = MICROSECS(local_sched[i].period);
+                    svc->budget = MICROSECS(local_sched[i].budget);
+                    break;
+                }
+                vcpu_index++;
+            }
+            spin_unlock_irqrestore(&prv->lock, flags);
+        }
+        xfree(local_sched);
+        rc = 0;
+        break;
     }
 
     return rc;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index f5a2e55..f820946 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
 
     if ( (op->sched_id != DOM2OP(d)->sched_id) ||
          ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
-          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
+          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
+          (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
+          (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )
         return -EINVAL;
 
     /* NB: the pluggable scheduler code needs to take care
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 10b51ef..490a6b6 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -342,6 +342,16 @@ struct xen_domctl_max_vcpus {
 typedef struct xen_domctl_max_vcpus xen_domctl_max_vcpus_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
 
+struct xen_domctl_sched_rtds_params {
+    /* vcpus' info */
+    uint64_t period;
+    uint64_t budget;
+    uint16_t index;
+    uint16_t padding[3];
+};
+typedef struct xen_domctl_sched_rtds_params xen_domctl_sched_rtds_params_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_sched_rtds_params_t);
+
 
 /* XEN_DOMCTL_scheduler_op */
 /* Scheduler types. */
@@ -351,9 +361,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
 #define XEN_SCHEDULER_ARINC653 7
 #define XEN_SCHEDULER_RTDS     8
 
-/* Set or get info? */
+/* Set or get info */
 #define XEN_DOMCTL_SCHEDOP_putinfo 0
 #define XEN_DOMCTL_SCHEDOP_getinfo 1
+#define XEN_DOMCTL_SCHEDOP_getvcpuinfo 2
+#define XEN_DOMCTL_SCHEDOP_putvcpuinfo 3
+
 struct xen_domctl_scheduler_op {
     uint32_t sched_id;  /* XEN_SCHEDULER_* */
     uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
@@ -373,8 +386,11 @@ struct xen_domctl_scheduler_op {
             uint16_t weight;
         } credit2;
         struct xen_domctl_sched_rtds {
-            uint32_t period;
-            uint32_t budget;
+            uint64_t period;
+            uint64_t budget;
+            XEN_GUEST_HANDLE_64(xen_domctl_sched_rtds_params_t) vcpus;
+            uint16_t nr_vcpus;
+            uint16_t padding[3];
         } rtds;
     } u;
 };
-- 
1.9.1

* [PATCH v1 2/4] libxc: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler Chong Li
  2015-05-07 17:05 ` [PATCH v1 1/4] xen: enabling " Chong Li
@ 2015-05-07 17:05 ` Chong Li
  2015-05-11 13:27   ` Dario Faggioli
  2015-05-07 17:05 ` [PATCH v1 3/4] libxl: " Chong Li
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Chong Li @ 2015-05-07 17:05 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	mengxu, jbeulich, lichong659, dgolomb

Add xc_sched_rtds_vcpu_get/set functions to interact with Xen to get/set a domain's
per-VCPU parameters.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>
---
 tools/libxc/include/xenctrl.h |  9 ++++++++
 tools/libxc/xc_rt.c           | 52 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 6994c51..45cbf91 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -892,6 +892,15 @@ int xc_sched_rtds_domain_get(xc_interface *xch,
                             uint32_t domid,
                             struct xen_domctl_sched_rtds *sdom);
 
+int xc_sched_rtds_vcpu_set(xc_interface *xch,
+                            uint32_t domid,
+                            struct xen_domctl_sched_rtds_params *sdom,
+                            uint16_t num_vcpus);
+int xc_sched_rtds_vcpu_get(xc_interface *xch,
+                            uint32_t domid,
+                            struct xen_domctl_sched_rtds_params *sdom,
+                            uint16_t num_vcpus);
+
 int
 xc_sched_arinc653_schedule_set(
     xc_interface *xch,
diff --git a/tools/libxc/xc_rt.c b/tools/libxc/xc_rt.c
index b2d1cc5..e752815 100644
--- a/tools/libxc/xc_rt.c
+++ b/tools/libxc/xc_rt.c
@@ -63,3 +63,55 @@ int xc_sched_rtds_domain_get(xc_interface *xch,
 
     return rc;
 }
+
+int xc_sched_rtds_vcpu_set(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_sched_rtds_params *sdom,
+                           uint16_t num_vcpus)
+{
+    int rc;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(sdom, sizeof(*sdom) * num_vcpus,
+            XC_HYPERCALL_BUFFER_BOUNCE_IN);
+    if ( xc_hypercall_bounce_pre(xch, sdom) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_scheduler_op;
+    domctl.domain = (domid_t) domid;
+    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RTDS;
+    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_putvcpuinfo;
+    domctl.u.scheduler_op.u.rtds.nr_vcpus = num_vcpus;
+    set_xen_guest_handle(domctl.u.scheduler_op.u.rtds.vcpus, sdom);
+
+    rc = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, sdom);
+    return rc;
+}
+
+int xc_sched_rtds_vcpu_get(xc_interface *xch,
+                           uint32_t domid,
+                           struct xen_domctl_sched_rtds_params *sdom,
+                           uint16_t num_vcpus)
+{
+    int rc;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(sdom, sizeof(*sdom) * num_vcpus,
+            XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, sdom) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_scheduler_op;
+    domctl.domain = (domid_t) domid;
+    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RTDS;
+    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_getvcpuinfo;
+    set_xen_guest_handle(domctl.u.scheduler_op.u.rtds.vcpus, sdom);
+
+    rc = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, sdom);
+    return rc;
+}
+
+
-- 
1.9.1

* [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler Chong Li
  2015-05-07 17:05 ` [PATCH v1 1/4] xen: enabling " Chong Li
  2015-05-07 17:05 ` [PATCH v1 2/4] libxc: " Chong Li
@ 2015-05-07 17:05 ` Chong Li
  2015-05-11 14:06   ` Dario Faggioli
  2015-05-12 10:01   ` Dario Faggioli
  2015-05-07 17:05 ` [PATCH v1 4/4] xl: " Chong Li
  2015-05-11  9:56 ` [PATCH v1 0/4] Enabling " Dario Faggioli
  4 siblings, 2 replies; 25+ messages in thread
From: Chong Li @ 2015-05-07 17:05 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	mengxu, jbeulich, lichong659, dgolomb

Change the sched_rtds_domain_get/set functions to support per-VCPU settings for the RTDS scheduler.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>
---
 tools/libxl/libxl.c         | 143 +++++++++++++++++++++++++++++++++-----------
 tools/libxl/libxl.h         |   6 ++
 tools/libxl/libxl_types.idl |  11 ++++
 3 files changed, 126 insertions(+), 34 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index feb3aa9..5f66753 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5800,10 +5800,17 @@ static int sched_sedf_domain_set(libxl__gc *gc, uint32_t domid,
 static int sched_rtds_domain_get(libxl__gc *gc, uint32_t domid,
                                libxl_domain_sched_params *scinfo)
 {
-    struct xen_domctl_sched_rtds sdom;
-    int rc;
-
-    rc = xc_sched_rtds_domain_get(CTX->xch, domid, &sdom);
+    uint16_t num_vcpus;
+    int rc, i;
+    xc_dominfo_t info;
+    rc = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    if (rc < 0) {
+        LOGE(ERROR, "getting domain info");
+        return ERROR_FAIL;
+    }
+    num_vcpus = info.nr_online_vcpus;
+    struct xen_domctl_sched_rtds_params  sdom[num_vcpus];
+    rc = xc_sched_rtds_vcpu_get(CTX->xch, domid, sdom, num_vcpus);
     if (rc != 0) {
         LOGE(ERROR, "getting domain sched rtds");
         return ERROR_FAIL;
@@ -5812,8 +5819,15 @@ static int sched_rtds_domain_get(libxl__gc *gc, uint32_t domid,
     libxl_domain_sched_params_init(scinfo);
 
     scinfo->sched = LIBXL_SCHEDULER_RTDS;
-    scinfo->period = sdom.period;
-    scinfo->budget = sdom.budget;
+    scinfo->rtds.num_vcpus = num_vcpus;
+    scinfo->rtds.vcpus = (libxl_vcpu *)
+            libxl__malloc(NOGC, sizeof(libxl_vcpu) * num_vcpus);
+
+    for( i = 0; i < num_vcpus; i++) {
+        scinfo->rtds.vcpus[i].period = sdom[i].period;
+        scinfo->rtds.vcpus[i].budget = sdom[i].budget;
+        scinfo->rtds.vcpus[i].index = sdom[i].index;
+    }
 
     return 0;
 }
@@ -5821,43 +5835,104 @@ static int sched_rtds_domain_get(libxl__gc *gc, uint32_t domid,
 static int sched_rtds_domain_set(libxl__gc *gc, uint32_t domid,
                                const libxl_domain_sched_params *scinfo)
 {
-    struct xen_domctl_sched_rtds sdom;
     int rc;
+    int i;
 
-    rc = xc_sched_rtds_domain_get(CTX->xch, domid, &sdom);
-    if (rc != 0) {
-        LOGE(ERROR, "getting domain sched rtds");
-        return ERROR_FAIL;
-    }
+    if(scinfo->rtds.num_vcpus <= 0) {/*set per-dom rtds parameters*/
+        struct xen_domctl_sched_rtds  sdom;
+        rc = xc_sched_rtds_domain_get(CTX->xch, domid, &sdom);
+        if (rc != 0) {
+            LOGE(ERROR, "getting domain sched rtds");
+            return ERROR_FAIL;
+        }
 
-    if (scinfo->period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
-        if (scinfo->period < 1) {
-            LOG(ERROR, "VCPU period is not set or out of range, "
-                       "valid values are larger than 1");
-            return ERROR_INVAL;
+        if (scinfo->period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
+            if (scinfo->period < 1) {
+                LOG(ERROR, "VCPU period is not set or out of range, "
+                           "valid values are larger than 1");
+                return ERROR_INVAL;
+            }
+            sdom.period = scinfo->period;
+        }
+
+        if (scinfo->budget != LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT) {
+            if (scinfo->budget < 1) {
+                LOG(ERROR, "VCPU budget is not set or out of range, "
+                           "valid values are larger than 1");
+                return ERROR_INVAL;
+            }
+            sdom.budget = scinfo->budget;
         }
-        sdom.period = scinfo->period;
-    }
 
-    if (scinfo->budget != LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT) {
-        if (scinfo->budget < 1) {
-            LOG(ERROR, "VCPU budget is not set or out of range, "
-                       "valid values are larger than 1");
+        if (sdom.budget > sdom.period) {
+            LOG(ERROR, "VCPU budget is larger than VCPU period, "
+                       "VCPU budget should be no larger than VCPU period");
             return ERROR_INVAL;
         }
-        sdom.budget = scinfo->budget;
-    }
 
-    if (sdom.budget > sdom.period) {
-        LOG(ERROR, "VCPU budget is larger than VCPU period, "
-                   "VCPU budget should be no larger than VCPU period");
-        return ERROR_INVAL;
-    }
+        rc = xc_sched_rtds_domain_set(CTX->xch, domid, &sdom);
+        if (rc < 0) {
+            LOGE(ERROR, "setting domain sched rtds");
+            return ERROR_FAIL;
+        }
 
-    rc = xc_sched_rtds_domain_set(CTX->xch, domid, &sdom);
-    if (rc < 0) {
-        LOGE(ERROR, "setting domain sched rtds");
-        return ERROR_FAIL;
+        return rc;
+    } else { /*set per-vcpu rtds parameters*/
+        uint16_t num_vcpus;
+        xc_dominfo_t info;
+        rc = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+        if (rc < 0) {
+            LOGE(ERROR, "getting domain info");
+            return ERROR_FAIL;
+        }
+        num_vcpus = info.nr_online_vcpus;
+        for (i = 0; i < scinfo->rtds.num_vcpus; i++) {
+            int vcpu_index, budget, period;
+            vcpu_index = scinfo->rtds.vcpus[i].index;
+            budget = scinfo->rtds.vcpus[i].budget;
+            period = scinfo->rtds.vcpus[i].period;
+
+            if(budget > period) {
+                LOG(ERROR, "VCPU budget is larger than VCPU period, "
+                           "VCPU %d budget should be no larger than period",
+                           vcpu_index);
+                return ERROR_INVAL;
+            }
+
+            if (vcpu_index < 0 || vcpu_index >= num_vcpus) {
+                LOG(ERROR, "VCPU index is out of range, "
+                           "valid values are within range from 0 to %d",
+                           num_vcpus);
+                return ERROR_INVAL;
+            }
+
+            if (period < 1) {
+                LOG(ERROR, "VCPU period is out of range, "
+                           "valid values are larger than 1");
+                return ERROR_INVAL;
+            }
+
+            if (budget < 1) {
+                LOG(ERROR, "VCPU budget is out of range, "
+                           "valid values are larger than 1");
+                return ERROR_INVAL;
+            }
+        }
+
+        struct xen_domctl_sched_rtds_params  sdom[scinfo->rtds.num_vcpus];
+        for (i = 0; i < scinfo->rtds.num_vcpus; i++) {
+            sdom[i].index = scinfo->rtds.vcpus[i].index;
+            sdom[i].budget = scinfo->rtds.vcpus[i].budget;
+            sdom[i].period = scinfo->rtds.vcpus[i].period;
+        }
+        rc = xc_sched_rtds_vcpu_set(CTX->xch, domid,
+                       &sdom[0], scinfo->rtds.num_vcpus);
+        if (rc < 0) {
+            LOGE(ERROR, "setting domain sched rtds");
+            return ERROR_FAIL;
+        }
+
+        return rc;
     }
 
     return 0;
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 44bd8e2..8284ce1 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1460,6 +1460,12 @@ int libxl_sched_credit_params_set(libxl_ctx *ctx, uint32_t poolid,
 #define LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT -1
 #define LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT    -1
 
+/*RTDS Per-VCPU parameters*/
+#define LIBXL_DOMAIN_SCHED_PARAM_VCPU_INDEX_DEFAULT -1
+
+/* Consistent with XEN_LEGACY_MAX_VCPUS xen/arch-x86/xen.h*/
+#define LIBXL_XEN_LEGACY_MAX_VCPUS                  32
+
 int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
                                   libxl_domain_sched_params *params);
 int libxl_domain_sched_params_set(libxl_ctx *ctx, uint32_t domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 117b61d..806316a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -347,6 +347,16 @@ libxl_domain_restore_params = Struct("domain_restore_params", [
     ("checkpointed_stream", integer),
     ])
 
+libxl_rtds_vcpu = Struct("vcpu",[
+    ("period",       uint64, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
+    ("budget",       uint64, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
+    ("index",        integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_VCPU_INDEX_DEFAULT'}),
+    ])
+
+libxl_domain_sched_rtds_params = Struct("domain_sched_rtds_params",[
+    ("vcpus",        Array(libxl_rtds_vcpu, "num_vcpus")),
+    ])
+
 libxl_domain_sched_params = Struct("domain_sched_params",[
     ("sched",        libxl_scheduler),
     ("weight",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT'}),
@@ -356,6 +366,7 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
     ("latency",      integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT'}),
     ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
     ("budget",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
+    ("rtds",             libxl_domain_sched_rtds_params),  
     ])
 
 libxl_vnode_info = Struct("vnode_info", [
-- 
1.9.1

* [PATCH v1 4/4] xl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler Chong Li
                   ` (2 preceding siblings ...)
  2015-05-07 17:05 ` [PATCH v1 3/4] libxl: " Chong Li
@ 2015-05-07 17:05 ` Chong Li
  2015-05-14 14:24   ` Meng Xu
  2015-05-11  9:56 ` [PATCH v1 0/4] Enabling " Dario Faggioli
  4 siblings, 1 reply; 25+ messages in thread
From: Chong Li @ 2015-05-07 17:05 UTC (permalink / raw)
  To: xen-devel
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	mengxu, jbeulich, lichong659, dgolomb

Change main_sched_rtds and related output functions to support per-VCPU settings
in the xl sched-rtds tool.

Signed-off-by: Chong Li <chong.li@wustl.edu>
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Signed-off-by: Sisu Xi <xisisu@gmail.com>
---
 tools/libxl/xl_cmdimpl.c | 131 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 103 insertions(+), 28 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 648ca08..1e9b0d8 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -5709,9 +5709,11 @@ static int sched_rtds_domain_output(
     char *domname;
     libxl_domain_sched_params scinfo;
     int rc = 0;
+    int i;
 
-    if (domid < 0) {
-        printf("%-33s %4s %9s %9s\n", "Name", "ID", "Period", "Budget");
+   if (domid < 0) {
+        printf("%-33s %4s %4s %9s %9s\n", "Name", "ID", 
+                        "VCPU", "Period", "Budget");
         return 0;
     }
 
@@ -5721,11 +5723,14 @@ static int sched_rtds_domain_output(
         goto out;
 
     domname = libxl_domid_to_name(ctx, domid);
-    printf("%-33s %4d %9d %9d\n",
-        domname,
-        domid,
-        scinfo.period,
-        scinfo.budget);
+     for( i = 0; i < scinfo.rtds.num_vcpus; i++ ) {
+        printf("%-33s %4d %4d %9"PRIu64" %9"PRIu64"\n",
+            domname,
+            domid,
+            scinfo.rtds.vcpus[i].index,
+            scinfo.rtds.vcpus[i].period,
+            scinfo.rtds.vcpus[i].budget);
+    }
     free(domname);
 
 out:
@@ -5744,6 +5749,7 @@ static int sched_rtds_pool_output(uint32_t poolid)
     return 0;
 }
 
+
 static int sched_default_pool_output(uint32_t poolid)
 {
     char *poolname;
@@ -6120,38 +6126,87 @@ int main_sched_rtds(int argc, char **argv)
 {
     const char *dom = NULL;
     const char *cpupool = NULL;
-    int period = 0; /* period is in microsecond */
-    int budget = 0; /* budget is in microsecond */
+
+    int vcpus[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* indices of VCPUs that change */
+    int periods[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* period is in microsecond */
+    int budgets[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* budget is in microsecond */
+    int index=0; /*index of the arrays above*/
+    bool flag_b = false;
+    bool flag_p = false;
+    bool flag_v = false;
     bool opt_p = false;
     bool opt_b = false;
-    int opt, rc;
+    bool opt_v = false;
+    int opt, rc, i;
     static struct option opts[] = {
         {"domain", 1, 0, 'd'},
         {"period", 1, 0, 'p'},
         {"budget", 1, 0, 'b'},
+        {"vcpu",1, 0, 'v'},
         {"cpupool", 1, 0, 'c'},
         COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
 
-    SWITCH_FOREACH_OPT(opt, "d:p:b:c:h", opts, "sched-rtds", 0) {
+    SWITCH_FOREACH_OPT(opt, "d:p:b:v:c:h", opts, "sched-rtds", 0) {
     case 'd':
         dom = optarg;
         break;
     case 'p':
-        period = strtol(optarg, NULL, 10);
+        periods[index] = strtol(optarg, NULL, 10);
         opt_p = 1;
+        if (flag_p == 1) { /* budget or vcpuID is missed */
+            fprintf(stderr, "Must specify period, budget and vcpuID\n");
+            return 1;
+        }
+        flag_p = 1;
+        if (flag_p && flag_b && flag_v) {
+            /* 
+             * Get one complete set of per-VCPU parameters 
+             * (period, budget, vcpuID). 
+             */
+            flag_p = 0;
+            flag_b = 0;
+            flag_v = 0;
+            index++;
+        }
         break;
     case 'b':
-        budget = strtol(optarg, NULL, 10);
+        budgets[index] = strtol(optarg, NULL, 10);
         opt_b = 1;
+        if (flag_b == 1) { /* period or vcpuID is missed */
+            fprintf(stderr, "Must specify period, budget and vcpuID\n");
+            return 1;
+        }
+        flag_b = 1;
+        if (flag_p && flag_b && flag_v) {
+            flag_p = 0;
+            flag_b = 0;
+            flag_v = 0;
+            index++;
+        }
+        break;
+    case 'v':
+        vcpus[index] = strtol(optarg, NULL, 10);
+        opt_v = 1;
+        if (flag_v == 1) { /* period or budget is missed */
+            fprintf(stderr, "Must specify period, budget and vcpuID\n");
+            return 1;
+        }
+        flag_v = 1;
+        if (flag_p && flag_b && flag_v) {
+            flag_p = 0;
+            flag_b = 0;
+            flag_v = 0;
+            index++;
+        }
         break;
     case 'c':
         cpupool = optarg;
         break;
     }
 
-    if (cpupool && (dom || opt_p || opt_b)) {
+    if (cpupool && (dom || opt_p || opt_b || opt_v)) {
         fprintf(stderr, "Specifying a cpupool is not allowed with "
                 "other options.\n");
         return 1;
@@ -6164,29 +6219,49 @@ int main_sched_rtds(int argc, char **argv)
         fprintf(stderr, "Must specify period and budget\n");
         return 1;
     }
+    if (opt_v && (flag_b|| flag_v || flag_p)) {
+        fprintf(stderr, "Must specify period and budget and vcpuID\n");
+        return 1;
+    }
 
-    if (!dom) { /* list all domain's rt scheduler info */
+    if (!dom) { /* list all domain's rtds scheduler info */
         return -sched_domain_output(LIBXL_SCHEDULER_RTDS,
                                     sched_rtds_domain_output,
                                     sched_rtds_pool_output,
                                     cpupool);
     } else {
         uint32_t domid = find_domain(dom);
-        if (!opt_p && !opt_b) { /* output rt scheduler info */
+        if (!opt_p && !opt_b && !opt_v) { /* output rtds scheduler info */
             sched_rtds_domain_output(-1);
             return -sched_rtds_domain_output(domid);
-        } else { /* set rt scheduler paramaters */
-            libxl_domain_sched_params scinfo;
-            libxl_domain_sched_params_init(&scinfo);
-            scinfo.sched = LIBXL_SCHEDULER_RTDS;
-            scinfo.period = period;
-            scinfo.budget = budget;
-
-            rc = sched_domain_set(domid, &scinfo);
-            libxl_domain_sched_params_dispose(&scinfo);
-            if (rc)
-                return -rc;
-        }
+    } else if (opt_v) { /* set per-vcpu rtds scheduler paramaters */
+        libxl_domain_sched_params scinfo;
+        libxl_domain_sched_params_init(&scinfo);
+        scinfo.sched = LIBXL_SCHEDULER_RTDS;
+        scinfo.rtds.num_vcpus = index;
+        scinfo.rtds.vcpus = (libxl_vcpu *)
+                malloc(sizeof(libxl_vcpu) * (index));
+        for (i = 0; i < index; i++) {
+            scinfo.rtds.vcpus[i].index = vcpus[i];
+            scinfo.rtds.vcpus[i].period = periods[i];
+            scinfo.rtds.vcpus[i].budget = budgets[i];
+        }
+        rc = sched_domain_set(domid, &scinfo);
+        libxl_domain_sched_params_dispose(&scinfo);
+        if (rc)
+            return -rc;
+    } else { /* set per-dom rtds scheduler paramaters */
+        libxl_domain_sched_params scinfo;
+        libxl_domain_sched_params_init(&scinfo);
+        scinfo.sched = LIBXL_SCHEDULER_RTDS;
+        scinfo.rtds.num_vcpus = 0;
+        scinfo.period = periods[0];
+        scinfo.budget = budgets[0];
+        rc = sched_domain_set(domid, &scinfo);
+        libxl_domain_sched_params_dispose(&scinfo);
+        if (rc)
+            return -rc;
+      }
     }
 
     return 0;
-- 
1.9.1

* Re: [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 ` [PATCH v1 1/4] xen: enabling " Chong Li
@ 2015-05-08  7:49   ` Jan Beulich
  2015-05-10 22:04     ` Chong Li
  2015-05-11 13:11   ` Dario Faggioli
  1 sibling, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2015-05-08  7:49 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, mengxu, dgolomb

>>> On 07.05.15 at 19:05, <lichong659@gmail.com> wrote:
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -1085,6 +1085,9 @@ rt_dom_cntl(
>      struct list_head *iter;
>      unsigned long flags;
>      int rc = 0;
> +    xen_domctl_sched_rtds_params_t *local_sched;
> +    int vcpu_index=0;
> +    int i;

unsigned int

> @@ -1110,6 +1113,67 @@ rt_dom_cntl(
>          }
>          spin_unlock_irqrestore(&prv->lock, flags);
>          break;
> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> +        op->u.rtds.nr_vcpus = 0;
> +        spin_lock_irqsave(&prv->lock, flags);
> +        list_for_each( iter, &sdom->vcpu )
> +            vcpu_index++;
> +        spin_unlock_irqrestore(&prv->lock, flags);
> +        op->u.rtds.nr_vcpus = vcpu_index;

Does dropping of the lock here and re-acquiring it below really work
race free?
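
One race-free shape (purely an illustrative sketch; it sizes the buffer from
d->max_vcpus, d being rt_dom_cntl()'s struct domain argument, before locking)
would count and copy in a single critical section:

    local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
                                d->max_vcpus);
    if ( local_sched == NULL )
        return -ENOMEM;

    spin_lock_irqsave(&prv->lock, flags);
    list_for_each( iter, &sdom->vcpu )
    {
        struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);

        /* Fill and count under the same lock acquisition. */
        local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
        local_sched[vcpu_index].period = svc->period / MICROSECS(1);
        local_sched[vcpu_index].index = vcpu_index;
        vcpu_index++;
    }
    spin_unlock_irqrestore(&prv->lock, flags);
    op->u.rtds.nr_vcpus = vcpu_index;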

> +        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> +                vcpu_index);
> +        if( local_sched == NULL )
> +        {
> +            return -ENOMEM;
> +        }

Pointless braces.

> +        vcpu_index = 0;
> +        spin_lock_irqsave(&prv->lock, flags);
> +        list_for_each( iter, &sdom->vcpu )
> +        {
> +            struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> +
> +            local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
> +            local_sched[vcpu_index].period = svc->period / MICROSECS(1);
> +            local_sched[vcpu_index].index = vcpu_index;

What use is this index to the caller? I think you rather want to tell it
the vCPU number. That's especially also taking the use case of a
get/set pair into account - unless you tell me that these indexes can
never change, the indexes passed back into the set operation would
risk to have become stale by the time the hypervisor processes the
request.

> +            vcpu_index++;
> +        }
> +        spin_unlock_irqrestore(&prv->lock, flags);
> +        copy_to_guest(op->u.rtds.vcpus, local_sched, vcpu_index);
> +        xfree(local_sched);
> +        rc = 0;
> +        break;
> +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> +        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> +                op->u.rtds.nr_vcpus);

While above using xzalloc_array() is warranted for security reasons,
I don't see why you wouldn't be able to use xmalloc_array() here.

> +        if( local_sched == NULL )
> +        {
> +            return -ENOMEM;
> +        }
> +        copy_from_guest(local_sched, op->u.rtds.vcpus, op->u.rtds.nr_vcpus);
> +
> +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> +        {
> +            vcpu_index = 0;
> +            spin_lock_irqsave(&prv->lock, flags);
> +            list_for_each( iter, &sdom->vcpu )
> +            {
> +                struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> +                if ( local_sched[i].index == vcpu_index )
> +                {
> +                    if ( local_sched[i].period <= 0 || local_sched[i].budget <= 0 )
> +                         return -EINVAL;
> +
> +                    svc->period = MICROSECS(local_sched[i].period);
> +                    svc->budget = MICROSECS(local_sched[i].budget);
> +                    break;
> +                }
> +                vcpu_index++;
> +            }
> +            spin_unlock_irqrestore(&prv->lock, flags);
> +        }

Considering a maximum size guest, these two nested loops could
require a couple of million iterations. That's too much without any
preemption checks in the middle.
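
Purely as an illustration, a check in the outer loop (with the actual
continuation plumbing elided) could take this shape:

    for ( i = 0; i < op->u.rtds.nr_vcpus; i++ )
    {
        if ( hypercall_preempt_check() )
        {
            /* Too much work in one go: have the caller retry, e.g.
             * by arranging a hypercall continuation. */
            rc = -ERESTART;
            break;
        }
        /* ... process local_sched[i] under prv->lock ... */
    }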

> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
>  
>      if ( (op->sched_id != DOM2OP(d)->sched_id) ||
>           ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
> -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
> +          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
> +          (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
> +          (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )

Imo this should become a switch now.
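
That is, something along these lines (sketch only; the sched_id check would
stay as a separate condition):

    switch ( op->cmd )
    {
    case XEN_DOMCTL_SCHEDOP_putinfo:
    case XEN_DOMCTL_SCHEDOP_getinfo:
    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
        break;
    default:
        return -EINVAL;
    }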

> @@ -351,9 +361,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
>  #define XEN_SCHEDULER_ARINC653 7
>  #define XEN_SCHEDULER_RTDS     8
>  
> -/* Set or get info? */
> +/* Set or get info */

Pointless change (and if you really mean to do it, then please do it such
that the comment in the end matches our coding style).

> @@ -373,8 +386,11 @@ struct xen_domctl_scheduler_op {
>              uint16_t weight;
>          } credit2;
>          struct xen_domctl_sched_rtds {
> -            uint32_t period;
> -            uint32_t budget;
> +            uint64_t period;
> +            uint64_t budget;

This widening of the fields seems both unrelated and unnecessary.

Jan

* Re: [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-08  7:49   ` Jan Beulich
@ 2015-05-10 22:04     ` Chong Li
  2015-05-11  6:57       ` Jan Beulich
  0 siblings, 1 reply; 25+ messages in thread
From: Chong Li @ 2015-05-10 22:04 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Chong Li, Wei Liu, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, Meng Xu, Dagaen Golomb


On Fri, May 8, 2015 at 2:49 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 07.05.15 at 19:05, <lichong659@gmail.com> wrote:
> > --- a/xen/common/sched_rt.c
> > +++ b/xen/common/sched_rt.c
> > @@ -1085,6 +1085,9 @@ rt_dom_cntl(
> >      struct list_head *iter;
> >      unsigned long flags;
> >      int rc = 0;
> > +    xen_domctl_sched_rtds_params_t *local_sched;
> > +    int vcpu_index=0;
> > +    int i;
>
> unsigned int
>
> > @@ -1110,6 +1113,67 @@ rt_dom_cntl(
> >          }
> >          spin_unlock_irqrestore(&prv->lock, flags);
> >          break;
> > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> > +        op->u.rtds.nr_vcpus = 0;
> > +        spin_lock_irqsave(&prv->lock, flags);
> > +        list_for_each( iter, &sdom->vcpu )
> > +            vcpu_index++;
> > +        spin_unlock_irqrestore(&prv->lock, flags);
> > +        op->u.rtds.nr_vcpus = vcpu_index;
>
> Does dropping of the lock here and re-acquiring it below really work
> race free?
>

Here, the lock is used in the same way as the ones in the two cases
above (XEN_DOMCTL_SCHEDOP_get/putinfo). So I think if race freedom
is guaranteed in those two cases, the lock in this case works race free
as well.


> > +        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> > +                vcpu_index);
> > +        if( local_sched == NULL )
> > +        {
> > +            return -ENOMEM;
> > +        }
>
> Pointless braces.
>
> > +        vcpu_index = 0;
> > +        spin_lock_irqsave(&prv->lock, flags);
> > +        list_for_each( iter, &sdom->vcpu )
> > +        {
> > +            struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
> sdom_elem);
> > +
> > +            local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
> > +            local_sched[vcpu_index].period = svc->period / MICROSECS(1);
> > +            local_sched[vcpu_index].index = vcpu_index;
>
> What use is this index to the caller? I think you rather want to tell it
> the vCPU number. That's especially also taking the use case of a
> get/set pair into account - unless you tell me that these indexes can
> never change, the indexes passed back into the set operation would
> risk to have become stale by the time the hypervisor processes the
> request.
>

I don't quite understand what "stale" means here. The array
(local_sched[ ]) and the array (in libxc) that local_sched[ ] is copied
to are both used for this get operation only. When users set per-VCPU
parameters, there are dedicated arrays for that set operation as well.


>
> > +            vcpu_index++;
> > +        }
> > +        spin_unlock_irqrestore(&prv->lock, flags);
> > +        copy_to_guest(op->u.rtds.vcpus, local_sched, vcpu_index);
> > +        xfree(local_sched);
> > +        rc = 0;
> > +        break;
> > +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> > +        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> > +                op->u.rtds.nr_vcpus);
>
> While above using xzalloc_array() is warranted for security reasons,
> I don't see why you wouldn't be able to use xmalloc_array() here.
>
> > +        if( local_sched == NULL )
> > +        {
> > +            return -ENOMEM;
> > +        }
> > +        copy_from_guest(local_sched, op->u.rtds.vcpus,
> op->u.rtds.nr_vcpus);
> > +
> > +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> > +        {
> > +            vcpu_index = 0;
> > +            spin_lock_irqsave(&prv->lock, flags);
> > +            list_for_each( iter, &sdom->vcpu )
> > +            {
> > +                struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
> sdom_elem);
> > +                if ( local_sched[i].index == vcpu_index )
> > +                {
> > +                    if ( local_sched[i].period <= 0 ||
> local_sched[i].budget <= 0 )
> > +                         return -EINVAL;
> > +
> > +                    svc->period = MICROSECS(local_sched[i].period);
> > +                    svc->budget = MICROSECS(local_sched[i].budget);
> > +                    break;
> > +                }
> > +                vcpu_index++;
> > +            }
> > +            spin_unlock_irqrestore(&prv->lock, flags);
> > +        }
>
> Considering a maximum size guest, these two nested loops could
> require a couple of million iterations. That's too much without any
> preemption checks in the middle.
>

The section protected by the lock is only the "list_for_each" loop, whose
running time is limited by the number of vcpus of a domain (32 at most).
If this does cause problems, I think adding a "hypercall_preempt_check()"
in the outer "for" loop may help. Is that right?


> > --- a/xen/common/schedule.c
> > +++ b/xen/common/schedule.c
> > @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d, struct
> xen_domctl_scheduler_op *op)
> >
> >      if ( (op->sched_id != DOM2OP(d)->sched_id) ||
> >           ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
> > -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )
>
> Imo this should become a switch now.
>

Do you mean "switch ( op->cmd )" ? I'm afraid that would make it look more
complicated.


> > @@ -351,9 +361,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
> >  #define XEN_SCHEDULER_ARINC653 7
> >  #define XEN_SCHEDULER_RTDS     8
> >
> > -/* Set or get info? */
> > +/* Set or get info */
>
> Pointless change (and if you really mean to do it, then please such
> that the comment in the end matches our coding style).
>
> > @@ -373,8 +386,11 @@ struct xen_domctl_scheduler_op {
> >              uint16_t weight;
> >          } credit2;
> >          struct xen_domctl_sched_rtds {
> > -            uint32_t period;
> > -            uint32_t budget;
> > +            uint64_t period;
> > +            uint64_t budget;
>
> This widening of the fields seems both unrelated and unnecessary.
>
> Jan
>
>


-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis


* Re: [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-10 22:04     ` Chong Li
@ 2015-05-11  6:57       ` Jan Beulich
  2015-05-14 22:27         ` Chong Li
  0 siblings, 1 reply; 25+ messages in thread
From: Jan Beulich @ 2015-05-11  6:57 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, Meng Xu, Dagaen Golomb

>>> On 11.05.15 at 00:04, <lichong659@gmail.com> wrote:
> On Fri, May 8, 2015 at 2:49 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 07.05.15 at 19:05, <lichong659@gmail.com> wrote:
>> > @@ -1110,6 +1113,67 @@ rt_dom_cntl(
>> >          }
>> >          spin_unlock_irqrestore(&prv->lock, flags);
>> >          break;
>> > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> > +        op->u.rtds.nr_vcpus = 0;
>> > +        spin_lock_irqsave(&prv->lock, flags);
>> > +        list_for_each( iter, &sdom->vcpu )
>> > +            vcpu_index++;
>> > +        spin_unlock_irqrestore(&prv->lock, flags);
>> > +        op->u.rtds.nr_vcpus = vcpu_index;
>>
>> Does dropping of the lock here and re-acquiring it below really work
>> race free?
>>
> 
> Here, the lock is used in the same way as the ones in the two cases
> above (XEN_DOMCTL_SCHEDOP_get/putinfo). So I think if race freedom
> is guaranteed in those two cases, the lock in this case works race free
> as well.

No - the difference is that in the {get,put}info cases it is being
acquired just once each.

>> > +        vcpu_index = 0;
>> > +        spin_lock_irqsave(&prv->lock, flags);
>> > +        list_for_each( iter, &sdom->vcpu )
>> > +        {
>> > +            struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
>> sdom_elem);
>> > +
>> > +            local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
>> > +            local_sched[vcpu_index].period = svc->period / MICROSECS(1);
>> > +            local_sched[vcpu_index].index = vcpu_index;
>>
>> What use is this index to the caller? I think you rather want to tell it
>> the vCPU number. That's especially also taking the use case of a
>> get/set pair into account - unless you tell me that these indexes can
>> never change, the indexes passed back into the set operation would
>> risk to have become stale by the time the hypervisor processes the
>> request.
>>
> 
> I don't quite understand what "stale" means here. The array
> (local_sched[ ]) and the array (in libxc) that local_sched[ ] is copied
> to are both used for this get operation only. When users set per-VCPU
> parameters, there are dedicated arrays for that set operation as well.

Just clarify this for me (and maybe yourself): Is the vCPU number
<-> vcpu_index mapping invariable for the lifetime of a domain?
If it isn't, the vCPU for a particular vcpu_index during a "get"
may be different from that for the same vcpu_index during a
subsequent "set".

>> > +        if( local_sched == NULL )
>> > +        {
>> > +            return -ENOMEM;
>> > +        }
>> > +        copy_from_guest(local_sched, op->u.rtds.vcpus,
>> op->u.rtds.nr_vcpus);
>> > +
>> > +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
>> > +        {
>> > +            vcpu_index = 0;
>> > +            spin_lock_irqsave(&prv->lock, flags);
>> > +            list_for_each( iter, &sdom->vcpu )
>> > +            {
>> > +                struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
>> sdom_elem);
>> > +                if ( local_sched[i].index == vcpu_index )
>> > +                {
>> > +                    if ( local_sched[i].period <= 0 ||
>> local_sched[i].budget <= 0 )
>> > +                         return -EINVAL;
>> > +
>> > +                    svc->period = MICROSECS(local_sched[i].period);
>> > +                    svc->budget = MICROSECS(local_sched[i].budget);
>> > +                    break;
>> > +                }
>> > +                vcpu_index++;
>> > +            }
>> > +            spin_unlock_irqrestore(&prv->lock, flags);
>> > +        }
>>
>> Considering a maximum size guest, these two nested loops could
>> require a couple of million iterations. That's too much without any
>> preemption checks in the middle.
>>
> 
> The section protected by the lock is only the "list_for_each" loop, whose
> running time is limited by the number of vcpus of a domain (32 at most).

Since when is 32 the limit on the number of vCPU-s in a domain?

> If this does cause problems, I think adding a "hypercall_preempt_check()"
> in the outer "for" loop may help. Is that right?

Yes.

>> > --- a/xen/common/schedule.c
>> > +++ b/xen/common/schedule.c
>> > @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d, struct
>> xen_domctl_scheduler_op *op)
>> >
>> >      if ( (op->sched_id != DOM2OP(d)->sched_id) ||
>> >           ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
>> > -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
>> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
>> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
>> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )
>>
>> Imo this should become a switch now.
>>
> 
> Do you mean "switch ( op->cmd )" ? I'm afraid that would make it look more
> complicated.

This may be a matter of taste to a certain degree, but I personally
don't think a series of four almost identical comparisons reads any
better than its switch() replacement. But it being a style issue, the
ultimate decision is with George as the maintainer anyway.

Jan

* Re: [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler Chong Li
                   ` (3 preceding siblings ...)
  2015-05-07 17:05 ` [PATCH v1 4/4] xl: " Chong Li
@ 2015-05-11  9:56 ` Dario Faggioli
  2015-05-14 17:08   ` Chong Li
  4 siblings, 1 reply; 25+ messages in thread
From: Dario Faggioli @ 2015-05-11  9:56 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, george.dunlap, xen-devel, mengxu, jbeulich, dgolomb


On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> [Goal]
> The current xl sched-rtds tool can only set all VCPUs of a domain to the same
> parameters, although the scheduler supports per-VCPU parameters. This patchset
> enables the xl sched-rtds tool to configure the VCPUs of a domain with
> different parameters.
> 
> Such per-VCPU settings can be used in many scenarios. For example, based on Dario's statement in our previous discussion (http://lists.xen.org/archives/html/xen-devel/2014-09/msg00423.html), if there are two real-time applications with different timing requirements running in a multi-VCPU guest domain, it is beneficial to pin these two applications to two separate VCPUs with different scheduling parameters.
> 
Right. And in fact, I'm glad to see this is happening, thanks for doing
this work! :-)

> 1) show the budget and period of each VCPU of each domain, by using the "xl sched-rtds" command. Example output:
>
> [..]
>
> 2) show the budget and period of each VCPU of a specific domain, by using,
> e.g., the "xl sched-rtds -d vm1" command. The output would look like:
>
> [..]
>
> 3) set the budget and period of each VCPU of a specific domain, by using,
> e.g., the "xl sched-rtds -d vm1 -v 0 -p 100 -b 50" command (where "-v 0"
> specifies the VCPU with ID=0). The resulting parameters would be:
>
> [..]
>
> 4) Users can still set the per-domain parameters (the previous xl sched-rtds tool
> already supported this), e.g., "xl sched-rtds -d vm1 -p 500 -b 250". The
> resulting parameters would be:
>
The CLI looks nice to me. I'm wondering, what happens if the user tries
to only alter the budget or the period of a vcpu (or of a domain)? I
think that is not possible right now, is it?

Would it make sense to allow that? I think it would, but this can well
happen later, once we will have this in.

Regards,
Dario


* Re: [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 ` [PATCH v1 1/4] xen: enabling " Chong Li
  2015-05-08  7:49   ` Jan Beulich
@ 2015-05-11 13:11   ` Dario Faggioli
  2015-05-14 22:15     ` Chong Li
  1 sibling, 1 reply; 25+ messages in thread
From: Dario Faggioli @ 2015-05-11 13:11 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, xen-devel, mengxu,
	jbeulich, dgolomb

On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> Add two hypercalls (XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo) to get/set a domain's
> per-VCPU parameters. The hypercalls are handled in function rt_dom_cntl.
> 
And that is because, right now, only code in sched_rt.c is able to deal
with getting and setting per-vcpu parameters.

That's of course true, but these two new hypercalls are, potentially,
generic, i.e., other schedulers may want to use them at some point. So,
why not just put them in good shape for that from the beginning?

To do so, you could handle the new DOMCTLs in a similar way as
XEN_DOMCTL_SCHEDOP_{get,put}info are handled, and add a
new .adjust_vcpu(s?) hook to the scheduler interface.
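
Illustratively (the hook name is assumed, mirroring the existing .adjust
hook that sched_rt.c wires to rt_dom_cntl), the interface extension could
look like:

    struct scheduler {
        ...
        int (*adjust)       (const struct scheduler *, struct domain *,
                             struct xen_domctl_scheduler_op *);
        /* New: generic per-vcpu parameter get/set hook. */
        int (*adjust_vcpus) (const struct scheduler *, struct domain *,
                             struct xen_domctl_scheduler_op *);
        ...
    };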

> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 7c39a9e..9add5a4 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -1085,6 +1085,9 @@ rt_dom_cntl(
>      struct list_head *iter;
>      unsigned long flags;
>      int rc = 0;
> +    xen_domctl_sched_rtds_params_t *local_sched;
> +    int vcpu_index=0;
>
So, what's this vcpu_index intended meaning/usage?

> @@ -1110,6 +1113,67 @@ rt_dom_cntl(
>          }
>          spin_unlock_irqrestore(&prv->lock, flags);
>          break;
> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> +        op->u.rtds.nr_vcpus = 0;
> +        spin_lock_irqsave(&prv->lock, flags);
> +        list_for_each( iter, &sdom->vcpu )
> +            vcpu_index++;
>
> +        spin_unlock_irqrestore(&prv->lock, flags);
>
This gives you the number of vcpus of sdom, doesn't it? It feels rather
nasty (especially the lock being dropped and taken again below!).

Aren't there other ways to get the same information that suit your
needs (e.g., d->max_vcpus)? If not, I think you should consider adding a
'nr_vcpu' field in rt_dom, exactly as Credit2 is doing in csched2_dom.
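
That is (a sketch; exact placement and maintenance of the counter would be
up to the patch author):

    struct rt_dom {
        struct list_head vcpu;      /* link its VCPUs */
        ...
        unsigned int nr_vcpus;      /* updated when VCPUs are added/removed */
    };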

> +        spin_lock_irqsave(&prv->lock, flags);
> +        list_for_each( iter, &sdom->vcpu )
> +        {
> +            struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> +
> +            local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
> +            local_sched[vcpu_index].period = svc->period / MICROSECS(1);
> +            local_sched[vcpu_index].index = vcpu_index;
> +            vcpu_index++;
>
And that's why I was asking about the index. As Jan has already pointed
out, used like this, this index/vcpu_index is rather useless.

I mean, you're passing up nr_vcpus structs in an array in which
the .index field of the i-th element is equal to i. How is this
important? The caller could well iterate, count, and retrieve the
position of each element by itself!

What you are probably after is the vcpu id, isn't it?

> +        }
> +        spin_unlock_irqrestore(&prv->lock, flags);
> +        copy_to_guest(op->u.rtds.vcpus, local_sched, vcpu_index);
>
I'm sure we want some checks about whether we are overflowing the
userspace-provided buffer (and something similar below, for put). I
appreciate that you, in this patch series, are only calling this from
libxl, which dimensions things properly, etc., but that cannot always be
the case.

There are several examples in the code base of the route to take for
similar operations. For example, you can try to do some checks and only
fill as many elements as the buffer allows, or you can give a special
meaning to calling the hypercall with NULL/0 as parameters, i.e., use
that for asking Xen about the proper sizes, etc.

Have a look at how XEN_SYSCTL_numainfo and XEN_SYSCTL_cputopoinfo are
implemented (in Xen, but also in libxc and libxl, to properly understand
things).
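
For instance, the get path could guard the copy along these lines (just
a sketch; the exact error codes and semantics are up for discussion):

    /* nr = number of vcpus of the domain, computed under the lock */
    if ( guest_handle_is_null(op->u.rtds.vcpus) ||
         op->u.rtds.nr_vcpus < nr )
    {
        /* report the required size and let the caller retry */
        op->u.rtds.nr_vcpus = nr;
        rc = -ENOBUFS;
        break;
    }
    if ( copy_to_guest(op->u.rtds.vcpus, local_sched, nr) != 0 )
        rc = -EFAULT;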

> +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> +        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> +                op->u.rtds.nr_vcpus);
> +        if( local_sched == NULL )
> +        {
> +            return -ENOMEM;
> +        }
> +        copy_from_guest(local_sched, op->u.rtds.vcpus, op->u.rtds.nr_vcpus);
> +
> +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> +        {
> +            vcpu_index = 0;
> +            spin_lock_irqsave(&prv->lock, flags);
> +            list_for_each( iter, &sdom->vcpu )
> +            {
>
But why the nested loop? I think this is still that 'index' thing
causing problems. If you use vcpu numbers/ids, you can just use the
d->vcpu[] array, and get rid of one of the loops!

Look at, for instance, XEN_DOMCTL_{set,get}vcpuaffinity. You're after
something that is pretty similar (i.e., altering a per-vcpu property),
you just want to do it on more than one vcpu at a time.
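
With vcpu ids, the put path could then collapse to a single loop,
something like this (a sketch; it assumes the .index field is renamed
to vcpuid, and uses the usual rt_vcpu() accessor):

    for ( i = 0; i < op->u.rtds.nr_vcpus; i++ )
    {
        struct rt_vcpu *svc;

        if ( local_sched[i].vcpuid >= d->max_vcpus ||
             d->vcpu[local_sched[i].vcpuid] == NULL ||
             local_sched[i].period == 0 || local_sched[i].budget == 0 )
        {
            rc = -EINVAL;
            break;
        }
        svc = rt_vcpu(d->vcpu[local_sched[i].vcpuid]);

        spin_lock_irqsave(&prv->lock, flags);
        svc->period = MICROSECS(local_sched[i].period);
        svc->budget = MICROSECS(local_sched[i].budget);
        spin_unlock_irqrestore(&prv->lock, flags);
    }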

> +                struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> +                if ( local_sched[i].index == vcpu_index )
> +                {
> +                    if ( local_sched[i].period <= 0 || local_sched[i].budget <= 0 )
> +                         return -EINVAL;
> +
> +                    svc->period = MICROSECS(local_sched[i].period);
> +                    svc->budget = MICROSECS(local_sched[i].budget);
> +                    break;
> +                }
> +                vcpu_index++;
> +            }
> +            spin_unlock_irqrestore(&prv->lock, flags);
>
Lock dropping and reacquiring again... This is less scary than above,
but doing it is pretty much always asking for trouble. Of course there
are legitimate use cases of such a pattern, but I'd always ask myself 6
or 7 times whether I'm dealing with one of those before actually using
it. 99% of the time, you aren't. :-P

Anyway, if you go with vcpu ids instead of an index, that is not a
problem, as there will be only one loop.

> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 10b51ef..490a6b6 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -342,6 +342,16 @@ struct xen_domctl_max_vcpus {
>  typedef struct xen_domctl_max_vcpus xen_domctl_max_vcpus_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
>  
> +struct xen_domctl_sched_rtds_params {
> +    /* vcpus' info */
> +    uint64_t period;
> +    uint64_t budget;
>
We settled on uint32_t while reviewing the RTDS patches, and nothing has
changed since then, as far as I can tell, so you should just use that as
the width.

> +    uint16_t index;
> +    uint16_t padding[3];
> +};
> +typedef struct xen_domctl_sched_rtds_params xen_domctl_sched_rtds_params_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_sched_rtds_params_t);
> +
>  
>  /* XEN_DOMCTL_scheduler_op */
>  /* Scheduler types. */
> @@ -351,9 +361,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
>  #define XEN_SCHEDULER_ARINC653 7
>  #define XEN_SCHEDULER_RTDS     8
>  
> -/* Set or get info? */
> +/* Set or get info */
>  #define XEN_DOMCTL_SCHEDOP_putinfo 0
>  #define XEN_DOMCTL_SCHEDOP_getinfo 1
> +#define XEN_DOMCTL_SCHEDOP_getvcpuinfo 2
> +#define XEN_DOMCTL_SCHEDOP_putvcpuinfo 3
> +
>  struct xen_domctl_scheduler_op {
>      uint32_t sched_id;  /* XEN_SCHEDULER_* */
>      uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
> @@ -373,8 +386,11 @@ struct xen_domctl_scheduler_op {
>              uint16_t weight;
>          } credit2;
>          struct xen_domctl_sched_rtds {
> -            uint32_t period;
> -            uint32_t budget;
> +            uint64_t period;
> +            uint64_t budget;
>
Ditto.

> +            XEN_GUEST_HANDLE_64(xen_domctl_sched_rtds_params_t) vcpus;
> +            uint16_t nr_vcpus;
> +            uint16_t padding[3];
>          } rtds;
>
I don't like this. I find it both confusing and redundant as an
interface.

In fact, this way, we have both budget and period and an array of
budgets and periods. What is the meaning of this? What if one fills both
the budget and period members and a couple of array elements?

You should either adapt xen_domctl_sched_rtds to be able to deal with
both per-domain and per-vcpu parameter getting/setting, but with less
redundancy and a more clearly defined semantics, or introduce a
new member of the union u with the array (and the number of elements)
in it (e.g., xen_domctl_sched_rtds_vcpus), or (and that's what I like
most) just go with something completely new, e.g.,

#define XEN_DOMCTL_SCHEDOP_getvcpuinfo 2
#define XEN_DOMCTL_SCHEDOP_putvcpuinfo 3
struct xen_domctl_scheduler_vcpu_op {
    uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_{get,put}vcpuinfo */
    union {
        struct xen_domctl_sched_rtds_vcpus {
            XEN_GUEST_HANDLE_64(xen_domctl_sched_rtds_params_t) vcpus;
            uint16_t nr_vcpus;
            uint16_t padding[3];
        } rtds;
    } u;
};

Regards,
Dario

* Re: [PATCH v1 2/4] libxc: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 ` [PATCH v1 2/4] libxc: " Chong Li
@ 2015-05-11 13:27   ` Dario Faggioli
  0 siblings, 0 replies; 25+ messages in thread
From: Dario Faggioli @ 2015-05-11 13:27 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, xen-devel, mengxu,
	jbeulich, dgolomb

On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> Add xc_sched_rtds_vcpu_get/set functions to interact with Xen to get/set a domain's
> per-VCPU parameters.
> 
> Signed-off-by: Chong Li <chong.li@wustl.edu>
> Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
> Signed-off-by: Sisu Xi <xisisu@gmail.com>
>
These look like plain xc wrappers around the hypercall to me. I'm not
going into much more detail, as I think the hypervisor interfaces that
they wrap need changing, so that would be rather pointless. :-)

Regards,
Dario

* Re: [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 ` [PATCH v1 3/4] libxl: " Chong Li
@ 2015-05-11 14:06   ` Dario Faggioli
  2015-05-15 15:24     ` Chong Li
  2015-05-12 10:01   ` Dario Faggioli
  1 sibling, 1 reply; 25+ messages in thread
From: Dario Faggioli @ 2015-05-11 14:06 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, george.dunlap, xen-devel, mengxu,
	jbeulich, dgolomb

On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> Change sched_rtds_domain_get/set functions to support per-VCPU settings for RTDS scheduler.
> 
> Signed-off-by: Chong Li <chong.li@wustl.edu>
> Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
> Signed-off-by: Sisu Xi <xisisu@gmail.com>
> ---
>  tools/libxl/libxl.c         | 143 +++++++++++++++++++++++++++++++++-----------
>  tools/libxl/libxl.h         |   6 ++
>  tools/libxl/libxl_types.idl |  11 ++++
>  3 files changed, 126 insertions(+), 34 deletions(-)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index feb3aa9..5f66753 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -5800,10 +5800,17 @@ static int sched_sedf_domain_set(libxl__gc *gc, uint32_t domid,
>  static int sched_rtds_domain_get(libxl__gc *gc, uint32_t domid,
>                                 libxl_domain_sched_params *scinfo)
>  {
> -    struct xen_domctl_sched_rtds sdom;
> -    int rc;
> -
> -    rc = xc_sched_rtds_domain_get(CTX->xch, domid, &sdom);
> +    uint16_t num_vcpus;
> +    int rc, i;
> +    xc_dominfo_t info;
>
Coding style: we want a blank line between variable definitions and
actual code.

> +    rc = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> +    if (rc < 0) {
> +        LOGE(ERROR, "getting domain info");
> +        return ERROR_FAIL;
> +    }
> +    num_vcpus = info.nr_online_vcpus;
>
It looks like most of the other places in libxl where this is necessary
use libxl_list_vcpu(), which, if you want to open code it, uses
info.max_vcpu_id. I'd do the same.
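
Open coded, that would look something like this (a sketch; note that
xc_domain_getinfo() returns the number of domains it obtained info for,
hence the checks):

    xc_dominfo_t info;

    if (xc_domain_getinfo(CTX->xch, domid, 1, &info) != 1 ||
        info.domid != domid) {
        LOGE(ERROR, "getting domain info");
        return ERROR_FAIL;
    }
    num_vcpus = info.max_vcpu_id + 1;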

> +    struct xen_domctl_sched_rtds_params  sdom[num_vcpus];
>
And, please, all the variable definitions go together, at the top of the
function. Avoid having them scattered around the body.

I see why this is here, but no, that's not a good reason... For
allocating arrays in libxl, have a look at GCNEW_ARRAY, or
libxl__calloc, libxl__malloc, libxl__realloc, etc.
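
E.g., something like this (sketch):

    struct xen_domctl_sched_rtds_params *sdom;

    GCNEW_ARRAY(sdom, num_vcpus);   /* gc-managed, no explicit free */
    rc = xc_sched_rtds_vcpu_get(CTX->xch, domid, sdom, num_vcpus);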

> +    rc = xc_sched_rtds_vcpu_get(CTX->xch, domid, sdom, num_vcpus);
>      if (rc != 0) {
>          LOGE(ERROR, "getting domain sched rtds");
>          return ERROR_FAIL;

> @@ -5821,43 +5835,104 @@ static int sched_rtds_domain_get(libxl__gc *gc, uint32_t domid,
>  static int sched_rtds_domain_set(libxl__gc *gc, uint32_t domid,
>                                 const libxl_domain_sched_params *scinfo)
>  {
> -    struct xen_domctl_sched_rtds sdom;
>      int rc;
> +    int i;
>  
> -    rc = xc_sched_rtds_domain_get(CTX->xch, domid, &sdom);
> -    if (rc != 0) {
> -        LOGE(ERROR, "getting domain sched rtds");
> -        return ERROR_FAIL;
> -    }
> +    if(scinfo->rtds.num_vcpus <= 0) {/*set per-dom rtds parameters*/
>
The comment is better before or after, rather than inline.

But, really, can you define another helper function and call it, instead
of re-indenting everything and making this one much longer and more
complex?

Oh, and the error checking and error reporting code should be factored
out and shared, not duplicated.

I mean this:

> +        if (scinfo->period != LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT) {
> +            if (scinfo->period < 1) {
> +                LOG(ERROR, "VCPU period is not set or out of range, "
> +                           "valid values are larger than 1");
> +                return ERROR_INVAL;
> +            }
> +            sdom.period = scinfo->period;
> +        }

And this:
> +            if (period < 1) {
> +                LOG(ERROR, "VCPU period is out of range, "
> +                           "valid values are larger than 1");
> +                return ERROR_INVAL;
> +            }
>
And the same for every other one.
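
I.e., something like the below (just a sketch; the helper's name and
the exact message are, of course, up to you):

    static int sched_rtds_validate_params(libxl__gc *gc,
                                          int period, int budget)
    {
        if (period < 1 || budget < 1) {
            LOG(ERROR, "VCPU period/budget is not set or out of range, "
                       "valid values are larger than 1");
            return ERROR_INVAL;
        }
        return 0;
    }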

Regards,
Dario

* Re: [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 ` [PATCH v1 3/4] libxl: " Chong Li
  2015-05-11 14:06   ` Dario Faggioli
@ 2015-05-12 10:01   ` Dario Faggioli
  2015-05-15 16:35     ` Chong Li
  2015-05-22 17:57     ` Chong Li
  1 sibling, 2 replies; 25+ messages in thread
From: Dario Faggioli @ 2015-05-12 10:01 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, wei.liu2, Sisu Xi, George Dunlap, Ian Jackson,
	xen-devel, Ian Campbell, mengxu, dgolomb


[Adjusting the Cc list:
  - removing hypervisor only people
  - adding more tools maintainers
  - adding George]

On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> Change sched_rtds_domain_get/set functions to support per-VCPU settings for RTDS scheduler.
> 
More on this patch (I had to run yesterday).

> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 44bd8e2..8284ce1 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h

> +/* Consistent with XEN_LEGACY_MAX_VCPUS xen/arch-x86/xen.h*/
> +#define LIBXL_XEN_LEGACY_MAX_VCPUS                  32
> +
Kill this. Nothing good can possibly come from relying on things like
this (i.e., copying constants --and even worse arch specific ones, in
this case!-- around).

>  int libxl_domain_sched_params_get(libxl_ctx *ctx, uint32_t domid,
>                                    libxl_domain_sched_params *params);
>  int libxl_domain_sched_params_set(libxl_ctx *ctx, uint32_t domid,
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 117b61d..806316a 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -347,6 +347,16 @@ libxl_domain_restore_params = Struct("domain_restore_params", [
>      ("checkpointed_stream", integer),
>      ])
>  
> +libxl_rtds_vcpu = Struct("vcpu",[
>
                     ^Struct("rtds_vcpu",[

> +    ("period",       uint64, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
> +    ("budget",       uint64, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
> +    ("index",        integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_VCPU_INDEX_DEFAULT'}),
>
Call this last member vcpuid, if you want to be able to pass a
sparse/incomplete array, and hence need to know to what vcpu each
element refers, or just get rid of it, if you always pass all the
elements.

I'd go with the former.

> +    ])
> +
> +libxl_domain_sched_rtds_params = Struct("domain_sched_rtds_params",[
>
I'd call this something like "vcpus_sched_rtds_params". Well, actually,
I don't think this is needed at all (see below).

> +    ("vcpus",        Array(libxl_rtds_vcpu, "num_vcpus")),
> +    ])
> +
>  libxl_domain_sched_params = Struct("domain_sched_params",[
>      ("sched",        libxl_scheduler),
>      ("weight",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT'}),
> @@ -356,6 +366,7 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>      ("latency",      integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT'}),
>      ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>      ("budget",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
> +    ("rtds",             libxl_domain_sched_rtds_params),  
>      ])
>  
I've got the same concerns I've already expressed about the hypervisor
interface. Doing things like this, we will have
libxl_domain_sched_params hosting both per-domain and per-vcpu
parameters, without a clear way to know what happens if one specifies
both.

Also, if at some point other schedulers want to support per-vcpu
parameters, the said information duplication (and ambiguity) will apply
to them too!

Finally, I don't think the way the result returned by a call to
libxl_domain_sched_params_{get,set}() changes can be called backward
compatible, which OTOH is something we want to retain in libxl.

So, if API compatibility/stability wasn't an issue, fiddling with
libxl_domain_sched_params would probably be the best solution. Since it
is, I'd leave that alone as much as possible, and introduce something
completely new, for dealing with per-vcpu parameters. Something like
this:

typedef struct libxl_vcpu_sched_params {
    int num_vcpus;
    libxl_domain_sched_params *vcpus;
} libxl_vcpu_sched_params;

[*]

And I'll deal with this in new API functions:

int libxl_vcpu_sched_params_get(libxl_ctx *ctx, uint32_t domid,
                                libxl_vcpu_sched_params *params);
int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
                                const libxl_vcpu_sched_params *params);

At this point, if, say, Credit2 wants to start supporting per-vcpu
scheduling parameters, we're all set for that!

What do the tools' maintainers think?

The only thing I'm not sure about, in the described scenario, is what a
call to libxl_domain_sched_params_get() should return for a scheduler
which supports per-vcpu parameters, and actually has different
scheduling parameters for the various vcpus...

I mean, a call to libxl_domain_sched_params_set() can be interpreted
as 'set the passed params for all the vcpus'; for _get(), I don't
know... perhaps it should return some sort of ERROR_INVAL, to inform the
caller that it is libxl_vcpu_sched_params_get() that should be used?

Again, tools maintainers? :-)

Oh, and in any case, you need to mark that something is being added to
the libxl API using the '#define LIBXL_HAVE_*' mechanism. Look for
examples in libxl.h, there are several. If going for my proposal, I'd
call the constant LIBXL_HAVE_VCPU_SCHED_PARAMS.
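
I.e. (sketch):

/*
 * LIBXL_HAVE_VCPU_SCHED_PARAMS indicates that libxl_vcpu_sched_params
 * and the libxl_vcpu_sched_params_{get,set}() functions are available.
 */
#define LIBXL_HAVE_VCPU_SCHED_PARAMS 1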

Regards,
Dario

[*] I don't like the fact that the vcpus array is of
'libxl_domain_sched_params' type, especially the *domain* part of the
type name, but I don't think we can change that, for the said API
stability reasons.

* Re: [PATCH v1 4/4] xl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-07 17:05 ` [PATCH v1 4/4] xl: " Chong Li
@ 2015-05-14 14:24   ` Meng Xu
  2015-05-14 14:39     ` Dario Faggioli
  0 siblings, 1 reply; 25+ messages in thread
From: Meng Xu @ 2015-05-14 14:24 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, Dario Faggioli,
	xen-devel, mengxu, Jan Beulich, Dagaen Golomb

> @@ -5744,6 +5749,7 @@ static int sched_rtds_pool_output(uint32_t poolid)
>      return 0;
>  }
>
> +
>  static int sched_default_pool_output(uint32_t poolid)
>  {
>      char *poolname;
> @@ -6120,38 +6126,87 @@ int main_sched_rtds(int argc, char **argv)
>  {
>      const char *dom = NULL;
>      const char *cpupool = NULL;
> -    int period = 0; /* period is in microsecond */
> -    int budget = 0; /* budget is in microsecond */
> +
> +    int vcpus[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* indices of VCPUs that change */
> +    int periods[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* period is in microsecond */
> +    int budgets[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* budget is in microsecond */


We know this is not good (ugly), but we are not sure about the best
approach to this. Basically, if users try to input several pairs of period
and budget, what is the best way to hold those data? Should we just
increase the array size by using something like realloc() when we find
more parameters are being input?


> +    int index=0; /*index of the arrays above*/
> +    bool flag_b = false;
> +    bool flag_p = false;
> +    bool flag_v = false;
>      bool opt_p = false;
>      bool opt_b = false;
> -    int opt, rc;
> +    bool opt_v = false;
> +    int opt, rc, i;
>      static struct option opts[] = {
>          {"domain", 1, 0, 'd'},
>          {"period", 1, 0, 'p'},
>          {"budget", 1, 0, 'b'},
> +        {"vcpu",1, 0, 'v'},
>          {"cpupool", 1, 0, 'c'},
>          COMMON_LONG_OPTS,
>          {0, 0, 0, 0}
>      };
>
> -    SWITCH_FOREACH_OPT(opt, "d:p:b:c:h", opts, "sched-rtds", 0) {
> +    SWITCH_FOREACH_OPT(opt, "d:p:b:v:c:h", opts, "sched-rtds", 0) {
>      case 'd':
>          dom = optarg;
>          break;
>      case 'p':
> -        period = strtol(optarg, NULL, 10);
> +        periods[index] = strtol(optarg, NULL, 10);
>          opt_p = 1;
> +        if (flag_p == 1) { /* budget or vcpuID is missed */
> +            fprintf(stderr, "Must specify period, budget and vcpuID\n");
> +            return 1;
> +        }
> +        flag_p = 1;
> +        if (flag_p && flag_b && flag_v) {
> +            /*
> +             * Get one complete set of per-VCPU parameters
> +             * (period, budget, vcpuID).
> +             */
> +            flag_p = 0;
> +            flag_b = 0;
> +            flag_v = 0;
> +            index++;
> +        }
>          break;
>      case 'b':
> -        budget = strtol(optarg, NULL, 10);
> +        budgets[index] = strtol(optarg, NULL, 10);
>          opt_b = 1;
> +        if (flag_b == 1) { /* period or vcpuID is missed */
> +            fprintf(stderr, "Must specify period, budget and vcpuID\n");
> +            return 1;
> +        }
> +        flag_b = 1;
> +        if (flag_p && flag_b && flag_v) {
> +            flag_p = 0;
> +            flag_b = 0;
> +            flag_v = 0;
> +            index++;
> +        }
> +        break;
> +    case 'v':
> +        vcpus[index] = strtol(optarg, NULL, 10);
> +        opt_v = 1;
> +        if (flag_v == 1) { /* period or budget is missed */
> +            fprintf(stderr, "Must specify period, budget and vcpuID\n");
> +            return 1;
> +        }
> +        flag_v = 1;
> +        if (flag_p && flag_b && flag_v) {
> +            flag_p = 0;
> +            flag_b = 0;
> +            flag_v = 0;
> +            index++;
> +        }
>          break;
>      case 'c':
>          cpupool = optarg;
>          break;
>      }
>
> -    if (cpupool && (dom || opt_p || opt_b)) {
> +    if (cpupool && (dom || opt_p || opt_b || opt_v)) {
>          fprintf(stderr, "Specifying a cpupool is not allowed with "
>                  "other options.\n");
>          return 1;

Thanks,

Meng


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

* Re: [PATCH v1 4/4] xl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-14 14:24   ` Meng Xu
@ 2015-05-14 14:39     ` Dario Faggioli
  2015-05-14 14:43       ` Meng Xu
  0 siblings, 1 reply; 25+ messages in thread
From: Dario Faggioli @ 2015-05-14 14:39 UTC (permalink / raw)
  To: Meng Xu
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, xen-devel, mengxu,
	Jan Beulich, Chong Li, Dagaen Golomb


On Thu, 2015-05-14 at 10:24 -0400, Meng Xu wrote:
> > @@ -5744,6 +5749,7 @@ static int sched_rtds_pool_output(uint32_t poolid)
> >      return 0;
> >  }
> >
> > +
> >  static int sched_default_pool_output(uint32_t poolid)
> >  {
> >      char *poolname;
> > @@ -6120,38 +6126,87 @@ int main_sched_rtds(int argc, char **argv)
> >  {
> >      const char *dom = NULL;
> >      const char *cpupool = NULL;
> > -    int period = 0; /* period is in microsecond */
> > -    int budget = 0; /* budget is in microsecond */
> > +
> > +    int vcpus[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* indices of VCPUs that change */
> > +    int periods[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* period is in microsecond */
> > +    int budgets[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* budget is in microsecond */
> 
> 
> We know this is not good (ugly), but we are not sure about the best
> approach to this. Basically, if users try to input several pairs of period
> and budget, what is the best way to hold those data? Should we just
> increase the array size by using something like realloc() when we find
> more parameters are being input?
> 
In xl and/or libxl, a couple of realloc() calls are not a problem. There
are very few hot paths in them, and this is certainly not one of them.

In xl, we have xrealloc() defined for that purpose; in libxl,
GCNEW_ARRAY() and GCREALLOC_ARRAY(). So really, no big deal. It may
make sense to think about something to avoid realloc()-ing 17 times by
1 element (like doubling the allocation each time, and then, of
course, only using the actually filled elements).
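
Something like this, for instance (a sketch only; the surrounding
option parsing loop is omitted, and the names are illustrative):

    int n = 0, alloc = 0;
    int *vcpus = NULL, *periods = NULL, *budgets = NULL;

    /* for each (vcpu, period, budget) triple parsed from the options: */
    if (n == alloc) {
        alloc = alloc ? 2 * alloc : 4;   /* grow geometrically */
        vcpus = xrealloc(vcpus, alloc * sizeof(*vcpus));
        periods = xrealloc(periods, alloc * sizeof(*periods));
        budgets = xrealloc(budgets, alloc * sizeof(*budgets));
    }
    vcpus[n] = vcpuid;
    periods[n] = period;
    budgets[n] = budget;
    n++;    /* only the first n elements are ever used */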

In any case, the problem here is not (only) the static array, it is the
use of LIBXL_XEN_LEGACY_MAX_VCPUS.

Regards,
Dario

* Re: [PATCH v1 4/4] xl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-14 14:39     ` Dario Faggioli
@ 2015-05-14 14:43       ` Meng Xu
  0 siblings, 0 replies; 25+ messages in thread
From: Meng Xu @ 2015-05-14 14:43 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, xen-devel, mengxu,
	Jan Beulich, Chong Li, Dagaen Golomb

Hi Dario,

2015-05-14 10:39 GMT-04:00 Dario Faggioli <dario.faggioli@citrix.com>:
> On Thu, 2015-05-14 at 10:24 -0400, Meng Xu wrote:
>> > @@ -5744,6 +5749,7 @@ static int sched_rtds_pool_output(uint32_t poolid)
>> >      return 0;
>> >  }
>> >
>> > +
>> >  static int sched_default_pool_output(uint32_t poolid)
>> >  {
>> >      char *poolname;
>> > @@ -6120,38 +6126,87 @@ int main_sched_rtds(int argc, char **argv)
>> >  {
>> >      const char *dom = NULL;
>> >      const char *cpupool = NULL;
>> > -    int period = 0; /* period is in microsecond */
>> > -    int budget = 0; /* budget is in microsecond */
>> > +
>> > +    int vcpus[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* indices of VCPUs that change */
>> > +    int periods[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* period is in microsecond */
>> > +    int budgets[LIBXL_XEN_LEGACY_MAX_VCPUS]; /* budget is in microsecond */
>>
>>
>> We know this is not good (ugly), but we are not sure about the best
>> approach to this. Basically, if users try to input several pairs of period
>> and budget, what is the best way to hold those data? Should we just
>> increase the array size by using something like realloc() when we find
>> more parameters are being input?
>>
> In xl and/or libxl, a couple of realloc() calls are not a problem. There
> are very few hot paths in them, and this is certainly not one of them.
>
> In xl, we have xrealloc() defined for that purpose; in libxl,
> GCNEW_ARRAY() and GCREALLOC_ARRAY(). So really, no big deal. It may
> make sense to think about something to avoid realloc()-ing 17 times by
> 1 element (like doubling the allocation each time, and then, of
> course, only using the actually filled elements).

I see. OK. This makes sense.

>
> In any case, the problem here is not (only) the static array, it is the
> use of LIBXL_XEN_LEGACY_MAX_VCPUS.

Right, as you already pointed out for libxl. :-)

Thanks,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

* Re: [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-11  9:56 ` [PATCH v1 0/4] Enabling " Dario Faggioli
@ 2015-05-14 17:08   ` Chong Li
  0 siblings, 0 replies; 25+ messages in thread
From: Chong Li @ 2015-05-14 17:08 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, George Dunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb


On Mon, May 11, 2015 at 4:56 AM, Dario Faggioli <dario.faggioli@citrix.com>
wrote:

> On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> > [Goal]
> > The current xl sched-rtds tool can only set the VCPUs of a domain to the
> same parameter
> > although the scheduler supports VCPUs with different parameters. This
> patchset is to
> > enable xl sched-rtds tool to configure the VCPUs of a domain with
> different parameters.
> >
> > This per-VCPU settings can be used in many scenarios. For example, based
> on Dario's statement in our pervious discussion(
> http://lists.xen.org/archives/html/xen-devel/2014-09/msg00423.html), if
> there are two real-time applications, which have different timing
> requirements, running in a multi-VCPU guest domain, it is beneficial to pin
> these two applications to two seperate VCPUs with different scheduling
> parameters.
> >
> Right. And in fact, I'm glad to see this is happening, thanks for doing
> this work! :-)
>
> > 1) show the budget and period of each VCPU of each domain, by using "xl
> sched-rtds" command. An example would be like:
> >
> > [..]
> >
> > 2) show the budget and period of each VCPU of a specific domain, by
> using,
> > e.g., "xl sched-rtds -d vm1" command. The output would be like:
> >
> > [..]
> >
> > 3) set the budget and period of each VCPU of a specific domain, by using,
> > e.g., "xl sched-rtds -d vm1 -v 0 -p 100 -b 50" command (where "-v 0"
> specifies
> > the VCPU with ID=0). The parameters would be like:
> >
> > [..]
> >
> > 4) Users can still set the per-domain parameters (previous xl rtds tool
> already supported this).
> > e.g., "xl sched-rtds -d vm1 -p 500 -b 250". The parameters would be like:
> >
> The CLI looks nice to me. I'm wondering, what happens if the user tries
> to only alter the budget or the period of a vcpu (or of a domain)? I
> think that is not possible right now, is it?
>

You're right. The current design requires both budget and period in a 'set'
command.


>
> Would it make sense to allow that? I think it would, but this can well
> happen later, once we will have this in.
>

Yes, we can definitely implement that, once all the other issues in this
patch are sorted out.


>
> Regards,
> Dario
>



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis

[-- Attachment #1.2: Type: text/html, Size: 3758 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-11 13:11   ` Dario Faggioli
@ 2015-05-14 22:15     ` Chong Li
  0 siblings, 0 replies; 25+ messages in thread
From: Chong Li @ 2015-05-14 22:15 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb


On Mon, May 11, 2015 at 8:11 AM, Dario Faggioli <dario.faggioli@citrix.com>
wrote:

> On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> > Add two hypercalls(XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo) to
> get/set a domain's
> > per-VCPU parameters. Hypercalls are handled in function rt_dom_cntl.
> >
> And that is because, right now, only code in sched_rt.c is able to deal
> with per-vcpu parameters getting and setting.
>
> That's of course true, but these two new hypercalls are, potentially,
> generic, i.e., other schedulers may want to use them at some point. So,
> why not just put them in good shape for that from the beginning?
>
> To do so, you could with the new DOMCTLs in a similar way as
> XEN_DOMCTL_SCHEDOP_{get,put}info are handled, and add a
> new .adjust_vcpu(s?) hook in the scheduler interface.
>
> > diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> > index 7c39a9e..9add5a4 100644
> > --- a/xen/common/sched_rt.c
> > +++ b/xen/common/sched_rt.c
> > @@ -1085,6 +1085,9 @@ rt_dom_cntl(
> >      struct list_head *iter;
> >      unsigned long flags;
> >      int rc = 0;
> > +    xen_domctl_sched_rtds_params_t *local_sched;
> > +    int vcpu_index=0;
> >
> So, what's this vcpu_index intended meaning/usage?
>

The vcpu_index here equals vcpu_id.

>
> > @@ -1110,6 +1113,67 @@ rt_dom_cntl(
> >          }
> >          spin_unlock_irqrestore(&prv->lock, flags);
> >          break;
> > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> > +        op->u.rtds.nr_vcpus = 0;
> > +        spin_lock_irqsave(&prv->lock, flags);
> > +        list_for_each( iter, &sdom->vcpu )
> > +            vcpu_index++;
> >
> > +        spin_unlock_irqrestore(&prv->lock, flags);
> >
> This gives you the number of vcpus of sdom, doesn't it? It feels rather
> nasty (especially the lock being dropped and taken again below!).
>
> Aren't there other ways to get the same information that suits your
> needs (e.g., d->max_vcpus)? If not, I think you should consider adding a
> 'nr_vcpu' field in rt_dom, exactly as Credit2 is doing in csched2_dom.
>
> > +        spin_lock_irqsave(&prv->lock, flags);
> > +        list_for_each( iter, &sdom->vcpu )
> > +        {
> > +            struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
> sdom_elem);
> > +
> > +            local_sched[vcpu_index].budget = svc->budget / MICROSECS(1);
> > +            local_sched[vcpu_index].period = svc->period / MICROSECS(1);
> > +            local_sched[vcpu_index].index = vcpu_index;
> > +            vcpu_index++;
> >
> And that's why I was asking about index. As Jan is pointing out already,
> used like this, this index/vcpu_index is rather useless.
>
> I mean, you're passing up nr_vcpus structs in an array in which
> the .index field of the i-eth element is equal to i. How is this
> important? The caller could well iterate, count, and retrieve the
> position of each elements by itself!
>
> What you probably are after, is the vcpu id, isn't it?
>

Yes, it is. Now I use vcpuid instead of vcpu_index.

>
> > +        }
> > +        spin_unlock_irqrestore(&prv->lock, flags);
> > +        copy_to_guest(op->u.rtds.vcpus, local_sched, vcpu_index);
> >
> I'm sure we want some checks about whether we are overflowing the
> userspace provided buffer (and something similar below, for put). I
> appreciate that you, in this patch series, are only calling this from
> libxl, which properly dimension things, etc., but that can not always be
> the case.
>
> There are several examples in the code base on the route to take for
> similar operations. For example, you can try to do some checks and only
> fill as much elements as the buffer allows, or you can give a special
> semantic to calling the hypercall with NULL/0 as parameters, i.e., use
> that for asking Xen about the proper sizes, etc.
>
> Have a look at how XEN_SYSCTL_numainfo and XEN_SYSCTL_cputopoinfo are
> implemented (in Xen, but also in libxc and libxl, to properly understand
> things).
>
> > +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> > +        local_sched = xzalloc_array(xen_domctl_sched_rtds_params_t,
> > +                op->u.rtds.nr_vcpus);
> > +        if( local_sched == NULL )
> > +        {
> > +            return -ENOMEM;
> > +        }
> > +        copy_from_guest(local_sched, op->u.rtds.vcpus,
> op->u.rtds.nr_vcpus);
> > +
> > +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> > +        {
> > +            vcpu_index = 0;
> > +            spin_lock_irqsave(&prv->lock, flags);
> > +            list_for_each( iter, &sdom->vcpu )
> > +            {
> >
> But why the nested loop? I think this is still that 'index' thing
> causing problems. If you use vcpu numbers/ids, you can just use the
> d->vcpu[] array, and get rid of the one of the for-s!
>
> Look at, for instance, XEN_DOMCTL_{set,get}vcpuaffinity. You're after
> something that is pretty similar (i.e., altering a per-vcpu property),
> you just want to do it on more than one vcpu at a time.
>
> > +                struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
> sdom_elem);
> > +                if ( local_sched[i].index == vcpu_index )
> > +                {
> > +                    if ( local_sched[i].period <= 0 ||
> local_sched[i].budget <= 0 )
> > +                         return -EINVAL;
> > +
> > +                    svc->period = MICROSECS(local_sched[i].period);
> > +                    svc->budget = MICROSECS(local_sched[i].budget);
> > +                    break;
> > +                }
> > +                vcpu_index++;
> > +            }
> > +            spin_unlock_irqrestore(&prv->lock, flags);
> >
> Lock dropping and reacquiring again... This is less scary than above,
> but doing it is pretty much always asking for troubles. Of course there
> are legitimate use case of such a pattern, but I'd always ask myself 6
> or 7 times whether I'm dealing with one of those before actually using
> it. 99% of them, you aren't. :-P
>
> Anyway, if you go with vcpu ids instead than with index, that is not a
> problem, as there will be only one loop.
>
> > diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> > index 10b51ef..490a6b6 100644
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -342,6 +342,16 @@ struct xen_domctl_max_vcpus {
> >  typedef struct xen_domctl_max_vcpus xen_domctl_max_vcpus_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
> >
> > +struct xen_domctl_sched_rtds_params {
> > +    /* vcpus' info */
> > +    uint64_t period;
> > +    uint64_t budget;
> >
> We settled for uint32_t while reviewing RTDS patches, and nothing
> changed since then, as far as I can tell, so you should just use that as
> width.
>
> > +    uint16_t index;
> > +    uint16_t padding[3];
> > +};
> > +typedef struct xen_domctl_sched_rtds_params
> xen_domctl_sched_rtds_params_t;
> > +DEFINE_XEN_GUEST_HANDLE(xen_domctl_sched_rtds_params_t);
> > +
> >
> >  /* XEN_DOMCTL_scheduler_op */
> >  /* Scheduler types. */
> > @@ -351,9 +361,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_vcpus_t);
> >  #define XEN_SCHEDULER_ARINC653 7
> >  #define XEN_SCHEDULER_RTDS     8
> >
> > -/* Set or get info? */
> > +/* Set or get info */
> >  #define XEN_DOMCTL_SCHEDOP_putinfo 0
> >  #define XEN_DOMCTL_SCHEDOP_getinfo 1
> > +#define XEN_DOMCTL_SCHEDOP_getvcpuinfo 2
> > +#define XEN_DOMCTL_SCHEDOP_putvcpuinfo 3
> > +
> >  struct xen_domctl_scheduler_op {
> >      uint32_t sched_id;  /* XEN_SCHEDULER_* */
> >      uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
> > @@ -373,8 +386,11 @@ struct xen_domctl_scheduler_op {
> >              uint16_t weight;
> >          } credit2;
> >          struct xen_domctl_sched_rtds {
> > -            uint32_t period;
> > -            uint32_t budget;
> > +            uint64_t period;
> > +            uint64_t budget;
> >
> Ditto.
>
> > +            XEN_GUEST_HANDLE_64(xen_domctl_sched_rtds_params_t) vcpus;
> > +            uint16_t nr_vcpus;
> > +            uint16_t padding[3];
> >          } rtds;
> >
> I don't like this. I find it both confusing and redundant as an
> interface.
>
> In fact, this way, we have both budget, period and an array of budget
> and period. What is the meaning of this? What if one fills both the
> budget and period members and a couple of array elements?
>
> You should either adapt xen_domctl_sched_rtds to be able to deal with
> both per-domain ad per-vcpu parameter getting/setting, but with less
> redundancy and a more clear and better defined semantic, or introduce a
> new member of the union u to with the array (and the number of elements)
> in it (e.g., xen_domctl_sched_rtds_vcpus), or (and that's what I like
> most) just go with something completely new, e.g.,
>
> #define XEN_DOMCTL_SCHEDOP_getvcpuinfo 2
> #define XEN_DOMCTL_SCHEDOP_putvcpuinfo 3
> struct xen_domctl_scheduler_vcpu_op {
>     uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_{get,put}vcpuinfo */
>     union {
>         struct xen_domctl_sched_rtds_vcpus {
>             XEN_GUEST_HANDLE_64(xen_domctl_sched_rtds_params_t) vcpus
>             uint16_t nr_vcpus;
>             uint16_t padding[3];
>         } rtds;
>     } u;
> }
>

Yes, I think so. These changes will appear in the next version of the
patch.

>
> Regards,
> Dario
>
>


-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis

* Re: [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-11  6:57       ` Jan Beulich
@ 2015-05-14 22:27         ` Chong Li
  2015-05-15 14:42           ` Dario Faggioli
  0 siblings, 1 reply; 25+ messages in thread
From: Chong Li @ 2015-05-14 22:27 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Chong Li, Wei Liu, Sisu Xi, george.dunlap, dario.faggioli,
	xen-devel, Meng Xu, Dagaen Golomb


On Mon, May 11, 2015 at 1:57 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 11.05.15 at 00:04, <lichong659@gmail.com> wrote:
> > On Fri, May 8, 2015 at 2:49 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >> >>> On 07.05.15 at 19:05, <lichong659@gmail.com> wrote:
> >> > @@ -1110,6 +1113,67 @@ rt_dom_cntl(
> >> >          }
> >> >          spin_unlock_irqrestore(&prv->lock, flags);
> >> >          break;
> >> > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> >> > +        op->u.rtds.nr_vcpus = 0;
> >> > +        spin_lock_irqsave(&prv->lock, flags);
> >> > +        list_for_each( iter, &sdom->vcpu )
> >> > +            vcpu_index++;
> >> > +        spin_unlock_irqrestore(&prv->lock, flags);
> >> > +        op->u.rtds.nr_vcpus = vcpu_index;
> >>
> >> Does dropping of the lock here and re-acquiring it below really work
> >> race free?
> >>
> >
> > Here, the lock is used in the same way as the ones in the two cases
> > above (XEN_DOMCTL_SCHEDOP_get/putinfo). So I think if race free
> > is guaranteed in that two cases, the lock in this case works race free
> > as well.
>
> No - the difference is that in the {get,put}info cases it is being
> acquired just once each.
>

I see. I changed it based on Dario's suggestions.

>
> >> > +        vcpu_index = 0;
> >> > +        spin_lock_irqsave(&prv->lock, flags);
> >> > +        list_for_each( iter, &sdom->vcpu )
> >> > +        {
> >> > +            struct rt_vcpu *svc = list_entry(iter, struct rt_vcpu,
> >> sdom_elem);
> >> > +
> >> > +            local_sched[vcpu_index].budget = svc->budget /
> MICROSECS(1);
> >> > +            local_sched[vcpu_index].period = svc->period /
> MICROSECS(1);
> >> > +            local_sched[vcpu_index].index = vcpu_index;
> >>
> >> What use is this index to the caller? I think you rather want to tell it
> >> the vCPU number. That's especially also taking the use case of a
> >> get/set pair into account - unless you tell me that these indexes can
> >> never change, the indexes passed back into the set operation would
> >> risk to have become stale by the time the hypervisor processes the
> >> request.
> >>
> >
> > I don't quite understand what the "stale" means. The array here
> > (local_sched[ ])
> > and the array (in libxc) that local_sched[ ] is copied to are both used
> for
> > this get
> > operation only. When users set per-vcpu parameters, there are also
> > dedicated
> > arrays for that set operation.
>
> Just clarify this for me (and maybe yourself): Is the vCPU number
> <-> vcpu_index mapping invariable for the lifetime of a domain?
> If it isn't, the vCPU for a particular vcpu_index during a "get"
> may be different from that for the same vcpu_index during a
> subsequent "set".
>

Here the vcpu_index means the vcpu_id. I'll use svc->vcpu.vcpu_id instead
of the vcpu_index in the next version.


>
> >> > +        if( local_sched == NULL )
> >> > +        {
> >> > +            return -ENOMEM;
> >> > +        }
> >> > +        copy_from_guest(local_sched, op->u.rtds.vcpus,
> >> op->u.rtds.nr_vcpus);
> >> > +
> >> > +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> >> > +        {
> >> > +            vcpu_index = 0;
> >> > +            spin_lock_irqsave(&prv->lock, flags);
> >> > +            list_for_each( iter, &sdom->vcpu )
> >> > +            {
> >> > +                struct rt_vcpu *svc = list_entry(iter, struct
> rt_vcpu,
> >> sdom_elem);
> >> > +                if ( local_sched[i].index == vcpu_index )
> >> > +                {
> >> > +                    if ( local_sched[i].period <= 0 ||
> >> local_sched[i].budget <= 0 )
> >> > +                         return -EINVAL;
> >> > +
> >> > +                    svc->period = MICROSECS(local_sched[i].period);
> >> > +                    svc->budget = MICROSECS(local_sched[i].budget);
> >> > +                    break;
> >> > +                }
> >> > +                vcpu_index++;
> >> > +            }
> >> > +            spin_unlock_irqrestore(&prv->lock, flags);
> >> > +        }
> >>
> >> Considering a maximum size guest, these two nested loops could
> >> require a couple of million iterations. That's too much without any
> >> preemption checks in the middle.
> >>
> >
> > The section protected by the lock is only the "list_for_each" loop, whose
> > running time is limited by the number of vcpus of a domain (32 at most).
>
> Since when is 32 the limit on the number of vCPU-s in a domain?
>

Based on Dario's suggestion, I'll use vcpu_id to locate the vcpu to set,
which costs much less time.


>
> > If this does cause problems, I think adding a "hypercall_preempt_check()"
> > at the outside "for" loop may help. Is that right?
>
> Yes.
>
> >> > --- a/xen/common/schedule.c
> >> > +++ b/xen/common/schedule.c
> >> > @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d, struct
> >> xen_domctl_scheduler_op *op)
> >> >
> >> >      if ( (op->sched_id != DOM2OP(d)->sched_id) ||
> >> >           ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
> >> > -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
> >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
> >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
> >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )
> >>
> >> Imo this should become a switch now.
> >>
> >
> > Do you mean "switch ( op->cmd )" ? I'm afraid that would make it look
> more
> > complicated.
>
> This may be a matter of taste to a certain degree, but I personally
> don't think a series of four almost identical comparisons reads any
> better than its switch() replacement. But it being a style issue, the
> ultimate decision is with George as the maintainer anyway.
>
> Jan
>



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis

[-- Attachment #1.2: Type: text/html, Size: 8710 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-14 22:27         ` Chong Li
@ 2015-05-15 14:42           ` Dario Faggioli
  0 siblings, 0 replies; 25+ messages in thread
From: Dario Faggioli @ 2015-05-15 14:42 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, george.dunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb


Hey,

2 things first:
 - can you avoid HTML emails?
 - can you trim the quotes, which means only leaving in the reply
   message the bits of the conversation that are useful for bringing it
   forward, and cut the rest (I'm doing this in this mail too)

On Thu, 2015-05-14 at 17:27 -0500, Chong Li wrote:


> On Mon, May 11, 2015 at 1:57 AM, Jan Beulich <JBeulich@suse.com>
> wrote:
>         >>> On 11.05.15 at 00:04, <lichong659@gmail.com> wrote:
>         > On Fri, May 8, 2015 at 2:49 AM, Jan Beulich
>         <JBeulich@suse.com> 
>         
>         >> Considering a maximum size guest, these two nested loops
>         could
>         >> require a couple of million iterations. That's too much
>         without any
>         >> preemption checks in the middle.
>         >>
>         >
>         > The section protected by the lock is only the
>         "list_for_each" loop, whose
>         > running time is limited by the number of vcpus of a domain
>         (32 at most).

>         Since when is 32 the limit on the number of vCPU-s in a
>         domain?

> Based on Dario's suggestion, I'll use vcpu_id to locate the vcpu to
> set, which cost much
> less time.
>
It will indeed be quicker. However, this is probably not enough to
invalidate Jan's point: a guest can have enough vCPUs to make it
undesirable that the hypervisor goes doing something on each of them
without interruption/preemption.

In fact, even with only one for loop, the number of iterations may still
be high and, more importantly, is not under our (read: Xen's) complete
control, as it depends on how many vCPUs the user is operating on.

For this reason, I think we also want to make this preemptable, at least
if the number of vCPUs passed is above a threshold that we can define.
Or perhaps, after doing a bunch of vCPUs, tell the toolstack how far we
got and have it reissue the hypercall.
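
In pseudo-ish terms, something like this (a sketch; how exactly the
progress is encoded and reported back to the toolstack is to be
decided):

    for ( i = 0; i < nr_vcpus; i++ )
    {
        /* ... get/set the parameters of the i-th requested vcpu ... */

        if ( hypercall_preempt_check() )
        {
            /* record in the op how far we got, copy that back to the
             * caller, and have the toolstack reissue the hypercall
             * for the remaining vcpus */
            break;
        }
    }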

Here's some (rather random) references:
 * http://xenbits.xen.org/xsa/advisory-77.html
 * http://xenbits.xen.org/xsa/advisory-125.html
 * http://lists.xenproject.org/archives/html/xen-devel/2014-12/msg00735.html
 * how the recently introduced XEN_SYSCTL_pcitopoinfo has been implemented

Regards,
Dario

>  
>         
>         > If this does cause problems, I think adding a
>         "hypercall_preempt_check()"
>         > at the outside "for" loop may help. Is that right?
>         
>         Yes.
>         
>         >> > --- a/xen/common/schedule.c
>         >> > +++ b/xen/common/schedule.c
>         >> > @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d,
>         struct
>         >> xen_domctl_scheduler_op *op)
>         >> >
>         >> >      if ( (op->sched_id != DOM2OP(d)->sched_id) ||
>         >> >           ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
>         >> > -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
>         >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
>         >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
>         >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )
>         >>
>         >> Imo this should become a switch now.
>         >>
>         >
>         > Do you mean "switch ( op->cmd )" ? I'm afraid that would
>         make it look more
>         > complicated.
>         
>         This may be a matter of taste to a certain degree, but I
>         personally
>         don't think a series of four almost identical comparisons
>         reads any
>         better than its switch() replacement. But it being a style
>         issue, the
>         ultimate decision is with George as the maintainer anyway.
>         
>         Jan
> 
> 
> 
> 
> -- 
> Chong Li
> Department of Computer Science and Engineering
> Washington University in St.louis


[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-11 14:06   ` Dario Faggioli
@ 2015-05-15 15:24     ` Chong Li
  2015-05-15 23:09       ` Dario Faggioli
  0 siblings, 1 reply; 25+ messages in thread
From: Chong Li @ 2015-05-15 15:24 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb

On Mon, May 11, 2015 at 9:06 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
>
> On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> > Change sched_rtds_domain_get/set functions to support per-VCPU settings for RTDS scheduler.
> >

>
> > +    rc = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> > +    if (rc < 0) {
> > +        LOGE(ERROR, "getting domain info");
> > +        return ERROR_FAIL;
> > +    }
> > +    num_vcpus = info.nr_online_vcpus;
> >
> It looks like the most of other places in libxl where this is necessary
> use libxl_list_vcpu(), which, if you want to open code it, uses
> info.max_vcpu_id. I'd do the same.

Do you mean invoking libxl_list_vcpu() here, or still using
xc_domain_getinfo() (but getting the total number of vcpus from
info.max_vcpu_id instead of info.nr_online_vcpus)? I'm asking because it
seems that libxl_list_vcpu() does much more than what I need.

>
> Regards,
> Dario
>

Chong

-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis

* Re: [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-12 10:01   ` Dario Faggioli
@ 2015-05-15 16:35     ` Chong Li
  2015-05-15 23:02       ` Dario Faggioli
  2015-05-22 17:57     ` Chong Li
  1 sibling, 1 reply; 25+ messages in thread
From: Chong Li @ 2015-05-15 16:35 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, Ian Jackson,
	xen-devel, Ian Campbell, Meng Xu, Dagaen Golomb

On Tue, May 12, 2015 at 5:01 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> [Adjusting the Cc list:
>   - removing hypervisor only people
>   - adding more tools maintainers
>   - adding George]
>
> On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
>> Change sched_rtds_domain_get/set functions to support per-VCPU settings for RTDS scheduler.
>>
> More on this patch (I had to run yesterday).
>

>>  libxl_domain_sched_params = Struct("domain_sched_params",[
>>      ("sched",        libxl_scheduler),
>>      ("weight",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_WEIGHT_DEFAULT'}),
>> @@ -356,6 +366,7 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>>      ("latency",      integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_LATENCY_DEFAULT'}),
>>      ("extratime",    integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_EXTRATIME_DEFAULT'}),
>>      ("budget",       integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
>> +    ("rtds",             libxl_domain_sched_rtds_params),
>>      ])
>>
> I've got the same concerns I've already expressed about the hypervisor
> interface. Doing things like this, we will have
> libxl_domain_sched_params that hosts both per-domain and per-vcpu
> parameters, without a clear way to know what happens if one specifies
> both.
>
> Also, if at some point other schedulers want to support per-vcpu
> parameters, the said information duplication (and ambiguity) will apply
> to them too!
>
> Finally, I don't think I would call the way the result returned by a
> call to libxl_domain_sched_params_{get,set}() changes backward
> compatible, which OTOH is something we want to retain in libxl.
>
> So, if API compatibility/stability wasn't an issue, fiddling with
> libxl_domain_sched_params would probably be the best solution. Since it
> is, I'd leave that alone as much as possible, and introduce something
> completely new, for dealing with per-vcpu parameters. Something like
> this:
>
> typedef struct libxl_vcpu_sched_params {
>     libxl_domain_sched_params vcpus[];
>     int num_vcpus;
> } libxl_vcpu_sched_params;
>
> [*]
>
> And I'll deal with this in new API functions:
>
> int libxl_vcpu_sched_params_get(libxl_ctx *ctx, uint32_t domid,
>                                 libxl_domain_vcpu_params *params);
> int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
>                                 const libxl_vcpu_sched_params *params);
>
> At this point, if, say, Credit2 wants to start supporting per-vcpu
> scheduling parameters, we're all set for that!
>
> What do the tools' maintainers think?
>
> The only thing I'm not sure about, in the described scenario, is what a
> call to libxl_domain_sched_params_get() should return for a scheduler
> which supports per-vcpu parameters, and actually has different
> scheduling parameters for the various vcpus...
>
> I mean, a call to libxl_domain_sched_params_set() can be interpreted
> like 'set the passed params for all the vcpus', _get(), I don't know...
> perhaps it should return some sort of ERROR_INVAL, to inform the caller
> that it is libxl_vcpu_sched_params_get() that should be used?
>
> Again, tools maintainers? :-)
>
> Oh, and in any case, you need to mark that something is being added to
> the libxl API using the '#define LIBXL_HAVE_*' mechanism. Look for
> examples in libxl.h, there are several. If going for my proposal, I'd
> call the constant LIBXL_HAVE_VCPU_SCHED_PARAMS.
>
> Regards,
> Dario
>
> [*] I don't like the fact that the vcpus[] array is of
> 'libxl_domain_sched_params' type, especially the *domain* part of the
> type name, but I don't think we can change that, for the said API
> stability reasons.

Hi, Dario,

Yes. To be consistent with the hypervisor interface (separate per-dom
and per-vcpu data structures), this idea seems better.

I can do it quickly after getting feedback from the tools maintainers.

Chong


-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-15 16:35     ` Chong Li
@ 2015-05-15 23:02       ` Dario Faggioli
  0 siblings, 0 replies; 25+ messages in thread
From: Dario Faggioli @ 2015-05-15 23:02 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, Ian Jackson,
	xen-devel, Ian Campbell, Meng Xu, Dagaen Golomb


On Fri, 2015-05-15 at 11:35 -0500, Chong Li wrote:
> On Tue, May 12, 2015 at 5:01 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> >
> > I mean, a call to libxl_domain_sched_params_set() can be interpreted
> > as 'set the passed params for all the vcpus'; for _get(), I don't know...
> > perhaps it should return some sort of ERROR_INVAL, to inform the caller
> > that it is libxl_vcpu_sched_params_get() that should be used?
> >
> > Again, tools maintainers? :-)
> >
> Hi, Dario,
> 
> Yes. To be consistent with the hypervisor interface (separate
> per-domain and per-vcpu data structures), this idea seems better.
> 
Just allow me to note that the hypervisor's and the tools' interfaces
are different, so they don't necessarily have to match.

So, yes, if you like my idea/proposal, then fine, but I just wanted to
point out as clearly as possible that I wasn't implying or asking that
it should be the same as the Xen interface... Of course, it happens
quite a few times that the interfaces match, which is fine too; it just
should not be treated as a goal! :-)

> I can do it quickly after getting feedback from the tools maintainers.
> 
Exactly, let's hear from them first.

Regards,
Dario

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-15 15:24     ` Chong Li
@ 2015-05-15 23:09       ` Dario Faggioli
  0 siblings, 0 replies; 25+ messages in thread
From: Dario Faggioli @ 2015-05-15 23:09 UTC (permalink / raw)
  To: Chong Li
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, xen-devel, Meng Xu,
	Jan Beulich, Dagaen Golomb


On Fri, 2015-05-15 at 10:24 -0500, Chong Li wrote:
> On Mon, May 11, 2015 at 9:06 AM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> >
> > On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
> > > Change sched_rtds_domain_get/set functions to support per-VCPU settings for RTDS scheduler.
> > >
> 
> >
> > > +    rc = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> > > +    if (rc < 0) {
> > > +        LOGE(ERROR, "getting domain info");
> > > +        return ERROR_FAIL;
> > > +    }
> > > +    num_vcpus = info.nr_online_vcpus;
> > >
> > It looks like most of the other places in libxl where this is
> > necessary use libxl_list_vcpu(), which, if you want to open-code it,
> > uses info.max_vcpu_id. I'd do the same.
> 
> Do you mean invoking libxl_list_vcpu() here, or still using
> xc_domain_getinfo() (but getting the total number of vcpus from
> info.max_vcpu_id instead of info.nr_online_vcpus)?
>
The latter: use info.max_vcpu_id instead of nr_online_vcpus.

I cited libxl_list_vcpu() to point you to an example that does
something similar to what you want to do here, i.e., main_vcpupin()
in xl_cmdimpl.c, which calls libxl_list_vcpu() to retrieve the maximum
number of vcpus, and then uses it... But, really, the point was that
you should use the max.

In fact, it makes sense to me that the user should be able to set the
scheduling parameters of all the vcpus, even the ones that are, at the
moment, offlined, don't you think?
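
Concretely, the hunk quoted above would then become something like
this (just a sketch):

    rc = xc_domain_getinfo(CTX->xch, domid, 1, &info);
    if (rc < 0) {
        LOGE(ERROR, "getting domain info");
        return ERROR_FAIL;
    }
    /* vcpu IDs are 0-based, so the count is max_vcpu_id + 1; unlike
     * nr_online_vcpus, this also covers currently offlined vcpus */
    num_vcpus = info.max_vcpu_id + 1;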

Regards,
Dario

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v1 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
  2015-05-12 10:01   ` Dario Faggioli
  2015-05-15 16:35     ` Chong Li
@ 2015-05-22 17:57     ` Chong Li
  1 sibling, 0 replies; 25+ messages in thread
From: Chong Li @ 2015-05-22 17:57 UTC (permalink / raw)
  To: Dario Faggioli
  Cc: Chong Li, Wei Liu, Sisu Xi, George Dunlap, Ian Jackson,
	xen-devel, Ian Campbell, Meng Xu, Dagaen Golomb

On Tue, May 12, 2015 at 5:01 AM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> [Adjusting the Cc list:
>   - removing hypervisor only people
>   - adding more tools maintainers
>   - adding George]
>
> On Thu, 2015-05-07 at 12:05 -0500, Chong Li wrote:
>> Change sched_rtds_domain_get/set functions to support per-VCPU settings for RTDS scheduler.
>>
> More on this patch (I had to run yesterday).
>
>> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
>> index 44bd8e2..8284ce1 100644
>> --- a/tools/libxl/libxl.h
>> +++ b/tools/libxl/libxl.h
>

>> +libxl_rtds_vcpu = Struct("vcpu",[
>>
>                      ^Struct("rtds_vcpu",[
>
>> +    ("period",       uint64, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_PERIOD_DEFAULT'}),
>> +    ("budget",       uint64, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_BUDGET_DEFAULT'}),
>> +    ("index",        integer, {'init_val': 'LIBXL_DOMAIN_SCHED_PARAM_VCPU_INDEX_DEFAULT'}),
>>
> Call this last member vcpuid, if you want to be able to pass a
> sparse/incomplete array, and hence need to know which vcpu each
> element refers to, or just get rid of it, if you always pass all the
> elements.
>
> I'd go with the former.
>
>
> So, if API compatibility/stability wasn't an issue, fiddling with
> libxl_domain_sched_params would probably be the best solution. Since it
> is, I'd leave that alone as much as possible, and introduce something
> completely new, for dealing with per-vcpu parameters. Something like
> this:
>
> typedef struct libxl_vcpu_sched_params {
>     int num_vcpus;
>     libxl_domain_sched_params *vcpus;
> } libxl_vcpu_sched_params;
>
> [*]
>

If we use an array of libxl_domain_sched_params here, then each time
we set only one per-vcpu param, we need to copy the whole vcpus array
to the hypervisor (e.g., an array of length 32 when the domain has 32
vcpus), because libxl_domain_sched_params has no member serving as a
"vcpuid" (so we can only locate a vcpu by its array index).

My idea is to use an array of libxl_rtds_vcpu (which includes budget,
period and vcpuid, as shown above) in libxl_vcpu_sched_params, as
sketched below. Is that OK?
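
Something like this (just a sketch of what I have in mind; the field
names follow the discussion above):

    /* each element names its vcpu, so a sparse array works */
    typedef struct libxl_vcpu_sched_params {
        int num_vcpus;
        libxl_rtds_vcpu *vcpus;   /* each has period, budget, vcpuid */
    } libxl_vcpu_sched_params;

    /* e.g., changing only vcpu 0 then needs a one-element array: */
    libxl_rtds_vcpu one = { .period = 100, .budget = 50, .vcpuid = 0 };
    libxl_vcpu_sched_params p = { .num_vcpus = 1, .vcpus = &one };
    libxl_vcpu_sched_params_set(ctx, domid, &p);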

>
> Regards,
> Dario
>
> [*] I don't like the fact that the vcpus[] array is of
> 'libxl_domain_sched_params' type, especially the *domain* part of the
> type name, but I don't think we can change that, for the said API
> stability reasons.



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2015-05-22 17:57 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-07 17:05 [PATCH v1 0/4] Enabling XL to set per-VCPU parameters of a domain for RTDS scheduler Chong Li
2015-05-07 17:05 ` [PATCH v1 1/4] xen: enabling " Chong Li
2015-05-08  7:49   ` Jan Beulich
2015-05-10 22:04     ` Chong Li
2015-05-11  6:57       ` Jan Beulich
2015-05-14 22:27         ` Chong Li
2015-05-15 14:42           ` Dario Faggioli
2015-05-11 13:11   ` Dario Faggioli
2015-05-14 22:15     ` Chong Li
2015-05-07 17:05 ` [PATCH v1 2/4] libxc: " Chong Li
2015-05-11 13:27   ` Dario Faggioli
2015-05-07 17:05 ` [PATCH v1 3/4] libxl: " Chong Li
2015-05-11 14:06   ` Dario Faggioli
2015-05-15 15:24     ` Chong Li
2015-05-15 23:09       ` Dario Faggioli
2015-05-12 10:01   ` Dario Faggioli
2015-05-15 16:35     ` Chong Li
2015-05-15 23:02       ` Dario Faggioli
2015-05-22 17:57     ` Chong Li
2015-05-07 17:05 ` [PATCH v1 4/4] xl: " Chong Li
2015-05-14 14:24   ` Meng Xu
2015-05-14 14:39     ` Dario Faggioli
2015-05-14 14:43       ` Meng Xu
2015-05-11  9:56 ` [PATCH v1 0/4] Enabling " Dario Faggioli
2015-05-14 17:08   ` Chong Li
